stackexchange
llm_judgeable_groundtruth_similarity
3170925
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there something special about characters that should be allowed/not allowed in a password? I store the password in the db hashed/salted and use PDO to prevent against injection. Is what I'm doing enough? Recently I came across a system that disallowed a number of characters, don't remember all of them, but one was the ampersand & . Were they doing it for anti-database injection reasons, or is there something else I'm missing? Should password characters be restricted to a certain set of characters or no need? Now provide the response and nothing else.
There is no technical reason to disallow any characters in a password. I guess in the case you describe, they would allow only alphanumeric characters to avoid problems on the user's side (say, by entering a character that isn't available on keyboards in another country). Many providers and sites force users to choose very complex passwords containing a minimum number of digits and, sometimes, even special characters to prevent brute-forcing or dictionary attacks. I don't think forcing people to choose a complex password is wise. Passwords you can't remember, you will write down somewhere, which often creates a much bigger security risk in real life. A simple rate limit in the login system (e.g. deny access for 15 minutes after 3 failed login attempts) takes the edge off the brute-forcing threat much more elegantly. One doesn't have to agree 100% with it, but I found this provocative paper on the subject from Microsoft Research very interesting: So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users. From the abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort.
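The rate-limiting suggestion can be sketched in a few lines. This is a minimal in-memory sketch, not production code: the function names are hypothetical, the 3-attempts/15-minutes policy is taken from the example above, and a real system would persist state and also throttle per IP address:

```python
import time

LOCKOUT_WINDOW = 15 * 60   # seconds; illustrative policy from the text
MAX_FAILURES = 3

_failures = {}             # username -> timestamps of recent failed logins

def login_allowed(user, now=None):
    """Return True unless the user accumulated MAX_FAILURES inside the window."""
    now = time.time() if now is None else now
    recent = [t for t in _failures.get(user, []) if now - t < LOCKOUT_WINDOW]
    _failures[user] = recent          # forget failures older than the window
    return len(recent) < MAX_FAILURES

def record_failure(user, now=None):
    now = time.time() if now is None else now
    _failures.setdefault(user, []).append(now)
```

With a policy like this an online attacker gets at most 3 guesses per account per 15 minutes, which blunts brute-forcing without putting any burden on the user's choice of password.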
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3170925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/378265/']}
jdg_371841
stackexchange
llm_judgeable_groundtruth_similarity
37506129
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I implemented an expandable recyclerview with child elements that are part of the list. I followed this code . This is how it works, The implementation of ExpandableListView using RecyclerView is briefly described as follows. The list model has an additional parameter "type" that identifies whether the item is a header or child. By checking this parameter, the adapter inflates view and viewholder corresponding to the type. If the type is HEADER, it will inflate the layout of header item, that contains a TextView and a ImageView for indicating whether the child tree is expanded or not. Now, what I want to do is make the expanded layout a grid. I would normally do this by setting the layout manager to GridLayoutManager but in this case, I am only using one recyclerview, which means that I can't change the layout manager without changing the header which ends up causing the entire recyclerview to turn into a grid including the headers. My question is: how would you change the layout manager for only a couple layouts inside the adapter? Edit: I added some code. 
Recyclerview adapter:

public class ExpandableListAdapter extends RecyclerView.Adapter<RecyclerView.ViewHolder> {

    // These are constants that are used to determine if the item is a child or a header
    // and is defined with each item from the data model
    public static final int HEADER = 0;
    public static final int CHILD = 1;

    private List<Item> data;

    public ExpandableListAdapter(List<Item> data) {
        this.data = data;
    }

    @Override
    public RecyclerView.ViewHolder onCreateViewHolder(ViewGroup parent, int type) {
        View view = null;
        LayoutInflater inflater = (LayoutInflater) parent.getContext()
                .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        // Check whether the item is a header or child and inflate different layouts
        switch (type) {
            case HEADER:
                // Inflate a header layout if the item is a header
                view = inflater.inflate(R.layout.list_header, parent, false);
                ListHeaderViewHolder header = new ListHeaderViewHolder(view);
                return header;
            case CHILD:
                // Inflate a child layout if the item is a child
                view = inflater.inflate(R.layout.list_child, parent, false);
                ListChildViewHolder child = new ListChildViewHolder(view);
                return child;
        }
        return null;
    }

    @Override
    public void onBindViewHolder(RecyclerView.ViewHolder holder, int position) {
        final Item item = data.get(position);
        // Bind different layouts based on if the item is a header or child
        switch (item.getType()) {
            case HEADER:
                // ...
                break;
            case CHILD:
                // ...
                break;
        }
    }

    @Override
    public int getItemViewType(int position) {
        return data.get(position).type;
    }

    @Override
    public int getItemCount() {
        return data.size();
    }

    // Viewholder for the header items
    private static class ListHeaderViewHolder extends RecyclerView.ViewHolder {
        // ...
    }

    // Viewholder for the child items
    private static class ListChildViewHolder extends RecyclerView.ViewHolder {
        // ...
    }
}

And this is the main activity where I declare the layout manager:

recyclerview = (RecyclerView) findViewById(R.id.recyclerview);
recyclerview.setLayoutManager(new LinearLayoutManager(this, LinearLayoutManager.VERTICAL, false));

Now provide the response and nothing else.
You can change the layout manager to GridLayoutManager and define the "span size" for the header. For example, if you want the grid with 2 columns, the header should have span size 2 and the children span size 1:

GridLayoutManager glm = new GridLayoutManager(getContext(), 2);
glm.setSpanSizeLookup(new GridLayoutManager.SpanSizeLookup() {
    @Override
    public int getSpanSize(int position) {
        switch (getTypeForPosition(position)) {
            case HEADER:
                return 2;
            default:
                return 1;
        }
    }
});
recyclerView.setLayoutManager(glm);

There is a full example of an expandable grid with headers here, using this library.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37506129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3281971/']}
jdg_371842
stackexchange
llm_judgeable_groundtruth_similarity
1692943
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let G be any non-trivial finite group. Does G always have a subgroup whose index is prime? If G is solvable and |G| has a prime divisor $p$ such that $p^2$ does not divide $|G|$, this is the case because of Hall's theorem. If $G$ is a $p$-group, the answer is also positive. The group $A_5$, for example, is not solvable, but has subgroups of index $5$. So, I wonder whether we can always find a subgroup with prime index. Now provide the response and nothing else.
No. The first counterexample I can think of is $A_8$. The largest prime factor of $|A_8|$ is seven. But if there were a subgroup $H$ of index $p\le7$, then there would also be a non-trivial homomorphism $\phi:A_8\to S_p$. More precisely, the natural action of $A_8$ on the collection $X=A_8/H$ of left cosets of $H$ given by the rule $g\cdot (xH)=(gx)H$ is a homomorphism from $A_8$ to $Sym(X)\simeq S_p$. Because the action is transitive, the homomorphism is not trivial. But because $|S_p|\le |S_7|<|A_8|$, $\operatorname{Ker}(\phi)\unlhd A_8$ is necessarily a non-trivial normal subgroup, contradicting the simplicity of $A_8$.
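The order comparison at the heart of this argument is quick to verify numerically (a small sketch using only the standard library, not part of the original answer):

```python
from math import factorial

order_A8 = factorial(8) // 2   # |A_8| = 8!/2 = 20160
order_S7 = factorial(7)        # |S_7| = 7!  = 5040

# |S_p| <= |S_7| < |A_8| for every prime p <= 7, so any homomorphism
# A_8 -> S_p cannot be injective and must have a non-trivial kernel.
assert all(factorial(p) < order_A8 for p in (2, 3, 5, 7))
```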
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1692943', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/82961/']}
jdg_371843
stackexchange
llm_judgeable_groundtruth_similarity
45759700
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working with DevExpress's WPF tree list view and I came across what I think is a more general problem relating to renaming properties on the objects used as an item source. In the tree list view one is required to specify the ParentFieldName and the KeyFieldName (which are used to determine the structure of the tree). These fields are strings. This has led to issues refactoring the code. For example, renaming a property of the objects I am using as an ItemSource will break the tree view as ParentFieldName and KeyFieldName are no longer in sync with the property names. I have worked around this issue by creating properties in my view model "ParentFieldName" and "KeyFieldName" which use nameof to present the property name to the view. Here is a cut down version of the control:

<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:dxg="http://schemas.devexpress.com/winfx/2008/xaml/grid"
             d:DesignHeight="300" d:DesignWidth="300">
    <UserControl.DataContext>
        <ViewModel />
    </UserControl.DataContext>
    <dxg:TreeListControl AutoGenerateColumns="AddNew"
                         EnableSmartColumnsGeneration="True"
                         ItemsSource="{Binding Results}"
                         SelectionMode="Row">
        <dxg:TreeListControl.View>
            <dxg:TreeListView ParentFieldName="{Binding ParentIdFieldName}"
                              KeyFieldName="{Binding NodeIdFieldName}"
                              ShowHorizontalLines="False"
                              ShowVerticalLines="False"
                              ShowNodeImages="True"/>
        </dxg:TreeListControl.View>
    </dxg:TreeListControl>
</UserControl>

And the viewmodel:

using DevExpress.Mvvm;

public sealed class ViewModel : ViewModelBase
{
    public string ParentIdFieldName => nameof(TreeNode.ParentId);
    public string NodeIdFieldName => nameof(TreeNode.NodeId);

    public ObservableCollection<TreeNode> Results
    {
        get => GetProperty(() => Results);
        set => SetProperty(() => Results, value);
    }
}

And the tree node:

public sealed class TreeNode
{
    public int ParentId { get; set; }
    public int NodeId { get; set; }
}

My solution works well but I was wondering if there was a better way of doing this. For example, is there something I can do in XAML which would be equivalent to the nameof call, rather than binding to ParentIdFieldName and NodeIdFieldName in the view model? I realize this could be described as an issue with DevExpress's control. However, I'm interested in whether the approach I've used to get around this can be improved on. Is there a way I could do this more simply, directly in the XAML? I apologize in advance if the code I've provided doesn't compile. I've cut down what I'm working with quite considerably to provide an example. Now provide the response and nothing else.
You can create a custom markup extension. For example:

[ContentProperty(nameof(Member))]
public class NameOfExtension : MarkupExtension
{
    public Type Type { get; set; }
    public string Member { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        if (serviceProvider == null)
            throw new ArgumentNullException(nameof(serviceProvider));
        if (Type == null || string.IsNullOrEmpty(Member) || Member.Contains("."))
            throw new ArgumentException("Syntax for x:NameOf is Type={x:Type [className]} Member=[propertyName]");

        var pinfo = Type.GetRuntimeProperties().FirstOrDefault(pi => pi.Name == Member);
        var finfo = Type.GetRuntimeFields().FirstOrDefault(fi => fi.Name == Member);
        if (pinfo == null && finfo == null)
            throw new ArgumentException($"No property or field found for {Member} in {Type}");

        return Member;
    }
}

Sample usage:
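As a sketch of how such an extension might be consumed (the local xmlns prefix and the TreeNode type are assumptions carried over from the question, not part of the original answer):

```xml
<dxg:TreeListView
    ParentFieldName="{local:NameOf Type={x:Type local:TreeNode}, Member=ParentId}"
    KeyFieldName="{local:NameOf Type={x:Type local:TreeNode}, Member=NodeId}" />
```

This keeps the refactoring safety in the XAML itself: renaming TreeNode.ParentId without updating the markup makes ProvideValue throw at load time instead of silently breaking the tree.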
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45759700', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5704497/']}
jdg_371844
stackexchange
llm_judgeable_groundtruth_similarity
433345
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say we have a named pipe called fifo , and we're reading and writing to it from two different shells. Consider these two examples: shell 1$ echo foo > fifo<hangs>shell 2$ cat fifofooshell 1$ echo bar > fifo<hangs> shell 1$ cat > fifo<typing> foo<hangs>shell 2$ cat fifofoo^Cshell 1$<typing> bar<exits> I can't wrap my head around what happens in these examples, and in particular why trying to write 'bar' to the pipe in the first example results in a blocking call, whereas in the second example it triggers a SIGPIPE. I do understand that in the first case, two separate processes write to the pipe, and thus it is opened twice, while in the second case it is only opened once by a single process and written to twice, with the process reading from the pipe being killed in the meantime. What I don't understand is how that affects the behaviour of write . The pipe(7) man page states: If all file descriptors referring to the read end of a pipe have been closed, then a write (2) will cause a SIGPIPE signal to be generated for the calling process. This condition doesn't sound clear to me. A closed file descriptor just ceases to be a file descriptor, right? How does saying " the reading end of the pipe has been closed " differ from " the reading end of the pipe is not open "? I hope my question was clear enough. By the way, if you could suggest pointers for understanding in details the functioning of Unix pipes in relationship to open , close , read and write operations, I'd greatly appreciate it. Now provide the response and nothing else.
Your example is using a fifo, not a pipe, so it is subject to fifo(7). pipe(7) also tells: A FIFO (short for First In First Out) has a name within the filesystem (created using mkfifo(3)), and is opened using open(2). Any process may open a FIFO, assuming the file permissions allow it. The read end is opened using the O_RDONLY flag; the write end is opened using the O_WRONLY flag. See fifo(7) for further details. Note: although FIFOs have a pathname in the filesystem, I/O on FIFOs does not involve operations on the underlying device (if there is one). I/O on pipes and FIFOs: The only difference between pipes and FIFOs is the manner in which they are created and opened. Once these tasks have been accomplished, I/O on pipes and FIFOs has exactly the same semantics. So now from fifo(7): The kernel maintains exactly one pipe object for each FIFO special file that is opened by at least one process. The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also. So before both ends are opened (here meaning there is at least a reader and a writer), write blocks as per fifo(7). After both ends have been opened, and then the reading end(s) closed, write generates SIGPIPE as per pipe(7). For an example of pipe usage (not fifo) look at the example section of pipe(2): it involves pipe() (no open(), since pipe() itself creates the pair with both ends already open), close(), read(), write() and fork() (there's almost always a fork() around when using a pipe). The simplest way to handle SIGPIPE from your own C code, if you don't want the process to die when writing to a fifo, would be to call signal(SIGPIPE, SIG_IGN); and handle the condition by checking for errno EPIPE after each write() instead.
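On a POSIX system, the write-after-last-reader-closes behaviour is easy to observe from a short script. Here is a sketch in Python (rather than C, for brevity) using an anonymous pipe, whose semantics once both ends are open match a FIFO's per pipe(7):

```python
import errno
import os
import signal

# Ignore SIGPIPE so a write with no readers fails with EPIPE
# instead of killing the process (CPython already does this by default).
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

r, w = os.pipe()            # unlike a FIFO, both ends start out open
os.write(w, b"data")        # a read end exists: the write succeeds

os.close(r)                 # close the only read end
try:
    os.write(w, b"more")    # all read ends closed -> EPIPE, not a blocked write
    got_epipe = False
except OSError as e:
    got_epipe = (e.errno == errno.EPIPE)
assert got_epipe
```

Note the contrast with your first shell example: there, echo bar > fifo blocks in open(), before write() is ever reached, because no process has the read end open at all; SIGPIPE/EPIPE only happens when a read end existed and was closed.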
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/433345', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/123658/']}
jdg_371845
stackexchange
llm_judgeable_groundtruth_similarity
718804
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So were studying the configuration of electrons in an atom and one thing that popped up was Pauli's Exclusion Principle. In our class, as well as our textbook, it was stated as the fact that two electrons in an orbital cannot have the same set of quantum numbers. For example, if we consider the 4s orbital, then the electrons will have the sets:- 4, 0, 0, +1/2 4, 0, 0, -1/2 My question - Is this principle an observational one? That is, it is simply a restatement of what experiments have shown, or if not, then what exactly is it? Why can't the electrons have the same set of quantum numbers? Request - Please explain in the level of a high schooler, if possible. Now provide the response and nothing else.
Identical particles can be $^1$ indistinguishable in quantum mechanics. That implies that the state $^2$ of the system won't change under permutation of particles. In simpler words, you stop caring about which particle is in a specific state and you only care about how many particles are in a state. It can be proved that the states of a system of identical particles must be either antisymmetric or symmetric under permutation of particles. In the first case, particles are called fermions , in the second case, bosons . These antisymmetric/symmetric states are built from linear combinations of the products of single particle states. Electrons are fermions, then the state of a system made up of electrons must be antisymmetric under permutations. You can prove that an antisymmetric combination with two electrons in the same state is always zero, yielding a non normalizable state, that is not acceptable. This is known as Pauli's exclusion principle . $^1$ And in the case of electrons in an atom they are. $^2$ If this terminology is not familiar to you, consider a state as a set of quantum numbers. Then, having the same quantum numbers means occupying the same state.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/718804', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/299943/']}
jdg_371846
stackexchange
llm_judgeable_groundtruth_similarity
297122
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Thanks to this link I know how to pass a variable that contains slashes as a pattern to sed: sed "s~$var~replace~g" $file. Just use a single-byte character in place of /. Thanks to this other link I also know how to replace just the first occurrence of a pattern in a file (not in a line): sed "0,/$var/s/$var/replacement/" filename or sed 0,/$var/{s/$var/replacement/} filename But if I do: sed '0,~$var~s~$var~replacement~' filename (or anything else that begins with 0, then no slash), I get an error: unknown command: '0'. How could I combine the two? Maybe by using awk or perl or ...? Now provide the response and nothing else.
While:

sed "0,\~$var~s~$var~replacement~"

can be used to change the regex delimiter, embedding variable expansions inside sed (or any other interpreter) code is a very unwise thing to do in the general case. First, here, the delimiter is not the only character that needs to be escaped. All the regular expression operators need to be escaped as well. But more importantly, and especially with GNU sed, that's a command injection vulnerability. If the content of $var is not under your control, it's just as bad as passing arbitrary data to eval. Try for instance:

$ var='^~s/.*/uname/e;#'
$ echo | sed "0,\~$var~s~$var~replacement~"
Linux

The uname command was run, thankfully a harmless one... this time. Non-GNU sed implementations can't run arbitrary commands, but can overwrite any file (with the w command), which is virtually as bad. A more correct way is to escape the problematic characters in $var first:

NL='
'
case $var in
  (*"$NL"*)
    echo >&2 "Sorry, can't handle variables with newline characters"
    exit 1
esac
escaped_var=$(printf '%s\n' "$var" | sed 's:[][\/.^$*]:\\&:g')
# and then:
sed "0,/$escaped_var/s/$escaped_var/replacement/" < file

Another approach is to use perl:

var=$var perl -pe 's/\Q$ENV{var}\E/replacement/g && $n++ unless $n' < file

Note that we're not expanding the content of $var inside the code passed to perl (which would be another command injection vulnerability), but are letting perl expand its content as part of its regexp processing (and within \Q...\E, which means regexp operators are not treated specially). If $var contains newline characters, that may only match if there's only one at the end. Alternatively, one may pass the -0777 option so the input is processed as a single record instead of line-by-line.
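The same escape-then-substitute idea carries over to other regex engines. As a sketch in Python (illustrative, not part of the original answer), re.escape plays the role of the sed escaping step and count=1 plays the role of the 0,/pat/ address:

```python
import re

var = "a/b.c*"                 # pattern text full of regex metacharacters
text = "x a/b.c* y a/b.c* z"

escaped = re.escape(var)       # neutralize . * / and friends as literals
# replace only the first occurrence, like sed's 0,/pat/ range
result = re.sub(escaped, "replacement", text, count=1)
print(result)                  # x replacement y a/b.c* z
```

Because the pattern is escaped programmatically rather than spliced into interpreter source, there is no injection surface regardless of what $var contains.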
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/297122', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/180597/']}
jdg_371847
stackexchange
llm_judgeable_groundtruth_similarity
202655
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Most places on the internet that I see give a schematic for an AND gate that is something like this: simulate this circuit – Schematic created using CircuitLab The only thing about this is that when B is turned on but not A, which should logically output false, it seems to be somewhere between it (this can easily be seen with an LED at the output: it isn't as bright as it is if both A and B are on, logically satisfying AND, but it still lights up). I was messing around and came up with this idea for an AND logic gate which works a bit more correct and I have been using it recently: simulate this circuit Essentially what I came up with when building that is Input A is put through a NOT gate which is put into another NOT gate, but the Collector on the second NOT gate is only powered by Input B. Therefore, if Input B is off, no output to begin with, but if its on but A isn't, then the first NOT gate powers the base to the second, making B ground. I like building this design better, and it works so far for me, but I was wondering: How practical is this design (e.g. why haven't I seen it before; are there any flaws)? Now provide the response and nothing else.
It does work as an AND gate, but there is no gain for input B -- so you can't use its output to drive another gate's Input B. For a logic family to work well, you need to have the output signal go high and low, and also be able to drive a number (fan out) of similar gates. You could make a gate with input A and the V+ input of Q1 -- it would generate OUT = A & not(B). That gate would have gain.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/202655', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/66384/']}
jdg_371848
stackexchange
llm_judgeable_groundtruth_similarity
42300642
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a route config { path: '/root/:rootId', children: [{ path: '/child1/:child1Id', children: [{ path: '/child2/:child2Id component: TestComponent }] }]} In TestComponent how can I easily get all route params. I'm wondering if there is an easier way than let rootId = route.parent.parent.snapshot.params;let child1Id = route.parent.snapshot.params;let child2Id = route.snapshot.params; This seems overly redundant especially if I'm watching the route params observable instead of access the param through the route snapshot. This method also seems fragile since it would break If I moved any any of the routes/params around. Im used to angular ui-router where a single object $stateParams was supplied with all param data easily accessible. I have these same concerns with route resolved data along being accessed from a single node in the route tree. Any help would be much appreciated. Thanks in advance Now provide the response and nothing else.
As of Angular 5.2, you can do Router configuration to inherit all params to child states. See this commit if interested in the gory details, but here's how it's working for me: Wherever you have your call to RouterModule.forRoot() , include a configuration object with the inheritance strategy set to always (default is emptyOnly ): import {RouterModule, ExtraOptions} from "@angular/router";export const routingConfiguration: ExtraOptions = { paramsInheritanceStrategy: 'always'};export const Routing = RouterModule.forRoot(routes, routingConfiguration); Now when you're in a child component looking at a ActivatedRoute , ancestors' params appear there (e.g. activatedRoute.params ) rather than something messy like activatedRoute.parent.parent.parent.params . You could access the value directly (e.g. activatedRoute.params.value.userId ) or subscribe via activatedRoute.params.subscribe(...) .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42300642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2370392/']}
jdg_371849
stackexchange
llm_judgeable_groundtruth_similarity
27869835
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using the gtkmm library on linux to draw a simple menu for my GUI. In the below code the compiler complained about unable to resolve address sigc::mem_fun(*this, AppWindow::hide))); ^appwindow.cpp:15:41: note: could not resolve address from overloaded function But when I insert the & it compiles fine m_menu_app.items().push_back(MenuElem("Quit", sigc::mem_fun(*this, &AppWindow::hide))); What difference does it make here? Isn't the hide function just an address in the first place? Now provide the response and nothing else.
This is the exact definition of the function-to-pointer conversion, [conv.func]: An lvalue of function type T can be converted to a prvalue of type “pointer to T .” The result is a pointer to the function. 55 55) This conversion never applies to non-static member functions because an lvalue that refers to a non-static member function cannot be obtained . Thus the decay that we see with normal, non-member functions 1 doesn't apply and you need to explicitly take the address. I.e. void f();struct A { void f(); static void g();};auto a = f; // Ok: auto deduced as void(*)()auto b = A::f; // Error: A::f not an lvalue, auto cannot be deducedauto c = A::g; // Ok: auto deduced as void(*)() 1 Or static member functions.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27869835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/295128/']}
jdg_371850
stackexchange
llm_judgeable_groundtruth_similarity
107126
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Smoothly embed a genus g surface in $\mathbb{R}^3$, and pick a normal vector pointing "out" of the surface at each point. Then on each tangent plane, I have a map which rotates the tangent plane 90 degrees in the direction given by the right hand rule. This gives an almost complex structure on the manifold, and every almost complex structure is integrable in real dimension 2, so this defines a complex structure on the surface. Question 1 : Does the above paragraph make sense? I am at the point where I think I can make grammatically correct sentences using the words above, but I am still not sure about all of the logical interconnections. Say the first paragraph is correct. Then for each smooth embedding of the g-holed torus into $\mathbb{R}^3$ I get a complex structure on that surface. But I know that there are many different complex structures on the g-holed torus (and that the moduli space of such curves is 3g-3 complex dimensional for g>1). Question 2,3,4 : How many of these complex structures can I get through different embeddings? If I give you actual equations for an embedding, can you compute which point in the moduli space I am determining? In the particular case of the torus, can you tell me which $\tau$ corresponds to a given embedding? Now provide the response and nothing else.
Question 1: Looks good to me. Question 2: is a duplicate of this question (the answer is: every conformal structure can be so realized). Question 3/4. There are algorithms to compute the conformal structure given a surface, due to Sasha Bobenko and his collaborators, which work by discretizing the surface, and then computing periods. Looking at the list of arxiv preprints, I am not seeing something directly on this topic, so this may be in preparation, but you can write to him.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/107126', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1106/']}
jdg_371851
stackexchange
llm_judgeable_groundtruth_similarity
217821
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When incident light passes through a hydrogen gas, for example, does it have 50% chance (since it's a 1:1 ratio of protons to electrons) of getting absorbed by the proton? Any chance at all? If no, why not? Does a proton have a "bandgap"? If yes, what happens when a photon is absorbed by a proton? Now provide the response and nothing else.
Does a proton have a "bandgap"? If yes, what happens when a photon is absorbed by a proton? For single protons, as in a plasma, there exists Compton scattering. The photon transfers part of its energy to the proton and scatters off at a lower energy/frequency, the proton taking up the energy-momentum balance. This is a continuous spectrum, from very low energies on. For high-energy photons the quark structure of the proton can be probed, and depending on the energy more particles are created by the interaction with the constituent quarks. There are no energy levels for the quarks within the proton. There do exist baryonic resonances, i.e. states with the same three quarks at higher excitation. If one scatters gammas of the appropriate energy, some resonances can be excited; for example, the N(1520) decays into a proton and a gamma, so a resonance will appear in the cross-section as the center-of-mass energy passes that value in proton-gamma scattering. This is a theoretical study of gamma-proton scattering at high energies.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/217821', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/60857/']}
jdg_371852
stackexchange
llm_judgeable_groundtruth_similarity
25688897
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using melt and encounter the following warning message: attributes are not identical across measure variables; they will be dropped After looking around people have mentioned it is because the variables are different classes; however, that is not the case with my dataset. Here is the dataset: test <- structure(list(park = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("miss", "piro", "sacn", "slbe"), class = "factor"), a1.one = structure(c(3L, 1L, 3L, 3L, 3L, 3L, 1L, 3L, 3L, 3L), .Label = c("agriculture", "beaver", "development", "flooding", "forest_pathogen", "harvest_00_20", "harvest_30_60", "harvest_70_90", "none"), class = "factor"), a2.one = structure(c(6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L), .Label = c("development", "forest_pathogen", "harvest_00_20", "harvest_30_60", "harvest_70_90", "none"), class = "factor"), a3.one = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), .Label = c("forest_pathogen", "harvest_00_20", "none"), class = "factor"), a1.two = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), .Label = c("agriculture", "beaver", "development", "flooding", "forest_pathogen", "harvest_00_20", "harvest_30_60", "harvest_70_90", "none"), class = "factor"), a2.two = structure(c(6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L, 6L), .Label = c("development", "forest_pathogen", "harvest_00_20", "harvest_30_60", "harvest_70_90", "none"), class = "factor"), a3.two = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), .Label = c("forest_pathogen", "harvest_00_20", "none" ), class = "factor")), .Names = c("park", "a1.one", "a2.one", "a3.one", "a1.two", "a2.two", "a3.two"), row.names = c(NA, 10L), class = "data.frame") And here is the structure: str(test)'data.frame': 10 obs. 
of 7 variables:
 $ park  : Factor w/ 4 levels "miss","piro",..: 1 1 1 1 1 1 1 1 1 1
 $ a1.one: Factor w/ 9 levels "agriculture",..: 3 1 3 3 3 3 1 3 3 3
 $ a2.one: Factor w/ 6 levels "development",..: 6 6 6 6 6 6 6 6 6 6
 $ a3.one: Factor w/ 3 levels "forest_pathogen",..: 3 3 3 3 3 3 3 3 3 3
 $ a1.two: Factor w/ 9 levels "agriculture",..: 3 3 3 3 3 3 3 3 3 3
 $ a2.two: Factor w/ 6 levels "development",..: 6 6 6 6 6 6 6 6 6 6
 $ a3.two: Factor w/ 3 levels "forest_pathogen",..: 3 3 3 3 3 3 3 3 3 3
Is it because the number of levels is different for each variable? So, can I just ignore the warning message in this case? To generate the warning message:
library(reshape2)
test.m <- melt(test, id.vars=c('park'))
Warning message:
attributes are not identical across measure variables; they will be dropped
Thanks. Now provide the response and nothing else.
An explanation: When you melt, you are combining multiple columns into one. In this case, you are combining factor columns, each of which has a levels attribute. These levels are not the same across columns because your factors are actually different. melt just coerces each factor to character and drops their attributes when creating the value column in the result. In this case the warning doesn't matter, but you need to be very careful when combining columns that are not of the same "type", where "type" does not mean just vector type, but generically the nature of things it refers to. For example, I would not want to melt a column containing speeds in MPH with one containing weights in LBs. One way to confirm that it is okay to combine your factor columns is to ask yourself whether any possible value in one column would be a reasonable value to have in every other column. If that is the case, then likely the correct thing to do would be to ensure that every factor column has all the possible levels that it could accept (in the same order). If you do this, you will not get a warning when you melt the table. An illustration:
library(reshape2)
DF <- data.frame(id=1:3, x=letters[1:3], y=rev(letters)[1:3])
str(DF)
The levels for x and y are not the same:
'data.frame': 3 obs. of 3 variables:
 $ id: int 1 2 3
 $ x : Factor w/ 3 levels "a","b","c": 1 2 3
 $ y : Factor w/ 3 levels "x","y","z": 3 2 1
Here we melt and look at the column x and y were molten into (value):
melt(DF, id.vars="id")$value
We get a character vector and a warning:
[1] "a" "b" "c" "z" "y" "x"
Warning message:
attributes are not identical across measure variables; they will be dropped
If however we reset the factors to have the same levels and only then melt:
DF[2:3] <- lapply(DF[2:3], factor, levels=letters)
melt(DF, id.vars="id", factorsAsStrings=F)$value
We get the correct factor and no warnings:
[1] a b c z y x
Levels: a b c d e f g h i j k l m n o p q r s t u v w x y z
The default behavior of melt is to drop factor levels even when they are identical, which is why we use factorsAsStrings=F above. If you had not used that setting you would have gotten a character vector, but no warning. I would argue the default behavior should be to keep the result as a factor, but that is not the case here.
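The same level-alignment idea exists outside R. As a hedged illustration (not part of the answer above), pandas exposes `union_categoricals` for merging the category levels of factor-like columns before combining them, instead of silently falling back to plain strings:

```python
import pandas as pd
from pandas.api.types import union_categoricals

# Two categorical columns with different category levels,
# mirroring the x/y factor columns in the R example.
x = pd.Categorical(["a", "b", "c"])
y = pd.Categorical(["z", "y", "x"])

# union_categoricals merges the levels first, so the result
# stays categorical rather than decaying to plain strings.
combined = union_categoricals([x, y])
print(list(combined))                # values from both columns, in order
print(sorted(combined.categories))   # the merged set of levels
```

This is the pandas analogue of resetting both R factors to a common set of levels before melting.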
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25688897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3178354/']}
jdg_371853
stackexchange
llm_judgeable_groundtruth_similarity
47606761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a code which is doing a very long ETL operation. There are network problems sometimes. I want to catch the error read tcp 10.10.10.10:32423 -> 20.20.20.20:2344: i/o timeout and retry the operation again. Here is my code: for { err := mssql.Db.ProcessAllData(true, &processedAsutpIds, filename) if err == nil { fmt.Println("done.") os.Exit(0) } else { fmt.Println(err.Error()) fmt.Println("!!!! Error occurred - sleep for 5 minutes") time.Sleep(time.Minute * 5) }} Is it possible to rewrite it in more idiomatic way ? Now provide the response and nothing else.
You might consider a retry function such as the one described in "My favorite #golang retry function" by Alexandre Bourget:

var signedContent []byte
err := retry(5, 2*time.Second, func() (err error) {
    signedContent, err = signFile(unsignedFile, contents)
    return
})
if err != nil {
    log.Println(err)
    http.Error(w, err.Error(), 500)
    return
}

With the retry function being:

func retry(attempts int, sleep time.Duration, f func() error) (err error) {
    for i := 0; ; i++ {
        err = f()
        if err == nil {
            return
        }
        if i >= (attempts - 1) {
            break
        }
        time.Sleep(sleep)
        log.Println("retrying after error:", err)
    }
    return fmt.Errorf("after %d attempts, last error: %s", attempts, err)
}
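The Go helper above translates almost line for line into other languages. As a sketch (my own, not from the linked post), a minimal Python equivalent with an invented `flaky` operation for demonstration:

```python
import time

def retry(attempts, sleep, func):
    """Call func() up to `attempts` times, sleeping between failures.

    Returns func()'s result on success; re-raises the last error
    once all attempts are exhausted.
    """
    for i in range(attempts):
        try:
            return func()
        except Exception as err:
            if i == attempts - 1:
                raise
            print("retrying after error:", err)
            time.sleep(sleep)

# Example: a flaky operation that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("i/o timeout")
    return "done."

print(retry(5, 0.01, flaky))  # succeeds after two retries
```

As in the Go version, the sleep is skipped after the final attempt, and the caller decides how many attempts and how long a pause are appropriate.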
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47606761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/205270/']}
jdg_371854
stackexchange
llm_judgeable_groundtruth_similarity
24057085
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In school, our lecturer taught us that the entire array was passed by reference when we pass it to a function,. However, recently I read a book. It says that arrays are passed by pointer by default when passing the entire array to a function. The book further mention that " passing by pointer is very similar to passing by reference ", which means that passing by pointer and passing by reference are actually different. It appears that different source stated differently. So my question is: In C++, are arrays passed by reference or by pointer when we pass the entire array to a function? For Example: void funcA(int []); //Function Declarationint main(){ int array[5]; funcA(array); //Is array passed by ref or by pointer here?} Now provide the response and nothing else.
At worst, your lecturer is wrong. At best, he was simplifying terminology, and confusing you in the process. This is reasonably commonplace in software education, unfortunately. The truth is, many books get this wrong as well; the array is not "passed" at all, either "by pointer" or "by reference". In fact, because arrays cannot be passed by value due to an old C restriction, there is some special magic that happens with arrays as function arguments. The function declaration:

void funcA(int[]);

is silently translated into the following:

void funcA(int*);

and when you write this:

funcA(myArray);

it is silently translated into the following:

funcA(&myArray[0]);

The result is that you're not passing the array at all; you pass a pointer to its first element. Now, at certain levels of abstraction/simplification, you can call this "passing an array by pointer", "passing an array by reference" or even "passing a handle to an array", but if you want to talk in C++ terms, none of those phrases are accurate.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24057085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3437460/']}
jdg_371855
stackexchange
llm_judgeable_groundtruth_similarity
22606307
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to find the values of the local variables in a function by mocking? class A: def f(self): a = 5000 * 10 B().someFunction(a) How do I write a unit test for this function? I have mocked someFunction as I do not want the testing scope to go outside the block. The only way I can test the rest of the function is by checking if the value of variable a is 50000 at the end of the function. How do I do this? Now provide the response and nothing else.
A function that does not return anything, doesn't modify anything, and does not raise any error is a function that basically has no reason to exist.

- If your function is supposed to assert something and raise an error, give it wrong information and check that it raises the right error.
- If your function takes an object and modifies it, test that the new state of your object is as expected.
- If your function outputs something to the console, you can temporarily redirect the input/output stream and test what is written/read.
- If none of the above applies, just delete your function and forget about it :)
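To make the first point concrete, here is a hedged sketch (the classes are re-created from the question; the testing approach is one option, not the only one) that checks the function through the value it passes to its collaborator, using `unittest.mock`, instead of trying to read the local variable:

```python
from unittest.mock import patch

class B:
    def someFunction(self, a):
        pass  # real implementation is irrelevant for this test

class A:
    def f(self):
        a = 5000 * 10
        B().someFunction(a)

# Instead of inspecting the local `a`, assert on its observable
# effect: the argument handed to the collaborator.
with patch.object(B, "someFunction") as mock_fn:
    A().f()
    mock_fn.assert_called_once_with(50000)
print("ok")
```

If the computation of `a` were wrong, `assert_called_once_with` would raise, so the local value is tested indirectly through behavior.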
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22606307', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/750940/']}
jdg_371856
stackexchange
llm_judgeable_groundtruth_similarity
209231
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: man 5 shadow says this about the 2 nd field: encrypted password Is that true nowadays? I think it should say "hashed password". Am I correct? Now provide the response and nothing else.
No, the shadow file does not contain encrypted passwords, not on any Unix variant that I've seen. That would require an encryption key somewhere — where would it be? Even the original version of the crypt function was in fact a hash function. It operated by using the password as a key for a variant of DES . The output of crypt is the encryption of a block with all bits zero. Although this uses an encryption function as part of the implementation, the crypt operation is not an encryption operation, it is a hash function : a function whose inverse is hard to compute and such that it is difficult to find two values producing the same output. Within its limitations, the original DES-based crypt implementation followed the basic principles of a good password hash function : irreversible function, with a salt , and a slow-down factor. It's the limitations, not the design, that make it unsuitable given today's computing power: maximum of 8 characters in the password, total size that makes it amenable to brute force, salt too short, iteration count too short. Because of the crypt name (due to the fact that crypt uses encryption internally), and because until recently few people were educated in cryptography, a lot of documentation of the crypt function and of equivalents in other environments describes it as “password encryption”. But it is in fact a password hash, and always has been. Modern systems use password hashing functions based on more robust algorithms. Although some of these algorithms are known as “MD5”, “SHA-256” and “SHA-512”, the hash computation is not something like MD5(password + salt) but an iterated hash which meets the slowness requirement (though common methods lack the memory hardness that protects against GPU-based acceleration).
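To illustrate what a modern salted, iterated password hash looks like in code (a sketch only; the iteration count and salt size here are illustrative, not a recommendation), Python's standard library exposes PBKDF2:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, digest) for a salted, slow, one-way hash."""
    if salt is None:
        salt = os.urandom(16)  # per-password random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # correct password
print(verify_password("wrong", salt, digest))    # wrong password
```

Note the three properties discussed above: a random salt, a deliberately slow iterated computation, and no way to decrypt the digest back to the password; verification re-hashes the candidate and compares.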
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/209231', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/47954/']}
jdg_371857
stackexchange
llm_judgeable_groundtruth_similarity
11532
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I find any unused functions in a PHP project? Are there features or APIs built into PHP that will allow me to analyse my codebase - for example Reflection , token_get_all() ? Are these APIs feature rich enough for me not to have to rely on a third party tool to perform this type of analysis? Now provide the response and nothing else.
Thanks Greg and Dave for the feedback. Wasn't quite what I was looking for, but I decided to put a bit of time into researching it and came up with this quick and dirty solution: <?php $functions = array(); $path = "/path/to/my/php/project"; define_dir($path, $functions); reference_dir($path, $functions); echo "<table>" . "<tr>" . "<th>Name</th>" . "<th>Defined</th>" . "<th>Referenced</th>" . "</tr>"; foreach ($functions as $name => $value) { echo "<tr>" . "<td>" . htmlentities($name) . "</td>" . "<td>" . (isset($value[0]) ? count($value[0]) : "-") . "</td>" . "<td>" . (isset($value[1]) ? count($value[1]) : "-") . "</td>" . "</tr>"; } echo "</table>"; function define_dir($path, &$functions) { if ($dir = opendir($path)) { while (($file = readdir($dir)) !== false) { if (substr($file, 0, 1) == ".") continue; if (is_dir($path . "/" . $file)) { define_dir($path . "/" . $file, $functions); } else { if (substr($file, - 4, 4) != ".php") continue; define_file($path . "/" . $file, $functions); } } } } function define_file($path, &$functions) { $tokens = token_get_all(file_get_contents($path)); for ($i = 0; $i < count($tokens); $i++) { $token = $tokens[$i]; if (is_array($token)) { if ($token[0] != T_FUNCTION) continue; $i++; $token = $tokens[$i]; if ($token[0] != T_WHITESPACE) die("T_WHITESPACE"); $i++; $token = $tokens[$i]; if ($token[0] != T_STRING) die("T_STRING"); $functions[$token[1]][0][] = array($path, $token[2]); } } } function reference_dir($path, &$functions) { if ($dir = opendir($path)) { while (($file = readdir($dir)) !== false) { if (substr($file, 0, 1) == ".") continue; if (is_dir($path . "/" . $file)) { reference_dir($path . "/" . $file, $functions); } else { if (substr($file, - 4, 4) != ".php") continue; reference_file($path . "/" . 
$file, $functions); } } } } function reference_file($path, &$functions) { $tokens = token_get_all(file_get_contents($path)); for ($i = 0; $i < count($tokens); $i++) { $token = $tokens[$i]; if (is_array($token)) { if ($token[0] != T_STRING) continue; if ($tokens[$i + 1] != "(") continue; $functions[$token[1]][1][] = array($path, $token[2]); } } }?> I'll probably spend some more time on it so I can quickly find the files and line numbers of the function definitions and references; this information is being gathered, just not displayed.
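For comparison, the same defined-versus-referenced bookkeeping can be sketched in Python with the standard `ast` module (a rough analogue of the token-based PHP scan above, not a drop-in replacement; it only sees direct `name(...)` calls, so dynamic calls and methods would need more work):

```python
import ast

source = """
def used():
    return 1

def unused():
    return 2

print(used())
"""

tree = ast.parse(source)

# Every function definition in the file.
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}

# Every plain `name(...)` call site.
called = {n.func.id for n in ast.walk(tree)
          if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)}

print(sorted(defined - called))  # defined but never called
```

As with the PHP version, the diff of the two sets is only a first approximation: anything referenced indirectly (callbacks, `getattr`, exported APIs) will show up as a false positive.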
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1142/']}
jdg_371858
stackexchange
llm_judgeable_groundtruth_similarity
369850
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Background: I'm in a noisy environment, and I'm trying to optimize my WiFi network to have a more stable connection for the somewhat high volume of users (~50-75 on a busy day). There are 4 APs, and I've already adjusted the channels and transmit power, and overall I have fairly decent coverage. However, I still get about ~10% packet drop when pinging Google and walking around the building, roaming from AP to AP. In most WiFi APs I've seen, the default RTS Threshold is set at 2347 (from what I've read in various places, this setting counts as "disabled"), and Fragmentation Threshold is set at 2346. My particular brand of router is set at 2346 and 2346. I have several questions... Where is the value of 2346 derived from? It seems somewhat arbitrary, however, the notes for Frag. Threshold indicates that it needs to be over 256 and an even number. How are the RTS and Frag. Thresholds related? Their values can't be coincidence. If modified, should they be changed together? What's a safe value to try lowering them to, for starters? My priority is not necessarily getting peak bandwidth for each device, but giving users a stable, consistent bandwidth/connection. Now provide the response and nothing else.
2346 is the maximum 802.11 frame size. Setting the RTS and fragmentation thresholds to the maximum means that no packets will meet the threshold.

The fragmentation threshold limits the maximum frame size. This reduces the time required to transmit the frame, and therefore reduces the probability that it will be corrupted (at the cost of more data overhead). The RTS threshold specifies the frame size at which the transmitter must use the RTS/CTS protocol, which is largely to solve the hidden node problem. This obviously also adds overhead.

Not necessarily -- if you don't have a hidden node problem then changing the RTS threshold won't improve performance. In order for RTS/CTS to kick in, the RTS threshold must be the same as or smaller than the fragmentation threshold, though.

I would start with setting them such that a standard Ethernet frame is fragmented into two 802.11 frames (1500/2 = 750 bytes payload + 34 bytes overhead = 784 bytes) and any frame bigger than a third of a standard Ethernet frame uses RTS (534 bytes).

As far as I know though, both these settings only affect the transmitter, i.e. configuring them on the AP only makes the AP use them for its transmissions and doesn't make the clients use them for their transmissions.
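The suggested starting values follow from a quick back-of-the-envelope calculation (using the answer's own assumptions: a 1500-byte Ethernet payload and 34 bytes of per-frame 802.11 overhead):

```python
ETH_PAYLOAD = 1500  # standard Ethernet frame payload, bytes (assumed above)
OVERHEAD = 34       # per-802.11-frame overhead, bytes (assumed above)

# Fragment a full Ethernet frame into two 802.11 frames:
frag_threshold = ETH_PAYLOAD // 2 + OVERHEAD

# Use RTS/CTS for anything bigger than a third of an Ethernet frame:
rts_threshold = ETH_PAYLOAD // 3 + OVERHEAD

print(frag_threshold)  # 784
print(rts_threshold)   # 534
```

This also makes the ordering constraint visible: 534 < 784, so RTS/CTS kicks in before fragmentation does, as the answer requires.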
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/369850', 'https://serverfault.com', 'https://serverfault.com/users/65498/']}
jdg_371859
stackexchange
llm_judgeable_groundtruth_similarity
129959
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been reading Kreyszig's book on functional analysis, where it uses Zorn's lemma to prove the Hahn Banach theorem. However I don't quite get what Zorn's lemma is saying. I understand that it is an axiom and it is equivalent to the axiom of choice, but axiom of choice seems much more intuitive to me. So is there any way to understand the Zorn's lemma in a more intuitive way? Now provide the response and nothing else.
I had similar conceptual trouble until I came across Tim Gowers' blog post on How to use Zorn's lemma. Its main thesis: If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn’s lemma may well be able to help you. He shows how to use the Lemma in a number of cases where my intuitive approach would have been something like "ah, but we can construct the thing by transfinite induction ... let's find a sufficiently large ordinal to induct over (work, work, work) ... and then fix a choice function such that we can make choices at each step along the way (work, work, work) ... and if the thing is still not made when we reach the top of our chosen ordinal, it would be (work, work) a contradiction". Compared to that, Zorn's Lemma packs a lot of boilerplate argument into a simple, tidy, reusable tool where one just needs to specify the minimal properties of the situation for the construction to work. In particular, the apparently ill-motivated condition about chains is exactly what is needed for the tedious transfinite-induction argument to keep rolling when we hit a limit ordinal.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/129959', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/28024/']}
jdg_371860
stackexchange
llm_judgeable_groundtruth_similarity
33333448
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using VS2015 char a = 0xFF;char b = 0x80; Both lines give warning C4309: 'initializing': truncation of constant value But when I look at it in the debugger, the variables do indeed contain the correct values. What does the warning mean in this case? Can I ignore it or is my code bad? Now provide the response and nothing else.
From https://msdn.microsoft.com/en-us/library/sz5z1byt.aspx

'conversion' : truncation of constant value

The type conversion causes a constant to exceed the space allocated for it. You may need to use a larger type for the constant. The following sample generates C4309:

// C4309.cpp
// compile with: /W2
int main()
{
    char c = 128; // C4309
}

The compiler assumes that you expect 0xFF and 0x80 to be positive numbers. Like in the example, the value is written as 128 and not as a negative number with a - sign. So the compiler lets you know that char is a signed 8-bit type, which means your value sets the "sign bit" although it was not marked as negative. If the sign of the variable is not important for you, use unsigned char instead. This will also remove the warning.

EDIT In case you know what you are doing and you set the MSB bit on purpose, you can either use casting to suppress the warnings:

char a = static_cast<char>(0xFF);
char b = static_cast<char>(0x80);

Or use #pragma to disable this specific warning for these lines:

#pragma warning( disable : 4309 )
char a = 0xFF;
char b = 0x80;
#pragma warning( default : 4309 )

I personally prefer the first option since it will work for every compiler, while the #pragma option is specific to MSVC.

EDIT2 Of course you can always write

char a = -1;   // 0xFF
char b = -128; // 0x80

But this is less readable in my opinion.

EDIT 3 The new versions of MSVC seem to complain about static_cast too. To resolve it, there is a need to specify explicitly that the provided constant is unsigned:

char a = static_cast<char>(0xFFu);
char b = static_cast<char>(0x80u);

More than that, on the latest versions there is no need for casting at all. This compiles without warnings:

char a = 0xFFu;
char b = 0x80u;
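The reinterpretation behind the warning can be sketched numerically (assuming, as MSVC does on common targets, that plain char is signed and 8 bits wide):

```python
def to_signed8(value):
    """Interpret the low 8 bits of value as a signed char (two's complement)."""
    value &= 0xFF
    return value - 256 if value >= 128 else value

print(to_signed8(0xFF))  # -1    -- why `char a = 0xFF` warns: 255 > 127
print(to_signed8(0x80))  # -128  -- the most negative 8-bit value
print(to_signed8(0x7F))  # 127   -- largest value that fits without a warning
```

This is why the debugger still shows the "correct" bit pattern: the bits are preserved, only their signed interpretation differs from the literal you wrote.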
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33333448', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2068573/']}
jdg_371861
stackexchange
llm_judgeable_groundtruth_similarity
41775517
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to wait until the fetch instruction has completed before executing the next code / instruction (just like how AJAX waiting works)? These functions are actually used to request the “privacy value” of a post from the Facebook Graph API. How can I keep an alert box from running until everything is over (i.e. the chained fetching in FirstRequestToGraph and RequestNextPage )? function RequestNextPage(NextPage) { fetch(NextPage, { method: 'GET' }) .then(function(response) { return response.json(); }) .then(function(json) { if (json.data.length == 0) { console.log("ended liao lur"); } else { RequestNextPage(json.paging.next); } }) .catch(function(err) { console.log(`Error: ${err}`) });}function FirstRequestToGraph(AccessToken) { fetch('https://graph.facebook.com/v2.8/me?fields=posts.limit(275){privacy}%2Cname&access_token=' + AccessToken, { method: 'GET' }) .then(function(response) { return response.json(); }) .then(function(json) { NextPage = json.posts.paging.next; }) .catch(function(err) { console.log(`Error: ${err}`) });}FirstRequestToGraph(AccessToken) .then(function() { RequestNextPage(NextPage); // Recursively until there's no more next page. }) .then(function() { alert("everything has ended nicely"); // Still pop up before `RequestNextPage` completed. }); Now provide the response and nothing else.
If you have an asynchronous function in your component, like this...

async getJSON() {
  return fetch('/website/MyJsonFile.json')
    .then((response) => response.json())
    .then((responseJson) => { return responseJson; });
}

Then you can call it, and wait for it to download, with the await keyword, using something like this...

async caller() {
  const json = await this.getJSON(); // waits until completion
  console.log(json.hello);           // hello is now available
}

You'll also want to update getJSON(), changing return fetch() to return await fetch(). async is wonderful. So is await. Check it out: Mozilla Developer Network: Async Function
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41775517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7114758/']}
jdg_371862
stackexchange
llm_judgeable_groundtruth_similarity
56227218
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My project has django-heroku in its Pipfile as a package . django-heroku has gunicorn in its Pipfile as a dev-package . See: https://github.com/heroku/django-heroku/blob/master/Pipfile I would expect that after running pipenv install --dev in my project, I could then run pipenv run gunicorn . But it throws the following error: Error: the command gunicorn could not be found within PATH or Pipfile's [scripts]. If dev dependencies aren't available, what's the point of install --dev ? Now provide the response and nothing else.
One answer is that the "dev dependencies" of package X are the packages someone would need if they were developing (as opposed to using) package X. I would expect that after running pipenv install --dev in my project, ... If you use pipenv install --dev in your project, pipenv should install all the packages that are required to develop your project. If it recursively installed all dev dependencies all the way down, it might pull in Python profiling packages, test runners, etc., that other packages need for development. Those wouldn't necessarily be appropriate for someone developing your project. As an example, if my project listed pytest as a dev dependency, I would be unhappy if pipenv installed nose, which could be listed as a dev dependency in some other, out-of-date package. If developers of your package need gunicorn, you should list it explicitly as a dev dependency of your project.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56227218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2053389/']}
jdg_371863
stackexchange
llm_judgeable_groundtruth_similarity
24347340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Select2 in a project to style some select boxes in a search form. I managed to change the gradient background of the arrow container to a black gradient: .select2-container .select2-choice .select2-arrow { background-image: -khtml-gradient(linear, left top, left bottom, from(#424242), to(#030303)); background-image: -moz-linear-gradient(top, #424242, #030303); background-image: -ms-linear-gradient(top, #424242, #030303); background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #424242), color-stop(100%, #030303)); background-image: -webkit-linear-gradient(top, #424242, #030303); background-image: -o-linear-gradient(top, #424242, #030303); background-image: linear-gradient(#424242, #030303);} I would like the arrow to be white, but unfortunately Select2 is using a background image for the different icons instead of font-awesome or something similar, so there's no way to just change the color with CSS. What would be the easiest way to make the arrow white instead of the default grey? Do I really have to replace the background png (select2.png and select2x2.png) with my own? Or is there an easier method? Another question I have is how to change the height of the select boxes. I know how to change the height of the dropdown box in opened state, but I want to change the height of the selectbox in closed state. Any ideas? Now provide the response and nothing else.
Thanks for the suggestions in the comments. I made a bit of a dirty hack to get what I want without having to create my own image. With javascript I first hide the default tag that's being used for the down arrow, like so: $('b[role="presentation"]').hide(); I then included font-awesome in my page and add my own down arrow, again with a line of javascript, to replace the default one: $('.select2-arrow').append('<i class="fa fa-angle-down"></i>'); Then with CSS I style the select boxes. I set the height, change the background color of the arrow area to a gradient black, change the width, font-size and also the color of the down arrow to white: .select2-container .select2-choice { padding: 5px 10px; height: 40px; width: 132px; font-size: 1.2em; }.select2-container .select2-choice .select2-arrow { background-image: -khtml-gradient(linear, left top, left bottom, from(#424242), to(#030303)); background-image: -moz-linear-gradient(top, #424242, #030303); background-image: -ms-linear-gradient(top, #424242, #030303); background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #424242), color-stop(100%, #030303)); background-image: -webkit-linear-gradient(top, #424242, #030303); background-image: -o-linear-gradient(top, #424242, #030303); background-image: linear-gradient(#424242, #030303); width: 40px; color: #fff; font-size: 1.3em; padding: 4px 12px;} The result is the styling the way I want it: Update 5/6/2015 As @Katie Lacy mentioned in the other answer the classnames have been changed in version 4 of Select2. 
The updated CSS with the new classnames should look like this: .select2-container--default .select2-selection--single{ padding:6px; height: 37px; width: 148px; font-size: 1.2em; position: relative;}.select2-container--default .select2-selection--single .select2-selection__arrow { background-image: -khtml-gradient(linear, left top, left bottom, from(#424242), to(#030303)); background-image: -moz-linear-gradient(top, #424242, #030303); background-image: -ms-linear-gradient(top, #424242, #030303); background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #424242), color-stop(100%, #030303)); background-image: -webkit-linear-gradient(top, #424242, #030303); background-image: -o-linear-gradient(top, #424242, #030303); background-image: linear-gradient(#424242, #030303); width: 40px; color: #fff; font-size: 1.3em; padding: 4px 12px; height: 27px; position: absolute; top: 0px; right: 0px; width: 20px;} JS: $('b[role="presentation"]').hide();$('.select2-selection__arrow').append('<i class="fa fa-angle-down"></i>');
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24347340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1267793/']}
jdg_371864
stackexchange
llm_judgeable_groundtruth_similarity
586739
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My goal is to disprove the existence of an isomorphism between $(\mathbb R,+)$ and $(\mathbb R^*, \times)$. I proceeded by contradiction. Suppose $f$ is such a map. Then $$f(0-0)=f(0)f(-0)=-f(0)^2$$$$ f(0+0)=f(0)^2$$ $$f(0-0)=f(0+0)=f(0)$$ $$-f(0)^2=f(0)^2$$ thus, $$f(0)=0$$ This contradicts the fact that $f(0) \in \mathbb R^*$ Is this proof right ? It looks overly simple to me and doesn't use the one-to-one property of $f$. EDIT: the proof is indeed flawed: check answers below.For those looking for a valid one : let $x=f^{-1}(1)$ and $y=f^{-1}(-1)$. Then $x \neq y$ $$f(2x)=f(x)^2=1^2=(-1)^2=f(2y)$$Since $f$ is one-to-one, $$2x=2y$$ $$x=y$$ Now provide the response and nothing else.
No. You used $f(-0)=-f(0)$ but this isn't true. The inverse in the multiplicative group is $x\mapsto \cfrac{1}{x}$. So $f(-0)=\cfrac{1}{f(0)}$ To find the "usual" proof, notice that $\exp$ is an isomorphism between $(\Bbb R,+)$ and $(\Bbb R_+^*, \times)$ but here, you have $(\Bbb R^*, \times)$ which is "bigger", approximately twice as "big". So maybe, you could find a property so that in the first group, you have only one element verifying that property and in the second group, you have two.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/586739', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/66096/']}
jdg_371865
stackexchange
llm_judgeable_groundtruth_similarity
24387854
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to install a node module, installed via npm, and then require it from a casperjs script? (I see lots of posts and tools for running casper or phantom from inside node.js, but that is not what I'm trying to do.) The casperjs docs seem to say it is possible, but only show with hand-written toy modules that don't really do anything. The real-world module I'm trying to install is imap , but at this point I cannot get any module to work, even built-in ones like net . Simple example: npm install imapecho "var test = require('imap');" > test.jscasperjs test.js Gives me: CasperError: Can't find module imap/usr/local/src/casperjs/bin/bootstrap.js:263 in patchedRequiretest.js:1 (I can see the imap module from npm ls , and I can use it fine from a node.js script.) Or alternatively with a built-in module: echo "var test = require('net');" > test.jscasperjs test.js Complains "Can't find module net" I have npm --version of 1.4.14 and nodejs --version of v0.10.29. Are either of those too old, I wonder? (Casper is 1.1.0-beta, and Phantom is 1.9.7, both of which are the latest versions.) Now provide the response and nothing else.
PhantomJS and SlimerJS (the engines that are used for CasperJS) are not Node.js modules. They can be installed through npm for convenience. They have a different base infrastructure of modules which is distinct from Node.js. You will not be able to use imap or any module that depends on the net module. As Fanch points out, there are modules that can work inside the phantomjs runtime. If a module only uses some functionality of some native node.js module, you could try to change the implementation to use the API that phantomjs provides. I don't think this is easy though. Most of the time you will run into a wall. In the case of imap , it is pretty hopeless. You cannot even re-implement the require("net").Socket , because WebSockets are not supported in phantomjs (at least in 1.9.7).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24387854', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/841830/']}
jdg_371866
stackexchange
llm_judgeable_groundtruth_similarity
275853
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I got this question from a last year's olympiad paper. Compute $999,999\cdot 222,222 + 333,333\cdot 333,334$. Is there an approach to this by using pen-and-paper? EDIT Working through on paper made me figure out the answer. Posted below. I'd now like to see other methods. Thank you. Now provide the response and nothing else.
My observation suggests that we may simplify it somewhat like this assuming $x = 111,111$:$$\begin{align*}9x\cdot2x+3x(3x + 1) &= 9x^2 +18x^2 + 3x\\ &= 27x^2 + 3x\\ &= 3x(9x + 1)\end{align*}$$ So this means:$$\begin{align*}3x(9x + 1) &= 333,333\cdot (999,999+1)\\ &= 333,333\cdot1,000,000\\ &= \boxed{333,333,000,000}\end{align*}$$
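The factorization can be checked mechanically; here is a short Python sketch (Python chosen purely for illustration) verifying both the algebraic identity and the final product:

```python
# Verify 9x*2x + 3x(3x + 1) = 3x(9x + 1) with x = 111111,
# and the resulting product 333333 * 1000000.
x = 111_111
lhs = 999_999 * 222_222 + 333_333 * 333_334
assert lhs == 3 * x * (9 * x + 1) == 333_333 * 1_000_000
print(lhs)  # 333333000000
```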
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/275853', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34397/']}
jdg_371867
stackexchange
llm_judgeable_groundtruth_similarity
1556082
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Where should I save data related to my application? Where should I save configuration files? Where should I save temporary files? Should I do it in "Documents"? "AppData"? Etc... What is the best practice for saving data to the disk (I guess, best practice for special folders?!)? Now provide the response and nothing else.
- ApplicationData : Everything that your application needs as "per user" data and does not fall under other categories. Standard configuration files would go here.
- CommonApplicationData : Everything that is not "per user" data.
- LocalApplicationData : Data that is per user and non-roaming. For example, everything where you want to ENSURE that it is only stored on this machine (like machine activation codes, often also cache/temporary data). Standard temporary files would go here.
- MyDocuments : User data that the user actually would identify as "recognizable single documents".

If you don't care about the filename, you can also use a tempfile API to generate a temporary file in the temp directory. You should NOT do this manually. In, for example, .NET you can use Path.GetTempFileName() for that purpose.
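The same advice carries over outside .NET. As a hedged illustration, here is the equivalent tempfile-API usage in Python (chosen only as an example language), which likewise avoids hand-building file names in the temp directory:

```python
import os
import tempfile

# Let the tempfile API pick a safe, unique name in the temp directory
# instead of constructing one by hand.
fd, path = tempfile.mkstemp(suffix=".tmp")
try:
    with os.fdopen(fd, "w") as f:
        f.write("scratch data")
    print(path)  # e.g. /tmp/tmpa1b2c3.tmp (the name varies by run and OS)
finally:
    os.remove(path)
```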
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1556082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/122732/']}
jdg_371868
stackexchange
llm_judgeable_groundtruth_similarity
4805826
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this query which works but when I try to write the equivalent in LINQ I get the incorrect SQL produced. My query is:

SELECT COUNT(*)
FROM tableName
GROUP BY ColumnId

I've tried writing it as:

tableName.GroupBy(x => x.ColumnId).Count()

But looking in LINQPad it is producing the SQL:

SELECT COUNT(*) AS [value]
FROM (
    SELECT NULL AS [EMPTY]
    FROM [tableName] AS [t0]
    GROUP BY [t0].[ColumnId]
    ) AS [t1]

What am I doing wrong? Thanks! Now provide the response and nothing else.
Your LINQ query is counting the number of groups but your SQL query is producing the counts by group. You want var counts = tableName.GroupBy(x => x.ColumnId) .Select(g => new { g.Key, Count = g.Count() }); to get the counts by group. Note that if you want exactly the same SQL you want var counts = tableName.GroupBy(x => x.ColumnId) .Select(g => g.Count()); The first example above should be a little more useful as it gives the ids of each group as well.
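The distinction (counts per group versus the number of groups) is language-independent; here is a small Python sketch of the same idea, purely for illustration:

```python
from collections import Counter

rows = [{"ColumnId": 1}, {"ColumnId": 1}, {"ColumnId": 2}]

# What the SQL "SELECT COUNT(*) ... GROUP BY ColumnId" returns: a count per group.
per_group = Counter(r["ColumnId"] for r in rows)

# What GroupBy(...).Count() computed instead: the number of distinct groups.
group_count = len(per_group)

print(dict(per_group))  # {1: 2, 2: 1}
print(group_count)      # 2
```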
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4805826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/590752/']}
jdg_371869
stackexchange
llm_judgeable_groundtruth_similarity
26834
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I notice that most optical calcite for sale seems to be from natural (mined) sources. Also, I know in World War 2, mining optical calcite was a strategic objective. Is there some reason why high-grade optical calcite cannot be lab grown? Now provide the response and nothing else.
The crystal structure is trigonal, so has very low symmetry and high chance of twinning or other line defects forming during, say, Czochralski type growth methods. My guess would be the good optical quality stuff sat and annealed in situ for a long time to clear most of the twins/dislocations out. Support for this comes from A.J. Gratz et al., Geochimica et Cosmochimica Acta 57 491-495 (1993) where they observed, with AFM, calcite growth occurring primarily on screw dislocations.
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/26834', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/9663/']}
jdg_371870
stackexchange
llm_judgeable_groundtruth_similarity
12444716
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating a figure in Matplotlib like this:

from matplotlib import pyplot as plt

fig = plt.figure()
plt.plot(data)
fig.suptitle('test title')
plt.xlabel('xlabel')
plt.ylabel('ylabel')
fig.savefig('test.jpg')

I want to specify font sizes for the figure title and the axis labels. I need all three to be different font sizes, so setting a global font size ( mpl.rcParams['font.size']=x ) is not what I want. How do I set font sizes for the figure title and the axis labels individually? Now provide the response and nothing else.
Functions dealing with text like label , title , etc. accept the same parameters as matplotlib.text.Text . For the font size you can use size/fontsize :

from matplotlib import pyplot as plt

fig = plt.figure()
plt.plot(data)
fig.suptitle('test title', fontsize=20)
plt.xlabel('xlabel', fontsize=18)
plt.ylabel('ylabel', fontsize=16)
fig.savefig('test.jpg')

For globally setting title and label sizes, mpl.rcParams contains axes.titlesize and axes.labelsize . (From the page):

axes.titlesize : large   # fontsize of the axes title
axes.labelsize : medium  # fontsize of the x and y labels

(As far as I can see, there is no way to set x and y label sizes separately.) And I see that axes.titlesize does not affect suptitle . I guess, you need to set that manually.
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/12444716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1058521/']}
jdg_371871
stackexchange
llm_judgeable_groundtruth_similarity
380472
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $B_i(n)$ be the $i$th bit in the binary expansion of $n$, so that $n=\sum B_i(n)2^i$. Now let $n$ be randomly and uniformly chosen from some large range, and let $E(j)$ be the expected value of $B_j\bigl(n^2\bigr)$, the $j$th bit in the expansion of $n^2$. That is: $$E(j) = \lim_{M\to\infty} \frac1M \sum_{n=0}^{M-1}B_j\bigl(n^2\bigr)$$ if this limit exists. It is not hard to see that it must exist for any fixed $j$, since the function $B_j\bigl(n^2\bigr)$ is completely determined by the value of $n\bmod 2^{j+1}$, and so is periodic with period at most $2^{j+1}$. In fact we can get rid of the limit: $$E(j) = \frac1{2^{j+1}} \sum_{n=0}^{2^{j+1}-1}B_j\bigl(n^2\bigr)$$ For example, the first few values of $E$ are $\frac12, 0, \frac14, \frac14$. Numerical evidence suggests that: $$\lim_{j\to\infty} E(j) = \frac12$$ Is this true? Now provide the response and nothing else.
According to " Distribution of the figures 0 and 1 in the various orders of binary representation of kth powers of integers ", W. Gross and R. Vacca ( Mathematics of Computation , April 1968, 22 , #102, 423–427), the answer is yes . On page 423 they define a function $N_k(h)$, which is the count of 1 bits in the $h$th position of the sequence $n^k$ over one of its periods, so my $E(j)$ is exactly $N_2(j)2^{-(j+1)}$. They then show (page 424) that $$E(j) = \frac12\left(1 - 2^{-\lfloor j/2\rfloor}\right)$$ except for $j=0$. A similar result holds for arbitrary $k$th powers—the density of high-order bits approaches $\frac12$ for all $k$.
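The closed form is easy to probe empirically; a short Python sketch (the exact equalities below are for small j, checked by hand, with a looser convergence check for larger j):

```python
def E(j):
    """Density of 1 bits in position j of n**2 over one full period 2**(j+1)."""
    period = 2 ** (j + 1)
    return sum((n * n >> j) & 1 for n in range(period)) / period

# Matches E(j) = (1/2) * (1 - 2**-(j//2)) for small j >= 1:
for j in range(1, 5):
    print(j, E(j))
# 1 0.0
# 2 0.25
# 3 0.25
# 4 0.375

# And the densities approach 1/2, as claimed:
print(abs(E(16) - 0.5) < 0.01)  # True
```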
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/380472', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/25554/']}
jdg_371872
stackexchange
llm_judgeable_groundtruth_similarity
2067866
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering if the product of two nonsymmetric matrices can ever be a symmetric matrix. Honestly I would not know how to tackle this problem. Now provide the response and nothing else.
Try something simple first: $$\begin{bmatrix}0&0\\1&0\end{bmatrix}\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}0&0\\0&1\end{bmatrix}\;.$$ More generally, if $A$ is any square real matrix, $AA^T$ is symmetric: the $(i,j)$-entry is the dot product of the $i$-th row of $A$ and the $j$-th column of $A^T$, and the $j$-th column of $A^T$ is the $j$-th row of $A$, so the $(i,j)$-th entry of $AA^T$ is the dot product of the $i$-th and $j$-th rows of $A$. The $(j,i)$-th entry of $AA^T$ is then the dot product of the $j$-th and $i$-th rows of $A$, which is of course the same. This is not the only kind of example, however: $$\begin{bmatrix}0&0&0\\1&0&0\\0&0&0\end{bmatrix}\begin{bmatrix}0&0&0\\0&0&0\\1&0&0\end{bmatrix}=\begin{bmatrix}0&0&0\\0&0&0\\0&0&0\end{bmatrix}$$
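Both facts in the 2×2 example can be verified mechanically; here is a small pure-Python check (illustrative only):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_symmetric(M):
    return all(M[i][j] == M[j][i] for i in range(len(M)) for j in range(len(M)))

A = [[0, 0], [1, 0]]
B = [[0, 1], [0, 0]]
P = matmul(A, B)

print(P)  # [[0, 0], [0, 1]]
assert not is_symmetric(A) and not is_symmetric(B)
assert is_symmetric(P)
```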
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2067866', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_371873
stackexchange
llm_judgeable_groundtruth_similarity
10023827
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have often heard complaints against Java for not having unsigned data types. See for example this comment. I would like to know how is this a problem? I have been programming in Java for 10 years more or less and never had issues with it. Occasionally when converting bytes to ints a & 0xFF is needed, but I don't consider that as a problem. Since unsigned and signed numbers are represented with the same bit values, the only places I can think of where signedness matters are:

- When converting the numbers to other bit representation. Between 8, 16 and 32 bit integer types you can use bitmasks if needed.
- When converting numbers to decimal format, usually to Strings.
- Interoperating with non-Java systems through API's or protocols. Again the data is just bits, so I don't see the problem here.
- Using the numbers as memory or other offsets. With 32 bit ints this might be problem for very huge offsets.

Instead I find it easier that I don't need to consider operations between unsigned and signed numbers and the conversions between those. What am I missing? What are the actual benefits of having unsigned types in a programming language and how would having those make Java better? Now provide the response and nothing else.
Occasionally when converting bytes to ints a & 0xFF is needed, but I don't consider that as a problem. Why not? Is "applying a bitwise AND with 0xFF" actually part of what your code is trying to represent? If not, why should it have to be part of how you write it? I actually find that almost anything I want to do with bytes beyond just copying them from one place to another ends up requiring a mask. I want my code to be cruft-free; the lack of unsigned bytes hampers this :( Additionally, consider an API which will always return a non-negative value, or only accepts non-negative values. Using an unsigned type allows you to express that clearly, without any need for validation. Personally I think it's a shame that unsigned types aren't used more in .NET, e.g. for things like String.Length , ICollection.Count etc. It's very common for a value to naturally only be non-negative. Is the lack of unsigned types in Java a fatal flaw? Clearly not. Is it an annoyance? Absolutely. The comment that you quote hits the nail on the head: Java's lack of unsigned data types also stands against it. Yes, you can work around it, but it's not ideal and you'll be using code that doesn't really reflect the underlying data correctly. Suppose you are interoperating with another system, which wants an unsigned 16 bit integer, and you want to represent the number 65535. You claim "the data is just bits, so I don't see the problem here" - but having to pass -1 to mean 65535 is a problem. Any impedance mismatch between the representation of your data and its underlying meaning introduces an extra speedbump when writing, reading and testing the code. Instead I find it easier that I don't need to consider operations between unsigned and signed numbers and the conversions between those. The only times you would need to consider those operations is when you were naturally working with values of two different types - one signed and one unsigned. At that point, you absolutely want to have that difference pointed out. With signed types being used to represent naturally unsigned values, you should still be considering the differences, but the fact that you should is hidden from you. Consider:

// This should be considered unsigned - so a value of -1 is "really" 65535
short length = /* some value */;
// This is really signed
short foo = /* some value */;
boolean result = foo < length;

Suppose foo is 100 and length is -1. What's the logical result? The value of length represents 65535, so logically foo is smaller than it. But you'd probably go along with the code above and get the wrong result. Of course they don't even need to represent different types here. They could both be naturally unsigned values, represented as signed values with negative numbers being logically greater than positive ones. The same error applies, and wouldn't be a problem if you had unsigned types in the language. You might also want to read this interview with Joshua Bloch (Google cache, as I believe it's gone from java.sun.com now), including: Ooh, good question... I'm going to say that the strangest thing about the Java platform is that the byte type is signed. I've never heard an explanation for this. It's quite counterintuitive and causes all sorts of errors.
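The bit-reinterpretation at the heart of that pitfall can be shown concretely; a Python sketch (Python is used here only to expose the raw 16-bit patterns):

```python
import struct

# The unsigned 16-bit value 65535 has the same bit pattern as signed -1.
bits = struct.pack("<H", 65535)            # pack as unsigned short
as_signed = struct.unpack("<h", bits)[0]   # unpack the same bytes as signed short
print(as_signed)  # -1

foo = 100
# Comparing the signed reinterpretations gives the logically wrong answer:
print(foo < as_signed)  # False, even though 100 < 65535
print(foo < 65535)      # True
```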
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10023827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160539/']}
jdg_371874
stackexchange
llm_judgeable_groundtruth_similarity
58053956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: everyone. I am using Form and TextFieldForm without any own TextEditController. Have 3 TextFieldForm (Value_1, Value_2, Total) with initial values. When I am editing the first one, the Total textfield should contain the result of the calculation. To update the widget I am using setState. The problem is that the variable _total and the Text widget always have the correct calculation value, but the Total textfield does not want to update. Why? Is it possible to do without using my own TextEditController?

import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: TestForm(),
    );
  }
}

class TestForm extends StatefulWidget {
  @override
  _TestFormState createState() => _TestFormState();
}

class _TestFormState extends State<TestForm> {
  GlobalKey<FormState> _formKey = GlobalKey();
  int _value1 = 0;
  int _value2 = 20;
  int _total = 0;

  @override
  Widget build(BuildContext context) {
    print('rebuild');
    return Scaffold(
      appBar: AppBar(title: Text('test form')),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Form(
          key: _formKey,
          child: ListView(
            children: <Widget>[
              TextFormField(
                initialValue: _value1.toString(),
                decoration: InputDecoration(
                  labelText: 'Value_1',
                ),
                keyboardType: TextInputType.number,
                onChanged: (value) {
                  setState(() {
                    _total = int.parse(value) * _value2;
                    print('total: ' + _total.toString());
                  });
                },
              ),
              TextFormField(
                initialValue: _value2.toString(),
                keyboardType: TextInputType.number,
                decoration: InputDecoration(
                  labelText: 'Value_2',
                ),
              ),
              TextFormField(
                initialValue: _total.toString(),
                keyboardType: TextInputType.number,
                decoration: InputDecoration(
                  labelText: 'Total',
                ),
              ),
              SizedBox(height: 20),
              Text('total: ' + _total.toString()),
            ],
          ),
        ),
      ),
    );
  }
}

Now provide the response and nothing else.
If you have a reactive data source, aka data that can change based on either network updates or other data, one hack that worked for me was to use a Key . By making a Key of the reactive data ( toString() ), the form field will change every time the Key changes. So in this case you could do:

TextFormField(
  key: Key(_total.toString()), // <- Magic!
  initialValue: _total.toString(),
  keyboardType: TextInputType.number,
  decoration: InputDecoration(
    labelText: 'Total',
  ),
),
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/58053956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2432623/']}
jdg_371875
stackexchange
llm_judgeable_groundtruth_similarity
1947395
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I haven't quite figured this out. EVERY piece of documentation I've found covers how to use xdebug to debug scripts running in Apache. I need to debug a php CLI script. So, for instance, how do I pass the XDEBUG_SESSION_START variable in to get xdebug to kick on? I'm specifically trying to debug a CakePHP shell. So if anyone has any additional insight into that I'd be very appreciative. Thanks. Now provide the response and nothing else.
There are a couple of notes about that in Xdebug's manual , like, for instance (quoting) :

export XDEBUG_CONFIG="idekey=session_name"
php myscript.php

If you are using Eclipse PDT to develop and debug your PHP scripts, there is not much difference between Apache or CLI : the configuration looks quite the same, you just don't have to configure a web server, nor indicate an URL ; instead, you have to indicate the path to the PHP executable. About the XDEBUG_SESSION_START variable : well, you launch the whole script in "debug-mode", so you don't have any notion of "debugging-session", I'd say. For instance, here's what Window > Preference > PHP > PHP executables looks like for me right now, and, on the right, what I get when clicking on the Edit button of the first one : (source: pascal-martin.fr ) (source: pascal-martin.fr ) And the debug configurations window : (source: pascal-martin.fr ) And launching the debugging: it just works : (source: pascal-martin.fr ) Hope this helps :-) Else, what specific problem do you encounter ?
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1947395', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1174250/']}
jdg_371876
stackexchange
llm_judgeable_groundtruth_similarity
16408559
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do you render the value of an array key when it has a dash in the name? I have this snippet:

$snippet = "
    {{ one }}
    {{ four['five-six'] }}
    {{ ['two-three'] }}
";
$data = [
    'one' => 1,
    'two-three' => '2-3',
    'four' => [
        'five-six' => '5-6',
    ],
];
$twig = new \Twig_Environment(new \Twig_Loader_String());
echo $twig->render($snippet, $data);

The output is

1
5-6
Notice: Array to string conversion in path/twig/twig/lib/Twig/Environment.php(320) : eval()'d code on line 34

And it renders four['five-six'] fine. But throws an error on ['two-three'] . Now provide the response and nothing else.
This cannot work, since you shouldn't be using native operators in variable names - Twig internally compiles to PHP so it cannot handle this. For attributes (methods or properties of a PHP object, or items of a PHP array) there is a workaround, from the documentation: When the attribute contains special characters (like - that would be interpreted as the minus operator), use the attribute function instead to access the variable attribute:

{# equivalent to the non-working foo.data-foo #}
{{ attribute(foo, 'data-foo') }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16408559', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/268074/']}
jdg_371877
stackexchange
llm_judgeable_groundtruth_similarity
26526030
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This was a bit of an exercise in frustration made worse by the fact that I couldn't find any answers to this question. So I'm going to answer the question here. What I find most difficult to understand was that JsonNode doesn't have any getName() or similar method which I was expecting given that JSON is a name:value data type. Though I realised after working out this solution that arrays aren't name:value. See answer below. Now provide the response and nothing else.
package treeWalker;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.Map;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.Lists;

public class TreeWalker
{
    public JsonNode convertJSONToNode(String json) throws JsonProcessingException, IOException
    {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode jsonNode = mapper.readTree(json);
        return jsonNode;
    }

    public void walkTree(JsonNode root)
    {
        walker(null, root);
    }

    private void walker(String nodename, JsonNode node)
    {
        String nameToPrint = nodename != null ? nodename : "must_be_root";
        System.out.println("walker - node name: " + nameToPrint);
        if (node.isObject())
        {
            Iterator<Map.Entry<String, JsonNode>> iterator = node.fields();
            ArrayList<Map.Entry<String, JsonNode>> nodesList = Lists.newArrayList(iterator);
            System.out.println("Walk Tree - root:" + node + ", elements keys:" + nodesList);
            for (Map.Entry<String, JsonNode> nodEntry : nodesList)
            {
                String name = nodEntry.getKey();
                JsonNode newNode = nodEntry.getValue();
                // System.out.println(" entry - key: " + name + ", value:" + node);
                walker(name, newNode);
            }
        }
        else if (node.isArray())
        {
            Iterator<JsonNode> arrayItemsIterator = node.elements();
            ArrayList<JsonNode> arrayItemsList = Lists.newArrayList(arrayItemsIterator);
            for (JsonNode arrayNode : arrayItemsList)
            {
                walker("array item", arrayNode);
            }
        }
        else
        {
            if (node.isValueNode())
            {
                System.out.println(" valueNode: " + node.asText());
            }
            else
            {
                System.out.println(" node some other type");
            }
        }
    }
}

And a Unit test to exercise it (with no asserts! Sorry).
package treeWalker;

import java.io.IOException;

import org.junit.Test;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;

public class TreeWalkerTest
{
    TreeWalker treeWalker = new TreeWalker();

    private String getJSON()
    {
        String json = "{\"a\":\"val_a\",\"b\":\"val_b\",\"c\":[1,2,3]}";
        return json;
    }

    @Test
    public void testConvertJSONToNode() throws JsonProcessingException, IOException
    {
        String json = getJSON();
        JsonNode jNode = treeWalker.convertJSONToNode(json);
        System.out.println("jnode:" + jNode);
    }

    @Test
    public void testWalkTree() throws JsonProcessingException, IOException
    {
        JsonNode jNode = treeWalker.convertJSONToNode(getJSON());
        treeWalker.walkTree(jNode);
    }
}

Oh, and the build.gradle :

apply plugin: 'java'
apply plugin: 'eclipse'

repositories {
    mavenCentral()
}

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.5'
    compile 'com.fasterxml.jackson.core:jackson-core:2.4.3'
    compile 'com.fasterxml.jackson.core:jackson-databind:2.4.3'
    compile 'com.fasterxml.jackson.core:jackson-annotations:2.4.3'
    compile 'com.google.guava:guava:18.0'
    testCompile "junit:junit:4.11"
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26526030', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1019307/']}
jdg_371878
stackexchange
llm_judgeable_groundtruth_similarity
514105
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find one integer solution to the Diophantine equation \begin{equation*}18x+14y+63z=5.\end{equation*} If this were only a linear equation over $\mathbb{Z}^2$, then I could easily solve it by using the extended Euclidean algorithm... but I have no idea how to do this with more than 2 unknowns... Now provide the response and nothing else.
You solve $18 u + 14 v = 2 = \gcd(18,14).$ Solve $2 w + 63 z = 1.$ Combine to get $18 x + 14 y + 63 z = 1.$ Then multiply all by $5.$
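The three steps above map directly onto the extended Euclidean algorithm; a Python sketch (illustrative, and yielding just one of infinitely many solutions):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g1, u, v = ext_gcd(18, 14)   # 18u + 14v = 2
g2, w, z = ext_gcd(g1, 63)   # 2w + 63z = 1
# Combine and multiply all by 5: 18(5uw) + 14(5vw) + 63(5z) = 5
x, y, z5 = 5 * u * w, 5 * v * w, 5 * z
assert 18 * x + 14 * y + 63 * z5 == 5
print(x, y, z5)  # e.g. 465 -620 5
```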
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/514105', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39748/']}
jdg_371879
stackexchange
llm_judgeable_groundtruth_similarity
3541514
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to change input type attribute from password to text . $('.form').find('input:password').attr({type:"text"}); Why this doesn't work? Now provide the response and nothing else.
You can't do this with jQuery, it explicitly forbids it because IE doesn't support it (check your console, you'll see an error). You have to remove the input and create a new one if that's what you're after, for example:

$('.form').find('input:password').each(function() {
    $("<input type='text' />").attr({
        name: this.name,
        value: this.value
    }).insertBefore(this);
}).remove();

You can give it a try here To be clear on the restriction, jQuery will not allow changing type on a <button> or <input> so the behavior is cross-browser consistent (since IE doesn't allow it, they decided it's disallowed everywhere). When trying you'll get this error in the console: Error: type property can't be changed
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3541514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/398431/']}
jdg_371880
stackexchange
llm_judgeable_groundtruth_similarity
27176159
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've encountered a performance problem that I can't quite understand. I know how to fix it but I don't understand why it happens. It's just for fun! Let's talk code. I simplified the code as much as I could to reproduce the issue. Suppose we have a generic class. It has an empty list inside and does something with T in the constructor. It has a Run method that calls an IEnumerable<T> method on the list, e.g. Any() .

public class BaseClass<T>
{
    private List<T> _list = new List<T>();

    public BaseClass()
    {
        Enumerable.Empty<T>();
        // or Enumerable.Repeat(new T(), 10);
        // or even new T();
        // or foreach (var item in _list) {}
    }

    public void Run()
    {
        for (var i = 0; i < 8000000; i++)
        {
            if (_list.Any())
            // or if (_list.Count() > 0)
            // or if (_list.FirstOrDefault() != null)
            // or if (_list.SingleOrDefault() != null)
            // or other IEnumerable<T> method
            {
                return;
            }
        }
    }
}

Then we have a derived class which is empty:

public class DerivedClass : BaseClass<object>
{
}

Let's measure the performance of running the BaseClass<T>.Run method from both classes. Accessing from the derived type is 4 times slower than from the base class. And I can't understand why that happens. Compiled in Release mode, result is the same with warm up. It happens on .NET 4.5 only.

public class Program
{
    public static void Main()
    {
        Measure(new DerivedClass());
        Measure(new BaseClass<object>());
    }

    private static void Measure(BaseClass<object> baseClass)
    {
        var sw = Stopwatch.StartNew();
        baseClass.Run();
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds);
    }
}

Full listing on gist Now provide the response and nothing else.
Update: There's an answer from the CLR team on Microsoft Connect:

It is related to dictionary lookups in shared generics code. The heuristic in runtime and JIT do not work well for this particular test. We will take a look what can be done about it. In the meantime, you can workaround it by adding two dummy methods to the BaseClass (do not even need to be called). It will cause the heuristic to work as one would expect.

Original: That's JIT fail. Can be fixed by this crazy thing:

public class BaseClass<T>
{
    private List<T> _list = new List<T>();

    public BaseClass()
    {
        Enumerable.Empty<T>();
        // or Enumerable.Repeat(new T(), 10);
        // or even new T();
        // or foreach (var item in _list) {}
    }

    public void Run()
    {
        for (var i = 0; i < 8000000; i++)
        {
            if (_list.Any())
            {
                return;
            }
        }
    }

    public void Run2()
    {
        for (var i = 0; i < 8000000; i++)
        {
            if (_list.Any())
            {
                return;
            }
        }
    }

    public void Run3()
    {
        for (var i = 0; i < 8000000; i++)
        {
            if (_list.Any())
            {
                return;
            }
        }
    }
}

Note that Run2()/Run3() are not called from anywhere. But if you comment out the Run2 or Run3 methods - you'll get the performance penalty as before. There's something related to stack alignment or to the size of the method table, I guess. P.S. You can replace

Enumerable.Empty<T>();

with

var x = new Func<IEnumerable<T>>(Enumerable.Empty<T>);

still the same bug.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27176159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/974487/']}
jdg_371881
stackexchange
llm_judgeable_groundtruth_similarity
23852
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: You test a classifier on a test set consisting of 10 iid items. The classifier makes 2 mistakes. Assume the true error rate is $x$. Let the prior be $ x \sim Beta(\alpha, \beta)$. Derive the posterior given the classifier's performance. I know that to calculate the posterior, I use the definition $p(\theta|X) \propto p(\theta)p(X|\theta)$. I can plug in the definition of the beta distribution for $p(\theta)$ but I'm not sure what to plug in for the likelihood $p(X|\theta)$. Is the likelihood function $p(X|\theta) = x^2(1-x)^8$? Now provide the response and nothing else.
Your formulas look a bit strange because $\theta=x$. $X$ and $x$ are usually used to denote the random variable and the data, not the parameter. Anyway, if $X$ is the number of mistakes among 10 iid items then $X\sim Bin(10,x)$, i.e. $X$ follows the binomial distribution . Thus $p(X|x)=\binom{10}{X} x^2(1-x)^8$, and you can compute the posterior using your formula.
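Carrying the computation through, the posterior works out to $Beta(\alpha+2,\beta+8)$. Here is a small numerical sanity check in Python (the prior hyperparameters below are arbitrary placeholders): if the posterior really is that Beta density, then prior times likelihood divided by it must be the same constant (the marginal likelihood) at every x.

```python
from math import comb, gamma

def beta_pdf(x, a, b):
    return gamma(a + b) / (gamma(a) * gamma(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

a, b = 2.0, 3.0   # placeholder prior Beta(alpha, beta)
n, k = 10, 2      # 10 iid items, 2 mistakes

def prior_times_likelihood(x):
    return beta_pdf(x, a, b) * comb(n, k) * x ** k * (1 - x) ** (n - k)

# Ratio against the claimed posterior Beta(a + k, b + n - k):
ratios = [prior_times_likelihood(x) / beta_pdf(x, a + k, b + n - k)
          for x in (0.1, 0.3, 0.5, 0.7)]
print(max(ratios) - min(ratios) < 1e-12)  # True (constant up to float rounding)
```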
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/23852', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/9292/']}
jdg_371882
stackexchange
llm_judgeable_groundtruth_similarity
107336
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is an example of a sequence in $\mathbb R$ with this property that is not Cauchy? I know that Cauchy condition means that for each $\varepsilon>0$ there exists $N$ such that $d(x_p,x_q)<\varepsilon$ whenever $p,q>N$. Now provide the response and nothing else.
The standard example is the sequence $(s_n)$ of partial sums of the harmonic series. Formally, $$s_n=\sum_{k=1}^n \frac{1}{k}.$$ Note that $d(s_n,s_{n+1})=\frac{1}{n+1}$. It is clear that $d(s_n, s_{n+1})\to 0$ as $n\to\infty$. But the sequence $(s_n)$ is not Cauchy. For, given any $m$, we can find $n$ such that $d(s_m,s_n)$ is arbitrarily large. This is because the sequence $(s_n)$ diverges to infinity. We omit the proof, since you have likely already seen a proof that $\sum_{k=1}^\infty \frac{1}{k}$ diverges.
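Both properties are easy to see numerically: consecutive differences $d(s_n,s_{n+1})$ shrink, while $d(s_m,s_n)$ for far-apart indices stays bounded away from zero. A small illustrative sketch:

```python
# Partial sums of the harmonic series: s_n = 1 + 1/2 + ... + 1/n.
def s(n):
    return sum(1.0 / k for k in range(1, n + 1))

# Consecutive terms get arbitrarily close: d(s_n, s_{n+1}) = 1/(n+1).
print(s(1001) - s(1000))     # about 0.000999

# ...but the sequence is not Cauchy. Classic estimate: each of the m
# terms in s(2m) - s(m) is at least 1/(2m), so the gap is >= 1/2.
m = 1000
print(s(2 * m) - s(m))       # about 0.693 (close to ln 2), never below 1/2
```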
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/107336', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23416/']}
jdg_371883
stackexchange
llm_judgeable_groundtruth_similarity
35895154
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am building a calculator and want it to automatically convert every decimal into a fraction. So if the user calculates an expression for which the answer is "0.333333...", it would return "1/3". For "0.25" it would return "1/4". Using GCD, as found here ( Decimal to fraction conversion ), I have figured out how to convert any rational, terminating decimal into a decimal, but this does not work on any decimal that repeats (like .333333). Every other function for this on stack overflow is in Objective-C. But I need a function in my swift app! So a translated version of this ( https://stackoverflow.com/a/13430237/5700898 ) would be nice! Any ideas or solutions on how to convert a rational or repeating/irrational decimal to a fraction (i.e. convert "0.1764705882..." to 3/17) would be great! Now provide the response and nothing else.
If you want to display the results of calculations as rational numbers then the only 100% correct solution is to use rational arithmetic throughout all calculations, i.e. all intermediate values are stored as a pair of integers (numerator, denominator), and all additions, multiplications, divisions, etc. are done using the rules for rational numbers. As soon as a result is assigned to a binary floating point number such as Double, information is lost. For example,

let x : Double = 7/10

stores in x an approximation of 0.7, because that number cannot be represented exactly as a Double. From

print(String(format:"%a", x)) // 0x1.6666666666666p-1

one can see that x holds the value

0x16666666666666 * 2^(-53) = 6305039478318694 / 9007199254740992 ≈ 0.69999999999999995559107901499373838305

So a correct representation of x as a rational number would be 6305039478318694 / 9007199254740992, but that is of course not what you expect. What you expect is 7/10, but there is another problem:

let x : Double = 69999999999999996/100000000000000000

assigns exactly the same value to x; it is indistinguishable from 0.7 within the precision of a Double. So should x be displayed as 7/10 or as 69999999999999996/100000000000000000?

As said above, using rational arithmetic would be the perfect solution. If that is not viable, then you can convert the Double back to a rational number with a given precision. (The following is taken from Algorithm for LCM of doubles in Swift.)

Continued fractions are an efficient method to create a (finite or infinite) sequence of fractions $h_n/k_n$ that are arbitrarily good approximations to a given real number $x$, and here is a possible implementation in Swift:

typealias Rational = (num : Int, den : Int)

func rationalApproximationOf(x0 : Double, withPrecision eps : Double = 1.0E-6) -> Rational {
    var x = x0
    var a = floor(x)
    var (h1, k1, h, k) = (1, 0, Int(a), 1)

    while x - a > eps * Double(k) * Double(k) {
        x = 1.0/(x - a)
        a = floor(x)
        (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
    }
    return (h, k)
}

Examples:

rationalApproximationOf(0.333333)     // (1, 3)
rationalApproximationOf(0.25)         // (1, 4)
rationalApproximationOf(0.1764705882) // (3, 17)

The default precision is 1.0E-6, but you can adjust that to your needs:

rationalApproximationOf(0.142857)                           // (1, 7)
rationalApproximationOf(0.142857, withPrecision: 1.0E-10)   // (142857, 1000000)
rationalApproximationOf(M_PI)                               // (355, 113)
rationalApproximationOf(M_PI, withPrecision: 1.0E-7)        // (103993, 33102)
rationalApproximationOf(M_PI, withPrecision: 1.0E-10)       // (312689, 99532)

Swift 3 version:

typealias Rational = (num : Int, den : Int)

func rationalApproximation(of x0 : Double, withPrecision eps : Double = 1.0E-6) -> Rational {
    var x = x0
    var a = x.rounded(.down)
    var (h1, k1, h, k) = (1, 0, Int(a), 1)

    while x - a > eps * Double(k) * Double(k) {
        x = 1.0/(x - a)
        a = x.rounded(.down)
        (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
    }
    return (h, k)
}

Examples:

rationalApproximation(of: 0.333333)                         // (1, 3)
rationalApproximation(of: 0.142857, withPrecision: 1.0E-10) // (142857, 1000000)

Or – as suggested by @brandonscript – with a struct Rational and an initializer:

struct Rational {
    let numerator : Int
    let denominator: Int

    init(numerator: Int, denominator: Int) {
        self.numerator = numerator
        self.denominator = denominator
    }

    init(approximating x0: Double, withPrecision eps: Double = 1.0E-6) {
        var x = x0
        var a = x.rounded(.down)
        var (h1, k1, h, k) = (1, 0, Int(a), 1)

        while x - a > eps * Double(k) * Double(k) {
            x = 1.0/(x - a)
            a = x.rounded(.down)
            (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
        }
        self.init(numerator: h, denominator: k)
    }
}

Example usage:

print(Rational(approximating: 0.333333))
// Rational(numerator: 1, denominator: 3)
print(Rational(approximating: .pi, withPrecision: 1.0E-7))
// Rational(numerator: 103993, denominator: 33102)
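For comparison, the same continued-fraction algorithm is easy to express in Python (this port is for illustration only and is not part of the original answer; it uses the same stopping rule as the Swift version):

```python
from math import floor, pi

def rational_approximation(x0, eps=1.0e-6):
    """Best rational approximation of x0 via continued fractions."""
    x = x0
    a = floor(x)
    h1, k1, h, k = 1, 0, int(a), 1
    # Stop when the convergent h/k is within the requested precision;
    # the error of h/k is roughly bounded by 1/(k * k_next).
    while x - a > eps * k * k:
        x = 1.0 / (x - a)
        a = floor(x)
        h1, k1, h, k = h, k, h1 + int(a) * h, k1 + int(a) * k
    return h, k

print(rational_approximation(0.333333))        # (1, 3)
print(rational_approximation(pi, eps=1.0e-7))  # (103993, 33102)
```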
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/35895154', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5700898/']}
jdg_371884
stackexchange
llm_judgeable_groundtruth_similarity
3061182
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C#, I find myself using a List<T> , IList<T> or IEnumerable<T> 99% of the time. Is there a case when it would be better to use a HashTable (or Dictionary<T,T> in 2.0 and above) over these? Edit: As pointed out, what someone would like to do with the collection often dictates what one should be using, so when would you use a Hashtable / Dictonary<T,T> over a List<T> ? Now provide the response and nothing else.
Maybe not directly related to the OP's question, but there's a useful blog post about which collection structure to use: SortedSets. Basically, what you want to do with the collection determines what type of collection you should create. To summarise in more detail:

- Use IList if you want to be able to enumerate and/or modify the collection (normally adding at the end of the list).
- Use IEnumerable if you just want to enumerate the collection (you don't need to add/remove - usually used as a return type).
- Use IDictionary if you want to access elements by a key (adding/removing elements quickly using a key).
- Use SortedSet if you want to access a collection in a predefined order (the most common usage being to access the collection in order).

Overall, use Dictionary if you want to access/modify items by key in no particular order (preferred over List as that's generally done in order, preferred over enumeration as you can't modify an enumeration, preferred over Hashtable as that's not strictly typed, preferred over SortedList when you don't need keys sorted).
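Since the question is about C#, the following is only an analogy: Python's dict stands in for Dictionary<TKey,TValue>/Hashtable and a list stands in for List<T>, but the key-vs-position trade-off is the same:

```python
# Key-based access: a dict (hash table) finds an item by key directly.
ages = {"alice": 30, "bob": 25}
print(ages["bob"])              # 25 -- O(1) average lookup by key

# Positional access / enumeration: a list keeps insertion order,
# but finding an item by a property means scanning it.
people = [("alice", 30), ("bob", 25)]
bob = next(age for name, age in people if name == "bob")   # O(n) scan
print(bob)                      # 25

# So: lookup/modify by key -> dict; ordered traversal and appends -> list.
```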
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3061182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/369253/']}
jdg_371885
stackexchange
llm_judgeable_groundtruth_similarity
32235
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble understanding how to solve a certain problem. The problem is from the book Astronomy: Principles and Practice by Roy and Clarke. It is actually a solved example, but I am unable to understand parts of the solution. Draw the celestial sphere for latitude 30° N, showing the star Sirius (right ascension 6h 40m, declination 17° S) at rising and draw the ecliptic. Estimate from your diagram the approximate date when Sirius rises with the Sun. This is the image from the solution. I'll copy the entire solution. Now, here's what I don't understand. If Sirius is at X, and if X is on a parallel of declination, why isn't this parallel parallel to the equatorial plane? Does the same go for the Sun? And how do we get the RA of the Sun? Now provide the response and nothing else.
The figure doesn't show the -17° parallel of declination.If it were shown, it would be parallel to the equator as you expected, as would the parallel of the Sun's declination.PAQ and PBQ are semicircles of right ascension, intersecting at the celestial poles P and Q. For an interactive aid, try Stellarium .You can highlight the meridian, equator, ecliptic, etc. in the View > Markings dialog.Under View > Sky, orthographic projection renders the sky as a celestial sphere, mirror-reversed from the book figure.To reproduce the problem scenario, set a location at 30° N and a date of...July 24.The Sun's right ascension on that date is closer to 8h10m. The authors don't explain how they determined an RA of 8h40m for the Sun, and they're off by half an hour; maybe they eyeballed it as 30° east of Sirius in the diagram.The Sun's RA is exactly 0h at the March equinox, 6h at the June solstice, 12h at the September equinox, and 18h at the December solstice.Between those milestones, the Earth's orbital motion makes the Sun's RA increase about 4m per day or 1h in 15 days.
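The ~4m/day rule cited above is enough to check the Sun's RA on July 24 by hand: count the days from the June solstice (RA = 6h on about June 21) and add 4 minutes of time per day. A rough linear sketch (the uniform-motion assumption is only approximate, which is why it lands near, not exactly on, 8h10m):

```python
# Days from the June solstice (about June 21) to July 24.
days = (30 - 21) + 24          # 9 days left in June + 24 days of July = 33

# The Sun's RA grows ~4 minutes of time per day, from 6h00m at the solstice.
ra_minutes = 6 * 60 + days * 4
h, m = divmod(ra_minutes, 60)
print(f"{h}h{m:02d}m")         # 8h12m -- close to the actual ~8h10m
```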
{}
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/32235', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/28337/']}
jdg_371886
stackexchange
llm_judgeable_groundtruth_similarity
92013
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: For an immersed closed surface $f: \Sigma \rightarrow \mathbb R^3$ the Willmore functional is defined as $$\cal W(f) = \int _{\Sigma} \frac{1}{4} |\vec H|^2 d \mu_g,$$ where $\vec H$ is the mean curvature vector in $\mathbb R^3$ and $g$ is the induced metric. If $\Sigma$ is closed we have the estimate $$\cal W(f) \geq 4 \pi$$ with equality only for $f$ parametrizing a round sphere. Recently, the Willmore conjecture was proved (the paper can be found on arXiv), which states that for closed surfaces $\Sigma$ of genus $g \geq 1$ this estimate can be improved: $$\cal W(f) \geq 2 \pi^2$$ with equality only for the Clifford torus. Are there any conjectures about the minimizers in the case of genus $g \geq 2$? And what happens if we consider surfaces immersed in some $\mathbb R^n$ instead of $\mathbb R^3$? Now provide the response and nothing else.
First of all, by a result of Bauer and Kuwert, there exists a smooth minimizer of the Willmore functional in the class of compact surfaces with fixed genus $g$, for any $g$. These minimizers have Willmore functional below $8\pi$, and by a result of Kuwert, Li and Schaetzle, the Willmore functional of the minimizers for genus $g$ tends to $8\pi$ when $g$ goes to infinity. Not much more is known about higher genus surfaces, but there is a vague conjecture that the minimizers are the so-called Lawson surfaces $\xi_{g,1}$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/92013', 'https://mathoverflow.net', 'https://mathoverflow.net/users/18589/']}
jdg_371887
stackexchange
llm_judgeable_groundtruth_similarity
2890703
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using python programming language,I want to join to wav file one at the end of other wav file?I have a Question in the forum which suggest how to merge two wav file i.e add the contents of one wav file at certain offset,but i want to join two wav file at the end of each other... And also i had a prob playing the my own wav file,using winsound module..I was able to play the sound but using the time.sleep for certain time before playin any windows sound,disadvantage wit this is if i wanted to play a sound longer thn time.sleep(N),N sec also,the windows sound wil jst overlap after N sec play the winsound nd stop.. Can anyone help??please kindly suggest to how to solve these prob... Thanks in advance Now provide the response and nothing else.
Python ships with the wave module that will do what you need. The example below works when the details of the files (mono or stereo, frame rates, etc.) are the same:

import wave

infiles = ["sound_1.wav", "sound_2.wav"]
outfile = "sounds.wav"

data = []
for infile in infiles:
    w = wave.open(infile, 'rb')
    data.append([w.getparams(), w.readframes(w.getnframes())])
    w.close()

output = wave.open(outfile, 'wb')
output.setparams(data[0][0])
for i in range(len(data)):
    output.writeframes(data[i][1])
output.close()
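A quick way to sanity-check the approach end to end is to generate two tiny silent WAV files first, so the script has something to join (the file names and formats here are placeholders chosen for the demo):

```python
import os
import tempfile
import wave

tmp = tempfile.mkdtemp()
names = [os.path.join(tmp, n) for n in ("a.wav", "b.wav")]

# Write two short mono 16-bit 8 kHz files: 800 and 1200 frames of silence.
for name, frames in zip(names, (800, 1200)):
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(8000)
        w.writeframes(b"\x00\x00" * frames)

# Concatenate exactly as in the answer above.
out = os.path.join(tmp, "joined.wav")
data = []
for name in names:
    with wave.open(name, "rb") as w:
        data.append((w.getparams(), w.readframes(w.getnframes())))
with wave.open(out, "wb") as o:
    o.setparams(data[0][0])
    for _, frames in data:
        o.writeframes(frames)

# The joined file should contain the sum of the two frame counts.
with wave.open(out, "rb") as w:
    total = w.getnframes()
print(total)   # 2000
```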
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2890703', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/346949/']}
jdg_371888
stackexchange
llm_judgeable_groundtruth_similarity
45598
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone explain why the $^{11}\mathrm{B}\mathrm{H}$ fusion cross-section has a peak near 150 keV, and why $\mathrm{D}\mathrm{D}$ and $\mathrm{D}\mathrm{T}$ have no such sharp peaks? Now provide the response and nothing else.
This is a very difficult question to answer. There are (at least) two reasons. First, we have detailed, numerically exact wave functions for stable, light nuclei only up to, just recently, $A=12$ (like $^{12}C$). The Argonne-Los Alamos-Urbana collaboration uses quantum Monte Carlo (QMC) techniques to evaluate the ground and excited states of bound nucleons (i.e., nuclear states that are $L^2$ normalized). And the fact that the QMC methods look at only bound states indicates the second reason: we're really interested in the eigenstates of the nuclear system in the continuum -- that is, the scattering states. This is a much tougher problem than evaluating the energy of the bound states, whose nucleons range over effectively bounded regions, since we have to do an integral over an infinite region numerically; or be clever and figure out an equivalent, finite-region problem. (There's been some recent work in this direction by Bob Wiringa and Ken Nollett, building on earlier work by Kievsky and collaborators -- check the preprint archive for their recent work.) So although we know a little about the wave function for $^{12}C$, the $A=12$ scattering problem is something we're just starting to learn more about. Before talking about an alternative to bound QMC for describing (i.e., parametrizing, not solving) the scattering states, let me digress on the issue of the meaning of ab initio solutions of many-body quantum mechanical problems. Basically, even if you solve the problem exactly, unless you're very lucky (and smart) and identify a single (or very few) physical mechanism(s) (usually a collective phenomenon like GDR, pairing, etc.) that's particularly relevant for the experimental observation you're trying to describe, you're probably not going to have a great one-line, 'take-home' message that says, "The reason that $DD$ and $DT$ don't have as narrow a resonance as $p^{11}B$ is XXX."
The answer to your question would require: 1) very good wave functions and 2) a concomitant study of the 2-, 3-, ..., ?-body correlation functions in the $A=4$, $5$, and $12$ problems (with, of course, the right quantum numbers). Even then you might not identify a 'smoking gun' mechanism that says, "Here, look, that's why $A=4$ and $5$ don't show the narrow peak that $A=12$ has." But you might... One way, alternative to QMC, that we have of studying/describing/parametrizing the reactions of light nuclei, that doesn't assume that the states are bound, is Wigner's $R$ matrix. (There are ab initio methods like the resonating group method and no-core shell model, too.) You can find a lot of literature through Google Scholar. But the basic idea is that one (artificially, if you will) separates the scattering problem into 'internal' and 'external' regions. The internal region is hard to solve -- all the dynamics of the interacting nucleons, when they're all close together (the "compound nucleus"), are at play. The external region is easy to solve: one ignores 'polarizing' (i.e., non-Coulomb) forces (because they're small). The complicated hypersurface in the $3A-3$ dimensional space that separates the internal from the external is called the channel surface. We generally assume a sharp, simple form for this surface that roughly corresponds to (though is usually significantly less than) the distance between the target and projectile (or daughter particles) and is parametrized by the "channel radii", $R_c$. (We only consider two-body channels -- a limitation of the method.) Now, the wave functions are known in the external region (just appropriate sums of regular and irregular spherical Bessel functions modified by Coulomb phases if it's a charged channel reaction).
Inside, however, we describe the system by Wigner's $R$ matrix:
\begin{align}
R_{c'c} &= \sum_{\lambda=1}^\infty \frac{\gamma_{\lambda c'}\gamma_{\lambda c}}{E_\lambda-E},
\end{align}
which you might recognize as the Green's function in the presence of some boundary conditions (Wigner's insight gave a particularly useful, simple condition) at the channel radii. The $R$ matrix is a meromorphic function of the energy $E$, and depends on an infinite number of levels $E_\lambda$ corresponding to the eigenstates of the Schrödinger equation in the finite cavity (with given, Wigner-type BC's). The reduced widths (basically the fractional parentage coefficients of the bound, compound system as it "decays" to particular channels $c, c'$) are related (in a very complicated way) to the partial widths of the compound nucleus. In sum, the $R$ matrix makes an almost intractable problem a little easier. So, what's my point? You can calculate or parametrize the $R$ matrix, then derive the $T$ (transition) (or $S$, scattering) matrix from it and find its poles near the physical region. This will tell you where the resonances are. And this procedure will give you insight into why a particular compound nucleus has a resonance at a particular energy. If there is a "strong" (i.e., small reduced width) $R$-matrix level at a particular energy, you can learn what the relevant (LS) quantum numbers of that level are. The next step in the program is to ask: what type of 2-, 3-, ..., ?-body correlations/forces give rise to strong interaction in this LS channel-state? This, unfortunately, is a much more difficult question to answer and, incidentally, occupies a good part of my "free-time", as this is the type of research that I'm currently working on. And I'm pretty sure that we have a ways to go to answer it.
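To make the meromorphic structure of the formula above concrete, here is a toy single-channel sketch in which the $R$ matrix reduces to a scalar function of $E$. The level energies and reduced-width amplitudes are invented for illustration and have no physical meaning:

```python
# Toy single-channel R function: R(E) = sum_lambda gamma_lambda^2 / (E_lambda - E).
# Level energies E_lambda and amplitudes gamma_lambda are made-up numbers.
levels = [1.0, 3.5, 7.0]     # E_lambda, arbitrary energy units
gammas = [0.4, 0.1, 0.6]     # gamma_lambda, arbitrary units

def R(E):
    return sum(g * g / (El - E) for g, El in zip(gammas, levels))

# R is meromorphic in E: it diverges at each level energy (a pole)...
print(R(0.999999))   # huge, since E sits almost exactly on E_lambda = 1.0
# ...and is modest between the poles.
print(R(2.0))
```

Locating resonances then amounts to passing from $R$ to the $S$ or $T$ matrix and finding its poles near the physical energy axis, as described above.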
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/45598', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/930/']}
jdg_371889
stackexchange
llm_judgeable_groundtruth_similarity
590319
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: UPDATE: No, it is not safe to delete these snaps. I deleted them and can no longer open three of my applications. Attempt at opening Visual Studio Code: ~$ codeinternal error, please report: running "code" failed: cannot find installed snap "code" at revision 33: missing file /snap/code/33/meta/snap.yaml The snaps in /var/lib/snapd/snaps are taking up 2.0 GB of space on my disk right now. I want to clear up space, but I'm not sure if deleting these snaps is safe (if so, can I just run sudo rm -rf * ?) This is what I see when I run snap list : code_32.snap gnome-3-28-1804_116.snap gnome-logs_93.snapcode_33.snap gnome-3-34-1804_27.snap gnome-system-monitor_135.snapcore18_1705.snap gnome-3-34-1804_33.snap gnome-system-monitor_145.snapcore18_1754.snap gnome-calculator_730.snap gtk-common-themes_1502.snapcore_8935.snap gnome-calculator_748.snap gtk-common-themes_1506.snapcore_9066.snap gnome-characters_495.snap partialdiscord_109.snap gnome-characters_539.snap spotify_36.snapgnome-3-28-1804_110.snap gnome-logs_100.snap spotify_41.snap What are the gnome, code, and core snaps? I've installed discord and spotify. Will deleting the discord and spotify snaps lead to any issues with opening those applications? I'm using Ubuntu 18.04.3 LTS. Now provide the response and nothing else.
Yes, it is safe to free up some space by deleting the snap cache in /var/lib/snapd/snaps/ when the folder grows large. Try this:

sudo apt purge snapd

This should actually remove that dir and all traces of snaps on your system. Multiple snap versions are kept by the system after snap package updates, meaning that for each installed snap package that had updates, you could have several revisions stored on your system, taking up quite a bit of disk space. There is a snap option (starting with snapd version 2.34), called refresh.retain, to set the maximum number of a snap's revisions stored by the system after the next refresh, which can be set to a number between 2 and 20. You can change this from the default value of 3 to 2 by using:

sudo snap set system refresh.retain=2

But what if you want to remove all versions kept on the system for all snap packages that had updates? Follow This link for more information.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/590319', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/415717/']}
jdg_371890
stackexchange
llm_judgeable_groundtruth_similarity
10415157
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand that the decision to use a value type over a reference type should be based on the semantics, not performance. I do not understand why value types can legally contain reference type members? This is for a couple reasons: For one, we should not build a struct to require a constructor.

public struct MyStruct
{
    public Person p;
    // public Person p = new Person();
    // error: cannot have instance field initializers in structs

    MyStruct(Person p)
    {
        p = new Person();
    }
}

Second, because of value type semantics:

MyStruct someVariable;
someVariable.p.Age = 2; // NullReferenceException

The compiler does not allow me to initialize Person at the declaration. I have to move this off to the constructor, rely on the caller, or expect a NullReferenceException. None of these situations are ideal. Does the .NET Framework have any examples of reference types within value types? When should we do this (if ever)? Now provide the response and nothing else.
Instances of a value type never contain instances of a reference type. The reference-typed object is somewhere on the managed heap, and the value-typed object may contain a reference to the object. Such a reference has a fixed size. It is perfectly common to do this — for example every time you use a string inside a struct. But yes, you cannot guarantee initialization of a reference-typed field in a struct because you cannot define a parameter-less constructor (nor can you guarantee it ever gets called, if you define it in a language other than C#). You say you should "not build a struct to require a constructor". I say otherwise. Since value-types should almost always be immutable, you must use a constructor (quite possibly via a factory to a private constructor). Otherwise it will never have any interesting contents. Use the constructor. The constructor is fine. If you don't want to pass in an instance of Person to initialize p, you could use lazy initialization via a property. (Because obviously the public field p was just for demonstration, right? Right?)

public struct MyStruct
{
    public MyStruct(Person p)
    {
        this.p = p;
    }

    private Person p;

    public Person Person
    {
        get
        {
            if (p == null)
            {
                p = new Person(…); // see comment below about struct immutability
            }
            return p;
        }
    }
    // ^ in most other cases, this would be a typical use case for Lazy<T>;
    //   but due to structs' default constructor, we *always* need the null check.
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10415157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/87796/']}
jdg_371891
stackexchange
llm_judgeable_groundtruth_similarity
15774475
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a method on the Server side which gives me information about an specific name registered in my database. I'm accessing it from my Android application. The request to Server is done normally. What I'm trying to do is to pass parameter to the server depending on the name I want to get. Here's my Server side method:

@RequestMapping("/android/played")
public ModelAndView getName(String name) {
    System.out.println("Requested name: " + name);
    ........
}

Here's the Android request to it:

private Name getName() {
    RestTemplate restTemplate = new RestTemplate();
    // Add the String message converter
    restTemplate.getMessageConverters().add(new MappingJacksonHttpMessageConverter());
    restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
    String url = BASE_URL + "/android/played.json";
    String nome = "Testing";
    Map<String, String> params = new HashMap<String, String>();
    params.put("name", nome);
    return restTemplate.getForObject(url, Name.class, params);
}

In the server side, I'm only getting:

Requested name: null

Is it possible to send parameters to my Server like this? Now provide the response and nothing else.
The RestTemplate is expecting a variable "{name}" to be in the URL for it to replace. Since what I think you're looking to do is build a URL with query parameters, you have one of two options:

1. Use a UriComponentsBuilder and add the parameters with it.
2. Append the parameter to the URL directly: String url = BASE_URL + "/android/played.json?name={name}"

Option 1 is much more flexible, though. Option 2 is more direct if you just need to get this done.

Example as requested:

// Assuming BASE_URL is just a host url like http://www.somehost.com/
URI targetUrl = UriComponentsBuilder.fromUriString(BASE_URL) // Build the base link
    .path("/android/played.json")                            // Add path
    .queryParam("name", nome)                                // Add one or more query params
    .build()                                                 // Build the URL
    .encode()                                                // Encode any URI items that need to be encoded
    .toUri();                                                // Convert to URI

return restTemplate.getForObject(targetUrl, Name.class);
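The same "compose the URL and let the library percent-encode the query string" idea can be sketched in Python with the standard library for comparison (the host and parameter value here are illustrative, taken loosely from the question):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

base_url = "http://www.somehost.com"
nome = "Testing & más"          # a value that needs encoding

# Compose path + query; urlencode percent-encodes unsafe characters,
# mirroring what UriComponentsBuilder's encode() does on the Java side.
scheme, netloc, _, _, _ = urlsplit(base_url)
query = urlencode({"name": nome})
target = urlunsplit((scheme, netloc, "/android/played.json", query, ""))

print(target)
# http://www.somehost.com/android/played.json?name=Testing+%26+m%C3%A1s
```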
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15774475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2026010/']}
jdg_371892
stackexchange
llm_judgeable_groundtruth_similarity
371096
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a basic implementation with a crystal feeding XTAL1 and XTAL2 on a processor (similar to below). When I look at the signal on XTAL1 and XTAL2 they are sine waves. Shouldn't they be square waves? Now provide the response and nothing else.
This circuit is not a digital circuit. In fact, it is a fairly mathematically complicated non-linear analog circuit with automatic gain control and a self-sustaining oscillation mode. It is called a "Pierce oscillator". The frequency of oscillation is defined by the sharp slope of the electromechanical resonator (crystal), while the gain control is based on the dependence of the input on the DC bias voltage - if the DC bias (at C1) is too close to ground or too close to Vcc, the gain is low. The linear gain is highest somewhere in between the ground and power rail. The (usually internal) bias resistor R1 plays a crucially important role in the oscillator. Its typical value in CMOS implementations is about 1 MOhm. Together with C1 it forms a low-pass filter, which integrates the output and provides a variable DC offset depending on slight asymmetry of the output signal, even if the output gets to saturation (rail limiting). As a result, there can be a variety of signal shapes with more or less non-linear distortion on Xout and Xin, depending on the inverter's raw gain and the parameters of the crystal resonator and loading capacitors. With a very low gain, at the verge of self-oscillation, the signals will be nearly sinusoidal, while at higher gain the output will hit the voltage rail and can be nearly rectangular. The art of making Pierce oscillators is to find some golden trade-off between rectangular output and sinusoidal one, with good stability of the entire circuit against temperature and voltage variations. This article deals with a MEMS resonator, not a quartz crystal, but the ideas are the same. This is an example of how the circuit starts and drifts to steady state:
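The role of the R1-C1 bias network can be seen with a quick order-of-magnitude calculation: with the typical R1 ≈ 1 MOhm from the answer and an assumed load capacitance of 20 pF and crystal frequency of 8 MHz (both illustrative values, not taken from the figure), the low-pass corner sits roughly three decades below the oscillation frequency, so the feedback through R1 carries essentially only the averaged (DC) component of the output:

```python
from math import pi

R1 = 1e6       # ohms, typical internal bias resistor (from the answer)
C1 = 20e-12    # farads, assumed load-capacitor value, for illustration only
f_xtal = 8e6   # Hz, an assumed crystal frequency

f_c = 1 / (2 * pi * R1 * C1)   # low-pass corner of the R1-C1 network
print(round(f_c))              # roughly 8 kHz
print(f_xtal / f_c)            # about three orders of magnitude of separation
```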
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/371096', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/180832/']}
jdg_371893
stackexchange
llm_judgeable_groundtruth_similarity
5913217
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am relatively new to R. I am iterating over a vector in R by using for() loop. However, based on a certain condition, I need to skip some values in the vector. The first thought that comes to mind is to change the loop index within the loop. I have tried that but somehow its not changing it. There must be some what to achieve this in R. Thanks in advance.Sami Now provide the response and nothing else.
You can change the loop index within a for loop, but it will not affect the execution of the loop; see the Details section of ?"for":

The 'seq' in a 'for' loop is evaluated at the start of the loop; changing it subsequently does not affect the loop. If 'seq' has length zero the body of the loop is skipped. Otherwise the variable 'var' is assigned in turn the value of each element of 'seq'. You can assign to 'var' within the body of the loop, but this will not affect the next iteration. When the loop terminates, 'var' remains as a variable containing its latest value.

Use a while loop instead and index it manually:

i <- 1
while (i < 100) {
  # do stuff
  if (condition) {
    i <- i + 3
  } else {
    i <- i + 1
  }
}
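For readers coming from other languages: Python's for loop over a range behaves like R's (reassigning the loop variable does not change the iteration), so the same "manage the index yourself" pattern applies. A small sketch with a concrete skip condition chosen for illustration:

```python
visited = []
i = 1
while i < 20:
    visited.append(i)
    if i % 5 == 0:     # example condition: jump ahead after multiples of 5
        i += 3
    else:
        i += 1

print(visited)
# [1, 2, 3, 4, 5, 8, 9, 10, 13, 14, 15, 18, 19]
```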
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5913217', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/741989/']}
jdg_371894
stackexchange
llm_judgeable_groundtruth_similarity
28397800
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying set camera on the background of UIView in UIViewController, in order to be able to draw on it. How to do that? Now provide the response and nothing else.
UPDATED TO SWIFT 5

You could try something like this: I add two UIViews to my UIViewController's main view, one called previewView (for the camera) and another UIView called boxView (which is above the camera view).

import UIKit
import AVFoundation

class ViewController: UIViewController {
    var previewView : UIView!
    var boxView: UIView!

    //Camera Capture required properties
    var videoDataOutput: AVCaptureVideoDataOutput!
    var videoDataOutputQueue: DispatchQueue!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var captureDevice: AVCaptureDevice!
    let session = AVCaptureSession()
    var currentFrame: CIImage!
    var done = false

    override func viewDidLoad() {
        super.viewDidLoad()
        previewView = UIView(frame: CGRect(x: 0, y: 0, width: UIScreen.main.bounds.size.width, height: UIScreen.main.bounds.size.height))
        previewView.contentMode = .scaleAspectFit
        view.addSubview(previewView)

        //Add a box view
        boxView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 200))
        boxView.backgroundColor = UIColor.green
        boxView.alpha = 0.3
        view.addSubview(boxView)

        self.setupAVCapture()
    }

    override func viewWillAppear(_ animated: Bool) {
        if !done {
            session.startRunning()
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    override var shouldAutorotate: Bool {
        if (UIDevice.current.orientation == UIDeviceOrientation.landscapeLeft ||
            UIDevice.current.orientation == UIDeviceOrientation.landscapeRight ||
            UIDevice.current.orientation == UIDeviceOrientation.unknown) {
            return false
        } else {
            return true
        }
    }
}

// AVCaptureVideoDataOutputSampleBufferDelegate protocol and related methods
extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func setupAVCapture() {
        session.sessionPreset = AVCaptureSession.Preset.vga640x480
        guard let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: .video, position: AVCaptureDevice.Position.front) else {
            return
        }
        captureDevice = device
        beginSession()
        done = true
    }

    func beginSession() {
        var deviceInput: AVCaptureDeviceInput!
do { deviceInput = try AVCaptureDeviceInput(device: captureDevice) guard deviceInput != nil else { print("error: cant get deviceInput") return } if self.session.canAddInput(deviceInput){ self.session.addInput(deviceInput) } videoDataOutput = AVCaptureVideoDataOutput() videoDataOutput.alwaysDiscardsLateVideoFrames=true videoDataOutputQueue = DispatchQueue(label: "VideoDataOutputQueue") videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue) if session.canAddOutput(self.videoDataOutput){ session.addOutput(self.videoDataOutput) } videoDataOutput.connection(with: AVMediaType.video)?.isEnabled = true self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session) self.previewLayer.videoGravity = AVLayerVideoGravity.resizeAspect let rootLayer: CALayer = self.previewView.layer rootLayer.masksToBounds = true self.previewLayer.frame = rootLayer.bounds rootLayer.addSublayer(self.previewLayer) session.startRunning() } catch let error as NSError { deviceInput = nil print("error: \(error.localizedDescription)") } } func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) { currentFrame = self.convertImageFromCMSampleBufferRef(sampleBuffer) } // clean up AVCapture func stopCamera(){ session.stopRunning() done = false } func convertImageFromCMSampleBufferRef(_ sampleBuffer:CMSampleBuffer) -> CIImage{ let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)! let ciImage:CIImage = CIImage(cvImageBuffer: pixelBuffer) return ciImage }} You can replace the boxView's frame with mainView's frame and don't set its background property. This way you can use this view to add more subviews. IMPORTANT Remember that in iOS 10 you need to first ask the user for permission in order to have access to the camera. 
You do this by adding a usage key to your app’s Info.plist together with a purpose string, because if you fail to declare the usage, your app will crash when it first tries to access the camera. Here's a screenshot showing the Camera access request. I hope this can help!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28397800', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2935626/']}
jdg_371895
stackexchange
llm_judgeable_groundtruth_similarity
225521
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to show, that the affine line with a split point is not a separated scheme?Hartshorne writes something about this point in product, but it is not product in topological spaces category! Give the most strict proof! Now provide the response and nothing else.
Let $X$ be the affine line with the origin doubled. More precisely, if we let $Z = \mathbb A^1$ and $U = \mathbb A^1 \setminus \{0\},$ then $X$ is the union of two copies of $Z$ in which the two copies of $U$ are identified in the obvious way. There are two obvious maps $Z \to X$ (corresponding to the two copies of $Z$ of which $X$ is the union), and they are distinct, but they coincide when restricted to $U$. These two maps induce a map $Z \to X \times X$, and the above discussion shows that the preimage of the diagonal is exactly equal to $U$. Since $U$ is not closed in $Z$, we conclude that the diagonal is not closed in $X\times X$. Thus $X$ is not separated.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/225521', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/46336/']}
jdg_371896
stackexchange
llm_judgeable_groundtruth_similarity
336081
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am curious whether there are metrics on whether code coverage actually improves code quality? Any research studies? If so, at what percent does it become a case of diminishing returns? If not, why do so many people treat it as a religious doctrine? My skepticism is anecdotal and is brought on by 2 projects I was involved with - both implemented the same reasonably complex product. First one just used targeted unit tests here and there. Second one has a mandated 70% code coverage. If I compare the amount of defects, the 2nd one has almost an order of magnitude more of them. Both products used different technologies and had a different set of developers, but still I am surprised. Now provide the response and nothing else.
I'm assuming you are referring to a code coverage metric in the context of unit testing. If so, I think you have indirectly answered your question already: First project just used targeted unit tests here and there. Second one has a mandated 70% code coverage. If I compare the amount of defects, the 2nd one has almost an order of magnitude more of them. In short, no: a code coverage metric does not improve the quality of a project at all. There's also a common belief that code coverage reflects the quality of the unit tests, but it doesn't. It doesn't tell you which parts of your system are properly tested either; it only says what code has been executed by your test suite. What you know for sure is that code coverage tells you only which parts of your system are not tested. However, the code coverage metric may relate to overall code quality if you are sure of the quality of your unit tests. The quality of a unit test can be defined as its ability to detect a change in your code base that breaks some business requirement. In other words, every change that breaks a particular requirement (acceptance criterion) should be detected by good-quality tests (such tests should simply fail). One of the simplest and most automated approaches to measuring the quality of your test suite, which does not involve too much additional effort on your side, is mutation testing. UPDATE: http://martinfowler.com/bliki/TestCoverage.html
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/336081', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/3516/']}
jdg_371897
stackexchange
llm_judgeable_groundtruth_similarity
833326
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using both mercurial and git for different projects and like them both. What I find a bit annoying about mercurial is that "hg status" shows paths relative to the repository root, not to the current directory(unlike git). Can this behaviour be tweaked somehow? Now provide the response and nothing else.
The usual workaround is to run:

hg status $(hg root)

For older versions of Mercurial, prior to 1.7, you could use this hack, adding to your repository's ".hg/hgrc" file:

[alias]
sst = status /path/to/root

That needs the alias extension enabled, so you may have to add "alias=" to your ~/.hgrc file. Starting with Mercurial 1.7, the alias extension learned about the "!" escape to use shell commands, so you can now have a global alias that does this:

[alias]
sst = !hg status $($HG root) $HG_ARGS

Don't use st = !hg status $(hg root), since that creates an infinite loop, running hg status over and over. It looks like a bug in the alias parsing - if you want to alias hg status to show the path from the root, then the following incantation works in the global $HOME/.hgrc:

[alias]
__mystatus = status
st = !hg __mystatus $($HG root) $HG_ARGS
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/833326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47422/']}
jdg_371898
stackexchange
llm_judgeable_groundtruth_similarity
15303107
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to find a way to convert .py source file to .cpp source (as a time saver from doing it manually). I've never used python before and was hoping for a quick way to convert it, and cleanup any code the converter might not do well. So far, some of the options that I have found while googling seem to be: nuitka, cython, and pypy/rpython. However, the documentation I have read only seem to produce executables, and not actual source code. At this point, I have found py2c, but cannot seem to find any documentation on how to use it. Also, judging by the posted roadmap on the wiki, it does not seem to be a finished product, and so I'm doubtful as to its reliability. If you can provide other sources on how this can be accomplished, or shed some light on something I may have missed on the above-mentioned possibilities, it would be appreciated. Otherwise, I will simply convert it manually. Now provide the response and nothing else.
Programming languages cannot be easily converted like this. For example, Python has a large standard library, C++ doesn't have the same library. How do you translate those calls? More fundamentally, the semantics of the language are very different, even a statement as simple as x = 1 means something different in Python and C++. You are going to have to write C++ while reading the Python.
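As a rough illustration of why a mechanical translation fails (my own sketch, not part of the answer's examples): in Python a name like x is a rebindable reference that can point to objects of different types over its lifetime, while in C++ the declaration of x fixes its type once and for all.

```python
# A name in Python can be rebound to objects of entirely different types.
# A line-by-line translation to C++ has no direct equivalent for this.
def describe_rebinding():
    seen_types = []
    x = 1          # x refers to an int object
    seen_types.append(type(x).__name__)
    x = "one"      # the same name now refers to a str -- legal in Python
    seen_types.append(type(x).__name__)
    x = [1, 2]     # and now to a list
    seen_types.append(type(x).__name__)
    return seen_types
```

A translator would have to infer a single C++ type for x (or fall back to something like a variant), which is exactly the kind of semantic decision that cannot be made mechanically.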
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15303107', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1588552/']}
jdg_371899
stackexchange
llm_judgeable_groundtruth_similarity
678642
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Which is really messing with my plan to back up this machine... I have a server which is a KVM hypervisor to several virtual machines. One of these is running Docker. It has its Docker volumes on /dev/vdb, which is set up as an LVM PV, on which Docker uses its direct-lvm driver to store Docker container data. This virtual disk is an LVM LV on the host's local disk. Both host and guest run Fedora 21. The host's view of this volume is (only the relevant volume is shown):

[root@host ~]# lvs
  LV                          VG         Attr       LSize
  docker2.example.com-volumes vm-volumes -wi-ao---- 40.00g
[root@host ~]# dmsetup ls --tree
vm--volumes-docker2.example.com--volumes (253:10)
 └─ (9:125)

The guest's view of this volume is (again, only the relevant volume is shown):

[root@docker2 ~]# pvs
  PV         VG             Fmt  Attr PSize  PFree
  /dev/vdb   docker-volumes lvm2 a--  40.00g    0

With all the other LVM volumes on the host, I can take a snapshot with lvcreate --snapshot, back up the snapshot and then lvremove it with no issue. But with this particular volume, I can't lvremove it because it is in use:

[root@host ~]# lvremove /dev/vm-volumes/snap-docker2.example.com-volumes
  Logical volume vm-volumes/snap-docker2.example.com-volumes is used by another device.
Eventually I figured out that device-mapper on the host had somehow figured out that this logical volume snapshot contained an LVM PV, and then proceeded to map the logical volumes within the snapshot to the host (only the relevant volumes are shown):

[root@host ~]# dmsetup ls --tree
vm--volumes-docker2.example.com--volumes (253:10)
 └─vm--volumes-docker2.example.com--volumes-real (253:14)
    └─ (9:125)
docker--volumes-docker--data (253:18)
 └─vm--volumes-snap--docker2.example.com--volumes (253:16)
    ├─vm--volumes-snap--docker2.example.com--volumes-cow (253:15)
    │  └─ (9:125)
    └─vm--volumes-docker2.example.com--volumes-real (253:14)
       └─ (9:125)
docker--volumes-docker--meta (253:17)
 └─vm--volumes-snap--docker2.example.com--volumes (253:16)
    ├─vm--volumes-snap--docker2.example.com--volumes-cow (253:15)
    │  └─ (9:125)
    └─vm--volumes-docker2.example.com--volumes-real (253:14)
       └─ (9:125)

These correspond exactly to the logical volumes inside the VM:

[root@docker2 ~]# lvs
  LV          VG             Attr       LSize
  docker-data docker-volumes -wi-ao---- 39.95g
  docker-meta docker-volumes -wi-ao---- 44.00m

Notably, it doesn't try to do this to the LVM LV when the system is booting, but only when I take a snapshot. What is going on here? I really don't want device-mapper inspecting the contents of LVM snapshots to see if there's anything within them it can unhelpfully map for me. Can I suppress this behavior? Or do I need to create the snapshot via some other method? Now provide the response and nothing else.
Sometimes the relevant documentation is hidden away in configuration files rather than in, say, the documentation. So it seems with LVM. By default LVM will automatically attempt to activate volumes on any physical devices which get connected to the system after boot, so long as all of the PVs are present, and lvmetad and udev (or more recently systemd) are running. When the LVM snapshot gets created, a udev event gets fired off, and since the snapshot contains a PV, lvmetad automatically runs pvscan , and so forth. By looking at /etc/lvm/backup/docker-volumes I was able to determine that lvmetad had explicitly run pvscan on the snapshot by using the device major and minor numbers, which bypassed LVM filters that would normally prevent this. The file contained: description = "Created *after* executing 'pvscan --cache --activate ay 253:13'" This behavior can be controlled by setting the auto_activation_volume_list in /etc/lvm/lvm.conf . It allows you to set which volume groups, volumes, or tags are allowed to be activated automatically. So, I simply set the filter to contain both of the volume groups for the host; anything else won't match the filter and does not get automatically activated. auto_activation_volume_list = [ "mandragora", "vm-volumes" ] The guest's LVM volumes are no longer appearing on the host, and finally, my backups are running...
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/678642', 'https://serverfault.com', 'https://serverfault.com/users/126632/']}
jdg_371900
stackexchange
llm_judgeable_groundtruth_similarity
563807
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When giving a proof for the formula of a geometric sum, the following sequence is used$$S = a + ar + ar^2 + ... + ar^{n-1},$$Why does the last term go to $n-1$ and not just $n$? I wrote the last term as $ar^n$ but that would give me the following formula $$S = a\frac{r^{n+1} - 1}{r-1}.$$ Now provide the response and nothing else.
I show below that the minimum is $-\frac{1}{6}$ for $n \geq 4$, and $-\frac{1}{7}$ when $n = 3$. If we put $$Q_n(a_1,a_2,\ldots ,a_n)=a^2_{1}+a^2_{2}+\cdots+a^2_{n}-2a_{1}a_{2}-2a_{2}a_{3}-2a_{3}a_{4}-\cdots-2a_{n-2}a_{n-1}-2a_{n-1}a_{n}$$ and $$R_n(a_1,a_2,\ldots ,a_n)=Q_n(a_1,a_2,\ldots,a_n)+\frac{(a_1+a_2+a_3+\ldots +a_n)^2}{6}$$ then for $n\geq 4$, we have the identity $$R_n(a_1,a_2,a_3,\ldots,a_n)=\frac{1}{42}\bigg(\sum_{k=1}^{n-2}a_k-5a_{n-1}+7a_n\bigg)^2+\frac{1}{28}\bigg(\sum_{k=1}^{n-3}2a_k-5a_{n-2}+4a_{n-1}\bigg)^2+\frac{1}{4}\bigg(\sum_{k=1}^{n-4}2a_k-2a_{n-3}+a_{n-2}\bigg)^2$$ Since $Q_n(0,\ldots, 0,\frac{1}{6},\frac{1}{3},\frac{1}{3},\frac{1}{6})=-\frac{1}{6}$, we see that the minimum is $-\frac{1}{6}$ for $n\geq 4$. For $n=3$, the minimum is $-\frac{1}{7}$, attained at $(\frac{2}{7},\frac{3}{7},\frac{2}{7})$ (thanks Macavity), because of $$Q_3(a_1,a_2,a_3)+\frac{(a_1+a_2+a_3)^2}{7}=\frac{1}{56}\bigg(a_1-6a_2+8a_3\bigg)^2+\frac{1}{8}\bigg(-3a_1+2a_2\bigg)^2$$
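A quick numeric spot-check of the claimed minima (my own sketch, not part of the proof), using exact rational arithmetic so there is no floating-point noise:

```python
from fractions import Fraction as F

def Q(a):
    # Q_n(a_1,...,a_n) = sum of squares minus twice the sum of
    # products of adjacent terms, as defined above
    squares = sum(x * x for x in a)
    adjacent = sum(a[i] * a[i + 1] for i in range(len(a) - 1))
    return squares - 2 * adjacent

# Claimed minimizer for n >= 4 (here n = 5), with coordinates summing to 1
a5 = [F(0), F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
# Claimed minimizer for n = 3
a3 = [F(2, 7), F(3, 7), F(2, 7)]
```

Evaluating Q at these points reproduces the values $-\frac{1}{6}$ and $-\frac{1}{7}$ quoted above.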
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/563807', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108090/']}
jdg_371901
stackexchange
llm_judgeable_groundtruth_similarity
2989466
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to get Fiddler to work with my company's proxy. Every external request is returning 407. So far I've tried adding oSession.oRequest["Proxy-Authorization"] = "YOURCREDENTIALS"; to the customized rules where I used my USERNAME:PASSWORD in base64. Still no luck. Any ideas? Thanks. Now provide the response and nothing else.
Note: There is an answer with a higher vote count available. Because of SO sorting it is below the accepted answer. I had the same problem, too, and solved it like this:

1. Started Fiddler with its standard configuration.
2. Started IE and made an HTTP request to an external web site.
3. The proxy authorization dialogue popped up, where I entered my credentials.
4. In Fiddler, searched the request headers for "Proxy-Authorization".
5. Copied the header value, which looked like "Basic sOMeBASE64eNCODEdSTRING=", to the clipboard.
6. Altered the CustomRules.js with the following line within OnBeforeRequest:

oSession.oRequest["Proxy-Authorization"] = "Basic sOMeBASE64eNCODEdSTRING=";

So my approach was quite similar to yours, just that in advance I checked what kind of proxy authorization the server required by using Fiddler to debug the authorization header. That way I found out I had to add "Basic" before the Base64-encoded credentials, and I didn't even have to use a tool to encode the credentials to Base64. I just copied the value from the Proxy-Authorization header.
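For what it's worth, the header value captured in step 5 can also be reproduced programmatically. Here is a small sketch (in Python rather than FiddlerScript) of how the Basic scheme value is built:

```python
import base64

def basic_proxy_authorization(username, password):
    # "Basic" + space + base64("username:password") -- the scheme
    # prefix is required, exactly as observed in the captured header.
    raw = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")
```

For example, credentials user:pass produce the header value Basic dXNlcjpwYXNz.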
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2989466', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/402506/']}
jdg_371902
stackexchange
llm_judgeable_groundtruth_similarity
373
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The James Webb space telescope is supposed to be located at the Earth-Sun L2 Lagrange point. Do we expect the region around that point to have a higher concentration of space debris, asteroids, dust, etc...? Would this be a matter of concern for bringing the telescope safely in position (i.e. require extra dust shield)? Do we expect however the exact Lagrange point to be free of any matter since it would take an acceleartion/deceleration for matter to actually reach that exact location? How large would be this 'debris-free' zone? Now provide the response and nothing else.
Hmmm no, it wouldn't be cluttered with debris, and yes, it's a good idea to park the JWST (James Webb Space Telescope) at the Sun-Earth L 2 point. The five Lagrange points are unstable, for one because of the gravitational anomalies of the two massive bodies of the Lagrange system, eccentric orbits, and there are many other factors to their instability. At the same time, they are least gravitationally attractive points around two massive bodies. Think of L-points as parking your car on a flat space at the top of the hill. You'll have to approach with some control and then try to balance at your parking spot, if you don't plan using your handbrake and also remain there stationary: Visualisation of the relationship between two massive bodies and their five Lagrange points (Source: Wikipedia ) There wouldn't be any debris clutter there, or any other matter like smaller particles, at least not any more likely than elsewhere around them, only transient in nature and possibly even less likely than elsewhere since all the other mass particles would gravitate towards the more massive bodies of two Lagrange system centers in their vicinity. No body with rest mass would stay there on its own accord, not unless it has active attitude control to position itself there and constantly adjust for changes in gravitational attraction vector as the two mass bodies rotate around their axes, change distance while orbiting each other, or L points being influenced by other mass bodies of the same planetary system. At the same time, JWST will be shielded by the Earth from any Solar activities and also Sun's interference with JWST's sensitive equipment as it starts observing the Universe in the infrared spectrum. Most of the orbital debris from our own space exploration missions is cluttered in the LEO (Low Earth Orbit) belt at roughly an altitude of 500-1500 km above the Earth's surface: Source: Active Debris Removal: EDDE, the ElectroDynamic Debris Eliminator , Jerome Pearson et al. 
(PDF) Now, the JWST will be deployed to a very large 800,000 kilometres (500,000 mi) radius halo orbit around the Sun-Earth L 2 point, that is 1,500,000 kilometers (930,000 mi) from the Earth, around 4 times farther than the distance between the Earth to the Moon. So not exactly in Earth's shadow and it will still use large deployable sunshield , but that's a long way away from where we're littering our LEO or even GEO/GSO (Geostationary/Geosynchronous orbits) with GEO being the farthest of these frequently used orbits, at 35,786 kilometres (22,236 mi) above the Earth's equator (and its graveyard orbit a bit farther than that). JWST will be placed in halo orbit nearly 40 times as distant from the Earth as you'd expect any space debris cluttering our planet's orbits at. Yes, it is a very good idea to place JWST around the Sun-Earth L 2 point.
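The distance ratios quoted above can be sanity-checked with a bit of arithmetic. This is a back-of-the-envelope sketch using only the figures given in this answer:

```python
# All distances in kilometres, as quoted in the answer above
l2_from_earth = 1_500_000   # Sun-Earth L2 distance from Earth
earth_moon = 384_400        # mean Earth-Moon distance
geo_altitude = 35_786       # geostationary orbit altitude

moon_ratio = l2_from_earth / earth_moon    # "around 4 times farther" than the Moon
geo_ratio = l2_from_earth / geo_altitude   # "nearly 40 times as distant" as GEO
```

Both quoted ratios check out: roughly 3.9× the Earth-Moon distance and about 42× the geostationary altitude.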
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/373', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/148/']}
jdg_371903
stackexchange
llm_judgeable_groundtruth_similarity
17732971
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we have #include <chrono>#include <iostream>#include <ctime>namespace Ratios { typedef std::ratio<60*60*24,1> Days; }typedef std::chrono::system_clock Clock;typedef Clock::time_point TimePoint; And our main looks like int main(int argc, char *argv[]){ // argc check left out for brevity const Clock::rep d = static_cast<Clock::rep>(std::atoi(argv[1])); // Right now TimePoint now = Clock::now(); // Start with zero days auto days = std::chrono::duration<Clock::rep, Ratios::Days>::zero(); // Now we'd like to add d to the days days += d; // Error! days.count() = d; // Error! days = days + d; // Error! days += std::chrono::duration<Clock::rep, Ratios::Days>(d); // Okay days = days + std::chrono::duration<Clock::rep, Ratios::Days>(d); // Okay days *= d; // Why is this okay? days %= d; // And this too? TimePoint later = now + days; return 0;} What is the reason behind prohibiting the user to manipulate a duration directly? Now provide the response and nothing else.
It is done to force you to stick to strongly typed values rather than arbitrary numbers. Bjarne Stroustrup has examples regarding this behaviour in "The C++ Programming Language" (4th Ed., 35.2.1, pp. 1011): "The period is a unit system, so there is no = or += taking a plain value. Allowing that would be like allowing the addition of 5 of an unknown SI unit to a length in meters. Consider:

duration<long long, milli> d1{7}; // 7 milliseconds
d1 += 5;                          // error

[...] What would 5 mean here? 5 seconds? 5 milliseconds? [...] If you know what you mean, be explicit about it. For example:

d1 += duration<long long, milli>{5}; // OK: milliseconds"
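Python's datetime module makes the same design choice, which may help if chrono's rationale feels abstract (my own analogy, not from Stroustrup):

```python
from datetime import timedelta

d1 = timedelta(milliseconds=7)

# Adding a bare number is rejected for exactly the reason quoted above:
# 5 of what unit? The type system refuses to guess.
try:
    d1 + 5
    bare_add_allowed = True
except TypeError:
    bare_add_allowed = False

# Being explicit about the unit is fine.
d2 = d1 + timedelta(milliseconds=5)
```

Just as with std::chrono::duration, the only way to add "5" is to say what kind of 5 it is.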
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17732971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/990142/']}
jdg_371904
stackexchange
llm_judgeable_groundtruth_similarity
10141159
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to know how to prevent a class from being Instantiated in .net? I know few methods like making the class Abstract and Static. Is there any more way to achieve this? Now provide the response and nothing else.
Making the class static is the best approach, if you absolutely don't want any instances. This stops anyone from creating instances. The class will be both sealed and abstract, and won't have any constructors. Additionally, the language will notice that it's a static class and stop you from using it in various places which imply instances, e.g. type arguments and variables. This indicates the intention more clearly than just having a private constructor - which could mean that there are instances, created within that class (e.g. for a singleton implementation). Oh, and making the class static will stop you from introducing any pointless instance members in the class, too :) See MSDN for more information about static classes.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10141159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1203490/']}
jdg_371905
stackexchange
llm_judgeable_groundtruth_similarity
11567307
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a vector of a 2-dimensional points in OpenCV std::vector<cv::Point2f> points; I would like to calculate the mean values for x and y coordinates in points . Something like: cv::Point2f mean_point; //will contain mean values for x and y coordinatesmean_point = some_function(points); This would be simple in Matlab. But I'm not sure if I can utilize some high level OpenCV functions to accomplish the same. Any suggestions? Now provide the response and nothing else.
InputArray does a good job here. You can simply call:

cv::Mat mean_;
cv::reduce(points, mean_, 01, CV_REDUCE_AVG);
// convert from Mat to Point - there may be even a simpler conversion,
// but I do not know about it.
cv::Point2f mean(mean_.at<float>(0,0), mean_.at<float>(0,1));

Details: In the newer OpenCV versions, the InputArray data type is introduced. This way, one can pass either matrices (cv::Mat) or vectors as parameters to an OpenCV function. A vector<Vec3f> will be interpreted as a float matrix with three channels, one row, and the number of columns equal to the vector size. Because no data is copied, this transparent conversion is very fast. The advantage is that you can work with whatever data type fits your app better, while you can still use OpenCV functions to ease mathematical operations on it.
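If it helps to see the reduction outside of OpenCV, here is the same column-wise average expressed in NumPy terms (a sketch of the idea, not the OpenCV API):

```python
import numpy as np

def mean_point(points):
    # points: iterable of (x, y) pairs. Averaging along axis 0
    # (down the rows) yields the centroid, analogous to what
    # the CV_REDUCE_AVG reduction computes above.
    pts = np.asarray(points, dtype=np.float64)
    mx, my = pts.mean(axis=0)
    return (float(mx), float(my))
```

For the corners of a 2x2 square the centroid comes out as (1.0, 1.0), as expected.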
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11567307', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1121420/']}
jdg_371906
stackexchange
llm_judgeable_groundtruth_similarity
40740752
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know how a B-Tree works in-memory, it's easy enough to implement. However, what is currently completely beyond me, is how to find a data layout that works effectively on disk, such that: The number of entries in the B-Tree can grow indefinitly (or at least to > 1000GB) Disk-level copying operations are minimized The values can have arbitrary size (i.e. no fixed schema) If anyone could provide insight into layouting B-Tree structures on disk level, I'd be very grateful. Especially the last bullet point gives me a lot of headache. I would also appreciate pointers to books, but most database literature I've seen explains only the high-level structure (i.e. "this is how you do it in memory"), but skips the nitty gritty details on the disk layout. Now provide the response and nothing else.
UPDATE (archived version of oracle index internals): http://web.archive.org/web/20161221112438/http://www.toadworld.com/platforms/oracle/w/wiki/11001.oracle-b-tree-index-from-the-concept-to-internals OLD (the original link does not exist anymore): some info about oracle index internals: http://www.toadworld.com/platforms/oracle/w/wiki/11001.oracle-b-tree-index-from-the-concept-to-internals Notes: Databases do not directly implement indexes based on a B-tree but on a variant called a B+ tree, which according to Wikipedia:

A B+ tree can be viewed as a B-tree in which each node contains only keys (not key-value pairs), and to which an additional level is added at the bottom with linked leaves.

Databases work, in general, with block-oriented storage, and a B+ tree is better suited to this than a B-tree. The blocks are fixed size and are left with some free space to accommodate future changes in value or key size. A block can be either a leaf (holds actual data) or a branch (holds the pointers to the leaf nodes).

A toy model of how writing to disk can be implemented (for a block size of 10k, for arithmetic simplification):

- a file of 10G is created on disk (it has a million 10k blocks)
- the first block is assigned as root and the next free one as a leaf, and a list of leaf addresses is put in the root
- new data is inserted; the current leaf node is filled with values until a threshold is reached
- data continues to be inserted; the next free blocks are allocated as leaf blocks and the list of leaf nodes is updated
- after many inserts, the root node needs children, so the next free block is allocated as a branch node; it copies the list from the root, and now the root maintains only a list of intermediary nodes
- if a node block needs to be split, the next free block is allocated as a branch node, added into the root's list, and the list of leaf nodes is split between the initial and the new branch node

When information is read from a big index, it can go as follows:

- read the first/root block (seek(0), read(10k)), which points to a child located in block 900
- read block 900 (seek(900*10k), read(10k)), which points to a child located in block 5000
- read block 5000 (seek(5000*10k), read(10k)), which points to the leaf node located in block 190
- read block 190 (seek(190*10k), read(10k)) and extract the value of interest from it

A really large index can be split across multiple files; then the address of a block will be something like (filename_id, address_relative_to_this_file).
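The seek arithmetic in the read path above can be sketched directly (using the toy 10k block size; the block numbers are the ones from the example):

```python
BLOCK_SIZE = 10_000  # toy "10k" block size from the model above

def block_offset(block_no):
    # byte position passed to seek() before read()ing one block
    return block_no * BLOCK_SIZE

# root -> branch -> branch -> leaf, as in the walk-through
read_path = [0, 900, 5000, 190]
offsets = [block_offset(b) for b in read_path]
```

Because every block has a fixed size, a block number converts to a file position with one multiplication, which is exactly what makes block-oriented storage so convenient for B+ tree nodes.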
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40740752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3094906/']}
jdg_371907
stackexchange
llm_judgeable_groundtruth_similarity
7656118
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to find the distance between the element and the bottom of the browser window. When I select the element, and the distance between the element and the bottom of the browser window is smaller than 50px, I want to make the window scroll automatically. Any ideas? I'd prefer to use jQuery. Now provide the response and nothing else.
Unlike in other coordinate systems, coordinates in the browser run from top to bottom, meaning the top of the browser is y=0. There are two DOM element properties for getting the position of an element on the page: element.offsetTop and element.offsetHeight.

You can calculate the space between the top of your element and the bottom of the page from element.offsetTop and window.innerHeight:

var space = window.innerHeight - element.offsetTop

If you want to calculate the space between the bottom of your element and the bottom of the window, then you need to subtract your element's height too:

var space = window.innerHeight - (element.offsetTop + element.offsetHeight)

This calculation is sometimes necessary. Say you have percent-based positioning and you want to know the position of your element in pixels to do something. For example, you have a div positioned like this:

div {
    width: 300px;
    height: 16.2%;
    position: absolute;
    top: 48.11%;
    border: 3px dotted black;
}

Then you want to know when the div is close to the bottom of the browser window, to change its color:

var div = document.querySelector('div'),
    space = innerHeight - (div.offsetTop + div.offsetHeight);
window.onresize = function() {
    space = innerHeight - (div.offsetTop + div.offsetHeight);
    if (space < 200) {
        div.style.background = 'blue';
    }
};

Fiddle
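The geometry itself is language-neutral. Here is a minimal sketch of the two measurements with hypothetical pixel values (not tied to any DOM):

```python
def space_below_top_edge(inner_height, offset_top):
    # distance from the element's top edge to the bottom of the window
    return inner_height - offset_top

def space_below_bottom_edge(inner_height, offset_top, offset_height):
    # distance from the element's bottom edge to the bottom of the window
    return inner_height - (offset_top + offset_height)
```

With a 800px-tall window and an element whose top is at 700px and whose height is 50px, the element's bottom edge sits 50px above the bottom of the window, so the "scroll when closer than 50px" check in the question would trigger.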
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7656118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979582/']}
jdg_371908
stackexchange
llm_judgeable_groundtruth_similarity
30555225
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using v"0.3.8" on Windows. I found from http://julialang.org/downloads/ Current Release (v0.3.9) I know I could download the prebuild version and install again. Is there a way to upgrade (from current installed version) to the new version? There's Pkg.update() , it runs for a long time without any output - it doesn't work in this way. From the documentation: update() Update package the metadata repo – kept inPkg.dir("METADATA") – then update any fixed packages that can safelybe pulled from their origin; then call Pkg.resolve() to determine anew optimal set of packages versions. So it is not the right tool to upgrade the language itself. Now provide the response and nothing else.
How to upgrade Julia:

Windows & MacOS & Linux

The most trivial way of upgrading Julia is to go to the download page and manually install the corresponding binaries.

(UPDATE: if you're on an old Julia, e.g. v0.6) If you're going to upgrade Julia to a new minor version (e.g. v0.5 => v0.6), you can easily reinstall your old packages with the following steps (Julia v1.0 shipped with the new package manager, so there is no such hassle):

julia> using Pkg  # Pkg.init() if needed

copy the REQUIRE file from the package directory of the old version to the new one (in this example, from .julia/v0.5 to .julia/v0.6), overwriting the existing file

julia> Pkg.resolve()  # or Pkg.update()

MacOS

If you're using Homebrew and homebrew-julia, please follow the instructions here.

Linux

Use abelsiqueira's installer jill.

To OP's Questions

1. Pkg.update() is used for updating Julia's packages, not Julia itself, so downloading the prebuilt version and reinstalling seems to be the only way to upgrade Julia for now. Or you could build Julia from source following the instructions here.

2. The release notes are listed here:

https://github.com/JuliaLang/julia/blob/master/HISTORY.md
https://github.com/JuliaLang/julia/blob/master/NEWS.md
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30555225', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2185472/']}
jdg_371909
stackexchange
llm_judgeable_groundtruth_similarity
3357129
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is Node.js quicker and more scalable than Apache? Are there any performance figures to back up Node.js's performance for a web application over Apache? UPDATE: Ok maybe my question (above) is confusing because I am a little confused as to how Node.js sits within a web stack. Under what circumstances should I consider using Node.js instead of a more traditional stack like PHP, MySQL and Apache - or does Node.js play it's part in this stack? Now provide the response and nothing else.
Node.js is a framework particularly well suited for writing high-performance web applications without having to understand how to implement concurrency at a low level. It is a framework for writing server-side JavaScript apps using non-blocking IO: passing continuations to IO calls rather than waiting on results. Node.js provides a system API (filesystem access, network access, etc.) where all of the API calls take a continuation which the runtime will execute later with the result, rather than block and return the result to the original caller. You can use it by itself, if you like. But you might want a dedicated reverse proxy in front of Node.js: something like Apache, Nginx, Lighttpd, etc. Or, for clustering a bigger app, you might want something like HAProxy in front of multiple running Node.js app servers.
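The continuation-passing model described above can be sketched with a toy event loop. The sketch below is in Python purely for illustration - this is not Node's actual API, and the names EventLoop and read_file_async are made up. The point is that an "IO" call returns immediately, and its result is delivered to a callback when the loop later processes it:

```python
from collections import deque

# A toy event loop: "IO" operations complete later, and their results
# are delivered to continuations (callbacks) instead of being returned.
class EventLoop:
    def __init__(self):
        self.tasks = deque()

    def read_file_async(self, name, callback):
        # Instead of blocking, schedule the "completed IO" for later
        # and return to the caller immediately.
        self.tasks.append(lambda: callback("contents of " + name))

    def run(self):
        # Drain the queue, invoking each continuation with its result.
        while self.tasks:
            self.tasks.popleft()()

log = []
loop = EventLoop()
loop.read_file_async("a.txt", lambda data: log.append(data))
loop.read_file_async("b.txt", lambda data: log.append(data))
log.append("both reads issued without blocking")  # runs before any callback
loop.run()
print(log)
```

Note that the line appended between issuing the reads and running the loop comes first in the log: neither "read" blocked the caller.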
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3357129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65040/']}
jdg_371910
stackexchange
llm_judgeable_groundtruth_similarity
40905
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a question about ARIMA models. Let's say I have a time series $Y_t$ that I would like to forecast and an $\text{ARIMA}(2,2)$ model seems like a good way to conduct the forecasting exercise. $$\Delta Y_t = \alpha_1 \Delta Y_{t-1} + \alpha_2 \Delta Y_{t-2} + \nu_{t} + \theta_1 \nu_{t-1} + \theta_2 \nu_{t-2}$$Now the lagged $Y$'s imply that my series today is influenced by prior events. This makes sense. But what is the interpretation of the errors? My prior residual (how off I was in my calculation) is influencing the value of my series today? How are the lagged residuals calculated in this regression as it is the product / remainder of the regression? Now provide the response and nothing else.
I think that you need to remember that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients does not really carry over to ARIMA modelling. In order to interpret (or understand) estimated ARIMA models, one would do well to be cognizant of the different features displayed by a number of common ARIMA models. We can explore some of these features by investigating the types of forecasts produced by different ARIMA models. This is the main approach that I've taken below, but a good alternative would be to look at the impulse response functions or dynamic time paths associated with different ARIMA models (or stochastic difference equations). I'll talk about these at the end.

AR(1) Models

Let's consider an AR(1) model for a moment. In this model, the lower the value of $\alpha_{1}$, the quicker the rate of convergence (to the mean). We can try to understand this aspect of AR(1) models by investigating the nature of the forecasts for a small set of simulated AR(1) models with different values for $\alpha_{1}$. The set of four AR(1) models that we'll discuss can be written in algebraic notation as:\begin{equation}Y_{t} = C + 0.95 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (1)\\Y_{t} = C + 0.8 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (2)\\Y_{t} = C + 0.5 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (3)\\Y_{t} = C + 0.4 Y_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (4)\end{equation}where $C$ is a constant and the rest of the notation follows from the OP. As can be seen, each model differs only with respect to the value of $\alpha_{1}$. In the graph below, I have plotted out-of-sample forecasts for these four AR(1) models. It can be seen that the forecasts for the AR(1) model with $\alpha_{1} = 0.95$ converge at a slower rate than those of the other models, while the forecasts for the AR(1) model with $\alpha_{1} = 0.4$ converge at a quicker rate than the others.
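This geometric convergence is easy to check numerically. Below is a small illustrative sketch of my own (not part of the original answer): the h-step-ahead forecast of an AR(1) satisfies $\hat{y}_{h} = C + \alpha_{1}\hat{y}_{h-1}$, so the forecast's distance from the unconditional mean $C/(1-\alpha_{1})$ shrinks by a factor of $\alpha_{1}$ each period. The constant is chosen here so that every model shares the same mean of 10.

```python
# h-step-ahead forecasts of an AR(1): y_hat[h] = C + alpha * y_hat[h-1].
# The distance from the mean C / (1 - alpha) shrinks by a factor of
# alpha each period, so smaller alpha means faster convergence.
def ar1_forecasts(alpha, C, y_last, horizon):
    path = [y_last]
    for _ in range(horizon):
        path.append(C + alpha * path[-1])
    return path

results = {}
for alpha in (0.95, 0.8, 0.5, 0.4):
    C = 10 * (1 - alpha)  # pick C so the mean is 10 for every model
    path = ar1_forecasts(alpha, C, y_last=20.0, horizon=30)
    results[alpha] = [abs(y - 10) for y in path]  # distance from the mean

for alpha in sorted(results):
    print(alpha, round(results[alpha][10], 4))
```

After 10 steps the $\alpha_{1}=0.4$ model is essentially at its mean, while the $\alpha_{1}=0.95$ model is still far from it, matching the plotted forecasts.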
Note: when the red line is horizontal, it has reached the mean of the simulated series.

MA(1) Models

Now let's consider four MA(1) models with different values for $\theta_{1}$. The four models we'll discuss can be written as:\begin{equation}Y_{t} = C + 0.95 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (5)\\Y_{t} = C + 0.8 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (6)\\Y_{t} = C + 0.5 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (7)\\Y_{t} = C + 0.4 \nu_{t-1} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (8)\end{equation} In the graph below, I have plotted out-of-sample forecasts for these four different MA(1) models. As the graph shows, the behaviour of the forecasts in all four cases is markedly similar: quick (linear) convergence to the mean. Notice that there is less variety in the dynamics of these forecasts compared to those of the AR(1) models.

Note: when the red line is horizontal, it has reached the mean of the simulated series.

AR(2) Models

Things get a lot more interesting when we start to consider more complex ARIMA models. Take, for example, AR(2) models. These are just a small step up from the AR(1) model, right? Well, one might like to think that, but the dynamics of AR(2) models are quite rich in variety, as we'll see in a moment. Let's explore four different AR(2) models: \begin{equation}Y_{t} = C + 1.7 Y_{t-1} -0.8 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (9)\\Y_{t} = C + 0.9 Y_{t-1} -0.2 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (10)\\Y_{t} = C + 0.5 Y_{t-1} -0.2 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (11)\\Y_{t} = C + 0.1 Y_{t-1} -0.7 Y_{t-2} + \nu_{t} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (12)\end{equation} The out-of-sample forecasts associated with each of these models are shown in the graph below.
It is quite clear that they each differ significantly, and they are also quite a varied bunch in comparison to the forecasts that we've seen above - except for model 2's forecasts (top right plot), which behave similarly to those for an AR(1) model.

Note: when the red line is horizontal, it has reached the mean of the simulated series.

The key point here is that not all AR(2) models have the same dynamics! For example, if the condition, \begin{equation}\alpha_{1}^{2}+4\alpha_{2} < 0,\end{equation}is satisfied, then the AR(2) model displays pseudo-periodic behaviour and as a result its forecasts will appear as stochastic cycles. On the other hand, if this condition is not satisfied, stochastic cycles will not be present in the forecasts; instead, the forecasts will be more similar to those for an AR(1) model. It's worth noting that the above condition comes from the general solution to the homogeneous form of the linear, autonomous, second-order difference equation (with complex roots). If this is foreign to you, I recommend both Chapter 1 of Hamilton (1994) and Chapter 20 of Hoy et al. (2001). Testing the above condition for the four AR(2) models results in the following:\begin{equation}(1.7)^{2} + 4 (-0.8) = -0.31 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (13)\\(0.9)^{2} + 4 (-0.2) = 0.01 > 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (14)\\(0.5)^{2} + 4 (-0.2) = -0.55 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (15)\\(0.1)^{2} + 4 (-0.7) = -2.79 < 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (16)\end{equation} As expected from the appearance of the plotted forecasts, the condition is satisfied for each of the four models except for model 2. Recall from the graph that model 2's forecasts behave ("normally") similarly to an AR(1) model's forecasts. The forecasts associated with the other models contain cycles.

Application - Modelling Inflation

Now that we have some background under our feet, let's try to interpret an AR(2) model in an application.
Consider the following model for the inflation rate ($\pi_{t}$):\begin{equation}\pi_{t} = C + \alpha_{1} \pi_{t-1} + \alpha_{2} \pi_{t-2} + \nu_{t}.\end{equation}A natural expression to associate with such a model would be something like: "inflation today depends on the level of inflation yesterday and on the level of inflation on the day before yesterday" . Now, I wouldn't argue against such an interpretation, but I'd suggest that some caution be drawn and that we ought to dig a bit deeper to devise a proper interpretation. In this case we could ask, in which way is inflation related to previous levels of inflation? Are there cycles? If so, how many cycles are there? Can we say something about the peak and trough? How quickly do the forecasts converge to the mean? And so on. These are the sorts of questions we can ask when trying to interpret an AR(2) model and as you can see, it's not as straightforward as taking an estimated coefficient and saying "a 1 unit increase in this variable is associated with a so-many unit increase in the dependent variable" - making sure to attach the ceteris paribus condition to that statement, of course. Bear in mind that in our discussion so far, we have only explored a selection of AR(1), MA(1), and AR(2) models. We haven't even looked at the dynamics of mixed ARMA models and ARIMA models involving higher lags. To show how difficult it would be to interpret models that fall into that category, imagine another inflation model - an ARMA(3,1) with $\alpha_{2}$ constrained to zero:\begin{equation}\pi_{t} = C + \alpha_{1} \pi_{t-1} + \alpha_{3} \pi_{t-3} + \theta_{1}\nu_{t-1} + \nu_{t}.\end{equation} Say what you'd like, but here it's better to try to understand the dynamics of the system itself. As before, we can look and see what sort of forecasts the model produces, but the alternative approach that I mentioned at the beginning of this answer was to look at the impulse response function or time path associated with the system. 
This brings me to the next part of my answer, where we'll discuss impulse response functions.

Impulse Response Functions

Those who are familiar with vector autoregressions (VARs) will be aware that one usually tries to understand an estimated VAR model by interpreting the impulse response functions, rather than trying to interpret the estimated coefficients, which are often too difficult to interpret anyway. The same approach can be taken when trying to understand ARIMA models. That is, rather than try to make sense of (complicated) statements like "today's inflation depends on yesterday's inflation and on inflation from two months ago, but not on last week's inflation!", we instead plot the impulse response function and try to make sense of that.

Application - Four Macro Variables

For this example (based on Leamer (2010)), let's consider four ARIMA models based on four macroeconomic variables: GDP growth, inflation, the unemployment rate, and the short-term interest rate. The four models have been estimated and can be written as:\begin{eqnarray}Y_{t} &=& 3.20 + 0.22 Y_{t-1} + 0.15 Y_{t-2} + \nu_{t}\\\pi_{t} &=& 4.10 + 0.46 \pi_{t-1} + 0.31\pi_{t-2} + 0.16\pi_{t-3} + 0.01\pi_{t-4} + \nu_{t}\\u_{t} &=& 6.2+ 1.58 u_{t-1} - 0.64 u_{t-2} + \nu_{t}\\r_{t} &=& 6.0 + 1.18 r_{t-1} - 0.23 r_{t-2} + \nu_{t}\end{eqnarray}where $Y_{t}$ denotes GDP growth at time $t$, $\pi$ denotes inflation, $u$ denotes the unemployment rate, and $r$ denotes the short-term interest rate (3-month treasury). The equations show that GDP growth, the unemployment rate, and the short-term interest rate are modeled as AR(2) processes, while inflation is modeled as an AR(4) process. Rather than try to interpret the coefficients in each equation, let's plot the impulse response functions (IRFs) and interpret them instead. The graph below shows the impulse response functions associated with each of these models.
Don't take this as a masterclass in interpreting IRFs - think of it more like a basic introduction - but anyway, to help us interpret the IRFs we'll need to acquaint ourselves with two concepts: momentum and persistence. These two concepts are defined in Leamer (2010) as follows:

Momentum: Momentum is the tendency to continue moving in the same direction. The momentum effect can offset the force of regression (convergence) toward the mean and can allow a variable to move away from its historical mean, for some time, but not indefinitely.

Persistence: A persistent variable will hang around where it is and converge only slowly to the historical mean.

Equipped with this knowledge, we now ask the question: suppose a variable is at its historical mean and it receives a temporary one-unit shock in a single period; how will the variable respond in future periods? This is akin to asking the questions we asked before, such as: do the forecasts contain cycles? How quickly do the forecasts converge to the mean? And so on.

At last, we can now attempt to interpret the IRFs. Following a one-unit shock, the unemployment rate and the short-term interest rate (3-month treasury) are carried further from their historical mean. This is the momentum effect. The IRFs also show that the unemployment rate overshoots to a greater extent than does the short-term interest rate. We also see that all of the variables return to their historical means (none of them "blow up"), although they each do this at different rates. For example, GDP growth returns to its historical mean after about 6 periods following a shock, the unemployment rate returns to its historical mean after about 18 periods, but inflation and the short-term interest rate take longer than 20 periods to return to their historical means. In this sense, GDP growth is the least persistent of the four variables, while inflation can be said to be highly persistent.
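Since the R code behind these plots isn't reproduced here, the claims are easy to check with a rough stand-in (a Python sketch of my own, not the answer's code) that computes the impulse responses directly from the estimated AR coefficients. It reproduces the momentum (initial movement away from the mean) of unemployment and the interest rate, the fast mean-reversion of GDP growth, and the high persistence of inflation:

```python
def impulse_response(ar_coeffs, horizon):
    # IRF of an AR(p) process: the response of y to a one-unit shock at t=0.
    # irf[0] = 1 and irf[t] = sum_i ar_coeffs[i] * irf[t - 1 - i].
    irf = [1.0]
    for t in range(1, horizon + 1):
        irf.append(sum(a * irf[t - 1 - i]
                       for i, a in enumerate(ar_coeffs)
                       if t - 1 - i >= 0))
    return irf

# Estimated AR coefficients from the four equations above.
models = {
    "gdp_growth":   [0.22, 0.15],
    "inflation":    [0.46, 0.31, 0.16, 0.01],
    "unemployment": [1.58, -0.64],
    "interest":     [1.18, -0.23],
}

irfs = {name: impulse_response(coeffs, 40) for name, coeffs in models.items()}

for name, irf in irfs.items():
    print(name, [round(v, 3) for v in irf[:6]])
```

Momentum shows up as irf[1] > 1 (the variable moves further from its mean before turning around), while persistence shows up as a slowly decaying response.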
I think it's a fair conclusion to say that we've managed (at least partially) to make sense of what the four ARIMA models are telling us about each of the four macro variables.

Conclusion

Rather than try to interpret the estimated coefficients in ARIMA models (difficult for many models), try instead to understand the dynamics of the system. We can attempt this by exploring the forecasts produced by our model and by plotting the impulse response function. [I'm happy enough to share my R code if anyone wants it.]

References

Hamilton, J. D. (1994). Time Series Analysis (Vol. 2). Princeton: Princeton University Press.

Leamer, E. (2010). Macroeconomic Patterns and Stories - A Guide for MBAs, Springer.

Stengos, T., M. Hoy, J. Livernois, C. McKenna and R. Rees (2001). Mathematics for Economics, 2nd edition, MIT Press: Cambridge, MA.
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/40905', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/16149/']}
jdg_371911
stackexchange
llm_judgeable_groundtruth_similarity
351689
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the most common and wise way to specify many entities' ids in an API request? For a singular entity request I have: GET /v1/entities/{entity_id} My suggestions for requesting many entities: GET /v1/entities?entity_ids={entity_id1},{entity_id2} With the same variable name entity_id: GET /v1/entities?entity_id={entity_id1},{entity_id2} More looks like for singular endpoint: GET /v1/entities/{entity_id1},{entity_id2} Or is there any other way to design for such endpoints? Now provide the response and nothing else.
I'm afraid the standard thing to do is not to search multiple ids like this using REST. If you're loading several specific entities by id, then you should ask yourself why you're doing so. Perhaps these entities are customer orders belonging to a specific customer and you're loading the details of each order. In that case, you should have a URL more like /customer/{id}/orders which returns all orders belonging to the customer with id {id}, rather than knowing the ids beforehand and loading each one individually. Otherwise, if the user is attempting to load several entities from a list to view their details, simplify the display to show one entity's details at a time, and load each one as the user moves forwards or backwards. In other words, you shouldn't find yourself in a situation where you specifically need multiple entities loaded by their ids, with no obvious link between them, such that you'd need a way to call the REST service with each individual id listed.
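As a toy illustration of the recommended sub-resource pattern - an in-memory sketch of my own, not a real REST framework - the handler behind /customer/{id}/orders needs only the parent id, never a list of order ids:

```python
# Toy in-memory store illustrating the sub-resource pattern:
# GET /customer/{id}/orders instead of GET /entities?ids=1,2,3.
ORDERS = [
    {"id": 1, "customer_id": 7, "total": 10.0},
    {"id": 2, "customer_id": 7, "total": 5.5},
    {"id": 3, "customer_id": 9, "total": 99.0},
]

def get_customer_orders(customer_id):
    # One call returns every order linked to the customer; the caller
    # never needs to know the individual order ids up front.
    return [o for o in ORDERS if o["customer_id"] == customer_id]

print(get_customer_orders(7))
```

The link between the entities (here, the customer) lives on the server side, so the client never has to assemble an id list.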
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/351689', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/262132/']}
jdg_371912
stackexchange
llm_judgeable_groundtruth_similarity
209159
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Through a copy of some code from an answer on this site I discovered the existence of: \[FormalN] The documentation describes it as follows: "Used to represent a formal parameter that will never be assigned a value." Can you give an example of why, how and where this should be applied? Now provide the response and nothing else.
A simple example of the use of formal symbols is for mathematical tasks that require the use of dummy variables. Most functions like Plot will localize their dummy variables, but not all. For example, LinearModelFit will not work with symbols that already have a value assigned to them:

```mathematica
x = 1;
LinearModelFit[RandomReal[1, {10, 2}], {1, x}, x]  (* spits out errors *)
```

This is where the formal symbols come in handy. Because they can never have a value assigned to them (under normal circumstances), they are ideal for this sort of thing:

```mathematica
LinearModelFit[RandomReal[1, {10, 2}], {1, \[FormalX]}, \[FormalX]]
```

The code above will always run because you can be sure that \[FormalX] is not going to have a value.

edit

As pointed out in a comment, you can just Clear[x] before using a function like LinearModelFit if you're certain that this will not be a problem elsewhere. However, you can come up with very specific situations where things can still go wrong with that. For example, if you put a slider like Slider[Dynamic[x], {0, 1}] in your notebook, manipulating that slider can affect running computations in the kernel, because dynamic elements interrupt the kernel when they need to be updated. For example, try this:

```mathematica
Slider[Dynamic[x]]
Do[Print[x]; Pause[1], {10}]
```

You'll see that the printed values of x change when you manipulate the slider. You can imagine that this could happen during a computation that takes several seconds to complete. It's not very likely to happen, but if you use \[FormalX] for your fit you'll know for certain that it won't.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/209159', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/156/']}
jdg_371913
stackexchange
llm_judgeable_groundtruth_similarity
4111030
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I saw this result and tried to proof, but I probably made a mistake throughout the computation and I cannot spot where it is. Can someone please help me? $$S=\sum_{n=1}^{\infty}\frac{n \zeta(n+1)}{2}=\frac{\pi^2}{4}$$ I started by the integral representation $$\frac{1}{k^{n+1}}=\frac{1}{\Gamma(n+1)}\int_{0}^{\infty}e^{-ku}u^n du $$ $$S=\frac{1}{2}\sum_{n=1}^{\infty}n\sum_{k=1}^{\infty}\frac{1}{\Gamma(n+1)}\int_{0}^{\infty}e^{-ku}u^n du $$ Supposing I can swap integration and summation $$S=\frac{1}{2}\sum_{k=1}^{\infty}\int_{0}^{\infty}e^{-ku}\sum_{n=1}^{\infty}\frac{n}{n!}u^n du\\=\frac{1}{2}\sum_{k=1}^{\infty}\int_{0}^{\infty}e^{-ku}u\sum_{n=1}^{\infty}\frac{u^{n-1}}{(n-1)!} du\\=\frac{1}{2}\sum_{k=1}^{\infty}\int_{0}^{\infty}ue^{-ku}e^udu\\=\frac{1}{2}\sum_{k=1}^{\infty}\int_{0}^{\infty}ue^{-(k-1)u}du\\=\frac{1}{2}\sum_{k=1}^{\infty}\frac{1}{(k-1)^2}\\$$ which clearly is not the right answer! Now provide the response and nothing else.
As it stands, the series does not converge (note K. defaoite's observation), since $\zeta(n)\to1$ as $n\to\infty$ and consequently $n\zeta(n+1)\to\infty$. If, however, we use $\zeta(n+1)-1$ instead, we obtain a convergent series (found by Mason). Then there is also an easier way than using integrals: $$\sum_{n\ge1} n(\zeta(n+1)-1)=\sum_{n\ge1}\sum_{k\ge2}\frac n{k^{n+1}}=\sum_{k\ge2}\frac1{(k-1)^2}=\frac{\pi^2}6$$ Your computations are fine. Assuming convergence (which is not given), the same argument as above applies and gives you the same final series as you obtained via integration.
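The identity above is easy to verify numerically; here is a quick check of my own, truncating both sums (the truncation bounds K and the cutoff at n = 60 are arbitrary choices, sufficient for a few digits of accuracy):

```python
import math

# Verify numerically that sum_{n>=1} n*(zeta(n+1)-1) = sum_{k>=2} 1/(k-1)^2
# = pi^2/6, truncating k at K and n at 60 (terms decay like n / 2^(n+1)).
K = 2000

# Right-hand side: sum over k of 1/(k-1)^2.
rhs = sum(1.0 / (k - 1) ** 2 for k in range(2, K))

# Left-hand side: sum over n of n * (zeta(n+1) - 1),
# where zeta(n+1) - 1 = sum_{k>=2} k^-(n+1).
lhs = 0.0
for n in range(1, 60):
    tail = sum(k ** -(n + 1) for k in range(2, K))
    lhs += n * tail

print(rhs, lhs, math.pi ** 2 / 6)
```

Both truncated sums agree with $\pi^2/6 \approx 1.6449$ to roughly three decimal places.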
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4111030', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/238708/']}
jdg_371914
stackexchange
llm_judgeable_groundtruth_similarity
20632828
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on the AWS creation of the bucket and distribution using the scripts. I have created a script and running the stack template script in AWS cloud formation console. I am creating the bucketpolicy for S3 bucket using the script and canonical ID. Once I create a bucket policy I want to assign it to the "OriginAccessIdentity" dynamically in the script. I want to add the id generated from the bucket policy to "OriginAccessIdentity" attributes. How to achieve this functionality? Script : { "AWSTemplateFormatVersion" : "2010-09-09", "Description" : "AWS CloudFormation Template S3_With_CloudFront_Distribution", "Parameters" : { "bucketname" : { "Type" : "String", "Description" : "test" }, "cannonicalid" : { "Type" : "String", "Description" : "234213523145314534523452345234523452345" } }, "Conditions" : { "CreateProdResources" : {"Fn::Equals" : [{"Ref" : "EnvType"}, "dev"]} }, "Resources" : { "testbucket" : { "Type" : "AWS::S3::Bucket", "Properties" : { "BucketName" : { "Ref" : "bucketname" }, "WebsiteConfiguration" : { "IndexDocument" : "index.html" } } }, "mybucketpolicy" : { "Type" : "AWS::S3::BucketPolicy", "Properties" : { "PolicyDocument" : { "Id" : "MyPolicy", "Statement" : [ { "Sid" : "Grant a CloudFront Origin Identity access to support private content", "Action" : [ "s3:GetObject" ], "Effect" : "Allow", "Resource" : { "Fn::Join" : [ "", [ "arn:aws:s3:::", { "Ref" : "testbucket" } , "/*" ] ] }, "Principal" : { "CanonicalUser":{ "Ref" : "cannonicalid" } } } ] }, "Bucket" : { "Ref" : "testbucket" } } }, "testdistribution" : { "Type" : "AWS::CloudFront::Distribution", "Properties" : { "DistributionConfig" : { "Origins" : [ { "Id" : "S3Origin", "DomainName" : { "Fn::GetAtt" : [ "testbucket", "DomainName" ] }, "S3OriginConfig" : { "OriginAccessIdentity" : "How to configure the id dynamically here" } } ], "Enabled" : 
"true", "Comment" : "", "DefaultRootObject" : "index.html", "Aliases" : [ "test.com" ], "CacheBehaviors" : [ { "TargetOriginId" : "S3Origin", "ForwardedValues" : { "QueryString" : "false" }, "ViewerProtocolPolicy" : "allow-all", "MinTTL" : "1", "PathPattern" : "resources/*.json" } ], "DefaultCacheBehavior" : { "TargetOriginId" : "S3Origin", "ForwardedValues" : { "QueryString" : "false" }, "ViewerProtocolPolicy" : "allow-all", "MinTTL" : "1" } } } } }, "Outputs" : { "DistributionId" : { "Description" : "CloudFront Distribution Id", "Value" : { "Ref" : "testdistribution" } }, "DistributionName" : { "Description" : "URL to access the CloudFront distribution", "Value" : { "Fn::Join" : [ "", ["http://", {"Fn::GetAtt" : ["testdistribution", "DomainName"]} ]]} }, "S3OriginDNSName" : { "Description" : "Name of S3 bucket to hold website content.", "Value" : { "Fn::GetAtt" : [ "testbucket", "DomainName"] } } }} Now provide the response and nothing else.
Since November 2, 2017, CloudFormation supports this using the AWS::CloudFront::CloudFrontOriginAccessIdentity resource. With an origin access identity resource defined as:

```json
"OriginAccessId": {
  "Type": "AWS::CloudFront::CloudFrontOriginAccessIdentity",
  "Properties": {
    "CloudFrontOriginAccessIdentityConfig": {
      "Comment": "MyDescription"
    }
  }
}
```

you can reference it in the distribution config with:

```json
"OriginAccessIdentity": {
  "Fn::Sub": "origin-access-identity/cloudfront/${OriginAccessId}"
}
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20632828', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3110829/']}
jdg_371915
stackexchange
llm_judgeable_groundtruth_similarity
27867307
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Could somebody explain how come this is only returning 1 result (there should be 4). Its only returning the most recent post title, where I would like to be getting all the post titles in the category (ID: 121) which in this case is four. <script type="text/javascript"> var posturl = "http://www.tropical420.com/api/get_posts/?posts_per_page=-1"; $.ajax({ type: 'GET', url: posturl, complete: function(){ }, success: function (data) { var response = data; //JSON.parse(data); //loop through posts for(var i = 0; i != response.posts.length; i++) { //get each element in the array var post = response.posts[i]; // post vars var postTitle = post.title; var postContent = post.content; var postCategory = post.categories[i].id; // output stuff so we can see things if (postCategory == '121') { $("#post").append(postTitle + "<br />").trigger('create'); } } }, error:function (xhr, ajaxOptions, thrownError) { alert("Error"); } }); </script><div id="post"></div> Now provide the response and nothing else.
Add some kind of identifier for the item you don't want to trigger the modal, a class like no-modal for example, then in your jQuery add code for the modal's show.bs.modal event. Capture relatedTarget, which is the element that triggered the event, then see if it has the class you're looking for. If it does, run e.stopPropagation().

BOOTPLY

jQuery:

```javascript
$('#myModal').on('show.bs.modal', function (e) {
  var button = e.relatedTarget;
  if ($(button).hasClass('no-modal')) {
    e.stopPropagation();
  }
});
```

HTML:

```html
<button type="button" class="btn btn-primary btn-lg" data-toggle="modal" data-target="#myModal">
  Launch demo modal
</button>
<button type="button" class="btn btn-primary btn-lg no-modal" data-toggle="modal" data-target="#myModal">
  Launch demo modal
</button>
<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button>
        <h4 class="modal-title" id="myModalLabel">Modal Header</h4>
      </div>
      <div class="modal-body">
        ...
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
        <button type="button" class="btn btn-primary">Save changes</button>
      </div>
    </div>
  </div>
</div>
```

As you can see, the second button has a class called no-modal. When clicked, the jQuery checks for the existence of that class, and if it exists it prevents the modal from popping. Clicking the first button causes the modal to pop normally because it doesn't have that class. Bootstrap modals trigger specific events when they're popped by an element on the page, from the moment they're triggered through the moment they've fully popped. You can read about these events to get an idea of what you can use them for by checking the Events section for Bootstrap modals in the official documentation.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27867307', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3304338/']}
jdg_371916
stackexchange
llm_judgeable_groundtruth_similarity
28775057
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone please explain what int ((*foo(int)))(int) in this does? int (*fooptr)(int);int ((*foo(int)))(int); // Can't understand what this does.int main(){ fooptr = foo(0); fooptr(10);} . Now provide the response and nothing else.
int ((*foo(int)))(int);

This declares foo as a function that expects an int argument and returns a pointer to a function that expects an int argument and returns an int. To be more clear:

foo                  -- foo
foo( )               -- is a function
foo(int)             -- taking an int argument
*foo(int)            -- returning a pointer
(*foo(int))( )       -- to a function that
(*foo(int))(int)     -- takes an int argument
int (*foo(int))(int) -- and returns an int

Here is a good explanation for the same.
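In other words, foo maps an int to a "function from int to int". As a rough analogy only - Python closures rather than C function pointers, and the body returning n + m is invented purely for illustration, since the C snippet never defines foo's behavior:

```python
# Rough analogy for: int (*foo(int))(int);
# foo takes an int and returns a "function from int to int".
def foo(n):
    def inner(m):   # plays the role of the returned function pointer
        return n + m
    return inner

fooptr = foo(0)     # mirrors: fooptr = foo(0);
result = fooptr(10) # mirrors: fooptr(10);
print(result)
```

The C declarator syntax is merely a (notoriously cryptic) way of spelling out this same shape.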
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28775057', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2839891/']}
jdg_371917
stackexchange
llm_judgeable_groundtruth_similarity
39461219
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've started to have a problem today. Everytime I run appcfg.py update . it updates 441 project files. Even if I just modified 1 it will upload 442. running appcfg.py update . twice will upload 441 files every time. This is obviusly a problem because instead uploading a couple of files it is uploading a lot of them and taking a lot of time. This is what I have tried: Updated app engine launcher to it latest version Rebooted removed all files from the folder, download them again from git nothing seems to work. I even left only app.yaml and index.php and it uploaded both files every time, even if I do not change anything. Ideas? thanks Now provide the response and nothing else.
For me, the following 2 changes worked:

1. Add an absolute path for pythonPath
2. Use the "--noreload" option while starting the project

Here's the relevant part of my config:

```json
{
    "name": "Django",
    "type": "python",
    "request": "launch",
    "stopOnEntry": true,
    "pythonPath": "/Users/xyz/Documents/dev/my_project/my_project_env/bin/python",
    "program": "${workspaceRoot}/manage.py",
    "args": [
        "runserver",
        "0.0.0.0:8080",
        "--noreload"
    ],
    "debugOptions": [
        "WaitOnAbnormalExit",
        "WaitOnNormalExit",
        "RedirectOutput",
        "DjangoDebugging"
    ]
},
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39461219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1869656/']}
jdg_371918
stackexchange
llm_judgeable_groundtruth_similarity
2216441
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I could use some help on calculating this infinite sum: $\sum_{k=0}^{\infty}\frac{1}{1-16k^2}$. Included was that I had to start with a Fourier series for the function $f:\Re \to \Re: x \mapsto \sin(x)$ for $x\in[0, \frac{\pi}{2}[$, so let's start with that. Let \begin{eqnarray*}g(x) = \frac{a_0}{2} + \sum_{k=1}^{\infty}a_k\cos(4kx) + \sum_{k=1}^{\infty}b_k\sin(4kx).\end{eqnarray*} This is the Fourier series for $f$.With $a_k = \frac{4}{\pi}\int_{0}^{\frac{\pi}{2}}\sin(x)\cos(4kx)dx$ and $b_k = \frac{4}{\pi}\int_{0}^{\frac{\pi}{2}}\sin(x)\sin(4kx)dx$. Solving this leads to (or at least I found that): $a_k = \frac{4}{(1-16k^2)\pi}$, $b_k = \frac{16k}{(1-16k^2)\pi}$ for $k\geq1$ and $a_0 = \frac{4}{\pi}$. Bringing this to $g(x)$ gives:\begin{eqnarray*}g(x) = \frac{2}{\pi} + \sum_{k=1}^{\infty}\frac{4}{(1-16k^2)\pi}\cos(4kx) + \sum_{k=1}^{\infty}\frac{16k}{(1-16k^2)\pi}\sin(4kx). \end{eqnarray*}Since $f(x) \approx g(x)$, we can say that $f(0) = g(0)$. We get \begin{eqnarray*}\sin(0) = \frac{2}{\pi} + \sum_{k=1}^{\infty}\frac{4}{(1-16k^2)\pi}\cos(0) + \sum_{k=1}^{\infty}\frac{16k}{(1-16k^2)\pi}\sin(0),\end{eqnarray*}this becomes \begin{eqnarray*}0 = \frac{2}{\pi} + \sum_{k=1}^{\infty}\frac{4}{(1-16k^2)\pi}.\end{eqnarray*}We get \begin{eqnarray*}\frac{-2}{\pi} = \frac{4}{\pi}\sum_{k=1}^{\infty}\frac{1}{1-16k^2}\end{eqnarray*} so \begin{eqnarray*}\sum_{k=1}^{\infty}\frac{1}{1-16k^2} = \frac{-1}{2}.\end{eqnarray*} We need the sum from k = 0. The term $\frac{1}{1-16k^2}$ for k = 0 gives 1, so we add 1 to both sides. This leads to my solution \begin{eqnarray*}\sum_{k=0}^{\infty}\frac{1}{1-16k^2} = \frac{1}{2}.\end{eqnarray*} However, when approaching this sum numerically and using Wolfram, I find that the sum should be $\frac{4+\pi}{8}$. Could some help and point out where I went wrong with my approach? 
Thanks in advance Now provide the response and nothing else.
The reason that the approach in the OP is flawed is that the Fourier series for $\sin(x)$ for $x\in [0,\pi/2]$ is discontinuous at the end points. This is the Gibbs phenomenon . In fact, we have $$\sin(x)= \frac2\pi +\frac4\pi\sum_{n=1}^\infty \frac{\cos(4nx)}{1-16n^2}+\frac{16}{\pi}\sum_{n=1}^\infty\frac{n\sin(4nx)}{1-16n^2}$$ pointwise for $x\in (0,\pi/2)$, but the convergence is in $L^2[0,\pi/2]$. We do not have pointwise convergence to $\sin(x)$ at the end points. This is not surprising given $\sin(0)=0\ne \sin(\pi/2)=1$. Since the Fourier series converges at a jump to the average of the one-sided limits, and here $\sin(0)=0$ and $\sin(\pi/2)=1$, we have $$\frac{1+0}{2}=\frac2\pi +\frac4\pi\sum_{n=1}^\infty \frac{1}{1-16n^2} \tag 1$$ Solving $(1)$ for the series of interest yields $$\sum_{n=0}^\infty \frac{1}{1-16n^2} =1+\frac\pi4\left(\frac{1}{2}-\frac{2}{\pi}\right)=\frac{4+\pi}{8}$$ as expected!
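As a sanity check (an addition, not part of the original argument), the closed form can be confirmed numerically from a partial sum; the truncation point below is an arbitrary choice:

```python
import math

def partial_sum(n_terms: int) -> float:
    """Partial sum of sum_{k=0}^{n_terms-1} 1 / (1 - 16 k^2)."""
    return sum(1.0 / (1.0 - 16.0 * k * k) for k in range(n_terms))

closed_form = (4.0 + math.pi) / 8.0  # the claimed value (4 + pi)/8

# The tail of the series is O(1/N), so ~10^6 terms gives roughly 1e-7 accuracy.
approx = partial_sum(1_000_000)
print(approx, closed_form)
assert abs(approx - closed_form) < 1e-6
```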
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2216441', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/402953/']}
jdg_371919
stackexchange
llm_judgeable_groundtruth_similarity
16731071
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm unable to update the contents in combobox2 when I change my selection in combobox1, what am I missing or doing wrong? Imports System.IOPublic Class Form1 Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load 'get sub directory\ toolpalette group names... ComboBox1.DataSource = New DirectoryInfo("C:\VTS\TREADSTONE LT\ATC").GetDirectories() Dim filelocation As String filelocation = ("C:\VTS\TREADSTONE LT\ATC\" & ComboBox1.Text & "\") 'gets file\ paltte names... For Each BackupFiles As String In My.Computer.FileSystem.GetFiles(filelocation, FileIO.SearchOption.SearchTopLevelOnly, "*.*") ComboBox2.Items.Add(IO.Path.GetFileNameWithoutExtension(BackupFiles)) Next 'reloads the combobox contents... ComboBox1.Refresh() End SubEnd Class Now provide the response and nothing else.
There are two options for Spring Security to work.

Use prependId="false" on a JSF form

As <h:form> is a naming container, it prepends the ids of its children with the specified id, or the autogenerated id, so as Spring Security expects ids to remain unchanged, just don't prepend the ids:

<h:form prependId="false">
    <h:outputLabel value="User Id: " for="j_username" />
    <h:inputText id="j_username" label="User Id" required="true" value="#{loginBean.name}" />
    <h:outputLabel value="Password: " for="j_password" />
    <h:inputSecret id="j_password" value="#{loginBean.password}" />
    <h:commandButton value="Submit" action="#{loginBean.login}" />
</h:form>

Note that #{j_spring_security_check} is a wrong action method: it needs to be #{loginBean.login} with the following contents:

public String login() {
    // do any job with the associated values that you've got from the user,
    // like persisting the attempted login, etc.
    FacesContext facesContext = FacesContext.getCurrentInstance();
    ExternalContext externalContext = facesContext.getExternalContext();
    RequestDispatcher dispatcher = ((ServletRequest) externalContext.getRequest())
            .getRequestDispatcher("/j_spring_security_check");
    dispatcher.forward((ServletRequest) externalContext.getRequest(),
            (ServletResponse) externalContext.getResponse());
    facesContext.responseComplete();
    return null;
}

Basically, all you need to do is dispatch to /j_spring_security_check and have j_username and j_password as request parameters.

Use a plain HTML form

Basically, there's no particular need to mess with a JSF form on this issue, in case you don't need to do some extra things apart from authentication, and a plain HTML form is sufficient for Spring Security to do its job.

<form action="/j_spring_security_check" method="POST">
    <label for="j_username">User Id: </label>
    <input id="j_username" name="j_username" type="text" />
    <label for="j_password">Password: </label>
    <input id="j_password" name="j_password" type="password"/>
    <input type="submit" value="Submit"/>
</form>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16731071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/992947/']}
jdg_371920
stackexchange
llm_judgeable_groundtruth_similarity
17232723
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to include multiple class files and .Jar files in class path , i am on linux .. and want to integrate lucene packages downloaded with my code , all libraries and class files are in lucene dirc. ( they are multiple sub-directories inside that folder ) ? HelloLucene.java:1: error: package org.apache.lucene.analysis.standard does not existimport org.apache.lucene.analysis.standard.StandardAnalyzer; ^HelloLucene.java:10: error: package org.apache.lucene.queryparser.classic does not existimport org.apache.lucene.queryparser.classic.ParseException; ^HelloLucene.java:11: error: package org.apache.lucene.queryparser.classic does not existimport org.apache.lucene.queryparser.classic.QueryParser; ^HelloLucene.java:23: error: cannot find symbol public static void main(String[] args) throws IOException, ParseException { ^ symbol: class ParseException location: class HelloLucene HelloLucene.java:26: error: cannot find symbol StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_40); ^ symbol: class StandardAnalyzer location: class HelloLuceneHelloLucene.java:26: error: cannot find symbol StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_40); ^ symbol: class StandardAnalyzer location: class HelloLuceneHelloLucene.java:45: error: cannot find symbol Query q = new QueryParser(Version.LUCENE_40, "title", analyzer).parse(querystr); ^ symbol: class QueryParser location: class HelloLuceneNote: HelloLucene.java uses or overrides a deprecated API.Note: Recompile with -Xlint:deprecation for details.7 errors Now provide the response and nothing else.
Try the TestName rule:

import java.lang.reflect.Method;

import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;

public class TestCaseExample {

    @Rule
    public TestName testName = new TestName();

    @Before
    public void setUp() throws NoSuchMethodException {
        // getMethod looks up public methods, which JUnit test methods must be
        Method m = TestCaseExample.class.getMethod(testName.getMethodName());
        // ...
    }

    // ...
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17232723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1771260/']}
jdg_371921
stackexchange
llm_judgeable_groundtruth_similarity
4151573
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a situation like this: public abstract class BaseClass { public abstract string MyProp { get; }} Now, for some of the derived classes, the properties value is a synthesized values, so there is no setter: public class Derived1 : BaseClass{ public override string MyProp { get { return "no backing store"; } }} This works fine. However, some of the derived class required a more traditional backing store. But, no matter how I write it, as on automatic property, or with an explicit backing store, I get an error: public class Derived2 : BaseClass{ public override string MyProp { get; private set;}}public class Derived3 : BaseClass{ private string myProp; public override string MyProp { get { return myProp;} private set { myProp = value;} }} Derived2.MyProp.set': cannot override because 'BaseClass.MyProp' does not have an overridable set accessor How do I get this to work?? Now provide the response and nothing else.
The best thing you can do is implement the property as virtual instead of abstract . Make the get and set blocks for each throw NotSupportedException in the base class and override the behaviour accordingly in derived classes: public virtual string MyProp { get { throw new NotSupportedException(); } set { throw new NotSupportedException(); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4151573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12725/']}
jdg_371922
stackexchange
llm_judgeable_groundtruth_similarity
17250308
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: 1.I have a onclick event on, $('#locations').click(function(){$('#train').unbind('click'); //do some stuff} 2.Once the close button is clicked $('.close').click(function(){//do some stuff} 3.Then again if I click #train $('#train').bind('click', function() {alert('train is clicked');//do some stuff} Now the problem is #train is not firing.Is it to bind the event again on .close function? Please suggest.Thanks in advance. Now provide the response and nothing else.
Looking at your question, you do not seem to bind the click back after you unbind it, so it will not fire (assuming you've kept the sequence of your functionality right). You'll have to do it this way:

// Set a function with the action you need to do after you click the train
function trainClick() {
    alert('train is clicked');
    // do some stuff
}

When you're unbinding, call unbind with the function name:

$('#locations').click(function(){
    $('#train').unbind('click', trainClick);
    // do some stuff
});

Then, to bind the click (when .close is clicked), you'd use:

$('.close').click(function(){
    $('#train').bind('click', trainClick);
    // do some stuff
});

NOTE: A better way would be to use on and off if you are using jQuery v1.7 or later. In the code above, just replace bind with on and unbind with off.

$('#train').on('click', trainClick);
$('#train').off('click', trainClick);

Hope this helps!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17250308', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2455424/']}
jdg_371923
stackexchange
llm_judgeable_groundtruth_similarity
23817
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $W^{m,p}(\Omega) = \{ f \in L^p(\Omega): D^\alpha f \in L^p(\Omega) \text{ for multi-indices } |\alpha| \leq m\}$, where $D$ denotes the weak derivative. Let $W_0^{m,p}$ denote the closure of $C_c^\infty(\Omega)$ in $W^{m,p}(\Omega)$. Why is it true that $W_0^{m,p}(\mathbb{R}^d) = W^{m,p}(\mathbb{R}^d)$, but in general $W_0^{m,p}(\Omega) \subsetneq W^{m,p}(\Omega)$? I am trying to understand why there is a need to consider $W_0^{m,p}(\mathbb{R}^d)$. I'm guessing it's because the elements in $W^{m,p}(\Omega)$ can get really messy, but I don't have very good intuition about both spaces. Now provide the response and nothing else.
The difference between $W^{m,p}$ and $W_0^{m,p}$ is not merely a technical one. The idea of the space $W_0^{1,p}(\Omega)$ is that it consists of those functions in $W^{1,p}(\Omega)$ which take the value zero at the boundary of $\Omega$. For general $W_0^{m,p}$ ($m>1$) the derivatives up to order $m-1$ have to be zero at the boundary as well. Since the boundary of $\mathbb{R}^n$ is empty it makes sense that $W_0^{m,p}(\mathbb{R}^n)=W^{m,p}(\mathbb{R}^n)$. Now to define what it means to restrict a Sobolev function to the boundary is not straightforward, because the boundary has measure zero and $L^p$-functions are in fact equivalence classes of functions, that is, they are only defined up to sets of measure zero. The simplest way out of this is to use the definition you gave (closure of compactly supported smooth functions). It has the advantage that it is easy to work with. The drawback is of course that it is not obvious that the definition really captures the intended notion of zero boundary values.
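For a concrete illustration (an added example, not from the original answer), consider the simplest case $m=1$ on the interval $\Omega=(0,1)$:

```latex
Take $\Omega=(0,1)$ and $f \equiv 1$. Then $f \in W^{1,p}(0,1)$ with weak derivative
$f' = 0$, but $f \notin W_0^{1,p}(0,1)$: for every $\varphi \in C_c^\infty(0,1)$ and
$x \in (0,1)$,
\[
  |\varphi(x)| = \left| \int_0^x \varphi'(t)\,dt \right|
  \le \|\varphi'\|_{L^1(0,1)} \le \|\varphi'\|_{L^p(0,1)},
\]
so either $\|\varphi'\|_{L^p} > \tfrac12$, or $\sup|\varphi| \le \tfrac12$ and then
$\|1-\varphi\|_{L^p} \ge \tfrac12$. In both cases
\[
  \|f-\varphi\|_{W^{1,p}} \ge \max\bigl(\|1-\varphi\|_{L^p},\, \|\varphi'\|_{L^p}\bigr)
  \ge \tfrac12,
\]
so $f$ stays at positive distance from $C_c^\infty(0,1)$ in the $W^{1,p}$ norm:
it fails to ``vanish at the boundary''.
```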
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/23817', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1736/']}
jdg_371924
stackexchange
llm_judgeable_groundtruth_similarity
14679
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Even though most people consider downloading your Linux software from you own distribution's repositories as safe, thinking about the scenario where a repository gets hacked sounds thrilling. What would happen if a repository which hosts widely used applications gets hacked and those applications are replaced (or added) with viruses/trojans/malwares? How fast would that be noticed, and did anything like a "repository hack" happened in the past and lots of machines got affected? Now provide the response and nothing else.
Well, since this almost happened, one of the first things would be an article on LWN ! As you can see from the previous post about kernel.org, occasionally servers used to distribute packages and code are compromised, just like any other. Exactly how much damage can be done really depends on the level of the compromise, but if a signing key passphrase were captured, the attacker could sign any package they liked, passing it off as a package distributed by the distribution. The upshot of this would be that client systems would accept the update and the user would be none the wiser - at least until the distribution realised and shipped updated keys and investigated what had been altered. Unfortunately there isn't much you, the end user, can do to protect against this - the only way you'd know is by analyzing your system for malicious behaviour and (hopefully) finding it. Before you consider this a Linux-only problem - remember, many companies push updates out from servers, or rely on central signing code. The same set of risks apply (and are even worse if no signing is done). Compromises to root CAs have happened , and the consequences are pretty similar - you end up with a connection that appears valid, that you cannot actually trust. All of this comes down to a central trust model having a single point of failure, as opposed to the web of trust model which attempts to decentralize the issue. Web of trust models, however, have their own problems.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/14679', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/9584/']}
jdg_371925
stackexchange
llm_judgeable_groundtruth_similarity
844471
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please help me in proving the following result: $$\displaystyle \int_0^\infty \tan^{-1}\left(\frac{2ax}{x^2+c^2} \right)\sin(bx) \; dx=\frac{\pi}{b}e^{-b\sqrt{a^2+c^2}}\sinh (ab)$$ I found this integral from here: http://integralsandseries.prophpbb.com/post2652.html?sid=d6641d4d4a3726f1b27bbb4b98ca840a and the solution uses contour integration. I am wondering if there is a way to solve it without using contour integration. I tried differentiating wrt $a$ and $c$ but in both cases, the resulting expression was dirty which made me reluctant to proceed further. I am out of ideas for this one. Any help is appreciated. Thanks! Now provide the response and nothing else.
In order to prove the final result I will need to state a lemma that will be used later. Lemma $1$: $$\int_0^\infty \! \frac{\cos(bx)}{x^2+\alpha} \mathrm{d}x = \frac{\pi e^{-b\sqrt{\alpha}}}{2b\sqrt{\alpha}}\tag{1}$$ Proof here . Consider $$I = \int_0^\infty\!\! \tan^{-1}\left(\frac{2ax}{x^2+c^2} \right)\sin(bx) \; \mathrm{d}x$$ Integrate by parts $$I = \int_0^\infty \!\!\frac{2 a \left(c^2-x^2\right) \cos (b x)}{x^4 +(4 a^2+2 c^2) x^2+c^4} \; \mathrm{d}x$$ Decompose this function by partial fractions $$\frac{2 a \left(c^2-x^2\right) }{x^4 +(4 a^2+2 c^2) x^2+c^4} = \frac{a_-}{x^2+x_0} + \frac{a_+}{x^2+x_1}$$ It so happens that $$x_0 = 2 a^2+2 a\sqrt{a^2+c^2}+c^2,\quad x_1 = 2 a^2-2a \sqrt{a^2+ c^2}+c^2$$ $$a_- =\frac{2a(c^2+x_0)}{x_1-x_0}, \quad a_+ = \frac{2a(c^2+x_1)}{x_0-x_1}$$ Note that both $x_0$ and $x_1$ are greater than $0$. Re-write the integral $$\begin{align} I &= a_-\int_0^{\infty} \!\! \frac{\cos(bx)}{x^2+x_0} \, \mathrm{d}x + a_+\int_0^{\infty} \!\! \frac{\cos(bx)}{x^2+x_1} \, \mathrm{d}x\\[.3cm] &= \frac{2a(c^2+x_0)}{x_1-x_0}\int_0^{\infty} \!\! \frac{\cos(bx)}{x^2+x_0} \, \mathrm{d}x +\frac{2a(c^2+x_1)}{x_0-x_1}\int_0^{\infty} \!\! \frac{\cos(bx)}{x^2+x_1} \, \mathrm{d}x\end{align}$$ Using $(1)$: $$\begin{align} I &= \frac{2a(c^2+x_0)}{x_1-x_0}\cdot\frac{\pi e^{-b\sqrt{x_0}}}{2b\sqrt{x_0}} - \frac{2a(c^2+x_1)}{x_1-x_0}\cdot\frac{\pi e^{-b\sqrt{x_1}}}{2b\sqrt{x_1}}\\[.3cm] &= \left(\frac{a\pi}{b(x_1-x_0)}\right)\left(\frac{(c^2+x_0)e^{-b\sqrt{x_0}}}{\sqrt{x_0}} - \frac{(c^2+x_1)e^{-b\sqrt{x_1}}}{\sqrt{x_1}}\right)\end{align}$$ I will digress here to state (without proof but easily verified) that $$\frac{c^2+x_0}{\sqrt{x_0}}= \frac{c^2+x_1}{\sqrt{x_1}} = \frac{x_0-x_1}{2a}.$$ This allows us a tremendous simplification so that we can write $$\begin{align} I = \left(\frac{\pi}{2b}\right)\left(e^{-b\sqrt{x_1}}-e^{-b\sqrt{x_0}} \right). 
\end{align}$$ It can also be shown that $$\sqrt{x_1} = -a+\sqrt{a^2+c^2}$$ $$\sqrt{x_0} = a+\sqrt{a^2+c^2}$$ Simply square each side to find the desired equality. We can now complete the proof: $$\begin{align} I &= \frac{\pi}{2b}\left(e^{-b\sqrt{x_1}}-e^{-b\sqrt{x_0}} \right) \\[.2cm]&= \frac{\pi}{2b}\left(\exp\left(ab-b\sqrt{a^2+c^2}\right)-\exp\left(-ab-b\sqrt{a^2+c^2}\right) \right) \\[.2cm]&= \frac{\pi}{b}\exp\left(-b\sqrt{a^2+c^2}\right)\frac12\left(\exp(ab)-\exp(-ab)\right) \\[.2cm]&= \dfrac{\pi}{b}\exp\left(-b\sqrt{a^2+c^2}\right)\sinh(ab)\end{align}$$ If you are interested in working through the simplifications that I did not prove, I recommend that you begin by squaring each side after verifying that each side shares the same sign.
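The "easily verified" identities used above can be spot-checked numerically (an added sketch; the values of a and c below are arbitrary sample choices):

```python
import math

a, c = 1.3, 0.7  # arbitrary positive sample values
s = math.sqrt(a * a + c * c)

x0 = 2 * a * a + 2 * a * s + c * c
x1 = 2 * a * a - 2 * a * s + c * c

# sqrt(x0) = a + sqrt(a^2 + c^2) and sqrt(x1) = -a + sqrt(a^2 + c^2)
assert math.isclose(math.sqrt(x0), a + s)
assert math.isclose(math.sqrt(x1), -a + s)

# The common factor driving the simplification equals 2*sqrt(a^2 + c^2)
assert math.isclose((c * c + x0) / math.sqrt(x0), 2 * s)
assert math.isclose((c * c + x1) / math.sqrt(x1), 2 * s)
assert math.isclose((x0 - x1) / (2 * a), 2 * s)

print("all identities hold for a =", a, "c =", c)
```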
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/844471', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/117767/']}
jdg_371926
stackexchange
llm_judgeable_groundtruth_similarity
10542012
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having a little trouble with namespaces and the use statements. I have three files: ShapeInterface.php , Shape.php and Circle.php . I am trying to do this using relative paths so I have put this in all of the classes: namespace Shape; In my circle class I have the following: namespace Shape;//use Shape;//use ShapeInterface;include 'Shape.php';include 'ShapeInterface.php'; class Circle extends Shape implements ShapeInterface{ .... If I use the include statements I get no errors. If I try the use statements I get: Fatal error: Class 'Shape\Shape' not found in /Users/shawn/Documents/work/sites/workspace/shape/Circle.php on line 8 Could someone please give me a little guidance on the issue? Now provide the response and nothing else.
The use operator is for giving aliases to names of classes, interfaces or other namespaces. Most use statements refer to a namespace or class that you'd like to shorten:

use My\Full\Namespace;

is equivalent to:

use My\Full\Namespace as Namespace;
// Namespace\Foo is now shorthand for My\Full\Namespace\Foo

If the use operator is used with a class or interface name, it has the following uses:

// after this, "new DifferentName();" would instantiate a My\Full\Classname
use My\Full\Classname as DifferentName;

// global class - making "new ArrayObject()" and "new \ArrayObject()" equivalent
use ArrayObject;

The use operator is not to be confused with autoloading . A class is autoloaded (negating the need for include ) by registering an autoloader (e.g. with spl_autoload_register ). You might want to read PSR-4 to see a suitable autoloader implementation.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10542012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1076173/']}
jdg_371927
stackexchange
llm_judgeable_groundtruth_similarity
9466293
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: currently i have a form, that generates a Drop-Down-Select from my category-entity: $builder ->add('category', 'entity', array('class' => 'TrackerMembersBundle:Category', 'property' => 'title',)); Now i want to generate checkboxes instead, where i can select more than one option...i searched the symfony2 documentation, but could not find an easy way to do it directly from my Entity.Any idea? Now provide the response and nothing else.
Adding 'multiple' => true to the option array (the last one where class and property are) gives you a multi-select. Then you could override the choice_widget block, using form theming . Something like this might work:

{% block choice_widget %}
{% spaceless %}
    {% for choice, label in choices %}
        <label>
            <input type="checkbox" value="{{ choice }}"{% if _form_is_choice_selected(form, choice) %} checked="checked"{% endif %}>
            {{ label|trans }}
        </label>
    {% endfor %}
{% endspaceless %}
{% endblock choice_widget %}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9466293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1099972/']}
jdg_371928
stackexchange
llm_judgeable_groundtruth_similarity
8939939
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In order to check if a Type ( propertyType ) is nullable, I'm using: bool isNullable = "Nullable`1".Equals(propertyType.Name) Is there some way that avoid using magic strings ? Now provide the response and nothing else.
Absolutely - use Nullable.GetUnderlyingType :

if (Nullable.GetUnderlyingType(propertyType) != null)
{
    // It's nullable
}

Note that this uses the non-generic static class System.Nullable rather than the generic struct Nullable<T> . Also note that that will check whether it represents a specific (closed) nullable value type... it won't work if you use it on a generic type, e.g.

public class Foo<T> where T : struct
{
    public Nullable<T> Bar { get; set; }
}

Type propertyType = typeof(Foo<>).GetProperty("Bar").PropertyType;
// propertyType is an *open* type...
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/8939939', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/566608/']}
jdg_371929
stackexchange
llm_judgeable_groundtruth_similarity
16885577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: All, I have an View model that is bound to a DataGrid using MVVM. <DataGrid ItemsSource="{Binding Path=Resources}">...</DataGrid> Where public ObservableCollection<ResourceViewModel> Resources { get; private set; } in the ResourceViewModel class I have the following properties public string ResourceName{ get { return this.resource.ResourceName; } set { ... }}public ObservableCollection<string> ResourceStringList{ get { return this.resource.ResourceStringList; } set { ... }} All properties are displayed in the DataGrid but the ResourceStringList colletion is being displayed as '(Collection)'. How can I get the DataGrid to display each of the strings contained in the ResourceStringList in its own column? Thanks very much for your time. Edit. I have implemented the suggestion by @Marc below. I now have the following screenshot to illustrate what I now require: The blank column before my resources column index 3 (zero indexed) is not required, how do I remove this column? . I would also like to know how to add column names to my resource columns? Perhaps I can just add a Binding to Header property of the SeedColumn . Again thanks for your time. Now provide the response and nothing else.
A datagrid is usually used to display a list of items of the same type with a fixed set of properties per item, where each column is one property. So each row is one item, each column is one property on the item. Your case is different, as there is no fixed set of properties but a collection you want to show as if it were a fixed set of a number of properties. The way to go greatly depends on whether you only want to display the data or whether you want to allow the user to manipulate the data. While the first can be achieved relatively easily using value converters, the latter requires a little more coding to extend the DataGrid class to allow for this behavior. The solutions I show are two of a thousand possibilities and probably not the most elegant ones. That being said, I will describe both ways and start with the two-way version. TWO-WAY BINDING (ALLOWS EDITING) The sample project (100KB) I created a custom DataGrid and a custom 'DataGridColumn', called 'SeedColumn'. SeedColumn works just like a text column, but has a property CollectionName . The DataGrid will add one new text column per item in the collection you've specified in CollectionName on the right hand side of the seed column. The seed column only works as a kind of placeholder to tell the DataGrid where to insert which columns. You could use multiple SeedColumns in one grid.
The Grid and the column classes:

public class HorizontalGrid : DataGrid
{
    protected override void OnItemsSourceChanged(System.Collections.IEnumerable oldValue, System.Collections.IEnumerable newValue)
    {
        base.OnItemsSourceChanged(oldValue, newValue);

        foreach (var seed in Columns.OfType<SeedColumn>().ToList())
        {
            var seedColumnIndex = Columns.IndexOf(seed) + 1;
            var collectionName = seed.CollectionName;
            var headers = seed.Headers;

            // Check if ItemsSource is IEnumerable<object>
            var data = ItemsSource as IEnumerable<object>;
            if (data == null)
                return;

            // Copy to list to allow for multiple iterations
            var dataList = data.ToList();

            var collections = dataList.Select(d => GetCollection(collectionName, d));
            var maxItems = collections.Max(c => c.Count());

            for (var i = 0; i < maxItems; i++)
            {
                var header = GetHeader(headers, i);
                var columnBinding = new Binding(string.Format("{0}[{1}]", seed.CollectionName, i));
                Columns.Insert(seedColumnIndex + i, new DataGridTextColumn { Binding = columnBinding, Header = header });
            }
        }
    }

    private static string GetHeader(IList<string> headerList, int index)
    {
        var listIndex = index % headerList.Count;
        return headerList[listIndex];
    }

    private static IEnumerable<object> GetCollection(string collectionName, object collectionHolder)
    {
        // Reflect the property which holds the collection
        var propertyInfo = collectionHolder.GetType().GetProperty(collectionName);

        // Get the property value of the property on the collection holder
        var propertyValue = propertyInfo.GetValue(collectionHolder, null);

        // Cast the value
        var collection = propertyValue as IEnumerable<object>;
        return collection;
    }
}

public class SeedColumn : DataGridTextColumn
{
    public static readonly DependencyProperty CollectionNameProperty =
        DependencyProperty.Register("CollectionName", typeof (string), typeof (SeedColumn),
            new PropertyMetadata(default(string)));

    public static readonly DependencyProperty HeadersProperty =
        DependencyProperty.Register("Headers", typeof (List<string>), typeof (SeedColumn),
            new PropertyMetadata(default(List<string>)));

    public List<string> Headers
    {
        get { return (List<string>) GetValue(HeadersProperty); }
        set { SetValue(HeadersProperty, value); }
    }

    public string CollectionName
    {
        get { return (string) GetValue(CollectionNameProperty); }
        set { SetValue(CollectionNameProperty, value); }
    }

    public SeedColumn()
    {
        Headers = new List<string>();
    }
}

The usage:

<Window x:Class="WpfApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:loc="clr-namespace:WpfApplication1"
        xmlns:system="clr-namespace:System;assembly=mscorlib"
        xmlns:sample="clr-namespace:Sample"
        Title="MainWindow" Height="350" Width="525">
    <Grid>
        <sample:HorizontalGrid ItemsSource="{Binding Resources}" AutoGenerateColumns="False">
            <sample:HorizontalGrid.Columns>
                <sample:SeedColumn CollectionName="Strings" Binding="{Binding Name}" Header="Name" Visibility="Collapsed">
                    <sample:SeedColumn.Headers>
                        <system:String>Header1</system:String>
                        <system:String>Header2</system:String>
                        <system:String>Header3</system:String>
                        <system:String>Header4</system:String>
                    </sample:SeedColumn.Headers>
                </sample:SeedColumn>
            </sample:HorizontalGrid.Columns>
        </sample:HorizontalGrid>
    </Grid>
</Window>

and the ViewModels I've used for testing:

public class MainViewModel
{
    public ObservableCollection<ResourceViewModel> Resources { get; private set; }

    public MainViewModel()
    {
        Resources = new ObservableCollection<ResourceViewModel> { new ResourceViewModel(), new ResourceViewModel(), new ResourceViewModel() };
    }
}

public class ResourceViewModel
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }

    public ObservableCollection<string> Strings { get; private set; }

    public ResourceViewModel()
    {
        Name = "Resource";
        Strings = new ObservableCollection<string> { "s1", "s2", "s3" };
    }
}

and the look (old version without headers): ADDENDUM: Regarding the new questions and your comment: The
NullReferenceException can have several reasons, but you've obviously solved it. However, the line where it occurred is a bit of spaghetti code and I wouldn't do it like this in production code. You need to handle the things that can go wrong in any case... I've modified the code and refactored the line into its own method. This will give you an idea of what's going on when the exception is thrown. The empty column that you see is the seed column, which is obviously not bound to anything. My idea was to use this column as a kind of row header and bind it to the Name of the resource. If you don't need the SeedColumn at all, just set its Visibility to collapsed:

<loc:SeedColumn CollectionName="Strings" Visibility="Collapsed" />

Adding column headers is not difficult, but you need to think about where you want to take them from. As you store all your strings in a list, they are just strings, so not related to a second string which you could use as a header. I've implemented a way to specify the columns purely in XAML, which might be enough for you for now. You can use it like this:

<loc:HorizontalGrid ItemsSource="{Binding Resources}" AutoGenerateColumns="False">
    <loc:HorizontalGrid.Columns>
        <loc:SeedColumn CollectionName="Strings" Binding="{Binding Name}" Header="Name" Visibility="Collapsed">
            <loc:SeedColumn.Headers>
                <system:String>Header1</system:String>
                <system:String>Header2</system:String>
                <system:String>Header3</system:String>
                <system:String>Header4</system:String>
            </loc:SeedColumn.Headers>
        </loc:SeedColumn>
    </loc:HorizontalGrid.Columns>
</loc:HorizontalGrid>

If you have more elements in the collection than headers specified, the column headers will be repeated "Header3", "Header4", "Header1",.. The implementation is straightforward. Note that the Headers property of the seed column is bindable as well; you can bind it to any List.
ONE-WAY BINDING (NO EDITING OF THE DATA) A straightforward way is to implement a converter which formats your data in a table and returns a view on this table to which the DataGrid can be bound. The disadvantage: It does not allow editing the strings, because once the table is created from the original data source, no logical connection between the displayed data and the original data exists. Still, changes on the collection are reflected in the UI, as WPF performs the conversion every time the data source changes. In short: This solution is perfectly fine if you only want to display the data. How does it work: 1) Create a custom value converter class which implements IValueConverter; 2) Create an instance of this class in your XAML resources and give it a name; 3) Bind the grid's ItemsSource with this converter. This is how it would look (my IDE is StackOverflow, so please check and correct, if necessary; ToList requires a using System.Linq;): public class ResourceConverter : IValueConverter{ public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { var resources = value as IEnumerable<ResourceViewModel>; if (resources == null) return null; var resourceList = resources.ToList(); // Better play safe and search for the max count of all items var columns = resourceList[0].ResourceStringList.Count; var t = new DataTable(); t.Columns.Add(new DataColumn("ResourceName")); for (var c = 0; c < columns; c++) { // Will create headers "0", "1", "2", etc. for the strings t.Columns.Add(new DataColumn(c.ToString())); } foreach (var r in resourceList) { var newRow = t.NewRow(); newRow[0] = r.ResourceName; for (var c = 0; c < columns; c++) { newRow[c + 1] = r.ResourceStringList[c]; } t.Rows.Add(newRow); } return t.DefaultView; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); }} Then define a resource in your XAML like this, where loc is your namespace: <loc:ResourceConverter x:Key="Converter" /> and then use it like this: <DataGrid ItemsSource="{Binding Resources, Converter={StaticResource Converter}}" />
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16885577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/626442/']}
jdg_371930
stackexchange
llm_judgeable_groundtruth_similarity
51895
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Basic question here, but it's driving me insane! The solder does not seem to stick to the tip of my soldering iron. After the iron heats up, the tip appears black (sometimes on all sides and sometimes on one side), then either it doesn't melt the solder at all, or, if I fiddle around with it for a while, it melts the solder but the solder rolls into itself into a ball and doesn't stick to the tip. I've seen this many times while soldering before and usually it's solved by cleaning the tip on a soldering sponge or cooling it down and scraping the tip off with a blade, but I have scraped this particular one many times and still it's the same problem: it heats up and gets black, then the solder doesn't stick to it. It doesn't help that this soldering iron has a very fine tip and I have to use it to solder very tiny chips and wires under a microscope (I'm just starting to learn soldering under microscopes). Any ideas on what I can do? PS: Is scraping the tip a bad idea? I have been doing it with my other bigger soldering irons and it seemed to work (at least in most cases) but I am not sure whether it is a good solution or not. Now provide the response and nothing else.
First question: How long have you had the tip? You could probably easily and cheaply replace it. If the iron doesn't have replaceable tips, get rid of it! If you've not had the tip long, then a few pointers to keep in mind: When the iron is not being used, keep some solder on the tip. This prevents oxidation and corrosion. When using the iron, keep a little solder on the tip when the iron is in the stand. Wipe clean before using. Avoid excessive wiping on a wet sponge as this can cause temperature fluctuations which expand and contract the metal, and stress the tip. Consider using brass shavings instead of a wet sponge. Never use sandpaper or abrasives to clean the tip. Also avoid dipping it into flux to clean. (Except for re-tinning, which should be infrequent.) Use a good quality solder. Old solder can have contaminants which leave behind residue and basically make soldering more difficult. (By "old" I mean solder that's been around collecting dust, not necessarily that it has a short shelf life.) Finally, if you've got a cheap iron, it may run too hot or too cold and not melt solder at the correct temperature. Consider getting a temperature-controlled unit. I could recommend a Hakko FX-888D for ~$100 US; see also this tutorial. Some additional soldering tip pointers: http://www.inlandcraft.com/uguides/tipcare.htm
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/51895', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/17138/']}
jdg_371931
stackexchange
llm_judgeable_groundtruth_similarity
19974909
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using XSLT files to transform XML to XML. What are valid representations of a space? <xsl:text> </xsl:text><xsl:text>&nbsp;</xsl:text><xsl:text>&#160;</xsl:text> Now provide the response and nothing else.
XML does not have any named entities besides &lt; , &gt; , &amp; , &quot; , and &apos; . All other characters can be represented verbatim, given that you declared the right encoding in the XML declaration (e.g. <?xml version="1.0" encoding="..." ?> ), and actually saved the XML file in that encoding. Declaring UTF-8 is optional, as this is the default for XML. The "right" encoding is any encoding that contains all the characters you want to use. Choosing Unicode is both popular and practical, but XML does not care as long as you've declared it properly. Any character the chosen character set supports you can use as-is, with the exception of those that have special meaning in XML ( < , > , or & , which must always be escaped, and ' , or " , which must only be escaped in certain situations). All other characters can be escaped, but you don't need to. To make a point, these representations are 100% equivalent in terms of the resulting document (i.e. the object you get after an XML parser has read the file): <foo>Test Test</foo> <!-- unescaped - given that the " " really is char code 160 --><foo>Test&#160;Test</foo> <!-- partially escaped --><foo>&#84;&#101;&#115;&#116;&#160;&#84;&#101;&#115;&#116;</foo> <!-- decimal escaped --><foo>&#x54;&#x65;&#x73;&#x74;&#xa0;&#x54;&#x65;&#x73;&#x74;</foo> <!-- hex escaped --> The non-breaking space is in no way special or different from, say, the letter "T". For convenience when editing the XML file with a text editor, you might want to choose the escaped form, but there is no technical requirement to do that. Note that you can declare custom named entities (like &nbsp; ) using a DOCTYPE. <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#160;">]> But given that XML accepts any character directly, that's hardly ever necessary. Especially not when you create the document using a proper tool, like a DOM API.
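If you want to double-check the equivalence claim above, one quick way (sketched here with Python's standard-library ElementTree parser, purely as an illustration - any conforming XML parser will do) is to parse the different spellings and compare the resulting text:

```python
import xml.etree.ElementTree as ET

# Three spellings of the same document: a literal NBSP character,
# a decimal character reference, and hex character references.
literal = ET.fromstring('<foo>Test\u00a0Test</foo>')
decimal = ET.fromstring('<foo>Test&#160;Test</foo>')
hexadecimal = ET.fromstring('<foo>&#x54;est&#xa0;Test</foo>')

print(literal.text == decimal.text == hexadecimal.text)  # True
```

(Note that a custom named entity like &nbsp; needs the DOCTYPE declaration; a parser fed the bare entity without it will report an undefined-entity error.)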
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19974909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159057/']}
jdg_371932
stackexchange
llm_judgeable_groundtruth_similarity
1411
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've installed Arch Linux on my SD Card with Win32DiskImager. If I shut down the RPi, remove the card, insert it, and start the RPi again, everything works fine. But if I do a full system update in pacman with pacman -Syu , there is a problem. If I shut down and start the RPi up again, no problem, but if I shut down, remove the card, insert it, then start the RPi, it can't boot ever again, just waiting at the rainbow start screen forever. I also don't need to remove the SD card; it is enough to stop the power supply for 30 seconds (until the capacitors completely discharge) and start the RPi, and the same error occurs. I tried to disable the raspberrypi-firmware package update by adding IgnorePkg = raspberrypi-firmware in the /etc/pacman.conf file, and then do the full system update, and then I remove and insert the sdcard, and then I don't see the rainbow screen again, but get this error message: [ 20.217557] Kernel panic - not syncing : VFS: Unable to mount root fs on unknown-block(179,2)PANIC: VFS: Unable to mount root fs on unknown-block(179,2)Entering kdb (current=0xcd828ca0, pid 1) due to Keyboard Entrykdb> _ This error also occurs if I only update the linux-raspberrypi package, and then I only reboot and don't remove the sdcard, and I get the same kernel-panic error message. I have a Samsung SDHC 16GB Class10 (MB-MPAGA aka MB-MPAGAEU) card. I also tried with a Kingmax SDHC 16GB Class10 card, and with a Kingmax SDHC 8GB Class6 card; neither worked. If I ignore the raspberrypi-firmware and the linux-raspberrypi packages in pacman, and then do the system update, no error occurs even if I remove the sdcard. So it must be some problem in these packages. Now provide the response and nothing else.
I'm posting this as an answer as there is not enough space in comments. So from all the information gathered so far it seems that the problem is only related to the /boot/ partition contents. The problem can be caused by two things: 1) /boot/ filesystem corruption, which makes it impossible for the bootloader to load firmware files; 2) a new version of the firmware has some regression that prevents your SD card from working. You have to check which one of these is true. One way of doing this would be to manually update files in /boot/ on your PC. To do this you will have to first ensure that your system is not booting directly to a graphical system (because you won't have modules working and this would make it impossible to use keyboard/mouse in X). Then you have to connect your SD card to your PC, back up its content, go to the github page for firmware files , enter the boot directory and download the following files (overriding existing ones) to your /boot/ partition - bootcode.bin, kernel.img, start.elf, loader.bin. You won't need to override any other files. To download each file you have to click on its name, then click "view raw" and save this on disc. After you save all files, ensure that you safely ejected your SD card and check if it boots. This way you can check if the newest firmware files (kernel and bootloader) are able to boot from your SD card. If so, then we can be sure that your problem is caused by /boot/ partition corruption, not a kernel/bootloader regression. As mentioned earlier, you should also check how many green led flashes you can see when you see the rainbow screen. Some time ago troubleshooting code was added to the bootloader and it will blink the green led several times to show what went wrong. Here's the list: 3 flashes: loader.bin not found 4 flashes: loader.bin not launched 5 flashes: start.elf not found 6 flashes: start.elf not launched If you see no flashes at all then either your firmware is too old to support this or not even bootcode.bin was loaded.
You can also check that the boot partition is not corrupted by checking that all the files needed for boot (mentioned earlier) are sane (not zero-sized, present, etc.). You can also narrow down which file on the boot partition causes the problem by restoring only some of them. For example, only restore kernel.img, or only start.elf + loader.bin + bootcode.bin. This can tell you whether it's a firmware or a kernel problem.
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/1411', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/949/']}
jdg_371933
stackexchange
llm_judgeable_groundtruth_similarity
27749541
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I'm trying to create a simple program that just changes the picture in a picture box when it's clicked. I'm using just two pictures at the moment, so my code for the picture box click event function looks like this: private void pictureBox1_Click(object sender, EventArgs e) { if (pictureBox1.Image == Labirint.Properties.Resources.first) pictureBox1.Image = Labirint.Properties.Resources.reitmi; else if (pictureBox1.Image == Labirint.Properties.Resources.reitmi) pictureBox1.Image = Labirint.Properties.Resources.first; } For some reason the if statement is not working and the picture doesn't change. What should I do? Note: the original code contained a bug where the second if undid the effect of the first; the condition would work with the fix suggested by Cyral's answer , but adding else did not fix the issue - stepping through the code with else still shows no matches for either image. if (pictureBox1.Image == Labirint.Properties.Resources.first) pictureBox1.Image = Labirint.Properties.Resources.reitmi;if (pictureBox1.Image == Labirint.Properties.Resources.reitmi) // was missing else pictureBox1.Image = Labirint.Properties.Resources.first; Now provide the response and nothing else.
if (pictureBox1.Image == Labirint.Properties.Resources.first) There's a trap here that not enough .NET programmers are aware of. Responsible for a lot of programs that run with bloated memory footprints. Using the Labirint.Properties.Resources.xxxx property creates a new image object, it will never match any other image. You need to use the property only once, store the images in a field of your class. Roughly: private Image first; private Image reitmi; public Form1() { InitializeComponent(); first = Labirint.Properties.Resources.first; reitmi = Labirint.Properties.Resources.reitmi; pictureBox1.Image = first; } And now you can compare them: private void pictureBox1_Click(object sender, EventArgs e) { if (pictureBox1.Image == first) pictureBox1.Image = reitmi; else pictureBox1.Image = first; } And to avoid the memory bloat: private void Form1_FormClosed(object sender, FormClosedEventArgs e) { first.Dispose(); reitmi.Dispose(); }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27749541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4413780/']}
jdg_371934
stackexchange
llm_judgeable_groundtruth_similarity
25741060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to update all the buttons in my listview; the buttons are in a CustomAdapter which I've written, but the buttons are not being updated. The idea is to change the text of all buttons in the ListView and show a timer after a button is clicked. Can anyone tell me what I'm doing wrong? Here's the code: public class CustomAdapter extends BaseAdapter { Context context; LayoutInflater inflater; ArrayList<HashMap<String, String>> data; ImageLoader imageLoader; HashMap<String, String> resultp = new HashMap<String, String>(); int secs, mins; public CustomAdapter(Context context, ArrayList<HashMap<String, String>> arraylist) { this.context = context; data = arraylist; imageLoader = new ImageLoader(context); } @Override public int getCount() { return data.size(); } @Override public Object getItem(int position) { return null; } @Override public long getItemId(int position) { return 0; } public View getView(final int position, View convertView, ViewGroup parent) { final Button buttonShare; TextView title; ImageView poster; View itemView = null; if (convertView == null) { inflater = (LayoutInflater) context .getSystemService(Context.LAYOUT_INFLATER_SERVICE); itemView = inflater.inflate(R.layout.ads_layout, parent, false); } else { itemView = convertView; } resultp = data.get(position); // Set up the text fields title = (TextView) itemView.findViewById(R.id.title); buttonShare = (Button) itemView.findViewById(R.id.postButton); // ImageView poster = (ImageView) itemView.findViewById(R.id.adImage); title.setText(resultp.get(AdsFragment.TAG_TITLE)); imageLoader.DisplayImage(resultp.get(AdsFragment.TAG_PHOTO), poster); buttonShare.setOnClickListener(new OnClickListener() { @Override public void onClick(View view) { Boolean posted = AdsFragment.getPosted(); if (posted == false) { context.startService(new Intent(context,
BroadcastService.class)); Log.i("SERVICE", "Started service"); cdt.start(); //updateGUI(buttonShare); } else { Log.i("POST", "WAS POSTED!!!"); new AlertDialog.Builder(context).setTitle("LALKA") .setMessage("LALKA").setCancelable(true).show(); } } }); CountDownTimer cdt = new CountDownTimer(20000, 1000) { @Override public void onTick(long millisUntilFinished) { secs = (int) (millisUntilFinished / 1000); mins = secs / 60; secs = secs % 60; buttonShare.setText(" " + mins + " : " + String.format("%02d", secs)); } @Override public void onFinish() { } }; return itemView; }} Now provide the response and nothing else.
It should be parentNode , not just parent : alert(obj.target.parentNode.id); Also attributes is redundant as you can access id property directly. But note, that since you have ngRepeat , it will create invalid markup, since ids are going to be duplicated. You probably want to fix this too, maybe like this or use classes: <div id="d8{{$index}}" class="menutitles ng-scope" ng-repeat="title in list"> <div class="right" ng-click="showsubmenu()">+</div> <div class="left" ng-click="showsubmenu()">Unit 9</div></div>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25741060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3733775/']}
jdg_371935
stackexchange
llm_judgeable_groundtruth_similarity
147
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The answers in this question: What is spin as it relates to subatomic particles? do not address some particular questions regarding the concept of spin: What are some useful ways to imagine a particle without dimensions - like an electron - spinning? What are some useful ways to imagine a particle with spin 1/2 making a 360° turn without returning to its original position (the wave function transforms as: $\Psi \rightarrow -\Psi$)? When spin is not a classical property of elementary particles, is it a purely relativistic property, a purely quantum-mechanical property or a mixture of both? Now provide the response and nothing else.
How should one imagine a particle without dimensions - like an electron - to spin? You don't. If you want to imagine, then you think classically and it is just a particle spinning... Thinking like that doesn't give you any other insight into what spin really is (an intrinsic angular momentum, behaving like an [orbital] angular momentum). How should one imagine a particle with spin 1/2 to make a 360° turn without returning to its original position (the wave function transforms as: Ψ→−Ψ) Just imagine it ... no big deal. Again, classically this is not possible, but quantum mechanically it is. When spin is not a classical property of elementary particles, is it a purely relativistic property, a purely quantum-mechanical property or a mixture of both? The spin of an elementary particle is a purely quantum-mechanical effect. Edit: See @j.c.'s comment. Relativity also plays a role. Any other interpretation/calculation requires things like commutators, symmetry properties and group theory. The parallel between "real spinning" and "spin" (which is just a name) comes from the fact that the spin operator needed to account for properties of elementary particles behaves (= has the same definition, based on commutators) like the orbital angular momentum operator. This again comes from symmetry properties of ... nature. The goal of quantum physics is to provide a way to calculate properties. If you want to calculate or go deeper into the problem, then you don't need this classical interpretation.
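The "behaves like an orbital angular momentum" statement can be made concrete. Both operators satisfy the same commutation algebra (written out here for reference; this is standard quantum mechanics, not part of the original answer):

```latex
% Orbital angular momentum, built from position and momentum:
[L_i, L_j] = i\hbar\,\epsilon_{ijk}\, L_k
% Spin, postulated with the identical algebra, but with no
% underlying x or p -- hence "intrinsic":
[S_i, S_j] = i\hbar\,\epsilon_{ijk}\, S_k
% For spin 1/2 (eigenvalues of S_z are \pm\hbar/2), a full
% rotation gives
e^{-i\,2\pi S_z/\hbar} = -1,
% which is exactly the \Psi \rightarrow -\Psi sign flip asked
% about in the question.
```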
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/147', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/139/']}
jdg_371936
stackexchange
llm_judgeable_groundtruth_similarity
37172679
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to extract the 10'000 first lines from a bz2 file.

import bz2

file = "file.bz2"
file_10000 = "file.txt"

output_file = codecs.open(file_10000, 'w+', 'utf-8')
source_file = bz2.open(file, "r")

count = 0
for line in source_file:
    count += 1
    if count < 10000:
        output_file.writerow(line)

But I get an error "'module' object has no attribute 'open'". Do you have any ideas? Or maybe I could save the 10'000 first lines to a txt file in some other way? I am on Windows. Now provide the response and nothing else.
Here is a fully working example that includes writing and reading a test file that is much smaller than your 10000 lines. It's nice to have working examples in questions so we can test easily.

import bz2
import itertools
import codecs

file = "file.bz2"
file_10000 = "file.txt"

# write test file with 9 lines
with bz2.BZ2File(file, "w") as fp:
    fp.write('\n'.join('123456789'))

# the original script using BZ2File ... and 3 lines for test
# ...and fixing bugs:
# 1) it only writes 9999 instead of 10000
# 2) files don't do writerow
# 3) close the files
output_file = codecs.open(file_10000, 'w+', 'utf-8')
source_file = bz2.BZ2File(file, "r")
count = 0
for line in source_file:
    count += 1
    if count <= 3:
        output_file.write(line)
source_file.close()
output_file.close()

# show what you got
print('---- Test 1 ----')
print(repr(open(file_10000).read()))

A more efficient way to do it is to break out of the for loop after reading the lines you want. You can even leverage iterators to tighten up the code like so:

# a faster way to read the first 3 lines
with bz2.BZ2File(file) as source_file,\
        codecs.open(file_10000, 'w+', 'utf-8') as output_file:
    output_file.writelines(itertools.islice(source_file, 3))

# show what you got
print('---- Test 2 ----')
print(repr(open(file_10000).read()))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37172679', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5672618/']}
jdg_371937
stackexchange
llm_judgeable_groundtruth_similarity
8242222
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to display only the top half of an image and the bottom half of the same image in 2 separate divs. I've tried with the CSS property clip , but it doesn't seem to support % as a unit. Is it just me? Do you have a solution for displaying only a half of an image? Now provide the response and nothing else.
Update (after 5+ years): The CSS clip property is now deprecated. Consider using clip-path instead (allowing for a non-JS solution), which allows you to specify shapes with percentages. Example: /* Bottom half of image */clip-path: polygon(0 50%, 100% 50%, 100% 100%, 0% 100%); /* Top half of image */clip-path: polygon(0 0, 100% 0, 100% 50%, 0 50%); Further example to create a triangle using percentages: clip-path: polygon(50% 0%, 0% 100%, 100% 100%); Original: CSS clip property does not currently support percentages: http://www.w3.org/TR/CSS2/visufx.html#propdef-clip , latest http://www.w3.org/TR/2011/REC-CSS2-20110607/visufx.html#clipping A solution to your problem could be to use Javascript to determine the size of the area you want to show, and then use that value when setting the clip property. Something as simple as this should do the trick: var heightOfImageToDisplay = image.height / 2;
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8242222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/195999/']}
jdg_371938
stackexchange
llm_judgeable_groundtruth_similarity
6638383
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Javascript - array.contains(obj) What's wrong with this: var zipCodes =(['90001','90002','90003']); Test if the value exists in the array zipCodes if('90001' in zipCodes) { alert('True');}; Now provide the response and nothing else.
The in operator looks at property names, not values. Because it's an Array, the property names will be the indices of the Array. If you're only supporting a modern environment, you could use Array.prototype.indexOf() . if(zipCodes.indexOf('90001') > -1) { If you need to support environments that don't have .indexOf() , you could implement the MDN fix . if (!Array.prototype.indexOf) { Array.prototype.indexOf = function (searchElement /*, fromIndex */ ) { "use strict"; if (this === void 0 || this === null) throw new TypeError(); var t = Object(this); var len = t.length >>> 0; if (len === 0) return -1; var n = 0; if (arguments.length > 0) { n = Number(arguments[1]); if (n !== n) // shortcut for verifying if it's NaN n = 0; else if (n !== 0 && n !== (1 / 0) && n !== -(1 / 0)) n = (n > 0 || -1) * Math.floor(Math.abs(n)); } if (n >= len) return -1; var k = n >= 0 ? n : Math.max(len - Math.abs(n), 0); for (; k < len; k++) { if (k in t && t[k] === searchElement) return k; } return -1; };}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6638383', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/579394/']}
jdg_371939
stackexchange
llm_judgeable_groundtruth_similarity
1011354
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Banach Matchbox Problem: Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers for the first time that the box picked is empty. If it is assumed that each of the matchboxes originally contained $n$ matches, what is the probability that there are exactly $k$ matches in the other box? I'm wondering whether the following reasoning is right, because it doesn't match up with the correct probability. But here goes my reasoning: Assuming there are $k$ matches left in the other box, we have had to take $2n-k+1$ matches to notice there are none left. The total number of ways in which we could have picked those is $2 {2n - k \choose n}$, for either the matchbox in the left or right pocket has $k$ matches left inside, and in the $(2n-k+1)$th pick we would have found an empty box. The total number of possibilities, taken over all possible sizes $k$, would then be $\sum_{m=0}^{n} 2{2n-m \choose n}$. So I'd assume the overall probability would be $\frac{2 {2n - k \choose n}}{\sum_{m=0}^{n} 2 {2n-m \choose n}} =\frac{{2n - k \choose n}}{\sum_{m=0}^{n} {2n-m \choose n}}$. However, the mentioned solution is ${2n - k \choose n} (\tfrac{1}{2})^{2n-k}$. Where is my reasoning wrong? Thanks in advance. Now provide the response and nothing else.
Let's say that each time the mathematician needs a match, he flips a coin to determine which pocket to take the match from: $H=$ Left Pocket, $T=$ Right Pocket. Since he noticed that one of the pockets was empty, we know that he flipped either $n$ tails or heads out of $2n-k$ tosses, plus one more toss at the end coinciding with the pocket that is empty. So, let's say he flipped $n$ heads; then the probability of finding his left pocket empty is: $P(\#R=k|L=0)=Bin(n;2n-k,p=0.5)\times P(\mathrm{Toss}_{n+1} = H)={2n-k \choose n}\left(\frac{1}{2}\right)^{2n+1-k}$ However: This could equally be the case for the Right pocket, so we need to DOUBLE this result to get: $${2n-k \choose n}\left(\frac{1}{2}\right)^{2n-k}$$ This also pinpoints where your counting goes astray: the pick sequences you enumerate are not all equally likely, because a sequence of $2n-k+1$ picks occurs with probability $(1/2)^{2n-k+1}$, which depends on $k$. Dividing a count of favourable sequences by a total count only works when all outcomes are equally probable.
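If you want to sanity-check the formula numerically, a brute-force enumeration works, since the process always stops within $2n+1$ coin tosses (one side must reach $n+1$ picks by then). A sketch in Python, not part of the original argument:

```python
from itertools import product
from math import comb

def banach_exact(n):
    """Enumerate all 2^(2n+1) equally likely toss sequences and
    record k = matches left in the other box at the first empty pick."""
    probs = [0.0] * (n + 1)
    for seq in product("HT", repeat=2 * n + 1):
        h = t = 0
        for flip in seq:
            if flip == "H":
                h += 1
            else:
                t += 1
            if h == n + 1 or t == n + 1:  # an empty box is discovered
                other = t if h == n + 1 else h
                probs[n - other] += 0.5 ** (2 * n + 1)
                break
    return probs

def banach_formula(n, k):
    return comb(2 * n - k, n) * 0.5 ** (2 * n - k)

for k in range(4):
    print(k, banach_exact(3)[k], banach_formula(3, k))
```

For each $n$ the two agree, and the formula's values sum to $1$ over $k=0,\dots,n$, which the naive count-based ratio in the question does not reproduce.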
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1011354', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/190765/']}
jdg_371940