source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1–8) | prompt (stringlengths 209–40.4k) | gold_standard_solution (stringlengths 0–56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138–225) | problem_id (stringlengths 9–10) |
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47967135
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 2 variables where I get 2 times from datePicker and I need to save the difference between them in a variable.

let timeFormatter = DateFormatter()
timeFormatter.dateFormat = "HHmm"
time2 = timeFormatter.date(from: timeFormatter.string(from: datePicker.date))!

I have tried to get the timeIntervalSince1970 from both of them and then subtract them to get the difference in milliseconds, which I would turn back into hours and minutes, but I get a very big number which doesn't correspond to the actual time.

let dateTest = time2.timeIntervalSince1970 - time1.timeIntervalSince1970

Then I tried using time2.timeIntervalSince(date: time1), but again the resulting milliseconds are much, much more than the actual time. How can I get the correct time difference between 2 times and have the result as hours and minutes in format "0823" for 8 hours and 23 minutes?
Now provide the response and nothing else.
|
The recommended way to do any date math is Calendar and DateComponents:

let difference = Calendar.current.dateComponents([.hour, .minute], from: time1, to: time2)
let formattedString = String(format: "%02ld%02ld", difference.hour!, difference.minute!)
print(formattedString)

The format %02ld adds the padding zero. If you need a standard format with a colon between hours and minutes, DateComponentsFormatter() could be a more convenient way:

let formatter = DateComponentsFormatter()
formatter.allowedUnits = [.hour, .minute]
print(formatter.string(from: time1, to: time2)!)
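For a quick sanity check of the hour/minute arithmetic outside Swift, here is a minimal Python sketch of the same idea; the example times t1 and t2 are made up for illustration and are not from the answer above.

from datetime import datetime

# Hypothetical example times standing in for the two picker values
t1 = datetime(2023, 1, 1, 9, 5)
t2 = datetime(2023, 1, 1, 17, 28)

delta = t2 - t1                     # timedelta between the two times
hours, remainder = divmod(int(delta.total_seconds()), 3600)
minutes = remainder // 60
print(f"{hours:02d}{minutes:02d}")  # -> "0823"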
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47967135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4752617/']}
|
jdg_376041
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3280323
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get the week number of the month using JavaScript / jQuery? For example:

First week: 5th July, 2010. / Week number = First Monday
Previous week: 12th July, 2010. / Week number = Second Monday
Current date: 19th July, 2010. / Week number = Third Monday
Next week: 26th July, 2010. / Week number = Last Monday
Now provide the response and nothing else.
|
This is an old question; here is my cross-browser solution based on:

Weeks start on Sunday
The first week of a month is the one that contains the first of the month

So in March 2013:

Fri 1 Mar is the first day of week 1
Sun 3 Mar is the start of week 2
Sun 31 Mar is the start of week 6 (and is the only day in that week)
Mon 1 Apr is the first day of week 1 in April

Date.prototype.getWeekOfMonth = function(exact) {
    var month = this.getMonth()
        , year = this.getFullYear()
        , firstWeekday = new Date(year, month, 1).getDay()
        , lastDateOfMonth = new Date(year, month + 1, 0).getDate()
        , offsetDate = this.getDate() + firstWeekday - 1
        , index = 1 // start index at 0 or 1, your choice
        , weeksInMonth = index + Math.ceil((lastDateOfMonth + firstWeekday - 7) / 7)
        , week = index + Math.floor(offsetDate / 7)
        ;
    if (exact || week < 2 + index) return week;
    return week === weeksInMonth ? index + 5 : week;
};

// Simple helper to parse YYYY-MM-DD as local
function parseISOAsLocal(s) {
    var b = s.split(/\D/);
    return new Date(b[0], b[1] - 1, b[2]);
}

// Tests
console.log('Date Exact|expected not exact|expected');
[
    ['2013-02-01', 1, 1], ['2013-02-05', 2, 2], ['2013-02-14', 3, 3],
    ['2013-02-23', 4, 4], ['2013-02-24', 5, 6], ['2013-02-28', 5, 6],
    ['2013-03-01', 1, 1], ['2013-03-02', 1, 1], ['2013-03-03', 2, 2],
    ['2013-03-15', 3, 3], ['2013-03-17', 4, 4], ['2013-03-23', 4, 4],
    ['2013-03-24', 5, 5], ['2013-03-30', 5, 5], ['2013-03-31', 6, 6],
    ['2013-04-01', 1, 1]
].forEach(function(test) {
    var d = parseISOAsLocal(test[0]);
    console.log(test[0] + ' ' + d.getWeekOfMonth(true) + '|' + test[1]
        + ' ' + d.getWeekOfMonth() + '|' + test[2]);
});

You don't need to put it directly on the prototype if you don't want to. In my implementation, 6 means "Last", not "Sixth". If you want it to always return the actual week of the month, just pass true. EDIT: Fixed this to handle 5- and 6-week months. My "unit tests", feel free to fork: http://jsfiddle.net/OlsonDev/5mXF6/1/
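As a rough cross-check of the same convention (Sunday-start weeks, week 1 is the week containing the 1st of the month), here is a minimal Python sketch; it is an illustrative re-implementation, not the JavaScript above.

from datetime import date

def week_of_month(d: date) -> int:
    """Week index (starting at 1) under Sunday-start weeks,
    where week 1 is the week containing the 1st of the month."""
    first = d.replace(day=1)
    # Python: Monday=0..Sunday=6; shift so Sunday=0 to match JS getDay()
    first_weekday = (first.weekday() + 1) % 7
    offset = d.day + first_weekday - 1
    return offset // 7 + 1

# Spot checks against the dates discussed in the answer
assert week_of_month(date(2013, 3, 1)) == 1   # Fri 1 Mar -> week 1
assert week_of_month(date(2013, 3, 3)) == 2   # Sun 3 Mar -> week 2
assert week_of_month(date(2013, 3, 31)) == 6  # Sun 31 Mar -> week 6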
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3280323', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/111435/']}
|
jdg_376042
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
49545853
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created a Cordova hybrid app and somehow managed to sign them for Google Play, starting the process with two commands: 1. keytool ...2. cordova run android --release keystore==... password=... After uploading the signed app on the google console and opting for their upload key, I got access to two certificate files: deployment_cert.der and upload_cert.der , and their "hash-codes". This makes me pretty confused. What should I do with them? How can I use them to sign the app for Google Play? (I only use JavaScript for the project, and Cordova commands. Do I need Java or something?) What makes me even more perplexed is the fact that I was able to ignore those certificates and launch the app on Google Play using just the original keystore (by repeating the command 2 above). So did I sign the app with the upload key? I don't believe so. And how could I?
Now provide the response and nothing else.
|
When you enrolled your app in the Google Play App Signing program, you let Google create and own the private key that will be used to sign the APKs that get delivered to the end users (i.e. the deployment key). The private key that you used to sign the APKs you first uploaded to the Play Console is what Play calls the upload key. This is the key that you will need to use to sign all future APKs you upload to the Play Console. What the Play Console does is look at the first APK you upload, extract its certificate, and verify for every future upload that the uploaded APK matches the same certificate. The certificates displayed on the App signing page are just there for informative purposes. The upload certificate is the certificate associated with the private key you used to sign your APK, and the deployment certificate is the certificate associated with the private key that the Play Console created and uses to re-sign the APKs. You only need these hashes if your app uses a third-party API that requires authentication, in order to register your app with that third-party service. I hope that clarifies it a bit. The official documentation can be found here: https://support.google.com/googleplay/android-developer/answer/7384423
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49545853', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9566746/']}
|
jdg_376043
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5222212
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a select query in which I am fetching several columns (by joining 3-4 tables). I use a group by clause to group my results. Query:

select ci.Candidate_Id, ci.FirstName, ci.DetailXML
from Candidate_Instance ci
where ci.Candidate_Instance_Id=2
group by ci.Candidate_Id, ci.FirstName, ci.DetailXML

One of the tables has a column which is of XML data type. When I add the column to the select list, I get this error:

Column 'table.myXML' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

and when I add the column to the group by clause, I get this error:

The XML data type cannot be compared or sorted, except when using the IS NULL operator.

I am quite confused as to how to get out of this. I want to get the XML data from the column. Thanks
Now provide the response and nothing else.
|
You cannot group by XML or TEXT columns, you would first need to convert to varchar(max):

select ci.Candidate_Id, ci.FirstName, convert(xml, convert(varchar(max), ci.DetailXML)) DetailXML
from Candidate_Instance ci
where ci.Candidate_Instance_Id=2
group by ci.Candidate_Id, ci.FirstName, convert(varchar(max), ci.DetailXML)

On the first line, it is converted to varchar(max) to match the GROUP BY clause, and later it is re-cast back to XML.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5222212', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587237/']}
|
jdg_376044
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15172111
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My PreferenceActivity works great except for one thing. The ActionBar icon, which perfectly returns the user to the previous activity in all my other activities doesn't work in the PreferenceActivity. When I click the icon it flashes as if it was going to return to the previous activity but the PreferenceActivity stays on screen. Interestingly the back button does return the user to the previous activity. Is there a way to make the ActionBar's Home icon work "normally" in the PreferenceActivity? Here is the code: public class SettingsActivity extends PreferenceActivity implements OnSharedPreferenceChangeListener {protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Set actionBar controls for Settings TextView actionBarTitle = (TextView) findViewById(Resources.getSystem().getIdentifier("action_bar_title", "id", "android")); ActionBar actionBar = getActionBar(); actionBar.setNavigationMode(ActionBar.NAVIGATION_MODE_STANDARD); actionBar.setIcon(R.drawable.ic_launcher); actionBar.setDisplayShowTitleEnabled(true); actionBarTitle.setTextColor(Color.WHITE); actionBarTitle.setTextSize(16); actionBar.setTitle(R.string.settings_menu_title); if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) { //Build.VERSION_CODES.ICE_CREAM_SANDWICH actionBar.setHomeButtonEnabled(true); actionBar.setDisplayHomeAsUpEnabled(true); // show back arrow on title icon actionBar.setDisplayShowHomeEnabled(true); }...... Handle prefs (all working fine)..... } } //////And the calling code //////// //Use menu button to access settings screen @Override public boolean onKeyDown(int keycode, KeyEvent e) { switch(keycode) { case KeyEvent.KEYCODE_MENU: Intent intent = new Intent(this, SettingsActivity.class); startActivity(intent); return true; } return super.onKeyDown(keycode, e); } // [END onKeyDown (for menu click capture) ]
Now provide the response and nothing else.
|
Thanks to @Axarydax for pointing me in the right direction. I came to realize that PreferenceActivity is different from my other activities since the Home button returns to the calling Activity instead of to the MainActivity. Therefore the solution required (1) using startActivityForResult (instead of startActivity) to call the PreferenceActivity and (2) using onOptionsItemSelected in the PreferenceActivity to manage the return (per @Axarydax's answer). Both (1) and (2) are shown below: // (1) Menu button used to access PreferenceActivity @Override public boolean onKeyDown(int keycode, KeyEvent e) { switch(keycode) { case KeyEvent.KEYCODE_MENU: Intent intent = new Intent(this, SettingsActivity.class); startActivityForResult(intent, 1); //enables return to here return true; } return super.onKeyDown(keycode, e); } // (2) Return to calling activity from PreferenceActivity @Override public boolean onOptionsItemSelected(MenuItem item) { if (item.getItemId() == android.R.id.home) { int SUCCESS_RESULT=1; setResult(SUCCESS_RESULT, new Intent()); finish(); //return to caller return true; } return false; }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15172111', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1709197/']}
|
jdg_376045
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7147273
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is probably a stupid noob question but what does the : represent in the following context:

var stuffToDo = {
    'bar' : function() { alert('the value was bar -- yay!'); },
    'baz' : function() { alert('boo baz :('); },
    'default' : function() { alert('everything else is just ok'); }
};

if (stuffToDo[foo]) {
    stuffToDo[foo]();
} else {
    stuffToDo['default']();
}

Is it storing the function to each of those variables?
Now provide the response and nothing else.
|
This is an object literal [MDN]:

var obj = {
    key: value
};
// obj.key === value; // true

It assigns value to a property key of obj. While there are no restrictions on what value can be (well, it must be something assignable), there are limitations on key: it must be either an identifier name, a string literal or a numeric literal. More details can be found in section 11.1.5 of the ECMAScript specification. The literal notation is similar to:

var stuffToDo = {}; // <-- empty object literal
stuffToDo.bar = function() {...}; // or stuffToDo['bar'] = ...
stuffToDo.baz = function() {...}; // or stuffToDo['baz'] = ...

The biggest difference is that when using an object literal, you cannot access other properties of the object during the declaration. This will not work:

var obj = {
    foo: value,
    bar: obj.foo
};

whereas this does:

var obj = {};
obj.foo = value;
obj.bar = obj.foo;

For completeness, there are two other uses of colons in JavaScript:

Conditional (ternary) operator [MDN]: var val = condition ? true-value : false-value;
Labels [MDN]: someLabel: var x = 5;
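As an illustrative aside, the dispatch-table pattern from the question is not specific to JavaScript; here is a minimal Python sketch of the same idea, with a dict playing the role of the object literal (not part of the answer above).

# Dispatch table: keys map to callables, mirroring the JS object literal
stuff_to_do = {
    "bar": lambda: print("the value was bar -- yay!"),
    "baz": lambda: print("boo baz :("),
    "default": lambda: print("everything else is just ok"),
}

foo = "baz"
# Fall back to the 'default' entry when the key is missing
stuff_to_do.get(foo, stuff_to_do["default"])()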
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7147273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376046
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15898632
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I added a Data Model Entity named publishingCompanyEntities to my project and added a ComboBox to my WinForm. When I bind my list of authors to the combo box, the data source is filled with data, but cmoAuthors.Items.Count returns 0 while cmoAuthors.DataSource.Count returns 2 items.

publishContext = new publishingCompanyEntities();
cmoAuthors.DataSource = publishContext.Authors;
cmoAuthors.DisplayMember = "FirstName";
Now provide the response and nothing else.
|
You need to add .ToList() to the Authors EntitySet.

publishContext = new publishingCompanyEntities();
cmoAuthors.DataSource = publishContext.Authors.ToList();
cmoAuthors.DisplayMember = "FirstName";
cmoAuthors.Invalidate();

The reason is that an EntitySet is not an actual collection. It's a query (IQueryable), and it seems that the ComboBox is not smart enough to detect that. Calling ToList() materializes publishContext.Authors into objects. For some reason, the ComboBox does not update its Items collection when a new DataSource is detected. Invalidate() forces the control to redraw itself and, in the process, update its Items collection.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15898632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2086214/']}
|
jdg_376047
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2778292
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know what I have to do to show that the map is linear, but I'm just not sure how to handle the operations with polynomials. Do I just have to take two polynomials, say, $P(X)$ and $Q(X)$ and describe them as $ P(X) = a_0 +a_1X + ... + a_nX^n$ and $Q(X) = b_0 + b_1X + ... + b_mX^m$ and prove it that way, or is there a simpler method? Thanks.
Now provide the response and nothing else.
|
There is a simpler method. Let $\Psi$ be your map. If $P(x)$ and $Q(X)$ are polynomials, then\begin{align}\Psi\bigl(P(X)+Q(X)\bigr)&=P(X+1)+Q(X+1)-P(X)-Q(X)\\&=P(X+1)-P(X)+Q(X+1)-Q(X)\\&=\Psi\bigl(P(X)\bigr)+\Psi\bigl(Q(X)\bigr).\end{align}Can you do the rest?
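For readers who like to sanity-check the algebra mechanically, here is a minimal Python/SymPy sketch (not part of the answer above) verifying both additivity and scalar homogeneity of the map $\Psi(P)(X)=P(X+1)-P(X)$ on sample polynomials; the particular polynomials chosen are arbitrary.

import sympy as sp

X, c = sp.symbols('X c')

def psi(P):
    """The map in question: Psi(P)(X) = P(X+1) - P(X)."""
    return sp.expand(P.subs(X, X + 1) - P)

P = 3*X**3 - 2*X + 5      # sample polynomials; any would do
Q = X**4 + 7*X**2

# Additivity: Psi(P + Q) == Psi(P) + Psi(Q)
assert sp.simplify(psi(P + Q) - (psi(P) + psi(Q))) == 0
# Homogeneity: Psi(c*P) == c*Psi(P)
assert sp.simplify(psi(c*P) - c*psi(P)) == 0
print("linearity checks pass")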
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2778292', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/494405/']}
|
jdg_376048
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1315
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose you have a normal distribution with mean=0, and stdev=1. So the expected value is 0. Now suppose you limit the outcomes, such that no values can be below 0. So 50% of values now equal 0, and rest of distribution is still normal. Running 1000000 trials, I come out with an expected value of .4 My question is how can I get this expected value through calculation? Thanks
Now provide the response and nothing else.
|
The normal distribution has density function $f(x)=\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}$; your new distribution has that density function on the positive reals, $P(0)=\frac{1}{2}$, and $P(x)=0$ for the negative reals. The expected value is $0\cdot\frac{1}{2}+\int_{0}^{\infty}x\cdot f(x)dx=\frac{1}{\sqrt{2\pi}}\approx0.398942$. edit : If you were to cut off at $x=c$ (assigning all the probability from below c to c itself) instead of $x=0$, your density function would be $f(x)=\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}$ for $x>c$, $P(c)=\int_{-\infty}^{c}\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}dx$, and $P(x)=0$ for $x<c$, so the expected value is $c\cdot P(c) + \int_{c}^{\infty}x\cdot \frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}dx$. edit 2 : note that the exponent on e in all of the above is $-\frac{x^2}{2}$ (the exponent 2 on the x is, in the current TeX rendering, positioned and sized such as to be somewhat ambiguous) edit 3 : my explanation incorrectly mixed probability density functions and literal probabilities--this was solely an issue of terminology and the analytic results still stand, but I have attempted to clarify the language above.
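As a quick numerical cross-check of $\frac{1}{\sqrt{2\pi}}\approx 0.3989$, here is a minimal Python/NumPy simulation (not part of the answer above) that clips a standard normal sample at zero, mirroring the setup in the question.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal(1_000_000)
clipped = np.maximum(samples, 0.0)   # values below 0 are set to 0

print(clipped.mean())                # ~0.399, matching 1/sqrt(2*pi)
print(1.0 / np.sqrt(2.0 * np.pi))    # 0.3989422804014327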
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1315', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/540/']}
|
jdg_376049
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5150642
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How would I be able to limit the scale of the UIPinchGestureRecognizer to a min and max level? The scale property below seems to be relative to the last known scale (the delta from last state) and I can't figure out how to set a limit to the size/heigh of the object being zoomed. -(void)scale:(id)sender {[self.view bringSubviewToFront:[(UIPinchGestureRecognizer*)sender view]];if([(UIPinchGestureRecognizer*)sender state] == UIGestureRecognizerStateEnded) { lastScale = 1.0; return;}CGFloat pinchscale = [(UIPinchGestureRecognizer*)sender scale];CGFloat scale = 1.0 - (lastScale - pinchscale);CGAffineTransform currentTransform = [(UIPinchGestureRecognizer*)sender view].transform;CGAffineTransform holderTransform = holderView.transform;CGAffineTransform newTransform = CGAffineTransformScale(currentTransform, scale, scale);[[(UIPinchGestureRecognizer*)sender view] setTransform:newTransform];lastScale = [(UIPinchGestureRecognizer*)sender scale]; }
Now provide the response and nothing else.
|
Here is the solution that I figured out after using Anomie's answer as a starting point. - (void)handlePinchGesture:(UIPinchGestureRecognizer *)gestureRecognizer { if([gestureRecognizer state] == UIGestureRecognizerStateBegan) { // Reset the last scale, necessary if there are multiple objects with different scales lastScale = [gestureRecognizer scale]; } if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) { CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue]; // Constants to adjust the max/min values of zoom const CGFloat kMaxScale = 2.0; const CGFloat kMinScale = 1.0; CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]); newScale = MIN(newScale, kMaxScale / currentScale); newScale = MAX(newScale, kMinScale / currentScale); CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale); [gestureRecognizer view].transform = transform; lastScale = [gestureRecognizer scale]; // Store the previous scale factor for the next pinch gesture call }}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5150642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/614787/']}
|
jdg_376050
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
176114
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently self-studying introductory combinatorics by reading Introduction to combinatorial mathematics. I am currently in the first chapter, and I have a question regarding one of the examples. The question was asking to count the number of n-bit strings with an even number of zeros. The answer is of course $2^{n-1}$. The author gave 2 solutions. I however didn't completely understand what I think is the straightforward one. The solution I got was that he took out 1 bit, leaving $(n-1)$ bits, if the number of zeros is even in the $(n-1)$-bit number, then he will just append a 1, if not then he will append a zero. So in the end we just needed to count the number of $(n-1)$-bit strings. The other solution (the straightforward one) that I didn't understand examined the symmetry that half of the $2^n$ must have an even number of zeros, and the other half will have an odd number of zeros. I just don't get why this property must hold. I can understand that half of the $2^n$ numbers will have even parity, but I can't see how it holds for the parity of the number of zero or one bits. If anyone can show me how that property holds, I'd be very grateful. I'd also be interested to see different explanations and proofs if possible. Thank you.
Now provide the response and nothing else.
|
Use the identity$$4x^2+6xy+4y^2=x^2+y^2+(x+y)^2+(x+y)^2+(x+y)^2.$$ Any square $w^2$ is $\ge 0$, with equality iff $w=0$. The sum of objects that are $\ge 0$ is $\ge 0$, with equality only when all the objects are $0$. This forces $x=y=0$. Remark: The approach above is minimalist in that we use only facts true in all ordered fields. If we are willing to use properties such as existence of square roots of positive numbers, then we can complete the square in the traditional way.
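If you want to check the algebraic identity used at the start of this answer mechanically, here is a tiny Python/SymPy sketch (illustrative only, not part of the answer above).

import sympy as sp

x, y = sp.symbols('x y')
lhs = 4*x**2 + 6*x*y + 4*y**2
rhs = x**2 + y**2 + 3*(x + y)**2   # the three (x+y)^2 terms collected

assert sp.expand(lhs - rhs) == 0
print("identity holds")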
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/176114', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/36695/']}
|
jdg_376051
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3683110
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a file containing a list of hexadecimal numbers, such as 0x12345678, one per line. I want to make a calculation on them. For this, I thought of using awk. But while printing a hexadecimal number with awk is easy with the printf function, I haven't found a way to interpret the hexadecimal input other than as text (or 0: conversion to integer stops on the x).

awk '{ print $1; }'                    // 0x12345678
awk '{ printf("%x\n", $1)}'            // 0
awk '{ printf("%x\n", $1+1)}'          // 1

// DarkDust answer
awk '{ printf("%s: %x\n", $1, $1)}'    // 0x12345678: 0

Is it possible to print, e.g., the value +1?

awk '{ printf("%x\n", ??????)}'        // 0x12345679

Edit: One-liners in other languages welcome! (if reasonable length ;-) )
Now provide the response and nothing else.
|
In the original nawk and mawk implementations the hexadecimal (and octal) numbers are recognised. gawk (which I guess you are using) has the feature/bug of not doing this. It has a command line switch to get the behaviour you want: --non-decimal-data.

echo 0x12345678 | mawk '{ printf "%s: %x\n", $1, $1 }'
0x12345678: 12345678

echo 0x12345678 | gawk '{ printf "%s: %x\n", $1, $1 }'
0x12345678: 0

echo 0x12345678 | gawk --non-decimal-data '{ printf "%s: %x\n", $1, $1 }'
0x12345678: 12345678
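Since the question explicitly welcomes one-liners in other languages, here is a minimal Python alternative (the script name is hypothetical): read hex values from stdin, add one, and print them back in hex.

# Usage (illustrative): python3 hexinc.py < numbers.txt
import sys

for line in sys.stdin:
    if not line.strip():
        continue
    value = int(line.strip(), 16)        # parses "0x12345678" as hex
    print(f"{value + 1:#010x}")          # -> 0x12345679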
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3683110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212063/']}
|
jdg_376052
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1399781
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a beginning level math student and I read recently (in a book written by a Ph. D in Mathematical Education) that mathematical definitions do not get "proven." As in they can't be proven. Why not? It seems like some definitions should have a foundation based on proof. How simple (or intuitive) does something have to be to become a definition? I mean to ask this and get a clear answer. Hopefully this is not an opinion-based question, and if it is will someone please provide the answer: "opinion based question."
Now provide the response and nothing else.
|
I'd like to take a somewhat broader view, because I suspect your question is based on a very common problem among people who are starting to do "rigorous" or "theorem-proof" mathematics. The problem is that they often fail to fully recognize that, when a mathematical term is defined, its meaning is given exclusively by the definition. Any meaning the word has in ordinary English is totally irrelevant. For example, if I were to define "A number is called teensy if and only if it is greater than a million", this would conflict what English-speakers and dictionaries think "teensy" means, but, as long as I'm doing mathematics on the basis of my definition, the opinions of all English-speakers and dictionaries are irrelevant. "Teensy" means exactly what the definition says. If the word "teensy" already had a mathematical meaning (for example, if you had already given a different definition), then there would be a question whether my definition agrees with yours. That would be something susceptible to proof or disproof. (And, while the question is being discussed, we should use different words instead of using "teensy" with two possibly different meanings; mathematicians would often use "Zduff-teensy" and "Blass-teensy" in such a situation.) But if, as is usually the case, a word has only one mathematical definition, then, there is nothing that could be mathematically proved or disproved about the definition. If my definition of "teensy" is the only mathematical one (which I suspect is the case), and if someone asked "Does 'teensy' really mean 'greater than a million'?" then the only possible answer would be "Yes, by definition." A long discussion of the essence of teensiness would add no mathematically relevant information. (It might show that the discussants harbor some meaning of "teensy" other than the definition. If so, they should get rid of that idea.) (I should add that mathematicians don't usually give definitions that conflict so violently with the ordinary meanings of words. I used a particularly bad-looking example to emphasize the complete irrelevance of the ordinary meanings.)
|
{}
|
{'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/1399781', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/262513/']}
|
jdg_376053
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50877398
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I see there are a lot of examples on how to upload an image using flutter to firebase storage but nothing on actually downloading/reading/displaying one that's already been uploaded. In Android, I simply used Glide to display the images, how do I do so in Flutter? Do I use the NetworkImage class and if so, how do I first get the url of the image stored in Storage?
Now provide the response and nothing else.
|
update: In newer versions use await ref.getDownloadURL(); see "How to get full downloadUrl from UploadTaskSnapshot in Flutter?"

original:

someMethod() async {
  var data = await FirebaseStorage.instance.ref().child("foo$rand.txt").getData();
  var text = new String.fromCharCodes(data);
  print(data);
}

see "Download an image from Firebase to Flutter", or

final uploadTask = imageStore.putFile(imageFile);
final url = (await uploadTask.future).downloadUrl;

In the latter case you'd need to store the downloadUrl somewhere and then use NetworkImage or similar to get it rendered.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50877398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2236520/']}
|
jdg_376054
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8979258
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Or, why doesn't

import numpy
import matplotlib.pyplot as plt
plt.plot(xdata = numpy.array([1]), ydata = numpy.array(1), color = 'red', marker = 'o')

work? c.f.

> In [21]: import numpy
> In [22]: import matplotlib.pyplot as plt
> In [23]: plt.plot(xdata = numpy.array([1]), ydata = numpy.array(1), color = 'red', marker = 'o')
> Out[23]: []
> In [24]: plt.plot([1],[1], color = 'red', marker = 'o')
> Out[24]: [<matplotlib.lines.Line2D at 0x108036890>]
> In [25]: plt.plot(1, 1, color = 'red', marker = 'o')
> Out[25]: [<matplotlib.lines.Line2D at 0x1041024d0>]
Now provide the response and nothing else.
|
Just to expand on what @Yann already said: to understand why this happens, you need to understand a bit more about matplotlib's structure. To allow "matlab-isms" like plt.setp, and to maintain compatibility with older versions of Python, matplotlib avoids properties and relies heavily on getters and setters. (plot is actually one of the most complex cases, simply due to all of the crazy forms of calling it supports.) You can make a good argument that this is an outdated, unpythonic design, but that's beside the point. What actually happens (for the simplest case of plot(x, y, other=stuff)) when you call plot is that a new matplotlib.lines.Line2D object is created from the first two arguments, and then matplotlib.lines.Line2D.update(kwargs) is called. update basically does:

for key, value in kwargs.iteritems():
    func = getattr(self, 'set_' + key)
    func(value)

I'm over-simplifying, but that's the basic idea. Also, the accepted keyword argument list is basically auto-generated from anything that has a set_*. Because Line2D has set_xdata and set_ydata methods, they show up in the keyword argument list. The point is that the keyword arguments are never actually used until after most of the initialization of Line2D, and if you don't specify any positional arguments, plot won't initialize any Line2D's. You could consider this a bug, but I doubt it would be fixed. I don't think xdata and ydata were ever intended to be used as keyword arguments. set_xdata and set_ydata are there to allow you to quickly update a Line2D instance instead of creating a new one (for animations, etc.). They just happen to be allowed as keyword arguments due to the way matplotlib is set up.
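As a small illustration of the point that set_xdata and set_ydata are update methods rather than constructor arguments, here is a minimal sketch (not from the answer above) that creates the line with positional data and then updates it in place; the data values are made up.

import matplotlib.pyplot as plt

# Create the Line2D with positional x/y data (this is what actually works)
(line,) = plt.plot([1], [1], color='red', marker='o')

# Later, update the existing line in place: this is what set_xdata/set_ydata are for
line.set_xdata([1, 2, 3])
line.set_ydata([1, 4, 9])
plt.gca().relim()           # recompute data limits from the updated artist
plt.gca().autoscale_view()
plt.show()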
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8979258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37772/']}
|
jdg_376055
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3926863
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
OSGi seems to have an excellent benefit of having small deployable artifacts by not wrapping dozens of JAR dependencies into a lib directory. However, I can't find anything that tells me an easy, reliable way to deploy dependencies to a container. For instance, I have an application that uses CXF and several Spring subprojects. If I need to deploy this application to a new Glassfish server, what would be the best way to do so, ensuring that all dependencies get installed? I'm using Maven, and it would seem that there could be some way to have a hook that looks at the META-INF/maven directory and pulls the dependency list from the pom.xml and goes and fetches the required libs (probably from a local repo). Is there a way to do that? The Pax plugin sort of sounds like it's doing this, but it seems to be based around boostrapping a Felix container? Which is not what I want, I am dealing with an already running, remote container. Is there any shot such a thing exists as command line tool as opposed to GUI as well?
Now provide the response and nothing else.
|
There are a number of ways to deploy dependent bundles to OSGi containers. Here are some of them:

1. The Felix OBR bundle repository

You first need to create an XML index file for your available bundles, using a tool such as bindex. If you are using the maven-bundle-plugin, then it automatically maintains an OBR index in ~/.m2/repository/repository.xml. Load the index using the OBR command-line interface:

> obr:addUrl file:/Users/derek/.m2/repository/repository.xml

Then ask OBR to deploy your target bundle, with dependencies determined from the OBR index:

> obr:deploy com.paremus.posh.sshd
Target resource(s):
-------------------
   Paremus Posh Ssh Daemon (1.0.23.SNAPSHOT)
Required resource(s):
---------------------
   Paremus Command API (1.0.23.SNAPSHOT)
Optional resource(s):
---------------------
   Paremus Config Admin Commands (1.0.23.SNAPSHOT)
   Paremus OSGi & LDAP Types (1.0.23.SNAPSHOT)

2. Apache Karaf

Karaf supports "features", which are basically lists of bundles required to provide the feature:

karaf@root> features:info obr
Description of obr 2.0.0 feature
----------------------------------------------------------------
Feature has no configuration
Feature has no dependencies.
Feature contains followed bundles:
  mvn:org.apache.felix/org.apache.felix.bundlerepository/1.6.4
  mvn:org.apache.karaf.shell/org.apache.karaf.shell.obr/2.0.0
  mvn:org.apache.karaf.features/org.apache.karaf.features.obr/2.0.0
karaf@root> features:install obr

3. Eclipse Virgo

Virgo uses plans to define the artifacts that comprise an application and it is able to automatically supply the dependencies of an application, including bundles, plans, plan archives (PARs), and configurations, from both local and remote repositories.

4. Paremus Nimble

Nimble uses OBR (or its own extended) repository indexes to automatically deploy all dependent bundles needed to activate a target bundle (and uninstalls them when the target bundle is stopped). It can also detect other dependencies, such as a WAB bundle requiring a web-extender, and automatically install one according to a configurable policy. Nimble can also be configured to launch Glassfish, so that its features are available to bundles in the Glassfish container. The example below also shows that logging support is automatically installed when sshd is activated:

$ posh
________________________________________
Welcome to Paremus Nimble!
Type 'help' for help.
[denzil.0]% nim:add --dry-run com.paremus.posh.sshd@active
-- sorted parts to install --
4325 osgi.resolved.bundle/ch.qos.logback.core:0.9.22
-- start dependency loop --
5729 osgi.resolved.bundle/com.paremus.util.logman:1.0.23.SNAPSHOT
5727 osgi.active.bundle/com.paremus.util.logman:1.0.23.SNAPSHOT
3797 osgi.resolved.bundle/ch.qos.logback.classic:0.9.25.SNAPSHOT
3792 osgi.resolved.bundle/slf4j.api:1.6
-- end dependency loop --
436 osgi.resolved.bundle/org.apache.mina.core:2.0.0.RC1
6533 osgi.resolved.bundle/sshd-core:0.3
398 osgi.resolved.bundle/com.paremus.posh.sshd:1.0.23.SNAPSHOT
396 osgi.active.bundle/com.paremus.posh.sshd:1.0.23.SNAPSHOT

(disclaimer: I'm a developer at Paremus)

5. Apache Felix Gogo

Gogo is the new RFC 147 standard command-line shell. It is already used in Felix, Karaf, Nimble and will soon be available in Glassfish. Gogo allows you to run any commands that you could type interactively as a script. So you could generate the list of bundles to install and convert it to a script, or even capture the installed bundles from a working configuration so that it can be re-created from a clean start.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3926863', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/408815/']}
|
jdg_376056
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2844037
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To begin with, I noted that $$ \begin{aligned} \displaystyle \sum_{r_1 = 1}^{r} r_1 &= \dfrac{1}{2} r (r+1) \quad &(1)\\ \displaystyle \sum_{r_2 = 1}^{r} \displaystyle \sum_{r_1 = 1}^{r_2} r_1 &= \dfrac{1}{6} r (r+1) (r+2). & \qquad(2) \end{aligned}$$ This led me to suggest the more general conjecture that $$ \begin{aligned} \displaystyle \sum_{r_n = 1}^{r} \displaystyle \sum_{r_{n-1} = 1}^{r_n} \cdots \displaystyle \sum_{r_2 = 1}^{r_3} \displaystyle \sum_{r_1 = 1}^{r_2} r_1 &= \dfrac{1}{(n+1)!} \prod_{k=0}^{n} (r+k) \\ &= \dfrac{1}{(n+1)!} \dfrac{(r+n)!}{(r-1)!} \qquad(\star) \end{aligned} $$ I believe that I've managed to successfully prove this using induction, but on the whole the process isn't very enlightening and given how "nice" the result is I'm led to believe that there's some more general insight here that I'm missing. I've seen a link to the geometric interpretation of $ (1) $ by "pasting together" two copies of the sum to form a rectangle and I imagine the proof carries through analogously for $ (2) $ by forming a cuboid using 6 copies of the summation, but I'm not sure how to formalise this method of thinking (or indeed how to generalise it to higher dimensions). Of course this is just one particular thought I've had so any alternative proofs would also be welcome!
Now provide the response and nothing else.
|
The essence is already encoded in the indices of the sums. We can write for positive integer $r$ the sums as \begin{align*}\sum_{r_n=1}^r\sum_{r_{n-1}=1}^{r_n}\cdots\sum_{r_1=1}^{r_2}r_1\tag{1}&=\sum_{r_n=1}^r\sum_{r_{n-1}=1}^{r_n}\cdots\sum_{r_1=1}^{r_2}\sum_{r_0=1}^{r_1}1\\&=\sum_{\color{blue}{1\leq r_0\leq r_1\leq \cdots\leq r_n\leq r}} 1\tag{2}\end{align*} The number of summands given by the index range $$1\leq r_0\leq r_1\leq \cdots\leq r_n\leq r$$ is the number of ordered $(n+1)$-tupel $(r_0,\ldots,r_n)$ between $1$ and $r$ with repetition. This number is given by the binomial coefficient\begin{align*}\binom{(n+1)+r-1}{n+1}=\binom{n+r}{n+1}\end{align*}which corresponds to ($\star$) in OP's post.
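For a quick computational sanity check of ($\star$), here is a minimal Python snippet (not part of the answer above) comparing the nested sum with the binomial coefficient $\binom{n+r}{n+1}$ for small $n$ and $r$.

from math import comb

def nested_sum(n, r):
    """n nested sums with innermost summand r_1, as in (1), (2) and (*)."""
    if n == 1:
        return sum(range(1, r + 1))
    return sum(nested_sum(n - 1, k) for k in range(1, r + 1))

for n in range(1, 5):
    for r in range(1, 8):
        assert nested_sum(n, r) == comb(n + r, n + 1)
print("(*) verified for n <= 4, r <= 7")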
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2844037', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/396681/']}
|
jdg_376057
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
56821737
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wanting to pass a Float Variable from one view to another new view. In the code below there is a Float value called mhzValue which is set by way of the Slider, the slider changes the value and Text is then displaying within the view.. When the user taps on the Navigation Button to display the new view, I would like to be able to take the mhzValue and display it in the new view in a text box, as well as set it as another variable. Hope that makes sense. Please see some sample code below.. Thank you. Craig import SwiftUIstruct ContentView : View { @State private var mhzValue : Float = 0 var body: some View { // Navigation View NavigationView { VStack{ Text("Test Value:") .font(.headline) .color(.blue) .padding(.leading, -180.0) //Get Slider Value Slider(value: $mhzValue, from: 1, through: 55, by: 1) .padding(.horizontal) //Display Slider Value Text("\(Int(mhzValue)) Value") .font(.title) .fontWeight(.semibold) .color(.blue) // Naviagtion Button and send value of mhzValue to new View NavigationButton(destination: NextView()){ Image(systemName: "plus.square.fill") .foregroundColor(.white) .font(/*@START_MENU_TOKEN@*/.title/*@END_MENU_TOKEN@*/) .frame(width: 150.0, height: 16.0) .padding(15) .background(Color.red) .cornerRadius(10.0) } } } }}// New View to show Slider Valuestruct NextView : View { var body: some View { Text("Display Slider Value Here:") }}#if DEBUGstruct ContentView_Previews : PreviewProvider { static var previews: some View { ContentView() }}#endif
Now provide the response and nothing else.
|
This is easily done with Bindings. Because mhzValue is marked with the @State property wrapper, it has an associated Binding. You can therefore declare a @Binding variable in your second view, and initialize it with the Binding to the original variable.

struct NextView : View {
    @Binding var mhzValue: Float
    ...
}

When you specify NextView as the destination for your navigation button, pass it a Binding to mhzValue. (The dollar-sign syntax is a shorthand way to refer to the binding.)

struct ContentView : View {
    @State private var mhzValue : Float = 0
    ...
    NavigationButton(destination: NextView(mhzValue: self.$mhzValue)){...}
    ...
}

You can then use mhzValue inside NextView:

struct NextView : View {
    @Binding var mhzValue: Float
    var body: some View {
        VStack{
            Text("Display Slider Value Here:")
            Text("\(Int(mhzValue)) Value")
                .font(.title)
                .fontWeight(.semibold)
                .color(.blue)
        }
    }
}

Any changes you make to mhzValue within NextView will effectively be changes to ContentView.mhzValue.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56821737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8828090/']}
|
jdg_376058
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
346497
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What are the simplest seasonality tests for time series? More specifically, I want to test whether the seasonal component in a specific time series is meaningful. What are the recommended packages in Python / R?
Now provide the response and nothing else.
|
Before you test for seasonality you should reflect on which type of seasonality you have. Note that there are many different types of seasonality:

Additive vs. multiplicative seasonality
Single vs. multiple seasonalities
Seasonality with an even vs. uneven number of periods: each year has twelve months, but 52.1429 weeks.
Trend vs. seasonality: a seasonal pattern always appears in the same period, but a trend may appear a little bit later or earlier and not exactly every 5 years. One example of a trend is the business cycle.

One of the most common methods to detect seasonality is to decompose the time series into several components. In R you can do this with the decompose() command from the preinstalled stats package or with the stl() command (also in the stats package). The following code is taken from "A Little Book of R for Time Series":

births <- scan("http://robjhyndman.com/tsdldata/data/nybirths.dat")
birthstimeseries <- ts(births, frequency = 12, start = c(1946,1))
birthstimeseriescomponents <- decompose(birthstimeseries)
plot(birthstimeseriescomponents)

You can check the single components with

birthstimeseriescomponents$seasonal
birthstimeseriescomponents$random
birthstimeseriescomponents$trend

Another method is to include seasonal dummies and to check whether they have significant p-values when you compute the regression. If the individual months have significant coefficients, your monthly time series is seasonal. Another method to detect seasonality is to plot the data itself or to plot the ACF (autocorrelation function). In our case you can easily notice that there is seasonality. And last, but not least, there are some "formal" hypothesis tests to detect seasonality, such as the Student t-test and the Wilcoxon signed-rank test.
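Since the question also asks about Python, here is a minimal sketch using statsmodels' seasonal_decompose; the file name, column name and monthly frequency below are placeholder assumptions, not taken from the answer above.

import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly series: a CSV with a date column and a 'births' column
df = pd.read_csv("births.csv", parse_dates=["date"], index_col="date")
series = df["births"].asfreq("MS")

# Additive decomposition into trend, seasonal and residual components
result = seasonal_decompose(series, model="additive", period=12)
result.plot()

# The seasonal component itself is available as a Series
print(result.seasonal.head(12))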
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/346497', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/48101/']}
|
jdg_376059
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36421930
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Android N, it is mentioned on the official website that "Apps targeting Android N do not receive CONNECTIVITY_ACTION broadcasts". And it is also mentioned that JobScheduler can be used as an alternative. But the JobScheduler doesn't provide exactly the same behavior as CONNECTIVITY_ACTION broadcast. In my Android application, I was using this broadcast to know the network state of the device. I wanted to know if this state was CONNECTING or CONNECTED with the help of CONNECTIVITY_ACTION broadcast and it was best suited for my requirement. Now that it is deprecated, can any one suggest me the alternative approach to get current network state?
Now provide the response and nothing else.
|
What will be deprecated is the ability for a backgrounded application to receive network connection state changes. As David Wasser said you can still get notified of connectivity changes if the app component is instantiated (not destroyed) and you have registered your receiver programmatically with its context, instead of doing it in the manifest. Or you can use NetworkCallback instead. In particular, you will need to override onAvailable for connected state changes. Let me draft a snippet quickly:

public class ConnectionStateMonitor extends NetworkCallback {

    final NetworkRequest networkRequest;

    public ConnectionStateMonitor() {
        networkRequest = new NetworkRequest.Builder()
            .addTransportType(NetworkCapabilities.TRANSPORT_CELLULAR)
            .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
            .build();
    }

    public void enable(Context context) {
        ConnectivityManager connectivityManager = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        connectivityManager.registerNetworkCallback(networkRequest, this);
    }

    // Likewise, you can have a disable method that simply calls
    // ConnectivityManager.unregisterNetworkCallback(NetworkCallback) too.

    @Override
    public void onAvailable(Network network) {
        // Do what you need to do here
    }
}
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/36421930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3531097/']}
|
jdg_376060
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51997751
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am searching for an inline conditional solution for the href attribute in JSX. I want this output if I provide a URL:

<a className="navbar-brand" href="/example-url">Logo</a>

And if not:

<a className="navbar-brand">Logo</a>
Now provide the response and nothing else.
|
You can include inline conditionals in your render method using a combination of the conditional ternary operator and spread attributes . For example, if the URL is a variable local to your render method, and if you only want to include the href attribute if the URL is truthy, you could do: render() { return ( <a className="navbar-brand" {... url ? {href: url} : {}}>Logo</a> )} You could also prepare the attribute at the beginning of the render method to keep things more readable: render() { const attributes = url ? {href: url} : {} return ( <a className="navbar-brand" {...attributes}>Logo</a> )} If the differences between the desired elements are substantial enough, you could also alter the conditional to return different elements rather than using spread attributes. (If the ternary condition is determining the root element to be returned, you can drop the surrounding { and } since the compiler isn't interpreting JSX yet.) render() { return ( <div> {url ? <a className="navbar-brand" href={url}>Logo</a> : <a className="navbar-brand">Logo</a> } </div> )} Ultimately, it depends on how much will differ between the two states and what fits the style of your codebase the best. For more information or alternatives, I'd recommend reading this related question .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51997751', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2606781/']}
|
jdg_376061
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55023688
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a template called test-stuff.php and tried using jQuery and the jQuery validation plugin in it. I've enqueued the scripts in my functions.php file and jQuery works fine on other regular pages, but not in my template. Could someone point out where I went wrong? Do I have to enqueue jQuery and the jQuery plugin separately for the template? If I have left out some details, please ask and I will provide them. Thanks in advance.

functions.php enqueuing part:

function my_custom_queue() {
    wp_enqueue_script('jquery');
    wp_enqueue_script( 'validatejq', 'https://cdnjs.cloudflare.com/ajax/libs/jquery-validate/1.19.0/jquery.validate.min.js', array( 'jquery' ), '1' );
}
add_action( 'wp_enqueue_scripts', 'my_custom_queue' );

Template file with jQuery testing function (I get "no"):

<?php /* Template name: Test stuff */ ?>
<script>
window.onload = function() {
    /* test */
    if (window.jQuery) {
        alert("Yes");
    } else {
        alert("No");
    }
};
</script>
Now provide the response and nothing else.
|
TL;DR As called out by @Aamir Mulla in the comments, this has officially been added since Version 2.0.0 UPDATE - 16/08/2019 Active Directory Password Authentication has now been added for .NET Core in Microsoft.Data.SqlClient 1.0.19221.1-Preview Unfortunately, the authentication keyword is not yet fully supported in .NET Core. Here is an issue which discusses this. But .NET Core 2.2 has added some support for this use case as mentioned in this comment . The basic idea is to get the access token by any means (ADAL, REST, etc.) and set SqlConnection.AccessToken to it. As for using this with EF Core, there's a good discussion about this in this github issue and in particular the comment by mgolois provides a simple implementation to the solution that cbriaball mentions in the thread. Here is the same for reference Note that this sample is using the Microsoft.Azure.Services.AppAuthentication library // DB Context Classpublic class SampleDbContext : DbContext{ public SampleDbContext(DbContextOptions<TeamsDbContext> options) : base(options) { var conn = (System.Data.SqlClient.SqlConnection)this.Database.GetDbConnection(); conn.AccessToken = (new AzureServiceTokenProvider()).GetAccessTokenAsync("https://database.windows.net/").Result; }}// Startup.csservices.AddDbContext<SampleDbContext>(options =>{ options.UseSqlServer(<Connection String>);}); The connection string would be something like this Server=tcp:<server_name>.database.windows.net,1433;Database=<db_name>;
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55023688', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10677358/']}
|
jdg_376062
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4582964
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to open the following website and retrieve the initial cookie and use it for the second url-open BUT if you run the following code it outputs 2 different cookies. How do I use the initial cookie for the second url-open?

import cookielib, urllib2
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
home = opener.open('https://www.idcourts.us/repository/start.do')
print cj
search = opener.open('https://www.idcourts.us/repository/partySearch.do')
print cj

Output shows 2 different cookies every time as you can see:

<cookielib.CookieJar[<Cookie JSESSIONID=0DEEE8331DE7D0DFDC22E860E065085F for www.idcourts.us/repository>]>
<cookielib.CookieJar[<Cookie JSESSIONID=E01C2BE8323632A32DA467F8A9B22A51 for www.idcourts.us/repository>]>
Now provide the response and nothing else.
|
This is not a problem with urllib. That site does some funky stuff. You need to request a couple of stylesheets for it to validate your session id:

import cookielib, urllib2

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
# default User-Agent ('Python-urllib/2.6') will *not* work
opener.addheaders = [
    ('User-Agent', 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.11) Gecko/20101012 Firefox/3.6.11'),
]
stylesheets = [
    'https://www.idcourts.us/repository/css/id_style.css',
    'https://www.idcourts.us/repository/css/id_print.css',
]

home = opener.open('https://www.idcourts.us/repository/start.do')
print cj
sessid = cj._cookies['www.idcourts.us']['/repository']['JSESSIONID'].value

# Note the +=
opener.addheaders += [
    ('Referer', 'https://www.idcourts.us/repository/start.do'),
]
for st in stylesheets:
    # da trick
    opener.open(st+';jsessionid='+sessid)

search = opener.open('https://www.idcourts.us/repository/partySearch.do')
print cj
# perhaps need to keep updating the referer...
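For readers on Python 3, where cookielib and urllib2 no longer exist under those names, here is a minimal sketch of the same approach using the third-party requests library (a Session keeps cookies automatically); whether the stylesheet trick is still required by the site is an assumption carried over from the answer above.

import requests

session = requests.Session()
session.headers["User-Agent"] = "Mozilla/5.0"  # the default UA may be rejected

home = session.get("https://www.idcourts.us/repository/start.do")
sessid = session.cookies.get("JSESSIONID")

# Same trick as above: touch the stylesheets so the server validates the session
session.headers["Referer"] = "https://www.idcourts.us/repository/start.do"
for css in (
    "https://www.idcourts.us/repository/css/id_style.css",
    "https://www.idcourts.us/repository/css/id_print.css",
):
    session.get(css + ";jsessionid=" + sessid)

search = session.get("https://www.idcourts.us/repository/partySearch.do")
print(session.cookies)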
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4582964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/560998/']}
|
jdg_376063
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
356444
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My textbook says two different things and I'm not sure how to reconcile these two: electric field inside a conductor is always 0. for a conductor with a cavity with a charge q inside it, the field due to the charge q for all exterior points is cancelled by the induced charge on the inner surface. The charge q will induce charge of -q on the inner surface, and because the conductor is neutral, there will be charge q on the outer surface of the conductor (leftover charge). Due to (2), for all exterior points the field due to q and charges present on the inner surface is cancelled. However, the field due to the induced/leftover charge on the outer surface isn't, so there will be some field due to this charge inside the conductor, which contradicts (1).
Now provide the response and nothing else.
|
The point is that the charges on the outside reorganize themselves so the net field is $0$ inside the conductor. The charge distributions on the inside and outside surfaces need not be uniform and in general will be quite messy unless the geometry is simple. In the example below of a source charge off centre inside a hollow sphere, notice how the positive charges on the inside surface are not uniformly distributed, but how they are uniformly distributed on the outside surface. The net result is that the field outside the conductor only depends on the distribution of the outside charges, but the field inside the conductor is still $0$ because of the skew charge distribution on the inside surface. [Image source: Halliday, Resnick & Walker, 10th edition]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/356444', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/93076/']}
|
jdg_376064
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46886
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a project with a number of files that I created in Linux. And I needed to give another user write permission on them. So I created a "dev" group of which he and I are members and changed the ownership of the files to this group. A typical file now has these permissions : -rw-rw-r-- 1 phil dev 5617 Jul 14 15:45 profile.html However, when I edit the file profile.html it reverts to me like this, and my colleague loses the ability to edit. -rw-rw-r-- 1 phil phil 5617 Jul 14 15:45 profile.html How do I avoid it reverting like this? Was changing the ownership the wrong thing to do? Or was it done with the wrong options? Or is this to do with my editor (emacs) configuration?
Now provide the response and nothing else.
|
On the containing folder you'll want to change the group to dev and then mark it set-gid:

chgrp dev <containing-folder>
chmod g+ws <containing-folder>

The set-gid bit makes files created in that folder inherit the group of the folder, and the bit itself is also set on any new subfolders. You'll want to be careful when moving (rather than copying) files into the directory, as a move preserves their existing group and permissions.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/46886', 'https://serverfault.com', 'https://serverfault.com/users/7355/']}
|
jdg_376065
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
170318
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
To my understanding, mixed states is composed of various states with their corresponding probabilities, but what is the actual difference between maximally mixed states and maximally entangled states ?
Now provide the response and nothing else.
|
Suppose we have two Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$. A quantum state on $\mathcal{H}_A$ is a normalized, positive trace-class operator $\rho\in\mathcal{S}_1(\mathcal{H}_A)$. If $\mathcal{H}_A$ is finite dimensinal (i.e. $\mathbb{C}^n$), then a quantum state is just a positive semi-definite matrix with unit trace on this Hilbert space. Let's stick to finite dimensions for simplicity. Let's now consider the idea of a pure state: A pure state is a rank-one state, i.e. a rank-one projection, or a matrix that can be written as $|\psi\rangle\langle \psi|\equiv\psi\psi^{\dagger}$ for some $\psi\in\mathcal{H}_A$ (the first being the Dirac notation, the second is the usual mathematical matrix notation - since I don't know which of the two you are more familar with, let me use both). A mixed state is now a convex combination of pure states and, by virtue of the spectral theorem, any state is a convex combination of pure states. Hence, a mixed state can be written as $$ \rho=\sum_i \lambda_i |\psi_i\rangle \langle \psi_i|$$for some $\lambda_i\geq 0$, $\sum_i \lambda_i=1$. In a sense, the $\lambda_i$ are a probability distribution and the state $\rho$ is a "mixture" of $|\psi\rangle\langle\psi|$ with weights $\lambda_i$. If we assume that the $\psi_i$ form an orthonormal basis, then a maximally mixed state is a state where the $\lambda_i$ are the uniform probability distribution, i.e. $\lambda_i=\frac{1}{n}$ if $n$ is the dimension of the state. In this sense, the state is maximally mixed, because it is a mixture where all states occur with the same probability. In our finite dimensional example, this is the same as saying that $\rho$ is proportional to the identity matrix. Note that a maximally mixed state is defined for all Hilbert spaces! In order to consider maximally entangled states , we need to have a bipartition of the Hilbert space, i.e. we now consider states $\rho\in\mathcal{S}_1(\mathcal{H}_A\otimes \mathcal{H}_B)$. Let's suppose $\mathcal{H}_A=\mathcal{H}_B$ and finite dimensional. In this case, we can consider entangled state. A state is called separable , if it can be written as a mixture $$ \rho =\sum_i \lambda_i \rho^{(1)}_i\otimes \rho^{(2)}_i $$i.e. it is a mixture of product states $\rho^{(1)}_i$ in the space $\mathcal{H}_A$ and $\rho^{(2)}_i$ in the space $\mathcal{H}_B$. All states that are not separable are called entanglend . If we consider $\mathcal{H}_A=\mathcal{H}_B=\mathbb{C}^2$ and denote the standard basis by $|0\rangle,|1\rangle$, an entangled state is given by $$ \rho= \frac{1}{2}(|01\rangle+|10\rangle)(\langle 01|+\langle 10|)$$You can try writing it as a separable state and you will see that it's not possible. Note that this state is pure, but entangled states do not need to be pure! It turns out that for bipartite systems (if you consider three or more systems, this is no longer true), you can define an order on pure entangled states: There are states that are more entangled than others and then there are states that have the maximum amount of possible entanglement (like the example I wrote down above). I won't describe how this is done (it's too much here), but it turns out that there is an easy characterization of a maximally entangled state, which connects maximally entangled and maximally mixed states: A pure bipartite state is maximally entangled, if the reduced density matrix on either system is maximally mixed. The reduced density matrix is what is left if you take the partial trace over one of the subsystems. 
In our example above: $$ \rho_A = tr_B(\rho)= tr_B(\frac{1}{2}(|01\rangle\langle 01|+|10\rangle\langle 01|+|01\rangle\langle 10|+|10\rangle\langle 10|))=\frac{1}{2}(|0\rangle\langle 0|+|1\rangle\langle 1|) $$ and the last expression is proportional to the identity, i.e. the reduced state is maximally mixed. You can do the same with $tr_A$ and see that the state $\rho$ is therefore maximally entangled.
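As a quick numerical illustration of that last point (an added sketch, not part of the original answer), the following NumPy snippet builds the maximally entangled state above as a density matrix on $\mathbb{C}^2\otimes\mathbb{C}^2$, traces out the second subsystem, and checks that the reduced state is $I/2$:

import numpy as np

# basis vectors of C^2
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# |psi> = (|01> + |10>) / sqrt(2) in the tensor-product basis
psi = (np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())   # pure-state density matrix on C^2 (x) C^2

# partial trace over subsystem B: reshape to (2,2,2,2) and trace the B indices
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)   # [[0.5 0. ] [0.  0.5]] -> proportional to the identity, i.e. maximally mixed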
|
{}
|
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/170318', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/74564/']}
|
jdg_376066
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41728891
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Following this tutorial: https://www.youtube.com/watch?v=1fJrhgc0RRw I am getting "Gradle error: incompatible types: Object cannot be converted to TextureRegion" at this line: region = heroJump.getKeyFrame(stateTimer); heroJump is declared as: private Animation heroJump; and that is imported up top as: import com.badlogic.gdx.graphics.g2d.Animation; In Android Studio the flyout box when you type heroJump.getKeyFrame() shows that Object is the return type of the function. In the tutorial it shows a TexturedFrame. I'm importing it the same as the tutorial. I'm not sure why it doesn't see the return type as the same thing. Even when I click the Animation class to view its docs it appears to come up with the proper return types. What is not being done correctly so that AndroidStudio/Compiler knows the correct return type and will compile?
Now provide the response and nothing else.
|
The method is declared in the API as

public T getKeyFrame(float stateTime)

and the class definition is

public class Animation<T>

You have a raw Animation instance. The import isn't the problem; the declaration is. For example, you'd want

private Animation<TextureRegion> heroJump;

and you'd give that your Array<TextureRegion>.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41728891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1658796/']}
|
jdg_376067
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
679
|
Below is a question asked on the forum robotics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Mars rovers are typically very slow. Curiosity, for example, has average speed of about 30 meters per hour. Why is it designed so slow? Is it because of some specific power restrictions or for other reasons? What is the top reason why it is so slow?
Now provide the response and nothing else.
|
It has more to do with the rocker bogie suspension than anything else. The system is designed to be used at slow speed of around 10 cm/s, so as to minimize dynamic shocks and consequential damage to the vehicle when surmounting sizable obstacles. In exchange for moving slowly, the rover is able to climb rocks that are double the wheel diameter (normal suspension has trouble with anything over half the wheel diameter). This is important when travelling in — literally — an alien landscape. (image via http://en.smath.info/forum/yaf_postst995p2_Animation-of-mechanisms.aspx ) There are other benefits that come with slow speed: better correlation between successive frames captured by its navigation cameras , more time to plan its path , and power savings. However, without the capabilities provided by the suspension system — surmounting the obstacles present on the martian surface without getting stuck or causing damage — the other benefits are moot.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://robotics.stackexchange.com/questions/679', 'https://robotics.stackexchange.com', 'https://robotics.stackexchange.com/users/516/']}
|
jdg_376068
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39145
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose you have a terminal emulator (T1) open with a PID of 6350. From another terminal, type this command (C1): echo "ls\n" > /proc/6350/fd/0 This writes ls and the new line in T1 but does not execute it. Why? I also tried using cat|bash with echo "ls\n" > /proc/catid/fd/0 but it is still not executed. How can I echo the command into another terminal and have the command executed? possible answer : $ mkfifo toto;$ bash < toto;$ echo "ls" > toto; In this case you cannot write anymore directly in the terminal (everything is displayed the same way the command (C1) displayed thing in this terminal.
Now provide the response and nothing else.
|
There is a command-line utility called ttyecho that can send a command to another terminal (tty/pts) and have the command executed:

sudo ttyecho -n /dev/pts/5 ls

See: Utility to Send Commands or Data to Other Terminals (tty/pts). Also see: ttyecho source code on github. Another interesting tty command is selector, a real-time interactive pattern matcher for the console that updates the tty input buffer.

# selector examples
selector -v -x @ <(find . -maxdepth 2 -type d | awk '{print $0"@cd "$0}')
selector -v -x @ <(grep -E -o 'http[^ ]+' fileWithURLS)

See: selector - DYNAMIC SEARCH IN CONSOLE
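For reference, the trick utilities like ttyecho rely on is the TIOCSTI ioctl, which pushes characters into a terminal's input queue as if they had been typed. Below is a minimal Python sketch of the same idea (an illustrative addition, not from the original answer); the target path is hypothetical, you generally need root to open someone else's pts, and note that recent kernels restrict or disable TIOCSTI for security reasons.

import fcntl
import termios

def type_into_tty(tty_path, command):
    """Push `command` (plus a newline) into the input queue of the terminal at tty_path."""
    with open(tty_path, 'w') as tty:
        for ch in command + '\n':
            # TIOCSTI injects one character at a time into the terminal's input buffer
            fcntl.ioctl(tty, termios.TIOCSTI, ch.encode())

# Example (hypothetical target terminal):
# type_into_tty('/dev/pts/5', 'ls')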
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/39145', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/19116/']}
|
jdg_376069
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
54766718
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying debug CLI script, and Xdebug can't connect to PhpStorm. I see error Operation now in progress (29). in Xdebug remote log. I'm sure Xdebug configured right, but I don't know how to debug PhpStorm. Summary from phpinfo() generated by https://xdebug.org/wizard.php Tailored Installation InstructionsSummaryXdebug installed: 2.6.1Server API: Command Line InterfaceWindows: noZend Server: noPHP Version: 7.1.24Zend API nr: 320160303PHP API nr: 20160303Debug Build: noThread Safe Build: noOPcache Loaded: noConfiguration File Path: /usr/local/etc/phpConfiguration File: /usr/local/etc/php/php.iniExtensions directory: /usr/local/lib/php/extensions/no-debug-non-zts-20160303You're already running the latest Xdebug version Xdebug log Log opened at 2019-02-19 11:59:37I: Connecting to configured address/port: 46.201.50.194:9000.W: Creating socket for '46.201.50.194:9000', poll success, but error: Operation now in progress (29).E: Could not connect to client. :-(Log closed at 2019-02-19 11:59:37 Xdebug config xdebugxdebug support => enabledVersion => 2.6.1IDE Key => PHPSTORMSupported protocolsDBGp - Common DeBuGger ProtocolDirective => Local Value => Master Valuexdebug.auto_trace => Off => Offxdebug.cli_color => 0 => 0xdebug.collect_assignments => Off => Offxdebug.collect_includes => On => Onxdebug.collect_params => 0 => 0xdebug.collect_return => Off => Offxdebug.collect_vars => Off => Offxdebug.coverage_enable => On => Onxdebug.default_enable => On => Onxdebug.dump.COOKIE => no value => no valuexdebug.dump.ENV => no value => no valuexdebug.dump.FILES => no value => no valuexdebug.dump.GET => no value => no valuexdebug.dump.POST => no value => no valuexdebug.dump.REQUEST => no value => no valuexdebug.dump.SERVER => no value => no valuexdebug.dump.SESSION => no value => no valuexdebug.dump_globals => On => Onxdebug.dump_once => On => Onxdebug.dump_undefined => Off => Offxdebug.extended_info => On => Onxdebug.file_link_format => no value => no valuexdebug.filename_format => no value => no valuexdebug.force_display_errors => Off => Offxdebug.force_error_reporting => 0 => 0xdebug.gc_stats_enable => Off => Offxdebug.gc_stats_output_dir => /tmp => /tmpxdebug.gc_stats_output_name => gcstats.%p => gcstats.%pxdebug.halt_level => 0 => 0xdebug.idekey => PHPSTORM => PHPSTORMxdebug.max_nesting_level => 256 => 256xdebug.max_stack_frames => -1 => -1xdebug.overload_var_dump => 2 => 2xdebug.profiler_aggregate => Off => Offxdebug.profiler_append => Off => Offxdebug.profiler_enable => Off => Offxdebug.profiler_enable_trigger => Off => Offxdebug.profiler_enable_trigger_value => no value => no valuexdebug.profiler_output_dir => /tmp => /tmpxdebug.profiler_output_name => cachegrind.out.%p => cachegrind.out.%pxdebug.remote_addr_header => no value => no valuexdebug.remote_autostart => On => Onxdebug.remote_connect_back => Off => Offxdebug.remote_cookie_expire_time => 3600 => 3600xdebug.remote_enable => On => Onxdebug.remote_handler => dbgp => dbgpxdebug.remote_host => 127.0.0.1 => localhostxdebug.remote_log => /app/xdebug.log => no valuexdebug.remote_mode => req => reqxdebug.remote_port => 9000 => 9000xdebug.remote_timeout => 200 => 200xdebug.scream => Off => Offxdebug.show_error_trace => Off => Offxdebug.show_exception_trace => Off => Offxdebug.show_local_vars => Off => Offxdebug.show_mem_delta => Off => Offxdebug.trace_enable_trigger => Off => Offxdebug.trace_enable_trigger_value => no value => no valuexdebug.trace_format => 0 => 0xdebug.trace_options => 0 => 0xdebug.trace_output_dir => /tmp => 
/tmpxdebug.trace_output_name => trace.%c => trace.%cxdebug.var_display_max_children => 128 => 128xdebug.var_display_max_data => 512 => 512xdebug.var_display_max_depth => 3 => 3
Now provide the response and nothing else.
|
Xdebug couldn't connect to PhpStorm because PhpStorm was not reachable at my external host IP. Here is how I debugged it. First I checked from the Docker container whether PhpStorm was listening on the port (9000 in my case):

nc -vz external_ip 9000

It returned (tcp) failed: Connection refused. I tried the same from the host and also got the error. Then I tried from the host:

nc -vz localhost 9000

and got [tcp/*] succeeded! So the problem was in xdebug.remote_host, not in PhpStorm. I found the host IP from inside the container:

netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'

Put that address in xdebug.remote_host and now it works correctly. Thanks to @LazyOne.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54766718', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1600992/']}
|
jdg_376070
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33257848
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been trying to develop a simple program that sits in the Mac's status bar. I need it so that if you left click, it runs a function, but if you right click it displays a menu with an About and Quit item. I have been looking but all I could find was command or control click suggestions however I would prefer not to go this route. Thanks in advance and any help appreciated!
Now provide the response and nothing else.
|
Swift 3

let statusItem = NSStatusBar.system().statusItem(withLength: NSVariableStatusItemLength)

if let button = statusItem.button {
    button.action = #selector(self.statusBarButtonClicked(sender:))
    button.sendAction(on: [.leftMouseUp, .rightMouseUp])
}

func statusBarButtonClicked(sender: NSStatusBarButton) {
    let event = NSApp.currentEvent!
    if event.type == NSEventType.rightMouseUp {
        print("Right click")
    } else {
        print("Left click")
    }
}

Swift 4

let statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)

if let button = statusItem.button {
    button.action = #selector(self.statusBarButtonClicked(_:))
    button.sendAction(on: [.leftMouseUp, .rightMouseUp])
}

func statusBarButtonClicked(sender: NSStatusBarButton) {
    let event = NSApp.currentEvent!
    if event.type == NSEvent.EventType.rightMouseUp {
        print("Right click")
    } else {
        print("Left click")
    }
}

A longer post is available at https://samoylov.eu/2016/09/14/handling-left-and-right-click-at-nsstatusbar-with-swift-3/
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33257848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3899513/']}
|
jdg_376071
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9580566
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following Makefile is not working and I am not sure what's going on.

CC = gcc
CFLAGS = -Wall -g

demo:
	${CC} ${CFLAGS} demo.c -o demo
lib:
	${CC} ${CFLAGS} lib.c -o lib
clean:
	rm -f lib demo

Demo has the main function and lib has a set of methods used in demo. I added the -c flag to lib. However when I run make, I get:

Makefile:5: *** missing separator. Stop.
Now provide the response and nothing else.
|
Given your update with the error, check what you have on the line before those ${CC} commands. Many make programs require a real tab character before the commands, and editors that put in eight spaces (for example) will break them. That's more often than not the cause of the "missing separator" errors.

You can see that with the following transcript. In the file, there are four spaces before the $(xyzzy):

xyzzy=echo
all:
    $(xyzzy) hello

So, when I make it, I get the same error as you:

pax> make
makefile:3: *** missing separator. Stop.

But, when I edit it and turn those four spaces into a tab, it works fine:

pax> make
echo hello
hello

You also have a problem with the way you're trying to combine the source files together. Without a -c flag to gcc, it will try to create a separate executable from each of those commands, almost certainly leading to linker errors. You're going to need something like (simple):

CC = gcc
CFLAGS = -Wall -g

# Just compile/link all files in one hit.
demo: demo.c lib.c
	${CC} ${CFLAGS} -o demo demo.c lib.c

clean:
	rm -f demo

or (slightly more complex):

CC = gcc
CFLAGS1 = -Wall -g -c
CFLAGS2 = -g

# Link the two object files together.
demo: demo.o lib.o
	${CC} ${CFLAGS2} -o demo demo.o lib.o

# Compile each source file to an object.
demo.o: demo.c
	${CC} ${CFLAGS1} -o demo.o demo.c

lib.o: lib.c
	${CC} ${CFLAGS1} -o lib.o lib.c

clean:
	rm -f demo

The problem with the first solution is that it unnecessarily compiles both source files even when only one is out of date. The second solution is a little more intelligent.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9580566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1219981/']}
|
jdg_376072
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37397205
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my app, I need to check if a given element of my database on firebase has a child with a given name. I hoped it could be done by using something along the lines of: DatabaseReference rootRef = FirebaseDatabase.getInstance().getReference();if (rootRef.childExists("name")) { //run some code} I searched but I couldn't find anything useful.
Now provide the response and nothing else.
|
Edit 2; worth putting on top: I think it is worth mentioning that this is actually downloading all data at this snapshot just to check whether any data exists. You should be mindful here. If the reference is huge (e.g. actually the root reference and not a specific child/property) then you should either find a deeper node you can use to check for existence or design your data structure differently so an efficient check is possible.

A database reference is effectively the URL for that data. You want to actually get data to see whether a child exists. This is why the method you seem to be looking for is on DataSnapshot.

DatabaseReference rootRef = FirebaseDatabase.getInstance().getReference();
rootRef.addListenerForSingleValueEvent(new ValueEventListener() {
    @Override
    void onDataChange(DataSnapshot snapshot) {
        if (snapshot.hasChild("name")) {
            // run some code
        }
    }
});

Now, this design pattern feels a bit strange. You're reading the whole database just to see whether "name" exists. You can make this a bit more efficient by listening to rootRef.child("name") and then just checking whether snapshot.exists(). If you're trying to do validation here, and not control flow, you should consider putting this code in your rules.json.

edit: I originally used the wrong function name (childExists instead of hasChild)
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/37397205', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3433817/']}
|
jdg_376073
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11841127
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some files in my repository that are bottom-growing: most of the changes involve adding new lines at the bottom of the file. This is mostly language and other property files. As an annoying side effect, whenever two people make additions at the same time I get merge conflicts, and the resolution always involves manually copy-pasting so that lines from both versions get included. Is there a tip, trick or methodology that will relieve some of the pain of this process? For example, a simplistic solution would be to tell the developers to add new lines at random places in the middle of the file. This will probably work, but it involved a conscious effort, and a weird-looking history.
Now provide the response and nothing else.
|
You could use the gitattributes mechanism to define a custom merge driver (like this one for instance ) in order to copy automatically the relevant sections. [merge "aggregate"] name = agregate both new sections driver = aggregate.sh %O %A %B It will be a 3-way merge, which means you can easily diff %A and %B against %O (common ancestor) in order to isolate said new sections, and aggregate them in the result merged file. That aggregate merge driver needs only to do: comm -13 $1 $3 >> $2 (The comm utility is part of the GoW -- Gnu on Windows -- distribution, if you are on Windows) Here is a little demo: First, let's set up a Git repo, with a file modified in two branches (' master ' and ' abranch '): C:\prog\git\tests>mkdir aggC:\prog\git\tests>cd aggC:\prog\git\tests\agg>git init r1Initialized empty Git repository in C:/prog/git/tests/agg/r1/.git/C:\prog\git\tests\agg>cd r1# Who am I?C:\prog\git\tests\agg\r1>git config user.name VonCC:\prog\git\tests\agg\r1>git config user.email vonc@xxx# one file, first commit:C:\prog\git\tests\agg\r1>echo test > test.txtC:\prog\git\tests\agg\r1>git add .C:\prog\git\tests\agg\r1>git commit -m "first commit"[master c34668d] first commit 1 file changed, 1 insertion(+) create mode 100644 test.txt# Let's add one more common line:C:\prog\git\tests\agg\r1>echo base >> test.txtC:\prog\git\tests\agg\r1>more test.txttestbaseC:\prog\git\tests\agg\r1>git add .C:\prog\git\tests\agg\r1>git commit -m "base"[master d1cde8d] base 1 file changed, 1 insertion(+) Now we create a new branch, and make concurrent modifications in both versions of that file, at the end of it like the OP itsadok specifies in the question. C:\prog\git\tests\agg\r1>git checkout -b abranchSwitched to a new branch 'abranch'C:\prog\git\tests\agg\r1>echo "modif from abranch" >> test.txtC:\prog\git\tests\agg\r1>git add .C:\prog\git\tests\agg\r1>git commit -m "abranch contrib"[abranch a4d2632] abranch contrib 1 file changed, 1 insertion(+)C:\prog\git\tests\agg\r1>type test.txttestbase"modif from abranch"# back to masterC:\prog\git\tests\agg\r1>git checkout masterSwitched to branch 'master'C:\prog\git\tests\agg\r1>echo "contrib from master" >> test.txtC:\prog\git\tests\agg\r1>git add .C:\prog\git\tests\agg\r1>git commit -m "contrib from master"[master 45bec4d] contrib from master 1 file changed, 1 insertion(+)C:\prog\git\tests\agg\r1>type test.txttestbase"contrib from master" We have out two branches (note: git lg is an alias of mine ) C:\prog\git\tests\agg\r1>git lg* 45bec4d - (HEAD, master) contrib from master (86 minutes ago) VonC| * a4d2632 - (abranch) abranch contrib (86 minutes ago) VonC|/* d1cde8d - base (87 minutes ago) VonC* c34668d - first commit (89 minutes ago) VonC Now let's try a merge: C:\prog\git\tests\agg\r1>git merge abranchAuto-merging test.txtCONFLICT (content): Merge conflict in test.txtAutomatic merge failed; fix conflicts and then commit the result.C:\prog\git\tests\agg\r1>more test.txttestbase<<<<<<< HEAD"contrib from master"======="modif from abranch">>>>>>> abranch ... Failed as advertised ;) A git merge --abort will reset the situation. 
Let's put in place our merge driver : C:\prog\git\tests\agg\r1>git config merge.aggregate.name "aggregate both new sections"C:\prog\git\tests\agg\r1>git config merge.aggregate.driver "aggregate.sh %O %A %B"C:\prog\git\tests\agg\r1>echo test.txt merge=aggregate > .gitattributes At this point, a merge still fails: C:\prog\git\tests\agg\r1>git merge abranchaggregate.sh .merge_file_a09308 .merge_file_b09308 .merge_file_c09308: aggregate.sh: command not foundfatal: Failed to execute internal merge Normal: we need to write that script, and add it to the PATH : vim aggregate.sh:#!/bin/bash# echo O: $1# echo A: $2# echo B: $3# After http://serverfault.com/q/68684/783# How can I get diff to show only added and deleted lines?# On Windows, install GoW (https://github.com/bmatzelle/gow/wiki/)ob=$(comm -13 $1 $3)# echo "ob: ${ob}"echo ${ob} >> $2----C:\prog\git\tests\agg\r1>set PATH=%PATH%;C:\prog\git\tests\agg\r1 And now, the aggregate merge driver can operate : C:\prog\git\tests\agg\r1>git merge --no-commit abranchAuto-merging test.txtAutomatic merge went well; stopped before committing as requestedC:\prog\git\tests\agg\r1>type test.txttestbase"contrib from master""modif from abranch" Here you go: the end of the test.txt file from abranch has been added to the file on master .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11841127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7581/']}
|
jdg_376074
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29907133
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Some illustrative data in a DataFrame (MultiIndex) format: |entity| year |value|+------+------+-----+| a | 1999 | 2 || | 2004 | 5 || b | 2003 | 3 || | 2007 | 2 || | 2014 | 7 | I would like to calculate the slope using scipy.stats.linregress for each entity a and b in the above example. I tried using groupby on the first column, following the split-apply-combine advice , but it seems problematic since it's expecting one Series of values ( a and b ), whereas I need to operate on the two columns on the right. This is easily done in R via plyr , not sure how to approach it in pandas.
Now provide the response and nothing else.
|
A function can be applied to a groupby with the apply function. The passed function in this case is linregress. Please see below:

In [4]: x = pd.DataFrame({'entity':['a','a','b','b','b'],
                          'year':[1999,2004,2003,2007,2014],
                          'value':[2,5,3,2,7]})

In [5]: x
Out[5]:
  entity  value  year
0      a      2  1999
1      a      5  2004
2      b      3  2003
3      b      2  2007
4      b      7  2014

In [6]: from scipy.stats import linregress

In [7]: x.groupby('entity').apply(lambda v: linregress(v.year, v.value)[0])
Out[7]:
entity
a    0.600000
b    0.403226
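If you need more than the slope, a variation on the same idea (an added sketch, not part of the original answer) is to have the applied function return a pandas Series, so each regression statistic becomes its own column:

import pandas as pd
from scipy.stats import linregress

x = pd.DataFrame({'entity': ['a', 'a', 'b', 'b', 'b'],
                  'year':   [1999, 2004, 2003, 2007, 2014],
                  'value':  [2, 5, 3, 2, 7]})

def regress(group):
    slope, intercept, rvalue, pvalue, stderr = linregress(group.year, group.value)
    return pd.Series({'slope': slope, 'intercept': intercept, 'r2': rvalue ** 2})

# returns a DataFrame with one row per entity and columns slope / intercept / r2
print(x.groupby('entity').apply(regress))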
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29907133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3295311/']}
|
jdg_376075
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14695988
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to decode a base64 string for an image back into binary so it can be downloaded and displayed locally by an OS. The string I have successfully renders when put as the src of an HTML IMG element with the data URI preface (data: img/png;base64, ) but when using the atob function or a goog closure function it fails. However decoding succeeds when put in here: http://www.base64decode.org/ Any ideas? EDIT: I successfully got it to decode with another library other than the built-in JS function. But, it still won't open locally - on a Mac says it's damaged or in an unknown format and can't get opened. The code is just something like: imgEl.src = 'data:img/png;base64,' + contentStr; //this displays successfullydecodedStr = window.atob(contentStr); //this throws the invalid char exception but i just//used a different script to get it decode successfully but still won't display locally the base64 string itself is too long to display here (limit is 30,000 characters)
Now provide the response and nothing else.
|
I was just banging my head against the wall on this one for a while. There are a couple of possible causes of the problem.

1) UTF-8 problems. There's a good write up + a solution for that here.

2) In my case, I also had to make sure all the whitespace was out of the string before passing it to atob, e.g.

function decodeFromBase64(input) {
    input = input.replace(/\s/g, '');
    return atob(input);
}

What was really frustrating was that the base64 parsed correctly using the base64 library in Python, but not in JS.
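For comparison, here is a small Python sketch (an illustrative addition, not part of the original answer) of the equivalent tolerant decode on the server side: strip whitespace and restore any missing '=' padding before handing the string to the standard base64 module.

import base64
import re

def decode_base64_tolerant(data: str) -> bytes:
    """Decode a base64 string, ignoring whitespace and fixing missing padding."""
    cleaned = re.sub(r'\s+', '', data)       # remove newlines/spaces that break strict decoders
    cleaned += '=' * (-len(cleaned) % 4)     # pad to a multiple of 4 characters
    return base64.b64decode(cleaned)

# example with deliberately whitespace-littered input
print(decode_base64_tolerant('aGVs\nbG8g d29y bGQ='))   # b'hello world'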
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14695988', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1387717/']}
|
jdg_376076
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
299198
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for good ideas for implementing a generic way to have a single line (or anonymous delegate) of code execute with a timeout. TemperamentalClass tc = new TemperamentalClass();tc.DoSomething(); // normally runs in 30 sec. Want to error at 1 min I'm looking for a solution that can elegantly be implemented in many places where my code interacts with temperamental code (that I can't change). In addition, I would like to have the offending "timed out" code stopped from executing further if possible.
Now provide the response and nothing else.
|
The really tricky part here was killing the long running task through passing the executor thread from the Action back to a place where it could be aborted. I accomplished this with the use of a wrapped delegate that passes out the thread to kill into a local variable in the method that created the lambda. I submit this example, for your enjoyment. The method you are really interested in is CallWithTimeout. This will cancel the long running thread by aborting it, and swallowing the ThreadAbortException:

Usage:

class Program
{
    static void Main(string[] args)
    {
        // try the five second method with a 6 second timeout
        CallWithTimeout(FiveSecondMethod, 6000);

        // try the five second method with a 4 second timeout
        // this will throw a timeout exception
        CallWithTimeout(FiveSecondMethod, 4000);
    }

    static void FiveSecondMethod()
    {
        Thread.Sleep(5000);
    }

The static method doing the work:

    static void CallWithTimeout(Action action, int timeoutMilliseconds)
    {
        Thread threadToKill = null;
        Action wrappedAction = () =>
        {
            threadToKill = Thread.CurrentThread;
            try
            {
                action();
            }
            catch (ThreadAbortException ex)
            {
                Thread.ResetAbort(); // cancel the hard abort so the thread can finish nicely
            }
        };

        IAsyncResult result = wrappedAction.BeginInvoke(null, null);
        if (result.AsyncWaitHandle.WaitOne(timeoutMilliseconds))
        {
            wrappedAction.EndInvoke(result);
        }
        else
        {
            threadToKill.Abort();
            throw new TimeoutException();
        }
    }
}
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/299198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/28736/']}
|
jdg_376077
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7383624
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two lists of objects; List<X> and List<Y> . X and Y are ojects that look like: public class X { String a; String b; String v; String w; String m; String n;}public class Y { String a; String b; List<A> aList;}public class A { String v; String w; List<B> bList;}public class B { String m; String n;} How transform List<X> into List<Y> based on a rule: Some fields' values must be equal. For example: In List<Y> , for one object Y, field a's value must equal. In Y's field List<A> , for one object A, field w's value must equal. In A's field List<B> , for one object B, field m's value must equal and so on. Guava has this method, Lists#transform , but I don't know how to transform. Or any other way?
Now provide the response and nothing else.
|
public static <F,T> List<T> transform(List<F> fromList, Function<? super F,? extends T> function)

You might want to read up on the API docs for Lists.transform() and Function, but basically the caller of the transform provides a Function object that converts an F to a T. For example, if you have a List<Integer> intList and you want to create a List<String> such that each element of the latter contains the English representation of that number (1 becomes "one" etc.) and you have access to a class such as IntToEnglish, then

Function<Integer, String> intToEnglish = new Function<Integer, String>() {
    public String apply(Integer i) {
        return new IntToEnglish().english_number(i);
    }
};
List<String> wordsList = Lists.transform(intList, intToEnglish);

does that conversion. You can apply the same pattern to transform your List<X> to List<Y>.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7383624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/939915/']}
|
jdg_376078
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1392861
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using MATLAB R2008a and I want to know what source control has integration with it.I've heard Visual Source Safe is not so good. Does Subversion have integration with it?
Now provide the response and nothing else.
|
I wouldn't worry about the Matlab source control integration. It's convenient, but not necessary. Every modern source control system has one or more GUIs built for it, which will usually be more powerful than the generic source control GUI that Matlab provides. And most have command line utilities which expose the full power of the system. You can use these by getting them on your system path and then calling them from Matlab with "!". Or you can write your own M-code wrapper functions that call your source control utilities. As a convenience, these can support partial paths by using "which", like so.

function checkin(infile, comments)
file = which(infile);
if ~exist(file, 'file')
    error('Not a file: %s (resolved to %s)', infile, file);
end
cmd = sprintf('cvs commit -m "%s" %s', comments, file);

For external tools, if they make changes to files or dirs and Matlab doesn't see them (e.g. if you're on a network drive that's exhausted its change notification handles), you can use path(path) to force Matlab to rescan. So, pick your source control system on its own merits (as long as it exposes its functionality in the command line or ActiveX controls), and then wrap it if you feel the need and Matlab doesn't already integrate it. I've worked with CVS, ClearCase, and AccuRev this way, and we've always ended up using the version control tools directly or through custom wrappers instead of the Matlab integration.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1392861', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/72099/']}
|
jdg_376079
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30493610
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Django 1.8 application with a PostgreSQL database. I run the django inspectdb from the command line to examine models for the views, but the views don't show up in the model output. Here's the version output: 17:36 $ python well/manage.py --version1.8.2 And here's what psql sees: \dv List of relations Schema | Name | Type | Owner --------+-------------------------------+------+--------- public | hospitalizations_over_30_days | view | dwatson public | interval_30_days | view | dwatson(2 rows) From the django 1.8.2 documentation: New in Django 1.8:A feature to inspect database views was added. In previous versions, only tables (not views) were inspected. How can I get the PostgreSQL views to appear in the Django 1.8.2 inspectdb output?
Now provide the response and nothing else.
|
As of Django 1.10, you can simply name an individual view as a parameter to your inspectdb command: python well/manage.py inspectdb hospitalizations_over_30_days The default inspectdb will only output models.py for tables, but models for views can be generated individually by naming them. In Django 2.1 and above, if you want inspectdb to generate models for all tables and views, use the inspectdb --include-views option, which I contributed to Django 2.1 as a result of this question! python well/manage.py inspectdb --include-views To generate models for both tables and views in Django 2.0 and below, you have to edit the Django source code. In Django 2.0, change line 57 in django/core/management/commands/inspectdb.py to: tables_to_introspect = options['table'] or connection.introspection.table_names(cursor=cursor, include_views=True) Beware that the generated models won't have fields with primary_key=True set, you will need to add primary keys manually.
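As a concrete illustration of that last point (hypothetical model and field names, not from the original answer), a model generated from a view typically ends up looking roughly like the sketch below: you mark one unique column as the primary key yourself and keep managed = False so migrations leave the view alone.

from django.db import models

class HospitalizationsOver30Days(models.Model):
    # inspectdb cannot infer a primary key for a view, so pick a unique column yourself
    patient_id = models.IntegerField(primary_key=True)
    admitted_on = models.DateField(blank=True, null=True)
    days_in_hospital = models.IntegerField(blank=True, null=True)

    class Meta:
        managed = False   # never create/drop this relation via migrations
        db_table = 'hospitalizations_over_30_days'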
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30493610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/173308/']}
|
jdg_376080
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
280066
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What I've currently got can be summed up with this pseudo code: public static void Main(string[] args){ var listOfObjects = Database.GetObjectsToUploadToOnlineService(); Parallel.ForEach(Upload)}private static void Upload(MyUploadObject obj){ //Build object (takes a few milliseconds) //Format to JSON (takes a few more milliseconds) //Upload (can take up to a max of 10 minutes) //Wait for a response (can take up to a max of 10 minutes) //Save response to our database (takes a few milliseconds)} This program is just set up on our server as a scheduled task. We own the program and can do what we want with it. My question stems from a blog about making automated monitoring checks (I don't have a link handy). So that got me to think: How in the world can I modify my program so that I can fire up another "monitoring" program? Or should this change from being a console program to to say a WPF program that is hidden? Overall, I would prefer to be able to just run a program on my computer that checks on the progress of the program through the network, so I don't have to RDP into the server to check on its status (but it wouldn't be the end of the world). Overall, I suppose I would like to see a window that says something like: Process x of y so far, the following items are being processed. List the items in, like, a table and have them say like "Uploading" or "Waiting for response". Maybe if I get crazy I could also how a queue of failed items (but that would just be extra). My mind keeps leaning towards an event, but I can't figure out how I would run a program that can subscribe and unsubscribe to a running program. Is this even possible?
Now provide the response and nothing else.
|
What you need is a way to communicate between your programs that does not tie them to each other. There are a number of ways to do this: you could have the task write to a file and have the monitor read from the file, make the task available via WCF so the monitor can 'poll' for changes, use the network... To avoid reinventing the wheel, have a look at the log4net logging targets. If I were to implement something like this I would probably use log4net and log to UDP or Telnet and have the monitor on the other end connect to it. log4net takes care of everything for you, including not throwing exceptions when there is no monitor active.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/280066', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/108467/']}
|
jdg_376081
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33789
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How large would an asteroid have to be in order to hold a person so that the person could not escape?
Now provide the response and nothing else.
|
Of course you would need to specify who the person is - an Olympic athlete? Let us assume so and then you can scale downwards accordingly. So an Olympic high jumper can jump hard enough to raise their centre of gravity about 2m off the ground. Let us assume this is a ballistic problem. The athlete actually gives themselves sufficient upward speed to get their centre of gravity from about 1m to 2m in the Earth's gravitational field. Using the usual equations for uniform acceleration, the required initial velocity is $v \simeq \sqrt{2gh} = 4.4$ m/s. Now let us assume that the athlete could deliver something similar on an asteroid. This is doubtful, because getting a good run up, while wearing a spacesuit, is probably not going to happen. But if it were possible, then we just equate the jump velocity to the escape speed $$ v = \sqrt{\frac{2GM}{R}},$$ where $M$ and $R$ are the mass and the radius of the (assumed spherical) asteroid. Thus we do not have separate constraints on the mass and radius of the asteroid, only on their ratio. If $$ \frac{M}{R} > \frac{v^2}{2G},$$ then even an Olympic athlete couldn't jump into space. To get something more definite we would need to assume a density, $\rho$ , for the asteroid. This depends on what type of asteroid you are talking about, but could be between 1500 and 5000 kg/m $^3$ ( Carry 2012 ). If we assume (spherical asteroid again) that $M = 4\pi R^3 \rho/3$ and substituting this for the mass, we get a constraint on the asteroid radius such that someone is trapped if: $$ R > 4.2 \left(\frac{v}{4.4\ {\rm m/s}}\right) \left(\frac{\rho}{2000\ {\rm kg/m}^3}\right)^{-1/2}\ {\rm km}$$ You can mess about with this and assume a different take-off speed (you might want to argue, as MarkP does, that you could work up a larger horizontal speed by just running fast - but I doubt that in a "milligravity environment" - the surface gravity is of order 0.002 m/s $^2$ ) or different asteroid densities to modify the answer (trans-Neptunian objects or comets have densities lower than 1000 kg/m $^3$ ). Or you could come up with an equivalent constraint on the mass by substituting for $R$ in terms of the mass and density instead.
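A quick numerical check of that final figure (an added illustration, not part of the original answer), plugging the same numbers into the escape-speed condition:

import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
v = 4.4         # jump (escape) speed, m/s
rho = 2000.0    # assumed asteroid density, kg/m^3

# v = sqrt(2GM/R) with M = (4/3) pi R^3 rho  =>  R = v * sqrt(3 / (8 pi G rho))
R = v * math.sqrt(3.0 / (8.0 * math.pi * G * rho))
print(f"Critical radius: {R/1000:.1f} km")   # ~4.2 km, matching the estimate above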
|
{}
|
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/33789', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/30780/']}
|
jdg_376082
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3525581
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone explain how to align a footer div to the bottom of the page. From the examples I've seen, they all show how to make the div stay visible at the bottom, no matter where you've scrolled the page. Although I don't want it like that. I want it fixed at the bottom of the page, so it doesn't move. Appreciate the help!
Now provide the response and nothing else.
|
UPDATE My original answer is from a long time ago, and the links are broken; updating it so that it continues to be useful. I'm including updated solutions inline, as well as a working examples on JSFiddle. Note: I'm relying on a CSS reset, though I'm not including those styles inline. Refer to normalize.css Solution 1 - margin offset https://jsfiddle.net/UnsungHero97/ur20fndv/2/ HTML <div id="wrapper"> <div id="content"> <h1>Hello, World!</h1> </div></div><footer id="footer"> <div id="footer-content">Sticky Footer</div></footer> CSS html, body { margin: 0px; padding: 0px; min-height: 100%; height: 100%;}#wrapper { background-color: #e3f2fd; min-height: 100%; height: auto !important; margin-bottom: -50px; /* the bottom margin is the negative value of the footer's total height */}#wrapper:after { content: ""; display: block; height: 50px; /* the footer's total height */}#content { height: 100%;}#footer { height: 50px; /* the footer's total height */}#footer-content { background-color: #f3e5f5; border: 1px solid #ab47bc; height: 32px; /* height + top/bottom paddding + top/bottom border must add up to footer height */ padding: 8px;} Solution 2 - flexbox https://jsfiddle.net/UnsungHero97/oqom5e5m/3/ HTML <div id="content"> <h1>Hello, World!</h1></div><footer id="footer">Sticky Footer</footer> CSS html { height: 100%;}body { display: flex; flex-direction: column; min-height: 100%;}#content { background-color: #e3f2fd; flex: 1; padding: 20px;}#footer { background-color: #f3e5f5; padding: 20px;} Here's some links with more detailed explanations and different approaches: https://css-tricks.com/couple-takes-sticky-footer/ https://philipwalton.github.io/solved-by-flexbox/demos/sticky-footer/ http://matthewjamestaylor.com/blog/keeping-footers-at-the-bottom-of-the-page ORIGINAL ANSWER Is this what you mean? http://ryanfait.com/sticky-footer/ This method uses only 15 lines of CSS and hardly any HTML markup. Even better, it's completely valid CSS, and it works in all major browsers. Internet Explorer 5 and up, Firefox, Safari, Opera and more. This footer will stay at the bottom of the page permanently. This means that if the content is more than the height of the browser window, you will need to scroll down to see the footer... but if the content is less than the height of the browser window, the footer will stick to the bottom of the browser window instead of floating up in the middle of the page. Let me know if you need help with the implementation. I hope this helps.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3525581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303221/']}
|
jdg_376083
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31692868
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Couple of months ago, HTTP/2 was published as RFC7540 . How will this affect the existing REST API built on HTTP/1.1? As per Wikipedia , HTTP/2 has added new features. How can we take advantage of these new features?
Now provide the response and nothing else.
|
The main semantic of HTTP has been retained in HTTP/2. This means that it still has HTTP methods such as GET , POST , etc., HTTP headers , and URIs to identify resources. What has changed in HTTP/2 with respect to HTTP/1.1 is the way the HTTP semantic (e.g. "I want to PUT resource /foo on host example.com ") is transported over the wire. In this light, REST APIs built on HTTP/1.1 will continue to work transparently as before, with no changes to be made to applications. The web container that runs the applications will take care of translating the new wire format into the usual HTTP semantic on behalf of the applications, and application just see the higher level HTTP semantic, no matter if it was transported via HTTP/1.1 or HTTP/2 over the wire. Because the HTTP/2 wire format is more efficient (in particular due to multiplexing and compression), REST APIs on top of HTTP/2 will also benefit of this. The other major improvement present in HTTP/2, HTTP/2 Push , targets efficient download of correlated resources, and it's probably not useful in the REST usecase. A typical requirement of HTTP/2 is to be deployed over TLS.This require deployers to move from http to https , and setup the required infrastructure to support that (buy the certificates from a trusted authority, renew them, etc.).
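To make the "transparent to applications" point concrete on the client side, here is a small sketch using the third-party httpx library (my own choice for illustration; it is not mentioned in the answer, and the endpoint is hypothetical): the REST call itself is written exactly the same way, and only the transport negotiation changes.

import httpx

# http2=True lets the client negotiate HTTP/2 (via ALPN over TLS) when the server supports it;
# otherwise it transparently falls back to HTTP/1.1. The REST semantics are unchanged.
with httpx.Client(http2=True) as client:
    response = client.get("https://example.com/api/items/42")   # hypothetical endpoint
    print(response.http_version)   # "HTTP/2" or "HTTP/1.1" depending on the server
    print(response.json())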
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31692868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1097600/']}
|
jdg_376084
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
142677
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After the spontaneous breakdown of local symmetry in presence of gauge fields (Higgs Mechanism), we can always choose a gauge where the Goldstone bosons are eaten up by the gauge field (also called unitary gauge ). Which Lagrangian should be used for physical calculations-(i) the one in which goldstone excitations are present or (ii) the one in which they disappear?
Now provide the response and nothing else.
|
The advantage of unitary gauge is that it completely removes unphysical fields, while adding additional degrees of freedom to the gauge bosons, which consequently become massive. This gauge works well for tree-level calculations, but complications arise when considering loops : The propagators of gauge fields and ghosts (which are needed to impose the unitary gauge condition in the path integral formalism) cause divergences which ruin renormalizability. A better choice for dealing with loops is the so-called $R_\xi$ gauge , which involves adding a gauge-fixing term containing a continuous parameter $\xi$ to the Lagrangian. This removes the divergence-problem on the loop level and lets one calculate finite quantities. Physical quantities (scattering amplitudes) do not depend on this parameter, and it can thus be set to a convenient value; $\xi=1$ for example is a good choice. In the limit $\xi\rightarrow\infty$, the $R_\xi$ gauge reduces to unitarity gauge. This is explained in detail in Srednicki's book on QFT .
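For concreteness, in the Abelian Higgs model the $R_\xi$ gauge-fixing term usually takes the form below (quoted here as a standard textbook expression for illustration; sign and normalization conventions vary between references), where $\chi$ is the would-be Goldstone boson and $v$ the vacuum expectation value:
$$ \mathcal{L}_{\text{gf}} = -\frac{1}{2\xi}\left(\partial_\mu A^\mu - \xi\, e\, v\, \chi\right)^2 . $$
The cross term is chosen so that it cancels the $A^\mu \partial_\mu \chi$ mixing coming from the scalar kinetic term; the Goldstone and ghost fields then pick up gauge-dependent masses of order $\sqrt{\xi}\,ev$, and the limit $\xi \to \infty$ recovers unitary gauge.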
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/142677', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/36793/']}
|
jdg_376085
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4050905
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
By default on Windows Git places global .gitconfig in c:\documents and settings\user\ How can I change that position so .gitconfig is stored in c:\my_configuration_files\ ? Can it be done at all?
Now provide the response and nothing else.
|
If you set HOME to c:\my_configuration_files\ , then git will locate .gitconfig there. Editing environment variables is described here . You need to set the HOME variable, then re-open any cmd.exe window. Use the "set" command to verify that HOME indeed points to the right value. Changing HOME will, of course, also affect other applications. However, from reading git's source code, that appears to be the only way to change the location of these files without the need to adjust the command line. You should also consider Stefan's response: you can set the GIT_CONFIG variable. However, to give it the effect you desire, you need to pass the --global flag to all git invocations (plus any local .git/config files are ignored).
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4050905', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/62699/']}
|
jdg_376086
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1172005
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Looking to show that $\forall n \in \mathbb{N}$ $$\chi^{'}(K_{2n+2})=\chi^{'}(K_{2n+1})=2n+1$$ I'm trying to construct a colouring of the edges of $K_{2n+1}$ that leaves colour $i$ missing at vertex $i$ so I can move to $K_{2n+2}$ without increasing the index (for my induction) but the details of that initial colouring are hard to work out.
Now provide the response and nothing else.
|
I agree with Leen that induction might not be the way to go. But still, most proofs on the colorability of $K_n$ do not construct an explicit coloring. I think it's worth it to construct one at least once, so here goes :) Say your vertex set is $V = \{0, 1, \ldots, n - 1\}$ with $n$ odd. Color the edge $\{i, j\}$ with color $i + j$ (mod $n$). You should be able to show that no vertex has two incident edges of the same color. Now, one needs to show that each vertex in this coloring misses a distinct color. This'll show that $K_{n + 1}$ can be colored with $n$ colors, $n + 1$ being even. Actually, vertex $i$ misses color $2i$ (mod $n$). For otherwise, there are distinct $i, j$ such that $i + j \equiv 2i$ (mod $n$). This implies that either $j = i$ or $j = n + i$, the two of which are not possible. All that remains for you to show is that for distinct $i, j$, $2i \not \equiv 2j$ (mod $n$), and hence that all vertices have a distinct color they're missing.
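Here is a short computational check of this construction (an added illustration, not part of the original argument): for odd $n$ it colors edge $\{i,j\}$ with $i+j \bmod n$ and verifies both that the coloring is proper and that vertex $i$ misses exactly color $2i \bmod n$, with all missed colors distinct.

from itertools import combinations

def check_coloring(n):
    """Verify the K_n edge coloring c({i,j}) = (i+j) mod n for odd n."""
    assert n % 2 == 1
    incident = {v: set() for v in range(n)}   # colors seen at each vertex
    for i, j in combinations(range(n), 2):
        c = (i + j) % n
        # proper coloring: no color repeats at either endpoint
        assert c not in incident[i] and c not in incident[j]
        incident[i].add(c)
        incident[j].add(c)
    # each vertex misses exactly one color, namely 2i mod n, and all missed colors are distinct
    missed = [(set(range(n)) - incident[v]).pop() for v in range(n)]
    assert missed == [(2 * v) % n for v in range(n)]
    assert len(set(missed)) == n
    return True

print(all(check_coloring(n) for n in (3, 5, 7, 9, 11)))   # True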
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1172005', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/154082/']}
|
jdg_376087
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
364631
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to decide whether to introduce mocks in my isolated Domain Model tests. I have a class method similar to this: public Offer AssignOffer(OfferType offerType, IOfferValueCalculator valueCalculator) { DateTime dateExpiring = offerType.CalculateExpirationDate(); int value = valueCalculator.CalculateValue(this, offerType); var offer = new Offer(this, offerType, dateExpiring, value); _assignedOffers.Add(offer); NumberOfActiveOffers++; return offer; } which I took from here: https://github.com/jbogard/presentations/blob/master/WickedDomainModels/After/Model/Member.cs I have now read this article: http://enterprisecraftsmanship.com/2016/06/15/pragmatic-unit-testing/ and this article: http://www.taimila.com/blog/ddd-and-testing-strategy/ . They both seem to suggest that I should not mock OfferType (as it is a Value Object). However my question is: should I be mocking IOfferValueCalculator (a Domain Service)? IOfferValueCalculator does not sit in the innermost layer of the Onion, however it does sit in the Domain Model (second most inner layer of the Onion). The reason I ask is because all these articles specifically reference Entities and Value Objects (advising against mocking them), however they do not reference Domain Services.
Now provide the response and nothing else.
|
Should I mock a Domain Service? Short answers It depends. Providing test doubles that stand in for domain services will often be a good idea. Providing mocks that stand in for domain services may be a good idea. What follows will make more sense if you are familiar with the different flavors of test doubles; Mocks Aren't Stubs is an accessible starting point. Longer answers: Using test doubles is a trade off. There are risks associated with the fact that the system under test isn't talking to a real collaborator, and there are benefits. Part of our craft is understanding the trade we are making. There are properties that we want our tests to have. We want the tests to be fast, so that running them is less of a distraction. We want the tests to be isolated, so that we can run them in parallel, and save even more wall clock time. We want the tests to be simple -- fewer lines of code in the test suite means fewer bugs in the test suite We want the tests to be comprehensible -- we want most of the code to be describing the test , rather than describing a bunch of scaffolding We want the tests to be stable -- if we don't make any changes, a second run of a test should always give us the same result as the first We want the tests to be accurate -- a failing test should always indicate a mistake. But all of those are just bikewash unless The tests call our attention to mistakes that we make. Replacing real collaborators (which tend to be messy) with fake collaborators (which tend to be simple) increases the probability that we miss certain categories of mistakes; so we had better be sure when we do that that the benefits we gain offset the increased risks. We derive almost no additional benefit from mocking a value. Well designed value objects are already well isolated, side effect free, and tend to express the semantics of a test better than a substitute would. They live entirely within the functional core of your application. If you run that same math on entities, you will see that it doesn't make much sense to mock an entity either. With domain services , however, the trade offs start to look really interesting. Domain services are the mechanism by which an encapsulated part of the domain model communicates with its collaborators; those collaborators might be other parts of the same model, or they may be further away. When Evans described domain services in the blue book, he included among the motivating examples needing to access application and infrastructure services -- code that lives outside of the abstraction boundary of the domain model. Domain services are often proxies for communicating side effects, which may even cross process boundaries. Mock across architecturally significant boundaries, but not within those boundaries. Domain services are often proxies for code across an architecturally significant boundary. So if you've got a domain service that is an in memory abstraction -- Orders needs access to an in memory tax calculator service, then the test double doesn't provide nearly as much marginal benefit as a test double that stands in for a domain service that needs to talk to a database.... Put another way, there are a lot of different things under the umbrella term "domain service", and they have different trade offs. You are much more likely to use a test double when the actual behavior of the service is hard to predict, or hard to constrain. 
IOfferCalculator.CalculateValue is a simple method in this case - it does not connect to a database and it does not call any other methods. That sounds like the marginal advantage of introducing a mock is small; so I would recommend using the real implementation in this circumstance.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/364631', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/65549/']}
|
jdg_376088
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2653804
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array of structs in ColdFusion. I'd like to sort this array based on one of the attributes in the structs. How can I achieve this? I've found the StructSort function, but it takes a structure and I have an array. If this is not possible purely in ColdFusion, is it possible in Java somehow (maybe using Arrays.sort(Object[], Comparator) )?
Now provide the response and nothing else.
|
As usual, CFLib.org has exactly what you want. http://cflib.org/udf/ArrayOfStructsSort

/**
* Sorts an array of structures based on a key in the structures.
*
* @param aofS      Array of structures.
* @param key       Key to sort by.
* @param sortOrder Order to sort by, asc or desc.
* @param sortType  Text, textnocase, or numeric.
* @param delim     Delimiter used for temporary data storage. Must not exist in data. Defaults to a period.
* @return Returns a sorted array.
* @author Nathan Dintenfass (nathan@changemedia.com)
* @version 1, December 10, 2001
*/
function arrayOfStructsSort(aOfS,key){
    //by default we'll use an ascending sort
    var sortOrder = "asc";
    //by default, we'll use a textnocase sort
    var sortType = "textnocase";
    //by default, use ascii character 30 as the delim
    var delim = ".";
    //make an array to hold the sort stuff
    var sortArray = arraynew(1);
    //make an array to return
    var returnArray = arraynew(1);
    //grab the number of elements in the array (used in the loops)
    var count = arrayLen(aOfS);
    //make a variable to use in the loop
    var ii = 1;
    //if there is a 3rd argument, set the sortOrder
    if(arraylen(arguments) GT 2)
        sortOrder = arguments[3];
    //if there is a 4th argument, set the sortType
    if(arraylen(arguments) GT 3)
        sortType = arguments[4];
    //if there is a 5th argument, set the delim
    if(arraylen(arguments) GT 4)
        delim = arguments[5];
    //loop over the array of structs, building the sortArray
    for(ii = 1; ii lte count; ii = ii + 1)
        sortArray[ii] = aOfS[ii][key] & delim & ii;
    //now sort the array
    arraySort(sortArray,sortType,sortOrder);
    //now build the return array
    for(ii = 1; ii lte count; ii = ii + 1)
        returnArray[ii] = aOfS[listLast(sortArray[ii],delim)];
    //return the array
    return returnArray;
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2653804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18511/']}
|
jdg_376089
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55268762
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with a very memory-demanding CNN model for a classification task. This poses a big limit on the batch size that I can use during training. One solution is to accumulate the gradients during training, meaning that the weights of the model are not updated after every single batch. Instead the same weights are used for several batches, while the gradients from each batch are accumulated and then averaged for a single weight-update action. I'm using Keras with a TensorFlow backend and I'm pretty sure that Keras has no off-the-shelf function/method to achieve this. How can it be done for a Keras/TensorFlow model?
Now provide the response and nothing else.
|
As was mentioned in the question, there is no off-the-shelf function/method to achieve this with Keras/Tensorflow. However this can be done by writing a custom optimizer for Keras. The main idea is to use a flag to determine whether to update the weights during each batch. The following implementation is based on this github post by "alexeydevederkin" and it is an accumulating Adam optimizer:

import keras.backend as K
from keras.legacy import interfaces
from keras.optimizers import Optimizer


class AdamAccumulate(Optimizer):

    def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None,
                 decay=0., amsgrad=False, accum_iters=1, **kwargs):
        if accum_iters < 1:
            raise ValueError('accum_iters must be >= 1')
        super(AdamAccumulate, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.lr = K.variable(lr, name='lr')
            self.beta_1 = K.variable(beta_1, name='beta_1')
            self.beta_2 = K.variable(beta_2, name='beta_2')
            self.decay = K.variable(decay, name='decay')
        if epsilon is None:
            epsilon = K.epsilon()
        self.epsilon = epsilon
        self.initial_decay = decay
        self.amsgrad = amsgrad
        self.accum_iters = K.variable(accum_iters, K.dtype(self.iterations))
        self.accum_iters_float = K.cast(self.accum_iters, K.floatx())

    @interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]
        lr = self.lr

        completed_updates = K.cast(K.tf.floordiv(self.iterations, self.accum_iters), K.floatx())

        if self.initial_decay > 0:
            lr = lr * (1. / (1. + self.decay * completed_updates))

        t = completed_updates + 1

        lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) / (1. - K.pow(self.beta_1, t)))

        # self.iterations incremented after processing a batch
        # batch:              1 2 3 4 5 6 7 8 9
        # self.iterations:    0 1 2 3 4 5 6 7 8
        # update_switch = 1:        x       x    (if accum_iters=4)
        update_switch = K.equal((self.iterations + 1) % self.accum_iters, 0)
        update_switch = K.cast(update_switch, K.floatx())

        ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        gs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]

        if self.amsgrad:
            vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        else:
            vhats = [K.zeros(1) for _ in params]

        self.weights = [self.iterations] + ms + vs + vhats

        for p, g, m, v, vhat, tg in zip(params, grads, ms, vs, vhats, gs):

            sum_grad = tg + g
            avg_grad = sum_grad / self.accum_iters_float

            m_t = (self.beta_1 * m) + (1. - self.beta_1) * avg_grad
            v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(avg_grad)

            if self.amsgrad:
                vhat_t = K.maximum(vhat, v_t)
                p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)
                self.updates.append(K.update(vhat, (1 - update_switch) * vhat + update_switch * vhat_t))
            else:
                p_t = p - lr_t * m_t / (K.sqrt(v_t) + self.epsilon)

            self.updates.append(K.update(m, (1 - update_switch) * m + update_switch * m_t))
            self.updates.append(K.update(v, (1 - update_switch) * v + update_switch * v_t))
            self.updates.append(K.update(tg, (1 - update_switch) * sum_grad))

            new_p = p_t

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, (1 - update_switch) * p + update_switch * new_p))
        return self.updates

    def get_config(self):
        config = {'lr': float(K.get_value(self.lr)),
                  'beta_1': float(K.get_value(self.beta_1)),
                  'beta_2': float(K.get_value(self.beta_2)),
                  'decay': float(K.get_value(self.decay)),
                  'epsilon': self.epsilon,
                  'amsgrad': self.amsgrad}
        base_config = super(AdamAccumulate, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

It can be used in the following way:

opt = AdamAccumulate(lr=0.001, decay=1e-5, accum_iters=5)

model.compile(loss='categorical_crossentropy',  # Loss function
              optimizer=opt,                    # Optimization technique
              metrics=['accuracy'])             # Accuracy matrix

model.fit(X_train, y_train, batch_size=10)

In this example, the model processes 10 samples in every iteration ("batch_size"), but the update to the weights only happens after accumulating 5 such batches ("accum_iters"). So the actual batch size for updating the weights is 50.
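If you are on TensorFlow 2 / tf.keras rather than the legacy Keras API used above, the same accumulation idea is usually expressed with a custom training loop instead of a custom optimizer. The sketch below is my own illustration, not part of the original answer; it assumes you already have a built `model`, a `tf.data` dataset of `(x, y)` pairs, a loss function and an optimizer, and all names are illustrative.

import tensorflow as tf

def train_with_accumulation(model, dataset, loss_fn, optimizer, accum_steps=5):
    # One zero-initialized accumulator per trainable variable.
    accum_grads = [tf.zeros_like(v) for v in model.trainable_variables]

    for step, (x, y) in enumerate(dataset):
        with tf.GradientTape() as tape:
            preds = model(x, training=True)
            # Scale the loss so the summed gradients equal the average gradient.
            loss = loss_fn(y, preds) / accum_steps

        grads = tape.gradient(loss, model.trainable_variables)
        accum_grads = [a + g for a, g in zip(accum_grads, grads)]

        # Apply the accumulated gradients and reset only every `accum_steps` mini-batches.
        if (step + 1) % accum_steps == 0:
            optimizer.apply_gradients(zip(accum_grads, model.trainable_variables))
            accum_grads = [tf.zeros_like(v) for v in model.trainable_variables]

With mini-batches of 10 samples and accum_steps = 5, this again corresponds to an effective batch size of 50, matching the example above.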
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55268762', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9673730/']}
|
jdg_376090
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3306763
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm curious whether induction is the only/standard way to prove that if $0<x<1$ and $n$ is a positive integer, then $x^n < 1$.

Base Case: If $n=1$ and $0<x<1$, then it is certainly true that $x<1$.

Induction Hypothesis: Assume that $x^n < 1$ for some positive integer $n$, where $0<x<1$.

Induction Step: We need to show that $x^{n+1} < 1$ for the same positive integer $n$ and $0<x<1$. This follows immediately from the fact that $x^{n+1}=x^nx^1=x^nx<1$, since $x^n < 1$ and $0<x<1$, so the product must be less than one.

Is this the standard way of proving this result? I have seen it used frequently without proof in many math books and problems.
Now provide the response and nothing else.
|
We have $$x^n-1=(x-1)(1+x+x^2+\cdots +x^{n-1})<0$$ since $x-1<0$ and $1+x+\cdots +x^{n-1}>0$ for $0<x<1$ .
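As a quick concrete check of that factorization (my own addition, for the case $n=3$):
$$(x-1)(1+x+x^2)=x+x^2+x^3-1-x-x^2=x^3-1,$$
and for $0<x<1$ the first factor is negative while the second is positive, so $x^3-1<0$, i.e. $x^3<1$.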
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3306763', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/258202/']}
|
jdg_376091
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
267193
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following "C" code: #include<stdio.h>main(){ printf("func:%d",Func_i()); }Func_i(){ int i=3; return i;} Func_i() is defined at the end of the source code and no declaration is provide before its use in main() . At the very time when the compiler sees Func_i() in main() , it comes out of the main() and finds out Func_i() . The compiler somehow finds the value returned by Func_i() and gives it to printf() . I also know that the compiler cannot find the return type of Func_i() . It, by default takes(guesses?) the return type of Func_i() to be int . That is if the code had float Func_i() then the compiler would give the error: Conflicting types for Func_i() . From the above discussion we see that: The compiler can find the value returned by Func_i() . If the compiler can find the value returned by Func_i() by coming out of the main() and searching down the source code, then why can't it find the type of Func_i(), which is explicitly mentioned. The compiler must know that Func_i() is of type float--that's why it gives the error of conflicting types. If the compiler knows that Func_i is of type float, then why does it still assume Func_i() to be of type int, and gives the error of conflicting types? Why don't it forcefully make Func_i() to be of type float. I've the same doubt with the variable declaration . Consider the following "C" code: #include<stdio.h>main(){ /* [extern int Data_i;]--omitted the declaration */ printf("func:%d and Var:%d",Func_i(),Data_i);} Func_i(){ int i=3; return i;}int Data_i=4; The compiler gives the error: 'Data_i' undeclared(first use in this function). When the compiler sees Func_i() , it goes down to the source code to find the value returned by Func_(). Why can't the compiler do the same for the variable Data_i? Edit: I don't know the details of the inner working of compiler, assembler, processor etc. The basic idea of my question is that if I tell(write) the return-value of the function in the source code at last, after the use of that function then the "C" language allows the computer to find that value without giving any error. Now why can't the computer find the type similarly. Why can't the type of Data_i be found as Func_i()'s return value was found. Even if I use the extern data-type identifier; statement, I am not telling the value to be returned by that identifier(function/variable). If the computer can find that value then why can't it find the type. Why do we need the forward declaration at all? Thank you.
Now provide the response and nothing else.
|
Because C is a single-pass , statically-typed , weakly-typed , compiled language. Single-pass means the compiler does not look ahead to see the definition of a function or variable. Since the compiler does not look ahead, the declaration of a function must come before the use of the function, otherwise the compiler does not know what its type signature is. However, the definition of the function can be later on in the same file, or even in a different file altogether. See point #4. The only exception is the historical artifact that undeclared functions and variables are presumed to be of type "int". Modern practice is to avoid implicit typing by always declaring functions and variables explicitly. Statically-typed means that all type information is computed at compile time. That information is then used to generate machine code that executes at run time. There is no concept in C of run-time typing. Once an int, always an int, once a float, always a float. However, that fact is somewhat obscured by the next point. Weakly-typed means that the C compiler automatically generates code to convert between numeric types without requiring the programmer to explicitly specify the conversion operations. Because of static typing, the same conversion will always be carried out in the same way each time through the program. If a float value is converted to an int value at a given spot in the code, a float value will always be converted to an int value at that spot in the code. This cannot be changed at run-time. The value itself may change from one execution of the program to the next, of course, and conditional statements may change which sections of code are run in what order, but a given single section of code without function calls or conditionals will always perform the exact same operations whenever it is run. Compiled means that the process of analyzing the human-readable source code and transforming it into machine-readable instructions is fully carried out before the program runs. When the compiler is compiling a function, it has no knowledge of what it will encounter further down in a given source file. However, once compilation (and assembly, linking, etc) have completed, each function in the finished executable contains numeric pointers to the functions that it will call when it is run. That is why main() can call a function further down in the source file. By the time main() is actually run, it will contain a pointer to the address of Func_i(). Machine code is very, very specific. The code for adding two integers (3 + 2) is different from the one for adding two floats (3.0 + 2.0). Those are both different from adding an int to a float (3 + 2.0), and so on. The compiler determines for every point in a function what exact operation needs to be carried out at that point, and generates code that carries out that exact operation. Once that has been done, it cannot be changed without recompiling the function. Putting all these concepts together, the reason that main() cannot "see" further down to determine the type of Func_i() is that type analysis occurs at the very beginning of the compilation process. At that point, only the part of the source file up to the definition of main() has been read and analyzed, and the definition of Func_i() is not yet known to the compiler. 
The reason that main() can "see" where Func_i() is to call it is that calling happens at run time, after compilation has already resolved all of the names and types of all of the identifiers, assembly has already converted all of the functions to machine code, and linking has already inserted the correct address of each function in each place it is called. I have, of course, left out most of the gory details. The actual process is much, much more complicated. I hope that I have provided enough of a high-level overview to answer your questions. Additionally, please remember, what I have written above specifically applies to C. In other languages, the compiler may make multiple passes through the source code, and so the compiler could pick up the definition of Func_i() without it being predeclared. In other languages, functions and / or variables may be dynamically typed, so a single variable could hold, or a single function could be passed or return, an integer, a float, a string, an array, or an object at different times. In other languages, typing may be stronger, requiring conversion from floating-point to integer to be explicitly specified. In yet other languages, typing may be weaker, allowing conversion from the string "3.0" to the float 3.0 to the integer 3 to be carried out automatically. And in other languages, code may be interpreted one line at a time, or compiled to byte-code and then interpreted, or just-in-time compiled, or put through a wide variety of other execution schemes.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/267193', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/106313/']}
|
jdg_376092
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
257684
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a bash script where I'm trying to assign a heredoc string to a variable using read, and it only works if I use read with the -d '' option, i.e. read -d '' <variable>. Script block:

#!/usr/bin/env bash

function print_status() {
    echo
    echo "$1"
    echo
}

read -d '' str <<- EOF
    Setup nginx site-config

    NOTE: if an /etc/nginx/sites-available config already exists for this website,
    this routine will replace existing config with template from this script.
EOF

print_status "$str"

I found this answer on SO, which is where I copied the command from; it works, but why? I know the first invocation of read stops when it encounters the first newline character, so if I use some character that doesn't appear in the string the whole heredoc gets read in, e.g.

read -d '|' <variable>  -- this works
read -d'' <variable>    -- this doesn't

I'm sure it's simple, but what's going on with this read -d '' command option?
Now provide the response and nothing else.
|
I guess the question is why read -d '' works though read -d'' doesn't. The problem doesn't have anything to do with read but is a quoting "problem". A "" / '' which is part of a string (word) simply is not recognized at all. Let the shell show you what it sees / executes:

start cmd:> set -x

start cmd:> echo read -d " " foo
+ echo read -d ' ' foo

start cmd:> echo read -d" " foo
+ echo read '-d ' foo

start cmd:> echo read -d "" foo
+ echo read -d '' foo

start cmd:> echo read -d"" foo
+ echo read -d foo
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/257684', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/106525/']}
|
jdg_376093
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4735623
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've created a document view which displays the page number in the corner. The page number is a uilabel with a semi-transparent background colour, and has a corner radius (using the cornerRadius property of the view 's layer ). I've positioned this above a UIScrollView . However, this makes scrolling jerky. If I remove the cornerRadius , performance is good. Is there anything I can do about this? What would be a better solution? It seems to have been achieved in the UIWebView without any performance issues.
Now provide the response and nothing else.
|
For labels, or views with rounded corners and/or background colors and shadows on scrolling views, the solution is pretty simple:

The biggest issue is from the masksToBounds layer option. This appears to tax performance significantly, however the label seems to need this ON to mask the background color to rounded corners. So to get around this you need to set the label's layer background color instead and switch off masksToBounds.

The second issue is that the default behavior is to redraw the view whenever possible, which is totally unnecessary with static or slow-changing items on scrolling views. Here we simply set layer.shouldRasterize = YES. This will allow CA to 'cache' a rasterized version of the view for quick drawing when scrolling (presumably with hardware acceleration). You need to make sure your layer has an alpha channel otherwise rasterizing will affect the drawing of rounded corners. I've never had a problem as I have alpha set for my background colors, but you may need to check in your situation.

Here is a sample UILabel set up to work nicely on a scroll view:

UILabel *lbl = [[UILabel alloc] initWithFrame:CGRectMake(4, 4, 40.0, 24.0)];
lbl.font = [UIFont fontWithName:@"Helvetica" size:14.0];
lbl.textAlignment = UITextAlignmentRight;
lbl.text = @"Hello World";

// Must set the label background to clear so the layer background shows
lbl.backgroundColor = [UIColor clearColor];

// Set UILabel.layer.backgroundColor not UILabel.backgroundColor otherwise the
// background is not masked to the rounded border.
lbl.layer.backgroundColor = [UIColor colorWithRed:1 green:0 blue:0 alpha:0.5].CGColor;

lbl.layer.cornerRadius = 8;
lbl.layer.borderColor = [UIColor blackColor].CGColor;
lbl.layer.borderWidth = 1;

// Huge change in performance by explicitly setting the below (even though default is supposedly NO)
lbl.layer.masksToBounds = NO;

// Performance improvement here depends on the size of your view
lbl.layer.shouldRasterize = YES;
lbl.layer.rasterizationScale = [UIScreen mainScreen].scale;

// self here is the child view in the scroll view
[self addSubview:lbl];
[lbl release];

I can fill the iPad 1 screen with views like this and still scroll smoothly :)
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4735623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/392986/']}
|
jdg_376094
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15293
|
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I remember reading, a long time ago, an article in a European magazine about a particular, lesser-known institution of the United States whose sole purpose is to verify the behavior and actions of the President, i.e. to check that his/her acts respect the Constitution, etc. The article also mentioned that the President has no power, influence, or any direct contact with this institution, in order to preserve its impartiality and its freedom of action. (In fact, this exact point of the article struck me as remarkable.) What is the name of this body? Or was the article incorrect?
Now provide the response and nothing else.
|
No, there is no single entity that does what you describe. However, there are several institutions that can curb Presidential authority.

Executive Branch Curbs

Executive branch members swear an oath to defend the Constitution that pre-empts all other obligations. The President cannot expect an unlawful order to be carried out. The White House Counsel and Department of Justice serve as sounding boards to ensure executive actions initiated by the President are lawful prior to implementation, although sometimes they get it wrong. Almost all executive branch organizations have an Inspector General that oversees abuses within their purview.

Checks and Balances with the Congress and Judiciary

The United States Department of Justice Office of Special Counsel can be called on by Congress and the Attorney General to prosecute specific acts. The Congress can investigate Executive Branch activities and acts through its committee structure and subpoena power, and ultimately impeach and remove the President from power. The Congress also has the Congressional Research Service and the Library of Congress (and its Law Library), which can conduct analysis and legal research independently. The Courts, and especially the Supreme Court, can overturn laws and acts that are determined unconstitutional.

Other checks

The people have the ability to remove him from office at election time.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/15293', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/12041/']}
|
jdg_376095
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
605227
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I know the total charge in a battery, let's say 5000 Ah, and I want to find how much energy is stored in the battery, I multiply the total charge by the voltage E = Q·V; for example, for 12 V I will get 12·5000. Now if I know the charge in a capacitor, and I want to find the energy stored, E = ½·C·V 2 where Q = C·V, so I get E = ½·Q·V. Why is there a difference?
Now provide the response and nothing else.
|
Short answer:

An ideal battery has constant voltage \$U_{nom}\$ (nominal voltage) until it's empty, so the energy stored is: \begin{aligned}E &= U_{nom} \times I \times T\\&= U_{nom} \times \text{capacity}\end{aligned}

An ideal capacitor has voltage proportional to charge, so the voltage decreases linearly to 0: \begin{aligned}E &= \int_0^T U(t)I\ dt\\&= U_{average} \times I \times T\\&= \frac{U_0}{2} \times I \times T\\&= \frac{U_0}{2} \times \text{capacity}\end{aligned}

Visually, if you draw power vs time (at constant current), then you get a rectangle for a battery, and a triangle (so half the area) for a capacitor.

Long answer:

If you look at it in more detail, especially for the battery, the energy is not exactly \$E=Q \times U\$:

the voltage is not really constant: for most battery technologies, it starts above nominal voltage, then there is a long period where it is near the nominal voltage, and finally it drops quickly when the battery is nearly empty (it is usually recommended to stop discharging before you are in this fast voltage-drop region: you gain very little energy, but significantly reduce the lifetime of the battery)

there is an internal resistance in the battery, so some of the energy is lost inside the battery before it even comes out (you may or may not want to count it as stored; usually it is, because the losses depend on how fast you discharge the battery, so it's easier to have something constant)

battery chemistry isn't linear, so you get "internal resistance"/"capacity" changing with current

For capacitors, it's harder to say something general, but depending on the type, there can also be some rather big deviations. For example some MLCC (multi-layer ceramic capacitors) can lose 80% of their capacitance at higher voltages, and regain it at lower voltage: so you store far less energy than expected in the higher-voltage part.
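To tie this back to the numbers in the question (my own worked example, using the ideal formulas above): for \$Q = 5000\,\text{Ah}\$ at \$U = 12\,\text{V}\$, the battery stores \$E = U \times Q = 12 \times 5000 = 60\,\text{kWh}\$, while a capacitor holding the same charge with the same initial voltage stores \$E = \tfrac{1}{2} U \times Q = 30\,\text{kWh}\$, which is exactly the factor of one half asked about.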
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/605227', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/273897/']}
|
jdg_376096
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11163190
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have a utility class called MathUtil, and it looks like this:

abstract class MathUtil[T:Numeric]{
  def nextNumber(value:T)
  def result():T
}

Let's say I subclass it this way:

class SumUtil[T:Numeric] extends MathUtil[T]{
  private var sum:T = 0
  override def nextNumber(value:T){
    sum = sum + value
  }
  override def result():T = sum
}

I have a problem with the statement private var sum:T = 0. I have to initialize sum to 0, and I would guess any Numeric has a way to represent 0. I'm pretty new to Scala. How do I solve this issue?
Now provide the response and nothing else.
|
The Numeric type class instance has a zero method that does what you want:

class SumUtil[T: Numeric] extends MathUtil[T] {
  private var sum: T = implicitly[Numeric[T]].zero
  override def nextNumber(value: T) {
    sum = implicitly[Numeric[T]].plus(sum, value)
  }
  override def result(): T = sum
}

Note that you also need the instance for the plus method, unless you import Numeric.Implicits._ , in which case you can use + . You can also clean the code up a bit by not using the context bound syntax in this case:

class SumUtil[T](implicit ev: Numeric[T]) extends MathUtil[T] {
  import Numeric.Implicits._
  private var sum: T = ev.zero
  override def nextNumber(value: T) {
    sum = sum + value
  }
  override def result(): T = sum
}

This is exactly equivalent: the context bound version is just syntactic sugar for this implicit argument, but if you need to use that argument explicitly (as you do here, for its zero ), I find it cleaner to write the desugared version.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11163190', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1438870/']}
|
jdg_376097
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2780632
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following very often-used notation: $$(I, \{x_i\}_{i\in I})$$ My question is: is this notation strictly speaking correct, if we require that we can retrieve which $x_i$ belongs to which $i$? The reason I doubt so is the following: A set does not retain any information about the order of its elements. So if I create $\{x_i\}_{i\in I}$, and place it in the tuple, then strictly speaking, shouldn't I lose the information of which $x_i$ belongs to which $i$? Essentially, I place a set inside this tuple, but as soon as I give this tuple to someone else, that person can no longer see which $x_i$ belongs to which $i$, unless I also give him a map $j:I\to \{x_i\}_{i\in I}$ Is my doubt justified? (apart from the fact that I should have something better to do with my time than ask this pedantic question) Bonus question: If $X_i$ are sets, does $X=\{X_i\}_{i\in I}$ retain more, or less information than $X=\prod_{i\in I}X_i$ (i.e., if I pass you $X$, which of the two definitions gives you more information? My guess is the second, since it retains the order in $I$.
Now provide the response and nothing else.
|
Suppose $X$ and $I$ are non-empty sets. By a family of objects in $X$ indexed by $I$ we mean a mapping $I \to X$. Suppose that $x \colon I \to X$ is such a mapping. If $i \in I$, it is common to denote the value $x(i)$ of $x$ at $i$ by $x_i$. In accordance with this, it is common to denote the mapping $x$ itself by $(x_i)_{i \in I}$ or $\{x_i\}_{i \in I}$, the notational distinction being nothing but preference. It should be noted, however, that $\{x_i\}_{i \in I}$ does not refer to the set$$x(I) = \{ x_i : i \in I \},$$the image of $x$. If we have a family $(A_i)_{i \in I}$ of subsets of $X$, we are actually dealing with a mapping $I \to 2^X$, that is, a mapping of $I$ into the power set of $X$. The Cartesian product $\prod_{i \in I} A_i$ is defined to be the collection of all maps $a \colon I \to \bigcup_{i \in I} A_i$ such that $a(i) \in A_i$ for all $i \in I$. In other words, whereas $(A_i)_{i \in I}$ is a family of sets, the Cartesian product$$\prod_{i \in I} A_i = \bigl\{ (a_i)_{i \in I} : a_i \in A_i \text{ for all } i \in I \bigr\}$$consists of precisely those families of objects in $\bigcup_{i \in I} A_i$ indexed by $I$ satisfying the defining property indicated above.
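As a small concrete illustration (my own, not part of the original answer): take $I=\{1,2,3\}$, $X=\{a,b\}$ and the family $x\colon I\to X$ with $$x_1=a,\qquad x_2=a,\qquad x_3=b.$$ The family $\{x_i\}_{i\in I}$ is this map, so it remembers which index goes with which element; its image $x(I)=\{a,b\}$ is just a two-element set that forgets both the indexing and the repetition of $a$. This is exactly why passing along $\{x_i\}_{i\in I}$, understood as a mapping, loses no information.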
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2780632', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/360475/']}
|
jdg_376098
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39651377
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C and many of its derivatives, i++ increments i and evaluates to the value of i before it was incremented, and ++i increments i and evaluates to the value of i after it was incremented. I can see the reasoning for a specific increment operator; many processors at the time had a special increment opcode that was faster than addition and conceptually "increment" is a different idea from "add," in theory having them be written differently might make code more readable. What I don't understand is the need for the pre-increment operator. Can't any practical use for it be written like this? #This...x = i++;#becomes this:x = i; ++i; Is there a historical reason I don't know about, maybe? Were you unable to "throw away" the return values of operators in primordial versions of C?
Now provide the response and nothing else.
|
One reason is that it allowed for the generation of efficient code without having any fancy optimisation phases in compilers, provided that the programmer knew what he (or she) was doing. For example, when copying characters from one buffer to another, you might have: register char *ptr1;register char *ptr2;...for ( ... ) { *ptr1++ = *ptr2++; /* post-increment */} A compiler that I once worked with (on a proprietary minicomputer) would generate the following register operations for the assignment: load $r1,*$a1++ // load $r1 from address in $a1 and increment $a1store $r1,*$a2++ // store $r1 at address in $a2 and increment $a2 I forget the actual opcodes. The compiler contained no optimisation phase yet the code that it generated was very tight providing that you understood the compiler and the machine architecture. It could do this because the hardware architecture had pre-decrement and post-increment addressing modes for both address registers and general registers. There were no pre-increment and post-decrement addressing modes as far as I recall but you could get by without those. I believe that the DEC minicomputers on which C was originally developed had such addressing modes. The machine that I worked on wasn't made by DEC but the architecture was pretty similar. An optimisation phase was planned for the compiler. However, it was mostly used by systems programmers and when they saw how good the generated code was, implementation of the optimisation phase was quietly shelved. The whole rationale for the design of C was to allow the creation of simple and portable compilers that would generate reasonably efficient code with minimal (or no) intermediate code optimisation. For this reason, the increment and decrement operators and also the compound assignment operators played an important role in the generation of compact and efficient code by the early C compilers. They were not just syntactic sugar as suggested by Niklaus Wirth et al.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39651377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1726380/']}
|
jdg_376099
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52292488
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to upload a file in Angular using the PrimeNG upload component. Here's my HTML:

<p-fileUpload mode="basic" name="demo[]" customUpload="true" accept="image/*" maxFileSize="1000000" (uploadHandler)="upload($event)"></p-fileUpload>

In my .ts I print the handler's parameter:

upload(event) {
  console.log(event)
}

I get only metadata and not the blob content:

{"files":[{"objectURL":{"changingThisBreaksApplicationSecurity":"blob:https://prime-ng-file-uploading.stackblitz.io/d429e761-c391-45fa-8628-39b603e25225"}}]}

I also want to get the file content so I can send it to the server via an API. Here's a stackblitz demo
Now provide the response and nothing else.
|
In the official documentation you have an example: export class FileUploadDemo { uploadedFiles: any[] = []; constructor(private messageService: MessageService) {} onUpload(event) { for (let file of event.files) { this.uploadedFiles.push(file); } this.messageService.add({ severity: 'info', summary: 'File Uploaded', detail: '' }); }} When I used primeNG , I did it like this (for uploading only 1 file) : HTML <p-fileUpload name="myfile[]" customUpload="true" multiple="multiple" (uploadHandler)="onUpload($event)" accept="application/pdf"></p-fileUpload> component.ts export class AlteracionFormComponent { uplo: File; constructor(private fileService: FileUploadClientService) {} onUpload(event) { for (let file of event.files) { this.uplo = file; } this.uploadFileToActivity(); } uploadFileToActivity() { this.fileService.postFile(this.uplo).subscribe(data => { alert('Success'); }, error => { console.log(error); }); }} And my service (in Angular ) service.ts postFile(id_alteracion: string, filesToUpload: FileUploadModel[], catalogacion: any): Observable<any> { let url = urlAPIAlteraciones + '/'; url += id_alteracion + '/documentos'; const formData: FormData = new FormData(); formData.append('json', JSON.stringify(catalogacion)); for (let file of filesToUpload) { formData.append('documento', file.data, file.data.name); } console.log(formData); let headers = new HttpHeaders(); return this._http.post(url, formData, { headers: headers }); } Hope that helps
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52292488', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8107464/']}
|
jdg_376100
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7473342
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building an app where users can submit a URL, and I want to find out how many of a specific URL have been submitted on the site in total, and then arrange them in a list ordered by the number of times they've been submitted. Any ideas? Using Rails 3.0.6
Now provide the response and nothing else.
|
;WITH x AS ( SELECT ID, NAME, [DATE], rn = ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY [DATE] DESC) FROM @TESTABLE)SELECT ID, NAME, [DATE] FROM x WHERE rn = 1 ORDER BY [DATE] DESC; Try to avoid reserved words (and vague column names) like [DATE] ...
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7473342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289614/']}
|
jdg_376101
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2471867
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a user resizable WPF Window that I want to constrain the resizing so the aspect ratio of the window stays constant. Ideally I would like to constrain mouse location when the window is being resized by dragging a corner to positions that maintain the proper aspect ration. If an edge is resized with the mouse, the other dimension should change at the same time. Is there a simple way to do this or a good on-line example that anyone knows of? If no better solutions come up, I'll post what I've done after I've refined it a bit.
Now provide the response and nothing else.
|
I've found a good answer by Nir here . There are still some flaws, basically resizing in top right corner, bottom right corner and bottom side will be fine, other sides and corners are not. The bright side is, the aspect ratio is smoothly kept all the time. EDIT: I found a way to remove most of the problems. When sizing starts, the dimension that will be artificially adjusted to keep the aspect ratio is determined by locating the mouse position relative to the window. The only remaining imperfections I found are that the position of the window may change when resizing from the corners (except bottom right). xaml: <Window x:Class="WpfApplication1.ConstantAspectRatioWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="ConstantAspectRatioWindow" MinHeight="100" MinWidth="150" SizeToContent="WidthAndHeight"> <Grid> <Border Width="300" Height="200" Background="Navy"/> <Border Width="150" Height="100" Background="Yellow" /> </Grid></Window> Code behind: using System;using System.Collections.Generic;using System.Runtime.InteropServices;using System.Windows;using System.Windows.Input;using System.Windows.Interop;namespace WpfApplication1{ public partial class ConstantAspectRatioWindow : Window { private double _aspectRatio; private bool? _adjustingHeight = null; internal enum SWP { NOMOVE = 0x0002 } internal enum WM { WINDOWPOSCHANGING = 0x0046, EXITSIZEMOVE = 0x0232, } public ConstantAspectRatioWindow() { InitializeComponent(); this.SourceInitialized += Window_SourceInitialized; } [StructLayout(LayoutKind.Sequential)] internal struct WINDOWPOS { public IntPtr hwnd; public IntPtr hwndInsertAfter; public int x; public int y; public int cx; public int cy; public int flags; } [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] internal static extern bool GetCursorPos(ref Win32Point pt); [StructLayout(LayoutKind.Sequential)] internal struct Win32Point { public Int32 X; public Int32 Y; }; public static Point GetMousePosition() // mouse position relative to screen { Win32Point w32Mouse = new Win32Point(); GetCursorPos(ref w32Mouse); return new Point(w32Mouse.X, w32Mouse.Y); } private void Window_SourceInitialized(object sender, EventArgs ea) { HwndSource hwndSource = (HwndSource)HwndSource.FromVisual((Window)sender); hwndSource.AddHook(DragHook); _aspectRatio = this.Width / this.Height; } private IntPtr DragHook(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled) { switch ((WM)msg) { case WM.WINDOWPOSCHANGING: { WINDOWPOS pos = (WINDOWPOS)Marshal.PtrToStructure(lParam, typeof(WINDOWPOS)); if ((pos.flags & (int)SWP.NOMOVE) != 0) return IntPtr.Zero; Window wnd = (Window)HwndSource.FromHwnd(hwnd).RootVisual; if (wnd == null) return IntPtr.Zero; // determine what dimension is changed by detecting the mouse position relative to the // window bounds. if gripped in the corner, either will work. 
if (!_adjustingHeight.HasValue) { Point p = GetMousePosition(); double diffWidth = Math.Min(Math.Abs(p.X - pos.x), Math.Abs(p.X - pos.x - pos.cx)); double diffHeight = Math.Min(Math.Abs(p.Y - pos.y), Math.Abs(p.Y - pos.y - pos.cy)); _adjustingHeight = diffHeight > diffWidth; } if (_adjustingHeight.Value) pos.cy = (int)(pos.cx / _aspectRatio); // adjusting height to width change else pos.cx = (int)(pos.cy * _aspectRatio); // adjusting width to heigth change Marshal.StructureToPtr(pos, lParam, true); handled = true; } break; case WM.EXITSIZEMOVE: _adjustingHeight = null; // reset adjustment dimension and detect again next time window is resized break; } return IntPtr.Zero; } }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2471867', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10119/']}
|
jdg_376102
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23358
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Hubble time is about 14 billion years. The estimated current age of the Universe is about 13.7 billion years. Is the reason these two time are so close (a) a coincidence, or (b) a reflection that for much of its history the Universe has been expanding at a constant rate?
Now provide the response and nothing else.
|
It is in fact a reflection of the fact that the rate of expansion has been nearly constant for a long time. Mathematically, the expansion of the universe is described by a scale factor $a(t)$, which can be interpreted as the size of the universe at a time $t$, but relative to some reference size (typically chosen to be the current size). The Hubble parameter is defined as $$H = \frac{\dot{a}}{a}$$ and the Hubble time is the reciprocal of the Hubble parameter, $$t_H = \frac{a}{\dot{a}}$$ Now suppose the universe has been expanding at a constant rate for its entire history. That means $a(t) = ct$. If you calculate the Hubble time in this model, you get $$t_H = \frac{ct}{c} = t$$ which means that in a linear expansion model, the Hubble time is nothing but the current age of the universe. In reality, the best cosmological theories suggest that the universe has not been expanding linearly since the beginning. So we would expect that the age of the universe is not exactly equal to the Hubble time. But hopefully it makes sense that if any nonlinear expansion lasted for only a short period, then the Hubble time should still be close to the age of the universe. That is the situation we see today. For more information on this, I'd suggest you check out these additional questions Value of the Hubble parameter over time Universe Expansion as an absolute time reference and others like them.
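As a standard worked contrast (my addition, a textbook result rather than something from the original answer): in a matter-dominated universe $a(t)\propto t^{2/3}$, so $$t_H=\frac{a}{\dot a}=\frac{3}{2}t,$$ i.e. the age would be only $\tfrac{2}{3}$ of the Hubble time. The fact that the measured age comes out so close to $t_H$ reflects that the early deceleration and the more recent accelerated expansion roughly cancel, leaving $a(t)$ close to linear on average.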
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/23358', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/4075/']}
|
jdg_376103
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
729436
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to recall a question from a past exam to review for an upcoming exam; I think it went like this: Suppose a finite-dimensional linear operator $T:V \to V$ has the same matrix representation in every basis. Show that $T$ must be a scalar multiple of the identity transformation. First, does it sound like my recollection of the problem is correct? Second, any suggestions on how to approach a proof?
Now provide the response and nothing else.
|
A proof sketch could be:

1. Every (nonzero) vector is an eigenvector. Let $v\ne 0$ and suppose $Tv$ is not a multiple of $v$. Then $v$ and $Tv$ are linearly independent; extend $\langle v,Tv\rangle$ to a basis $\langle v, Tv, v_3,v_4,\ldots,v_n\rangle$. By assumption $T$ has the same matrix representation $M$ in this basis and in the basis $\langle v,v+Tv,v_3,v_4,\ldots,v_n\rangle$. But that means that the first column of $M$ is simultaneously $(0,1,0,\ldots,0)^{\mathsf t}$ and $(-1,1,0,\ldots,0)^{\mathsf t}$, which is absurd.

2. All eigenvalues are the same. Since every vector is an eigenvector, there exists an eigenbasis. Therefore $M$ is diagonal. It can only be invariant under permutations of the basis vectors if all of the diagonal entries are equal. Therefore $T$ must be scalar multiplication by the common eigenvalue.
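To spell out where those two candidate first columns come from (an added step, not in the original answer): the first column of $M$ records the coordinates of $Tv$, the image of the first basis vector $v$. In the basis $\langle v, Tv, v_3,\ldots,v_n\rangle$ we have $$Tv = 0\cdot v + 1\cdot Tv,$$ giving the column $(0,1,0,\ldots,0)^{\mathsf t}$, while in the basis $\langle v, v+Tv, v_3,\ldots,v_n\rangle$ we have $$Tv = -1\cdot v + 1\cdot (v+Tv),$$ giving the column $(-1,1,0,\ldots,0)^{\mathsf t}$. Since the representation is assumed to be identical in both bases, this is the contradiction.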
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/729436', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4518/']}
|
jdg_376104
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26453
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Hypothetically speaking (once again), if the Sun stays the same size forever (i.e. it somehow maintains a stable hydrogen supply and the helium product just disappears), how long would it take before the decay of the Earth's orbit sends it plummeting into the Sun and disintegrating?
Now provide the response and nothing else.
|
As has already been said, the major sources of change to Earth's orbit are interactions with other planets and passing stars. We're ruling out mass loss of the Sun, so the next consideration is probably tidal interactions between the Earth and Sun. This paper suggests that the Earth is receding from the Sun by about 15cm/yr for this reason. This is 150 km per million years, so over a trillion years or so, would move Earth quite a bit further from the Sun. The effect would reduce as the Earth got further away from the Sun, but I think it would always be slowly receding. I haven't done the calculation, but I'm not sure it would ever reach tidal lock -- the angular momentum of the Sun is too high. Independently of this, though, orbital energy and angular momentum would slowly be lost by gravitational radiation. This wikipedia page gives a formula $$t={\frac {5}{256}}\,{\frac {c^{5}}{G^{3}}}\,{\frac {r^{4}}{(m_{1}m_{2})(m_{1}+m_{2})}}.$$ for the time to collision ignoring tidal effects. We can plug in known values for the parameters $r = 1.5 \times 10^{11} m$, $c = 3\times 10^8 m/s$, $G = 6.7 \times 10^{-11} m^3 kg^{-1} s^{-2}$, $m_1 = 6\times 10^{24} kg$, $m_2 = 2 \times 10^{30} kg$ and get about $3.3\times 10^{30} s$ as the time to collision. I suspect that's the time until the centres of the Earth and Sun coincide, so the actual collision would be a bit earlier, but probably not enough to notice at this precision. So that's about 10 million million times the lifetime of the universe so far. I can't find enough information to work out how these two effects would interact. Clearly in the first $10^{12}$ years tidal evolution would be more important, but that conserves angular momentum and slows as the Suns rotation slows, whereas gravitational radiation actually carries angular momentum away, so might dominate in the long term. On the other hand the radiation drops as a high power of the separation, so we might have the Earth moving ever more slowly away from the Sun, while radiating gravitational waves of ever weaker strength, and never reaching tidal lock or starting to move back in.
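For anyone who wants to reproduce the gravitational-radiation figure, here is a quick numerical check (my own sketch, using exactly the rounded constants quoted above):

# Rough check of the inspiral-time formula with the rounded values from the answer.
r  = 1.5e11      # m, Earth-Sun separation
c  = 3e8         # m/s
G  = 6.7e-11     # m^3 kg^-1 s^-2
m1 = 6e24        # kg, Earth
m2 = 2e30        # kg, Sun

t = (5 / 256) * c**5 / G**3 * r**4 / ((m1 * m2) * (m1 + m2))
print(f"{t:.2e} s")   # ~3.3e30 s, i.e. roughly 1e23 years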
|
{}
|
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/26453', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/22180/']}
|
jdg_376105
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1956518
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to grasp the State Monad and with this purpose I wanted to write a monadic code that would generate a sequence of random numbers using a Linear Congruential Generator (probably not good, but my intention is just to learn the State Monad, not build a good RNG library). The generator is just this (I want to generate a sequence of Bool s for simplicity): type Seed = Intrandom :: Seed -> (Bool, Seed)random seed = let (a, c, m) = (1664525, 1013904223, 2^32) -- some params for the LCG seed' = (a*seed + c) `mod` m in (even seed', seed') -- return True/False if seed' is even/odd Don't worry about the numbers, this is just an update rule for the seed that (according to Numerical Recipes) should generate a pseudo-random sequence of Int s. Now, if I want to generate random numbers sequentially I'd do: rand3Bools :: Seed -> ([Bool], Seed)rand3Bools seed0 = let (b1, seed1) = random seed0 (b2, seed2) = random seed1 (b3, seed3) = random seed2 in ([b1,b2,b3], seed3) Ok, so I could avoid this boilerplate by using a State Monad: import Control.Monad.Statedata Random {seed :: Seed, value :: Bool}nextVal = do Random seed val <- get let seed' = updateSeed seed val' = even seed' put (Random seed' val') return val'updateSeed seed = let (a,b,m) = (1664525, 1013904223, 2^32) in (a*seed + c) `mod` m And finally: getNRandSt n = replicateM n nextVal getNRand :: Int -> Seed -> [Bool]getNRand n seed = evalState (getNRandStates n) (Random seed True) Ok, this works fine and give me a list of n pseudo-random Bool s for each given seed. But... I can read what I've done (mainly based on this example: http://www.haskell.org/pipermail/beginners/2008-September/000275.html ) and replicate it to do other things. But I don't think I can understand what's really happening behind the do-notation and monadic functions (like replicateM). Can anyone help me with some of this doubts? 1 - I've tried to desugar the nextVal function to understand what it does, but I couldn't. I can guess it extracts the current state, updates it and then pass the state ahead to the next computation, but this is just based on reading this do-sugar as if it was english. How do I really desugar this function to the original >>= and return functions step-by-step? 2 - I couldn't grasp what exactly the put and get functions do. I can guess that they "pack" and "unpack" the state. But the mechanics behind the do-sugar is still elusive to me. Well, any other general remarks about this code are very welcome. I sometimes fell with Haskell that I can create a code that works and do what I expect it to do, but I can't "follow the evaluation" as I'm accustomed to do with imperative programs.
Now provide the response and nothing else.
|
The State monad does look kind of confusing at first; let's do as Norman Ramsey suggested, and walk through how to implement from scratch. Warning, this is pretty lengthy! First, State has two type parameters: the type of the contained state data and the type of the final result of the computation . We'll use stateData and result respectively as type variables for them here. This makes sense if you think about it; the defining characteristic of a State-based computation is that it modifies a state while producing an output. Less obvious is that the type constructor takes a function from a state to a modified state and result, like so: newtype State stateData result = State (stateData -> (result, stateData)) So while the monad is called "State", the actual value wrapped by the the monad is that of a State-based computation , not the actual value of the contained state. Keeping that in mind, we shouldn't be surprised to find that the function runState used to execute a computation in the State monad is actually nothing more than an accessor for the wrapped function itself, and could be defined like this: runState (State f) = f So what does it mean when you define a function that returns a State value? Let's ignore for a moment the fact that State is a monad, and just look at the underlying types. First, consider this function (which doesn't actually do anything with the state): len2State :: String -> State Int Boollen2State s = return ((length s) == 2) If you look at the definition of State, we can see that here the stateData type is Int , and the result type is Bool , so the function wrapped by the data constructor must have the type Int -> (Bool, Int) . Now, imagine a State-less version of len2State --obviously, it would have type String -> Bool . So how would you go about converting such a function into one returning a value that fits into a State wrapper? Well, obviously, the converted function will need to take a second parameter, an Int representing the state value. It also needs to return a state value, another Int . Since we're not actually doing anything with the state in this function, let's just do the obvious thing--pass that int right on through. Here's a State-shaped function, defined in terms of the State-less version: len2 :: String -> Boollen2 s = ((length s) == 2)len2State :: String -> (Int -> (Bool, Int))len2State s i = (len2' s, i) But that's kind of silly and redundant. Let's generalize the conversion so that we can pass in the result value, and turn anything into a State-like function. convert :: Bool -> (Int -> (Bool, Int))convert r d = (r, d)len2 s = ((length s) == 2)len2State :: String -> (Int -> (Bool, Int))len2State s = convert (len2 s) What if we want a function that changes the state? Obviously we can't build one with convert , since we wrote that to pass the state through. Let's keep it simple, and write a function to overwrite the state with a new value. What kind of type would it need? It'll need an Int for the new state value, and of course will have to return a function stateData -> (result, stateData) , because that's what our State wrapper needs. Overwriting the state value doesn't really have a sensible result value outside the State computation, so our result here will just be () , the zero-element tuple that represents "no value" in Haskell. overwriteState :: Int -> (Int -> ((), Int))overwriteState newState _ = ((), newState) That was easy! Now, let's actually do something with that state data. 
Let's rewrite len2State from above into something more sensible: we'll compare the string length to the current state value. lenState :: String -> (Int -> (Bool, Int))lenState s i = ((length s) == i, i) Can we generalize this into a converter and a State-less function, like we did before? Not quite as easily. Our len function will need to take the state as an argument, but we don't want it to "know about" state. Awkward, indeed. However, we can write a quick helper function that handles everything for us: we'll give it a function that needs to use the state value, and it'll pass the value in and then package everything back up into a State-shaped function leaving len none the wiser. useState :: (Int -> Bool) -> Int -> (Bool, Int)useState f d = (f d, d)len :: String -> Int -> Boollen s i = (length s) == ilenState :: String -> (Int -> (Bool, Int))lenState s = useState (len s) Now, the tricky part--what if we want to string these functions together? Let's say we want to use lenState on a string, then double the state value if the result is false, then check the string again, and finally return true if either check did. We have all the parts we need for this task, but writing it all out would be a pain. Can we make a function that automatically chains together two functions that each return State-like functions? Sure thing! We just need to make sure it takes as arguments two things: the State function returned by the first function, and a function that takes the prior function's result type as an argument. Let's see how it turns out: chainStates :: (Int -> (result1, Int)) -> (result1 -> (Int -> (result2, Int))) -> (Int -> (result2, Int))chainStates prev f d = let (r, d') = prev d in f r d' All this is doing is applying the first state function to some state data, then applying the second function to the result and the modified state data. Simple, right? Now, the interesting part: Between chainStates and convert , we should almost be able to turn any combination of State-less functions into a State-enabled function! The only thing we need now is a replacement for useState that returns the state data as its result, so that chainStates can pass it along to the functions that don't know anything about the trick we're pulling on them. Also, we'll use lambdas to accept the result from the previous functions and give them temporary names. Okay, let's make this happen: extractState :: Int -> (Int, Int)extractState d = (d, d)chained :: String -> (Int -> (Bool, Int))chained str = chainStates extractState $ \state1 -> let check1 = (len str state1) in chainStates (overwriteState ( if check1 then state1 else state1 * 2)) $ \ _ -> chainStates extractState $ \state2 -> let check2 = (len str state2) in convert (check1 || check2) And try it out: > chained "abcd" 2(True, 4)> chained "abcd" 3(False, 6)> chained "abcd" 4(True, 4)> chained "abcdef" 5(False, 10) Of course, we can't forget that State is actually a monad that wraps the State-like functions and keeps us away from them, so none of our nifty functions that we've built will help us with the real thing. Or will they? In a shocking twist, it turns out that the real State monad provides all the same functions, under different names: runState (State s) = sreturn r = State (convert r)(>>=) s f = State (\d -> let (r, d') = (runState s) d in runState (f r) d')get = State extractStateput d = State (overwriteState d) Note that >>= is almost identical to chainStates, but there was no good way to define it using chainStates. 
So, to wrap things up, we can rewrite the final example using the real State:

    chained str = get >>= \state1 ->
                  let check1 = (len str state1) in
                  put (if check1 then state1 else state1 * 2) >>= \_ ->
                  get >>= \state2 ->
                  let check2 = (len str state2) in
                  return (check1 || check2)

Or, all candied up with the equivalent do notation:

    chained str = do
        state1 <- get
        let check1 = len str state1
        _ <- put (if check1 then state1 else state1 * 2)
        state2 <- get
        let check2 = (len str state2)
        return (check1 || check2)
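Finally, to actually run either version you unwrap the computation with runState and hand it an initial state, exactly as we did with the raw functions. A minimal, self-contained sketch -- assuming the State/get/put/runState that ship in the mtl package's Control.Monad.State rather than the toy definitions above -- reproduces the earlier results:

    import Control.Monad.State

    len :: String -> Int -> Bool
    len s i = length s == i

    chained :: String -> State Int Bool
    chained str = do
        state1 <- get
        let check1 = len str state1
        put (if check1 then state1 else state1 * 2)
        state2 <- get
        let check2 = len str state2
        return (check1 || check2)

    -- runState :: State s a -> s -> (a, s)
    main :: IO ()
    main = do
        print (runState (chained "abcd") 2)   -- (True,4)
        print (runState (chained "abcd") 3)   -- (False,6)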
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1956518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/114388/']}
|
jdg_376106
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34673683
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently working on a project with multiple others. The problem is that we created a fork of an existing project and need to squash all our commits from after the forking process into a single commit for a pull request on Bitbucket. Is there any way (preferably using SourceTree, otherwise the terminal) to squash already pushed commits into a single commit, such that the history within Bitbucket also shows only that one commit plus the commits which were already there before we forked the project? Take as example just a simple project with a few files in only a master branch.
Now provide the response and nothing else.
|
Found a nice summary of the posted answers. This one is a bit clearer: squash pushed commits. It will require creating a second branch upon the creation of the fork. This second branch can have as many pushes to the remote server as needed.

1. Create a new personal branch that will be squashed.

    # Start with the existing personal branch that contains all of your commits.
    $ git checkout {ExistingBranchName}
    # Create a new personal branch that will be squashed.
    $ git checkout -b {BranchName}

2. Identify the first commit where your personal branch diverged from an existing CEF branch.

    # Replace {BranchName} with your new branch name.
    # Replace "master" with a different CEF branch as appropriate
    # (e.g. "2272", "2171", etc).
    $ git merge-base {BranchName} master

3. Start an interactive rebase using the commit hash returned from step 2.

    $ git rebase --interactive {hash}

4. This will launch a text editor with a list of all commits in your personal branch. It should look something like this:

    pick 59d2a23 Initial implementation
    pick 752ae4c Add more features
    pick cd302a3 Fix something
    pick 5410de3 Fix something else

5. Change all but the first line to say squash instead of pick. The contents should now look like this:

    pick 59d2a23 Initial implementation
    squash 752ae4c Add more features
    squash cd302a3 Fix something
    squash 5410de3 Fix something else

6. Save the changes and close the file (in vim: press Esc and type :wq). A new file will now open containing the commit messages from all of the commits. Reword the commit message, then save the changes and close the file.

7. Push the modifications to your personal remote repository.

    # If the branch has already been pushed to the remote repository
    # you will need to add the `--force` argument.
    git push origin {BranchName}
    # or
    git push origin {BranchName} --force
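If the interactive-editor round trip is a nuisance, the same squash can be done non-interactively with a soft reset -- a sketch reusing the placeholder names from the steps above:

    # Move the branch pointer back to the fork point while keeping all
    # changes staged, then record them as a single new commit.
    $ git checkout {BranchName}
    $ git reset --soft $(git merge-base {BranchName} master)
    $ git commit -m "One squashed commit"
    $ git push origin {BranchName} --force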
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34673683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5761911/']}
|
jdg_376107
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19498404
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am setting up Xamarin with Team Foundation Service + git to develop iOS apps. I have tried many different settings trying to set up my TFS + git integration on our Mac, to no avail. I keep getting exceptions of either not authorized or failed to fetch .
Now provide the response and nothing else.
|
I am going to list all steps I took to be as complete as possible.

1. First of all, I created a hosted TFS account and added a New Team Project + Git.
2. I then had to create alternative login credentials, since you cannot have an @ symbol in your login info. To set up alternate creds in Visual Studio Team Services (VSTS), click on your name on the top right -> My Profile -> Credentials.
3. After enabling the alternative credentials, open Xamarin Studio. In my case I had a solution that I wanted to add, so I had my solution file open as well.
4. Under the Version Control menu item select Publish (only available with a solution file open).
5. Select the Registered Repositories tab and select Add.
6. Enter an appropriate name. Select the https protocol. Enter your hosted TFS URL, i.e. xxx.visualstudio.com. For port, enter 443. The path will be: /DefaultCollection/_git/ + project name (exactly as you typed it, including spaces). User will be the alternative username you created in step 2.
7. Click OK on both open windows. You will be prompted to confirm publishing. Select OK.
8. You will then be prompted for your password. Enter the alternative password you created in step 2. If you accidentally enter the wrong password here, you must delete the entry from your keychain. Instructions can be found here.

Your TFS + git integration with Xamarin Studio should be complete! If you have any other issues I will try my best to help with them.
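For reference, the repository registration itself can also be done from a terminal instead of the Xamarin Studio dialog -- a sketch with placeholder values (the URL, project name, and alternate user come from the steps above; spaces in a project name must be percent-encoded as %20 on the command line):

    cd /path/to/YourSolution
    git init                      # only if the solution is not already a git repo
    git remote add origin "https://xxx.visualstudio.com/DefaultCollection/_git/{ProjectName}"
    git push -u origin master     # prompts for the alternate credentials from step 2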
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19498404', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1385264/']}
|
jdg_376108
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2662740
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have three vectors, $a$, $b$, and $c$ in $n$-dimensional space. I want to calculate $a\cdot b$. I know $\lvert a \rvert$, $\lvert b \rvert$, $\lvert c \rvert$, $a\cdot c$ and $b\cdot c$. Is there a way to do this, preferably without using trig? I have made some progress. If $\theta$ is the angle between $a$ and $c$, and $\phi$ is the angle between $b$ and $c$, I know that:$$a\cdot b=\lvert a\rvert\lvert b\rvert\cos(\theta-\phi)=\lvert a\rvert\lvert b\rvert\cos\theta\cos\phi+\lvert a\rvert\lvert b\rvert\sin\theta\sin\phi$$$$=\frac{(a\cdot c)(b\cdot c)}{\lvert c\rvert^2}+\lvert a\rvert\lvert b\rvert\sin\theta\sin\phi$$ I also know that $$\lvert a\rvert^2\lvert c\rvert^2\sin^2\theta=\lvert a\rvert^2\lvert c\rvert^2-(a\cdot c)^2$$ and likewise for $b$, but this doesn't give the sign of the sines. I think this is possible, but I'm not sure how to do it. Edit : Okay, I realize now that this is impossible generally. Is it possible in the two-dimensional case?
Now provide the response and nothing else.
|
You cannot. Take $a,b$ to be two unit vectors in $\operatorname{span}(e_1,e_2)$ ($(e_i)_{1\leq i\leq n}$ being the standard orthonormal basis), and $c=e_3$. Then $\langle a,c\rangle = \langle b,c\rangle = 0$, and you know $\lvert a\rvert = \lvert b\rvert= \lvert c\rvert =1$ by assumption, but $\langle a,b\rangle$ could take any value in $[-1,1]$.
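To make the non-uniqueness completely concrete (same notation, $n=3$, $c=e_3$): take $a=e_1$ in both cases; then $$b=e_1 \;\Rightarrow\; \langle a,b\rangle = 1, \qquad b=e_2 \;\Rightarrow\; \langle a,b\rangle = 0,$$ while $\lvert a\rvert=\lvert b\rvert=\lvert c\rvert=1$ and $\langle a,c\rangle=\langle b,c\rangle=0$ hold in either case, so the given data cannot pin down $a\cdot b$.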
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2662740', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/455580/']}
|
jdg_376109
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
45899
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was just wondering why the Linux NFS server is implemented in the kernel as opposed to a userspace application? I know a userspace NFS daemon exists, but it's not the standard method for providing NFS server services. I would think that running an NFS server as a userspace application would be the preferred approach, as it can provide added security having a daemon run in userspace instead of the kernel. It also would fit with the common Linux principle of doing one thing and doing it well (and that daemons shouldn't be a job for the kernel). In fact the only benefit I can think of for running in the kernel would be a performance boost from fewer context switches (and that is a debatable reason). So is there any documented reason why it is implemented the way it is? I tried googling around but couldn't find anything. There seems to be a lot of confusion; please note I am not asking about mounting filesystems, I am asking about providing the server side of a network filesystem. There is a very distinct difference. Mounting a filesystem locally requires support for the filesystem in the kernel; providing it does not (e.g. samba or unfs3).
Now provide the response and nothing else.
|
unfs3 is dead as far as I know; Ganesha is the most active userspace NFS server project right now, though it is not completely mature. Although it serves different protocols, Samba is an example of a successful file server that operates in userspace. I haven't seen a recent performance comparison.

Some other issues:

- Ordinary applications look files up by pathname, but nfsd needs to be able to look them up by filehandle. This is tricky and requires support from the filesystem (and not all filesystems can support it). In the past it was not possible to do this from userspace, but more recent kernels have added name_to_handle_at(2) and open_by_handle_at(2) system calls.
- I seem to recall blocking file-locking calls being a problem; I'm not sure how userspace servers handle them these days. (Do you tie up a server thread waiting on the lock, or do you poll?)
- Newer file system semantics (change attributes, delegations, share locks) may be implemented more easily in kernel first (in theory--they mostly haven't been yet).
- You don't want to have to check permissions, quotas, etc., by hand--instead you want to change your uid and rely on the common kernel vfs code to do that. And Linux has a system call ( setfsuid(2) ) that should do that. For reasons I forget, I think that's proved more complicated to use in servers than it should be.

In general, a kernel server's strengths are closer integration with the vfs and the exported filesystem. We can make up for that by providing more kernel interfaces (such as the filehandle system calls), but that's not easy. On the other hand, some of the filesystems people want to export these days (like gluster) actually live mainly in userspace. Those can be exported by the kernel nfsd using FUSE--but again extensions to the FUSE interfaces may be required for newer features, and there may be performance issues. Short version: good question!
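As an aside, to make the filehandle point concrete, here is a rough Linux-only userspace sketch of those two system calls (the path is made up for illustration, and open_by_handle_at additionally requires the CAP_DAC_READ_SEARCH capability):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Ask the kernel for an opaque, persistent handle for a path --
           roughly what an NFS filehandle corresponds to. */
        struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
        int mount_id;
        fh->handle_bytes = MAX_HANDLE_SZ;
        if (name_to_handle_at(AT_FDCWD, "/export/data/file.txt", fh, &mount_id, 0) == -1) {
            perror("name_to_handle_at");
            return 1;
        }
        /* Later (even in another process) the file can be reopened from the
           handle alone, without a pathname lookup. */
        int mount_fd = open("/export/data", O_RDONLY | O_DIRECTORY);
        int fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
        if (fd == -1)
            perror("open_by_handle_at");
        return 0;
    }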
|
{}
|
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/45899', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4358/']}
|
jdg_376110
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2722955
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $B_1=\{v_1,v_2,...,v_n\}$ is a basis of $\mathbb{R}^n$, and $M$ is an $n*n$ matrix. Prove that $B_2=\{Mv_1,Mv_2,...,Mv_n\}$ is also a basis of $\mathbb{R}^n$ if and only if $M$ is invertible. Following is what I have so far: Assume $B_2$ is basis of $\mathbb{R}^n$. Then, $B_2$ is a set of linearly independent vectors, and $B_2$ spans $\mathbb{R}^n$. Since $B_1$ is also a basis of $\mathbb{R}^n$, then any element(vector) of $B_2$ is a linear combination of elements(vectors) of $B_1$ and vice-versa. $Mv_1= a_{11}v_1+a_{21}v_2+...+a_{n1}v_n$ , where $a_{11},a_{21},...,a_{n1}\in \mathbb{R}$ Likewise, $Mv_2= a_{12}v_1+a_{22}v_2+...+a_{n2}v_n$ , where $a_{12},a_{22},...,a_{n2}\in \mathbb{R}$ $\begin{bmatrix}Mv_1&Mv_2&...&Mv_n\end{bmatrix}=\begin{bmatrix}v_1&v_2&...&v_n\end{bmatrix}\begin{bmatrix}a_{11}&a_{12}&...&a_{1n}\\a_{21}&a_{22}&...&a_{2n}\\ \vdots&\vdots&\vdots&\vdots\\a_{n1}&a_{n2}&...&a_{nn}\end{bmatrix}$ Not sure what to do next ...
Now provide the response and nothing else.
|
Using the determinant is a very elegant solution. But if you can't use the determinant, you still can prove it. Let us denote $\tilde{B_1}=[v_1~v_2~\ldots~v_n]$ and $\tilde{B}_2=[Mv_1~Mv_2~\ldots~Mv_n]$. Then your last equation says $\tilde B_2=\tilde B_1M$. Hints: To finish $B_2$ basis implies $M$ invertible : Since $B_2$ is a basis, the matrix $\tilde B_2$ is invertible. On the RHS the matrix $\tilde B_1$ is invertible, because $B_1$ is a basis. Therefore, you can manipulate the equation to get $M=\ldots$, where $\ldots$ is invertible, hence $M$ is invertible. For $M$ invertible implies $B_2$ basis : Suppose $B_2$ is not a basis; then you can find a nontrivial linear combination of $B_2$ which is $0$. You can rewrite the equation and get a nontrivial solution of $Mx=0$. Hence $M$ is not invertible and we are done.
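Spelling the two hints out in the same notation (a sketch of the missing algebra, nothing beyond what the hints already assume): $$\tilde B_2=\tilde B_1 M \;\Longrightarrow\; M=\tilde B_1^{-1}\tilde B_2,$$ a product of invertible matrices, hence invertible. Conversely, if $c_1(Mv_1)+\cdots+c_n(Mv_n)=0$ with some $c_i\neq 0$, then $x=c_1v_1+\cdots+c_nv_n\neq 0$ (because $B_1$ is a basis) yet $Mx=0$, so $M$ is not invertible; contrapositively, when $M$ is invertible the vectors $Mv_1,\dots,Mv_n$ are linearly independent, and $n$ linearly independent vectors in $\mathbb{R}^n$ form a basis.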
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2722955', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/467389/']}
|
jdg_376111
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2732409
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I receive a JSON response in an Ajax request from the server. This way it works: { "a" : "1", "b" : "hello 'kitty'" } But I did not succeed in putting double quotes around kitty . When I convert " to \x22 in the Ajax response, it is still interpreted as " by JavaScript and I cannot parse the JSON. Should I also escape the \ and unescape later (which would be possible)? How to do this? Edit: I am not sure if i expressed it well: I want this string inside of "b" after the parse: hello "kitty" If necessary I could also add an additional step after the parse to convert "b", but I guess it is not necessary, there is a more elegant way so this happens automatically? Edit2: The ajax page is generated by php. I tried several things now to create the value of b, all result in JSON parse error on the page: $b = 'hello "kitty"'; // no 1: //$b = str_replace('"',"\x22",$b); // or no 2: // $b = addslashes($b); // or no 3: $b = str_replace('"','\"',$b); echo '{ "a" : "1", "b" : "' . $b . '"}'; Edit3: This solution finally works: $b = 'hello "kitty"'; $b = str_replace('"','\\"',$b); echo '{ "a" : "1", "b" : "' . $b . '"}';
Now provide the response and nothing else.
|
Just escape it with a backslash:

    > JSON.stringify({"a": 5, "b": 'a "kitty" mighty odd'})
    {"a":5,"b":"a \"kitty\" mighty odd"}
    > JSON.parse('{"a":5,"b":"a \\"kitty\\" mighty odd"}')
    Object
      a: 5
      b: a "kitty" mighty odd
      __proto__: Object

JSON parsers recognize \" inside double-quoted strings as a double quote. Note that in the second example, the double-backslash is needed because there's a Javascript parser pass, then another JSON parser pass.
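Since (per the question's edits) the JSON here is being assembled by hand in PHP, the least error-prone fix on that side is to let the serializer do the escaping for you -- a small sketch using PHP's standard json_encode (the array contents just mirror the question):

    <?php
    $data = array("a" => "1", "b" => 'hello "kitty"');
    // json_encode escapes the inner quotes itself:
    // {"a":"1","b":"hello \"kitty\""}
    echo json_encode($data);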
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2732409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/89021/']}
|
jdg_376112
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
798074
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a situation where I need to create a network spanning about 20 offices. Typically each office is rented by a different company, but some of them have two or more rooms. So I have no idea how many devices, or which ones, they are going to connect. I am planning to give each room one wall socket connected to a managed switch, and to create VLANs based on companies. Question: Is it possible to extend their network by connecting an inexpensive unmanaged switch to the socket, as shown in the diagram? Will they receive IP addresses in the VLAN range automatically, including the WiFi access point? And one more: some managed switches have a Voice VLAN option. Does that mean that a VoIP phone physically connected to a specific VLAN will be part of the Voice VLAN automatically (a priority issue)?
Now provide the response and nothing else.
|
If you are using purely port-based VLANs, then what you have described here will work. Port-based VLANs do not need vlan-aware equipment attached to them. However as soon as you start to add "Voice VLAN" then you are probably going to be using 802.1q VLANs, in which case you do not want to be mixing unmanaged equipment onto those ports. You will want an 802.1q switch at all points so that you can trunk your voice and data VLANs on the same interface. Voice VLANs depends on the specific implementation, so I can't really answer those question. At its simplest form, voice VLANs are often used to deploy both the data and the voice VLAN to a single port. You plug the phone into that port on the switch, and then the computer plugs into the phone. The phone contains a small network switch that then does the 802.1q trunking and puts the phone on one vlan, and the computer on the other.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/798074', 'https://serverfault.com', 'https://serverfault.com/users/371499/']}
|
jdg_376113
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2525943
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
let say I want to do this: var dashboard = {}; var page = "index"; $('.check').click(function(){ $(this).toggleClass("active").siblings().slideToggle('slow', function() { dashboard['pages'][page][$(this).closest('li').attr("id")]['show'] = $(this).is(":hidden") ? 'collapsed' : 'expanded'; }); } I get an error saying: Dashboard.pages is undefined Is there away to dynamically add pages and the children that follow without having to do the work of checking to see if it is defined first then if it's not doing: dashboard['pages'] = {}; because sometimes they may already exist and I don't want to have to inspect the tree first I just want to build the branches as needed EDIT I changed pagename to page to show that page names will change and also I want to point out that the pages could really be anything too.The idea is that you have any object that can contain objects with parameters without checking to see if the branches exist It looks like $extend as stated will be the way to go just not sure how that works. Got to get my head around that.
Now provide the response and nothing else.
|
Define get and set methods on an Object. Actually it could be defined just on the dashboard object and only its descendants, but that's easy to do.

    Object.prototype.get = function(prop) {
        this[prop] = this[prop] || {};
        return this[prop];
    };
    Object.prototype.set = function(prop, value) {
        this[prop] = value;
    };

Iterate through nested properties using this get() method and call set() whenever a value has to be set.

    var dashboard = {};
    dashboard.get('pages').get('user').set('settings', 'oh crap');
    // could also set settings directly without using set()
    dashboard.get('pages').get('user').settings = 'oh crap';
    console.log(dashboard); // {pages: {user: {settings: "oh crap"}}}

You could also extend/modify the get method to accept the nested properties as individual arguments or an array or a string. Using that, you'd only have to call get once:

    // get accepts multiple arguments here
    dashboard.get('pages', 'user').set('settings', 'something');
    // get accepts an array here
    dashboard.get(['pages', 'user']).set('settings', 'something');
    // no reason why get can't also accept dotted parameters
    // note: you don't have to call set(), could directly add the property
    dashboard.get('pages.user').settings = 'something';

Update: Since the get method generically returns an object and does not know whether you need an array or some other type of object, you would have to specify that yourself:

    dashboard.get('pages.user').settings = [];

Then you could push items to the settings array as

    dashboard.get('pages.user').settings.push('something');
    dashboard.get('pages.user').settings.push('something else');

To actually have the get function construct the object hierarchy from a given string such as pages.user, you would have to split the string into parts and check if each nested object exists. Here is a modified version of get that does just that:

    Object.prototype.get = function(prop) {
        var parts = prop.split('.');
        var obj = this;
        for(var i = 0; i < parts.length; i++) {
            var p = parts[i];
            if(obj[p] === undefined) {
                obj[p] = {};
            }
            obj = obj[p];
        }
        return obj;
    };

    // example use
    var user = dashboard.get('pages.user');
    user.settings = [];
    user.settings.push('something');
    user.settings.push('else');
    console.log(dashboard); // {pages: {user: {settings: ["something", "else"] }}}
    // can also add to settings directly
    dashboard.get('pages.user.settings').push('etc');
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2525943', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/234670/']}
|
jdg_376114
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3964715
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have variables dir and file containing strings representing a directory and a filename, respectively . What is the proper way in emacs lisp to join them into a full path to the file? For example, if dir is "/usr/bin" and file is "ls" , then I want "/usr/bin/ls" . But if instead dir is "/usr/bin/" , I still want the same thing, with no repeated slash.
Now provide the response and nothing else.
|
Reading through the manual for Directory Names , you'll find the answer: Given a directory name, you can combine it with a relative file name using concat :

    (concat dirname relfile)

Be sure to verify that the file name is relative before doing that. If you use an absolute file name, the results could be syntactically invalid or refer to the wrong file. If you want to use a directory file name in making such a combination, you must first convert it to a directory name using file-name-as-directory :

    (concat (file-name-as-directory dirfile) relfile)

Don't try concatenating a slash by hand, as in

    ;;; Wrong!
    (concat dirfile "/" relfile)

because this is not portable. Always use file-name-as-directory . Other commands that are useful are: file-name-directory , file-name-nondirectory , and others in the File Name Components section.
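If you would rather not think about whether the directory string already ends in a slash, expand-file-name performs the same join and normalizes the result -- a short sketch using the example values from the question:

    ;; (expand-file-name FILE DIR) treats FILE as relative to DIR.
    (expand-file-name "ls" "/usr/bin")   ;; => "/usr/bin/ls"
    (expand-file-name "ls" "/usr/bin/")  ;; => "/usr/bin/ls"

    ;; Equivalent, following the manual's advice quoted above:
    (concat (file-name-as-directory "/usr/bin") "ls")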
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3964715', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/125921/']}
|
jdg_376115
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3981509
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem When you search for such question using google you get a lot of hits but all solutions assume you have at least one window. But my question is just like I phrased it -- not assumptions at all. I can have a window, but I could have zero windows (because I didn't even show one or I just closed the last one). So in short the solution cannot rely on any widget or window -- the only thing is known, is there is a desktop (and app running, but it does not have any windows). So the question is -- how to get the mouse position? Background I would like to show windows centered to mouse position. There is no such mode in WPF (there are only center to owner, or center to screen) so I have to do it manually. The missing piece is mouse position. Edits Thank you all, so now I have the first part of the solution -- raw position. Now there is a problem how to convert the data for WPF. I found such topic: WPF Pixels to desktop pixels but again, it assumes having some window. Then I googled more and I found solution: http://jerryclin.wordpress.com/2007/11/13/creating-non-rectangular-windows-with-interop/ the code includes class for scaling up/down coordinates relying only on info about desktop. So joining those two pieces, I finally get the solution :-). Thanks again.
Now provide the response and nothing else.
|
Getting the Screen Coordinates:

    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool GetCursorPos(out POINT lpPoint);

    [StructLayout(LayoutKind.Sequential)]
    public struct POINT
    {
        public int X;
        public int Y;

        public POINT(int x, int y)
        {
            this.X = x;
            this.Y = y;
        }
    }

    private void WritePoint(object sender, RoutedEventArgs e)
    {
        POINT p;
        if (GetCursorPos(out p))
        {
            System.Console.WriteLine(Convert.ToString(p.X) + ";" + Convert.ToString(p.Y));
        }
    }

Converting Pixels to WPF Units:

    [DllImport("User32.dll")]
    static extern IntPtr GetDC(IntPtr hwnd);

    [DllImport("gdi32.dll")]
    static extern int GetDeviceCaps(IntPtr hdc, int nIndex);

    [DllImport("user32.dll")]
    static extern bool ReleaseDC(IntPtr hWnd, IntPtr hDC);

    private Point ConvertPixelsToUnits(int x, int y)
    {
        // Get the system DPI from the desktop DC (index 88 = LOGPIXELSX).
        IntPtr dDC = GetDC(IntPtr.Zero);
        int dpi = GetDeviceCaps(dDC, 88);
        bool rv = ReleaseDC(IntPtr.Zero, dDC);

        // A WPF device-independent unit is 1/96 inch, i.e. dpi/96 physical
        // pixels, so pixel coordinates are divided by that factor.
        double physicalUnitSize = (double)dpi / 96d;
        Point wpfUnits = new Point((double)x / physicalUnitSize,
                                   (double)y / physicalUnitSize);
        return wpfUnits;
    }

Putting both together:

    private void WriteMouseCoordinatesInWPFUnits()
    {
        POINT p;
        if (GetCursorPos(out p))
        {
            Point wpfPoint = ConvertPixelsToUnits(p.X, p.Y);
            System.Console.WriteLine(Convert.ToString(wpfPoint.X) + ";" + Convert.ToString(wpfPoint.Y));
        }
    }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3981509', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/210342/']}
|
jdg_376116
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1004776
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have code like this. Is there a way to make it easier to write and maintain?Using C# .NET 3.5. string header(string title){ StringWriter s = new StringWriter(); s.WriteLine("{0}","<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/html4/strict.dtd\">"); s.WriteLine("{0}", "<html>"); s.WriteLine("<title>{0}</title>", title); s.WriteLine("{0}","<link rel=\"stylesheet\" type=\"text/css\" href=\"style.css\">"); s.WriteLine("{0}", "</head>"); s.WriteLine("{0}", "<body>"); s.WriteLine("{0}", "");} I could also just write: s.WriteLine("{0}", @"blah blahmanynewlinesblah UHY#$&_#$_*@Y KSDSD<>\t\t\t\t\t\tt\t\t\\\t\t\t\t\\\h\th'\h't\thhidone"); It will work, but I need to replace all " with "" .
Now provide the response and nothing else.
|
You're probably better off using an HtmlTextWriter or an XMLWriter than a plain StringWriter . They will take care of escaping for you, as well as making sure the document is well-formed. This page shows the basics of using the HtmlTextWriter class, the gist of which being:

    StringWriter stringWriter = new StringWriter();
    using (HtmlTextWriter writer = new HtmlTextWriter(stringWriter))
    {
        writer.AddAttribute(HtmlTextWriterAttribute.Class, classValue);
        writer.RenderBeginTag(HtmlTextWriterTag.Div); // Begin #1

        writer.AddAttribute(HtmlTextWriterAttribute.Href, urlValue);
        writer.RenderBeginTag(HtmlTextWriterTag.A); // Begin #2

        writer.AddAttribute(HtmlTextWriterAttribute.Src, imageValue);
        writer.AddAttribute(HtmlTextWriterAttribute.Width, "60");
        writer.AddAttribute(HtmlTextWriterAttribute.Height, "60");
        writer.AddAttribute(HtmlTextWriterAttribute.Alt, "");
        writer.RenderBeginTag(HtmlTextWriterTag.Img); // Begin #3
        writer.RenderEndTag(); // End #3

        writer.Write(word);
        writer.RenderEndTag(); // End #2
        writer.RenderEndTag(); // End #1
    }

    // Return the result.
    return stringWriter.ToString();
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1004776', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376117
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
374644
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: <script type="text/javascript"> function SubmitForm() { form1.submit(); } function ShowResponse() { }</script>...<div> <a href="#" onclick="SubmitForm();">Click</a></div> I want to capture the html response of form1.submit ? How do I do this? Can I register any callback function to form1.submit method?
Now provide the response and nothing else.
|
You won't be able to do this easily with plain javascript. When you post a form, the form inputs are sent to the server and your page is refreshed - the data is handled on the server side. That is, the submit() function doesn't actually return anything, it just sends the form data to the server. If you really wanted to get the response in Javascript (without the page refreshing), then you'll need to use AJAX, and when you start talking about using AJAX, you'll need to use a library. jQuery is by far the most popular, and my personal favourite. There's a great plugin for jQuery called Form which will do exactly what it sounds like you want. Here's how you'd use jQuery and that plugin:

    $('#myForm')
        .ajaxForm({
            url : 'myscript.php', // or whatever
            dataType : 'json',
            success : function (response) {
                alert("The server says: " + response);
            }
        });
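For completeness, the same thing can nowadays be done without any library at all -- a sketch using the browser's fetch and FormData APIs, assuming the form has id myForm and posts to myscript.php as above:

    document.querySelector('#myForm').addEventListener('submit', async function (e) {
      e.preventDefault();                      // stop the normal page refresh
      const response = await fetch('myscript.php', {
        method: 'POST',
        body: new FormData(e.target),          // serializes the form fields
      });
      alert('The server says: ' + await response.text());
    });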
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/374644', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/46279/']}
|
jdg_376118
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9343601
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to create a XML file, using Java, that is a collection of GPS coordinates (GPX). Every time I receive a coordinate from my android device (approximately 1 every second) I need to append the results to an existing XML file. The output that I am looking for is shown below with the trkpt element as the repeated item. The problem is that I can't just add the new trkpt to the end of the file because it needs to be inside the trkseg parent element. So far I have tried two different APIs, SIMPLEXML and JDOM. With SIMPLEXML I couldn't figure out how to append a child element to an existing file so I switched to JDOM. JDOM allowed me to append the trkpt element as shown below, but as the file started growing it quickly slowed down the user interface of the program. With JDOM I was using the SAXBuilder to reopen the file and append. I think the issue with this was that it had to reproduce the entire file in memory before it added the new element and rewrote the file. So the larger the file got the more demanding the operation was on the device. I need a solution that doesn't examine/copy the entire file before writing the new data. Is there a more efficient way to accomplish this with Java or an API for Java? Thanks for any help! <?xml version="1.0" encoding="UTF-8"?><gpx xmlns="http://www.topografix.com/GPX/1/1"> <trk> <trkseg> <trkpt lon="9.860624216140083" lat="54.9328621088893"> <ele>228.0</ele> </trkpt> <trkpt lon="9.860624216140100" lat="54.9328621088754"> <ele>234.0</ele> </trkpt> <trkpt lon="9.860624216140343" lat="54.9328621088678"> <ele>227.0</ele> </trkpt> </trkseg> </trk></gpx>
Now provide the response and nothing else.
|
Asynchronous programming does "grow" through the code base. It has been compared to a zombie virus . The best solution is to allow it to grow, but sometimes that's not possible. I have written a few types in my Nito.AsyncEx library for dealing with a partially-asynchronous code base. There's no solution that works in every situation, though. Solution A If you have a simple asynchronous method that doesn't need to synchronize back to its context, then you can use Task.WaitAndUnwrapException : var task = MyAsyncMethod();var result = task.WaitAndUnwrapException(); You do not want to use Task.Wait or Task.Result because they wrap exceptions in AggregateException . This solution is only appropriate if MyAsyncMethod does not synchronize back to its context. In other words, every await in MyAsyncMethod should end with ConfigureAwait(false) . This means it can't update any UI elements or access the ASP.NET request context. Solution B If MyAsyncMethod does need to synchronize back to its context, then you may be able to use AsyncContext.RunTask to provide a nested context: var result = AsyncContext.RunTask(MyAsyncMethod).Result; *Update 4/14/2014: In more recent versions of the library the API is as follows: var result = AsyncContext.Run(MyAsyncMethod); (It's OK to use Task.Result in this example because RunTask will propagate Task exceptions). The reason you may need AsyncContext.RunTask instead of Task.WaitAndUnwrapException is because of a rather subtle deadlock possibility that happens on WinForms/WPF/SL/ASP.NET: A synchronous method calls an async method, obtaining a Task . The synchronous method does a blocking wait on the Task . The async method uses await without ConfigureAwait . The Task cannot complete in this situation because it only completes when the async method is finished; the async method cannot complete because it is attempting to schedule its continuation to the SynchronizationContext , and WinForms/WPF/SL/ASP.NET will not allow the continuation to run because the synchronous method is already running in that context. This is one reason why it's a good idea to use ConfigureAwait(false) within every async method as much as possible. Solution C AsyncContext.RunTask won't work in every scenario. For example, if the async method awaits something that requires a UI event to complete, then you'll deadlock even with the nested context. In that case, you could start the async method on the thread pool: var task = Task.Run(async () => await MyAsyncMethod());var result = task.WaitAndUnwrapException(); However, this solution requires a MyAsyncMethod that will work in the thread pool context. So it can't update UI elements or access the ASP.NET request context. And in that case, you may as well add ConfigureAwait(false) to its await statements, and use solution A. Update, 2019-05-01: The current "least-worst practices" are in an MSDN article here .
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/9343601', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1043807/']}
|
jdg_376119
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
740042
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a high school student and I am confused in one thing:"Why do we always assume in problems that if things are initially in contact with each other then they would be like that always?" 1)Suppose we have two blocks(A and B) initially in contact with each other and we apply force "F" on block "A". At first, I used to think that the contact force between block A and B would rise to the point when they acquire the same acceleration and at this moment I thought they would move together so at this point the contact force would become constant as there is no "deformation-reformation" thing happening after this moment. But if that's the case then at this moment the velocity of block A would still be higher than block B because its initial acceleration was more. If I have to plot a curve of acceleration-time it would be something like this: . the shape of the curve is random but I think it doesn't matter to my point(unless curve A crosses B and go even below it), the area under curve A would be higher than B so its velocity should be higher even at the moment when both have acquired the same acceleration. So, the deformation-reformation should still be happening as A is still approaching B and they would "never" reach to a common acceleration. So, why do we "assume" in problem solving that if these block are initially in contact, they would maintain that contact? But in reality, we have seen them moving together so how does this happen? This question came to my mind because I was thinking of a similar question of "water in a bucket revolving in a vertical plane" . We know that the water doesn't spill out from the bucket because the internal forces changes in a way so as to make the exact centripetal force required for water to move along with bucket. But the real question is: "How the internal forces "knows" that they have to maintain thing in contact at any moment in time?" Please answer in simple language if possible. Or if its a difficult concept for a high school student to learn ,please answer only in brief and let me know.
Now provide the response and nothing else.
|
Why do we always assume in problems that if things are initially in contact with each other then they would be like that always? Because it is simpler. At this point in your studies you are not sufficiently advanced to realistically model the interaction of the two blocks. So we simplify it for you. The purpose of the simplification is not that it is realistic but that it makes the problem solvable for someone with your background. The idea is that in order for you to eventually get to the point that you can handle more realistic physics you need "a very particular set of skills". Developing those skills requires working problems. Those problems don’t need to be realistic, they need to help you develop the skills. So we deliberately sacrifice realism in that pursuit. A more realistic, but still simplified, approach would be to treat the interaction as a sort of spring. Even more realistic would be to treat it as a spring and dashpot. Even more realistic would be to use some experimentally measured force-deformation profile. But all of these can wait until you have acquired the basic skills necessary. And the simplified model is a good “first order” approximation to all of these more realistic models.
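To make the "treat the interaction as a sort of spring" suggestion concrete, here is a minimal sketch with assumed symbols ($x_A,x_B$ the block positions, $k$ a large contact stiffness, $N$ the contact force, acting only while the blocks press on each other): $$m_A\ddot x_A = F - N,\qquad m_B\ddot x_B = N,\qquad N=\begin{cases}k\,(x_A-x_B), & x_A>x_B\\ 0, & \text{otherwise.}\end{cases}$$ As $k\to\infty$ the overlap $x_A-x_B$ shrinks toward zero and the motion averages to the rigid-body solution in which both blocks share the acceleration $F/(m_A+m_B)$; adding a dashpot term damps the leftover contact oscillation, which is why the spring-and-dashpot model is the next step up in realism.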
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/740042', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/273556/']}
|
jdg_376120
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
356847
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Very often when coding in Python (or any language where type is not specified) I run into the following problem: I want to write a function that can accept either a thing, or a container of such things in general. For example consider this subset function: def subset(df, months=all_months): return df[pd.DatetimeIndex(df['Timestamp']).month.isin(months)] I would now like to be able to call this both with a list of months, or a single month. The second case however is only possible by something ugly like: subset(df, months=["June"]) # Why not just subset(df, "June")? I can think of 2 solutions, neither of which feels elegant to me: I can use isinstance and treat scalars different that containers.This is not the best because I type check and also there are cases where the code in the function is complex and needs major revision when treating scalar instead of list I could cast the argument to list: months = [months] . Again I would be using the isinstance though. Is there are better looking solution than the ones I thought of? Even better, is there a well known solution that I am not aware of? EDIT Both answers provided acceptable solutions. I decided to stick with the one proposed by @Doc Brown because I can easily isolate the type checking into a decorator and want to avoid introducing unwrapping into the API for workplace specific reasons.
Now provide the response and nothing else.
|
Use an argument list:

    def subset(df, *months):
        for month in months:
            ...

    subset(df, 1)
    subset(df, 1, 2)
    subset(df, *[1, 2])

It's easiest for users of your api to understand how things work when functions work in exactly one way. Behaving differently depending on what you're passed can be very confusing.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/356847', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/282385/']}
|
jdg_376121
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34847981
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the curl command below: is it possible to allow newlines in the JSON (without minifying it) and execute it directly in bash (Mac/Ubuntu)?

    curl -0 -v -X POST http://www.example.com/api/users \
    -H "Expect:" \
    -H 'Content-Type: text/json; charset=utf-8' \
    -d \
    '{
        "field1": "test",
        "field2": {
            "foo": "bar"
        }
    }'

When I run the command above, an error seems to occur at the second { . How can I fix the command? Updated: actually, I was able to run the command without issue previously; I am not sure why the problem happens now.
Now provide the response and nothing else.
|
I remembered another way to do this with a "Here Document" as described in the Bash man page and detailed here . The @- means to read the body from STDIN, while << EOF means to pipe the script content until "EOF" as STDIN to curl. This layout may be easier to read than using separate files or the "echo a variable" approach.

    curl -0 -v -X POST http://www.example.com/api/users \
    -H "Expect:" \
    -H 'Content-Type: application/json; charset=utf-8' \
    --data-binary @- << EOF
    {
        "field1": "test",
        "field2": {
            "foo": "bar"
        }
    }
    EOF

NOTE: Use the --trace <outfile> curl option to record exactly what goes over the wire. For some reason, this Here Document approach strips newlines. (Update: Newlines were stripped by the curl -d option. Corrected!)
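If the body grows beyond a few lines, another option is to keep the JSON in its own file and let curl read it verbatim -- a sketch assuming the payload is saved as body.json next to the script:

    curl -X POST http://www.example.com/api/users \
      -H 'Content-Type: application/json; charset=utf-8' \
      --data-binary @body.json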
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34847981', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/398230/']}
|
jdg_376122
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16089144
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have constructors, and we can treat them as contracts to follow for object instantiation. There's no way to create an instance without providing the exact set of parameters the constructor requires. But how can we (and should we ever bother to) enforce some pre-mortem activity? We've got finalizers, but they are not recommended for general-purpose finalization. We also have IDisposable to implement, but if we work with a disposable object without a using block, we have no guarantee that Dispose will ever be called. Why is there no way to enforce some state of the object before it will be let go of? Tidying up in a finalizer is impossible because there's no guarantee that the object graph is intact and that objects referenced by the dying object haven't already been reclaimed by the GC. Sure, the client code not calling, for instance, the object's SaveState() gives some trouble to the client, not to my object. Nonetheless it is considered good practice to require all needed dependencies to be injected in the constructor (if no default value is available). Nobody readily says: "leave a default constructor, create properties, and throw exceptions if the object is in an invalid state." Update: As there are many votes for closing the question, I'd say that design patterns for this can also be an answer. Whether you use DI or not, you can just count how many times an object was requested/created. But without an explicit release call you do not know the moment when you should call Dispose. I simply do not understand how to implement disposal at the right time.
Now provide the response and nothing else.
|
Why is there no way to enforce some state of the object before it will be let go of? Because the whole point of a garbage collector is to simulate a machine that has an infinite amount of memory . If memory is infinite then you never need to clean it up. You're conflating a semantic requirement of your program -- that a particular side effect occur at a particular time -- with the mechanisms of simulating infinite storage . In an ideal world those two things should not have anything to do with each other. Unfortunately we do not live in an ideal world; the existence of finalizers is evidence of that. If there are effects that you want to achieve at a particular time, then those effects are part of your program and you should write code that achieves them . If they are important then they should be visible in the code so that people reading the code can see and understand them.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16089144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/921054/']}
|
jdg_376123
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6346238
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get the id of the element that triggered the jQuery .change() function?The function itself works properly, but I need a specific action for a selector with id="next" . $("select").change(function() { [...snip....] alert( $(this).attr('id') ); // <---- not working} Any ideas why the alert above isn't working?
Now provide the response and nothing else.
|
this is the DOM element on which the event was hooked. this.id is its ID. No need to wrap it in a jQuery instance to get it; the id property reflects the attribute reliably on all browsers.

    $("select").change(function() {
        alert("Changed: " + this.id);
    });

Live example. You're not doing this in your code sample, but if you were watching a container with several form elements, that would give you the ID of the container. If you want the ID of the element that triggered the event, you could get that from the event object's target property:

    $("#container").change(function(event) {
        alert("Field " + event.target.id + " changed");
    });

Live example (jQuery ensures that the change event bubbles, even on IE where it doesn't natively.)
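If some of the select elements are added to the page dynamically, a delegated handler works the same way; this sketch also shows branching on the id="next" case mentioned in the question:

    $(document).on('change', 'select', function () {
        if (this.id === 'next') {
            // specific action for the #next select goes here
        }
        alert('Changed: ' + this.id);
    });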
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6346238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/792513/']}
|
jdg_376124
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
415151
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $f=(u,v)\in \mathscr{D}'(U,\mathbb{C})$ be a distribution, where $U\subset\mathbb{C}=\mathbb{R}^2$ is an open set and $u$ and $v$ are the projection of $f$ onto the real and imaginary axis (ie $\langle f,\phi\rangle=\langle u,\phi\rangle+i\langle v,\phi\rangle$ ). Suppose that $$\frac{\partial}{\partial \overline{z}}f=0\qquad\text{in U,}$$ where $\frac{\partial}{\partial \overline{z}}=\frac{1}{2}\bigg(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\bigg)$ and the derivatives are in distributional sense.Does it follow that $f$ is holomorphic in the classical sense, ie $f\in C^\infty(U,\mathbb{C})$ and the Cauchy-Riemann equations are satisfied? The obvious idea would be to mollify, get holomorphic functions and then take the limit. But how can we conclude that the limit is still holomorphic?
Now provide the response and nothing else.
|
Remarks (2) and (3) are added in this edit. What Cohen's quoted proof outline is leaving implicit is the following statement in which $\mathrm{Con}(T)$ means " $T$ is consistent". $(*)$ Assuming $\mathrm{Con(ZF + SM)}$ , $\mathrm{V} \neq \mathrm {L}$ is not provable from $\mathrm{ZF + SM}$ , where $\mathrm{SM}$ stands for the statement "there is standard (i.e., well-founded) model of ZF". $(*)$ is an immediate consequence of the the well-known fact that $\mathrm{Con(ZF + SM)}$ implies $\mathrm{Con(ZF + SM + V = L)}$ . This well-known fact, in turn, follows from absoluteness considerations: if $\mathcal{M}\models \mathrm{ZF + SM}$ , then $\mathrm{L}^{\mathcal{M}} \models \mathrm{ZF + SM+V=L}$ , where $\mathrm{L}^{\mathcal{M}}$ is the constructible universe as computed in $\mathcal{M}$ . By the way: The quoted statement of Cohen in his article is phrased as the theorem below on pages 108-109 of his book "Set Theory and the Continuum Hypothesis". In Cohen's terminology SM stands for the statement "there is standard (i.e., well-founded) model of $\mathrm{ZF}$ ". Theorem. From $\mathrm{ZF + SM}$ or indeed from any axiom system containing $\mathrm{ZF}$ which is consistent with $\mathrm{V = L}$ , one cannot prove the existence of an uncountable standard model in which $\mathrm{AC}$ is true and $\mathrm{CH}$ is false, nor even one in which AC holds and which contains nonconstructible real numbers . Three remarks are in order: Remark (1) In unpublished work, Cohen and Solovay noted that one can use forcing over a countable standard model of ZF to build uncountable standard models of $\mathrm{ZF}$ (in which AC fails by Cohen's aforementioned result). Later, Harvey Friedman extended their result by showing that every countable standard model of $\mathrm{ZF}$ of (ordinal) height $\alpha$ can be generically extended to a model with the same height but whose cardinality is $\beth_{\alpha}$ ( Friedman, Harvey , Large models of countable height , Trans. Am. Math. Soc. 201, 227-239 (1975). ZBL0296.02036 ). Remark (2) It is easy to see (using the reflection theorem and relativizing to the constructible universe) that, assuming the consistency of $\mathrm{ZF + SM}$ , the theory $\mathrm{ZF + SM}$ + "there is no uncountable standard model of $\mathrm{ZFC}$ " is also consistent. Remark (3) Within $\mathrm{ZF}$ + "there is an uncountable standard model $\mathcal{M} \models \mathrm{ZFC+V=L}$ such that $\omega_3^{\mathcal{M}}$ is countable", one can use forcing to build a generic extension $\mathcal{N}$ of $\mathcal{M}$ that violates $\mathrm{CH}$ ; thus $\mathcal{N}$ is an uncountable standard model of $\mathrm{ZFC + \lnot CH}$ . More specifically, the assumption of countability of $\omega_3^{\mathcal{M}}$ , and the fact that GCH holds in $\mathcal{M}$ , assures us that there exists a $\mathbb{P}$ -generic filter over $\mathcal{M}$ , where $\mathbb{P}$ is the usual notion of forcing in $\mathcal{M}$ for adding $\omega_2$ Cohen reals. Thus, in the presence of the principle " $0^{\sharp}$ exists" (which is implied by sufficiently large cardinals, and implies that every definable object in the constructible universe is countable) there are lots of uncountable standard models of $\mathrm{ZFC + \lnot CH}$ .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/415151', 'https://mathoverflow.net', 'https://mathoverflow.net/users/351083/']}
|
jdg_376125
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4226
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Quantum Mechanics has generalized probability theory to negative/imaginary numbers, mostly to explain interference patterns, wave/particle duality and generally weird things like that. It can be seen more abstractly, however, as a noncommutative generalisation of Bayesian probability (quote from Terrence Tao). I'm curious about these things, though by no means an expert. Does this have any applications outside Quantum Mechanics? Just curious.
Now provide the response and nothing else.
|
Yes. I like the article Søren shared very much, and together with the references in that article I would recommend Muckenheim, W. et al. (1986). A Review of Extended Probabilities . Phys. Rep. 133 (6) 337-401. It's a physics paper for sure, but the applications there are not all related to quantum physics. My personal favorite application relates to de Finetti's Theorem (also Bayesian in flavor): if we don't mind negative probabilities then it turns out that all exchangeable sequences (even finite, perhaps negatively correlated ones) are a (signed) mixture of IID sequences. Of course, this itself has applications in quantum mechanics, in particular, that Fermi-Dirac statistics yield the same type of (signed) mixture representation that Bose-Einstein statistics do. My second personal favorite application (outside of physics proper) relates to infinite divisible (ID) distributions, which classically includes normal, gamma, poisson, ... the list continues. It isn't too hard to show that ID distributions must have unbounded support, which immediately kills distributions like the binomial or uniform (discrete+continuous) distributions. But if we permit negative probabilities then these problems disappear and the binomial, uniform (discrete+continuous), and a whole bunch of other distributions then become infinitely divisible - in this extended sense, please bear in mind. ID distributions relate to statistics in that they are limiting distributions in generalized central limit theorems. By the way, the first application is whispered folklore among probabilists and the infinite divisibility stuff is proved here , an informal electronic copy being here . Presumably there is a bunch of material on arXiv , too, though I haven't checked there in quite some time. As a final remark, whuber is absolutely right that it isn't really legal to call anything a probability that doesn't lie in $[0,1]$ , at the very least, not for the time being. Given that "negative probabilities" have been around for so long I don't see this changing in the near future, not without some kind of colossal breakthrough.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/4226', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1760/']}
|
jdg_376126
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9970495
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a table that looks something like this:

    Name    Year    Value
    A       2000    5
    A       2001    3
    A       2002    7
    A       2003    1
    B       2000    6
    B       2001    1
    B       2002    8
    B       2003    2

The user can query based on a range of years, and it will return the sum of Value grouped by Name, as such (assume queried years are 2000 - 2001):

    Name    SUM(Value)
    A       8
    B       7

Now, I wanted to insert a calculated field that outputs the ratio of the sum of the values of each name for ALL years to the sum of all values. Basically the percentage of all values attributed to A and B, respectively, like:

    Name    SUM(Value)    % Of Total
    A       8             0.484 (16 / 33)
    B       7             0.516 (17 / 33)

Note that even though the user queried only 2000-2001, I want the calculation to use the sum across all years. I've been searching and experimenting for hours and I cannot figure out how. I only know how to sum across the queried years like so:

    SELECT `Name`, SUM(`Value`)/(SELECT SUM(`Value`) FROM `table1`) AS "% of Total"
    FROM `table1`
    WHERE `Year` BETWEEN 2000 AND 2001
    GROUP BY `Name`;
Now provide the response and nothing else.
|
You can calculate the total (and from that the desired percentage) by using a subquery in the FROM clause: SELECT Name, SUM(Value) AS "SUM(VALUE)", SUM(Value) / totals.total AS "% of Total"FROM table1, ( SELECT Name, SUM(Value) AS total FROM table1 GROUP BY Name ) AS totalsWHERE table1.Name = totals.NameAND Year BETWEEN 2000 AND 2001GROUP BY Name; Note that the subquery does not have the WHERE clause filtering the years.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9970495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1307146/']}
|
jdg_376127
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37116204
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
    ID <- 1:10
    group <- c(1,1,1,2,2,2,3,3,3,3)
    var1 <- c(6:15)
    var2 <- c(7:16)
    var3 <- c(6:11, NA, NA, NA, NA)
    var4 <- c(4:9, NA, NA, NA, NA)
    data <- data.frame(ID, group, var1, var2, var3, var4)

    library(dplyr)
    data %>% group_by(group) %>% boxplot(var1, var2)

The last line does not work as I wish. The idea is to get 4 boxplots in one graphic, two for each variable. Maybe I need to use ggplot2?
Now provide the response and nothing else.
|
You need to reorganize the data if you want to get both variables in the same plot. Here is a ggplot2 solution:

    # load library
    library(ggplot2)
    library(tidyr)
    library(ggthemes)

    # reorganize data
    df <- gather(data, "ID", "group")
    # rename columns
    colnames(df) <- c("ID", "group", "var", "value")

    # plot
    ggplot(data=df) +
      geom_boxplot(aes(x=factor(group), y=value, fill=factor(var)),
                   position=position_dodge(1)) +
      scale_x_discrete(breaks=c(1, 2, 3), labels=c("A", "B", "C")) +
      theme_minimal() +
      scale_fill_grey()

Making boxplots with the same width is a whole different question (solution here), but one simple alternative would be like this:

    # recode column `group` in the `data.frame`
    df <- transform(df, group = ifelse(group==1, 'A', ifelse(group==2, 'B', "C")))

    # plot
    ggplot(data=df) +
      geom_boxplot(aes(x=factor(var), y=value, fill=factor((var))),
                   position=position_dodge(1)) +
      geom_jitter(aes(x=factor(var), y=value, color=factor((var)))) +
      facet_grid(.~group, scales = "free_x") +
      theme_minimal() +
      scale_fill_grey() +
      theme(axis.text.x=element_blank(),
            axis.title.x=element_blank(),
            axis.ticks=element_blank())
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37116204', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5794837/']}
|
jdg_376128
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3444
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In these notes by Terence Tao is a proof of Stirling's formula. I really like most of it, but at a crucial step he uses the integral identity $$n! = \int_{0}^{\infty} t^n e^{-t} dt$$ , coming from the Gamma function. I have a mathematical confession to make: I have never "grokked" this identity. Why should I expect the integral on the right to give me the number of elements in the symmetric group on $n$ letters? (It's not that I don't know how to prove it. It's quite fun to prove; my favorite proof observes that it is equivalent to the integral identity $\int_{0}^{\infty} e^{(x-1)t} dt = \frac{1}{1 - x}$ . But if someone were to ask me, "Yes, but why, really ?" I would have no idea what to say.) So what are more intuitive ways of thinking about this identity? Is there a probabilistic interpretation? What kind of random variable has probability density function $\frac{t^n}{n!} e^{-t}$ ? (What does this all have to do with Tate's thesis?) As a rough measure of what I'm looking for, your answer should make it obvious that $t^n e^{-t}$ attains its maximum at $t = n$ . Edit: The kind of explanation I'm looking for, as I described in the comments, is similar to this explanation of the beta integral.
Now provide the response and nothing else.
|
I haven't quite got this straight yet, but I think one way to go is to think about choosing points at random from the positive reals. This answer is going to be rather longer than it really needs to be, because I'm thinking about this in a few (closely related) ways, which probably aren't all necessary, and you can decide to reject the uninteresting parts and keep anything of value. Very roughly, the idea is that if you "randomly" choose points from the positive reals and arrange them in increasing order, then the probability that the $(n+1)^\text{th}$ point is in a small interval $(t,t+dt)$ is a product of probabilities of independent events, $n$ factors of $t$ for choosing $n$ points in the interval $[0,t]$, one factor of $e^{-t}$ as all the other points are in $[t,\infty)$, one factor of $dt$ for choosing the point in $(t,t+dt)$, and a denominator of $n!$ coming from the reordering. At least, as an exercise in making a simple problem much harder, here it goes... I'll start with a bit of theory before trying to describe intuitively why the probability density $\dfrac{t^n}{n!}e^{-t}$ pops out. We can look at the homogeneous Poisson process (with rate parameter $1$). One way to think of this is to take a sequence on independent exponentially distributed random variables with rate parameter $1$, $S_1,S_2,\ldots$, and set $T_n=S_1+\cdots+S_n$. As has been commented on already, $T_{n+1}$ has the probability density function $\dfrac{t^n}{n!}e^{-t}$. I'm going to avoid proving this immediately though, as it would just reduce to manipulating some integrals. Then, the Poisson process $X(t)$ counts the number of times $T_i$ lying in the interval $[0,t]$. We can also look at Poisson point processes (aka, Poisson random measures , but that Wikipedia page is very poor). This is just makes rigorous the idea of randomly choosing unordered sets of points from a sigma-finite measure space $(E,\mathcal{E},\mu)$. Technically, it can be defined as a set of nonnegative integer-valued random variables $\{N(A)\colon A\in\mathcal{E}\}$ counting the number of points chosen from each subset A, such that $N(A)$ has the Poisson distribution of rate $\mu(A)$ and $N(A_1),N(A_2),\ldots$ are independent for pairwise disjoint sets $A_1,A_2,\ldots$. By definition, this satisfies$$\begin{array}{}\mathbb{P}(N(A)=n)=\dfrac{\mu(A)^n}{n!}e^{-\mu(A)}.&&(1)\end{array}$$The points $T_1,T_2,\ldots$ above defining the homogeneous Poisson process also define a Poisson random measure with respect to the Lebesgue measure $(\mathbb{R}\_+,{\cal B},\lambda)$. Once you forget about the order in which they were defined and just regard them as a random set that is, which I think is the source of the $n!$. If you think about the probability of $T_{n+1}$ being in a small interval $(t,t+\delta t)$ then this is just the same as having $N([0,t])=n$ and $N((t,t+\delta t))=1$, which has probability $\dfrac{t^n}{n!}e^{-t}\delta t$. So, how can we choose points at random so that each small set $\delta A$ has probability $\mu(\delta A)$ of containing a point, and why does $(1)$ pop out? I'm imagining a hopeless darts player randomly throwing darts about and, purely by luck, hitting the board with some of them. Consider throwing a very large number $N\gg1$ of darts, independently, so that each one only has probability $\mu(A)/N$ of hitting the set, and is distributed according to the probability distribution $\mu/\mu(A)$. This is consistent, at least, if you think about the probability of hitting a subset $B\subseteq A$. 
The probability of missing with all of them is $(1-\mu(A)/N)^N=e^{-\mu(A)}$. This is a multiplicative function due to independence of the number hitting disjoint sets. To get the probability of one dart hitting the set, multiply by $\mu(A)$ (one factor of $\mu(A)/N$ for each individual dart, multiplied by $N$ because there are $N$ of them). For $n$ darts, we multiply by $\mu(A)$ $n$ times, for picking $n$ darts to hit, then divide by $n!$ because we have over-counted the subsets of size $n$ by this factor (due to counting all $n!$ ways of ordering them). This gives $(1)$. I think this argument can probably be cleaned up a bit. Getting back to choosing points randomly on the positive reals, this gives a probability of $\dfrac{t^n}{n!}e^{-t}dt$ of picking $n$ in the interval $[0,t]$ and one in $(t,t+dt)$. If we sort them in order as $T_1\lt T_2\lt\cdots$ then $\mathbb{P}(T_1\gt t)=e^{-t}$, so it is exponentially distributed. Conditional on this, $T_2,T_3,\ldots$ are chosen randomly from $[T_1,\infty)$, so we see that the differences $T_{i+1}-T_{i}$ are independent and identically distributed. Why is $\dfrac{t^n}{n!}e^{-t}$ maximized at $t=n$? I'm not sure why the mode should be a simple property of a distribution. It doesn't even exist except for unimodal distributions. As $T_{n+1}$ is the sum of $n+1$ IID random variables of mean one, the law of large numbers suggests that it should be peaked approximately around $n$. The central limit theorem goes further, and gives $\dfrac{t^n}{n!}e^{-t}\approx\dfrac{1}{\sqrt{2\pi n}}e^{-(t-n)^2/{2n}}$. Stirling's formula is just this evaluated at $t=n$. What's this to do with Tate's thesis? I don't know, and I haven't read it (but intend to), but have a vague idea of what it's about. If there is anything to do with it, maybe it is something to do with the fact that we are relating the sums of independent random variables $S_1+\cdots+S_n$ distributed with respect to the Haar measure on the multiplicative group $\mathbb{R}_+$ (edit: oops, that's not true, the multiplicative Haar measure has cumulative distribution given by $\log$, not $\exp$) with randomly chosen sets according to the Haar measure on the additive group $\mathbb{R}$.
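If a numerical sanity check helps make the heuristic concrete, here is a short Monte Carlo sketch (Python/numpy, parameters chosen arbitrarily) comparing the empirical density of $T_{n+1}=S_1+\cdots+S_{n+1}$ with $t^n e^{-t}/n!$, and confirming that the histogram peaks near $t=n$:

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    n, samples = 4, 500_000

    # T_{n+1} = S_1 + ... + S_{n+1} with S_i i.i.d. Exponential(1).
    t = rng.exponential(1.0, size=(samples, n + 1)).sum(axis=1)

    hist, edges = np.histogram(t, bins=80, range=(0.0, 20.0), density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    target = mid**n * np.exp(-mid) / factorial(n)

    print(np.abs(hist - target).max())   # small (Monte Carlo error only)
    print(mid[np.argmax(hist)])          # close to n = 4, the mode of the density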
|
{}
|
{'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/3444', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/232/']}
|
jdg_376129
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
220440
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $P_n$ be the set of degree $n$ polynomials that pass through $(0,1)$ and $(1,1)$ and are non-negative on the interval $[0,1]$ (but may be negative elsewhere). Let $a_n = \min_{p\in P_n} \int_0^1 p(x)\,\mathrm{d}x$ and let $p_n$ be the polynomial that attains this minimum. Are $a_n$ or $p_n$ known sequences? Is there some clever way to rephrase this question in terms of linear algebra and a known basis of polynomials?
Now provide the response and nothing else.
|
Following Robert Israel's answer, we also scale everything to $[-1,1]$ (thus multiplying the result by 2). As he mentions, the optimal polynomial is always a square of some other polynomial, $p_{2n}=p_{2n+1}=q_n^2$, and $q_n$ is either even or odd (see Lemma below). So we are left to find the minimal $L_2[-1,1]$-norm of an odd/even polynomial $q_n$ such that $\deg q_n\leq n$ and $q_n(\pm1)=(\pm1)^n$. In other words (recall the division by 2 in the first line!), $a_{2n}=a_{2n+1}=d_n^2/2$, where $d_n$ is the distance from $0$ to the hyperplane defined by $q(1)=1$ in the space of all odd/even polynomials of degree at most $n$ with $L_2[-1,1]$-norm. Now, this hyperplane is the affine hull of the Legendre polynomials $ P_i(x)=\frac1{2^ii!}\frac{d^i}{dx^i}(x^2-1)^i$, where $i$ has the same parity as $n$ (since we know that $ P_i(1)=1$ and $P_i(-1)=(-1)^i$). Next, by $\| P_i\|^2=\frac{2}{2i+1}$, our distance is$$ \left(\sum_{j\leq n/2}\| P_{n-2j}\|^{-2}\right)^{-1/2} =\left(\sum_{j\leq n/2}\frac{2(n-2j)+1}{2}\right)^{-1/2}=\sqrt{\frac4{(n+1)(n+2)}},$$attained at$$ q_n(x)=\left(\sum_{i\leq n/2}\frac{2}{2(n-2i)+1}\right)^{-1} \sum_{i\leq n/2}\frac{2P_{n-2i}(x)}{2(n-2i)+1}.$$Thus the answer for the initial question is $a_{2n}=a_{2n+1}=\frac2{(n+1)(n+2)}$ and $p_{2n}=p_{2n+1}=q_n^2$. Lemma. For every $n$, one of optimal polynomials on $[-1,1]$ is a square of an odd or even polynomial. Proof. Let $r(x)$ be any polynomial.which is nonnegative on $[-1,1]$ with $r(\pm 1)=1$. We will replace it by some other polynomial which has the required form, has the same (or less) degree and the same values at $\pm 1$, is also nonnegative on $[-1,1]$, and is not worse in the integral sense. Due to the compactness argument, this yields the required result. Firstly, replacing $r(x)$ by $\frac12(r(x)+r(-x))$, we may assume that $r$ is even (and thus has an even degree $2n$). Let $\pm c_1,\dots,\pm c_{n}$ be all complex roots of $r$ (regarding multiplicities); then$$ r(x)=\prod_{j=1}^n\frac{x^2-c_j^2}{1-c_j^2},$$due to $r(\pm 1)=1$. Now, for all $c_j\notin[-1,1]$ we simultaneously perform the following procedure. (a) If $c_j$ is real, then we replace $\pm c_j$ by $\pm x_j=0$. Notice that $$ \frac{|x^2-c_j^2|}{|1-c_j^2|}\geq 1\geq \frac{|x^2-0^2|}{|1-0^2|}.$$for all $x\in[-1,1]$. (b) If $c_j$ is non-real, then we choose $x_j\in[-1,1]$ such that $\frac{|c_j-1|}{|c_j+1|}=\frac{|x_j-1|}{|x_j+1|}$. Notice that all complex $z$ with $\frac{|c_j-z|}{|x_j-z|}=\frac{|c_j-1|}{|x_j-1|}$ form a circle passing through $-1$ and $1$, and the segment $[-1,1]$ is inside this circle. Therefore, for every $x\in[-1,1]$ we have$$ \frac{|c_j-x|}{|x_j-x|}\geq\frac{|c_j-1|}{|x_j-1|},$$thus$$ \frac{|x^2-c_j^2|}{|1-c_j^2|} =\frac{|c_j-x|}{|c_j-1|}\cdot\frac{|c_j+x|}{|c_j+1|} \geq \frac{|x_j-x|}{|x_j-1|}\cdot\frac{|x_j+x|}{|x_j+1|} =\frac{|x^2-x_j^2|}{|1-x_j^2|}.$$So, we replace $\pm c_j$ and $\pm\bar c_j$ by $\pm x_j$ and $\pm x_j$ (or simply $\pm c_j$ by $\pm x_j$ if $c_j$ is purely imaginary). After this procedure has been applied, we obtain a new polynomial whose roots are in $[-1,1]$ and have even multiplicities, and its values at $\pm1$ are equal to $1$. So it is a square of some polynomial which is even/odd (since the roots are still split into pairs of opposite numbers). On the other hand, its values at every $x\in[-1,1]$ do not exceed the values of $r$ at the same points, as was showed above. So the obtained polynomial is not worse in the integral sense, as required. The lemma is proved.
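As a numerical cross-check of the value $a_{2n}=a_{2n+1}=\frac{2}{(n+1)(n+2)}$, here is a short Python sketch (numpy assumed). It builds a combination of Legendre polynomials of the right parity using the coefficients $c_k\propto\frac{2k+1}{2}$ that the Lagrange condition gives when minimising $\sum_k c_k^2\|P_k\|^2$ subject to $\sum_k c_k=1$, verifies the endpoint constraints, and integrates $q_n^2$ over $[-1,1]$:

    import numpy as np
    from numpy.polynomial.legendre import Legendre

    def min_integral(n):
        ks = np.arange(n % 2, n + 1, 2)      # degrees with the same parity as n
        w = (2 * ks + 1) / 2.0               # 1 / ||P_k||^2
        c = w / w.sum()                      # minimiser of sum c_k^2 ||P_k||^2 with sum c_k = 1
        coef = np.zeros(n + 1)
        coef[ks] = c
        q = Legendre(coef)                   # q_n on [-1, 1]
        assert abs(q(1.0) - 1.0) < 1e-12
        assert abs(q(-1.0) - (-1.0) ** n) < 1e-12
        q2_int = (q * q).integ()             # antiderivative of q_n^2
        d2 = q2_int(1.0) - q2_int(-1.0)      # = d_n^2
        return d2 / 2.0                      # a_{2n} = a_{2n+1} = d_n^2 / 2

    for n in range(1, 7):
        print(n, min_integral(n), 2.0 / ((n + 1) * (n + 2)))

The printed pairs agree to machine precision with $\frac{2}{(n+1)(n+2)}$.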
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/220440', 'https://mathoverflow.net', 'https://mathoverflow.net/users/50796/']}
|
jdg_376130
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
317337
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be an irreducible smooth projective variety over $\mathbb{C}$ . Let $G$ be an affine algebraic group over $\mathbb{C}$ . Let $p : E_G \longrightarrow X$ be a holomorphic principal $G$ -bundle on $X$ . Let $ad(E_G) = E_G \times^G \mathfrak{g}$ be the adjoint vector bundle of $E_G$ associated to the adjoint representation $ad : G \longrightarrow End(\mathfrak{g})$ of $G$ on its Lie algebra $\mathfrak{g}$ . The fibers of $ad(E_G)$ are $\mathbb{C}$ -linearly isomorphic to $\mathfrak{g}$ . Consider $ad(E_G)$ as a sheaf of $\mathcal{O}_X$ -modules on $X$ . Question: Is there any $\mathcal{O}_X$ -bilinear homomorphism $[,] : ad(E_G)\times ad(E_G) \to ad(E_G)$ giving a Lie algebra structure on the sheaf $ad(E_G)$ ?
Now provide the response and nothing else.
|
A principal $G$ -bundle gives a monoidal functor from the category of representations of $G$ to the category of vector bundles. In particular, it takes the morphism $$[-,-] \colon \mathfrak{g} \otimes \mathfrak{g} \to \mathfrak{g}$$ of $G$ -representations (for the adjoint action) to a morphism of vector bundles $$[-,-] \colon ad(E_G) \otimes ad(E_G) \to ad(E_G).$$ By functoriality, it is skew-symmetric and satisfies the Jacobi identity, hence provides the sheaf $ad(E_G)$ with a Lie algebra structure.
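To spell the construction out in a local trivialisation (a sketch, with notation not taken from the question): if $\{U_\alpha\}$ trivialises $E_G$ with transition functions $g_{\alpha\beta} : U_\alpha \cap U_\beta \to G$, then $ad(E_G)|_{U_\alpha} \cong \mathcal{O}_{U_\alpha} \otimes_{\mathbb{C}} \mathfrak{g}$, and the pairing is just the fibrewise bracket of $\mathfrak{g}$-valued functions, $$[s,t](x) = [s(x), t(x)]_{\mathfrak{g}}, \qquad s,t \in \Gamma(U_\alpha, \mathcal{O}_{U_\alpha} \otimes \mathfrak{g}).$$ These local brackets glue because the transition functions act on $ad(E_G)$ by $\mathrm{Ad}(g_{\alpha\beta})$, and each $\mathrm{Ad}(g)$ is a Lie algebra automorphism: $\mathrm{Ad}(g)[X,Y] = [\mathrm{Ad}(g)X, \mathrm{Ad}(g)Y]$. Skew-symmetry and the Jacobi identity can then be checked fibrewise.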
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/317337', 'https://mathoverflow.net', 'https://mathoverflow.net/users/124771/']}
|
jdg_376131
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36802586
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to do something like:

    var list = [1,2,3,4,5]
    if (2 in list) {
        return true
    }

from an ng-class, so I tried:

    ng-class="this.id in list ? 'class-1' : 'class-2' ">

But it doesn't work; it throws an error:

    Syntax Error: Token 'in' is an unexpected token at ...
Now provide the response and nothing else.
|
For arrays you'd use indexOf, not in (which is for objects):

    if ( list.indexOf(this.id) !== -1 ) { ... }

so

    ng-class="{'class-1' : list.indexOf(this.id) !== -1, 'class-2' : list.indexOf(this.id) === -1}"
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36802586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4709517/']}
|
jdg_376132
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7953531
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to have a Lookup Transformation task where the connection will be an Excel source. But SSIS allows only an OLE DB connection to be set as the connection. Is there any possibility to use an Excel file as the connection for the Lookup Transformation?
Now provide the response and nothing else.
|
Merge sort is an option, but if you really want to use Excel as a source for the SSIS Lookup task, you can do it. As you have correctly discerned, you cannot use the Excel Connection Manager in a Lookup task; it only accepts the OLE DB connection manager. The trick, then, is to use an OLE DB connection manager with an Excel spreadsheet. For this example, I have a spreadsheet with state codes and their full names, and my source data only has the abbreviations flowing through. This walkthrough will wire up a Lookup task against Excel to retrieve those values. Keep these caveats in mind:

- Lookups are case sensitive, regardless of whether the source database (or file in this case) is case sensitive.
- Excel strings are always going to be interpreted as unicode/nvarchar.

Source Data: given a simple file sitting at C:\tmp\LookupReference.xlsx that looks like the following.

Connection Manager: we must first establish an OLE DB Connection Manager. Instead of the default "Native OLE DB\SQL Server Native Client 10.0", change that to "Native OLE DB\Microsoft Office 12.0 Access Database Engine OLE DB Provider". In the Server or file name, locate the source file. Clicking Test Connection at this point will result in an error. Here comes the "magic." Click the All tab and scroll to the top. In Extended Properties, add the indicated string "Excel 12.0;HDR=YES; IMEX=1;". This tells the provider that we are going to use Excel 12.0 (.xlsx format) with a header row, and the IMEX 1 tells the driver there will be intermixed data. Your package should now look something like this: a connection manager with the extended properties set (a preexisting data flow is assumed).

Data Flow: to simplify matters, I have a script source that generates 3 rows of data with state codes MO, KS and NE and sends them down the pipeline. Your source will obviously be different, but the concept will remain the same. Sample code provided in the annotation. In your Lookup transformation, you will need to write a query against the spreadsheet. It's similar to a normal database query, except your table is going to be Sheet1$ (unless you have a named range, in which case your table would be MyRange). Note the $ is required when referencing a sheet. Using the sample spreadsheet above, my query would be

    SELECT S.StateCode, S.StateName
    FROM `Sheet1$` S

I map my Script task column StateCode to the reference query's StateCode column and check the StateName field, as I want to add that to my data flow. I've put a Data Viewer after the Lookup task to verify my lookups worked. Everything works and we're all happy.

Old Excel: if you are using a .xls file, you need to make the following changes.
- In your Connection Manager, instead of the Office 12 provider, select the "Native OLE DB\Microsoft Jet 4.0 OLE DB Provider".
- The Extended Properties become "EXCEL 8.0;HDR=Yes; IMEX=1;".

Minor Edit: blogged this at Using Excel in an SSIS lookup transformation. You can also use a Cached Connection Manager to use any source for lookups in SSIS 2008+.
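If you just want to prototype the lookup logic outside SSIS before wiring up the package, a rough Python/pandas equivalent of the same join (assuming an Excel engine such as openpyxl is installed, and using the hypothetical file path and column names from the walkthrough) would be:

    import pandas as pd

    # Reference sheet: StateCode, StateName columns, as in the example workbook.
    ref = pd.read_excel(r"C:\tmp\LookupReference.xlsx", sheet_name="Sheet1")

    # Stand-in for the script source that emits the abbreviations.
    src = pd.DataFrame({"StateCode": ["MO", "KS", "NE"]})

    # A left join mirrors the Lookup task: unmatched rows keep NaN in StateName.
    out = src.merge(ref, on="StateCode", how="left")
    print(out)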
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7953531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/205270/']}
|
jdg_376133
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6874346
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like a function that is not a member of a class and is accessible from any class. I assume I would have to #include the header file where the function is declared, but I don't know where to define such a global function. Are there good reasons against having such a function in the first place?
Now provide the response and nothing else.
|
You need a body (in a .cpp file):

    int foo()
    {
        return 1;
    }

and a declaration/prototype in a header file, which will be included before any use of the function:

    #ifndef MY_FOO_HEADER_
    #define MY_FOO_HEADER_

    int foo();

    #endif

then using it somewhere else:

    #include "foo.h"

    void do_some_work()
    {
        int bar = foo();
    }

Or use an inline function (this doesn't guarantee it'll be inlined, but it is useful for small functions like foo):

    #ifndef MY_FOO_HEADER_
    #define MY_FOO_HEADER_

    inline int foo() { return 1; }

    #endif

Alternatively, you can abuse C-style header-based functions (so this goes in a header; the static forces it to exist in a single compilation unit only, and you should avoid this):

    #ifndef MY_FOO_HEADER_
    #define MY_FOO_HEADER_

    static int foo() { return 1; }

    #endif
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6874346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/446554/']}
|
jdg_376134
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35615413
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning lodash. Is it possible to use lodash to find a substring in an array of strings?

    var myArray = [
        'I like oranges and apples',
        'I hate banana and grapes',
        'I find mango ok',
        'another array item about fruit'
    ]

Is it possible to confirm if the word 'oranges' is in my array? I've tried _.includes, _.some and _.indexOf, but they all failed as they look at the full string, not a substring.
Now provide the response and nothing else.
|
You can easily construct an iteratee for some() using lodash's higher-order functions. For example: _.some(myArray, _.unary(_.partialRight(_.includes, 'orange'))); The unary() function ensures that only one argument is passed to the callback. The partialRight() function is used to apply the 'orange' value as the second argument to includes() . The first argument is supplied with each iteration of some() . However, this approach won't work if case sensitivity matters. For example, 'Orange' will return false. Here's how you can handle case sensitivity: _.some(myArray, _.method('match', /Orange/i)); The method() function creates a function that will call the given method of the first argument passed to it. Here, we're matching against a case-insensitive regular expression. Or, if case-sensitivity doesn't matter and you simply prefer the method() approach, this works as well for ES2015: _.some(myArray, _.method('includes', 'orange'));
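For comparison, the underlying idea (apply a substring test across the elements) is a one-liner in most languages; for instance, a Python sketch of the same check:

    my_array = [
        "I like oranges and apples",
        "I hate banana and grapes",
        "I find mango ok",
        "another array item about fruit",
    ]

    print(any("oranges" in s for s in my_array))           # True
    print(any("oranges" in s.lower() for s in my_array))   # case-insensitive variant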
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35615413', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4608244/']}
|
jdg_376135
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38775044
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to use OpenCV in a Python web application I created on an Amazon EC2 Micro instance running Apache. I've got everything configured and working, except OpenCV isn't installing. This is the output I got from the Apache error log:

    [Thu Aug 04 18:31:54 2016] [error] [client 72.219.147.5] import cv2
    [Thu Aug 04 18:31:54 2016] [error] [client 72.219.147.5] ImportError: No module named cv2

Here is what I've tried: I've installed pip and tried running pip install pyopencv. That doesn't work and gives me errors. I've also tried manually installing it by following this: How to install OpenCV on Amazon Linux? and this: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html?highlight=download#installing-opencv-python-from-pre-built-binaries and this: http://techieroop.com/install-opencv-in-centos/

Even after installation, the cv2.so file is nowhere to be found. I tried to search for it using sudo find / -name "cv2.so" but nothing came up. I do, however, have the following .so files installed:

    /usr/local/lib/libopencv_photo.so
    /usr/local/lib/libopencv_stitching.so
    /usr/local/lib/libopencv_flann.so
    /usr/local/lib/libopencv_imgcodecs.so
    /usr/local/lib/libopencv_videostab.so
    /usr/local/lib/libopencv_ml.so
    /usr/local/lib/libopencv_objdetect.so
    /usr/local/lib/libopencv_imgproc.so
    /usr/local/lib/libopencv_superres.so
    /usr/local/lib/libopencv_core.so
    /usr/local/lib/libopencv_video.so
    /usr/local/lib/libopencv_highgui.so
    /usr/local/lib/libopencv_features2d.so
    /usr/local/lib/libopencv_shape.so
    /usr/local/lib/libopencv_videoio.so
    /usr/local/lib/libopencv_calib3d.so

Also, when running the cmake command, this is the output I'm getting:

    -- Python 2:
    --   Interpreter:   /usr/bin/python2.7 (ver 2.7.10)
    --   Libraries:     NO
    --   numpy:         NO (Python wrappers can not be generated)
    --   packages path: lib/python2.7/dist-packages

Any help is appreciated.
Now provide the response and nothing else.
|
Tested and working on amzn-ami-hvm-2016.03.1.x86_64-gp2:

    sudo yum install git cmake gcc-c++ numpy python-devel
    sudo pip install --upgrade pip
    sudo ln -rs /usr/local/bin/pip /usr/bin/
    wget https://pypi.python.org/packages/18/eb/707897ab7c8ad15d0f3c53e971ed8dfb64897ece8d19c64c388f44895572/numpy-1.11.1-cp27-cp27mu-manylinux1_x86_64.whl
    sudo pip install numpy-1.11.1-cp27-cp27mu-manylinux1_x86_64.whl

    git clone https://github.com/Itseez/opencv.git
    cd opencv
    git checkout 3.1.0
    mkdir build
    cd build
    cmake .. -DBUILD_opencv_python2=ON
    make -j4
    sudo make install

    echo 'export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/site-packages/:/usr/local/lib/python2.7/dist-packages/' >> ~/.bashrc
    . ~/.bashrc
    python -c 'import cv2; print "cv2 imported"'

Most importantly, after the cmake step you should see this in the output:

    -- Python 2:
    --   Interpreter:   /usr/bin/python2.7 (ver 2.7.10)
    --   Libraries:     /usr/lib64/libpython2.7.so (ver 2.7.10)
    --   numpy:         /usr/local/lib64/python2.7/site-packages/numpy/core/include (ver 1.11.1)
    --   packages path: lib/python2.7/dist-packages

If it is not showing up, you need to completely remove the build folder and rerun cmake after correctly installing numpy; just rerunning cmake inside your already existing build folder will not work.
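Once the build and the PYTHONPATH export above have gone through, a slightly richer smoke test than the one-liner (still assuming the cv2 module was built for Python 2.7 as above) is:

    import cv2
    import numpy as np

    print(cv2.__version__)                       # expect 3.1.0

    # Trivial round trip through the C++ bindings.
    img = np.zeros((8, 8, 3), dtype=np.uint8)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    print(gray.shape)                            # (8, 8)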
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38775044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1376311/']}
|
jdg_376136
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22476778
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
PHPStorm shows that all the variables from other files, both required and included, are undefined. I found this solution here, but after I disabled that option, Ignore 'include' and 'require' statements, the IDE ignored all undefined variables. For example, I have a file a.php with the content $name = 'Bob', and a file b.php, which requires a.php. When I type echo $name in file b.php it works as expected and displays 'Bob'. The IDE, however, highlights the variable $name, claiming it's undefined. If I disable the option 'Undefined variable' - Ignore 'include' and 'require' statements, the IDE stops highlighting it. But with that option I can now write any variable, for example $sdadasdasdas, in file b.php and the IDE doesn't highlight it. Can PHPStorm understand which variables are set in included/required files and which ones are not?
Now provide the response and nothing else.
|
All the above answers remove or suppress warnings, which imho is not a good solution. The best solution is to add a header docblock declaring the variables that are used. Example:

    <?php
    /**
     * @var string $name
     * @var FormData $form
     */
    ?>

This will not only prevent the "Undefined variable" warning from showing, but also document your code and make autocomplete work as expected.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22476778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2206822/']}
|
jdg_376137
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17385388
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
    - (void) addABlock {
        void (^aBlock)(void) = ^() { [someObject doSomething]; };

        [self.myMutableArray addObject: aBlock];          // Oops..
        [self.myMutableArray addObject: [aBlock copy]];   // works fine
    }

In the above simplified example I am seeing undefined behavior if the block copy is not performed. This case is specifically listed in Apple's ARC transition guide. The part that I do not understand is why I have to manually call copy. The block is created on the stack, so a Block_copy needs to be performed - that much is clear. NSArray does not call copy, but it should call retain on objects that get added. So why doesn't [NSBlock retain] simply call through to [NSBlock copy]?

http://developer.apple.com/library/mac/#releasenotes/ObjectiveC/RN-TransitioningToARC/Introduction/Introduction.html
http://www.galloway.me.uk/2013/05/a-look-inside-blocks-episode-3-block-copy/
Now provide the response and nothing else.
|
Update

Despite what the Apple documentation says:

    Blocks "just work" when you pass blocks up the stack in ARC mode, such as in a return. You don't have to call Block Copy any more. You still need to use [^{} copy] when passing "down" the stack into arrayWithObjects: and other methods that do a retain.

it's no longer necessary to manually call copy on a block when adding it to a container. The lack of an automatic copy in this case has been considered a compiler bug and was fixed in llvm a long time ago.

    "We consider this to be a compiler bug, and it has been fixed for months in the open-source clang repository." (John McCall, LLVM developer)

I personally tested this in Xcode 5, using the latest Apple LLVM 5.0 compiler.

    - (NSArray *)blocks {
        NSMutableArray *array = [NSMutableArray array];
        for (NSInteger i = 0; i < 3; i++) {
            [array addObject:^{ return i; }];
        }
        return array;
    }

    - (void)test {
        for (NSInteger (^block)() in [self blocks]) {
            NSLog(@"%li", block());
        }
    }

The above example correctly prints 012 under ARC and crashes with EXC_BAD_ACCESS under MRC. Note that this is - finally - coherent with the llvm documentation, which states

    whenever these semantics call for retaining a value of block-pointer type, it has the effect of a Block_copy

meaning that whenever ARC has to retain a pointer and this pointer happens to be a block-pointer type, Block_copy will be called instead of retain.

Original answer

The part that I do not understand is why I have to manually call copy.

Blocks are one of the few examples of Objective-C objects allocated on the stack (for performance reasons), so when you return from the method call you lose them, due to the tear-down of the current stack frame. Sending copy to a stack-block will call Block_copy on it and move it to the heap, allowing you to keep a valid reference to the block.

So why doesn't [NSBlock retain] simply call through to [NSBlock copy]

This would break the usual semantics of retain, which is supposed to return the object itself with an incremented retain count. Since incrementing a retain count on a stack-block doesn't make any sense, calling retain on a stack-block doesn't have any effect. Apple could have implemented it differently, as you suggest, but they preferred to stick as much as possible to the common contracts of the memory management methods.

As a further reference on blocks, you may want to have a look at this great blog post by @bbum. It's pre-ARC but the majority of the concepts haven't changed.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17385388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/355993/']}
|
jdg_376138
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
63397534
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using react-query to make API calls, and in this problem case I want to only call the API if certain conditions are met. I have an input box where users enter a search query. When the input value is changed, a search server is called with the contents of the input as the search query ... but only if the input value is more than 3 chars long. In my React component I'm calling:

    const {data, isLoading} = useQuery(['search', searchString], getSearchResults);

And my getSearchResults function will, conditionally, make an API call:

    const getSearchResults = async (_, searchString) => {
      if (searchString.length < 3) return {data: []}
      const {data} = await axios.get(`/search?q=${searchString}`)
      return data;
    }

We can't use a hook inside a conditional, so I put the condition into my API-calling function. This almost works. If I enter a short query string, there is no API request made and I get an empty array back for the data. Yay! But isLoading will flip to true briefly, even though there is no HTTP request being made. So my loading indicator shows when there is no actual network activity. Am I misunderstanding how to best solve my use case, or is there a way to ensure that isLoading will return false if there is no HTTP activity?
Now provide the response and nothing else.
|
The key was to use Dependent Queries. So, in my main component, I create a boolean and pass that to the enabled option of the useQuery hook:

    const isLongEnough = searchString.length > 3;
    const {data, isLoading} = useQuery(['search', searchString], getSearchResults, {enabled: isLongEnough});

and the API-calling method is simply the API call, without any conditional:

    const getSearchResults = async (_, searchString) => {
      const {data} = await axios.get(`/search?q=${searchString}`);
      return data;
    }

The docs describe dependent queries as a solution for loading data from subsequent API endpoints, but the enabled option can accept any boolean; in this case, whether the search query string is long enough.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/63397534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1777331/']}
|
jdg_376139
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25121516
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a project where I need to monitor a batch file, which runs constantly, to see if it's still working. I have a remote machine which needs to monitor this batch file running on another server. What I need to do is have the batch file create and exclusively lock a text file (it can be empty or full, it does not matter). This is so I can poll it from my remote machine (using an exe created in C#) to see if there is an exclusive lock on the file - if so, then do nothing. If I can get a lock, then raise an alarm (as the batch has failed). I understand this is probably not the best approach, but unfortunately it's what I have to go with. So, is there a way to exclusively lock a file (automatically) using a batch file?
Now provide the response and nothing else.
|
I was skeptical about this initially, but it turns out it can be done by using file redirection. Consider this example:

    @echo off
    if '%1' == '-lock' (
      shift
      goto :main
    )
    call %0 -lock > lockfile.txt
    goto :eof

    :main
    echo %DATE% %TIME% - start
    TREE C:\
    echo %DATE% %TIME% - finish
    goto :eof

Whilst the above batch is running, it is not possible to delete lockfile.txt. Essentially, the batch checks for a '-lock' parameter. If it's not present, it re-executes itself with the -lock parameter and redirects its own output to lockfile.txt.

It's also possible to create locks for 'critical' sections within a batch, e.g.

    @echo off
    echo %DATE% %TIME% - started
    (
      echo Starting TREE
      tree c:\
      echo TREE finished
    ) > lock2.lock
    echo %DATE% %TIME% - finished

Sources:
How do you have shared log files under Windows?
http://www.dostips.com/forum/viewtopic.php?p=12454
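The poller in the question will be a C# exe, but the check it has to perform is simple to sketch. Here is a rough Python illustration of that check (the path is hypothetical, and whether a write-open is actually refused depends on the share flags cmd.exe uses for the redirection, so validate this against the real lock file before relying on it):

    import time

    LOCKFILE = r"\\server\share\lockfile.txt"   # hypothetical UNC path to the batch's lock file

    def batch_appears_alive(path):
        # If the redirection still holds the file open, a second open for writing
        # should fail with a sharing violation (surfacing as an OSError).
        try:
            with open(path, "a"):
                pass
        except OSError:
            return True      # could not open it -> lock is held -> batch still running
        return False         # open succeeded -> nothing holds the lock -> raise the alarm

    while True:
        if not batch_appears_alive(LOCKFILE):
            print("ALARM: lock file is not held; the batch appears to have stopped")
        time.sleep(60)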
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25121516', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3266154/']}
|
jdg_376140
|