stackexchange
llm_judgeable_groundtruth_similarity
22618778
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are to solve the recurrence relation through repeated substitution: T(n) = T(n-1) + log n. I started the substitution and got the following: T(n) = T(n-2) + log(n) + log(n-1). By the logarithm product rule, log(mn) = log m + log n, so T(n) = T(n-2) + log[n*(n-1)]. Continuing this, I get T(n) = T(n-k) + log[n*(n-1)*...*(n-k+1)]. We know that the base case is T(1), so n-k = 1 -> k = n-1, and substituting this in we get T(n) = T(1) + log[n*(n-1)*...*2] = T(1) + log[n*(n-1)*...*1]. Clearly n*(n-1)*...*1 = n!, so T(n) = T(1) + log(n!). I do not know how to solve beyond this point. Is the answer simply O(log(n!))? I have read other explanations saying that it is Θ(nlogn), and thus it follows that O(nlogn) and Ω(nlogn) are the upper and lower bounds respectively. Now provide the response and nothing else.
This expands out to log(n!). You can see this because

T(n) = T(n - 1) + log n
     = T(n - 2) + log (n - 1) + log n
     = T(n - 3) + log (n - 2) + log (n - 1) + log n
     = ...
     = T(0) + log 1 + log 2 + ... + log (n - 1) + log n
     = T(0) + log n!

The exact answer depends on what T(0) is, but this is Θ(log n!) for any fixed constant value of T(0). A note - using Stirling's approximation, Θ(log n!) = Θ(n log n). That might help you relate this back to existing complexity classes. Hope this helps!
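The closed form is easy to check numerically. Below is a quick Python sketch (not part of the original answer) that unrolls the recurrence and compares the result with log(n!) and with n log n:

```python
import math

def T(n, T0=0.0):
    """Unroll T(n) = T(n-1) + log n down to the base case T(0) = T0."""
    total = T0
    for k in range(1, n + 1):
        total += math.log(k)
    return total

n = 1000
# The recurrence sums to T(0) + log(n!) exactly; lgamma(n+1) == log(n!).
assert math.isclose(T(n), math.lgamma(n + 1))

# Stirling: log(n!) = n log n - n + O(log n), so the ratio tends to 1.
ratio = T(n) / (n * math.log(n))
print(f"log({n}!) / ({n} log {n}) = {ratio:.3f}")
```

The ratio approaches 1 only slowly (the gap is the -n + O(log n) term from Stirling's approximation), which is why Θ(log n!) and Θ(n log n) coincide even though the raw values differ noticeably for moderate n.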
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22618778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2431040/']}
jdg_374441
stackexchange
llm_judgeable_groundtruth_similarity
5816695
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know that SharedPreferences has putString() , putFloat() , putLong() , putInt() and putBoolean() . But I need to store an object that is of type Serializable in SharedPreferences . How can I achieve this? Now provide the response and nothing else.
In short, you can't. Try serializing your object to a private file instead; it amounts to the same thing. Sample class below:

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

import android.content.Context;

/**
 * Writes/reads an object to/from a private local file.
 */
public class LocalPersistence {

    public static void writeObjectToFile(Context context, Object object, String filename) {
        ObjectOutputStream objectOut = null;
        try {
            FileOutputStream fileOut = context.openFileOutput(filename, Context.MODE_PRIVATE);
            objectOut = new ObjectOutputStream(fileOut);
            objectOut.writeObject(object);
            // Force the bytes to disk before returning
            fileOut.getFD().sync();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (objectOut != null) {
                try {
                    objectOut.close();
                } catch (IOException e) {
                    // ignore
                }
            }
        }
    }

    public static Object readObjectFromFile(Context context, String filename) {
        ObjectInputStream objectIn = null;
        Object object = null;
        try {
            FileInputStream fileIn = context.getApplicationContext().openFileInput(filename);
            objectIn = new ObjectInputStream(fileIn);
            object = objectIn.readObject();
        } catch (FileNotFoundException e) {
            // Nothing saved yet; fall through and return null
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        } finally {
            if (objectIn != null) {
                try {
                    objectIn.close();
                } catch (IOException e) {
                    // ignore
                }
            }
        }
        return object;
    }
}
```
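Language aside, the pattern here is simple: serialize to a private file on write, deserialize (or return null) on read. The same round-trip sketched in Python with pickle, purely for illustration (the function names mirror the Java ones and are not from any Android API):

```python
import os
import pickle
import tempfile

def write_object_to_file(directory, filename, obj):
    """Persist any picklable object to a private file."""
    path = os.path.join(directory, filename)
    with open(path, "wb") as fh:
        pickle.dump(obj, fh)

def read_object_from_file(directory, filename):
    """Load the object back, or return None if nothing was saved yet."""
    path = os.path.join(directory, filename)
    try:
        with open(path, "rb") as fh:
            return pickle.load(fh)
    except FileNotFoundError:
        return None

with tempfile.TemporaryDirectory() as d:
    write_object_to_file(d, "session.bin", {"user": "alice", "score": 42})
    restored = read_object_from_file(d, "session.bin")
    print(restored)  # {'user': 'alice', 'score': 42}
```

As with Java serialization, only use this for trusted, app-private data; deserializing untrusted bytes is unsafe in either language.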
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5816695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/718764/']}
jdg_374442
stackexchange
llm_judgeable_groundtruth_similarity
16041835
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having issues posting an Array to a PHP page using AJAX. I've been using this question as guidance, but for whatever reason I still can't get it to work. From what I can tell by using print_r($_POST), I am posting an empty Array, but on the HTML/Javascript page I use an alert to see that the Array has been filled. The post is working, because it inputs blank values into a MySQL database on post, but I can't figure out why it is passing an empty Array. The code is as follows:

Javascript:

```javascript
<script type="text/javascript">
    var routeID = "testRoute";
    var custID = "testCustID";
    var stopnumber = "teststopnumber";
    var customer = "testCustomer";
    var lat = 10;
    var lng = 20;
    var timeStamp = "00:00:00";

    var dataArray = new Array(7);
    dataArray[0] = "routeID:" + routeID;
    dataArray[1] = "custID:" + custID;
    dataArray[2] = "stopnumber:" + stopnumber;
    dataArray[3] = "customer:" + customer;
    dataArray[4] = "latitude:" + lat;
    dataArray[5] = "longitude:" + lng;
    dataArray[6] = "timestamp:" + timeStamp;

    var jsonString = JSON.stringify(dataArray);

    function postData() {
        $.ajax({
            type: "POST",
            url: "AddtoDatabase.php", //includes full webserver url
            data: {data: jsonString},
            cache: false,
            success: function() {
                alert("OK");
            }
        });
        window.location = "AddtoDatabase.php"; //includes full webserver url
    }

    alert(JSON.stringify(dataArray));
</script>
```

PHP:

```php
<?php
print_r($_POST);

$routeID = $_POST['routeID'];
$custID = $_POST['custID'];
$stopnumber = $_POST['stopnumber'];
$customer = $_POST['customer'];
$latitude = $_POST['latitude'];
$longitude = $_POST['longitude'];
$timestamp = $_POST['timestamp'];

$mysqli = new mysqli("fdb5.biz.nf", "username", "password", "database");
mysqli_select_db($mysqli, "database");

$sql = "INSERT INTO Locations (routeID, custID, stopnumber, customer, latitude, longitude, timestamp) VALUES " .
       "('$routeID','$custID','$stopnumber','$customer','$latitude','$longitude','$timestamp')";
mysqli_query($mysqli, $sql);
$error = mysqli_error($mysqli);
echo $error;
?>
```

print_r($_POST) only displays Array() on the PHP page, while the jsonString alert on the Javascript page shows ["routeID:testRoute","custID:testCustID","stopnumber:teststopnumber","customer:testCustomer","latitude:10","longitude:20","timestamp:00:00:00"]. Anyone see what I'm doing wrong? Now provide the response and nothing else.
Note: the main cause for your code to output `Array()` is the fact that you're redirecting the client before the asynchronous (AJAX) request has been sent/processed. Basically, move `window.location = "AddtoDatabase.php";` to the success callback, as mentioned further down.

First problem: instead of using an array, you should use an object literal (~= assoc array in PHP). To do so, change this bit:

```javascript
var dataArray = new Array(7); //<== NEVER do this again, btw
dataArray[0] = "routeID:" + routeID;
dataArray[1] = "custID:" + custID;
dataArray[2] = "stopnumber:" + stopnumber;
dataArray[3] = "customer:" + customer;
dataArray[4] = "latitude:" + lat;
dataArray[5] = "longitude:" + lng;
dataArray[6] = "timestamp:" + timeStamp;
```

And write this, instead:

```javascript
var dataObject = {
    routeID: routeID,
    custID: custID,
    stopnumber: stopnumber,
    customer: customer,
    latitude: lat,
    longitude: lng,
    timestamp: timeStamp
};
```

There's nothing more to it. To finish off, just send the data like so:

```javascript
function postData() {
    $.ajax({
        type: "POST",
        url: "AddtoDatabase.php",
        data: dataObject, // no need to call JSON.stringify etc... jQuery does this for you
        cache: false,
        success: function(response) {
            // check the response: it's always good to check server output when developing...
            console.log(response);
            alert('You will redirect in 10 seconds');
            setTimeout(function() { // the timeout gives you some time to check the console
                window.location = 'AddtoDatabase.php';
            }, 10000);
        }
    });
}
```

Secondly, your postData function redirects the client before the AJAX request has been sent! After the call to `$.ajax`, you have a `window.location = "AddtoDatabase.php";` statement in your code. If you want the client to be redirected after the AJAX call, you will have to move that expression to your success callback function (the one where I log the response) in the second snippet above. When you've changed all this, your `$_POST` variable should look about right. If not, print out the `$_REQUEST` object and see what the response of an AJAX call is then.

Lastly, please be aware that even when you use an API that supports prepared statements (and thus protects you against most injection attacks), stringing unchecked POST/GET data into a query is no safer than it used to be. Bottom line: when you use an API that supports critical safety features such as prepared statements, use those features. Just to be absolutely clear, and complete, here's a slightly reworked version of the PHP code, too:

```php
$routeID = $_POST['routeID'];
$custID = $_POST['custID'];
$stopnumber = $_POST['stopnumber'];
$customer = $_POST['customer'];
$latitude = $_POST['latitude'];
$longitude = $_POST['longitude'];
$timestamp = $_POST['timestamp'];

//you're connecting OO-style, why do you switch to procedural next?
//choose one, don't mix them, that makes for fugly code:
$mysqli = mysqli_connect('fdb5.biz.nf', 'username', 'password', 'database'); //procedural
//or, more in tune with the times:
$mysqli = new mysqli("fdb5.biz.nf", "username", "password", "database"); //OO

mysqli_select_db($mysqli, "database");
//or
$mysqli->select_db('database');
```

Check the docs to see the procedural counterpart of all methods I'll be using from here on end, if you want. I prefer the OOP API.

```php
//making a prepared statement:
$query = 'INSERT INTO Locations (routeID, custID, stopnumber, customer, latitude, longitude, timestamp) VALUES (?,?,?,?,?,?,?)';
if (!($stmt = $mysqli->prepare($query)))
{
    echo $query.' failed to prepare';
    exit();
}
//bind all seven values in one call: s = string, d = double
$stmt->bind_param('ssssdds', $routeID, $custID, $stopnumber, $customer, $latitude, $longitude, $timestamp);
$stmt->execute(); //query the DB
```

Useful links on prepared statements:

- mysqli::prepare doc page
- mysqli_stmt::bind_result doc page is invaluable when it comes to fetching data
- quick tutorial 1
- Q&A-styled tutorial 2
- Just in case: a PDO tutorial, too
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16041835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1949479/']}
jdg_374443
stackexchange
llm_judgeable_groundtruth_similarity
88377
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The following question is part (1/4) of a 2.5-hour written exam for the course "Probability and Statistics" in a school of engineering. So, although tricky and difficult (because the Professor is really demanding of his students), it should be solvable in a reasonable amount of time and with a reasonable amount of calculations.

Let $X_1, \ldots, X_n$ be a random sample (i.i.d. r.v.) from the exponential distribution $\exp(\lambda)$, where $\lambda$ is unknown. Let $M_n=\max\{X_1, \ldots, X_n\}$ with cumulative distribution function $$G(x)=(1-e^{-\lambda x})^{n}, \qquad x>0$$ and zero elsewhere.

Q1. Find the probability density function of $M_n$.

Q2. If $M_n$ is the only information that you have for $X_1,X_2,\ldots,X_n$, find the maximum likelihood estimator (MLE) $\hat{\lambda}_n$ of $\lambda$.

Q3. Using $(1+x)^n>1+nx$ (or any other way) prove that $\hat{\lambda}_n$ is consistent, i.e. that $P(| \hat{\lambda}_n-\lambda|>\epsilon)\longrightarrow 0$, for $n\rightarrow \infty$.

For Q1, I took the derivative of the cdf of $M_n$, which I found to be equal to $$g(x)=n\lambda e^{-\lambda x}(1-e^{-\lambda x})^{n-1}$$ (double-checked with Wolfram|Alpha).

For Q2, I thought that the function I should maximize (with respect to $\lambda$) is $g(x)$, because that is my single observation from the sample of size $n$. If I understand the exercise correctly, someone takes a sample of $n$ observations $X_1,X_2,\ldots X_n$ and tells me only their maximum $M_n$. Now, from this single piece of information I have to calculate an MLE for $\lambda$. So, I will maximize the pdf of $M_n$, which is now my likelihood function, no? Or is my mistake here?
However, if I take $$L(x;\lambda)=g(x)$$ and $$l(x;\lambda)=\ln\left(L(x;\lambda)\right)=\ln\left(g(x)\right)=\ln(n)+\ln(\lambda)-\lambda x+(n-1)\ln(1-e^{-\lambda x})$$ then, as usual, I calculate the derivative of $l(x;\lambda)$ and set it equal to $0$: $$\frac{d}{d\lambda}l(x;\lambda)=\frac{1}{\lambda}-x+(n-1)\frac{xe^{-\lambda x}}{1-e^{-\lambda x}}=0$$ which reduces to $$e^t=\frac{1-nt}{1-t}$$ where $t=\lambda x$. But I cannot solve this equation (called transcendental, as someone told me). Now provide the response and nothing else.
Since you are a tutor, any knowledge is always for a good cause. So I will provide some bounds for the MLE. We have arrived at $$(1-\lambda x_{(n)})e^{\lambda x_{(n)} } + \lambda n x_{(n)} - 1 = 0$$with $x_{(n)}\equiv M_n$. So $$(1-\hat \lambda x_{(n)})e^{\hat \lambda x_{(n)}} = 1-\hat \lambda x_{(n)}n $$Assume first that $1-\hat \lambda x_{(n)} >0$. Then we must also have $1-\hat \lambda x_{(n)}n>0$ since the exponential is always positive. Moreover since $x_{(n)}, \hat \lambda > 0\Rightarrow e^{\hat \lambda x_{(n)}}>1$. Therefore we should have $$\frac {1-\hat \lambda x_{(n)}n}{1-\hat \lambda x_{(n)}}>1 \Rightarrow \hat \lambda x_{(n)}>\hat \lambda x_{(n)}n$$which is impossible. Therefore we conclude that $$\hat \lambda >\frac 1{x_{(n)}},\;\; \hat \lambda = \frac c{x_{(n)}}, \;\; c>1$$ Inserting into the log-likelihood we get $$\ell(\hat\lambda(c)\mid x_{(n)}) = \log \frac c{x_{(n)}} + \log n - \frac c{x_{(n)}} x_{(n)} + (n-1) \log (1 - e^{-\frac c{x_{(n)}} x_{(n)}})$$ $$= \log \frac n{x_{(n)}} + \log c - c + (n-1) \log (1 - e^{-c})$$ We want to maximize this likelihood with respect to $c$. Its 1st derivative is $$\frac{d\ell}{dc}=\frac 1c -1 +(n-1)\frac 1{e^{c}-1}$$ Setting this equal to zero, we require that $$e^{c}-1 - c\left(e^{c}-1\right)+(n-1)c =0$$ $$\Rightarrow \left(n-e^c\right)c = 1-e^c$$ Since $c>1$ the RHS is negative. Therefore we must also have $n-e^c <0 \Rightarrow c > \ln n$. 
For $n\ge 3$ this provides a tighter lower bound for the MLE, but it doesn't cover the $n=2$ case, so $$\hat \lambda > \max \left\{\frac 1{x_{(n)}}, \frac {\ln n}{x_{(n)}}\right\}$$ Moreover (for $n\ge 3$) rearranging the 1st-order condition we have that $$c= \frac{e^c-1}{e^c-n} > \ln n \Rightarrow e^c -1 > e^c\ln n -n\ln n $$ $$\Rightarrow n\ln n-1>e^c(\ln n -1) \Rightarrow c< \ln{\left[\frac{n\ln n-1}{\ln n -1}\right]}$$So for $n\ge 3$ we have that $$\frac 1{x_{(n)}}\ln n < \hat \lambda < \frac 1{x_{(n)}}\ln{\left[\frac{n\ln n-1}{\ln n -1}\right]}$$ This is a narrow interval, especially if $x_{(n)}\ge 1$. For example (truncated at 3d digit ) $$\begin{align}n=10 & &\frac 1{x_{(n)}}2.302 < \hat \lambda < \frac 1{x_{(n)}}2.827\\n=100 & & \frac 1{x_{(n)}}4.605 < \hat \lambda < \frac 1{x_{(n)}}4.847\\n=1000 & & \frac 1{x_{(n)}}6.907 < \hat \lambda < \frac 1{x_{(n)}}7.063\\n=10000 & & \frac 1{x_{(n)}}9.210< \hat \lambda < \frac 1{x_{(n)}}9.325\\\end{align}$$ Numerical examples indicate that the MLE tends to be equal to the upper bound, up to second decimal digit. ADDENDUM: A CLOSED FORM EXPRESSION This is just an approximate solution (it only approximately maximizes the likelihood), but here it is: manipulating the 1st-order condition we want to have $$\lambda = \frac 1{x_{(n)}}\ln \left[\frac {\lambda x_{(n)}n -1}{\lambda x_{(n)} -1}\right]$$ Now, one can show (see for example here ) that $$E[X_{(n)}] = \frac {H_n}{\lambda},\;\; H_n = \sum_{k=1}^n\frac 1k$$ Solving for $\lambda$ and inserting into the RHS of the implicit 1st-order condition, we obtain $$\lambda = \frac 1{x_{(n)}}\ln \left[\frac {nH_n\frac {x_{(n)}}{E[X_{(n)}]} -1}{ H_n\frac {x_{(n)}}{E[X_{(n)}]} -1}\right]$$ We want an estimate of $\lambda$, given that $X_{(n)}=x_{(n)}$, $\hat \lambda \mid \{X_{(n)}=x_{(n)}\}$. But in such a case, we also have $E[X_{(n)}\mid \{X_{(n)}=x_{(n)}\}] =x_{(n)}$. 
This simplifies the expression, and we obtain $$\hat \lambda = \frac 1{x_{(n)}}\ln \left[\frac {nH_n -1}{ H_n -1}\right]$$ One can verify that this closed-form expression stays close to the upper bound derived previously, but a bit below the actual (numerically obtained) MLE.
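None of this is from the original answer, but the bounds and the closed form above are easy to sanity-check numerically. A Python sketch that solves the first-order condition $c(n - e^c) = 1 - e^c$ by bisection between the two bounds derived above:

```python
import math

def mle_c(n, iters=100):
    """Solve f(c) = c*(n - exp(c)) - (1 - exp(c)) = 0 by bisection,
    starting from the derived bounds ln n < c < ln[(n ln n - 1)/(ln n - 1)]."""
    f = lambda c: c * (n - math.exp(c)) - (1 - math.exp(c))
    lo = math.log(n)
    hi = math.log((n * math.log(n) - 1) / (math.log(n) - 1))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def closed_form_c(n):
    """The approximate closed form: ln[(n*H_n - 1)/(H_n - 1)]."""
    H = sum(1.0 / k for k in range(1, n + 1))
    return math.log((n * H - 1) / (H - 1))

for n in (10, 100, 1000):
    print(n, round(mle_c(n), 3), round(closed_form_c(n), 3))
```

For $n = 10$ this gives $c \approx 2.72$, inside the interval $(2.302,\, 2.827)$ from the table, and slightly above the closed-form value $\ln[(nH_n-1)/(H_n-1)] \approx 2.69$, matching the remark that the closed form sits a bit below the numerical MLE.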
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/88377', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/41164/']}
jdg_374444
stackexchange
llm_judgeable_groundtruth_similarity
13984871
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My scenario: Activity 1 consists of Fragments A -> B -> C. All the fragments are added using this code:

```java
FragmentManager fm = getSupportFragmentManager();
FragmentTransaction ft = fm.beginTransaction();
ft.replace(R.id.content, fragment, TAG);
ft.addToBackStack(TAG);
ft.commit();
```

Now, from fragment C, I want to directly return to Fragment A. Therefore, I've commented out `ft.addToBackStack(TAG)` while adding Fragment C. So when I press the back button from C, I directly get Fragment A on the screen. However, Fragment C is not replaced by A. In fact, both fragments are visible. How do I solve this issue? Now provide the response and nothing else.
You need to do 2 things: name the FragmentTransaction from A->B, and then override onBackPressed() in your containing activity to call FragmentManager#popBackStack(String name, int flags) when you are on Fragment C. Example:

Transition from A->B:

```java
getSupportFragmentManager()
        .beginTransaction()
        .replace(R.id.container, new FragmentB(), "FragmentB")
        .addToBackStack("A_B_TAG")
        .commit();
```

Transition from B->C will use a similar transaction, with "FragmentC" as its tag. Then, in your containing Activity, override onBackPressed():

```java
@Override
public void onBackPressed() {
    if (getSupportFragmentManager().findFragmentByTag("FragmentC") != null) {
        // I'm viewing Fragment C
        getSupportFragmentManager().popBackStack("A_B_TAG",
                FragmentManager.POP_BACK_STACK_INCLUSIVE);
    } else {
        super.onBackPressed();
    }
}
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13984871', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1117531/']}
jdg_374445
stackexchange
llm_judgeable_groundtruth_similarity
122725
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I believe that the following questions are very basic, but I don't know how to get a reference. Consider a curve in the plane $C\in \mathbb C^2$ with a singularity at $0$ and suppose it is unibranch at zero (i.e. analytically irreducible). Then I guess one should be able to define "arithmetic genus defect" of the curve at $0$. Namely if one smooths analytically $C$, its geometric genus will grow by a positive number (in case of the cusp $x^2=y^3$ it will grow by one), and let us call this number the defect. Question 1. Is this defect well defined (independent of a smoothing)? How is it called and how one should calculate it (say it terms of the local ring of $C$ at $0$)? Question 2. Suppose we have an explicit local parametrisation of $C$ at $0$, say by two holomorphic functions $f(t), g(t)$ (polynomials if you wish). Is it possible to find this "defect" as a certain invariant of this pair of functions at $t=0$? Question 1 is settled in the answer of unknown and Question 2 in comments to it by Roy and Vivek Now provide the response and nothing else.
The difference between the geometric genus of the singularity and the geometric genus of a smoothing (the latter being called the arithmetic genus of the singularity) is often called the delta invariant. If $A$ is the local ring of the singularity and $B$ its normalization, then the delta invariant is the dimension of the complex vector space $B/A$. It is rather easy to compute the delta invariant if one knows an equation $f(x,y)=0$ of the curve, by a formula due to Milnor (see his book "Singular Points of Complex Hypersurfaces"): $2 \delta = \mu + b - 1$, where $\delta$ is the delta invariant, $\mu = \dim_{\mathbb{C}} \mathbb{C}[[x,y]]/(\partial_{x}f, \partial_{y}f )$ is the Milnor number, and $b$ is the number of branches. In the unibranch case, it is simply $2 \delta = \mu$ (example: for the cusp, $\delta = 1$, $\mu = 2$).
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/122725', 'https://mathoverflow.net', 'https://mathoverflow.net/users/13441/']}
jdg_374446
stackexchange
llm_judgeable_groundtruth_similarity
193843
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I use Tor Browser on Whonix. Why I see IPv6 (something like 3221:23f9:c:67h:0:0:0:3 ) on whatismyip.com? Is that website can see my MAC Address? Now provide the response and nothing else.
As indicated on the IPv6 roadmap of the Tor project , tor has had support for exit nodes contacting destinations over IPv6 since 2.4.8-alpha. The closest stable release I could find that followed is 2.4.19 from the end of 2013. I don't know exactly which nodes need to support it for a connection to be established over IPv6, but by now, every node in your chain should have at least that version. So all you're seeing is your exit node contacting whatismyip.com over IPv6, and whatismyip answering with the exit node's IPv6 address (the address format seems to confirm this is a manually chosen address as it ends in 0:0:0:3). If you still have the exact IP, you can check it was indeed an exit node on that day using exonerator . By the way, IPv6 implementations do not necessarily leak the MAC address by default thanks to RFC4941 , which, in addition to a MAC address based IP, assigns a randomly chosen temporary address on every interface, which changes regularly and is used by default for outgoing connections. On Linux this can be controlled using net.ipv6.conf.all.use_tempaddr net.ipv6.conf.default.use_tempaddr . Many distributions don't enable it by default, saying it can cause problems for unaware users, and I'm having trouble finding out what the default value is for whonix, but as a privacy oriented distro, I would hope it is activated... Of course, manually assigned IPv6 addresses also don't have to leak the Mac address.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/193843', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/186195/']}
jdg_374447
stackexchange
llm_judgeable_groundtruth_similarity
57129669
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a text editor app that has multiple pages in Xamarin.forms but when trying to build or run it I get this error: Input string was not in a correct format.   at System.Number.StringToNumber(String str, NumberStyles options, NumberBuffer& number, NumberFormatInfo info, Boolean parseDecimal)   at System.Number.ParseInt32(String s, NumberStyles style, NumberFormatInfo info)   at Xamarin.Android.Tasks.ManifestDocument.CalculateVersionCode(String currentAbi, String versionCodePattern, String versionCodeProperties)   at Xamarin.Android.Tasks.Aapt.GenerateCommandLineCommands(String ManifestFile, String currentAbi, String currentResourceOutputFile)   at Xamarin.Android.Tasks.Aapt.ProcessManifest(ITaskItem manifestFile)   at System.Threading.Tasks.Parallel.<>c__DisplayClass30_0`2.<ForEachWorker>b__0(Int32 i)   at System.Threading.Tasks.Parallel.<>c__DisplayClass17_0`1.<ForWorker>b__1()   at System.Threading.Tasks.Task.InnerInvoke()   at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)   at System.Threading.Tasks.Task.<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object ) It does not give me the location of the error,it just tells me that it is in the solution and I'm not entirely sure how i could fix the error, or even find where it is. I made the app in Xamarin.forms and is a text editor with an interface allowing you to save and delete your pieces of code. And help with finding the error or even better solving it would be really appreciated. Sorry for the lack of included code as there is a lot, and I don't actually know the location of the error so I wouldn't know what to show. Thanks! Now provide the response and nothing else.
To support state restoration in iOS 13 you will need to encode enough state into the NSUserActivity : Use this method to return an NSUserActivity object with information about your scene's data. Save enough information to be able to retrieve that data again after UIKit disconnects and then reconnects the scene. User activity objects are meant for recording what the user was doing, so you don't need to save the state of your scene's UI The advantage of this approach is that it can make it easier to support handoff, since you are creating the code necessary to persist and restore state via user activities. Unlike the previous state restoration approach where iOS recreated the view controller hierarchy for you, you are responsible for creating the view hierarchy for your scene in the scene delegate. If you have multiple active scenes then your delegate will be called multiple times to save the state and multiple times to restore state; Nothing special is needed. The changes I made to your code are: AppDelegate.swift Disable "legacy" state restoration on iOS 13 & later: func application(_ application: UIApplication, viewControllerWithRestorationIdentifierPath identifierComponents: [String], coder: NSCoder) -> UIViewController? 
{ if #available(iOS 13, *) { } else { print("AppDelegate viewControllerWithRestorationIdentifierPath") // If this is for the nav controller, restore it and set it as the window's root if identifierComponents.first == "RootNC" { let nc = UINavigationController() nc.restorationIdentifier = "RootNC" self.window?.rootViewController = nc return nc } } return nil}func application(_ application: UIApplication, willEncodeRestorableStateWith coder: NSCoder) { print("AppDelegate willEncodeRestorableStateWith") if #available(iOS 13, *) { } else { // Trigger saving of the root view controller coder.encode(self.window?.rootViewController, forKey: "root") }}func application(_ application: UIApplication, didDecodeRestorableStateWith coder: NSCoder) { print("AppDelegate didDecodeRestorableStateWith")}func application(_ application: UIApplication, shouldSaveApplicationState coder: NSCoder) -> Bool { print("AppDelegate shouldSaveApplicationState") if #available(iOS 13, *) { return false } else { return true }}func application(_ application: UIApplication, shouldRestoreApplicationState coder: NSCoder) -> Bool { print("AppDelegate shouldRestoreApplicationState") if #available(iOS 13, *) { return false } else { return true }} SceneDelegate.swift Create a user activity when required and use it to recreate the view controller. Note that you are responsible for creating the view hierarchy in both normal and restore cases. func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) { print("SceneDelegate willConnectTo") guard let winScene = (scene as? UIWindowScene) else { return } // Got some of this from WWDC2109 video 258 window = UIWindow(windowScene: winScene) let vc = ViewController() if let activity = connectionOptions.userActivities.first ?? 
```swift
func stateRestorationActivity(for scene: UIScene) -> NSUserActivity? {
    print("SceneDelegate stateRestorationActivity")
    if let nc = self.window?.rootViewController as? UINavigationController,
       let vc = nc.viewControllers.first as? ViewController {
        return vc.continuationActivity
    } else {
        return nil
    }
}
```

ViewController.swift

Add support for saving and loading from an NSUserActivity:

```swift
var continuationActivity: NSUserActivity {
    let activity = NSUserActivity(activityType: "restoration")
    activity.persistentIdentifier = UUID().uuidString
    activity.addUserInfoEntries(from: ["Count": self.count])
    return activity
}

func continueFrom(activity: NSUserActivity) {
    let count = activity.userInfo?["Count"] as? Int ?? 0
    self.count = count
}
```
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57129669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11813614/']}
jdg_374448
stackexchange
llm_judgeable_groundtruth_similarity
20904191
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I find(in mm) the width and the height of a pdf page using PDFBox? Currently, I'm using this: System.out.println(page.getMediaBox().getHeight());System.out.println(page.getMediaBox().getWidth()); but the result is(not in mm): 842.0595.22 Now provide the response and nothing else.
Measurement units inside a PDF are in points, a traditional graphic-industry measurement unit. Adobe uses the following definition: 1 pt = 1/72 inch. And since one inch is defined to be exactly 25.4 mm (really!), you can convert from points to mm using the formula mm = pt * 25.4 / 72. Your values, by the way, translate (loosely) to the A4 paper dimensions, 210 x 297 mm. ("Loosely", for two reasons. First: A-series dimensions are derived from 1 square meter in the metric system, while points (according to Adobe's usage) are based in the imperial system; therefore, all conversions between points and millimeters are approximations. Second: the given value in mm for A4 is rounded as well. A-series relative and absolute sizes are based on an irrational number.)

Footnote: inside an object stream, units of measurement can be scaled to something else. The above is only true for top-level base objects.
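A quick way to see the arithmetic (just a sketch, independent of PDFBox):

```python
MM_PER_INCH = 25.4      # exact, by definition
POINTS_PER_INCH = 72.0  # PDF/PostScript points

def pt_to_mm(pt):
    """Convert PDF points to millimeters: mm = pt * 25.4 / 72."""
    return pt * MM_PER_INCH / POINTS_PER_INCH

# The MediaBox values from the question map (approximately) to A4:
print(round(pt_to_mm(595.22), 2))  # width  -> ~210 mm
print(round(pt_to_mm(842.0), 2))   # height -> ~297 mm
```

The same one-liner works in Java after calling `page.getMediaBox().getWidth()`; only the constant matters.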
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20904191', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2013736/']}
jdg_374449
stackexchange
llm_judgeable_groundtruth_similarity
28656780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm coding the local login in NodeJS following this tutorial: https://scotch.io/tutorials/easy-node-authentication-setup-and-local In the file config/passport.js function(req, email, password, done){ process.nextTick(function(){ User.findOne({'local.email' : email}, function(err, user){ if(err) return done(err); if (user){ return done(null, false, req.flash('signupMessage', 'message')); } I'm rookie in NodeJS and Javascript, and I don't understand how a value like "done" can be a function (return done(err)). Is any system function? Thanks a lot! Now provide the response and nothing else.
done is a callback that you need to call once you are done with your work. As you can see it is given in the first line of your code: function(req, email, password, done){ This means that besides the incoming request you get the user-specified email and password . Now you need to do whatever you need to do to verify the login. Somehow you need to tell Passport whether you succeeded or not. Normally, you may use a return value for this, but in this case the Passport author thought about the option that your check may be asynchronous, hence using a return value would not work. This is why a callback is being used. Most often callbacks are being called callback , but this is just for convenience, there is no technical reason to do so. In this case, since the callback is being used for showing that you are done , the Passport author suggested to call it done . Now you can either call done with an error if credential validation failed, or with the appropriate parameters to show that it succeeded. This works because functions are so-called first-class citizens in JavaScript, i.e. there is no actual difference between code and data: In JavaScript you can pass functions around as parameters and return values as you can with data. And that's it :-)
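Since functions are first-class values in JavaScript (and in Python, for that matter), the pattern is easy to sketch outside of Passport. The following is purely illustrative; the names (`verify`, `find_user`) are made up and are not part of Passport's API:

```python
def find_user(email):
    # Stand-in for User.findOne: a tiny fake "database" lookup.
    users = {"ada@example.com": {"email": "ada@example.com", "password": "s3cret"}}
    return users.get(email)

def verify(email, password, done):
    """`done` is just a function passed in as an argument.
    Call it with (error, user) to report the outcome, Node-style."""
    user = find_user(email)
    if user is None or user["password"] != password:
        return done(None, False)  # no error occurred, but authentication failed
    return done(None, user)       # success: hand the user back to the caller

results = []
verify("ada@example.com", "s3cret", lambda err, user: results.append((err, user)))
verify("ada@example.com", "wrong", lambda err, user: results.append((err, user)))
```

The caller decides what `done` does, which is exactly why Passport can plug your verify logic into its own flow: it supplies the `done` function and reacts to however you call it.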
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28656780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4229896/']}
jdg_374450
stackexchange
llm_judgeable_groundtruth_similarity
303945
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was viewing Stack Overflow for questions. Then I saw one error notification in the Firefox debug tool. I further inspected and found this: This site makes use of an SHA-1 Certificate; it's recommended you use certificates with signature algorithms that use hash functions stronger than SHA-1. Why are they not using some better algorithm? Now provide the response and nothing else.
So it looks like SO doesn't run its own certificate but uses Cloudflare's Universal SSL . What surprises me is that they have any SHA1 certificates https://www.ssllabs.com/ssltest/analyze.html?d=stackoverflow.com&s=104.16.104.85&hideResults=on I ran the same test against a website I have behind Cloudflare and it came back with a SHA256 certificate. So this server just appears to have not yet been updated yet. From the Cloudflare blog Sites that have tried to upgrade to SHA-2 have seen a backlash due to browser incompatibility. In July, mozilla.org upgraded their site to use a SHA-2 certificate. In doing so they lost around 145,000 Firefox downloads per week due to browser incompatibility. Even google.com (as of November 10, 2014) continues to use SHA-1 for compatibility reasons, despite the company’s push to deprecate SHA-1 in Chrome. To support both Chrome and Windows XP SP2 it’s necessary to use a SHA-1 certificate that expires before 2016. This is the option we have chosen for CloudFlare-managed certificates.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/303945', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/3626698/']}
jdg_374451
stackexchange
llm_judgeable_groundtruth_similarity
24494784
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Getting the classname of an object as String using: object_getClassName(myViewController) returns something like this: _TtC5AppName22CalendarViewController I am looking for the pure version: "CalendarViewController" . How do I get a cleaned up string of the class name instead? I found some attempts of questions about this but not an actual answer. Is it not possible at all? Now provide the response and nothing else.
String from an instance : String(describing: self) String from a type : String(describing: YourType.self) Example:

struct Foo {
    // Instance Level
    var typeName: String {
        return String(describing: Foo.self)
    }

    // Instance Level - Alternative Way
    var otherTypeName: String {
        let thisType = type(of: self)
        return String(describing: thisType)
    }

    // Type Level
    static var typeName: String {
        return String(describing: self)
    }
}

Foo().typeName      // = "Foo"
Foo().otherTypeName // = "Foo"
Foo.typeName        // = "Foo"

Tested with class , struct and enum .
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/24494784', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/417096/']}
jdg_374452
stackexchange
llm_judgeable_groundtruth_similarity
264448
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $M$ be a smooth manifold (with boundary). Suppose I have a smooth vector field $T$ defined on the complement of a compact subset $K$ of $M$ and I wish to extend $T$ to the whole of $M$. What are the obstructions to doing so? Does $M$ need to be compact? Does $K$ need to be a submanifold? I have a feeling that an argument involving a partition of unity of $K$ would suffice for extending $T$ into $K$, but have not been able to find a decent reference. Now provide the response and nothing else.
You can construct antichains of size $\frak c$ without using choice, you can even have them to be antichains in a stronger sense of the word: i.e. every two have a finite intersection. The one thing you'd want choice for this is to make sure your antichains are maximal. So we can't do that. But do we need that? No. Once you have one antichain of size $\frak c$, you have $2^\frak c$ of them automatically. To see how you'd go about doing that, enumerate the rational numbers as $q_n$, then for every real number $r$ construct a rational sequence converging to it: $r_n$, the $n$th element in the sequence approaching $r$, would be the least indexed rational in the enumeration whose distance from $r$ is at most $\frac1n$. Now look at $A_r=\{n\in\omega\mid\exists k: q_n=r_k\}$. Then $\{A_r\mid r\in\Bbb R\}$ would an antichain as wanted. (Note that we can easily get $2^\frak c$ distinct antichains by partitioning the reals into $\omega$ intervals, and then replacing $\frac1n$ in the definition of $r_n$, by $1/f(k,n)$ when $r$ is in the $k$th interval, for any function $f\colon\omega\times\omega\to\omega$, satisfying that $\lim_{n\to\infty}f(k,n)=\infty$ for every $k$.)
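The construction above can be sanity-checked numerically. Below is a small sketch (my own illustration, not part of the original answer): the enumeration of the rationals and the two target values are arbitrary choices, and since a computer can only handle a finite truncation, the targets are rational stand-ins for irrationals and only finitely many indices of each $A_r$ are computed. For genuinely irrational $r$ the sets are infinite with pairwise finite intersections.

```python
from fractions import Fraction

# q_0, q_1, ...: a concrete enumeration of (some) rationals in [0, 2]
rationals = [Fraction(p, q) for q in range(1, 60) for p in range(0, 2 * q + 1)]

def index_set(r, depth):
    """A_r: index of the least-indexed rational within 1/n of r, for n = 1..depth."""
    idxs = set()
    for n in range(1, depth + 1):
        for i, q in enumerate(rationals):
            if abs(q - r) <= Fraction(1, n):
                idxs.add(i)  # least-indexed rational within 1/n of r
                break
    return idxs

# Two distinct targets (rational approximations of sqrt(2) and sqrt(3))
A1 = index_set(Fraction(7071, 5000), 40)
A2 = index_set(Fraction(433, 250), 40)
```

In the truncation, both sets pick the same indices while 1/n is large (the shared "prefix"), then diverge once 1/n drops below half the distance between the targets.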
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/264448', 'https://mathoverflow.net', 'https://mathoverflow.net/users/104213/']}
jdg_374453
stackexchange
llm_judgeable_groundtruth_similarity
581440
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a procedure to return a result set which is limited by page number and some other stuff. As an OUTPUT parameter I need to return a total amount of selected rows according to the parameters except the page number. So I have something like that:

WITH SelectedItems AS
(
    SELECT Id, Row1, Row2,
           ROW_NUMBER() OVER (ORDER BY Row1) AS Position
    FROM Items
    WHERE Row2 = @Row2
)
SELECT Id, Row1, Row2
FROM SelectedItems
WHERE Position BETWEEN @From AND @To

And then I need to set the OUTPUT parameter to the number of rows in the innerquery. I can just copy the query and count it, but this query could returns thousands of rows (and will be more in the future), so I am looking for method to do that with a good performance. I was thinking about table variables, is it a good idea? Or any other suggestions? To be more specific, it's the Microsoft SQL Server 2008. Thank you, Jan Now provide the response and nothing else.
You can count the total rows as a separate column in your main query using COUNT(*). Like this:

WITH SelectedItems AS
(
    SELECT Id, Row1, Row2,
           ROW_NUMBER() OVER (ORDER BY Row1) AS Position,
           COUNT(*) OVER () AS TotalRows
    FROM Items
    WHERE Row2 = @Row2
)
SELECT Id, Row1, Row2
FROM SelectedItems
WHERE Position BETWEEN @From AND @To

This will return the count in your result set rather than in an output parameter, but that should fit your requirements. Otherwise, combine with a temp table:

DECLARE @tmp TABLE (Id int, RowNum int, TotalRows int);

WITH SelectedItems AS
(
    SELECT Id, Row1, Row2,
           ROW_NUMBER() OVER (ORDER BY Row1) AS Position,
           COUNT(*) OVER () AS TotalRows
    FROM Items
    WHERE Row2 = @Row2
)
INSERT @tmp
SELECT Id, Row1, Row2
FROM SelectedItems
WHERE Position BETWEEN @From AND @To

SELECT TOP 1 @TotalRows = TotalRows FROM @tmp
SELECT * FROM @tmp

You will find using a temp table for just your paged result will not use much memory (depending on your page size of course) and you're only keeping it live for a short period of time. Selecting the full result set from the temp table and selecting the TotalRows will only take a tiny bit longer. This will be much faster than running a totally separate query, which in my test (repeating the WITH) doubled the execution time.
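The same pattern — one query returning both the requested page and the total row count via COUNT(*) OVER () — can be tried out with SQLite's window functions from the Python stdlib. This is a sketch for illustration only (the table and data are made up, and SQLite ≥ 3.25 is assumed for window-function support):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Items (Id INTEGER, Row1 INTEGER, Row2 INTEGER)")
# 25 matching rows; Row1 determines the paging order
conn.executemany("INSERT INTO Items VALUES (?, ?, ?)",
                 [(i, i * 10, 1) for i in range(1, 26)])

page = conn.execute("""
    WITH SelectedItems AS (
        SELECT Id, Row1, Row2,
               ROW_NUMBER() OVER (ORDER BY Row1) AS Position,
               COUNT(*) OVER () AS TotalRows
        FROM Items
        WHERE Row2 = ?
    )
    SELECT Id, Row1, TotalRows
    FROM SelectedItems
    WHERE Position BETWEEN ? AND ?
    ORDER BY Position
""", (1, 11, 20)).fetchall()
```

Each row of the page carries the full filtered count, so no second query is needed.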
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/581440', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_374454
stackexchange
llm_judgeable_groundtruth_similarity
22566284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to read a list of images into Python/Matplotlib and then plot this images instead of other markers (like points) in a graph. I have tried with imshow but I didn't succeed, because I cannot shift the image to another position and scale it appropriately. Maybe somebody has a good idea : ) Now provide the response and nothing else.
There are two ways to do this. Plot the image using imshow with the extent kwarg set based on the location you want the image at. Use an OffsetImage inside an AnnotationBbox . The first way is the easiest to understand, but the second has a large advantage. The annotation box approach will allow the image to stay at a constant size as you zoom in. Using imshow will tie the size of the image to the data coordinates of the plot. Here's an example of the second option:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
from matplotlib.cbook import get_sample_data

def main():
    x = np.linspace(0, 10, 20)
    y = np.cos(x)
    image_path = get_sample_data('ada.png')
    fig, ax = plt.subplots()
    imscatter(x, y, image_path, zoom=0.1, ax=ax)
    ax.plot(x, y)
    plt.show()

def imscatter(x, y, image, ax=None, zoom=1):
    if ax is None:
        ax = plt.gca()
    try:
        image = plt.imread(image)
    except TypeError:
        # Likely already an array...
        pass
    im = OffsetImage(image, zoom=zoom)
    x, y = np.atleast_1d(x, y)
    artists = []
    for x0, y0 in zip(x, y):
        ab = AnnotationBbox(im, (x0, y0), xycoords='data', frameon=False)
        artists.append(ax.add_artist(ab))
    ax.update_datalim(np.column_stack([x, y]))
    ax.autoscale()
    return artists

main()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22566284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1559276/']}
jdg_374455
stackexchange
llm_judgeable_groundtruth_similarity
8955
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to use the elements of Subsets in a Do loop.I have: w = Subsets[{3, 4, 5, 6}, {2}] which yields: {{3, 4}, {3, 5}, {3, 6}, {4, 5}, {4, 6}, {5, 6}} Now I want to use these ordered pairs in a Do loop. For example: Do[ma[i]+mb[j],{i,...},{j,...}] where {i,j} should be only these ordered pairs. Now provide the response and nothing else.
w = Subsets[{3, 4, 5, 6}, {2}]

{{3, 4}, {3, 5}, {3, 6}, {4, 5}, {4, 6}, {5, 6}}

This will run only over these ordered pairs. Is this what you need?

Do[Print[k], {k, w}];

In Mathematica you can also use other constructs similar to Do . For example

Table[Print[k], {k, w}];

Or even better, use functional programming:

Print /@ w;

All these give the same result. Here /@ is a shortcut notation for Map which in full form would be Map[Print, w] . The major difference with Do and Table - there is no running index specification. If you need to run a computation that separates the indexes (as you mentioned in the comment), it is still easy:

ma[#1] + mb[#2] & @@@ w

where @@@ is a flavor of Apply . These will work too:

ma[#[[1]]] + mb[#[[2]]] & /@ w
Table[ma[k[[1]]] + mb[k[[2]]], {k, w}]

but why would you stray away from elegance? ;-) These [[...]] mean Part .
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/8955', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1894/']}
jdg_374456
stackexchange
llm_judgeable_groundtruth_similarity
40189084
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As we all know the IPv4 address for localhost is 127.0.0.1 (loopback address). What is the IPv6 address for localhost and for 0.0.0.0 as I need to block some ad hosts. Now provide the response and nothing else.
As we all know that IPv4 address for localhost is 127.0.0.1 (loopback address). Actually, any IPv4 address in 127.0.0.0/8 is a loopback address. In IPv6, the direct analog of the loopback range is ::1/128 . So ::1 (long form 0:0:0:0:0:0:0:1 ) is the one and only IPv6 loopback address. While the hostname localhost will normally resolve to 127.0.0.1 or ::1 , I have seen cases where someone has bound it to an IP address that is not a loopback address. This is a bit crazy ... but sometimes people do it. I say "this is crazy" because you are liable to break applications assumptions by doing this; e.g. an application may attempt to do a reverse lookup on the loopback IP and not get the expected result. In the worst case, an application may end up sending sensitive traffic over an insecure network by accident ... though you probably need to make other mistakes as well to "achieve" that. Blocking 0.0.0.0 makes no sense. In IPv4 it is never routed. The equivalent in IPv6 is the :: address (long form 0:0:0:0:0:0:0:0 ) ... which is also never routed. The 0.0.0.0 and :: addresses are reserved to mean "any address". So, for example a program that is providing a web service may bind to 0.0.0.0 port 80 to accept HTTP connections via any of the host's IPv4 addresses. These addresses are not valid as a source or destination address for an IP packet. Finally, some comments were asking about ::/128 versus ::/0 versus :: . What is this difference? Strictly speaking, the first two are CIDR notation not IPv6 addresses. They are actually specifying a range of IP addresses. A CIDR consists of a IP address and an additional number that specifies the number of bits in a netmask. The two together specify a range of addresses; i.e. the set of addresses formed by ignoring the bits masked out of the given address. So: :: means just the IPv6 address 0:0:0:0:0:0:0:0 ::/128 means 0:0:0:0:0:0:0:0 with a netmask consisting of 128 bits. This gives a network range with exactly one address in it. 
::/0 means 0:0:0:0:0:0:0:0 with a netmask consisting of 0 bits. This gives a network range with 2^128 addresses in it; i.e. it is the entire IPv6 address space! For more information, read the Wikipedia pages on IPv4 & IPv6 addresses, and CIDR notation: https://en.wikipedia.org/wiki/IPv4_address https://en.wikipedia.org/wiki/IPv6_address https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
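For what it's worth, every fact above can be checked with Python's stdlib ipaddress module — a quick illustrative verification, not part of the original answer:

```python
import ipaddress

loop6 = ipaddress.ip_address("::1")              # the one and only IPv6 loopback
unspec = ipaddress.ip_address("::")              # IPv6 "any address", like 0.0.0.0
odd_v4_loop = ipaddress.ip_address("127.5.5.5")  # any 127.0.0.0/8 address loops back

whole_v6 = ipaddress.ip_network("::/0").num_addresses    # entire IPv6 space
single = ipaddress.ip_network("::/128").num_addresses    # exactly one address
```

The library agrees with the answer: 127.5.5.5 is a loopback even though it isn't 127.0.0.1, ::/128 contains only ::1's neighborhood of one address, and ::/0 contains all 2^128 addresses.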
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/40189084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6758300/']}
jdg_374457
stackexchange
llm_judgeable_groundtruth_similarity
172313
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In a certain text on Perron-Frobenius theory, it is postulated that every irreducible nonnegative matrix with period $1$ is primitive and this proposition is said to be obvious. However, when I tried to prove the theorem by myself, I found that I was unable to come up with a formal proof. Intuitively, of course, I am sure that the theorem holds (and the converse statement is in fact easy to prove), but I am unable to prove it. According to the text, the theorem should be obvious from the graph-theoretic interpretation of these notions. For a given nonnegative matrix, it is possible to construct a digraph as follows: There is an edge from the $i$-th vertex to the $j$-th vertex if and only if the entry $(i,j)$ of the matrix is positive. Thus, the matrix is irreducible if and only if its digraph is strongly connected. The period is defined to be the greatest common divisor of lengths of cycles (more precisely, the closed paths) of the graph. And finally, the matrix is said to be primitive if there is a positive integer $n$ such that for each pair of vertices of the graph there is a path of length $n$ interconnecting these vertices. The theorem to be proved can thus be restated as follows: For every strongly connected digraph with the greatest common divisor of the lengths of closed paths equal to $1$, there is a positive integer $n$ such that for every pair of vertices of the digraph there is a path of length $n$ interconnecting these vertices. It seems to me that the theorem might be proved by means of number theory, but I have not been able to find a proof up to now. To be more specific, I am looking for a proof without the use of the Perron-Frobenius theorem (the proposition is used in the text to prove the Perron-Frobenius theorem). Any ideas? Thank you in advance. Now provide the response and nothing else.
Here is the argument given in section 1.3 of Gregory F. Lawler's Introduction to Stochastic Processes . It treats stochastic matrices $P$,but I think the argument applies to general non-negative matrices. For each state $i$ define $J_i=\{n: P^n(i,i)>0\}$. This is a semigroup and sincewe have assumed that $P$ is aperiodic, we have $\gcd(J_i)=1$ and it follows that $J_i$ contains all sufficiently large integers. That is, there is some integer $M(i)$ so that for all $n\geq M(i)$ we have$P^n(i,i)>0$. Since $P$ is irreducible, there exists some $m(i,j)$ such that $P^{m(i,j)}(i,j)>0$. Hence for $n\geq M(i)$, $$P^{n+m(i,j)}(i,j)\geq P^n(i,i)\,P^{m(i,j)}(i,j)>0.$$ Let $M$ be the maximum value of $M(i)+m(i,j)$ over all pairs $(i,j)$. Then for $n\geq M$, $P^n(i,j)>0$ for all states $i,j$. Essentially the same argument is found in section 1.3 of Markov Chains and Mixing Times by Levin, Peres, and Wilmer. So it looks like probabilists have not found a better proof.
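The argument can be illustrated on a concrete matrix. The sketch below is my own example (not from Lawler's book): the digraph has a 2-cycle and a 3-cycle, so gcd of cycle lengths is 1, and repeated multiplication finds the first power with all entries positive — exactly the M the proof guarantees exists.

```python
def matmul(A, B):
    """Plain-Python product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of a strongly connected digraph with edges
# 0->1, 1->0, 1->2, 2->0: it has a 2-cycle (0,1) and a 3-cycle (0,1,2),
# so gcd(2, 3) = 1 and the matrix is irreducible and aperiodic.
A = [[0, 1, 0],
     [1, 0, 1],
     [1, 0, 0]]

P = A
power = 1
while any(P[i][j] == 0 for i in range(3) for j in range(3)):
    P = matmul(P, A)
    power += 1
# P = A^power is now strictly positive
```

For this matrix the fifth power is the first strictly positive one; from then on every pair of vertices is joined by a walk of every length.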
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/172313', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15768/']}
jdg_374458
stackexchange
llm_judgeable_groundtruth_similarity
335306
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Tikhonov regularization (or ridge regression) adds a constraint that $\|\beta\|^2$, the $L^2$-norm of the parameter vector, is not greater than a given value (say $c$). Equivalently, it may solve an unconstrained minimization of the least-squares penalty with $\alpha\|\beta\|^2$ added, where $\alpha$ is a constant (this is the Lagrangian form of the constrained problem). The above is from Wikipedia . Why is the unconstrained LS with $\alpha\|\beta\|^2$ added to the cost equivalent to the LS problem with an additional constraint that $\|\beta\|^2 \leq c$? What is the relation between $\alpha$ and $c$? Thanks! Now provide the response and nothing else.
Let us first define the two problems: Problem 1: \begin{equation} \min_{\beta} ~f_\alpha(\beta):=\frac{1}{2}\Vert y-X\beta\Vert^2 +\alpha\Vert \beta\Vert^2\end{equation} Problem 2: \begin{align} \min_{\beta} ~&\frac{1}{2}\Vert y-X\beta\Vert^2\\ s.t.~&\Vert \beta\Vert^2-c\leq 0\end{align} The Lagrangian for Problem 2 reads:\begin{equation}\mathcal{L}(\beta,\lambda)=\frac{1}{2}\Vert y-X\beta\Vert^2+\lambda (\Vert \beta\Vert^2-c)\end{equation}and you probably already see the resemblance with Problem 1 (identical except for the constant term $-\lambda c$). Now let us look at the necessary conditions for optimality. For Problem 1, these read:\begin{equation}\nabla_\beta f_\alpha(\beta^*(\alpha))=0\end{equation}where we voluntarily write $\beta^*(\alpha)$ to show that this is the optimal solution for a given $\alpha$. For Problem 2, the KKT conditions imply that we have:\begin{align*}\nabla_\beta \mathcal{L}(\beta^*,\lambda^*)&=\nabla_\beta f_\lambda(\beta^*)=0\\\lambda^* (\Vert \beta^*\Vert^2-c)&=0\end{align*}The first line says that the gradient of the Lagrangian with respect to $\beta$ should be null and the second is the complementary condition. (We also need $\lambda^* \geq 0$, but this is less important for our discussion). Also observe that the gradient of the Lagrangian is equal to the gradient of $f_\lambda$ (objective function of problem 1 but with $\lambda$ instead of $\alpha$). Now suppose we solve Problem 1 for a given $\alpha$ and obtain its solution $\beta^*(\alpha)$. Let $c=\Vert \beta^*(\alpha)\Vert^2$, the squared norm of the solution to Problem 1. Then $\lambda^*=\alpha$ and $\beta^*=\beta^*(\alpha)$ satisfy the KKT conditions for Problem 2, showing that both Problems have the same solution. Conversely, if you solved Problem 2, you could set $\alpha=\lambda^*$ to retrieve the same solution by solving Problem 1. To sum it up, both problems are equivalent when $c=\Vert \beta^*(\alpha)\Vert^2$.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/335306', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1281/']}
jdg_374459
stackexchange
llm_judgeable_groundtruth_similarity
332
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm a student studying electronics and I'm having trouble understanding the concept behind current sourcing and current sinking. We've covered it in a lab using a 7404 and an LED and all that. Just having trouble getting an intuitive understanding of what exactly is happening. If anyone can take a crack at explaining, it would be greatly appreciated. Just to be sure, I understand what the process is, in terms of current flow and from input to output and vice versa. Just not getting why one is preferred over the other, and what it has to do with having a floating Hi input or why I wouldn't want to have a floating Hi. Input would be greatly appreciated. Thanks! Now provide the response and nothing else.
short version: current sources connect things to Vcc, current sinks connect them to ground.

longer version: The following is a practical explanation of current-sources/sinks as used in microcontrollers & TTL logic. For a more theoretical description, see the Wikipedia page on current source . Some devices are very good at creating a connection to ground (or whatever the lowest voltage is in the system, e.g. 0V). Other devices are very good at creating a connection to Vcc (or whatever is the highest voltage in the system, e.g. +5V). Those devices that are good at connecting to ground are called current sinks; those good at connecting to Vcc are called current sources. Until recently (the last decade or so), it was unusual for integrated circuits to be good at being both. Most were good at being current sinks but were terrible at being current sources. So a lot of circuits were designed so all the chip had to do was connect to ground to make the circuit do its thing. Many chips still have an asymmetric current drive ability and function better switching to ground than switching to Vcc. To me a good example of a current source and a current sink are the standard "switch" configurations of a PNP and NPN transistor. A PNP is a good current source: you almost always connect its emitter to Vcc, and it switches it on/off. An NPN is a good current sink: its emitter is almost always connected to ground and it switches the ground connection on/off. Why you choose one over the other often depends on the capabilities of the parts available to you. For instance, an RGB LED is often a "common-anode" type where the anode (positive lead) is connected on all three LED elements, so to turn on an element you need to connect its lead to ground. You can use three pins on a microcontroller to do this (or three NPN transistors) and they would be acting as current sinks.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/332', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/217/']}
jdg_374460
stackexchange
llm_judgeable_groundtruth_similarity
2618959
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using jQuery's ajax method to acquire a static JSON file. The data is loaded from the local file system, hence there is no server, so I can't change the MIME type. This works fine in Safari, but Firefox (3.6.3) reports the file to be "not well-formed". I am aware of, and have reviewed, a similar post here on Stack Overflow: "not well-formed" error in Firefox when loading JSON file with XMLHttpRequest I believe my JSON is well-formed: { "_": ["appl", "goog", "yhoo", "vz", "t"]} My ajax call is straightforward: $.ajax({ url: 'data/tickers.json', dataType: 'json', async: true, data: null, success: function(data, textStatus, request) { callback(data); }}); If I wrap the JSON with a document tag: <document>JSON data</document> as was mentioned in the other Stack Overflow question referenced above, the ajax call fails with a parse error. So: is there a way to avoid the Firefox warning when reading in client-side JSON files? Now provide the response and nothing else.
Sometimes using an HTTP server is not an option, which may mean that MIME types won't be automatically provided for some files. Adapted from Peter Hoffman's answer for jQuery .getJSON Firefox 3 Syntax Error Undefined , use this code before you make any $.getJSON() calls:

$.ajaxSetup({
    beforeSend: function(xhr) {
        if (xhr.overrideMimeType) {
            xhr.overrideMimeType("application/json");
        }
    }
});

Or, if you're using $.ajax():

$.ajax({
    url: url,
    beforeSend: function(xhr) {
        if (xhr.overrideMimeType) {
            xhr.overrideMimeType("application/json");
        }
    },
    dataType: 'json',
    data: data,
    success: callback
});
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2618959', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65934/']}
jdg_374461
stackexchange
llm_judgeable_groundtruth_similarity
2701263
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know i can extract a file's icon using using (System.Drawing.Icon sysicon = System.Drawing.Icon.ExtractAssociatedIcon(filePath)){ icon = System.Windows.Interop.Imaging.CreateBitmapSourceFromHIcon( sysicon.Handle, System.Windows.Int32Rect.Empty, System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());} But how can I, with no file, get the icon for a given extension? Now provide the response and nothing else.
Use the GetFileIcon method from this CodeProject article from Paul Ingles and pass .ext as the name parameter. The GetFileIcon method is a wrapper around the native SHGetFileInfo and copied here for illustration:

public static System.Drawing.Icon GetFileIcon(string name, IconSize size, bool linkOverlay)
{
    Shell32.SHFILEINFO shfi = new Shell32.SHFILEINFO();
    uint flags = Shell32.SHGFI_ICON | Shell32.SHGFI_USEFILEATTRIBUTES;

    if (true == linkOverlay) flags += Shell32.SHGFI_LINKOVERLAY;

    /* Check the size specified for return. */
    if (IconSize.Small == size)
    {
        flags += Shell32.SHGFI_SMALLICON; // include the small icon flag
    }
    else
    {
        flags += Shell32.SHGFI_LARGEICON; // include the large icon flag
    }

    Shell32.SHGetFileInfo(name,
        Shell32.FILE_ATTRIBUTE_NORMAL,
        ref shfi,
        (uint)System.Runtime.InteropServices.Marshal.SizeOf(shfi),
        flags);

    // Copy (clone) the returned icon to a new object, thus allowing us
    // to call DestroyIcon immediately
    System.Drawing.Icon icon = (System.Drawing.Icon)System.Drawing.Icon.FromHandle(shfi.hIcon).Clone();

    User32.DestroyIcon(shfi.hIcon); // Cleanup
    return icon;
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2701263', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/173437/']}
jdg_374462
stackexchange
llm_judgeable_groundtruth_similarity
194462
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have a MySQL database that contains almost 100 tables. I want to set up N additional MySQL databases on the same server, each running on a different port. And I want each additional database to have the same schema/table structures as the original database. Is there any way to automatically make N duplicates of the original database and set them up on N different ports? Thanks Now provide the response and nothing else.
Have a look at mysqldump and its --no-data option to get a copy of the schema. Once you have that you will have to make a script to source in mysql, e.g.

mysqldump --no-data -u user -p database > database-schema.sql

will generate a copy of the schema for database database . If you had a script like below in a file called for example makedbs.sql

create database N;
-- set permissions here
use N;
source database-schema.sql;

create database N1;
-- set permissions here
use N1;
source database-schema.sql;

Then you could run mysql and source makedbs.sql

mysql -u user -p
mysql> source makedbs.sql

which would create 2 databases N and N1 which have the same schema as your original.
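The same "dump the schema once, replay it into N fresh databases" workflow can be sketched with SQLite from the Python stdlib. This is only an analogy to the mysqldump approach (SQLite, not MySQL; table names are made up) — the point is that the schema is extracted without data and then re-executed per clone:

```python
import sqlite3

# Source database with schema AND data
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('alice')")  # data will NOT be copied

# Rough equivalent of `mysqldump --no-data`: pull just the CREATE statements
schema = [row[0] for row in src.execute(
    "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL")]

# Replay the schema into N = 3 fresh databases (like sourcing makedbs.sql)
clones = []
for _ in range(3):
    db = sqlite3.connect(":memory:")
    for stmt in schema:
        db.execute(stmt)
    clones.append(db)
```

Each clone ends up with the same table structure as the source but zero rows, mirroring what --no-data gives you.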
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/194462', 'https://serverfault.com', 'https://serverfault.com/users/10217/']}
jdg_374463
stackexchange
llm_judgeable_groundtruth_similarity
3275041
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $I$ be an ideal of a ring $R$ . Define $I^2$ to be the set of all finite sums of elements of the form $a_1a_2$ where $a_1,a_2\in I$ . Why isn't it necessarily the case that $I=I^2$ ? If $a\in I$ , then $a=1a\in I^2$ , so $I\subseteq I^2$ . And any element of $I^2$ must also be in $I$ since $I$ is closed under addition and multiplication, which then means $I^2\subseteq I$ , and so it follows that $I=I^2$ . Am I missing something? Now provide the response and nothing else.
Why is $1a\in I^2$ ? If $I$ is a proper ideal, we must have $1\notin I$ .
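A concrete counterexample makes the gap vivid: in $R=\mathbb{Z}$ with $I=2\mathbb{Z}$, every product $a_1a_2$ of two elements of $I$ is divisible by 4, so every finite sum of such products is too — hence $2\in I$ but $2\notin I^2=4\mathbb{Z}$. A small finite check of this (my own illustration, using a window of the ideal):

```python
# R = Z, I = 2Z (restricted to a finite window for the check)
I = {2 * k for k in range(-20, 21)}

# All elements a1*a2 with a1, a2 in I
products = {a * b for a in I for b in I}

# Every product is divisible by 4, so every finite SUM of products is too
all_multiples_of_4 = all(p % 4 == 0 for p in products)
```

Since 2 is not a multiple of 4, it cannot be written as any finite sum of these products — the step "$a = 1a \in I^2$" fails precisely because $1 \notin I$.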
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3275041', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/452318/']}
jdg_374464
stackexchange
llm_judgeable_groundtruth_similarity
14458246
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When running code coverage for my solution, which contains multiple projects, I noticed that Code Coverage also includes my test assemblies. I found an article which suggests the following template for .runsettings:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Include>
                <!-- Do not specify any includes. This will attempt to include all binaries -->
              </Include>
              <Exclude>
                <!-- Exclude modules that aren't to be processed, by their name / path -->
                <ModulePath>.*Tests\.dll$</ModulePath> <!-- I put it here -->
              </Exclude>
            </ModulePaths>
            <UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
            <AllowLowIntegrityProcesses>True</AllowLowIntegrityProcesses>
            <CollectFromChildProcesses>True</CollectFromChildProcesses>
            <CollectAspDotNet>False</CollectAspDotNet>
            <!-- Additional paths to search for symbol files. Symbols must be found for modules to be instrumented.
                 If symbols are alongside the binaries, they are automatically picked up. Otherwise specify them here.
                 Note that searching for symbols increases code coverage runtime. So keep this small and local.
            <SymbolSearchPaths>
              <Path>C:\Users\User\Documents\Visual Studio 11\Projects\ProjectX\bin\Debug</Path>
              <Path>\\mybuildshare\builds\ProjectX</Path>
            </SymbolSearchPaths>
            -->
            <Functions>
              <Exclude>
                <Function>^std::.*</Function>
                <Function>^ATL::.*</Function>
                <Function>.*::__GetTestMethodInfo.*</Function>
                <Function>^Microsoft::VisualStudio::CppCodeCoverageFramework::.*</Function>
                <Function>^Microsoft::VisualStudio::CppUnitTestFramework::.*</Function>
                <Function>.*::YOU_CAN_ONLY_DESIGNATE_ONE_.*</Function>
              </Exclude>
            </Functions>
            <Attributes>
              <Exclude>
                <Attribute>^System.Diagnostics.DebuggerHiddenAttribute$</Attribute>
                <Attribute>^System.Diagnostics.DebuggerNonUserCodeAttribute$</Attribute>
                <Attribute>^System.Runtime.CompilerServices.CompilerGeneratedAttribute$</Attribute>
                <Attribute>^System.CodeDom.Compiler.GeneratedCodeAttribute$</Attribute>
                <Attribute>^System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute$</Attribute>
              </Exclude>
            </Attributes>
            <Sources>
              <Exclude>
                <Source>.*\\atlmfc\\.*</Source>
                <Source>.*\\vctools\\.*</Source>
                <Source>.*\\public\\sdk\\.*</Source>
                <Source>.*\\microsoft sdks\\.*</Source>
                <Source>.*\\vc\\include\\.*</Source>
              </Exclude>
            </Sources>
            <CompanyNames>
              <Exclude>
                <CompanyName>.*microsoft.*</CompanyName>
              </Exclude>
            </CompanyNames>
            <PublicKeyTokens>
              <Exclude>
                <PublicKeyToken>^B77A5C561934E089$</PublicKeyToken>
                <PublicKeyToken>^B03F5F7F11D50A3A$</PublicKeyToken>
                <PublicKeyToken>^31BF3856AD364E35$</PublicKeyToken>
                <PublicKeyToken>^89845DCD8080CC91$</PublicKeyToken>
                <PublicKeyToken>^71E9BCE111E9429C$</PublicKeyToken>
                <PublicKeyToken>^8F50407C4E9E73B6$</PublicKeyToken>
                <PublicKeyToken>^E361AF139669C375$</PublicKeyToken>
              </Exclude>
            </PublicKeyTokens>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>

Now the problem is that as soon as I check my .runsettings file in the TEST\Test settings menu, I see Code Coverage analyzes only one of many assemblies in my solution. Even if I remove the line <ModulePath>.*Tests\.dll$</ModulePath> <!-- I put it here -->, only one assembly is analyzed. I would like to avoid adding all my assemblies to the Include list; I just need to exclude all the test assemblies. Why does .runsettings make code coverage see only one of the assemblies in my VS solution? Now provide the response and nothing else.
The issue is the period. For some reason the RegEx is choking on that. You can get around it by escaping the period as follows:

<ModulePaths>
  <Include>
    <ModulePath>.*MyCompany\.Namespace\.Project\.dll$</ModulePath>
  </Include>
  <Exclude>
    <ModulePath>.*ThirdParty\.Namespace\.Project\.dll$</ModulePath>
  </Exclude>
</ModulePaths>

Also, the empty includes are valid and imply that all modules are included. The comments of the Microsoft-provided sample state that:

<!--
About include/exclude lists:
Empty "Include" clauses imply all; empty "Exclude" clauses imply none.
Each element in the list is a regular expression (ECMAScript syntax).
See http://msdn.microsoft.com/library/2k3te2cs.aspx.
An item must first match at least one entry in the include list to be included.
Included items must then not match any entries in the exclude list to remain included.
-->
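As an aside, the escaping behaviour is easy to demonstrate with any regex engine. The snippet below uses Python's re module purely as an illustration (the collector uses ECMAScript-syntax regexes, but the role of \. is the same), and the module names are invented for the example:

```python
import re

# Unescaped '.' is a wildcard: this pattern also matches names we did NOT intend.
loose = re.compile(r".*MyCompany.Namespace.Project\.dll$")
# Escaped '\.' matches only a literal period.
strict = re.compile(r".*MyCompany\.Namespace\.Project\.dll$")

assert loose.search(r"C:\build\MyCompanyXNamespaceYProject.dll")   # unintended match
assert strict.search(r"C:\build\MyCompany.Namespace.Project.dll")  # intended match
assert not strict.search(r"C:\build\MyCompanyXNamespaceYProject.dll")

# The test-assembly exclude from the question works the same way:
tests = re.compile(r".*Tests\.dll$")
assert tests.search(r"C:\build\Foo.UnitTests.dll")
assert not tests.search(r"C:\build\Foo.dll")
```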
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14458246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217823/']}
jdg_374465
stackexchange
llm_judgeable_groundtruth_similarity
23026612
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to draw a simple quad in OpenGL 3.2 however the application crashes with "Access violation reading location 0x00000000" when I call "glDrawElements". I assume the issue is that the Vertex Buffer data is wrong, but I am unsure how to fix the issue / debug it (an OpenGL trace would be fantastic but I do not know how to enable this either...) Initialization code:

std::vector<CUShort> Indices;
const CUShort IndicesArray[] = { 0, 1, 2, 2, 0, 3 };
for(size_t j = 0; j < 1; j++) {
    for(size_t i = 0; i < sizeof(IndicesArray) / sizeof(*IndicesArray); i++) {
        Indices.push_back(4 * j + IndicesArray[i]);
    }
}
glGenBuffers(1, &m_Elements);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Elements);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, Indices.size() * sizeof(CUShort), &Indices[0], GL_STATIC_DRAW);
glGenVertexArrays(1, &m_Array);
glBindVertexArray(m_Array);
glGenBuffers(1, &m_Buffer);
glBindBuffer(GL_ARRAY_BUFFER, m_Buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(TexColorVertex), NULL, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_TRUE, sizeof(TexColorVertex), (const GLvoid*)offsetof(TexColorVertex, Color));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(TexColorVertex), (const GLvoid*)offsetof(TexColorVertex, Position));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(TexColorVertex), (const GLvoid*)offsetof(TexColorVertex, Texcoord));
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

Drawing code:

glBindBuffer(GL_ARRAY_BUFFER, m_Buffer);
TexColorVertex Vertices[4];
glm::vec4 SpritePos = glm::vec4(0, 0, 1024.0f, 384.0f);
Vertices[0].Position = glm::vec2(SpritePos.x, SpritePos.y);
Vertices[1].Position = glm::vec2(SpritePos.x, SpritePos.w);
Vertices[2].Position = glm::vec2(SpritePos.z, SpritePos.w);
Vertices[3].Position = glm::vec2(SpritePos.z, SpritePos.y);
Color Kittens = Color::HotPink();
Vertices[0].Color = glm::vec4(Kittens.R(), Kittens.G(), Kittens.B(), 1.0f);
Vertices[1].Color = glm::vec4(Kittens.R(), Kittens.G(), Kittens.B(), 1.0f);
Vertices[2].Color = glm::vec4(Kittens.R(), Kittens.G(), Kittens.B(), 1.0f);
Vertices[3].Color = glm::vec4(Kittens.R(), Kittens.G(), Kittens.B(), 1.0f);
Vertices[0].Texcoord = glm::vec2(0.0f, 0.0f);
Vertices[1].Texcoord = glm::vec2(0.0f, 1.0f);
Vertices[2].Texcoord = glm::vec2(1.0f, 1.0f);
Vertices[3].Texcoord = glm::vec2(1.0f, 0.0f);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(Vertices), sizeof(Vertices), Vertices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_Elements);
glBindVertexArray(m_Array);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const GLvoid*)0);

The vertex struct is declared as such:

struct TexColorVertex
{
    TexColorVertex(void) { }
    TexColorVertex(glm::vec2 const& Position, glm::vec2 const& Texcoord)
        : Position(Position), Texcoord(Texcoord) { }
    glm::vec2 Position;
    glm::vec2 Texcoord;
    glm::vec4 Color;
};

Does anyone have any suggestions on how to fix this and draw a simple quad that takes up half the screen? Now provide the response and nothing else.
While user3256930 does bring up a valid point about the allocated size of your buffer, that is actually not the cause of your crash. The problem is not with glBufferSubData (...), but rather with the call to glDrawElements (...). This call is attempting to dereference a NULL pointer, which is a red flag that nothing is bound to GL_ELEMENT_ARRAY_BUFFER. When nothing is bound to GL_ELEMENT_ARRAY_BUFFER, then the pointer you pass to glDrawElements (...) is an actual pointer to client memory rather than an offset into Buffer Object (server) memory. To understand why this occurs, recall what Vertex Array Objects store:

  * Vertex Attribute State [Pointers, Enable/Disable]
  * Element Array Buffer Binding (GL_ELEMENT_ARRAY_BUFFER)

Now, consider the order of these two calls:

glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, m_Elements);
glBindVertexArray (m_Array);

First you bind something to GL_ELEMENT_ARRAY_BUFFER (m_Elements) and immediately afterwards you bind a Vertex Array Object (m_Array), which replaces the element array buffer you just bound with the binding it keeps track of internally. You should consider either (1) using your VAO to persistently reference a single element array buffer or (2) reversing the order of these two calls. If your Vertex Array Object (m_Array) will always be used with the same element array buffer, then I would suggest you use the first approach. This can be implemented simply by moving the following code in your initialization:

glGenVertexArrays (1, &m_Array);
glBindVertexArray (m_Array);

to come before:

glGenBuffers (1, &m_Elements);
glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, m_Elements);
glBufferData (GL_ELEMENT_ARRAY_BUFFER, Indices.size() * sizeof(CUShort), &Indices[0], GL_STATIC_DRAW);

With this approach, nothing needs to be explicitly bound to GL_ELEMENT_ARRAY_BUFFER in your drawing code.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23026612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3133245/']}
jdg_374466
stackexchange
llm_judgeable_groundtruth_similarity
148356
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that it's easy to make a unity gain buffer with an op-amp (as a voltage follower): [CircuitLab schematic: voltage follower] I also know that it's easy to make an inverting buffer with an op-amp (an inverting amplifier with \$R_1 = R_2\$): [CircuitLab schematic: inverting amplifier] However, the accuracy of this inverting amplifier depends on the precision of \$R_1\$ and \$R_2\$ - if they're not closely matched, the output will be a bit different from \$-V_{in}\$. Is there a way of making an inverting buffer with an op-amp that doesn't depend on the precision of these resistors, like the voltage follower? Is it a better idea to get higher precision resistors? Now provide the response and nothing else.
No, there is no way to make an inverting buffer with just an op-amp that does not depend on the resistor values. You can get resistors with very fine accuracy and stability (at an equally impressive price), or you can get networks of matched resistors (matched in value and in temperature coefficient) where the absolute accuracy may not be so impressive but the ratio is tightly controlled. There is a way to invert a signal without accurate resistors - the so-called flying-capacitor method - but it's fairly complex, and resistors are a better solution for most situations down to ppm-level accuracy.
{}
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/148356', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/49251/']}
jdg_374467
stackexchange
llm_judgeable_groundtruth_similarity
32435991
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code

#include<iostream>
#include<fstream>
#include<string>
using namespace std;

int main(void) {
    fstream ofile;
    ofile.open("test.txt", ios::in | ios::out | ios::app);
    for(string line; getline(ofile, line) ; ) {
        cout << line << endl;
    }
    ofile << "stackexchnange" << endl;
    ofile.close();
    return 0;
}

test.txt contains

hello world!
stackoverflow

The above code outputs

hello world!
stackoverflow

And after running the code, stackexchange is not appended at the end of test.txt. How do I read and then write in the file? Now provide the response and nothing else.
Nawaz' comment is correct. Your read loop iterates until the fstream::operator bool (of ofile) returns false. Therefore, after the loop, either failbit or badbit must have been set. failbit is set when the loop tries to read for the final time but only EOF is left to read. This is completely OK, but you must reset the error state flag before trying to use the stream again.

// ...
ofile.clear();
ofile << "stackexchnange" << endl;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32435991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2350145/']}
jdg_374468
stackexchange
llm_judgeable_groundtruth_similarity
144709
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Recently, I was informed that we can verify the famous formula about $\mathrm{lcm}(a,b)$ and $\gcd(a,b)$, which is $$\mathrm{lcm}(a,b)=\frac{|ab|}{\gcd(a,b)} $$ via group theory. The least common multiple of two integers $a$ and $b$, usually denoted by $\mathrm{lcm}(a,b)$, is the smallest positive integer that is a multiple of both $a$ and $b$, and the greatest common divisor ($\gcd$) of two or more non-zero integers is the largest positive integer that divides the numbers without a remainder. I do not know whether we can prove this equation using groups or not, but if we can, I am eager to see how someone would approach it. Thanks. Now provide the response and nothing else.
Lemma. Let $G$ be a group, written multiplicatively, and let $H$ and $K$ be two subgroups. If $HK = \{hk\mid h\in H, k\in K\}$, then$$|HK||H\cap K| = |H||K|$$in the sense of cardinalities. Proof. Consider the map $H\times K\to HK$ given by $(h,k)\mapsto hk$. I claim that the map is exactly $|H\cap K|$ to $1$. Indeed, if $hk=h'k'$, then $h'^{-1}h = k'k^{-1}\in H\cap K$, so there exists $u\in H\cap K$, namely $u=h'^{-1}h$ such that $h=h'u$ and $k=u^{-1}k'$. Thus, $(h,k) = (h'u,u^{-1}k')$ maps to the same thing as $(h',k')$. Conversely, given $v\in H\cap K$, we have that $(h'v,v^{-1}k')\in H\times K$ maps to the same thing as $(h',k')$. Thus, each element of $HK$ corresponds to precisely $|H\cap K|$ elements of $H\times K$. Thus, $|HK||H\cap K| = |H\times K| = |H||K|$, as claimed. $\Box$ Let $a$ and $b$ be integers, and consider $\mathbb{Z}/\langle ab\rangle$. This is a group with $|ab|$ elements. This group contains subgroups generated by $\gcd(a,b)$, by $a$, by $b$, and by $\mathrm{lcm}(a,b)$. $\gcd(a,b)$ generates the largest subgroup containing both $a$ and $b$; i.e., $\langle \gcd(a,b)\rangle = \langle a\rangle + \langle b\rangle$; while $\mathrm{lcm}(a,b)$ generates the smallest subgroup contained in both $\langle a\rangle$ and $\langle b\rangle$, i.e., $\langle \mathrm{lcm}(a,b)\rangle = \langle a\rangle\cap\langle b\rangle$. By the Lemma (with addition, since we are working in an additive group), we have:$$|\langle a\rangle+\langle b\rangle| |\langle a\rangle\cap\langle b\rangle| = |\langle a\rangle||\langle b\rangle|$$Now, the subgroup generated by $\gcd(a,b)$ has $\frac{|ab|}{\gcd(a,b)}$ elements; the subgroup generated by $\mathrm{lcm}(a,b)$ has $\frac{|ab|}{\mathrm{lcm}(a,b)}$ elements; that generated by $a$ has $\frac{|ab|}{|a|}$ elements, that generated by $b$ has $\frac{|ab|}{|b|}$ elements. Plugging all of that in it becomes$$\gcd(a,b)\mathrm{lcm}(a,b) = |a||b|$$which yields the desired equality. $\Box$
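The counting identity in the Lemma and the final formula can be sanity-checked by brute force for cyclic groups. The sketch below (mine, not part of the answer) works additively in $\mathbb{Z}/ab$, where $\langle \gcd(a,b)\rangle = \langle a\rangle + \langle b\rangle$ and $\langle \mathrm{lcm}(a,b)\rangle = \langle a\rangle \cap \langle b\rangle$:

```python
from math import gcd

def lcm(a, b):
    return abs(a * b) // gcd(a, b)

def subgroup(g, n):
    """Cyclic subgroup of Z/n generated (additively) by g."""
    return {(k * g) % n for k in range(n)}

for a, b in [(4, 6), (12, 18), (7, 5), (9, 9)]:
    n = a * b
    Ha, Hb = subgroup(a, n), subgroup(b, n)
    sumset = {(h + k) % n for h in Ha for k in Hb}       # H + K
    meet = Ha & Hb                                       # H ∩ K
    assert sumset == subgroup(gcd(a, b), n)              # <gcd(a,b)> = <a> + <b>
    assert meet == subgroup(lcm(a, b), n)                # <lcm(a,b)> = <a> ∩ <b>
    assert len(sumset) * len(meet) == len(Ha) * len(Hb)  # |H+K| |H∩K| = |H| |K|
    assert gcd(a, b) * lcm(a, b) == abs(a * b)           # the target identity
```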
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/144709', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8581/']}
jdg_374469
stackexchange
llm_judgeable_groundtruth_similarity
24422810
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have:

type foo struct{ }

func bar(baz interface{}) { }

The above are set in stone - I can't change foo or bar. Additionally, baz must be converted back to a foo struct pointer inside bar. How do I cast &foo{} to interface{} so I can use it as a parameter when calling bar? Now provide the response and nothing else.
To turn *foo into an interface{} is trivial:

f := &foo{}
bar(f) // every type implements interface{}. Nothing special required

In order to get back to a *foo, you can either do a type assertion:

func bar(baz interface{}) {
    f, ok := baz.(*foo)
    if !ok {
        // baz was not of type *foo. The assertion failed
    }
    // f is of type *foo
}

Or a type switch (similar, but useful if baz can be multiple types):

func bar(baz interface{}) {
    switch f := baz.(type) {
    case *foo:
        // f is of type *foo
    default:
        // f is some other type
    }
}
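A runnable version tying the pieces together (the describe helper is invented for the demo; only foo and the assertion forms come from the answer):

```go
package main

import "fmt"

type foo struct{}

// describe is a hypothetical helper (not part of the question) showing the
// type-switch form; bar itself would do the same internally.
func describe(baz interface{}) string {
	switch baz.(type) {
	case *foo:
		return "*foo"
	default:
		return fmt.Sprintf("%T", baz)
	}
}

func main() {
	f := &foo{}
	// passing *foo as interface{} needs no explicit conversion
	if describe(f) != "*foo" {
		panic("expected *foo")
	}
	// the comma-ok assertion form never panics on mismatch
	var baz interface{} = f
	if _, ok := baz.(*foo); !ok {
		panic("assertion should succeed")
	}
	if _, ok := baz.(int); ok {
		panic("assertion should not succeed for int")
	}
	fmt.Println("ok")
}
```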
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24422810', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1168364/']}
jdg_374470
stackexchange
llm_judgeable_groundtruth_similarity
4573
Below is a question asked on the forum computergraphics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am currently following a tutorial at Scratchapixel.com on refraction. Here is the refract function:

Vec3f refract(const Vec3f &I, const Vec3f &N, const float &ior)
{
    float cosi = clamp(-1, 1, dotProduct(I, N));
    float etai = 1, etat = ior;
    Vec3f n = N;
    if (cosi < 0) { cosi = -cosi; } else { std::swap(etai, etat); n = -N; }
    float eta = etai / etat;
    float k = 1 - eta * eta * (1 - cosi * cosi);
    return k < 0 ? 0 : eta * I + (eta * cosi - sqrtf(k)) * n;
}

1) I understand that if there is no total internal reflection, first we must find the refraction direction (ior is the object's index of refraction):

Vec3f refractionDirection = refract(dir, hitNormal, isect.hitObject->ior).normalize();

2) We then compute the refraction ray origin (which is the point where the ray will leave the object?):

Vec3f refractionRayOrig = outside ? hitPoint - bias : hitPoint + bias;

3) Finally, to get the refracted ray color we cast a ray using the refractionDirection & refractionRayOrig:

refractionColor = castRay(refractionRayOrig, refractionDirection, objects, lights, options, depth + 1);

So far I understand how a glass block (or any other transparent object which undergoes refraction) bends the light towards the normal when entering the object and then away from the normal when exiting, but I am confused as to how refraction works or is computed when there is another object in the medium, as shown in the image below. So when the ray hits the object from the inside, do we use a BRDF to calculate the surface properties, and does that ray then leave the glass block with the refraction origin and direction? Now provide the response and nothing else.
this is an interesting question (and I am actually an author on Scratchapixel so I can maybe help on that one). Things go as follows:

- you cast the primary ray into the scene
- the ray hits the glass, which is a refractive-reflective/transparent material
- you compute and cast two rays from the point of intersection: a reflective ray and a refractive ray
- if the refractive ray hits an object contained in the transparent object (the glass) then you need to shade that object too. So if it's a plastic material, compute the result of that shader (including the lighting, so loop over all lights and add their contribution, etc.)
- the color at the intersection point can finally be computed as a mix between the reflection and the refraction color (where the mix is driven by the Fresnel effect).

Now that works, but this is essentially wrong. Why? Because in fact light rays too are bent by, say, glass and water. So in fact when you cast a shadow ray from a point on the pen in the direction of the light, this ray direction is in fact wrong, since you should account for refraction and reflection (the ray should be bent when it goes from one medium to another, in this case from water to air), but this is impossible to do with this algorithm/approach. Though in general this gives acceptable results, so no one cares much; but if your goal is really a physically plausible image then this approach is wrong. A solution to this may be to use photon mapping, where photons of light are cast from the light into the scene in a pre-pass. Thus when a light photon hits the glass/water surface it will be bent due to the law of refraction and eventually it will be "deposited" onto the pen. This is a bit hard to explain here but hopefully there will be a lesson on Scratchapixel about this in the future. Hope it helps. EDIT > Thanks for clarifying, I had one other issue though.
In the source code provided on the website for that lesson there doesn't seem to be code present to spawn a ray when the light ray leaves the glass cylinder. Is this unnecessary, or is it actually already present? If it is, could you point out where in the code? Hum, it seems like my explanation wasn't clear if you ask this question. When the refractive ray hits the surface of the pen you need to compute the color of the pen at that intersection point (where the refractive ray intersected the pen's geometry). Now the pen is made out of wood, for example, which for the sake of simplicity is a diffuse material only (no specular). So in order to compute the color of the pen at the intersection point, you simply execute the traditional illumination loop:

Vec3f computePenColor(const Vec3f& objectColor, const Vec3f& intersectionPoint, const Vec3f& normalAtIntersection)
{
    Vec3f shadedPointColor = 0;
    // loop over all lights in the scene to add their contribution
    for (size_t i = 0; i < scene.lights.size(); ++i) {
        Light currentLight = scene.lights[i];
        // let's say this light source is a point light source
        Vec3f lightDirection = currentLight.position - intersectionPoint;
        // this casts a shadow ray from the shaded point to the light position;
        // if this ray intersects an object along the way then this point is in
        // the shadow of this light and the function returns true
        bool shadow = castShadowRay(lightDirection.length(), lightDirection.normalize());
        // this point is not in the shadow of the light so add its contribution
        if (!shadow) {
            // the object is diffuse so apply the cosine law only
            shadedPointColor += max(0, normalAtIntersection.dot(lightDirection)) * currentLight.intensity * currentLight.color;
        }
    }
    return shadedPointColor * objectColor;
}

So maybe, as you can see more clearly from this example, when I say "cast a ray from the point of intersection to the light" I mean cast a shadow ray. This shadow ray helps to determine if the shaded point is in the shadow of that light.
If not, then you can add the contribution of that light to the shaded point. Now you can also see that the direction of this shadow ray is computed as going from the position of the point on the pen (the intersection point, or shaded point) to the light position (we assume a point light source here for simplicity). So this is a straight line between the point on the pen and the light. But this is where things are wrong. This shouldn't be a straight line, since when the "shadow ray" (which is also a light ray, but we go from the point on the pen to the light rather than from the light to the pen; in essence they are the same thing) exits the water volume it should be bent due to refraction (and part of it should be reflected too). You can compute this new direction if you wish, but then if the shadow ray is bent when it leaves the volume of water, it won't travel in the direction of the light anymore (as shown in the pic below), so you can't use it as a shadow or light ray any longer. So this algorithm, which consists of looping over the lights and casting shadow rays, can't be used if you really want to account for light-ray bending due to reflection and refraction. Though as I said, few people care about that because in reality ignoring this fact doesn't make much of a visual difference in 99% of cases. And as I pointed out, the solution to get it right, if really needed, is something like photon mapping, where photons of light are cast from the light into the scene in a pre-pass, which allows you to follow the paths of light rays in the scene as they are being bent by refractive/reflective surfaces. But photon mapping is a rather complex algorithm to implement.
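The direction change the answer keeps referring to can be sketched numerically. The helper below is my own 2D restatement of the same Snell's-law computation used by the lesson's refract function (names and geometry setup are invented for the illustration):

```python
import math

def refract2d(incident, normal, n1, n2):
    """Refract a unit 2D direction via Snell's law; returns None on total
    internal reflection. `normal` must oppose `incident` (so cos_i > 0)."""
    cos_i = -(incident[0] * normal[0] + incident[1] * normal[1])
    eta = n1 / n2
    k = 1 - eta * eta * (1 - cos_i * cos_i)
    if k < 0:
        return None  # total internal reflection: no transmitted ray
    c = eta * cos_i - math.sqrt(k)
    return (eta * incident[0] + c * normal[0], eta * incident[1] + c * normal[1])

# A shadow ray leaving water (n = 1.33) for air (n = 1.0) at 30 degrees from the
# surface normal: it bends AWAY from the normal, so the straight line toward the
# light is the wrong direction, exactly as the answer explains.
theta = math.radians(30)
d = (math.sin(theta), math.cos(theta))      # travelling up toward the surface y = 0
t = refract2d(d, (0.0, -1.0), 1.33, 1.0)
assert abs(t[0] - 1.33 * math.sin(theta)) < 1e-9  # sin(theta_t) = (n1/n2) sin(theta_i)
assert math.degrees(math.asin(t[0])) > 30         # bent away from the normal

# Beyond the critical angle (about 48.8 degrees for water -> air) nothing exits.
theta = math.radians(60)
d = (math.sin(theta), math.cos(theta))
assert refract2d(d, (0.0, -1.0), 1.33, 1.0) is None
```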
{}
{'log_upvote_score': 4, 'links': ['https://computergraphics.stackexchange.com/questions/4573', 'https://computergraphics.stackexchange.com', 'https://computergraphics.stackexchange.com/users/5256/']}
jdg_374471
stackexchange
llm_judgeable_groundtruth_similarity
408484
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I want an example of a linear operator $T:X\to Y$ such that the graph of $T$ is not closed. My thoughts: $T$ must be unbounded. Again, by the closed graph theorem, any unbounded linear map from a Banach space $X$ to another Banach space $Y$ should have a non-closed graph. But it is not possible to define any unbounded linear map from a Banach space $X$ to another Banach space $Y$ explicitly without the axiom of choice. Thus to get such an example at least one of $X$ and $Y$ must be incomplete. Could anyone please help me? Now provide the response and nothing else.
The "short answer" is as follows. The point $P$ is in general not constructible; for instance, for the triangle with sides $6$, $9$, $13$, constructing $P$ implies we can solve a polynomial equation $\Pi(K)=0$, where $\Pi=a_7K^7+a_6K^6+a_5K^5+a_4K^4+a_3K^3+a_2K^2 + a_1K+a_0$ is an irreducible polynomial of degree seven with rational coefficients, and a root $K$ of it is the square of the common distance of $P$ to the vertices. I will use $K$ both for the indeterminate when writing $\Pi$, and also for the particular root value(s) $K\in\overline{\Bbb Q}$ corresponding to real positive numbers. A simpler (particular) case is when $\Delta ABC$ is isosceles. Then $\Pi$ factorizes as $\Pi=\Pi_2^2\cdot\Pi_3$ with $\deg \Pi_2=2$, $\deg \Pi_3=3$, $\Pi_2$ without roots in $\Bbb R$. So the possible $K$ values come from the real positive roots of $\Pi_3$. In explicit cases we can more easily check that $\Pi_3$ has no rational roots. So the constructibility fails. The point $P$ also fails to be unique in the plane - as stated by the OP - in infinitely many cases. To see which cases lead to a failure, explicit computations of the coefficients of $\Pi$ in terms of the parameters $a,b,c$ are needed. It turns out that in "most cases" there is only one sign change in the list of coefficients of $\Pi$, the first coefficients $a_7$, $\color{blue}{a_6}$, $a_5$, $a_4$, $a_3$ being positive, and $a_2$, $a_1$, $a_0$ being negative, so Descartes' Rule of Signs ensures a unique $K$. However, in "few cases" the coefficient $\color{blue}{a_6}=\color{blue}{-p^4} + 12 p^2 S^2 E + 12 S^4 E^2 + 192 S^6$ is negative, Descartes' Rule of Signs predicts at most three positive roots, and indeed, we have three such roots $K>0$, leading to three solutions. Here $p=abc$, $E=a^2+b^2+c^2$, and $S$ is the area of $\Delta ABC$. Counterexamples should thus have a "small area" $S$, but a "big product" $p$. It is natural then to search for a counterexample with $c$ being "almost" $a+b$, and this produces them
quickly, as checked by computer. We do, however, have uniqueness of $P$ in the interior of the given triangle. This, together with the existence, may be seen by geometric arguments as in the other answers. For the existence of a solution I have a deformation argument. For existence and uniqueness in the interior this answer gives some hints / details for the deformation. (Making things concrete goes beyond the question of the OP.) Given this verdict, a / my proof cannot work by geometric means, so analytic tools are needed; my choice was to use barycentric coordinates. I will try to follow a minimal path in the presentation; however, providing the needed details makes "minimal" not really compact. The question drew a big echo, and it is indeed an interesting question. (I'm afraid this answer shows things are not so simple, but please give it a chance.) To control the computations, I need computer algebra support (CAS); my choice of weapons is sage, which also has the great merit of making the code pretty readable for a mathematician, since the methods used are named so as to be easily digested by a mathematician. As a reference for barycentric coordinates I will use Barycentric Coordinates for the Impatient, Max Schindler, Evan Chen, July 13, 2012 The detailed answer starts now. Let $a,b,c$ be the lengths of the sides of a general non-degenerate triangle $\Delta ABC$. Let $x,y,z$ with $1=x+y+z$ be the coordinates of the unknown point $P$. So $P=(x,y,z)$ in notation.
Then $A,B,C;P$ and the points $D=AP\cap BC$ , $E=BP\cap CA$ , $F=CP\cap AB$ .have explicit barycentric descriptions: $$\begin{aligned}A &= (1,0,0)\ ,\\B &= (0,1,0)\ ,\\C &= (0,0,1)\ ,\\[2mm]P &= (x,y,z)\ ,\\D &= [0:y:z]=\left(0,\frac {y}{y+z},\frac {z}{y+z}\right)\ ,\\E &= [x:0:z]=\left(\frac {x}{x+z},0,\frac {z}{x+z}\right)\ ,\\F &= [x:y:0]=\left(\frac {x}{x+y},\frac {y}{x+y},0\right)\ ,\\[3mm]&\qquad\text{and corresponding displacement vectors are}\\[3mm]\overrightarrow{DP} &= D - P = \left(-x,\frac {xy}{y+z},\frac {xz}{y+z}\right) = \frac x{y+z}(-(y+z),y,z) \ ,\\\overrightarrow{EP} &= E - P = \left(\frac {xy}{x+z},-y,\frac {yz}{x+z}\right) = \frac y{z+x}(x,-(z+x),z) \ ,\\\overrightarrow{FP} &= F - P = \left(\frac {xz}{x+y},\frac {yz}{x+y},-z\right) = \frac z{x+y}(x,y,-(x+y)) \ ,\\[3mm]&\qquad\text{and corresponding squared lengths are}\\[3mm]|DP|^2 &= \frac{x^2}{(y+z)^2}\Big(\ -a^2yz + (b^2z+c^2y)(y+z)\ \Big) = \frac{x^2}{(1-x)^2}\Big(\ Q + b^2z + c^2y\ \Big)\ ,\\ |EP|^2 &= \frac{y^2}{(z+x)^2}\Big(\ -b^2zx + (c^2x+a^2z)(z+x)\ \Big) = \frac{y^2}{(1-y)^2}\Big(\ Q + c^2x + a^2z\ \Big)\ ,\\ |FP|^2 &= \frac{z^2}{(x+y)^2}\Big(\ -c^2xy + (a^2y+b^2x)(x+y)\ \Big) = \frac{z^2}{(1-z)^2}\Big(\ Q + a^2y + b^2x\ \Big)\ ,\\&\qquad\text{where}\\Q &= -a^2 yz -b^2 zx -c^2xy\ . \end{aligned}$$ Let $K>0$ be the common value of the squared distances from $P$ to each of the points $D,E,F$ , i.e. $K=|DP|^2=|EP|^2=|FP|^2$ .Then we have to solve the following system of equations in the unknowns $x,y,z;K,Q\in \Bbb R$ , $K>0$ : $$\tag{$\dagger$}$$ $$\left\{\begin{aligned}1 &= x+y+z\ ,\\Q &= -a^2yz -b^2zx -c^2xy\ ,\\[2mm]K(1-x)^2 &= x^2(Q + b^2 z + c^2 y)\ ,\\K(1-y)^2 &= y^2(Q + c^2 x + a^2 z)\ ,\\K(1-z)^2 &= z^2(Q + a^2 y + b^2 x)\ ,\end{aligned}\right.$$ (Solutions were introduced by eliminating denomiators, for instance $A=(1,0,0)$ is now a solution.) 
Let $J$ be the ideal generated by the above equations (rewritten in terms of vanishing expressions). From now on, we will work algebraically with the system $(\dagger)$ above. (Since the parameters $a,b,c$ appear only through their squares, it may be convenient to use the shorthand notations $A=a^2$, $B=b^2$, $C=c^2$ for them.) (There should be no confusion with the vertices $A,B,C$ of the given triangle.) Now I am trying to address the points from the question in terms of this system. Constructibility of $P$, special case of the $6$, $9$, $13$ triangle: This triangle is used for the purpose of (numerically) searching for known (general) centers in a triangle, see also ETC, Search_6_9_13. If $P$ is somehow constructible (starting from some fixed values $a,b,c$), then $K$ is also constructible (by ruler and compass constructions), so starting with $6,9,13\in\Bbb Q$ one should obtain that $K$ is an algebraic number in an extension of degree among $1,2,4,8,16,\dots$ - however, it turns out that $K$ is the root of an irreducible polynomial $\Pi$ of degree seven. The sage code computing $\Pi$ is postponed. In this special case: $$\tag{$1$}\Pi =K^7+ \frac{2420849677}{40884480}\; K^6+ \frac{2635885}{2106 }\; K^5+ \frac{920646335}{82134 }\; K^4+ \frac{349438600}{9477 }\; K^3- \frac{10186414000}{123201 }\; K^2- \frac{8451520000}{9477 }\; K- \frac{215129600000}{123201}\ .$$ So the constructibility fails in this case, so it fails. Let us observe that there is exactly one sign change in the coefficients of $\Pi$. This happens "often", as seen in the next section. Below there will be another example (the $61,61,120$ isosceles triangle) where the above polynomial splits as a product of a squared quadratic and a cubic polynomial; the cubic part has only real roots, we have an explicit formula for them, and it is easy to check that there is no rational root. The minimal polynomial of $K$ in the general case.
In the code part, a polynomial $\Pi$ of degree seven in $R[K]$ over the ring $R=\Bbb Q[a,b,c]$ is computed, so that the special value $K=|DP|^2=|EP|^2=|FP|^2$ (for each solution $P$ of our problem) satisfies $\Pi(K)=0$. Explicitly, $$\tag{$2$}$$ $$\begin{aligned}\Pi &= a_7 K^7 + a_6 K^6 +a_5 K^5 +a_4 K^4 +a_3 K^3 +a_2 K^2 +a_1 K +a_0\ ,\\[3mm]&\qquad\text{where}\\[3mm]a_7 &= 64 S^2 p^2\ ,\\a_6 &= \color{blue}{-p^4} + 12 p^2 S^2 E + 12 S^4 E^2 + 192 S^6\ ,\\a_5 &= 2 S^4 \;(E^3 + 13p^2 + 34 S^2 E)\ ,\\a_4 &= S^4 \;(p^2E + 10 S^2 E^2 + 103 S^4)\ ,\\a_3 &= S^6 \;(2p^2 + 17 S^2 E)\ ,\\a_2 &= -\frac 14 S^8 \;(E^2 - 40 S^2)\ ,\\a_1 &= -E \;S^{10} \ ,\\a_0 &= -S^{12}\ .\end{aligned}$$ The coefficients are homogeneous of degrees $\deg a_7=10$, $\deg a_6=12$, ..., $\deg a_0=24$, when considering the weights $\deg S=\deg E=\deg K=2$, $\deg p=3$. If $K$ is considered with $\deg K=2$, then $\Pi$ is homogeneous of degree $24$. These explicit formulas allow us to control the signs of the coefficients of $\Pi=\Pi(K)$. We have (at least) one sign change, obtained when passing from $a_3$ to $a_2$. To see this note that $$E^2 - 40 S^2 > E^2 - 48 S^2 = 4(a^4+b^4+c^4-a^2b^2-b^2c^2-c^2a^2)>0\ .$$ Another change of signs may occur only around $a_6$. In case $a_6>0$ there is exactly one change of signs at all, so by Descartes' rule of signs we have exactly one real positive root. So uniqueness of $K$ is ensured in case $a_6>0$. Let us analyze this in detail. Further relations: Let $x,y,z;K$ be the solutions of the system $(\dagger)$.
Then there is a relation joining $K$ with each of the variables $x,y,z$: $$\tag{$3$}$$ $$\begin{aligned}0 &= 4S^2 x^3 - 2S^2x^2 +K(b^2+c^2-a^2)x^2 + 8 xK^2 - 2K^2 \ ,\\0 &= 4S^2 y^3 - 2S^2y^2 +K(c^2+a^2-b^2)y^2 + 8 yK^2 - 2K^2 \ ,\\0 &= 4S^2 z^3 - 2S^2z^2 +K(a^2+b^2-c^2)z^2 + 8 zK^2 - 2K^2 \ .\end{aligned}$$ We also have: $$\tag{$4$}$$ $$\begin{aligned}0 &= a^2\;y^2z^2 + 4K\;yz(y+z) - K(y+z)^2\ ,\\0 &= b^2\;z^2x^2 + 4K\;zx(z+x) - K(z+x)^2\ ,\\0 &= c^2\;x^2y^2 + 4K\;xy(x+y) - K(x+y)^2\ ,\end{aligned}$$ as found in the code section #4. (These relations can maybe be used in some way to show algebraically the existence and uniqueness in the interior of $\Delta ABC$ of the point matching a fixed $K$, a solution of the above polynomial $\Pi$. I tried to make it work, but it made me tired.) Uniqueness of $P$ fails in the plane: We have uniqueness in case there is a unique positive root $K$ of $\Pi$. For instance, in the "often" case of a coefficient $a_6>0$ of $\Pi$, we know from above that there exists exactly one $K>0$ so that for a $P$ with $|PD|^2=|PE|^2=|PF|^2$ (in case of its existence) the common value of the above squared distances is $K$. However, cases can be found where such a $K$ is not unique, and we can check that multiple solutions arise. In the code section #5 the following solutions were found for the triangle with sides $a=b=61$, $c=120$. The height $h_c$ is $11$. The solutions are roughly: $$\begin{aligned}x = y & = -1.749464267458191 \dots & z &= 4.498928534916382 \dots & K &= 2449.08331\dots\\x = y & = -0.5905364374782210\dots & z &= 2.181072874956442 \dots & K &= 575.606545\dots\\x = y & = 0.1209924404736021\dots & z &= 0.7580151190527959\dots & K &= 69.5250174\dots\\\end{aligned}$$ Only the last point lives inside $\Delta ABC$. $K$ is a root of the polynomial $(121K^3 - 374400K^2 + 196020000 K - 11859210000)$.
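These numbers can be sanity-checked independently of the (postponed) sage code. The script below is mine, not from the answer: it verifies two normalized coefficients of $(1)$ against the general formulas $(2)$ for the $6,9,13$ triangle, and checks the three listed $K$ values and the relation $121\,z^2=K$ for the $61,61,120$ example:

```python
from fractions import Fraction
from math import isclose

# --- 6, 9, 13 triangle: compare (1) with the general coefficient formulas (2) ---
a, b, c = 6, 9, 13
A, B, C = a*a, b*b, c*c
E = A + B + C                                              # E = a^2 + b^2 + c^2
p2 = (a*b*c)**2                                            # p^2 with p = abc
S2 = Fraction(2*(A*B + B*C + C*A) - A*A - B*B - C*C, 16)   # Heron: 16 S^2 = ...
assert S2 == 560

a7 = 64 * S2 * p2
a6 = -Fraction(p2)**2 + 12*p2*S2*E + 12*S2**2*E**2 + 192*S2**3
a0 = -S2**6
assert a6 / a7 == Fraction(2420849677, 40884480)       # K^6 coefficient of (1)
assert a0 / a7 == Fraction(-215129600000, 123201)      # constant term of (1)

# --- 61, 61, 120 triangle: the cubic factor and 121 z^2 = K ---
def cubic(K):
    return 121*K**3 - 374400*K**2 + 196020000*K - 11859210000

solutions = [(0.7580151190527959, 69.5250174),
             (2.181072874956442, 575.606545),
             (4.498928534916382, 2449.08331)]
for z, K in solutions:
    assert abs(cubic(K)) < 1e-4 * abs(cubic(0))   # K is (numerically) a root
    assert isclose(121*z*z, K, rel_tol=1e-6)      # K = |CF|^2 z^2 with |CF| = 11
```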
(For the point $P$ inside $\Delta ABC$ the corresponding $F$ is the midpoint of $AB$ , the height $CF$ has length $11$ and $PF$ is approximately $\sqrt{69.5250174\dots}$ . The other two points have the same $F$ . Using this point as origin, $$\overrightarrow{FP}=x(\overrightarrow{FA}+\overrightarrow{FB})+z\overrightarrow{FC}=z\overrightarrow{FC}$$ in all three listed cases. And indeed, $121\;z^2=|CF|^2 \; z^2=|PF|^2=K$ matches the values for $K$ for each $z$ in the list. For these $x,y,z$ we can also write down exact polynomials having them as roots. The three $x$ -values are for instance roots of the polynomial $\displaystyle x^3 + \frac{537}{242} x^2 + \frac 34 x - \frac 18$ . None of them is rational. Of course, "small deformations" also lead to situations with three solutions. For the existence, geometric ideas may work better. Existence and uniqueness of $P$ in the interior of $\Delta ABC$ : So we restrict the values for the barycentric coordinates $x,y,z$ to positive values, $x, y, z>0$ , $x+y+z=1$ . Consider the following picture: (We do not know right now that an interior $P^*$ exists so that for the cevians $AD^*$ , $BE^*$ , $CF^*$ through $P^*$ we have $|P^*D^*|=|P^*E^*|=|P^*F^*|$ . Also its uniqueness is still an issue.) (Also, the other answers suggest a possibility to "get closer" to the / a solution $P^*$ - if it exists.) I propose to use three geometrical schemes to "get" (a) $P^*$ in the limit. First scheme: Start with a point $P=P_0$ . Compare the distances from $P$ to $D,E,F$ and take the maximal one, say it is $PE$ . Let a point $Q=P_1$ slide from $P$ to $E$ . Then we stop when $f_P(Q):=2|QE_1|-|QD_1|-|QF_1|$ first vanishes, $f_P(Q)=0$ . Then iterate. For the convergence we need some argument. (Is this procedure / function $P\to Q$ a contraction?)
Second scheme: Start with an interior point $P=(x,y,z)$ , and consider the squared distances $PD^2$ , $PE^2$ , $PF^2$ . Then consider the point $Q=[X:Y:Z]$ where $X,Y,Z$ are weighted versions of $(x,y,z)$ as follows: $$\begin{aligned}X &=x\cdot \frac{PE^2+PF^2}{2(PD^2+PE^2+PF^2)}\ ,\\Y &=y\cdot \frac{PF^2+PD^2}{2(PD^2+PE^2+PF^2)}\ ,\\Z &=z\cdot \frac{PD^2+PE^2}{2(PD^2+PE^2+PF^2)}\ .\end{aligned}$$ (Intuition: If for instance $PE^2$ is maximal among $PD^2$ , $PE^2$ , $PF^2$ , then we would like to move $P$ along $PE$ towards $E$ . This corresponds to making $y$ smaller. We can try to multiply $y$ by some "controlled" subunitary factor - and $\frac{PF^2+PD^2}{2(PD^2+PE^2+PF^2)}$ is such a factor. At the end, the point $[X:Y:Z]$ has to be normalized again, so the result is $Q=\frac 1{X+Y+Z}(X,Y,Z)$ . Of course, this is only an Ansatz; one has to show it works. I only have numerical support so far - see the code section #6.) Third scheme: This is less explicit, but we do not have issues such as the ones described above. Let $K\ge0$ be a parameter; we let it vary from $0$ to $\infty$ . For each such $K$ we draw three curves. The $D$ -curve is the locus of all points $P$ obtained as follows. Let $D$ slide along the line $BC$ . Draw the cevian $AD$ . Let $P$ be on the ray $[DA$ such that $PD^2=K$ . Then the $D$ -curve is the locus of the points $P$ as $D$ runs on $BC$ . The $D$ -curve has the line $BC$ as double asymptote, it is symmetric w.r.t. the height from $A$ , and for $K\to 0_+$ it tends to $BC$ . Use as orientation for it the direction that in the limit corresponds to the direction of $BC$ from $B$ to $C$ . Similarly consider the $E$ -curve and the $F$ -curve. They are oriented corresponding to the directions that in the limit give the direction of $CA$ from $C$ to $A$ , and of $AB$ from $A$ to $B$ . For $K\to 0_+$ , in the limit $K=0$ , the three curves become $BC$ , $CA$ , $AB$ , and the oriented area between these curves is $S>0$ . Now let $K$ grow. At some point, e.g.
when $K$ is greater than the square of the biggest height, the oriented area becomes negative. So at some point $K^*$ this area is zero. This corresponds to the case when the three curves are passing through one point. (I do not have a simple argument to show this is an interior point.) This elucidates the existence. $\square$ A rough picture for the three curves at their intersection in the zoom of our objective is as follows: I hope it is clear how the three "lenses" of the zoom are moving. To fix ideas, assume $\Delta ABC$ has all angles $<90^\circ$ , so the orthocenter $H$ is in its interior. Let $AA'$ , $BB'$ , $CC'$ be the heights, intersecting in $H$ . For $K=0$ the area between the three curves is $\Delta ABC$ . Arrange that $HA'\le HB'\le HC'$ by permuting $A,B,C$ . When $K=A'H^2$ , we still have a positive area between the three curves. When $K=C'H^2$ , we have a negative area between the three curves. So the value $K^*$ is squeezed between these values. In case of an obtuse angle - say in $A$ - use an alternative zooming objective with mobile lenses for the $E$ - and $F$ -curves, and keep the third lens constant to be the line $BC$ (instead of the $D$ -curve) to see that the intersection of the three curves still has to be an interior point. Computer algebra support. Code #1: The value of $K=|PD|^2 =|PE|^2 =|PF|^2$ in the case of the $6,9,13$ triangle.
a, b, c = 6, 9, 13
R.<x,y,z,K> = PolynomialRing(QQ)

def eq(a, b, c, x, y, z, K):
    Q = - a^2*y*z - b^2*z*x - c^2*x*y
    return x^2 * ( Q + b^2*z + c^2*y ) - K*(1 - x)^2

J = R.ideal([
    x + y + z - 1,
    eq(a, b, c, x, y, z, K),
    eq(b, c, a, y, z, x, K),
    eq(c, a, b, z, x, y, K),
])
JK = J.elimination_ideal([x, y, z])
print("Generator(s) of the elimination ideal JK after eliminating x, y, z:")
for g in JK.groebner_basis():
    print(f'{g}\n')

And we obtain:

Generator(s) of the elimination ideal JK after eliminating x, y, z:
K^9 + 2420849677/40884480*K^8 + 2635885/2106*K^7 + 920646335/82134*K^6 + 349438600/9477*K^5 - 10186414000/123201*K^4 - 8451520000/9477*K^3 - 215129600000/123201*K^2

It turns out that the above polynomial is $K^2$ times some irreducible polynomial of degree seven. To have the entry that should be compared with ETC, we ask for the points in the ring of real algebraic numbers:

sage: J.variety(ring=AA)
[{K: 0, z: 0, y: 0, x: 1},
 {K: 0, z: 0, y: 1, x: 0},
 {K: 0, z: 1, y: 0, x: 0},
 {K: 3.973344192056688?, z: 0.5361103522937736?, y: 0.2667643469973961?, x: 0.1971253007088303?}]

(New roots were introduced after multiplication with denominators.) Only the last real point is significant. In ETC we have the value for $2\cdot\operatorname{Area}(\Delta ABC)\cdot \frac xa$ . The area of the $6,9,13$ triangle is $\sqrt{14(14-6)(14-9)(14-13)}=\sqrt{560}$ . So we try to match the following value with the 6-9-13-search:

sage: x0 = J.variety(ring=AA)[-1][x]
sage: etc_match = 2 * sqrt(560) * x0/6
sage: etc_match.n(200)
1.5549453416812577768807502448833414277833445395269911652987

This was already done in the comment of Peter Taylor .
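Since the coefficient formulas in $(2)$ and the relations $(3)$ , $(4)$ are stated abstractly, here is a quick plain-Python sanity check (no Sage needed) for the $6,9,13$ triangle, using the 16-digit solution printed by the variety computation above. The tolerances are my own choices, not part of the original derivation.

```python
# Plain-Python check of (2), (3), (4) for the 6, 9, 13 triangle, with the
# numerical solution pasted from the J.variety(ring=AA) output above.
a, b, c = 6.0, 9.0, 13.0
s = (a + b + c) / 2
S2 = s * (s - a) * (s - b) * (s - c)          # squared area, here 560
E = a*a + b*b + c*c
p = a * b * c

x = 0.1971253007088303
y = 0.2667643469973961
z = 0.5361103522937736
K = 3.973344192056688

# Coefficients a7..a0 of Pi from (2); for this triangle a6 > 0,
# so the sign sequence should show exactly one change.
coeffs = [
    64 * S2 * p**2,
    -p**4 + 12 * p**2 * S2 * E + 12 * S2**2 * E**2 + 192 * S2**3,
    2 * S2**2 * (E**3 + 13 * p**2 + 34 * S2 * E),
    S2**2 * (p**2 * E + 10 * S2 * E**2 + 103 * S2**2),
    S2**3 * (2 * p**2 + 17 * S2 * E),
    -S2**4 * (E**2 - 40 * S2) / 4,
    -E * S2**5,
    -S2**6,
]
sign_changes = sum(1 for u, v in zip(coeffs, coeffs[1:]) if u * v < 0)

def Pi(k):
    return sum(cf * k**(7 - i) for i, cf in enumerate(coeffs))

# Relation (3) for x (the y and z versions are the cyclic analogues):
rel3 = 4*S2*x**3 - 2*S2*x**2 + K*(b*b + c*c - a*a)*x**2 + 8*x*K**2 - 2*K**2

# First relation of (4):
rel4 = a*a * y**2 * z**2 + 4*K*y*z*(y + z) - K*(y + z)**2

print(sign_changes, Pi(3.9) < 0 < Pi(4.0), rel3, rel4)
```

With the single sign change, $\Pi$ is negative on $(0,K^*)$ and positive afterwards, so the unique positive root is bracketed by $3.9$ and $4.0$ , matching the value $K\approx 3.9733$ above.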
To have again a convincing statement against constructibility, the value x0 used above is an algebraic number, root of an irreducible polynomial in $\Bbb Q[x]$ of degree seven:

sage: x0.minpoly()
x^7 - 28472791/10221120*x^6 + 46797133/15331680*x^5 - 52400213/30663360*x^4 + 28733/54756*x^3 - 5173/54756*x^2 + 280/13689*x - 35/13689

Code #2: The value of $K=|PD|^2 =|PE|^2 =|PF|^2$ in the case of an isosceles, general triangle with sides $a,a,b$ . To avoid the factor $K^2$ , I will assume $K$ invertible below.

R.<a,b, x,y,z, K,K_inv> = PolynomialRing(QQ)

def eq(a, b, c, x, y, z, K):
    Q = - a^2*y*z - b^2*z*x - c^2*x*y
    return x^2 * ( Q + b^2*z + c^2*y ) - K*(1 - x)^2

J = R.ideal([
    x + y + z - 1,
    K*K_inv - 1,
    eq(a, a, b, x, y, z, K),
    eq(a, b, a, y, z, x, K),
    eq(b, a, a, z, x, y, K),
])
JK = J.elimination_ideal([x, y, z, K_inv])
print("Generator(s) of the elimination ideal JK after eliminating x, y, z:")
for g in JK.groebner_basis():
    print(f'{g.factor()}\n')

And we obtain:

Generator(s) of the elimination ideal JK after eliminating x, y, z:
(1/256) * (16*a^4*b^2 - 8*a^2*b^4 + b^6 + 64*a^4*K + 32*a^2*b^2*K - 12*b^4*K + 256*a^2*K^2) * (16*a^4*b^4 - 8*a^2*b^6 + b^8 - 32*a^2*b^4*K + 8*b^6*K - 1024*a^2*b^2*K^2 + 272*b^4*K^2 - 4096*a^2*K^3 + 1024*b^2*K^3)

(Result was manually rearranged.) Even in this case, the constructibility fails.
For instance for the sides $2,2,3$ we have:

a, b, c = 2, 2, 3
R.<x,y,z, K,K_inv> = PolynomialRing(QQ)
J = R.ideal([
    x + y + z - 1,
    K*K_inv - 1,
    eq(a, b, c, x, y, z, K),
    eq(b, c, a, y, z, x, K),
    eq(c, a, b, z, x, y, K),
])
JK = J.elimination_ideal([x, y, z, K_inv])
print("Generator(s) of the elimination ideal JK after eliminating x, y, z:")
for g in JK.groebner_basis():
    print(f'{g.factor()}\n')

This gives:

Generator(s) of the elimination ideal JK after eliminating x, y, z:
(1/7340032) * (1024*K^2 + 1204*K + 441) * (7168*K^3 + 14832*K^2 + 4536*K - 3969)

The quadratic factor has no real roots. The cubic factor has exactly one positive root, $$\frac3{448}\left( -103 + \sqrt[3]{5(111205 + 6272\sqrt{105})} + \sqrt[3]{5(111205 - 6272\sqrt{105})} \right)\\ \approx 0.3643855138832992\dots$$ Constructibility fails. Code #3: We consider $K=|PD|^2 =|PE|^2 =|PF|^2$ (if the needed $P$ exists) as a function of the sides $a,b,c$ of a triangle. Then there is a polynomial $\Pi\in\Bbb Q[a,b,c]\;[\kappa]$ of degree seven w.r.t.
$\kappa$ , which vanishes in $K$ , $\Pi(a,b,c;K)=0$ . Its coefficients, expressed in terms of the quantities $E = a^2+b^2+c^2$ , $S^2=s(s-a)(s-b)(s-c)$ (squared area), and $p=abc$ (product) are obtained as follows:

R.<a,b,c, x,y,z, K,K_inverse, p,SS,E> = PolynomialRing(QQ)
J = R.ideal([
    x + y + z - 1,
    K * K_inverse - 1,
    SS - 1/16 * (a+b+c) * (a+b-c) * (b+c-a) * (c+a-b),
    E - (a^2 + b^2 + c^2),
    p - a*b*c,
    eq(a, b, c, x, y, z, K),
    eq(b, c, a, y, z, x, K),
    eq(c, a, b, z, x, y, K),
])
for g in J.elimination_ideal([x, y, z, K_inverse, a, b, c]).groebner_basis():
    for k in range(g.degree(K), 0, -1):
        print(f'Coefficient of K^{k} is {g.coefficient(K**k).factor()}')
    print(f'Coefficient of K^0 is {g.subs({K : 0}).factor()}')

And the coefficients are explicitly:

Coefficient of K^7 is (-64) * SS * p^2
Coefficient of K^6 is (-1) * (-p^4 + 12*p^2*SS*E + 12*SS^2*E^2 + 192*SS^3)
Coefficient of K^5 is (-2) * SS^2 * (E^3 + 13*p^2 + 34*SS*E)
Coefficient of K^4 is (-1) * SS^2 * (p^2*E + 10*SS*E^2 + 103*SS^2)
Coefficient of K^3 is (-1) * SS^3 * (2*p^2 + 17*SS*E)
Coefficient of K^2 is (1/4) * SS^4 * (E^2 - 40*SS)
Coefficient of K^1 is E * SS^5
Coefficient of K^0 is SS^6

so after changing the sign of each coefficient we obtain the values from $(2)$ . Code #4: If we know $K$ , can we "easily" obtain $x,y,z$ ? In other words, are there any "simple relations" among the variables that would lead to a quick determination of $x$ , when $a,b,c,K$ are known? Here is the elimination of variables, all but $a,b,c;x,K$ .
R.<x,y,z, a,b,c, K,K_inv> = PolynomialRing(QQ)
J = R.ideal([
    x + y + z - 1,
    K*K_inv - 1,
    eq(a, b, c, x, y, z, K),
    eq(b, c, a, y, z, x, K),
    eq(c, a, b, z, x, y, K),
])
JxK = J.elimination_ideal([y, z, K_inv])
print("Generator(s) of the elimination ideal JxK after eliminating y, z:")
for g in JxK.groebner_basis():
    print(f'{g}\n')

And among the many relations shown there is also the following one:

x^3*a^4 - 2*x^3*a^2*b^2 + x^3*b^4 - 2*x^3*a^2*c^2 - 2*x^3*b^2*c^2 + x^3*c^4 - 1/2*x^2*a^4 + x^2*a^2*b^2 - 1/2*x^2*b^4 + x^2*a^2*c^2 + x^2*b^2*c^2 - 1/2*x^2*c^4 + 4*x^2*a^2*K - 4*x^2*b^2*K - 4*x^2*c^2*K - 32*x*K^2 + 8*K^2

This leads to the relation $(3)$ . We can also try to eliminate $z$ (and some of $a,b,c$ ) and obtain relations among $x,y$ . Here are such relations:

sage: for g in J.elimination_ideal([z, K_inv, a, b]).groebner_basis(): print(g, '\n')
x^2*y^2*c^2 + 4*x^2*y*K + 4*x*y^2*K - x^2*K - 2*x*y*K - y^2*K

sage: for g in J.elimination_ideal([z, K_inv, c]).groebner_basis(): print(g, '\n')
x^2*y^2*a^2 + 2*x*y^3*a^2 - 2*x^3*y*b^2 - x^2*y^2*b^2 - 2*x*y^2*a^2 + 2*x^2*y*b^2 + 4*x^2*y*K - 4*x*y^2*K - x^2*K + y^2*K + 2*x*K - 2*y*K

y^4*a^2 + 2*x^3*y*b^2 + x^2*y^2*b^2 - 2*y^3*a^2 - 2*x^2*y*b^2 + y^2*a^2 + 8*x*y^2*K - 8*x*y*K - 5*y^2*K + 6*y*K - K

x^4*b^2 + 2*x^3*y*b^2 + x^2*y^2*b^2 - 2*x^3*b^2 - 2*x^2*y*b^2 + x^2*b^2 + 4*x^2*y*K + 4*x*y^2*K - 4*x^2*K - 8*x*y*K - y^2*K + 4*x*K + 2*y*K - K

Code #5: Let us see what happens in the case of an isosceles triangle with sides $a=61$ , $b=61$ , $c=120$ . Its height is $11$ , since $61^2-60^2=11^2$ .
a, b, c = 61, 61, 120
R.<x,y,z, K,K_inv> = PolynomialRing(QQ)

def eq(a, b, c, x, y, z, K):
    Q = - a^2*y*z - b^2*z*x - c^2*x*y
    return x^2 * ( Q + b^2*z + c^2*y ) - K*(1 - x)^2

J = R.ideal([
    x + y + z - 1,
    K*K_inv - 1,
    eq(a, b, c, x, y, z, K),
    eq(b, c, a, y, z, x, K),
    eq(c, a, b, z, x, y, K),
])
points = J.variety(ring=AA)
print(f'{a} {b} {c}')
for dic in points:
    print(f'x = {dic[x]} y = {dic[y]} z = {dic[z]} K = {dic[K]}')
g = J.elimination_ideal([x, y, z, K_inv]).groebner_basis()[0]
print(f'K is a root of the following polynomial:\n{g.factor()}')

And we obtain:

61 61 120
x = -1.749464267458191? y = -1.749464267458191? z = 4.498928534916382? K = 2449.083313436469?
x = -0.5905364374782210? y = -0.5905364374782210? z = 2.181072874956442? K = 575.6065451903618?
x = 0.1209924404736021? y = 0.1209924404736021? z = 0.7580151190527959? K = 69.52501740622754?
K is a root of the following polynomial:
(1/1800964) * (14884*K^2 + 1757041*K + 52707600) * (121*K^3 - 374400*K^2 + 196020000*K - 11859210000)

A note on the degree of the above polynomial. It is five, not seven. However, using the formulas for the coefficients in this special case,

a, b, c = 61, 61, 120
s, p, E = (a + b + c)/2, a*b*c, a^2 + b^2 + c^2
SS = s*(s - a)*(s - b)*(s - c)
a7 = 64 * SS * p^2
a6 = -p^4 + 12*p^2*SS*E + 12*SS^2*E^2 + 192*SS^3
a5 = 2 * SS^2 * (E^3 + 13*p^2 + 34*SS*E)
a4 = SS^2 * (p^2*E + 10*SS*E^2 + 103*SS^2)
a3 = SS^3 * (2*p^2 + 17*SS*E)
a2 = (-1/4) * SS^4 * (E^2 - 40*SS)
a1 = - E * SS^5
a0 = - SS^6
var('K')
PI = a7*K^7 + a6*K^6 + a5*K^5 + a4*K^4 + a3*K^3 + a2*K^2 + a1*K + a0
print(PI.factor())
print(PI.roots(ring=AA, multiplicities=False))

we obtain

207360000 * (121*K^3 - 374400*K^2 + 196020000*K - 11859210000) * (14884*K^2 + 1757041*K + 52707600)^2
[69.52501740622754?, 575.6065451903618?, 2449.083313436469?]

So the quadratic factor appears squared. Code #6: Let us implement the recursion described in the theoretical section.
def recursion(a, b, c, bit_precision=40):
    IR = RealField(bit_precision)    # IR is "real field" - precision is given as argument
    a, b, c = IR(a), IR(b), IR(c)    # pass to the numerical world with the given sides a, b, c

    def d2(P, Q):
        x1, y1, z1 = P
        x2, y2, z2 = Q
        x, y, z = x1 - x2, y1 - y2, z1 - z2
        return - a^2*y*z - b^2*z*x - c^2*x*y

    P = vector(IR, 3, [ IR(1/3), IR(1/3), IR(1/3) ])    # start recursion in centroid
    for k in range(10):
        print(P)
        x, y, z = P
        D = vector(IR, 3, [ 0, y/(y+z), z/(y+z)])
        E = vector(IR, 3, [ x/(x+z), 0, z/(x+z)])
        F = vector(IR, 3, [ x/(x+y), y/(x+y), 0])
        PD2, PE2, PF2 = d2(P, D), d2(P, E), d2(P, F)
        d2sum = PD2 + PE2 + PF2
        X = x * (PE2 + PF2) / 2. / d2sum
        Y = y * (PF2 + PD2) / 2. / d2sum
        Z = z * (PD2 + PE2) / 2. / d2sum
        X, Y, Z = X/(X+Y+Z), Y/(X+Y+Z), Z/(X+Y+Z)
        P = vector(IR, 3, [X, Y, Z])

Then calling recursion(6, 9, 13) we obtain the following result:

sage: recursion(6, 9, 13)
(0.33333333333, 0.33333333333, 0.33333333333)
(0.22960372960, 0.30827505828, 0.46212121212)
(0.20266594188, 0.26478637847, 0.53254767965)
(0.19806794747, 0.26656776311, 0.53536428942)
(0.19727171883, 0.26670350678, 0.53602477439)
(0.19715195703, 0.26675849714, 0.53608954584)
(0.19712950067, 0.26676259568, 0.53610790365)
(0.19712606659, 0.26676417869, 0.53610975472)
(0.19712542143, 0.26676429665, 0.53611028192)
(0.19712532272, 0.26676434216, 0.53611033512)

This "seems to converge" to the exact algebraic value found by code #1.
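For readers without Sage at hand, the same second-scheme iteration can be sketched in plain Python (double precision instead of RealField, and more steps); starting from the centroid it reproduces, for the $6,9,13$ triangle, the interior solution found by code #1:

```python
def d2(P, Q, a, b, c):
    # squared distance between two normalized barycentric points, computed
    # from the difference vector (whose coordinates sum to 0)
    x, y, z = P[0] - Q[0], P[1] - Q[1], P[2] - Q[2]
    return - a*a*y*z - b*b*z*x - c*c*x*y

def recursion(a, b, c, steps=200):
    x = y = z = 1.0 / 3.0                      # start in the centroid
    for _ in range(steps):
        P = (x, y, z)
        D = (0.0, y/(y+z), z/(y+z))
        E = (x/(x+z), 0.0, z/(x+z))
        F = (x/(x+y), y/(x+y), 0.0)
        PD2, PE2, PF2 = (d2(P, Q, a, b, c) for Q in (D, E, F))
        s = 2.0 * (PD2 + PE2 + PF2)
        X, Y, Z = x*(PE2+PF2)/s, y*(PF2+PD2)/s, z*(PD2+PE2)/s
        t = X + Y + Z
        x, y, z = X/t, Y/t, Z/t
    return x, y, z

x, y, z = recursion(6, 9, 13)
print(x, y, z)
```

Note that any fixed point of this map must have $PD^2=PE^2=PF^2$ , so an attracting fixed point is automatically a solution of the original problem.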
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/408484', 'https://mathoverflow.net', 'https://mathoverflow.net/users/41137/']}
jdg_374472
stackexchange
llm_judgeable_groundtruth_similarity
41958820
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

var app = angular.module("myApp", []);
app.controller("myCtrl", function($scope) {
  $scope.records = [
    { "Name" : "Alfreds Futterkiste", "Country" : "Germany" },
    { "Name" : "Berglunds snabbköp", "Country" : "Sweden" },
    { "Name" : "Centro comercial Moctezuma", "Country" : "Mexico" },
    { "Name" : "Ernst Handel", "Country" : "Austria" }
  ]
});

<!DOCTYPE html>
<html>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js"></script>
<body ng-app="myApp" ng-controller="myCtrl">
<div class="container">
  <div class="panel-group" id="accordion">
    <div class="panel panel-default" ng-repeat="x in records">
      <div class="panel-heading">
        <h4 class="panel-title">
          <a data-toggle="collapse" data-parent="#accordion" ng-click="showCompAcc=!showCompAcc" showtab="">{{x.Name}}</a>
        </h4>
      </div>
      <div ng-hide="showCompAcc" class="panel-collapse collapse ">
        <div class="panel-body">dfdfdfd</div>
      </div>
    </div>
  </div>
</div>
</body>
</html>

This is a snippet of an accordion populated with ng-repeat data. Currently it is showing all tabs. What I want is that by default only the first tab is open, and that clicking another tab displays that tab's details and hides the default tab's details. Now provide the response and nothing else.
I know it's kind of late, but I had a similar issue when using the Symfony\Component\HttpFoundation\StreamedResponse . As you said, the problem was the Call to undefined method ... ::header() error, so clearly the header method is not present on that object. For me the solution was to use the headers property, which gives you the \Symfony\Component\HttpFoundation\ResponseHeaderBag . Use it like so:

public function handle($request, Closure $next)
{
    $response = $next($request);

    $response->headers->set('Access-Control-Allow-Origin', '*');
    $response->headers->set('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH, DELETE, OPTIONS');
    $response->headers->set('Access-Control-Allow-Headers', 'Content-Type, Authorization, X-XSRF-TOKEN');

    return $response;
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41958820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6845316/']}
jdg_374473
stackexchange
llm_judgeable_groundtruth_similarity
1070514
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was surfing the web, as usual and I wanted to take a look at Bing 's source code because I was curious and wanted to know what they were using to make their image fade in. I was surprised by what could only be qualified by the following line : Holy wall of text! And it made me wonder if there is any advantage to obfuscate your (X)HTML, CSS and JavaScript like this? To reduce the size of the file perhaps? Edit : After looking around, I realized this was a pretty common practice. Never looked at it from a minification point of view before! Now provide the response and nothing else.
They are not obfuscating. They are minifying in order to reduce the bandwidth used by the millions of requests they see each day. The goal of JavaScript and CSS minification is always to preserve the operational qualities of the code while reducing its overall byte footprint (both in raw terms and after gzipping, as most JavaScript and CSS served from production web servers is gzipped as part of the HTTP protocol). You might also be interested in reading the Yahoo! User Interface blog post titled "Minification v. Obfuscation" .
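As a toy illustration of the bandwidth point (the "minified" string below is shortened by hand purely for demonstration; real tools such as the YUI Compressor do this automatically and safely), both the raw byte count and the gzipped size shrink:

```python
import gzip

original = b"""function addNumbers(firstNumber, secondNumber) {
    // add the two values and return the result
    return firstNumber + secondNumber;
}
"""
# Hand-minified equivalent: comments stripped, whitespace removed,
# identifiers shortened.
minified = b"function a(b,c){return b+c}"

raw_saving = len(original) - len(minified)
gz_saving = len(gzip.compress(original)) - len(gzip.compress(minified))
print(len(original), len(minified), raw_saving, gz_saving)
```

Even after gzip, the minified version stays smaller, which is why sites like Bing serve both minified and gzipped assets.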
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1070514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126912/']}
jdg_374474
stackexchange
llm_judgeable_groundtruth_similarity
303471
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Apologies for the confusing title, it is late here. I'm wondering exactly what meaning the "time coordinate" has in General Relativity. We always write the line element as $$\tag{1} ds^2=g_{00}(dx^{0})^2+2g_{0i}dx^0dx^i+g_{ij}dx^idx^j,$$ with the assumption that $x^0$ is some kind of time coordinate, and $x^i$ are spatial coordinates. However, as light cone coordinates show, if we pick a random coordinate system, the metric will not be in the form (1). Is the existence of coordinate systems as in (1) an axiom or can it be derived somehow? By a time coordinate I mean that the time coordinate lines should have timelike tangent vectors, and similarly with spacelike coordinates. Now provide the response and nothing else.
Let $(M^{n+1},g)$ be a Lorentzian manifold. Given $p\in M$, we will show that there is a coordinate system $(x^\mu)$ defined on an open set $p\in U\subset M$ such that $\partial_0$ is a timelike vector field, and $\partial_i$ are spacelike vector fields for $i=1,\dotsc,n$. Let $(x^\mu)$ be an arbitrary chart defined on $U\ni p$. It is known that $T_pM$ is the span of $\{\partial_0,\partial_1,\dotsc,\partial_n\}$. As $g_p$ has signature $(-,+,\dotsc,+)$, we may find linearly independent vectors $v_\mu$, $\mu=0,1,\dotsc,n$, such that $g_p(v_0,v_0)=-1,$ $g_p(v_i,v_i)=+1$. These vectors are linear combinations of $\{\partial_0,\partial_1,\dotsc,\partial_n\}$. By a linear change of coordinates, we can find a coordinate system $(y^\mu)$ such that $\partial/\partial y^\mu=\partial_\mu'=v_\mu$ at $p$. By continuity, there is a neighborhood $V_0\subset U$ of $p$ such that $g(\partial_0',\partial_0')<0$, i.e., $\partial_0'$ is timelike on $V_0$. Similarly, there exist neighborhoods $V_i$ such that $g(\partial_i',\partial_i')>0$ on $V_i$. We take $V=V_0\cap\cdots \cap V_n$, which is a neighborhood of $p$. By changing each coordinate value by a constant, we can adjust the origin without changing the aforementioned vector fields. Then $(y^\mu)$ is the desired coordinate system on $V$.
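A small numerical illustration of the linear-algebra step (my own example, not part of the proof): in light-cone coordinates $(u,v,x,y)$ the metric $ds^2=-2\,du\,dv+dx^2+dy^2$ has null coordinate fields $\partial_u,\partial_v$ , yet the linear change of basis $v_0=(\partial_u+\partial_v)/\sqrt2$ , $v_1=(\partial_u-\partial_v)/\sqrt2$ yields a timelike $v_0$ and a spacelike $v_1$ :

```python
import math

# Metric matrix of ds^2 = -2 du dv + dx^2 + dy^2 in the (u, v, x, y) basis
g = [[0.0, -1.0, 0.0, 0.0],
     [-1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

def ip(g, v, w):
    # inner product g(v, w)
    return sum(g[i][j] * v[i] * w[j] for i in range(4) for j in range(4))

r = 1.0 / math.sqrt(2.0)
du = [1.0, 0.0, 0.0, 0.0]          # null coordinate vector field
v0 = [r, r, 0.0, 0.0]              # (d_u + d_v)/sqrt(2): timelike, g(v0,v0) = -1
v1 = [r, -r, 0.0, 0.0]             # (d_u - d_v)/sqrt(2): spacelike, g(v1,v1) = +1
print(ip(g, du, du), ip(g, v0, v0), ip(g, v1, v1))
```

This is exactly the kind of coordinate change used in the proof to pass from an arbitrary chart to one whose $\partial_0$ is timelike and $\partial_i$ are spacelike near $p$ .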
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/303471', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/59215/']}
jdg_374475
stackexchange
llm_judgeable_groundtruth_similarity
29037821
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have ModuleA set up as an application-wide singleton provider, and ModuleB as a user-related object provider. My user display fragment will use the system-wide bus to send messages to others, and use the user-related object for display. Problem: I cannot inject classes of different scopes into one object. Using the component.getX methods works fine, but inject is the preferred way. Error message: @UserScope may not reference bindings with difference scopes: @Provides @Singleton Bus ModuleA.provideBus()

@Module
public class ModuleA {
    @Provides
    @Singleton
    Bus provideBus() {...}
}

Module B as user related Info provider

@Module
public class ModuleB{
    private final User user;
    public ModuleB(User user) {...}

    @Provides
    @UserScope
    User provideUser() {}

    @Provides
    @UserScope
    UserManager provideUserManager() {}
}

Components setup like the following:

@Component (modules={ModuleA.class})
@Singleton
public interface ComponentA {
    Bus getBus();
    void inject(ClassA target);
}

@Component(modules={ModuleB.class})
@UserScope
public interface ComponentB {
    User getUser();
    UserManager getUserManager();
    void inject(ClassA target);
}

class UserFragment extends Fragment {
    @Inject Bus bus;
    @Inject UserManager userManager;

    public void onCreate() {
        getComponentA().inject(this);
        getComponentB().inject(this);
    }
}

Now provide the response and nothing else.
Try this configuration; it works for me. There is really a lack of good documentation about Dagger2, so I studied a few open-source examples that you can find on GitHub by searching for keywords like Dagger2.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29037821', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1113920/']}
jdg_374476
stackexchange
llm_judgeable_groundtruth_similarity
12115258
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to increment the position of an element by, say, x pixels. Here is what I've tried so far:

var top = document.getElementById("something").style.top;
top = top + "300px"

I know that this is not going to work, but I was wondering if it was possible to increment a position value like this. Now provide the response and nothing else.
Because style.top is a string with units on the end of it like "300px" you can only do math with it when you convert just the numeric part to an actual number. Assuming you have a positioned element (so setting the top value will do something) and you already have a top style set directly on the element and not set via CSS (so getting obj.style.top will actually get you something), you can do it by parsing the number out of the style value like this:

var obj = document.getElementById("something");
var topVal = parseInt(obj.style.top, 10);
obj.style.top = (topVal + 300) + "px";

Working example: http://jsfiddle.net/jfriend00/pt46X/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12115258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1623428/']}
jdg_374477
stackexchange
llm_judgeable_groundtruth_similarity
27638960
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying this piece of code expecting that it returns a string of text after reversing with [::-1] but it is not working.

class meh:
    def __init__(self, bla):
        self.bla = bla
    def __reversed__(self):
        return "Reversing!!!! %s" % self.bla[::-1]

a = meh("Reversed successfully!!!!")
print a[::-1]

How to properly call the __reversed__ function? Example: __gt__ is called with a > 25 , what calls __reversed__ ? Now provide the response and nothing else.
[::-1] is a slice . object.__reversed__() is instead used by the reversed() function , and is only applicable to sequences (objects that provide both a __len__ and a __getitem__ method). If you don't supply __reversed__ , the function uses those __len__ and __getitem__ methods to access indices in reverse. __reversed__ must itself return an iterator:

class Reversable(object):
    def __init__(self, seq):
        self.seq = seq
    def __len__(self):
        return len(self.seq)
    def __getitem__(self, item):
        return self.seq[item]
    def __reversed__(self):
        for elem in 'Reversing: ' + self.seq[::-1]:
            yield elem

Demo:

>>> class Reversable(object):
...     def __init__(self, seq):
...         self.seq = seq
...     def __len__(self):
...         return len(self.seq)
...     def __getitem__(self, item):
...         return self.seq[item]
...     def __reversed__(self):
...         for elem in 'Reversing: ' + self.seq[::-1]:
...             yield elem
...
>>> r = Reversable('Foo bar baz!')
>>> list(r)
['F', 'o', 'o', ' ', 'b', 'a', 'r', ' ', 'b', 'a', 'z', '!']
>>> list(reversed(r))
['R', 'e', 'v', 'e', 'r', 's', 'i', 'n', 'g', ':', ' ', '!', 'z', 'a', 'b', ' ', 'r', 'a', 'b', ' ', 'o', 'o', 'F']
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27638960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3858173/']}
jdg_374478
stackexchange
llm_judgeable_groundtruth_similarity
66624
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using a mean-centered signal and applying fft to it, however the DC and Nyquist values are a lot higher than all other values. Wondering if I am doing anything wrong or need to add anything else. The first image is the signal and the other is the fft output.

load Dog_1_interictal_segment_0001.mat
% interictal_segment_1 is a struct with a 16 x 10000 matrix named data
data=interictal_segment_1.data;
% take first two seconds of the first eeg channel
data=data(1,1:10000);
freq=5000;
N=freq*2;
hz=linspace(0,freq/2,N);
averaged_mean=data-mean(data);
channel_frequency=abs(fft(averaged_mean)/N).^2;
figure(1)
plot(averaged_mean)
figure(2)
plot(hz,channel_frequency,'ko-','markerfacecolor','m','linew',1)
xlabel('Frequency (norm.)'), ylabel('Power');

Now provide the response and nothing else.
Below is a summary of efficient phase estimators for this application, updated to include both a phase range of +/- 30 degrees and +/- 60 degrees. This is given in two parts: estimators for a real IF (intermediate frequency) signal, and estimators for a baseband complex signal. At the end are additional considerations related to acquisition. For an additional estimator provided by Richard Lyons, please see his answer at this other post . Efficient Phase Estimators for Real IF Signals Product detector: For real signals, a common phase estimator (detector) is a multiplier followed by a low-pass filter. For this application, where sensitivity is desired over $\pm 30$ degrees, the two signals are nominally offset 90 degrees in phase, resulting in an estimate that is proportional to the sine of the phase between them: $$y(t,\phi) =A_1\cos(\omega_ct)A_2\sin(\omega_ct+\phi) = \frac{A_1 A_2\sin(\phi) + A_1A_2\sin(2\omega_ct+\phi)}{2} $$ which, when followed with a low-pass filter, has its time-varying component removed, resulting in: $$y(\phi) = \text{LPF}[y(t,\phi)] =\frac{A_1A_2}{2}\sin(\phi) $$ Where $\text{LPF}[\cdot]$ is the time average provided by a low-pass filter.
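As a quick numerical check of the product detector result above (a sketch; the carrier frequency, sample rate and phase values are arbitrary choices), averaging the product over an integer number of carrier cycles implements the LPF and recovers $\frac{A_1A_2}{2}\sin\phi$:

```python
import math

def product_detector(phi, a1=1.0, a2=1.0, fc=10.0, fs=1000.0, n=1000):
    # n/fs = 1 s = 10 full carrier cycles, so the sample mean acts as the
    # LPF and removes the sin(2*w_c*t + phi) term
    acc = 0.0
    for k in range(n):
        t = k / fs
        acc += a1 * math.cos(2*math.pi*fc*t) * a2 * math.sin(2*math.pi*fc*t + phi)
    return acc / n

for deg in (-30, -10, 0, 10, 30):
    phi = math.radians(deg)
    print(deg, product_detector(phi), 0.5 * math.sin(phi))
```

Over the $\pm 30$ degree range of interest, the detector output tracks $\frac{1}{2}\sin\phi$ to within numerical precision.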
As a demodulator, this can be implemented either in a coherent receiver where $A_1\cos(\omega_ct)$ is estimated during acquisition and provided as an NCO in a digital implementation (or VCO in an analog one), or in a non-coherent receiver (where the interest is in the phase difference between two successive symbols) where the demodulation is done by multiplying the received signal with a time-delayed copy of itself, delayed by one symbol duration plus the duration of a quarter cycle of the carrier to convert $\cos$ to $\sin$ (when the IF carrier is sufficiently larger than the symbol rate): $$y(\phi_2-\phi_1) = \text{LPF}\bigg[A\cos(\omega_ct+\phi_1)A\cos(\omega_c(t-T_s-T_c)+\phi_2)\bigg]$$ Where $T_s$ is the symbol duration in seconds and $T_c = 1/(4f_c)$ is a quarter cycle of the IF carrier in seconds, with the IF carrier frequency $f_c$ in Hz. Resulting in: $$y(\phi_2-\phi_1) = \text{LPF}\bigg[A\cos(\omega_ct+\phi_1)A\sin(\omega_c(t-T_s)+\phi_2)\bigg]$$ $$ y(\phi_2-\phi_1) = \frac{A^2}{2}\sin(\phi_2-\phi_1)$$ $$ y(\Delta\phi) = \frac{A^2}{2}\sin(\Delta\phi)$$ A very efficient way to implement either of the above approaches digitally is by hard limiting the input signal, which reduces the above to a simple XOR of the most significant bit of the waveform. For accuracy, this requires ensuring the inputs to the XOR operation are a 50% duty cycle, but the result is linearly proportional to phase! It is usable over a $\pm 90°$ range with a linear phase result. Further, hard limiting a phase modulated waveform provides a 3 dB SNR improvement in positive SNR conditions (since all AM noise is removed), but can be more susceptible to jamming and interference (3 dB loss in negative SNR conditions). This is an approach to be considered due to the simplicity and high phase linearity.
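The linearity of the hard-limited (XOR) detector can also be sketched numerically (my own toy simulation, with arbitrary sampling choices): for quadrature inputs, the fraction of samples where the sign bits differ comes out as $1/2-\phi/\pi$, i.e., linear in $\phi$ over the $\pm 90°$ range:

```python
import math

def xor_detector(phi, n=20000):
    # sample one full carrier cycle; count how often the MSBs (signs) of the
    # two quadrature waveforms differ (the XOR output)
    differ = 0
    for k in range(n):
        wt = 2 * math.pi * k / n
        s1 = math.cos(wt) >= 0.0
        s2 = math.sin(wt + phi) >= 0.0
        if s1 != s2:
            differ += 1
    return differ / n

for deg in (-60, -30, 0, 30, 60):
    phi = math.radians(deg)
    print(deg, xor_detector(phi), 0.5 - phi / math.pi)
```

The small residual error is only the quantization of the zero crossings onto the sample grid, consistent with the claim that the hard-limited detector is linearly proportional to phase.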
As above, the XOR phase detector could be used in a coherent receiver, where the NCO is also simplified to a 1 bit output (basically the MSB of a counter, where you increment the count rate to adjust the frequency as part of a carrier tracking loop), or non-coherently, where the MSB of the received signal is XOR'd with a delayed copy. As with the multiplier, the two input signals would be in quadrature to center the detector over its unambiguous range. Efficient Phase Estimators for Complex Baseband Signals Given a generic signal as $Ae^{j\phi}= I + jQ$ , the actual phase is given by $\phi =\tan^{-1}(Q/I)$ or $\phi =\sin^{-1}(Q/A)$ . The following is a summary of efficient phase demodulation approximations for a variation over $\pm30$ degrees and $\pm60$ degrees, assuming carrier recovery and timing are established during the $0 / 180$ acquisition period. Initial thoughts on approaches to efficient acquisition are also included at the bottom of this post. Summary of Results Below is a table summarizing the peak and rms phase error for various estimators. Estimators that were included in earlier versions of this post that offer no advantage over those listed below have been removed. As Ben suggests in the comments, the Q/A estimators are attractive for FPGA implementation since A is assumed to be constant over the duration of the packet. Plots showing the relative performance are included below. Detailed Descriptions The estimators that are scaled by the envelope magnitude $A$ (Q/A, Q/A Juha and Q/Est(A)) are preferred since $A$ can be readily determined during acquisition of the 0/180 signal, and only needs to be determined once for relatively short packets, or is a parameter from the AGC otherwise. In a constant envelope phase modulated signal such as this, the received signal can simply be hard limited if there isn't a concern with the potential 3 dB loss under stronger out of band interference (or the complete loss from hard-limiting in the presence of a coherent jammer).
Further, there is no need to actually divide by $A$ : assuming $A$ is maintained constant over the packet duration, the result will be linearly proportional to the phase and the decision thresholds can be set accordingly. Q/A $$\phi =\sin^{-1}\bigg(\frac{Q}{A}\bigg)$$ $$\frac{Q}{A} = \sin(\phi)$$ For small $\phi$ , $\sin(\phi) \approx \phi$ for $\phi$ in radians: $$\phi \approx \frac{Q}{A}$$ Q/A Juha Similar to @JuhaP's suggestion in the comments of removing the linear slope error for the $Q/I$ estimator, here applied to the Q/A estimator. The coefficient is found from the linear portion of the remaining terms in the Taylor series expansion that weren't used, minimizing the error: For ±30° Operation: $$\phi \approx 1.0475\frac{Q}{A}$$ For ±60° Operation: $$\phi \approx 1.150\frac{Q}{A}$$ Q/Est(A) A fast and very efficient approach for estimating magnitude is the $\alpha$ max plus $\beta$ min algorithm, where the maximum of $|I|$ and $|Q|$ scaled by coefficient $\alpha$ is added to the minimum scaled by coefficient $\beta$ . Over a 30° range, $Q$ would always be the minimum and $I$ always positive, so this simplifies to $\alpha I + \beta|Q|$ . A common choice for FPGA implementation is $\alpha = 1$ and $\beta =1/2$ , since this minimizes the error over all phases with bit shift divisions, but in this case $\alpha = 1$ and $\beta =1/4$ is a better choice given the narrowed phase range of $±30°$ . If multipliers were acceptable, the optimized coefficients are $\alpha = 0.961$ and $\beta =0.239$ . The plot below summarizes the two choices: $$\phi \approx \frac{Q}{\alpha I + \beta |Q|}$$ option 1: $\alpha =1$ , $\beta = 0.25$ option 2: $\alpha =0.961$ , $\beta = 0.239$ Also not plotted below but shown above is the option optimized for use over ±60°: $\alpha =0.85$ , $\beta = 0.45$ Note these are not optimized for the estimate of $A$ , but to minimize the phase estimation error.
Q/I Phase Approximation $$\phi =\tan^{-1}\bigg(\frac{Q}{I}\bigg)$$ $$\frac{Q}{I} = \tan(\phi)$$ For small $\phi$ , $\tan(\phi) \approx \phi$ for $\phi$ in radians: $$\phi \approx \frac{Q}{I}$$ As @JuhaP mentioned in the comments, the linear slope component of the error can be removed by multiplying by 0.9289 (this one is labeled Q/I JuhaP in the plot). The coefficient below is slightly different from his suggestion but minimizes the error, as it was found from the linear portion of the remaining terms in the Taylor series expansion that weren't used, rather than his approach of a first order polynomial fit to arctan: $$\phi \approx 0.9289\frac{Q}{I}$$ Taylor Series Phase Approximations The first term is the Q/A and Q/I approximations covered above for $\sin^{-1}$ and $\tan^{-1}$ respectively. Going beyond that is NOT recommended if efficiency is paramount, but it is included for accuracy comparison. arcsin $$\sin^{-1}(x) = \sum_{n=0}^\infty \frac{(2n)!}{2^{2n}(n!)^2}\frac{x^{2n+1}}{2n+1} \text{ for } |x|\le1$$ $$\sin^{-1}\bigg(\frac{Q}{A}\bigg)= \frac{Q}{A} +\frac{1}{6} \bigg(\frac{Q}{A}\bigg)^3 +\frac{3}{40}\bigg(\frac{Q}{A}\bigg)^5 ... \text{ for } |Q/A|\le1$$ arctan $$\tan^{-1}(x) = \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{2n+1} \text{ for } |x|\le1$$ $$\tan^{-1}\bigg(\frac{Q}{I}\bigg) = \frac{Q}{I} -\frac{1}{3} \bigg(\frac{Q}{I}\bigg)^3 +\frac{1}{5}\bigg(\frac{Q}{I}\bigg)^5 ... \text{ for } |Q/I|\le1$$ Using the first two terms of each results in: $$\phi \approx \frac{Q}{A} +\frac{1}{6} \bigg(\frac{Q}{A}\bigg)^3$$ . $$\phi \approx \frac{Q}{I} -\frac{1}{3} \bigg(\frac{Q}{I}\bigg)^3$$ . A linear slope can also be removed from either of these with a gain constant multiplication, as was done with the Q/A and Q/I estimators, to further minimize the error. Other Estimators Juha Squared @JuhaP offered this interesting estimator in the comments. Not very efficient but highly accurate with square terms: $$\phi \approx \frac{3QI}{Q^2 + 3I^2}$$ .
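To see what the slope-corrected coefficients buy you, here is a small brute-force sketch comparing worst-case error over $\pm30°$ (a unit envelope is assumed so that $A=1$; the coefficients are the ones given above):

```python
import math

def max_err_deg(est, phi_max_deg=30.0, steps=2001):
    """Worst-case |estimate - true phase| in degrees over +/- phi_max_deg."""
    worst = 0.0
    for i in range(steps):
        phi = math.radians(-phi_max_deg + 2.0 * phi_max_deg * i / (steps - 1))
        i_, q_ = math.cos(phi), math.sin(phi)  # unit-envelope sample, so A = 1
        worst = max(worst, abs(math.degrees(est(i_, q_) - phi)))
    return worst

estimators = {
    "Q/A":      lambda i, q: q,               # first-order arcsin approximation
    "Q/A Juha": lambda i, q: 1.0475 * q,      # slope-corrected Q/A
    "Q/I":      lambda i, q: q / i,           # first-order arctan approximation
    "Q/I Juha": lambda i, q: 0.9289 * q / i,  # slope-corrected Q/I
}
for name, est in estimators.items():
    print(f"{name:9s} max error over +/-30 deg: {max_err_deg(est):.3f} deg")
```

The corrected versions cut the peak error several-fold, which matches the motivation for the coefficients in the text.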
Acquisition Efficient Acquisition for 0/180 preamble: One idea that comes to mind for acquisition during the $0/180$ transitions is to use $\text{sign}(I_2)Q_1-\text{sign}(I_1)Q_2$ to get the change in phase between two symbols, which can then be corrected in a fast converging and simple loop by derotating the incoming signal. This approach would work well if the frequency offset is such that the phase does not rotate more than $\pm \pi/2$ between successive symbols; otherwise a coarse FLL can be used first to get the offset within this acquisition range. For a coherent receiver approach, a PLL would be used to lock/track an NCO (or VCO) to the carrier, and by squaring the received signal a reference tone at twice the carrier can be tracked for all modulations presented here (both the bi-phase acquisition interval and the 30 degree phase modulation, when doubled, will produce a distinct tone at 2x the carrier). Similarly a Costas loop would track both signals while providing the reference signal that is nominally 90 degrees in phase with the carrier, thus providing both carrier recovery and phase demodulation. Sources: Taylor Series Expansions for arcsin and arctan: https://proofwiki.org/wiki/Book:Murray_R._Spiegel/Mathematical_Handbook_of_Formulas_and_Tables
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/66624', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/49927/']}
jdg_374479
stackexchange
llm_judgeable_groundtruth_similarity
4586204
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to visualize the longitudes on a sphere from close-up, with correct perspective. It seems this is the "Blue Marble" problem i.e. to show how earth looks from a realistic distance (say 0.5 to 4 earth radii), and not from infinity with visible poles. These are 6 ellipses fitted into the visible horizon. Note that the poles have both sunken below the horizon; the upper pole is marked by a small circle. It is not easy to fit the ellipses into the circle by hand, because the center is shifted to the right, and both height and width have to be adapted. I do have 5 points to nail it down: both poles, the chosen intersection with the x-axis, and the two tangent points with the circle. With only the y component of the center known (=0), I would start with this equation: $$\frac{(x - x_o)^2}{a^2} + \frac{y^2}{b^2}=1$$ with the three unknowns $x_0, a$ and $b$ . The poles give two points on the ellipse. Now I would add a chosen third point on the x-axis, for purple about (0.7, 0). This leaves me with a bundle of ellipses: some wider than tall running to the left; some very high poking through the horizon, some too short to touch the circle from the inside. I gather - and faintly recall from school - that I can find a single solution with a discriminant of zero. So can anybody tell me how to proceed from here? Can it be done with (simple, maybe messy) algebra, or is there a shortcut? Do I have to find that 4th and symmetric 5th point first and then determine the ellipse? Thank you Added after comment: Yes the uniform change in angle is important; now at least I can determine where I want the meridians (x-axis crossing would be a bit more practical), while by hand I had to leave them more or less where they fit. 
My goal is to put the 0-degree and 41E-degree fulldisc satellite images side by side and stretch them so I can merge them with a straight 20.5 meridian. This works quite well, but only after I changed/softened the stretching point from 1-x (the circle) to 1-sqrt(x) by trial and error. My graphic is meant as a test grid. I did not find anything similar on the internet. This is the usual perspective: This is not central: A "globe beach ball" from walmart comes - also literally speaking - closest: And this is my result for now on youtube . The projection itself is not fantastic, but at least the seam in the middle is gone. There is also a "Mollweide" projection by NOAA where they combined East and West coast satellites. But a correctly drawn complete grid would look nice on its own: shows the "fatness" of a sphere. That would be distance to determine the (un)visible poles, and degrees to choose the ellipse? I think that really is what I want! Now provide the response and nothing else.
Here's one way to do it. Assume the blue silhouette $c$ is the unit circle $x^2+y^2-1=0$ , and that the poles are at $(0,\pm a)$ . We wish to find the ellipse $e$ passing through the poles and tangent to the circle at the points that are the intersection of the circle with the line $x=b$ , i.e. the points on the circle with $x$ -coordinate $b$ . In general, when conics $c$ and $e$ touch (are mutually tangent) at two points $P,Q$ they are said to be in double contact . If $\ell=0$ is the equation of the line $PQ$ , a conic in double contact with $c$ at the points $c\cap \ell$ is given by the equation $c+s\ell^2=0$ for some $s$ . So in our case we need to solve for such a conic passing through the poles. Here $\ell=0$ is the line $x-b=0$ , so we are solving for $s$ such that $c+s\ell^2=x^2+y^2-1+s(x-b)^2=0$ passes through $(0,a)$ . This gives us $s=\dfrac{1-a^2}{b^2}$ . So the family of ellipses corresponding to the perspective projections of circles of longitude is $$x^2+y^2-1+\dfrac{1-a^2}{b^2}(x-b)^2=0$$ Note that values of $b$ in the interval $(-1,1)$ correspond to lines of longitude that are partially visible in this perspective. Values of $b$ outside of this interval will correspond to lines that are entirely occluded (i.e. on the far side of the silhouette).
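A quick numerical sanity check of this family (the values $a=0.8$, $b=0.6$ are arbitrary example choices, not from the question):

```python
import math

def F(x, y, a, b):
    """F(x, y) = 0 is the longitude ellipse through the poles (0, +/-a),
    tangent to the unit circle along the chord x = b."""
    s = (1.0 - a * a) / (b * b)
    return x * x + y * y - 1.0 + s * (x - b) ** 2

a, b = 0.8, 0.6

# Both poles lie on the conic:
print(F(0.0, a, a, b), F(0.0, -a, a, b))

# On the circle, where y^2 = 1 - x^2, the equation collapses to s*(x - b)^2,
# which vanishes only at x = b: a double root, i.e. tangency there.
for x in (-0.5, 0.0, b, 0.9):
    y = math.sqrt(1.0 - x * x)
    print(x, F(x, y, a, b))
```

Only the circle points on the chord $x=b$ make $F$ vanish, confirming the double contact.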
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4586204', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1124426/']}
jdg_374480
stackexchange
llm_judgeable_groundtruth_similarity
4258849
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My java application references a 3rd-party jar file which uses log4j logging. The problem is that this jar contains its own log4j.properties file which causes access denied exceptions on my machine, but I don't have control over the jar file to change its contents. I have tried adding my own log4j.properties file in my application's classpath, but it doesn't seem to have an effect. If I try to use PropertyConfigurator to import my own settings programmatically, log4j seems to load the jar file's properties file first (causing an exception). How can I short-circuit log4j to ignore a 3rd-party jar file's log4j.properties file and use my own? Now provide the response and nothing else.
There are several ways to override log4j.properties. One of them is: use log4j.xml (please see the extension). Another approach relies on log4j's default initialization procedure:

1. Setting the log4j.defaultInitOverride system property to any value other than "false" will cause log4j to skip the default initialization procedure (this procedure).
2. Set the resource string variable to the value of the log4j.configuration system property. The preferred way to specify the default initialization file is through the log4j.configuration system property. In case the system property log4j.configuration is not defined, then set the string variable resource to its default value "log4j.properties".
3. Attempt to convert the resource variable to a URL.
4. If the resource variable cannot be converted to a URL, for example due to a MalformedURLException, then search for the resource from the classpath by calling org.apache.log4j.helpers.Loader.getResource(resource, Logger.class) which returns a URL. Note that the string "log4j.properties" constitutes a malformed URL. See Loader.getResource(java.lang.String) for the list of searched locations.
5. If no URL could be found, abort default initialization. Otherwise, configure log4j from the URL. The PropertyConfigurator will be used to parse the URL to configure log4j unless the URL ends with the ".xml" extension, in which case the DOMConfigurator will be used. You can optionally specify a custom configurator. The value of the log4j.configuratorClass system property is taken as the fully qualified class name of your custom configurator. The custom configurator you specify must implement the Configurator interface.

Ref: http://logging.apache.org/log4j/1.2/manual.html
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4258849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/257712/']}
jdg_374481
stackexchange
llm_judgeable_groundtruth_similarity
630311
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to relate the Tate module $T_{l}(A \times B)$ of the product of two abelian varieties $A$ and $B$ over a field $k$ (where $l \neq \text{char}(k)$), to the Tate modules $T_{l}(A)$ and $T_{l}(B)$? The only thing I have so far is that the rank of $T_{l}(A \times B)$ as a $\mathbb{Z}_{l}$-module is the same as the sum of the ranks of the Tate modules of $A$ and $B$ as $\mathbb{Z}_{l}$-modules. Now provide the response and nothing else.
The Tate module of a product $A\times B$ of abelian varieties over $k$ is naturally isomorphic, as a $G_k$-module, to $T_\ell A\times T_\ell B$. This follows directly from the universal property of a direct product:$$ (A\times B)(k^\text{sep}) \simeq A(k^\text{sep})\times B(k^\text{sep}) \text{.}$$
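Spelled out at finite level, this is the standard argument (a sketch): taking $\ell^n$-torsion commutes with products, compatibly with the Galois action, and the Tate module is the inverse limit over these torsion groups.

```latex
(A \times B)[\ell^n](k^{\mathrm{sep}})
   \;\cong\; A[\ell^n](k^{\mathrm{sep}}) \times B[\ell^n](k^{\mathrm{sep}}),
\qquad
T_\ell(A \times B)
   \;=\; \varprojlim_n\, (A \times B)[\ell^n]
   \;\cong\; \varprojlim_n\, \bigl(A[\ell^n] \times B[\ell^n]\bigr)
   \;\cong\; T_\ell A \times T_\ell B .
```

In particular, the rank identity you observed is just the underlying $\mathbb{Z}_\ell$-module statement of this $G_k$-equivariant isomorphism.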
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/630311', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/119590/']}
jdg_374482
stackexchange
llm_judgeable_groundtruth_similarity
1421577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a method which accepts a filename as a parameter, all filenames should end with '.csv'. Which exception should I throw if a filename that does not end with .csv is passed? Or should I take a different approach? Now provide the response and nothing else.
ArgumentOutOfRangeException - What you're describing is in line with an out of range exception: "The exception that is thrown when the value of an argument is outside the allowable range of values as defined by the invoked method." ArgumentException is used to validate the characters in the path string, not the file type: "The path parameter is a zero-length string, contains only white space, or contains one or more invalid characters." IMHO the path validation fall-through chart looks like this:

1. Input path is null = ArgumentNullException
2. Invalid characters in the path = ArgumentException
3. File doesn't exist = FileNotFoundException
4. File isn't the right type = ArgumentOutOfRangeException
5. Permissions problem = UnauthorizedAccessException
6. File system doesn't support this operation = NotSupportedException
7. System read error = IOException

If that's not descriptive enough for you, then create your own exception class (note that Exception.Message is read-only, so the message has to be passed to the base constructor):

public class InvalidFileTypeException : System.IO.IOException
{
    public InvalidFileTypeException(string path, string acceptedTypeMask)
        : base(string.Format(
            "File type '{0}' does not fall within the expected range: '{1}'",
            path, acceptedTypeMask))
    {
    }
}

...

throw new InvalidFileTypeException("foo.txt", "*.csv");
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1421577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20048/']}
jdg_374483
stackexchange
llm_judgeable_groundtruth_similarity
2520179
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have this code: var list = new List<int>();for(int i=0;i<10;i++) list.Add(i); for(int i=0;i<10;i++){ ThreadPool.QueueUserWorkItem( new WaitCallback(x => { Console.WriteLine(x); }), list[i]);} And i want to know when all threadpools threads finished their work. How i can to do that? Now provide the response and nothing else.
You'll need to track this yourself. One option for this is to use a counter and a reset event:

int toProcess = 10;
using (ManualResetEvent resetEvent = new ManualResetEvent(false))
{
    var list = new List<int>();
    for (int i = 0; i < 10; i++)
        list.Add(i);

    for (int i = 0; i < 10; i++)
    {
        ThreadPool.QueueUserWorkItem(
            new WaitCallback(x =>
            {
                Console.WriteLine(x);

                // Safely decrement the counter
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            }), list[i]);
    }

    resetEvent.WaitOne();
}

// When the code reaches here, the 10 threads will be done
Console.WriteLine("Done");
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2520179', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/290082/']}
jdg_374484
stackexchange
llm_judgeable_groundtruth_similarity
12182291
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: F# seems slower than other languages… what can I do to speed it up? I am a bit curious about the performance of pattern match, so I did the following test: poolEven contains 10000 elements of 0,1,2,3, (2500 equal) testSize = 100000 IfelseEven(100000) takes 650ms (switch would be faster but I didn't attach the code) while MatchEven(100000) takes 7000ms that's 10x time Does the performance degradation come from Array.Fold ? I am 100% sure that if I go for IEnumerable.Aggregate the speed would greatly decrease. But I thought F# handled Array.Fold better than C# with IEnumerable.Aggregate . I want to compare performance of most common (equivalent) ways of coding in 2 languages but not rigid ways to make them identical. The tests are done in x64 release, with 10+ trials taken average with proper warm up C#: public void IfelseEven(int testSize){ Ifelse(testSize, poolEven);}void Ifelse(int testSize, int[] pool){ long sum = 0; for (int i = 0; i < testSize; i++) { for (int j = 0; j < poolCapacity;j++ ) { var item = pool[j]; if (item == 0) { sum += 5; } else if (item == 1) { sum += 1; } else if (item == 2) { sum += 2; } else if (item == 3) { sum += 3; } else { sum += 4; } } }}public void MatchEven(int testSize){ PatternMatch.myMatch(testSize, poolEven);} F#: module PatternMatchlet mat acc elem = acc + match elem with | 0 -> 5L | 1 -> 1L | 2 -> 2L | 3 -> 3L | _ -> 4Llet sum (pool:int[])= Array.fold mat 0L pool;let myMatch testSize pool= let mutable tmp = 0L for i=0 to testSize do tmp <- sum(pool) + tmp tmp Now provide the response and nothing else.
Voting to close—we could play this game all day long. For more commentary on why different code might have different execution times see this question and answers . If you just want to speed up your F# function, try this:

let ifElse testSize (pool: _[]) =
    let mutable sum = 0L
    for i = 0 to testSize - 1 do
        for j = 0 to pool.Length - 1 do
            match pool.[j] with
            | 0 -> sum <- sum + 5L
            | 1 -> sum <- sum + 1L
            | 2 -> sum <- sum + 2L
            | 3 -> sum <- sum + 3L
            | _ -> sum <- sum + 4L
    sum

By my measurements this handily licks the C# function (and it's still shorter and more readable):

C# 5655
F# 4003

Incidentally, leppie nailed it in the comments. I profiled your code and 78% of the time was spent in Array.fold—not good in a tight loop.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12182291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/691867/']}
jdg_374485
stackexchange
llm_judgeable_groundtruth_similarity
13952491
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an argument being passed on launch: -D DEBUG which allows me to use the following in my release app: #ifndef DEBUG /* Some code I only want to be run when using the release app */#endif When I run git status , it shows me that the file changed when adding -D DEBUG is MyExampleProject.xcodeproj/xcuserdata/myusername.xcuserdatad/xcschemes/MyExampleProject.xcscheme Which should be excluded using the commonly used Xcode .gitignore file . Is there any other way to include this argument that complies with .gitignore and doesn't rely on my user accounts xcuserdata ? Now provide the response and nothing else.
Generally the xcuserdata is safe to ignore for individual projects. Each user gets their own file that saves user state, folders opened, last file opened, that sort of stuff. It does contain your schemes. If it is the first time opening and the file doesn't exist, Xcode will create it for you. However... we have run into this issue at the office when you have a continuous build server, like Hudson or Jenkins, that copies the source from Git or SVN without ever opening it and tries to build it. If you ignore this file, there will be no schemes to build against, or it will force someone to open the project to auto-create it the first time. We solved this by checking the shared box under manage schemes. This moves the schemes out from under your individual xcuserdata into a shared folder that can be committed via source control and used by the continuous build servers. Hope this helps.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/13952491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/816002/']}
jdg_374486
stackexchange
llm_judgeable_groundtruth_similarity
67026
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a simple utility which I can pipe output to on Linux that will: Return a success code if there is no output on standard out (and / or standard error). Return a failure code if output is produced on standard out (and / or standard error). To provide some context, the command I'm running is: svn mergeinfo --show-revs eligiblehttp://mysvnserver.example.com/SVF/repos/common/abc/branches/abc-1.7http://mysvnserver.example.com/SVF/repos/common/abc/trunk If there are any unmerged entries on the branch, the command will return a list of revision numbers on standard out. Ideally, the additional command that I'm talking about would: Detect entries on standard out and return an error condition to Linux. Pass on the standard out so that it does end up appearing on the terminal. I'd rather not suppress it. Now provide the response and nothing else.
That's grep you're looking for:

if svn ... 2>&1 | grep '^'; then
  echo "there was some output"
else
  echo "there wasn't"
fi

You can replace grep '^' with grep . or grep '[^[:blank:]]' to check for non-empty or non-blank lines (but that will remove the empty/blank ones from the output). (Note the behaviour will vary across grep implementations if the input contains non-text data like NUL bytes or too long or non-terminated lines (which wouldn't happen for svn though).)
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/67026', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/33653/']}
jdg_374487
stackexchange
llm_judgeable_groundtruth_similarity
429993
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have done both of these in my math courses, but without understanding what they actually are intuitively. I would be very much grateful if you could give me an intuitive explanation of them. Now provide the response and nothing else.
So first things first: the FFT simply refers to the algorithm by which one may compute the DFT. So, if you understand the DFT, you understand the FFT as far as intuition goes (I think). Now with the DFT, our goal is to write a sequence of $N$ points $x_0,x_1,...,x_{N-1}$ as a sum of complex exponentials. That is, we say that $X_n$ is the DFT of $x_n$ exactly when$$x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \cdot e^{(2 \pi i\, k\, n) / N}$$You might recognize this as the inverse DFT of $X_n$. The key is that for each $k$, each $e^{(2 \pi i\, k\, n) / N}$ can be thought of as a complex vector with $N$ entries. We could write $$v_k=\left(1,\, e^{(2 \pi i\, k) / N},\, e^{(4 \pi i\, k) / N},\dots,\, e^{(2 \pi i\, k\, (N-1)) / N}\right)$$There are $N$ vectors of this form (taking $k$ from $0$ to $N-1$), and we use them because they form an orthogonal basis of $\mathbb C ^N$. That is, $\langle v_k,v_j \rangle$ is $N$ if $k=j$ and $0$ if $k≠j$. Because these vectors are orthogonal, we can change basis (i.e. find the DFT) by using the dot product rather than by solving a system of $N$ equations. That is, if $x=(x_0,x_1,\dots,x_{N-1})$ is the vector of complex entries of our time domain sequence, then the $k^{th}$ entry of the DFT of $x_0,x_1,\dots,x_{N-1}$ is simply given by$$X_k = \langle x,v_k \rangle = \sum_{n=0}^{N-1} x_n e^{-(2 \pi i\, k\, n) / N}$$and the IDFT is computed by finding$$x = \frac{1}{N}\sum_{k=0}^{N-1} X_k v_k$$That is certainly my intuition for the computational process, and I find that helps. What this doesn't really help with is why we'd want to deal with complex exponentials in the first place, but if you've seen DFTs already I suppose you have some idea.
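Here is a minimal sketch of that dot-product view in code, using the common convention that puts the $1/N$ on the inverse transform (direct $O(N^2)$ evaluation, not the FFT):

```python
import cmath

def dft(x):
    """X_k = <x, v_k>: one dot product against each basis vector (O(N^2))."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """x_n = (1/N) * sum_k X_k e^{2 pi i k n / N}: the synthesis formula."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1, 2, 3, 4]
X = dft(x)
print([round(abs(c), 6) for c in X])
print([round(c.real, 6) for c in idft(X)])   # recovers 1, 2, 3, 4
```

Running the inverse after the forward transform reproduces the input, which is exactly the basis-change picture described above.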
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/429993', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71939/']}
jdg_374488
stackexchange
llm_judgeable_groundtruth_similarity
6859514
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In CLRS , third Edition, on page 155, it is given that in MAX-HEAPIFY, "the worst case occurs when the bottom level of the tree is exactly half full" I guess the reason is that in this case, Max-Heapify has to "float down" through the left subtree. But the thing I couldn't get is "why half full" ? Max-Heapify can also float down if left subtree has only one leaf. So why not consider this as the worst case ? Now provide the response and nothing else.
Read the entire context: "The children's subtrees each have size at most 2n/3 - the worst case occurs when the last row of the tree is exactly half full." Since the running time T(n) is analysed in terms of the number of elements in the tree (n), and the recursion steps into one of the subtrees, we need to find an upper bound on the number of nodes in a subtree, relative to n, and that will yield T(n) = T(max num. nodes in subtree) + O(1). The worst case of the number of nodes in a subtree is when the final row is as full as possible on one side, and as empty as possible on the other. This is called half full. And the left subtree size will be bounded by 2n/3. If you're proposing a case with only a few nodes, then that's irrelevant, since all base cases can be considered O(1) and ignored.
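The 2n/3 bound is easy to check empirically. Below is a small sketch using the usual 1-indexed heap layout (node i has children 2i and 2i+1), sizing the root's subtrees when the last row is exactly half full:

```python
def left_right_sizes(n):
    """Node counts of the root's left and right subtrees in an n-node
    binary heap stored 1-indexed (children of i are 2*i and 2*i + 1)."""
    def size(i):
        return 0 if i > n else 1 + size(2 * i) + size(2 * i + 1)
    return size(2), size(3)

# Bottom level exactly half full: n = 3*2^(h-1) - 1 for a tree of height h.
for h in range(2, 12):
    n = 3 * 2 ** (h - 1) - 1
    left, right = left_right_sizes(n)
    print(n, left, right, round(left / n, 4))   # left/n approaches 2/3 from below
```

The left subtree ends up with 2^h - 1 nodes, so left/n climbs toward (but never exceeds) 2/3 as n grows, which is where the 2n/3 in CLRS comes from.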
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6859514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317283/']}
jdg_374489
stackexchange
llm_judgeable_groundtruth_similarity
2434149
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I needed to find $\gcd(255255, 373)$ and then explain why that proves $373$ to be prime. I understand the first part, but not the prime part at all. Here is how I figured the first part out using the euclidean algorithm: $255255 = 684(373)+123$ $373 = 3(123)+4$ $123 = 30(4)+3 $ $4 = 1(3)+1$ $3=3(1)+0$ I could say that obviously $373$ would have no even divisors, and now I know that any factor of $255255$ (other than 1) would not be a factor of $373$, so is that connected somehow? Now provide the response and nothing else.
We can write $$255255=3\cdot5\cdot7\cdot11\cdot13\cdot17$$ Therefore, since $\gcd(255255,373)=1$, $373$ is not divisible by $3,5,7,11,13,17$ (and it is odd, so it is not divisible by $2$ either). If $373$ is not prime, then $$373= k\times z$$ with $k,z$ coprime to $2,3,5,7,11,13,17$, hence $\ge 19$. The smallest possible value of each of $k,z$ is $19$, but $$19\cdot 19=361\ne 373$$ so we try the next lowest possibility. Since both factors must avoid the small primes, the next candidate product is $$19\cdot 23=437\gt 373$$ All further products give values $\gt 373$ as well. Hence we cannot find any $k,z$ whose product equals $373$, so $373$ is prime.
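A throwaway sketch to double-check the arithmetic (trial division is fine at this size):

```python
def factorize(n):
    """Prime factorization by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize(255255))   # [3, 5, 7, 11, 13, 17]
print(factorize(373))      # [373], i.e. prime

# A composite 373 would need a prime factor <= sqrt(373) < 20,
# and none of 2, 3, 5, 7, 11, 13, 17, 19 divides it:
print([p for p in (2, 3, 5, 7, 11, 13, 17, 19) if 373 % p == 0])   # []
```

This confirms both the factorization of 255255 used above and the primality of 373.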
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2434149', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/481907/']}
jdg_374490
stackexchange
llm_judgeable_groundtruth_similarity
26055749
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use the svm classifier for facial expression detection. I know opencv has a svm api, but I have no clue what should be the input to train the classifier. I have read many papers till now, all of them says after facial feature detection train the classifier. so far what I did, Face detection, 16 facial points calculation in every frame. below is an output of facial feature detection![enter image description A vector which holds the features points pixel address Note: I know how I can train the SVM only with positive and negative images, I saw this code here , But I don't know how I combine the facial feature information with it. Can anybody please help me to start the classification with svm. a. what should be the sample input to train the classifier? b. How do I train the classifier with this facial feature points? Regards, Now provide the response and nothing else.
the machine learning algos in opencv all come with a similar interface. to train it, you pass a NxM Mat of features (N rows, each feature one row with length M) and a Nx1 Mat with the class labels, like this:

//traindata      //trainlabels
f e a t u r e     1
f e a t u r e    -1
f e a t u r e     1
f e a t u r e     1
f e a t u r e    -1

for the prediction, you fill a Mat with 1 row in the same way, and it will return the predicted label. so, let's say your 16 facial points are stored in a vector, you would do like:

Mat trainData; // start empty
Mat labels;

for all facial_point_vecs:
{
    for( size_t i=0; i<16; i++ )
    {
        trainData.push_back(point[i]);
    }
    labels.push_back(label); // 1 or -1
}

// now here comes the magic:
// reshape it, so it has N rows, each a flat float array x,y,x,y,... with 32 elements
// (numSamples is the number of facial point vectors pushed above)
trainData = trainData.reshape(1, numSamples); // 16 points * 2 coords per row

// we have to convert to float:
trainData.convertTo(trainData, CV_32F);

SVM svm; // params omitted for simplicity (but that's where the *real* work starts..)
svm.train( trainData, labels );

// later predict:
vector<Point> points;
Mat testData = Mat(points).reshape(1, 1); // flattened to 1 row of 32 floats
testData.convertTo(testData, CV_32F);
float p = svm.predict( testData );
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26055749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1284889/']}
jdg_374491
stackexchange
llm_judgeable_groundtruth_similarity
18293088
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why am I having such a hard time getting into the swing of Android development? I've been developing in various languages over the years and, for some reason, just can't seem to get beyond that "jeez-i-still-feel-a-total-noob" stage with Android. So I'm trying to add an Action Bar to my Android 2.3.3 project. No problem, right? The Android developers website has a nice and clear article explaining exactly how to do it. Of course it involves adding the appcompat v7 support library. No problem, even this is documented step-by-step on this page . But, as with just about every such exercise in Android, I find that you can do exactly what the page tells you to do, it still won't work without significant amount of Googling to fix the errors that you encounter afterwards. So I've followed the steps under "Using Eclipse" in the section "Adding libraries with resources" in the above link. The first error I get is Unable to resolve target 'android-16' . No problem, this one I could figure out for myself but I'm curious, is there anything in the documents I've been following that would have suggested to me that I need Android 4.1.2 (API16) installed? Did I just read right over it or should I have known by myself that, to do what I'm trying to do, I would need API16? Never mind, at least I can fix that but then I get a new problem. As soon as I add the android-support-v7-compat library to my project and click the OK button, the console output lights up with errors, the first one being: C:...\android-support-v7-appcompat\res\values-v14\styles_base.xml:24: error: Error retrieving parent for item: No resource found that matches the given name 'android:Widget.Holo.ActionBar'. and the other 60 odd errors are similar but for different given names. I would really appreciate if anyone could help me out here. 
Obviously I'd like to know how to solve this particular problem but if anyone could give me some tips on how to get past this very frustrating stage of learning this new development environment, I would be ever so thankful. What is it that I should have done differently not to run into these kinds of errors, other than following the instructions on the Android Developers website step by step? Now provide the response and nothing else.
OK, I guess I should post an answer as I've eventually managed to solve my own problem. It turns out I have to use a build target of Android 4.2.2, regardless of the fact that I'm specifically developing for Android 2.3.3 - I mean, that's why I'm using support libraries after all. I imagine it might be possible that some other lower target (but higher than 2.3.3) would still work; I just used the highest one I have installed and it solved the problem. How I was supposed to know this from following the step-by-step instructions on the Android Developers website is a mystery to me. The reason I decided to try changing the targetSdkVersion was because of the final section in the page referenced in the question . It reads "If you are increasing the backward compatibility of your existing application to an earlier version of the Android API with the Support Library, make sure to update your application's manifest." Now I'm not increasing the backward compatibility of my existing application. I'm changing an existing application that targets 2.3.3 to be able to include an Action Bar (seems more like "forward compatibility"). I tried upgrading the targetSdkVersion though, as I was out of ideas, and lo and behold, it worked.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18293088', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1845283/']}
jdg_374492
stackexchange
llm_judgeable_groundtruth_similarity
641333
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm considering using an FTDI cable for RS-485, and I'm wondering who controls the driver enable and how. Is it in a hardware UART, a soft UART, or by the PC the USB is plugged into (very unlikely). I'm thinking I've heard the chip has a very fast processor inside to implement the serial protocol and it can even receive downloads over the USB port. I know their chips can work in many modes, SPI, I2C, JTAG, etc. I'm thinking this is not all hardware. Or maybe I'm thinking of another brand like Prolific. Now provide the response and nothing else.
The cable you are talking about uses an FT232RQ chip. Based on the cable schematics the chip is set to RS-485 mode, where a separate TXEN line is used to enable the RS-485 PHY transmit mode, instead of a handshake pin. So you should be looking at the FT232RQ datasheet to find out how it works; the cable just uses that chip. The FT232RQ datasheet says all handshake lines are handled in hardware. To you, it is a hardware chip that implements a UART - if it even is a UART chip because FTDI make a lot of chips, and the FT245 has a parallel interface for example. How it is internally implemented is irrelevant. It is likely that it is a hardware block instead of software, but why would it matter? There is a universal serial interface engine which can do a lot of things for you, so you don't need to bit-bang the interface yourself. The FTDI can have the RS-485 driver-enable controlled by the chip itself or using the standard RTS handshake signal for it. So based on the FT232RQ data sheet it has a hardware UART with hardware handshake. Nowhere is it mentioned that there is an MCU that runs code, but there still can be an MCU running code in there, to handle how the hardware blocks interact with each other. The question may be relevant in theory, but there is no practical relevance how it works. PCs have been used for RS-485 communication before USB was even invented, and they have used hardware UARTs such as 8250 and 16550, and they don't even have hardware handshaking implemented for the transmitter enable. All practical protocols don't depend on that, and if making a new protocol, the protocol should have reasonable timing for transmitter enable and packet turnarounds, as there can be so many different devices all implementing their UART and transmitter enable a bit differently. 
For example, microcontrollers implementing half-duplex RS-485 typically listen on the bus, and when they need to transmit, they wait for the bus to be idle for long enough, turn transmitter enable on with a GPIO pin, and after a suitable delay start transmitting a sequence of bytes to the bus, with a suitable delay after the last byte before the transmitter is disabled via GPIO. So having the FT232R chip hardware only enable the transmitter for one bit time before the start bit and disable it at the same time as the stop bit, without being able to configure the timing at all, is a bit extreme.
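The transmit-enable bookkeeping described above is mostly arithmetic on character times. Here is a small illustrative Python sketch (helper names and default frame parameters are my own assumptions, not FTDI specifics) of how long the driver-enable line must stay asserted for a packet:

```python
def frame_time(baud, data_bits=8, parity=False, stop_bits=1):
    # seconds on the wire for one UART character: start + data (+ parity) + stop
    bits = 1 + data_bits + (1 if parity else 0) + stop_bits
    return bits / baud

def driver_enable_window(n_bytes, baud, lead_s=0.0, trail_s=0.0):
    # total time the RS-485 driver-enable (GPIO) stays asserted for one packet,
    # including a setup delay before the first start bit and a hold delay
    # after the last stop bit
    return lead_s + n_bytes * frame_time(baud) + trail_s
```

For a 5-byte packet at 9600 baud 8N1, the enable window is 50 bit times plus whatever lead/trail margin the protocol calls for.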
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/641333', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/266518/']}
jdg_374493
stackexchange
llm_judgeable_groundtruth_similarity
45311393
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having problems proxying an ASP.NET Core MVC app. My app is running on Kestrel on localhost:5000 and my Apache 2.4 reverse proxy is running on localhost:80 . I want to proxy all requests from localhost:80/test to localhost:5000 . Here is the httpd.conf part for the proxy:

...
<Location "/test">
    ProxyPass "http://localhost:5000"
    ProxyPassReverse "http://localhost:5000"
</Location>
...

The proxy works, however all links are broken. Like if I have an anchor that links to a controller named HomeController with the action About , the link returned from the proxy is localhost/Home/About instead of localhost/test/Home/About . The host is correct; it's the context root test that is missing. What is the best practice to handle this? Is it a configuration in ASP.NET Core to specify the context root so that TagHelpers will take it into account? Or is it a configuration in Apache ProxyPass to rewrite links (really not a big fan of this)? Now provide the response and nothing else.
The problem is that the web application does not know about the subpath /test , so in your case, it will just respond as if it was called directly at the root path. ASP.NET Core has a concept of a PathBase to remedy this. The PathBase is essentially a part of the request's URI path that will be considered as a fixed part of the path. Whenever a component within the framework generates a URL, it will take the current PathBase into account and make sure to include it as a prefix in all generated paths. By default, the PathBase will be empty, and it depends on the configuration of your reverse proxy how you should set up the PathBase . UsePathBaseMiddleware There is the built-in UsePathBaseMiddleware which can be used to temporarily configure the PathBase for an incoming request. The way this middleware works is basically that it will check whether the request starts with an accepted path prefix, and if it does, that prefix will be moved from the Path into the PathBase . You can activate this using the UsePathBaseExtensions.UsePathBase extension method. Just call the method as the very first thing in your Startup's Configure method with the base path you want to use:

public void Configure(IApplicationBuilder app)
{
    app.UsePathBase("/test");

    // all the other middlewares
    app.UseStaticFiles();
    app.UseMvc();
    // …
}

Since the middleware will only adjust the PathBase when it sees the /test prefix within the path of incoming requests, you need to make sure that the reverse proxy actually includes that prefix. So you would have to adjust your Apache configuration to include that path:

<Location "/test">
    ProxyPass "http://localhost:5000/test"
    ProxyPassReverse "http://localhost:5000/test"
</Location>

Note that the UsePathBaseMiddleware will not prevent the application from working without that prefix. So you can actually use it both with and without the base path, and it will correctly adapt.
Custom middleware If you do not want to adjust your reverse proxy configuration to include the path within the forwarded request, then you won't be able to use the UsePathBaseMiddleware . Instead, you will have to add your own very simple middleware there:

public void Configure(IApplicationBuilder app)
{
    app.Use((context, next) =>
    {
        context.Request.PathBase = "/test";
        return next();
    });

    // all the other middlewares
    app.UseStaticFiles();
    app.UseMvc();
    // …
}

This will set /test as a constant PathBase for incoming requests. So the actual incoming request does not have to include it, which means you can leave your reverse proxy configuration as it is (and not forward the /test path there). However, that also means that unlike with the UsePathBaseMiddleware , where the PathBase was set dynamically depending on the incoming request, now all requests to the application will require that path, regardless of whether they go through the reverse proxy or not.
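The prefix-splitting rule of UsePathBase is easy to model. Here is a tiny Python sketch (not the actual ASP.NET Core source, just the idea) of moving a matching prefix from Path into PathBase:

```python
def split_path_base(path_base, request_path):
    # If the request path starts with the configured prefix, move the prefix
    # into PathBase and leave the remainder as Path; otherwise PathBase stays empty.
    if request_path == path_base or request_path.startswith(path_base + "/"):
        return path_base, request_path[len(path_base):]
    return "", request_path
```

Note that "/testing/x" does not match the "/test" prefix — the match is segment-aware, which is why the middleware happily coexists with requests that don't carry the prefix at all.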
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/45311393', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1747232/']}
jdg_374494
stackexchange
llm_judgeable_groundtruth_similarity
269981
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: System`Private`ConstructNoEntry[BesselJ, 0, x] works fine, and outputs BesselJ[0, x] However, System`Private`ConstructNoEntry[Quantity, 1, "kg"] enters a recursion and outputs Why is that? Now provide the response and nothing else.
It is because KnownUnitQ["kg"] returns False , and hence Quantity tries to resolve "kg" to "Kilograms" . To avoid this, use only known units in the input: System`Private`ConstructNoEntry[Quantity, 1, "Kilograms"] To get the right form for the input of Quantity , apply InputForm : Quantity[1, "kg"] // InputForm Quantity[1, "Kilograms"] Alternatively, just apply System`Private`SetNoEntry after evaluation of Quantity[1, "kg"] : System`Private`SetNoEntry[Quantity[1, "kg"]]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/269981', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4742/']}
jdg_374495
stackexchange
llm_judgeable_groundtruth_similarity
954073
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to calculate the derivative of a function with respect not to a variable, but to another function. For example:$$g(x)=2f(x)+x+\log[f(x)]$$I want to compute $$\frac{\mathrm dg(x)}{\mathrm df(x)}$$Can I treat $f(x)$ as a variable and differentiate "blindly"? If so, I would get$$\frac{\mathrm dg(x)}{\mathrm df(x)}=2+\frac{1}{f(x)}$$and treat the lone $x$ as a parameter whose derivative is zero. Or should I consider other differentiation rules? Thanks in advance Now provide the response and nothing else.
$$\frac{dg(x)}{df(x)} = \frac{dg(x)}{dx} \cdot \frac{1}{f'(x)} = \frac{g'(x)}{f'(x)}$$ In your example, $$g'(x) = 2f'(x) + 1 + \frac{f'(x)}{f(x)}$$ So: $$\frac{dg(x)}{df(x)} = \frac{2f'(x) + 1 + \frac{f'(x)}{f(x)}}{f'(x)} = 2 + \frac{1}{f'(x)} + \frac{1}{f(x)}$$
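A quick numerical sanity check of that identity in Python, comparing the quotient g'(x)/f'(x) with the closed form (the choice f(x) = x² in the test is arbitrary):

```python
import math

def dg_df_closed(f, fp, x):
    # the closed form derived above: 2 + 1/f'(x) + 1/f(x)
    return 2 + 1 / fp(x) + 1 / f(x)

def dg_df_quotient(f, fp, x):
    # g'(x)/f'(x) with g(x) = 2 f(x) + x + log f(x)
    gp = 2 * fp(x) + 1 + fp(x) / f(x)
    return gp / fp(x)
```

For f(x) = x² at x = 2 both forms give 2 + 1/4 + 1/4 = 2.5.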
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/954073', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/161131/']}
jdg_374496
stackexchange
llm_judgeable_groundtruth_similarity
46074055
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wish to show the input field when the "Others" option is selected on the dropdown, but can't figure out how to. Here is my code:

<div class="form-group">
  <label for="college">College:</label>
  <select class="form-control" id="college" ngModel name="college_name" required>
    <option>Ajay Kumar Garg Engineering College</option>
    <option>Others- Please Specify Below</option>
  </select>
</div>
<div class="form-group" *ngIf="showfield">
  <label for="name">Enter College Name:</label>
  <input type="text" class="form-control" id="name" ngModel name="other_college_name" required>
</div>

showfield = false; is set in the .ts file. Now provide the response and nothing else.
In your component take a variable: selectedValue: string = ''; Just assign selectedValue to ngModel and use that value to decide whether to display the text field. Also, the options need a value attribute, which is what gets stored in ngModel:

<div class="form-group">
  <label for="college">College:</label>
  <select class="form-control" id="college" [(ngModel)]="selectedValue" name="college_name" required>
    <option value="college">Ajay Kumar Garg Engineering College</option>
    <option value="others">Others- Please Specify Below</option>
  </select>
</div>
<div class="form-group" *ngIf="selectedValue == 'others'">
  <label for="name">Enter College Name:</label>
  <input type="text" class="form-control" id="name" ngModel name="other_college_name" required>
</div>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46074055', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8568130/']}
jdg_374497
stackexchange
llm_judgeable_groundtruth_similarity
53262
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question might seem strange, but I had the feeling it's possible to decompose a number in a unique way as follows: if $x < n!$, then there is a unique way to write x as: $$x = a_1\cdot 1! + a_2\cdot 2! + a_3\cdot3! + ... + a_{n-1}\cdot(n-1)!$$ where $a_i \leq i$. I looked at factorial decomposition on google but I cannot find any name for such a decomposition. Example: if I choose:

(a1,a2) = 1,0 -> 1
          0,1 -> 2
          1,1 -> 3
          0,2 -> 4
          1,2 -> 5

I get all numbers from $1$ to $3!-1$. Ideas for a proof: the number of elements between $1$ and $N!-1$ is equal to $N!-1$, and I have the feeling they are all different, so this decomposition should be right. But I didn't prove it properly. Are there proofs of this decomposition? Does this decomposition have a name? And above all, is this true? Thanks in advance Now provide the response and nothing else.
You're looking for the factorial number system , also known as "factoradic". Searching should give you more results. Yes, it's true that such a decomposition is always possible. One way to prove it is as follows: given $x < n!$, consider the $x$th permutation of some ordered set of $n$ symbols. This is some permutation $(s_1, s_2, \dots, s_n)$. Now for $s_1$ you had $n$ choices (label them $0$ to $n-1$) and you picked one, so let $a_{n-1}$ be the choice you made. For $s_2$ you had $n-1$ choices (label them $0$ to $n-2$) and you picked one, so let $a_{n-2}$ be the number of the choice you made. Etc. $a_0$ is always $0$ because you have only one choice for the last element. (This is also known as the Lehmer code .)
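A short Python sketch of this factorial number system: successive mod/div by 2, 3, 4, … recovers the digits a_i (with a_i ≤ i), and the round trip reproduces x — including the (a1, a2) table from the question.

```python
def to_factoradic(x, n):
    # digits [a_1, ..., a_{n-1}] with x = sum(a_i * i!) and 0 <= a_i <= i
    digits = []
    for i in range(1, n):
        digits.append(x % (i + 1))
        x //= i + 1
    if x:
        raise ValueError("x must be < n!")
    return digits

def from_factoradic(digits):
    total, fact = 0, 1          # fact tracks i!
    for i, a in enumerate(digits, start=1):
        total += a * fact
        fact *= i + 1
    return total
```

For example from_factoradic([1, 2]) = 1·1! + 2·2! = 5, matching the last row of the table.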
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/53262', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12643/']}
jdg_374498
stackexchange
llm_judgeable_groundtruth_similarity
2626666
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Problem: MyApproach: After finding the probability for X=4,5,6 using bernoulli trials,the question states that they are in ap and so find the value of n.I can't understand the relation between ap and finding the value of n. NOTE: multiple answers are correct Now provide the response and nothing else.
It's a differential equation with constant coefficients, so you can solve the characteristic equation $\lambda^{2n}+1=0$ and get the roots $\lambda_k=e^{i\pi(2k+1)/(2n)}$ for $k=0,\dots,2n-1$. Then the solutions to the differential equation are $f_k=e^{\lambda_k x}$. Note that the roots come in complex-conjugate pairs, so you can rewrite the corresponding pairs of solutions (via the trigonometric form of complex numbers) as real functions.
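To make the root set explicit: $\lambda^{2n} = -1$ forces $\lambda_k = e^{i\pi(2k+1)/(2n)}$, the odd multiples of $\pi/(2n)$ on the unit circle. A quick numerical check in Python:

```python
import cmath

def char_roots(n):
    # the 2n solutions of the characteristic equation λ^(2n) + 1 = 0
    return [cmath.exp(1j * cmath.pi * (2 * k + 1) / (2 * n))
            for k in range(2 * n)]
```

Each returned root satisfies λ^(2n) + 1 ≈ 0 up to floating-point error, and the list is closed under conjugation, which is what lets the complex exponentials be regrouped into real sin/cos solutions.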
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2626666', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_374499
stackexchange
llm_judgeable_groundtruth_similarity
41142082
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this icon: I am going to work with it as drawable . Drawable myIcon = getResources().getDrawable( R.drawable.icon ); I need programmaticaly put some text on it (file extension). This is my desired result: . I can't make several static icons because I can receive arbitrary file extension Now provide the response and nothing else.
public BitmapDrawable writeOnDrawable(int drawableId, String text){
    Bitmap bm = BitmapFactory.decodeResource(getResources(), drawableId).copy(Bitmap.Config.ARGB_8888, true);
    Paint paint = new Paint();
    paint.setStyle(Style.FILL);
    paint.setColor(Color.BLACK);
    paint.setTextSize(20);
    Canvas canvas = new Canvas(bm);
    canvas.drawText(text, 0, bm.getHeight()/2, paint);
    return new BitmapDrawable(bm);
}

Use this method; it will help you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41142082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5115257/']}
jdg_374500
stackexchange
llm_judgeable_groundtruth_similarity
3693547
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to prove $$\small \int_0^1 \frac{\tan ^{-1}\left(\sqrt{x^2+4}\right)}{\left(x^2+2\right) \sqrt{x^2+4}} dx=-\frac{\pi \:\arctan \left(\frac{1}{\sqrt{2}}\right)}{8}+\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)\arctan \left(\sqrt{2}\right)}{4}+\frac{\pi }{4}\arctan \left(\frac{1}{\sqrt{5}}\right)\;?$$ I came across this Ahmed integral on the site "Art of problem solving", and have found no proof so far. (These two problems seem to be related though.) Any help will be appreciated! Now provide the response and nothing else.
To evaluate that integral we can use Feynman's trick: $$I=\int _0^1\frac{\arctan \left(\sqrt{x^2+4}\right)}{\left(x^2+2\right)\sqrt{x^2+4}}\:dx$$ $$I\left(a\right)=\int _0^1\frac{\arctan \left(a\sqrt{x^2+4}\right)}{\left(x^2+2\right)\sqrt{x^2+4}}\:dx$$ $$I'\left(a\right)=\int _0^1\frac{1}{\left(x^2+2\right)\left(a^2x^2+4a^2+1\right)}\:dx=\frac{1}{2a^2+1}\int _0^1\frac{1}{x^2+2}-\frac{a^2}{a^2x^2+4a^2+1}\:dx$$ $$=\frac{1}{2a^2+1}\left(\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)}{\sqrt{2}}-\frac{a\arctan \left(\frac{a}{\sqrt{4a^2+1}}\right)}{\sqrt{4a^2+1}}\right)$$ Now lets integrate again: $$\int _1^{\infty }I'\left(a\right)\:da=\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)}{\sqrt{2}}\int _1^{\infty }\frac{1}{2a^2+1}\:da-\underbrace{\int _1^{\infty }\frac{a\arctan \left(\frac{a}{\sqrt{4a^2+1}}\right)}{\sqrt{4a^2+1}\left(2a^2+1\right)}\:da}_{a=\frac{1}{x}}$$ $$\frac{\pi }{2}\int _0^1\frac{1}{\left(x^2+2\right)\sqrt{x^2+4}}dx-I\:=\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)}{2\sqrt{2}}\left(\frac{\pi \sqrt{2}}{2}-\sqrt{2}\arctan \left(\sqrt{2}\right)\right)-\int _0^1\frac{\arctan \left(\frac{1}{\sqrt{x^2+4}}\right)}{\left(x^2+2\right)\sqrt{x^2+4}}\:dx$$ $$=\frac{\pi \arctan \left(\frac{1}{\sqrt{2}}\right)}{4}-\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)\arctan \left(\sqrt{2}\right)}{2}-\frac{\pi }{2}\int _0^1\frac{1}{\left(x^2+2\right)\sqrt{x^2+4}}\:dx+\underbrace{\int _0^1\frac{\arctan \left(\sqrt{x^2+4}\right)}{\left(x^2+2\right)\sqrt{x^2+4}}\:dx}_{I}$$ $$-2I\:=\frac{\pi \:\arctan \left(\frac{1}{\sqrt{2}}\right)}{4}-\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)\arctan \left(\sqrt{2}\right)}{2}-\pi \underbrace{\int _0^1\frac{1}{\left(x^2+2\right)\sqrt{x^2+4}}\:dx}_{t=\arctan \left(\frac{x}{\sqrt{x^2+4}}\right)}$$ $$I\:=-\frac{\pi \:\arctan \left(\frac{1}{\sqrt{2}}\right)}{8}+\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)\arctan \left(\sqrt{2}\right)}{4}+\frac{\pi }{4}\int _0^{\arctan \left(\frac{1}{\sqrt{5}}\right)}\:dt$$ $$\boxed{I=-\frac{\pi \:\arctan 
\left(\frac{1}{\sqrt{2}}\right)}{8}+\frac{\arctan \left(\frac{1}{\sqrt{2}}\right)\arctan \left(\sqrt{2}\right)}{4}+\frac{\pi }{4}\arctan \left(\frac{1}{\sqrt{5}}\right)}$$ This numerically agrees with Wolfram Alpha.
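As an independent sanity check, the boxed closed form can be compared against straightforward numerical quadrature. A self-contained Python check using composite Simpson's rule (the step count is an arbitrary choice):

```python
import math

def integrand(x):
    s = math.sqrt(x * x + 4)
    return math.atan(s) / ((x * x + 2) * s)

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

closed_form = (-math.pi * math.atan(1 / math.sqrt(2)) / 8
               + math.atan(1 / math.sqrt(2)) * math.atan(math.sqrt(2)) / 4
               + math.pi / 4 * math.atan(1 / math.sqrt(5)))
```

Both sides come out to about 0.23558, confirming the agreement with Wolfram Alpha mentioned above.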
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3693547', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/628759/']}
jdg_374501
stackexchange
llm_judgeable_groundtruth_similarity
13150449
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to define a function that takes an unsigned int as argument and returns an int congruent modulo UINT_MAX+1 to the argument. A first attempt might look like this:

int unsigned_to_signed(unsigned n)
{
    return static_cast<int>(n);
}

But as any language lawyer knows, casting from unsigned to signed for values larger than INT_MAX is implementation-defined. I want to implement this such that (a) it only relies on behavior mandated by the spec; and (b) it compiles into a no-op on any modern machine and optimizing compiler. As for bizarre machines... If there is no signed int congruent modulo UINT_MAX+1 to the unsigned int, let's say I want to throw an exception. If there is more than one (I am not sure this is possible), let's say I want the largest one. OK, second attempt:

int unsigned_to_signed(unsigned n)
{
    int int_n = static_cast<int>(n);
    if (n == static_cast<unsigned>(int_n))
        return int_n;
    // else do something long and complicated
}

I do not much care about the efficiency when I am not on a typical twos-complement system, since in my humble opinion that is unlikely. And if my code becomes a bottleneck on the omnipresent sign-magnitude systems of 2050, well, I bet someone can figure that out and optimize it then. Now, this second attempt is pretty close to what I want. Although the cast to int is implementation-defined for some inputs, the cast back to unsigned is guaranteed by the standard to preserve the value modulo UINT_MAX+1. So the conditional does check exactly what I want, and it will compile into nothing on any system I am likely to encounter. However... I am still casting to int without first checking whether it will invoke implementation-defined behavior. On some hypothetical system in 2050 it could do who-knows-what. So let's say I want to avoid that. Question: What should my "third attempt" look like?
To recap, I want to:

- Cast from unsigned int to signed int
- Preserve the value mod UINT_MAX+1
- Invoke only standard-mandated behavior
- Compile into a no-op on a typical twos-complement machine with optimizing compiler

[Update] Let me give an example to show why this is not a trivial question. Consider a hypothetical C++ implementation with the following properties:

- sizeof(int) equals 4
- sizeof(unsigned) equals 4
- INT_MAX equals 32767
- INT_MIN equals -2^32 + 32768
- UINT_MAX equals 2^32 - 1
- Arithmetic on int is modulo 2^32 (into the range INT_MIN through INT_MAX)
- std::numeric_limits<int>::is_modulo is true
- Casting unsigned n to int preserves the value for 0 <= n <= 32767 and yields zero otherwise

On this hypothetical implementation, there is exactly one int value congruent (mod UINT_MAX+1) to each unsigned value. So my question would be well-defined. I claim that this hypothetical C++ implementation fully conforms to the C++98, C++03, and C++11 specifications. I admit I have not memorized every word of all of them... But I believe I have read the relevant sections carefully.
The C99 and C11 standards are very specific about the representation of signed integers; indeed, they only permit twos-complement, ones-complement, and sign-magnitude (section 6.2.6.2 paragraph (2); ). But C++ is not C. As it turns out, this fact lies at the very heart of my question. The original C++98 standard was based on the much older C89, which says (section 3.1.2.5): For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements. The range of nonnegative values of a signed integer type is a subrange of the corresponding unsigned integer type, and the representation of the same value in each type is the same. C89 says nothing about only having one sign bit or only allowing twos-complement/ones-complement/sign-magnitude. The C++98 standard adopted this language nearly verbatim (section 3.9.1 paragraph (3)): For each of the signed integer types, there exists a corresponding (but different) unsigned integer type : " unsigned char ", " unsigned short int ", " unsigned int ", and " unsigned long int ", each of which occupies the same amount of storage and has the same alignment requirements (3.9) as the corresponding signed integer type ; that is, each signed integer type has the same object representation as its corresponding unsigned integer type. The range of nonnegative values of a signed integer type is a subrange of the corresponding unsigned integer type, and the value representation of each corresponding signed/unsigned type shall be the same. The C++03 standard uses essentially identical language, as does C++11. No standard C++ spec constrains its signed integer representations to any C spec, as far as I can tell. And there is nothing mandating a single sign bit or anything of the kind. 
All it says is that non-negative signed integers must be a subrange of the corresponding unsigned. So, again I claim that INT_MAX=32767 with INT_MIN=-2^32+32768 is permitted. If your answer assumes otherwise, it is incorrect unless you cite a C++ standard proving me wrong. Now provide the response and nothing else.
Expanding on user71404's answer:

int f(unsigned x)
{
    if (x <= INT_MAX)
        return static_cast<int>(x);
    if (x >= INT_MIN)
        return static_cast<int>(x - INT_MIN) + INT_MIN;
    throw x; // Or whatever else you like
}

If x >= INT_MIN (keep the promotion rules in mind, INT_MIN gets converted to unsigned ), then x - INT_MIN <= INT_MAX , so this won't have any overflow. If that is not obvious, take a look at the claim "If x >= -4u , then x + 4 <= 3 .", and keep in mind that INT_MAX will be equal to at least the mathematical value of -INT_MIN - 1. On the most common systems, where !(x <= INT_MAX) implies x >= INT_MIN , the optimizer should be able (and on my system, is able) to remove the second check, determine that the two return statements can be compiled to the same code, and remove the first check too. Generated assembly listing:

__Z1fj:
LFB6:
    .cfi_startproc
    movl 4(%esp), %eax
    ret
    .cfi_endproc

The hypothetical implementation in your question:

- INT_MAX equals 32767
- INT_MIN equals -2^32 + 32768

is not possible, so does not need special consideration. INT_MIN will be equal to either -INT_MAX , or to -INT_MAX - 1 . This follows from C's representation of integer types (6.2.6.2), which requires n bits to be value bits, one bit to be a sign bit, and only allows one single trap representation (not including representations that are invalid because of padding bits), namely the one that would otherwise represent negative zero / -INT_MAX - 1 . C++ doesn't allow any integer representations beyond what C allows. Update : Microsoft's compiler apparently does not notice that x > 10 and x >= 11 test the same thing. It only generates the desired code if x >= INT_MIN is replaced with x > INT_MIN - 1u , which it can detect as the negation of x <= INT_MAX (on this platform). [Update from questioner (Nemo), elaborating on our discussion below] I now believe this answer works in all cases, but for complicated reasons.
I am likely to award the bounty to this solution, but I want to capture all the gory details in case anybody cares. Let's start with C++11, section 18.3.3: Table 31 describes the header <climits> . ... The contents are the same as the Standard C library header <limits.h> . Here, "Standard C" means C99, whose specification severely constrains the representation of signed integers. They are just like unsigned integers, but with one bit dedicated to "sign" and zero or more bits dedicated to "padding". The padding bits do not contribute to the value of the integer, and the sign bit contributes only as twos-complement, ones-complement, or sign-magnitude. Since C++11 inherits the <climits> macros from C99, INT_MIN is either -INT_MAX or -INT_MAX-1, and hvd's code is guaranteed to work. (Note that, due to the padding, INT_MAX could be much less than UINT_MAX/2... But thanks to the way signed->unsigned casts work, this answer handles that fine.) C++03/C++98 is trickier. It uses the same wording to inherit <climits> from "Standard C", but now "Standard C" means C89/C90. All of these -- C++98, C++03, C89/C90 -- have the wording I give in my question, but also include this (C++03 section 3.9.1 paragraph 7): The representations of integral types shall define values by use of a pure binary numeration system.(44) [ Example : this International Standard permits 2’s complement, 1’s complement and signed magnitude representations for integral types.] Footnote (44) defines "pure binary numeration system": A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position. What is interesting about this wording is that it contradicts itself, because the definition of "pure binary numeration system" does not permit a sign/magnitude representation! 
It does allow the high bit to have, say, the value -2^(n-1) (twos complement) or -(2^(n-1) - 1) (ones complement). But there is no value for the high bit that results in sign/magnitude. Anyway, my "hypothetical implementation" does not qualify as "pure binary" under this definition, so it is ruled out. However, the fact that the high bit is special means we can imagine it contributing any value at all: A small positive value, huge positive value, small negative value, or huge negative value. (If the sign bit can contribute -(2^(n-1) - 1), why not -(2^(n-1) - 2)? etc.) So, let's imagine a signed integer representation that assigns a wacky value to the "sign" bit. A small positive value for the sign bit would result in a positive range for int (possibly as large as unsigned ), and hvd's code handles that just fine. A huge positive value for the sign bit would result in int having a maximum larger than unsigned , which is forbidden. A huge negative value for the sign bit would result in int representing a non-contiguous range of values, and other wording in the spec rules that out. Finally, how about a sign bit that contributes a small negative quantity? Could we have a 1 in the "sign bit" contribute, say, -37 to the value of the int? So then INT_MAX would be (say) 2^31 - 1 and INT_MIN would be -37? This would result in some numbers having two representations... But ones-complement gives two representations to zero, and that is allowed according to the "Example". Nowhere does the spec say that zero is the only integer that might have two representations. So I think this new hypothetical is allowed by the spec. Indeed, any negative value from -1 down to -INT_MAX-1 appears to be permissible as a value for the "sign bit", but nothing smaller (lest the range be non-contiguous). In other words, INT_MIN might be anything from -INT_MAX-1 to -1. Now, guess what? 
For the second cast in hvd's code to avoid implementation-defined behavior, we just need x - (unsigned)INT_MIN less than or equal to INT_MAX . We just showed INT_MIN is at least -INT_MAX-1 . Obviously, x is at most UINT_MAX . Casting a negative number to unsigned is the same as adding UINT_MAX+1 . Put it all together:

    x - (unsigned)INT_MIN <= INT_MAX

if and only if

    UINT_MAX - (INT_MIN + UINT_MAX + 1) <= INT_MAX
    -INT_MIN - 1 <= INT_MAX
    -INT_MIN <= INT_MAX + 1
    INT_MIN >= -INT_MAX - 1

That last is what we just showed, so even in this perverse case, the code actually works. That exhausts all of the possibilities, thus ending this extremely academic exercise. Bottom line: There is some seriously under-specified behavior for signed integers in C89/C90 that got inherited by C++98/C++03. It is fixed in C99, and C++11 indirectly inherits the fix by incorporating <limits.h> from C99. But even C++11 retains the self-contradictory "pure binary representation" wording...
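If you want to experiment with the value mapping this conversion is meant to implement, here is a small Python sketch (my own illustration, not part of hvd's answer). It assumes a 32-bit two's-complement int, which the C++ code above does not require; Python's unbounded integers let us write the wraparound directly, and `struct` gives us an independent bit-reinterpretation to check against.

```python
import struct

INT_MAX = 2**31 - 1  # assumption of this sketch: 32-bit two's-complement int

def to_signed(x: int) -> int:
    """Value mapping of f(): unsigned values above INT_MAX wrap to negatives."""
    assert 0 <= x < 2**32
    if x <= INT_MAX:
        return x
    # Same result as (x - INT_MIN) + INT_MIN in the C++ version, which is
    # phrased that way only to dodge signed overflow in C++.
    return x - 2**32

# Cross-check against reinterpreting the same 32 bits as a signed int.
for x in (0, 1, INT_MAX, INT_MAX + 1, 2**32 - 1):
    ref = struct.unpack("<i", struct.pack("<I", x))[0]
    assert to_signed(x) == ref

print(to_signed(2**32 - 1))  # -1
```

The point of the cross-check is that on a two's-complement machine the "subtract 2^32" mapping and a plain bit reinterpretation agree for every input.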
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13150449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/768469/']}
jdg_374502
stackexchange
llm_judgeable_groundtruth_similarity
29083
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assuming a service doesn't store a plain text password, am I right in saying that the answer to this question depends on the service? On a bad service, the hash length is related to the password length. On a good service, they're unrelated. That's my guess. Also, to what extent does doubling a password increase security? So instead of using password , how about using passwordpassword ? Please note, I'm not an expert, so a user-friendly explanation would be welcome. Now provide the response and nothing else.
If the length of the hash varies depending on input, then it is not a hash. A cryptographic hash function , by definition, offers a fixed-size output, regardless of the input. For instance, SHA-256 offers a 256-bit output, always; never more and never less. Password hashing is a specific activity which is often discussed in this site. The short answer is: use bcrypt . It handles the fine details properly -- and there are many. As for "doubling" the password, it adds security only insofar as you may elect to do it or not to do it, thus doubling the size of the space of possible passwords. If you do it systematically, then no, it does not increase security. Also, password length may leak (when you type a password, your office colleagues may easily hear the number of keystrokes, for instance). The strength of passwords does not come from length or special characters or other traditional rules; a password is exactly as strong as it could have been different . The number of possible values for a password (given your method for generating the password) is the key parameter. Anything "witty" here is a bad idea: remember that we must assume the attacker to be always as smart as you. So don't try to make passwords with an idea of "the attacker will never think of that !": the attacker already thought of it, before you. A good password is generated randomly, with dice, coin flipping, your cat chasing a toy mouse over a plank which has been divided into numbered areas, or even, should it come to that, with a computer running a random number generator .
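To make the fixed-size point concrete, here is a short Python illustration (my own sketch, using SHA-256 purely as an example of a cryptographic hash's fixed output size; for actual password storage, follow the bcrypt advice above). The digest length never changes, no matter how long the input is, so a properly hashed password cannot leak its length through the stored hash.

```python
import hashlib

# SHA-256 always produces 32 bytes (256 bits), regardless of input size.
for pw in ("a", "password", "passwordpassword", "x" * 10_000):
    digest = hashlib.sha256(pw.encode()).hexdigest()
    assert len(digest) == 64  # 32 bytes, hex-encoded

print(len(hashlib.sha256(b"password").hexdigest()))  # 64
```

Note this also shows why "doubling" the password is invisible in the output: `password` and `passwordpassword` produce digests of identical length.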
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/29083', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/18620/']}
jdg_374503
stackexchange
llm_judgeable_groundtruth_similarity
2072721
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have searched this site extensively but cannot find a solution. Here is the example of my query: SELECT ActivityID, Hours = (CASE WHEN ActivityTypeID <> 2 THEN FieldName = (Some Aggregate Sub Query), FieldName2 = (Some other aggregate sub query) WHEN ActivityTypeID = 2 THEN FieldName = (Some Aggregate Sub Query with diff result), FieldName2 = (Some Other Aggregate Sub Query with diff result) END) obviously I'm leaving out a lot of the query, I just wanted to see if it's possible. I know I probably could just do the "CASE" twice but figured I would ask... Now provide the response and nothing else.
The problem is that the CASE statement won't work in the way you're trying to use it. You can only use it to switch the value of one field in a query. If I understand what you're trying to do, you might need this:

    SELECT ActivityID,
        FieldName = CASE
                      WHEN ActivityTypeID <> 2 THEN (Some Aggregate Sub Query)
                      ELSE (Some Aggregate Sub Query with diff result)
                    END,
        FieldName2 = CASE
                       WHEN ActivityTypeID <> 2 THEN (Some Aggregate Sub Query)
                       ELSE (Some Aggregate Sub Query with diff result)
                     END
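As a runnable illustration of the one-CASE-per-column pattern, here is a hedged sketch using Python's sqlite3 with a made-up `activity` table (the table, columns, and SUM aggregate are my own example, not the asker's schema; SQLite uses `AS` aliases rather than the T-SQL `FieldName = expression` form shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity (ActivityID INT, ActivityTypeID INT, Hours REAL)")
conn.executemany(
    "INSERT INTO activity VALUES (?, ?, ?)",
    [(1, 1, 5.0), (2, 2, 3.0), (3, 1, 2.0)],
)

# One CASE expression per output column, each switching on ActivityTypeID.
rows = conn.execute("""
    SELECT ActivityID,
           CASE WHEN ActivityTypeID <> 2 THEN SUM(Hours) ELSE 0 END AS Field1,
           CASE WHEN ActivityTypeID <> 2 THEN 0 ELSE SUM(Hours) END AS Field2
    FROM activity
    GROUP BY ActivityID, ActivityTypeID
    ORDER BY ActivityID
""").fetchall()
print(rows)  # [(1, 5.0, 0), (2, 0, 3.0), (3, 2.0, 0)]
```

The key structural point carries over to any SQL dialect: CASE is an expression yielding one value, so each output column needs its own CASE rather than one CASE producing two columns.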
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2072721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/215187/']}
jdg_374504
stackexchange
llm_judgeable_groundtruth_similarity
54013
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In 2016, during the nomination of Justice Merrick Garland to the Supreme Court many Republicans cited the "Biden rule" to justify denying him a hearing. What is the truth behind this so-called "Biden rule"? I know after doing some research that the then senator Joe Biden on the Senate judiciary committee talked about not holding a nomination in a presidential election year. Is there more to this? Now provide the response and nothing else.
A short piece on PolitiFact published at the time in 2016 put Biden's 1992 speech in its political context: Biden's floor speech was on June 25, 1992, more than three months later in the election cycle than it is now. There was no Supreme Court vacancy to fill. There was no nominee to consider. The Senate never took a vote to adopt a rule to delay consideration of a nominee until after the election. Nonetheless, Biden took to the floor in a speech addressing the Senate president to urge delay if a vacancy did appear. But he didn't argue for a delay until the next president began his term, as McConnell is doing. He said the nomination process should be put off until after the election , which was on Nov. 3, 1992. The piece includes an extensive quote of Biden's own words, and links to a video of the speech on YouTube . EDIT: He repeatedly states the same thing multiple times, and it could not be more clear that he means to delay through the election season only, not through the nomination of the next president. Perhaps the most direct concise example is where he says, " the Senate Judiciary Committee should seriously consider not scheduling confirmation hearings on the nomination until after the political campaign season is over. "
{}
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/54013', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/28683/']}
jdg_374505
stackexchange
llm_judgeable_groundtruth_similarity
246061
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The popular description of black holes, especially outside the academia, is that they are highly dense objects; so dense that even light (as particle or as waves) cannot escape it once it falls inside the event horizon. But then we hear things like black holes are really empty, as the matter is no longer there. It was formed due to highly compact matter but now energy of that matter that formed it and whatever fell into it thereafter is converted into the energy of warped space-time. Hence, we cannot speak of extreme matter-density but only of extreme energy density. Black holes are then empty, given that emptiness is absence of matter. Aren't these descriptions contradictory that they are highly dense matter as well as empty? Also, if this explanation is true, it implies that if enough matter is gathered, matter ceases to exist. (Sorry! Scientifically and Mathematically immature but curious amateur here) Now provide the response and nothing else.
The phrase black hole tends to be used without specifying exactly what it means, and defining exactly what you mean is important to answer your question. The archetypal black hole is a mathematical object discovered by Karl Schwarzschild in 1915 - the Schwarzschild metric . The curious thing about this object is that it contains no matter. Technically it is a vacuum solution to Einstein's equations. There is a parameter in the Schwarzschild metric that looks like a mass, but this is actually the ADM mass i.e. it is a mass associated with the overall geometry. I suspect this is what you are referring to in your second paragraph. The other important fact you need to know about the Schwarzschild metric is that it is time independent i.e. it describes an object that doesn't change with time and therefore must have existed for an infinite time in the past and continue to exist for an infinite time into the future. Given all this you would be forgiven for wondering why we bother with such an obviously unrealistic object. The answer is that we expect the Schwarzschild metric to be a good approximation to a real black hole, that is a collapsing star will rapidly form something that is in practice indistinguishable from a Schwarzschild black hole - actually it would form a Kerr black hole since all stars (probably) rotate. To describe a real star collapsing you need a different metric. This turns out to be fiendishly complicated, though there is a simplified model called the Oppenheimer-Snyder metric . Although the OS metric is unrealistically simplified we expect that it describes the main features of black hole formation, and for our purposes the two key points are:

1. the singularity takes an infinite coordinate time to form
2. the OS metric can't describe what happens at the singularity

Regarding point (1): time is a complicated thing in relativity. 
Someone watching the collapse from a safe distance experiences a different time from someone on the surface of the collapsing star and falling with it. For the outside observer the collapse slows as it approaches the formation of a black hole and the black hole never forms. That is, it takes an infinite time to form the black hole. This isn't the case for an observer falling in with the star. They see the singularity form in a finite (short!) time, but ... the Oppenheimer-Snyder metric becomes singular at the singularity, and that means it cannot describe what happens there. So we cannot tell what happens to the matter at the centre of the black hole. This isn't just because the OS metric is a simplified model, we expect that even the most sophisticated description of a collapse will have the same problem. The whole point of a singularity is that our equations become singular there and cannot describe what happens. All this means that there is no answer to your question, but hopefully I've given you a better idea of the physics involved. In particular matter doesn't mysteriously cease to exist in some magical way as a black hole forms.
{}
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/246061', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/112650/']}
jdg_374506
stackexchange
llm_judgeable_groundtruth_similarity
727901
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Im running a CentOS 6.2 server 1GB Memory98 GB Disk Size Elasticsearch version 1.4.2 with around 9000 documentsand using JDBC to retrieve the records from a mysql database Over the past few weeks it has crashed on average 2 times a day. When this happens the server ain’t reachable until it is rebooted.(no ssh access, 500 error when trying to reach the website, etc) Restarting Elasticsearch frequently seems to prevent the server from going down but i would much rather find and fix the source of the problem. These are the logs of a crash that happend a few days ago: Elasticsearch.log [2015-10-04 10:20:58,247][WARN ][monitor.jvm ] [Speedball] [gc][young][152940][2627] duration [2.2s], collections [1]/[3.9s], total [2.2s]/[12.5m], memory [89.8mb]->[35.9mb]/[197.2mb], all_pools {[young] [54.1mb]->[110.1kb]/[54.5mb]}{[survivor] [472.9kb]->[656.6kb]/[6.7mb]}{[old] [35.2mb]->[35.2mb]/[136mb]}[2015-10-04 10:21:05,538][INFO ][index.shard.service ] [Speedball] [rondreis][0] updating refresh_interval from [1s] to [-1000][2015-10-04 10:21:06,795][INFO ][index.shard.service ] [Speedball] [tblproductoffer][0] updating refresh_interval from [1s] to [-1000][2015-10-04 10:22:54,653][WARN ][monitor.jvm ] [Speedball] [gc][young][153002][2628] duration [1.3s], collections [1]/[1.9s], total [1.3s]/[12.5m], memory [90.3mb]->[36.8mb]/[197.2mb], all_pools {[young] [54.5mb]->[24kb]/[54.5mb]}{[survivor] [656.6kb]->[1.5mb]/[6.7mb]}{[old] [35.2mb]->[35.2mb]/[136mb]}[2015-10-04 10:22:57,406][INFO ][index.shard.service ] [Speedball] [rondreis][0] updating refresh_interval from [-1000] to [1s][2015-10-04 10:22:59,971][WARN ][monitor.jvm ] [Speedball] [gc][young][153006][2630] duration [1.7s], collections [1]/[2.2s], total [1.7s]/[12.5m], memory [83.1mb]->[36.8mb]/[197.2mb], all_pools {[young] [42.8mb]->[45.3kb]/[54.5mb]}{[survivor] 
[5mb]->[835.8kb]/[6.7mb]}{[old] [35.2mb]->[35.9mb]/[136mb]}[2015-10-04 10:23:08,215][INFO ][river.jdbc.RiverMetrics ] pipeline org.xbib.elasticsearch.plugin.jdbc.RiverPipeline@f106d7f complete: river jdbc/rondreis metrics: 3304 rows, 0.1295726757143051 mean, (6.604331351617896 1.365257914334395 0.45761655479553015), ingest metrics: elapsed 2 minutes 2 seconds, 915.27 KB bytes, 283.0 bytes avg, 0.007 MB/s[2015-10-04 10:25:33,021][WARN ][monitor.jvm ] [Speedball] [gc][young][153151][2632] duration [1.5s], collections [1]/[1s], total [1.5s]/[12.5m], memory [64.7mb]->[91.3mb]/[197.2mb], all_pools {[young] [27.8mb]->[54.5mb]/[54.5mb]}{[survivor] [917.6kb]->[2.1mb]/[6.7mb]}{[old] [35.9mb]->[35.9mb]/[136mb]}[2015-10-04 10:25:43,724][INFO ][monitor.jvm ] [Speedball] [gc][young][153154][2639] duration [1.4s], collections [2]/[2.2s], total [1.4s]/[12.6m], memory [91.7mb]->[45.3mb]/[197.2mb], all_pools {[young] [48.4mb]->[168.9kb]/[54.5mb]}{[survivor] [4.3mb]->[4.2mb]/[6.7mb]}{[old] [39mb]->[40.9mb]/[136mb]}[2015-10-04 10:25:57,116][WARN ][monitor.jvm ] [Speedball] [gc][young][153165][2641] duration [3s], collections [1]/[3.3s], total [3s]/[12.7m], memory [90.7mb]->[52.1mb]/[197.2mb], all_pools {[young] [44.5mb]->[2kb]/[54.5mb]}{[survivor] [4.2mb]->[6.7mb]/[6.7mb]}{[old] [41.8mb]->[45.4mb]/[136mb]}[2015-10-04 10:26:06,283][WARN ][monitor.jvm ] [Speedball] [gc][young][153171][2642] duration [3.6s], collections [1]/[3.9s], total [3.6s]/[12.7m], memory [89.4mb]->[54.8mb]/[197.2mb], all_pools {[young] [37.2mb]->[4.3kb]/[54.5mb]}{[survivor] [6.7mb]->[5mb]/[6.7mb]}{[old] [45.4mb]->[49.8mb]/[136mb]}[2015-10-04 10:26:12,194][WARN ][monitor.jvm ] [Speedball] [gc][young][153172][2643] duration [4.9s], collections [1]/[5.9s], total [4.9s]/[12.8m], memory [54.8mb]->[57.9mb]/[197.2mb], all_pools {[young] [4.3kb]->[4.1kb]/[54.5mb]}{[survivor] [5mb]->[3.7mb]/[6.7mb]}{[old] [49.8mb]->[54.2mb]/[136mb]}[2015-10-04 10:26:38,171][WARN ][monitor.jvm ] [Speedball] [gc][young][153176][2644] 
duration [3s], collections [1]/[22.2s], total [3s]/[12.9m], memory [101.9mb]->[42.2mb]/[197.2mb], all_pools {[young] [43.9mb]->[4.1kb]/[54.5mb]}{[survivor] [3.7mb]->[0b]/[6.7mb]}{[old] [54.2mb]->[42.2mb]/[136mb]}[2015-10-04 10:26:39,186][WARN ][monitor.jvm ] [Speedball] [gc][old][153176][209] duration [18.3s], collections [1]/[22.2s], total [18.3s]/[4.4m], memory [101.9mb]->[42.2mb]/[197.2mb], all_pools {[young] [43.9mb]->[4.1kb]/[54.5mb]}{[survivor] [3.7mb]->[0b]/[6.7mb]}{[old] [54.2mb]->[42.2mb]/[136mb]}[2015-10-04 10:26:47,155][WARN ][monitor.jvm ] [Speedball] [gc][young][153182][2646] duration [2.6s], collections [1]/[2.9s], total [2.6s]/[12.9m], memory [102.9mb]->[67.5mb]/[197.2mb], all_pools {[young] [47mb]->[6.1kb]/[54.5mb]}{[survivor] [6.7mb]->[6.7mb]/[6.7mb]}{[old] [49.1mb]->[60.8mb]/[136mb]}[2015-10-04 10:26:54,247][WARN ][monitor.jvm ] [Speedball] [gc][young][153186][2647] duration [1.4s], collections [1]/[3.1s], total [1.4s]/[12.9m], memory [106mb]->[77.7mb]/[197.2mb], all_pools {[young] [38.4mb]->[842.4kb]/[54.5mb]}{[survivor] [6.7mb]->[6.7mb]/[6.7mb]}{[old] [60.8mb]->[70.2mb]/[136mb]}[2015-10-04 10:27:03,745][WARN ][monitor.jvm ] [Speedball] [gc][young][153189][2648] duration [1.6s], collections [1]/[7s], total [1.6s]/[13m], memory [118.3mb]->[51.7mb]/[197.2mb], all_pools {[young] [41.3mb]->[8.1kb]/[54.5mb]}{[survivor] [6.7mb]->[0b]/[6.7mb]}{[old] [70.2mb]->[51.7mb]/[136mb]}[2015-10-04 10:27:58,542][INFO ][index.shard.service ] [Speedball] [tblproductoffer][0] updating refresh_interval from [-1000] to [1s][2015-10-04 10:28:27,313][INFO ][river.jdbc.RiverMetrics ] pipeline org.xbib.elasticsearch.plugin.jdbc.RiverPipeline@72a191c1 complete: river jdbc/tblproductoffer metrics: 76480 rows, 2.962182052284631 mean, (98.523212205934 28.85622315906601 10.274778138333478), ingest metrics: elapsed 7 minutes 21 seconds, 16.72 MB bytes, 229.0 bytes avg, 0.038 MB/s Messages Oct 4 10:47:33 kernel: ps invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, 
oom_score_adj=0 Oct 4 10:47:35 kernel: ps cpuset=/ mems_allowed=0 Oct 4 10:47:35 kernel: Pid: 31557, comm: ps Not tainted 2.6.32-504.3.3.el6.x86_64 #1 Oct 4 10:47:35 kernel: Call Trace: Oct 4 10:47:35 kernel: [<ffffffff810d40c1>] ? cpuset_print_task_mems_allowed+0x91/0xb0 Oct 4 10:47:35 kernel: [<ffffffff81127300>] ? dump_header+0x90/0x1b0 Oct 4 10:47:35 kernel: [<ffffffff8122eabc>] ? security_real_capable_noaudit+0x3c/0x70 Oct 4 10:47:35 kernel: [<ffffffff81127782>] ? oom_kill_process+0x82/0x2a0 Oct 4 10:47:35 kernel: [<ffffffff811276c1>] ? select_bad_process+0xe1/0x120 Oct 4 10:47:35 kernel: [<ffffffff81127bc0>] ? out_of_memory+0x220/0x3c0 Oct 4 10:47:35 kernel: [<ffffffff811344df>] ? __alloc_pages_nodemask+0x89f/0x8d0 Oct 4 10:47:35 kernel: [<ffffffff8116c69a>] ? alloc_pages_current+0xaa/0x110 Oct 4 10:47:35 kernel: [<ffffffff811246f7>] ? __page_cache_alloc+0x87/0x90 Oct 4 10:47:35 kernel: [<ffffffff811240de>] ? find_get_page+0x1e/0xa0 Oct 4 10:47:35 kernel: [<ffffffff81125697>] ? filemap_fault+0x1a7/0x500 Oct 4 10:47:35 kernel: [<ffffffff8114eae4>] ? __do_fault+0x54/0x530 Oct 4 10:47:35 kernel: [<ffffffff8114f0b7>] ? handle_pte_fault+0xf7/0xb00 Oct 4 10:47:35 kernel: [<ffffffff81198b9b>] ? pipe_wait+0x5b/0x80 Oct 4 10:47:35 kernel: [<ffffffff8114fcea>] ? handle_mm_fault+0x22a/0x300 Oct 4 10:47:35 kernel: [<ffffffff8104d0d8>] ? __do_page_fault+0x138/0x480 Oct 4 10:47:35 kernel: [<ffffffff8128c245>] ? _atomic_dec_and_lock+0x55/0x80 Oct 4 10:47:35 kernel: [<ffffffff811b07d0>] ? mntput_no_expire+0x30/0x110 Oct 4 10:47:35 kernel: [<ffffffff8118f981>] ? __fput+0x1a1/0x210 Oct 4 10:47:35 kernel: [<ffffffff8152ff7e>] ? do_page_fault+0x3e/0xa0 Oct 4 10:47:35 kernel: [<ffffffff8152d335>] ? 
page_fault+0x25/0x30 Oct 4 10:47:35 kernel: Mem-Info: Oct 4 10:47:35 kernel: Node 0 DMA per-cpu: Oct 4 10:47:35 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Oct 4 10:47:35 kernel: Node 0 DMA32 per-cpu: Oct 4 10:47:35 kernel: CPU 0: hi: 186, btch: 31 usd: 173 Oct 4 10:47:35 kernel: active_anon:87918 inactive_anon:88009 isolated_anon:5376 Oct 4 10:47:35 kernel: active_file:136 inactive_file:122 isolated_file:332 Oct 4 10:47:35 kernel: unevictable:0 dirty:0 writeback:284 unstable:0 Oct 4 10:47:35 kernel: free:12258 slab_reclaimable:3027 slab_unreclaimable:14974 Oct 4 10:47:35 kernel: mapped:478 shmem:22 pagetables:36389 bounce:0 Oct 4 10:47:35 kernel: Node 0 DMA free:4648kB min:668kB low:832kB high:1000kB active_anon:4400kB inactive_anon:4648kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):768kB isolated(file):48kB present:15368kB mlocked:0kB dirty:0kB writeback:20kB mapped:44kB shmem:0kB slab_reclaimable:36kB slab_unreclaimable:244kB kernel_stack:16kB pagetables:864kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:12 all_unreclaimable? no Oct 4 10:47:35 kernel: lowmem_reserve[]: 0 994 994 994 Oct 4 10:47:35 kernel: Node 0 DMA32 free:44384kB min:44384kB low:55480kB high:66576kB active_anon:347272kB inactive_anon:347388kB active_file:544kB inactive_file:488kB unevictable:0kB isolated(anon):20736kB isolated(file):1280kB present:1018016kB mlocked:0kB dirty:0kB writeback:1116kB mapped:1868kB shmem:88kB slab_reclaimable:12072kB slab_unreclaimable:59652kB kernel_stack:3440kB pagetables:144692kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:3034 all_unreclaimable? 
yes Oct 4 10:47:35 kernel: lowmem_reserve[]: 0 0 0 0 Oct 4 10:47:35 kernel: Node 0 DMA: 34*4kB 64*8kB 22*16kB 2*32kB 8*64kB 8*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 4648kB Oct 4 10:47:35 kernel: Node 0 DMA32: 2240*4kB 2782*8kB 373*16kB 81*32kB 34*64kB 3*128kB 2*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 44384kB Oct 4 10:47:35 kernel: 17908 total pagecache pages Oct 4 10:47:35 kernel: 17292 pages in swap cache Oct 4 10:47:35 kernel: Swap cache stats: add 2986384, delete 2969092, find 1357308/1573932 Oct 4 10:47:35 kernel: Free swap = 0kB Oct 4 10:47:35 kernel: Total swap = 2097148kB Oct 4 10:47:35 kernel: 262127 pages RAM Oct 4 10:47:35 kernel: 7514 pages reserved Oct 4 10:47:35 kernel: 132906 pages shared Oct 4 10:47:35 kernel: 231782 pages non-shared Oct 4 10:47:35 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Oct 4 10:47:35 kernel: [ 294] 0 294 2725 36 0 -17 -1000 udevd Oct 4 10:47:35 kernel: [ 812] 0 812 6905 74 0 -17 -1000 auditd Oct 4 10:47:35 kernel: [ 842] 0 842 62271 98 0 0 0 rsyslogd Oct 4 10:47:35 kernel: [ 863] 25 863 40013 68 0 0 0 named Oct 4 10:47:35 kernel: [ 878] 81 878 5358 38 0 0 0 dbus-daemon Oct 4 10:47:35 kernel: [ 905] 0 905 1020 37 0 0 0 acpid Oct 4 10:47:35 kernel: [ 925] 0 925 16672 39 0 -17 -1000 sshd Oct 4 10:47:35 kernel: [ 937] 0 937 13145 14 0 0 0 vsftpd Oct 4 10:47:35 kernel: [ 974] 0 974 27075 41 0 0 0 mysqld_safe Oct 4 10:47:35 kernel: [ 1289] 27 1289 237303 9646 0 0 0 mysqld Oct 4 10:47:35 kernel: [ 1369] 0 1369 4816 35 0 0 0 dovecot Oct 4 10:47:35 kernel: [ 1375] 97 1375 3246 43 0 0 0 anvil Oct 4 10:47:35 kernel: [ 1376] 0 1376 3278 45 0 0 0 log Oct 4 10:47:35 kernel: [ 1378] 0 1378 3589 44 0 0 0 config Oct 4 10:47:35 kernel: [ 1394] 498 1394 271775 3584 0 0 0 java Oct 4 10:47:35 kernel: [ 1408] 93 1408 23430 42 0 0 0 exim Oct 4 10:47:35 kernel: [ 1420] 0 1420 112748 522 0 0 0 httpd Oct 4 10:47:35 kernel: [ 1437] 0 1437 16578 52 0 0 0 nginx Oct 4 10:47:35 kernel: [ 1439] 496 1439 16713 139 0 0 0 
nginx Oct 4 10:47:35 kernel: [ 1441] 496 1441 16762 137 0 0 0 nginx Oct 4 10:47:35 kernel: [ 1442] 496 1442 16579 85 0 0 0 nginx Oct 4 10:47:35 kernel: [ 1450] 0 1450 13814 11 0 0 0 vesta-nginx Oct 4 10:47:35 kernel: [ 1452] 502 1452 13814 29 0 0 0 vesta-nginx Oct 4 10:47:35 kernel: [ 1462] 0 1462 34253 56 0 0 0 vesta-php Oct 4 10:47:35 kernel: [ 1471] 502 1471 34253 31 0 0 0 vesta-php Oct 4 10:47:35 kernel: [ 1472] 502 1472 34253 31 0 0 0 vesta-php Oct 4 10:47:35 kernel: [ 1482] 0 1482 29333 68 0 0 0 crond Oct 4 10:47:35 kernel: [ 1499] 0 1499 204119 485 0 0 0 fail2ban-server Oct 4 10:47:35 kernel: [ 1539] 0 1539 1016 34 0 0 0 mingetty Oct 4 10:47:35 kernel: [ 1541] 0 1541 1016 34 0 0 0 mingetty Oct 4 10:47:35 kernel: [ 1543] 0 1543 1016 34 0 0 0 mingetty Oct 4 10:47:35 kernel: [ 1545] 0 1545 1016 34 0 0 0 mingetty Oct 4 10:47:35 kernel: [ 1547] 0 1547 1016 34 0 0 0 mingetty Oct 4 10:47:35 kernel: [ 1549] 0 1549 1016 34 0 0 0 mingetty Oct 4 10:47:35 kernel: [ 1560] 0 1560 2724 31 0 -17 -1000 udevd Oct 4 10:47:35 kernel: [ 1561] 0 1561 2724 31 0 -17 -1000 udevd Oct 4 10:47:35 kernel: [10565] 0 10565 62866 239 0 0 0 httpd Oct 4 10:47:35 kernel: [10567] 503 10567 141456 1109 0 0 0 httpd Oct 4 10:47:35 kernel: [10568] 503 10568 141469 1260 0 0 0 httpd Oct 4 10:47:35 kernel: [10569] 503 10569 142692 1186 0 0 0 httpd Oct 4 10:47:35 kernel: [10570] 503 10570 141602 1183 0 0 0 httpd Oct 4 10:47:35 kernel: [10571] 503 10571 141610 589 0 0 0 httpd Oct 4 10:47:35 kernel: [10572] 503 10572 141756 1256 0 0 0 httpd Oct 4 10:47:35 kernel: [10573] 503 10573 141157 1020 0 0 0 httpd Oct 4 10:47:35 kernel: [10574] 503 10574 142386 1227 0 0 0 httpd Oct 4 10:47:35 kernel: [12531] 503 12531 141259 606 0 0 0 httpd Oct 4 10:47:35 kernel: [12532] 503 12532 141667 945 0 0 0 httpd Oct 4 10:47:35 kernel: [12533] 503 12533 141454 609 0 0 0 httpd Oct 4 10:47:35 kernel: [29641] 502 29641 35055 52 0 0 0 crond Oct 4 10:47:35 kernel: [29645] 0 29645 43896 51 0 0 0 sudo Oct 4 10:47:35 kernel: 
[29648] 0 29648 26582 46 0 0 0 v-backup-users Oct 4 10:47:35 kernel: [29958] 0 29958 26681 46 0 0 0 v-backup-user Oct 4 10:47:35 kernel: [30067] 0 30067 28084 67 0 0 0 find Oct 4 10:47:35 kernel: [30068] 0 30068 30701 45 0 0 0 tar Oct 4 10:47:35 kernel: [30826] 503 30826 116293 1098 0 0 0 httpd Oct 4 10:47:35 kernel: [30834] 503 30834 116141 630 0 0 0 httpd Oct 4 10:47:35 kernel: [30835] 503 30835 116329 1236 0 0 0 httpd Oct 4 10:47:35 kernel: [30837] 503 30837 116329 997 0 0 0 httpd Oct 4 10:47:35 kernel: [30838] 503 30838 116397 1242 0 0 0 httpd Oct 4 10:47:35 kernel: [30839] 503 30839 116329 1057 0 0 0 httpd Oct 4 10:47:35 kernel: [30844] 503 30844 116236 1131 0 0 0 httpd Oct 4 10:47:35 kernel: [30845] 503 30845 116108 673 0 0 0 httpd Oct 4 10:47:35 kernel: [30848] 503 30848 140602 988 0 0 0 httpd Oct 4 10:47:35 kernel: [30849] 503 30849 116104 614 0 0 0 httpd — ——continues for a while— — Oct 4 10:47:35 kernel: [31576] 503 31576 112875 833 0 0 0 httpd Oct 4 10:47:35 kernel: [31577] 503 31577 112879 978 0 0 0 httpd Oct 4 10:47:35 kernel: [31578] 503 31578 112785 797 0 0 0 httpd Oct 4 10:47:35 kernel: [31579] 503 31579 112876 841 0 0 0 httpd Oct 4 10:47:35 kernel: [31580] 503 31580 112781 496 0 0 0 httpd Oct 4 10:47:35 kernel: [31581] 503 31581 112782 726 0 0 0 httpd Oct 4 10:47:35 kernel: [31582] 503 31582 112875 845 0 0 0 httpd Oct 4 10:47:35 kernel: [31583] 48 31583 112748 480 0 0 0 httpd Oct 4 10:47:35 kernel: [31584] 48 31584 112748 480 0 0 0 httpd Oct 4 10:47:35 kernel: [31585] 48 31585 112748 480 0 0 0 httpd Oct 4 10:47:35 kernel: [31586] 0 31586 112748 445 0 0 0 httpd Oct 4 10:47:35 kernel: [31587] 48 31587 112748 441 0 0 0 httpd Oct 4 10:47:35 kernel: [31588] 0 31588 112748 414 0 0 0 httpd Oct 4 10:47:35 kernel: [31589] 48 31589 112748 434 0 0 0 httpd Oct 4 10:47:35 kernel: [31590] 48 31590 112748 441 0 0 0 httpd Oct 4 10:47:35 kernel: [31591] 0 31591 112748 423 0 0 0 httpd Oct 4 10:47:35 kernel: [31592] 0 31592 112748 404 0 0 0 httpd Oct 4 10:47:35 
kernel: [31593] 0 31593 112748 445 0 0 0 httpd Oct 4 10:47:35 kernel: Out of memory: Kill process 1394 (java) score 107 or sacrifice child Oct 4 10:47:35 kernel: Killed process 1394, UID 498, (java) total-vm:1087100kB, anon-rss:14132kB, file-rss:204kB And the elastic search log from when it went down today (was able to revive it with ssh this time without rebooting by using ‘service elasticsearch reboot’). [2015-10-09 10:21:08,062][INFO ][index.shard.service ] [Black Panther] [tblproductoffer][0] updating refresh_interval from [1s] to [-1000][2015-10-09 10:22:22,647][DEBUG][action.admin.indices.settings.put] [Black Panther] failed to update settings on indices [rondreis]org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (update-settings) within 30s at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:263) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745)[2015-10-09 10:22:23,002][WARN ][monitor.jvm ] [Black Panther] [gc][young][83250][1221] duration [5.5s], collections [1]/[14.2s], total [5.5s]/[3.8m], memory [125.8mb]->[74.4mb]/[197.2mb], all_pools {[young] [53.1mb]->[1.5mb]/[54.5mb]}{[survivor] [1018kb]->[1.1mb]/[6.7mb]}{[old] [71.7mb]->[71.7mb]/[136mb]}[2015-10-09 10:22:23,101][ERROR][river.jdbc.SimpleRiverFlow] failed to process cluster event (update-settings) within 30sorg.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (update-settings) within 30s at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:263) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745)[2015-10-09 
10:25:02,504][WARN ][monitor.jvm ] [Black Panther] [gc][young][83345][1222] duration [3.4s], collections [1]/[4.2s], total [3.4s]/[3.9m], memory [127.3mb]->[76mb]/[197.2mb], all_pools {[young] [54.5mb]->[1.1mb]/[54.5mb]}{[survivor] [1.1mb]->[3.1mb]/[6.7mb]}{[old] [71.7mb]->[71.7mb]/[136mb]}[2015-10-09 10:25:42,385][WARN ][monitor.jvm ] [Black Panther] [gc][young][83361][1223] duration [8.8s], collections [1]/[9.7s], total [8.8s]/[4m], memory [128.8mb]->[75.4mb]/[197.2mb], all_pools {[young] [53.9mb]->[66kb]/[54.5mb]}{[survivor] [3.1mb]->[2.8mb]/[6.7mb]}{[old] [71.7mb]->[72.5mb]/[136mb]}[2015-10-09 10:25:45,132][INFO ][river.jdbc.RiverMetrics ] pipeline org.xbib.elasticsearch.plugin.jdbc.RiverPipeline@322e6dcf complete: river jdbc/rondreis metrics: 3288 rows, 0.12815869356635873 mean, (6.572349117469625 1.358646495863042 0.45540049399748883), ingest metrics: elapsed 4 minutes 38 seconds, 908.23 KB bytes, 282.0 bytes avg, 0.003 MB/s[2015-10-09 10:27:44,274][INFO ][node ] [Black Panther] stopping ...[2015-10-09 10:28:01,713][WARN ][river.jdbc.RiverThread ] interrupted[2015-10-09 10:29:45,413][WARN ][common.jna ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. 
Increase RLIMIT_MEMLOCK (ulimit).[2015-10-09 10:30:01,866][INFO ][node ] [Vance Astrovik] version[1.4.2], pid[11796], build[927caff/2014-12-16T14:11:12Z][2015-10-09 10:30:01,867][INFO ][node ] [Vance Astrovik] initializing ...[2015-10-09 10:30:04,408][INFO ][plugins ] [Vance Astrovik] loaded [jdbc-1.4.0.6-664f819], sites [][2015-10-09 10:32:14,502][INFO ][node ] [Vance Astrovik] initialized[2015-10-09 10:32:14,502][INFO ][node ] [Vance Astrovik] starting ...[2015-10-09 10:32:17,555][INFO ][transport ] [Vance Astrovik] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/185.13.224.246:9300]}[2015-10-09 10:32:18,209][INFO ][discovery ] [Vance Astrovik] elasticsearch/3s3H31MrSCmyEze2t2L_jA[2015-10-09 10:32:22,383][INFO ][cluster.service ] [Vance Astrovik] new_master [Vance Astrovik][3s3H31MrSCmyEze2t2L_jA][sensead.localdomain][inet[/185.13.224.246:9300]], reason: zen-disco-join (elected_as_master)[2015-10-09 10:32:22,864][INFO ][http ] [Vance Astrovik] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/185.13.224.246:9200]}[2015-10-09 10:32:22,864][INFO ][node ] [Vance Astrovik] started[2015-10-09 10:32:34,032][INFO ][gateway ] [Vance Astrovik] recovered [3] indices into cluster_state[2015-10-09 10:32:37,313][DEBUG][action.search.type ] [Vance Astrovik] All shards failed for phase: [query][2015-10-09 10:32:40,213][INFO ][river.jdbc.JDBCRiver ] scheduled river instance with cron expressions [0 21 * * * ?][2015-10-09 10:32:42,423][INFO ][river.jdbc.JDBCRiver ] scheduled river instance with cron expressions [0 21 * * * ?][2015-10-09 10:33:14,578][WARN ][indices.ttl ] [Vance Astrovik] failed to execute ttl purge The last line be printed to the log every 60 seconds. I've been trying to solve this issue for a couple of weeks now. Am I overseeing something simple or is something bigger happening? Now provide the response and nothing else.
Providers send aggregate reports at varying times. Many come at midnight UTC, but some providers like Microsoft often send hourly reports. Forensic reports come in near real time, usually about 5-10 minutes after the failing message hit the ISP's front-end inbound mailers. You can tell RUA and RUF reports apart pretty easily. An aggregate, or RUA, report typically starts like:

--report_section
Content-Type: text/plain;

This is a DMARC aggregate report for yourdomain.com
generated at Mon Mar 23 03:53:50 UTC 2015

while a forensic, or RUF, report generally starts like:

--61204608-60BE-4D26-9E07-F450C5B0D826
Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit

This is an email abuse report for an email message received from IP 10.10.10.10 on Mon Mar 23 04:01:02 UTC 2015.
The message below did not meet the sending domain's authentication policy.
For more information about this format please see http://www.ietf.org/rfc/rfc5965.txt.

--61204608-60BE-4D26-9E07-F450C5B0D826
Content-Type: message/feedback-report

You will also notice that an RUA report has (often gzipped) XML as an attachment, while the attachment for a RUF report is actual MIME. Few people try to manually read or verify either type of report. Services like Agari and Dmarcian are specifically built to interpret DMARC reporting.
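The wording differences above are stable enough to classify an incoming report mechanically. A minimal sketch (the marker strings are taken from the sample bodies above; a production parser should inspect the MIME structure and attachment types instead):

```python
def classify_dmarc_report(body):
    """Best-effort guess: 'rua' (aggregate), 'ruf' (forensic), or None."""
    if "DMARC aggregate report" in body:
        return "rua"
    if "email abuse report" in body or "message/feedback-report" in body:
        return "ruf"
    return None
```

This only looks at the human-readable preamble; for RUA reports the real payload is the (often gzipped) XML attachment.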
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/727901', 'https://serverfault.com', 'https://serverfault.com/users/315849/']}
jdg_374507
stackexchange
llm_judgeable_groundtruth_similarity
26881739
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm unable to get spring boot to automatically load my database schema when I start it up. Here is my application.properties: spring.datasource.url=jdbc:mysql://localhost:3306/testspring.datasource.username=testspring.datasource.password=spring.datasource.driverClassName = com.mysql.jdbc.Driverspring.jpa.database = MYSQLspring.jpa.show-sql = truespring.jpa.hibernate.ddl-auto = createspring.jpa.hibernate.dialect = org.hibernate.dialect.MySQL5Dialectspring.jpa.hibernate.naming_strategy = org.hibernate.cfg.ImprovedNamingStrategy Here is my Application.java: @EnableAutoConfiguration@ComponentScanpublic class Application { public static void main(final String[] args){ SpringApplication.run(Application.class, args); }} Here is a sample entity: @Entity@Table(name = "survey")public class Survey implements Serializable { private Long _id; private String _name; private List<Question> _questions; /** * @return survey's id. */ @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "survey_id", unique = true, nullable = false) public Long getId() { return _id; } /** * @return the survey name. */ @Column(name = "name") public String getName() { return _name; } /** * @return a list of survey questions. */ @OneToMany(mappedBy = "survey") @OrderBy("id") public List<Question> getQuestions() { return _questions; } /** * @param id the id to set to. */ public void setId(Long id) { _id = id; } /** * @param name the name for the question. */ public void setName(final String name) { _name = name; } /** * @param questions list of questions to set. */ public void setQuestions(List<Question> questions) { _questions = questions; }} Any ideas what I'm doing wrong? Now provide the response and nothing else.
There are several possible causes: Your entity classes must be in the same package, or in a sub-package, relative to the class annotated with @EnableAutoConfiguration. If they are not, your Spring app does not see them and hence will not create anything in the database. Check your config; it seems that you are using some Hibernate-specific options, so try replacing them with:

spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto=update
spring.datasource.driverClassName=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/test
spring.datasource.username=test
spring.datasource.password=

**Note that manually loading the driver class is unnecessary, because it is registered automatically, so don't bother with it. Your application.properties must be in the src/main/resources folder. If you do not specify the dialect correctly, Spring may fall back to the in-memory database bundled with Boot; in that case (as happened to me) you can see in the console output that it tries to connect to a local HSQL instance and fails to update the schema.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/26881739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3064251/']}
jdg_374508
stackexchange
llm_judgeable_groundtruth_similarity
129552
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to show $L^2 \in \mathsf{REG} \implies L \in \mathsf{REG}$ with $L^2 = \{w = w_1w_2 \mid w_1, w_2 \in L\}$ but I can't seem to find a proof that feels right. I first tried to show $L \in \mathsf{REG} \implies L^2 \in \mathsf{REG}$ , by constructing a machine $M$ that consists of two machines $A=A'$ with $A$ recognizing $L$ . $M$ has the same start states as $A$ but the final states of $A$ are put together with the starting states of $A'$ . Further $M$ uses the same accepting states as $A'$ . Hope that makes sense so far :D Now to show $L^2 \in \mathsf{REG} \implies L \in \mathsf{REG}$ I'd argue the same way, but: The machine $M'$ that accepts $L^2$ has to recognize $w_i \in L$ in some way, and because $L^2$ is regular, $M'$ has to be an NFA/DFA. So the machine has to check if $w_i \in L$ , and this can't be done using anything other than an NFA/DFA. This feels wrong and not very mathematical, so maybe somebody knows how to do this? Now provide the response and nothing else.
Your claim is false. Indeed, it is equivalent to prove that if a language $L$ is not regular, then also $L^2$ is not regular, but this is not true. Here Yuval Filmus gives (possibly) two examples of a non-regular language whose "square" is regular, namely $L = \{ 1^p \mid p \text{ is an odd prime}\}$ , under the Goldbach conjecture, and $L' = \{ 1^{a^2} \mid a \geq 0 \}^2=\{1^n \mid n \text{ is the sum of two squares}\}$ . For a simpler example, consider the set $$\text{NP}=\{1^n\mid n \text{ is not prime}\}$$ Clearly NP is not regular (otherwise its complement would also be regular), but NP $^2$ is regular, as its complement is finite. Indeed, if $n$ is even and greater than or equal to 8, then $n=4+(n-4)$ and $4$ and $n-4$ are not prime and so $1^n\in \text{NP}^2$ . Instead, if $n$ is odd and greater than or equal to 13, then $n=9+(n-9)$ and $9$ and $n-9$ are not prime, as $n-9$ is even and greater than 2, and again $1^n\in \text{NP}^2$ (actually, NP $^2=\{1^n\mid n\neq 3\}$ , here I don't consider 1 as a prime number). In general, if $L$ is a non-regular language sufficiently "sparse", then there is a good chance that $(L^C)(L^C)$ is cofinite, and then regular. For example, again on a unary alphabet, one can consider the non-regular language $$\text{L}=\{1^n\mid n \text{ is not a power of }2\},$$ then it is easy to see that $L^2=\{1^n\mid n\not\in\{1,2,4\}\}$ , which is regular (every power of $2$ greater than $4$ can be written as $3+m$ with $m$ odd and greater than $1$ , hence not a power of $2$ , while $1$ , $2$ and $4$ cannot be split into two non-powers). On a two-letter alphabet one can consider the example below of Bernardo Subercaseaux (I think there's a little misunderstanding in his comment, as here we are considering the concatenation of a non-regular language with itself), namely the language $L$ that is the complement of the language of well-parenthesized strings on the alphabet $\{(,)\}$ : in this case $L^2=\{(,)\}^*\setminus\{\varepsilon,(,)\}$ , again regular.
Another simple example is given by the non-regular language $$\text{L}=\{w\in\{a,b\}^*\mid w \text{ is not of the form }a^nb^n\text{ with }n>0\},$$ then it is easy to see that $L^2=\{a,b\}^*$ : indeed if $w\in L$ , then $w=w\varepsilon\in L^2$ , else if $w=a^nb^n$ then $w=a\cdot a^{n-1}b^n\in L^2$ .
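The claimed equality NP$^2 = \{1^n \mid n \neq 3\}$ from the answer above is easy to sanity-check by brute force for small $n$:

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def in_np_squared(n):
    # 1^n is in NP.NP iff n = i + j with both i and j non-prime
    # (0 and 1 count as non-prime here, as in the answer above).
    return any(not is_prime(i) and not is_prime(n - i) for i in range(n + 1))
```

Checking every n up to a few hundred confirms that n = 3 is the only length missing from NP squared.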
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/129552', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/106070/']}
jdg_374509
stackexchange
llm_judgeable_groundtruth_similarity
17152967
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

public interface MyInterface {
    public int myMethod();
}

public class SuperClass {
    public String myMethod() {
        return "Super Class";
    }
}

public class DerivedClass extends SuperClass implements MyInterface {
    public String myMethod() {...} // this line doesn't compile
    public int myMethod() {...}    // this is also unable to compile
}

When I try to compile DerivedClass it gives me the error: java: myMethod() in interfaceRnD.DerivedClass cannot override myMethod() in interfaceRnD.SuperClass; return type int is not compatible with java.lang.String. How should I solve this issue? Now provide the response and nothing else.
The error results from the fact that a call to myMethod will be ambiguous - which of the two methods should be called? From JLS §8.4.2 : It is a compile-time error to declare two methods with override-equivalent signatures in a class. The return type of a method is not a part of its signature, so you are receiving an error in accordance with the statement above. Assuming you can't simply rename the conflicting methods, you can't use inheritance in this case, and will need to use an alternative like composition :

class DerivedClass implements MyInterface {
    private SuperClass sc = new SuperClass();

    public String myMethod1() {
        return sc.myMethod();
    }

    public int myMethod() {
        return 0;
    }
}
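The same delegate-and-rename idea, sketched in Python purely for illustration (Python has no compile-time signatures, so it never hits this exact clash, but the composition pattern carries over directly):

```python
class SuperClass:
    def my_method(self):
        return "Super Class"

class DerivedClass:
    """Composition instead of inheritance: hold a SuperClass delegate
    and expose the two conflicting behaviours under distinct names."""

    def __init__(self):
        self._sc = SuperClass()  # delegate created up front

    def my_method_str(self):   # plays the role of myMethod1() above
        return self._sc.my_method()

    def my_method_int(self):   # plays the role of the interface method
        return 0
```

Callers then pick the behaviour explicitly by name, which is exactly what resolves the ambiguity.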
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17152967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2494264/']}
jdg_374510
stackexchange
llm_judgeable_groundtruth_similarity
221170
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I declare a private function in Fortran? Now provide the response and nothing else.
This will only work with a Fortran 90 module. In your module declaration, you can specify the access limits for a list of variables and routines using the "public" and "private" keywords. I usually find it helpful to use the private keyword by itself initially, which specifies that everything within the module is private unless explicitly marked public. In the code sample below, subroutine_1() and function_1() are accessible from outside the module via the requisite "use" statement, but any other variable/subroutine/function will be private.

module so_example
  implicit none
  private

  public :: subroutine_1
  public :: function_1

contains

  ! Implementation of subroutines and functions goes here

end module so_example
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/221170', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3834/']}
jdg_374511
stackexchange
llm_judgeable_groundtruth_similarity
205739
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am a bit confused about whether my chosen AND gate should have a pull-up or pull-down resistor on the outputs, as I don't see any sentence containing words like pull-up/down or sink or source ... but as far as I understand, TTL ICs usually need a pull-up/down resistor on the output. When I look at the schematic, I would say it is capable of making both high and low, as there are transistors connecting the output to both Vcc and Vdd. Here's the datasheet: triple input AND . I am also using a NOR gate whose datasheet states that it has a totem-pole output: Diodes Incorporated and part number: 74AHCT1G02W5-7 (sorry, not enough reputation to post more than 1 link :D). Am I right that it is capable of making both low and high output, but the high level is only about 3.5V because of voltage drops across transistor junctions, so I would have to use a pull-up resistor on the output so it would be 5V? Schematic snippet I have at the moment: Some additional explanation is also much appreciated, so that in the future I have the know-how to read the necessary info out of a datasheet. Now provide the response and nothing else.
An open-collector or open-drain output would always need a pull-up resistor to get a defined voltage level. But you are not using such an output. TTL outputs have defined voltage levels for both low and high signals, but the high level can be as low as 2.4 V (depending on how much current the output must supply). This is no problem for TTL inputs (which interpret any voltage above 2 V as high), but can be a problem for other logic families. CMOS outputs go up to almost the supply voltage, and go down to almost ground level (because they do not use bipolar transistors but MOSFETs). Therefore, CMOS inputs are quite strict (typically, low-level signals must be below 30 % of V_CC, and high-level signals, above 70 %). If you want to connect a TTL output to a CMOS input, you need a pull-up resistor to raise the high-level signal. (Connecting a CMOS output to a TTL input works just fine.) (In the datasheets, the guaranteed output levels are specified as V_OL and V_OH, and the required input levels as V_IL and V_IH. An output and an input match if \$V_{OL} \leq V_{IL}\$ and \$V_{OH} \geq V_{IH}\$.) LS is a TTL family; HCT is a CMOS family that has TTL-compatible inputs. So in your case, you do not need a pull-up resistor to get a correct voltage level. There might be other reasons to use pull-up/-down resistors, for example, to get a defined signal when the chips are still in reset and do not drive their outputs either way.
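The matching rule at the end (V_OL ≤ V_IL and V_OH ≥ V_IH) is mechanical, so it is easy to script a compatibility check. The numbers below are typical textbook levels for 5 V parts, used purely for illustration; always take the real limits from the actual datasheets:

```python
def levels_compatible(out_levels, in_levels):
    """Output drives input reliably iff V_OL <= V_IL and V_OH >= V_IH."""
    return (out_levels["v_ol"] <= in_levels["v_il"]
            and out_levels["v_oh"] >= in_levels["v_ih"])

# Illustrative 5 V levels (assumed, not guaranteed datasheet values):
LS_OUTPUT = {"v_ol": 0.5, "v_oh": 2.7}   # 74LS-style TTL output
HCT_INPUT = {"v_il": 0.8, "v_ih": 2.0}   # TTL-compatible CMOS (HCT) input
HC_INPUT = {"v_il": 1.5, "v_ih": 3.5}    # plain CMOS (HC) input: 30 % / 70 % of 5 V
```

With these example numbers, an LS output satisfies an HCT input but fails a plain HC input, which is exactly why the answer says no pull-up is needed for HCT but one would be for HC.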
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/205739', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/94229/']}
jdg_374512
stackexchange
llm_judgeable_groundtruth_similarity
1307892
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A quadratic polynomial $p(x)$ is such that $p(x)$ never takes any negative values. Also, $p(0)=8$ and $p(8)=0$. What would $p(-4)$ be? I tried doing it by taking the minimum value as zero, that is, the vertex of the parabola at $x=8$. How do we proceed after finding the values of $a$ and $b$ in the standard form $ax^2 + bx + c$ ? Could this be done in a shorter way graphically? Now provide the response and nothing else.
Since $p(x)$ is always non-negative, and $p(8) = 0$, it follows that $(8, 0)$ must be the vertex. The vertex form of a parabola is $a(x-h)^2 + k$, and we already know that $(h,k)$ is $(8,0)$. That gives us $p(x) = a(x-8)^2$. Now simply plug in $p(0) = 8$ to get $a = \frac{1}{8}$ and thus find $p$: $$p(x) = \frac{1}{8}(x-8)^2$$ So: $$p(-4) = 18$$
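The result is easy to double-check numerically:

```python
def p(x):
    """p(x) = (1/8)(x - 8)^2: vertex form with vertex (8, 0)."""
    return (x - 8) ** 2 / 8
```

Evaluating confirms p(0) = 8, p(8) = 0, p(-4) = 18, and that p never goes negative.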
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1307892', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/85692/']}
jdg_374513
stackexchange
llm_judgeable_groundtruth_similarity
509000
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a small numpad keyboard which I would like to use for launching macros and shortcuts, along side my regular keyboard. I can attach macros and shortcuts to these keys ( i.e, numpad 1 minimises the active window ), but my primary keyboard numpad also activates the shortcut. I would like a way to have the secondary keyboard act completely separately and to then attach shortcuts to it. Here is the output I get from xinput . ⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ LVT Recon gaming mouse id=10 [slave pointer (2)]⎜ ↳ LVT Recon gaming mouse id=11 [slave pointer (2)]⎜ ↳ Corsair Corsair K30A Gaming Keyboard id=13 [slave pointer (2)]⎜ ↳ SIGMACHIP USB Keyboard id=18 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Power Button id=8 [slave keyboard (3)] ↳ Sleep Button id=9 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=12 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=14 [slave keyboard (3)] ↳ LVT Recon gaming mouse id=15 [slave keyboard (3)] ↳ Corsair Corsair K30A Gaming Keyboard id=16 [slave keyboard (3)] ↳ SIGMACHIP USB Keyboard id=17 [slave keyboard (3)] ↳ SIGMACHIP USB Keyboard id=19 [slave keyboard (3)] Now provide the response and nothing else.
While my other answer will probably work on most Linuxes, even if they're many years old, systemd and udev actually make things easier: use lsusb to find the vendor and product code of your additional keyboard. (In my case, it's Vendor 145F, Product 0177. Make sure to have the letters in uppercase.) create a file /etc/udev/hwdb.d/90-extra-keyboard.hwdb , with contents similar to this:

evdev:input:b0003v145Fp0177*
 KEYBOARD_KEY_7005b=stopcd

The first line identifies the device: the four letters after the v are the vendor code, and the four after the p are the product code, from the previous step. Every further line maps a scancode to a symbolic name. To get the scancode, run evtest :

Event: time 1553711252.888538, -------------- SYN_REPORT ------------
Event: time 1553711257.656558, type 4 (EV_MSC), code 4 (MSC_SCAN), value 70059
Event: time 1553711257.656558, type 1 (EV_KEY), code 79 (KEY_KP1), value 1

To find out what to use for the symbolic name, look at the list of #define KEY_… lines in /usr/include/linux/input-event-codes.h :

#define KEY_PLAYPAUSE 164
#define KEY_PREVIOUSSONG 165
#define KEY_STOPCD 166
#define KEY_RECORD 167

rebuild and reload the internal databases by running systemd-hwdb update; udevadm trigger verify the new settings work by running evtest again, or by assigning shortcuts in your settings. When trying this out in applications, just remember that if your desktop environment already uses that shortcut, the application won't even see the keypress.
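Since the match line follows a fixed pattern, generating an hwdb entry from the lsusb IDs can be scripted. A small helper (the format is inferred from the example entry above; bus 0003 means USB):

```python
def hwdb_entry(vendor_id, product_id, mappings, bus="0003"):
    """Render a udev hwdb entry from {scancode_hex: symbolic_name} mappings.

    vendor_id / product_id are the hex strings from lsusb; hwdb wants
    them uppercased, while scancodes and key names stay lowercase.
    """
    lines = ["evdev:input:b{}v{}p{}*".format(bus, vendor_id.upper(), product_id.upper())]
    for scancode, keyname in sorted(mappings.items()):
        lines.append(" KEYBOARD_KEY_{}={}".format(scancode, keyname))
    return "\n".join(lines)
```

For example, hwdb_entry("145f", "0177", {"7005b": "stopcd"}) reproduces the entry shown above, ready to write into /etc/udev/hwdb.d/.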
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/509000', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/343929/']}
jdg_374514
stackexchange
llm_judgeable_groundtruth_similarity
41493149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: On the Node.js server side I have written the code below:

socketIOobj.to(clientID).emit('send-prev-conversation-data',{ prevConversation: JSON.stringify(finalOutputArray) });

Here, if I do console.log(finalOutputArray), I get the output below: [ [ convId: 11, no: 1, time: 2016-12-27T17:36:19.000Z, subjectline: 'message005' ], [ convId: 10, no: 2, time: 2016-12-26T18:02:17.000Z, subjectline: 'fdf' ], [ convId: 4, no: 2, time: 2016-12-25T09:46:12.000Z, subjectline: 'cds' ], [ convId: 3, no: 4, time: 2016-12-25T09:33:39.000Z, subjectline: 'gg2' ] ] But when I try to receive the finalOutputArray value on the client side using the code below:

socket.on('send-prev-conversation-data', function(data) {
    console.log(data.prevConversation);
    var aa = JSON.parse(data.prevConversation);
    console.log(aa);
    console.log(aa[0]);
    socket.removeAllListeners('send-prev-conversation-data');
});

I get the output: [[],[],[],[]] Array [ Array[0], Array[0], Array[0], Array[0] ] Array [ ] My question is: how do I get the exact array that I created in Node.js, like: [ [ convId: 11, no: 1, time: 2016-12-27T17:36:19.000Z, subjectline: 'message005' ], [ convId: 10, no: 2, time: 2016-12-26T18:02:17.000Z, subjectline: 'fdf' ], [ convId: 4, no: 2, time: 2016-12-25T09:46:12.000Z, subjectline: 'cds' ], [ convId: 3, no: 4, time: 2016-12-25T09:33:39.000Z, subjectline: 'gg2' ] ] on the client side, so that I can use it to show the data in the client browser? Now provide the response and nothing else.
Yes, you can do it by inheriting from IOperationFilter. You can find the answer on GitHub here: AddRequiredHeaderParameter

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc.ApiExplorer;
using Swashbuckle.AspNetCore.Swagger;
using Swashbuckle.AspNetCore.SwaggerGen;

public class AddRequiredHeaderParameter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        if (operation.Parameters == null)
            operation.Parameters = new List<IParameter>();

        operation.Parameters.Add(new NonBodyParameter
        {
            Name = "X-User-Token",
            In = "header",
            Type = "string",
            Required = false
        });
    }
}

Then you go to your SwaggerConfig.cs file and add the following in the AddSwaggerGen section: c.OperationFilter<AddRequiredHeaderParameter>(); Rebuild, and enjoy.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41493149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5585129/']}
jdg_374515
stackexchange
llm_judgeable_groundtruth_similarity
63295924
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can you tell me why I'm getting the error "A component is changing an uncontrolled Autocomplete to be controlled. Elements should not switch from uncontrolled to controlled (or vice versa). Decide between using a controlled or uncontrolled Autocomplete element for the lifetime of the component." component:

function AutoComplete(props) {
  const defaultProps = {
    options: props.options,
    getOptionLabel: option => option.name,
  };

  const handleChange = (e, value) => {
    props.onChange(value);
  };

  return (
    <Autocomplete
      {...defaultProps}
      renderInput={params => (
        <TextField {...params} label={props.label} margin="normal" />
      )}
      onChange={handleChange}
      value={props.value}
    />
  );
}

calling the autocomplete:

<Controller
  control={control}
  name='country'
  as={
    <AutoComplete
      options={countryOptions}
      onChange={selectCountryHandler}
      label="Country"
      value={selectedCountry || ''}
    />
  }
/>

How can I solve this error? Now provide the response and nothing else.
You ensured that the value property was never undefined, but you have to do the same for inputValue . The Autocomplete manages two separate states: the "value" state with the value/onChange props combination (this state represents the value selected by the user, for instance when pressing Enter), and the "input value" state with the inputValue/onInputChange props combination (this state represents the value displayed in the textbox). ⚠️ These two states are isolated; they should be controlled independently. The component becomes uncontrolled when the inputValue property is undefined, and vice versa. If in the following example you delete the empty string from React.useState('') you'll get the same error message, because inputValue during the first render is undefined .

import React from 'react'
import TextField from '@material-ui/core/TextField'
import Autocomplete from '@material-ui/lab/Autocomplete'

const options = ['Option 1', 'Option 2']

export default function AutocompleteLab() {
  const [value, setValue] = React.useState(options[0])
  const [inputValue, setInputValue] = React.useState('')

  return (
    <div>
      <div>{`value: ${value !== null ? `'${value}'` : 'null'}`}</div>
      <div>{`inputValue: '${inputValue}'`}</div>
      <br />
      <Autocomplete
        value={value}
        onChange={(_, newValue) => {
          setValue(newValue)
        }}
        inputValue={inputValue}
        onInputChange={(_, newInputValue) => {
          setInputValue(newInputValue)
        }}
        options={options}
        style={{ width: 300 }}
        renderInput={(params) => <TextField {...params} label="Name" variant="outlined" />}
      />
    </div>
  )
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/63295924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12756076/']}
jdg_374516
stackexchange
llm_judgeable_groundtruth_similarity
8502
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have two sets of coefficients from similar data taken at different times. What I want to do is combine the two sets of coefficients giving greater weight to the more most recent set. The goal is building a predictive model. So say I have dataset A from 2009, and dataset B from 2010. My coefficients for A are: param1: 0.33param2: 1.224param3: -0.119 My coefficients for B are: param1: 0.42param2: 1.309param3: -0.011 If I wanted B to be considered twice as important, would it be sound to use a formula like this: (2*B + A) / 3 = New Coeeficient And do that for each parameter? Or am I suggesting something that is fundamentally flawed? In general could one combine coefficients effectively using the basic forumula: (Weight * DatasetACoeffcient + DatasetBCoeffient) / (Weight + 1) Edit This is a multivariate linear regression problem where the datasets may not be available when someone decides something like this needs to be done. Now provide the response and nothing else.
You are retaining $p$ (=3 in this case) values for each regression: the estimated coefficients. If you are willing to retain $p(p+1)$ (=12) values per regression, you can weight your results in a way that is equivalent to having all the data and performing a weighted least squares regression with them en masse . The analysis is simple: let $X_1$ be the design matrix ( i.e. , an $n_1$ by $p$ matrix of independent variable values) for the first year and $y_1$ be the $n_1$-vector of dependent values for that year. The estimated coefficients are $$\hat{\beta}_1 = \left( X_1' X_1 \right)^{-1} X_1' y_1.$$ Let the subscript $2$ designate the same quantities for the second year. Suppose you would like to uniformly weight all observations with (positive) values $w_1^2$ and $w_2^2$ in those two years. The design matrix $X$ is the vertical concatenation of $X_1$ and $X_2$, an $n_1+n_2$ by $p$ matrix, and similarly the vector of dependent values $y$ is the vertical concatenation of $y_1$ and $y_2$. Let $W$ be the diagonal matrix with values $w_1$ along the first $n_1$ places and $w_2$ along the last $n_2$ places. The weighted least squares solution is $$\hat{\beta} = \left( (W X)' (W X) \right)^{-1} (W X)' W y.$$ However, $(W X)' (W X) = X' W'W X$ is the sum of $X_1' W_1'W_1 X_1$ and $X_2' W_2'W_2 X_2$, where $W_1$ and $W_2$ are the corresponding diagonal blocks of $W$. Because both $W_1'W_1$ and $W_2'W_2$ are multiples of identity matrices, they factor through, giving $$\hat{\beta} = \left( w_1^2 X_1' X_1 + w_2^2 X_2' X_2 \right)^{-1} \left(w_1^2 X_1' y_1 + w_2^2 X_2' y_2\right).$$ Notice that $X_1' X_1$ and $X_2' X_2$ are just $p$ by $p$ matrices and that $X_1' y_1$ and $X_2' y_2$ are just $p$-vectors. Therefore you can obtain $\hat{\beta}$ just from the two $p$ by $p$ matrices, the two $p$-vectors, and the two numbers $w_1$ and $w_2$. This approach generalizes in an obvious way when more than two regressions are involved.
It shows, incidentally, that the weighted combination $w_1^2 \hat{\beta_1} + w_2^2 \hat{\beta_2}$ as suggested in the question will not in general equal the weighted least-squares solution. Therefore, if you are using least squares for any of its optimality properties, you should not want to use this seductively simple solution, because it will be suboptimal. In conclusion , if you store the 12 numbers $X_i' X_i$ and $X_i' y_i$ each year, then retrospectively (without needing the original data) you can fit any regression on all the data for any set of positive weights without any loss of information. I would recommend saving some additional values such as the estimated error variances: these will help you detect changes in variability over time (heteroscedasticity).
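The bookkeeping above is easy to verify numerically. A pure-Python sketch for the simplest case p = 2 (slope and intercept): each year contributes only its X'X matrix and X'y vector, and the merged, weighted normal equations are solved directly with Cramer's rule:

```python
def xtx_xty(xs, ys):
    """Summaries X'X and X'y for the design matrix with rows [x, 1]."""
    s_xx = sum(x * x for x in xs)
    s_x = sum(xs)
    s_xy = sum(x * y for x, y in zip(xs, ys))
    s_y = sum(ys)
    return [[s_xx, s_x], [s_x, float(len(xs))]], [s_xy, s_y]

def merged_beta(blocks, weights):
    """beta = (sum w_i^2 X_i'X_i)^(-1) (sum w_i^2 X_i'y_i), via Cramer's rule."""
    a = b = d = e = f = 0.0
    for (xtx, xty), w in zip(blocks, weights):
        w2 = w * w
        a += w2 * xtx[0][0]
        b += w2 * xtx[0][1]
        d += w2 * xtx[1][1]
        e += w2 * xty[0]
        f += w2 * xty[1]
    det = a * d - b * b          # X'X is symmetric, so the off-diagonals are equal
    return ((e * d - b * f) / det, (a * f - b * e) / det)  # (slope, intercept)
```

Feeding it data that lies exactly on a line recovers the line for any positive weights, exactly as the derivation predicts.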
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/8502', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/3491/']}
jdg_374517
stackexchange
llm_judgeable_groundtruth_similarity
16273351
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What would be the most efficient algorithm to solve a linear equation in one variable given as a string input to a function? For example, for input string: "x + 9 – 2 - 4 + x = – x + 5 – 1 + 3 – x" The output should be 1. I am considering using a stack and pushing each string token onto it as I encounter spaces in the string. If the input was in polish notation then it would have been easier to pop numbers off the stack to get to a result, but I am not sure what approach to take here. It is an interview question. Now provide the response and nothing else.
Solving the linear equation is (I hope) extremely easy for you once you've worked out the coefficients a and b in the equation a * x + b = 0 . So, the difficult part of the problem is parsing the expression and "evaluating" it to find the coefficients. Your example expression is extremely simple, it uses only the operators unary - , binary - , binary + . And = , which you could handle specially. It is not clear from the question whether the solution should also handle expressions involving binary * and / , or parentheses. I'm wondering whether the interview question is intended: to make you write some simple code, or to make you ask what the real scope of the problem is before you write anything. Both are important skills :-) It could even be that the question is intended: to separate those with lots of experience writing parsers (who will solve it as fast as they can write/type) from those with none (who might struggle to solve it at all within a few minutes, at least without some hints). Anyway, to allow for future more complicated requirements, there are two common approaches to parsing arithmetic expressions: recursive descent or Dijkstra's shunting-yard algorithm. You can look these up, and if you only need the simple expressions in version 1.0 then you can use a simplified form of Dijkstra's algorithm. Then once you've parsed the expression, you need to evaluate it: use values that are linear expressions in x and interpret = as an operator with lowest possible precedence that means "subtract". The result is a linear expression in x that is equal to 0 . 
If you don't need complicated expressions then you can evaluate that simple example pretty much directly from left to right once you've tokenised it[*]:

x
x + 9
// set the "we've found minus sign" bit to negate the first thing that follows
x + 7 // and clear the negative bit
x + 3
2 * x + 3
// set the "we've found the equals sign" bit to negate everything that follows
3 * x + 3
3 * x - 2
3 * x - 1
3 * x - 4
4 * x - 4

Finally, solve a * x + b = 0 as x = - b/a .

[*] example tokenisation code, in Python:

acc = None
for idx, ch in enumerate(input):
    if ch in '1234567890':
        if acc is None:
            acc = 0
        acc = 10 * acc + int(ch)
        continue
    if acc is not None:
        yield acc
        acc = None
    if ch in '+-=x':
        yield ch
    elif ch == ' ':
        pass
    else:
        raise ValueError('illegal character "%s" at %d' % (ch, idx))
if acc is not None:
    yield acc  # don't drop a number at the very end of the input

Alternative example tokenisation code, also in Python, assuming there will always be spaces between tokens as in the example. This leaves token validation to the parser:

return input.split()
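Putting the whitespace tokenisation and the left-to-right evaluation together gives a complete, if deliberately simple, solver. This sketch assumes the restricted grammar discussed above: only x, integer literals, +, -, and =, with spaces between tokens (the en-dashes from the question are normalised to ASCII minuses first):

```python
def solve_linear(equation):
    """Scan left to right, accumulate a*x + b = 0, return x = -b/a."""
    a = b = 0        # coefficient of x and the constant term
    side = 1         # +1 before '=', -1 after (moves terms to the left side)
    sign = 1         # sign of the next term
    for tok in equation.replace("–", "-").split():
        if tok == "+":
            sign = 1
        elif tok == "-":
            sign = -1
        elif tok == "=":
            side, sign = -1, 1
        elif tok == "x":
            a += side * sign
            sign = 1
        else:
            b += side * sign * int(tok)
            sign = 1
    return -b / a
```

On the question's example it accumulates 4x - 4 = 0 and returns 1, matching the walkthrough above.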
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16273351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1764107/']}
jdg_374518
stackexchange
llm_judgeable_groundtruth_similarity
17977869
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I think this is most likely very simple but I cannot find any clear documentation on how to add a filter outside of the 'filterText' that is shown on their website. What I am trying to do is something as simple as this: $scope.filterOptions = { filter: $scope.myFilter, // <- How to do something like this? useExternalFilter: true}$scope.gridOptions = { data: 'entries', enableColumnResize: false, multiSelect: false, enableSorting: false, selectedItems: $scope.selectedEntries, filterOptions: $scope.filterOptions}$scope.lowerLimit = 50;// My Filter$scope.myFilter = function(entry) { if (entry < $scope.lowerLimit) { return false; } return true; } Edit: Or maybe if I could filter the datasource somehow? I tried this: $scope.gridOptions = { data: 'entries | filter: myFilter', enableColumnResize: false, multiSelect: false, enableSorting: false, selectedItems: $scope.selectedEntries,} But it is throwing quite a few errors. Now provide the response and nothing else.
You can use angular to bind to the filterOptions.filterText variable. There's a plunker here to demonstrate: http://plnkr.co/edit/PHdBhF?p=preview

I'll post the same code below:

// main.js
var app = angular.module('myApp', ['ngGrid']);
app.controller('MyCtrl', function($scope) {
    $scope.filterOptions = {
        filterText: ''
    };
    $scope.myData = [{name: "Moroni", age: 50},
                     {name: "Tiancum", age: 43},
                     {name: "Jacob", age: 27},
                     {name: "Nephi", age: 29},
                     {name: "Enos", age: 34}];
    $scope.gridOptions = {
        data: 'myData',
        filterOptions: $scope.filterOptions
    };
});

The above should be about identical to the plunkers on the docs page.

<!DOCTYPE html>
<html ng-app="myApp">
<head lang="en">
    <meta charset="utf-8">
    <title>Custom Plunker</title>
    <link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet">
    <link rel="stylesheet" type="text/css" href="http://angular-ui.github.com/ng-grid/css/ng-grid.css" />
    <link rel="stylesheet" type="text/css" href="style.css" />
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.0/jquery.min.js"></script>
    <script src="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/js/bootstrap.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.2/angular.min.js"></script>
    <script type="text/javascript" src="http://angular-ui.github.com/ng-grid/lib/ng-grid.debug.js"></script>
    <script type="text/javascript" src="main.js"></script>
</head>
<body ng-controller="MyCtrl">
    <strong>Filter:</strong><input type="text" ng-model="filterOptions.filterText" />
    <br/>
    <br/>
    <div class="gridStyle" ng-grid="gridOptions"></div>
</body>
</html>

Notice ng-model="filterOptions.filterText" on the <input ...>. That's all it takes!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17977869', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1146259/']}
jdg_374519
stackexchange
llm_judgeable_groundtruth_similarity
38034702
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If using a library like scikit-learn, how do I assign more weight on certain features in the input to a classifier like SVM? Is this something people do or not? Now provide the response and nothing else.
First of all - you should probably not do it. The whole concept of machine learning is to use statistical analysis to assign optimal weights. You are interfering here with the whole concept, thus you need really strong evidence that this is crucial to the process you are trying to model, and that for some reason your model is currently missing it.

That being said - there is no general answer. This is purely model specific; some models will allow you to weight features - in a random forest you could bias the distribution from which you sample the features to analyse towards the ones that you are interested in; in SVM it should be enough to just multiply a given feature by a constant - remember when you were told to normalize your features in SVM? This is why - you can use the scale of the features to 'steer' your classifier towards given features. The ones with high values will be preferred. This will actually work for any weight norm-regularized model (regularized logistic regression, ridge regression, lasso etc.).
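To make the scaling trick concrete, here is a minimal dependency-free sketch; plain Python lists stand in for a feature matrix, and with scikit-learn you would do the same multiplication on the (already normalized) array before fitting the SVM:

```python
def weight_features(X, weights):
    """Scale each feature column by its weight before fitting a
    norm-regularized model (SVM, ridge, lasso, ...).  Features with a
    larger scale end up preferred by the classifier."""
    return [[value * w for value, w in zip(row, weights)] for row in X]

# Two samples, two features.
X = [[1.0, 10.0],
     [2.0, 20.0]]

# Emphasise feature 0 three-fold, leave feature 1 untouched.
X_weighted = weight_features(X, [3.0, 1.0])
```

The same idea applies unchanged to a numpy array (element-wise multiplication by a weight vector).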
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38034702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3677562/']}
jdg_374520
stackexchange
llm_judgeable_groundtruth_similarity
27744104
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to change the display property of a div based on if the carousel image is active at the time. I have gotten it to work but it only works when i doubleclick. With a single click it displays the div corresponding to the previous active image instead of the current one. Please HELP. ===================================== CODE BELOW HTML <div class="container-fluid" id="siteImgs"> <div class="row col-xs-12 col-md-7"><div id="myCarousel" class="carousel slide" data-ride="carousel"> <!-- Carousel indicators --> <ol class="carousel-indicators"> <li data-target="#myCarousel" data-slide-to="0" id="lg" class="active"></li> <li data-target="#myCarousel" data-slide-to="1" id="ma"></li> <li data-target="#myCarousel" data-slide-to="2" id="sz"></li> <li data-target="#myCarousel" data-slide-to="3"id="ti"></li> </ol> <!-- Carousel items --> <div class="carousel-inner"> <div class="item active"> <img src="img/work/lookingGlass.png" alt="looking glass"> </div> <div class="item"> <img src="img/work/mauriceSite.png" alt="maurice site"> </div> <div class="item"> <img src="img/work/sza.png" alt="sza"> </div> <div class="item"> <img src="img/work/tina.png" alt="tina"> </div></div><!-- Carousel nav --><a class="carousel-control left" href="#myCarousel" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left"></span></a><a class="carousel-control right" href="#myCarousel" data-slide="next"> <span class="glyphicon glyphicon-chevron-right"></span> </a> </div></div><!--INFORMATION DIVS--><div class="row col-xs-12 col-md-5 lookingGlass lg"> <h1>THE LOOKING GLASS</h1> <p>Lorem ipsum</p></div> <div class="row col-xs-12 col-md-5 lookingGlass ma"> <h1>MAURICEDANIELS.COM</h1> <p>Lorem ipsum</p></div><div class="row col-xs-12 col-md-5 lookingGlass sz"> <h1>SZA</h1> <p>Lorem ipsum</p></div><div class="row col-xs-12 col-md-5 lookingGlass 
ti"> <h1>TINA D. PHOTOGRAPHY</h1> <p>Lorem ipsum</p></div> JS/JQUERY $( '.ma' ).hide(); $( '.sz' ).hide(), $( '.ti' ).hide();$( ".carousel-control" ).click(function() {if ( $( '#lg' ).hasClass( "active" ) ) { $( '.lg' ).show(), $( '.ma' ).hide(); $( '.sz' ).hide(), $( '.ti' ).hide();}if ( $( '#ma' ).hasClass( "active" ) ) { $( '.ma' ).show(), $( '.lg' ).hide(); $( '.sz' ).hide(), $( '.ti' ).hide();}if ( $( '#sz' ).hasClass( "active" ) ) { $( '.sz' ).show(), $( '.lg' ).hide(); $( '.ma' ).hide(), $( '.ti' ).hide();}if ( $( '#ti' ).hasClass( "active" ) ) { $( '.ti' ).show(), $( '.lg' ).hide(); $( '.sz' ).hide(), $( '.ma' ).hide();}}); Full Example HERE Now provide the response and nothing else.
By default, an animation's FillBehavior is set to HoldEnd , which means that the animation holds the final value of the target property. If you want to reset the value later, you either need to remove the animation, or you set the FillBehavior to Stop . You could then add a handler for the animation's Completed event to keep the final value manually. Note also that you don't need a timer to delay the start of the animation. You may set its BeginTime property instead. Finally, no Storyboard is needed to animate a single property. You could call UIElement.BeginAnimation instead. private void btnChangeLabel_Click(object sender, RoutedEventArgs e){ var animation = new DoubleAnimation { To = 0, BeginTime = TimeSpan.FromSeconds(5), Duration = TimeSpan.FromSeconds(2), FillBehavior = FillBehavior.Stop }; animation.Completed += (s, a) => lblTest.Opacity = 0; lblTest.BeginAnimation(UIElement.OpacityProperty, animation);}private void btnResetOpacity_Click(object sender, RoutedEventArgs e){ lblTest.Opacity = 1;}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27744104', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4111351/']}
jdg_374521
stackexchange
llm_judgeable_groundtruth_similarity
23092885
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using a solution presented by Max Galkin , I put together the following procedure which transfers all data in a ListView to a .CSV file. I'm having issues with 2 parts however: I have not figured out how to successfully open the newly created .CSV when user clicks YES on the appropriate dialogue. It seems that some users (on Virtual PC's unlike myself) are not able to open or view the the new file unless they first close down the application. I believe this is due to the Application still having a process tied to the created file. The message that appears when trying to open the file is: 'FileName.csv' cannot be accessed. The file may be corrupted, located on a server that is not responding, or read-only. Here is my current code: private void btnCSVExcel_Click(object sender, EventArgs e) { if (!Directory.Exists(@"C:\TEMP\")) { Directory.CreateDirectory(@"C:\temp\"); } if (!Directory.Exists(@"C:\temp\Exported CSV Files\")) { Directory.CreateDirectory(@"C:\temp\Exported CSV Files\"); } string csvPath = @"C:\temp\Exported CSV Files\"; ListViewToCSV(lvData, csvPath, false); } // https://stackoverflow.com/questions/1008556/export-listview-to-csv public static void ListViewToCSV(ListView listView, string filePath, bool includeHidden) { string csvFileName = filePath + DateTime.Now.ToString("yyyy-MM-dd-hh.mm.ss.ffffff") + ".csv"; //make header string StringBuilder result = new StringBuilder(); WriteCSVRow(result, listView.Columns.Count, i => includeHidden || listView.Columns[i].Width > 0, i => listView.Columns[i].Text); //export data rows foreach (ListViewItem listItem in listView.Items) { WriteCSVRow(result, listView.Columns.Count, i => includeHidden || listView.Columns[i].Width > 0, i => listItem.SubItems[i].Text); } File.WriteAllText(csvFileName, result.ToString()); var openCSVFile = MessageBox.Show("Export Complete. 
CSV file saved as: " + csvFileName + ". \n\n Open File Now?", "CSV Exported", MessageBoxButtons.YesNo, MessageBoxIcon.Information); if (openCSVFile == DialogResult.Yes) { // NEED TO OPEN THE CSV FILE IN EXCEL....? File.Open(csvFileName, FileMode.Open, FileAccess.ReadWrite, FileShare.None); } } private static void WriteCSVRow(StringBuilder result, int itemsCount, Func<int, bool> isColumnNeeded, Func<int, string> columnValue) { bool isFirstTime = true; for (int i = 0; i < itemsCount; i++) { if (!isColumnNeeded(i)) continue; if (!isFirstTime) result.Append(","); isFirstTime = false; result.Append(String.Format("\"{0}\"", columnValue(i))); } result.AppendLine(); } Anyone have any thoughts for how I might resolve these last 2 issues? I thought File.Open() may suffice for the first issue, but visually nothing occurs and the document does not open. Now provide the response and nothing else.
1) Excel registers itself as the default handler for .csv files. To open any file using the default file handler for the type, just do this: Process.Start(@"c:\full\path\to\file.csv"); 2) File.WriteAllText should close the file at the end. The code you show above also has a line that reads File.Open(csvFileName... that opens the file in exclusive write mode. Since you don't close that file, Excel can't open it until either a) the process terminates or b) the file is closed by the garbage collector. I bet the garbage collector hasn't yet run in cases where the access-denied error occurs. Final recommendation: Remove the File.Open and replace with Process.Start and you may see different results.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23092885', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1956099/']}
jdg_374522
stackexchange
llm_judgeable_groundtruth_similarity
237061
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have several strings in the rough form: [some text] [some number] [some more text] I want to extract the text in [some number] using the Java Regex classes. I know roughly what regular expression I want to use (though all suggestions are welcome). What I'm really interested in are the Java calls to take the regex string and use it on the source data to produce the value of [some number]. EDIT: I should add that I'm only interested in a single [some number] (basically, the first instance). The source strings are short and I'm not going to be looking for multiple occurrences of [some number]. Now provide the response and nothing else.
Full example:

private static final Pattern p = Pattern.compile("^([a-zA-Z]+)([0-9]+)(.*)");

public static void main(String[] args) {
    // create matcher for pattern p and given string
    Matcher m = p.matcher("Testing123Testing");

    // if an occurrence of a pattern was found in a given string...
    if (m.find()) {
        // ...then you can use group() methods.
        System.out.println(m.group(0)); // whole matched expression
        System.out.println(m.group(1)); // first expression from round brackets (Testing)
        System.out.println(m.group(2)); // second one (123)
        System.out.println(m.group(3)); // third one (Testing)
    }
}

Since you're looking for the first number, you can use this regexp: ^\D+(\d+).* and m.group(1) will return you the first number. Note that signed numbers can contain a minus sign: ^\D+(-?\d+).*
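As a quick cross-check outside of Java: Python's re module accepts essentially the same pattern, so you can verify the extraction interactively (shown purely as an illustration, not the Java API). One caveat worth knowing: a greedy \D+ will itself swallow a leading minus sign, so a lazy \D+? is used here when the sign matters:

```python
import re

def first_number(s):
    # Lazy \D+? leaves a leading '-' for the capture group;
    # a greedy \D+ would consume it and drop the sign.
    m = re.search(r'^\D+?(-?\d+)', s)
    return int(m.group(1)) if m else None

print(first_number('Testing123Testing'))   # 123
print(first_number('offset -42 applied'))  # -42
```

The same lazy-quantifier adjustment applies in Java if signed numbers must be captured.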
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/237061', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3488/']}
jdg_374523
stackexchange
llm_judgeable_groundtruth_similarity
24800340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to update table A with data from table B.I thought I can do something like this update A set A.DISCOUNT = 3 from INVOICE_ITEMS Ajoin ITEM_PRICE_QUNTITY B on A.ITEM_PRICE_NO = B.IDwhere A.INVOICE_ID = 33 but getting errorSQL Message : -104Invalid token can anyone help me? Now provide the response and nothing else.
It is not possible to do this with a JOIN. The Firebird UPDATE statement has no FROM clause. The syntax is:

UPDATE {tablename | viewname} [[AS] alias]
  SET col = newval [, col = newval ...]
  [WHERE {search-conditions | CURRENT OF cursorname}]
  [PLAN plan_items]
  [ORDER BY sort_items]
  [ROWS <m> [TO <n>]]
  [RETURNING <values>]

<m>, <n> ::= Any expression evaluating to an integer.
<values> ::= value_expression [, value_expression ...]
<variables> ::= :varname [, :varname ...]

However the equivalent of your example query is:

UPDATE INVOICE_ITEMS
SET DISCOUNT = 3
WHERE EXISTS (SELECT 1 FROM ITEM_PRICE_QUNTITY B WHERE B.ID = ITEM_PRICE_NO)
  AND INVOICE_ID = 33

If you want to update using data from additional tables, you might want to consider using MERGE. In your comment you ask for the equivalent query to do the following with Firebird:

UPDATE B
SET B.QUANTIY = B.QUANTIY + A.QUANTITY
FROM ITEM_PRICE_QUNTITY B
JOIN INVOICE_ITEMS A ON A.ITEM_PRICE_NO = B.ID
WHERE A.INVOICE_ID = 33

The equivalent MERGE statement would be:

MERGE INTO ITEM_PRICE_QUNTITY AS B
USING INVOICE_ITEMS AS A
ON A.ITEM_PRICE_NO = B.ID AND A.INVOICE_ID = 33
WHEN MATCHED THEN
  UPDATE SET B.QUANTIY = B.QUANTIY + A.QUANTITY
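The correlated-subquery form of the UPDATE is portable beyond Firebird; here is the same pattern demonstrated with Python's built-in sqlite3 module (an illustration only: SQLite needs the outer column qualified explicitly, and it has no MERGE, so only the first query is shown; table and column names are borrowed from the question):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE invoice_items (invoice_id INT, item_price_no INT, discount INT);
    CREATE TABLE item_price_quantity (id INT);
    INSERT INTO invoice_items VALUES (33, 1, 0), (33, 99, 0), (44, 1, 0);
    INSERT INTO item_price_quantity VALUES (1);
""")

# Correlated EXISTS instead of a FROM/JOIN clause in the UPDATE.
con.execute("""
    UPDATE invoice_items
       SET discount = 3
     WHERE EXISTS (SELECT 1 FROM item_price_quantity b
                    WHERE b.id = invoice_items.item_price_no)
       AND invoice_id = 33
""")

rows = con.execute("""
    SELECT invoice_id, item_price_no, discount
      FROM invoice_items
     ORDER BY invoice_id, item_price_no
""").fetchall()
# Only the (33, 1) row matched both conditions.
```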
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24800340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3051457/']}
jdg_374524
stackexchange
llm_judgeable_groundtruth_similarity
18197348
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am working on a multilingual website in PHP and in my languages files i often have strings which contain multiple variables that will be later filled in to complete the sentences. Currently i am placing {VAR_NAME} in the string and manually replacing each occurence with its matching value when used. So basically : {X} created a thread on {Y} becomes : Dany created a thread on Stack Overflow I have already thought of sprintf but i find it inconvenient because it depends on the order of the variables which can change from a language to another. And I have already checked How replace variable in string with value in php? and for now i basically use this method. But i am interested in knowing if there is a built-in (or maybe not) convenient way in PHP to do that considering that i already have variables named exactly as X and Y in the previous example, more like $$ for a variable variable. So instead of doing str_replace on the string i would maybe call a function like so : $X = 'Dany';$Y = 'Stack Overflow';$lang['example'] = '{X} created a thread on {Y}';echo parse($lang['example']); would also print out : Dany created a thread on Stack Overflow Thanks! Edit The strings serve as templates and can be used multiple times with different inputs. So basically doing "{$X} ... {$Y}" won't do the trick because i will lose the template and the string will be initialized with the starting values of $X and $Y which aren't yet determined. Now provide the response and nothing else.
I'm going to add an answer here because none of the current answers really cut the mustard in my view. I'll dive straight in and show you the code I would use to do this:

function parse(
    /* string */ $subject,
    array $variables,
    /* string */ $escapeChar = '@',
    /* string */ $errPlaceholder = null
) {
    $esc = preg_quote($escapeChar);
    $expr = "/ $esc$esc(?=$esc*+{) | $esc{ | {(\w+)} /x";

    $callback = function($match) use($variables, $escapeChar, $errPlaceholder) {
        switch ($match[0]) {
            case $escapeChar . $escapeChar:
                return $escapeChar;
            case $escapeChar . '{':
                return '{';
            default:
                if (isset($variables[$match[1]])) {
                    return $variables[$match[1]];
                }
                return isset($errPlaceholder) ? $errPlaceholder : $match[0];
        }
    };

    return preg_replace_callback($expr, $callback, $subject);
}

What does that do? In a nutshell:

1. Create a regular expression using the specified escape character that will match one of three sequences (more on that below)
2. Feed that into preg_replace_callback(), where the callback handles two of those sequences exactly and treats everything else as a replacement operation.
3. Return the resulting string

The regex

The regex matches any one of these three sequences:

1. Two occurrences of the escape character, followed by zero or more occurrences of the escape character, followed by an opening curly brace. Only the first two occurrences of the escape character are consumed. This is replaced by a single occurrence of the escape character.
2. A single occurrence of the escape character followed by an opening curly brace. This is replaced by a literal open curly brace.
3. An opening curly brace, followed by one or more perl word characters (alpha-numerics and the underscore character) followed by a closing curly brace. This is treated as a placeholder, and a lookup is performed for the name between the braces in the $variables array; if it is found then the replacement value is returned, if not then the value of $errPlaceholder is returned - by default this is null, which is treated as a special case and the original placeholder is returned (i.e. the string is not modified).

Why is it better?

To understand why it's better, let's look at the replacement approaches taken by other answers. With one exception (the only failing of which is compatibility with PHP<5.4 and slightly non-obvious behaviour), these fall into two categories:

strtr() - This provides no mechanism for handling an escape character. What if your input string needs a literal {X} in it? strtr() does not account for this, and it would be substituted for the value $X.

str_replace() - this suffers from the same issue as strtr(), and another problem as well. When you call str_replace() with an array argument for the search/replace arguments, it behaves as if you had called it multiple times - one for each of the array of replacement pairs. This means that if one of your replacement strings contains a value that appears later in the search array, you will end up substituting that as well.

To demonstrate this issue with str_replace(), consider the following code:

$pairs = array('A' => 'B', 'B' => 'C');
echo str_replace(array_keys($pairs), array_values($pairs), 'AB');

Now, you'd probably expect the output here to be BC but it will actually be CC (demo) - this is because the first iteration replaced A with B, and in the second iteration the subject string was BB - so both of these occurrences of B were replaced with C.

This issue also betrays a performance consideration that might not be immediately obvious - because each pair is handled separately, the operation is O(n): for each replacement pair the entire string is searched and the single replacement operation handled. If you had a very large subject string and a lot of replacement pairs, that's a sizeable operation going on under the bonnet. Arguably this performance consideration is a non-issue - you would need a very large string and a lot of replacement pairs before you got a meaningful slowdown, but it's still worth remembering. It's also worth remembering that regex has performance penalties of its own, so in general this consideration shouldn't be included in the decision-making process.

Instead we use preg_replace_callback(). This visits any given part of the string looking for matches exactly once, within the bounds of the supplied regular expression. I add this qualifier because if you write an expression that causes catastrophic backtracking then it will be considerably more than once, but in this case that shouldn't be a problem (to help avoid this I made the only repetition in the expression possessive). We use preg_replace_callback() instead of preg_replace() to allow us to apply custom logic while looking for the replacement string.

What this allows you to do

The original example from the question:

$X = 'Dany';
$Y = 'Stack Overflow';
$lang['example'] = '{X} created a thread on {Y}';
echo parse($lang['example']);

This becomes:

$pairs = array(
    'X' => 'Dany',
    'Y' => 'Stack Overflow',
);
$lang['example'] = '{X} created a thread on {Y}';
echo parse($lang['example'], $pairs);
// Dany created a thread on Stack Overflow

Something more advanced

Now let's say we have:

$lang['example'] = '{X} created a thread on {Y} and it contained {X}';
// Dany created a thread on Stack Overflow and it contained Dany

...and we want the second {X} to appear literally in the resulting string. Using the default escape character of @, we would change it to:

$lang['example'] = '{X} created a thread on {Y} and it contained @{X}';
// Dany created a thread on Stack Overflow and it contained {X}

OK, looks good so far. But what if that @ was supposed to be a literal?

$lang['example'] = '{X} created a thread on {Y} and it contained @@{X}';
// Dany created a thread on Stack Overflow and it contained @Dany

Note that the regular expression has been designed to only pay attention to escape sequences that immediately precede an opening curly brace. This means that you don't need to escape the escape character unless it appears immediately in front of a placeholder.

A note about the use of an array as an argument

Your original code sample uses variables named the same way as the placeholders in the string. Mine uses an array with named keys. There are two very good reasons for this:

1. Clarity and security - it's much easier to see what will end up being substituted, and you don't risk accidentally substituting variables you don't want to be exposed. It wouldn't be much good if someone could simply feed in {dbPass} and see your database password, now would it?
2. Scope - it's not possible to import variables from the calling scope unless the caller is the global scope. This makes the function useless if called from another function, and importing data from another scope is very bad practice.

If you really want to use named variables from the current scope (and I do not recommend this due to the aforementioned security issues) you can pass the result of a call to get_defined_vars() to the second argument.

A note about choosing an escape character

You'll notice I chose @ as the default escape character. You can use any character (or sequence of characters, it can be more than one) by passing it to the third argument - and you may be tempted to use \ since that's what many languages use, but hold on before you do that.

The reason you don't want to use \ is because many languages use it as their own escape character, which means that when you want to specify your escape character in, say, a PHP string literal, you run into this problem:

$lang['example'] = '\\{X}';   // results in {X}
$lang['example'] = '\\\{X}';  // results in \Dany
$lang['example'] = '\\\\{X}'; // results in \Dany

It can lead to a readability nightmare, and some non-obvious behaviour with complex patterns. Pick an escape character that is not used by any other language involved (for example, if you are using this technique to generate fragments of HTML, don't use & as an escape character either).

To sum up

What you are doing has edge-cases. To solve the problem properly, you need to use a tool capable of handling those edge-cases - and when it comes to string manipulation, the tool for the job is most often regex.
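For readers more comfortable outside PHP, here is a rough Python transliteration of the same parse() logic (a sketch under stated assumptions: Python's re has no possessive quantifiers, so a plain * is used in the lookahead; behaviour matches for the cases shown above):

```python
import re

def parse(subject, variables, escape_char='@', err_placeholder=None):
    esc = re.escape(escape_char)
    # Same three alternatives: escaped escape char, escaped brace, {name}.
    expr = re.compile(esc + esc + '(?=' + esc + r'*{)|' + esc + r'{|{(\w+)}')

    def callback(m):
        token = m.group(0)
        if token == escape_char + escape_char:
            return escape_char
        if token == escape_char + '{':
            return '{'
        name = m.group(1)
        if name in variables:
            return variables[name]
        return err_placeholder if err_placeholder is not None else token

    return expr.sub(callback, subject)
```

As in the PHP version, an unknown placeholder is left untouched unless an error placeholder is supplied.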
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18197348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1015808/']}
jdg_374525
stackexchange
llm_judgeable_groundtruth_similarity
27342813
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to android development, and I am making a calculator in android studio. My calculator works fine but there isa problem with its layout I installed it on my tablet and the buttons were very small as I designed them in Android Studio on an emulator, the button size doesn't change, and need help so that the button size increases as the screen size changes. My calculator design in Android Studio <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:background="#ff000000"> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/clear" android:layout_alignParentTop="true" android:layout_alignParentLeft="true" android:layout_alignParentStart="true" android:layout_marginTop="110dp" android:background="@drawable/clear_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/plus_minus" android:layout_alignBottom="@+id/clear" android:layout_toRightOf="@+id/clear" android:layout_toEndOf="@+id/clear" android:layout_marginLeft="-11dp" android:background="@drawable/pm_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/per" android:layout_alignBottom="@+id/plus_minus" android:layout_toRightOf="@+id/plus_minus" android:layout_toEndOf="@+id/plus_minus" android:layout_marginLeft="-11dp" android:background="@drawable/per_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/div" android:layout_alignTop="@+id/per" android:layout_toRightOf="@+id/per" android:layout_toEndOf="@+id/per" android:layout_marginLeft="-11dp" android:background="@drawable/div_btn" android:layout_alignParentRight="false" /> <TextView android:layout_width="fill_parent" 
android:layout_height="70dp" android:textAppearance="?android:attr/textAppearanceLarge" android:id="@+id/disp" android:layout_above="@+id/plus_minus" android:layout_alignParentLeft="true" android:layout_alignParentStart="true" android:layout_alignParentRight="true" android:layout_alignParentEnd="true" android:textColor="@android:color/white" android:textSize="32sp" android:inputType="none" android:textStyle="normal" android:gravity="right" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/seven" android:layout_below="@+id/clear" android:layout_toLeftOf="@+id/plus_minus" android:layout_toStartOf="@+id/plus_minus" android:layout_marginTop="-10dp" android:layout_alignParentTop="false" android:layout_alignParentLeft="true" android:background="@drawable/seven_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/eight" android:layout_alignTop="@+id/seven" android:layout_alignLeft="@+id/plus_minus" android:layout_alignStart="@+id/plus_minus" android:background="@drawable/eight_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/nine" android:layout_alignBottom="@+id/seven" android:layout_alignLeft="@+id/per" android:layout_alignStart="@+id/per" android:background="@drawable/nine_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/mul" android:layout_alignTop="@+id/nine" android:layout_alignLeft="@+id/div" android:layout_alignStart="@+id/div" android:background="@drawable/mul_btn" /> <Button android:layout_width="103dp" android:layout_height="90dp" android:id="@+id/four" android:layout_below="@+id/seven" android:layout_alignRight="@+id/seven" android:layout_alignEnd="@+id/seven" android:layout_marginTop="-10dp" android:layout_alignParentLeft="false" android:background="@drawable/four_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/five" android:layout_alignBottom="@+id/four" 
android:layout_alignLeft="@+id/eight" android:layout_alignStart="@+id/eight" android:background="@drawable/five_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/six" android:layout_alignTop="@+id/five" android:layout_alignLeft="@+id/nine" android:layout_alignStart="@+id/nine" android:background="@drawable/six_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/sub" android:layout_toEndOf="@+id/nine" android:layout_alignTop="@+id/six" android:layout_alignLeft="@+id/mul" android:layout_alignStart="@+id/mul" android:background="@drawable/min_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/one" android:layout_below="@+id/four" android:layout_alignLeft="@+id/four" android:layout_alignStart="@+id/four" android:layout_marginTop="-10dp" android:layout_alignParentLeft="false" android:background="@drawable/one_btn" /> <Button android:layout_width="203dp" android:layout_height="90dp" android:id="@+id/zero" android:layout_below="@+id/one" android:layout_alignLeft="@+id/one" android:layout_alignStart="@+id/one" android:layout_marginTop="-10dp" android:background="@drawable/zero_btn" android:layout_alignParentLeft="true" android:layout_marginLeft="-7dp" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/two" android:layout_above="@+id/zero" android:layout_alignLeft="@+id/five" android:layout_alignStart="@+id/five" android:background="@drawable/two_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/three" android:layout_below="@+id/five" android:layout_alignLeft="@+id/six" android:layout_alignStart="@+id/six" android:layout_marginTop="-10dp" android:background="@drawable/three_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/add" android:layout_toEndOf="@+id/three" android:layout_below="@+id/six" android:layout_alignLeft="@+id/sub" 
android:layout_alignStart="@+id/sub" android:layout_marginTop="-10dp" android:background="@drawable/plus_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/dot" android:layout_below="@+id/two" android:layout_alignRight="@+id/three" android:layout_alignEnd="@+id/three" android:layout_marginTop="-10dp" android:background="@drawable/dot_btn" /> <Button android:layout_width="100dp" android:layout_height="90dp" android:id="@+id/equ" android:layout_toEndOf="@+id/dot" android:layout_below="@+id/three" android:layout_alignLeft="@+id/add" android:layout_alignStart="@+id/add" android:layout_marginTop="-10dp" android:background="@drawable/equ_btn" /></RelativeLayout> Now provide the response and nothing else.
You have to use layout_weight in order to achieve what you want, and that will be used by wrapping each row of buttons inside a LinearLayout and all that LinearLayouts plus the TextView will be wrapped inside one big LinearLayout instead of your RelativeLayout like the following: <?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/main_layout" android:layout_width="match_parent" android:layout_height="match_parent" android:background="#ff000000" android:orientation="vertical" android:weightSum="6" > <TextView android:id="@+id/disp" android:layout_width="match_parent" android:layout_height="0dp" android:layout_weight="1" android:gravity="center" android:inputType="none" android:textAppearance="?android:attr/textAppearanceLarge" android:textColor="@android:color/white" android:textSize="32sp" android:textStyle="normal" /> <LinearLayout android:layout_width="match_parent" android:layout_height="0dp" android:layout_weight="1" android:orientation="horizontal" android:weightSum="4" > <Button android:id="@+id/clear" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:background="@drawable/clear_btn" /> <!-- then the three remaining buttons --> <LinearLayout /> <!-- then the four remaining rows --> <!-- in the last row the "0" button will have layout_weight="2" NOT "1" --> </LinearLayout></LinearLayout> UPDATE: use ImageButtons instead of Buttons for better scalability, and this is how you must use it: <ImageButton android:id="@+id/clear" android:layout_width="0dp" android:layout_height="match_parent" android:layout_weight="1" android:background="@android:color/transparent" android:src="@drawable/clear_btn" android:scaleType="fitCenter" /> <!-- and so for all Buttons --> Note here we changed: 1- layout_height : from wrap_content to match_parent . 2- background : removing the drawable (clear_btn) and making it transparent . 
and we added:

1- src: assigning the drawable (clear_btn) to it.
2- scaleType: making it fitCenter so the image scales to fill all the available area without ruining its aspect ratio.

P.S.: in case you want the button to take all the area whether the image is scalable or not, change scaleType from fitCenter to fitXY (maybe you will need that for the "0" button).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27342813', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/889030/']}
jdg_374526
stackexchange
llm_judgeable_groundtruth_similarity
7934623
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am storing many longitudes and latitudes as doubles, and I am wondering if I can get away with storing them as floats. To answer this question, I need to know the approximate resolution of a single-precision floating point number when the stored values are longitudes/latitudes (-180 to +180). Now provide the response and nothing else.
Your question may have several interpretations. If it is just for angles and for storage on a disk or on a device, I would suggest a totally different technique: store the value as a 32-bit integer.

    int encodedAngle = (int)(value * (0x7FFFFFFF / 180.0));

To recover it, do the inverse:

    double angle = encodedAngle / (0x7FFFFFFF / 180.0);

In this way you have full 31-bit resolution for 180 degrees and 1 bit for the sign. You can also use this technique to keep your values in RAM when you want to keep memory low but resolution high. The cost is not so high, just a conversion to/from integer from/to double and a multiplication; modern processors will do it in a very little amount of time, and since less memory is accessed, if the list contains a lot of values your code will be more friendly to the processor cache.

Your resolution will be 180 / ((2^31) - 1) = 8.38190318 × 10^-8 degrees, not bad :)
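To make the numbers concrete, here is a quick sketch of that encode/decode round trip in Python. The arithmetic mirrors the C-style snippet above; the sample coordinates are just made-up test data, and the assertion checks that the round-trip error never exceeds one integer step.

```python
SCALE = 0x7FFFFFFF / 180.0          # 31 bits of magnitude spread over 180 degrees
RESOLUTION = 180.0 / 0x7FFFFFFF     # about 8.38e-8 degrees per integer step

def encode_angle(value):
    # Pack a degree value in [-180, 180] into a signed 32-bit integer.
    return int(value * SCALE)

def decode_angle(encoded):
    # Recover the (approximate) degree value.
    return encoded / SCALE

angle = 48.858222                   # arbitrary sample latitude
round_trip = decode_angle(encode_angle(angle))
assert abs(round_trip - angle) <= RESOLUTION
```

Since `int()` truncates, the encoded value is off by less than one step, so the decoded angle is within one `RESOLUTION` of the original.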
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7934623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296446/']}
jdg_374527
stackexchange
llm_judgeable_groundtruth_similarity
1275357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I redirect the user to the home page after logout. In between I would like to delete all or specific client cookies (that I have previously set).

    def logoutuser(request):
        logout(request)
        return redirect('app.home.views.home')

To call response.delete_cookie('user_location'), there is no response object. How do I do this? Now provide the response and nothing else.
Like jobscry said, logout() cleans session data, but it looks like you have set your own cookies too. You could wrap the auth logout view, which will return an HttpResponse:

    def logout_user(request):
        response = logout(request, next_page=reverse('app.home.views.home'))
        response.delete_cookie('user_location')
        return response

Or if you're just using the logout method as opposed to the view, you can use the return value of the redirect() method you have (which I assume returns an HttpResponse too).

    def logout_user(request):
        logout(request)
        response = redirect('app.home.views.home')
        response.delete_cookie('user_location')
        return response
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1275357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/102494/']}
jdg_374528
stackexchange
llm_judgeable_groundtruth_similarity
114543
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I horizontally center a <div> within another <div> using CSS?

    <div id="outer">
        <div id="inner">Foo foo</div>
    </div>

Now provide the response and nothing else.
You can apply this CSS to the inner <div>:

    #inner {
        width: 50%;
        margin: 0 auto;
    }

Of course, you don't have to set the width to 50%. Any width less than the containing <div> will work. The margin: 0 auto is what does the actual centering.

If you are targeting Internet Explorer 8 (and later), it might be better to have this instead:

    #inner {
        display: table;
        margin: 0 auto;
    }

It will make the inner element center horizontally and it works without setting a specific width.

Working example here:

    #inner {
        display: table;
        margin: 0 auto;
        border: 1px solid black;
    }
    #outer {
        border: 1px solid red;
        width: 100%;
    }

    <div id="outer">
        <div id="inner">Foo foo</div>
    </div>

EDIT

With flexbox it is very easy to style the div horizontally and vertically centered.

    #inner {
        border: 0.05em solid black;
    }
    #outer {
        border: 0.05em solid red;
        width: 100%;
        display: flex;
        justify-content: center;
    }

    <div id="outer">
        <div id="inner">Foo foo</div>
    </div>

To align the div vertically centered, use the property align-items: center.
{}
{'log_upvote_score': 13, 'links': ['https://Stackoverflow.com/questions/114543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20403/']}
jdg_374529
stackexchange
llm_judgeable_groundtruth_similarity
1653001
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am familiar with the classification of covering spaces of a space $X$ in terms of subgroups of $\pi_1(X)$ (up to conjugation). However, if $X$ is a manifold, I know that $H^1(X; G)$ classifies G-bundles over $X$ (using Cech cohomology here). I think finite regular covering spaces are $\mathbb{Z}/k \mathbb{Z}$-bundles; regular means that the deck transformations act transitively on the fiber (and regular covers correspond to normal subgroups of $\pi_1(X)$). Does this mean that $H^1(X; \mathbb{Z}/k\mathbb{Z})$ is in bijection with k-sheeted regular covering spaces over $X$. I could not find such a statement anywhere and so am a bit suspicious. Also, if this is correct, what does $H^1(X; \mathbb{Z})$ classify? I'm not sure what a $\mathbb{Z}$-bundle is - what has automorphism group equal to $\mathbb{Z}$? Also, $H^1(X; \mathbb{Z}) = [X, S^1]$ so if $H^1(X; \mathbb{Z})$ classifies some kind of bundles, there should be universal bundle over $S^1$ which pulls back to these bundles. What is this bundle? Now provide the response and nothing else.
It's somewhat delicate here to get all the details right. First, $G$ -bundles for a finite group $G$ are not required to be connected, so the relevant version of the classification of covering spaces is the disconnected version, which goes like this: the category of covering spaces of a nice connected space $X$ with basepoint $x$ is equivalent to the category of $\pi_1(X, x)$ -sets. More explicitly, $n$ -sheeted covers (possibly disconnected) are equivalent to actions of $\pi_1(X, x)$ on $n$ -element sets, or even more explicitly to conjugacy classes of homomorphisms $\pi_1(X, x) \to S_n$ . Said another way, $n$ -sheeted covers, possibly disconnected, are classified by the nonabelian cohomology set $$H^1(X, S_n).$$ Among these, the connected covers correspond to the transitive actions, which are classified by conjugacy classes of subgroups of $\pi_1(X, x)$ of index $n$ . Among these, the regular covers correspond to normal subgroups. Now, for $G$ a finite group, a $G$ -bundle is more data than a $|G|$ -sheeted cover: the fibers are equipped with a free and transitive right action of $G$ and everything has to be compatible with this. Said another way, $G$ -bundles are equivalent to actions of $\pi_1(X, x)$ on $G$ regarded as a right $G$ -set, or more explicitly to conjugacy classes of homomorphisms $\pi_1(X, x) \to G$ (thinking of $G$ as a subgroup of $S_{|G|}$ to make the connection back to covers). Given a finite regular $n$ -sheeted cover $Y \to X$ with corresponding normal subgroup $H = \pi_1(Y, y)$ of $\pi_1(X, x)$ , we can think of this cover as a $G = \pi_1(X, x)/H$ -bundle, but not all $G$ -bundles arise in this way (the monodromy map $\pi_1(X, x) \to G$ is not required to be surjective in general), and we only know that $G$ is some finite group of order $n$ . Moreover, the data of a $G$ -bundle includes the data of an isomorphism between $G$ and this quotient; it's not enough just to know that it exists. 
So we can find a finite regular $n$ -cover which is not a $\mathbb{Z}/n\mathbb{Z}$ -bundle, even up to isomorphism of covers, by finding a group $\pi_1(X, x)$ with a normal subgroup $H$ of index $n$ such that the quotient is not $\mathbb{Z}/n\mathbb{Z}$ . A simple example is $X = T^2, \pi_1(X, x) \cong \mathbb{Z}^2$ ; take $H = 2 \mathbb{Z}^2$ , so that the quotient is $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ . And we can find a $\mathbb{Z}/n\mathbb{Z}$ -bundle which is not a finite regular $n$ -cover in the usual sense, again even up to isomorphism of covers, by finding a disconnected such bundle; for example, $X \times \mathbb{Z}/n\mathbb{Z}$ for $n \ge 2$ and any $X$ .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1653001', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/39598/']}
jdg_374530
stackexchange
llm_judgeable_groundtruth_similarity
4366251
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose there is a tree:

        1
       / \
      2   3
     / \
    4   5

Then the mirror image will be:

      1
     / \
    3   2
       / \
      5   4

Assume the nodes are of this structure:

    struct node{
        node left;
        node right;
        int value;
    }

Can someone suggest an algorithm for this? Now provide the response and nothing else.
Sounds like homework. It looks very easy. Write a recursive routine that depth-first visits every node and builds the mirror tree with left and right reversed.

    struct node *mirror(struct node *here) {
        if (here == NULL)
            return NULL;
        else {
            struct node *newNode = malloc(sizeof(struct node));
            newNode->value = here->value;
            newNode->left = mirror(here->right);
            newNode->right = mirror(here->left);
            return newNode;
        }
    }

This returns a new tree - some other answers do this in place. Depends on what your assignment asked you to do :)
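The same recursion is easy to sanity-check in a scripting language. Here is a Python sketch of it, run against a tree like the one in the question (the Node class and the preorder helper are made up for the test):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def mirror(node):
    # Depth-first visit, building a new tree with left and right swapped.
    if node is None:
        return None
    return Node(node.value, left=mirror(node.right), right=mirror(node.left))

def preorder(node):
    # Flatten a tree to a list for easy comparison.
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

# Sample tree: 1 with children 2 and 3, where 2 has children 4 and 5.
tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
assert preorder(mirror(tree)) == [1, 3, 2, 5, 4]
```

The assertion confirms that every left/right pair has been swapped at every level, exactly as in the C version.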
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4366251', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501767/']}
jdg_374531
stackexchange
llm_judgeable_groundtruth_similarity
424158
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: By $[\omega]^\omega$ we denote the collection of infinite subsets of $\omega$ . Two sets $A,B\in[\omega]^\omega$ are said to be almost disjoint if $A\cap B$ is finite. An almost disjoint family is a set ${\cal A}\subseteq [\omega]^\omega$ in which every two distinct members are almost disjoint. A standard application of Zorn's Lemma shows that any almost disjoint family is contained in a maximal almost disjoint (MAD) family (maximal with respect to $\subseteq$ ). A "pathological" MAD family is $\{E, \omega\setminus E\}$ where $E = \{2n:n\in \omega\}$ . We will consider infinite MAD families only. (A diagonalisation argument shows that every infinite MAD family is uncountable.) Question. Is there an infinite MAD family ${\cal M}\subseteq [\omega]^\omega$ with $\bigcap {\cal M} = \emptyset$ and a set $R\subseteq \omega$ such that $|R\cap M| = 1$ for all $M\in {\cal M}$ ? Now provide the response and nothing else.
Expositionally, forcing is (usually) easier to understand with a c.t.m. This does indeed lead to somewhat different results, such as

$(*)\quad$ If there is a countable transitive model of $\mathsf{ZFC}$ then there is a countable transitive model of $\mathsf{ZFC+\neg CH}$

as opposed to the snappier and more intuitive

$(**)\quad$ If there is a model of $\mathsf{ZFC}$ then there is a model of $\mathsf{ZFC+\neg CH}$

(which is of course equivalent to "$\mathsf{Con(ZFC)}\rightarrow\mathsf{Con(ZFC+\neg CH)}$" by the completeness theorem). More precisely, thinking about c.t.m.s we see that $\mathsf{ZFC}$ proves $(*)$ but we do not immediately see that $\mathsf{ZFC}$ proves $(**)$. There are, however, various ameliorating points here:

First of all, $(*)$ is itself nontrivial; note that while the hypothesized model is "special," so is the constructed model. So it's not really fair to say that it's weaker than $(**)$.

More substantively, we could "stratify" forcing by finite subtheories of $\mathsf{ZFC}$. It turns out that we can write $\mathsf{ZFC}$ as a union of an increasing chain of finitely axiomatizable theories $\mathsf{ZFC}=\bigcup_{i\in\mathbb{N}}T_i$ with the property that each $T_i$ "supports forcing" in the appropriate sense: if $\mathcal{M}$ is a c.t.m. of $T_i$ then each generic extension of $\mathcal{M}$ is again a model of $T_i$. (And in fact this is overkill: it would be enough for our purposes to have each generic extension of a c.t.m. of $T_{i+1}$ satisfy $T_i$.) The reflection principle gives us c.t.m.s of each $T_i$, forcing then gives (for example) c.t.m.s of each $T_i+\neg\mathsf{CH}$, and finally the finiteness of proofs lets us conclude $\mathsf{ZFC}\not\vdash\mathsf{CH}$. This is the approach Kunen outlines if memory serves, and we can think of it as also subsuming Nik Weaver's suggestion in the comments of introducing a named c.t.m. of $\mathsf{ZFC}$ "inside (a conservative extension of) the theory."
We could also simply develop forcing over arbitrary countable models of $\mathsf{ZFC}$ . Remember that $\mathsf{ZFC}$ (and indeed much less) proves "If $\mathsf{ZFC}$ is consistent then it has a countable model," so hypothesizing countable models - unlike countable transitive models - doesn't actually take us beyond the (obviously necessary) hypothesis of $\mathsf{Con(ZFC)}$ . This may seem difficult due to the use of recursion in defining the structure $M[G]$ (let alone $\Vdash$ ) and the reference to $V$ in verifying Regularity in $M[G]$ , but since the whole point is that the forcing relation is definable inside the "base structure" these issues are easily surmountable. In fact, via Boolean-valued models we can develop forcing over arbitrary models full stop. In my opinion, this is the "right" approach to take since it really gets at the model-theoretic heart of the matter. Whichever approach one takes, the point is that the c.t.m.-related issues are not actually that substantive, and blithely assuming the existence of a c.t.m. - while technically improper - makes it easier to see the actual mathematical intricacies of forcing (namely, how the combinatorial properties of the poset correspond to the logical properties of the extension that results from forcing with that poset).
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/424158', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8628/']}
jdg_374532
stackexchange
llm_judgeable_groundtruth_similarity
18851504
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I think I made some mistake while writing my code. I have a class MyClass that implements Interface, and now I have a method whose parameter is a List of Interface. I want to use my method by passing a List of MyClass. I can't cast from List<MyClass> to List<Interface> (which I assumed I could). So, what can I do to make my code work? Here is an example:

    List<MyClass> lista = returnMyClassList();
    myMethod((List<Interface>) lista); // this doesn't work

    // myMethod signature
    public void myMethod(List<Interface> struttura);

Thanks for helping. Now provide the response and nothing else.
Use an upper bound of Interface for the type: <? extends Interface>. Here's some compilable code that uses classes from the JDK to illustrate:

    public static void myMethod(List<? extends Comparable> struttura) {
    }

    public static void main(String[] args) {
        List<Integer> lista = null;
        myMethod(lista); // compiles OK
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18851504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2190639/']}
jdg_374533
stackexchange
llm_judgeable_groundtruth_similarity
433275
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I encounter the problem when I try to shrink one logical volume. I took the actions as the steps shown below:

    # unmount /home
    # lvreduce --size -100G /dev/vg_dev/lv_home
    # lvdisplay /dev/vg_dev/lv_home
      --- Logical volume ---
      LV Path                /dev/vg_dev/lv_home
      LV Name                lv_home
      VG Name                vg_dev
      LV UUID                QOxrP3-nS36-nZTG-yL7e-sDi0-Wl9z-LXp4pJ
      LV Write Access        read/write
      LV Creation host, time ,
      LV Status              available
      # open                 1
      LV Size                197.98 GiB
      Current LE             50683
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:4

    # e2fsck -f /dev/vg_dev/lv_home
    e2fsck 1.41.12 (17-May-2010)
    The filesystem size (according to the superblock) is 78113792 blocks
    The physical size of the device is 51899392 blocks
    Either the superblock or the partition table is likely to be corrupt!
    Abort<y>?

    # resize2fs /dev/vg_dev/lv_home
    resize2fs 1.41.12 (17-May-2010)
    Please run 'e2fsck -f /dev/vg_dev/lv_home' first.

I cannot mount the logical volume at all. Can someone help me solve this problem? Now provide the response and nothing else.
It sounds like you've tried to resize a volume and filesystem in the wrong order.

When shrinking, you should perform actions in this order:

1. Resize the file system
2. Resize the logical volume

When growing, you should perform actions in this order:

1. Resize the logical volume
2. Resize the file system

For your issue

The first thing I would do is clone the entire drive using dd to a spare drive, and work on that one for recovery. If you are lucky and you didn't use any of the LV's freed extents, and thus the blocks on the disk haven't changed, you might be able to resize the LV back to what it was previously, then attempt a fsck on the file system. If the fsck passes, you can then resize the file system. If you still get an error with the fsck, you might have done irreparable damage.

NB: lvresize now supports file system resizing when passed an additional argument -r|--resizefs, which mitigates the need to resize in a particular order and drastically reduces the potential for error.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/433275', 'https://serverfault.com', 'https://serverfault.com/users/67223/']}
jdg_374534
stackexchange
llm_judgeable_groundtruth_similarity
34745
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have always been surprised by the fact that the quotient of two independent Gaussian random variables is a Cauchy random variable - as is often the case, coincidences in mathematics are not accidental: is there any deep explanation behind this connection between the Gaussian and the Cauchy distribution?

Other examples:

- if a $2$-dimensional Brownian motion $(X_t, Y_t)$ is started at $(0,1)$ and stopped the first time $T$ that it hits the real axis, then $X_T$ is also distributed as a Cauchy distribution.
- the Cauchy distribution also shows up when studying how a complex Brownian motion winds around the origin.

Now provide the response and nothing else.
The bivariate distribution formed by two independent normalized Gaussians is rotationally symmetric (think about the usual argument for evaluating the probability integral). The quotient of two random variables $X$ and $Y$ is the tangent of the angle between the line from $(0,0)$ to $(X,Y)$ and the $x$-axis. If one has a rotationally symmetric distribution for $X$ and $Y$ (with no point mass at the origin) then $Y/X$ is the tangent of a uniformly distributed angle. This is the Cauchy distribution.

Added: Your example with the Brownian motion states in effect that if $P$ is the first point at which the motion hits the $x$-axis, then the angle between the line from $P$ to the starting point and the $y$-axis is uniformly distributed between $-\pi$ and $\pi$. I can't see any reason why this should be so, but perhaps someone (unlike me) who actually knows something about Brownian motion might know why.
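This is also easy to check numerically. The following Python sketch tests one consequence of the claim: a standard Cauchy variable has its quartiles at ±1, so about half of the Gaussian ratios should land in [-1, 1]. The seed, sample size, and tolerance are arbitrary choices for the experiment.

```python
import random

random.seed(12345)
n = 100_000

# Ratio of two independent standard Gaussians.
ratios = [random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0) for _ in range(n)]

# For a standard Cauchy distribution, P(|Z| <= 1) = 1/2.
frac_within_one = sum(abs(r) <= 1.0 for r in ratios) / n
assert abs(frac_within_one - 0.5) < 0.02
```

With 100,000 samples the standard error of the empirical fraction is roughly 0.0016, so the 0.02 tolerance leaves plenty of room.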
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/34745', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1590/']}
jdg_374535
stackexchange
llm_judgeable_groundtruth_similarity
693163
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My script is setting an array from grep results in such a way that empty grep results are also being stored in the array. E.g.

    set -x
    echo "${Arr[@]}"
    + echo '' 'word' '' 'word2'

Can someone help unset those empty values so that echo "${#Arr[@]}" gives 2 instead of 4?

Tried

    var=-1 && for i in "${Input_Arr[@]}"; do var=$((var+1)); if [ -z "$i" ]; then unset "${Input_Arr[$var]}"; fi; done

But it isn't working. Now provide the response and nothing else.
First, there's no need to invent a dummy index - you can access the array's indices using the indirection operator !

Second, "${Input_Arr[$var]}" is the element's value; unset needs the element's name, Input_Arr[$var] or just Input_Arr[var] (since it's already an arithmetic context).

So given:

    $ arr=(foo '' bar '' baz)
    $ declare -p arr
    declare -a arr=([0]="foo" [1]="" [2]="bar" [3]="" [4]="baz")

then

    $ for i in ${!arr[@]}; do [[ -z ${arr[i]} ]] && unset arr[i]; done

leaves

    $ declare -p arr
    declare -a arr=([0]="foo" [2]="bar" [4]="baz")

This also works for associative arrays - with suitable adjustments for the non-numeric keys (including double quoting expansions to prevent potential split + glob):

    $ declare -A Arr=(['1st val']=foo ['2nd val']='' ['3rd val']=bar ['4th val']='' ['5th val']=baz)
    $ declare -p Arr
    declare -A Arr=(["5th val"]="baz" ["2nd val"]="" ["4th val"]="" ["3rd val"]="bar" ["1st val"]="foo" )
    $ for i in "${!Arr[@]}"; do [[ -z ${Arr[$i]} ]] && unset Arr["$i"]; done
    $ declare -p Arr
    declare -A Arr=(["5th val"]="baz" ["3rd val"]="bar" ["1st val"]="foo" )
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/693163', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/516583/']}
jdg_374536
stackexchange
llm_judgeable_groundtruth_similarity
52496628
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It seems an easy way to circumvent a unique_ptr is to use a pointer to the unique_ptr object. And that's not difficult. So using the unique_ptr is sort of a gentleman's agreement and not really super enforced?

    #include <iostream>
    #include <memory>

    using namespace std;

    class Box {
      public:
        int num;
    };

    void update_box(unique_ptr<Box>* pu);

    int main()
    {
        unique_ptr<Box> b{new Box};
        unique_ptr<Box>* u = &b;
        update_box(u);
        cout << b->num << endl; // Outputs 99.
        return 0;
    }

    void update_box(unique_ptr<Box>* pu) {
        (*pu)->num = 99;
    }

Now provide the response and nothing else.
You actually put your real question into a comment:

    My understanding of unique_ptr is that it should be used to ensure you only have one pointer to an object at any time.

No, this is wrong. You can easily do a simple:

    std::unique_ptr<int> a(new int);
    int *b = a.get(); // b points to the same object as a

The design of unique_ptr tries to ensure that you can only have one unique_ptr to an object. But even then, it can ensure this only if you don't use naked pointers. With naked pointers, it is easy to sidestep this design:

    std::unique_ptr<int> a(new int);
    std::unique_ptr<int> b(a.get());

Here, b points to the same object as a, so this program has undefined behavior (because the int object will be deleted twice).

unique_ptr's intent is to show ownership. As there can be only one unique_ptr which can point to an object (ignoring the "hack" presented earlier), unique_ptr owns the pointed-to object, and as unique_ptr's destructor is called, it will delete the pointed-to object.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52496628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1287554/']}
jdg_374537
stackexchange
llm_judgeable_groundtruth_similarity
7491
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a reference for the following result: Adding two integers in the factored representation is as hard as factoring two integers in the usual binary representation. (I'm pretty sure it's out there because this is something I had wondered at some point, and then was excited when I finally saw it in print.) "Adding two integers in the factored representation" is the problem: given the prime factorizations of two numbers $x$ and $y$, output the prime factorization of $x+y$. Note that the naive algorithm for this problem uses factorization in the standard binary representation as a subroutine. Update : Thanks Kaveh and Sadeq for the proofs. Obviously the more proofs the merrier, but I would also like to encourage more help in finding a reference , which as I said I'm fairly sure exists. I recall reading it in a paper with other interesting and not-often-discussed ideas in it, but I don't recall what those other ideas were or what the paper was about in general. Now provide the response and nothing else.
Assume that we can solve the problem (let's call it FactSum) in complexity class $C$, and that $C$ is closed under $\log$-iteration (aka $\log$-bounded recursion) (e.g. if we can compute $x*y$ where $*$ is a binary function, we can compute $x_1*\ldots*x_{\log{n}}$) and contains $\mathsf{P}$ (this last condition can be made weaker). We show that factoring is also in $C$.

Note that each number can be written as a sum of $\log n$ powers of $2$. Each of these is easy to factor. Now given a number, write it as a sum of powers of $2$, write each summand in the factored representation, and then use the algorithm to sum them in the factored representation. The result will be the factorization of the input number.

This shows that factoring is reducible to $\log$-iteration of your problem FactSum. Therefore factoring is in $\mathsf{P}^{\text{FactSum}}$ (and I think $\mathsf{P}$ can be replaced with $\mathsf{NC^1}$ here).
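A rough Python sketch of the reduction's shape, with the FactSum oracle stubbed out by ordinary trial division (since of course we don't have a fast one). Only the structure is the point: n is split into its log n trivially-factored powers of two, which are then folded together through the oracle.

```python
def from_factors(f):
    # Rebuild an integer from its {prime: exponent} factorization.
    n = 1
    for p, e in f.items():
        n *= p ** e
    return n

def trial_factor(n):
    # Stand-in for the oracle's output: plain trial division.
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def factsum(fa, fb):
    # Hypothetical FactSum oracle: add two numbers given in factored
    # form and return the factorization of the sum.
    return trial_factor(from_factors(fa) + from_factors(fb))

def factor_via_factsum(n):
    # n as a sum of powers of two; 2^0 = 1 has the empty factorization.
    summands = [({2: k} if k else {})
                for k in range(n.bit_length()) if (n >> k) & 1]
    total = summands[0]
    for s in summands[1:]:          # log n iterations of FactSum
        total = factsum(total, s)
    return total

assert factor_via_factsum(60) == {2: 2, 3: 1, 5: 1}
```

Replacing the stub with a genuine FactSum algorithm in $C$ would turn this into the factoring algorithm described above.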
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/7491', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/129/']}
jdg_374538
stackexchange
llm_judgeable_groundtruth_similarity
654994
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The formal languages we use to represent number systems are interchangeable, which is why we don't hesitate to use different notations, e.g. hexadecimal, octal, binary, etc... to represent the reals. The addition, or removal, of characters to the underlying alphabet is irrelevant, as long as the interpretation remains consistent. So why are we restricted from using the integers to represent real numbers? For example, the octal language used to represent real numbers consists of the following alphabet:$\{0,1,2,3,4,5,6,7,.,-\}$. Clearly, we can replace the decimal point '$.$' and the minus sign '$-$' with $8$ and $9$ respectively. Thus a real number such as $-9.125_{10}$ or $-11.1_{8}$ becomes $91181$. This mapping also leaves us with an infinite number of meaningless symbols such as $888..$, $8989...$, or $999...$ which could be used to establish an entirely new language ($\{8,9\}^*$) that could be used to represent irrationals or whatever else you'd like. Why doesn't this particular interpretation constitute an injective mapping? How is this not a listing of the reals? Now provide the response and nothing else.
The Kleene star produces only finite sequences of the alphabet symbols. The elements in $\Sigma^*$ for some alphabet $\Sigma$ can be arbitrary long, but each of them is, individually, finite. Because of this, there are not enough elements in $\Sigma^*$ to give every real number a representation. You can select some irrational numbers to represent with your strings-that-don't-have-a-meaning-yet, of course -- getting an injective mapping from $\Sigma^*$ to $\mathbb R$ is no problem, but you can't make it surjective . There will always be some reals left over that you're not representing.
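The point that $\Sigma^*$ contains only finite strings, and is therefore countable, can be made concrete: a shortlex enumeration assigns every string a finite index. A small Python sketch (the alphabet {8, 9} is the one from the question):

```python
from itertools import count, product

def shortlex(alphabet):
    # Enumerate every finite string over `alphabet`:
    # first by length, then in lexicographic order.
    for n in count(0):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

gen = shortlex("89")
first_seven = [next(gen) for _ in range(7)]
assert first_seven == ["", "8", "9", "88", "89", "98", "99"]
```

Every string appears at some finite position, so this is an injection of $\Sigma^*$ into the naturals, which is exactly why no such mapping can be onto the uncountable reals.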
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/654994', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/124495/']}
jdg_374539
stackexchange
llm_judgeable_groundtruth_similarity
3441
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Especially when copying text from things like Google docs, I would like Emacs to automatically remove smart double quotes, smart single quotes, and all manner of em-dash and en-dash characters, replacing them with their ascii equivalents. Is there a way to configure Emacs to do this automatically? Or, barring that, a function I can call that will do it on the buffer or region? Now provide the response and nothing else.
Based on SU: How to remove smart quotes in copy Paste

You can try something like the following:

    (defcustom smart-to-ascii '(("\x201C" . "\"")
                                ("\x201D" . "\"")
                                ("\x2018" . "'")
                                ("\x2019" . "'")
                                ;; en-dash
                                ("\x2013" . "-")
                                ;; em-dash
                                ("\x2014" . "-"))
      ""
      :type '(repeat
              (cons (string :tag "Smart Character ")
                    (string :tag "Ascii Replacement"))))

    (defun replace-smart-to-ascii (beg end)
      (interactive "r")
      (format-replace-strings smart-to-ascii nil beg end))

Using it as a defcustom allows for adding/adjusting characters to match what is desired.
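For comparison, the same substitution table is a one-liner in Python via str.translate, handy if you want to clean the text before it ever reaches Emacs. The character set mirrors the defcustom above.

```python
# Map the usual "smart" punctuation to plain ASCII equivalents.
SMART_TO_ASCII = str.maketrans({
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u2013": "-",   # en dash
    "\u2014": "-",   # em dash
})

def to_ascii(text):
    return text.translate(SMART_TO_ASCII)

assert to_ascii("\u201cHello\u201d \u2013 it\u2019s fine") == '"Hello" - it\'s fine'
```

Characters not in the table pass through unchanged, so plain ASCII text is left alone.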
{}
{'log_upvote_score': 5, 'links': ['https://emacs.stackexchange.com/questions/3441', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/379/']}
jdg_374540