stackexchange
llm_judgeable_groundtruth_similarity
1238781
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand that gradients are vectors with partial derivatives as components when working in 3D space, but does the existence of a gradient at a point imply continuity at that point? Now provide the response and nothing else.
You are already 90% of the way there; using term-by-term integration or any of the other methods described at Taylor series of $\arctan$, you can derive $$\arctan x=\sum_{n=0}^\infty \frac{(-1)^nx^{2n+1}}{2n+1},\qquad |x|<1,$$ but you cannot claim directly that this is true for $x=1$, because that point lies outside the interval of validity. (You can't do term-by-term integration on the interval $[0,1]$ because the $\sum\int=\int\sum$ theorem is only valid when convergence is uniform, and a priori you only know that the power series is uniformly convergent on $[0,x]$ for any $|x|<1$.) But with a little extra work, we can push this equality out to the boundary of the unit circle, using Abel's theorem. This says that if $\sum_{n=0}^\infty a_n$ is a convergent series, then $$\lim_{x\to1^-}\sum_{n=0}^\infty a_nx^n=\sum_{n=0}^\infty a_n.$$ And we know that $\sum_{n=0}^\infty \frac{(-1)^n}{2n+1}$ is convergent, because it is an alternating series. Thus $$\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}=\lim_{x\to1^-}\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{2n+1}=\lim_{x\to1^-}\arctan x$$ (because the power series is valid for $|x|<1$), and this is equal to $\arctan 1=\frac{\pi}4$ because arctan is continuous in a neighborhood of $1$ (indeed, on all of $\Bbb R$). By the way, just yesterday I submitted a formal proof of this exact theorem in Metamath, so I got to see all the little complications that are glossed over in this exposition.
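As a quick numerical illustration (my own sketch, not part of the original answer), the partial sums of the boundary series do crawl toward $\arctan 1 = \pi/4$, at the slow rate the alternating series error bound predicts:

```python
# Partial sums of the alternating series sum (-1)^n / (2n+1), which
# Abel's theorem identifies with arctan(1) = pi/4 at the boundary x = 1.
from math import pi

def leibniz_partial_sum(N):
    """Sum of the first N terms of the series for arctan(1)."""
    return sum((-1) ** n / (2 * n + 1) for n in range(N))

# The alternating series bound says the error is below the first
# omitted term, 1/(2N+1), so convergence is slow but guaranteed.
err = abs(leibniz_partial_sum(200_000) - pi / 4)
print(err < 1e-5)  # True
```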
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1238781', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/232190/']}
jdg_369941
stackexchange
llm_judgeable_groundtruth_similarity
8065305
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In jQuery v1.7 a new method, on, was added. From the documentation: ‘The .on() method attaches event handlers to the currently selected set of elements in the jQuery object. As of jQuery 1.7, the .on() method provides all functionality required for attaching event handlers.’ What's the difference from live and bind? Now provide the response and nothing else.
on() is an attempt to merge most of jQuery's event binding functions into one. This has the added bonus of tidying up the inefficiencies with live vs delegate. In future versions of jQuery, these methods will be removed and only on and one will be left.

Examples:

// Using live()
$(".mySelector").live("click", fn);
// Equivalent `on` (there isn't an exact equivalent, but with good reason)
$(document).on("click", ".mySelector", fn);

// Using bind()
$(".mySelector").bind("click", fn);
// Equivalent `on`
$(".mySelector").on("click", fn);

// Using delegate()
$(document.body).delegate(".mySelector", "click", fn);
// Equivalent `on`
$(document.body).on("click", ".mySelector", fn);

Internally, jQuery maps all these methods and shorthand event handler setters to the on() method, further indicating that you should ignore these methods from now on and just use on:

bind: function( types, data, fn ) {
    return this.on( types, null, data, fn );
},
live: function( types, data, fn ) {
    jQuery( this.context ).on( types, this.selector, data, fn );
    return this;
},
delegate: function( selector, types, data, fn ) {
    return this.on( types, selector, data, fn );
},

See https://github.com/jquery/jquery/blob/1.7/src/event.js#L965.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8065305', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310276/']}
jdg_369942
stackexchange
llm_judgeable_groundtruth_similarity
17845073
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Having user information in a Session, is it possible to check the session and allow access to the method decorated with the custom attribute based on one of the values provided? So what I'm trying to do is:

public class UserAccess : System.Attribute
{
    private string userRole;

    public UserAccess(string userRole)
    {
        this.userRole = userRole;
    }
}

Then when I decorate an endpoint like this:

[UserAccess(userRole = "Residents")]
public Response Get(Request r)
{
    ///-- Implementation
}

Somehow when the endpoint is invoked, only userRole = "Residents" can actually execute it, based on a session value check. Also, can this validation be done in the custom attribute implementation? Now provide the response and nothing else.
So the other guys are right that the attributes do nothing by themselves. It is just metadata that you have to purposely read at some point during the lifetime of the service call. The best way to do that so it is sort of done auto-magically, and not always directly in every operation, is to add inspectors and service behaviors. It is more work to set up initially, but it gets that out of your direct operation code and can make it apply for any operation to check for that custom attribute.

Basically you have your attribute like so:

namespace MyCustomExtensionService
{
    public class UserAccessAttribute : System.Attribute
    {
        private string _userRole;

        public UserAccessAttribute(string userRole)
        {
            _userRole = userRole;
            //you could also put your role validation code in here
        }

        public string GetUserRole()
        {
            return _userRole;
        }
    }
}

Then you set up your parameter inspector (note there are other inspectors you could use):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Web;

namespace MyCustomExtensionService
{
    public class MyParameterInspector : IParameterInspector
    {
        public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState)
        {
            //nothing to inspect after the call
        }

        public object BeforeCall(string operationName, object[] inputs)
        {
            MethodInfo method = typeof(Service1).GetMethod(operationName);
            Attribute[] attributes = Attribute.GetCustomAttributes(method, typeof(UserAccessAttribute), true);
            if (attributes.Any())
            {
                var attr = (UserAccessAttribute)attributes.First();
                var userHasProperAuthorization = true;
                if (attr.GetUserRole() == "Residents" && userHasProperAuthorization)
                {
                    //everything is good, continue to operation
                }
                else
                {
                    throw new FaultException("You do not have the right security role!");
                }
            }
            return null;
        }
    }
}

Then you set up your endpoint behavior (there are other behaviors you could use):

using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Web;

namespace MyCustomExtensionService
{
    public class MyCustomAttributeBehavior : IEndpointBehavior
    {
        public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
        {
            //no binding parameters needed
        }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime)
        {
            foreach (ClientOperation clientOperation in clientRuntime.Operations)
            {
                clientOperation.ParameterInspectors.Add(new MyParameterInspector());
            }
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
        {
            foreach (DispatchOperation dispatchOperation in endpointDispatcher.DispatchRuntime.Operations)
            {
                dispatchOperation.ParameterInspectors.Add(new MyParameterInspector());
            }
        }

        public void Validate(ServiceEndpoint endpoint)
        {
            //no extra validation needed
        }
    }
}

Then you create your behavior section:

using System;
using System.Linq;
using System.ServiceModel.Configuration;
using System.Web;

namespace MyCustomExtensionService
{
    public class MyBehaviorSection : BehaviorExtensionElement
    {
        protected override object CreateBehavior()
        {
            return new MyCustomAttributeBehavior();
        }

        public override Type BehaviorType
        {
            get { return typeof(MyCustomAttributeBehavior); }
        }
    }
}

Then you set up the config to use the new behavior:

<system.serviceModel>
  <services>
    <service name="MyCustomExtensionService.Service1">
      <endpoint address="" behaviorConfiguration="MyCustomAttributeBehavior"
                binding="basicHttpBinding" contract="MyCustomExtensionService.IService1">
      </endpoint>
    </service>
  </services>
  <extensions>
    <behaviorExtensions>
      <add name="Validator" type="MyCustomExtensionService.MyBehaviorSection, MyCustomExtensionService, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="MyCustomAttributeBehavior">
        <Validator />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>

Here is the service interface, with one operation that will work and one that will fail due to having the wrong user access:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace MyCustomExtensionService
{
    [ServiceContract]
    public interface IService1
    {
        [OperationContract]
        string GetData(int value);

        [OperationContract]
        string GetDataUsingWrongUserAccess(int value);
    }
}

And the service operations:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace MyCustomExtensionService
{
    public class Service1 : IService1
    {
        [UserAccess("Residents")]
        public string GetData(int value)
        {
            return string.Format("You entered: {0}", value);
        }

        [UserAccess("Admin")]
        public string GetDataUsingWrongUserAccess(int value)
        {
            return string.Format("You entered: {0}", value);
        }
    }
}

For more info, see MSDN: http://msdn.microsoft.com/en-us/library/ms730137.aspx and, for inspectors: https://github.com/geersch/WcfParameterInspectors
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17845073', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1791567/']}
jdg_369943
stackexchange
llm_judgeable_groundtruth_similarity
260414
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have been running MTR to/from one of my servers and noticed something that looks odd to me. Because I am not really into this I will give you three outputs.

This is from the server to my home location:

                             My traceroute  [v0.75]
prag341.server4you.de (0.0.0.0)                        Sat Apr 16 12:31:36 2011
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
 Host                                     Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. v9-609a.s4y14.fra.routeserver.net      0.0%   143    6.6   2.9   0.7  15.6   2.4
 2. 217.118.16.161                         0.0%   143    0.7   5.7   0.4  67.3  13.2
 3. 217.118.16.25                          0.0%   143    3.3   5.3   3.3  63.5   8.6
 4. 194.25.211.53                          0.0%   143    3.4   5.5   3.2  61.1   9.1
 5. vie-sb2-i.VIE.AT.NET.DTAG.DE           0.7%   143   17.8  21.7  17.6 131.1  14.8
    vie-sb2-i.VIE.AT.NET.DTAG.DE
 6. at-vie05b-ri1-pos-5-0.aorta.net        0.7%   143   18.7  18.4  17.6  23.8   0.9
 7. at-vie05b-ri2-ge-2-1-9.aorta.net       0.0%   143   17.9  18.6  17.5  41.7   2.6
 8. at-vie01a-rd1-xe-1-0-0.aorta.net       0.0%   143   18.2  21.1  17.3 104.1  12.0
 9. at-vie-sk11-pe01-vl-20.upc.at          0.0%   143   18.2  20.6  17.7  55.7   7.0
10. at-vie-sk11-pe02-vl-1.upc.at           0.0%   143   17.8  19.6  17.3  55.2   6.6
11. ???

This is from my home location to the server:

                             My traceroute  [v0.80]
joe-desktop (0.0.0.0)                                  Sat Apr 16 14:27:54 2011
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
 Host                                     Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 192.168.1.1                            0.0%    87    0.2   0.2   0.2   0.2   0.0
 2. ???
 3. 84.116.4.33                            0.0%    86    9.7   9.0   6.3  27.3   3.5
 4. at-vie-sk11-cia01-vl-2070.upc.at       0.0%    86   22.7  22.8  20.0  52.2   4.7
 5. at-vie-sk11-pe01-vl-2069.upc.at        0.0%    86   47.6  23.9  20.2  47.6   5.8
 6. at-vie01a-rd1-vl-2042.aorta.net        0.0%    86   21.7  25.0  20.1  61.7   8.5
 7. de-fra03a-rd1-xe-9-2-0.aorta.net       0.0%    86   21.3  22.8  19.6  44.0   5.0
 8. 84.116.132.154                         0.0%    86   20.2  22.8  19.3  41.0   4.1
 9. tge-5-1-0-353a.cr2.fra.routeserver.net 0.0%    86   38.6  27.4  20.9 120.2  16.0
10. 217.118.16.130                         0.0%    86   23.7  26.9  20.8  73.0   9.8
11. 217.118.16.26                          0.0%    86   25.5  28.8  22.9  85.1  11.8
12. 217.118.16.165                        81.2%    86   68.2  37.5  25.0  68.2  10.3
13. prag341.server4you.de                  0.0%    86   35.7  27.1  24.0  49.3   4.3

And this is from another server (Amazon EC2) to the server:

                             My traceroute  [v0.75]
flimmit.com (0.0.0.0)                                  Sat Apr 16 12:32:50 2011
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
 Host                                     Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. ip-10-48-192-3.eu-west-1.compute.internal
                                           0.0%   178    0.4   0.9   0.3  16.4   1.7
    ip-10-48-192-2.eu-west-1.compute.internal
 2. ec2-79-125-0-244.eu-west-1.compute.amazonaws.com
                                           0.0%   178    0.5   0.9   0.3  30.8   2.6
    ec2-79-125-0-242.eu-west-1.compute.amazonaws.com
 3. ???
 4. ???
 5. ???
 6. xe-4-1-0.dub10.ip4.tinet.net          36.5%   178    1.9   3.9   1.6  56.8   8.5
 7. xe-4-1-0.dub10.ip4.tinet.net           0.0%   178   12.1   9.7   1.6  92.5  10.5
    xe-0-1-0.lon14.ip4.tinet.net
    xe-2-1-0.lon14.ip4.tinet.net
 8. xe-0-1-0.lon14.ip4.tinet.net           0.0%   177   17.4  17.7  11.1 184.3  24.6
    xe-2-1-0.lon14.ip4.tinet.net
    213.200.77.234
 9. 213.200.77.234                         0.0%   177   25.2  23.7  12.0 162.5  16.0
    tge-4-2-0-0a.cr2.fra.routeserver.net
10. tge-4-2-0-0a.cr2.fra.routeserver.net   0.6%   177  178.6  57.1  24.7 178.6  39.0
    217.118.16.26
11. 217.118.16.26                         47.2%   177   32.7  61.1  29.1 164.4  35.4
    217.118.16.165
12. 217.118.16.165                        28.2%   177   28.9  29.8  27.8  48.9   4.2
    prag341.server4you.de
13. prag341.server4you.de                  1.1%   177   28.2  28.7  27.7  63.4   2.9

What looks weird to me is the very high loss (>80%) on the last hop from my home location to the server. The server is responding fine and services run smoothly. It may be due to my lacking knowledge of networking, but it would seem logical to me that loss rates should add up. Yet I often see MTR outputs where there are high loss rates along the way, but the final target loss is much lower. So my questions are: In my particular case, is this an indicator of a possible problem I should pay attention to? In general, how do I interpret an output of mtr correctly? Can you recommend a good article / literature on that?
Now provide the response and nothing else.
The packet loss is not necessarily an indication of a problem. Remember, those are attempts to communicate with that particular network node directly. Usually those in-between router nodes are only responsible for passing traffic through to another location. They are not required to chat with you directly at all, and one that drops most of your chat should not be a cause for concern. The only important number for you is how many packets are getting through to your destination. The most useful information to come out of those reports is the relative data of how far apart nodes are (in terms of packet time) and, even more importantly, how many hops there are, so you can get an idea how long different legs of the journey will take for people trying to communicate with your servers. Usually the fewer hops there are, the more efficient the route -- indicating the quality of your ISP.
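To make the "per-hop loss does not add up" point concrete, here is a toy simulation (entirely my own sketch, not from the answer): a middle router that forwards all transit traffic but answers only half of the probes aimed directly at it shows roughly 50% loss in mtr, while end-to-end loss stays at zero.

```python
# Toy model of ICMP rate-limiting at an intermediate hop: the router
# ignores half the probes addressed to itself (control plane), yet
# forwards transit packets perfectly, so the destination answers 100%.
import random

random.seed(1)

def router_replies(reply_fraction):
    """True if the router answers a probe aimed directly at it."""
    return random.random() < reply_fraction

N = 1000
middle_hop_replies = sum(router_replies(0.5) for _ in range(N))
destination_replies = sum(router_replies(1.0) for _ in range(N))

print(f"middle hop loss: {1 - middle_hop_replies / N:.0%}")   # roughly 50%
print(f"end-to-end loss: {1 - destination_replies / N:.0%}")  # 0%
```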
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/260414', 'https://serverfault.com', 'https://serverfault.com/users/53998/']}
jdg_369944
stackexchange
llm_judgeable_groundtruth_similarity
17863570
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the latest version of most Hadoop distributions, the HDFS usage reports seem to report on space without accounting for the replication factor, correct? When one looks at the Namenode Web UI and/or runs the 'hadoop dfsadmin -report' command, one can see a report that looks something like this:

Configured Capacity: 247699161084 (230.69 GB)
Present Capacity: 233972113408 (217.9 GB)
DFS Remaining: 162082414592 (150.95 GB)
DFS Used: 71889698816 (66.95 GB)
DFS Used%: 30.73%
Under replicated blocks: 40
Blocks with corrupt replicas: 6
Missing blocks: 0

Based on the machine sizes of this cluster, it seems that this report does NOT account for triple replication... I.E. If I place a file on the HDFS, I should account for the triple replication myself. For example, if I placed a 50GB file on the HDFS, would my HDFS be dangerously close to full (since it seems that file would be replicated 3 times, using up the 150GB that currently remain)? Now provide the response and nothing else.
Let us define clearly what each of these terms means.

Configured Capacity: The total capacity available to HDFS for storage. So if you have 4 nodes and each node has 50 GB capacity, the configured capacity will be 200 GB. The replication factor is irrelevant in the case of configured capacity.

DFS Used: The amount of storage space that has been used up by HDFS. Divide DFS Used by your replication factor to get the actual size of your files stored without replication. So if your DFS Used is 60 GB and your replication factor is 3, the actual size of your files is 60/3 = 20 GB.

DFS Remaining: The amount of storage space still available to HDFS. If you have 150 GB remaining storage space, that means you can store up to 150/3 = 50 GB of files without exceeding your Configured Capacity (assuming replication factor = 3).

Present Capacity: The amount of storage space available for storing user files after allocating space for metadata. The difference (Configured Capacity - Present Capacity) is used for storing file system metadata and inode information.

Hope this clears it up.
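The arithmetic above is trivial but easy to get backwards; here is a small helper (my own sketch, with made-up variable names) that converts the raw counters into logical sizes:

```python
# HDFS counters report raw (replicated) bytes; dividing by the
# replication factor recovers the logical, pre-replication size.
GB = 1024 ** 3

def logical_size_gb(raw_bytes, replication_factor=3):
    """Logical data size in GB for a raw HDFS byte count."""
    return raw_bytes / replication_factor / GB

dfs_used = 60 * GB        # "DFS Used" from the report
dfs_remaining = 150 * GB  # "DFS Remaining" from the report

print(logical_size_gb(dfs_used))       # 20.0 GB of files actually stored
print(logical_size_gb(dfs_remaining))  # 50.0 GB of new files still fit
```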
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17863570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667595/']}
jdg_369945
stackexchange
llm_judgeable_groundtruth_similarity
23416536
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a Scala class and defined the main() method in it. It compiled, but when I ran it, I got NoSuchMethodError: main. In all the Scala examples I have seen, the main method is defined in an object. In Java we define the main method in a class. Is it possible to define main() in a Scala class, or do we always need an object for this? Now provide the response and nothing else.
To answer your question, have a look at the following: I made a Scala class, compiled it, and decompiled it, and what I got is interesting.

class MyScalaClass {
  def main(args: Array[String]): Unit = {
    println("Hello from main of class")
  }
}

Compiled from "MyScalaClass.scala"
public class MyScalaClass {
  public void main(java.lang.String[]);
  public MyScalaClass();
}

So it means that when the Scala class is converted to a Java class, the main method of the Scala class, which in turn is converted to the main method of the Java class, is not static. And hence we would not be able to run the program, because the JVM is not able to find the starting point in it.

But if the same code is written using the 'object' keyword:

object MyScalaClass {
  def main(args: Array[String]): Unit = {
    println("Hello from main of object")
  }
}

Decompiling with javap MyScalaClass$.class:

Compiled from "MyScalaClass.scala"
public final class MyScalaClass$ {
  public static final MyScalaClass$ MODULE$;
  public static {};
  public void main(java.lang.String[]);
}

Decompiling with javap MyScalaClass.class:

Compiled from "MyScalaClass.scala"
public final class MyScalaClass {
  public static void main(java.lang.String[]);
}

So here we got public static void main in MyScalaClass.class, therefore the main method can be executed directly by the JVM. I hope you got your answer.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23416536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2327621/']}
jdg_369946
stackexchange
llm_judgeable_groundtruth_similarity
55773
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do derivatives of operators work? Do they act on the terms in the derivative or do they just get "added to the tail"? Is there a conceptual way to understand this? For example: say you had the operator $\hat{X} = x$. Would $\frac{\mathrm{d}}{\mathrm{d}x}\hat{X}$ be $1$ or $\frac{\mathrm{d}}{\mathrm{d}x}x$? The difference being when taking the expectation value, would the integrand be $\psi^*\psi$ or $\psi^*(\psi+x\frac{\mathrm{d}\psi}{\mathrm{d}x})$? My specific question is about the band effect in solids. To get a better understanding of the system, we've used Bloch's theorem to express the wavefunction in the form $\psi = e^{iKx}u_K(x)$ where $u_K(x)$ is some periodic function. With the fact that $\psi$ solves the Schrodinger equation, we've been able to derive an "effective Hamiltonian" that $u_K$ is an eigenfunction of, $H_K = -\frac{\hbar^2}{2m}(\frac{\mathrm{d}}{\mathrm{d}x}+iK)^2+V$. My next problem is to find $\left\langle\frac{\mathrm{d}H_K}{\mathrm{d}K}\right\rangle$, which led to this question. Some of my reasoning: An operator is a function on functions, so like all other functions we can write it as $f(g(x))$. When you take the derivative of this function, you get $f'(g(x))*g'(x)$. So looking at the operator $\hat{X}$, we can say that it is a function on $\psi(x)$, $\hat{X}(\psi)= x\psi$. So taking the derivative gives us:$$\frac{\mathrm{d}\hat{X}}{\mathrm{d}x} = \psi+ x\frac{\mathrm{d}\psi}{\mathrm{d}x}$$but you could also say that $\hat{X}=x$ (not a function), so $$\frac{\mathrm{d}\hat{X}}{\mathrm{d}x} = \frac{\mathrm{d}}{\mathrm{d}x}x = 1$$Now I'm inclined to say that $\hat{X}$ is a function, but it seems like for this question it is better to just treat it as a constant and naively (in my opinion) take its derivative. So which way do I do it? Now provide the response and nothing else.
If we leave out various subtleties related to operators, the core of OP's question (v4) seems to boil down to the following. What is meant by $$\tag{0}\frac{d}{dx}f(x)?$$ Do we mean the derivative $$\tag{1} f^{\prime}(x),$$ or do we mean the first-order differential operator that can be re-written in normal-ordered$^1$ form as $$\tag{2} f^{\prime}(x)+f(x)\frac{d}{dx}?$$ The answer is: It depends on context. Different authors mean different things. One would have to trace carefully the author's definitions to know for sure. However, if it is written as $\frac{df(x)}{dx}$ instead, it always means $f^{\prime}(x)$, or equivalently, $[\frac{d}{dx},f(x)]$. -- $^1$ A differential operator is by definition normal-ordered , if all derivatives in each term are ordered to the right.
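The normal-ordered reading (2) can be sanity-checked numerically; the sketch below (my own code, using a central finite difference) verifies $\frac{\mathrm{d}}{\mathrm{d}x}\bigl(x\,\psi(x)\bigr)=\psi(x)+x\,\psi'(x)$ for a smooth test function:

```python
# Verify the operator identity d/dx ∘ x = 1 + x d/dx by applying both
# sides to a test function psi and comparing at a sample point.
def d(f, x, h=1e-6):
    """Central finite-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

psi = lambda x: x**3 + 2 * x        # any smooth test function
psi_prime = lambda x: 3 * x**2 + 2  # its exact derivative

x0 = 1.3
lhs = d(lambda x: x * psi(x), x0)   # (d/dx ∘ X) applied to psi
rhs = psi(x0) + x0 * psi_prime(x0)  # normal-ordered form applied to psi

print(abs(lhs - rhs) < 1e-4)  # True: the two sides agree on psi
```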
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/55773', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/11017/']}
jdg_369947
stackexchange
llm_judgeable_groundtruth_similarity
44204417
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating an Angular application using Angular 4 and the CLI. I am trying to add the SkyScanner search widget into one of my components. Skyscanner Widget Example Part of the implementation requires the addition of a new external script: <script src="https://widgets.skyscanner.net/widget-server/js/loader.js" async></script> I am not sure of the correct way to reference this file. If I add the script into my index.html file, the widget doesn't load unless a full page refresh is performed. I assume the script tries to manipulate the DOM on load and the elements don't exist when the script runs. What is the correct way to load the script only when the component containing the Skyscanner widget is loaded? Now provide the response and nothing else.
Try to load external JavaScript on component load as below : loadAPI: Promise<any>;constructor() { this.loadAPI = new Promise((resolve) => { this.loadScript(); resolve(true); });}public loadScript() { var isFound = false; var scripts = document.getElementsByTagName("script") for (var i = 0; i < scripts.length; ++i) { if (scripts[i].getAttribute('src') != null && scripts[i].getAttribute('src').includes("loader")) { isFound = true; } } if (!isFound) { var dynamicScripts = ["https://widgets.skyscanner.net/widget-server/js/loader.js"]; for (var i = 0; i < dynamicScripts.length; i++) { let node = document.createElement('script'); node.src = dynamicScripts [i]; node.type = 'text/javascript'; node.async = false; node.charset = 'utf-8'; document.getElementsByTagName('head')[0].appendChild(node); } }}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/44204417', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/496978/']}
jdg_369948
stackexchange
llm_judgeable_groundtruth_similarity
131420
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $I$ be a compact interval and $\mathcal{M}(I)$ the space of (signed) Borel measures. We equip it with the weak topology, i.e. a sequence $\mu_n$ converges to zero if and only if$$ \left|\int_I f(x) \mathrm{d}\mu_n(x)\right| \longrightarrow 0$$for all $f \in C(I)$. Now the question is the following: Let $V \subset \mathcal{M}(I)$ be the vectorspace of all finite linear combinations of Dirac measures supported at different points in $I$. Is $V$ dense in $\mathcal{M}(I)$? For example if $I = [0,1]$, the sequence$$ \mu_n = \frac{1}{N}\sum_{j=1}^N \delta_{j/N},$$$\delta_{j/N}$ being the Dirac measure supported at $j/N$, weak*-converges to the Lebesgue measure as $\mu_n$ is just the approximation by Riemann sums. Hence one can easily get all measures that are absolutely continuous w.r.t. the Lebesgue measure. However, there are more measures (singular measures) that are neither point measures nor Lebesgue measures and I don't have an idea how to reach those. Now provide the response and nothing else.
Equipped with the mentioned weak($-\star$) topology, correct me if I'm wrong, but the set of continuous linear forms on $\mathcal{M}(I)$ is precisely given by $C(I)$. Then, by the classical use of the Hahn-Banach theorem, your vector space $V$ is dense if and only if the only continuous linear form on $\mathcal{M}(I)-w\star$ vanishing on $V$ is $0$. But since $V$ contains all Dirac masses, it seems clear that the linear form associated with a continuous function $f\in C(I)$ vanishes on $V$ if and only if $f=0$! So I would say that $V$ is indeed dense (but only for the weak-$\star$ topology, of course). Ayman
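The Riemann-sum example from the question is easy to see numerically; here is a small sketch (my own code) of the empirical measure $\mu_N=\frac1N\sum_j\delta_{j/N}$ integrating a continuous function like the Lebesgue measure does:

```python
# Integrate f against mu_N = (1/N) * sum of Dirac masses at j/N, and
# compare with the Lebesgue integral of f over [0, 1].
from math import sin, pi

def integrate_empirical(f, N):
    """Integral of f against the discrete measure mu_N."""
    return sum(f(j / N) for j in range(1, N + 1)) / N

f = lambda x: sin(pi * x)
exact = 2 / pi  # integral of sin(pi x) dx over [0, 1]

print(abs(integrate_empirical(f, 10_000) - exact) < 1e-4)  # True
```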
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/131420', 'https://mathoverflow.net', 'https://mathoverflow.net/users/16702/']}
jdg_369949
stackexchange
llm_judgeable_groundtruth_similarity
40790267
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to achieve a really simple task—changing the ViewController of a Container View by pressing a button: In my example the ViewController1 is embedded into the Container View using Interface Builder. By pressing the Button ViewController2 I want to change the view to the second ViewController. I’m confused because the Container View itself seems to be an NSView if I create an Outlet, and as far as I know an NSView can’t contain a VC. Really appreciate your help! Now provide the response and nothing else.
Just note that in order for this to work you have to add storyboard identifiers to your view controllers, which can be done by going to your storyboard, selecting the Identity Inspector in the right-hand pane, and then entering the Storyboard ID in the Identity subcategory. Then this implementation of ViewController would achieve what you are looking for.

import Cocoa

class ViewController: NSViewController {

    // link to the NSView Container
    @IBOutlet weak var container : NSView!

    var vc1 : ViewController1!
    var vc2 : ViewController2!
    var vc1Active : Bool = false

    override func viewDidLoad() {
        super.viewDidLoad()

        // Make sure to set your storyboard identifiers on ViewController1 and ViewController2
        vc1 = NSStoryboard(name: "name", bundle: nil).instantiateController(withIdentifier: "ViewController1") as! ViewController1
        vc2 = NSStoryboard(name: "name", bundle: nil).instantiateController(withIdentifier: "ViewController2") as! ViewController2

        self.addChild(vc1)
        self.addChild(vc2)

        vc1.view.frame = self.container.bounds
        self.container.addSubview(vc1.view)
        vc1Active = true
    }

    // You can link this action to both buttons
    @IBAction func switchViews(sender: NSButton) {

        for sView in self.container.subviews {
            sView.removeFromSuperview()
        }

        if vc1Active == true {
            vc1Active = false
            vc2.view.frame = self.container.bounds
            self.container.addSubview(vc2.view)
        } else {
            vc1Active = true
            vc1.view.frame = self.container.bounds
            self.container.addSubview(vc1.view)
        }
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40790267', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3917662/']}
jdg_369950
stackexchange
llm_judgeable_groundtruth_similarity
30696662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a real hard time figuring out this CSS flexbox solution. Basically there are 2 issues, one with a 2-column and another with a 3-column layout. 2-Column: This, I think, may be pretty straightforward: 3-Column: This is probably a bit more advanced: The container class is, well, .container , and the children are just .left , .right , and .middle . Not sure if it's relevant, but the width of .container is 100% of viewport. I should probably add that using Bootstrap is not possible, due to reasons out of my control. Now provide the response and nothing else.
Here's how you do it for the three columns. I'm only adding that, because it's a bit more tricky:

.container {
  display: flex;
  flex-wrap: wrap;
  flex-direction: row;
  justify-content: flex-start;
  align-items: stretch;
}
.left {
  order: 1;
  background: red;
  flex-basis: 100%;
  height: 300px;
}
.middle {
  order: 3;
  background: green;
  flex-basis: 100%;
  height: 300px;
}
.right {
  order: 2;
  background: yellow;
  flex-basis: 100%;
  height: 300px;
}
@media screen and (min-width: 600px) {
  .container {
    flex-wrap: nowrap;
  }
  .left {
    flex-basis: 200px;
    order: 1;
  }
  .middle {
    flex-basis: 1;
    order: 2;
  }
  .right {
    flex-basis: 200px;
    order: 3;
  }
}

<div class="container">
  <div class="left"></div>
  <div class="middle"></div>
  <div class="right"></div>
</div>

And the fiddle: http://jsfiddle.net/2touox81/

As you can see, you can set column order for flex items. Hope this helps.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30696662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2773381/']}
jdg_369951
stackexchange
llm_judgeable_groundtruth_similarity
204597
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm reading the book Information Theory, Inference and Learning Algorithms. In Section 22.1, the author gives an example of finding the MLE of the mean of a univariate Gaussian, and then obtaining its error bar, given the data and the standard deviation. The related text is: If we Taylor-expand the log likelihood about the maximum, we can define approximate error bars on the maximum likelihood parameter: we use a quadratic approximation to estimate how far from the maximum-likelihood parameter setting we can go before the likelihood falls by some standard factor, for example $e^{1/2}$, or $e^{4/2}$. In the special case of a likelihood that is a Gaussian function of the parameters, the quadratic approximation is exact. Then comes Example 22.2: Find the second derivative of the log likelihood with respect to $\mu$, and find the error bars on $\mu$, given the data and $\sigma$. The solution to this example in the text is: Comparing this curvature with the curvature of the log of a Gaussian distribution over $\mu$ of standard deviation $\sigma_{\mu}$, $\exp(-\mu^2/(2\sigma_{\mu}^2))$, which is $-1/\sigma_{\mu}^2$, we can deduce that the error bars on $\mu$ (derived from the likelihood function) are $$\sigma_{\mu} = \frac{\sigma}{\sqrt{N}}$$ I don't understand the above procedure of finding the error bars by "comparing the curvature"; what's the principle behind it? Now provide the response and nothing else.
A covariance matrix $\mathbf C$ is called isotropic , or spherical , if it is proportionate to the identity matrix: $$\mathbf C = \lambda \mathbf I,$$ i.e. it is diagonal and all elements on the diagonal are equal. This definition does not depend on the coordinate system; if we rotate coordinate system with an orthogonal rotation matrix $\mathbf V$, then the covariance matrix will transform into $$\mathbf V^\top \mathbf C \mathbf V = \mathbf V^\top \cdot \lambda \mathbf I \cdot\mathbf V = \mathbf V^\top \mathbf V \cdot \lambda \mathbf I = \lambda \mathbf I,$$ i.e. will stay the same. Intuitively, isotropic covariance matrix corresponds to a "spherical" data cloud. A sphere remains a sphere after rotation.
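The rotation-invariance claim above is easy to verify numerically. The following is a minimal sketch in plain Python (2x2 case, hand-rolled matrix product, no external libraries assumed): for an isotropic C = lambda * I, the rotated covariance V^T C V comes back unchanged.

```python
import math

def matmul(a, b):
    # multiply two 2x2 matrices represented as nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

lam = 3.0
C = [[lam, 0.0], [0.0, lam]]           # isotropic covariance, C = lambda * I

theta = 0.7                            # arbitrary rotation angle
V = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

rotated = matmul(matmul(transpose(V), C), V)   # V^T C V

# the rotated covariance equals the original up to floating-point error
for i in range(2):
    for j in range(2):
        assert abs(rotated[i][j] - C[i][j]) < 1e-12
```

A non-isotropic diagonal C (unequal diagonal entries) would fail this check, which is exactly the "spherical data cloud" intuition.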
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/204597', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/95579/']}
jdg_369952
stackexchange
llm_judgeable_groundtruth_similarity
594580
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Fiddling with Mathematica seems to suggest the following: $$\frac{(2^2)(4^2)(6^2)\cdots(2N^2)}{(1^2)(3^2)(5^2)\cdots(2N-1)^2}=N\pi+\frac{\pi}{4}+\frac{\pi}{32N}-\frac{\pi}{128N^2}+o(1/N^2).$$ Does anyone have an explanation or reference for this, or know how the series continues? The next term appears to be around $-\frac{\pi}{411.5N^3}\approx-0.00763$. Now provide the response and nothing else.
Your first term is $r^2$, where $$r=\frac{2\cdot 4\cdots (2n)}{1\cdot 3\cdot 5\cdots (2n-1)}=\frac{(2\cdot 4\cdots (2n))^2}{(2n)!}=\frac{4^nn!n!}{(2n)!}=\frac{4^n}{{2n\choose n}}$$ Now, ${2n \choose n}$ is the central binomial coefficient , which is known to be approximately $\large \frac{4^n}{\sqrt{\pi n}}$ for large $n$. Hence $r^2\approx \pi n$. You are teasing out more terms of the approximation (then squaring). You can get as many terms as you like by using terms of the Stirling series to approximate the factorials.
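A quick numerical sanity check of r^2 ≈ pi * n (my own sketch, plain Python): compute r = 4^n / C(2n, n) exactly with integer arithmetic and compare r^2 against pi * n, including the pi/4 correction term from the question.

```python
import math

def r_squared(n):
    # r = 4**n / C(2n, n); by the approximation above, r**2 -> pi * n
    r = 4**n / math.comb(2 * n, n)
    return r * r

for n in (10, 100, 1000):
    print(n, r_squared(n) / (math.pi * n))
# the ratio tends to 1; the leftover difference r^2 - pi*n tends to pi/4,
# matching the second term of the expansion in the question
```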
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/594580', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31753/']}
jdg_369953
stackexchange
llm_judgeable_groundtruth_similarity
375274
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The given transfer function is: $$\frac{6p+100}{p^2+12p+100}$$ and inputted into the system is a unit step response: $$r(t) = \left\{\begin{array} f0 & t < 0\\1 & t > 0 \end{array}\right.$$ However I'm not supposed to use Laplace transforms to solve this. I believe it has something to do with finding the poles but I'm not exactly sure how to accomplish this. Now provide the response and nothing else.
I assume you are permitted to perform partial fractions, even if you aren't supposed to use \$\mathscr{L}^{-1}\$. The roots of your denominator are \$p_1=-6+j\:8\$ and \$p_2=-6-j\:8\$ and the root of the unit step function, \$\frac{1}{p}\$ is \$p_3=0\$. You have: $$\begin{align*}\frac{1}{p}\cdot \frac{6p+100}{p^2+12p+100}&=\frac{6p+100}{p\cdot\left(p-p_1\right)\cdot\left(p-p_2\right)}\\\\&=\frac{A}{p-p_1}+\frac{B}{p-p_2}+\frac{C}{p-p_3}\\\\&=\frac{-0.5}{p-p_1}+\frac{-0.5}{p-p_2}+\frac{1}{p-p_3}\end{align*}$$ We know that the proposed solutions take the form of \$V_{\left(t\right)}=A\: e^{\:p\: t}\$. So it follows that: $$\begin{align*}V_{\left(t\right)}&= A\: e^{\:p_1\: t}+B\: e^{\:p_2\: t}+C\: e^{\:p_3\: t}\\\\&=-\frac{1}{2}\: e^{\:p_1\: t}-\frac{1}{2}\: e^{\:p_2\: t}+ e^{\:p_3\: t}\\\\&=-\frac{1}{2}\: e^{-6\: t}\: e^{8j\: t}-\frac{1}{2}\: e^{-6\: t}\: e^{-8j\: t}+ 1\\\\&=1-\frac{1}{2}\: e^{-6\: t}\cdot\left(\left[\operatorname{cos}\left(8t\right)+i\operatorname{sin}\left(8t\right)\right]+\left[\operatorname{cos}\left(-8t\right)+i\operatorname{sin}\left(-8t\right)\right]\right)\\\\&=1-\frac{1}{2}\: e^{-6\: t}\cdot\left(\operatorname{cos}\left(8t\right)+i\operatorname{sin}\left(8t\right)+\operatorname{cos}\left(8t\right)-i\operatorname{sin}\left(8t\right)\right)\\\\&=1-\frac{1}{2}\: e^{-6\: t}\cdot\left(\operatorname{cos}\left(8t\right)+\operatorname{cos}\left(8t\right)\right)\\\\&=1- e^{-6\: t}\cdot\operatorname{cos}\left(8t\right)\end{align*}$$
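The partial-fraction coefficients and the final time response can be cross-checked numerically (my own sketch, not part of the original derivation) using the residue formula: the coefficient at a simple pole p_k is the numerator over the product of (p_k minus the other poles), evaluated with Python's built-in complex type.

```python
import math

p1 = complex(-6, 8)
p2 = complex(-6, -8)

def N(p):
    # numerator of the step response 6p + 100
    return 6 * p + 100

A = N(p1) / (p1 * (p1 - p2))          # residue at p = p1
B = N(p2) / (p2 * (p2 - p1))          # residue at p = p2
C = N(0) / ((0 - p1) * (0 - p2))      # residue at p = 0 (the step-input pole)

assert abs(A - (-0.5)) < 1e-12
assert abs(B - (-0.5)) < 1e-12
assert abs(C - 1.0) < 1e-12

def V(t):
    # the closed-form answer derived above
    return 1 - math.exp(-6 * t) * math.cos(8 * t)
```

As expected for a unit step response, V(0) = 0 and V(t) settles to 1 as the exp(-6t) envelope dies out.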
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/375274', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/189343/']}
jdg_369954
stackexchange
llm_judgeable_groundtruth_similarity
11333
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand how the Windowed Fourier transform works, so I used Matlab to create an artificial signal, and plotted the Windowed Fourier transform using a gaussian window. $x(t) = \left\{\begin{matrix} \sin(2\pi100t)& \text{for }0\leq{t}<\frac{1}{4}\\ \sin(2\pi50t)& \text{for }\frac{1}{4}\leq{t}<\frac{1}{2}\\ \sin(2\pi25t)& \text{for }\frac{1}{2}\leq{t}<\frac{3}{4}\\ \sin(2\pi10t)& \text{for }\frac{3}{4}\leq{t}\leq1\end{matrix}\right.$ and $f_s=500$, i.e. the data points are sampled at every 1/500 second. This is what I have obtained for my plot. However, for the last 2 intervals, $\frac{1}{2}\leq{t}<\frac{3}{4}$, and $\frac{3}{4}\leq{t}\leq1$, we can see some form of bifurcating phenomenon (vertically). Why is this so? Now provide the response and nothing else.
This happens because your window is too short. I don't have access to a plotting tool right now, but imagine for a second a slowly varying sinusoid that you chop up into pieces, and these pieces are shorter than the period of your sinusoid. If you take the Fourier transform of each of these pieces, some of your chunks will capture more energy of this sinusoid than others, because they will contain a crest or trough of the function, while others will contain regions close to 0. If you increase your window size, you'll be able to catch several cycles of your slow sinusoids inside each window, and the periodicity will go away. The tradeoff is that your windows will capture larger lengths of time, so boundaries between events will be more blurred. If this still doesn't make sense, I'll try to include some plots later.
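The intuition can be checked with a toy computation (a plain-Python sketch under the question's settings, fs = 500 and the slow 10 Hz component): chop the sinusoid into windows much shorter than its 50-sample period and compare the per-window energies against windows spanning two full periods.

```python
import math

fs = 500                       # sample rate, as in the question
f_slow = 10                    # slow component: period = fs / f_slow = 50 samples
x = [math.sin(2 * math.pi * f_slow * n / fs) for n in range(fs)]

def window_energies(signal, win):
    # energy of each non-overlapping chunk of length `win`
    return [sum(v * v for v in signal[i:i + win])
            for i in range(0, len(signal) - win + 1, win)]

short = window_energies(x, 10)    # 10 samples << one 50-sample period
long_ = window_energies(x, 100)   # 100 samples = two full periods

def spread(e):
    return max(e) - min(e)

# short windows catch crests in some chunks and zero-crossings in others,
# so their energies fluctuate wildly; long windows all see the same energy
assert spread(short) > 10 * spread(long_)
```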
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/11333', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/5014/']}
jdg_369955
stackexchange
llm_judgeable_groundtruth_similarity
5287381
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: can anyone suggest me the regular expression for ip address and mac address ? i am using python & django for example ,http://[ipaddress]/SaveData/127.0.0.1/00-0C-F1-56-98-AD/ for mac address i tried following but didn't work ([0-9A-F]{2}[:-]){5}([0-9A-F]{2})^([0-9A-F]{2}[:-]){5}([0-9A-F]{2})$ Now provide the response and nothing else.
import re

s = "http://[ipaddress]/SaveData/127.0.0.1/00-0C-F1-56-98-AD/"

re.search(r'([0-9A-F]{2}[:-]){5}([0-9A-F]{2})', s, re.I).group()
# '00-0C-F1-56-98-AD'

re.search(r'((2[0-5]|1[0-9]|[0-9])?[0-9]\.){3}((2[0-5]|1[0-9]|[0-9])?[0-9])', s, re.I).group()
# '127.0.0.1'

Place this snippet in your django routing definitions file - urls.py:

url(r'^SaveData/(?P<ip>((2[0-5]|1[0-9]|[0-9])?[0-9]\.){3}((2[0-5]|1[0-9]|[0-9])?[0-9]))/(?P<mac>([0-9A-F]{2}[:-]){5}([0-9A-F]{2}))', SaveDataHandler.as_view()),
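One caveat worth adding (my own note, not part of the original answer): re.search matches anywhere inside a string, so for validating a standalone value you may want anchored variants via re.fullmatch, e.g.:

```python
import re

MAC_RE = re.compile(r'([0-9A-F]{2}[:-]){5}[0-9A-F]{2}', re.I)
IP_RE = re.compile(r'((2[0-5]|1[0-9]|[0-9])?[0-9]\.){3}'
                   r'((2[0-5]|1[0-9]|[0-9])?[0-9])', re.I)

def is_mac(s):
    # fullmatch anchors the pattern to the whole string
    return MAC_RE.fullmatch(s) is not None

def is_ip(s):
    return IP_RE.fullmatch(s) is not None

assert is_mac("00-0C-F1-56-98-AD")
assert is_ip("127.0.0.1")
assert not is_ip("127.0.0.1/extra")   # search() would have accepted this
```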
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5287381', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/405383/']}
jdg_369956
stackexchange
llm_judgeable_groundtruth_similarity
1350560
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've a site which is running in a shared host without memcached.So, how about make a MySQL memory DB as object cache just like memcached? Now provide the response and nothing else.
I would say that if you operate a website that would need memcached, you shouldn't be running it on a shared host. That's my flippant answer. Here's a real answer: Memcached has some good advantages over the MEMORY storage engine. Storage is distributed over multiple servers. MEMORY storage engine is limited to a single host, and constrained by the CPU and memory of that host. Quick access of individual entries. MEMORY storage engine has table-level locking only, so concurrency suffers. Non-relational key/value storage. MEMORY storage engine is more structured, which isn't as useful for cache-type usage. Also MEMORY expands varchar to full length, so is less efficient storage. As a caching solution, I wouldn't choose the MySQL MEMORY storage engine. Since you're using PHP, you should be using APC or Xcache or something. These have data cache features that are better for typical usage in PHP. If you're not using one of these PHP caching technologies, that's a more important area to improve than worrying about memcached versus MEMORY storage engine.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1350560', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/84325/']}
jdg_369957
stackexchange
llm_judgeable_groundtruth_similarity
5378518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've installed xCode 4 last night and tried to compile a software which should support 10.5 and above OSs. It gave the error message "Base SDK missing". I found that 10.5 base SDK is no longer in the menu. Is there any way to get that base SDK? Is there any get around for this problem? Thanks in advance. Now provide the response and nothing else.
Dropping the 10.5 SDK is also a problem for software which does not use Xcode for the build system, but expects (perhaps foolishly) for specific SDKs to be in place. To add the SDK back, I re-installed the MacOSX10.5.pkg from the Xcode 3.2.6 download. It is in a hidden folder named "Packages" on the disk image. After mounting the .dmg file, you can open it from the command line with open /Volumes/Xcode\ and\ iOS\ SDK/Packages/ . When installing the package, choose change install location and option-click on the drive you want to install to, so that you can specify a folder. The target folder should be /Developer or the base of your Xcode 4 install if you have put it in a non-standard location.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5378518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/514323/']}
jdg_369958
stackexchange
llm_judgeable_groundtruth_similarity
16545378
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to retrieve data from a web server in an android application, and don't know where to begin. Should I use web services? Now provide the response and nothing else.
I would recommend these tutorials: Connect android with PHP and MySql, JSON in android, and PHP and MySQLi. I used these tutorials and managed to get what you are trying to do working without too much difficulty. Between them they describe each step of what you are attempting at each stage - the android application, the database, and the web server side - and include extra information on what you can then do to process and use the received information. The only thing I would add is that the Connect android with PHP and MySql tutorial makes use of mysql_ in php, which is deprecated. It is much better to use MySQLi, which is why I included the third link. The basic outline of what you want to do is this:

1) In the android app, make a request to a server php script using a class like this:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import java.util.List;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.NameValuePair;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URLEncodedUtils;
import org.apache.http.impl.client.DefaultHttpClient;
import org.json.JSONException;
import org.json.JSONObject;
import android.util.Log;

public class JSONParser {

    // Response from the HTTP Request
    static InputStream httpResponseStream = null;

    // JSON Response String to create JSON Object
    static String jsonString = "";

    // Method to issue HTTP request, parse JSON result and return JSON Object
    public JSONObject makeHttpRequest(String url, String method, List<NameValuePair> params) {
        try {
            // get a Http client
            DefaultHttpClient httpClient = new DefaultHttpClient();

            // If required HTTP method is POST
            if (method.equals("POST")) {
                // Create a Http POST object
                HttpPost httpPost = new HttpPost(url);
                // Encode the passed parameters into the Http request
                httpPost.setEntity(new UrlEncodedFormEntity(params));
                // Execute the request and fetch Http response
                HttpResponse httpResponse = httpClient.execute(httpPost);
                // Extract the result from the response
                HttpEntity httpEntity = httpResponse.getEntity();
                // Open the result as an input stream for parsing
                httpResponseStream = httpEntity.getContent();
            }
            // Else if it is GET
            else if (method.equals("GET")) {
                // Format the parameters correctly for HTTP transmission
                String paramString = URLEncodedUtils.format(params, "utf-8");
                // Add parameters to url in GET format
                url += "?" + paramString;
                // Execute the request
                HttpGet httpGet = new HttpGet(url);
                // Execute the request and fetch Http response
                HttpResponse httpResponse = httpClient.execute(httpGet);
                // Extract the result from the response
                HttpEntity httpEntity = httpResponse.getEntity();
                // Open the result as an input stream for parsing
                httpResponseStream = httpEntity.getContent();
            }
        // Catch possible exceptions
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }

        try {
            // Create buffered reader for the httpResponseStream
            BufferedReader httpResponseReader = new BufferedReader(
                    new InputStreamReader(httpResponseStream, "iso-8859-1"), 8);
            // String to hold current line from httpResponseReader
            String line = null;
            // Clear jsonString
            jsonString = "";
            // While there is still more response to read
            while ((line = httpResponseReader.readLine()) != null) {
                // Add line to jsonString
                jsonString += (line + "\n");
            }
            // Close response stream
            httpResponseStream.close();
        } catch (Exception e) {
            Log.e("Buffer Error", "Error converting result " + e.toString());
        }

        try {
            // Create jsonObject from the jsonString and return it
            return new JSONObject(jsonString);
        } catch (JSONException e) {
            Log.e("JSON Parser", "Error parsing data " + e.toString());
            // Return null if in error
            return null;
        }
    }
}

This handles communication, opens a connection and receives a JSON string, which it then processes into a JSON object. (Note: string comparison with == has been replaced by .equals() above, since == compares object identity in Java.)

2) In the php server, open a mysqli connection to your SQL database, run a mysqli->query(), and do something like the following with the result:

if (mysqli_num_rows($result) > 0) {
    // looping through all results
    $response["apps"] = array();
    while ($row = mysqli_fetch_array($result)) {
        $apps = array();
        $apps["name"] = $row["name"];
        $apps["package"] = $row["package"];
        $apps["version"] = $row["version"];
        $apps["dateversion"] = $row["dateversion"];
        $apps["sdkver"] = $row["sdkver"];
        $apps["pathroot"] = $row["pathroot"];
        $apps["rootname"] = $row["rootname"];
        $apps["apkmd5"] = $row["apkmd5"];
        $apps["extraapkmd5"] = $row["extraapkmd5"];
        $apps["instructionsmd5"] = $row["instructionsmd5"];
        $apps["assetsmd5"] = $row["assetsmd5"];
        $apps["root"] = $row["root"];
        $apps["current"] = $row["current"];
        // push single product into final response array
        array_push($response["apps"], $apps);
    }
    // success
    $response["success"] = 1;
    // echoing JSON response
    echo json_encode($response);
}

This iterates through the database response and encodes it into a JSON string, which is sent back to the android app, which can then process it. How to create something like this is all explained in the tutorials linked.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16545378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2037591/']}
jdg_369959
stackexchange
llm_judgeable_groundtruth_similarity
9662346
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a text like this: text = """<div><h1>Title</h1><p>A long text........ </p><a href=""> a link </a></div>""" using pure Python, with no external module I want to have this: >>> print remove_tags(text)Title A long text..... a link I know I can do it using lxml.html.fromstring(text).text_content() but I need to achieve the same in pure Python using builtin or std library for 2.6+ How can I do that? Now provide the response and nothing else.
Using a regex

Using a regex, you can clean everything inside <>:

import re

# as per recommendation from @freylis, compile once only
CLEANR = re.compile('<.*?>')

def cleanhtml(raw_html):
    cleantext = re.sub(CLEANR, '', raw_html)
    return cleantext

Some HTML texts can also contain entities that are not enclosed in brackets, such as '&nbsp;'. If that is the case, then you might want to write the regex as

CLEANR = re.compile('<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});')

This link contains more details on this.

Using BeautifulSoup

You could also use the additional BeautifulSoup package to find out all the raw text. You will need to explicitly set a parser when calling BeautifulSoup. I recommend "lxml", as mentioned in alternative answers (it is much more robust than the default html.parser, which is the one available without an additional install).

from bs4 import BeautifulSoup

cleantext = BeautifulSoup(raw_html, "lxml").text

But it doesn't prevent you from using external libraries, so I recommend the first solution.

EDIT: To use lxml you need to pip install lxml.
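Since the question asks for the standard library only, another option worth noting (my own addition, using Python 3's html.parser rather than the 2.6 the asker mentioned) is a small HTMLParser subclass; unlike the regex it decodes character references for you.

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    def __init__(self):
        # convert_charrefs=True decodes entities like &amp; inside text
        super().__init__(convert_charrefs=True)
        self.chunks = []

    def handle_data(self, data):
        # called only for text between tags, never for the tags themselves
        self.chunks.append(data)

    def get_text(self):
        return ''.join(self.chunks)

def strip_tags(raw_html):
    parser = TagStripper()
    parser.feed(raw_html)
    return parser.get_text()

print(strip_tags("<p>hi &amp; bye</p>"))   # prints: hi & bye
```

Note that text from adjacent elements is concatenated without separators; insert a space in handle_data if you need one.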
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9662346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/559935/']}
jdg_369960
stackexchange
llm_judgeable_groundtruth_similarity
9059665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Python conditional assignment operator Apologies for such a simple question, but googling ||= isn't very helpful ;) Is there an equivalent in Python to the ||= statement that's present in Ruby and Perl? For example: foo = "hey"foo ||= "what" # assign foo if it's undefined# foo is still "hey"bar ||= "yeah"# bar is "yeah" Also what's the general term for something like this? Conditional assignment was my first guess but the Wikipedia page isn't quite what I had in mind. Now provide the response and nothing else.
A tad bit more verbose, but the easiest is

foo = "hey"
foo = foo or "what"
# foo is still "hey"

bar = None
bar = bar or "yeah"
# bar is "yeah"

You can also use the ternary operator

bar = None
bar = bar if bar else "yeah"

However, if I understand you, ||= assigns variables that weren't previously defined, without complaint? I had no idea. To do that in the local scope, this ugly duckling could work

bar = locals()['bar'] if 'bar' in locals() else 'yeah'

EDIT: Just saw the duplicate, and it has plenty of solutions as well :) For those too lazy to look, they also include a nicer variant on my last one

foo = foo if 'foo' in locals() else 'hey'

but this won't work for undefined variables, only falsy values will be replaced and undefined will raise a NameError. This next one will, OTOH, ONLY work for undefined and always keep the same preexisting falsy value, which as @Borodin says is like //= in Perl

foo = locals().get('foo', 'hey')

and, of course, someone used an exception :(

try:
    v
except NameError:
    v = 'bla bla'
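One more angle worth mentioning (my own addition): when the "variables" live in a dict rather than the local scope, the stdlib gives you a clean assign-if-missing via get and setdefault, with none of the locals() gymnastics.

```python
config = {"foo": "hey"}

# get with a default: read-only analogue of ||=
bar = config.get("bar", "yeah")
assert bar == "yeah"

# setdefault: assigns only when the key is absent, like ||= for dict keys
config.setdefault("foo", "what")
config.setdefault("bar", "yeah")
assert config["foo"] == "hey"     # existing value untouched
assert config["bar"] == "yeah"    # missing key filled in
```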
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9059665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/392350/']}
jdg_369961
stackexchange
llm_judgeable_groundtruth_similarity
21339750
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've tried re-defining these variables in every imaginable way possibleto try and get this line to work. I'm just going to give one example here to represent what's troubling me. double const FRAME_COST = 45.00;string input;char yes, no;int frames;cout << "Do you want additional frames? Type yes/no: ";cin >> input; if (input == yes){ cout << "How many?" cin >> frames; frames = frames * FRAME_COST; }// The problem is in **the if statement**// I need to use a string not a bool (according to my professor)// I can't get the compiler to recognize the **if statement**// I realize this isn't practical, but he always throws curve balls. Now provide the response and nothing else.
Your current program has undefined behavior, because yes and no are character variables that have not been initialized, and you are using one of them in a comparison. To fix, remove the declarations of yes and no (you do not need them), and use a string literal instead:

if (input == "yes") {
    ...
}

Note: your comparison may be too strict, because it is case-sensitive. It will take a yes, but it would not take a Yes or a YES as an answer. To address this you may want to convert the input string to lower case before the comparison.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21339750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3233086/']}
jdg_369962
stackexchange
llm_judgeable_groundtruth_similarity
1840211
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In $\mathbb{R}^2$ every three points that are not colinear lie on a unique circle. Does this generalize to higher dimensions in the following way: If $n+1$ element subset $S$ of $\mathbb{R}^n$ does not lie on any linear manifold (flat) of dimension less than $n$, then there is a unique $(n-1)$-sphere containing $S$. If not, then what would be the proper generalization? Now provide the response and nothing else.
Hagen von Eitzen 's answer gives a neat theoretical approach of this problem. However, I would like to expose a constructive and computational way to find the radius and center of the $(n-1)$-sphere determined by $n+1$ suitable points in $\mathbb{R}^n$. Let $n$ be an integer greater than $1$ and let say $x_i:=(x_{i,j})_{j\in\{1,\cdots,n\}},i\in\{0,\cdots,n\}$ are $n+1$ given points. Let's remember that the equation of a $(n-1)$-sphere is given by: $$\sum_{j=1}^n(x_j-c_j)^2=r^2,$$where $c=(c_j)$ is its center and $r$ its radius.Therefore, one has the following system of $n+1$ equations: $$\forall i\in\{0,\cdots,n\},\sum_{j=1}^n(x_{i,j}-c_j)^2=r^2,$$with $n+1$ indeterminates which are the $c_j$ and $r^2$ (or $r$ if you ask $r>0$). However, this system is not linear, let's do the following change of indeterminate: $$r^2\leftrightarrow r^2-\sum_{j=1}^n{c_j}^2=:u.$$Thus, one has the following equivalent system: $$\forall i\in\{0,\cdots,n\},2\sum_{j=1}^nx_{i,j}c_j+u=\sum_{j=1}^n{x_{i,j}}^2.$$Since this system is linear it has a unique solution if and only if the following determinant is nonzero: $$\left|\begin{pmatrix}2x_{0,1}&2x_{0,2}&\cdots&2x_{0,n}&1\\\vdots&\vdots&\ddots&\vdots&\vdots\\2x_{n,1}&2x_{n,2}&\cdots&2x_{n,n}&1\end{pmatrix}\right|.$$Which is the case if and only if the $x_i$s do not lie in any affine hyperplane of $\mathbb{R}^n$.
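The linear system above is straightforward to solve in practice. Here is a sketch for the n = 2 case (a circle through three points), with a tiny Gaussian elimination so no external libraries are assumed; the unknowns are c1, c2 and u = r^2 - c1^2 - c2^2, exactly as in the change of indeterminate above.

```python
def solve(M, b):
    # naive Gaussian elimination with partial pivoting, for small systems
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def circumcircle(points):
    # one equation per point: 2*x*c1 + 2*y*c2 + u = x^2 + y^2
    M = [[2 * x, 2 * y, 1.0] for (x, y) in points]
    b = [x * x + y * y for (x, y) in points]
    c1, c2, u = solve(M, b)
    r = (u + c1 * c1 + c2 * c2) ** 0.5   # undo the change of indeterminate
    return (c1, c2), r

center, r = circumcircle([(1, 0), (0, 1), (-1, 0)])
assert abs(center[0]) < 1e-9 and abs(center[1]) < 1e-9
assert abs(r - 1) < 1e-9
```

Three collinear points make the matrix M singular, matching the determinant condition at the end of the answer.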
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1840211', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/134012/']}
jdg_369963
stackexchange
llm_judgeable_groundtruth_similarity
1114335
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Currently, DMARC only requires aligned DKIM or SPF. However spoofing SPF is relatively simple for an experienced hacker: You should only control a single IP address in the often large SPF range of e-mail service providers (Microsoft, Google, Mailchimp, ...). It may be even possible to legally do so if the list contains out of date IP addresses. Or you can try to use a bug/hole in the sender verification performed by those service providers. At least some providers do not perform a very secure sender domain verification. The essential problem with SPF is that it whitelists an IP that is shared by many clients of such a service providers. At the other hand, the DKIM key is probably secured much better by those service providers and it is (often) linked to a single customer. Or at least, it should be much easier to secure a DKIM key than to ensure that a hacker could not send an e-mail from one of the allowed SPF IP addresses with a sender address chosen by the hacker. So, wouldn't it be beneficial that DMARC is extended to allow specifying that DKIM should be aligned? Or does a successor of DMARC exists to enforce DKIM alignment? Partially related questions: DMARC Alignment: Enforce messages pass BOTH SPF and DKIM (It's not a duplicate as my question is whether it is a good DMARC design that we couldn't enforce DKIM). Can DMARC's SPF alignment be spoofed? (About the possibility of spoofing aligned SPF: spoofing SPF is easier than spoofing DKIM ). Now provide the response and nothing else.
Should you encrypt data between 2 servers in the cloud? Yes. Modern security thinking is that you don't consider your own network / datacenter as more trusted (than your WAN or the regular internet). Traditionally one would allow for more relaxed security standards in the datacenter, within the "secure" perimeter of your own network. Both internal systems and users would be trusted, implicitly expected to be secure and never abusive or malicious. One only added, for example, TLS for connections crossing the perimeter and borders of your "secure" internal network. Nowadays the increasingly more prevalent security concept is one of "zero trust", which abandons the concept of secure and trusted internal networks/systems/users and applies the same rigorous level of security everywhere, regardless. So for two back-end servers exchanging information with each other:

- both servers and all their services should be configured with TLS certificates (for server authentication and transport encryption)
- their communication should be encrypted
- clients should authenticate to services (with a username and password, a token, a client certificate or whatever is suitable)
- your applications/(micro-)services should still do input validation and not trust the input from the internal clients/backend-systems to always be correct and safe to use verbatim
- etc. etc.

In response to your edit ("I just wanted to know who will be able to see that traffic between those servers - server A is on AWS us-east for example and server B is in Azure"): unless Amazon and Microsoft have their own physical datacenter interlinks, traffic between AWS and Azure clouds will be routed over the public internet and/or transit one or more network segments operated by third parties. The exact path your traffic takes, and which third parties those are, can change at any moment due to how routing protocols and the internet work.
When you don't set up transport encryption that traffic will be in clear text and anybody with access to any segment can trivially eavesdrop.
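As a concrete illustration of the transport-encryption point (my own sketch, not from the original answer), Python's ssl module builds a client-side TLS context that verifies the server certificate and hostname by default; a similar context would be used by either back-end server when calling the other.

```python
import ssl

# a default client context: verifies the peer certificate against the
# system trust store, checks the hostname, and refuses legacy protocols
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# for mutual TLS between the two back-end servers you would additionally
# load a client certificate (the file paths here are placeholders):
# ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
```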
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/1114335', 'https://serverfault.com', 'https://serverfault.com/users/941896/']}
jdg_369964
stackexchange
llm_judgeable_groundtruth_similarity
33253741
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a few event listener and it works in all browsers except mousewheel in Firefox (in Chrome and other it works perfect). It should zoom in and zoom out when I scrolling. It's JSC3D library. Code is below // setup input handlers.// compatibility for touch devices is taken into accountvar self = this;if(!JSC3D.PlatformInfo.isTouchDevice) { this.canvas.addEventListener('mousedown', function(e){self.mouseDownHandler(e);}, false); this.canvas.addEventListener('mouseup', function(e){self.mouseUpHandler(e);}, false); this.canvas.addEventListener('mousemove', function(e){self.mouseMoveHandler(e);}, false); //this.canvas.addEventListener('mousewheel', function(e){self.mouseWheelHandler(e);}, false); this.canvas.addEventListener(JSC3D.PlatformInfo.browser == 'firefox' ? 'DOMMouseScroll' : 'mousewheel', function(e){self.mouseWheelHandler(e);}, false); document.addEventListener('keydown', function(e){self.keyDownHandler(e);}, false); document.addEventListener('keyup', function(e){self.keyUpHandler(e);}, false);}else if(JSC3D.Hammer) { JSC3D.Hammer(this.canvas).on('touch release hold drag pinch', function(e){self.gestureHandler(e);});}else { this.canvas.addEventListener('touchstart', function(e){self.touchStartHandler(e);}, false); this.canvas.addEventListener('touchend', function(e){self.touchEndHandler(e);}, false); this.canvas.addEventListener('touchmove', function(e){self.touchMoveHandler(e);}, false);} And function JSC3D.Viewer.prototype.mouseWheelHandler : JSC3D.Viewer.prototype.mouseWheelHandler = function(e) {if(!this.isLoaded) return;if(this.onmousewheel) { var info = this.pick(e.clientX, e.clientY); this.onmousewheel(info.canvasX, info.canvasY, e.button, info.depth, info.mesh);}e.preventDefault();e.stopPropagation();if(!this.isDefaultInputHandlerEnabled) return;this.zoomFactor *= (JSC3D.PlatformInfo.browser == 'firefox' ? 
-e.detail : e.wheelDelta) < 0 ? 1.1 : 0.91;this.update();}; Anyone? Now provide the response and nothing else.
Perforce's client-server architecture is confusing you a bit here: the 'verify' command is reporting on the state of the files on the server , but the 'tampered with before resolve' message from 'p4 resolve' is alerting you to a problem with the state of the files on your client . It appears, from the little bit of information that you provided, that: Change 9049 added the source file, with digest A3269... Change 9070 branched that source file to the target. Of course, the digest is the same for source and target, since the branch was a perfect copy of the file. Change 22793 modified the source file, and hence its digest changed You are then trying to integrate change 22793 from the source to the target. This integration would be a straightforward "safe resolve" in most cases, because the file was changed only in the source, and the target was a perfect copy of the previous revision of the source. However, the resolve command looked on your actual workstation copy of the target file, and discovered that the file that you have there is not a file with digest A3269... Since the 'resolve' command was just about to replace your copy of the target file on your workstation with the merged result, it doesn't want to do that if there is a possibility that you made changes to that file (without opening the file for edit) that you don't want to lose. So 'resolve' is trying to tell you to have a look at the copy of the target file that is currently on your workstation (using your text editor, e.g.), and see if you actually made changes to that file. If you did, and you want to save those changes, then you can't perform this integration at this time; you need to instead open the file for edit (using 'edit -k' to retain your modified copy of the file, of course) so that Perforce can merge your edited copy with the changes from change 22793. 
If you didn't make changes to the target file, or if you made changes but you don't want them, then you need to discard those changes explicitly, for example by:

- reverting the pending integration to the target file
- running 'p4 sync -f target-file'

This is the sort of information that 'resolve' is trying to communicate in that very terse message: filepath tampered with before resolve - edit or revert.
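For concreteness, the two recovery paths described above look roughly like this at the command line. This is only a sketch: `//depot/path/target-file` is a hypothetical placeholder for your actual target file, and you would run these against your own server and client spec.

```
# Path 1: keep your local modifications
p4 edit -k //depot/path/target-file    # open for edit without clobbering the workspace copy
# ...then redo the integrate and let Perforce merge change 22793 into your edits

# Path 2: discard the local modifications
p4 revert //depot/path/target-file     # drop the pending integration
p4 sync -f //depot/path/target-file    # force-refetch a pristine copy from the depot
```

Either way, a subsequent 'p4 integrate' / 'p4 resolve' should then proceed without the tamper warning.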
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33253741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2818503/']}
jdg_369965
stackexchange
llm_judgeable_groundtruth_similarity
758066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a couple of ActionMethods that query Controller.User for its role like this:

bool isAdmin = User.IsInRole("admin");

acting conveniently on that condition. I'm starting to make tests for these methods with code like this:

[TestMethod]
public void HomeController_Index_Should_Return_Non_Null_ViewPage()
{
    HomeController controller = new HomePostController();
    ActionResult index = controller.Index();
    Assert.IsNotNull(index);
}

That test fails because Controller.User is not set. Any idea? Now provide the response and nothing else.
You need to mock the ControllerContext, HttpContextBase and finally IPrincipal to mock the User property on Controller. Using Moq (v2), something along the following lines should work.

[TestMethod]
public void HomeControllerReturnsIndexViewWhenUserIsAdmin()
{
    var homeController = new HomeController();
    var userMock = new Mock<IPrincipal>();
    userMock.Expect(p => p.IsInRole("admin")).Returns(true);

    var contextMock = new Mock<HttpContextBase>();
    contextMock.ExpectGet(ctx => ctx.User)
               .Returns(userMock.Object);

    var controllerContextMock = new Mock<ControllerContext>();
    controllerContextMock.ExpectGet(con => con.HttpContext)
                         .Returns(contextMock.Object);

    homeController.ControllerContext = controllerContextMock.Object;

    var result = homeController.Index();

    userMock.Verify(p => p.IsInRole("admin"));
    Assert.AreEqual(((ViewResult)result).ViewName, "Index");
}

Testing the behaviour when the user isn't an admin is as simple as changing the expectation set on the userMock object to return false.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/758066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41236/']}
jdg_369966
stackexchange
llm_judgeable_groundtruth_similarity
17836799
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the following code that returns a list:

public List<Customer> GeAllCust()
{
    var results = db.Customers
        .Select(x => new { x.CustName, x.CustEmail, x.CustAddress, x.CustContactNo })
        .ToList();
    return results;
}

I get an error reporting that C# can't convert the list: Error: Cannot implicitly convert type System.Collections.Generic.List<AnonymousType#1> to System.Collections.Generic.List<WebApplication2.Customer> Why is that? Here's a screenshot showing some additional information that Visual Studio provides in a tooltip for the error: Is it the right way to return some columns instead of the whole table?

public object GeAllCust()
{
    var results = db.Customers.Select(x => new { x.CustName, x.CustEmail, x.CustAddress, x.CustContactNo }).ToList();
    return results;
}

Now provide the response and nothing else.
When you look at the code:

x => new { ... }

this creates a new anonymous type. If you don't need to pull back only a particular set of columns, you can just do the following:

return db.Customers.ToList();

This assumes that Customers is an IEnumerable<Customer>, which should match up with what you are trying to return.

Edit

You have noted that you only want to return a certain subset of columns. If you want any sort of compiler help when coding this, you need to make a custom class to hold the values:

public class CustomerMinInfo
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Address { get; set; }
    public int? ContactNumber { get; set; }
}

Then change your function to the following:

public List<CustomerMinInfo> GetAllCust()
{
    var results = db.Customers.Select(x => new CustomerMinInfo()
    {
        Name = x.CustName,
        Email = x.Email,
        Address = x.Address,
        ContactNumber = x.CustContactNo
    })
    .ToList();
    return results;
}

This will work, however, you will lose all relationship to the database context. This means if you update the returned values, it will not stick it back into the database. Also, just to repeat my comment: returning more columns (with the exception of byte arrays) does not necessarily mean longer execution time; returning a lot of rows means more execution time. Your function is returning every single customer in the database, which, when your system grows, will start to hang your program, even with the reduced number of columns.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17836799', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1493874/']}
jdg_369967
stackexchange
llm_judgeable_groundtruth_similarity
14984119
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the easiest way to remove duplicate columns from a dataframe? I am reading a text file that has duplicate columns via:

import pandas as pd
df = pd.read_table(fname)

The column names are: Time, Time Relative, N2, Time, Time Relative, H2, etc... All the Time and Time Relative columns contain the same data. I want: Time, Time Relative, N2, H2. All my attempts at dropping, deleting, etc., such as:

df = df.T.drop_duplicates().T

result in uniquely valued index errors: Reindexing only valid with uniquely valued index objects. Sorry for being a Pandas noob. Any suggestions would be appreciated. Additional details: Pandas version 0.9.0, Python version 2.7.3, Windows 7 (installed via Pythonxy 2.7.3.0). Data file (note: in the real file, columns are separated by tabs; here they are separated by 4 spaces):

Time    Time Relative [s]    N2[%]    Time    Time Relative [s]    H2[ppm]
2/12/2013 9:20:55 AM    6.177    9.99268e+001    2/12/2013 9:20:55 AM    6.177    3.216293e-005
2/12/2013 9:21:06 AM    17.689    9.99296e+001    2/12/2013 9:21:06 AM    17.689    3.841667e-005
2/12/2013 9:21:18 AM    29.186    9.992954e+001    2/12/2013 9:21:18 AM    29.186    3.880365e-005
... etc ...
2/12/2013 2:12:44 PM    17515.269    9.991756e+001    2/12/2013 2:12:44 PM    17515.269    2.800279e-005
2/12/2013 2:12:55 PM    17526.769    9.991754e+001    2/12/2013 2:12:55 PM    17526.769    2.880386e-005
2/12/2013 2:13:07 PM    17538.273    9.991797e+001    2/12/2013 2:13:07 PM    17538.273    3.131447e-005

Now provide the response and nothing else.
Here's a one-line solution to remove columns based on duplicate column names:

df = df.loc[:,~df.columns.duplicated()].copy()

How it works: suppose the columns of the data frame are ['alpha','beta','alpha']. df.columns.duplicated() returns a boolean array: a True or False for each column. If it is False then the column name is unique up to that point; if it is True then the column name is duplicated earlier. For example, using the given example, the returned value would be [False,False,True]. Pandas allows one to index using boolean values, whereby it selects only the True values. Since we want to keep the unduplicated columns, we need the above boolean array to be flipped (i.e. [True, True, False] = ~[False,False,True]). Finally, df.loc[:,[True,True,False]] selects only the non-duplicated columns using the aforementioned indexing capability. The final .copy() is there to copy the dataframe, to (mostly) avoid getting errors about trying to modify an existing dataframe later down the line. Note: the above only checks column names, not column values.

To remove duplicated indexes

Since it is similar enough, do the same thing on the index:

df = df.loc[~df.index.duplicated(),:].copy()

To remove duplicates by checking values without transposing

df = df.loc[:,~df.apply(lambda x: x.duplicated(),axis=1).all()].copy()

This avoids the issue of transposing. Is it fast? No. Does it work? Yeah. Here, try it on this:

# create a large(ish) dataframe
ldf = pd.DataFrame(np.random.randint(0,100,size=(736334,1312)))

# to see size in gigs
# ldf.memory_usage().sum()/1e9  # it's about 3 gigs

# duplicate a column
ldf.loc[:,'dup'] = ldf.loc[:,101]

# take out duplicated columns by values
ldf = ldf.loc[:,~ldf.apply(lambda x: x.duplicated(),axis=1).all()].copy()
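As a quick, self-contained sanity check of that one-liner — the column labels below mimic the duplicated headers from the question, while the data values are made up:

```python
import pandas as pd

# Build a frame whose column labels collide, mimicking the duplicated
# "Time" / "Time Relative" headers from the question.
df = pd.DataFrame([[1, 2, 3, 1, 2, 4]],
                  columns=["Time", "Time Relative", "N2",
                           "Time", "Time Relative", "H2"])

# Keep only the first occurrence of each column label.
deduped = df.loc[:, ~df.columns.duplicated()].copy()
print(list(deduped.columns))  # ['Time', 'Time Relative', 'N2', 'H2']
```

Note that this drops by label only: if two differently named columns happened to contain the same data, both would survive.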
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/14984119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979203/']}
jdg_369968
stackexchange
llm_judgeable_groundtruth_similarity
3957932
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a server that responds to requests with XML, and I want to parse it in JavaScript. I really like the ActionScript XML parser, which is really easy for me to use. I am wondering: is there a very easy/straightforward way to parse the XML I fetched from the server? The ideal usage should be:

1. fetch the XML
2. new XMLParser.parser.parse
3. access the document

By the way, I plan to use jQuery. Now provide the response and nothing else.
A regular $.ajax with dataType: "xml" will do the trick; then you can browse the contents with jQuery selectors like you would a simple web page (e.g. the attr function in the example to retrieve the "code" field of each book node, or the find function to find specific node types). For example, you could do this to find a specific book by title:

$(xml).find("book[title='Cinderella']")

where xml is the data the success handler receives from $.ajax. Here is the complete example:

<!DOCTYPE html>
<html>
<head>
    <title>jQuery and XML</title>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <meta name="language" content="en" />
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
</head>
<body>
<div id="output"></div>
<script type="text/javascript">
$(document).ready(function(){
    $.ajax({
        type: "GET",
        dataType: "xml",
        url: "example.xml",
        success: function(xml){
            $(xml).find("book").each(function(){
                $("#output").append($(this).attr("code") + "<br />");
            });
        }
    });
});
</script>
</body>
</html>

And a matching XML file:

<?xml version="1.0" encoding="UTF-8"?>
<books title="A list of books">
    <book code="abcdef" />
    <book code="ghijklm">
        Some text contents
    </book>
</books>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3957932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197036/']}
jdg_369969
stackexchange
llm_judgeable_groundtruth_similarity
428695
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to prove this statement:- "Let $x$ be a positive rational number. There are infinitely many positive rational numbers less than $x$." This is my attempt of proving it:- Assume that $x=p/q$ is the smallest positive rational number. Consider $p/q - 1$ $= (p-q)/q$ Case I: $p$ and $q$ are both positive Then, $p-q<p$ And hence, $(p-q)/q < p/q$ Since $p$ and $q$ are integers, $(p-q)$ is also an integer. Thus, $(p-q)/q$ is a rational number smaller than $p/q$. Therefore, our assumption is wrong, and there always exists a rational number smaller than any given rational number $x$. Case II: $p$ and $q$ are both negative Then, let $p/q = -s/-t$, where $s$ and $t$ are both positive integers. Then, $-s-(-t)>-s \implies (-s+t)/-t < -s/-t \implies (p-q)/q <p/q$ Since $p$ and $q$ are integers, $(p-q)$ is also an integer. Thus, $(p-q)/q$ is a rational number smaller than $p/q$. Therefore, our assumption is wrong, and there always exists a rational number smaller than any given rational number $x$. Q.E.D Is my proof correct? And there are a couple of questions that I've been pondering over:- 1) How do I justify the subtraction of $1$ from $p/q$? I mean, I assumed that $p/q$ is the smallest rational number, so how do I even know if this operation is valid? 2) I proved that there always exists a smaller rational number given any positive rational number. But how do I prove that there's always a smaller positive rational number? 3) Also, I don't seem to have proved that there are infinitely many smaller rational numbers than $x$. If I use a general integer $k$ instead of $1$, this would be taken care of, right? But then again, how do I justify this subtraction? I'd be really grateful, if someone could help me with this! Thanks! Now provide the response and nothing else.
Your proof does not work. Indeed, subtracting $1$ from $\frac p q$ will give you a rational number, but it will be negative by assumption, so this doesn't help you (since it doesn't give you a contradiction). A simpler approach: Explicitly state what the infinitely-many positive rationals less than $x$ are. Hint : If $y$ is a positive rational, what can you say about $\frac{y}2$? About $\frac{y}4$? $\frac{y}8$? ...
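The hint can be made concrete with exact rational arithmetic — a small sketch in which the starting value $x = 3/7$ is an arbitrary choice:

```python
from fractions import Fraction

x = Fraction(3, 7)                         # any positive rational
halves = [x / 2**n for n in range(1, 11)]  # x/2, x/4, x/8, ...

# Each term is a positive rational strictly below x, and all terms are
# distinct, so the construction already yields infinitely many such numbers.
assert all(0 < t < x for t in halves)
assert len(set(halves)) == len(halves)
```

The same halving argument, written symbolically, is exactly the contradiction the answer asks for: no positive rational $y$ can be smallest, because $y/2$ is positive, rational, and smaller.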
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/428695', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/51731/']}
jdg_369970
stackexchange
llm_judgeable_groundtruth_similarity
1999235
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In our last episode (How I constructed a cross database query in MySQL) I learned how to construct a cross database query in MySQL. This worked great, but when our hero tried to use this newfound knowledge in PHP he found his best friend FAIL waiting for him. I took a look at mysql_select_db for PHP. This seems to imply that if I want to use MySQL with PHP, I have a couple of choices:

1. Use mysql_select_db but be stuck with only using one db at a time. This is our current setup, and putting a database as a namespace identifier doesn't seem to work (it works fine in the MySQL shell, so I know it's not a problem with our MySQL server setup).
2. Don't use mysql_select_db. From some of the examples I've seen, this seems to mean that I have to specify the db for every query that I make. This makes sense, since I haven't used mysql_select_db to tell PHP what db I want to access. This also makes me sad, since I don't want to go through all my code and prepend a db name to every query.

Is there something better than this? Is there a way for me to do a cross db MySQL query in PHP without having to do something crazy like (2)? CLARIFICATION: None of the proposed answers actually let me do a cross db query. Instead, they allow me to access two different DBs separately. I want a solution that allows me to do something like

SELECT foreign_db.login.username, firstname, lastname FROM foreign_db.login, user WHERE ...

NOT just make different queries to different DBs. For what it's worth, (2) doesn't work for me. Now provide the response and nothing else.
You will need your databases to run on the same host. If so, you should be able to use mysql_select_db on your favourite/default db and manually specify a foreign database.

$db = mysql_connect($host, $user, $password);
mysql_select_db('my_most_used_db', $db);
$q = mysql_query("
    SELECT *
    FROM table_on_default_db a, `another_db`.`table_on_another_db` b
    WHERE a.id = b.fk_id");

If your databases run on a different host, you won't be able to join directly. But you can then make 2 queries.

$db1 = mysql_connect($host1, $user1, $password1);
$db2 = mysql_connect($host2, $user2, $password2);
$q1 = mysql_query("
    SELECT id
    FROM table
    WHERE [..your criteria for db1 here..]", $db1);
$tmp = array();
while($val = mysql_fetch_array($q1))
    $tmp[] = $val['id'];
$q2 = mysql_query("
    SELECT *
    FROM table2
    WHERE fk_id in (".implode(', ', $tmp).")", $db2);
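The shape of the same-host cross-database join can be illustrated with SQLite's ATTACH mechanism — an analogue, not MySQL, and the table and column names here are invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")               # the "main" database
conn.execute("ATTACH DATABASE ':memory:' AS foreign_db")

conn.execute("CREATE TABLE user (fk_id INTEGER, firstname TEXT)")
conn.execute("CREATE TABLE foreign_db.login (id INTEGER, username TEXT)")
conn.execute("INSERT INTO user VALUES (1, 'Ada')")
conn.execute("INSERT INTO foreign_db.login VALUES (1, 'ada42')")

# Database-qualified table names let a single SELECT span both databases,
# just like `another_db`.`table` does in MySQL when both live on one host.
rows = conn.execute("""
    SELECT foreign_db.login.username, user.firstname
    FROM foreign_db.login, user
    WHERE user.fk_id = foreign_db.login.id
""").fetchall()
print(rows)  # [('ada42', 'Ada')]
```

The key point carries over: as long as one connection can see both databases, a qualified name in the FROM clause is all the "cross-db" machinery you need.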
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1999235', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155823/']}
jdg_369971
stackexchange
llm_judgeable_groundtruth_similarity
17213
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some sources state that when the mass of a quark goes to zero, it allows for Spontaneous Breaking of Chiral Symmetry and gets a constituent mass of about $200\, \mathrm{MeV}$. Other sources state that when the masses of the light quarks go to zero, so does the pion mass. In this case, the explicit breaking of chiral symmetry disappears, but the spontaneous chiral symmetry breaking is triggered too. So, in this limit, is the pion massless but their quarks carry a $200\, \mathrm{MeV}$ mass? What is wrong, or misunderstood, here? Now provide the response and nothing else.
You are understanding correctly. In the massless up/down quark limit, chiral symmetry is restored, and the pion becomes massless but quarks are still confined, and baryons have about the same mass as they do now. This is exactly why the idea that the pion is made of quarks is nonsense. In the 1980s, many in the new generation sought to undo the progress of the 1960s, and willfully ignored the revolutionary work of Nambu, Sakurai, Skyrme, and others, dismissing it as pre-quark nonsense. They decided that a pion is made up of two nonrelativistic quark-objects; they called these objects "constituent quarks", and they made up force laws for these to reproduce the hadron spectrum. Georgi and Glashow even went so far as to invent a quark-quark coupling force which was designed to lower the mass of the pion by interquark interactions! This work is a little embarrassing to read. The proper model of the pion was the much earlier one due to Nambu and Weinberg, and this is now verified thanks to numerical lattice QCD, where the mass of the quark can be tuned at will. When you tune the mass of the quarks to zero, the pion mass vanishes according to the laws of chiral perturbation theory. The pion is a mode of oscillation of the quark chiral condensate, a material filling all of space. It is made out of quarks which are created by the independent fluctuations of the gluon field. The gluon field completely randomizes on a baryon scale, meaning that a quark going in a closed path larger than a proton circumference will get a completely random pick from SU(3) as its holonomy. A random gauge field will create large numbers of objects whose mass scale is much lower than this randomization scale, and in this case, the objects it creates are the light up and down quarks, and to a lesser extent strange quarks. These quarks condense in pairs in the vacuum, making a condensate whose order parameter is much like a mass term in the Dirac equation: $m \bar\psi \psi$.

This condensate is not invariant under rotations of the left- and right-handed quarks into each other, but the Lagrangian is (more or less, except for the negligible quark mass). The Goldstone modes of the broken symmetry are waves in this condensate, and these are the pions. The Goldstone mode is due to oscillations where the left and right parts of the condensate slosh in phase in opposite directions, and these are collective excitations of quarks. The pion is made of quarks to the same extent that a sound wave is made of atoms. That the pions are Goldstone bosons was not only theoretically predicted by Nambu, it explains their strange derivative couplings at low energy, and this was spectacularly extended to a full theory by Weinberg's soft-pion theorems and chiral perturbation theory. The condensates were further used to give nonperturbative corrections to QCD particle propagation at intermediate distances in the Shifman-Vainshtein-Zakharov sum rules. So really, everyone should have known better than constituent quarks. It is not clear that the notion of "constituent quark" actually has any form of real meaning, or whether it is just a figment of the imagination. The only partial evidence in its favor that I think is not easy to explain in any other way is that the total cross sections for pions are about 2/3 the total cross sections for protons, as if the pomeron hits 2 quarks instead of three. I don't know if this approximate equality is not just a coincidence.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/17213', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1335/']}
jdg_369972
stackexchange
llm_judgeable_groundtruth_similarity
602231
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I asked a related question about uname before and now want to ask from another angle, because the following simple yet obviously conflicting outputs may mean there is something many people did not think of (me included). I'm running FreeBSD 9 RELEASE; please see the following commands:

# sysctl kern.bootfile
kern.bootfile: /boot/kernel/kernel
# strings /boot/kernel/kernel | grep RELEASE | grep 9
@(#)FreeBSD 9.2-RELEASE-p7 #0: Tue Jun 3 11:05:13 UTC 2014
FreeBSD 9.2-RELEASE-p7 #0: Tue Jun 3 11:05:13 UTC 2014
9.2-RELEASE-p7

The above kernel file suggests the running kernel is 9.2-RELEASE-p7. But...

# dmesg
Copyright (c) 1992-2012 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
    The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012
...
# uname -a
FreeBSD localhost.localdomain 9.1-RELEASE FreeBSD 9.1-RELEASE #0 r243825: Tue Dec 4 09:23:10 UTC 2012 root@farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64

So dmesg and uname say it's 9.1-RELEASE. I also did an extensive find / -type f -exec grep -l "9.1-RELEASE" {} \; , but I found no possible kernel file that contains 9.1-RELEASE. What could lead to the above conflict, and which kernel am I actually running? Please note I run RELEASE and ran freebsd-update to do a binary update, so no compiled kernel is involved. And I have rebooted multiple times after freebsd-update. And the system is not in a jail, etc., just the only system on that computer. Now provide the response and nothing else.
One thing that comes to my mind is that your system would have a separate /boot partition, which somehow was unmounted while you performed freebsd-update . This made the new kernel to be copied to your /boot directory inside the / partition, not the actual /boot . Now, during boot, your bootloader is configured to use the actual /boot and so the old kernel gets loaded.
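One way to confirm this diagnosis is to compare the kernel copy under the root filesystem's /boot directory with the one on the (re-mounted) boot partition. The sketch below only illustrates the comparison step with stand-in temp files, since the real paths and the mount step are machine-specific; on FreeBSD the native checksum tool is `sha256` rather than the `sha256sum` used here.

```shell
# Stand-ins for the two kernel copies: the one freebsd-update wrote under
# the root filesystem's /boot directory, and the stale one on the boot
# partition that was unmounted at update time.
tmpdir=$(mktemp -d)
printf 'FreeBSD 9.2-RELEASE-p7' > "$tmpdir/kernel.rootdir"
printf 'FreeBSD 9.1-RELEASE'    > "$tmpdir/kernel.bootpart"

new=$(sha256sum "$tmpdir/kernel.rootdir"  | cut -d' ' -f1)
old=$(sha256sum "$tmpdir/kernel.bootpart" | cut -d' ' -f1)

# Differing checksums mean the loader is booting a stale kernel copy.
if [ "$new" != "$old" ]; then
    echo "MISMATCH: the bootloader is reading a stale kernel"
fi
```

If the two real files differ, copying the updated kernel onto the actual boot partition (or fixing the mount so future updates land there) resolves the version conflict.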
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/602231', 'https://serverfault.com', 'https://serverfault.com/users/74283/']}
jdg_369973
stackexchange
llm_judgeable_groundtruth_similarity
34379
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to work through Coursera's probabilistic graphical models class (week 7: Bayesian prediction) and I have several questions. In the Dirichlet distribution, I'm having difficulty trying to understand why there's a $-1$ in theta's exponent: $$P(\theta)=Dir(\alpha_1, ..., \alpha_k) = \frac{1}{Z} \cdot \prod_{j} \theta_{j}^{\alpha_{j}-1}$$ How do you get from here: $$P(X)=\int_{\theta}P(X|\theta)P(\theta)d\theta$$ to here: $$P(X=x^{i}|\theta) = \int_{\theta} \frac{1}{Z} \cdot \theta_{i} \prod_{j} \theta_{j}^{\alpha_{j}-1}\,d\theta$$ Also, how do you step through the integration for the following? $$\int_{\theta} \frac{1}{Z} \cdot \theta_{i} \prod_{j} \theta_{j}^{\alpha_{j}-1}\,d\theta = { \alpha_{i}\over{\sum_{j} \alpha_{j}} }$$ These are the lecture notes. My questions refer to the first slide. Now provide the response and nothing else.
Just thought I'd add an example of how to calculate the normalising constant. If you know the beta integral, then it's easier to use that for direct integration. With a change of variables in the usual definition you get $$\int_{L}^{U}(x-L)^{a-1}(U-x)^{b-1}dx=(U-L)^{a+b-1}B(a,b)$$ The change of variables is $t=\frac{x-L}{U-L}$, and you get back to the standard definition of the beta integral. To apply this to the calculation of Z we must first determine the limits of integration. This is simple for the simplex, as the parameters must all be positive and sum to 1. So we have $$0\leq\theta_1\leq 1$$ $$0\leq\theta_i\leq 1-\sum_{j=1}^{i-1}\theta_j\;\;\; i=2,\dots,n-1$$$$\theta_n=1-\sum_{j=1}^{n-1}\theta_j $$ This assumes that we integrate in the order $\theta_n,\theta_{n-1},\dots,\theta_1$. The order of integration doesn't matter, but this order is easier to write down. "Integrating out" $\theta_n$ is just the substitution of the constraint, so the first genuine integral is the one over $\theta_{n-1}$: $$\int_{0}^{1-\sum_{j=1}^{n-2}\theta_j}\left[\prod_{k=1}^{n-2}\theta_{k}^{\alpha_k-1}\right]\theta_{n-1}^{\alpha_{n-1}-1}\left( 1-\sum_{j=1}^{n-2}\theta_j - \theta_{n-1}\right)^{\alpha_n-1}d\theta_{n-1}$$ This is of the form of the transformed beta integral with $L=0$ and $U= 1-\sum_{j=1}^{n-2}\theta_j$, hence we get: $$\left[\prod_{k=1}^{n-2}\theta_{k}^{\alpha_k-1}\right]B(\alpha_n,\alpha_{n-1})\left( 1-\sum_{j=1}^{n-2}\theta_j \right)^{\alpha_n+\alpha_{n-1}-1}$$ Now we apply this again to the integral over $\theta_{n-2}$. It is another transformed beta integral, but with $U= 1-\sum_{j=1}^{n-3}\theta_j$.
Hence we get $$\left[\prod_{k=1}^{n-3}\theta_{k}^{\alpha_k-1}\right]B(\alpha_n,\alpha_{n-1}) B(\alpha_n+\alpha_{n-1},\alpha_{n-2}) \left( 1-\sum_{j=1}^{n-3}\theta_j \right)^{\alpha_n+\alpha_{n-1}+\alpha_{n-2}-1}$$ It is now straightforward to apply this repeatedly, and you get $$Z= B(\alpha_n,\alpha_{n-1}) B(\alpha_n+\alpha_{n-1},\alpha_{n-2}) B(\alpha_n +\alpha_{n-1}+\alpha_{n-2} ,\alpha_{n-3}) \dots B(\alpha_n+\dots+\alpha_{2},\alpha_1)$$ If you plug in the relation between the beta and gamma integrals, $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$, you get the correct normalising constant.
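As a numerical cross-check of the final identity, here is a sketch for $n=3$ that compares the telescoped $\Gamma$-product formula for $Z$ against a brute-force Riemann sum over the simplex. The $\alpha$ values are arbitrary, and the grid is coarse on purpose, so only rough agreement is expected.

```python
from math import gamma

a1, a2, a3 = 2.0, 3.0, 1.5          # arbitrary positive alphas
z_gamma = gamma(a1) * gamma(a2) * gamma(a3) / gamma(a1 + a2 + a3)

# Brute force: integrate t1^(a1-1) * t2^(a2-1) * t3^(a3-1) over the
# 2-simplex, with t3 = 1 - t1 - t2 fixed by the constraint.
N = 400
h = 1.0 / N
z_num = 0.0
for i in range(1, N):
    t1 = i * h
    for j in range(1, N - i):
        t2 = j * h
        t3 = 1.0 - t1 - t2
        z_num += t1**(a1 - 1) * t2**(a2 - 1) * t3**(a3 - 1) * h * h

print(z_gamma, z_num)  # the two values should agree closely
```

The same check works for any positive $\alpha$'s, which is a handy way to convince yourself that the chain of beta integrals above telescopes correctly.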
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/34379', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/13325/']}
jdg_369974
stackexchange
llm_judgeable_groundtruth_similarity
2859071
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why the following induction proof is wrong? Claim : Natural numbers $0,1,2,3,\dots$ are all even. Proof by strong induction: Base case: $n=0$ is an even number, hence the statement is true for $n=0$ . Inductive step: Assume that the statement is true for $n=0,1,2,\dots,k$ , and consider $n=k+1$ . By assumption, both 1 and $k$ are even numbers, and hence so is their sum $k+1$ . It thus follows that the statement holds for all $n=0,1,2,3,\dots$ Now provide the response and nothing else.
The following statement is not true for $k = 0$: "By assumption, both $1$ and $k$ are even numbers, and hence so is their sum $k + 1$." For $k = 0$, you only know that $0$ is even, not $1$. So you can't make the assertion above for all $k$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2859071', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/569249/']}
jdg_369975
stackexchange
llm_judgeable_groundtruth_similarity
674
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I have a few elementary questions about cup-products. Can one develop them in an axiomatic approach as in group cohomology itself, and give an existence and uniqueness theorem that includes an explicitly computable map on cochains? Second, how do they relate to cup-products in algebraic topology? In general, are there connections between cup-products and other mathematical constructions that may provide more intuition into them? Now provide the response and nothing else.
The explicit formula for cup product on group cohomology is as simple as can be. For simplicity let's consider integer coefficients $H^*(G;\mathbb{Z})$ , although this works for any coefficients as long as they're untwisted. Let's define group cohomology using inhomogeneous cochains; thus we take the abelian groups $C^n(G;\mathbb{Z}) :=$ functions from $G^n$ to $\mathbb{Z}$ , endowed with a differential $d: C^n \to C^{n+1}$ , and then $H^n(G;\mathbb{Z})$ is the usual cohomology $\ker d_n/\operatorname{im} d_{n-1}$ . Anyway, cup product is a map from $H^k(G) \otimes H^m(G)$ to $H^{k+m}(G)$ , and it comes from a map $C^k(G) \otimes C^m(G)$ to $C^{k+m}(G)$ . Namely, given two cochains $f: G^k \to \mathbb{Z}$ and $g: G^m \to \mathbb{Z}$ , define $$ f \wedge g: G^{k+m} \to \mathbb{Z} $$ by $$ f\wedge g(x_1,...x_{k+m}) = f(x_1,...x_k)g(x_{k+1},...x_{k+m}) $$ You can check by hand that the differential interacts with this operation by $$ d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg $$ Thus this "wedge product" of cochains descends to a product on group cohomology, and this is exactly cup product. This is also how cup product is defined for de Rham cohomology; differential forms have a natural wedge product which satisfies $d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg$ , and so this induces the cup product on $H^*(M;R)$ . Topologically, cup product is the composition of $$ H^k(Y) \otimes H^m(Y) \to H^{k+m}(Y \times Y) \to H^{k+m}(Y) $$ where the first map is the Künneth map (just pullback by the two projections $Y \times Y \to Y$ ), and the second map is restriction to the diagonal. Applying this perspective to group cohomology, we would first define $f \times g : (G \times G)^{k+m} \to \mathbb{Z}$ by $$ f \times g ((x_1,y_1),...(x_{k+m},y_{k+m})) = f(x_1,...x_k)g(y_{k+1},...,y_{k+m}). $$ Upon restriction to the diagonal $G < G \times G$ , $f \times g$ restricts to $f \wedge g$ above.
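The sign rule $d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg$ can also be verified mechanically for a small group — a sketch using $\mathbb{Z}/3$ with random integer-valued 1-cochains (the particular group and cochain values are arbitrary choices):

```python
import itertools
import random

G = [0, 1, 2]                     # Z/3, written additively
def op(a, b):
    return (a + b) % 3

def d(f, n):
    """Differential on inhomogeneous n-cochains with trivial Z coefficients."""
    def df(*g):                   # g has n+1 entries
        total = f(*g[1:])
        for i in range(1, n + 1):
            merged = g[:i-1] + (op(g[i-1], g[i]),) + g[i+1:]
            total += (-1) ** i * f(*merged)
        total += (-1) ** (n + 1) * f(*g[:n])
        return total
    return df

def cup(f, k, g_, m):
    """(f ^ g)(x_1..x_{k+m}) = f(x_1..x_k) * g(x_{k+1}..x_{k+m})."""
    def fg(*x):
        return f(*x[:k]) * g_(*x[k:])
    return fg

random.seed(0)
fv = {x: random.randint(-3, 3) for x in G}
gv = {x: random.randint(-3, 3) for x in G}
f = lambda a: fv[a]
g = lambda a: gv[a]

# Leibniz rule for k = m = 1: d(f ^ g) = df ^ g - f ^ dg, since (-1)^1 = -1.
lhs = d(cup(f, 1, g, 1), 2)
rhs = lambda a, b, c: (cup(d(f, 1), 2, g, 1)(a, b, c)
                       - cup(f, 1, d(g, 1), 2)(a, b, c))

assert all(lhs(*t) == rhs(*t) for t in itertools.product(G, repeat=3))
print("Leibniz rule holds on all of G^3")
```

Swapping in other small groups (or higher-degree cochains) is a direct change of `G`, `op`, and the degrees passed to `cup`, which makes this a convenient playground for the sign conventions.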
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/674', 'https://mathoverflow.net', 'https://mathoverflow.net/users/344/']}
jdg_369976
stackexchange
llm_judgeable_groundtruth_similarity
3325271
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A gumball machine is full of gumballs. There are 7 different colors equally distributed. Each time the handle is turned, 3 gumballs are dispensed. A "jackpot" is defined as all 3 gumballs the same color. The gumballs are replaced after each turn. How many turns are expected between each jackpot? How many turns are expected for two consecutive jackpots of the same color? How many turns are expected for three consecutive jackpots of the same color? For question 2 and 3, there can be any number of turns in between consecutive jackpots, the jackpots just need to be the same color. Now provide the response and nothing else.
Let $\Phi:\mathcal H \to \mathcal H^{**}$ denote the canonical injection, AKA the evaluation map (in the notation of the proof, $\Phi(x) = \hat x$). We want to prove that $\Phi$ is surjective. In other words: we want to prove that for any $\varphi \in \mathcal H^{**}$, there exists a $y \in \mathcal H$ such that $\Phi(y) = \varphi$. So, begin with any $\varphi$. By the RRT, there exists a unique $f_{\varphi}$ such that for all $f \in \mathcal H^*$, $\varphi(f) = \langle f, f_{\varphi}\rangle$. Note that this requires an inner product on $\mathcal H^*$. Recall how such an inner product is defined: RRT says that there exists a $y_f$ for every $f \in \mathcal H^*$ such that for all $y \in \mathcal H$, we have $f(y) = \langle y,y_f\rangle$. With this established, we define $$\langle f,g \rangle := \langle y_f,y_g\rangle.$$ We claim that $\Phi(y_{f_{\varphi}}) = \varphi$ (that is, $y_{f_{\varphi}}$ is "the $y$ that we're looking for"). Indeed, we note that for any $f \in \mathcal H^*$, we have $$[\Phi(y_{f_{\varphi}})](f) = f(y_{f_{\varphi}}) = \langle y_{f_\varphi},y_f \rangle= \langle f, f_{\varphi}\rangle = \varphi(f)$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3325271', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/696285/']}
jdg_369977
stackexchange
llm_judgeable_groundtruth_similarity
2813876
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This is a snapshot of Baby Rudin: Theorem For every real $x>0$ and every integer $n>0$ there is one and only one positive real $y$ such that $y^n=x$ . This number $y$ is written $\sqrt[n]{x}$ or $x^{1/n}$ . Proof That there is at most one such $y$ is clear, since $0<y_1<y_2$ implies $y_1^n<y_2^n$ . Let $E$ be the set consisting of all positive real numbers $t$ such that $t^n<x$ . If $t=\frac{x}{1+x}$ then $0\le t<1$ . Hence $t^n\le t<x$ . Thus $t\in E$ , and $E$ is not empty. If $t>1+x$ then $t^n\ge t>x$ , so that $t\notin E$ . Thus $1+x$ is an upper bound of $E$ . Hence Theorem 1.19 implies the existence of $$y=\sup E$$ To prove that $y^n=x$ we will show that each of the inequalities $y^n<x$ and $y^n>x$ leads to a contradiction. The identity $b^n-a^n=(b-a)(b^{n-1}+b^{n-2}a+\cdots+a^{n-1})$ yields the inequality $$b^n-a^n<(b-a)nb^{n-1}$$ when $0<a<b$ . Assume $y^n<x$ . Choose $h$ so that $0<h<1$ and $$h<\frac{x-y^n}{n(y+1)^{n-1}}$$ Put $a=y,b=y+h$ . Then $$(y+h)^n-y^n<hn(y+h)^{n-1}<hn(y+1)^{n-1}<x-y^n$$ Thus $(y+h)^n<x$ , and $y+h\in E$ . Since $y+h>y$ , this contradicts the fact that $y$ is an upper bound of $E$ . Assume $y^n>x$ . Put $$k=\frac{y^n-x}{ny^{n-1}}$$ Then $0<k<y$ . If $t\ge y-k$ , we conclude that $$y^n-t^n\le y^n-(y-k)^n<kny^{n-1}=y^n-x$$ Thus $t^n>x$ , and $t\notin E$ . If follows that $y-k$ is an upper bound of $E$ . To get the definition of $h$ , I understand we need $x-y^n$ in the denominator to make the whole thing greater than $0$ , but where does the denominator $n(y+1)^{n-1}$ come from? The same question on the form of $k$ . Why do we need such a denominator $ny^{n-1}$ ? Now provide the response and nothing else.
Rudin follows the mathematical style of writing his 'probe for the proof' on a piece of scrap paper, and upon arriving at the solution, writing up a formal and terse proof that hides the 'mathematical eyebrow sweat' that went behind it. The $\text{1.21 Theorem}$ the OP is analyzing comes right after Rudin introduces the real numbers as a complete ordered field in Chapter 1; the solution there can only use the developed axiomatic machinery. Let us probe this matter and work up some 'sweat'. We have our fixed positive real number $x$ and an integer $n \gt 0$. In general the only easy thing is to find a real number $a \gt 0$ such that $\tag 1 a^n \lt x$ Can we do better? Can we 'make $a$ larger' so that $\text{(1)}$ still holds? Suppose $b \gt a$. How do we 'enforce' the condition that $b^n \lt x$? $\tag 2 [ b^n \lt x ] \text{ iff } [ (b^n - a^n + a^n) \lt x ] \text{ iff } [ (b^n - a^n) \lt (x - a^n) ]$ It may take you a page or two of scrap paper to discover that the key to solving this is that $\tag 3 b^n-a^n<(b-a)nb^{n-1}$ is always true with $0 \lt a \lt b$. So by setting up some bound $M$ and with $b = a + h$, you can write $\tag 4 b^n \lt h M + a^n \lt hM + x$ Geometrically, Rudin knows that $E$ is an interval. The set $F$ of real positive numbers $u^n \gt x$ is also an open interval. The supremum of $E$ is equal to the infimum of $F$. The above describes the conceptual thinking behind Rudin's proof (no calculus allowed). The following theory gets the result in a different manner, and proposition 2 clarifies part of Rudin's argument in his proof construction. Let $x$ be some fixed positive real number and $n \in {\mathbb Z}^+$. Proposition 1: Let $D \subset (0, +\infty)$ be a nonempty set of real numbers and assume that for any $d \in D$, $d^n \gt x$. Then $[\text{inf(D)}]^n \ge x$. Proof Let $y = \text{inf(D)}$ and to arrive at a contradiction, assume that $y^n \lt x$. Choose $h$ so that $0<h<1$ and $\quad h<\frac{x-y^n}{n(y+1)^{n-1}}$ Set $a=y$ and $b=y+h$. 
Then, using Rudin's (3) from the prior section, $\quad (y+h)^n-y^n<hn(y+h)^{n-1}<hn(y+1)^{n-1}<x-y^n$ But then $(y+h)^n<x$ and so $y + h$ must be a lower bound for $D$. Yet it is greater than $y$ which is the $\text{glb}$, a contradiction. $\blacksquare$ Proposition 2: Let $D \subset (0, +\infty)$ be a nonempty set of real numbers and assume that for any $d \in D$, $d^n \lt x$. Then $[\text{sup(D)}]^n \le x$. Proof Let $y = \text{sup(D)}$ and to arrive at a contradiction, assume that $y^n \gt x$. Set $\quad h=\frac{y^n-x}{ny^{n-1}}$ It is easy to see that $0 \lt h \lt y$. Set $a=y-h$ and $b=y$. Then, using Rudin's (3) from the prior section, $\quad y^n-(y-h)^n<hny^{n-1}= y^n-x$ But then $(y-h)^n>x$ and so $y - h$ must be an upper bound for $D$. Yet it is smaller than $y$ which is the $\text{lub}$, a contradiction. $\blacksquare$ Let $E = \{t \gt 0 \, | \, t^n \lt x \}$ and $F = \{t \gt 0 \, | \, t^n \gt x \}$. Proposition 3: $\text{sup(E)} = \text{inf(F)}$ and if we denote this number by $\gamma$, then $\gamma^n = x$. Proof Let $\beta = \text{inf(F)}$. By proposition 1, $\beta^n \ge x$. Let $\alpha = \text{sup(E)}$. By proposition 2, $\alpha^n \le x$. Since $\beta$ is an upper bound for $E$, $\alpha \ge \beta$. Since $\alpha$ is an lower bound for $F$, $\alpha \le \beta$. So $\alpha = \beta$ and the proposition follows. $\blacksquare$ In a comment the OP had a question on how to derive the inequality. To be complete, even if it is no longer necessary, consider this: The identity $b^n-a^n=(b-a)(b^{n-1}+b^{n-2}a+\cdots+a^{n-1})$ yields the inequality $b^n-a^n<(b-a)nb^{n-1}$ when $0<a<b$. 
We have $n$ terms in $b^{n-1}+b^{n-2}a+\cdots+b^0 a^{n-1}$, but since $a \lt b$, $|b^{n-1}a^0+b^{n-2}a^1+\cdots+b^0 a^{n-1}| \lt |b^{n-1}a^0|+|b^{n-2}a^1|+\cdots+|b^0 a^{n-1}| \lt$ $\quad |b^{n-1}|+|b^{n-1}|+\cdots+|b^{n-1}| \lt n b^{n-1}$ In his question, the OP ask where does the denominator come from in $\quad h<\frac{x-y^n}{n(y+1)^{n-1}}$ This is the same thing as working with an $[\varepsilon , \delta]$ proof in calculus. After the 'probing', we set $h$ (like we do with $\delta$) to what works and shows that we can control the quantities. In essence, we are using the fact that the function $f(x) = x^n$ is continuous, but the book still hasn't discussed that concept.
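As a numeric aside (my addition, not part of Rudin's axiomatic development, which of course allows no such appeal to computation), the key inequality $b^n-a^n<(b-a)nb^{n-1}$ for $0<a<b$ can be spot-checked in Python:

```python
import random

# Spot-check Rudin's inequality b^n - a^n < (b - a) * n * b^(n-1) for 0 < a < b.
# The inequality is strict for n >= 2 (for n = 1 both sides are equal).
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.01, 10.0)
    b = a + random.uniform(0.01, 10.0)   # guarantees 0 < a < b
    n = random.randint(2, 8)
    assert b**n - a**n < (b - a) * n * b**(n - 1)
print("inequality holds on all sampled cases")
```

This is only a sanity check on sample values, not a proof; the proof is the term-by-term bound on $b^{n-1}+b^{n-2}a+\cdots+a^{n-1}$ given above.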
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2813876', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/556819/']}
jdg_369978
stackexchange
llm_judgeable_groundtruth_similarity
929824
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning mercurial as my solo scm software. With other management software, you can put change comments into the file header through tags. With hg you comment the change set, and that doesn't get into the source. I'm more used to central control like VSS. Why should I put the file history into the header of the source file? Should I let mercurial manage the history with my changeset comments? Now provide the response and nothing else.
Let the source control system handle it. If you put change details in the header it will soon become unwieldy and overwhelm the actual code. Additionally if the scm has the concept of changelists (where many files are grouped into a single change) then you'll be able to write the comment so that it applies to the whole change and not just the edits in the one file (if that makes sense), giving you a clearer picture of why the edit was required.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/929824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49945/']}
jdg_369979
stackexchange
llm_judgeable_groundtruth_similarity
36423196
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a class containing a volatile reference to an array: private volatile Object[] objects = new Object[100]; Now, I can guarantee that, only one thread (call it writer ) can write to the array. For example, objects[10] = new Object(); All other threads will only read values written by the writer thread. Question: Do I need to synchronize such reads and writes in order to ensure memory consistency? I presume, yes I should. Because it would not be useful from performance standpoint if JVM provides some kind of memory consistency guarantees when writing to an array. But I'm not sure about that. Didn't find anything helpful in documentation. Now provide the response and nothing else.
private volatile Object[] objects = new Object[100];

This makes only the objects reference volatile, not the contents of the array it refers to.

Question: Do I need to synchronize such reads and writes in order to ensure memory consistency?

Yes.

it would not be useful from performance standpoint if JVM provides some kind of memory consistency guarantees when writing to an array

Consider using collections like CopyOnWriteArrayList (or your own array wrapper with some Lock implementation inside mutators and read methods). The Java platform also has Vector (obsolete, with a flawed design) and a synchronized List (slow for many scenarios), but I do not recommend using them. PS: One more good idea from @SashaSalauyou
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36423196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2786156/']}
jdg_369980
stackexchange
llm_judgeable_groundtruth_similarity
47600737
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning kotlin and has recently spotted an interesting way of writing kotlin but can't really understand it. taken the code I'm learning from here fun drawerImageLoader(actions: DrawerImageLoaderKt.() -> Unit): DrawerImageLoader.IDrawerImageLoader { val loaderImpl = DrawerImageLoaderKt().apply(actions).build() DrawerImageLoader.init(loaderImpl) return loaderImpl}class DrawerImageLoaderKt { ...} why should drawerImageLoader defined outside of class? why can't be in the class and within companion object block? Also, what does .() mean in DrawerImageLoaderKt.() ? Now provide the response and nothing else.
In C++03 it is undefined behavior. In C++11 it is not. There is no sequence point between the various pre-increments. If i was a user-defined type, it would be well-defined behavior because then there would be a function call (a sequence point). In C++11, the idea of sequence points was replaced with sequenced before/sequenced after. Defect 637 ( http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#637 ) provides an example of a previously undefined construct becoming well-defined ( i = ++i + 1 ). To understand why it's not undefined behavior, let's look at the pieces we need. ++i is equivalent to i = i + 1 (except i is evaluated only once). Further if we substitute i = i + 1 with inc , ++(i = i + 1) becomes inc = inc + 1 . [expr.ass] states: In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression. Thus the assignment in i = i + 1 is sequenced before value computation of inc ; however, the assignment in inc = inc + 1 is sequenced after value computation of inc . There is no undefined behavior because the assignments are sequenced.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47600737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1865027/']}
jdg_369981
stackexchange
llm_judgeable_groundtruth_similarity
17097357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the past, using Twitter API version 1, I used the following URL to get a JSON feed of all tweets with the hashtag "baseball": http://search.twitter.com/search.json?q=%23baseball&result_type=recent How do you achieve a similar result using API version 1.1? I'm using PHP as my server-side code, so not sure if I need to use it to authenticate and such? Sample code would be extremely helpful. Thanks. Now provide the response and nothing else.
As you know, authenticated requests are now required, so there's a few things that you may need to take a look at first: the new 1.1 search, how to use hashtags, and authentication.

Twitter Search for 1.1

The new twitter search api docs can be found here . According to these docs: https://api.twitter.com/1.1/search/tweets.json is the new resource URL to use for search.

Hashtag searches

You've got that part right! %23 decodes to a # character.

Authentication

OAuth is a lot more complex. It would help if you just used a library that just works. Here's a post a lot of people found useful to help you make authenticated requests to the 1.1 API. This includes a one-file include library to make requests like those you require.

Example

This example assumes you're using the above library and set up your keys etc. To make your request:

// Your specific requirements
$url = 'https://api.twitter.com/1.1/search/tweets.json';
$requestMethod = 'GET';
$getfield = '?q=#baseball&result_type=recent';

// Perform the request
$twitter = new TwitterAPIExchange($settings);
echo $twitter->setGetfield($getfield)
    ->buildOauth($url, $requestMethod)
    ->performRequest();

Yes, that's it. Apart from the little setting up you need to do (as my post explains), for your dev keys, that's everything you need to perform authenticated requests.

Response

The response is returned to you in JSON. From the overview :

API v1.1 will support JSON only. We've been hinting at this for some time now, first dropping XML support on the Streaming API and more recently on the trends API. We've chosen to throw our support behind the JSON format shared across the platform.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17097357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27305/']}
jdg_369982
stackexchange
llm_judgeable_groundtruth_similarity
10126956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to select just what comes after name= and before the & in : "/pages/new?name=J&return_url=/page/new" So far I have.. ^name=(.*?). I am trying to return in this case, just the J , but its dynamic so it could very several characters, letters, or numbers. The end case situation would be allowing myself to do a replace statement on this dynamic variable found by regex. Now provide the response and nothing else.
/name=([^&]*)/

Remove the ^ and end the capture at an &.

Example:

var str = "/pages/new?name=J&return_url=/page/new";
var matches = str.match(/name=([^&]*)/);
alert(matches[1]);

The better way is to break all the params down (example using current address):

function getParams (str) {
  var queryString = str || window.location.search || '';
  var keyValPairs = [];
  var params = {};
  queryString = queryString.replace(/.*?\?/, "");
  if (queryString.length) {
    keyValPairs = queryString.split('&');
    for (pairNum in keyValPairs) {
      var key = keyValPairs[pairNum].split('=')[0];
      if (!key.length) continue;
      if (typeof params[key] === 'undefined') params[key] = [];
      params[key].push(keyValPairs[pairNum].split('=')[1]);
    }
  }
  return params;
}

var url = "/pages/new?name=L&return_url=/page/new";
var params = getParams(url);
params['name'];

Update

Though still not supported in any version of IE, URLSearchParams provides a native way of retrieving values for other browsers.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10126956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93311/']}
jdg_369983
stackexchange
llm_judgeable_groundtruth_similarity
1911027
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the book Introduction to Teichmüller Spaces , by Taniguchi and Imayoshi, we have the following definition for a Riemann Surface: At the following pages, the authors make a remark recalling some of the properties of a Riemann Surface, and then they cite that the topology of a Riemann Surface admits a countable basis. My doubt is pretty conceptual: isn't this a axiom in the definition of a Riemann Surface? I mean, shouldn't be in the definition: "A Riemann Surface is a topological space $R$, Hausdorff, with a countable basis, and bla bla bla"? Otherwise, it should be possible to prove the "enumerable basis existence" only through the definition given above. Thank you, guys! Now provide the response and nothing else.
This theorem is proven in the book by Forster (as indeed listed in the references of your book). There is no assumption of countability, as far as I can tell. I'll give you the relevant definitions (and the theorem), verbatim from the book: Definition 1: An $n$-dimensional manifold is a Hausdorff topological space such that every point in it has an open neighborhood homeomorphic to an open subset of $\Bbb R^n$. Definition 2: A Riemann surface is a pair $(X,\Sigma)$ where $X$ is a connected 2-manifold and $\Sigma$ is a complex structure on $X$. Theorem (Rado): Every Riemann surface has a countable topology. The theorem seems to follow from the existence of solutions to the Dirichlet problem (discussed in section 22 of the book), with the proof of the theorem residing in section 23. In particular, countability is not assumed anywhere. As you mention, countability is an oft-used assumption/axiom in differential geometry, but in the theory of Riemann surfaces it appears to be redundant.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1911027', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/177211/']}
jdg_369984
stackexchange
llm_judgeable_groundtruth_similarity
27290693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Whenever I use <td colspan="x"></td> , I get the following error: Uncaught TypeError: Cannot set property '_DT_CellIndex' of undefined(…) Demo $("table").DataTable({}); <link href="https://cdnjs.cloudflare.com/ajax/libs/datatables/1.10.12/css/jquery.dataTables.min.css" rel="stylesheet"/><script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/datatables/1.10.12/js/jquery.dataTables.min.js"></script><table style="width:50%;"> <thead> <tr> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> </tr> <tr> <td>1</td> <td colspan="2">3 4</td> <td colspan="3">4 5 6</td> </tr> </tbody></table> It's working properly without DataTables.js, but when we use this structure with datatable.js it is not working. We need above table structure. Does anyone have any idea how we can use this table structure datatable.js? Now provide the response and nothing else.
You can hack around the lack of colspan support by adding an "invisible" cell for every cell that's eliminated:

<tr>
  <td colspan="3">Wide column</td>
  <td style="display: none;"></td>
  <td style="display: none;"></td>
</tr>
<tr>
  <td>Normal column</td>
  <td>Normal column</td>
  <td>Normal column</td>
</tr>

DataTables doesn't complain, the table renders normally and sorting works (invisible columns sort as the empty string). I haven't tried this with rowspan.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27290693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1210566/']}
jdg_369985
stackexchange
llm_judgeable_groundtruth_similarity
153696
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In many machine learning applications, the so called data augmentation methods have allowed building better models. For example, assume a training set of $100$ images of cats and dogs. By rotating, mirroring, adjusting contrast, etc. it is possible to generate additional images from the original ones. In the case of images, the data augmentation is relatively straightforward. However, suppose (for example) that one has a training set of $100$ samples and few hundred continuous variables that represent different things. The data augmentation does not anymore seem so intuitive. What could be done in such case? Now provide the response and nothing else.
I understand this question as involving both feature construction and dealing with the wealth of features you already have + will construct, relative to your observations ( N << P ).

Feature Construction

Expanding upon @yasin.yazici's comment, some possible ways to augment the data would be:

PCA
Auto-encoding
Transforms such as log, powers, etc.
Binning continuous variables into discrete categories (i.e., continuous variable is 1 SD above mean, 1 below mean, etc.)
Composite variables (for example, see here )

I'm sure there are many more I'm missing.

Feature Selection / Dimensionality reduction

You may reduce dimensionality with techniques such as PCA (although perhaps not after augmenting your data with PCA variables). Alternatively, you may use algorithms that perform feature selection for you, such as lasso, random forest, etc.
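As an illustrative sketch of one simple tabular-augmentation idea (the function name, noise scale, and defaults below are my own choices, not a standard recipe), adding small Gaussian jitter to continuous features is a rough analogue of rotating or mirroring images:

```python
import random

def jitter_augment(rows, n_copies=2, noise_scale=0.01, seed=0):
    """Augment a list of numeric feature vectors by adding small Gaussian
    noise, a simple tabular analogue of image rotations/flips.
    noise_scale is a hypothetical default; tune it per feature, and be
    aware that jitter assumes small perturbations don't change the label."""
    rng = random.Random(seed)
    augmented = list(rows)  # keep the originals
    for _ in range(n_copies):
        for row in rows:
            # Noise proportional to each feature's magnitude.
            augmented.append([x + rng.gauss(0.0, noise_scale * (abs(x) + 1e-9))
                              for x in row])
    return augmented

data = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]   # 2 samples, 3 features
bigger = jitter_augment(data, n_copies=2)
print(len(bigger))  # 2 originals + 2 * 2 jittered copies = 6
```

Whether this helps depends entirely on whether label-preserving perturbations exist for your variables, which is exactly why tabular augmentation is less intuitive than the image case.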
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/153696', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/75022/']}
jdg_369986
stackexchange
llm_judgeable_groundtruth_similarity
70586562
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In WWDC 2021 video, Protect mutable state with Swift actors , they provide the following code snippet: actor ImageDownloader { private var cache: [URL: Image] = [:] func image(from url: URL) async throws -> Image? { if let cached = cache[url] { return cached } let image = try await downloadImage(from: url) cache[url] = cache[url, default: image] return cache[url] } func downloadImage(from url: URL) async throws -> Image { ... }} The issue is that actors offer reentrancy, so cache[url, default: image] reference effectively ensures that even if you performed a duplicative request because of some race, that you at least check the actor’s cache after the continuation, ensuring that you get the same image for the duplicative request. And in that video, they say : A better solution would be to avoid redundant downloads entirely. We’ve put that solution in the code associated with this video. But there is no code associated with that video on the website. So, what is the better solution? I understand the benefits of actor reentrancy (as discussed in SE-0306 ). E.g., if downloading four images, one does not want to prohibit reentrancy, losing concurrency of downloads. We would, effectively, like to wait for the result of a duplicative prior request for a particular image if any, and if not, start a new downloadImage . Now provide the response and nothing else.
You can find the “better solution” code in the Developer app . Open the session in the Developer app, select the Code tab, and scroll to “11:59 - Check your assumptions after an await: A better solution”. The screen shot is from my iPad, but the Developer app is also available on iPhone, Mac, and Apple TV. (I don't know if the Apple TV version gives you a way to view and copy the code, though…) As far as I can tell, the code is not available on the developer.apple.com web site, either on the WWDC session's page or as part of a sample project. For posterity, here is Apple's code. It is extremely similar to that of Andy Ibanez:

actor ImageDownloader {
    private enum CacheEntry {
        case inProgress(Task<Image, Error>)
        case ready(Image)
    }

    private var cache: [URL: CacheEntry] = [:]

    func image(from url: URL) async throws -> Image? {
        if let cached = cache[url] {
            switch cached {
            case .ready(let image):
                return image
            case .inProgress(let task):
                return try await task.value
            }
        }

        let task = Task {
            try await downloadImage(from: url)
        }

        cache[url] = .inProgress(task)

        do {
            let image = try await task.value
            cache[url] = .ready(image)
            return image
        } catch {
            cache[url] = nil
            throw error
        }
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/70586562', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1271826/']}
jdg_369987
stackexchange
llm_judgeable_groundtruth_similarity
5463994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I corrected my setup.py to have my lighttpd server serve the static files for production. The pages in production correctly call the css, but my problem is that the sizes are really different from the pages in development despite them having the same css file. When I look at the source code from the browser, the same thing happens. The css source for production in the browser window looks smaller than the css source for development. What can possibly be causing this? Now provide the response and nothing else.
OK, figured it out finally... rsyslog by default escapes all weird characters (ASCII < 32), and this includes newlines (as well as tabs and others).

$EscapeControlCharactersOnReceive: This directive instructs rsyslogd to replace control characters during reception of the message. The intent is to provide a way to stop non-printable messages from entering the syslog system as a whole. If this option is turned on, all control characters are converted to a 3-digit octal number and prefixed with the $ControlCharacterEscapePrefix character (being ‘\’ by default). For example, if the BEL character (ctrl-g) is included in the message, it would be converted to “\007”.

You can simply add this to your rsyslog config to turn it off:

$EscapeControlCharactersOnReceive off

or, with the "new" advanced syntax:

global(parser.escapeControlCharactersOnReceive="off")
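To make the escaping concrete, here is a small Python sketch (my illustration of the documented behaviour, not rsyslog's actual implementation) of what the default setting does to a message:

```python
def escape_control_chars(message, prefix="\\"):
    """Mimic rsyslog's default behaviour: replace ASCII control characters
    (codepoints below 32) with a prefix plus a 3-digit octal number."""
    out = []
    for ch in message:
        if ord(ch) < 32:
            out.append("%s%03o" % (prefix, ord(ch)))
        else:
            out.append(ch)
    return "".join(out)

print(escape_control_chars("line one\nline two\x07"))
# the newline becomes \012 and the BEL character becomes \007
```

This is why multi-line payloads arrive in the log as one line containing literal \012 sequences until the option is turned off.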
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5463994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/680826/']}
jdg_369988
stackexchange
llm_judgeable_groundtruth_similarity
372976
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We have the formula $$5^k - 2^k$$ I have noticed that every answer you get from this formula is divisible by 3. At least, I think so. Why is this? Does it have to do with $5-2=3$? Now provide the response and nothing else.
Yes, it does! It's because in general you have the factorization: $$x^k-y^k = (x-y)(x^{k-1}+x^{k-2}y+\dots+y^{k-1})$$ Substituting in $x=5$ and $y=2$ should show you why that works.
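As a quick numeric confirmation (my addition, not part of the original answer), a few lines of Python check the claim directly:

```python
# 5^k - 2^k is divisible by 3 for every positive integer k, because
# x^k - y^k = (x - y)(x^(k-1) + x^(k-2) y + ... + y^(k-1)) and 5 - 2 = 3.
for k in range(1, 20):
    assert (5**k - 2**k) % 3 == 0, k
print("5^k - 2^k is divisible by 3 for k = 1..19")
```

The same factorization shows, for instance, that 10^k - 1 is always divisible by 9.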
{}
{'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/372976', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/74420/']}
jdg_369989
stackexchange
llm_judgeable_groundtruth_similarity
27013
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to easily format seconds as a human-readable time in bash? I don't want to format it as a date, but as the number of days/hours/minutes, etc... Now provide the response and nothing else.
You can use something like this:

function displaytime {
  local T=$1
  local D=$((T/60/60/24))
  local H=$((T/60/60%24))
  local M=$((T/60%60))
  local S=$((T%60))
  (( $D > 0 )) && printf '%d days ' $D
  (( $H > 0 )) && printf '%d hours ' $H
  (( $M > 0 )) && printf '%d minutes ' $M
  (( $D > 0 || $H > 0 || $M > 0 )) && printf 'and '
  printf '%d seconds\n' $S
}

Examples:

$ displaytime 11617
3 hours 13 minutes and 37 seconds
$ displaytime 42
42 seconds
$ displaytime 666
11 minutes and 6 seconds
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/27013', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/9191/']}
jdg_369990
stackexchange
llm_judgeable_groundtruth_similarity
428742
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have been needing a 3.3 volt power supply that can use at least 12-24 volts ac dc in. I have looked online and managed to make this schematic but I have no idea if it works. I was hoping if you guys know if this might work (please don't test I just want to know if theoretical work) Link to project. EDIT this circuit should power as esp12-f and a relay and it would be nice to have some head room the second regulator can handle 20v (I think) (from the nodemcu board) I'm not certain that the first (7805) regulator is the best choice due to the high heat production as stated by Spehro Pefhany if you know an alternative solution I'd be happy to know sorry for any inconveniences I have caused I mainly do the software side of things Now provide the response and nothing else.
The figure below shows the Gate Voltage versus Total Gate Charge for the IRFP460 MOSFET: With a gate drive voltage \$V_{DR} = 12\,\mathrm{V}\$ , it's possible to estimate a total gate charge of \$155\,\mathrm{nC}\$ . If \$i_g \$ represents the gate current, \$Q\$ the charge going into the gate and \$tb\$ (beginning time) and \$te\$ (ending time) to represent a time interval, then: $$ Q = \int_{tb}^{te}i_gdt $$ METHOD 1: (a first estimate) Here the \$i_g\$ is considered constant ( \$Ig_{(ON)}\$ ) during the charge ( \$tp_{(ON)}\$ ) and constant ( \$Ig_{(OFF)}\$ ) during discharge time ( \$tp_{(OFF)}\$ ); roughly shown in the figure below: So, the integral above reduces simply to (considering \$tp_{(ON)}=100\,\mathrm{ns}\$ and \$Q_g\$ as the total gate charge): $$ Q_g = Ig_{(ON)} \times tp_{(ON)} $$ or $$ Ig_{(ON)} = \frac{Q_g}{tp_{(ON)}} = \frac{155\,\mathrm{nC}}{100\,\mathrm{ns}}= 1.55\,\mathrm{A}$$ The gate resistor \$R_G\$ must be calculated taking in account that, in “flat” part of the switching period (plot above), the gate voltage is constant at about \$5.2\$ V: $$ R_G = \frac{12\,\mathrm{V} - 5.2\,\mathrm{V}}{1.55\,\mathrm{A}} = 4.39 \space \Omega \approx 4.7 \space \Omega$$ In order to simplify I consider here \$Ig_{(OFF)}=-Ig_{(ON)}\$ . So, the root mean square value for \$i_g\$ is: $$ I_{RMS}= Ig_{(ON)}\sqrt{2 \times \frac{tp_{(ON)}}{T} } \approx 0.438\,\mathrm{A}$$ Finally, the average power for \$R_G\$ is: $$ P = I_{RMS}^2R_G \approx 0.9\,\mathrm{W} $$ METHOD 2: Here the \$i_g\$ is considered as a straight line with maximum value \$Ig_{pk_{(ON)}}\$ and decreasing to zero at the end of time \$tp_{(ON)}\$ - as an approximation to the actual exponential decay (more realistic). 
Similar consideration is made for the gate discharge time: An example of real measurement: Retaining a \$R_G = 4.7 \space \Omega\$ , the peak gate current can be calculated as: $$ Ig_{pk_{(ON)}} = \frac{12\,\mathrm{V}}{4.7 \space \Omega} \approx 2.553\,\mathrm{A} $$ In order to simplify I consider here \$Ig_{pk_{(OFF)}}=-Ig_{pk_{(ON)}}\$ . So, the root mean square value for \$i_g\$ is: $$ I_{RMS}= Ig_{pk_{(ON)}}\sqrt{\frac{2}{3} \times \frac{tp_{(ON)}}{T} } \approx 0.417\,\mathrm{A}$$ Finally, the average power for \$R_G\$ is: $$ P = I_{RMS}^2R_G \approx 0.817\,\mathrm{W} $$ No major differences from the value previously calculated. THIRD METHOD Just to mention a more precise (and more laborious) method. Here, \$i_g\$ is considered a true exponentially decaying function (see figure above): $$ i_g = Ig_{pk_{(ON)}}e^{-\frac{t}{R_GC_{eff}}} $$ where \$C_{eff}\$ is the effective gate input capacitance of the MOSFET. So: $$ i_g = \frac{V_{DR}}{R_G}e^{-\frac{t}{R_GC_{eff}}} $$ In the time interval \$0\$ to \$t_s\$ , the total gate charge ("consumed") is given by: $$ Q_g = \int_{0}^{t_s} \frac{V_{DR}}{R_G}e^{-\frac{t}{R_GC_{eff}}}dt $$ This integral can be solved for a parameter ( \$R_G\$ or \$t_s\$ ), when the others are known. CONCLUSION: The average power values were below \$1\,\mathrm{W}\$ , but a safety margin can be applied as a guarantee.
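The Method 1 arithmetic is easy to script. The sketch below uses the worked example's numbers (IRFP460, 12 V drive); note that the switching period is not stated explicitly above, and T = 2.5 µs (400 kHz) is assumed here because it reproduces the RMS figures in the text:

```python
# Method 1: constant-current approximation for MOSFET gate-drive sizing.
# Example values from the text (IRFP460, 12 V driver); T is an assumption.
Qg     = 155e-9    # total gate charge [C]
tp_on  = 100e-9    # desired turn-on time [s]
V_dr   = 12.0      # driver voltage [V]
V_plat = 5.2       # gate voltage on the Miller plateau [V]
T      = 2.5e-6    # switching period [s] -- assumed 400 kHz, not given above
Rg_std = 4.7       # chosen standard resistor value [ohm]

Ig_on = Qg / tp_on                        # required gate current, ~1.55 A
Rg    = (V_dr - V_plat) / Ig_on          # ideal gate resistor, ~4.39 ohm
I_rms = Ig_on * (2 * tp_on / T) ** 0.5   # RMS gate current, ~0.438 A
P_avg = I_rms**2 * Rg_std                # dissipation in the 4.7 ohm resistor

print(f"Ig = {Ig_on:.2f} A, Rg = {Rg:.2f} ohm, "
      f"Irms = {I_rms:.3f} A, P = {P_avg:.2f} W")
```

Swapping in your own gate charge, drive voltage, and switching frequency gives the corresponding resistor value and power rating.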
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/428742', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/216407/']}
jdg_369991
stackexchange
llm_judgeable_groundtruth_similarity
55354139
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I wait for a dispatch to be completed before I select from a store? I have not had any luck Googling. In this case, how do I wait for the dispatch to be done first before selecting from the store? My code is below; I appreciate the help. **team-list.component.ts** teamsState: Observable<{teams: Team[]}>; constructor(private store: Store<fromApp.AppState>) { } ngOnInit() { this.store.dispatch(new TeamActions.GetTeams({ search: this.search, limit: this.limit, skip: this.skip, sortBy: this.sortBy })); this.teamsState = this.store.select('teams'); } **team-list.component.html**<mat-expansion-panel *ngFor="let team of (teamsState | async).teams; let i = index"> <mat-expansion-panel-header> <div class="container-fluid"> <div class="row"> <div class="col-md-1">{{ i+1 }}</div> <div class="col-md-1"> <div class="post-image"> <img [src]="imageUrl+team.imagePath" [alt]="team.name" style> </div> </div> <div class="col-md-10"> {{ team.name }} </div> </div> </div> </mat-expansion-panel-header> effects@Effect() // If you do not want to dispatch any actions, if need to modify store state then remove teamList = this.actions$.pipe( ofType(TeamActions.GET_TEAMS), map((action: TeamActions.GetTeams) => { return action.payload; }), switchMap((params: {search: string, limit: number, skip: number, sortBy: string}) => { return this.httpClient.get<Team[]>( `${BACKEND_URL}?search=${params.search}&&limit=${params.limit}&&skip=${params.skip}&&sortBy=${params.sortBy}`); }), map((teams: Team[]) => { return { type: TeamActions.SET_TEAMS, payload: teams }; }), catchError((err, caught) => { // console.log(err.error.errors); this.snackBarService.showSnackBar('Unable to Get Teams', true); return caught; }) ); Currently, during the first load, the dispatching action is not completed yet when I select items from the store, so the result is empty.
Now provide the response and nothing else.
You can't; a dispatch is fire-and-forget, so there is nothing to wait on. Luckily this isn't needed, because this.store.select('teams') is an observable. This means that if the state changes, the observable will emit a new value, and this will cause your component to re-render. If the list stays empty, you can check whether your state is in fact updated; this can be done with @ngrx/store-devtools . If the state is updated but it doesn't show in the component, make sure you don't modify the state directly, but that you're creating a new reference to the array.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55354139', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1311740/']}
jdg_369992
stackexchange
llm_judgeable_groundtruth_similarity
1847
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to implement a singing game that will analyse raw mic input and tell the player how well he is singing. That needs to be done in real time. I've come across a lot of threads asking the same question, but I'm still not quite done with it, probably due to my lack of experience in the field and shallow math background. I've implemented an algorithm based on the DSPDimension website's pitch-shifting article: http://www.dspdimension.com/admin/pitch-shifting-using-the-ft/ I extract the true frequency and magnitude just like the article explains, but I don't know how to find the fundamental frequency with this. I've tried to get the bin with the greatest magnitude, but that only gives me right results for higher-pitched signals; no matter which oversampling factor I use, I still get bad data for low-frequency signals. Is this approach completely wrong, or am I on the right track but just missing something? Thanks in advance. EDIT: I forgot to mention that I'm only interested in the pitch class, so it is ok if the fundamental is missing but I have a strong overtone in the sample. EDIT2: Thanks to everyone, I just finished a version of the algorithm that's working like a charm. The low pitch estimation problem was due to my input test. When I sang the note it matched correctly. Also, I'm considering all harmonics now, not just the highest peak. Now provide the response and nothing else.
I've tried to get the bin with greatest magnitude but that only give me right results for higher pitch signals, it doesn't matter which oversampling factor I use I still get bad data for low freq signals. That's because the harmonics are larger than the fundamental. Plot your spectrum and you'll see. A better method to find the true fundamental is autocorrelation. Then you're "sliding" the waveform past itself and finding delays at which the wave shape lines up with itself. http://www.phon.ucl.ac.uk/courses/spsci/matlab/lect10.html Do you really want them to sing the exact note, or is it ok if they sing an octave above or below depending on their voice register?
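To make that concrete, here is a minimal autocorrelation sketch in Python (illustrative only; a real-time implementation would add windowing, peak interpolation, and voicing detection, and would typically compute the autocorrelation via an FFT for speed):

```python
import math

def estimate_pitch(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency by autocorrelation.

    Slides the waveform past itself and picks the lag (within the allowed
    frequency range) where the wave shape best lines up with itself.
    """
    # Remove DC offset so the correlation isn't dominated by a constant term.
    mean = sum(samples) / len(samples)
    x = [s - mean for s in samples]
    n = len(x)
    best_lag, best_val = 1, -float("inf")
    # Only search lags corresponding to plausible fundamentals.
    for lag in range(int(sample_rate / fmax), int(sample_rate / fmin) + 1):
        val = sum(x[i] * x[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return sample_rate / best_lag

# A 110 Hz tone whose 2nd and 3rd harmonics are louder than the fundamental:
# picking the biggest spectral bin would report 220 Hz, autocorrelation won't.
sr = 8000
x = [0.3 * math.sin(2 * math.pi * 110 * i / sr)
     + 1.0 * math.sin(2 * math.pi * 220 * i / sr)
     + 0.8 * math.sin(2 * math.pi * 330 * i / sr)
     for i in range(800)]  # 0.1 s of signal
print(round(estimate_pitch(x, sr)))  # lands close to 110
```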
{}
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/1847', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/1145/']}
jdg_369993
stackexchange
llm_judgeable_groundtruth_similarity
65950521
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to use a SearchView to filter my RecyclerView . On Stack Overflow and other sites I've found only examples of using Filterable with Java and with RecyclerView.Adapter , while I'm using ListAdapter . So I was trying to make the custom filter by myself, but when I try to filter the adapter I just get a null on my MutableList in publishResults . My Adapter code looks like this: class ArticoliListAdapter : ListAdapter<Articolo, ArticoliListAdapter.ArticoliViewHolder>(ArticoliComparator()), Filterable { private val list = mutableListOf<Articolo>() override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ArticoliViewHolder { return ArticoliViewHolder.create(parent) } override fun onBindViewHolder(holder: ArticoliViewHolder, position: Int) { val current = getItem(position) holder.bind(current) } override fun getItemId(position: Int): Long { val articolo = currentList[position] return articolo.barcode.hashCode().toLong() } class ArticoliViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) { private val barcode: TextView = itemView.findViewById(R.id.barcode) private val qta: TextView = itemView.findViewById(R.id.qta) private val desc: TextView = itemView.findViewById(R.id.desc) private val um: TextView = itemView.findViewById(R.id.um) fun bind(articolo: Articolo?)
{ barcode.text = articolo?.barcode qta.text = articolo?.qta?.formatForQta() um.text = articolo?.um?.toLowerCase(Locale.ITALIAN) desc.text = if(articolo?.desc.isNullOrEmpty()) "-" else articolo?.desc } private fun Float.formatForQta(): String { val floatString = this.toString() val decimalString: String = floatString.substring(floatString.indexOf('.') + 1, floatString.length) return when (decimalString.toInt() == 0) { true -> this.toInt().toString() false -> "%.3f".format(this) } } companion object { fun create(parent: ViewGroup): ArticoliViewHolder { val view: View = LayoutInflater.from(parent.context) .inflate(R.layout.item_layout, parent, false) return ArticoliViewHolder(view) } } } class ArticoliComparator : DiffUtil.ItemCallback<Articolo>() { override fun areItemsTheSame(oldItem: Articolo, newItem: Articolo): Boolean { return oldItem === newItem } override fun areContentsTheSame(oldItem: Articolo, newItem: Articolo): Boolean { return oldItem.qta == newItem.qta } } override fun getFilter(): Filter { return customFilter } private val customFilter = object: Filter() { override fun performFiltering(constraint: CharSequence?): FilterResults { val filteredList = mutableListOf<Articolo>() if (constraint == null || constraint.isEmpty()){ filteredList.addAll(currentList) }else { val filterPattern = constraint.toString().toLowerCase(Locale.ITALIAN).trim { it <= ' ' } for (item in currentList) { if (item.barcode.toLowerCase(Locale.ITALIAN).contains(filterPattern) || item.desc?.toLowerCase( Locale.ITALIAN )!!.contains(filterPattern)) { filteredList.add(item) } } } val results = FilterResults() results.values = filteredList return results } override fun publishResults(constraint: CharSequence?, filterResults: FilterResults?) { list.clear() list.addAll(filterResults?.values as MutableList<Articolo>) notifyDataSetChanged() } }} So I was wondering which would be the right way to build a custom filter to filter my data in the RecyclerView when using ListAdapter in Kotlin.
I'm calling the filter in my fragment like this: override fun onQueryTextChange(query: String?): Boolean { adapter.filter.filter(query) return false } But when I try to filter, nothing happens and all items are still shown... Data is set to the RecyclerView adapter from my ViewModel, and the data is fetched from the database ( LiveData<List<Articolo>> ). Here is the code from my Fragment: articoliViewModel.articoli.observe(viewLifecycleOwner) { articoli -> articoli.let { adapter.submitList(it) } } Now provide the response and nothing else.
A few flaws in your code, which I am listing below. currentList holds the items currently shown in the list, not the complete list of items; i.e., if you have 10 items and after filtering you get 3 items, then currentList will hold 3 items, not 10. So you cannot use currentList for filtering the list; instead, hold on to the complete list and apply the filter to that one. You should not be calling notifyDataSetChanged(); this just defeats the whole purpose of having DiffUtils . Instead, call #submitList . Although you have a reference to the complete list as a member variable, you never assign a value to it, so it is always empty. I have made a working sample to illustrate; please try the same with your code. I am adding the essential code below. I have used String as the type just to make the sample easy to understand; you can use your custom object. You can also modify the code to make it look better, but I think it's enough to get the idea of how ListAdapter works. class ArticoliListAdapter : ListAdapter<String, ArticoliListAdapter.ArticoliViewHolder>(ArticoliComparator()), Filterable { private var list = mutableListOf<String>() override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ArticoliViewHolder { return ArticoliViewHolder.create(parent) } override fun onBindViewHolder(holder: ArticoliViewHolder, position: Int) { val current = getItem(position) holder.bind(current) } fun setData(list: MutableList<String>?){ this.list = list!!
submitList(list) } class ArticoliViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) { private val desc: TextView = itemView.findViewById(R.id.txtName) fun bind(name: String) { desc.text = name.toUpperCase() } companion object { fun create(parent: ViewGroup): ArticoliViewHolder { val view: View = LayoutInflater.from(parent.context) .inflate(R.layout.item_list, parent, false) return ArticoliViewHolder(view) } } } class ArticoliComparator : DiffUtil.ItemCallback<String>() { override fun areItemsTheSame(oldItem: String, newItem: String): Boolean { return oldItem === newItem } override fun areContentsTheSame(oldItem: String, newItem: String): Boolean { return oldItem == newItem } } override fun getFilter(): Filter { return customFilter } private val customFilter = object : Filter() { override fun performFiltering(constraint: CharSequence?): FilterResults { val filteredList = mutableListOf<String>() if (constraint == null || constraint.isEmpty()) { filteredList.addAll(list) } else { for (item in list) { if (item.toLowerCase().startsWith(constraint.toString().toLowerCase())) { filteredList.add(item) } } } val results = FilterResults() results.values = filteredList return results } override fun publishResults(constraint: CharSequence?, filterResults: FilterResults?) { submitList(filterResults?.values as MutableList<String>) } }} When you set data to the adapter, call setData , not submitList : articoliViewModel.articoli.observe(viewLifecycleOwner) { articoli -> articoli.let { adapter.setData(it) }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65950521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9945307/']}
jdg_369994
stackexchange
llm_judgeable_groundtruth_similarity
429942
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In this paper Bell derives his famous inequality using the assumptions of locality and realism. In order to understand how the locality assumption affects the derivation of the inequality, and why it is needed for the inequality, I have attempted to re-derive the inequality, first assuming locality and then a second time assuming nonlocality, to see what the difference is. However, my derivations seem to say that there is no difference, which implies that nonlocality cannot be concluded from a Bell test, which is wrong (or some other, smarter, mathematician would have pointed it out by now). Where am I making my mistake(s)? Note : I know there are other similar questions regarding nonlocality in the CHSH inequality . I have read them and I don't see their application to this (the original) form of Bell's inequality (they use different mathematical formalism and expressions which I do not see appear in Bell's original derivation). The system is a pair of entangled particles. Let $A = \pm 1$ be the result of Alice's measurement of one particle's spin, and let $B = \pm 1$ be the result of Bob's measurement of the other's. Let $\mathbf{\alpha}$ and $\mathbf{\beta}$ be unit vectors representing Alice and Bob's measurement directions respectively. Let $\lambda$ represent a set of any number of hidden variables and $\rho = \rho(\lambda)$ the normalized probability distribution of $\lambda$ . As far as I can tell, the locality assumption amounts to assuming that $A = A(\mathbf{\alpha}, \lambda) \neq A(\mathbf{\alpha}, \mathbf{\beta}, \lambda)$ , or that $A$ is independent of $\mathbf{\beta}$ , and likewise for $B$ and $\mathbf{\alpha}$ (this may be my mistake if there is more to it than this). Local derivation : $A(\mathbf{\alpha}, \lambda) = \pm 1$ , $B(\mathbf{\beta}, \lambda) = \pm 1$ .
The expectation value of $AB$ is \begin{equation} P(\mathbf{\alpha}, \mathbf{\beta}) = \int \rho A(\mathbf{\alpha}, \lambda) B(\mathbf{\beta}, \lambda)\, d \lambda.\end{equation} For a given measurement direction $\mathbf{a}$ , \begin{equation} P(\mathbf{a}, \mathbf{a}) = \int \rho A(\mathbf{a}, \lambda) B(\mathbf{a}, \lambda)\, d \lambda = -1 \implies A(\mathbf{a}, \lambda) = -B(\mathbf{a}, \lambda).\end{equation} $P(\mathbf{a}, \mathbf{a}) = -1$ implies that the particles are anticorrelated, and so by rewriting the expectation value of $A B$ as \begin{equation} P(\mathbf{\alpha}, \mathbf{\beta}) = -\int \rho A(\mathbf{\alpha}, \lambda) A(\mathbf{\beta}, \lambda)\, d \lambda \tag{1}\end{equation} (in other words, by assuming $A(\mathbf{\beta}, \lambda) = -B(\mathbf{\beta}, \lambda)$ is always valid) we mathematically represent the assumption that the state of our two-particle system is restricted to a maximally anticorrelated state ( $| \Psi^\pm \rangle$ ). Using this last expression, we get (for some unit vectors $\mathbf{a}$ , $\mathbf{b}$ , and $\mathbf{c}$ ) \begin{align} P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c}) =& -\int \rho \Big( A(\mathbf{a}, \lambda)A(\mathbf{b}, \lambda) - A(\mathbf{a}, \lambda) A(\mathbf{c}, \lambda) \Big) d\lambda \\ =& -\int \rho A(\mathbf{a}, \lambda)A(\mathbf{b}, \lambda) \Big( 1 - \frac{A(\mathbf{a}, \lambda) A(\mathbf{c}, \lambda)}{A(\mathbf{a}, \lambda)A(\mathbf{b}, \lambda)} \Big) d\lambda \\ =& \int \rho A(\mathbf{a}, \lambda)A(\mathbf{b}, \lambda) \Big( A(\mathbf{b}, \lambda) A(\mathbf{c}, \lambda) - 1 \Big) d\lambda,\end{align} \begin{equation} |P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c})| \leq \int \rho \Big( 1 - A(\mathbf{b}, \lambda)A(\mathbf{c}, \lambda) \Big) d\lambda = 1 + P(\mathbf{b}, \mathbf{c}),\end{equation} \begin{equation} |P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c})| - P(\mathbf{b}, \mathbf{c}) \leq 1.\end{equation} Nonlocal derivation : $A(\mathbf{\alpha}, \mathbf{\beta},
\lambda) = \pm 1$ , $B(\mathbf{\beta}, \mathbf{\alpha}, \lambda) = \pm 1$ . The expectation value of $AB$ is \begin{equation} P(\mathbf{\alpha}, \mathbf{\beta}) = \int \rho A(\mathbf{\alpha}, \mathbf{\beta}, \lambda) B(\mathbf{\beta}, \mathbf{\alpha}, \lambda)\, d\lambda.\end{equation} \begin{equation} P(\mathbf{a}, \mathbf{a}) = \int \rho A(\mathbf{a}, \mathbf{a}, \lambda) B(\mathbf{a}, \mathbf{a}, \lambda)\, d\lambda = -1 \implies A(\mathbf{a}, \mathbf{a}, \lambda) = -B(\mathbf{a}, \mathbf{a}, \lambda),\end{equation} \begin{equation} P(\mathbf{\alpha}, \mathbf{\beta}) = -\int \rho A(\mathbf{\alpha}, \mathbf{\beta}, \lambda) A(\mathbf{\beta}, \mathbf{\alpha}, \lambda)\, d\lambda, \tag{2}\end{equation} \begin{align} P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c}) =& -\int \rho \Big( A(\mathbf{a}, \mathbf{b}, \lambda) A(\mathbf{b}, \mathbf{a}, \lambda) - A(\mathbf{a}, \mathbf{c}, \lambda) A(\mathbf{c}, \mathbf{a}, \lambda) \Big) d\lambda \\ =& -\int \rho A(\mathbf{a}, \mathbf{b}, \lambda) A(\mathbf{b}, \mathbf{a}, \lambda) \Big( 1 - \frac{A(\mathbf{a}, \mathbf{c}, \lambda) A(\mathbf{c}, \mathbf{a}, \lambda)}{A(\mathbf{a}, \mathbf{b}, \lambda) A(\mathbf{b}, \mathbf{a}, \lambda)} \Big) d\lambda,\end{align} \begin{equation} |P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c})| \leq 1 - \int \rho \big( A(\mathbf{a}, \mathbf{c}, \lambda) A(\mathbf{c}, \mathbf{a}, \lambda) A(\mathbf{a}, \mathbf{b}, \lambda) A(\mathbf{b}, \mathbf{a}, \lambda) \big) d\lambda,\end{equation} \begin{equation} |P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c})| + \int \rho \big( A(\mathbf{a}, \mathbf{c}, \lambda) A(\mathbf{c}, \mathbf{a}, \lambda) A(\mathbf{a}, \mathbf{b}, \lambda) A(\mathbf{b}, \mathbf{a}, \lambda) \big) d\lambda \leq 1.\end{equation} Question : My result is of the same form as Bell's, but I cannot simplify the third term on the left to $P(\mathbf{b}, \mathbf{c})$ , so the third term retains its nonlocal dependence on $A$ 's second argument. 
Despite this, both $\int \rho \big( A(\mathbf{a}, \mathbf{c}, \lambda) A(\mathbf{c}, \mathbf{a}, \lambda) A(\mathbf{a}, \mathbf{b}, \lambda) A(\mathbf{b}, \mathbf{a}, \lambda) \big) d\lambda$ and $P(\mathbf{b}, \mathbf{c})$ are restricted to the range $-1 \leq x \leq 1$ , so both inequalities should lead to the same experimental conclusions regarding local realism. So what difference does the locality assumption make? What assumption am I misrepresenting? Or what other mistake am I making? Now provide the response and nothing else.
In my derivation, I make my error at equation $(2)$ , attempting to extend the logic employed by Bell in arriving at equation $(1)$ . Bell's local derivation uses the assumption that the system being observed is in an anticorrelated state to obtain the equality \begin{equation} A(\mathbf{a}, \lambda) = -B(\mathbf{a}, \lambda),\end{equation} in which $\mathbf{a}$ represents a specific choice of measurement angle. However, there is no dependence on another angle $\mathbf{b}$ in the above, and so it is just as general as writing the equality \begin{equation} A(\mathbf{\beta}, \lambda) = -B(\mathbf{\beta}, \lambda).\end{equation} This allows us to obtain expression $(1)$ : \begin{equation} P(\mathbf{\alpha}, \mathbf{\beta}) = -\int \rho A(\mathbf{\alpha}, \lambda) A(\mathbf{\beta}, \lambda)\, d\lambda.\end{equation} In the nonlocal derivation, however, $A = A(\mathbf{\alpha}, \mathbf{\beta}, \lambda)$ and $B = B(\mathbf{\beta}, \mathbf{\alpha}, \lambda)$ have nonlocal dependence on two angles, not just one. The assumption of the singlet state gives us \begin{equation} A(\mathbf{a}, \mathbf{a}, \lambda) = -B(\mathbf{a}, \mathbf{a}, \lambda).\end{equation} In the above, $A$ and $-B$ are equal when Alice and Bob choose the same measurement angle, or when $\mathbf{\alpha} = \mathbf{\beta}$ , and so the above can be written \begin{equation} A(\mathbf{\beta}, \mathbf{\beta}, \lambda) = -B(\mathbf{\beta}, \mathbf{\beta}, \lambda) \neq B(\mathbf{\beta}, \mathbf{\alpha}, \lambda).\end{equation} It is important to note that, because $A$ and $B$ depend on two angles, the relationship above is only true when the two angles are the same.
In the expression $P(\mathbf{\alpha}, \mathbf{\beta}) = \int \rho A(\mathbf{\alpha}, \mathbf{\beta}, \lambda) B(\mathbf{\beta}, \mathbf{\alpha}, \lambda)\, d\lambda$ , $-A(\mathbf{\beta}, \mathbf{\beta}, \lambda)$ cannot be substituted to obtain expression $(2)$ : \begin{equation} P(\mathbf{\alpha}, \mathbf{\beta}) = \int \rho A(\mathbf{\alpha}, \mathbf{\beta}, \lambda) B(\mathbf{\beta}, \mathbf{\alpha}, \lambda)\, d\lambda \neq -\int \rho A(\mathbf{\alpha}, \mathbf{\beta}, \lambda) A(\mathbf{\beta}, \mathbf{\beta}, \lambda)\, d\lambda.\end{equation} This inability to rewrite $P(\mathbf{\alpha}, \mathbf{\beta})$ for the singlet state halts the nonlocal derivation if attempting to apply the same steps as Bell in his local derivation. Side note: This does not prove that another approach could not render a Bell's inequality with the assumption of nonlocality, but proving that was not my purpose.
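As a numerical aside (my own illustration, not part of the derivations above), a small Python sketch makes the role of locality concrete. The local response functions below are hypothetical, chosen only for illustration; any functions of the form $A(\mathbf{\alpha}, \lambda)$, $B(\mathbf{\beta}, \lambda)$ must respect Bell's original bound $|P(\mathbf{a}, \mathbf{b}) - P(\mathbf{a}, \mathbf{c})| \leq 1 + P(\mathbf{b}, \mathbf{c})$ (this particular model in fact saturates it for coplanar settings), while the singlet prediction $P(x, y) = -\cos(x - y)$ violates it at the angles shown:

```python
import math
import random

def local_P(alpha, beta, n=40000, seed=1):
    """Monte Carlo correlation E[A*B] for a toy *local* model:
    lambda is a random angle, each side's outcome depends only
    on its own setting and lambda (hypothetical response functions)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(alpha - lam) >= 0 else -1   # A(alpha, lambda)
        B = -1 if math.cos(beta - lam) >= 0 else 1    # B(beta, lambda) = -A(beta, lambda)
        total += A * B
    return total / n

def quantum_P(x, y):
    return -math.cos(x - y)   # singlet-state prediction

a, b, c = 0.0, math.pi / 4, math.pi / 2

# Bell's original inequality: |P(a,b) - P(a,c)| <= 1 + P(b,c)
for name, P in (("local model", local_P), ("quantum", quantum_P)):
    lhs = abs(P(a, b) - P(a, c))
    rhs = 1 + P(b, c)
    print(f"{name:12s} |P(a,b)-P(a,c)| = {lhs:.3f}   1 + P(b,c) = {rhs:.3f}")
```

At these angles the quantum side gives $|{-\tfrac{1}{\sqrt 2}} - 0| \approx 0.707$ against a bound of $1 - \tfrac{1}{\sqrt 2} \approx 0.293$, a clear violation that no local model can reproduce.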
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/429942', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/135935/']}
jdg_369995
stackexchange
llm_judgeable_groundtruth_similarity
5333
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The modular group is the group $G$ consisting of all linear fractional transformations $\phi$ of the form$$\phi(z)=\frac{az+b}{cz+d}$$where $a,b,c,d$ are integers and $ad-bc=1$. I have read that $G$ is generated by the transformations $\tau(z)=z+1$ and $\sigma(z)=-1/z$. Is there an easy way to prove this? In particular, is there a proof that uses the relation between linear fractional transformations and matrices? Any good reference would be helpful. Thank you, Malik Now provide the response and nothing else.
Yes; this statement is essentially equivalent to the Euclidean algorithm. I discuss these issues in this old blog post . (A very brief sketch: by applying the generators and the inverses to an arbitrary element of the modular group it is possible to perform the Euclidean algorithm on $a$ and $c$ (or maybe it's $a$ and $b$). The rest is casework.) You can think of this as a form of row reduction, which is generalized by the notion of Smith normal form . There is also a geometric proof using the action on the upper half plane which is given, for example, in the relevant section of Serre's Course in Arithmetic.
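To make the Euclidean-algorithm remark concrete, here is a rough Python sketch (my own illustration; the names and bookkeeping are my choices): it runs the Euclidean algorithm on the first column of a matrix in $SL(2,\mathbb{Z})$, peeling off powers of $T$ and copies of $S$, and returns a word that reproduces the matrix up to an overall sign, i.e. an identity in $PSL(2,\mathbb{Z})$, where $S^2 = 1$:

```python
S = [[0, -1], [1, 0]]          # z -> -1/z
def T(n):                      # z -> z + n, i.e. T^n
    return [[1, n], [0, 1]]

def mul(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def decompose(M):
    """Word of ('T', n) / ('S',) factors whose product is M up to sign."""
    (a, b), (c, d) = M
    word = []
    while c != 0:
        q = a // c                     # Euclidean step on the first column
        a, b = a - q * c, b - q * d    # left-multiply by T^-q
        a, b, c, d = -c, -d, a, b      # left-multiply by S
        word += [('T', q), ('S',)]     # so M carried T^q * S^-1 on the left
    word.append(('T', a * b))          # what's left is +-T^(a*b), with a = +-1
    return word

def evaluate(word):
    R = [[1, 0], [0, 1]]
    for g in word:
        R = mul(R, S if g[0] == 'S' else T(g[1]))
    return R

M = [[2, 1], [5, 3]]                   # det = 2*3 - 1*5 = 1
w = decompose(M)
print(w)
print(evaluate(w))                     # equals M up to an overall sign
```

The loop terminates because each step replaces the lower-left entry by a remainder of strictly smaller absolute value, which is exactly the Euclidean algorithm mentioned above.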
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/5333', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1197/']}
jdg_369996
stackexchange
llm_judgeable_groundtruth_similarity
2078200
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi there, I'm using a segmented control on a view. With the help of this segmented control I would like to display two different tables on my view. Suppose I have two segments: on tap of segment 1 I would like to display table 1, and on tap of segment 2 I would like to display table 2. My table 1 is a plain table and table 2 is a grouped table. Apple uses this approach to display different apps in different categories on the App Store, but I am not sure how to do that. Please suggest an approach; any code sample for the same will also be appreciated. Thanks Sandy Now provide the response and nothing else.
We do this by having a single tableview, and then doing an if/case statement in each tableview callback method to return the right data based on which value is selected in the segmented control. First, add the segmentedControl to the titleView, and set a callback function for when it is changed (note that alloc/init already returns a retained object under manual reference counting, so no extra retain is needed): - (void) addSegmentedControl { NSArray * segmentItems = [NSArray arrayWithObjects: @"One", @"Two", nil]; segmentedControl = [[UISegmentedControl alloc] initWithItems: segmentItems]; segmentedControl.segmentedControlStyle = UISegmentedControlStyleBar; segmentedControl.selectedSegmentIndex = 0; [segmentedControl addTarget: self action: @selector(onSegmentedControlChanged:) forControlEvents: UIControlEventValueChanged]; self.navigationItem.titleView = segmentedControl;} Next, when the segmented control is changed, you need to load the data for the new segment, and reset the table view to show this data: - (void) onSegmentedControlChanged:(UISegmentedControl *) sender { // lazy load data for a segment choice (write this based on your data) [self loadSegmentData:segmentedControl.selectedSegmentIndex]; // reload data based on the new index [self.tableView reloadData]; // reset the scrolling to the top of the table view if ([self tableView:self.tableView numberOfRowsInSection:0] > 0) { NSIndexPath *topIndexPath = [NSIndexPath indexPathForRow:0 inSection:0]; [self.tableView scrollToRowAtIndexPath:topIndexPath atScrollPosition:UITableViewScrollPositionTop animated:NO]; }} Then in your tableView callbacks, you need to have logic per segment value to return the right thing.
I'll show you one callback as an example, but implement the rest like this: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"GenericCell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[NSBundle mainBundle] loadNibNamed:@"GenericCell" owner:self options:nil] objectAtIndex: 0]; } if (segmentedControl.selectedSegmentIndex == 0) { cell.textLabel.text = @"One"; } else if (segmentedControl.selectedSegmentIndex == 1) { cell.textLabel.text = @"Two"; } return cell;} That's about it, hope it helps.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2078200', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/188517/']}
jdg_369997
stackexchange
llm_judgeable_groundtruth_similarity
25020178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two tables: countries and regions. CREATE TABLE Countries( id SERIAL, name VARCHAR(40) NOT NULL, PRIMARY KEY(id)); CREATE TABLE Regions( id SERIAL, countryId SERIAL, name VARCHAR(40) NOT NULL, PRIMARY KEY(id ), FOREIGN KEY(countryId) REFERENCES Countries(id)) When I insert into regions, I would hope that if I fail to mention countryId , I would be stopped; however, countryId is automatically incremented. Is there any way I can be stopped automatically from making this insertion? The following table, where I set countryId as SERIAL NOT NULL , doesn't solve the issue. CREATE TABLE Pigeons( id SERIAL, countryId SERIAL NOT NULL, name VARCHAR(40) NOT NULL, PRIMARY KEY(id ), FOREIGN KEY(countryId) REFERENCES Countries(id)) The following solves the problem, but I would think that it's technically incorrect because my serial could be bigger than 2^31, but int will never be >= 2^31. CREATE TABLE Legions( id SERIAL, countryId INT NOT NULL, name VARCHAR(40) NOT NULL, PRIMARY KEY(id ), FOREIGN KEY(countryId) REFERENCES Countries(id)) What's the right approach here? Now provide the response and nothing else.
I would suggest: CREATE TABLE country( country_id serial PRIMARY KEY , country text NOT NULL);CREATE TABLE region( region_id serial PRIMARY KEY , country_id int NOT NULL REFERENCES country , region text NOT NULL); Don't use CaMeL case names. Read the manual about identifiers. Use proper names. Never id or name , which are not descriptive. The underlying type of a serial is integer . Make the referencing column integer . Due to the foreign key reference region.country_id can only hold values that are present in country.country_id (or NULL ). Your considerations about values > 2^31 are uncalled for. Demonstrating shorter syntax for PK and FK definition (optional). Read the manual on CREATE TABLE . Code example with more advice: How to implement a many-to-many relationship in PostgreSQL?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25020178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/247077/']}
jdg_369998
stackexchange
llm_judgeable_groundtruth_similarity
37521028
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the example below, I have 3 different syntax/mechanisms for defining inheritance. All of them work. Can someone tell me which one is preferred and why (yes, I know "there is more than ..."). Also, why do I need "use WB" in 1 case and not the other 2. TIA The main - #!/usr/local/bin/perl -w#######################use strict;use River;use Ocean;use Lake;my $pName = $0; $pName =~ s/.*\///; $pName =~ s/\.\w+$//;my @sources = (Ocean->new ('Pacific', 1), River->new ('Brazos', 0), Lake->new ('Tahoe', 0) );foreach (@sources) { $_->printIfSaline ($pName);} The modules (Lake, River and Ocean inherit from WaterBody): ####################### File: Lake.pm######################package Lake;use strict;use WaterBody;our @ISA = ('WaterBody');sub new { my $class = shift; $class->SUPER::new(@_)}sub printIfSaline { my ($self, $prompt) = @_; my $name = $self->getName(); my $taste = $self->isSaline() ? "salty" : "sweet"; print "$prompt: Lake $name has $taste water\n";}1####################### File: Ocean.pm######################package Ocean;use strict;use parent 'WaterBody';sub new { my $class = shift; $class->SUPER::new(@_);}sub printIfSaline { my ($self, $prompt) = @_; my $name = $self->getName; my $taste = $self->SUPER::isSaline() ? "salty" : "sweet"; print "$prompt: $name Ocean has $taste water\n";}1####################### File: River.pm######################package River;use strict;use base 'WaterBody';sub new { my $class = shift; $class->SUPER::new(@_);}sub printIfSaline { my ($self, $prompt) = @_; my $name = $self->getName; my $taste = $self->isSaline ? 
"salty" : "sweet"; print "$prompt: $name river has $taste water\n";}1####################### File: WaterBody.pm######################package WaterBody;sub new { my $class = shift; my $self = {}; $self->{name} = shift; $self->{saline} = shift; bless $self, $class; return $self;}sub getName { my ($self) = @_; $self->{name}}sub isSaline { my ($self) = @_; $self->{saline}}1 Now provide the response and nothing else.
The use parent pragma sets up the @ISA at compile time. From parent Establish an ISA relationship with base classes at compile time When you use ParentClass; and then manually set up @ISA that happens at run time. In this case code in BEGIN , CHECK , or INIT blocks won't be able to use the inheritance hierarchy without extra work. The use base pragma is older and parent is recommended in docs. From base Unless you are using the fields pragma, consider this module discouraged in favor of the lighter-weight parent Thus I'd say go with use parent .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37521028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/580796/']}
jdg_369999
stackexchange
llm_judgeable_groundtruth_similarity
20576229
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to Haskell and I need to define an empty Data.Map , assign a "list of integers" (e.g. [1,2,3]) to its keys by using the insert function, and also update the values. Then I want to look up the key values. What I have tried so far is: import qualified Data.Map foo num = let my_map = Data.Map.empty new_map = bar my_map num 1 in Data.Map.lookup 1 new_map bar my_map num c = if c > num then my_map else Data.Map.insert c [c] my_map bar my_map num c+1 This code doesn't work. Could you give a simple example please? Now provide the response and nothing else.
People normally import the Data.Map module with this boilerplate:

import Data.Map (Map)
import qualified Data.Map as Map

The idea is that since many of the names in the module clash with the Prelude and other modules, you want to use them as qualified names—but not for the Map type itself. And the as Map bit in the second line saves you from having to type as much—you just say Map.map , Map.empty , etc. Now, the easiest and most common way of constructing a map is to use the fromList function in the module. This constructs a Map from a list of key/value pairs: Map.fromList :: Ord k => [(k, v)] -> Map k v . To construct this list of key/value pairs you can use the full power of Haskell's list processing functions, like in this example:

myMap :: Integer -> Map Integer [Integer]
myMap n = Map.fromList (map makePair [1..n])
  where makePair x = (x, [x])

Example output in GHCI:

>>> myMap 3
fromList [(1,[1]),(2,[2]),(3,[3])]

Note that the Map type even prints itself as a fromList call that would reconstruct it. Why? Because again, this function really is the most common way to build a Map . In contrast, what you're doing in your code is you're trying to write an imperative-style loop that successively augments an initial empty map with entries one at a time. The Haskell equivalent of loops is list functions. In my version I used the following:

[1..n] —generate a list of the integers from 1 up to n .
map —apply a function to each element of the list.
Map.fromList —build a Map from a list of key/value pairs.

And to further demonstrate that point, if you look at the source code for Map.fromList , it's actually defined using a list fold function. My advice to you: study lists and the Data.List module first before you tackle Map . In particular:

Learn what functions are available there and what they do.
Study the foldr function from that module—how to use it, and how to write it.
Learn how to write your own versions of map , filter and find in terms of foldr .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20576229', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2207517/']}
jdg_370000
stackexchange
llm_judgeable_groundtruth_similarity
423958
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The question " What is the purpose of .bashrc and how does it work? " sought the purpose and use of .bashrc . Another file with a similar name is .bash_logout . Should this file exist in the first place? If so, what is the role of this file? Now provide the response and nothing else.
The .bash_logout file does not have to exist. Its contents are sourced by bash when a bash login shell exits. The file makes it possible to do, for example, various forms of cleanup when logging out from a terminal session. It may be used to execute any shell code; for example, to clear the screen if logins are done in a non-GUI environment. Some may also find it useful for explicitly terminating programs that were started from .bash_login or .bash_profile (if, for example, fetchmail or some similar process is started as a user daemon or in the background, it may be good to terminate it in .bash_logout ). The csh shell has a similar file called .logout and the corresponding file for the zsh shell is called .zlogout . The ksh shell has, to my knowledge, no similar functionality. See also the tangentially related question Difference between Login Shell and Non-Login Shell?
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/423958', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/147432/']}
jdg_370001
stackexchange
llm_judgeable_groundtruth_similarity
43556752
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Executing it in the browser console it says SyntaxError: Unexpected token ** . Trying it in node:

> -1**2
...
...
...
...
^C

I thought this is an arithmetic expression where ** is the power operator. There is no such issue with other operators. Strangely, typing */ on the second line triggers the execution:

> -1**2
... */
-1**2
  ^^
SyntaxError: Unexpected token **

What is happening here? Now provide the response and nothing else.
Executing it in the browser console says SyntaxError: Unexpected token **. Because that's the spec . Designed that way to avoid confusion about whether it's the square of the negation of one (i.e. (-1) ** 2 ), or the negation of the square of one (i.e. -(1 ** 2) ). This design was the result of extensive discussion of operator precedence, and examination of how this is handled in other languages, and finally the decision was made to avoid unexpected behavior by making this a syntax error.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43556752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/446220/']}
jdg_370002
stackexchange
llm_judgeable_groundtruth_similarity
43841554
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new at Java. I have installed my JDK on C:\Java\jdk1.8.0_131 . When I am in the C:\Java\jdk1.8.0_131 this directory and execute my java program, it is ok. It runs. But when I am in the different directory it does not run. This error appears:

'javac' is not recognized as an internal or external command,
operable program or batch file.

May be I added env variables to path not correctly? How can I solve it? Now provide the response and nothing else.
Your question mentions headerRowRenderer but I think you might actually be asking about how to render a custom header cell based on the rest of your statement. Anyway, I'll show both.

// This is a custom header row renderer.
// You should use all of the specified params,
// but you can also add your own decorated behavior.
const headerRowRenderer = ({
  className,
  columns,
  style
}) => (
  <div
    className={className}
    role='row'
    style={style}
  >
    {columns}
  </div>
)

// This is a custom header example for a single cell.
// You have access to all of the named params,
// but you don't necessarily need to use them all.
const headerRenderer = ({
  columnData,
  dataKey,
  disableSort,
  label,
  sortBy,
  sortDirection
}) => (
  <div>#</div>
)

const renderTable = (props) => (
  <Table
    {...props}
    headerRowRenderer={headerRowRenderer}
  >
    <Column
      dataKey='number'
      headerRenderer={headerRenderer}
      width={100}
    />
    <Column
      dataKey='name'
      label='Name'
      width={200}
    />
  </Table>
)

Here's a Plnkr example for you: https://plnkr.co/edit/eHr3Jr?p=preview
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43841554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7769669/']}
jdg_370003
stackexchange
llm_judgeable_groundtruth_similarity
71917856
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I declare a menu inside of Android fragment? The method that I had used previously is now deprecated. Originally:

override fun onCreateView(...): View {
    setHasOptionsMenu(true)
}

override fun onCreateOptionsMenu(menu: Menu, inflater: MenuInflater) {
    super.onCreateOptionsMenu(menu, inflater)
    this.menu = menu
    inflater.inflate(R.menu.menu, this.menu)
}

Now provide the response and nothing else.
From the Developer documentation, this can be achieved by the following:

/**
 * Using the addMenuProvider() API directly in your Activity
 **/
class ExampleActivity : ComponentActivity(R.layout.activity_example) {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Add menu items without overriding methods in the Activity
        addMenuProvider(object : MenuProvider {
            override fun onCreateMenu(menu: Menu, menuInflater: MenuInflater) {
                // Add menu items here
                menuInflater.inflate(R.menu.example_menu, menu)
            }

            override fun onMenuItemSelected(menuItem: MenuItem): Boolean {
                // Handle the menu selection
                return true
            }
        })
    }
}

/**
 * Using the addMenuProvider() API in a Fragment
 **/
class ExampleFragment : Fragment(R.layout.fragment_example) {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        // The usage of an interface lets you inject your own implementation
        val menuHost: MenuHost = requireActivity()

        // Add menu items without using the Fragment Menu APIs
        // Note how we can tie the MenuProvider to the viewLifecycleOwner
        // and an optional Lifecycle.State (here, RESUMED) to indicate when
        // the menu should be visible
        menuHost.addMenuProvider(object : MenuProvider {
            override fun onCreateMenu(menu: Menu, menuInflater: MenuInflater) {
                // Add menu items here
                menuInflater.inflate(R.menu.example_menu, menu)
            }

            override fun onMenuItemSelected(menuItem: MenuItem): Boolean {
                // Handle the menu selection
                return when (menuItem.itemId) {
                    R.id.menu_clear -> {
                        // clearCompletedTasks()
                        true
                    }
                    R.id.menu_refresh -> {
                        // loadTasks(true)
                        true
                    }
                    else -> false
                }
            }
        }, viewLifecycleOwner, Lifecycle.State.RESUMED)
    }
}

The Fragment setHasOptionsMenu() method is deprecated; use the MenuProvider API (addMenuProvider) as shown above instead.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/71917856', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14695587/']}
jdg_370004
stackexchange
llm_judgeable_groundtruth_similarity
598933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a rails application which acts differently depending on what domain it's accessed at (for example www.myapp.com will invoke differently to user.myapp.com). In production use this all works fine but my test code always sees a hostname of "www.example.com". Is there a clean way of having a test specify the hostname it's pretending to access? Now provide the response and nothing else.
Integration/Request Specs (inheriting from ActionDispatch::IntegrationTest ):

host! 'my.awesome.host'

See the docs , section 5.1 Helpers Available for Integration Tests. Alternatively, configure it globally for request specs at spec_helper.rb level:

RSpec.configure do |config|
  config.before(:each, type: :request) do
    host! 'my.awesome.host'
  end
end

Controller Specs (inheriting from ActionController::TestCase ):

@request.host = 'my.awesome.host'

See the docs , section 4.4 Instance Variables Available.

Feature Specs (through Capybara):

Capybara.default_host = 'http://my.awesome.host'

# Or to configure domain for route helpers:
default_url_options[:host] = 'my.awesome.host'

From @AminAriana's answer.

View Specs (inheriting from ActionView::TestCase ):

@request.host = 'my.awesome.host'

...or through RSpec:

controller.request.host = 'my.awesome.host'

See the rspec-rails view spec docs .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/598933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35958/']}
jdg_370005
stackexchange
llm_judgeable_groundtruth_similarity
62718295
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to extract dominant color from a image so that i can apply it as blending to other images. how can i achieve that?? In my current code i have given color manually but i want it to be generated by app.

class MyApp extends StatelessWidget {
  Color face = new HexColor("a8a8a8");

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text("Image from assets"),
        ),
        body: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Row(
              mainAxisAlignment: MainAxisAlignment.start,
              children: <Widget>[
                new Image.asset('assets/images/6.jpg',
                  color: face,
                  colorBlendMode: BlendMode.modulate,
                  fit: BoxFit.cover,
                  height: 50,
                  width: 50,
                ),
                new Image.asset('assets/images/1.jpg',
                  color: face,
                  colorBlendMode: BlendMode.modulate,
                  fit: BoxFit.cover,
                  height: 200,
                  width: 200,
                ),
              ]),
          ])),
    );
  }
}

Now provide the response and nothing else.
I found a solution using the palette_generator package. First import the library:

import 'package:palette_generator/palette_generator.dart';

(add it in the pubspec.yaml file too). The below function will return a palette:

Future<PaletteGenerator> _updatePaletteGenerator() async {
  paletteGenerator = await PaletteGenerator.fromImageProvider(
    Image.asset("assets/images/8.jfif").image,
  );
  return paletteGenerator;
}

Now we can fetch it in a FutureBuilder:

FutureBuilder<PaletteGenerator>(
  future: _updatePaletteGenerator(), // async work
  builder: (BuildContext context, AsyncSnapshot<PaletteGenerator> snapshot) {
    switch (snapshot.connectionState) {
      case ConnectionState.waiting:
        return Center(child: CircularProgressIndicator());
      default:
        if (snapshot.hasError)
          return new Text('Error: ${snapshot.error}');
        else {
          // Color color = new Color(snapshot.data.dominantColor.color);
          face = snapshot.data.dominantColor.color;
          return new Text('color: ${face.toString()}');
        }
    }
  })

This is how we can fetch the dominant color easily.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62718295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13712583/']}
jdg_370006
stackexchange
llm_judgeable_groundtruth_similarity
263222
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In chapter 3 of Peskin and Schroeder, when they're talking about "Dirac Matrices and Dirac Field Bilinears," they introduce $\gamma^{5}$ and give some properties of it. One of the properties is $[\gamma^{5},S^{\mu\nu}]=0$. Then they say that this means the Dirac representation must be reducible, "since eigenvectors of $\gamma^{5}$ whose eigenvalues are different transform without mixing (this criterion for reducibility is known as Schur's Lemma)." I've looked at the wikipedia page for Schur's Lemma, and at various math notes online about Schur's lemma, and I don't see the relevance here. I understand Schur's Lemma to be something like this: that if you have an irreducible representation of a algebra on a vector space, and a linear operator on that vector space commutes with that representation for every element in the algebra, then the linear operator is either 0 or invertible. How does this reduce down to "since eigenvectors of $\gamma^{5}$ whose eigenvalues are different transform without mixing"? Now provide the response and nothing else.
The reasoning is supposed to go as follows: $\gamma^5$ commutes with all algebra elements, hence with the whole image of the algebra representation. $\gamma^5$ has at least two different eigenvalues, meaning it is not a scalar multiple of the identity. If the representation of the $S^{\mu\nu}$ (that form the Lorentz algebra $\mathfrak{so}(1,3)$) were irreducible, $\gamma^5$ would be a scalar multiple of the identity by Schur's lemma, which would contradict 2. Therefore, the representation of the $S^{\mu\nu}$ must be reducible. Caveat: The Dirac representation is irreducible as the representation of the Clifford algebra, see e.g. this question and its answers .
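To make step 2 and the resulting reduction concrete, here is a standard sketch (textbook facts, not spelled out in the answer above): since $(\gamma^5)^2 = 1$, the eigenvalues of $\gamma^5$ are $\pm 1$, and the corresponding chiral projectors split the Dirac spinor space into two invariant subspaces:

```latex
P_{L} = \frac{1 - \gamma^5}{2}, \qquad P_{R} = \frac{1 + \gamma^5}{2},
\qquad P_L + P_R = \mathbb{1}, \qquad P_{L,R}^2 = P_{L,R}.
```

Because $[\gamma^5, S^{\mu\nu}] = 0$, the projectors also commute with every generator $S^{\mu\nu}$, so each eigenspace of $\gamma^5$ is mapped to itself by the Lorentz transformations; the two subspaces carry the left- and right-handed Weyl representations.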
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/263222', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/117909/']}
jdg_370007
stackexchange
llm_judgeable_groundtruth_similarity
33916827
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Working in a React / Webpack project I started to have problems with the eslint library. The eslint package is downloaded using npm and it's used to validate the project using a webpack preLoader.

preLoaders: [{
    test: /\.jsx?$/,
    loaders: [ 'eslint' ],
    include: path.resolve(__dirname, 'app')
}]

It used to work fine until recently when I tried to git clone the same project to an other folder. After installing the dependencies "npm install" and starting the project "npm start" the following error appeared.

ERROR in ./main.view.jsx
Module build failed: RangeError: Maximum call stack size exceeded
    at keys (native)
    at Referencer.Visitor.visitChildren (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:78:24)
    at Referencer.Visitor.visit (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:112:14)
    at Referencer.Visitor.visitChildren (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:93:26)
    at Referencer.Visitor.visit (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:112:14)
    at Referencer.Visitor.visitChildren (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:93:26)
    at Referencer.Visitor.visit (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:112:14)
    at Referencer.Visitor.visitChildren (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:88:38)
    at Referencer.Visitor.visit (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:112:14)
    at Referencer.Visitor.visitChildren (project/node_modules/eslint/node_modules/escope/node_modules/esrecurse/esrecurse.js:93:26)
 @ ./app.jsx 17:26-57

The strange part of all is that the old project installation still works. The problem must be somewhere in the node_modules folder because when I copy the modules from the old installation to the new one the project suddenly works. I did a "npm list --depth=0" on both projects and both have the same packages and versions. Why one is working and the other one not? I guess the problem is in the eslint package because when I remove the preLoader it works again. I'm really confused about that. Someone had that problem before? My main.view.jsx looks like that

import React from 'react';

export default class MainView extends React.Component {
  render() {
    return (
      <div />
    );
  }
}

Thanks! Now provide the response and nothing else.
I could solve the problem by myself. Isolating everything I found out that the problem was the babel dependencies. For some reason the combination of versions I had of babel-core, babel-loader and babel-eslint were not working properly together. I changed the babel versions required for my project in the package.json and now everything works fine.

- "babel-core": "5.6.x",
- "babel-loader": "5.3.x",
- "babel-eslint": "3.1.x",
+ "babel-core": "5.8.x",
+ "babel-loader": "5.4.x",
+ "babel-eslint": "4.1.x",

Here my eslint dependencies:

"eslint": "1.10.x",
"eslint-loader": "1.1.x",
"eslint-plugin-react": "3.10.x"

Hope the time I lost with that will help someone else!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33916827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2051149/']}
jdg_370008
stackexchange
llm_judgeable_groundtruth_similarity
2625021
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a game in c++ and OpenGL and want an enemy to move towards the player. What is the best method of making game objects move towards other game objects, that works in both 2D and 3D game environments? UPDATE: wow thanks everyone for the quick replies! strangely enough I managed to get this to work just as I posted it although for some reason i have to multiply the x values by more to get them to move as fast as the y direction. anyone have any ideas why? or if what I'm doing is wrong/bad

float playerX = player.getXPos();
float playerY = player.getYPos();
float enemyX = XPos-*xscroll;
float enemyY = YPos-*yscroll;

glPushMatrix();
glTranslatef(enemyX, enemyY, 0.0);
glColor3f(1.0,0.0,0.0);
glBegin(GL_POLYGON);
    glVertex2f(-40,40);
    glVertex2f(-40,-40);
    glVertex2f(40,-40);
    glVertex2f(40,40);
glEnd();
glPopMatrix();

float xDistance = abs(playerX-enemyX);
float yDistance = abs(playerY-enemyY);

if((playerX - enemyX)*(playerX - enemyX)+(playerY - enemyY)*(playerY - enemyY) < 400*400){
    float heading = asin(xDistance/yDistance);
    if(playerY > enemyY){
        YPos += timeFactor*(200*(sin((90+heading)*(PI/180.0f))));
    }else{
        YPos += -(timeFactor*(200*(sin((90+heading)*(PI/180.0f)))));
    }
    if(playerX > enemyX){
        XPos += -(timeFactor*(10000*(cos((90+heading)*(PI/180.0f)))));
    }else{
        XPos += timeFactor*(10000*(cos((90+heading)*(PI/180.0f))));
    }
}

Now provide the response and nothing else.
Create a vector in the direction that you want the enemy to move. That's easy:

dir.x = player.x - enemy.x;
dir.y = player.y - enemy.y;

Now normalize this vector. That means divide the terms by the magnitude (the hypotenuse) of the vector.

hyp = sqrt(dir.x*dir.x + dir.y*dir.y);
dir.x /= hyp;
dir.y /= hyp;

Now you just need to add that vector to the enemy's position, multiplied by the speed you want the enemy to move:

enemy.x += dir.x*speed;
enemy.y += dir.y*speed;

Here's how it works - if you add that initial vector to the enemy's position it will instantly be transported to the player. You obviously want the enemy to move at a slower speed. When you normalize the vector, you make its magnitude (essentially the hypotenuse of the triangle it forms) equal to 1. So now, adding the direction vector moves the enemy by one unit. Multiply that 1 unit by the enemy's speed, and now it's moving at the correct speed. Edit: all of this extends to 3D as well. You just need a z-component. Further edits to comment on your code: You are doing a lot of extra work. You have enough information once you calculate the hypotenuse to move the enemy towards the player. You don't need to use any trig at all - see my code above. You are also calculating (sort of) the magnitude twice:

float hypotenuse = sqrt((xDistance * xDistance) + (yDistance * yDistance));
...
(playerX - enemyX)*(playerX - enemyX)+(playerY - enemyY)*(playerY - enemyY)

The second time it's the distance squared which is a nice optimization, but unnecessary here because you've already calculated the distance and the distance squared. Here's what I would do:

float xDistance = playerX-enemyX;
float yDistance = playerY-enemyY;
float hypotenuse = sqrt((xDistance * xDistance) + (yDistance * yDistance));

if(hypotenuse < 400){
    YPos += timeFactor*200*(yDistance/hypotenuse);
    XPos += timeFactor*200*(xDistance/hypotenuse);
}

You'll notice that by removing the abs() I've also managed to remove the if(playerY > enemyY), etc parts.
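The same normalize-and-step idea, sketched language-agnostically in Python (the function name and tuple representation are illustrative, not from the question's code; the clamp so the enemy never overshoots the target is an addition beyond the answer above):

```python
import math

def step_toward(enemy, player, speed):
    """Move `enemy` one step toward `player` at the given speed.

    Both points are (x, y) tuples; returns the new enemy position.
    """
    dx = player[0] - enemy[0]
    dy = player[1] - enemy[1]
    dist = math.hypot(dx, dy)      # magnitude of the direction vector
    if dist == 0:                  # already at the target
        return enemy
    # Normalize, then scale by speed; clamp so we never step past the target
    scale = min(speed, dist) / dist
    return (enemy[0] + dx * scale, enemy[1] + dy * scale)

# One unit of movement along the 3-4-5 triangle toward the player
print(step_toward((0.0, 0.0), (3.0, 4.0), 1.0))
```

Calling this once per frame with `speed = timeFactor * 200` reproduces the behavior of the answer's final snippet, and adding a z-component extends it to 3D.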
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2625021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/277671/']}
jdg_370009
stackexchange
llm_judgeable_groundtruth_similarity
10918801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am a jquery/javascript newbie. What I want to do is add a class to the open accordion title, and remove it when i open another. heres the code:

<div class="accordion" data-collapse-type="manual" id="parent">
  <div class="accordion-group">
    <div class="accordion-heading">
      <a class="accordion-toggle category" data-toggle="collapse" data-parent="#parent" href="#category1">Category 1 </a>
    </div><!--/accordion-heading-->
    <div id="category1" class="accordion-body collapse">
      <ul class="accordion-inner unstyled">
        <li id="" class="sidelink"><a href="">Open Link 1</a></li>
        <li id="" class="sidelink"><a href="">Open Link 2</a></li>
        <li id="" class="sidelink"><a href="">Open Link 3</a></li>
      </ul>
    </div><!--/category1-->
  </div><!--/accordion-group-->
  <div class="accordion-group">
    <div class="accordion-heading">
      <a class="accordion-toggle category" href="#Category2">Category 2</a>
    </div><!--/accordion-heading-->
  </div><!--/accordion-group-->
</div><!--/accordion-->

The scripts I have attached with the page are

<script src="js/jquery-1.7.1.min.js"></script>
<script src="js/bootstrap.min.js"></script>
<script src="js/jquery.tools.min.js"></script>

So what I was looking for was to add the .active class to a.accordion-toggle whenever the menu is open (accordion style), and then have it go away once another is selected. I've looked at the documentation to bootstrap here , but it doesnt seem to help me out a lot (since I don't know what to do with the $('#myCollapsible').on('hidden', function () { // do something… }) or where to place it). I've also tried the .addClass() jquery adder, but I could only get the javascript version document.getElementById("accordion-heading").className += " newClass"; to work (if I gave the accordion group title an ID, but in this case there will be multiple accordion groups) when I put the script right after the div layer. Now provide the response and nothing else.
You can use the Collapse events for this.

$(function() {
  $('.accordion').on('show', function (e) {
    $(e.target).prev('.accordion-heading').find('.accordion-toggle').addClass('active');
  });

  $('.accordion').on('hide', function (e) {
    $(this).find('.accordion-toggle').not($(e.target)).removeClass('active');
  });
});

Here's a JsFiddle http://jsfiddle.net/D2RLR/251/
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10918801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1440199/']}
jdg_370010
stackexchange
llm_judgeable_groundtruth_similarity
25814496
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm playing around with Redis and with ServiceStack.Redis as a client. I initially used 'AutoMapper' to map the cached objects into domain objects, but this was pretty slow. Using someone else's example, I set up a custom mapper but this, too, is really slow. Is there something glaringly wrong with the below code? It's taking 4-5 seconds to map 1000 items from Redis. It's the 'GetByIds' client method that's introducing the lag, but I want an efficient way to store collections as lists of IDs in Redis and don't see another way to convert these to lists of domain objects. Thanks!

interface IMapToNew<TSource, TTarget>
{
    TTarget Map(TSource source);
}

interface IMapToExisting<TSource, TTarget>
{
    void Map(TSource source, TTarget target);
}

class FullEmployeeMapper : IMapToNew<Employee, FullEmployee>
{
    public FullEmployee Map(Employee source)
    {
        FullEmployee employee = new FullEmployee()
        {
            Id = source.Id,
            Age = source.Age,
            BirthDate = source.BirthDate,
            Name = source.Name
        };

        var mapper = new FullRoleMapper();
        var client = new RedisClient("localhost");

        employee.Roles = client
            .As<Role>()
            .GetByIds(source.Roles)
            .Select(r => mapper.Map(r))
            .ToList();

        return employee;
    }
}

class FullRoleMapper : IMapToNew<Role, FullRole>
{
    public FullRole Map(Role source)
    {
        FullRole role = new FullRole()
        {
            Id = source.Id,
            RoleName = source.RoleName
        };
        return role;
    }
}

class FullEmployee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int? Age { get; set; }
    public DateTime? BirthDate { get; set; }
    public IList<FullRole> Roles { get; set; }
}

class FullRole
{
    public int Id { get; set; }
    public string RoleName { get; set; }
}

class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int? Age { get; set; }
    public DateTime? BirthDate { get; set; }
    public IList<int> Roles { get; set; }

    public Employee(int EmployeeId, string Name)
    {
        this.Id = EmployeeId;
        this.Name = Name;
    }
}

class Role
{
    public int Id { get; set; }
    public string RoleName { get; set; }
}

class Program
{
    static void Main(string[] args)
    {
        var client = new RedisClient("localhost");
        var employeeClient = client.As<Employee>();
        var allEmployees = employeeClient.GetAll();
        var allFullEmployees = allEmployees
            .Select(e => mapper.Map(e))
            .ToList();
    }
}

Now provide the response and nothing else.
The left shrinking (or tightening) law says that

mfix (\x -> a >>= \y -> f x y) = a >>= \y -> mfix (\x -> f x y)

In particular this means that

mfix (\x -> a' >> f x) = a' >> mfix f

which means that the monadic action inside mfix must be evaluated exactly once. This is one of the main properties of MonadFix which your version fails to satisfy. Consider this example that creates a cyclic mutable list (let's disregard the fact that you could do that without mfix thanks to mutability):

import Control.Monad
import Control.Monad.Fix
import Data.IORef

data MList a = Nil | Cons a (IORef (MList a))

mrepeat :: a -> IO (MList a)
mrepeat x = mfix (liftM (Cons x) . newIORef)

main = do
    (Cons x _) <- mrepeat 1
    print x

With your variant of mfix the call to mrepeat never finishes, as you're calling the inner part with newIORef indefinitely.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25814496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1091616/']}
jdg_370011
stackexchange
llm_judgeable_groundtruth_similarity
6945780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am researching how to create an app for my work that allows clients to download the app (preferably via the app store) and using some sort of wifi triangulation/fingerprints be able to determine their location for essentially an interactive tour. Now, my question specifically is what is the best route to take for the iPhone? None of the clients will be expected to have jail broken iPhones. To my understanding this requires the use of the wifi data which is a private api therefore not meeting the app store requirements. The biggest question I have is how does American Museum of Natural History get away with using the same technology, but still available on the app store? if you're unfamiliar with American Museum of Natural History interactive tour app, see here: http://itunes.apple.com/us/app/amnh-explorer/id381227123?mt=8 Thank you for any clarification you can provide. Now provide the response and nothing else.
I'm one of the developers of the AMNH Explorer app you're referencing. Explorer uses the Cisco "Mobility Services Engine" (MSE) behind the scenes to determine its location. This is part of their Cisco wifi installation. The network itself listens for devices in the museum and estimates their position via Wifi triangulation. We do a bit of work in the app to "ask" the MSE for our current location. Doing this work on the network side was (and still is) the only available option for iOS since, as you've found, the wifi scanning functions are considered to be private APIs. If you'd like to build your own system and mobile app for doing something similar, you might start with the MSE. Alternatively, we've built the same tech from Explorer into a new platform called Meridian which provides location-based services on both iOS and Android. Definitely get in touch with us via the website if you're interested in building on that. Update 6/1/2017 Thought I would update this old answer - AMNH is no longer using the Wifi-based system I describe above, as of a few years ago. They now use an installation of a few hundred battery-powered Bluetooth Beacons (also provided by Meridian). The device (iOS or Android) scans for nearby beacons and, based on their known locations and RSSI values, triangulates a position. You can read more about it in this article .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6945780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/879119/']}
jdg_370012
stackexchange
llm_judgeable_groundtruth_similarity
1113054
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How many positive integers less than 1000 have an odd number of positive integer divisors? Well I know that the number has to be composite because a prime number has 2 divisors, which are 1 and itself. I don't have any other thoughts to solve this problem other than brute force, which will take a long time and is not accurate. Any help? Thanks. Now provide the response and nothing else.
They are the squares so the answer is $\lfloor \sqrt{1000} \rfloor = 31$. To see this notice that the number of divisors of a number is the product of each exponent plus one, i.e. $n = \prod{p_i^{e_i}}$ and $\tau(n) = \prod{(e_i+1)}$. If $\tau(n)$ is odd then all $e_i$ are even which means $n$ is a square.
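A quick brute-force check (a Python sketch, not part of the original answer) confirms both the criterion and the count: the positive integers below 1000 with an odd number of divisors are exactly the perfect squares, and there are 31 of them.

```python
def num_divisors(n):
    # Count divisors by trial division up to sqrt(n);
    # each d < sqrt(n) pairs with n // d, and sqrt(n) counts once.
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            count += 1 if d * d == n else 2
        d += 1
    return count

odd_tau = [n for n in range(1, 1000) if num_divisors(n) % 2 == 1]
squares = [k * k for k in range(1, 32)]  # 31^2 = 961 < 1000 <= 32^2

print(len(odd_tau))        # 31
print(odd_tau == squares)  # True
```

This matches the closed-form answer $\lfloor \sqrt{1000} \rfloor = 31$.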
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1113054', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/167548/']}
jdg_370013
stackexchange
llm_judgeable_groundtruth_similarity
285772
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I submitted a question around 12 hours ago, and I didn't get my desired answer. Being (very) new to Stack Overflow, I submitted another very similar question immediately after. I got a comment telling me that it was basically a duplicate (which it was - sorry!), so I decided to delete both questions and submit another one. I deleted them, but now when I try to submit the question again for an answer, it tells me that it appears to be a duplicate of the question I have deleted. Can you not submit duplicates of deleted questions? Now provide the response and nothing else.
You've pretty much hit the nail on the head. This is likely due to the fact that a deleted question is really just soft-deleted. If a user with more than 10k rep, or the original author, has a link to the question, they can still view it. Diamond mods can see these directly from your profile.

I suggest you undelete one of the two questions and edit it into shape. Be patient, as it can take a couple of days or even a couple of weeks to get an answer to your question. Editing a question will bump it to the top of the active list and draw more attention to it. Please be sure any edits you make are significant improvements to formatting or clarity, as users will get annoyed if you continually bump your post with small, meaningless edits. Posting the same question multiple times is frowned upon here, and will get the repeat questions closed as duplicates and likely deleted.

To view a recently deleted question, go to your question list and scroll to the bottom. You should see a link to your recently deleted questions there.

Do be aware that downvoted questions that you delete will still count against you in the question ban algorithm, so deleting and reasking a poorly received question is not a way to avoid a question ban. The deleted questions won't count more than if they were undeleted, but they won't count less either.
{}
{'log_upvote_score': 7, 'links': ['https://meta.stackoverflow.com/questions/285772', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/4364651/']}
jdg_370014
stackexchange
llm_judgeable_groundtruth_similarity
40199151
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating a simple chat app where I make an API call to my database via axios which returns an array of message objects. I am able to get the data when I make an axios call in componentWillMount. Then I am trying to setState to display the conversation. Here's the code:

    export default class Chat extends Component {
      constructor(props) {
        super(props);
        this.state = {
          messages: [],
          message: '',
        };
        this.socket = io('/api/');
        this.onSubmitMessage = this.onSubmitMessage.bind(this);
        this.onInputChange = this.onInputChange.bind(this);
      }

      componentWillMount() {
        axios.get(`api/messages`)
          .then((result) => {
            const messages = result.data
            console.log("COMPONENT WILL Mount messages : ", messages);
            this.setState({
              messages: [...messages.content]
            })
          })
      }
    }

I have seen some posts concerning lifecycle functions and setting state, and it seems like I'm doing the right thing. Again to highlight: the axios call is working fine, but setting the state is not. I am still seeing an empty array. Thanks in advance!

EDIT: Here is the solution to my issue specifically. It was buried in a comment, so I thought I'd leave it here: "I discovered the issue. It was actually in how I was parsing my data. The spread operator on ...messages.content didn't work because messages.content doesn't exist. messages[i].content exists. So my fix was to spread just ...messages. Then in a child component I map over the objects and parse the .content property. Thanks for the help guys!"

Now provide the response and nothing else.
In your case, your setState() won't work because you're using setState() inside an async callback.

Working fiddle: https://jsfiddle.net/xytma20g/3/

You're making an API call, which is async. So setState will be invoked only after receiving the data. It does not have anything to do with componentWillMount or componentDidMount. You need to handle the empty messages array in your render. When you receive your data from the API, set that data to the state, and the component will re-render with the new state, which will be reflected in your render.

Pseudo code:

    export default class Chat extends Component {
      constructor(props) {
        super(props);
        this.state = {
          messages: [],
          message: '',
        };
        this.socket = io('/api/');
        this.onSubmitMessage = this.onSubmitMessage.bind(this);
        this.onInputChange = this.onInputChange.bind(this);
      }

      componentWillMount() {
        axios.get(`api/messages`)
          .then((result) => {
            const messages = result.data
            console.log("COMPONENT WILL Mount messages : ", messages);
            this.setState({
              messages: [...messages.content]
            })
          })
      }

      render() {
        if (this.state.messages.length === 0) {
          return false // return false or a <Loader/> when messages[] is empty
        }
        // rest of your render.
      }
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40199151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5891048/']}
jdg_370015
stackexchange
llm_judgeable_groundtruth_similarity
34789611
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I got 3 activities (A, B, C) and a service that I call to check if I got new messages from the DB. It's an HTTP request. I need to make the request every 15 sec.

    Thread t = new Thread() {
        @Override
        public void run() {
            try {
                while (!isInterrupted()) {
                    Thread.sleep(15000);
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            // Here i call
                        }
                    });
                }
            } catch (InterruptedException e) {
            }
        }
    };
    t.start();

How to make it work when I am changing activities?

Now provide the response and nothing else.
I had the exact same problem and I found a way to do this using DataFrame.repartition(). The problem with using coalesce(1) is that your parallelism drops to 1, and it can be slow at best and error out at worst. Increasing that number doesn't help either -- if you do coalesce(10) you get more parallelism, but end up with 10 files per partition.

To get one file per partition without using coalesce(), use repartition() with the same columns you want the output to be partitioned by. So in your case, do this:

    import spark.implicits._

    df
      .repartition($"entity", $"year", $"month", $"day", $"status")
      .write
      .partitionBy("entity", "year", "month", "day", "status")
      .mode(SaveMode.Append)
      .parquet(s"$location")

Once I do that I get one parquet file per output partition, instead of multiple files. I tested this in Python, but I assume in Scala it should be the same.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/34789611', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5718197/']}
jdg_370016
stackexchange
llm_judgeable_groundtruth_similarity
72126048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am facing an error while deploying deployment in CircleCI. Please find the configuration file below. When running the kubectl CLI, we got an error between kubectl and the EKS tool of the aws-cli . version: 2.1orbs: aws-ecr: circleci/aws-ecr@6.3.0 docker: circleci/docker@0.5.18 rollbar: rollbar/deploy@1.0.1 kubernetes: circleci/kubernetes@1.3.0 deploy: version: 2.1 orbs: aws-eks: circleci/aws-eks@1.0.0 kubernetes: circleci/kubernetes@1.3.0 executors: default: description: | The version of the circleci/buildpack-deps Docker container to use when running commands. parameters: buildpack-tag: type: string default: buster docker: - image: circleci/buildpack-deps:<<parameters.buildpack-tag>> description: | A collection of tools to deploy changes to AWS EKS in a declarative manner where all changes to templates are checked into version control before applying them to an EKS cluster. commands: setup: description: | Install the gettext-base package into the executor to be able to run envsubst for replacing values in template files. This command is a prerequisite for all other commands and should not have to be run manually. parameters: cluster-name: default: '' description: Name of the EKS Cluster. type: string aws-region: default: 'eu-central-1' description: Region where the EKS Cluster is located. 
type: string git-user-email: default: "deploy@mail.com" description: Email of the git user to use for making commits type: string git-user-name: default: "CircleCI Deploy Orb" description: Name of the git user to use for making commits type: string steps: - run: name: install gettext-base command: | if which envsubst > /dev/null; then echo "envsubst is already installed" exit 0 fi sudo apt-get update sudo apt-get install -y gettext-base - run: name: Setup GitHub access command: | mkdir -p ~/.ssh echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts git config --global user.email "<< parameters.git-user-email >>" git config --global user.name "<< parameters.git-user-name >>" - aws-eks/update-kubeconfig-with-authenticator: aws-region: << parameters.aws-region >> cluster-name: << parameters.cluster-name >> install-kubectl: true authenticator-release-tag: v0.5.1 update-image: description: | Generates template files with the specified version tag for the image to be updated and subsequently applies that template after checking it back into version control. parameters: cluster-name: default: '' description: Name of the EKS Cluster. type: string aws-region: default: 'eu-central-1' description: Region where the EKS Cluster is located. type: string image-tag: default: '' description: | The tag of the image, defaults to the value of `CIRCLE_SHA1` if not provided. type: string replicas: default: 3 description: | The replica count for the deployment. type: integer environment: default: 'production' description: | The environment/stage where the template will be applied. Defaults to `production`. 
type: string template-file-path: default: '' description: | The path to the source template which contains the placeholders for the image-tag. type: string resource-name: default: '' description: | Resource name in the format TYPE/NAME e.g. deployment/nginx. type: string template-repository: default: '' description: | The fullpath to the repository where templates reside. Write access is required to commit generated templates. type: string template-folder: default: 'templates' description: | The name of the folder where the template-repository is cloned to. type: string placeholder-name: default: IMAGE_TAG description: | The name of the placeholder environment variable that is to be substituted with the image-tag parameter. type: string cluster-namespace: default: sayway description: | Namespace within the EKS Cluster. type: string steps: - setup: aws-region: << parameters.aws-region >> cluster-name: << parameters.cluster-name >> git-user-email: dev@sayway.com git-user-name: deploy - run: name: pull template repository command: | [ "$(ls -A << parameters.template-folder >>)" ] && \ cd << parameters.template-folder >> && git pull --force && cd .. 
[ "$(ls -A << parameters.template-folder >>)" ] || \ git clone << parameters.template-repository >> << parameters.template-folder >> - run: name: generate and commit template files command: | cd << parameters.template-folder >> IMAGE_TAG="<< parameters.image-tag >>" ./bin/generate.sh --file << parameters.template-file-path >> \ --stage << parameters.environment >> \ --commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \ << parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \ REPLICAS=<< parameters.replicas >> - kubernetes/create-or-update-resource: get-rollout-status: true namespace: << parameters.cluster-namespace >> resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >> resource-name: << parameters.resource-name >>jobs: test: working_directory: ~/say-way/core parallelism: 1 shell: /bin/bash --login environment: CIRCLE_ARTIFACTS: /tmp/circleci-artifacts CIRCLE_TEST_REPORTS: /tmp/circleci-test-results KONFIG_CITUS__HOST: localhost KONFIG_CITUS__USER: postgres KONFIG_CITUS__DATABASE: sayway_test KONFIG_CITUS__PASSWORD: "" KONFIG_SPEC_REPORTER: true docker: - image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest aws_auth: aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING - image: circleci/redis - image: rabbitmq:3.7.7 - image: circleci/mongo:4.2 - image: circleci/postgres:10.5-alpine steps: - checkout - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS # This is based on your 1.0 configuration file or project settings - restore_cache: keys: - v1-dep-{{ checksum "Gemfile.lock" }}- # any recent Gemfile.lock - v1-dep- - run: name: install correct bundler version command: | export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')" echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV gem install bundler --version 
$BUNDLER_VERSION - run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3' - run: name: copy test.yml.sample to test.yml command: cp config/test.yml.sample config/test.yml - run: name: Precompile and clean assets command: bundle exec rake assets:precompile assets:clean # Save dependency cache - save_cache: key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }} paths: - vendor/bundle - public/assets - run: name: Audit bundle for known security vulnerabilities command: bundle exec bundle-audit check --update - run: name: Setup Database command: bundle exec ruby ~/sayway/setup_test_db.rb - run: name: Migrate Database command: bundle exec rake db:citus:migrate - run: name: Run tests command: bundle exec rails test -f # By default, running "rails test" won't run system tests. - run: name: Run system tests command: bundle exec rails test:system # Save test results - store_test_results: path: /tmp/circleci-test-results # Save artifacts - store_artifacts: path: /tmp/circleci-artifacts - store_artifacts: path: /tmp/circleci-test-results build-and-push-image: working_directory: ~/say-way/ parallelism: 1 shell: /bin/bash --login executor: aws-ecr/default steps: - checkout - run: name: Pull latest core images for cache command: | $(aws ecr get-login --no-include-email --region $AWS_REGION) docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest" - docker/build: image: core registry: "${AWS_ECR_ACCOUNT_URL}" tag: "latest,${CIRCLE_SHA1}" cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest" - aws-ecr/push-image: repo: core tag: "latest,${CIRCLE_SHA1}" deploy-production: working_directory: ~/say-way/ parallelism: 1 shell: /bin/bash --login executor: deploy/default steps: - kubernetes/install-kubectl: kubectl-version: v1.22.0 - rollbar/notify_deploy_started: environment: report - deploy/update-image: resource-name: deployment/core-web template-file-path: core-web-pod.yml cluster-name: report environment: report template-repository: 
git@github.com:say-way/sw-k8s.git replicas: 3 - deploy/update-image: resource-name: deployment/core-worker template-file-path: core-worker-pod.yml cluster-name: report environment: report template-repository: git@github.com:say-way/sw-k8s.git replicas: 4 - deploy/update-image: resource-name: deployment/core-worker-batch template-file-path: core-worker-batch-pod.yml cluster-name: report environment: report template-repository: git@github.com:say-way/sw-k8s.git replicas: 1 - rollbar/notify_deploy_finished: deploy_id: "${ROLLBAR_DEPLOY_ID}" status: succeeded deploy-demo: working_directory: ~/say-way/ parallelism: 1 shell: /bin/bash --login executor: deploy/default steps: - kubernetes/install-kubectl: kubectl-version: v1.22.0 - rollbar/notify_deploy_started: environment: demo - deploy/update-image: resource-name: deployment/core-web template-file-path: core-web-pod.yml cluster-name: demo environment: demo template-repository: git@github.com:say-way/sw-k8s.git replicas: 2 - deploy/update-image: resource-name: deployment/core-worker template-file-path: core-worker-pod.yml cluster-name: demo environment: demo template-repository: git@github.com:say-way/sw-k8s.git replicas: 1 - deploy/update-image: resource-name: deployment/core-worker-batch template-file-path: core-worker-batch-pod.yml cluster-name: demo environment: demo template-repository: git@github.com:say-way/sw-k8s.git replicas: 1 - rollbar/notify_deploy_finished: deploy_id: "${ROLLBAR_DEPLOY_ID}" status: succeededworkflows: version: 2.1 build-n-test: jobs: - test: filters: branches: ignore: master build-approve-deploy: jobs: - build-and-push-image: context: Core filters: branches: only: master - approve-report-deploy: type: approval requires: - build-and-push-image - approve-demo-deploy: type: approval requires: - build-and-push-image - deploy-production: context: Core requires: - approve-report-deploy - deploy-demo: context: Core requires: - approve-demo-deploy Now provide the response and nothing else.
There is a known issue in aws-cli; it has already been fixed upstream.

Option 1: In my case, updating aws-cli and regenerating ~/.kube/config helped.

Update aws-cli (following the documentation):

    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    sudo ./aws/install --update

Update the kube configuration:

    mv ~/.kube/config ~/.kube/config.bk
    aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}

Option 2: Change v1alpha1 to v1beta1 in ~/.kube/config:

    $ diff ~/.kube/config ~/.kube/config-backup
    691c691
    < apiVersion: client.authentication.k8s.io/v1beta1
    ---
    > apiVersion: client.authentication.k8s.io/v1alpha1
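If you have several kubeconfig entries to patch for Option 2 and prefer a script over hand-editing, a small helper along these lines works. This is only an illustrative sketch, not an official AWS tool; it keeps a .bk backup alongside the file, mirroring the commands above.

```python
import shutil

def patch_kubeconfig(path):
    """Replace the removed v1alpha1 exec-plugin apiVersion with v1beta1,
    keeping a .bk backup of the original file next to it."""
    shutil.copy(path, path + ".bk")  # back up before touching anything
    with open(path) as f:
        text = f.read()
    text = text.replace("client.authentication.k8s.io/v1alpha1",
                        "client.authentication.k8s.io/v1beta1")
    with open(path, "w") as f:
        f.write(text)
```

Run it as `patch_kubeconfig(os.path.expanduser("~/.kube/config"))`.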
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/72126048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/19042368/']}
jdg_370017
stackexchange
llm_judgeable_groundtruth_similarity
54979
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just made a simple script and created a new repository for it on GitHub. After initializing the local repo, moving the script to the repository and pull ing the GitHub repository (containing only a README.md file), the script was gone. Is this normal behavior? And is there some way to restore the file? Here are the commands, as executed: $ mkgithub ~/dev/cr2meta2jpgInitialized empty Git repository in /home/username/dev/cr2meta2jpg/.git/$ mv test.sh ~/dev/cr2meta2jpg/cr2meta2jpg.sh$ cd ~/dev/cr2meta2jpg/$ lscr2meta2jpg.sh$ git pullremote: Counting objects: 3, done.remote: Compressing objects: 100% (2/2), done.remote: Total 3 (delta 0), reused 0 (delta 0)Unpacking objects: 100% (3/3), done.From github.com:l0b0/cr2meta2jpg * [new branch] master -> origin/master$ lsREADME.md Now provide the response and nothing else.
A git pull will not overwrite local changes unless you use git add before. Even in this case, you can still recover your data.

When you issued git add before the pull:

    mkgithub blub
    cp test.sh blub/
    cd blub/
    git add test.sh
    git pull

the pull will delete test.sh. It will only do this if you have no commit in the local repo. After adding a commit this won't remove test.sh (git either ignores it or reports a merge conflict).

The file is not lost. It's still in the Git repository as a dangling blob. To recover the file:

    $ git fsck
    Checking object directories: 100% (256/256), done.
    Checking objects: 100% (401/401), done.
    dangling blob 541060d8292ff808c9715265d063f1bae220da7c
    $ git show 541060d8292ff808c9715265d063f1bae220da7c > test.sh

This works as long as you did not issue git gc --prune=now afterwards.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/54979', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/3645/']}
jdg_370018
stackexchange
llm_judgeable_groundtruth_similarity
1052149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I read that .NET uses connection pooling. For example, if I instantiate a bunch of SqlConnection objects with the same connection string, then internally .NET will know to use the same connection. Is this correct? Also, in a big web-based application, any tips on the best way to harness this "power" ? Now provide the response and nothing else.
Setting up the TCP connection between your Web application and SQL Server can be an expensive operation. Connection pooling allows connections to the database to be reused for subsequent data requests. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool, where it remains connected to the database, as opposed to completely tearing down that TCP connection.

Always close your connections when you're finished with them. No matter what anyone says about garbage collection within the Microsoft .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.

To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to, rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective.

The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.

http://msdn.microsoft.com/en-us/magazine/cc163854.aspx
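The pooling idea itself is not specific to ADO.NET. Conceptually it is just a thread-safe collection of idle connections that are handed out and returned instead of being opened and torn down each time. Here is a deliberately simplified sketch in Python with a dummy Connection class; the real .NET pool additionally keys pools by connection string and identity, and handles timeouts and health checks.

```python
import queue

class Connection:
    """Stand-in for an expensive-to-open database connection."""
    opened = 0  # class-level counter: how many real connections were created

    def __init__(self):
        Connection.opened += 1

class ConnectionPool:
    def __init__(self, max_size=5):
        self._idle = queue.Queue(max_size)  # thread-safe store of idle connections

    def acquire(self):
        try:
            return self._idle.get_nowait()  # reuse an idle connection if any
        except queue.Empty:
            return Connection()             # none idle: open a new one

    def release(self, conn):
        try:
            self._idle.put_nowait(conn)     # return to the pool for reuse
        except queue.Full:
            pass                            # pool is full: really close/drop it

pool = ConnectionPool()
for _ in range(100):       # the open/work/close pattern, 100 times over
    c = pool.acquire()
    pool.release(c)
```

Because every "close" returns the connection to the pool, the 100 acquire/release cycles create only a single underlying connection.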
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1052149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/127880/']}
jdg_370019
stackexchange
llm_judgeable_groundtruth_similarity
67335254
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

    #include <vector>
    #include <ranges>

    int main()
    {
        auto v = std::vector{1, 2, 3, 4};
        v | std::views::drop(2);                  // ok
        std::views::all(v) | std::views::drop(2); // also ok
    }

Successfully compiled with g++11 -std=c++20. But I cannot tell any difference between v | std::views::drop(2) and std::views::all(v) | std::views::drop(2). So, my question is: What is std::views::all introduced for in C++20?

Now provide the response and nothing else.
But I cannot tell any difference between v | std::views::drop(2) and std::views::all(v) | std::views::drop(2).

Indeed, there is no difference between the two, because v | views::drop(2) already means views::all(v) | views::drop(2). views::all is an implementation detail of Ranges to ensure that range adaptors always adapt views (not ranges). All that views::all(v) does is ensure that the result is a View, which is to say (from [range.all]):

  Given a subexpression E, the expression views::all(E) is expression-equivalent to:
  decay-copy(E) if the decayed type of E models view.
  Otherwise, ref_view{E} if that expression is well-formed.
  Otherwise, subrange{E}.

In your case, v is a vector<int>, which does not model view. But it is an lvalue, so ref_view{v} would be well-formed, so that's what happens.

All the adaptors use views::all internally. For instance, drop_view has the following deduction guide:

    template <class R>
    drop_view(R&&, range_difference_t<R>) -> drop_view<views::all_t<R>>;

So if you wrote drop_view(v, 2) (and you should never use meow_view directly, always use views::meow), that would itself invoke views::all for you.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/67335254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/508343/']}
jdg_370020
stackexchange
llm_judgeable_groundtruth_similarity
77425
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can there be a genuine algorithm in which number of memory reads far outnumber the no. of operations performed? For example, number of memory reads scale with n^2, while no. of operations scale with only n, where n is the input size. If yes, then how will one decide the time complexity in such a case? Will it be n^2 or only n? Now provide the response and nothing else.
For a memory read to be relevant to the algorithm, the information read in must be processed in some way. If the information is never compared or used as input to any operator, it will not affect the algorithm and thus was unnecessary to read in the first place. If there were an operator that could accept a variable number of inputs, a number that could grow without bounds, then and only then could you have the number of memory reads far outnumber the number of operations in the way you describe. For instance, if "sum" were a single atomic operation accepting any number of inputs, then you could have such an algorithm. However, a model of computation where a single operation could utilize an unlimited number of inputs is not very interesting nor very useful for algorithmic analysis. It would essentially just push all the hard work to a lower level of abstraction. So you won't find such a model in any CS literature. A paragraph from an MIT Open CourseWare PDF about communication networks as in Graph Theory makes a related point (related to what I said about just pushing the hard/interesting work to a different level of abstraction) in discussion of switch sizes: One way to reduce the diameter of a network (and hence the latency needed to route packets) is to use larger switches. For example, in the complete binary tree, most of the switches have three incoming edges and three outgoing edges, which makes them $3 \times 3$ switches. If we had $4 \times 4$ switches, then we could construct a complete ternary tree with an even smaller diameter. In principle, we could even connect up all the inputs and outputs via a single monster $N \times N$ switch, as shown in Figure 6.9. In this case, the “network” would consist of a single switch and the latency would be $2$ . This isn’t very productive, however, since we’ve just concealed the original network design problem inside this abstract monster switch. 
Eventually, we’ll have to design the internals of the monster switch using simpler components, and then we’re right back where we started. So the challenge in designing a communication network is figuring out how to get the functionality of an $N \times N$ switch using fixed size, elementary devices, like $3 \times 3$ switches.
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/77425', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/74325/']}
jdg_370021
stackexchange
llm_judgeable_groundtruth_similarity
1495475
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Wikipedia has some cryptic things to say about the non-abelian structure of the category of groups. It says the category of groups is not additive because "there is no natural way to define the sum of group homomorphisms". What about just adding them pointwise? I understand groups are not additive because they're not $\mathsf{Ab}$-enriched, so pointwise addition would be commutative. But then, aren't groups enriched over themselves at least? I remember reading about a non-abelian tensor product somewhere... Even if not enriched over itself, is it not at least closed? Now provide the response and nothing else.
It's a consequence of the famous Eckmann-Hilton argument. Let $u,v : G \to H$ be two group homomorphisms. Define $w : G \to H$ by $w(g) = u(g) v(g)$ as you suggest. Then
$$w(gh) = u(gh)\,v(gh) = u(g)\,u(h)\,v(g)\,v(h),$$
while
$$w(g)\,w(h) = u(g)\,v(g)\,u(h)\,v(h).$$
So $w$ is a group homomorphism iff these two expressions agree for all $g, h \in G$, i.e. iff $u(h)$ commutes with $v(g)$ for all $g, h \in G$. Obviously, this is not necessarily the case, so in general pointwise multiplication doesn't yield a group homomorphism.

It's not that "addition" of group homomorphisms isn't commutative that prevents $\mathsf{Grp}$ from being abelian; it's that it isn't even defined. A lot of work has been done to try and see what properties characterize $\mathsf{Grp}$; as far as I know, semi-abelian categories capture a great deal about $\mathsf{Grp}$.
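You can see the failure concretely by computing in a small non-abelian group. In the illustrative sketch below, elements of $S_3$ are permutation tuples, both $u$ and $v$ are taken to be the identity homomorphism, and the pointwise product $w(g) = u(g)v(g)$ visibly violates the homomorphism law:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations-as-tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))  # the symmetric group on 3 letters

def u(g):  # first homomorphism: the identity map S3 -> S3
    return g

def v(g):  # second homomorphism: also the identity map
    return g

def w(g):  # the pointwise product w(g) = u(g) * v(g)
    return compose(u(g), v(g))

# u alone IS a homomorphism: u(gh) == u(g)u(h) for all g, h
u_is_hom = all(u(compose(g, h)) == compose(u(g), u(h)) for g in S3 for h in S3)

# but w is a homomorphism only if w(gh) == w(g)w(h) for every pair g, h
violations = [(g, h) for g in S3 for h in S3
              if w(compose(g, h)) != compose(w(g), w(h))]
```

Since $S_3$ is non-abelian, `violations` is non-empty: the pointwise product of two perfectly good homomorphisms fails to be one.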
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1495475', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_370022
stackexchange
llm_judgeable_groundtruth_similarity
41110742
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to implement partial_update with Django Rest Framework but I need some clarification because I'm stuck.

Why do we need to specify partial=True? In my understanding, we could easily update the Demo object inside of the partial_update method. What is the purpose of this?

What is inside of the serialized variable? What is inside of the serialized variable in the partial_update method? Is that a Demo object? What function is called behind the scenes?

How would one finish the implementation here?

Viewset:

    class DemoViewSet(viewsets.ModelViewSet):
        serializer_class = DemoSerializer

        def partial_update(self, request, pk=None):
            serialized = DemoSerializer(request.user, data=request.data, partial=True)
            return Response(status=status.HTTP_202_ACCEPTED)

Serializer:

    class DemoSerializer(serializers.ModelSerializer):
        class Meta:
            model = Demo
            fields = '__all__'

        def update(self, instance, validated_data):
            print 'this - here'
            demo = Demo.objects.get(pk=instance.id)
            Demo.objects.filter(pk=instance.id)\
                .update(**validated_data)
            return demo

Now provide the response and nothing else.
I had the same questions as yours before, but when I dug into the source code of rest_framework, I got the following findings; hope it helps:

For question 1. Why do we need to specify partial=True?

This question is related to HTTP verbs.

PUT: The PUT method replaces all current representations of the target resource with the request payload.

PATCH: The PATCH method is used to apply partial modifications to a resource.

Generally speaking, partial is used to decide whether the fields in the model need field validation when the client submits data to the view. For example, we have a Book model like this; please note that both the name and author_name fields are mandatory (not null & not blank).

    class Book(models.Model):
        name = models.CharField('name of the book', max_length=100)
        author_name = models.CharField('the name of the author', max_length=50)

    # Create a new instance for testing
    Book.objects.create(name='Python in a nut shell', author_name='Alex Martelli')

In some scenarios, we may only need to update part of the fields in the model, e.g., we only need to update the name field in the Book. In this case, the client will only submit the name field with a new value to the view. The data submitted from the client may look like this:

    {"pk": 1, "name": "PYTHON IN A NUT SHELL"}

But you may have noticed that our model definition does not allow author_name to be blank. So we have to use partial_update instead of update, so that the rest framework will not perform field validation checks for the fields which are missing in the request data. For testing purposes, you can create two views for both update and partial_update, and you will get a better understanding of what I just said.
Example:

views.py

from rest_framework.generics import GenericAPIView
from rest_framework.mixins import UpdateModelMixin
from rest_framework.viewsets import ModelViewSet
from rest_framework import serializers

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book

class BookUpdateView(GenericAPIView, UpdateModelMixin):
    '''
    Book update API; you need to submit both the `name` and `author_name`
    fields at the same time, or django will reject the update because a
    required field is missing.
    '''
    queryset = Book.objects.all()
    serializer_class = BookSerializer

    def put(self, request, *args, **kwargs):
        return self.update(request, *args, **kwargs)

class BookPartialUpdateView(GenericAPIView, UpdateModelMixin):
    '''
    You just need to provide the field which is to be modified.
    '''
    queryset = Book.objects.all()
    serializer_class = BookSerializer

    def put(self, request, *args, **kwargs):
        return self.partial_update(request, *args, **kwargs)

urls.py

urlpatterns = patterns('',
    url(r'^book/update/(?P<pk>\d+)/$', BookUpdateView.as_view(), name='book_update'),
    url(r'^book/update-partial/(?P<pk>\d+)/$', BookPartialUpdateView.as_view(), name='book_partial_update'),
)

Data to submit

{"pk": 1, "name": "PYTHON IN A NUT SHELL"}

When you submit the above JSON to /book/update/1/, you will get the following error with HTTP_STATUS_CODE=400:

{
    "author_name": [
        "This field is required."
    ]
}

But when you submit the above JSON to /book/update-partial/1/, you will get HTTP_STATUS_CODE=200 with the following response:

{
    "id": 1,
    "name": "PYTHON IN A NUT SHELL",
    "author_name": "Alex Martelli"
}

For question 2: What is inside of the serialized variable?

serialized is an object wrapping the model instance as a serialisable object, and you can use it to generate a plain JSON string with serialized.data.

For question 3: How would one finish the implementation here?

I think you can answer this yourself once you have read the answer above; you should now know when to use update and when to use partial_update.
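The validation behaviour that partial=True switches on can be sketched in plain Python, with no DRF or database required. This is an illustrative model of the semantics only (the field names mirror the Book example above; `validate` is a hypothetical helper, not a DRF API):

```python
# Minimal sketch of what `partial=True` changes during validation:
# a full update requires every mandatory field to be present, while a
# partial update simply skips the required-field check for absent fields.

REQUIRED_FIELDS = ("name", "author_name")

def validate(data, partial=False):
    """Return a dict of field errors; an empty dict means the data is valid."""
    errors = {}
    for field in REQUIRED_FIELDS:
        if field not in data and not partial:
            errors[field] = ["This field is required."]
    return errors

full = validate({"name": "PYTHON IN A NUT SHELL"})                 # PUT-style
part = validate({"name": "PYTHON IN A NUT SHELL"}, partial=True)   # PATCH-style
print(full)  # {'author_name': ['This field is required.']}
print(part)  # {}
```

This mirrors the two views above: the full update rejects the payload for the missing author_name, the partial one accepts it.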
If you still have any question, feel free to ask. I just read part of the source code of the rest framework, and may have not understand very deeply for some terms, and please point it out when it is wrong...
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/41110742', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1141356/']}
jdg_370023
stackexchange
llm_judgeable_groundtruth_similarity
7670
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Naturalnews.com claims that in explaining why Monsanto doesn't run human clinical trials on GM crops on its own website , the company uses 'absurd logic' to justify its stance and makes 'pseudoscientific' claims that "everything can be considered non-toxic and safe because it is all made of atoms, just like our bodies!" Monsanto claims that There is no need for, or value in testing the safety of GM foods in humans. So long as the introduced protein is determined safe, food from GM crops determined to be substantially equivalent is not expected to pose any health risks. Given that the page would probably be aimed at the non-scientific reader trying to understand more about genetic modification, are the claims correct and if so, are such claims justified? Now provide the response and nothing else.
Summary The FDA requires toxicology and allergy tests for GM foods for human consumption before the product can be sold in the US (similar mechanisms are in place in the EU as well). If some of these tests have already been made in a different context, the FDA does not require repeating the animal trials, since that would be pointless killing of lab rats. Clinical trials are for medication, not for food, though new food additives (as well as other substances, such as pesticides) may be tested on humans for potential allergic reactions. Details on the original claim Clinical trials are performed for new medication in order to test whether they provide a significant benefit compared to placebo and/or to existing medication. GM food is not medication. Therefore, Monsanto is not required to run clinical trials on GM food. However, they cannot claim GM food cures cancer, either. Toxicity tests are performed for food and medication, and are performed on animals, usually rats. A toxicity test aims to determine LD50 , the dose (relative to the specimen's weight) at which 50% of the animals die (which is why you don't perform this on humans). The safe dose for humans is then determined as "much less than LD50". Once an LD50 is known, it is possible to test the substance at realistic dosage in humans to see whether they may have an allergic reaction. Toxicology studies on BT have been done, and the protein has even been tested in humans, when the product was being tested as a pesticide for organic farming. BT turns out to be harmless in humans. Since the BT protein is indeed the same between what is produced by bacteria as organic pesticide, or by GM crops, Monsanto may indeed not have had to do any toxicology testing on GM food. "everything can be considered non-toxic and safe because it is all made of atoms, just like our bodies!" This is a misrepresentation of the issue. 
What naturalnews.com asks is more similar to requiring that if Kellogg comes out with a new formulation of cereal, they need to test whether wheat flour is fit for human consumption.

What needs to be tested by the FDA? The FDA tests new food additives:

FDA field investigators inspect food companies, examine food shipments from abroad, and collect samples. Laboratory scientists analyze samples. Compliance officers recommend legal action and follow through on enforcement issues.

What undergoes premarket approval? New food additives and color additives must be approved before they can be used in foods. These additives are considered food under the law. New food additives, including substances added intentionally to food and substances that may migrate to food because they contact food (e.g., food packaging) must be shown to be safe to FDA's satisfaction before companies can market them. Companies that want to add new additives to food bear the responsibility of providing FDA with information demonstrating that the additives are safe. FDA experts review the results of appropriate tests done by companies to ensure that the additive is safe for its intended use. An approved food additive must be used in compliance with its approved uses, specifications, and restrictions. Certain food ingredients, such as those with a long history of safe use in food, do not require premarket approval.

As part of the approval process, food additives have to be tested for toxic effects.

HIGHLIGHTS OF TOXICOLOGY RECOMMENDATIONS IN THE 2002 GUIDANCE

Safety Summary and Comprehensive Toxicological Profile (CTP). The safety information for a food contact notification (FCN) should contain both a safety summary and a comprehensive toxicological profile (CTP) of the food contact substance (FCS) that is the subject of the notification. The safety summary is Part III of FDA Form 3480 and should provide the basis for the notifier's determination that the intended use of the FCS is safe.
The CTP should provide summaries of all the available toxicological information pertinent to the safety evaluation of the FCS. In some cases, a notification may need to include a CTP for a toxicologically relevant constituent of the FCS. If a constituent of an FCS is carcinogenic, the CTP in the notification should include a quantitative risk assessment.

Safety Testing Recommendations for Food Contact Substances (FCSs) and Their Constituents. This document recommends safety testing of FCSs and their constituents, primarily based on a series of genetic toxicity tests and, when justified by the exposure level, subchronic toxicity studies. The recommendations describe the minimum level of safety testing generally considered appropriate at various exposures. For an initial or incremental exposure of an FCS at or less than 0.5 parts per billion (ppb), no safety tests are recommended. For a cumulative exposure between 0.5 ppb and 1 part per million (ppm), genetic toxicity tests and/or subchronic tests are recommended. At a cumulative exposure at or greater than 1 ppm, FDA normally requires, under the authority of Section 409 (h)(3)(B) of the Federal Food, Drug and Cosmetic Act, that a food additive petition be submitted for the use of an FCS.

Evaluation of Structural Similarities to Known Toxicants. To the extent feasible, knowledge in predicting potential toxicity based on structure/activity relationships may be incorporated into the safety assessment of an FCS. Such information may be used as part of an overall strategy for assessing the safety of an FCS or to help interpret safety test results.

Specifically in the case of GM foods, the FDA states:

In formulating FDA policy, we reviewed new foods under development through biotechnology, and found they shared certain common characteristics: (1) Recombinant DNA techniques are being used to introduce copies of one or a limited number of well-characterized genes into a desired food crop.
The introduced gene or genes then become integrated in the plant and are passed to successive generations of plants by the natural laws of genetics; (2) In most cases, these genes produce proteins, or proteins that modify fatty acids or carbohydrates in the plant, in other words, common food substances; and (3) The proteins, fatty acids, and carbohydrates introduced into food crops are well-characterized and not known to be toxic, and they would be digested to normal metabolites in the same manner that the body handles the thousands of different proteins, fats and carbohydrates that make up our diet today.

Since newly introduced substances in foods derived using recombinant DNA techniques would be proteins, fats or carbohydrates, we then examined the safety questions that should be addressed before products reach the market. We identified four broad safety issues that should be evaluated: (1) […] consumption; (2) the need to ensure that the changes in the food, such as the level of natural toxins in the food, if any, stay within normal safe levels; (3) the need to ensure that significant nutrients stay within normal range; and (4) the need to analyze the potential for introduced proteins to cause allergic reactions. We incorporated these and other issues into a comprehensive guidance to industry that is central to our policy.
{}
{'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/7670', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/3189/']}
jdg_370024
stackexchange
llm_judgeable_groundtruth_similarity
397399
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm wondering if I have cheated here? A sporting team, 41 games, 45% chance of winning a home game, 30% chance of winning away game. 18 home games, 23 away games. Assume that each game is independent What is the probability that they would win at most 11/41 games? I thought this sounds like a binomial problem, expected to win 18 x 0.45 + 23 x 0.3 = 15. 15/41 = 0.366 Then the solution is to look at the CDF of B(41,0.366) where X <= 11? Now provide the response and nothing else.
The approach you describe is misleading. It assumes the probability of winning a particular game, independent of whether home or away, is 0.366. While the expected number of wins you derived (15) is correct (since the expectation operator is linear and therefore $E(X+Y)=E(X)+E(Y)$), it does not help address questions requiring the distribution function to be answered, such as determining $P(W\leq 11)$.

Try instead to derive the probability mass function (pmf) of a new variable $W$, denoting total wins, that is the sum of two independent binomials, $X$ and $Y$, which denote home and away wins, respectively. Then we can write the pmf for $W$ as $$P(W= w) = \sum_{k=0}^{w} P(X =k)P(Y=w-k),$$ which is found by simply adding the probabilities of each possible way to observe $w$ wins, noting that $(X =k)$ and $(Y=w-k)$ are independent events. $P(X=k)$ and $P(Y=w-k)$ are the (18, .45) and (23, .3) binomial pmfs evaluated at $k$ and $w-k$, respectively.

To obtain the cdf for $W$, we add the values of the pmf of $W$ from 0 up to the value of interest. This is given by $$P(W \leq w) = \sum_{z=0}^w P(W= z) =\sum_{z=0}^w \sum_{k=0}^{z} P(X =k)P(Y=z-k).$$ Evaluating this at $w=11$ gives $$P(W \leq 11) =\sum_{z=0}^{11} \sum_{k=0}^{z} P(X =k)P(Y=z-k).$$

To clarify any possible confusion about notation, $$P(X=k)= {18 \choose k} (.45)^k (.55)^{18-k}$$ if $k\in \{0,1,2,...,18\}$ and zero otherwise. Similarly, $$P(Y=z-k)= {23 \choose z-k} (.3)^{z-k} (.7)^{23-z+k}$$ if $(z-k) \in \{0,1,2,...,23\}$ and zero otherwise.
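The double sum above is easy to evaluate numerically with the standard library; this short sketch uses `math.comb` (Python 3.8+) and checks itself against the known mean of 15 wins:

```python
from math import comb

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Binomial(n, p); zero outside the support."""
    if k < 0 or k > n:
        return 0.0
    return comb(n, k) * p**k * (1 - p)**(n - k)

def win_cdf(w, n_home=18, p_home=0.45, n_away=23, p_away=0.30):
    """P(W <= w) where W = X + Y, X ~ Bin(18, .45) home wins and
    Y ~ Bin(23, .30) away wins are independent (the convolution above)."""
    total = 0.0
    for z in range(w + 1):
        for k in range(z + 1):
            total += binom_pmf(n_home, p_home, k) * binom_pmf(n_away, p_away, z - k)
    return total

# P(at most 11 wins in the 41-game season)
print(round(win_cdf(11), 4))
```

Summing the pmf over the whole support recovers 1, and the implied mean is exactly 18(0.45) + 23(0.30) = 15, which confirms the convolution, unlike the single-binomial shortcut, preserves the correct distribution and not just the mean.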
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/397399', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/81334/']}
jdg_370025
stackexchange
llm_judgeable_groundtruth_similarity
3741863
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to have a layout a bit like this: [text label][edittext]. I can't find an attribute like an HTML-esque label. It seems to be the case that I need to use [TextView][EditText], but I can't get them to go on the same line. This is my xml file:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/boat_1"/>
    <EditText
        android:id="@+id/entry"
        android:hint="@string/IRC"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background"/>
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/boat_2"/>
    <EditText
        android:id="@+id/entry"
        android:hint="@string/IRC"
        android:minWidth="100dip"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background"/>
    <Button
        android:id="@+id/close"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="@string/title_close" />
</LinearLayout>

Now provide the response and nothing else.
You have basically two options:

Option 1: Use nested LinearLayouts:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">

    <LinearLayout
        android:orientation="horizontal"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content">
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/boat_1"/>
        <EditText
            android:id="@+id/entry"
            android:hint="@string/IRC"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:background="@android:drawable/editbox_background"/>
    </LinearLayout>

    <LinearLayout
        android:orientation="horizontal"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content">
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/boat_2"/>
        <EditText
            android:id="@+id/entry"
            android:hint="@string/IRC"
            android:minWidth="100dip"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:background="@android:drawable/editbox_background"/>
    </LinearLayout>

    <Button
        android:id="@+id/close"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:text="@string/title_close" />
</LinearLayout>

Notice that I'm using android:orientation="horizontal" for those nested layouts.

Option 2: you can use another content manager, like RelativeLayout. The advantage of this is that you can avoid nesting, so your layout will be easier to read/maintain/inflate.
This is a brief example:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content">
    <TextView
        android:id="@+id/text_view_boat1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/boat_1"
        android:layout_alignParentLeft="true"
        android:layout_alignParentTop="true"/>
    <EditText
        android:id="@+id/entry"
        android:hint="@string/IRC"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background"
        android:layout_toRightOf="@+id/text_view_boat1"
        android:layout_alignParentTop="true"/>
    <TextView
        android:id="@+id/text_view_boat2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/boat_2"
        android:layout_alignParentLeft="true"
        android:layout_below="@id/text_view_boat1"/>
    <EditText
        android:id="@+id/entry2"
        android:hint="@string/IRC"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:background="@android:drawable/editbox_background"
        android:layout_toRightOf="@+id/text_view_boat2"
        android:layout_below="@id/entry"/>
</RelativeLayout>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3741863', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/296051/']}
jdg_370026
stackexchange
llm_judgeable_groundtruth_similarity
7674355
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am facing a small issue which I am not able to solve after trying so many things, so here it goes. There is a text box in my page in which I am entering a date, and I want that date in a DateTime object. For example, the date entered, 6/05/2020 (dd/MM/yyyy), should stay in the same format when I am accessing it in the DateTime object, but it is getting changed to MM/dd/yyyy format. I hope I am making sense here; all I want is something like this:

DateTime dt = Convert.ToDateTime(txtDate.Text);

dt should be (11/2/2010 rather than 2/11/2010). @Oded: after using the following code

DateTime sDate, eDate = new DateTime(); // To modify dates for our use.
DateTime.TryParseExact(txtFrom.Text, "dd/MM/yyyy", CultureInfo.InvariantCulture, DateTimeStyles.None, out sDate);
DateTime.TryParseExact(txtFrom.Text, "dd/MM/yyyy", CultureInfo.InvariantCulture, DateTimeStyles.None, out eDate);

what I am getting in eDate and sDate is 6/05/2020 12:00:00 AM, where it should be 6/05/2020.

Now provide the response and nothing else.
EDIT: This value: "11/2/2010" doesn't match the format "dd/MM/yyyy". It matches the format "d/M/yyyy" - for "dd/MM/yyyy" it should be "11/02/2010". That's why TryParseExact is failing for you. You need to pick the right format pattern.

A DateTime value doesn't have a format. It just represents date and time (in the ISO calendar, and possibly in different time zones, but that's a different matter). It's like an int - it doesn't represent "a decimal integer" or "a hex integer" - it's just an integer within a particular range. You can format a number as decimal or hex, but it doesn't inherently have a format.

It sounds like you should parse it with ParseExact to specify the format when converting from the textbox, or probably TryParseExact:

// This is assuming you're absolutely sure of the format used. This is *not*
// necessarily the user's preferred format. You should think about where your
// data is coming from.
DateTime date;
if (DateTime.TryParseExact(text, "dd/MM/yyyy", CultureInfo.InvariantCulture,
                           DateTimeStyles.None, out date))
{
    // Okay, successful parse. We now have the date. Use it, avoiding formatting
    // it back to a string for as long as possible.
}

You should keep that value as a DateTime for all purposes except giving it back to a user - at which point you may well want to use their cultural settings. In particular, if you're storing the value in a database you should not convert it to text and include it in a SQL statement - that's asking for trouble. Instead, use a parameterized SQL statement and set it as the parameter value, still as a DateTime.
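The same distinction holds in other languages. As a quick Python analogy: a datetime object, like a .NET DateTime, carries no format of its own; format strings only describe text at the parse and format boundaries:

```python
from datetime import datetime

# Parsing: the format string describes the *input text*, not the object.
d = datetime.strptime("11/02/2010", "%d/%m/%Y")
print(d.year, d.month, d.day)  # 2010 2 11 -- the object is just a date

# Formatting: the same object can be rendered either way, only on output.
print(d.strftime("%d/%m/%Y"))  # 11/02/2010
print(d.strftime("%m/%d/%Y"))  # 02/11/2010
```

So "keeping the date in dd/MM/yyyy" is not a property you can give the object itself; you choose the format only when you turn it back into a string for display.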
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7674355', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/524987/']}
jdg_370027
stackexchange
llm_judgeable_groundtruth_similarity
2554185
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way in Python to access match groups without explicitly creating a match object (or another way to beautify the example below)? Here is an example to clarify my motivation for the question. The following Perl code

if ($statement =~ /I love (\w+)/) {
    print "He loves $1\n";
}
elsif ($statement =~ /Ich liebe (\w+)/) {
    print "Er liebt $1\n";
}
elsif ($statement =~ /Je t\'aime (\w+)/) {
    print "Il aime $1\n";
}

translated into Python

m = re.search("I love (\w+)", statement)
if m:
    print "He loves",m.group(1)
else:
    m = re.search("Ich liebe (\w+)", statement)
    if m:
        print "Er liebt",m.group(1)
    else:
        m = re.search("Je t'aime (\w+)", statement)
        if m:
            print "Il aime",m.group(1)

looks very awkward (if-else-cascade, match object creation).

Now provide the response and nothing else.
You could create a little class that returns the boolean result of calling match, and retains the matched groups for subsequent retrieval:

import re

class REMatcher(object):
    def __init__(self, matchstring):
        self.matchstring = matchstring

    def match(self, regexp):
        self.rematch = re.match(regexp, self.matchstring)
        return bool(self.rematch)

    def group(self, i):
        return self.rematch.group(i)

for statement in ("I love Mary",
                  "Ich liebe Margot",
                  "Je t'aime Marie",
                  "Te amo Maria"):
    m = REMatcher(statement)
    if m.match(r"I love (\w+)"):
        print "He loves", m.group(1)
    elif m.match(r"Ich liebe (\w+)"):
        print "Er liebt", m.group(1)
    elif m.match(r"Je t'aime (\w+)"):
        print "Il aime", m.group(1)
    else:
        print "???"

Update for Python 3: print as a function, and Python 3.8 assignment expressions - no need for a REMatcher class now:

import re

for statement in ("I love Mary",
                  "Ich liebe Margot",
                  "Je t'aime Marie",
                  "Te amo Maria"):
    if m := re.match(r"I love (\w+)", statement):
        print("He loves", m.group(1))
    elif m := re.match(r"Ich liebe (\w+)", statement):
        print("Er liebt", m.group(1))
    elif m := re.match(r"Je t'aime (\w+)", statement):
        print("Il aime", m.group(1))
    else:
        print("???")
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2554185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/136452/']}
jdg_370028
stackexchange
llm_judgeable_groundtruth_similarity
10572296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I will try to add a class using jQuery.

var tit = '.tit a';
var txt_animal = ['pig','horse','rabbit','cat','monkey'];
var txt_fruit = ['melon','apple','kiwi','orange'];
var txt_hobby = ['movie','ski','swim','dance','internet','baseball'];

$(txt_animal).each(function(i,v){
    $(tit + ':contains("'+ v +'")').addClass('tag tag_animal');
});
$(txt_fruit).each(function(i,v){
    $(tit + ':contains("'+ v +'")').addClass('tag tag_fruit');
});
$(txt_hobby).each(function(i,v){
    $(tit + ':contains("'+ v +'")').addClass('tag tag_hobby');
});

It works fine, but I think there is a better way than this. How do I optimize this javascript for the best performance?

Now provide the response and nothing else.
I used to run a screen printing studio (it was a fairly small one), and although I have never actually done colour separation printing, I am reasonably familiar with the principles. This is how I would approach it:

1. Split the image into C, M, Y, K.
2. Rotate each separated image by 0, 15, 30, and 45 degrees respectively.
3. Take the half-tone of each image (dot size will be proportional to the intensity).
4. Rotate back each half-toned image.

Now you have your colour separated images. As you mention, the rotation step reduces dot alignment issues (which would mess everything up), and things like Moiré pattern effects will be reasonably minimized. This should be pretty easy to code using PIL.

Update 2: I wrote some quick code that will do this for you; it also includes a GCR function (described below):

import Image, ImageDraw, ImageStat

def gcr(im, percentage):
    '''basic "Gray Component Replacement" function. Returns a CMYK image
       with percentage gray component removed from the CMY channels and put
       in the K channel, ie. for percentage=100,
       (41, 100, 255, 0) >> (0, 59, 214, 41)'''
    cmyk_im = im.convert('CMYK')
    if not percentage:
        return cmyk_im
    cmyk_im = cmyk_im.split()
    cmyk = []
    for i in xrange(4):
        cmyk.append(cmyk_im[i].load())
    for x in xrange(im.size[0]):
        for y in xrange(im.size[1]):
            gray = min(cmyk[0][x,y], cmyk[1][x,y], cmyk[2][x,y]) * percentage / 100
            for i in xrange(3):
                cmyk[i][x,y] = cmyk[i][x,y] - gray
            cmyk[3][x,y] = gray
    return Image.merge('CMYK', cmyk_im)

def halftone(im, cmyk, sample, scale):
    '''Returns list of half-tone images for cmyk image. sample (pixels)
       determines the sample box size from the original image. The maximum
       output dot diameter is given by sample * scale (which is also the
       number of possible dot sizes). So sample=1 will preserve the original
       image resolution, but scale must be >1 to allow variation in dot size.'''
    cmyk = cmyk.split()
    dots = []
    angle = 0
    for channel in cmyk:
        channel = channel.rotate(angle, expand=1)
        size = channel.size[0]*scale, channel.size[1]*scale
        half_tone = Image.new('L', size)
        draw = ImageDraw.Draw(half_tone)
        for x in xrange(0, channel.size[0], sample):
            for y in xrange(0, channel.size[1], sample):
                box = channel.crop((x, y, x + sample, y + sample))
                stat = ImageStat.Stat(box)
                diameter = (stat.mean[0] / 255)**0.5
                edge = 0.5*(1-diameter)
                x_pos, y_pos = (x+edge)*scale, (y+edge)*scale
                box_edge = sample*diameter*scale
                draw.ellipse((x_pos, y_pos, x_pos + box_edge, y_pos + box_edge),
                             fill=255)
        half_tone = half_tone.rotate(-angle, expand=1)
        width_half, height_half = half_tone.size
        xx = (width_half - im.size[0]*scale) / 2
        yy = (height_half - im.size[1]*scale) / 2
        half_tone = half_tone.crop((xx, yy, xx + im.size[0]*scale,
                                    yy + im.size[1]*scale))
        dots.append(half_tone)
        angle += 15
    return dots

im = Image.open("1_tree.jpg")
cmyk = gcr(im, 0)
dots = halftone(im, cmyk, 10, 1)
im.show()
new = Image.merge('CMYK', dots)
new.show()

This will turn this:

[original tree image]

into this (blur your eyes and move away from the monitor):

[half-toned tree image]

Note that the image sampling can be pixel by pixel (thus preserving the resolution of the original image in the final image). Do this by setting sample=1, in which case you need to set scale to a larger number so that there are a number of possible dot sizes. This will also result in a larger output image size (original image size * scale ** 2, so watch out!).

By default when you convert from RGB to CMYK the K channel (the black channel) is empty. Whether you need the K channel or not depends upon your printing process. There are various possible reasons you might want it: getting a better black than the overlap of CMY, saving ink, improving drying time, reducing ink bleed, etc. Anyhow I've also written a little Gray Component Replacement function, GCR, so you can set the percentage of the K channel you want to replace the CMY overlap with (I explain this a little further in the code comments). Here are a couple of examples to illustrate, processing the letter F from the image, with sample=1 and scale=8, so fairly high resolution.

The 4 CMYK channels, with percentage=0, so empty K channel:

[four channel images]

combine to produce:

[combined image]

CMYK channels, with percentage=100, so the K channel is used. You can see the cyan channel is fully suppressed, and the magenta and yellow channels use a lot less ink, in the black band at the bottom of the image:

[four channel images]
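The per-pixel arithmetic the GCR step performs can be checked without PIL. This standalone Python 3 sketch (using `//` where the original Python 2 code relies on integer `/`) reproduces the example given in the gcr docstring:

```python
def gcr_pixel(c, m, y, percentage):
    """Gray component replacement for one CMY pixel: move `percentage`
    of the shared gray component (the minimum of C, M, Y) into K."""
    gray = min(c, m, y) * percentage // 100
    return (c - gray, m - gray, y - gray, gray)

print(gcr_pixel(41, 100, 255, 100))  # (0, 59, 214, 41), as in the docstring
print(gcr_pixel(41, 100, 255, 0))    # (41, 100, 255, 0) -- no replacement
```

With percentage=100 the cyan value drops to zero whenever cyan is the smallest channel, which is exactly the suppression visible in the letter-F example above.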
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10572296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1382616/']}
jdg_370029
stackexchange
llm_judgeable_groundtruth_similarity
11728209
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating an application to find the most used color of an image. I have got as far as getting the RGB value of the color, but how do I get the color name? Now provide the response and nothing else.
As noted in comments, the KnownColor enumeration can be used to make this simpler:

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Reflection;

class Test
{
    static void Main()
    {
        Color color = Color.FromArgb(255, 0, 0);
        Console.WriteLine(color.Name); // ffff0000

        var colorLookup = Enum.GetValues(typeof(KnownColor))
                              .Cast<KnownColor>()
                              .Select(Color.FromKnownColor)
                              .ToLookup(c => c.ToArgb());

        // There are some colours with multiple entries...
        foreach (var namedColor in colorLookup[color.ToArgb()])
        {
            Console.WriteLine(namedColor.Name);
        }
    }
}

Original answer

Color.FromArgb will give you a Color, but it will never have a name. You need to use reflection to get the named colours, as far as I'm aware. Here's another version of Cole Campbell's solution which I was working up at the same time...

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Reflection;

class Test
{
    static void Main()
    {
        Color color = Color.FromArgb(255, 0, 0);
        Console.WriteLine(color.Name); // ffff0000

        var colorLookup = typeof(Color)
            .GetProperties(BindingFlags.Public | BindingFlags.Static)
            .Select(f => (Color) f.GetValue(null, null))
            .Where(c => c.IsNamedColor)
            .ToLookup(c => c.ToArgb());

        // There are some colours with multiple entries...
        foreach (var namedColor in colorLookup[color.ToArgb()])
        {
            Console.WriteLine(namedColor.Name);
        }
    }
}
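The reverse-lookup idea is language-independent: build a map from RGB values to names once, then query it. A small Python sketch of the same structure (the colour table here is a hand-picked illustrative subset, not a complete named-colour list):

```python
# Build an RGB -> names lookup from a (small, illustrative) table of
# named colours, then resolve an arbitrary RGB triple to its name(s).
NAMED_COLORS = {
    "Red":    (255, 0, 0),
    "Lime":   (0, 255, 0),
    "Blue":   (0, 0, 255),
    "Yellow": (255, 255, 0),
}

lookup = {}
for name, rgb in NAMED_COLORS.items():
    # several names may share one RGB value, so collect them in a list
    lookup.setdefault(rgb, []).append(name)

print(lookup.get((255, 0, 0), []))  # ['Red']
print(lookup.get((1, 2, 3), []))    # [] -- most RGB triples have no name
```

As the empty result for (1, 2, 3) shows, an exact lookup only works for the handful of RGB values that actually have names; for arbitrary colours you would instead search for the nearest named colour by distance.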
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11728209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1561474/']}
jdg_370030
stackexchange
llm_judgeable_groundtruth_similarity
15568609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Haskell, if I have a lambda that looks like the following (\x -> doStuff x y) where y is from the surrounding scope, I could section it and turn it into (`doStuff` y) which is shorter and more concise (and one of the things I love the most about Haskell). Now, in Common Lisp I would write the equivalent code as (lambda (x) (do-stuff x y)) And this is actually a very common thing for me to be writing, but I feel even that tiny bit of boilerplate bothers me somewhat, so I wonder if there is a way to get something like the Haskell-style sections in Common Lisp? Now provide the response and nothing else.
Unless you are more experienced, I would propose that you learn to write Lisp in Lisp, not how to write Haskell in Lisp. The latter is not a good idea. Haskell works very differently. Lisp does not do any 'currying' (or schönfinkeling ;-) ). You can write it as:

CL-USER 5 > (defun curry (fn arg)
              (lambda (&rest args)
                (apply fn arg args)))
CURRY

CL-USER 6 > (mapcar (curry #'expt 2) '(2 3 4 5 6))
(4 8 16 32 64)

It costs a bit of efficiency that way, though.

CL-USER 7 > (mapcar (lambda (base) (expt base 2)) '(2 3 4 5 6))
(4 9 16 25 36)

I personally prefer the latter, because I have a real readable name for the variable. This helps in a debugger, where I then see a backtrace. Tools like these are probably more important in Lisp than in Haskell.

CL-USER 12 > (mapcar (lambda (base) (expt base 2)) '(2 3 "four" 5 6))

This signals an error. Let's look at the backtrace:

CL-USER 12 : 1 > :bb
...
Condition: In EXPT of ("four" 2) arguments should be of type NUMBER.
Call to SYSTEM::ARGS-TO-BINARY-ARITHMETIC-FN-NOT-OF-TYPE {offset 189}
  SYSTEM::FN-NAME : EXPT
  SYSTEM::ARG1    : "four"
  SYSTEM::ARG2    : 2
  TYPE {Closing}  : NUMBER
Interpreted call to (SUBFUNCTION :ANONYMOUS SYSTEM::ANONYMOUS-LAMBDA):
  BASE : "four"

Now I can see that the thing has a name. I was passing the string "four" to the function through a variable named base. Interactive development with a REPL and debugging tools is common. Best prepare the code to be useful for this development style. Common Lisp is not optimized to provide full-program compilers with extensive type checking, like Haskell. One of the main problems of Lisp is that it can be very hard to find out what a piece of code really does. The default (strict functional programs with prefix syntax) is relatively easy to understand. But there are many possibilities to change the meaning of code in Lisp (macros, read macros, symbol macros, the Meta Object Protocol, advising, ...). First rule: if you are writing basic Lisp code, stick with the basic syntactic and semantic possibilities.
Write defensively. Expect that someone else needs to understand the code. For that the code should be readable, easy to understand, use common idioms and it should be debuggable. In Haskell many people with math background want to write code in a very compact way with a high level of abstraction. You can do that in Lisp, too. But for ordinary code I would not go that route and for larger pieces of code, Lisp often uses other mechanisms (code transformations via macros, ...).
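For readers coming from other languages: the two-argument curry helper above is ordinary partial application, which some standard libraries ship directly. A cross-language sketch in Python (the names here are mine, not from the answer):

```python
from functools import partial

# partial(f, x) builds a callable with x pre-bound as the first argument,
# just like the two-argument CURRY helper above.
two_to_the = partial(pow, 2)   # analogous to (curry #'expt 2): computes 2 ** n

print([two_to_the(n) for n in [2, 3, 4, 5, 6]])  # [4, 8, 16, 32, 64]
```

As in the Lisp version, only the first argument is bound; any remaining arguments are passed through at call time.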
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15568609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/605922/']}
jdg_370031
stackexchange
llm_judgeable_groundtruth_similarity
15503139
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand that icon fonts are just fonts and that you can get the icons by just calling their classname, but how do icon fonts work? I've tried checking the related icon font resources loaded in Chrome to see how icon fonts display icons (in comparison to general fonts) but I haven't been able to figure out how this happens. I've also been unsuccessful in finding resources on how this "icon font technique" is done, even though there are loads of icon fonts available . There are also loads of resources showing how icon fonts can be integrated , but no one seems to be sharing or writing about how this is done! Now provide the response and nothing else.
Glyphicons are images, not a font. All the icons are found within a sprite image (also available as individual images) and they are added to elements as positioned background-images. Actual font icons ( FontAwesome , for instance) do involve downloading a specific font and make use of the content property, for instance:

@font-face {
  ...
  src: url('../font/fontawesome-webfont.eot?#iefix&v=3.0.1') format('embedded-opentype'),
       url('../font/fontawesome-webfont.woff?v=3.0.1') format('woff'),
       url('../font/fontawesome-webfont.ttf?v=3.0.1') format('truetype');
  ...
}

.icon-beer:before {
  content: "\f0fc";
}

As the content property isn't supported in older browsers, these also make use of images. Here's an example of completely raw FontAwesome in use as a font, turning &#xf0f9; ( - you may not be able to see this!) into an ambulance: http://jsfiddle.net/GWqcF/2
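A side note on why the content trick works: escapes like \f0fc are just Unicode code points, and icon fonts typically place their glyphs in the Private Use Area, where ordinary text fonts define nothing. A quick Python check (illustrative only):

```python
# "\f0fc" in CSS (or &#xf0fc; in HTML) names the code point U+F0FC.
# Icon fonts usually map glyphs into the BMP Private Use Area
# (U+E000..U+F8FF), so they don't collide with real characters.
beer_icon = chr(0xF0FC)
in_private_use_area = 0xE000 <= ord(beer_icon) <= 0xF8FF
print(f"U+{ord(beer_icon):04X}", in_private_use_area)  # U+F0FC True
```

This is also why the icon renders as a blank or a box when the font fails to load: the code point itself carries no visible glyph.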
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15503139', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/862216/']}
jdg_370032
stackexchange
llm_judgeable_groundtruth_similarity
4656214
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For quite a while I'd been looking into a way in my iPhone app to poll every X minutes to check the data counters. After much reading of the Background Execution documentation and a few trial apps I'd dismissed this as impossible without abusing the background APIs. Last week I found this application which does exactly that. http://itunes.apple.com/us/app/dataman-real-time-data-usage/id393282873?mt=8 It runs in the background and keeps track of the count of Cellular/WiFi data you've used. I suspect that the developer is registering his app as tracking location changes but the location services icon isn't visible while the app is running, which I thought was a requirement. Does anyone have any clues as to how this can be accomplished? Now provide the response and nothing else.
I have seen this behavior, too. After trying a lot I discovered two things which could help. But I am still uncertain how this may influence the reviewing process. If you use one of the backgrounding features, the app will be launched by iOS in the background again once it was quit (by the system). This we will abuse later. In my case I used VoIP backgrounding, enabled in my plist. All the code here is done in your AppDelegate:

// If the iOS device allows background execution,
// this handler will be called.
- (void)backgroundHandler {
    NSLog(@"### -->VOIP backgrounding callback");

    // Try to do sth. According to Apple we have ONLY 30 seconds to perform this task!
    // Else the application will be terminated!
    UIApplication *app = [UIApplication sharedApplication];
    NSArray *oldNotifications = [app scheduledLocalNotifications];

    // Clear out the old notification before scheduling a new one.
    if ([oldNotifications count] > 0)
        [app cancelAllLocalNotifications];

    // Create a new notification.
    UILocalNotification *alarm = [[[UILocalNotification alloc] init] autorelease];
    if (alarm) {
        alarm.fireDate = [NSDate date];
        alarm.timeZone = [NSTimeZone defaultTimeZone];
        alarm.repeatInterval = 0;
        alarm.soundName = @"alarmsound.caf";
        alarm.alertBody = @"Don't Panic! This is just a Push-Notification Test.";
        [app scheduleLocalNotification:alarm];
    }
}

and the registration is done in

- (void)applicationDidEnterBackground:(UIApplication *)application {
    // This is where you can do your X minutes, if >= 10 minutes is okay.
    BOOL backgroundAccepted = [[UIApplication sharedApplication]
        setKeepAliveTimeout:600 handler:^{ [self backgroundHandler]; }];
    if (backgroundAccepted) {
        NSLog(@"VOIP backgrounding accepted");
    }
}

Now the magic happens: I don't even use VoIP sockets. But this 10-minute callback provides a nice side effect: after 10 minutes (sometimes earlier) I discovered that my timers and previously running threads are being executed for a short while. You can see this if you place some NSLog(..) calls into your code. This means that this short "wakeup" executes the code for a while. According to Apple we have 30 seconds of execution time left. I assume that background code like threads is being executed for nearly 30 seconds. This is useful if you must "sometimes" check something. The doc says that all background tasks (VoIP, audio, location updates) will be automatically restarted in the background if the app was terminated. VoIP apps will be started in the background automatically after bootup! By abusing this behavior, you can make your app look like it is running "forever". Register for one background mode (i.e. VoIP). This will cause your app to be restarted after termination. Now write some "task has to be finished" code. According to Apple you have some time (5 seconds?) left to finish tasks. I discovered that this must be CPU time. So that means: if you do nothing, your app is still being executed! Apple suggests calling an expiration handler when you are finished with your work. In the code below you can see that I have commented out the expirationHandler calls. This will keep your app running as long as the system allows it to run. All timers and threads stay running until iOS terminates your app.

- (void)applicationDidEnterBackground:(UIApplication *)application {
    UIApplication *app = [UIApplication sharedApplication];
    bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
        [app endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];

    // Start the long-running task and return immediately.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // You can do sth. here, or simply do nothing!
        // All your background threads and timers are still being executed.
        while (background) {
            [self doSomething];
            // This is where you can do your "X minutes" in seconds (here 10).
            sleep(10);
        }

        // And never call the expirationHandler, so your app runs
        // until the system terminates our process:
        //[app endBackgroundTask:bgTask];
        //bgTask = UIBackgroundTaskInvalid;
    });
}

Be very sparing with CPU time here, and your app runs longer! But one thing is for sure: your app will be terminated after a while. But because you registered your app as VoIP or one of the other background modes, the system restarts the app in the background, which will restart your background process ;-) With this ping-pong I can do a lot of backgrounding. But remember to be very sparing with CPU time. And save all data needed to restore your views - your app will be terminated some time later. To make it appear still running, you must jump back into your last "state" after wakeup. I don't know if this is the approach of the apps you mentioned before, but it works for me. Hope I could help.

Update: After measuring the time of the BG task, there was a surprise. The BG task is limited to 600 seconds. This is exactly the minimum keep-alive time for VoIP (setKeepAliveTimeout:600).
So THIS code leads into "infinite" execution in background:

Header:

UIBackgroundTaskIdentifier bgTask;

Code:

// If the iOS device allows background execution,
// this handler will be called.
- (void)backgroundHandler {
    NSLog(@"### -->VOIP backgrounding callback");
    UIApplication *app = [UIApplication sharedApplication];
    bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
        [app endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];

    // Start the long-running task.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        while (1) {
            NSLog(@"BGTime left: %f", [UIApplication sharedApplication].backgroundTimeRemaining);
            [self doSomething];
            sleep(1);
        }
    });
}

- (void)applicationDidEnterBackground:(UIApplication *)application {
    BOOL backgroundAccepted = [[UIApplication sharedApplication]
        setKeepAliveTimeout:600 handler:^{ [self backgroundHandler]; }];
    if (backgroundAccepted) {
        NSLog(@"VOIP backgrounding accepted");
    }

    UIApplication *app = [UIApplication sharedApplication];
    bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
        [app endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
    }];

    // Start the long-running task.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        while (1) {
            NSLog(@"BGTime left: %f", [UIApplication sharedApplication].backgroundTimeRemaining);
            [self doSomething];
            sleep(1);
        }
    });
}

After your app has timed out, the VoIP expirationHandler will be called, where you simply restart a long-running task. This task will be terminated after 600 seconds. But there will again be a call to the expiration handler, which starts another long-running task, etc. Now you only have to check whether the app is getting back to the foreground. Then close the bgTask, and you're done. Maybe one can do sth. like this inside the expirationHandler of the long-running task. Just try it out. Use your console to see what happens... Have fun!

Update 2: Sometimes simplifying things helps.
My new approach is this one:

- (void)applicationDidEnterBackground:(UIApplication *)application {
    UIApplication *app = [UIApplication sharedApplication];

    // It's better to move "dispatch_block_t expirationHandler"
    // into your header file and initialize the block somewhere else, i.e.
    //
    // - (void)applicationDidFinishLaunching:(UIApplication *)application {
    //     expirationHandler = ^{ ... };
    // }
    //
    // because your app may crash if you initialize expirationHandler twice.
    dispatch_block_t expirationHandler;
    expirationHandler = ^{
        [app endBackgroundTask:bgTask];
        bgTask = UIBackgroundTaskInvalid;
        bgTask = [app beginBackgroundTaskWithExpirationHandler:expirationHandler];
    };

    bgTask = [app beginBackgroundTaskWithExpirationHandler:expirationHandler];

    // Start the long-running task and return immediately.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Inform others to stop tasks, if you like.
        [[NSNotificationCenter defaultCenter]
            postNotificationName:@"MyApplicationEntersBackground" object:self];

        // Do your background work here.
    });
}

This works without the VoIP hack. According to the documentation, the expiration handler (in this case my 'expirationHandler' block) will be executed when execution time is over. By assigning the block to a block variable, one can recursively start the long-running task again within the expiration handler. This leads into endless execution, too. Be aware to terminate the task if your application enters the foreground again, and terminate the task if you don't need it anymore. From my own experience I measured something: using the location callbacks with the GPS radio on drains my battery very quickly. Using the approach which I posted in Update 2 takes nearly no energy. According to the "user experience" this is a better approach. Maybe other apps work like this, hiding their behavior behind GPS functionality ...
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4656214', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/72477/']}
jdg_370033
stackexchange
llm_judgeable_groundtruth_similarity
155363
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Am at a complete loss with this one. Recently a number of my powershell scripts have started failing as they are unable to find the command dnscmd.exe. What has me at a loss is that the executable exists and works and I can run it just fine in the command prompt. I have tried the following in powershell to run the command: dnscmd & dnscmd & dnscmd.exe & c:\windows\system32\dnscmd.exe All return "The term dnscmd is not recognized as the name of a cmdlet, function,script file or operable program...." Can anyone enlighten me as to why powershell is completely unable to see the command, where the normal command prompt/windows explorer etc.. can? Using powershell 2. Now provide the response and nothing else.
This sounds like the work of the File System Redirector to me. Are you running on a 64bit OS? For example, when you make a call to C:\Windows\system32\dnscmd.exe on a 64bit OS using PowerShell (x86), this will be redirected to C:\Windows\SysWow64\dnscmd.exe. The redirection occurs at the point the folder is accessed so even though you are explicitly providing the path to the executable, you'll still be diverted. This can be disabled on the current thread via calls to native Windows APIs or can be avoided through the use of certain variables and aliases. More information here: http://msdn.microsoft.com/en-us/library/aa384187%28VS.85%29.aspx
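As a toy model of the rule described above (purely illustrative - the real redirector lives in the OS, has exemptions, and offers a Sysnative escape hatch for 32-bit processes):

```python
SYSTEM32 = "c:\\windows\\system32"

def effective_path(requested, process_is_32bit=True, os_is_64bit=True):
    # Simplified model of WOW64 file-system redirection: a 32-bit process
    # on a 64-bit OS that asks for System32 is silently handed SysWOW64.
    if os_is_64bit and process_is_32bit and requested.lower().startswith(SYSTEM32):
        return requested[:len("c:\\windows\\")] + "SysWOW64" + requested[len(SYSTEM32):]
    return requested

print(effective_path("C:\\Windows\\System32\\dnscmd.exe"))         # redirected to SysWOW64
print(effective_path("C:\\Windows\\System32\\dnscmd.exe", False))  # 64-bit process: untouched
```

The point being modeled: the explicit path you typed is not the path a 32-bit PowerShell actually opens, which is why dnscmd.exe appears to vanish.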
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/155363', 'https://serverfault.com', 'https://serverfault.com/users/54805/']}
jdg_370034
stackexchange
llm_judgeable_groundtruth_similarity
4026924
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was solving the math Olympiad of Belgium the year $2005$ , and the last problem was : If $n$ is an integer then find all values for $n$ for which $\sqrt{n}+\sqrt{n+2005}$ is an integer as well. My question is just to generalize this problem, so instead of $2005$ , we can solve this over all $x\in \mathbb{N}$ . My Attempt: If you have solved the problem you will realized that this equation : $$\sqrt{n}+\sqrt{n+x}=m$$ Has solution if and only if $m\mid x$ . But I am not sure if there are some other restrictions in this equation to has a solution. Now provide the response and nothing else.
You indicated that the problem Let $k$ be a positive integer and let $p$ be a prime other than $2$ or $5$ . Show that the only solutions, up to congruence, of $x^2 \equiv 25\;(\text{mod}\;p^k)$ are $x \equiv \pm 5\;(\text{mod}\;p^k)$ . is the underlying problem which you are trying to solve. For the above problem, there is no need to consider quadratic residues. Instead, we can argue as follows . . . Suppose $x$ is an integer such that $x^2 \equiv 25\;(\text{mod}\;p^k)$ . Note that $x+5$ and $x-5$ can't both be divisible by $p$ , else their difference $$(x+5)-(x-5)=10$$ would be divisible by $p$ , contrary to $p\ne 2,5$ . \begin{align*}\text{Then}\;\;&x^2 \equiv 25\;(\text{mod}\;p^k)\\[4pt]\implies\;&p^k{\,\mid\,}x^2-25\\[4pt]\implies\;&p^k{\,\mid\,}(x+5)(x-5)\\[4pt]\implies\;&p{\,\mid\,}(x+5)(x-5)\\[4pt]\implies\;&p{\,\mid\,}(x+5)\;\text{or}\;p{\,\mid\,}(x-5)\;\text{but not both}\\[4pt]\end{align*} Then since $p^k{\,\mid\,}(x+5)(x-5)$ and exactly one of $x+5,x-5$ is divisible by $p$ , it follows that exactly one of $x+5,x-5$ is divisible by $p^k$ . Therefore $x \equiv \pm 5\;(\text{mod}\;p^k)$ , as was to be shown.
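The argument above can be sanity-checked by brute force for small prime powers; a quick Python sketch:

```python
def square_roots_of_25(m):
    # All residues x with x^2 ≡ 25 (mod m), by exhaustive search.
    return sorted(x for x in range(m) if (x * x - 25) % m == 0)

# For primes p other than 2 and 5, the only solutions mod p^k are ±5.
for p, k in [(3, 3), (7, 2), (11, 2), (13, 1)]:
    m = p ** k
    assert square_roots_of_25(m) == sorted({5 % m, -5 % m}), (p, k)

# Contrast with p = 2, where extra roots appear, e.g. mod 2^5:
print(square_roots_of_25(32))  # [5, 11, 21, 27]
```

The mod-32 case shows why the hypothesis p ≠ 2, 5 is genuinely needed: without it, x+5 and x-5 can share a factor of p and the root-counting argument breaks down.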
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4026924', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/873280/']}
jdg_370035
stackexchange
llm_judgeable_groundtruth_similarity
33512718
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to replicate this form http://sunzhen.blogspot.com/2014/08/angularjs-contact-form-with-bootstrap.html and have been able to get the emails to send but am not getting the error and success messages from the contact-form.php file. I had to change around the file structure to get it to fit within my site, but that shouldn't affect anything. I have found that using Angular 1.2.16 like they are using, I get the success message but when using the latest version of Angular I do not. Can anyone tell if something has changed in newer versions of Angular that would cause this behavior? Here is the code http://plnkr.co/edit/Skj5r6WFfDQajkPuyzxT <?phperror_reporting(E_ALL);ini_set('display_errors', '1');require_once 'phpmailer/PHPMailerAutoload.php';if (isset($_POST['inputName']) && isset($_POST['inputEmail']) && isset($_POST['inputSubject']) && isset($_POST['inputMessage'])) { //check if any of the inputs are empty if (empty($_POST['inputName']) || empty($_POST['inputEmail']) || empty($_POST['inputSubject']) || empty($_POST['inputMessage'])) { $data = array('success' => false, 'message' => 'Please fill out the form completely.'); echo json_encode($data); exit; } //create an instance of PHPMailer $mail = new PHPMailer(); $mail->From = $_POST['inputEmail']; $mail->FromName = $_POST['inputName']; $mail->AddAddress('jfoutch23@gmail.com'); //recipient $mail->Subject = $_POST['inputSubject']; $mail->Body = "Name: " . $_POST['inputName'] . "\r\n\r\nMessage: " . stripslashes($_POST['inputMessage']); if (isset($_POST['ref'])) { $mail->Body .= "\r\n\r\nRef: " . $_POST['ref']; } if(!$mail->send()) { $data = array('success' => false, 'message' => 'Message could not be sent. Mailer Error: ' . $mail->ErrorInfo); echo json_encode($data); exit; } $data = array('success' => true, 'message' => 'Thanks! 
We have received your message.'); echo json_encode($data);} else { $data = array('success' => false, 'message' => 'Please fill out the form completely.'); echo json_encode($data); } controller.js app.controller('ContactController', function ($scope, $http) {$scope.result = 'hidden'$scope.resultMessage;$scope.formData; //formData is an object holding the name, email, subject, and message$scope.submitButtonDisabled = false;$scope.submitted = false; //used so that form errors are shown only after the form has been submitted$scope.submit = function(contactform) { $scope.submitted = true; $scope.submitButtonDisabled = true; if (contactform.$valid) { $http({ method : 'POST', url : 'contact-form.php', data : $.param($scope.formData), //param method from jQuery headers : { 'Content-Type': 'application/x-www-form-urlencoded' } //set the headers so angular passing info as form data (not request payload) }).success(function(data){ console.log(data); if (data.success) { //success comes from the return json object $scope.submitButtonDisabled = true; $scope.resultMessage = data.message; $scope.result='bg-success'; } else { $scope.submitButtonDisabled = false; $scope.resultMessage = data.message; $scope.result='bg-danger'; } }); } else { $scope.submitButtonDisabled = false; $scope.resultMessage = 'Failed :( Please fill out all the fields.'; $scope.result='bg-danger'; } }}); HTML <!DOCTYPE html><html><head> <title>AngularJS Contact Form with Bootstrap and PHPMailer</title> <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" /> <link rel="stylesheet" href="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap-theme.min.css" /> <link rel="stylesheet" href="style.css" /> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script> <script 
src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.js"></script> <script src="app.js"></script> <script src="controllers.js"></script></head><body ng-app="contactApp"> <div class="vertical-middle"> <div class="container"> <div class="panel panel-default"> <div class="panel-heading"> <h2 class="panel-title">Contact Form</h2> </div> <div ng-controller="ContactController" class="panel-body"> <form ng-submit="submit(contactform)" name="contactform" method="post" action="" class="form-horizontal" role="form"> <div class="form-group" ng-class="{ 'has-error': contactform.inputName.$invalid && submitted }"> <label for="inputName" class="col-lg-2 control-label">Name</label> <div class="col-lg-10"> <input ng-model="formData.inputName" type="text" class="form-control" id="inputName" name="inputName" placeholder="Your Name" required="" /> </div> </div> <div class="form-group" ng-class="{ 'has-error': contactform.inputEmail.$invalid && submitted }"> <label for="inputEmail" class="col-lg-2 control-label">Email</label> <div class="col-lg-10"> <input ng-model="formData.inputEmail" type="email" class="form-control" id="inputEmail" name="inputEmail" placeholder="Your Email" required="" /> </div> </div> <div class="form-group" ng-class="{ 'has-error': contactform.inputSubject.$invalid && submitted }"> <label for="inputSubject" class="col-lg-2 control-label">Subject</label> <div class="col-lg-10"> <input ng-model="formData.inputSubject" type="text" class="form-control" id="inputSubject" name="inputSubject" placeholder="Subject Message" required="" /> </div> </div> <div class="form-group" ng-class="{ 'has-error': contactform.inputMessage.$invalid && submitted }"> <label for="inputMessage" class="col-lg-2 control-label">Message</label> <div class="col-lg-10"> <textarea ng-model="formData.inputMessage" class="form-control" rows="4" id="inputMessage" name="inputMessage" placeholder="Your message..." 
required=""></textarea> </div> </div> <div class="form-group"> <div class="col-lg-offset-2 col-lg-10"> <button type="submit" class="btn btn-default" ng-disabled="submitButtonDisabled"> Send Message </button> </div> </div> </form> <p ng-class="result" style="padding: 15px; margin: 0;">{{ resultMessage }}</p> </div> </div> </div> </div></body></html> Now provide the response and nothing else.
By default, Babel 6.x requires you to enable an explicit set of transformations. The standard es2015 preset converts ES6 to ES5, however IE8 is not ES5-compatible. In this case, if you look at the plugins list , you will see:

- transform-es3-member-expression-literals
- transform-es3-property-literals

These will convert your properties to be compatible with IE8. Generally in Babel 6.x you'd do this by passing those names as part of your plugins array, alongside the presets array, and install the transforms via:

npm install --save-dev babel-plugin-transform-es3-member-expression-literals babel-plugin-transform-es3-property-literals
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33512718', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1527567/']}
jdg_370036
stackexchange
llm_judgeable_groundtruth_similarity
1578951
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to unwrap text in Vim. When I join lines I get an additional space between sentences. Why is that? Now provide the response and nothing else.
Formatting destroys information. There are many different blocks of text which will result in the same one once formatted. Therefore, there's no way to reverse the operation without prior knowledge (i.e. undo).

Unformatted (two different sources):

    Unformatted text could start out
    as either all one line, or several,
    yet look the same when formatted.

    Unformatted text could start out as either all one line, or several, yet look the same when formatted.

Formatted (both become):

    Unformatted text could start out as either all one line, or
    several, yet look the same when formatted.

If you want your paragraph all on one line, or if you're okay with a little manual fiddling, you can use J to join lines back together. You can use visual mode to apply the J command to several lines at once, perhaps combined with ap or ip to select a paragraph, e.g. vipJ . Again, you'll still lose some information - multiple spaces at line breaks before formatting will end up collapsed to single spaces. (You can actually join without modifying spaces by using gJ instead of J , but you'll already have lost them when you formatted.) If you're bothered by the extra spaces after sentences (lines ending in !, ?, or .), turn off joinspaces:

    set nojoinspaces
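The "formatting destroys information" point can be demonstrated outside of Vim too; a small Python sketch (the 60-column width is an arbitrary choice of mine):

```python
import textwrap

one_line = ("Unformatted text could start out as either all one line, "
            "or several, yet look the same when formatted.")
several  = ("Unformatted text could start out\n"
            "as either all one line, or several,\n"
            "yet look the same when formatted.")

def reflow(text, width=60):
    # What a formatter effectively does: normalize whitespace, then re-wrap.
    return textwrap.fill(" ".join(text.split()), width)

# Two different inputs, one identical output: the operation is not invertible.
print(reflow(one_line) == reflow(several))  # True
```

Since distinct inputs collapse to the same output, no "unformat" command can recover the original line breaks - only undo can.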
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1578951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/990/']}
jdg_370037
stackexchange
llm_judgeable_groundtruth_similarity
3936071
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a small confusion on SSL handshake between browser and server in a typical https web scenario: What I have understood so far is that in the process of SSL handshake, client (browser in this case) encrypts a randomly selected symmetric key with the public key (certificate received from server). This is sent back to the server, server decrypts it (symmetric key) with the private key. This symmetric key is now used during rest of the session to encrypt/decrypt the messages at both the ends. One of main reasons to do so is given as faster encryption using symmetric keys. Questions 1) How does browser pick and generates this "randomly" selected symmetric key? 2) Do developers (or/and browser users) have control on this mechanism of generating symmetric keys? Now provide the response and nothing else.
Here is a very good description of how HTTPS connection establishment works. I will provide a summary of how the session key is acquired by both parties (client and server). This process is known as a "key agreement protocol", and here is how it works:

1. The client generates the 48-byte "pre-master secret" random value.
2. The client pads these bytes with random data to make the input equal to 128 bytes.
3. The client encrypts it with the server's public key and sends it to the server.

Then the master key is produced by both parties in the following manner:

    master_secret = PRF(pre_master_secret,
                        "master secret",
                        ClientHello.random + ServerHello.random)

The PRF is the "Pseudo-Random Function" that's also defined in the spec and is quite clever. It combines the secret, the ASCII label, and the seed data we give it by using the keyed-Hash Message Authentication Code (HMAC) versions of both the MD5 and SHA-1 hash functions. Half of the input is sent to each hash function. It's clever because it is quite resistant to attack, even in the face of weaknesses in MD5 and SHA-1. This process can feed back on itself and iterate forever to generate as many bytes as we need. Following this procedure, we obtain a 48-byte "master secret".
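For the curious, the PRF described above (as specified for TLS 1.0 in RFC 2246) is small enough to sketch in Python; treat this as an illustration of the construction, not a vetted crypto implementation, and the random values below are placeholders:

```python
import hmac

def p_hash(algo, secret, seed, n):
    # HMAC expansion from RFC 2246: A(0) = seed, A(i) = HMAC(secret, A(i-1)),
    # output = HMAC(secret, A(1)+seed) || HMAC(secret, A(2)+seed) || ...
    out, a = b"", seed
    while len(out) < n:
        a = hmac.new(secret, a, algo).digest()
        out += hmac.new(secret, a + seed, algo).digest()
    return out[:n]

def tls10_prf(secret, label, seed, n=48):
    # Half of the secret feeds an MD5 expansion, half a SHA-1 expansion;
    # XORing the two means a weakness in one hash alone doesn't break the PRF.
    half = (len(secret) + 1) // 2
    md5_part = p_hash("md5", secret[:half], label + seed, n)
    sha_part = p_hash("sha1", secret[-half:], label + seed, n)
    return bytes(x ^ y for x, y in zip(md5_part, sha_part))

pre_master = b"\x03\x01" + b"\x00" * 46  # placeholder 48-byte pre-master secret
master = tls10_prf(pre_master, b"master secret", b"client-random" + b"server-random")
print(len(master))  # 48
```

Both sides run exactly this computation on the same inputs, which is how they arrive at the same 48-byte master secret without ever sending it over the wire.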
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3936071', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/393073/']}
jdg_370038
stackexchange
llm_judgeable_groundtruth_similarity
56766943
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to write a unit test which is executed for every Spring bean of a given type. JUnit5's parameterized tests offer a lot of possibilities, but I don't know how to inject beans into a method source as it has to be a static method. Is there a way to determine the parameters of a JUnit5 test based on Spring's application context? Now provide the response and nothing else.
For starters, a factory method configured via @MethodSource does not have to be static . The second sentence in the User Guide explains that:

    Factory methods within the test class must be static unless the test class is
    annotated with @TestInstance(Lifecycle.PER_CLASS); whereas, factory methods in
    external classes must always be static.

Thus, if you use @TestInstance(PER_CLASS) semantics, your @MethodSource factory method can be non-static and can therefore access the ApplicationContext injected into the test instance. Here's an example that demonstrates that for beans of type String , with an intentional failure for the bar bean.

import java.util.stream.Stream;

import org.junit.jupiter.api.TestInstance;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.MethodSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.TestInstance.Lifecycle.PER_CLASS;

@SpringJUnitConfig
@TestInstance(PER_CLASS)
class SpringBeansParameterizedTests {

    @Autowired
    ApplicationContext applicationContext;

    @ParameterizedTest
    @MethodSource
    void stringBeans(String bean) {
        assertEquals(3, bean.length());
    }

    Stream<String> stringBeans() {
        return applicationContext.getBeansOfType(String.class).values().stream();
    }

    @Configuration
    static class Config {

        @Bean
        String foo() {
            return "foo";
        }

        @Bean
        String bar() {
            return "barf";
        }
    }
}

If you don't want to work directly with the ApplicationContext , you can simplify the solution by having the collection of all such beans of a given type ( String in this example) injected directly, as follows.
@SpringJUnitConfig
@TestInstance(PER_CLASS)
class SpringBeansParameterizedTests {

    @Autowired
    List<String> stringBeans;

    @ParameterizedTest
    @MethodSource
    void stringBeans(String bean) {
        assertEquals(3, bean.length());
    }

    Stream<String> stringBeans() {
        return this.stringBeans.stream();
    }

    @Configuration
    static class Config {

        @Bean
        String foo() {
            return "foo";
        }

        @Bean
        String bar() {
            return "barf";
        }
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56766943', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2011396/']}
jdg_370039
stackexchange
llm_judgeable_groundtruth_similarity
32723794
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble writing a function that takes a collection of strings as a parameter. My function looks like this:

type StrList<'a> = Vec<&'a str>;

fn my_func(list: &StrList) {
    for s in list {
        println!("{}", s);
    }
}

All goes well if I pass a Vec<&'a str> to the function, as expected. However, if I pass a Vec<String> the compiler complains:

error[E0308]: mismatched types
  --> src/main.rs:13:13
   |
13 |     my_func(&v2);
   |             ^^^ expected &str, found struct `std::string::String`
   |
   = note: expected type `&std::vec::Vec<&str>`
   = note:    found type `&std::vec::Vec<std::string::String>`

This is the main used:

fn main() {
    let v1 = vec!["a", "b"];
    let v2 = vec!["a".to_owned(), "b".to_owned()];

    my_func(&v1);
    my_func(&v2);
}

My function is not able to take vectors of owned strings. Conversely, if I change the StrList type into:

type StrList = Vec<String>;

the first call fails and the second works. A possible solution is to produce a Vec<&'a str> from v2 in this way:

let v2_1: Vec<_> = v2.iter().map(|s| s.as_ref()).collect();

But it seems very odd to me. my_func should not care about the ownership of the strings. What kind of signature should I use for my_func to support both vectors of owned strings and string references? Now provide the response and nothing else.
Although String and &str are very closely related, they are not identical. Here's what your vectors look like in memory:

v1 ---> [ { 0x7890,   // pointer to "a" + 7 unused bytes
            1 }       // length of "a"
          { 0x7898,   // pointer to "b" + 7 unused bytes
            1 } ]     // length

v2 ---> [ { 0x1230    // pointer to "a" + 7 unused bytes (a different copy)
            8         // capacity
            1 }       // length
          { 0x1238    // pointer ...
            8         // capacity
            1 } ]     // length

Here each line is the same amount of memory (four or eight bytes depending on pointer size). You can't take the memory of one of these and treat it like the other. The memory layout doesn't match up. The items are of different sizes and have different layouts. For example, if v1 stores its items starting at address X and v2 stores its items starting at address Y , then v1[1] is at address X + 8 but v2[1] is at address Y + 12 . What you can do is write a generic function like this:

fn my_func<T: AsRef<str>>(list: &[T]) {
    for s in list {
        println!("{}", s.as_ref());
    }
}

Then the compiler can generate appropriate code for both &[String] and &[&str] as well as other types if they implement AsRef<str> .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32723794', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1667955/']}
jdg_370040