Dataset schema (column name, dtype, value stats):
  source                  stringclasses   1 value
  task_type               stringclasses   1 value
  in_source_id            stringlengths   1–8
  prompt                  stringlengths   209–40.4k
  gold_standard_solution  stringlengths   0–56.7k
  verification_info       stringclasses   1 value
  metadata                stringlengths   138–225
  problem_id              stringlengths   9–10
stackexchange
llm_judgeable_groundtruth_similarity
9616250
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to know if there is a maximum number of files allowed per jar, after which you can have classpath issues like classes not taken into account? Now provide the response and nothing else.
The jar format is just a rebranded zip format, so it inherits the limitations of that format. The original zip format has a limit of 65535 entries, so in Java 6 and earlier you can have at most that many classes or other files in total, combined. Many tools also include directories as entries, and this reduces the entries available for classes and other files. In Java 7, zip64 is supported, with a much higher limit. I suspect the failure mode, however, won't be randomly missing files, but failure at jar generation time.
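For reference, a quick way to check how close an existing jar is to that classic limit is simply to count its entries. The sketch below is an illustration added here (not part of the original answer), using Python's standard-library zipfile module; the jar path is a placeholder.
import zipfile

def count_jar_entries(path):
    # A jar is a zip, so every class, resource and directory entry is counted here.
    with zipfile.ZipFile(path) as jar:
        return len(jar.infolist())

if __name__ == "__main__":
    n = count_jar_entries("myapp.jar")  # placeholder path
    print(n, "entries; the classic (non-zip64) format tops out at 65535")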
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9616250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/709635/']}
jdg_376741
stackexchange
llm_judgeable_groundtruth_similarity
2336299
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I installed Ant on a Fedora 5 machine using yum, but Ant does not launch. I get this error message: [bash]# antException in thread "main" java.lang.NoClassDefFoundError: org/apache/tools/ant/launch/LauncherCaused by: java.lang.ClassNotFoundException: org.apache.tools.ant.launch.Launcher at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:248)Could not find the main class: org.apache.tools.ant.launch.Launcher. Program will exit. I have set the following environment variables: JAVA_HOME = /usr/java/latestANT_HOME = /usr/bin Ant is installed at /usr/bin/ant . I have also come across this post , and it seems that the JAVAVER_LIBDIR variable does not point to something that exists. I set a symbolic link, like the post suggests cleared the issue up for the author, but it did not fix things for me. [bash]# sudo ln -s /usr/share/java /usr/share/java-1.6.0 Edit: I have also set a symbolic link that addressed a similar problem with the JAVAVER_JNIDIR variable: [bash]# sudo ln -s /usr/lib/java /usr/lib/java-1.6.0 I now get a different error message: [bash]# ant --execdebug/usr/bin/build-classpath: error: Could not find jaxp_parser_impl Java extension for this JVM/usr/bin/build-classpath: error: Could not find xml-commons-apis Java extension for this JVM/usr/bin/build-classpath: error: Some specified jars were not foundexec "/usr/java/latest/bin/java" -classpath "/usr/share/java-1.6.0/ant.jar:/usr/share/java-1.6.0/ant-launcher.jar:/usr/java/latest/lib/tools.jar" -Dant.home="/usr" -Dant.library.dir="/usr/lib" org.apache.tools.ant.launch.Launcher -cp ""Buildfile: build.xml does not exist!Build failed Off to Google these error messages... Now provide the response and nothing else.
I've just met a similar issue on Fedora 18. To make a long story short, run: sudo mkdir /usr/lib/java-1.7.0 /usr/share/java-1.7.0 Apparently, this problem is specific to java-1.7.0-openjdk, because similar empty directories already existed for all other JDK versions. DETAILS There was a single-line error after running ant : Error: Could not find or load main class org.apache.tools.ant.launch.Launcher The problem was somewhere in the /usr/share/java-utils/java-functions script, which tried to set the JAVAVER_JNIDIR / JAVAVER_LIBDIR variables to refer to the directories above. Due to inappropriate error message redirection to stdout, some variables got assignments like: LOCALCLASSPATH='/usr/bin/build-classpath: error: JAVAVER_JNIDIR /usr/lib/java-1.7.0 does not exist or is not a directory' Creating the directories according to the convention set by all other Java versions is the cleanest solution.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2336299', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6688/']}
jdg_376742
stackexchange
llm_judgeable_groundtruth_similarity
432752
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I am working on ETL job where we have used the below command to get the File names from the folder and it will also count the rows for each file. exec( '/usr/bin/ksh', '-c "cd [$PFileDir];/usr/bin/wc -l [$PFileName] > [$PFileDir]/ETL_FILE_LIST"',2); Now, I want the same functionality in Windows command, and not sure how to do it. Any help will be highly appreciated. Now provide the response and nothing else.
For Windows 7, I've broken this down into smaller chunks so that I have more control. This assumes that you have a directory structure like this and that you download the .msu updates from Microsoft to the updates folder:
c:\
 - Win7
 --- mount
 --- updates
 --- stagedupdates
You can obviously modify this to your liking. I also manually download the hotfixes that I'm slipstreaming so that I know exactly what's in my image.
PrepUpdates.bat
expand -F:* c:\win7\updates\* c:\win7\updates
copy updates\*.cab stagedUpdates
del stagedUpdates\WSUSSCAN.cab /s
MountWim.bat
cd \win7
dism /mount-wim /wimfile:c:\win7\install.wim /index:1 /mountdir:c:\win7\mount
InstallUpdates.bat
cd \win7
dism /image:c:\win7\mount /add-package /packagepath:C:\WIN7\stagedUpdates
UnmountAndCommit.bat
dism /unmount-Wim /MountDir:c:\win7\mount /commit
UnmountAndDiscard.bat
dism /unmount-Wim /MountDir:c:\win7\mount /discard
You can obviously just cobble these together and script the deletion of the leftover updates after. I prefer to do that myself. This should give you a good framework to use, though.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/432752', 'https://serverfault.com', 'https://serverfault.com/users/138770/']}
jdg_376743
stackexchange
llm_judgeable_groundtruth_similarity
6932907
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Okay, the code below "works" in that when you scroll to the bottom of the page the AJAX function is fired and results are appended to the #postswrapper div on the page. The issue is: if I scroll really quickly when I reach the bottom of the page, the AJAX fires several times and loads several sets of results into the #postswrapper div (number of additional, 'unwanted' results are determined by how many additional scroll events were fired by scrolling quickly). Ultimately, I need only serve one set of results per time the user reaches the bottom. I've tried adding timers and such, but to no avail. It's obviously storing the additional scroll actions in the DOM and firing them in sequential order thereafter. Any ideas? I'm using jquery.1.4.4.js if that helps anybody. And e.preventDefault() doesn't work, in this situation, anyways. $(window).scroll(function(e) { e.preventDefault(); if ($(window).scrollTop() >= ($(document).height() - $(window).height())) { $('div#loadmoreajaxloader').show(); $.ajax({ cache: false, url: 'loadmore.php?lastid=' + $('.postitem:last').attr('id'), success: function(html) { if (html) { $('#postswrapper').append(html); $('div#loadmoreajaxloader').hide(); } else { $('div#loadmoreajaxloader').html(); } } }); }}); Now provide the response and nothing else.
Try storing some kind of data that stores whether the page is currently loading new items. Maybe like this: $(window).data('ajaxready', true).scroll(function(e) { if ($(window).data('ajaxready') == false) return; if ($(window).scrollTop() >= ($(document).height() - $(window).height())) { $('div#loadmoreajaxloader').show(); $(window).data('ajaxready', false); $.ajax({ cache: false, url: 'loadmore.php?lastid=' + $('.postitem:last').attr('id'), success: function(html) { if (html) { $('#postswrapper').append(html); $('div#loadmoreajaxloader').hide(); } else { $('div#loadmoreajaxloader').html(); } $(window).data('ajaxready', true); } }); }}); Right before the Ajax request is sent, a flag is cleared signifying that the document is not ready for more Ajax requests. Once the Ajax completes successfully, it sets the flag back to true, and more requests can be triggered.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6932907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/877420/']}
jdg_376744
stackexchange
llm_judgeable_groundtruth_similarity
25260371
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I did this: byte[] data = Convert.FromBase64String(str); string decodedString = Encoding.UTF8.GetString(data); Console.WriteLine(decodedString); but got Unhandled Exception: System.FormatException: Invalid length for a Base-64 char array or string. In javascript using atob(str) gives me the right decoded str. javascript console: atob("eyJpc3MiOiJodHRwczovL2lkZW50aXR5LXN0YWdpbmcuYXNjZW5kLnh5eiIsImF1ZCI6Imh0dHBzOi8vaWRlbnRpdHktc3RhZ2luZy5hc2NlbmQueHl6L3Jlc291cmNlcyIsImNsaWVudF9pZCI6IjY5OTRBNEE4LTBFNjUtNEZFRC1BODJCLUM2ODRBMEREMTc1OCIsInNjb3BlIjpbIm9wZW5pZCIsInByb2ZpbGUiLCJzdWIucmVhZCIsImRhdGEud3JpdGUiLCJkYXRhLnJlYWQiLCJhbGcuZXhlY3V0ZSJdLCJzdWIiOiIzNzdjMDk1Yi03ODNiLTQ3ZTctOTdiMS01YWVkOThjMDM4ZmMiLCJhbXIiOiJleHRlcm5hbCIsImF1dGhfdGltZSI6MTQwNzYxNTUwNywiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvMDg0MGM3NjAtNmY3Yi00NTU2LWIzMzctOGMwOTBlMmQ0NThkLyIsIm5hbWUiOiJwa3NAYXNjZW5kLnh5eiIsImV4cCI6MTQwNzgzNjcxMSwibmJmIjoxNDA3ODMzMTExfQ")"{"iss":"https://identity-staging.ascend.xyz","aud":"https://identity-staging.ascend.xyz/resources","client_id":"6994A4A8-0E65-4FED-A82B-C684A0DD1758","scope":["openid","profile","sub.read","data.write","data.read","alg.execute"],"sub":"377c095b-783b-47e7-97b1-5aed98c038fc","amr":"external","auth_time":1407615507,"idp":"https://sts.windows.net/0840c760-6f7b-4556-b337-8c090e2d458d/","name":"pks@ascend.xyz","exp":1407836711,"nbf":1407833111}" Now provide the response and nothing else.
var str = "eyJpc3MiOiJodHRwczovL2lkZW50aXR5LXN0YWdpbmcuYXNjZW5kLnh5eiIsImF1ZCI6Imh0dHBzOi8vaWRlbnRpdHktc3RhZ2luZy5hc2NlbmQueHl6L3Jlc291cmNlcyIsImNsaWVudF9pZCI6IjY5OTRBNEE4LTBFNjUtNEZFRC1BODJCLUM2ODRBMEREMTc1OCIsInNjb3BlIjpbIm9wZW5pZCIsInByb2ZpbGUiLCJzdWIucmVhZCIsImRhdGEud3JpdGUiLCJkYXRhLnJlYWQiLCJhbGcuZXhlY3V0ZSJdLCJzdWIiOiIzNzdjMDk1Yi03ODNiLTQ3ZTctOTdiMS01YWVkOThjMDM4ZmMiLCJhbXIiOiJleHRlcm5hbCIsImF1dGhfdGltZSI6MTQwNzYxNTUwNywiaWRwIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvMDg0MGM3NjAtNmY3Yi00NTU2LWIzMzctOGMwOTBlMmQ0NThkLyIsIm5hbWUiOiJwa3NAYXNjZW5kLnh5eiIsImV4cCI6MTQwNzgzNjcxMSwibmJmIjoxNDA3ODMzMTExfQ"; int mod4 = str.Length % 4; if (mod4 > 0) { str += new string('=', 4 - mod4); } solved it in c#
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25260371', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/871985/']}
jdg_376745
stackexchange
llm_judgeable_groundtruth_similarity
4646159
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am struggling how to use "ref" (to pass argument by reference) in real app. I would like to have simple and mainly meaningful example. Everything I found so far could be easily redone with adding return type to the method.Any idea someone?Thanks! Now provide the response and nothing else.
The best example that comes to mind is a function to swap two variables' values:
static void Swap<T>(ref T el1, ref T el2)
{
    var mem = el1;
    el1 = el2;
    el2 = mem;
}
Usage:
static void Main(string[] args)
{
    string a = "Hello";
    string b = "Hi";
    Swap(ref a, ref b);
    // now a = "Hi", b = "Hello"
    // it also works with array values:
    int[] arr = new[] { 1, 2, 3 };
    Swap(ref arr[0], ref arr[2]);
    // now arr = {3,2,1}
}
A function like this one cannot be done without the ref keyword.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4646159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/475038/']}
jdg_376746
stackexchange
llm_judgeable_groundtruth_similarity
314408
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In a given coxeter group $(W,S)$ , a reflection is an element of $W$ that can be written with a symmetric word in the generators $S$ . In multiple sources, I found the following formula: $$ \mathrm{dp}(\alpha) = \frac{1}{2}(l(t_\alpha) + 1)$$ where $\alpha$ is a positive root, $t_\alpha$ the corresponding reflection and the depth $\mathrm{dp}(\alpha)$ is the length of a shortest word $w$ such that $w\cdot \alpha$ is a negative root. Assuming that any reflection has length achievable by a symmetric word, this formula is rather easy to check, but I couldn't find a proof for just this fact. In X Fu's thesis, Lemma 1.3.19, the formula is proven but I'm looking for a more elementary proof of this fact: Question: Is the length of a reflection in a Coxeter group achievable by a symmetric word? Now provide the response and nothing else.
Let $\beta$ be a positive root. Let $u$ be an element of length $\mathrm{dp}(\beta)$ such that $u(\beta) <0$ . Then $u(\beta)=-\alpha$ for some simple root $\alpha$ (because otherwise we could multiply $u$ by a left descent to get an element of shorter length inverting $\beta$ ), so $u^{-1}(\alpha)=-\beta$ and hence $\ell(s_\alpha u) <\ell(u)$ . Now since $u^{-1}s_\alpha(\alpha) = \beta$ , we have that $(s_\alpha u)^{-1}s_\alpha (s_\alpha u)=s_\beta$ . Thus $s_\beta$ has a symmetric word of length $2\mathrm{dp}(\beta)-1$ . This proves that $\ell(s_\beta) \leq 2\mathrm{dp}(\beta)-1$ . For the opposite inequality, let $(s_1,\ldots,s_k)$ be a reduced word for $s_\beta$ . Let $k+1-i$ be the maximal index such that $s_{k+1-i}\cdots s_k(\beta) =-\alpha<0$ . Then $k+1-(k+1-i)= i\geq\mathrm{dp}(\beta)$ . Now $s_1\cdots s_{k-i }s_{k+1-i}(\alpha)=-\beta$ , so $s_{k+1-i}\cdots s_{2}s_{1}(\beta)=-\alpha$ , so $k+1-i\geq\mathrm{dp}(\beta)$ . Hence $k+1\geq 2\mathrm{dp}(\beta)$ , and the result follows.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/314408', 'https://mathoverflow.net', 'https://mathoverflow.net/users/130860/']}
jdg_376747
stackexchange
llm_judgeable_groundtruth_similarity
4910567
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to exclude certain fields from being included in the json string? Here is some pseudo code var x = { x:0, y:0, divID:"xyz", privateProperty1: 'foo', privateProperty2: 'bar'} I want to exclude privateProperty1 and privateproperty2 from appearing in the json string So I thought, I can use the stringify replacer function function replacer(key,value){ if (key=="privateProperty1") then retun "none"; else if (key=="privateProperty2") then retun "none"; else return value;} and in the stringify var jsonString = json.stringify(x,replacer); But in the jsonString I still see it as {...privateProperty1:value..., privateProperty2:value } I would like to the string without the privateproperties in them. Now provide the response and nothing else.
The Mozilla docs say to return undefined (instead of "none" ): http://jsfiddle.net/userdude/rZ5Px/ function replacer(key,value){ if (key=="privateProperty1") return undefined; else if (key=="privateProperty2") return undefined; else return value;}var x = { x:0, y:0, divID:"xyz", privateProperty1: 'foo', privateProperty2: 'bar'};alert(JSON.stringify(x, replacer)); Here is a duplication method, in case you decide to go that route (as per your comment). http://jsfiddle.net/userdude/644sJ/ function omitKeys(obj, keys){ var dup = {}; for (var key in obj) { if (keys.indexOf(key) == -1) { dup[key] = obj[key]; } } return dup;}var x = { x:0, y:0, divID:"xyz", privateProperty1: 'foo', privateProperty2: 'bar'};alert(JSON.stringify(omitKeys(x, ['privateProperty1','privateProperty2']))); EDIT - I changed the function key in the bottom function to keep it from being confusing.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4910567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341933/']}
jdg_376748
stackexchange
llm_judgeable_groundtruth_similarity
18281752
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code in an AngularJS application, inside of a controller,which is called from an ng-submit function, which belongs to a form with name profileForm : $scope.updateProfile = function() { if($scope.profileForm.$invalid) { //error handling.. } //etc.}; Inside of this function, is there any way to figure out which fields are causing the entire form to be called invalid? Now provide the response and nothing else.
Each input name's validation information is exposed as a property of the form's name in the scope. HTML:
<form name="someForm" action="/">
  <input name="username" required />
  <input name="password" type="password" required />
</form>
JS:
$scope.someForm.username.$valid
// > false
$scope.someForm.password.$error
// > { required: true }
The exposed properties are $pristine, $dirty, $valid, $invalid, $error. If you want to iterate over the errors for some reason:
$scope.someForm.$error
// > { required: [{$name: "username", $error: true /*...*/},
//                {$name: "password", /*..*/}] }
Each rule in error will be exposed in $error. Here is a plunkr to play with: http://plnkr.co/edit/zCircDauLfeMcMUSnYaO?p=preview
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18281752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/89761/']}
jdg_376749
stackexchange
llm_judgeable_groundtruth_similarity
11054175
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Now that AngularJS 1.0 is released I am wondering how this project fits together with the other general-purpose JavaScript framework / tool from Google, Closure . I have only seen basic description of those two technologies (and read about a half of the book on Closure) so I have no direct experience but this is how it looks to me: Closure is a set of technologies that can be used separately. What I find probably the most appealing is: Closure Compiler which seems to "fix JavaScript" in a sense that it warns against typical issues, provides some compile-time checks (not all people like this but probably most Google developers do, and I do too). And of course it's nice that the resulting code is smaller and more efficient. Then there are some parts of Closure Library that I like, e.g. abstractions over built-in types ( ArrayLike etc.), class-based system, eventing mechanism, DOM abstractions etc. I'm not sure yet if I like the GUI library or not (seems to be quite complex and I didn't really have time to study it yet). Then there are some features that I don't think I would find that useful, e.g. Templates . AngularJS , which I've only read briefly about, seems to be much higher-level than Closure. It seems to be an application framework providing features like data binding, reusable components, MVC structure etc. So these two technologies seem to be aimed at quite a different level of abstraction so my first thought was, can they be used together? Closure providing low-level compiler and browser abstractions while Angular providing application-level services and structure? Would it make sense and would it work well together? Now provide the response and nothing else.
The only Google project I'm aware of that uses AngularJS is the DoubleClick team. (presentation) Essentially, they still use Google Closure Library for everything but the UI building. Also note that they use Google Closure Compiler, but that's almost a given, "nobody" uses only the Library without the Compiler. Google Closure Library comes with a UI framework in its goog.ui namespace. This framework compares in almost every way to non-web UI frameworks like Android, iOS, Swing and QT. They have a thing I like to call DOM elements on steroids, goog.ui.Component , which has lots of great life cycle mechanisms for garbage collection and event listening and more. You have things like goog.ui.Control , which is a subclass of goog.ui.Component , and handles user interaction in a very interesting way. It lets you plug renderers, for example, so you can change a <button> to an <a> without changing any of your other logic except the actual rendering. Speaking of classes and subclasses, Google Closure Library also has this. You don't have to use the built-in one, the important part is that you somehow call the prototype of the "superclass" in your methods. You can for example use the class system in CoffeeScript, Google Closure Library doesn't care. The reason the DoubleClick team chose AngularJS was apparently largely because of the data binding features AngularJS provides. There's nothing built-in in Google Closure Library to automatically update the UI when data changes. So to summarize, Google Closure is a huuuuge beast, and AngularJS can replace the goog.ui part of the Google Closure Library.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11054175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/21728/']}
jdg_376750
stackexchange
llm_judgeable_groundtruth_similarity
25952907
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Question What is the best practice for creating a favicon on a web site? and is an .ico file with both 16x16 and 32x32 images better than a .png file with 16x16 only? Could the right method preferred today not be working in reasonably old browsers? Method 1 Placing a file named favicon.ico in the main directory is one way. The browser always requests that file. You can see that in the apache log files. Method 2 HTML tag in the <head> section: <link rel="shortcut icon" href="/images/favicon.png (or ico?)" type="image/x-icon" /> Now provide the response and nothing else.
There are several ways to create a favicon. The best way for you depends on various factors: The time you can spend on this task. For many people, this is "as quick as possible". The efforts you are willing to make. Like, drawing a 16x16 icon by hand for better results. Specific constraints, like supporting a specific browser with odd specs. First method: Use a favicon generator If you want to get the job done well and quickly, you can use a favicon generator . This one creates the pictures and HTML code for all major desktop and mobiles browsers. Full disclosure: I'm the author of this site. Advantages of such solution: it's quick and all compatibility considerations were already addressed for you. Second method: Create a favicon.ico (desktop browsers only) As you suggest, you can create a favicon.ico file which contains 16x16 and 32x32 pictures (note that Microsoft recommends 16x16, 32x32 and 48x48 ). Then, declare it in your HTML code: <link rel="shortcut icon" href="/path/to/icons/favicon.ico"> This method will work with all desktop browsers, old and new. But most mobile browsers will ignore the favicon. About your suggestion of placing the favicon.ico file in the root and not declaring it: beware, although this technique works on most browsers, it is not 100% reliable. For example Windows Safari cannot find it (granted: this browser is somehow deprecated on Windows, but you get the point). This technique is useful when combined with PNG icons (for modern browsers). Third method: Create a favicon.ico, a PNG icon and an Apple Touch icon (all browsers) In your question, you do not mention the mobile browsers. Most of them will ignore the favicon.ico file. Although your site may be dedicated to desktop browsers, chances are that you don't want to ignore mobile browsers altogether. You can achieve a good compatibility with: favicon.ico , see above. A 192x192 PNG icon for Android Chrome A 180x180 Apple Touch icon (for iPhone 6 Plus; other device will scale it down as needed). Declare them with <link rel="shortcut icon" href="/path/to/icons/favicon.ico"><link rel="icon" type="image/png" href="/path/to/icons/favicon-192x192.png" sizes="192x192"><link rel="apple-touch-icon" sizes="180x180" href="/path/to/icons/apple-touch-icon-180x180.png"> This is not the full story, but it's good enough in most cases.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/25952907', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1199684/']}
jdg_376751
stackexchange
llm_judgeable_groundtruth_similarity
5393922
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a simple js function I can use to replace the current document's selection with some html of mine? For instance say the document contains a <p>AHAHAHA</p> somewhere and user selects the 1st "ha" text chunk. Now I want to replace this with something like: <span><font color="red">hoho</font></span> When I google for * javascript replace selection * I can't get a simple straightforward answer! Now provide the response and nothing else.
Yes. The following will do it in all major browsers, with an option to select the inserted content afterwards as requested in the comments (although this part is not implemented for IE <= 8): Live demo: http://jsfiddle.net/bXsWQ/147/ Code:
function replaceSelection(html, selectInserted) {
    var sel, range, fragment;
    if (typeof window.getSelection != "undefined") {
        // IE 9 and other non-IE browsers
        sel = window.getSelection();
        // Test that the Selection object contains at least one Range
        if (sel.getRangeAt && sel.rangeCount) {
            // Get the first Range (only Firefox supports more than one)
            range = window.getSelection().getRangeAt(0);
            range.deleteContents();
            // Create a DocumentFragment to insert and populate it with HTML
            // Need to test for the existence of range.createContextualFragment
            // because it's non-standard and IE 9 does not support it
            if (range.createContextualFragment) {
                fragment = range.createContextualFragment(html);
            } else {
                // In IE 9 we need to use innerHTML of a temporary element
                var div = document.createElement("div"), child;
                div.innerHTML = html;
                fragment = document.createDocumentFragment();
                while ( (child = div.firstChild) ) {
                    fragment.appendChild(child);
                }
            }
            var firstInsertedNode = fragment.firstChild;
            var lastInsertedNode = fragment.lastChild;
            range.insertNode(fragment);
            if (selectInserted) {
                if (firstInsertedNode) {
                    range.setStartBefore(firstInsertedNode);
                    range.setEndAfter(lastInsertedNode);
                }
                sel.removeAllRanges();
                sel.addRange(range);
            }
        }
    } else if (document.selection && document.selection.type != "Control") {
        // IE 8 and below
        range = document.selection.createRange();
        range.pasteHTML(html);
    }
}
Example:
replaceSelection('<span><font color="red">hoho</font></span>', true);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5393922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/512602/']}
jdg_376752
stackexchange
llm_judgeable_groundtruth_similarity
22266402
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering if there's a way to encode an image to a base64 if it was a resourcefor example if I loaded an image using GD $image = imagecreatefromjpeg("captcha/$captcha-$num.jpg"); // Add some filters imagefilter($image, IMG_FILTER_PIXELATE, 1, true); imagefilter($image, IMG_FILTER_MEAN_REMOVAL); If this was my code and instead of saving the image and displaying it using <img src='someimage.jpg'> I wanted to display it as a data URI without having to save it, like <img data='src="data:image/jpeg;base64,BASE64_HERE'> How can I do that? Now provide the response and nothing else.
$image = imagecreatefromjpeg("captcha/$captcha-$num.jpg");
// Add some filters
imagefilter($image, IMG_FILTER_PIXELATE, 1, true);
imagefilter($image, IMG_FILTER_MEAN_REMOVAL);
ob_start(); // Let's start output buffering.
imagejpeg($image); // This will normally output the image, but because of ob_start(), it won't.
$contents = ob_get_contents(); // Instead, output above is saved to $contents
ob_end_clean(); // End the output buffer.
$dataUri = "data:image/jpeg;base64," . base64_encode($contents);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22266402', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2176428/']}
jdg_376753
stackexchange
llm_judgeable_groundtruth_similarity
128842
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to test some DOS viruses. If I run them from C: will my computer be damaged? Now provide the response and nothing else.
DOSBox does not isolate your main computer contents from the files run within it. Thus, it could indeed damage the host. Probably, it will only affect the files on already-mounted drives, so if you didn't mount any folder with non-discardable contents, you may be fine. However, a malicious program could clearly run mount drive: <folder> and access <folder> of your computer from inside DOSBox. You should instead use full virtualization for running your DOS virus (VMware, VirtualBox, ...).
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/128842', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/111236/']}
jdg_376754
stackexchange
llm_judgeable_groundtruth_similarity
1547510
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $FGH$ be a triangle with circumcircle $A$ and incircle $B$ , the latter with touchpoint $J$ in side $GH$ . Let $C$ be a circle tangent to sides $FG$ and $FH$ and to $A$ , and let $D$ be the point where $C$ and $A$ touch, as shown here. Prove that $\angle FGH = \angle GDJ$ . Now provide the response and nothing else.
Notations: Write $a:=GH$, $b:=HF$, $c:=FG$, and $s:=\frac{a+b+c}{2}$. Let $\Omega$ and $\omega$ be the circumcircle and the incircle of $FGH$, respectively. The circle internally tangent to $FG$, $FH$, and $\Omega$ is denoted by $\Gamma$. Suppose that $\Gamma$ intersects $HF$ and $FG$ at $P$ and $Q$, respectively. Denote by $\omega_a$ the excircle opposite to $F$ of $FGH$, which touches $GH$ at $T$. Extend $FT$ to meet $\Omega$ again at $S$. Finally, $\theta:=\angle GFD$. Proof: Let $i$ be the inversion at $F$ with radius $FP=FQ$. Then, $i(\Gamma)=\Gamma$, whereas $i(\Omega)$ is the tangent to $\Gamma$ at the point $E$, where $E$ is the second intersection between $\Gamma$ and $FD$. Suppose that $i(\Omega)$ meets $HF$ at $G'$ and $FG$ at $H'$. As $FG'H'$ and $FGH$ are similar triangles and $\Gamma$ is the excircle opposite to $F$ of $FG'H'$, it follows that $$\angle HFS=\angle HFT=\angle H'FE=\angle GFD=\theta\,.$$ Consequently, the minor arcs $HS$ and $GD$ of the circle $\Omega$ subtend the same angle $\theta$ at the circumference, so they are equal. Ergo, $HS=GD$. Since $TH=s-b=JG$ and $$\angle THS=\angle GHS=\angle GFS=\angle HFD=\angle HGD=\angle JGD\,,$$ we conclude that $GDJ$ and $HST$ are congruent triangles. Thence, $$\angle GDJ=\angle HST=\angle HSF=\angle FGH\,.$$ P.S.: It can be shown, using Casey's Theorem, that the center of $\omega$ is the midpoint of $PQ$. Also, one can see that the internal angular bisector of $\angle FGH$ meets the line $DP$ at a point on $\Omega$, at which the tangent line $\ell_b$ is parallel to $HF$. Likewise, the internal angular bisector of $\angle GHF$ meets the line $DQ$ at a point on $\Omega$, at which the tangent line $\ell_c$ is parallel to $FG$. Finally, if $Z$ is the point of intersection between $\ell_b$ and $\ell_c$, then $Z$, $F$, $D$ are collinear.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1547510', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/253671/']}
jdg_376755
stackexchange
llm_judgeable_groundtruth_similarity
39689518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting this error when i am trying to deploy the django app on IBM Bluemix.Please see the log file Now provide the response and nothing else.
Bottom line: some technical limitations that amd64 has in using large addresses suggest dedicating the lower 2 GiB of address space to code and data for efficiency. Thus the stack has been relocated out of this range. In the i386 ABI [1] the stack is located before the code, growing from just under 0x8048000 downwards, which provides "a little over 128 MB for the stack and about 2 GB for text and data" (p. 3-22). Dynamic segments start at 0x80000000 (2 GiB), and the kernel occupies the "reserved area" at the top, which the spec allows to be up to 1 GiB, starting at at least 0xC0000000 (p. 3-21) (which is what it typically does). The main program is not required to be position-independent. An implementation is not required to catch null pointer access (p. 3-21), but it's reasonable to expect that some of the stack space above 128 MiB (the excess, which is 288 KiB) will be reserved for that purpose. amd64 (whose ABI is formulated as an amendment to the i386 one (p. 9)) has a vastly bigger (48-bit) address space, but most instructions only accept 32-bit immediate operands (which include direct addresses and offsets in jump instructions), requiring more work and less efficient code (especially when taking instruction interdependency into consideration) to handle larger values. Measures to work around these limitations are summarized by the authors by introducing a few "code models" they recommend using to "allow the compiler to generate better code" (p. 33). Specifically, the first of them, the "Small code model", suggests using addresses "in the range from 0 to 2^31 - 2^24 - 1 or from 0x00000000 to 0x7effffff", which allows some very efficient relative references and array iteration. This is 1.98 GiB, which is more than enough for many programs. The "Medium code model" is based on the previous one, splitting the data into a "fast" part under the above boundary and the "slower" remaining part which requires a special instruction to access, while code remains under the boundary. And only the "Large" model makes no assumptions about sizes, requiring the compiler "to use the movabs instruction, as in the medium code model, even for dealing with addresses inside the text section. Additionally, indirect branches are needed when branching to addresses whose offset from the current instruction pointer is unknown." They go on to suggest splitting the code base into multiple shared libraries, since these measures do not apply for relative references with offsets that are known to be within bounds (as outlined in the "Small position independent code model"). Thus the stack was moved to under the shared library space (0x80000000000, 128 GiB) because its addresses are never immediate operands, always referenced either indirectly or with lea / mov from another reference, so only relative offset limitations apply. The above explains why the loading address was moved to a lower address. Now, why was it moved to exactly 0x400000 (4 MiB)? Here I came up empty, so, summarizing what I've read in the ABI specs, I can only guess that it felt "just right": it's large enough to catch any likely incorrect structure offset, allowing for the larger data units that amd64 operates on, yet small enough not to waste much of the valuable starting 2 GiB of address space; and it's equal to the largest practical page size to date and is a multiple of all other virtual memory unit sizes one can think of.
[1] Note that actual x32 Linuxes have been deviating from this layout more and more as time goes on. But we're talking about the ABI spec here, since the amd64 one is formally based on it rather than any derived layout (see its paragraph for citation).
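If you want to see these defaults on a concrete binary, the ELF program headers show the mapping addresses directly. The sketch below is only an illustration added here (it assumes Python 3 and that binutils' readelf is installed; the target path is a placeholder): it prints the LOAD segments, and a traditional non-PIE amd64 executable will show a virtual address of 0x400000.
import subprocess
import sys

def show_load_segments(path: str) -> None:
    # Run readelf -l and print only the LOAD program headers, which carry
    # the virtual addresses the executable is mapped at.
    # Note: distro binaries built as PIE will show low/zero addresses instead.
    out = subprocess.run(["readelf", "-l", path], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if line.strip().startswith("LOAD"):
            print(line.strip())

if __name__ == "__main__":
    show_load_segments(sys.argv[1] if len(sys.argv) > 1 else "/bin/ls")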
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39689518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6878605/']}
jdg_376756
stackexchange
llm_judgeable_groundtruth_similarity
10646665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What commands are run when pressing "Knit HTML" on an R Markdown file in Rstudio 0.96? My motivation is that I might want to run the same command when I'm in another text editing environment or I might want to combine the command in a larger makefile . Now provide the response and nothing else.
Put Sys.sleep(30) in a chunk and you will see clearly what commands are called by RStudio. Basically they are library(knitr); knit() to get the markdown file; RStudio has internal functions to convert markdown to HTML; The second step will be more transparent in the next version of the markdown package. Currently you can use knitr::knit2html('your_file.Rmd') to get a similar HTML file as RStudio gives you. Update on 2019/09/17: The above answer applies to RStudio v0.96 (in the year 2012). Now R Markdown is compiled through rmarkdown::render() , which uses Pandoc instead of the retired R package markdown . See the post Relationship between R Markdown, Knitr, Pandoc, and Bookdown for more details.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10646665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/180892/']}
jdg_376757
stackexchange
llm_judgeable_groundtruth_similarity
17035
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When trying to understand the meaning of "the cry of Abel's blood," as recorded in Genesis 4:10 ,I found this interesting article which reads, In our lesson, taken from here , we look into an interesting Midrash regarding the word “Blood” in Genesis 4:10. וַיּאמֶר מֶה עָשִׂיתָ קוֹל דְּמֵי אָחִיךָ צֹעֲקִים אֵלַי מִן־הָֽאֲדָמָה׃ Then He said, "What have you done? Hark, your brother's blood cries out to Me from the ground! In the Hebrew, the word blood is in plural form (bloods) , whereas translations use the singular form “blood”. I was not aware that blood is translated from "bloods," in the Hebrew, and if i omit "blood," and insert "bloods," into the text in Genesis 4:10, it would appear to change the meaning of the text as follows, Genesis 4:10 (NIV) 10 The Lord said, ‘What have you done? Listen! Your brother’s bloods cries out to me from the ground. The text would now appear to be saying, All the brothers bloods cries out .Which could point to Luke 11:50-51 . This quote helps to explain my reasoning, Killing and human violence, however, do not stop in the biblical story, nor have they stopped in the contemporary world. Jewish and Christian interpreters across the centuries have seen in the Cain and Abel story a precursor to future murders of innocents up to the present day. For them, the plural of bloods in Gen 4:10-11 and the present tense of the verb, is crying out, in Gen 4:10 point to the blood of later generations still crying out to God in a desperate plea for a reckoning. Taken from here . Why is "bloods," translated singular as blood, in Genesis 4:10 ? Now provide the response and nothing else.
Taken from here : Finally, the distinction between דָּם blood and דָּמִים requires to be specially noticed. The singular is always used when the blood is regarded as an organic unity, hence also of menstrual blood, and the blood of sacrifices (collected in the basin and then sprinkled), and in Nu 2324 of the blood gushing from wounds. On the other hand, דָּמִים as a sort of plural of the result and at the same time of local extension, denotes blood which is shed, when it appears as blood-stains (Is 115) or as blood-marks (so evidently in Is 94). But since blood-stains or blood-marks, as a rule, suggest blood shed in murder (although דָּמִים also denotes the blood which flows at child-birth or in circumcision), דָּמִים acquired (even in very early passages) simply the sense of a bloody deed, and especially of bloodguiltiness, Ex 221 f., &c. I am not a Hebrew scholar, but from the source (Gesenius 124) it is apparent that "bloodshed", which was the case with Abel, is given a plural form due to the action of the blood flowing from the wound inflicted, rather than one singular event. Therefore, your point is well taken; Abel's blood, being plural, speaks to a continuous sacrifice. The author of Hebrews quotes (Heb. 12:24): And to Jesus the mediator of the new covenant, and to the blood of sprinkling, that speaketh better things than that of Abel. What is apparent is that Abel's blood, either from the sacrifice he made or the fact that he in effect became a sacrifice, is superseded by the continuous (in that same sense) blood-shedding of Christ.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/17035', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/2572/']}
jdg_376758
stackexchange
llm_judgeable_groundtruth_similarity
2058621
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in the process of adding ASP.NET MVC code to a preexisting ASP.NET Webforms project. The various tutorials suggest adding routing to a method called from Application_Start() in Global.asax. My Global.asax already has an Application_OnStart(Object,EventArgs) method with some setup code. If I try to have both Start and OnStart, the OnStart doesn't get called (and the setup fails, causing errors). It looks like I have to choose one or the other. My question is: which one should I be using? What is the difference between them? Are they called at different times? (Note: at the time of this writing, the top three Google hits are useless and/or misleading. I'm hoping Stack Overflow can fix that.) Now provide the response and nothing else.
In classic (legacy) ASP, there are a handful of special function names that, if defined in your global.asa file, will be run at specified points during the application lifecycle. These are defined as: Application_OnStart - runs once, when your application receives the first HTTP request and immediately before any .ASP files are processed. Application_OnEnd - runs once, during application shutdown, after all requests have been processed. Session_OnStart - runs at the start of each unique user session. If a user/client has cookies disabled, this runs for every request because ASP never detects the session cookie identifying an existing session. Session_OnEnd - (theoretically!) runs each time a user session expires. Good luck with this. These are basically hard-wired into the classic ASP runtime - you can't change them, and you can't attach any other methods to these events. In ASP.NET, there's a thing called AutoEventWireup that uses reflection to find methods conforming to particular naming conventions, and runs those methods in response to matching events raised by the ASP.NET runtime. The most common example is the Page_Load method, which is automatically invoked in response to the Page class firing the Load event during the page lifecycle. The same technique is used to attach handlers to application-level lifecycle events. It will look for methods named either ModuleName_EventName or ModuleName_OnEventName, taking either no parameters () or (object sender, EventArgs e) Here's the fun part - if you define more than one matching method, only the one that appears latest in the file will execute . (The last method wins, basically) So if your global.asax.cs looks like this: public class Global : System.Web.HttpApplication { protected void Application_Start() { Debug.WriteLine("A: Application_Start()"); } protected void Application_Start(object sender, EventArgs e) { Debug.WriteLine("B: Application_Start(object sender, EventArgs e)"); } protected void Application_OnStart() { Debug.WriteLine("C: Application_OnStart()"); } protected void Application_OnStart(object sender, EventArgs e) { Debug.WriteLine("D: Application_OnStart(object sender, EventArgs e)"); }} you'll see message D in your debug output; if you comment out the last method in that block, you'll see message C instead. So - use whichever naming convention you like, but if you define more than one, only the one that appears last in your source file will be executed. I would personally stick with Application_Start(object sender, EventArgs e) since that's the signature generated by the Visual Studio project templates and most .NET design/coding tools.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2058621', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3488/']}
jdg_376759
stackexchange
llm_judgeable_groundtruth_similarity
36449507
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: below is the code but when i want to insert this information into server it gives me java.io.FileNotFoundException this error so what to do can any body show me a solution for that public class AddItems extends Activity { ImageView iv; EditText et1, et2; Button b1; Bitmap bmp; String image1String, image2String, image3String; String imgPath, fileName; String encodedString; /** * ATTENTION: This was auto-generated to implement the App Indexing API. * See https://g.co/AppIndexing/AndroidStudio for more information. */ private GoogleApiClient client; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_add_items); iv = (ImageView) findViewById(R.id.imageget); et1 = (EditText) findViewById(R.id.imageid); et2 = (EditText) findViewById(R.id.imagedes); b1 = (Button) findViewById(R.id.insert); b1.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { new Thread(new Runnable() { @Override public void run() { InputStream is; BitmapFactory.Options bfo; Bitmap bitmapOrg; ByteArrayOutputStream bao ; bfo = new BitmapFactory.Options(); bfo.inSampleSize = 2; //bitmapOrg = BitmapFactory.decodeFile(Environment.getExternalStorageDirectory() + "/" + customImage, bfo); bao = new ByteArrayOutputStream(); bmp.compress(Bitmap.CompressFormat.JPEG, 90, bao); byte [] ba = bao.toByteArray(); image1String = Base64.encodeToString(ba,Base64.DEFAULT); HashMap<String, String> prams = new HashMap<String, String>(); prams.put("ImageID", et1.getText().toString()); prams.put("ImageDesc", et2.getText().toString()); prams.put("ImagePath", image1String); Network network = new Network("insertimage.php", prams); String RecivedString = null; try { RecivedString = network.ToRecieveDataFromWeb(); Jsonparsing jasonparsing = new Jsonparsing(RecivedString); HashMap<String, String> hashMap = jasonparsing.ParseJsonObject(RecivedString); if (hashMap.get("Success").toString().equals("1")) { ShowOnUni("Good"); } else { ShowOnUni(hashMap.get("Message") + ""); } } catch (IOException e) { e.printStackTrace(); } } }).start(); } }); enter code here image view which holds the image from gallery or camera iv.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { selectImage(); } }); GoogleApiClient.Builder(this).addApi(AppIndex.API).build(); } private void ShowOnUni(final String error1) { runOnUiThread(new Runnable() { @Override public void run() { Toast.makeText(getApplicationContext(), error1 + "", Toast.LENGTH_LONG).show(); } }); } private void selectImage() { final CharSequence[] items = {"Take Photo", "Choose from Library", "Cancel"}; AlertDialog.Builder builder = new AlertDialog.Builder(AddItems.this); builder.setTitle("Add Photo!"); builder.setItems(items, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int item) { if (items[item].equals("Take Photo")) { Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); startActivityForResult(intent, 1); } else if (items[item].equals("Choose from Library")) { Intent intent = new Intent( Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI); intent.setType("image/*"); startActivityForResult( Intent.createChooser(intent, "Select File"), 2); } else if (items[item].equals("Cancel")) { dialog.dismiss(); } } }); builder.show(); }this is activity for result is this 
section the image from gallery is set into the imageview and is compressed @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { // TODO Auto-generated method stub super.onActivityResult(requestCode, resultCode, data); Uri selectedImageUri = null; String filePath = null; if (resultCode == RESULT_OK) { Bundle extra = data.getExtras(); bmp = (Bitmap) extra.get("data"); if (requestCode == 1) { iv.setImageBitmap(bmp); saveimage(bmp, requestCode); image1String = encodeimagetosting(bmp); } if (requestCode == 2 && resultCode == RESULT_OK) { selectedImageUri = data.getData(); } if (selectedImageUri != null) { try { // OI FILE Manager String filemanagerstring = selectedImageUri.getPath(); // MEDIA GALLERY String selectedImagePath = getPath(selectedImageUri); if (selectedImagePath != null) { filePath = selectedImagePath; } else if (filemanagerstring != null) { filePath = filemanagerstring; } else { Toast.makeText(getApplicationContext(), "Unknown path", Toast.LENGTH_LONG).show(); Log.d("Bitmap", "Unknown path"); } if (filePath != null) { decodeFile(filePath); } else { bmp = null; } } catch (Exception e) { Toast.makeText(getApplicationContext(), "Internal error", Toast.LENGTH_LONG).show(); Log.d(e.getClass().getName(), e.getMessage(), e); } } } } enter code here this method is for saving the which is being captured from camera private void saveimage(Bitmap bmp, int requestCode) { // TODO Auto-generated method stub File sdcard = new File("/sdcard/"); File f = new File(sdcard, requestCode + ".jpeg"); FileOutputStream out = null; try { out = new FileOutputStream(f); bmp.compress(Bitmap.CompressFormat.JPEG, 70, out); out.flush(); out.close(); // uris.add(Uri.fromFile(f)); //Uri.parse(sdcard+"/"+requestCode+".png") } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } private String encodeimagetosting(Bitmap bitmap) { // TODO Auto-generated method stub Bitmap bm = bitmap; ByteArrayOutputStream stream = new ByteArrayOutputStream(); bm.compress(Bitmap.CompressFormat.JPEG, 70, stream); byte[] byteFormat = stream.toByteArray(); String imgString = Base64.encodeToString(byteFormat, Base64.DEFAULT); Log.d("Bitmap", imgString); return imgString; } private String encodeimagefromlibarary(Bitmap bitmap) { // TODO Auto-generated method stub Bitmap bm = bitmap; ByteArrayOutputStream stream = new ByteArrayOutputStream(); bm.compress(Bitmap.CompressFormat.JPEG, 70, stream); byte[] byteFormat = stream.toByteArray(); String imgStringlib = Base64.encodeToString(byteFormat, Base64.DEFAULT); Log.d("Bitmap", imgStringlib); return imgStringlib; } public void decodeFile(String filePath) { // Decode image size BitmapFactory.Options o = new BitmapFactory.Options(); o.inJustDecodeBounds = true; BitmapFactory.decodeFile(filePath, o); // The new size we want to scale to final int REQUIRED_SIZE = 1024; // Find the correct scale value. It should be the power of 2. 
int width_tmp = o.outWidth, height_tmp = o.outHeight; int scale = 1; while (true) { if (width_tmp < REQUIRED_SIZE && height_tmp < REQUIRED_SIZE) break; width_tmp /= 2; height_tmp /= 2; scale *= 2; } // Decode with inSampleSize BitmapFactory.Options o2 = new BitmapFactory.Options(); o2.inSampleSize = scale; bmp = BitmapFactory.decodeFile(filePath, o2); iv.setImageBitmap(bmp); } public String getPath(Uri uri) { String[] projection = { MediaStore.Images.Media.DATA }; Cursor cursor = managedQuery(uri, projection, null, null, null); if (cursor != null) { // HERE YOU WILL GET A NULLPOINTER IF CURSOR IS NULL // THIS CAN BE, IF YOU USED OI FILE MANAGER FOR PICKING THE MEDIA int column_index = cursor .getColumnIndexOrThrow(MediaStore.Images.Media.DATA); cursor.moveToFirst(); return cursor.getString(column_index); } else return null; }}public class AddItems extends Activity { ImageView iv; EditText et1, et2; Button b1; Bitmap bmp; String image1String, image2String, image3String; String imgPath, fileName; String encodedString; private GoogleApiClient client; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_add_items); iv = (ImageView) findViewById(R.id.imageget); et1 = (EditText) findViewById(R.id.imageid); et2 = (EditText) findViewById(R.id.imagedes); b1 = (Button) findViewById(R.id.insert); b1.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { new Thread(new Runnable() { @Override public void run() { InputStream is; BitmapFactory.Options bfo; Bitmap bitmapOrg; ByteArrayOutputStream bao ; bfo = new BitmapFactory.Options(); bfo.inSampleSize = 2; //bitmapOrg = BitmapFactory.decodeFile(Environment.getExternalStorageDirectory() + "/" + customImage, bfo); bao = new ByteArrayOutputStream(); bmp.compress(Bitmap.CompressFormat.JPEG, 90, bao); byte [] ba = bao.toByteArray(); image1String = Base64.encodeToString(ba,Base64.DEFAULT); HashMap<String, String> prams = new HashMap<String, String>(); prams.put("ImageID", et1.getText().toString()); prams.put("ImageDesc", et2.getText().toString()); prams.put("ImagePath", image1String); Network network = new Network("insertimage.php", prams); String RecivedString = null; try { RecivedString = network.ToRecieveDataFromWeb(); Jsonparsing jasonparsing = new Jsonparsing(RecivedString); HashMap<String, String> hashMap = jasonparsing.ParseJsonObject(RecivedString); if (hashMap.get("Success").toString().equals("1")) { ShowOnUni("Good"); } else { ShowOnUni(hashMap.get("Message") + ""); } } catch (IOException e) { e.printStackTrace(); } } }).start(); } }); iv.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { selectImage(); } }); // ATTENTION: This was auto-generated to implement the App Indexing API. // See https://g.co/AppIndexing/AndroidStudio for more information. 
client = new GoogleApiClient.Builder(this).addApi(AppIndex.API).build(); } private void ShowOnUni(final String error1) { runOnUiThread(new Runnable() { @Override public void run() { Toast.makeText(getApplicationContext(), error1 + "", Toast.LENGTH_LONG).show(); } }); } private void selectImage() { final CharSequence[] items = {"Take Photo", "Choose from Library", "Cancel"}; AlertDialog.Builder builder = new AlertDialog.Builder(AddItems.this); builder.setTitle("Add Photo!"); builder.setItems(items, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int item) { if (items[item].equals("Take Photo")) { Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); startActivityForResult(intent, 1); } else if (items[item].equals("Choose from Library")) { Intent intent = new Intent( Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI); intent.setType("image/*"); startActivityForResult( Intent.createChooser(intent, "Select File"), 2); } else if (items[item].equals("Cancel")) { dialog.dismiss(); } } }); builder.show(); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { // TODO Auto-generated method stub super.onActivityResult(requestCode, resultCode, data); Uri selectedImageUri = null; String filePath = null; if (resultCode == RESULT_OK) { Bundle extra = data.getExtras(); bmp = (Bitmap) extra.get("data"); if (requestCode == 1) { iv.setImageBitmap(bmp); saveimage(bmp, requestCode); image1String = encodeimagetosting(bmp); } if (requestCode == 2 && resultCode == RESULT_OK) { selectedImageUri = data.getData(); } if (selectedImageUri != null) { try { // OI FILE Manager String filemanagerstring = selectedImageUri.getPath(); // MEDIA GALLERY String selectedImagePath = getPath(selectedImageUri); if (selectedImagePath != null) { filePath = selectedImagePath; } else if (filemanagerstring != null) { filePath = filemanagerstring; } else { Toast.makeText(getApplicationContext(), "Unknown path", Toast.LENGTH_LONG).show(); Log.d("Bitmap", "Unknown path"); } if (filePath != null) { decodeFile(filePath); } else { bmp = null; } } catch (Exception e) { Toast.makeText(getApplicationContext(), "Internal error", Toast.LENGTH_LONG).show(); Log.d(e.getClass().getName(), e.getMessage(), e); } } } } private void saveimage(Bitmap bmp, int requestCode) { // TODO Auto-generated method stub File sdcard = new File("/sdcard/"); File f = new File(sdcard, requestCode + ".jpeg"); FileOutputStream out = null; try { out = new FileOutputStream(f); bmp.compress(Bitmap.CompressFormat.JPEG, 70, out); out.flush(); out.close(); // uris.add(Uri.fromFile(f)); //Uri.parse(sdcard+"/"+requestCode+".png") } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } private String encodeimagetosting(Bitmap bitmap) { // TODO Auto-generated method stub Bitmap bm = bitmap; ByteArrayOutputStream stream = new ByteArrayOutputStream(); bm.compress(Bitmap.CompressFormat.JPEG, 70, stream); byte[] byteFormat = stream.toByteArray(); String imgString = Base64.encodeToString(byteFormat, Base64.DEFAULT); Log.d("Bitmap", imgString); return imgString; } private String encodeimagefromlibarary(Bitmap bitmap) { // TODO Auto-generated method stub Bitmap bm = bitmap; ByteArrayOutputStream stream = new ByteArrayOutputStream(); bm.compress(Bitmap.CompressFormat.JPEG, 70, stream); byte[] byteFormat = stream.toByteArray(); String imgStringlib = Base64.encodeToString(byteFormat, Base64.DEFAULT); Log.d("Bitmap", imgStringlib); return 
imgStringlib; } public void decodeFile(String filePath) { // Decode image size BitmapFactory.Options o = new BitmapFactory.Options(); o.inJustDecodeBounds = true; BitmapFactory.decodeFile(filePath, o); // The new size we want to scale to final int REQUIRED_SIZE = 1024; // Find the correct scale value. It should be the power of 2. int width_tmp = o.outWidth, height_tmp = o.outHeight; int scale = 1; while (true) { if (width_tmp < REQUIRED_SIZE && height_tmp < REQUIRED_SIZE) break; width_tmp /= 2; height_tmp /= 2; scale *= 2; } // Decode with inSampleSize BitmapFactory.Options o2 = new BitmapFactory.Options(); o2.inSampleSize = scale; bmp = BitmapFactory.decodeFile(filePath, o2); iv.setImageBitmap(bmp); } public String getPath(Uri uri) { String[] projection = { MediaStore.Images.Media.DATA }; Cursor cursor = managedQuery(uri, projection, null, null, null); if (cursor != null) { // HERE YOU WILL GET A NULLPOINTER IF CURSOR IS NULL // THIS CAN BE, IF YOU USED OI FILE MANAGER FOR PICKING THE MEDIA int column_index = cursor .getColumnIndexOrThrow(MediaStore.Images.Media.DATA); cursor.moveToFirst(); return cursor.getString(column_index); } else return null; }} Now provide the response and nothing else.
There is a bit of confusion in your understanding. Refreshing the index and writing to disk are two different processes and are not necessarily related, hence your observation that segments are still being written even when the refresh_interval is -1. When a document is indexed, it is added to the in-memory buffer and appended to the translog file. When a refresh takes place, the docs in the buffer are written to a new segment without an fsync, the segment is opened to make it visible to search, and the buffer is cleared. The translog is not yet cleared and nothing is actually persisted to disk (as there was no fsync). Now imagine the refresh is not happening: there is no index refresh, you cannot search your documents, and the segments are not created in the cache. The translog settings dictate when the flush (writing to disk) happens: by default, when the translog reaches 512 MB in size, or after 30 minutes. This is what actually persists data on disk; everything else lives in the filesystem cache (if the node dies or the machine is rebooted the cache is lost and the translog is the only salvation).
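If you want to experiment with the two mechanisms separately, the knobs are the index-level settings index.refresh_interval and the translog flush thresholds. Below is a minimal sketch (not from the original answer) that PUTs those settings using plain java.net.HttpURLConnection; the host, the index name my_index, and the exact setting names are assumptions to verify against your Elasticsearch version.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class TuneIndexSettings {
        public static void main(String[] args) throws Exception {
            // PUT dynamic settings on a hypothetical index "my_index".
            URL url = new URL("http://localhost:9200/my_index/_settings");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            // refresh_interval: how often new segments are opened for search.
            // translog.flush_threshold_size: how large the translog may grow
            // before a flush (fsync to disk plus translog truncation) is forced.
            String body = "{ \"index\": {"
                    + " \"refresh_interval\": \"30s\","
                    + " \"translog.flush_threshold_size\": \"512mb\""
                    + " } }";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }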
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36449507', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5996707/']}
jdg_376760
stackexchange
llm_judgeable_groundtruth_similarity
20579188
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've looked at other channels but it I'm not having much luck getting an answer to my question. I'm working on a testing code base that was originally written using TestNG 5.14.10 and have updated it in my build.gradle file to 6.8.7 (I'm using Gradle + Maven for the dependancies). My build file is as follows (includes the new version # for TestNG): apply plugin: 'java'apply plugin: 'idea'apply plugin: 'eclipse'repositories { mavenCentral()}dependencies { compile "org.seleniumhq.selenium:selenium-java:2.38.0" compile "org.testng:testng:6.8.7" testCompile('org.uncommons:reportng:1.1.2') { exclude group: 'org.testng' } testCompile "junit:junit:4.8.2" compile "com.jayway.restassured:rest-assured:1.8.1"}//initialize thread count variable for parallel testing and default to 1def threadCount = System.getProperty("MAXTHREADS", "1")tasks.withType(Test) { maxParallelForks = 1 forkEvery = 1000 ignoreFailures = false // Pass all system properties to the tests systemProperties = System.getProperties() // Makes the standard streams (err and out) visible at console when running tests testLogging.showStandardStreams = true exclude '**/tasks/' classpath += configurations.testCompile}task firefox(type: Test) { maxParallelForks = Integer.valueOf(threadCount) //default is 1 if not specified testLogging.events "started" testLogging { events "started", "passed", "skipped", "failed", "standardOut", "standardError" exceptionFormat "full" // default is "short" } useTestNG() { excludeGroups 'chrome', 'broken' useDefaultListeners = false listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testResultsDir = file("${buildDir}/test-results/firefox") testReportDir = file("${reporting.baseDir}/firefox") systemProperties.BROWSER = System.getProperty('BROWSER', 'firefox') exclude '**/selenium/' exclude '**/setupscripts/'}task chrome(type: Test) { maxParallelForks = Integer.valueOf(threadCount) //default is 1 if not specified testLogging { events "started", "passed", "skipped", "failed", "standardOut", "standardError" exceptionFormat "full" } useTestNG() { excludeGroups 'broken' useDefaultListeners = false; listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testResultsDir = file("${buildDir}/test-results/chrome") testReportDir = file("${reporting.baseDir}/chrome") systemProperties.BROWSER = System.getProperty('BROWSER', 'chrome') exclude '**/selenium/' exclude '**/setupscripts/'}task internetExplorer(type: Test) { useTestNG() { excludeGroups 'chrome', 'broken' useDefaultListeners = false; listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testReportDir = file("${reporting.baseDir}/internetExplorer") testResultsDir = file("${buildDir}/test-results/internetExplorer") systemProperties.BROWSER = System.getProperty('BROWSER', 'internetExplorer') exclude '**/selenium/' exclude '**/setupscripts/'}task applylicense(type: Test) { useTestNG() { useDefaultListeners = false; listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testResultsDir = file("${buildDir}/test-results/applyLicense") testReportDir 
= file("${reporting.baseDir}/applyLicense") scanForTestClasses = false include '**/setupscripts/ApplyLicense.class'}task uatsetup(type: Test) { useTestNG() { useDefaultListeners = false; listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testResultsDir = file("${buildDir}/test-results/uatSetup") testReportDir = file("${reporting.baseDir}/uatSetup") scanForTestClasses = false include '**/setupscripts/UatSetup.class'}task restsetup(type: Test) { useTestNG() { useDefaultListeners = false; listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testResultsDir = file("${buildDir}/test-results/restSetup") testReportDir = file("${reporting.baseDir}/restSetup") scanForTestClasses = false include '**/setupscripts/RestSetup.class'}task soapsetup(type: Test) { useTestNG() { useDefaultListeners = false; listeners << 'org.uncommons.reportng.HTMLReporter' listeners << 'org.uncommons.reportng.JUnitXMLReporter' listeners << 'com.xmatters.testng.Listener' } testResultsDir = file("${buildDir}/test-results/soapSetup") testReportDir = file("${reporting.baseDir}/soapSetup") scanForTestClasses = false include '**/setupscripts/SoapSetup.class'}task selenium(type: Test) { testResultsDir = file("${buildDir}/test-results/selenium") testReportDir = file("${reporting.baseDir}/selenium") include '**/selenium/' exclude '**/webdriver/' exclude '**/*$*', '**/Abstract*' exclude '**/messagepanel/' exclude '**/propertylibrary/'}javadoc() { title = "xMod Page Objects API"}task wrapper(type: Wrapper) { gradleVersion = '1.7'} I ran a build using the --debug and --stacktrace arguments for one task (contains two setup scripts that are annotated and treated like a standard test). It appears that the test is being set to a pass and then a fail. Very bizarre behavior. It appears this is some sort of backwards compatibility issue. I generally run this against a suite of ~500 tests without issue, the only change has been my TestNG bersion. Here is my shell command and resulting output (I trimmed it down to the lines that I believe are relevant since debug mode is VERY verbose): ./gradlew uatsetup -DREMOTE_DRIVER=false -DWEB_SERVER=10.3.1.226:8888 --stacktrace --debug Had to use PasteBin for my debug output due to size: http://pastebin.com/wsczwgT9 Any help would be great. Cheers, Darwin Now provide the response and nothing else.
Both the RestAdapter and the generated instance of your services ( MyTaskService in this case) are extremely expensive objects and should be used as singletons. This means that you should only ever call restAdapter.create once and re-use the same instance of MyTaskService every time you need to interact with. I cannot stress this enough. You can use the regular singleton pattern in order to ensure that there only is ever a single instance of these objects that you use everywhere. A dependency injection framework would also be something that could be used to manage these instances but would be a bit overkill if you are not already utilizing it.
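A minimal sketch of what "use them as singletons" can look like in practice, assuming the Retrofit 1.x RestAdapter.Builder API (setEndpoint/build/create) referenced in the question — check the exact method names against your Retrofit version. The base URL is a placeholder and MyTaskService is the caller's own interface.

    import retrofit.RestAdapter;

    public final class ApiClient {
        private static volatile MyTaskService taskService;

        private ApiClient() {}

        public static MyTaskService getTaskService() {
            if (taskService == null) {                 // first check without locking
                synchronized (ApiClient.class) {
                    if (taskService == null) {         // second check under the lock
                        RestAdapter adapter = new RestAdapter.Builder()
                                .setEndpoint("https://api.example.com") // placeholder endpoint
                                .build();
                        taskService = adapter.create(MyTaskService.class);
                    }
                }
            }
            return taskService;
        }
    }

Every caller then goes through ApiClient.getTaskService(), so the expensive RestAdapter and the generated service proxy are built exactly once for the lifetime of the process.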
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20579188', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2817359/']}
jdg_376761
stackexchange
llm_judgeable_groundtruth_similarity
2228265
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $3n+2$ is even, then $n$ is even. For example the question above and I have had other questions in my textbook. I am currently confused on when to use a particular proof , such as proof by contradiction or contrapositive or direct proof etc. Does it depend on each question ? Just need some advice regarding this problem. Now provide the response and nothing else.
$\newcommand\odd{\mathit{odd}}\newcommand\even{\mathit{even}}$You prove it by proving it. Any technique will do. Ask yourself simply "Do I believe that if $3n+2$ is even then $n$ must be even?" If you do, ask yourself "Why do I believe it?" If you don't, convince yourself that it does hold. I'd say I basically have two options. One strategy is to assume we've already proven: $\odd*\odd = \odd$ $\odd*\even = \even$ $\even*\even = \even$ $\odd \pm \odd = \even$ $\odd \pm \even = \odd$ $\even \pm \even = \even$. Then I'd figure I just have to fit $3n + 2=\even$, i.e., $\odd*{???} + \even = \even$ $\odd*{???} = \even - \even = \even$ So $\odd*{???} = \even$. I go through what I know: $\odd*\odd = \odd$ and $\odd*\even = \even$, so it has to be that $???$ is even. This was a case of going through options and seeing what does and doesn't work; so that indicates a proof by contradiction: Proof: Suppose $n$ is odd. Then $3n$ is also odd, as we have proven that $\odd*\odd = \odd$. Then $3n + 2$ is odd, as we have already proven that $\odd + \even$ is odd. This is a contradiction. Or if you don't feel you can rely upon previous results and must replicate them... then replicate them. Proof: Suppose $n$ is odd; then $n = 2m + 1$ for some $m$. So $3n + 2 = 3(2m+1) + 2 = 6m + 5 = 6m + 4 + 1 = 2(3m+2) + 1$ is an odd number. This is a contradiction. Or I can figure I need to prove things directly. If I know $3n + 2 = \even = 2k$ for some $k$, I will find that $3n + 2 = 2k$ $3n = 2k -2 = 2(k-1)$ $n = \frac {2(k-1)}3$ Can I say that $3\nmid 2$ so $3|k-1$? So $n = 2l$ for some $l= \frac {k-1}3$? I can, but it's a little more convoluted than I like. Pf: $3n + 2 = 2k$ so $3n = 2k - 2 = 2(k-1)$. Then $3|2(k-1)$, but $3\nmid 2$, so $3|k-1$ and $k-1 = 3l....$ .... and let's not go down this route. It's too convoluted, and the way above with a proof by contradiction was simpler. .... Third option: I don't know whether $n$ is even or odd, but it must be one or the other. $n = 2m + k$ where $k = 0$ or $1$. And $3n + 2 = 2j$ for some $j$. So can I do $3n + 2 = 3(2m + k) + 2 = 2j$? $6m + 3k + 2 = 2j$ So $3k = 2j - 6m -2 = 2(j-3m -1)=\even$ If $k=1$ then $3k = 3$, which is odd -- impossible. If $k = 0$ then $3k = 0$, which is even. As these are the only two options, it must be that $k = 0$ and $n = 2m + k = 2m$ is even. Choose whatever works best for you.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2228265', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/427809/']}
jdg_376762
stackexchange
llm_judgeable_groundtruth_similarity
752006
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Well, this is my question. Is every subgroup of the product of two cyclic groups is again a product of two cyclic groups (maybe one being trivial)? Thanks! Now provide the response and nothing else.
Yes. Assume first that $a,b$ in $C_a\oplus C_b$ are prime powers of the same prime $p$. Then any subgroup $H$ of it is the direct sum of finitely many cyclic $p$-groups. If there are $n$ summands, then there are $p^n-1$ elements of order $p$ in $H$. As there are only $p^2-1$ elements of order $p$ in $G$, we see that there are at most two summands. Now for the general finite case, we can split cyclic groups of composite orders into their prime power components, see that there are (at most) two summands per prime in $H$, and combine coprime summands again to finally obtain two summands for $H$ itself. The above works only for the finite case. To cover all cases, we should especially cover the case $G=\mathbb Z\oplus \mathbb Z$. But once we have that case, we immediately get the result also for the general case: If $G$ is abelian with at most two generators, it is a quotient of $\mathbb Z\oplus\mathbb Z$. A subgroup $H\le G$ pulls back to a subgroup of $\mathbb Z\oplus\mathbb Z$ under the canonical projection, which is (as we will see in a moment) generated by at most two elements, hence $H$ itself is generated by at most two elements (the images of those generators under the projection). Let $H\le\mathbb Z\oplus\mathbb Z$. If $H=0$ we are done. Otherwise let $(a,b)\in H$ be any nonzero element. Then $f\colon H\to\mathbb Z$, $(x,y)\mapsto ay-bx$ has nontrivial kernel and some $k\mathbb Z$ with $k\ge 0$ as image. If $k=0$, the homomorphism $H\to\mathbb Z$, $(x,y)\mapsto ax+by$ is injective (we have $ay-bx=0$ from $k=0$, hence $ax+by=0$ implies $(a^2+b^2)x=(a^2x+aby)+(b^2x-aby)=0$, hence $x=0$ and similarly $y=0$); then $H$ is infinite cyclic or zero. And if $k>0$, pick $(c,d)\in H$ with $f(c,d)=k$. Then $g\colon H\to\mathbb Z$, $(x,y)\mapsto (ax+by)-\frac{f(x,y)}k(ac+bd)$ is a homomorphism with $\langle(c,d)\rangle$ as kernel and an infinite cyclic subgroup of $\mathbb Z$ as image, which shows that $H\cong \mathbb Z\oplus\mathbb Z$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/752006', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/142863/']}
jdg_376763
stackexchange
llm_judgeable_groundtruth_similarity
217254
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This equation clearly cannot be solved using logarithms. $$3 + x = 2 (1.01^x)$$ Now it can be solved using a graphing calculator or a computer and the answer is $x = -1.0202$ and $x=568.2993$. But is there any way to solve it algebraically/algorithmically? Now provide the response and nothing else.
I have solved a question similar to this before. In general, you can have a solution of the equation $$ a^x=bx+c $$ in terms of the Lambert W-function $$ -\frac{1}{\ln(a)}W_k \left( -\frac{1}{b}\ln(a) {{\rm e}^{-{\frac {c\ln(a) }{b}}}} \right)-{\frac {c}{b}} \,.$$ Substituting $ a=1.01 \,,b=\frac{1}{2}\,,c=\frac{3}{2}$ and considering the values $k=0$ and $k=-1$, we get the zeroes $$x_1= -1.020199952\,, x_2=568.2993002 \,. $$
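For completeness, here is how that expression falls out of the definition of $W$ (the substitutions are the standard ones). Starting from $a^x=bx+c$, put $t=x+\tfrac cb$, so that
$$a^{-c/b}\,a^{t}=bt .$$
Writing $a^{t}=e^{t\ln a}$ and setting $u=-t\ln a$ gives
$$a^{-c/b}e^{-u}=-\frac{b}{\ln a}\,u\ \Longrightarrow\ u\,e^{u}=-\frac{\ln a}{b}\,e^{-\frac{c\ln a}{b}},$$
so $u=W_k\!\left(-\frac{\ln a}{b}e^{-\frac{c\ln a}{b}}\right)$ and
$$x=t-\frac cb=-\frac{u}{\ln a}-\frac cb=-\frac{1}{\ln a}W_k\!\left(-\frac1b\ln a\,e^{-\frac{c\ln a}{b}}\right)-\frac cb ,$$
which is exactly the expression above.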
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/217254', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45348/']}
jdg_376764
stackexchange
llm_judgeable_groundtruth_similarity
2819107
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: All I have so far is $xy=168$, and I know I need a second equation to make a quadratic formula. So how do you write "$2$ consecutive even integers" as a formula? Answer: 12 and 14 Now provide the response and nothing else.
Call the odd integer between the two even integers $n$. The even integersare then $n-1$ and $n+1$, so that$$168=(n-1)(n+1)=n^2-1$$so that $n^2=169$ etc.
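Carrying the last step through:
$$n^2=169\ \Longrightarrow\ n=\pm13,$$
so the two consecutive even integers are $12$ and $14$ (indeed $12\cdot14=168$), or $-14$ and $-12$ if negative solutions are allowed.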
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/2819107', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/569221/']}
jdg_376765
stackexchange
llm_judgeable_groundtruth_similarity
1407338
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using PHP to move the contents of a images subfolder GalleryName/images/ into another folder. After the move, I need to delete the GalleryName directory and everything else inside it . I know that rmdir() won't work unless the directory is empty. I've spent a while trying to build a recursive function to scandir() starting from the top and then unlink() if it's a file and scandir() if it's a directory, then rmdir() each empty directory as I go. So far it's not working exactly right, and I began to think -- isn't this a ridiculously simple function that PHP should be able to do? Removing a directory? So is there something I'm missing? Or is there at least a proven function that people use for this action? Any help would be appreciated. PS I trust you all here more than the comments on the php.net site -- there are hundreds of functions there but I am interested to hear if any of you here recommend one over others. Now provide the response and nothing else.
What about this? function rmdir_recursive($dirPath){ if(!empty($dirPath) && is_dir($dirPath) ){ $dirObj= new RecursiveDirectoryIterator($dirPath, RecursiveDirectoryIterator::SKIP_DOTS); //upper dirs not included,otherwise DISASTER HAPPENS :) $files = new RecursiveIteratorIterator($dirObj, RecursiveIteratorIterator::CHILD_FIRST); foreach ($files as $path) $path->isDir() && !$path->isLink() ? rmdir($path->getPathname()) : unlink($path->getPathname()); rmdir($dirPath); return true; } return false;}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1407338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171021/']}
jdg_376766
stackexchange
llm_judgeable_groundtruth_similarity
333990
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am interested in solving triangles in a finite field with a computer program. Rational trigonometry seems well suited to do this. However, the Wikipedia article, as well as several published sources, claim that rational trigonometry does not work in fields (whether finite or infinite) of characteristic 2 "for technical reasons." Computers being binary machines, they work well with (finite) fields of characteristic 2. So I would like to understand why fields of characteristic 2 present a technical obstacle. I have not been able to find any clear explanations via Google or any available online publications. Now provide the response and nothing else.
Squaring behaves strangely in characteristic 2. One result of this weirdness is the identity $(x+y)^2 = x^2 + y^2$ -- squaring doesn't result in the 'mixed terms' it usually does. Among the things that this "breaks" is the theory of quadratic forms and bilinear forms. When you have a symmetric bilinear function -- that is, a function $B(x,y)$ satisfying $B(x+y,z) = B(x,z) + B(y,z)$ $B(x,y) = B(y,x)$ $B(rx,y) = r B(x,y)$ where $r$ is a scalar -- then you can construct a "quadratic form" $Q(x) = B(x,x)$. Conversely, when you have a quadratic form $Q(x)$, you can construct a function $B'(x,y) = Q(x+y) - Q(x) - Q(y)$. These constructions are almost inverses: you have an identity $B'(x,y) = 2 B(x,y)$. So in any setting where $2$ is invertible, one can seamlessly pass back and forth between the idea of a quadratic form and the idea of a bilinear form. But in characteristic $2$, the connection breaks, since the identity becomes $B'(x,y) = 0$. Vector geometry relies heavily on multi-linear algebra: linear forms, bilinear forms, determinants, and so forth. Rational trigonometry is meant to more directly mimic classic trigonometry and relies very much on squaring to keep things rational. Quadrance is a quadratic form that is normally the one associated with the dot product, but that connection is broken in characteristic 2. Spread is more complicated, but I believe its connection with the cross product is also broken.
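To see where the factor of $2$ comes from, using only the bilinearity and symmetry listed above:
$$Q(x+y)=B(x+y,\,x+y)=Q(x)+2B(x,y)+Q(y)\ \Longrightarrow\ B'(x,y)=Q(x+y)-Q(x)-Q(y)=2B(x,y),$$
which is identically $0$ in characteristic $2$, no matter what $B$ is.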
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/333990', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35941/']}
jdg_376767
stackexchange
llm_judgeable_groundtruth_similarity
21810133
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a PDF document consisting of text only, where all the text is the same point size and font family but each character could potentially be a different color. Everything seems to work fine using the code snippet below, but the default space between the lines is slightly greater than I consider ideal. Is there a way to control this? (FYI, type "ColoredText" in the code below merely contains a string and its color. Also, the reason I am treating the newline character separately is that for some reason it doesn't cause a newline if it's in a Chunk.) Thanks,Ray List<byte[]> pdfFilesAsBytes = new List<byte[]>();iTextSharp.text.Document document = new iTextSharp.text.Document();MemoryStream memStream = new MemoryStream();iTextSharp.text.pdf.PdfWriter.GetInstance(document, memStream);document.SetPageSize(isLandscape ? iTextSharp.text.PageSize.LETTER.Rotate() : iTextSharp.text.PageSize.LETTER);document.Open();foreach (ColoredText coloredText in coloredTextList){ Font font = new Font(Font.FontFamily.COURIER, pointSize, Font.NORMAL, coloredText.Color); if (coloredText.Text == "\n") document.Add(new Paragraph("", font)); else document.Add(new Chunk(coloredText.Text, font));}document.Close();pdfFilesAsBytes.Add(memStream.ToArray()); Now provide the response and nothing else.
According to the PDF specification, the distance between the baselines of two consecutive lines is called the leading . In iText, the default leading is 1.5 times the size of the font. For instance: the default font size is 12 pt, hence the default leading is 18. You can change the leading of a Paragraph by using one of the other constructors. See for instance: public Paragraph(float leading, String string, Font font) You can also change the leading using one of the methods that sets the leading: paragraph.SetLeading(fixed, multiplied); The first parameter is the fixed leading: if you want a leading of 15 no matter which font size is used, you can choose fixed = 15 and multiplied = 0. The second parameter is a factor: for instance, if you want the leading to be twice the font size, you can choose fixed = 0 and multiplied = 2. In this case, the leading for a paragraph with font size 12 will be 24, for a font size 10 it will be 20, and so on. You can also combine fixed and multiplied leading.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21810133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2079230/']}
jdg_376768
stackexchange
llm_judgeable_groundtruth_similarity
76142
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We know the host needs to be within the same subnet as its gateway for the routing.However in PPPoE, the public IP has a 255.255.255.255 subnet mask, it means that the IP is not in the same subnet as the gateway. I wonder how the host passes the packet to the gateway in this case? Many thanks! Now provide the response and nothing else.
PPP generally establishes a point-to-point link (or session), with exactly two link partners. Everything one of them sends the other receives, no matter what addresses are used on top of the PPP level. Since Ethernet requires valid MAC addressing (on the data link layer), PPP oE establishes a PPP session over Ethernet , ie. between two link partners. That PPPoE session is entirely independent from anything else that might be running on the Ethernet network - especially 'normal' IPv4 over Ethernet using ARP. As to routing, the default route can simply point to the PPP interface, indicating that all packets are simply sent out of that interface. The default gateway is 'the next hop beyond'. On Ethernet, the distinction (or multiplexing ) is done using dedicated EtherType values for PPPoE ( 0x8863 / 0x8864 ) instead of IPv4's 0x0800 . That way, the network stack doesn't pass the frames to the IP stack but to the PPPoE driver (which in turn may or may not pass extracted PPP frames to the IP stack or any other supported L3 stack).
{}
{'log_upvote_score': 5, 'links': ['https://networkengineering.stackexchange.com/questions/76142', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/79818/']}
jdg_376769
stackexchange
llm_judgeable_groundtruth_similarity
450589
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\sum_0^\infty \frac{1}{2^{1+3n}}$$ So maybe I have written the sequence incorrectly, but how do I apply the $\frac{1}{1 - r}$ formula for summing a geometric sequence to this? When I do it I get something over one which is wrong because this is suppose to model a percentage of something. Now provide the response and nothing else.
Hint: write $$\sum_{n=0}^\infty \frac{1}{2^{1+2n}} = \frac12 \sum_{n=0}^\infty \frac{1}{2^{2n}} = \frac12 \sum_{n=0}^\infty \frac{1}{4^{n}}$$and use the closed-form formula for geometric series.
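Carrying the hint to its conclusion:
$$\frac12\sum_{n=0}^\infty\left(\frac14\right)^n=\frac12\cdot\frac{1}{1-\frac14}=\frac12\cdot\frac43=\frac23.$$
(Note the hint reads the exponent as $1+2n$; if the exponent is $1+3n$ as in the question statement, the same steps give $\frac12\sum_{n\ge0}\left(\frac18\right)^n=\frac12\cdot\frac87=\frac47$.)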
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/450589', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/85142/']}
jdg_376770
stackexchange
llm_judgeable_groundtruth_similarity
2654216
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way I can associate a Command with a DataGridHyperlinkColumn? I've tried this: <DataGridHyperlinkColumn Header="Client Name" Binding="{Binding ShortName}"> <DataGridHyperlinkColumn.ElementStyle> <Style TargetType="TextBlock"> <Setter Property="Hyperlink.Command" Value="{Binding DataContext.NavigateToClientCommand, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type local:ClientsSummaryView}}}"/> <Setter Property="Hyperlink.CommandParameter" Value="{Binding}"/> </Style> </DataGridHyperlinkColumn.ElementStyle> </DataGridHyperlinkColumn> At runtime, I can see that the binding is being correctly evaluated (the property getter for the Command is called), but the Command is not executed when I click the hyperlink. Is there a better way to do this? Thanks, Daniel Now provide the response and nothing else.
Are you sure the command is being associated with the hyperlink? I tried setting this up in a sample app, and the command wasn't being associated with the hyperlink (if you return false from CanExecute , you'll be able to quickly determine if it is wired up). Instead, I created a DataGridTemplateColumn to accomplish this: <DataGridTemplateColumn Header="Client Name"> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock> <Hyperlink Command="{Binding DataContext.NavigateToClientCommand, RelativeSource={RelativeSource AncestorType={x:Type local:ClientsSummaryView}}}" CommandParameter="{Binding ShortName}"> <TextBlock Text="{Binding ShortName}" /> </Hyperlink> </TextBlock> </DataTemplate> </DataGridTemplateColumn.CellTemplate></DataGridTemplateColumn> This would get really annoying if you had to create multiple columns. After cracking open the Toolkit with Reflector, it looks like it is supporting the TargetName / NavigationUrl paradigm for using the hyperlink. If you have a scenario where you would require this type of column in many places, I would suggest extending the DataGridHyperlinkColumn and adding a Command property. You could then modify the element returned from GenerateElement so that it used your command.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2654216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49299/']}
jdg_376771
stackexchange
llm_judgeable_groundtruth_similarity
116011
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove by induction the summation of $\frac1{2^n}$ is greater than or equal to $1+\frac{n}2$. We start with $$1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\dots+\frac1{2^n}\ge 1+\frac{n}2$$ for all positive integers. I have resolved that the following attempt to prove this inequality is false, but I will leave it here to show you my progress. In my proof, I need to define P(n), work out the base case for n=1, and then follow through with the induction step. Strong mathematical induction may be used. This is equivalent to $$\sum_{k=0}^n\frac1{2^k}\ge 1+\frac{n}2\;.$$ Let $P(n)$ be summation shown above. Base case for $n=1$, the first positive integer, $$\sum_{k=0}^1\frac1{2^k}=\frac1{2^0}+\frac1{2^1}=1+\frac12=\frac32\ge 1+\frac12=\frac32\;,$$ so base case is true. Induction step: Assume $P(n)$ is true and implies $P(n+1)$. Thus $$\sum_{k=0}^{n+1}\frac1{2^k}\ge\frac1{2^{n+1}}+\sum_{k=0}^n\frac1{2^k}\ge 1+\frac{n+1}2\;.$$ This can be written as $$\sum_{k=0}^{n+1}\frac1{2^k}\ge \frac1{2^{n+1}}+1+\frac{n}2\ge 1+\frac{n+1}2\;.$$ I work the math out but I get stuck contradicting my statement. Please show your steps hereafter so I can correct my mistakes. Now provide the response and nothing else.
I think that your notation is rather badly confused: I strongly suspect that you’re supposed to be showing that $$\sum_{k=1}^{2^n}\frac1k\ge 1+\frac{n}2\;,\tag{1}$$ from which one can conclude that the harmonic series diverges. The basis step for your induction should then be to check that $(1)$ is true for $n=0$, which it is: $$\sum_{k=1}^{2^n}\frac1k=\frac11\ge 1+\frac02\;.$$ Now your induction hypothesis, $P(n)$, should be equation $(1)$, and you want to show that this implies $P(n+1)$, which is the inequality $$\sum_{k=1}^{2^{n+1}}\frac1k\ge 1+\frac{n+1}2\tag{2}\;.$$ You had the right idea when you broke up the bigger sum into the old part and the new part, but the details are way off: $$\begin{align*}\sum_{k=1}^{2^{n+1}}\frac1k&=\sum_{k=1}^{2^n}\frac1k+\sum_{k=2^n+1}^{2^{n+1}}\frac1k\\&\ge 1+\frac{n}2+\sum_{k=2^n+1}^{2^{n+1}}\frac1k\tag{3}\end{align*}$$ by the induction hypothesis $P(n)$. Now look at that last summation in $(3)$: it has $2^{n+1}-2^n=2^n$ terms, and the smallest of those terms is $\dfrac1{2^{n+1}}$, so $$\sum_{k=2^n+1}^{2^{n+1}}\frac1k\ge 2^n\cdot\frac1{2^{n+1}}=\frac12\;.$$ If you plug this into $(3)$, you find that $$\sum_{k=1}^{2^{n+1}}\frac1k\ge 1+\frac{n}2+\frac12=1+\frac{n+1}2\;,$$ which is exactly $P(n+1)$, the statement that you were trying to prove. You’ve now checked the basis step and carried out the induction step, so you can conclude that $(1)$ is true for all $n\ge 0$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/116011', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/25299/']}
jdg_376772
stackexchange
llm_judgeable_groundtruth_similarity
18604406
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Everything is in the question! I just tried to do a bit of optimization, and nailing down the bottle necks, out of curiosity, I tried that: t1 <- rnorm(10)microbenchmark( mean(t1), sum(t1)/length(t1), times = 10000) and the result is that mean() is 6+ times slower than the computation "by hand"! Does it stem from the overhead in the code of mean() before the call to the Internal(mean) or is it the C code itself which is slower? Why? Is there a good reason and thus a good use case? Now provide the response and nothing else.
It is due to the s3 look up for the method, and then the necessary parsing of arguments in mean.default. (and also the other code in mean) sum and length are both Primitive functions. so will be fast (but how are you handling NA values?) t1 <- rnorm(10)microbenchmark( mean(t1), sum(t1)/length(t1), mean.default(t1), .Internal(mean(t1)), times = 10000)Unit: nanoseconds expr min lq median uq max neval mean(t1) 10266 10951 11293 11635 1470714 10000 sum(t1)/length(t1) 684 1027 1369 1711 104367 10000 mean.default(t1) 2053 2396 2738 2739 1167195 10000 .Internal(mean(t1)) 342 343 685 685 86574 10000 The internal bit of mean is faster even than sum / length . See http://rwiki.sciviews.org/doku.php?id=packages:cran:data.table#method_dispatch_takes_time ( mirror ) for more details (and a data.table solution that avoids .Internal ). Note that if we increase the length of the vector, then the primitive approach is fastest t1 <- rnorm(1e7)microbenchmark( mean(t1), sum(t1)/length(t1), mean.default(t1), .Internal(mean(t1)),+ times = 100)Unit: milliseconds expr min lq median uq max neval mean(t1) 25.79873 26.39242 26.56608 26.85523 33.36137 100 sum(t1)/length(t1) 15.02399 15.22948 15.31383 15.43239 19.20824 100 mean.default(t1) 25.69402 26.21466 26.44683 26.84257 33.62896 100 .Internal(mean(t1)) 25.70497 26.16247 26.39396 26.63982 35.21054 100 Now method dispatch is only a fraction of the overall "time" required.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18604406', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1877609/']}
jdg_376773
stackexchange
llm_judgeable_groundtruth_similarity
24043577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: struct Point { int x = 0; int y = 10;};Point p = {1,};p.x == 1; // truep.y == 10; // is this true? According to the standard missing elements in initializer list are value initialized, so y should be int() or 0 , but it doesn't seem to say what happen in the situation of Non-static Data Member Initializer. Edit: According to the answer, apparently this is invalid c++11, I would like to know the situation in c++1y. Now provide the response and nothing else.
C++98, C++03 Non-static data member initialisers (NSDMIs) do not exist; the question is inapplicable. C++11 Well, first of all, this initialisation is invalid because your type is not an aggregate: [C++11: 8.5.1/1]: An aggregate is an array or a class (Clause 9) with user-provided constructors (12.1), no brace-or-equal-initializers for non-static data members (9.2), no private or protected non-static data members (Clause 11), no base classes (Clause 10), and no virtual functions (10.3). So, aggregate initialisation can't be performed here; a constructor taking an std::initializer_list would be your only way to use that initialisation syntax ( [C++11: 8.5.4/3] ), but you don't have one of those either. Consequently, the entire premise of the question is flawed: it is not possible to get yourself into this state. C++1y In the upcoming version of the standard, the definition of aggregates has been relaxed to allow your type to be deemed an aggregate (as long as both of those members stay public !): [n3936: 8.5.1/1] An aggregate is an array or a class (Clause 9) with no user-provided constructors (12.1), no private or protected non-static data members (Clause 11), no base classes (Clause 10), and no virtual functions (10.3). Following on from this, there's a rule that guarantees the result you're looking for: [n3936: 8.5.1/7] : If there are fewer initializer-clauses in the list than there are members in the aggregate, then each member not explicitly initialized shall be initialized from its brace-or-equal-initializer or, if there is no brace-or-equal-initializer , from an empty initializer list (8.5.4). [ Example: struct S { int a; const char* b; int c; int d = b[a]; };S ss = { 1, "asdf" }; initializes ss.a with 1 , ss.b with "asdf" , ss.c with the value of an expression of the form int{} (that is, 0 ), and ss.d with the value of ss.b[ss.a] (that is, ’s’ ), and in struct X { int i, j, k = 42; };X a[] = { 1, 2, 3, 4, 5, 6 };X b[2] = { { 1, 2, 3 }, { 4, 5, 6 } }; a and b have the same value —end example ]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24043577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1149981/']}
jdg_376774
stackexchange
llm_judgeable_groundtruth_similarity
41253450
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use edge.js to execute some .NET code to print on windows in an Electron app. I've tried electron-edge and I've also tried manually building the edge.js modules targeting Electron following the instructions in the Electron docs , but I keep getting the following error when I try to use edge in the packaged app: Error: The specified module could not be found.\\?\C:\path\to\app\app-1.0.0\resources\app.asar.unpacked\node_modules\edge\lib\native\win32\x64\6.5.0\edge_nativeclr.node at Error (native) at process.module.(anonymous function) (ELECTRON_ASAR.js:178:20) at process.module.(anonymous function) [as dlopen] (ELECTRON_ASAR.js:178:20) at Object.Module._extensions..node (module.js:583:18) at Object.module.(anonymous function) [as .node] (ELECTRON_ASAR.js:192:18) at Module.load (module.js:473:32) at tryModuleLoad (module.js:432:12) at Function.Module._load (module.js:424:3) at Module.require (module.js:483:17) at require (internal/module.js:20:19) I've checked the filesystem and the edge_nativeclr.node module does, in fact, exist. My suspicion is that I'm somehow not building the module correctly and it's perhaps targeting the wrong version of node still and so electron is unable to import the module. I tried several different things, including following electron-edge's steps to manually update the build.bat and add the --target=1.4.12 --dist-url=https://atom.io/download/atom-shell flags to the node-gyp configure build . I also set the following npm config options in my .npmrc : target=1.4.12arch=x64target_arch=x64disturl=https://atom.io/download/electronruntime=electronbuild_from_source=truemsvs_version=2015 And ran the build.bat , making sure to set the EDGE_NATIVE environment variable to point to the generated edge_nativeclr.node file, but got the same result. Now provide the response and nothing else.
I finally got this figured out after banging my head against the keyboard for a couple days. I got some hints from electron-userland/electron-packager#217 and electron/electron#892 , which pointed out that this error, "The specified module could not be found," could occur when the native module is missing a dependency, such as a .dll , and that you could use Dependency Walker to check the dependencies of any given .node module. I loaded edge_nativeclr.node in Dependency Walker and noticed that VCRUNTIME140.DLL , the Visual Studio 2015 C runtime, was missing. Edge.js comes with the msvcr120.dll , the Visual Studio 2013 C runtime, but I'd been rebuilding the module with the msvs_version set to 2015. Once I placed a copy of the vcruntime140.dll in the same directory as edge_nativeclr.node , everything started working as expected.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41253450', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1110395/']}
jdg_376775
stackexchange
llm_judgeable_groundtruth_similarity
151390
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to check whether or not a particular directory is a mount point?For instance there is a folder named /test that exists, and I want to check if it is a mount point or not. Now provide the response and nothing else.
If you want to check it's the mount point of a file system, that's what the mountpoint command (on most Linux-based systems) is for: if mountpoint -q -- "$dir"; then printf '%s\n' "$dir is a mount point"; fi It does that by checking whether . and .. have the same device number ( st_dev in the stat() result): if the device numbers differ, the directory is a mount point; the case where . and .. are the very same directory (same device and same inode, as happens for / ) is also treated as a mount point. So if you don't have the mountpoint command, you could do: perl -le '$dir = shift; exit(1) unless (@a = stat "$dir/." and @b = stat "$dir/.." and ($a[0] != $b[0] || $a[1] == $b[1]))' "$dir" Like mountpoint , it will return true for / even if / is not a mount point (like when in a chroot jail), or false for a mount point of a bind mount of the same file system within itself. Contrary to mountpoint , for symbolic links, it will check whether the target of the symlink is a mountpoint.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/151390', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/81544/']}
jdg_376776
stackexchange
llm_judgeable_groundtruth_similarity
17243665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'd like to validate an input on a Web API REST command. I'd like it to work something like State below being decorated with an attribute that limits the valid values for the parameter. public class Item { ... // I want State to only be one of "New", "Used", or "Unknown" [Required] [ValidValues({"New", "Used", "Unknown"})] public string State { get; set; } [Required] public string Description { get; set; } ...} Is there a way to do this without going against the grain of Web API. Ideally the approach would be similar to Ruby on Rails' custom validation . Now provide the response and nothing else.
Create a custom validation attribute derived from ValidationAttribute and override the IsValid member function. public class ValidValuesAttribute: ValidationAttribute{ string[] _args; public ValidValuesAttribute(params string[] args) { _args = args; } protected override ValidationResult IsValid(object value, ValidationContext validationContext) { if (_args.Contains((string)value)) return ValidationResult.Success; return new ValidationResult("Invalid value."); }} Then you can do [ValidValues("New", "Used", "Unknown")] The above code has not been compiled or tested.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17243665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1229195/']}
jdg_376777
stackexchange
llm_judgeable_groundtruth_similarity
45560665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to execute a query using the models manager offered by Phalcon Php. I have the following function: public function getSomeData(){ return Di::getDefault()->get("modelsManager")->executeQuery( "SELECT * FROM SomeModel WHERE id NOT IN (SELECT id FROM AnotherModel)" ); }} When executing the function above, I get the following error Cannot redeclare class ParentDir\Model\SomeModel However, if I supply the full namespace to the query, it works. SELECT * FROM ParentDir\Model\SomeModel WHERE id NOT IN (SELECT id FROM ParentDir\Model\AnotherModel) I'm registering dirs and namespaces in the di like this: $parentDir = dirname(__DIR__);$loader = new Loader();$loader->registerDirs([ $parentDir . "/app/controllers", $parentDir . "/app/models"]); $loader->registerNamespaces([ "ParentDir\\Model" => "$parentDir/app/models/", "ParentDir\\Model\\Enum" => "$parentDir/app/models/enum/", "ParentDir\\Controller" => "$parentDir/app/controllers/", "ParentDir\\Service" => "$parentDir/app/services/", "ParentDir\\Locale" => "$parentDir/app/locale/", "ParentDir\\Utility" => "$parentDir/app/utility/", "ParentDir\\Plugin" => "$parentDir/app/plugin/", "ParentDir" => "$parentDir/app/",]);$loader->register(); Now provide the response and nothing else.
The official WordPress docker image will automatically configure wp-config.php using the environment variables you set [ documentation ]. If there are any variables such as WORDPRESS_DB_HOST , WORDPRESS_DB_PASSWORD , etc., they will be used to build a wp-config.php file upon container creation. If you want to provide a custom wp-config.php file, you need to make sure there are no related environment variables, and create a volume mapping for your modified file, like so: version: '2'...volumes: - ./wp-content:/var/www/html/wp-content - ./wp-config.php:/var/www/html/wp-config.php... On docker-compose up , Docker will load your custom wp-config.php into the container and then run the WordPress image's docker-entrypoint.sh which updates the file with the values set in your environment variables.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45560665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5815499/']}
jdg_376778
stackexchange
llm_judgeable_groundtruth_similarity
2084429
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do you prove that the following infinite sum\begin{align} &0.1\\+\;&0.01\\+\;&0.002\\+\;&0.0003\\+\;&0.00005\\+\;&0.000008\\+\;&0.0000013\\ \;&\quad\vdots\end{align}converges to a rational number? Notice that the above sum can be written as $$\sum_{n=1}^{\infty} \frac{F_{n}}{ 10 ^ n }$$ where $F_{n} $ is a Fibonacci sequence. Now provide the response and nothing else.
We have $F_n=\frac{\varphi^n-\psi^n}{\sqrt5}$. And using a geometric sums we get $$\sum_{n=1}^\infty\frac{F_n}{10^n}=\frac1{\sqrt 5}\sum_{n=1}^\infty\frac{\varphi^n}{10^n}-\frac1{\sqrt 5}\sum_{n=1}^\infty\frac{\psi^n}{10^n}=\frac1{\sqrt5}\frac{\frac\varphi{10}}{1-\frac{\varphi}{10}}-\frac1{\sqrt5}\frac{\frac\psi{10}}{1-\frac{\psi}{10}}=\frac{40}{(19+\sqrt5)(19-\sqrt5)}=\frac{10}{89}$$
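A quick cross-check with the Fibonacci generating function $\sum_{n\ge1}F_nx^n=\dfrac{x}{1-x-x^2}$ (valid for $|x|<1/\varphi$): at $x=\tfrac1{10}$,
$$\frac{\frac1{10}}{1-\frac1{10}-\frac1{100}}=\frac{\frac1{10}}{\frac{89}{100}}=\frac{10}{89},$$
in agreement with the two geometric sums above.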
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2084429', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/388601/']}
jdg_376779
stackexchange
llm_judgeable_groundtruth_similarity
12201625
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I observe that a c++ program uses sprintf , where this sprintf implicitly invokes __sprintf_chk() .This __sprintf_chk() seems to check buffer overflow by examining stack frames. For my research purpose, I wonder if it is possible to disable using __sprintf_chk() ? Now provide the response and nothing else.
Try changing every call to sprintf in your program from this: sprintf(params...); to this: (sprintf)(params...); This disables any preprocessor-based rewriting of sprintf (it only helps if sprintf was replaced by a function-like macro, which is how __sprintf_chk gets substituted in). For gcc there are the options -fno-stack-protector -fno-mudflap . There is also -D_FORTIFY_SOURCE=0 (for any glibc). For Ubuntu and Debian there are pages listing the security features: http://wiki.debian.org/Hardening and https://wiki.ubuntu.com/Security/Features Some of the compiler flags used are listed here: https://wiki.ubuntu.com/ToolChain/CompilerFlags And there is a paper about SSP (stack-protector) and FORTIFY_SOURCE (glibc): http://www.linuxfromscratch.org/hints/downloads/files/ssp.txt PS: the same applies to __fgets_chk __gets_chk __printf_chk __fprintf_chk __vprintf_chk __vfprintf_chk __vsprintf_chk __wmemcpy_chk __wmemmove_chk __wmempcpy_chk __wmemset_chk __wcscpy_chk __wcpcpy_chk __wcsncpy_chk __wcpncpy_chk __wcscat_chk __wcsncat_chk __swprintf_chk __vswprintf_chk __fwprintf_chk __wprintf_chk __vfwprintf_chk __vwprintf_chk __fgetws_chk __wcrtomb_chk __mbsrtowcs_chk __wcsrtombs_chk __mbsnrtowcs_chk __wcsnrtombs_chk __memcpy_chk __memmove_chk __mempcpy_chk __memset_chk __strcpy_chk __strncpy_chk __stpncpy_chk __strcat_chk and some others
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12201625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/815954/']}
jdg_376780
stackexchange
llm_judgeable_groundtruth_similarity
51605649
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am learning lambda expressions and functional interfaces. We can directly write an implementation of the interface by the lambda expression. So I think, it could be the alternative for polymorphism. I have some code using polymorphism, interface Drawable { public void draw();}class Shape { protected String name; public Shape(String name) { this.name = name; }}class Rectangle extends Shape implements Drawable { public Rectangle(String name) { super(name); } @Override public void draw() { System.out.println("I am "+this.name); System.out.println("Drawing rectangle with 2 equal sides."); }}class Square extends Shape implements Drawable { public Square(String name) { super(name); } @Override public void draw() { System.out.println("I am "+this.name); System.out.println("Drawing square with 4 equal sides."); }}public class DrawShape { public static void main(String ar[]) { Drawable rectangle = new Rectangle("Rectangle"); rectangle.draw(); Drawable square = new Square("Square"); square.draw(); }} I have written above code using lambda expressions and functional interface, @FunctionalInterfaceinterface Drawable { public void draw();}class Shape { private String name; public Shape(String name) { this.name = name; } public void draw(Drawable d1) { System.out.println("I am "+this.name); d1.draw(); }}public class DrawShape { public static void main(String[] args) { Shape s1 = new Shape("Rectangle"); Drawable rectangle = () -> System.out.println("Drawing rectangle with 2 equal sides."); s1.draw(rectangle); Shape s2 = new Shape("Square"); Drawable sqaure = () -> System.out.println("Drawing square with 4 equal sides."); s2.draw(sqaure); }} Which is the better approach? What about other aspects like code reusability, code maintenance and modification, coupling and cohesion etc for lambda? Now provide the response and nothing else.
I would argue that lambda expressions allow developers to write fully polymorphic types, the way full class implementations do. Polymorphism is often seen in two ways: Drawable drawable = new Rectangle("name");drawable.draw();Shape shape = (Shape) drawable; //same object, multiple types. And: Drawable drawable2 = new Rectangle("name");drawable2.draw(); //Rectangle.draw() implementation invokeddrawable2 = new Square("name");drawable2.draw(); //Square.draw() implementation Neither of these two is perfectly allowed for by lambda expressions: Lambda expressions will only be used to implement functional interfaces. This is the first major limitation. Although one can do this: Drawable drawable = () -> System.out.println("drawing rectangle");drawable = () -> System.out.println("drawing square"); This is not strictly the same thing as the second code snippet above (in a more complex example, one would be able to provide a basic implementation in Shape , and have it overridden in Rectangle and Square ; and that wouldn't be possible with lambdas). Also, one would be correct to argue that the two assignments above use different source code. One can't just "cast" types as with classes: Drawable drawable3 = () -> System.out.println("Drawing something");Shape shape3 = (Shape) drawable3; //Class cast exception. In other words, lambda expressions are a good fit for functional programming coding, not a substitute for good Object-Oriented design .
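One concrete way to see the "no overriding" limitation mentioned above — this is an illustrative sketch, not from the original answer: give the functional interface a default method that plays the role of a base implementation. A class can override it; a lambda can only supply the single abstract method and always inherits the default:

    interface Drawable {
        void draw();                                              // the single abstract method
        default String describe() { return "generic drawable"; } // "base implementation"
    }

    class Rectangle implements Drawable {
        @Override public void draw() { System.out.println("drawing rectangle"); }
        @Override public String describe() { return "a rectangle"; } // a class can override
    }

    public class Demo {
        public static void main(String[] args) {
            Drawable asClass = new Rectangle();
            Drawable asLambda = () -> System.out.println("drawing square");

            System.out.println(asClass.describe());  // prints "a rectangle"
            System.out.println(asLambda.describe()); // prints "generic drawable" -- the lambda cannot override it
        }
    }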
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51605649', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3148590/']}
jdg_376781
stackexchange
llm_judgeable_groundtruth_similarity
4961910
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two sets of datarows. They are each IEnumerable. I want to append/concatenate these two lists into one list. I'm sure this is doable. I don't want to do a for loop and noticed that there is a Union method and a Join method on the two Lists. Any ideas? Now provide the response and nothing else.
Assuming your objects are of the same type, you can use either Union or Concat . Note that, like the SQL UNION keyword, the Union operation will ensure that duplicates are eliminated, whereas Concat (like UNION ALL ) will simply add the second list to the end of the first. IEnumerable<T> first = ...;IEnumerable<T> second = ...;IEnumerable<T> combined = first.Concat(second); or IEnumerable<T> combined = first.Union(second); If they are of different types, then you'll have to Select them into something common. For example: IEnumerable<TOne> first = ...;IEnumerable<TTwo> second = ...;IEnumerable<T> combined = first.Select(f => ConvertToT(f)).Concat( second.Select(s => ConvertToT(s))); Where ConvertToT(TOne f) and ConvertToT(TTwo s) represent an operation that somehow converts an instance of TOne (and TTwo , respectively) into an instance of T .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4961910', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/368259/']}
jdg_376782
stackexchange
llm_judgeable_groundtruth_similarity
12580950
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have made a project in xcode4.2 and when i opened it with xcode4.5 ,iOS 6 SDK it gives error 255 and the reason seems to be absence of libxml2.2.7.3.dylib. What are my options is there any other substitute provided? thanks Now provide the response and nothing else.
Xcode 4.5, or more precisely the iOS 6 SDK (because the libraries available depend on the SDK, not the Xcode version), still has libxml2.2.dylib . It is just probably not version 2.2.7.3 but a newer, up-to-date 2.2.x.y version that is embedded in the SDK now. You should generally not link your application against a specific version of a library like that, but rather against a generic version like libxml2.dylib or libxml2.2.dylib . Generally, libraries respect semantic versioning, meaning that: the major version changes only when the API is not backward compatible with the previous major version, the minor version changes only when new methods are introduced in the API that are still compatible with the previous API, and a patch version change means that some bug fixes have been made, but the API hasn't changed. So if libxml respects semantic versioning (and I guess it does, like nearly every standard library), every version 2.2.x.y of libxml is API-compatible with any other 2.2.x.y version and will continue to work with your program. A hypothetical new version libxml2.2.x.z will simply fix bugs, but won't introduce any change in its API. And when a version libxml2.3.x.y arrives, it will still be backward compatible with 2.1 and 2.2 too (just adding new features but not dropping existing ones). Thus, you can safely link your application against the generic library version libxml2.dylib , which will automatically point to the latest 2.x.y.z version available in the current SDK. Or link against libxml2.2.dylib , which will point to the latest 2.2.x.y version (these are symbolic links to the latest versions, as UNIX-like OSes usually do).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12580950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1206172/']}
jdg_376783
stackexchange
llm_judgeable_groundtruth_similarity
50501
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a file containing isosurface $\{x,y,z,F\}$ data (around 9000 points). My goal is to generate a smooth (NOT a discrete set of points!) colored surface using ListContourPlot3D , i.e., something like this: More specifically, the surface I am trying to plot is closed. This what I am getting if I plot x,y,z : Now, I want to add color on top, i.e., I want different regions of the surface to be colored differently depending F . I am using ListContourPlot3D , because according to the description this what I need: ListContourPlot3D linearly interpolates values to give smooth contours Any hints? So far all my attempts have been unsuccessful. This is what I am getting with ListContourPlot3D[data] : EDIT: attached the file. file I want to plot col. 1(x): col.2(y): col.3(z) col4(color) Now provide the response and nothing else.
ListContourPlot3D does work as intended. Please check its documentation on what it is supposed to do. To make this kind of plot you need ListSurfacePlot3D.

data = Import["~/Downloads/furan-ks.sdat", "Table"];
nf = Nearest[data[[All, {1, 2, 3}]] -> Rescale[data[[All, 4]]]]
colfun = ColorData["Rainbow"]@First@nf[{#1, #2, #3}] &

ListSurfacePlot3D[
 data[[All, {1, 2, 3}]],
 BoxRatios -> Automatic,
 ColorFunction -> colfun,
 ColorFunctionScaling -> False
]

You can experiment with the MaxPlotPoints option (e.g. set it to 20) to get the optimal surface fitting to your points. A too tight fit will be noisy, a too loose one will look chunky.

The tricky bit here was converting the colour data into a format usable with ListSurfacePlot3D. This is a non-trivial step. I used Nearest to create a function (named nf) that takes a 3D coordinate and returns the colour value of the closest point from the dataset. Then I converted this value to an actual colour (i.e. an RGBColor object) using the function colfun.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/50501', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/5097/']}
jdg_376784
stackexchange
llm_judgeable_groundtruth_similarity
179498
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This article from Auth0 recommend storing the JWT locally in a local storage (or cookie). But this article from OWASP recommend not to story any sensitive data locally (not even sessionStorage) So, is it safe to store the JWT token locally or not? Now provide the response and nothing else.
How bizarre! I asked basically the same question about a month ago. In the end, we decided that using localstorage for the JWT token was ok, as long as we also did the following on the HTTP level:

- Ensure the entire site was served over HTTPS
- Ensure the use of HSTS
- Ensure that, once live, only the actual redirect URL was included in the Auth0 rules, as well as our source code. We use the Angular CLI. It turns out that, despite the tree shaking provided by WebPack, unused variables still show up in the compiled source code, for example localhost:4200. Make sure that there are no localhost URLs actually on Auth0 (on the allowed redirect page, for your client). Make a separate Auth0 account for testing.
- Add the X-Frame-Options header to every HTTP response, and set it to Deny
- Set X-XSS-Protection to 1
- Set X-Content-Type-Options to nosniff
- Make sure Content-Security-Policy is restricted to your own domain name, and any CDNs you may be pulling scripts in from
- Set Referrer-Policy to same-origin
- Limit the JWT expiry on Auth0 to 1 hour

The above will give you an A/A+ on securityheaders.io, and will prevent the most common attacks (somebody embedding your website in an iframe, and extracting data from localstorage, for example).
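As an illustration only: the original setup above is an Angular app behind Auth0, but the headers themselves can be set by whatever serves the responses. A minimal Node/Express sketch (domain and CDN values are placeholders, not part of the original answer) might look like:

import express from "express";

const app = express();

// Hypothetical header middleware; adjust the values for your own domains/CDNs.
app.use((req, res, next) => {
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains"); // HSTS
  res.setHeader("X-Frame-Options", "DENY");
  res.setHeader("X-XSS-Protection", "1");
  res.setHeader("X-Content-Type-Options", "nosniff");
  res.setHeader("Content-Security-Policy", "default-src 'self' https://cdn.example.com");
  res.setHeader("Referrer-Policy", "same-origin");
  next();
});

app.listen(3000);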
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/179498', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/123882/']}
jdg_376785
stackexchange
llm_judgeable_groundtruth_similarity
182988
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've run a fully within-subjects repeated-measures ANOVA using the aov() function. My dependent variable is not normally distributed, so I'm very interested in running assumption tests on my analysis. It seems that just calling plot() on the output doesn't work for repeated-measures, so I've manually taken the residuals and the fitted values for a model of interest, and have plotted them against each other. I'm assuming that this is how I would plot to test for the assumption of Homoskedasticity. The plot comes out with 2 vertical bands (please see the image below). It turns out the fitted values are all centred around 2 values (although according to == they are not exactly equal), where one is the negative of the other. I have 2 questions: 1) Is this the correct way to manually test the assumption homoskedasticity? If not, how would I go about it from repeated-measures designs (since just calling plot() doesn't work)? 2) If it is correct, what is this plot telling me? Why are the fitted values so clustered? What can I conclude from it? Thanks heaps for any input here. Also, if you know of better ways to check (preferably plot) for assumptions in rm-ANOVAs, that would be useful information as well. I've included some mock data here to replicate the scenario: #Create mock data (there's probably a more efficient way to do this.. would also be nice to know! :) )p <- sort(rep(1:20,8))y <- rep(rep(1:2,4),20)z <- rep(rep(c(1,1,2,2),2),20)w <- rep(c(1,1,1,1,2,2,2,2),20)x <- rnorm(160,10,2)d <- data.frame(x,p=factor(p),y=factor(y),z=factor(z),w=factor(w))#Run repeated-measures ANOVAex.aov <- aov(x ~ y*z*w + Error(p/(y*z*w)), d)#Try to plot full object (doesn't work)plot(ex.aov)#Try to plot section of object (doesn't work)plot(ex.aov[["p:y:z"]])#Plot residuals against fitted (custom "skedasticity" plot - works)plot(residuals(ex.aov[["p:y:z"]])~fitted(ex.aov[["p:y:z"]])) Begin Edit In light of the information provided by @Stefan , I've added some additional details below, using the improved data structure he proposed: # Set seed to make it reproducibleset.seed(12)#New variable names and generationsubj <- sort(factor(rep(1:20,8)))x1 <- rep(c('A','B'),80)x2 <- rep(c('A','B'),20,each=2)x3 <- rep(c('A','B'),10, each=4)outcome <- rnorm(80,10,2)d3 <- data.frame(outcome,subj,x1,x2,x3)#Repeated measures ANOVAex.aov <- aov(outcome ~ x1*x2*x3 + Error(subj/(x1*x2*x3)), d3)#proj functionex.aov.proj <- proj(ex.aov)# Check for normality by using last error stratumqqnorm(ex.aov.proj[[9]][, "Residuals"])# Check for heteroscedasticity by using last error stratumplot(ex.aov.proj[[9]][, "Residuals"]) The resulting plots are below: Can anyone interpret the images above (especially the last one)? It looks like there is clustering and pattern structure. Can it be used to infer the presence of heteroskedasticity? Now provide the response and nothing else.
I'm assuming that a model which was fitted using the Error() function within aov() won't work when used in plot() because you will get more than one error stratum from which you can choose. Now according to this information here , one should use the proj() function which will give you the residuals for each error stratum, which then can be used for diagnostic plots.

Edit 1 start

More information regarding multistratum models and the proj() function is given in Venables and Ripley, page 284 (but start from page 281): Residuals in multistratum analyses: Projections. In the second sentence they write (I highlighted in bold):

Thus fitted(oats.aov[[4]]) and resid(oats.aov[[4]]) are vectors of length 54 representing fitted values and residuals from the last stratum, based on 54 orthonormal linear functions of the original data vector. It is not possible to associate them uniquely with the plots of the original experiment. The function proj takes a fitted model object and finds the projections of the original data vector onto the subspaces defined by each line in the analysis of variance tables (including, for multistratum objects, the suppressed table with the grand mean only). The result is a list of matrices, one for each stratum, where the column names for each are the component names from the analysis of variance tables.

For your example that means:

ex.aov.proj <- proj(ex.aov)

# Check number of strata
summary(ex.aov.proj)

# Check for normality by using last error stratum
qqnorm(ex.aov.proj[[9]][, "Residuals"])

# Check for heteroscedasticity by using last error stratum
plot(ex.aov.proj[[9]][, "Residuals"])

However, this will also lead into plots which I cannot fully interpret (especially the second one). In their case, the last stratum was the Within stratum. Since your model cannot estimate this (presumably due to your error term), I am not sure if simply using your last stratum is valid. Hopefully someone else can clarify.

Edit 1 end

Edit 2 start

According to this source checking residuals to assess normality and heteroscedasticity should be performed without the Error() function.

In order to check assumptions, you need to not use the error term. You can add the term without error, but the F tests are wrong. Assumption checking is OK, however.

This seems reasonable to me but I hope someone else could clarify.

Edit 2 end

My alternative suggestion:

First, I changed your dataset slightly and set a seed to make it reproducible (might be handy for some problems you have in the future):

# Set seed to make it reproducible
set.seed(12)

# I changed the names of your variables to make them easier to remember.
# I also deleted a few nested `rep()` commands. Have a look at the `each=` argument.
subj <- sort(factor(rep(1:20,8)))
x1 <- rep(c('A','B'),80)
x2 <- rep(c('A','B'),20,each=2)
x3 <- rep(c('A','B'),10, each=4)
outcome <- rnorm(80,10,2)
d3 <- data.frame(outcome,subj,x1,x2,x3)

Second, I used a linear mixed-effects model instead since you have repeated measures and hence a random term you can use:

require(lme4)

# I specified `subj` as random term to account for the repeated measurements on subject.
m.lmer <- lmer(outcome ~ x1*x2*x3 + (1|subj), data = d3)
summary(m.lmer)

# Check for heteroscedasticity
plot(m.lmer) # or
boxplot(residuals(m.lmer) ~ d3$x1 + d3$x2 + d3$x3)

# Check for normality
qqnorm(residuals(m.lmer))

Using the afex package you can also get the fixed effects in ANOVA table format (you can also use the Anova() function from the car package as another option):

require(afex)
mixed(outcome ~ x1*x2*x3 + (1|subj), data = d3, method="LRT")

Fitting 8 (g)lmer() models:
[........]
    Effect df Chisq p.value
1       x1  1  0.04     .84
2       x2  1  2.53     .11
3       x3  1  7.68 **  .006
4    x1:x2  1  8.34 **  .004
5    x1:x3  1 10.51 **  .001
6    x2:x3  1  0.31     .58
7 x1:x2:x3  1  0.12     .73

Check ?mixed for the various options you can choose. Also regarding mixed models, there is a lot of information here on Cross Validated.
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/182988', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/69879/']}
jdg_376786
stackexchange
llm_judgeable_groundtruth_similarity
224710
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to pass my OSCP exam and I am learning with the material I got. I watched the videos and read the pdf but I have a question about buffer overflows. In the example in the videos, the guy has access to the target system and can so control, debug and restart the application. So he can see what length he should send to cause the buffer overflow and where the EIP register is in the sent string and so on. But my intention of a buffer overflow is to get access to a system I have not accessed before, isn't it? So how do I get to know all the relevant data without having access like debug functions before? Now provide the response and nothing else.
As schroeder has already pointed out, to identify buffer overflow vulnerabilities, you'd need to test the application locally with a debugger. It seems to me that you haven't actually gone through the process of learning to identify and debugging possibly vulnerable applications to buffer overflow. Instead of walking you through the basics of the process here, I will attach what I believe to be excellent starting points, so that you can study and get the 'gist' of buffer overflows on your own. Understanding what you're working with (ASM) x86 Assembly Wikibook : Excellent and thorough resource for getting into assembly. If you don't already know how to detect interesting parts of the code, then you should start from this. It might seem daunting at first, but once you get into it, it will seem much clearer. Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software : Chapter 4 is a brief "Crash-Course" in disassembly, which I consider an absolute must! Modern C : This is an excellent book (and free) for the C programming language. The author also provides the code examples used in the book. Pointers in C - A Hands on Approach: This is a great book to understand pointers and memory management in a program. Getting the right tools The tools that you'll need are almost always going to be included in VMs used for memory debugging/reverse engineering, but you can always download the ones you prefer. My approach for learning something and understanding it for the first time, is to go "full manual" . A good debugger for this is the GNU debugger (GDB). There are other debuggers which are more automated/advanced, but I do not recommend those for beginners, because you learn better by doing it the hard way. GDB PEDA is an awesome extension for GDB, to make your life easier. GDB Guide/Cheat sheet Taking Action Do stack buffer overflow is really good for a beginner, because it gives you a vulnerable application and there is also a PDF that guides you through the process. Highly recommended. Security Exercises : It is another great repository to get you started with buffer overflows, shell-code injection, etc.. It also includes a walk-through. Vulnhub : This website has all sorts of vulnerable VMs (including ones designed specifically for Buffer Overflows), which you can download and experiment at your own pace. There is an abundance of tools, guides, VMs and other resources to get you started, and others that can challenge even experienced people. You have to put a lot of effort into this and if something seems a bit unclear, just look for more resources to help you. If you had access to the source code of an application, then you could just read the source code and see which parts would be vulnerable to a buffer overflow. We usually don't have that and production applications are not compiled with debug symbols (unfortunately). This is where debugging comes in (and reverse-engineering). You run your executable in a debugger and try to figure things out. Do check the resources I've posted and get ready for a lot of work! And don't give up when things get tough.
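As a purely illustrative example of the kind of bug these exercises teach you to find (this is my own minimal sketch, not taken from any of the resources above), a classic stack buffer overflow in C looks like this:

#include <stdio.h>
#include <string.h>

void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);          /* no length check: input longer than the buffer
                                   overwrites adjacent stack memory, including the
                                   saved return address */
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);
    return 0;
}

Compiling something like this without protections and stepping through it in GDB/PEDA while feeding it progressively longer inputs is essentially the whole exercise in miniature: watch the stack, see which offset clobbers the return address, then control it.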
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/224710', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/225611/']}
jdg_376787
stackexchange
llm_judgeable_groundtruth_similarity
14315519
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to get vim syntax highlighting in ConEmu? Now provide the response and nothing else.
Note: some updated information may exist on the project site. Well, builds since 130120 support 256 colors in vim. You need to:

- Check the options "Inject ConEmuHk" and "ANSI X3.64 / xterm 256 colors" on the "Features" page
- Check the option "TrueMod (24bit color) support" on the "Colors" page
- Edit your vimrc file; sample lines are here. Of course, you need some 256-color vim scheme; it is zenburn in the last line of this example.

set term=xterm
set t_Co=256
let &t_AB="\e[48;5;%dm"
let &t_AF="\e[38;5;%dm"
colorscheme zenburn

And "vim" must be vim.exe ATM.

Note: 'Original' Vim (Win32 console executable from gvim##.exe and vim##w32.zip) passed tests. MinGW's Vim fails to switch to using Ansi sequences.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14315519', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/153857/']}
jdg_376788
stackexchange
llm_judgeable_groundtruth_similarity
340099
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider ridge regression with an additional constraint requiring that $\hat{\mathbf y}$ has unit sum of squares (equivalently, unit variance); if needed, one can assume that $\mathbf y$ has unit sum of squares as well: $$\hat{\boldsymbol\beta}_\lambda^* = \arg\min\Big\{\|\mathbf y - \mathbf X \boldsymbol \beta\|^2+\lambda\|\boldsymbol\beta\|^2\Big\} \:\:\text{s.t.}\:\: \|\mathbf X \boldsymbol\beta\|^2=1.$$ What is the limit of $\hat{\boldsymbol\beta}_\lambda^*$ when $\lambda\to\infty$? Here are some statements that I believe are true: When $\lambda=0$, there is a neat explicit solution: take OLS estimator $\hat{\boldsymbol\beta}_0=(\mathbf X^\top \mathbf X)^{-1}\mathbf X^\top \mathbf y$ and normalize it to satisfy the constraint (one can see this by adding a Lagrange multiplier and differentiating):$$\hat{\boldsymbol\beta}_0^* = \hat{\boldsymbol\beta}_0 \big/ \|\mathbf X\hat{\boldsymbol\beta}_0\|.$$ In general, the solution is $$\hat{\boldsymbol\beta}_\lambda^*=\big((1+\mu)\mathbf X^\top \mathbf X + \lambda \mathbf I\big)^{-1}\mathbf X^\top \mathbf y\:\:\text{with $\mu$ needed to satisfy the constraint}.$$I don't see a closed form solution when $\lambda >0$. It seems that the solution is equivalent to the usual RR estimator with some $\lambda^*$ normalized to satisfy the constraint, but I don't see a closed formula for $\lambda^*$. When $\lambda\to \infty$, the usual RR estimator $$\hat{\boldsymbol\beta}_\lambda=(\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1}\mathbf X^\top \mathbf y$$ obviously converges to zero, but its direction $\hat{\boldsymbol\beta}_\lambda \big/ \|\hat{\boldsymbol\beta}_\lambda\|$ converges to the direction of $\mathbf X^\top \mathbf y$, a.k.a. the first partial least squares (PLS) component. Statements (2) and (3) together make me think that perhaps $\hat{\boldsymbol\beta}_\lambda^*$ also converges to the appropriately normalized $\mathbf X^\top \mathbf y$, but I am not sure if this is correct and I have not managed to convince myself either way. Now provide the response and nothing else.
#A geometrical interpretation The estimator described in the question is the Lagrange multiplier equivalent of the following optimization problem: $$\text{minimize $f(\beta)$ subject to $g(\beta) \leq t$ and $h(\beta) = 1$ } $$ $$\begin{align} f(\beta) &= \lVert y-X\beta \lVert^2 \\ g(\beta) &= \lVert \beta \lVert^2\\ h(\beta) &= \lVert X\beta \lVert^2\end{align}$$ which can be viewed, geometrically, as finding the smallest ellipsoid $f(\beta)=\text{RSS }$ that touches the intersection of the sphere $g(\beta) = t$ and the ellipsoid $h(\beta)=1$ Comparison to the standard ridge regression view In terms of a geometrical view this changes the old view (for standard ridge regression) of the point where a spheroid (errors) and sphere ( $\|\beta\|^2=t$ ) touch . Into a new view where we look for the point where the spheroid (errors) touches a curve (norm of beta constrained by $\|X\beta\|^2=1$ ) . The one sphere (blue in the left image) changes into a lower dimension figure due to the intersection with the $\|X\beta\|=1$ constraint. In the two dimensional case this is simple to view. When we tune the parameter $t$ then we change the relative length of the blue/red spheres or the relative sizes of $f(\beta)$ and $g(\beta)$ (In the theory of Lagrangian multipliers there is probably a neat way to formally and exactly describe that this means that for each $t$ as function of $\lambda$ , or reversed, is a monotonous function. But I imagine that you can see intuitively that the sum of squared residuals only increases when we decrease $||\beta||$ .) The solution $\beta_\lambda$ for $\lambda=0$ is as you argued on a line between 0 and $\beta_{LS}$ The solution $\beta_\lambda$ for $\lambda \to \infty$ is (indeed as you commented) in the loadings of the first principal component. This is the point where $\lVert \beta \rVert^2$ is the smallest for $\lVert \beta X \rVert^2 = 1$ . It is the point where the circle $\lVert \beta \rVert^2=t$ touches the ellipse $|X\beta|=1$ in a single point. In this 2-d view the edges of the intersection of the sphere $\lVert \beta \rVert^2 =t$ and spheroid $\lVert \beta X \rVert^2 = 1$ are points. In multiple dimensions these will be curves (I imagined first that these curves would be ellipses but they are more complicated. You could imagine the ellipsoid $\lVert X \beta \rVert^2 = 1$ being intersected by the ball $\lVert \beta \rVert^2 \leq t$ as some sort of ellipsoid frustum but with edges that are not a simple ellipses) ##Regarding the limit $\lambda \to \infty$ At first (previous edits) I wrote that there will be some limiting $\lambda_{lim}$ above which all the solutions are the same (and they reside in the point $\beta^*_\infty$ ). But this is not the case Consider the optimization as a LARS algorithm or gradient descent. If for any point $\beta$ there is a direction in which we can change the $\beta$ such that the penalty term $|\beta|^2$ increases less than the SSR term $|y-X\beta|^2$ decreases then you are not in a minimum. In normal ridge regression you have a zero slope (in all directions) for $|\beta|^2$ in the point $\beta=0$ . So for all finite $\lambda$ the solution can not be $\beta = 0$ (since an infinitesimal step can be made to reduce the sum of squared residuals without increasing thepenalty). For LASSO this is not the same since: the penalty is $\lvert \beta \rvert_1$ (so it is not quadratic with zero slope). 
Because of that LASSO will have some limiting value $\lambda_{lim}$ above which all the solutions are zero because the penalty term (multiplied by $\lambda$) will increase more than the residual sum of squares decreases. For the constrained ridge you get the same as the regular ridge regression. If you change the $\beta$ starting from the $\beta^*_\infty$ then this change will be perpendicular to $\beta$ (the $\beta^*_\infty$ is perpendicular to the surface of the ellipse $|X\beta|=1$) and $\beta$ can be changed by an infinitesimal step without changing the penalty term but decreasing the sum of squared residuals. Thus for any finite $\lambda$ the point $\beta^*_\infty$ can not be the solution.

##Further notes regarding the limit $\lambda \to \infty$

The usual ridge regression limit for $\lambda$ to infinity corresponds to a different point in the constrained ridge regression. This 'old' limit corresponds to the point where $\mu$ is equal to -1. Then the derivative of the Lagrange function in the normalized problem $$2 (1+\mu) X^{T}X \beta + 2 X^T y + 2 \lambda \beta$$ corresponds to a solution for the derivative of the Lagrange function in the standard problem $$2 X^{T}X \beta^\prime + 2 X^T y + 2 \frac{\lambda}{(1+\mu)} \beta^\prime \qquad \text{with $\beta^\prime = (1+\mu)\beta$}$$
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/340099', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/28666/']}
jdg_376789
stackexchange
llm_judgeable_groundtruth_similarity
24441505
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some C++11 code using the auto inferred type that I have to convert to C++98. How would I go about converting the code, substituting in the actual type for all instances of auto ? Now provide the response and nothing else.
It is going to be a PITA, but you can declare an incomplete struct template accepting a single type parameter. Given the variable x you want to know the type of, you can use the struct with decltype(x) and that will lead to a compiler error that will show you the inferred type. For example:

template<class Type> struct S;

int main() {
    auto x = ...;
    S<decltype(x)>();
}

Live demo which will produce an error message in the form:

error: implicit instantiation of undefined template 'S<X>' (clang++)
error: invalid use of incomplete type 'struct S<X>' (g++)

with X being the inferred type. In this particular case the type is int. Trivia: This has been recommended by Scott Meyers in one of the recent NDC 2014 videos (I don't remember which one).
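A small follow-up sketch (my own illustration, not from the original answer): the same trick also reveals the reference and const qualifiers that auto keeps or drops, which is often the interesting part when converting back to C++98 spellings.

template<class Type> struct S;

int main() {
    const int value = 42;
    auto a = value;        // deduced as int (top-level const is dropped)
    const auto& b = value; // deduced as const int&

    S<decltype(a)>();      // error message mentions S<int>
    S<decltype(b)>();      // error message mentions S<const int &>
}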
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24441505', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1825464/']}
jdg_376790
stackexchange
llm_judgeable_groundtruth_similarity
49537960
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Hi I am trying to test a code that has an exception but when I try to test it it says the attribute expected is undefined for the annotation type test package Lab1;import static org.junit.jupiter.api.Assertions.*;import org.junit.jupiter.api.Test;import junit.framework.Assert;class MyMathTest { MyMath m = new MyMath(); @Test void testDiv() { int actual = m.div(6, 2); int expected = 3; assertEquals(expected, actual); } /* the error is in the upcoming line*/ @Test (expected = IllegalArgumentException.class) public void testDivException(){ m.div(5, 0); }} And here is the error message the attribute expected is undefined for the annotation type test Now provide the response and nothing else.
You are using JUnit 5 but trying to use JUnit 4's features. Don't mix them.

import org.junit.jupiter.api.Test;

The @Test annotation from JUnit 5 doesn't support what you are trying to use. To assert exceptions, you need to do

Assertions.assertThrows(IllegalArgumentException.class, () -> m.div(5, 0));

Do not forget to import the package org.junit.jupiter.api.Assertions. More on JUnit 5
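For illustration, a complete JUnit 5 version of the failing test from the question could look like the sketch below (MyMath and div come from the question; the imports are the standard JUnit 5 ones):

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class MyMathTest {

    MyMath m = new MyMath();

    @Test
    void testDivException() {
        // Expect division by zero to be rejected with an IllegalArgumentException.
        assertThrows(IllegalArgumentException.class, () -> m.div(5, 0));
    }
}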
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49537960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5277934/']}
jdg_376791
stackexchange
llm_judgeable_groundtruth_similarity
112734
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In their seminal 1979 paper Representations of Coxeter groups and Hecke algebras (Invent. Math. 53 , doi: 10.1007/BF01390031 ),Kazhdan and Lusztig studied an arbitrary Coxeter group $(W,S)$ and the corresponding Iwahori-Hecke algebra. In particular they showed how to pass from a standard basis of this algebra to a more canonical basis, with the change of basis coefficients involving polynomials indexed by pairs of elements of $W$ (in the Bruhat ordering) over $\mathbb{Z}$ . Even though the evidence at the time was quite limited, they conjectured following the statement of their Theorem 1.1 that the coefficients of these polynomials should always be non-negative. (In very special cases this is true because the coefficients give dimensions of certain cohomology groups.) Several decades later, Wolfgang Soergel worked out a coherent strategy for proving the non-negativity conjecture, in his paper Kazhdan–Lusztig-Polynome und unzerlegbare Bimoduln über Polynomringen. J. Inst. Math. Jussieu 6 (2007), no. 3, 501–525, doi: 10.1017/S1474748007000023 , arXiv: math/0403496 Now that his program seems to have been completed, it is natural to renew the question in the header: What if any implications would the non-negativity of coefficients of arbitrary Kazhdan-Lusztig polynomials have? It has to be emphasized that in Soergel's formulation and the following work, the non-negativity is not itself the main objective. Instead the combinatorial framework proposed was meant to provide a more self-contained conceptual setting for proof of the original Kazhdan-Lusztig conjecture on Verma module multipliities for a semisimple Lie algebra (soon a theorem) and further theorems in representation theory of a similar flavor. But Coxeter groups form a vast general class of groups given by generators and relations, so it is surprising to encounter such strong constraints on the polynomials occurring in this generality. ADDED: There is some overlap with older questions related to Soergel's approach, posted here and here . UPDATE: It's been pointed out to me that older work by Jim Carrell and Dale Peterson involves the non-negativity condition, though their main goal is the study of singularities of Schubert varieties in classical cases. See the short account (with a long title) J.B. Carrell, The Bruhat graph of a Coxeter group, a conjecture of Deodhar, and rational smoothness of Schubert varieties. Algebraic groups and their generalizations: classical methods (University Park, PA, 1991), 53–61, Proc. Sympos. Pure Math., 56, Part 1, Amer. Math. Soc., Providence, RI, 1994. https://doi.org/10.1090/pspum/056.1 The first section develops for an arbitrary Coxeter group some consequences of non-negativity of Kazhdan-Lusztig coefficients for the combinatorial study of Bruhat intervals. For further details about the geometry, see Carrell, J., Kuttler, J. Smooth points of T-stable varieties in G/B and the Peterson map . Invent. math. 151, 353–379 (2003). https://doi.org/10.1007/s00222-002-0256-5 , arXiv: math/0005025 I'm still not sure whether such consequences of the 1979 K-L conjecture are enough to make the conjecture in itself "important". But it's definitely been challenging to approach. Now provide the response and nothing else.
The answers to the second and third questions are no and yes, respectively. I don't know the answer to the first question. For the second question, let $\lambda$ be regular and let $\kappa > \lambda$ be weakly compact. Then forcing with $\mathrm{Coll}(\lambda, <\kappa)$ yields a model in which $\kappa = \lambda^+$, $\square(\kappa)$ fails, and, since $\lambda^{<\lambda}=\lambda$, there is a special $\kappa$-Aronszajn tree, so the tree property fails. For the third question, the usual construction of a special Aronszajn tree from a weak square sequence using minimal walks (see, for example, section 5.1 of Cummings' "Notes on Singular Cardinal Combinatorics") still yields a $\kappa$-Aronszajn tree when applied to a $\square(\kappa)$-sequence when $\kappa$ is regular, so $\square(\kappa)$ implies the failure of the tree property.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/112734', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4231/']}
jdg_376792
stackexchange
llm_judgeable_groundtruth_similarity
73674
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I had a hard time trying to solve exercise 7.24 in Jech's book (3rd edition, 2003) and finally came to the conclusion that the result there, which should be proved might be wrong. The claim goes like this: Let $A$ be a subalgebra of a Boolean algebra $B$ and suppose that $u \in B-A$. Then there exist ultrafilters $F,G$ on $B$ such that $u \in F$, $-u \in G$ and $F \cap A= G \cap A$. A (perhaps flawed, as I believe) proof of this can be found here. http://onlinelibrary.wiley.com/doi/10.1002/malq.19690150705/abstract A counterexample to the claim above is the following: Let $A$ be the algebra of finite unions of (open, closed, half-open) intervals on $[0,1]$ with rational endpoints, and let $B$ be defined as $A$ but with real endpoints. Each ultrafilter $U$ on $A$ converges to a rational or irrational number $r$ and the elements of $U$ are exactly those sets in $A$ that include $r$. Now if $F$ and $G$ are two ultrafilters on the bigger algebra $B$, both extending $U$ then they converge again towards $r$ and for any $u\in B$ we have that $u\in F$ iff $r \in u$ iff $u\in G$, which makes it impossible to have $u \in B$, yet $-u \in G$. My questions are now: Is my counterexample correct? The claim is used to show that each Boolean algebra of size $\kappa$ has at least $\kappa$ ultrafilters (this is theorem of the paper mentioned above). Does this remain valid ( I suppose not, see the comments) Now provide the response and nothing else.
Your counterexample is not correct. Let $r$ be an irrational real number, and let $F$ be the principal ultrafilter in $B$ on the closed interval $[r,r]=\{r\}$, which is an atom in $B$. Note that $F\cap A$ is the ultrafilter of all elements of $A$ in which $r$ is a member. Now consider the complement $-[r,r]=(-\infty,r)\cup (r,\infty)$, which has nonempty intersection with any interval in $A$ containing $r$, since $r$ was irrational. Thus, we add $-[r,r]$ to $F\cap A$ and extend to an ultrafilter $G$. So $F$ and $G$ agree on $A$, but one has the atom $[r,r]$ and the other has the complement. Meanwhile, the Balcar-Franek theorem establishes that every [Edit: complete ] Boolean algebra of size $\kappa$ has $2^\kappa$ many ultrafilters. This topic also arose in this MO question on density of Boolean algebras.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/73674', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4753/']}
jdg_376793
stackexchange
llm_judgeable_groundtruth_similarity
18971734
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a x86 system with Linux 3.6.0. For some experiments, I need to know how the IRQ is mapped to the vector. I learn from many book saying that for vector 0x0 to 0x20 is for traps and exceptions, and from vector 0x20 afterward is for the external device interrupts. And this also defined in the source code Linux/arch/x86/include/asm/irq_vectors.h However, what I'm puzzled is that when I check the do_IRQ function, http://lxr.linux.no/linux+v3.6/arch/x86/kernel/irq.c#L181 I found the IRQ is fetched by looking up the "vector_irq" array: unsigned int __irq_entry do_IRQ(struct pt_regs *regs){ struct pt_regs *old_regs = set_irq_regs(regs); /* high bit used in ret_from_ code */ unsigned vector = ~regs->orig_ax; unsigned irq; ... irq = __this_cpu_read(vector_irq[vector]); // get the IRQ from the vector_irq // print out the vector_irq prink("CPU-ID:%d, vector: 0x%x - irq: %d", smp_processor_id(), vector, irq); } By instrumenting the code with printk, the vector-irq mapping I got is like below and I don't have any clue why this is the mapping. I though the mapping should be (irq + 0x20 = vector), but it seems not the case. from: Linux/arch/x86/include/asm/irq_vector.h* Vectors 0 ... 31 : system traps and exceptions - hardcoded events* Vectors 32 ... 127 : device interrupts = 0x20 – 0x7F But my output is: CPU-ID=0.Vector=0x56 (irq=58)CPU-ID=0.Vector=0x66 (irq=59)CPU-ID=0.Vector=0x76 (irq=60)CPU-ID=0.Vector=0x86 (irq=61)CPU-ID=0.Vector=0x96 (irq=62)CPU-ID=0.Vector=0xa6 (irq=63)CPU-ID=0.Vector=0xb6 (irq=64) BTW, these irqs are my 10GB ethernet cards with MSIX enabled. Could anyone give me some ideas about why this is the mapping? and what's the rules for making this mapping? Thanks.William Now provide the response and nothing else.
long and long int are identical. So are long long and long long int. In both cases, the int is optional.

As to the difference between the two sets, the C++ standard mandates minimum ranges for each, and that long long is at least as wide as long.

The controlling parts of the standard (C++11, but this has been around for a long time) are, for one, 3.9.1 Fundamental types, section 2 (a later section gives similar rules for the unsigned integral types):

There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.

There's also a table 9 in 7.1.6.2 Simple type specifiers, which shows the "mappings" of the specifiers to actual types (showing that the int is optional), a section of which is shown below:

Specifier(s)      Type
-------------     -------------
long long int     long long int
long long         long long int
long int          long int
long              long int

Note the distinction there between the specifier and the type. The specifier is how you tell the compiler what the type is but you can use different specifiers to end up at the same type. Hence long on its own is neither a type nor a modifier as your question posits, it's simply a specifier for the long int type. Ditto for long long being a specifier for the long long int type.

Although the C++ standard itself doesn't specify the minimum ranges of integral types, it does cite C99, in 1.2 Normative references, as applying. Hence the minimal ranges as set out in C99 5.2.4.2.1 Sizes of integer types <limits.h> are applicable.

In terms of long double, that's actually a floating point value rather than an integer. Similarly to the integral types, it's required to have at least as much precision as a double and to provide a superset of values over that type (meaning at least those values, not necessarily more values).
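A quick sketch to see the concrete widths on your own platform (the printed values are implementation-defined, so they will vary between compilers and targets):

#include <climits>
#include <iostream>

int main() {
    // Same type, two spellings: these pairs are guaranteed to print identical sizes.
    std::cout << sizeof(long) << ' ' << sizeof(long int) << '\n';
    std::cout << sizeof(long long) << ' ' << sizeof(long long int) << '\n';

    // The guaranteed minimum ranges come from <climits> / C99 5.2.4.2.1.
    std::cout << LONG_MAX << ' ' << LLONG_MAX << '\n';
}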
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/18971734', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1975251/']}
jdg_376794
stackexchange
llm_judgeable_groundtruth_similarity
37186535
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to Android Development I have an app where people sell stuff and others buy it.. I want to notify buyers when someone put a new item for sale. for that I used PushBots notifications, but the only way I can send notifications to users are by writing them on PushBots.com and send it there Is there any way I can send notifications automatically whenever items are available for sale? Thanks Now provide the response and nothing else.
The first case

To get the whole connected graph, you need to use a wildcard property path to follow most of the path, and then grab the last link with an actual variable. I usually use the empty relative path in constructing wildcards, so as to use <>|!<> as the wildcard, but since you mentioned that your endpoint doesn't like it, you can use any absolute IRI that you like, too. E.g.,

prefix x: <urn:ex:>
construct { ?s ?p ?o }
where { :A (x:|!x:)* ?s .
        ?s ?p ?o . }

This works because every property is either x: or not, so x:|!x: matches every property, and then (x:|!x:)* is an arbitrary length path, including paths of length zero, which means that ?s will be bound to everything reachable from :a, including :a itself. Then you're grabbing the triples where ?s is the subject. When you construct the graph of all those triples, you get the subgraph that you're looking for. Here's an example based on the graph you showed. I used different properties for different edges to show that it works, but this will work if they're all the same, too.

@prefix : <urn:ex:> .
:A :p :B, :C .
:B :q :D .
:C :r :E .
:F :s :G .
:G :t :H .

prefix x: <urn:ex:>
prefix : <urn:ex:>
construct { ?s ?p ?o }
where { :A (x:|!x:)* ?s .
        ?s ?p ?o . }

Since this is a construct query, the result is a graph, not a "table". It contains the triples we'd expect:

@prefix : <urn:ex:> .
:C :r :E .
:B :q :D .
:A :p :B , :C .

The second case

If you want to ensure that the paths end in a particular kind of edge, you can do that too. If you only want the paths from A1 to those ending with edges on d, you can do:

prefix x: <urn:ex:>  #-- arbitrary, used for the property path.
prefix : <...>       #-- whatever you need for your data.

construct {
  ?s1 ?p ?o1 .         #-- internal edge in the path
  ?s2 :d ?o2 .         #-- final edge in the path
}
where {
  :A (x:|!x:)* ?s1 .   #-- start at :A and go any length into the path
  ?s1 ?p ?o1 .         #-- get the triple from within the path, but make
  ?o1 (x:|!x:)* ?s2 .  #-- sure that from ?o1 you can get to some other
  ?s2 :d ?o2 .         #-- ?s2 that's related to an ?o2 by property :d .
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37186535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4712010/']}
jdg_376795
stackexchange
llm_judgeable_groundtruth_similarity
36449026
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want my Inno Setup script to be build using the command line, and I want to pass in the product version number as a parameter. I am trying to implement it like so: [setup]VersionInfoVersion={param:version|0.0.0.0} However the compiler informs me this is invalid for that directive. I have read this post on how to pass in custom parameters from the command line and assume I should just be able to pass in something like: compil32 /cc "c:\isetup\samples\my script.iss" /version=1.0.0.0 I have also tried the suggestion from this post and tried doing the following: #define PathToMyBinary "C:\bin\x64\Release"#define ApplicationVersion GetFileVersion('#PathToMyBinary\MyBinary.dll')VersionInfoVersion={#ApplicationVersion} But it doesn't seem to return anything. Both approaches seem valid to me so I'm hoping someone can explain where I am going wrong. Now provide the response and nothing else.
Assuming you define the version via a pre-processor variable like:

[Setup]
VersionInfoVersion={#ApplicationVersion}

To set the version on a command-line, you have to use the ISCC.exe command-line compiler and its /D switch:

ISCC.exe Example1.iss /DApplicationVersion=1.2.3.4

If you want to provide a default value for the version, so the script can compile even without defining the variable on the command line, use #ifndef at the top of the script:

#ifndef ApplicationVersion
#define ApplicationVersion "1.2.3.4"
#endif

To read the version from a binary, you are correctly using the GetFileVersion pre-processor function. But your syntax to make up the path is wrong. A correct syntax is PathToMyBinary + '\MyBinary.dll', like:

#define PathToMyBinary "C:\bin\x64\Release"
#define ApplicationVersion GetFileVersion(PathToMyBinary + '\MyBinary.dll')

See Inno Setup Preprocessor: Expression Syntax.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36449026', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5546989/']}
jdg_376796
stackexchange
llm_judgeable_groundtruth_similarity
585429
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i wonder if there is a way to use the basedOn property of wpf styles with dynamicresources. e.g. <Style BasedOn="{DynamicResource somestyle}"> <Setter Property="SomeProp" Value="SomeValue"/></Style> this e.g. throws an error indicating that the usage of dynamicresources in combination with BasedOn styles is not possible.i wonder how someone could do that?thanks Now provide the response and nothing else.
I think the main reason is sealed objects. If you have a Style hierarchy:

        Style A
       /       \
  Style A1   Style A2

this might not be a difficult scenario. You refer to StyleA using a dynamic resource, so whenever that resource changes, Style A1 and Style A2 should change their BasedOn property. However, once a Style is being used in your application, it becomes a sealed object. Style A becomes immutable.

One workaround you can use is:

1. Style A needs to change.
2. Create a new Style object that is going to be the new Style A resource.
3. Create a new version of Style A1 and Style A2. You'd need to write a copy procedure that makes copies of all the Setters, Resources, etc. Set the BasedOn to the new version of Style A.
4. Update the resources collection so that the three new styles are in there.
5. {DynamicResource StyleA1} and {DynamicResource StyleA2} should now pick up the fact that those resources changed (from step 4) and update any references automatically.

Note that this is a very simple scenario. Real world style hierarchies can be more complex, especially if they are spread across multiple files and come from merged dictionaries. Hope I understood your problem and helped out.
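Purely as an illustration of the copy procedure described above (the helper BuildNewStyleA, the resource keys and the app variable are invented for the example, not part of the original answer), a rough C# sketch might be:

// Hypothetical sketch of steps 2-4: rebuild the styles and swap them into the
// application resources so DynamicResource references pick up the change.
Style CopyBasedOn(Style source, Style newBase)
{
    var copy = new Style(source.TargetType) { BasedOn = newBase };
    foreach (SetterBase setter in source.Setters)
        copy.Setters.Add(setter);
    foreach (var key in source.Resources.Keys)
        copy.Resources.Add(key, source.Resources[key]);
    return copy;
}

var app = Application.Current;
var newStyleA  = BuildNewStyleA();  // assumed helper that builds the replacement Style A
var newStyleA1 = CopyBasedOn((Style)app.Resources["StyleA1"], newStyleA);
var newStyleA2 = CopyBasedOn((Style)app.Resources["StyleA2"], newStyleA);

app.Resources["StyleA"]  = newStyleA;
app.Resources["StyleA1"] = newStyleA1;
app.Resources["StyleA2"] = newStyleA2;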
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/585429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20227/']}
jdg_376797
stackexchange
llm_judgeable_groundtruth_similarity
16289423
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am a little confused on the meanings of spatial and temporal locality. I'm hoping by looking at it with an array example it will help me understand it better. In an example like this:A[0][1], A[0][2], A[0][3].... etc Does this demonstrate temporal locality? I see the same row is accessed many times but at different offsets... does this mean a different address is accessed? Also, am I correct in saying that an example like this:A[1], A[2], A[3]... etc Demonstrates spatial locality? Hopefully some clarification on how temporal and spatial locality work in real code will help me better understand them. Now provide the response and nothing else.
Spatial and temporal locality describe two different characteristics of how programs access data (or instructions). Wikipedia has a good article on locality of reference . A sequence of references is said to have spatial locality if things that are referenced close in time are also close in space (nearby memory addresses, nearby sectors on a disk, etc.). A sequence is said to have temporal locality if accesses to the same thing are clustered in time. If a program accesses every element in a large array and reads it once and then moves on to the next element and does not repeat an access to any given location until it has touched every other location then it is a clear case of spatial locality but not temporal locality. On the other hand, if a program spends time repeatedly accessing a random subset of the locations on the array before moving on to another random subset it is said to have temporal locality but not spatial locality. A well written program will have data structures that group together things that are accessed together, thus ensuring spatial locality. If you program is likely to access B soon after it accesses A then both A and B should be allocated near each other. Your first example A[0][1], A[0][2], A[0][3] shows spatial locality, things that are accessed close in time are close in space. It does not show temporal locality because you have not accessed the same thing more than once. Your second example A[1], A[2], A[3] also shows spatial locality, but not temporal locality. Here's an example that shows temporal locality A[1], A[2000], A[1], A[1], A[2000], A[30], A[30], A[2000], A[30], A[2000], A[30], A[4], A[4]
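To make the two patterns concrete, here is a small C sketch (my own illustration, not from the original question): the first loop touches neighbouring addresses once each (spatial locality), while the second re-reads a handful of widely separated elements over and over (temporal locality).

#include <stddef.h>

#define N 1000000

static int a[N];

long spatial_sum(void) {
    long sum = 0;
    for (size_t i = 0; i < N; i++)   /* consecutive addresses, each read once */
        sum += a[i];
    return sum;
}

long temporal_sum(void) {
    long sum = 0;
    for (int pass = 0; pass < 1000; pass++)   /* same few elements, re-read many times */
        for (size_t i = 0; i < 8; i++)
            sum += a[i * 1024];               /* elements far apart in memory */
    return sum;
}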
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16289423', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1024786/']}
jdg_376798
stackexchange
llm_judgeable_groundtruth_similarity
6118905
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can someone help me understand the differences between a Rails Engine and a Mountable app? In Rails 3.1, you can create either one with the "rails new plugin _ __ " command. rails plugin new forum --full # Enginerails plugin new forum --mountable # Mountable App When would you want to use one versus the other? I know you can package an Engine as a gem, for one. Is that not the case for Mountable Apps? What other differences are there? Now provide the response and nothing else.
I have noticed the following:

Full Engine

With a full engine, the parent application inherits the routes from the engine. It is not necessary to specify anything in parent_app/config/routes.rb. Specifying the gem in Gemfile is enough for the parent app to inherit the models, routes etc. The engine routes are specified as:

# my_engine/config/routes.rb
Rails.application.routes.draw do
  # whatever
end

No namespacing of models, controllers, etc. These are immediately accessible to the parent application.

Mountable Engine

The engine's namespace is isolated by default:

# my_engine/lib/my_engine/engine.rb
module MyEngine
  class Engine < Rails::Engine
    isolate_namespace MyEngine
  end
end

With a mountable engine, the routes are namespaced and the parent app can bundle this functionality under a single route:

# my_engine/config/routes.rb
MyEngine::Engine.routes.draw do
  # whatever
end

# parent_app/config/routes.rb
ParentApp::Application.routes.draw do
  mount MyEngine::Engine => "/engine", :as => "namespaced"
end

Models, controllers, etc. are isolated from the parent application - although helpers can be shared easily. These are the main differences I have spotted. Perhaps there are others?

I have asked over here, but have yet to receive a response. My impression is that since a full engine does not isolate itself from the parent application, it is best used as a standalone application adjacent to the parent app. I believe name clashes could occur. A mountable engine could be used in situations where you want to avoid name conflicts and bundle the engine under one specific route in the parent application. For example, I am working on building my first engine designed for customer service. The parent application could bundle its functionality under a single route such as:

mount Cornerstone::Engine => "/cornerstone", :as => "help"

If I'm way off in my assumptions, someone please let me know and I'll fix this response. I have made a small article about the subject here. Cheers!
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6118905', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184733/']}
jdg_376799
stackexchange
llm_judgeable_groundtruth_similarity
266110
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How do I integrate $$\int_{0}^1 x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx$$ Where $\lceil x \rceil $ is the ceiling function, and $\left\{x\right\}$ is the fractional part function Now provide the response and nothing else.
A related problem . Hint: try to use the definition of the fractional part function which is defined by $$ \left\{ x\right\} = x - \lfloor x\rfloor , $$ and the following relation between the floor and ceiling functions $$ \lceil x \rceil - \lfloor x \rfloor = \begin{cases} 0&\mbox{ if } x\in \mathbb{Z}\\ 1&\mbox{ if } x\not\in \mathbb{Z} \end{cases}. $$ Added: $$ \int_{0}^1 x \bigg\lceil \frac{1}{x} \bigg\rceil \left\{ \frac{1}{x} \right\}\, dx=\int_{0}^1 x (1+\lfloor 1/x \rfloor)(1/x-\lfloor1/x\rfloor)\, dx. $$ Now, make the change of variables $y=1/x$ to the last integral $$\int_{0}^1 x (1+\lfloor 1/x \rfloor)(1/x-\lfloor1/x\rfloor)\, dx=\int_{1}^{\infty} \frac{1}{y} (1+\lfloor y \rfloor)(y-\lfloor y\rfloor)\, \frac{dy}{y^2}$$ $$\implies I = \sum_{n=1}^{\infty}\int_{n}^{n+1} \frac{1}{y^3} (1+n)(y-n)\, dy= \frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{n(n+1)}=\frac{1}{2} $$ Note: To evaluate the sum, use the telescoping technique. First write the summand as $$ \frac{1}{n(n+1)}= \frac{1}{n}-\frac{1}{n+1}. $$ Now, find the partial sum of the series $$ s_n = \sum_{k=1}^{n} \left( \frac{1}{k}-\frac{1}{k+1} \right)=1-\frac{1}{n+1}. $$ Then the series sums to $$ s = \lim_{n \to \infty} s_n = 1. $$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/266110', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/50290/']}
jdg_376800
stackexchange
llm_judgeable_groundtruth_similarity
46135499
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Keras' fit_generator() model method expects a generator which produces tuples of the shape (input, targets), where both elements are NumPy arrays. The documentation seems to imply that if I simply wrap a Dataset iterator in a generator, and make sure to convert the Tensors to NumPy arrays, I should be good to go. This code, however, gives me an error: import numpy as npimport osimport keras.backend as Kfrom keras.layers import Dense, Inputfrom keras.models import Modelimport tensorflow as tffrom tensorflow.contrib.data import Datasetos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'with tf.Session() as sess: def create_data_generator(): dat1 = np.arange(4).reshape(-1, 1) ds1 = Dataset.from_tensor_slices(dat1).repeat() dat2 = np.arange(5, 9).reshape(-1, 1) ds2 = Dataset.from_tensor_slices(dat2).repeat() ds = Dataset.zip((ds1, ds2)).batch(4) iterator = ds.make_one_shot_iterator() while True: next_val = iterator.get_next() yield sess.run(next_val)datagen = create_data_generator()input_vals = Input(shape=(1,))output = Dense(1, activation='relu')(input_vals)model = Model(inputs=input_vals, outputs=output)model.compile('rmsprop', 'mean_squared_error')model.fit_generator(datagen, steps_per_epoch=1, epochs=5, verbose=2, max_queue_size=2) Here's the error I get: Using TensorFlow backend.Epoch 1/5Exception in thread Thread-1:Traceback (most recent call last): File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 270, in __init__ fetch, allow_tensor=True, allow_operation=True)) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2708, in as_graph_element return self._as_graph_element_locked(obj, allow_tensor, allow_operation) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2787, in _as_graph_element_locked raise ValueError("Tensor %s is not an element of this graph." 
% obj)ValueError: Tensor Tensor("IteratorGetNext:0", shape=(?, 1), dtype=int64) is not an element of this graph.During handling of the above exception, another exception occurred:Traceback (most recent call last): File "/home/jsaporta/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/home/jsaporta/anaconda3/lib/python3.6/threading.py", line 864, in run self._target(*self._args, **self._kwargs) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/keras/utils/data_utils.py", line 568, in data_generator_task generator_output = next(self._generator) File "./datagen_test.py", line 25, in create_data_generator yield sess.run(next_val) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run run_metadata_ptr) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1109, in _run self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 413, in __init__ self._fetch_mapper = _FetchMapper.for_fetch(fetches) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 233, in for_fetch return _ListFetchMapper(fetch) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 340, in __init__ self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 340, in <listcomp> self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 241, in for_fetch return _ElementFetchMapper(fetches, contraction_fn) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 277, in __init__ 'Tensor. (%s)' % (fetch, str(e)))ValueError: Fetch argument <tf.Tensor 'IteratorGetNext:0' shape=(?, 1) dtype=int64> cannot be interpreted as a Tensor. (Tensor Tensor("IteratorGetNext:0", shape=(?, 1), dtype=int64) is not an element of this graph.)Traceback (most recent call last): File "./datagen_test.py", line 34, in <module> verbose=2, max_queue_size=2) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper return func(*args, **kwargs) File "/home/jsaporta/anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 2011, in fit_generator generator_output = next(output_generator)StopIteration Strangely enough, adding a line containing next(datagen) directly after where I initialize datagen causes the code to run just fine, with no errors. Why does my original code not work? Why does it begin to work when I add that line to my code? Is there a more efficient way to use TensorFlow's Dataset API with Keras that doesn't involve converting Tensors to NumPy arrays and back again? Now provide the response and nothing else.
There is indeed a more efficient way to use Dataset without having to convert the tensors into numpy arrays. However, it is not (yet?) on the official documentation. From the release note, it's a feature introduced in Keras 2.0.7. You may have to install keras>=2.0.7 in order to use it.

x = np.arange(4).reshape(-1, 1).astype('float32')
ds_x = Dataset.from_tensor_slices(x).repeat().batch(4)
it_x = ds_x.make_one_shot_iterator()

y = np.arange(5, 9).reshape(-1, 1).astype('float32')
ds_y = Dataset.from_tensor_slices(y).repeat().batch(4)
it_y = ds_y.make_one_shot_iterator()

input_vals = Input(tensor=it_x.get_next())
output = Dense(1, activation='relu')(input_vals)
model = Model(inputs=input_vals, outputs=output)
model.compile('rmsprop', 'mse', target_tensors=[it_y.get_next()])
model.fit(steps_per_epoch=1, epochs=5, verbose=2)

Several differences:

- Supply the tensor argument to the Input layer. Keras will read values from this tensor, and use it as the input to fit the model.
- Supply the target_tensors argument to Model.compile().
- Remember to convert both x and y into float32. Under normal usage, Keras will do this conversion for you. But now you'll have to do it yourself.
- Batch size is specified during the construction of Dataset.
- Use steps_per_epoch and epochs to control when to stop model fitting.

In short, use Input(tensor=...), model.compile(target_tensors=...) and model.fit(x=None, y=None, ...) if your data are to be read from tensors.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46135499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4444582/']}
jdg_376801
stackexchange
llm_judgeable_groundtruth_similarity
2144912
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can someone help me to solve this question: Using the Mean Value Theorem, show that for all positive integers n: $$ n\ln{\big(1+\frac{1}{n}}\big)\le 1.$$ I've tried basically every function out there, and I can't get it. I know how to prove it using another technique, but how do you do it using MVT? Thank you very much in advance, C.G Now provide the response and nothing else.
Let $f(x)=\ln(1+x)$, then $f^{\prime}(x)=\frac{1}{1+x}$, hence by the mean value theorem for any $x>0$ there is some $0<t<x$ such that$$ \frac{f(x)-f(0)}{x-0}=f^{\prime}(t)=\frac{1}{1+t} $$Since $f(0)=0$ and $\frac{1}{1+t}<1$, this implies that$$\frac{f(x)}{x}<1$$for all $x>0$, hence$$ \ln(1+x)=f(x)<x$$for all $x>0$. Now taking $x=\frac{1}{n}$ we get$$ \ln\Big(1+\frac{1}{n}\Big)<\frac{1}{n} $$for all $n\geq 1$, which is the desired result.
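This is not part of the proof, but a quick numerical check of the final inequality is easy to run:

# Sanity check: n*ln(1 + 1/n) stays below 1 and approaches it from below.
import math

for n in [1, 2, 5, 10, 100, 10**6]:
    value = n * math.log(1 + 1/n)
    print(n, value, value <= 1)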
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2144912', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/347172/']}
jdg_376802
stackexchange
llm_judgeable_groundtruth_similarity
52754655
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are currently migrating to Androidx namespace with our Android app project. However I noticed that not only the namespace seems to have changed. For DialogPreference also some interfaces which were using before are now missing new interfaces: https://developer.android.com/reference/androidx/preference/DialogPreference old interfaces: https://developer.android.com/reference/kotlin/android/preference/DialogPreference For example the following methods seem to be missing: onBindDialogView, showDialog, onDialogClosed. Since we use some of these methods to influence the default behavior of the dialog, it is unclear to me how I should realize this functionality now. For example we are validating the input before closing the dialog, we are saving the value in a database instead of the sharedpreferences and adding some dynamic elements to the dialog. Has anyone else already encountered this problem and found a solution? Did I miss anything in the documentation? Is there another concept that we can / should use? It would be possible to use Fragments instead of DialogPreference but for small amounts of content (e.g. a list of tree items, where the user can choose from) this seems to be a lot of overhead for me... Now provide the response and nothing else.
Starting from androidx source files, I've migrated custom classes based on old DialogPreference to new androidx.preference.DialogPreference with the following procedure: Step 1 The old custom dialog class (e.g. CustomDialogPreference) based on legacy DialogPreference should be split into two separate classes: One class (e.g. CustomPreference) should extend androidx.preference.DialogPreference and will contain only the code related to preference handling (data management). Another class (e.g. CustomDialog) should extend androidx.preference.PreferenceDialogFragmentCompat and will contain only the code related to dialog handling (user interface), including onDialogClosed . This class should expose a static method newInstance to return an instance of this class. Step 2 In the main fragment handling preferences based on PreferenceFragmentCompat the onDisplayPreferenceDialog method should be overridden to show the custom dialog, e.g.: private static final String DIALOG_FRAGMENT_TAG = "CustomPreference"; @Override public void onDisplayPreferenceDialog(Preference preference) { if (getParentFragmentManager().findFragmentByTag(DIALOG_FRAGMENT_TAG) != null) { return; } if (preference instanceof CustomPreference) { final DialogFragment f = CustomDialog.newInstance(preference.getKey()); f.setTargetFragment(this, 0); f.show(getParentFragmentManager(), DIALOG_FRAGMENT_TAG); } else { super.onDisplayPreferenceDialog(preference); } }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52754655', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9102444/']}
jdg_376803
stackexchange
llm_judgeable_groundtruth_similarity
62393
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I've heard assertions of the sort: Let there be a Riemann metric (not very smooth, say of class $C^1$ or $C^2$ or maybe $C$?) in a neighbourhood of a point on a manifold. Then it is possible to choose coordinates so that the metric is $C^\infty$ or even analytic in them. In case of 3-dimensional manifolds it is possible to choose such coordinates globally, so the manifold becomes a smooth one. In the case of higher dimensions $n\ge4$ it is not true. Are those assertions true? I've heard them some time ago and not sure I remember all the details. Is it a well-known thing? Are there some detailed references? Now provide the response and nothing else.
NO. Given a Riemannian manifold, it might be possible to improve smoothness by changing the atlas. The atlas with harmonic functions as coordinates is the best [proved by Samuil Shefel (1979) and rediscovered by Dennis DeTurck and Jerry Kazdan (1981)]. But even in this atlas the obtained metric might be worse than $C^\infty$. There is no local-global issue here: the harmonic atlas is defined locally, and it is the best one globally. So you get problems starting with dimension 2.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/62393', 'https://mathoverflow.net', 'https://mathoverflow.net/users/14551/']}
jdg_376804
stackexchange
llm_judgeable_groundtruth_similarity
15439
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In modern portfolio theory, one famous problem is the Markowitz mean variance optimal portfolio , defined by solving $$\underset{\mathbf{w}}{\mbox{min}\,\,}\mathbf{w}^{T}\boldsymbol{\Sigma}\mathbf{w}$$ subject to $\mathbf{w}^{T}\mathbf{1}=1$ and $\mathbf{w}^{T}\boldsymbol{\mu}=\eta$. Another example that I've seen in lectures is the Minimum Variance Portfolio which is the same as above except the condition $\mathbf{w}^{T}\boldsymbol{\mu}=\eta$ is dropped. I was wondering, there are surely lots of other similar sorts of optimisation problems similar to these. For example, imposing each entry of $\mathbf{w}$ is >0 -- to avoid short shelling imposing each entry of $\mathbf{w}$ is < $\alpha$ to avoid putting too much weight into one stock My question is as follows: is there a convenient list of these sorts of optimisation problems, and their solutions? Now provide the response and nothing else.
As a practitioner, I have worked on the following: Maximize Yield/OAS for a fixed income portfolio while keeping the rates duration (key rate durations) and spread duration in a constrained range. There are other constraints, such as: No short selling. The maximum amount you can buy is X% of the maximum outstanding amount in the market. Maximum exposure to a particular country, issuer, sector, currency etc. is constrained. Maximum portfolio turnover is within a certain limit. Transaction cost (defined as a function of DV01 bid-offer spread) is within a range. Instead of the objective function being Yield/OAS or any other measure of return, we can also try to minimize functions such as RWA (Risk Weighted Assets), Basel 3 capital required, etc. These problems have a similar set of constraints as the previous one. I am attempting to solve a dynamic optimization exercise where we would have re-balancing based on a simulated environment of rates, inflation, FX etc. Most of these are not purely Markowitz type and I end up using linear/quadratic programming based on the use case. Hope this helps you in some small way.
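As a toy illustration of how such constrained variants are set up in code: the sketch below is a generic no-short-selling, capped-weight mean-variance problem, not the yield/OAS or RWA objectives described above, and cvxpy plus the random sample data are my own choices for the example.

import numpy as np
import cvxpy as cp

# Toy inputs: a positive-definite covariance and random expected returns.
np.random.seed(0)
n = 5
A = np.random.randn(n, n)
Sigma = A @ A.T + np.eye(n)
mu = np.random.rand(n) * 0.1
eta = float(mu.mean())          # target the equal-weight return so the toy problem is feasible
alpha = 0.4                     # cap on any single position

w = cp.Variable(n)
objective = cp.Minimize(cp.quad_form(w, Sigma))   # min w' Sigma w
constraints = [cp.sum(w) == 1,                    # fully invested
               mu @ w == eta,                     # hit the target return
               w >= 0,                            # no short selling
               w <= alpha]                        # per-asset weight cap
problem = cp.Problem(objective, constraints)
problem.solve()
print(w.value)

Swapping the objective or adding turnover and exposure limits is just a matter of editing the objective and the constraint list, which is why linear/quadratic programming covers most of the cases listed above.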
{}
{'log_upvote_score': 4, 'links': ['https://quant.stackexchange.com/questions/15439', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/6661/']}
jdg_376805
stackexchange
llm_judgeable_groundtruth_similarity
44021662
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two controller file homecontroller and backendcontroller. What is the best way to create global function and access it from both files? I found here Arian Acosta's answer helpful but I wonder if there is an easiest way. I would appreciate any suggestions. Now provide the response and nothing else.
Updated: Step 1: Add a Helper folder inside the app folder: app->Helper. Step 2: Add a PHP class inside the Helper folder, e.g. Helper.php. Add a namespace and class to Helper.php: namespace App\Helper;class Helper{} Register this helper in the config/app.php file: 'aliases' => [ .... 'Helper' => App\Helper\Helper::class ] Now, write all the functions inside Helper.php and they will be accessible everywhere. How to access it from a controller? Step 1 - Add the namespace at the top of the controller: use App\Helper\Helper; Step 2 - Call the function - Assume there is a getInformation() inside the Helper class. $information = Helper::getInformation()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/44021662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2520605/']}
jdg_376806
stackexchange
llm_judgeable_groundtruth_similarity
41119
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am interested in both low and high volume syringes. Now provide the response and nothing else.
The requirements for ordinary sterile single-use syringes made of plastic materials are standardized in the international standard ISO 7886-1:1993 Sterile hypodermic syringes for single use – Part 1: Syringes for manual use . According to this standard, the tolerance on the graduated capacity depends on the nominal capacity of the syringe $V$ and the expelled volume $V_\text{ex}$. For syringes with a nominal capacity of $V\lt5\ \mathrm{ml}$, the tolerance amounts to $\pm5\ \%\ \text{of}\ V_\text{ex}$ for volumes equal to or greater than half nominal capacity $V$ $\pm{\left(1.5\ \%\ \text{of}\ V +2\ \%\ \text{of}\ V_\text{ex}\right)}$ for volumes less than half nominal capacity $V$. For syringes with a nominal capacity of $V\geqslant5\ \mathrm{ml}$, the tolerance amounts to $\pm4\ \%\ \text{of}\ V_\text{ex}$ for volumes equal to or greater than half nominal capacity $V$ $\pm{\left(1.5\ \%\ \text{of}\ V +1\ \%\ \text{of}\ V_\text{ex}\right)}$ for volumes less than half nominal capacity $V$. The values apply to the volume of water at a temperature of $T=\left(20\pm5\right)\ \mathrm{^\circ C}$ expelled from the syringe when the line circumscribing the end of the piston traverses a given scale interval.
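The quoted figures translate directly into a small helper, if that is useful; the function below merely restates the ISO 7886-1 numbers above and is not an official calculator.

def tolerance_ml(V, V_ex):
    # V = nominal capacity (ml), V_ex = expelled volume (ml); returns the +/- tolerance in ml.
    if V < 5.0:
        if V_ex >= V / 2:
            return 0.05 * V_ex
        return 0.015 * V + 0.02 * V_ex
    if V_ex >= V / 2:
        return 0.04 * V_ex
    return 0.015 * V + 0.01 * V_ex

# Example: a 10 ml syringe expelling 2 ml -> +/- (1.5% of 10 + 1% of 2) = +/- 0.17 ml
print(tolerance_ml(10, 2))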
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/41119', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/22913/']}
jdg_376807
stackexchange
llm_judgeable_groundtruth_similarity
530209
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently working in a branch I created using the Team-->Branch. How do I use Team-->Merge to merge my changes to the branch back to head? Now provide the response and nothing else.
I was having trouble with this too. Here are the steps I take to merge (mostly taken from the online Eclipse help): When making my development branch ( Team -> Branch ) I ensure I have a root tag created for the branch, this is the tag CVS will use to determine what changes are due to the branch and what might have been committed from other efforts I complete my development on the branch and commit ( Team -> Commit ) all changes back to the repository Now that all my changes for the branch are in the repository I load the destination branch that I want to merge into, typically HEAD ( Team -> Switch to Another Branch or Version ) You are now ready to begin the actual Merging process. Here you will be picking which changes should come over and which should be discarded. Think of this process as a semi-automated development cycle, effectively a checkout, make changes, commit cycle. Only instead of manually inserting your changes you have compare tools to assist you along. After all you've already made all the code changes in your branch! To Merge: Team -> Merge Branch or Version to be Merged (end tag): This is the name of your development branch tag Common Base Version: This is the name of your root tag for the branch (remember step 1 from the earlier section?). This should have been auto populated by the tool, but if not, add it yourself I like to Preview the merge in synchronize view so I can ensure that nothing is getting in that shouldn't be. Depending on your bravery, this is the option I would suggest. Click the Finish button The Synchronize Perspective will come up and any differences found between your destination branch and your source branch will be shown in an expandable list in the left For any file you can open it in a comparison tool to see what it was before and merge changes individually (remember to save the file when done!) or you can right click on the file and select merge from the expandable list. If you manually merge with the comparison tool, you can select "Mark as Merged" in the expandable list to remove the file from your todo list. Once you've decided what to do with the file it will be removed from the list. The goal is to action each file Now that all your changes have been merged, switch back to the Java perspective and select Team -> Synchronize with Repository , the Synchronize Perspective will come back up. This time select Commit to push all your changes into the repository. Fin! Hope that helped.Provided below for reference is the contents of the help document that guided me to victory! Eclipse SDK Help - Merging a Branch
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/530209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5885/']}
jdg_376808
stackexchange
llm_judgeable_groundtruth_similarity
62512473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As the title suggests, I have some code that calls IHttpClientFactory.CreateClient() to create an HttpClient instance. I'm doing this in .Net Core 3.1 I need to mock this. According to this question"C# Mock IHttpclient & CreateClient" the following should work... [Test]public void Mytest() { var httpClientFactory = new Mock<IHttpClientFactory>(MockBehavior.Strict); httpMessageHandler = new Mock<HttpMessageHandler>(MockBehavior.Strict); httpMessageHandler.Protected() // Setup the PROTECTED method to mock .Setup<Task<HttpResponseMessage>>( "SendAsync", ItExpr.IsAny<HttpRequestMessage>(), ItExpr.IsAny<CancellationToken>() ) // prepare the expected response of the mocked http call .ReturnsAsync(new HttpResponseMessage() { StatusCode = HttpStatusCode.BadRequest, }) .Verifiable(); var httpClient = new HttpClient(httpMessageHandler.Object); httpClientFactory.Setup(_ => _.CreateClient()) // This fails .Returns(httpClient).Verifiable(); systemUnderTest = new MyService(httpClientFactory.Object); var result = systemUnderTest.MyMethod() // Assert Stuff} However, when I run it, the following is reported... System.NotSupportedException : Unsupported expression: _ => _.CreateClient()Extension methods (here: HttpClientFactoryExtensions.CreateClient) may not be used in setup /verification expressions. I'm clearly doing something wrong, but I can't see what it is. Can anyone offer any pointers? Now provide the response and nothing else.
IHttpClientFactory has a single method on it, CreateClient(string) . It also has an extension method CreateClient(IHttpClientFactory) that uses the default configuration ( it passes Options.DefaultName ). You aren't mocking the interface method, but rather the extension method, and as you have realized, mocking extension methods is not possible. But never fear, we have a solution: mock the method that actually appears on the interface! You can either mock it for all client names, a specific name, or the default name ( string.Empty ): // any namehttpClientFactory.Setup(_ => _.CreateClient(It.IsAny<string>())) .Returns(httpClient).Verifiable();// specific namehttpClientFactory.Setup(_ => _.CreateClient("SpecificName")) .Returns(httpClient).Verifiable();// the default name (extension method invokes this)httpClientFactory.Setup(_ => _.CreateClient(string.Empty)) .Returns(httpClient).Verifiable(); The last option matches what happens when the extension method is invoked. But do keep in mind that if you are using named clients, your code may be passing a name to the factory and you'd want to match that.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62512473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120485/']}
jdg_376809
stackexchange
llm_judgeable_groundtruth_similarity
277832
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does noise match of an LNA means minimum signal to noise ratio at output? According to definition it means minimizing relative contribution of noise power with respect to source. If not why do we then do noise match at input of LNA. Now provide the response and nothing else.
Rather than just send you away with criticism of what you do or don't know, let's work through your problem and help you learn something. Clearly you have a microprocessor with an output pin and you want to turn on/off an SSR. Whether it's fake or not is beside the point. You can learn much from it's somewhat sparse datasheet . The block diagram tells you the basics of the switch: ...and here I've corrected the diagram so some won't get upset at not using conventions for voltage and I/O in a schematic. Let's deal with just the drive requirements for the moment.From the datasheet: From this you can within reasonable limits work out how much drive current is required to turn on the SSR. The switch drive is optically coupled to the output side, and you can see there are actually two LED's used (and they are almost invariably IR/Red with a forward voltage about 2.2 V). Given the datasheet defines the current as 7.5 mA @12 V input, we can get a rough idea of the resistor values. (12 - 2.2)/0.0075 --> 1.3k Ohm ...we can't establish what the value is for each since we don't know how much current flows in each LED, but we can now decide how much current would flow when driven by a 5 volt input signal. (5 - 2.2)/1300 --> 2.1 mA (approximate). From this low current at 5 V we can deduce that you don't need a drive transistor at all since most microprocessor I/O pins will typically support > 10 mA. But we'll deal with your actual microprocessor later. So you can drive this switch directly with no transistor and no series resistor from a 5 V supply. Note: My guess is that the drive is unevenly set between the visible status LED and the optocoupler LED, so it may be that the status LED is barely visible at 5 V drive. It appears that your microprocessor board is a Wemos D1 , and from it's datasheet this is a 3.3 V device. The board has a 5 V to 3.3 V regulator on it, but all the I/O signals are 3.3 V. Since your microprocessor is 3.3 V, you will actually be able to drive the switch directly. While you are very close to the minimum 3 V specification from the datasheet, notice that they actually break out separately and specify 2.4 V as the absolute minimum on voltage. However if you are nervous about temperature ranges etc, then it can be wise to provide a higher level of drive, so your original thought of a transistor drive is quite valid. However we now know the current requirements are very small when driving the switch input from 5 V so you could use almost any general purpose TO92/SOT23 NPN switch to do this task. Lets choose a 2N2222 which has more than enough current sink capability for our task and is cheap ($0.03). IC is 2.1 mA in this application and the 2N2222 has min Hfe of 50 @1 mA. So the base current required is approximately 0.0021/50 --> 42 uA (a poofteenth). We can essentially ignore this base current requirement and simply set an overdrive level we are comfortable with. From the ESP8266 datasheet the I/O pins are able to sink/source 12 mA. If we set the base current to 1 mA @3.3 V then we have a series resistor of 3.3k Ohms. So the final circuit looks like this: simulate this circuit – Schematic created using CircuitLab Hope this helps.
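The arithmetic in that walk-through can be collected in a few lines; the 2.2 V LED drop and 0.7 V base-emitter drop are the same rough estimates used above.

V_led = 2.2                               # assumed forward drop of the input LED chain

# Datasheet point: 7.5 mA at 12 V input implies the internal series resistance.
R_series = (12 - V_led) / 0.0075
print("implied input resistance:", round(R_series), "ohm")      # about 1.3k

# Drive current straight from a 5 V or 3.3 V pin through that same resistance.
for vin in (5.0, 3.3):
    i_ma = (vin - V_led) / R_series * 1000
    print(vin, "V drive ->", round(i_ma, 2), "mA")

# Optional NPN buffer: base resistor for about 1 mA of base current from a 3.3 V pin.
V_be = 0.7
R_base = (3.3 - V_be) / 0.001
print("base resistor:", round(R_base), "ohm")   # ~2.6k if V_BE is subtracted; the 3.3k above ignores V_BE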
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/277832', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/59160/']}
jdg_376810
stackexchange
llm_judgeable_groundtruth_similarity
181617
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to have your disk/partition/file encrypted (in Linux or as a hardware encryption; no Windows here) in such way that it locks itself for say 10 minutes after say 3 failed unlock attempts?... The idea is to have a somewhat shorter password to remember without sacrificing security. Now provide the response and nothing else.
No. This is an entirely nonsensical endeavour. If you choose a password that is so easy that I might be able to guess it with 5 or 6 attempts, you might as well not use disk encryption at all. On the other hand, a password that cannot be guessed in under half a dozen attempts and would trigger this "lock out security measure" is of no avail either. An attacker who is only marginally clever will run an offline attack, that is, he will read a few sectors and try to brute-force them with his own (massively parallel, multi-GPU) tool. He doesn't care whether you "lock him out" on the boot screen because he isn't using it at all. Note that all reasonably modern disk encryption software uses an expensive key derivation algorithm which takes around half a second or so on your computer to actually compute the encryption key from your password. This is meant to slow down brute force attacks which would otherwise test billions of passwords per second. But of course throwing a multi-GPU rig at the problem means you can still test a few thousand passwords per second. Given a dictionary-based test permutation, it is very optimistic to assume that an "easy" (read as: bad) password will hold an attacker back longer than a few seconds.
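To put rough, purely illustrative numbers on that argument (the dictionary size and guess rates below are assumptions, not measurements):

dictionary_size = 10**6            # assume ~1M candidate passwords for an "easy" password

# Offline attacker: the key derivation is tuned to ~0.5 s/guess on one CPU, but a
# multi-GPU rig testing, say, 5,000 guesses per second walks the whole list quickly.
offline_rate = 5000.0
print("offline:", dictionary_size / offline_rate / 3600, "hours")       # ~0.06 h

# "Locked out" online attacker: 3 tries per 10 minutes at the boot prompt.
online_rate = 3 / 600.0
print("online:", dictionary_size / online_rate / 86400 / 365, "years")  # ~6 years

The lockout only throttles the second attacker, and that attacker was never the real threat.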
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/181617', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/100997/']}
jdg_376811
stackexchange
llm_judgeable_groundtruth_similarity
3920963
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In an exercise I'm asked to evaluate the following limit: $$\lim_{n\to\infty} \frac{n!}{n + 2^n}$$ I arrived at the conclusion that: $$\frac{n!}{n + 2^n} \geq \frac{n!}{2^n}$$ So my plan is to prove that $\frac{n!}{2^n}$ diverges and as a consequence show that $\frac{n!}{n + 2^n}$ also diverges, But I'm not being able to do so. How can I prove that $\lim n! / 2^n$ is infinity? Now provide the response and nothing else.
Notice that for $n \ge 2$ , we have $\frac{n!}{2^n} = \frac{n}{2} \cdot \frac{n-1}{2} \cdot \dots \frac {2}{2} \cdot \frac{1}{2} \ge \frac{n}{2} \cdot \frac{1}{2} = \frac{n}{4}$ , which diverges to $\infty$ . I'll point out that your inequality is wrong, since $n +2^n \ge 2^n$ , though. You could fix this by establishing something like $n + 2^n \le 2^{n+1}$ .
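A few terms computed directly illustrate the same growth (purely a numerical check, not part of the argument):

from math import factorial

# n!/2**n >= n/4 for n >= 2, and the original ratio n!/(n + 2**n) blows up as well.
for n in [2, 5, 10, 20]:
    print(n, n / 4, factorial(n) / 2**n, factorial(n) / (n + 2**n))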
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3920963', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/772517/']}
jdg_376812
stackexchange
llm_judgeable_groundtruth_similarity
55463
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Polyakov action is given by:$$S_p ~=~ -\frac{T}{2}\int d^2\sigma \sqrt{-g}g^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}\eta_{\mu\nu} ~=~ -\frac{T}{2}\int d^2\sigma \sqrt{-g}g^{\alpha\beta}\gamma_{\alpha\beta},$$where $\gamma_{\alpha\beta}$ is called the induced metric and $g_{\alpha\beta}$ the dynamical metric on the world sheet. I have difficulties understanding the differences between these two metrics. I know that the latter is introduced in order to be able to remove the square root in the Nambu-Goto action, but I don't know what it means. The space in which the string propagates has just the Minkowski metric $\eta_{\mu\nu}$, if I am not mistaken. Furthermore, I think that the induced metric is derived by demanding $ds^2$(whole space) = $\eta_{\mu\nu}dx^{\mu}dx^{\nu}$ = $ds^2$(world sheet) = $\gamma_{\alpha\beta}d\sigma^{\alpha}d\sigma^{\beta}$ Is this correct? I am really confused by all these different metrics. Now provide the response and nothing else.
There are two manifolds that are involved in string propagation. The spacetime in which the string propagates. The worldsheet of the string itself. The fields $X^\mu$ are embedding coordinates of the worldsheet in the spacetime manifold. This means that for each point $(\sigma^1, \sigma^2)$ on the worldsheet, $X^\mu(\sigma^1, \sigma^2)$ gives the coordinates of that point in the spacetime manifold. In the case you are considering, the spacetime is taken to be Minkowski, so the metric is $\eta_{\mu\nu}$. Now we could ask "Given that the worldsheet is a two-dimensional embedded submanifold of Minkowski space, is there some way that this manifold inherits its metric from the metric on the ambient spacetime?" This question is analogous to "Given that the sphere $S^2$ is some two-dimensional embedded submanifold of Euclidean space $\mathbb R^3$, is there some natural sense in which it inherits its metric from $\mathbb R^3$?" The answer to both of these questions is yes, and the metric on the submanifold that does this is precisely the induced metric. The formula expressing the induced metric for a two-dimensional submanifold of some ambient manifold with metric $g_{\mu\nu}$ (not necessarily flat) in terms of embedding coordinates is$$ \gamma_{ab}(\sigma) = g_{\mu\nu}(X(\sigma))\partial_aX^\mu(\sigma)\partial_b X^\nu(\sigma), \qquad \sigma = (\sigma^1, \sigma^2)$$You are right about the derivation of the induced metric: it comes from demanding that the distance measured between points on the embedded submanifold is calculated to be the same number whether you use the ambient metric or the induced metric. To see that the above expression for the induced metric does this, simply note that the infinitesimal distance between any two points on the embedded submanifold can be written in terms of the ambient metric and the embedding coordinates as\begin{align} g_{\mu\nu}(X(\sigma))d(X^\mu(\sigma))d(X^\nu(\sigma)) &= g_{\mu\nu}(X(\sigma))\partial_a X^\mu(\sigma)\partial_bX^\nu(\sigma)d\sigma^ad\sigma^b \\&= \gamma_{a b}(\sigma)d\sigma^ad\sigma^b\end{align}To get some intuition for all of this, recall that the expression for the embedding coordinates of $S^2$ in $\mathbb R^3$ is\begin{align} X(\theta, \phi) &= \sin\theta\cos\phi\\ Y(\theta, \phi) &= \sin\theta\sin\phi\\ Z(\theta, \phi) &= \cos\theta\end{align}and using these embeddings you should be able to show that the metric on the sphere is simply$$ \gamma_{ab}(\theta, \phi) = \mathrm{diag}(1, \sin^2\theta)$$ Let me know if that's unclear or if you need more detail!
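If it helps to see the machinery run, the sphere example at the end can be reproduced symbolically; sympy is just a convenience here, and the ambient metric is the flat Euclidean one.

import sympy as sp

# Induced metric on S^2 from its embedding in flat R^3:
# gamma_ab = delta_{mu nu} dX^mu/dsigma^a dX^nu/dsigma^b
theta, phi = sp.symbols('theta phi', real=True)
X = sp.Matrix([sp.sin(theta) * sp.cos(phi),
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])

J = X.jacobian([theta, phi])      # columns are dX/dtheta and dX/dphi
gamma = sp.simplify(J.T * J)      # flat ambient metric, so just the dot products
print(gamma)                      # expected: Matrix([[1, 0], [0, sin(theta)**2]])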
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/55463', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/20155/']}
jdg_376813
stackexchange
llm_judgeable_groundtruth_similarity
63379066
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was told that the following code has undefined behavior until C++20: int *p = (int*)malloc(sizeof(int));*p = 10; Is that true? The argument was that the lifetime of the int object is not started before assigning the value to it ( P0593R6 ). To fix the problem, placement new should be used: int *p = (int*)malloc(sizeof(int));new (p) int;*p = 10; Do we really have to call a default constructor that is trivial to start the lifetime of the object? At the same time, the code does not have undefined behavior in pure C. But, what if I allocate an int in C code and use it in C++ code? // C source code:int *alloc_int(void){ int *p = (int*)malloc(sizeof(int)); *p = 10; return p;}// C++ source code:extern "C" int *alloc_int(void);auto p = alloc_int();*p = 20; Is it still undefined behavior? Now provide the response and nothing else.
Is it true? Yes. Technically speaking, no part of: int *p = (int*)malloc(sizeof(int)); actually creates an object of type int , so dereferencing p is UB since there is no actual int there. Do we really have to call default constructor that is trivial to start the life time of the object? Do you have to per the C++ object model to avoid undefined behavior pre-C++20? Yes. Will any compiler actually cause harm by you not doing this? Not that I'm aware of. [...] Is it still undefined behavior? Yes. Pre-C++20, you still didn't actually create an int object anywhere so this is UB.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/63379066', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5447906/']}
jdg_376814
stackexchange
llm_judgeable_groundtruth_similarity
20034105
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have just installed Visual Studio 2013, created an MVC Web Application project and noticed a new file in the project template called Startup.cs. What is this, how is this different from Global.asax.cs and are there any good best practices on what to use this for? Now provide the response and nothing else.
Every OWIN application has a startup class where you specify components for the application pipeline. If you start a new Visual Studio project, you'll see pieces of OWIN in it. OWIN is a specification that defines an API for frameworks and servers to cooperate. The point of OWIN is to decouple server and application. For example, ASP.NET Identity uses OWIN security, SignalR self-hosting uses OWIN hosting, etc.; the examples all use OWIN, therefore they all need to have a startup class, which is defined in the "Startup.cs" file. The Global.asax, the ASP.NET application file, is an optional file that contains code for responding to application-level events raised by ASP.NET or by HttpModules. For more details: OWIN http://www.asp.net/aspnet/overview/owin-and-katana Global.asax http://msdn.microsoft.com/en-us/library/1xaas8a2(v=vs.71).aspx You can find more on the reasons for OWIN in the following article: http://www.asp.net/aspnet/overview/owin-and-katana/an-overview-of-project-katana
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20034105', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/45967/']}
jdg_376815
stackexchange
llm_judgeable_groundtruth_similarity
14963776
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create an application where I can get/set data in specific users accounts and I was tempted by Firebase. The problem I'm having is that I don't know how to target specific users data when my structure looks like this: online-b-cards - users - InnROTBVv6FznK81k3m - email: "hello@hello" - main: "Hello world this is a text" - name: "Alex" - phone: 12912912 I've looked around and I can't really find anything on how to access individual data let alone when they're given some random hash as their ID. How would I go about grabbing individual user information based of their name? If there is a better way of doing this please tell me! Now provide the response and nothing else.
Previously, Firebase required you to generate your own indexes or download all data at a location to find and retrieve elements that matched some child attribute (for example, all users with name === "Alex" ). In October 2014, Firebase rolled out new querying functionality via the orderByChild() method, that enables you to do this type of query quickly and efficiently. See the updated answer below. When writing data to Firebase, you have a few different options which will reflect different use cases. At a high level, Firebase is a tree-structured NoSQL data store, and provides a few simple primitives for managing lists of data: Write to Firebase with a unique, known key: ref.child('users').child('123').set({ "first_name": "rob", "age": 28 }) Append to lists with an auto-generated key that will automatically sort by time written: ref.child('users').push({ "first_name": "rob", "age": 28 }) Listen for changes in data by its unique, known path: ref.child('users').child('123').on('value', function(snapshot) { ... }) Filter or order data in a list by key or attribute value : // Get the last 10 users, ordered by keyref.child('users').orderByKey().limitToLast(10).on('child_added', ...)// Get all users whose age is >= 25ref.child('users').orderByChild('age').startAt(25).on('child_added', ...) With the addition of orderByChild() , you no longer need to create your own index for queries on child attributes! For example, to retrieve all users with the name "Alex": ref.child('users').orderByChild('name').equalTo('Alex').on('child_added', ...) Engineer at Firebase here. When writing data into Firebase, you have a few different options which will reflect different application use cases. Since Firebase is a NoSQL data store, you will need to either store your data objects with unique keys so that you can directly access that item or load all data at a particular location and loop through each item to find the node you're looking for. See Writing Data and Managing Lists for more information. When you write data in Firebase, you can either set data using a unique, defined path (i.e. a/b/c ), or push data into a list, which will generate a unique id (i.e. a/b/<unique-id> ) and allow you to sort and query the items in that list by time. The unique id that you're seeing above is generated by calling push to append an item to the list at online-b-cards/users . Rather than using push here, I would recommend using set , and storing the data for each user using a unique key, such as the user's email address. Then you can access the user's data directly by navigating to online-b-cards/users/<email> via the Firebase JS SDK. For example: function escapeEmailAddress(email) { if (!email) return false // Replace '.' (not allowed in a Firebase key) with ',' (not allowed in an email address) email = email.toLowerCase(); email = email.replace(/\./g, ','); return email;}var usersRef = new Firebase('https://online-b-cards.firebaseio.com/users');var myUser = usersRef.child(escapeEmailAddress('hello@hello.com')) myUser.set({ email: 'hello@hello.com', name: 'Alex', phone: 12912912 }); Note that since Firebase does not permit certain characters in references (see Creating References ), we remove the . and replace it with a , in the code above.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14963776', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1322654/']}
jdg_376816
stackexchange
llm_judgeable_groundtruth_similarity
141466
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the distance between, say, a cup of coffee and the table it rests on? What is the distance between two touching hands? Now provide the response and nothing else.
This answer I once gave for What does it mean for two objects to "touch"? discusses what touching even means . It's not a direct answer to your question, but I think it may help you view the issue in a different way. Warning: It's one of my long, talky answers that some people love and others hate. The physics in it is accurate (and for many folks, unexpected) in any case. The specific answer to your question is that the most fundamental distance between two touching objects is determined by Pauli exclusion surfaces between electrons in the touched and touching objects, with the surfaces being where there is zero probability of finding electrons from either of the objects. Thus how "close" the objects are depends on what level of normalized probability of finding either electron in the exclusion pair you are willing to tolerate. E.g., for some specific set of nearby atoms, "1%" gives one (very short, sub-Angstrom) distance, while "5%" gives another, somewhat larger distance. Oddly, that also means that the simplest answer is that the objects really do "touch", specifically at the surface of zero probability due to Pauli exclusion. There are other modifiers of course, such as thermal noise that bounces these surfaces apart at very high frequencies and so give various types of averaged distances. The deeper physics of actual repulsion always, for ordinary matter, goes back to those Pauli exclusion surfaces between individual pairs of electrons in the touched and touching objects.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/141466', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/55978/']}
jdg_376817
stackexchange
llm_judgeable_groundtruth_similarity
42733835
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: OS: Windows 10 Pro webpack: 1.14.0 sw-precache-webpack-plugin: 0.9.1 sw-precache: 5.0.0 So, I launch my site and don't actively do anything for a few moments, and then the above specified error message is generated in devTools. If some process is carried out, the the error does not arise My React code is as follows: webpack.config.prod.js var path = require('path');var webpack = require('webpack');var SWPrecacheWebpackPlugin = require('sw-precache-webpack-plugin');module.exports = { devtool: 'source-map', context: __dirname, entry: { main: path.resolve(__dirname, './client/app'), }, output: { path: path.join(__dirname, '/public'), filename: 'bundle.js', publicPath: '/public/' }, plugins: [ new webpack.optimize.OccurenceOrderPlugin(), new webpack.DefinePlugin({ 'process.env': { 'NODE_ENV': "'production'" } }), new webpack.optimize.UglifyJsPlugin({ compressor: { warnings: false } }), new SWPrecacheWebpackPlugin( { cacheId: 'flamingoCity', filename: 'my-service-worker.js', stripPrefix: path.join(__dirname, 'public').replace(/\\/g,"/"), maximumFileSizeToCacheInBytes: 6194304, minify: true, runtimeCaching: [{ handler: 'cacheFirst', urlPattern: /[.]mp3$/, }], } ), ], module: { loaders: [ // js { test: /\.js$/, loaders: ['babel'], include: path.join(__dirname, 'client') }, // CSS { test: /\.styl$/, include: path.join(__dirname, 'client'), loader: 'style-loader!css-loader!stylus-loader' } ] }}; app.js /* Import Dependencies*/import React from 'react';import { render } from 'react-dom';import { Provider } from 'react-redux';import { Router, Route, IndexRoute } from 'react-router'import 'babel-polyfill';/* Import Components*/import App from './components/App';import Single from './components/Single';import PhotoGrid from './components/PhotoGrid';/* Import CSS */import css from './styles/style.styl';/* Import our data store */import store, { history } from './store';/* Error Logging*/import Raven from 'raven-js';import { sentry_url } from './data/config';if(window) { Raven.config(sentry_url).install();}/* Register Service Worker*/if('serviceWorker' in navigator && process.env.NODE_ENV === 'production') { navigator.serviceWorker.register('./my-service-worker.js').then(function(reg) { // updatefound is fired if my-service-worker.js changes. reg.onupdatefound = function() { // The updatefound event implies that reg.installing is set; see // https://slightlyoff.github.io/ServiceWorker/spec/service_worker/index.html#service-worker-container-updatefound-event var installingWorker = reg.installing; installingWorker.onstatechange = function() { switch (installingWorker.state) { case 'installed': if (navigator.serviceWorker.controller) { // At this point, the old content will have been purged and the fresh content will // have been added to the cache. // It's the perfect time to display a "New content is available; please refresh." // message in the page's interface. console.log('New or updated content is available.'); } else { // At this point, everything has been precached. // It's the perfect time to display a "Content is cached for offline use." message. 
console.log('Content is now available offline!'); } break; case 'redundant': console.error('The installing service worker became redundant.'); break; } }; }; }).catch(function(e) { console.error('Error during service worker registration:', e); });}/* Rendering This is where we hook up the Store with our actual component and the router*/render( <Provider store={store}> { /* Tell the Router to use our enhanced history */ } <Router history={history}> <Route path="/" component={App}> <IndexRoute component={PhotoGrid} /> <Route path="/view/:postId" component={Single}></Route> </Route> </Router> </Provider>, document.getElementById('root')); What is the issue here? Now provide the response and nothing else.
Normally, idle service workers are aggressively killed as an optimization, to prevent code from running in the background when it's not needed. Chrome takes steps to detect whether DevTools are open, and when it is, will not kill the service worker automatically. The assumption is that if a developer is using DevTools, they might be debugging or otherwise examining the service worker's behavior, and killing the service worker would frustrate the developer. Extending the lifetime of the service worker used to be done silently, but as described here , for the past few versions of Chrome, the message Service Worker termination by a timeout timer was canceled because DevTools is attached. is logged to the console to let the developer know that the service worker would normally have been killed, but the job to kill it was cancelled due to DevTools being open. You might ask, what difference does that make? Why bother letting the developer know that something didn't happen? The motivation for pointing this out is to let the developer know that something that real users will encounter in production (i.e. the service worker being repeatedly killed/restarted) is not happening in this debugging environment. This makes a difference if a developer has written buggy service worker code that makes incorrect assumptions about the persistence of global state . It's easy to write code that assumes global state will always persist, and works when run with DevTools open (because the service worker is never killed), and then fails to work in production.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42733835', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5613904/']}
jdg_376818
stackexchange
llm_judgeable_groundtruth_similarity
11470
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It seems the Gospel of John was written in the late 80s or early 90s .If John was around Jesus age when they both met, then he would have been 80 or 90 years when writing it. In that time, people didn't usually live that long. The average was probably around 37 years the only 25% of the population would live 50 or 70 years (being generous) I've seen some people say the Gospel of John was written probably by followers of John rather than by himself, but in this case... it doesn't sound like an inspired word from God but more like the remaining of John's word in his followers. Note that there is already a question asking for the internal evidence of authorship , answers to this question should focus on the external evidence. Now provide the response and nothing else.
Christian tradition holds that John did live to be 80 or 90. We know from Polycarp that John was still active in Ephesus and baptised Polycarp directly. Following Schaff: It is safe, then, to say that the apostle John, with other disciples of Christ, came from Palestine to Asia Minor. If Polycarp, on the day of his death (Feb. 23, 155), was looking back on eighty-six years of life as a Christian, not as a man, and was thus baptized in 69, and if his conversion (according to Irenæus, Haer., III., iii. 4) was the work of an apostle, this migration to Asia Minor must have occurred before that date, possibly as a result of the outbreak of the Jewish war. John, then perhaps not more than sixty or sixty-five, would thus have been able to devote some thirty years to the fostering of Christian life in the province. Likewise, of Revelation, Schaff writes: Space forbids going into the long history of the hypotheses which have been set forth as to the growth of the book, which is frequently held to have been a lengthy process. The following conclusions, however, seem safe. The assertion of Irenaeus (Haer., V., xxx, 3) that the visions were seen and the book written toward the end of the reign of Domitian, or about 95, finds support in the numerous historical data of the opening chapters. The designed and immediately accomplished introduction of the book into public liturgical use precludes the possibility of any notable alterations in it between 100 and 150. The author, as his name and idiom show, is of Hebrew birth, and about 95 had a recognized position of authority over the church of the province, without having any contemporary rival of the same name. He is the only John of Ephesus of whom anything is known from a tradition reaching back into his lifetime and in decisive points independent of his own writings. That he does not call himself an apostle is no proof that he was not one; his apostleship had no immediate connection with his apocalyptic purpose, and he does not describe himself at all. Put more simply, whoever wrote Revelation was the most important John out there, and that he was the same disciple makes sense. He was, according to tradition, both the youngest of the disciples and the one who lived the longest. Tradition also states that the older he grew, the shorter his sermons would be, until they could simply be "Little Children, Love one another!" The average life expectancy was much lower because so many people died as young infants or children - but once you made it to your teens, you could expect to live quite a while.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/11470', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/4345/']}
jdg_376819
stackexchange
llm_judgeable_groundtruth_similarity
22443
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Model : Consider an infinite undirected connected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$. At time $t=0$, a given virus node $s\in\mathcal{V}$ starts infecting the network $\mathcal{G}$. Specifically, for each edge $(u,v)\in\mathcal{E}$, if either $u$ or $v$ becomes infected, then the other uninfected node on the edge $(u,v)$ will be infected after a random time following an exponential distribution with rate $\lambda$. Assume the spreading times for each edge are independent and identically distributed. Question : Now given a connected subgraph $G\subset\mathcal{G}$ ($s\in G$ ), what is the probability that all the nodes in $G$ are infected at time $t>0$? I tried to find a mathematical model to track such an infection process on different kinds of networks (trees, grids, regular networks, etc.). However, after checking the classical epidemic models (SIS, SIR, etc.), I found these models often use the mean field approximation to find the percentages of population in different states (Suspicious, Infected, etc.) without considering the network structures. Thus I think they might not be useful in modeling the infection process described above. Any insights or references to papers/books would be helpful and greatly appreciated. Now provide the response and nothing else.
The footnote of my paper that you cite refers to a heuristic "argument" as well, at least, what we think was Kolmogorov's intuition -- the positive resolution of Hilbert's thirteenth problem. http://en.wikipedia.org/wiki/Hilbert's_thirteenth_problem In particular, it was proved by Kolmogorov and Arnold that any continuous function on $n$ variables can be expressed as a composition of $O(n^2)$ "simple" functions: addition of two variables, and continuous functions on one variable. Hence, over the "basis" of one-variable continuous functions and two-variable addition, every continuous function on $n$ variables has "circuit complexity" $O(n^2)$. It seems Kolmogorov believed there is a discrete analog, where "continuous in $n$ variables" becomes "Boolean in $n$ variables and poly$(n)$-time computable", and where the "basis" given above becomes two-variable Boolean functions.
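For reference, the Kolmogorov–Arnold superposition theorem alluded to here is usually stated as follows (standard textbook form, not a quotation from the paper): every continuous $f:[0,1]^n\to\mathbb{R}$ can be written as $$ f(x_1,\dots,x_n)=\sum_{q=0}^{2n}\Phi_q\left(\sum_{p=1}^{n}\phi_{q,p}(x_p)\right), $$ where the $\Phi_q$ and $\phi_{q,p}$ are continuous functions of one variable. Counting them gives $2n+1$ outer and $n(2n+1)$ inner one-variable functions, which is where the $O(n^2)$ figure above comes from.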
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/22443', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/22801/']}
jdg_376820
stackexchange
llm_judgeable_groundtruth_similarity
9322078
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know you can do bundle show gem_name to show the path of some gem. How do you do that from within the code using the Bundler object? Now provide the response and nothing else.
Have a look at how they do it in cli.rb def locate_gem(name) spec = Bundler.load.specs.find{|s| s.name == name } raise GemNotFound, "Could not find gem '#{name}' in the current bundle." unless spec if spec.name == 'bundler' return File.expand_path('../../../', __FILE__) end spec.full_gem_pathend
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9322078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/373522/']}
jdg_376821
stackexchange
llm_judgeable_groundtruth_similarity
3543811
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It's Sunday, time for a round of code golf! Challenge Write the shortest source code by character count to determine if an input number is a "happy prime", "sad prime", "happy non-prime", or "sad non-prime." Input The input should be a integer that comes from a command line argument or stdin. Don't worry about handling big numbers, but do so if you can/want. Behavior would be undefined for input values less than 1, but 1 has a definite result. Output Output should print the type of number: "happy prime", "sad prime", "happy non-prime", or "sad non-prime." The trailing newline is optional. Examples $ happyprime 139happy prime$ happyprime 2sad prime$ happyprime 440happy non-prime$ happyprime 78sad non-prime Definitions Just in case your brain needs a refresher. Happy Number From Wikipedia, A happy number is defined by the following process. Starting with any positive integer, replace the number by the sum of the squares of its digits, and repeat the process until the number equals 1 (where it will stay), or it loops endlessly in a cycle which does not include 1. Those numbers for which this process ends in 1 are happy numbers, while those that do not end in 1 are unhappy numbers (or sad numbers). For example, 139 1^2 + 3^2 + 9^2 = 91 9^2 + 1^2 = 82 8^2 + 2^2 = 68 6^2 + 8^2 = 100 1^2 + 0^2 + 0^2 = 1 Prime Number A prime number is an integer greater than 1 and has precisely two divisors: 1 and itself. Happy Prime A happy prime, is therefore a number that is both happy and prime. Answer Selection Obviously the answer will be the shortest source code by character count that outputs the specified results in all cases that I test. I will mark the answer once the next (community decided) code golf challenge comes along, so we can focus all our energies on that one. :) Decision Well, it looks like the there is a new code golf in town and it has been about a week since this question was posted, so I've marked the shortest source code as the answer (gnibbler's 64 character Golfscript solution). That said, I enjoyed both the 99 character Mathematica solution by belisarius and the cryptic 107 character dc solution by Nabb. To all others, great work! I've never had so many programming language environments on my computer. I hope everyone has learned some new, dirty tricks for their favorite language. Reuse I've re-published some of the code produced by this competition as an example for a script I wrote to test various programs against a reference implementation for auto-grading . The README in that directory explains where the source code comes from and states that all code is re-used under the CC BY-SA 2.5 license (as stated in SO's legal section ). Each directory is labeled with your display name at the time of the submission. If you have a problem with your code being re-used in this fashion or the attribution, let me know and I will correct the error. Now provide the response and nothing else.
dc - 98 chars $ cat happyprimes[happy][sad]?dsI[[I~d*rd0<H+]dsHxd4<h]dshx[r]sr1=rP[ ][ non-]_1lI[1-d2>rdlIr%0<p]dspx-2=rP[prime]p$ echo 1 |dc happyprimeshappy non-prime$ echo 139|dc happyprimeshappy prime$ echo 2 |dc happyprimessad prime$ echo 440|dc happyprimeshappy non-prime$ echo 78 |dc happyprimessad non-prime
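For checking entries (or for the reference implementation mentioned in the question's reuse note), a non-golfed Python version might look like this; it is my own sketch, not one of the submitted solutions.

import sys

def is_happy(n):
    # Repeatedly replace n by the sum of the squares of its digits until 1 or a cycle.
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

n = int(sys.argv[1]) if len(sys.argv) > 1 else int(input())
print(("happy" if is_happy(n) else "sad") + (" prime" if is_prime(n) else " non-prime"))

It reproduces the four sample outputs above for 139, 2, 440 and 78.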
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3543811', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/288349/']}
jdg_376822
stackexchange
llm_judgeable_groundtruth_similarity
416370
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm wondering about the difference between small (13 mm outer diameter) ferrite and the yellow white iron powder toroids. Will the ferrite toroids saturate at 5 A current? I'm planning on using the cores for buck converters (mostly 3 A at probably below 200 kHz). These are the ones I am looking at: Ferrite: https://www.ebay.com/itm/Metal-Core-Power-Inductor-Ferrite-Rings-Toroid-Cord-25x10x15mm/310980203521 (also available in 13 mm outer diameter) Iron powder: https://www.ebay.com/itm/7mm-Inner-Diameter-Ferrite-Ring-Iron-Toroid-Cores-Yellow-White-50PCS-LW/181834403242 Most of the buck converters seem to use the yellow white iron powder toroids, like this: https://www.ebay.com/itm/5Pcs-Toroid-Core-Inductors-Wire-Wind-Wound-mah-100uH-6A-Coil-DIY/221981982278 . From searching on the Internet, the yellow white toroids seem to have a permeability of 75, and the ferrite has a permeability of 2300 or so. Is this important for saturation? I have some toroids and an LCR meter, and the ferrite toroid needs only a few turns of wire to get a 1 mH inductor, vs. many more turns for the iron powder core. Will this matter if the peak current through the inductor is limited? I'm guessing the ferrite toroids are great at low currents (0-100 mA) and low frequencies (<100 kHz, as I can get more inductance with fewer turns). But, are they also good for higher currents (like 5-6 A peak)? (PS: Also another reason I ask, is that at my place, the ferrite cores are half the price of the iron powder cores.) Now provide the response and nothing else.
There is a (de facto?) color standard for painted toroids. Micrometals appears to be the trend-setter here. Their products are bi-colored, with the first color covering three sides. Listings: https://www.micrometals.com/products/materials/pc/ https://www.micrometals.com/products/materials/rf/ There may be exceptions; these are merely the most common cases. Yellow-white is #26, a μr = 75 material with fairly high losses, typically used for filter inductors. It's a common sight in desktop PC power supplies, along with #52 (green-blue) which is almost identical with somewhat lower losses. Solid colors are less well defined. Black ferrite (coated, not bare) is usually a good choice for transformers, but beware as black is also used by Magnetics Inc.'s Kool-Mu® series, and maybe others. Blue may be an expensive Permalloy (MPP) powder type, which is more efficient than powdered iron. See also Micrometals' Sendust material. Green may be ferrite. White has also been used for both powder and ferrite materials. PC power supplies can put out over 1,000 watts and they use E cores as they are easy to wind by machine. Large toroids need expensive winding machine heads, so toroids are better used at low voltages where the number of turns is low, such as automotive amplifier power supplies. If you are salvaging materials, a measurement of the inductance factor (A_L), saturation current and Q factor are the best indicators of material type and suitability. NOTE: Sometimes practical reasons determine what material and shape of transformer are used, which is not always the best choice. Cost and size compete with efficiency. The opinions of engineering and marketing and sales are not the same, and who wins determines what is used. "Just good enough" wins most of the time. To cover all the variations of cores made by many manufacturers you would need a book full of charts specific to each core material. For any given core of any shape you need the manufacturer's datasheet or chart for that core to get an idea of permeability and any hysteresis factors and peak current values vs. pulse width. To quote Ali..chenski's comment: Ferromagnetic materials are characterized by more than one parameter, by shape (and corner values) of its hysteresis loop, saturation field, losses at various frequencies, etc. You can't map this multidimensional parametric space onto a single color-coded lineup, every material has a variety of properties, and there are hundreds of specific material compositions designed for different uses. Without datasheets a core can be anything. Link to Magnetic Hysteresis
Material Type   Color         Mu (μ)   Frequency (MHz)
41              Green         75       —
3               Grey          35       0.05–0.5
15              Red/White     25       0.1–2
1               Blue          20       0.5–5
2               Red           10       1–30
6               Yellow        8        10–90
10              Black         6        60–150
12              Green/White   3        100–200
0               Tan           1        150–300
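One way to see why the ferrite core needs so few turns for 1 mH: inductance scales as L = A_L·N², so the turns count goes as the square root of 1/A_L. The A_L values below are made-up placeholders for a high-permeability ferrite versus a μr ≈ 75 iron powder core of similar size; read the real numbers off the datasheet.

from math import sqrt, ceil

L_target = 1e-3                                  # 1 mH
cores = {"ferrite (placeholder A_L)": 2500e-9,   # H per turn^2, i.e. 2500 nH/N^2
         "iron powder (placeholder A_L)": 75e-9}

for name, A_L in cores.items():
    N = ceil(sqrt(L_target / A_L))
    print(name, "-> about", N, "turns")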
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/416370', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/72155/']}
jdg_376823
stackexchange
llm_judgeable_groundtruth_similarity
229937
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to use request.setAttribute on a JSP page and then on HTML submit get the same request attribute in the servlet? Now provide the response and nothing else.
No. Unfortunately the Request object only lives for the duration of a single request/response cycle - once that request completes, you'll lose all values in it unless they've been stored somewhere. If you want to persist attributes across requests you need to either: Have a hidden input in your form, such as <input type="hidden" name="myhiddenvalue" value="<%= request.getParameter("value") %>" />. This will then be available in the servlet as a request parameter. Or put it in the session (see request.getSession() - in a JSP this is available as simply session). I recommend using the session as it's easier to manage.
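To make the two options concrete, here is a minimal servlet-side sketch; the servlet name, URL mapping and attribute/parameter names are hypothetical, and on containers older than Servlet 3.0 you would declare the servlet in web.xml instead of using @WebServlet:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
@WebServlet("/submit")
public class SubmitServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Option 1: the JSP rendered the value into a hidden input named "myhiddenvalue",
        // so it arrives here as an ordinary request parameter of the new request.
        String fromForm = request.getParameter("myhiddenvalue");
        // Option 2: the JSP stored the value in the session, which survives across requests.
        HttpSession session = request.getSession();
        Object fromSession = session.getAttribute("myvalue");
        response.setContentType("text/plain");
        response.getWriter().println("form: " + fromForm + ", session: " + fromSession);
    }
}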
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/229937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23414/']}
jdg_376824
stackexchange
llm_judgeable_groundtruth_similarity
25427627
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried to use jQuery Dropzone to upload an image to Imgur or any other domain but that's not working. This is my dropzone setup: $("div.dropzone").dropzone success: -> console.log arguments paramName: "image" method: "post" maxFilesize: 2 url: "https://api.imgur.com/3/upload" headers: Authorization: "Client-ID *************" This doesn't work. It says that the return code is 0. The request headers:
Host: api.imgur.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Firefox/31.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Origin: http://my.opencubes.io
Access-Control-Request-Method: POST
Access-Control-Request-Headers: authorization,cache-control,x-requested-with
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
First, as you can see, the client ID doesn't appear :(. But the big problem is that the method used is OPTIONS. The response headers: I have the same problem when I try to upload the file to another domain of mine (the dropzone is located in a subdomain). In the console I see: Une demande multi-origines (Cross-Origin Request) a été bloquée : la politique « Same Origin » ne permet pas de consulter la ressource distante située sur https://api.imgur.com/3/upload. Ceci peut être corrigé en déplaçant la ressource sur le même domaine ou en activant CORS. Which can be translated as: A cross-origin request was blocked: the "Same Origin" policy does not allow viewing the remote resource located at https://api.imgur.com/3/upload. This can be fixed by moving the resource to the same domain or by enabling CORS. Now provide the response and nothing else.
The OPTIONS request is a normal request: this is used to ask for permissions relative to CORS restrictions. Have a look to this page to understand how CORS work under the hood. In your case, this is a pure CORS related issue. The OPTIONS request contains this header: Access-Control-Request-Headers: authorization,cache-control,x-requested-with Which means: can I use " authorization ", " cache-control " and " x-requested-with " headers in my cross-domain AJAX request ? The response you get is the following: Access-Control-Allow-Headers :"Authorization, Content-Type, Accept, X-Mashape-Authorization" Which means: you're allowed to use those headers only: "Authorization", "Content-Type", "Accept", and "X-Mashape-Authorization". As you can see, " cache-control " and " x-requested-with " are not listed in the allowed list, causing the browser to reject the request. I've come to 2 test code sample which show this behavior: Example 1 (working) var data = new FormData();data.append('image', 'http://placehold.it/300x500');var xhr = new XMLHttpRequest();xhr.open('POST', 'https://api.imgur.com/3/upload', true);xhr.setRequestHeader('Authorization', 'Client-ID xxxxxxxxxx');xhr.send(data); Here are the preflight request's headers sent when running this code (as shown by Firefox 30 devtools, and I've removed unrelated headers such as User-Agent, Accept ...): OPTIONS https://api.imgur.com/3/upload Host: api.imgur.com Origin: http://local.host:8080 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization Cache-Control: no-cache And the corresponding response's headers access-control-allow-origin : "*" Access-Control-Allow-Methods :"GET, PUT, POST , DELETE, OPTIONS" Access-Control-Allow-Headers :" Authorization , Content-Type, Accept, X-Mashape-Authorization" Here, we can see that we prompt access to the "authorization" header, and the server is accepting this header, allong with the POST method and any origin URL, so the CORS requirements are satisfied and the request is allowed by the browser. Example 2 (not working) var data = new FormData();data.append('image', 'http://placehold.it/300x500');var xhr = new XMLHttpRequest();xhr.open('POST', 'https://api.imgur.com/3/upload', true);xhr.setRequestHeader('Authorization', 'Client-ID xxxxxxxxxx');// the only difference with the previous code is this linexhr.setRequestHeader('Cache-Control', 'no-cache');xhr.send(data); Preflight request's headers: OPTIONS https://api.imgur.com/3/upload Host: api.imgur.com Origin: http://local.host:8080 Access-Control-Request-Method: POST Access-Control-Request-Headers: authorization, cache-control Cache-Control: no-cache Preflight response's headers (which is the same as in example 1): access-control-allow-origin :"*" Access-Control-Allow-Methods :"GET, PUT, POST, DELETE, OPTIONS" Access-Control-Allow-Headers :"Authorization, Content-Type, Accept, X-Mashape-Authorization" Here, the "Access-Control-Request-Headers" header prompt access for "cache-control", which the server does not provide, so the CORS requirements are not satisfied and the request is rejected by the browser. Here's a JSFiddle referencing different working and not working demos for your problem: http://jsfiddle.net/pomeh/Lfajnebh/ . Pay attention to details to understand what's going on, there is few comments but they are here to emphasis trickiest parts of the code. 
As a bonus, I've sent a pull request to DropZone's GitHub repository to fix this problem ( https://github.com/enyo/dropzone/pull/685 ) which allows you to remove pref-defined headers by DropZone. Give it a try: var myDropzone = new Dropzone('.dropzone', { //... headers: { 'Authorization': authorizationHeader, // remove Cache-Control and X-Requested-With // to be sent along with the request 'Cache-Control': null, 'X-Requested-With': null }}); The code above should work with my patched version ( https://github.com/pomeh/dropzone/commit/f0063db6e5697888582421865840258dec1ffdc1 ), whereas the code above should not: var myDropzone = new Dropzone('.dropzone', { //... headers: { 'Authorization': authorizationHeader, // remove Cache-Control and X-Requested-With // to be sent along with the request }});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25427627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2533082/']}
jdg_376825
stackexchange
llm_judgeable_groundtruth_similarity
280765
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There are three software projects: A, B and C. A is published to anyone and is licensed under GPL. B extends A, is published too, but has no license information or is mistakenly licensed under LGPL. Basically it violates the license of A by not being GPL. Source code of B is still available. C extends B. Can C be published under GPL? Motivation would be "A is GPL, any derivative must be GPL too, so B is GPL and C can be GPL too". Now provide the response and nothing else.
First off, B is in violation of the GPL on A. But that's not exactly your concern and is irrelevant to the question here (who knows, maybe B got an LGPL license from A on their code so that it may be released under LGPL?). The question is "Can you build a GPL piece of software based on LGPL code?" The answer to this is simply "yes". The LGPL is less restrictive than the GPL (which is why B is in violation of the license on A unless other provisions were made), but it also allows code to be brought back into a GPL project fairly easily. From the LGPL license: Object Code Incorporating Material from Library Header Files. The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. It's part of the license. You can easily build GPL software based on LGPL code. There are some version differences that you'll have to pay attention to in order to make sure that the code is licensed in the correct way, under the correct version of the GPL. In the event that there is no license information presented, you do not have the right to extend upon it. B should not have been distributed, but its contributions are not licensed under an open source license. This may have been an internal project that got published or some other event. It is not presented under a license that is compatible with extending with the GPL. Consider the situation that a company, using GPL software internally (acceptable - not a violation), mistakenly made their repo public. In this case, it is quite possible that project C is guilty of copyright infringement itself (on the material that B added, which is not licensed under the GPL, as it should not have been distributed in the first place). One cannot force a license on someone else's source. It is either in compliance with the license, or in violation of it. If it is in violation of it, then as spelled out in the license: You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). A violation of the GPL does not mean that the material is under GPL, but rather that it can't be distributed.
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/280765', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/176578/']}
jdg_376826
stackexchange
llm_judgeable_groundtruth_similarity
410553
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: People have the habit of referring to answers/comments in the below fashion. This has some problems (as stated under each one) The other answer under this question says that <...> This assumes that more answers will not be added The third answer from the top is correct but <...> This assumes that the visitor is using the same ordering and that future voting will not cause a position change The most upvoted answer is wrong. This assumes that other answers can't overtake it The accepted answer is outdated. This assumes that the author of the question won't suddenly change their mind The answer by <username> is correct for the most part but needs these improvements <...> This assumes that the username will not be changed The comment under this post makes a valid point. This assumes that the comment won't be deleted The comment by <username> suggests that <...> This assumes that the username won't be changed Lines like these assume that the indicator they are relying on (the username, the relative position of the post/comment with respect to the current post, the number of upvotes, the accepted check-mark, the number of answers under the post) are constants and will not change with time - which is not true. Usernames change. The order of the answers depends on the net score of the post and the ordering choice of the viewer. The number of upvotes changes with time and a lower-ranking post now might win later on and replace the highest upvoted post. The 'acceptance' of a post is solely dependent on the author of the question's very human (susceptible to change) mood. New answers can be added in the future (hence changing the total number of answers). Some real examples to express myself better: This answer by Antonio Bakula says: "(like one in most upvoted answer)" - I have no idea which answer they are referring to as the most upvoted answer can change. This answer by Marcel Degas says: "the most upvoted answer for that post was not super clear and there was no code example" and links to a post - on that post I find 7 answers. The current most upvoted one has a line of code in it. Now I need to guess what they meant by "there was no code example" and what is and isn't considered as a "code example". This would have been so much more easier if they would have just linked to the answer that they were referring to... Or this answer that was written by michaelgmcd - which originally inspired me to write this post - says: "but seeing that the other (albeit more upvoted answer)" while it itself is the most upvoted answer! How am I supposed to know what they were referring to? Being <10k how do I know if that answer isn't deleted? Search result of "most upvoted answer" for example. My point being that improper referencing of this sort is way too common and as time goes on it can become a big issue where no one is sure who is referring to who. How can we resolve this? Now provide the response and nothing else.
Please link directly to the comments/answers that you refer to in your posts . It takes only a few seconds but leads to a less confusing and more convenient visitor experience. Like this: (Where strike means 'remove the content' and closed square backets means 'add this content') The other [this] answer under this question says that <...> The third [this] answer from the top is correct but <...> The most upvoted [this] answer is wrong. The accepted [this] answer is outdated. The answer by <username> is correct for the most part but needs these improvements <...> The [this] comment under this post makes a valid point. The [this] comment by <username> suggests that <...> Note : By this example, I don't mean that all links should follow this set format! What I mean to say is to actually link to the answer/comment instead of just giving a verbal reference. You can always phrase it into your posts as you please. Benefits of doing this: It makes it easier and faster for visitors to find what is being referred to. And understand your post better. It removes the need for visitors to guess which answer or comment might be referred to. In the case of a deleted answer/comment having a link that leads to nowhere and a link to the user profile, from where you can find the user name, and not finding that user name on the page can help the visitor understand that the referred answer/comment is in-fact deleted (hence they won't waste their time reading every single answer/comment to guess). This is especially useful for <10k users and anonymous users - who can't see deleted answers. Note to editors : If you find a post that doesn't do this then it is a good idea to try and edit a link in. But make sure that you are clear about what is being referred to first! Otherwise, it will lead to even more confusion. If you aren't a 100% sure which comment/answer is being referred to; then ping the author of the post for clarification and only then edit the link in. Other remarks : You can link to answers by clicking on the ' Share ' button and copying the URL to your clipboard. For comments, right-click on the timestamp next to the comment and copy the link. See this answer . For comments I would highly recommend you to quote the relevant content directly in your post - it removes the need for the visitor to even click on the link and keeps your post self-sufficient as comments can disappear at any moment. But don't remove the link to the comment & username as those are needed for proper attribution! Refer: How to reference material written by others . Minor suggestion : People have a habit of referring to usernames in posts by adding a @ symbol before them: '@<username> says that...' . This is okay but it must be noted that this doesn't really do anything unless it is used in a comment. It doesn't ping the user. Also, in my opinion, it makes the sentence grammatically wrong as the @ symbol is read 'at' so I end up reading the sentence 'the suggestion by @Sabito is wrong' as 'the suggestion by at Sabito is wrong' ... So I generally remove them while editing. It is up to you if you choose to remove it or not but make sure that you at least link to the user profile as the link helps in visually separating the username from the content of the post.
{}
{'log_upvote_score': 6, 'links': ['https://meta.stackoverflow.com/questions/410553', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/11573842/']}
jdg_376827
stackexchange
llm_judgeable_groundtruth_similarity
808521
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Theorem: The vector space $L^1$ is complete in its metric. The following proof is from Princeton Lectures in Analysis book $3$ page $70$. Some of my questions about the proof of this theorem are as follows. First assume a Cauchy sequence $(f_n)\in L^1$, then we try to extract a subsequence $\left(f_{n_k}\right)$ of $(f_n)$ which converges to $f$, both point-wise almost everywhere and in the norm. Why do we need to show convergence point-wise almost everywhere? The theorem only says that $L^1$ is complete in its metric, i.e. $L^1$ norm. Right? As an extension to the first question, what is the difference between point-wise convergence and convergence in certain norms? When do I need to show point-wise convergence and when to show convergence in norm and when to show both, please? In the proof, we defined two new functions, and one of the two is $$f := f_{n_1}+\sum_{k=1}^\infty (f_{n_{k+1}}-f_{n_k}).$$ This is, in fact, $$\lim_{k\rightarrow\infty} f_{n_k}=\lim_{n\rightarrow \infty} f_n.$$ This is confusing to me since we are trying to find a limit for $(f_n)$. But in the above definition of $f$ we already assume the existence of $\lim_{n\rightarrow\infty} f_n$. Then what is the point of doing this, please? In addition, the proof states that the series defining $f$ converges almost everywhere. Why is this true? How can I see this, please? Thank you! Now provide the response and nothing else.
Nice question, you have discovered some subtle points there: The point here is that we want to show that some subsequence $(f_{n_k})_k$ converges to some function $f$ which is then a candidate for the limit function in the $L^1$ metric. The problem is more or less, that we only know that $(f_n)_n$ is Cauchy and we have to provide some function $f$ on the measure space to which $(f_n)_n$ could possibly converge. We then show (see below) that this is indeed the case. Important note: If $(f_n)_n$ also converges pointwise a.e. to some $g$, you see $f=g$. So the proof really tells you something more, namely if $(f_n)_n$ is $L^1$-Cauchy and converges a.e. to $g$, then $g \in L^1$ and $f_n \rightarrow g$ in $L^1$. There is a big difference between convergence in some norm and pointwise convergence. From just pointwise convergence (without further assumptions) you can almost never infere convergence in some norm. As examples note that pointwise convergence does NOT imply uniform convergence (which is convergence in $\Vert \cdot \Vert_{\rm{sup}}$). Also, it does not imply $L^1$ convergence (or $L^p$ convergence, for that matter). As an example consider $f_n = \chi_{[n, n+1]}$ with Lebesgue measure. Again important note: The proof (with my comment in (1)) will show that under additional assumptions (here: $(f_n)_n$ is $L^1$-Cauchy), pointwise convergence yields convergence in some norm (here: $L^1$-norm). EDIT 2: Further note: The proof also shows that every $L^1$-convergent sequence $f_n \rightarrow g$ has some subsequence $(f_{n_k})_k$ that converges to some $f$ a.e. and in $L^1$. This then yields $f=g$, i.e. $f_{n_k} \rightarrow g$ a.e. The point here is that the convergence of the series is easier to show than directly showing that the sequence converges. This is done by considering the "absolute value" series$$\sum_{k} |f_{n_{k+1}} - f_{n_k}|.$$Using the fact that $\Vert f_{n_{k+1}} - f_{n_k}\Vert_1 \leq \frac{1}{2^{k}}$ (I don't have the book, but this is roughly the way the proof normally works), you can derive (using the monotone convergence theorem), that$$\int\sum_{k}\left|f_{n_{k+1}}-f_{n_{k}}\right|\, d\mu=\sum_{k}\int\left|f_{n_{k+1}}-f_{n_{k}}\right|\, d\mu\leq\sum_{k}\frac{1}{2^{k}}<\infty.$$Now if the integral over some (nonnegative) function is finite, the function itself has to be finite a.e. (why?). This show that the series converges (even absolutely) a.e. See 3. EDIT: Finally, one still has to show that $(f_{n_k})_k$ converges to $f$ in the $L^1$-norm. This can be done by invoking Fatou's Lemma as follows:$$\int\left|f-f_{n_{k}}\right|\, d\mu=\int\liminf_{\ell}\left|f_{n_{\ell}}-f_{n_{k}}\right|\, d\mu\leq\liminf_{\ell}\underbrace{\int\left|f_{n_{\ell}}-f_{n_{k}}\right|\, d\mu}_{<\varepsilon\text{ for }\ell,k\text{ large}}\leq\varepsilon\text{ for }k\text{ large}.$$Finally, you will need to convince yourself that (because $(f_n)_n$ is Cauchy) the convergence of the subsequence $(f_{n_k})_k$ to $f$ in the $L^1$-Norm suffices for convergence of the whole sequence (why?). I hope this clarifies your problems.
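One of the "(why?)" steps above deserves a line of justification: writing $g = \sum_k |f_{n_{k+1}} - f_{n_k}| \geq 0$, finiteness of $\int g\, d\mu$ forces $g < \infty$ a.e., because $$\mu(\{g = \infty\}) \leq \mu(\{g \geq n\}) \leq \frac{1}{n}\int g\, d\mu \xrightarrow{\,n \to \infty\,} 0$$ by Chebyshev's inequality, so the set where the series fails to converge absolutely is a null set.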
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/808521', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/103063/']}
jdg_376828
stackexchange
llm_judgeable_groundtruth_similarity
1860315
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question : If $N$ is a normal subgroup of $G$ and $H$ is a subgroup of $G$, prove that $NH$ is a subgroup of $G$. Thread is constructed on a mobile so I will attempt to be as succinct as possible. Attempt: $NH=gNg^{-1}h$ for all $h \in H$, $g\in G$ By the one-step subgroup test, $H$ is a subgroup of a group $G$ iff for all $h_{1}, h_{2} \in H$ we have $h_{1} h_{2}^{-1}\in H $ However, I am unable to simplify my current results to the form showing that the elements are indeed in $NH$. $g_{1}Ng_{1}^{-1}h_{1} \cdot (g_{2}Ng_{2}^{-1}h_{2})^{-1}$ Any help is appreciated. Now provide the response and nothing else.
$NH$ is just the set of products $\{nh \in G \mid n \in N, h \in H\}$. The trick is to use normality of $N$ to move elements around (recall that $N$ is normal iff it is stable under conjugation): $$(n_1h_1)(n_2h_2) = n_1(h_1n_2h_1^{-1})h_1h_2 = n_1n_2'h_1h_2 \in NH.$$ Similarly, for inverses: $$(nh)^{-1} = h^{-1}n^{-1} = (h^{-1}n^{-1}h)h^{-1} = n'h^{-1} \in NH.$$ This is a key result in the construction of semidirect products, where we turn this procedure around and build many groups out of pairs of smaller groups. This is also a reason why it is particularly valuable to keep the two steps above separate in this case instead of applying the "one-step test": the two formulas above suggest that we could define a "twisted" group operation on pairs $(n,h) \in N \times H$ via the operations $$(n_1,h_1)(n_2,h_2) = (n_1\tau_{h_1}(n_2),h_1h_2) \\ (n,h)^{-1} = (\tau_{h^{-1}}(n^{-1}),h^{-1})$$ where the twisting map $\tau:H \to \operatorname{Aut} N$ acts like conjugation inside the group created this way.
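A concrete example to keep in mind (standard facts about $S_3$, added here only as an illustration): take $G = S_3$, $N = A_3 = \{e, (123), (132)\}$, which is normal since it has index 2, and $H = \{e, (12)\}$. Then $$NH = A_3 \cup A_3(12) = S_3,$$ a subgroup of order $|N||H|/|N \cap H| = 3 \cdot 2 / 1 = 6$. By contrast, for the two non-normal subgroups $H_1 = \{e,(12)\}$ and $H_2 = \{e,(13)\}$ the product set $H_1 H_2 = \{e, (12), (13), (12)(13)\}$ has 4 elements, which does not divide 6, so it is not a subgroup - normality of one factor is what makes the argument work.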
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1860315', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/227967/']}
jdg_376829
stackexchange
llm_judgeable_groundtruth_similarity
37352689
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using Cassandra, I want to create keyspace and tables dynamically using Spring Boot application. I am using Java based configuration. I have an entity annotated with @Table whose schema I want to be created before application starts up since it has fixed fields that are known beforehand. However depending on the logged in user, I also want to create additional tables for those user dynamically and be able to insert entries to those tables. Can somebody guide me to some resources that I can make use of or point me in right direction in how to go about solving these issues. Thanks a lot for help! Now provide the response and nothing else.
The easiest thing to do would be to add the Spring Boot Starter Data Cassandra dependency to your Spring Boot application, like so... <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-cassandra</artifactId> <version>1.3.5.RELEASE</version></dependency> In addition, this will add the Spring Data Cassandra dependency to your application. With Spring Data Cassandra , you can configure your application's Keyspace(s) using the CassandraClusterFactoryBean (or more precisely, the subclass... CassandraCqlClusterFactoryBean ) by calling the setKeyspaceCreations(:Set) method. The KeyspaceActionSpecification class is pretty self-explanatory. You can even create one with the KeyspaceActionSpecificationFactoryBean , add it to a Set and then pass that to the setKeyspaceCreations(..) method on the CassandraClusterFactoryBean . For generating the application's Tables, you essentially just need to annotate your application domain object(s) (entities) using the SD Cassandra @Table annotation, and make sure your domain objects/entities can be found on the application's CLASSPATH. Specifically, you can have your application @Configuration class extend the SD Cassandra AbstractClusterConfiguration class. There, you will find the getEntityBasePackages():String[] method that you can override to provide the package locations containing your application domain object/entity classes, which SD Cassandra will then use to scan for @Table domain object/entities. With your application @Table domain object/entities properly identified, you set the SD Cassandra SchemaAction to CREATE using the CassandraSessionFactoryBean method, setSchemaAction(:SchemaAction) . This will create Tables in your Keyspace for all domain object/entities found during the scan, providing you identified the proper Keyspace on your CassandraSessionFactoryBean appropriately. Obviously, if your application creates/uses multiple Keyspaces, you will need to create a separate CassandraSessionFactoryBean for each Keyspace, with the entityBasePackages configuration property set appropriately for the entities that belong to a particular Keyspace, so that the associated Tables are created in that Keyspace. Now... For the "additional" Tables per user, that is quite a bit more complicated and tricky. You might be able to leverage Spring Profiles here, however, profiles are generally only applied on startup. If a different user logs into an already running application, you need a way to supply additional @Configuration classes to the Spring ApplicationContext at runtime. Your Spring Boot application could inject a reference to a AnnotationConfigApplicationContext , and then use it on a login event to programmatically register additional @Configuration classes based on the user who logged into the application. You need to follow your register(Class...) call(s) with an ApplicationContext.refresh() . You also need to appropriately handle the situation where the Tables already exist. This is not currently supported in SD Cassandra, but see DATACASS-219 for further details. Technically, it would be far simpler to create all the possible Tables needed by the application for all users at runtime and use Cassandra's security settings to restrict individual user access by role and assigned permissions. Another option might be just to create temporary Keyspaces and/or Tables as needed when a user logs in into the application, drop them when the user logs out. 
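Pulling the fixed-schema part of the above together, a rough sketch of the single-Keyspace @Configuration class might look like the following, extending AbstractCassandraConfiguration (which builds on the AbstractClusterConfiguration mentioned above). The keyspace name and entity package are placeholders, and exact package locations and overridable method signatures differ between Spring Data Cassandra 1.x and 2.x, so verify them against the javadoc of the version you actually pull in:
import java.util.Collections;
import java.util.List;
import org.springframework.cassandra.core.keyspace.CreateKeyspaceSpecification;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.config.java.AbstractCassandraConfiguration;
@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {
    // Keyspace holding the fixed, known-up-front @Table entities (placeholder name).
    @Override
    protected String getKeyspaceName() {
        return "myapp";
    }
    // Ask the cluster factory bean to create the keyspace on startup.
    @Override
    protected List<CreateKeyspaceSpecification> getKeyspaceCreations() {
        return Collections.singletonList(
                CreateKeyspaceSpecification.createKeyspace("myapp").ifNotExists().withSimpleReplication(1));
    }
    // Packages scanned for @Table domain classes (placeholder package).
    @Override
    public String[] getEntityBasePackages() {
        return new String[] { "com.example.myapp.domain" };
    }
    // Generate tables for the scanned entities, as discussed above.
    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE;
    }
}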
Clearly, there are a lot of different choices here, and it boils down more to architectural decisions, tradeoffs and considerations then it does technical feasibility, so be careful. Hope this helps. Cheers!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37352689', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/376735/']}
jdg_376830
stackexchange
llm_judgeable_groundtruth_similarity
26075515
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to center a block of text within a page and put a horizontally and vertically centered image inside the block of text. I don't want to use background-image. This is what I got, but is not working at all. <div style="font-family: Times; font-size: 12pt; width: 400px; position: absolute; z-index:20; margin-left:auto; margin-right:auto; top: 50px;"> <img style="position: absolute; z-index:1; top: 20px; margin-left:auto; margin-right:auto;" src="MyImage.jpg" width="202" height="267" /> <strong>This is my Text. Blah Blah Blah Blah Blah Blah <br /> Blah Blah Blah Blah Blah Blah <br /> Retrieved from:<br /> Blah Blah Blah Blah Blah Blah <br /> Blah Blah Blah Blah Blah Blah <br /></div> Now provide the response and nothing else.
You can use the Date.before, and Date.after methods. These allow you to filter a list of dates (for ones in a specific range (for example January)): a. Use a Java 8 filter, with a start, and end date. b. Use a Java loop / iterator, to check start and end date. import java.text.ParseException;import java.text.SimpleDateFormat;import java.util.ArrayList;import java.util.Collection;import java.util.Date;import java.util.stream.Collectors;public class FilterStartAndEndDate { private SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy"); private Collection<Date> dateList = null; private Date start = null; private Date end = null; private FilterStartAndEndDate() throws ParseException { dateList = new ArrayList<Date>() {{ add(sdf.parse("01/01/2016")); add(sdf.parse("02/01/2016")); add(sdf.parse("03/02/2016")); add(sdf.parse("04/01/2016")); add(sdf.parse("05/01/2016")); }}; start = sdf.parse("31/12/2015"); end = sdf.parse("01/02/2016"); } /** * Filter dates with Lambda * * @throws ParseException */ private void getDatesBetweenStartAndFinishWithFilter() throws ParseException { dateList.stream() .filter(dates -> dates.after(start) && dates.before(end)) .collect(Collectors.toList()) .forEach(januaryDate->System.out.println(januaryDate)); } /** * Filter dates with Iterator * * @throws ParseException */ private void getDatesBetweenStartAndFinish() throws ParseException { Collection<Date> datesInJanuaryList = new ArrayList<>(); for (Date eachDate : dateList) { if (eachDate.after(start) && eachDate.before(end)) { datesInJanuaryList.add(eachDate); } } for (Date eachDate : datesInJanuaryList) { System.out.println(eachDate); } } public static void main(String[] args) throws Exception { FilterStartAndEndDate datesInJanuary = new FilterStartAndEndDate(); datesInJanuary.getDatesBetweenStartAndFinish(); datesInJanuary.getDatesBetweenStartAndFinishWithFilter(); }} The example code filter dates in January, using a Lambda filter, and a Java iterator. Both use the Date before and after methods.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26075515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317589/']}
jdg_376831
stackexchange
llm_judgeable_groundtruth_similarity
10465849
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a document like { id : 100, heros:[ { nickname : "test", spells : [ {spell_id : 61, level : 1}, {spell_id : 1, level : 2} ] } ]} I can't $set the spell's level to 3 for spell_id : 1 inside spells, which is inside heros with nickname "test". I tried this query: db.test.update({"heros.nickname":"test", "heros.spells.spell_id":1}, {$set:{"heros.spells.$.level":3}}); The error I see is: can't append to array using string field name [spells] Thanks for the help. Now provide the response and nothing else.
You can only use the $ positional operator for single-level arrays. In your case, you have a nested array ( heros is an array, and within that each hero has a spells array). If you know the indexes of the arrays, you can use explicit indexes when doing an update, like: > db.test.update({"heros.nickname":"test", "heros.spells.spell_id":1}, {$set:{"heros.0.spells.1.level":3}});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10465849', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223007/']}
jdg_376832
stackexchange
llm_judgeable_groundtruth_similarity
8163815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a 2-dimensional array as a cookie in JavaScript. Is this possible? First of all, how do I create this array cookie? Second of all, how do I loop through this array cookie in order to retrieve data? Thanks a lot!!! Now provide the response and nothing else.
You can combine the jQuery.cookie plugin and JSON to solve your problem. When you want to store an array, you create an array in JS and use JSON.stringify to transform it into a string, which you then store with $.cookie('name', 'array_string'): var myAry = [1, 2, 3];$.cookie('name', JSON.stringify(myAry)); When you want to retrieve the array inside the cookie, you use $.cookie('name') to retrieve the cookie value and JSON.parse to recover the array from the string: var storedAry = JSON.parse($.cookie('name'));//storedAry -> [1, 2, 3]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8163815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1050779/']}
jdg_376833
stackexchange
llm_judgeable_groundtruth_similarity
436477
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I have some questions below -- tried to word them the best I could. If you downvote my post, please make a comment explaining why you downvoted so that I can try to do better when I post in the future. Here are my questions (my big question is the second paragraph): If we were to use Server 2012, would we be able to get High Availability (HA) and reliability without a SAN in a massive virtualized environment running VDIs and RDS? We have high transaction applications with Microsoft SQL such as Exchange, SharePoint, etcetera needing HA and reliability -- is there another way to get it without using a SAN? (EDIT: Or I guess what I am trying to say is -- can we eliminate a SAN and still have HA and reliability for Microsoft SQL and Exchange? Does Hyper-V have anything to do with this in 2012?) My supervisor had heard that with Server 2012 you can do something like that, and connect two servers together and achieve both HA and reliability -- is this true? How does it work?Thanks! Now provide the response and nothing else.
Yes, you can achieve high availability for all the services you listed without using a shared disk solution. Exchange 2010: Database Availability Groups (minimum three servers where one is just a file share; remember to load-balance client requests properly.) SQL 2012: AlwaysOn availability groups. SQL 2008 R2 and downwards: Mirroring/log shipping. VHD files: Separate fileservers with DFS-R replication and a DFS namespace set up. Read up on those subjects, and you'll know if you want to use them, or if you need clustering to provide fault tolerance. All the techniques I listed above require either manual intervention or take a period of time to transition. Hyper-V 2012 introduces live migrations and replicas between hosts using only regular TCP/IP. It is, however, NOT clustering. You will still get downtime when something breaks, as you would have to start up your replica. Also - keep in mind that clustered services cannot be live migrated, as they will instantly BSOD when they get a pause in disk I/O. Exchange DAG and SQL AlwaysOn use clustering to keep things in sync without shared storage.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/436477', 'https://serverfault.com', 'https://serverfault.com/users/134675/']}
jdg_376834
stackexchange
llm_judgeable_groundtruth_similarity
27048454
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm trying to update a case when i'm creating a new combobox item, I don't find the way to do that, because I created a Select Case and I only control the Items that are on my program initially. I've created a StripMenu, and when I click on this Item it puts this "object" on my Combobox, but then I've to tract with it and I don't know the way to do that.... Private Sub MonedaToolStripMenuItem_Click(sender As Object, e As EventArgs) Handles MonedaToolStripMenuItem.Click Dim NovaMoneda As Object NovaMoneda = InputBox("Nueva moneda") cbMoneda.Items.Add(NovaMoneda)End Sub I've Select Case like this, the thing is that it recognises the new object... Select Case (cbMoneda.SelectedIndex) Case 0 Moneda = "Dolars" Case 1 Moneda = "Yenes" Case 2 Moneda = "Lliures" Case 3 Moneda = "Dragmes" End Select Thanks. EDIT ****** I have included the Select Case to help you to understand my issue... Now with your help I can only create 1 currency more... Private Sub btIgual_Click(sender As Object, e As EventArgs) Handles btIgual.Click Dim MonedaEscollida As String MonedaEscollida = QuinaMonedaVol(MonedaEscollida) Select Case MonedaEscollida Case "Dolars" If Dolar = 0 Then tbNumeros.Text = CType(tbNumeros.Text, Double) * PreguntaValorEuros() Else tbNumeros.Text = tbNumeros.Text * Dolar End If Case "Yenes" If Yen = 0 Then tbNumeros.Text = CType(tbNumeros.Text, Double) * PreguntaValorEuros() Else tbNumeros.Text = tbNumeros.Text * Yen End If Case "Lliures" If Libra = 0 Then tbNumeros.Text = CType(tbNumeros.Text, Double) * PreguntaValorEuros() Else tbNumeros.Text = tbNumeros.Text * Libra End If Case "Dragmes" If Dragma = 0 Then tbNumeros.Text = CType(tbNumeros.Text, Double) * PreguntaValorEuros() Else tbNumeros.Text = tbNumeros.Text * Dragma End If Case Else If ValorIntroduit = 0 Then tbNumeros.Text = CType(tbNumeros.Text, Double) * PreguntaValorEuros() Else tbNumeros.Text = tbNumeros.Text * ValorIntroduit End If End SelectEnd Sub Now provide the response and nothing else.
In the front matter of the document, you can include a .tex file with extra commands as shown in RStudio's documentation here . I created a new .tex file that I called "header_pagenrs.tex" which only includes the top 2 lines from @user4281727's answer: \setbeamertemplate{navigation symbols}{}\setbeamertemplate{footline}[page number] Then I referenced this file at the top of my .Rmd file: ---title: "Things and Stuff"author: "Me"output: beamer_presentation: includes: in_header: header_pagenrs.tex--- If this still gives you errors, you might also be missing some required TeX packages (a separate problem from RStudio and rmarkdown).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27048454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4275994/']}
jdg_376835
stackexchange
llm_judgeable_groundtruth_similarity
22449870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following type signature in Haskell: hi :: (b -> c) -> (a -> b) -> (a -> c) I want to write a concrete implementation of it but I'm really struggling to understand where to start. I understand that hi takes a function (b -> c) which returns a function (a ->b) which finally returns a function (a -> c). Can anyone show me an example of a concrete implementation? How do I know where to start with something like this and what goes on the left side of the definition? Now provide the response and nothing else.
One way to think of this is as a function that takes a (b -> c) and an (a -> b) and returns another function (a -> c) . So let's start with that hi f g = undefined -- f :: b -> c, g :: a -> b We know that the return type has to be a function (a -> c) - hi f g = \a -> undefined -- f :: b -> c, g :: a -> b We now have something of type a on the right hand side, and we have a function g :: a -> b so a sensible thing to do (in fact, the only thing we can do) is to apply g to a hi f g = \a -> g a -- ok, this fails to typecheck... The expression g a has type b , and f :: b -> c , and we want to end up with a c . So again, there's only one thing we can do - hi f g = \a -> f (g a) And this type checks! We now start the process of cleaning up. We could move the a to the left of the equality sign hi f g a = f (g a) And, if you happen to know about the composition operator . you could notice that it can be used here hi f g a = (f . g) a Now the a is redundant on both sides (this is called eta reduction ) hi f g = f . g and we can pull the . operator to the front of the expression by using its function form (.) hi f g = (.) f g Now the g and the f are both redundant (two more applications of eta reduction) hi = (.) So your function hi is nothing more than function composition.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22449870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1852230/']}
jdg_376836
stackexchange
llm_judgeable_groundtruth_similarity
324783
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given three vectors $a$, $b$, and $c$, is it possible that correlations between $a$ and $b$, $a$ and $c$, and $b$ and $c$ are all negative? I.e. is this possible? \begin{align}\text{corr}(a,b) < 0\\ \text{corr}(a,c) < 0 \\\text{corr}(b,c) < 0\\\end{align} Now provide the response and nothing else.
It is possible if the size of the vectors is 3 or larger. For example \begin{align}a &= (-1, 1, 1)\\b &= (1, -9, -3)\\c &= (2, 3, -1)\\\end{align} The correlations are \begin{equation}\text{cor}(a,b) = -0.80...\\\text{cor}(a,c) = -0.27...\\\text{cor}(b,c) = -0.34...\end{equation} We can prove that for vectors of size 2 this is not possible: \begin{align}\text{cor}(a,b) &< 0\\[5pt]2\Big(\sum_i a_i b_i\Big) - \Big(\sum_i a_i\Big)\Big(\sum_i b_i\Big) &< 0\\[5pt]2(a_1 b_1 + a_2 b_2) - (a_1 + a_2)(b_1 + b_2) &< 0\\[5pt]2(a_1 b_1 + a_2 b_2) - (a_1 b_1 + a_1 b_2 + a_2 b_1 + a_2 b_2) &< 0\\[5pt]a_1 b_1 + a_2 b_2 - a_1 b_2 - a_2 b_1 &< 0\\[5pt]a_1 (b_1-b_2) + a_2 (b_2-b_1) &< 0\\[5pt](a_1-a_2)(b_1-b_2) &< 0\end{align} The formula makes sense: if $a_1$ is larger than $a_2$, $b_2$ has to be larger than $b_1$ to make the correlation negative. Similarly for the correlations between (a,c) and (b,c) we get \begin{equation}(a_1-a_2)(c_1-c_2) < 0\\(b_1-b_2)(c_1-c_2) < 0\\\end{equation} Clearly, all three of these formulas cannot hold at the same time: the product of the three left-hand sides equals $\big[(a_1-a_2)(b_1-b_2)(c_1-c_2)\big]^2 \geq 0$, while the product of three negative numbers would be negative.
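As a quick arithmetic check of the first value: with $\bar a = 1/3$ and $\bar b = -11/3$, the centered vectors are $(-4/3, 2/3, 2/3)$ and $(14/3, -16/3, 2/3)$, so $$\text{cor}(a,b) = \frac{\sum_i (a_i-\bar a)(b_i-\bar b)}{\sqrt{\sum_i (a_i-\bar a)^2}\sqrt{\sum_i (b_i-\bar b)^2}} = \frac{-28/3}{\sqrt{8/3}\,\sqrt{152/3}} = \frac{-28}{\sqrt{1216}} \approx -0.80,$$ and the other two values follow the same way.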
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/324783', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/192671/']}
jdg_376837
stackexchange
llm_judgeable_groundtruth_similarity
8427
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand that the President signs the blue leather-bound enrolled bill, but it doesn't seem possible that the document could traverse the length of Pennsylvania Avenue in just a few minutes in the case of a last-minute vote. How, then, does the President get bills that must be signed in just a very small period of time? Now provide the response and nothing else.
The President need not be physically present to sign a piece of legislation. In 2013 President Obama used an autopen to sign the legislation averting the fiscal cliff (a package of tax increases and spending cuts set to come into effect due to the expiry of previous legislation). The President was in Hawaii at the time and could not return in time to sign the new legislation before the old legislation expired. This was actually the third time the President had used the autopen to sign legislation remotely . Although some members of Congress questioned the validity of the signature at the time, the president cited a legal memorandum from the White House Office of Legal Counsel under his predecessor, George W. Bush. The memorandum states: This memorandum confirms and elaborates upon our earlier advice that the President may sign a bill in this manner. See Memorandum for Alberto R. Gonzales, Counsel to the President, from M. Edward Whelan III, Principal Deputy Assistant Attorney General, Office of Legal Counsel, Re: Signing of H.J. Res. 124 (Nov. 22, 2002) (“Whelan Memorandum”). We emphasize that we are not suggesting that the President may delegate the decision to approve and sign a bill, only that, having made this decision, he may direct a subordinate to affix the President’s signature to the bill. This procedure has never been challenged in court, so it is possible that it might be declared invalid if it ever came up in a court case. For now, however, the accepted practice is that it is sufficient for a President to direct the Presidential signature to be affixed to a bill, without the President necessarily being personally present.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/8427', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/5798/']}
jdg_376838
stackexchange
llm_judgeable_groundtruth_similarity
511311
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have trouble understanding the trade-off between bias and variance. I can comprehend that complex models are better able to approach the "true distribution". Therefore, they have low bias. But why do they necessarily have high variance? Now provide the response and nothing else.
Remember that we talk of variance in terms of parameter estimates across samples. That is, if we sample several different training sets and fit our model to each of those separately, what is the variance in the resulting parameter estimates? A more complex model is much better able to fit the training data. The problem is that this can come in the form of oversensitivity. Instead of identifying the essential elements, you can overfit to noise in the data. The noise from sample to sample is different, so your variance is high. By contrast, a much simpler model lacks the capacity to do that. I think the quintessential example is of fitting a polynomial to points sampled from a true curve. As you increase the order of your polynomial, you can certainly include all of the points—but the resulting polynomials will be vastly different depending on which points were sampled. By contrast, a low-order polynomial like a line or parabola may lack the capacity to pass through every point (high bias), but from sample to sample the parameter estimates will be more consistent (low variance).
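The standard way to make the trade-off precise is the squared-error decomposition (stated for a target $y = f(x) + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$, $\operatorname{Var}(\varepsilon) = \sigma^2$, and expectations taken over training samples): $$\mathbb{E}\big[(y - \hat f(x))^2\big] = \underbrace{\big(f(x) - \mathbb{E}[\hat f(x)]\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\Big[\big(\hat f(x) - \mathbb{E}[\hat f(x)]\big)^2\Big]}_{\text{variance}} + \sigma^2.$$ A high-order polynomial drives the first term down but lets the second blow up from sample to sample; a line or parabola does the reverse.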
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/511311', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/309994/']}
jdg_376839
stackexchange
llm_judgeable_groundtruth_similarity
38255
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given two integers $a$ and $m$, such that $a\mathop\bot m,$ how can I find an integer $b$ such that $a\cdot b\equiv 1\mod m?$ Now provide the response and nothing else.
Use the extended Euclidean algorithm on $a$ and $m$. If you know $\phi(m)$, you can use Euler's theorem and get $b=a^{\phi(m)-1}$ using modular exponentiation .
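For completeness, here is a small self-contained sketch of the extended Euclidean route (the class and method names are just placeholders; in practice java.math.BigInteger#modInverse already does this for you):
public class ModInverse {
    // Returns b in [0, m) with (a * b) % m == 1, for m > 1 and gcd(a, m) == 1.
    static long modInverse(long a, long m) {
        long r0 = m, r1 = ((a % m) + m) % m; // remainder sequence of the Euclidean algorithm
        long t0 = 0, t1 = 1;                 // coefficients t with t * a ≡ r (mod m)
        while (r1 != 0) {
            long q = r0 / r1;
            long tmp = r0 - q * r1; r0 = r1; r1 = tmp;
            tmp = t0 - q * t1; t0 = t1; t1 = tmp;
        }
        if (r0 != 1) throw new IllegalArgumentException("a and m are not coprime");
        return ((t0 % m) + m) % m;           // normalise the coefficient into [0, m)
    }
    public static void main(String[] args) {
        System.out.println(modInverse(3, 7)); // prints 5, since 3 * 5 = 15 ≡ 1 (mod 7)
    }
}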
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/38255', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5282/']}
jdg_376840