source stringclasses 1 value | task_type stringclasses 1 value | in_source_id stringlengths 1-8 | prompt stringlengths 209-40.4k | gold_standard_solution stringlengths 0-56.7k | verification_info stringclasses 1 value | metadata stringlengths 138-225 | problem_id stringlengths 9-10 |
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9187946
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having trouble understanding how escaping works inside html tag attribute values that are javascript. I was led to believe that you should always escape & ' " < > . So for javascript as an attribute value I tried: <a href="javascript:alert(&apos;Hello&apos;);"></a> It doesn't work. However: <a href="javascript:alert(&#39;Hello&#39;);"></a> and <a href="javascript:alert('Hello');"></a> do work in all browsers! Now I am totally confused. If all my attribute values are enclosed in double quotes, does this mean I do not have to escape single quotes? Or are apos and ascii 39 technically different characters? Such that javascript requires ascii 39, but not apos?
Now provide the response and nothing else.
|
There are two types of “escapes” involved here, HTML and JavaScript. When interpreting an HTML document, the HTML escapes are parsed first. As far as HTML is concerned, the rules within an attribute value are the same as elsewhere plus one additional rule: The less-than character < should be escaped. Usually &lt; is used for this. Technically, depending on HTML version, escaping is not always required, but it has always been good practice. The ampersand & should be escaped. Usually &amp; is used for this. This, too, is not always obligatory, but it is simpler to do it always than to learn and remember when it is required. The character that is used as delimiter around the attribute value must be escaped inside it. If you use the Ascii quotation mark " as delimiter, it is customary to escape its occurrences using &quot; whereas for the Ascii apostrophe, the entity reference &apos; is defined in some HTML versions only, so it is safest to use the numeric reference &#39; (or &#x27; ). You can escape > (or any other data character) if you like, but it is never needed. On the JavaScript side, there are some escape mechanisms (with \ ) in string literals. But these are a different issue, and not relevant in your case. In your example, on a browser that conforms to current specifications, the JavaScript interpreter sees exactly the same code alert('Hello'); . The browser has “unescaped” &#39; or &apos; to ' . I was somewhat surprised to hear that &apos; is not universally supported these days, but it’s not an issue: there is seldom any need to escape the Ascii apostrophe in HTML (escaping is only needed within attribute values and only if you use the Ascii apostrophe as its delimiter), and when there is, you can use the &#39; reference.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9187946', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70189/']}
|
jdg_371441
|
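Editorial note for the row above (jdg_371441): a minimal HTML sketch of the escaping rules described in the answer. It is not part of the dataset row; all three anchors are intended to decode to the same JavaScript, and support for the named entity &apos; is the only part that varies with the HTML version.

```html
<!-- The attribute value is delimited by double quotes, so the single quotes
     inside it need no escaping at the HTML level. -->
<a href="javascript:alert('Hello');">plain apostrophe</a>

<!-- Numeric character reference for the apostrophe (ASCII 39): always safe. -->
<a href="javascript:alert(&#39;Hello&#39;);">numeric reference</a>

<!-- Named entity: defined in XHTML/HTML5 but not HTML 4.01, so older parsers
     may leave it undecoded, which is why it can appear "not to work". -->
<a href="javascript:alert(&apos;Hello&apos;);">named entity</a>
```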
stackexchange
|
llm_judgeable_groundtruth_similarity
|
65266
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Each time a python file is imported that contains a large quantity of static regular expressions, cpu cycles are spent compiling the strings into their representative state machines in memory. a = re.compile("a.*b")b = re.compile("c.*d")... Question: Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import? Pickling the object simply does the following, causing compilation to happen anyway: >>> import pickle>>> import re>>> x = re.compile(".*")>>> pickle.dumps(x)"cre\n_compile\np0\n(S'.*'\np1\nI0\ntp2\nRp3\n." And re objects are unmarshallable: >>> import marshal>>> import re>>> x = re.compile(".*")>>> marshal.dumps(x)Traceback (most recent call last): File "<stdin>", line 1, in <module>ValueError: unmarshallable object
Now provide the response and nothing else.
|
Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import? Not easily. You'd have to write a custom serializer that hooks into the C sre implementation of the Python regex engine. Any performance benefits would be vastly outweighed by the time and effort required. First, have you actually profiled the code? I doubt that compiling regexes is a significant part of the application's run-time. Remember that they are only compiled the first time the module is imported in the current execution -- thereafter, the module and its attributes are cached in memory. If you have a program that basically spawns once, compiles a bunch of regexes, and then exits, you could try re-engineering it to perform multiple tests in one invocation. Then you could re-use the regexes, as above. Finally, you could compile the regexes into C-based state machines and then link them in with an extension module. While this would likely be more difficult to maintain, it would eliminate regex compilation entirely from your application.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9241/']}
|
jdg_371442
|
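Editorial note for the row above (jdg_371442): a small sketch of the answer's profiling advice, showing that regex compilation is a one-off, usually cheap cost and that the re module's internal cache makes repeated re.compile calls almost free. The pattern list and counts are made up for demonstration.

```python
import re
import timeit

# 300 made-up patterns standing in for a module full of static regexes.
patterns = ["a.*b", "c.*d", r"\d{4}-\d{2}-\d{2}"] * 100

def compile_all():
    return [re.compile(p) for p in patterns]

# The one-off cost paid at import time.
print("compile 300 patterns:", timeit.timeit(compile_all, number=1))

# re keeps an internal cache, so "recompiling" an identical pattern is a lookup.
print("10k cached re.compile calls:", timeit.timeit(lambda: re.compile("a.*b"), number=10_000))
```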
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32881022
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible? I am not sure, since d3 makes heavy use of this rebindings and this seems to conflict with ES6 spec . For instance, the following works fine: // Working finevar data = [1,2,3]var svg = d3.select('body').append('svg').attr('height', 500).attr('width', 500).style('background-color', 'orange');var gs = svg.selectAll('g').data(data).enter();gs.append('circle') .attr('cx', function () { return Math.random()*500; }) .attr('cy', function () { return Math.random()*500; }) .attr('r', function () { return Math.random()*100; }) .each(function () { console.log(this); }); // this is bound to the current element in the enter selection While the following does not work as expected ( this is not bound to the current element in the enter selection but to Window object): var data = [1,2,3]var svg = d3.select('body').append('svg').attr('height', 500).attr('width', 500).style('background-color', 'blue');var gs = svg.selectAll('g').data(data).enter();gs.append('circle') .attr('cx', () => Math.random()*500) .attr('cy', () => Math.random()*500) .attr('r', () => Math.random()*100) .each(() => console.log(this)); // this is bound to Window object Related fiddle here .
Now provide the response and nothing else.
|
You can use arrow functions if you don't need access to this of the current element. Fall back to old-style functions for cases where you want to access this of the current element. Or use explicit binding with .bind() to allow your function (not an arrow function) to access whatever object you want. To avoid working with this you have the option of using d3 name or class selectors to conveniently access any element, e.g.: var stuffINeed = svg.selectAll('.someClass');
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32881022', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667147/']}
|
jdg_371443
|
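Editorial note for the row above (jdg_371443): a sketch contrasting the two callback styles the answer describes, reusing the gs/svg names from the question's snippet. It is not part of the dataset row and assumes the same setup as the question.

```javascript
// Old-style function expression: d3 rebinds `this` to the current DOM element.
gs.append('circle')
    .attr('r', function () { return Math.random() * 100; })
    .each(function () { console.log(this); });   // logs the <circle> element

// Arrow function: `this` is lexical (here the window/module), so avoid it and
// reach the elements through a selector instead, as the answer suggests.
gs.append('circle')
    .attr('class', 'someClass')
    .attr('r', () => Math.random() * 100);

var stuffINeed = svg.selectAll('.someClass');    // access the elements without `this`
```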
stackexchange
|
llm_judgeable_groundtruth_similarity
|
815578
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
how would I be able to prove that, using the comparison test, diverges? Using symbolab gave me diverges, but it does not show how, and it used the series root test, which I will not cover in my course. Thank you.
Now provide the response and nothing else.
|
Hint : Try \begin{align}\int^{ \pi /2}_{0} \ln \sin x \, dx &= \int^{ \pi /2}_{0} \ln \left(2 \sin \frac{x}{2} \cos \frac{x}{2} \right) \, dx \\ &=\int^{ \pi /2}_{0} \ln 2 \, dx + \int^{ \pi /2}_{0}\ln \left( \sin \frac{x}{2} \right) \, dx + \int^{ \pi /2}_{0} \ln \left( \cos \frac{x}{2} \right) \, dx \\&= \frac{\pi}{2} \ln 2 + \underbrace{2 \int^{ \pi /4}_{0} \ln (\sin u) \, du}_{\text{Let }u=x/2} + \underbrace{2 \int^{ \pi /4}_{0} \ln (\cos u) \, du}_{\text{Let }u=x/2}\end{align}
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/815578', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/151408/']}
|
jdg_371444
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4652098
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been developing a piece of software that takes rules of the form 'A Disallows B' or 'X Requires Y' to make an application that will allow you to configure a complex item. To describe it more fully, imagine you've got a level of folders, which have some number of sub-folders, and each of those sub-folders has some number of items. The intent is to express relationships between the items in a way that helps the user create a valid collection of selections. So the actual rules look more like 'All(A, B, C) Disallows Any(X, Y, Z)', which states that if the user selects A, B, and C, they can't select any of X, Y, or Z. It's basically just boolean logic. There are additional types, like those that say if you select A then you must also select B, but it's not worth getting into the specifics of all that. It's not business logic (at least not in the normal sense), and the rules can change any time (and they will) so they need to be read from a string and interpreted or hooked up through a convoluted database structure (which I'd prefer to stay away from). The real point is: Is there some sort of system or framework out there that deals in this kind of runtime behavior? I know that Oracle has a thing called the "Oracle Configurator" that does exactly what I'm talking about, but it's prohibitively expensive I think. The project is almost over, and I've built a "working" system, but I'm trying to do a post-mortem to see if it could have been done better/faster/cheaper.
Now provide the response and nothing else.
|
I'd like to cite Lukas Rytz (from here ): The reason is that we wanted a deterministic naming-scheme for the generated methods which return default arguments. If you write def f(a: Int = 1) the compiler generates def f$default$1 = 1 If you have two overloads with defaults on the same parameter position, we would need a different naming scheme. But we want to keep the generated byte-code stable over multiple compiler runs. A solution for a future Scala version could be to incorporate the type names of the non-default arguments (those at the beginning of a method, which disambiguate overloaded versions) into the naming scheme, e.g. in this case: def foo(a: String)(b: Int = 42) = a + b and def foo(a: Int)(b: Int = 42) = a + b it would be something like: def foo$String$default$2 = 42 and def foo$Int$default$2 = 42 Someone willing to write a SIP proposal?
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4652098', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/177547/']}
|
jdg_371445
|
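Editorial note for the row above (jdg_371445): a sketch of the restriction the quoted answer explains. The object name is arbitrary and the exact compiler message varies by Scala version.

```scala
object Demo {
  // Fine: only one overload defines a default for this parameter position,
  // so the compiler can generate an unambiguous foo$default$2 method.
  def foo(a: String)(b: Int = 42) = a + b

  // Uncommenting the second overload with its own default triggers an error like:
  //   "in object Demo, multiple overloaded alternatives of method foo define default arguments"
  // def foo(a: Int)(b: Int = 42) = a + b
}
```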
stackexchange
|
llm_judgeable_groundtruth_similarity
|
198430
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
The factorial function is primitive recursive, and therefore definable by a $\Sigma_1$ formula. Is it also definable by a $\Delta_0$ formula (i.e. bounded quantifiers)?If not, why?
Now provide the response and nothing else.
|
(Sorry for the earlier confusion.) Yes, the graph of factorial is $\Delta_0$. First, the problem can be restated in terms of computational complexity. By a characterization going back to Bennett, $\Delta_0$-definable predicates are exactly those computable in the linear-time hierarchy. A convenient sufficient condition is provided by Nepomnjaščiĭ’s theorem [4]: a predicate is in $\Delta_0$ whenever it is decidable by an algorithm working simultaneously in polynomial time, and space $n^\epsilon$ for some $\epsilon<1$. In particular, all predicates computable in logarithmic space are $\Delta_0$. Chiu, Davida, and Litow [2], building on earlier work by Beame, Cook, and Hoover [1], proved that logarithmic space (in fact, log-space uniform $\mathrm{TC}^0$, subsequently improved to fully uniform $\mathrm{TC}^0$ by Hesse, Allender, and Barrington [3]) contains the iterated multiplication problem: given a sequence $x_1,\dots,x_n$ of integers in binary, compute their product. By applying their algorithm to the sequence $1,\dots,n$, we can compute in logarithmic space (and in $\mathrm{TC}^0$) the factorial $n!$ when $n$ is given in unary. Consequently, we can decide in logarithmic space the graph $\{(x,y):y=x!\}$, where both $x,y$ are binary: we test whether $x$ is logarithmically smaller than $y$, and if so, compute its factorial and compare it to $y$. (Note that testing the equality of two log-space computable functions can be done in log space: we do not need to write down the intermediate results, we recompute their individual bits on the fly as needed.) References: [1] P. W. Beame, S. A. Cook, H. J. Hoover, Log depth circuits for division and related problems , SIAM Journal on Computing 15 (1986), no. 4, pp. 994–1003. [2] A. Chiu, G. Davida, B. Litow, Division in logspace-uniform $\mathit{NC}^1$ , RAIRO – Theoretical Informatics and Applications 35 (2001), no. 3, pp. 259–275. [3] W. Hesse, E. Allender, D. A. M. Barrington, Uniform constant-depth threshold circuits for division and iterated multiplication , Journal of Computer and System Sciences 65 (2002), no. 4, pp. 695–716. [4] V. A. Nepomnjaščiĭ, Rudimentary predicates and Turing computations [calculations] , Doklady Akademii Nauk SSSR 195 (1970), pp. 282–284. MR0281611 , Zbl 0223.02031 .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/198430', 'https://mathoverflow.net', 'https://mathoverflow.net/users/68515/']}
|
jdg_371446
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9397065
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to understand more the CPU used on Android phones. The reason is that we are building the C library which has the certain CPU/math processor architecture flags we can set. So far we have found that all Android devices CPUs are ARM design and are either ARMv6 (older devices, low ends, Huawei, ZTE, small SE) or ARMv7 (Honeycomb tablets and all more expensive devices, almost all with resolution WVGA and higher)I have checked ~20 devices and all have processor of that type. Is that correct? Are there some others? Now when it comes to the multimedia and mathematical operations I think two units are important – the VFP for floating point arithmetic and the SIMD - NEON . After testing the above mentioned group of devices I have found that VFP support is in almost all devices, while NEON not. Any comments to that? I do not know what exactly is the ARMv6 and ARMv7 difference (besides the speed in general). Now we are building a multimedia C library, which has couple of flags for building. My question is how to target the largest number of devices on one side and how to allow the users of the better devices to use their hardware. My proposal is to prepare 3 distinct builds: ARMv6/VFP, ARMv7/VFP and ARMv7/VFP/NEON. Other proposals? The ARMv6/VFP I think should run on all configurations, except devices, which are missing the VFP (e.g. the old HTC Wildfire) – but those will remain unsupported. Is this a good approach? Any comments are welcomed. Regards,STeN
Now provide the response and nothing else.
|
That's correct. Currently there are two types - ARMv6 and ARMv7. Most likely in the near future there will additionally be an x86 target; the newest NDK already supports builds for it. VFP is mandatory on ARMv7, but not on ARMv6. NEON is optional, and not all devices support it. The most notable example is Nvidia Tegra 2: it is deployed on most high-end tablets and phones, but it doesn't support NEON. Nvidia Tegra 3 supports NEON. I think you should stick to ARMv6 with floating point emulation, ARMv7+VFP, ARMv7+NEON. Exactly - VFP is not supported on all ARMv6 devices, so simply don't use it there. By default the NDK builds the armeabi target, which is intended for ARMv6 devices and doesn't use VFP. armeabi-v7a builds for ARMv7 and uses VFP.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9397065', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/384115/']}
|
jdg_371447
|
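Editorial note for the row above (jdg_371447): a rough sketch of how the proposed builds map onto the old ndk-build configuration. The file contents are illustrative assumptions, not part of the dataset row; check the NDK documentation for the variables your NDK version supports.

```makefile
# Application.mk - build the library for both ARM ABIs (illustrative)
APP_ABI := armeabi armeabi-v7a

# Android.mk - enable NEON code paths only for the armeabi-v7a build
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE    := mymedia
LOCAL_SRC_FILES := mymedia.c
ifeq ($(TARGET_ARCH_ABI),armeabi-v7a)
    LOCAL_ARM_NEON := true        # compile sources with NEON support
endif
include $(BUILD_SHARED_LIBRARY)
```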
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1929958
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought I saw something answering this on SO recently but now I can't find it. Here is the code I am using now to determine if settings are for 24 hour time display. It works for me in the US, but I don't know if it will work in all locales. Is this sufficient or is there a better way to find out the current setting for this? +(BOOL) use24HourClock{ BOOL using24HourClock = NO; NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; [dateFormatter setLocale: [NSLocale currentLocale]]; [dateFormatter setDateStyle:kCFDateFormatterNoStyle]; [dateFormatter setTimeStyle:kCFDateFormatterShortStyle]; // get date/time (1Jan2001 0000UTC) NSDate* midnight = [[NSDate alloc] initWithTimeIntervalSinceReferenceDate:0]; NSString* dateString = [dateFormatter stringFromDate: midnight]; // dateString will either be "15:00" or "16:00" (depending on DST) or // it will be "4:00 PM" or "3:00 PM" (depending on DST) using24HourClock = ([dateString length] == 5); [midnight release]; [dateFormatter release]; return using24HourClock;}
Now provide the response and nothing else.
|
Here's the best way to do it: NSString *formatStringForHours = [NSDateFormatter dateFormatFromTemplate:@"j" options:0 locale:[NSLocale currentLocale]];NSRange containsA = [formatStringForHours rangeOfString:@"a"];BOOL hasAMPM = containsA.location != NSNotFound; in Swift: let formatString: NSString = NSDateFormatter.dateFormatFromTemplate("j", options: 0, locale: NSLocale.currentLocale())!let hasAMPM = formatString.containsString("a") Swift 4: let formatString = DateFormatter.dateFormat(fromTemplate: "j", options: 0, locale: Locale.current)!let hasAMPM = formatString.contains("a") This uses a special date template string called "j". According to the ICU Spec , "j"... requests the preferred hour format for the locale (h, H, K, or k), as determined by whether h, H, K, or k is used in the standard short time format for the locale. In the implementation of such an API, 'j' must be replaced by h, H, K, or k before beginning a match against availableFormats data. Note that use of 'j' in a skeleton passed to an API is the only way to have a skeleton request a locale's preferred time cycle type (12-hour or 24-hour). That last sentence is important. It "is the only way to have a skeleton request a locale's preferred time cycle type". Since NSDateFormatter and NSCalendar are built on the ICU library, the same holds true here.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1929958', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1693173/']}
|
jdg_371448
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
58537493
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I updated my data model by removing a column from a table to another, but I hava data in this column To do that, I started by adding the new column in the new table and its done. Now I want to migrate column data from the old table to the new one, I'm thinking to do it in the Up(MigrationBuilder migrationBuilder) method but I don't know the right approach. So, I'm asking if anyone knows from where I can start. Thank you all!
Now provide the response and nothing else.
|
Within the Up() method of the migration you can write custom SQL using the migrationBuilder.Sql() function - this is generally the way that I would handle data being migrated from one column to another, or in this case across tables. Bear in mind that the operations are executed in order, so you would need the SQL to run between the column being added and the other being dropped. Equally, for safety's sake and to keep things backwards compatible, the Down() side of things should include the SQL that does the reverse, so that you can always roll back later.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58537493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10978013/']}
|
jdg_371449
|
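Editorial note for the row above (jdg_371449): a sketch of the ordering the answer describes inside an EF Core migration. The table and column names (People, Addresses, City, PersonId) are hypothetical, and the UPDATE ... FROM syntax assumes a SQL Server-style provider.

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class MoveCityColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // 1. Add the new column on the target table.
        migrationBuilder.AddColumn<string>(name: "City", table: "Addresses", nullable: true);

        // 2. Copy the data across with raw SQL, between the add and the drop.
        migrationBuilder.Sql(
            @"UPDATE Addresses SET City = P.City
              FROM People P WHERE Addresses.PersonId = P.Id");

        // 3. Drop the old column.
        migrationBuilder.DropColumn(name: "City", table: "People");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Reverse the steps so the migration can be rolled back.
        migrationBuilder.AddColumn<string>(name: "City", table: "People", nullable: true);
        migrationBuilder.Sql(
            @"UPDATE People SET City = A.City
              FROM Addresses A WHERE A.PersonId = People.Id");
        migrationBuilder.DropColumn(name: "City", table: "Addresses");
    }
}
```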
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11062674
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am really interested in event-driven programming in C especially with sockets so I am going to dedicate some time doing my researches. Let's assume that I want to build a program with much File and Network I/O like a client/server app, basically, the first question is what is the philosophy behind this model. While in normal programming I would spawn new processes, how come a single process can actually serve many other requests. For example, there are some web-servers which can handle connections without creating threads or other processes, just one main process. I know this is complicated but it's always nice to know how different solutions work.
Now provide the response and nothing else.
|
You definitely must read the following: http://www.kegel.com/c10k.html . That page is the perfect overview of event-driven and asynchronous techniques. However, a quick & dirty answer : event-driven is neither non-blocking, nor asynchronous. Event-driven means, that the process will monitor its file descriptors (and sockets), and act only when some event occurs on some descriptor (events are: data received, error, became writeable, ...). BSD sockets have the "select()" function. When called, the OS will monitor the descriptors, and return to the process as soon as some event on one of the descriptors occurs. However, the website above has much better descriptions (and details about the different APIs).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11062674', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1459980/']}
|
jdg_371450
|
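Editorial note for the row above (jdg_371450): a minimal single-process select() loop in the spirit of the answer, showing one process serving many connections by reacting to events on file descriptors. Error handling is pared down and the port number is arbitrary; this is an illustration, not the answer's own code.

```c
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);               /* arbitrary port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 16);

    fd_set master;
    FD_ZERO(&master);
    FD_SET(lfd, &master);
    int maxfd = lfd;

    for (;;) {                                  /* one process, many connections */
        fd_set ready = master;
        if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0) break;
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready)) continue;
            if (fd == lfd) {                    /* event: new connection */
                int cfd = accept(lfd, NULL, NULL);
                FD_SET(cfd, &master);
                if (cfd > maxfd) maxfd = cfd;
            } else {                            /* event: data or EOF on a client */
                char buf[512];
                ssize_t n = read(fd, buf, sizeof buf);
                if (n <= 0) { close(fd); FD_CLR(fd, &master); }
                else        write(fd, buf, n);  /* echo back */
            }
        }
    }
    return 0;
}
```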
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8688135
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Lets say we have an entity Person, a controller PersonController and an edit.jsp page (creating a new or editing an existing person) Controller @RequestMapping(value = "/edit", method = RequestMethod.POST)public String editPerson(@RequestParam("fname") String fname, Model model) { if(fname == null || fname.length() == 0){ model.addAttribute("personToEditOrCreate", new Person()); } else{ Person p = personService.getPersonByFirstName(fname); model.addAttribute("personToEditOrCreate", p); } return "persons/edit";}@RequestMapping(value = "/save", method = RequestMethod.POST)public String savePerson(Person person, BindingResult result) { personService.savePerson(person); return "redirect:/home";} edit.jsp <form:form method="post" modelAttribute="personToEditOrCreate" action="save"> <form:hidden path="id"/> <table> <tr> <td><form:label path="firstName">First Name</form:label></td> <td><form:input path="firstName" /></td> </tr> <tr> <td><form:label path="lastName">Last Name</form:label></td> <td><form:input path="lastName" /></td> </tr> <tr> <td><form:label path="money">Money</form:label></td> <td><form:input path="money" /></td> </tr> <tr> <td colspan="2"> <input type="submit" value="Add/Edit Person"/> </td> </tr> </table> </form:form> Im trying the code above (without using the @ModelAttribute annotation in the savePerson method, and it works correct. Why and when do i need to add the annotation to the person object: @RequestMapping(value = "/save", method = RequestMethod.POST)public String savePerson(@ModelAttribute("personToEditOrCreate") Person person, BindingResult result) { personService.savePerson(person); return "redirect:/home";}
Now provide the response and nothing else.
|
You don't need @ModelAttribute ( parameter ) just to use a Bean as a parameter For example, these handler methods work fine with these requests: @RequestMapping("/a")void pathA(SomeBean someBean) { assertEquals("neil", someBean.getName());}GET /a?name=neil@RequestMapping(value="/a", method=RequestMethod.POST)void pathAPost(SomeBean someBean) { assertEquals("neil", someBean.getName());}POST /aname=neil Use @ModelAttribute ( method ) to load default data into your model on every request - for example from a database, especially when using @SessionAttributes . This can be done in a Controller or in a ControllerAdvice : @Controller@RequestMapping("/foos")public class FooController { @ModelAttribute("foo") String getFoo() { return "bar"; // set modelMap["foo"] = "bar" on every request }} Any JSP forwarded to by FooController : ${foo} //=bar or @ControllerAdvicepublic class MyGlobalData { @ModelAttribute("foo") String getFoo() { return "bar"; // set modelMap["foo"] = "bar" on every request }} Any JSP: ${foo} //=bar Use @ModelAttribute ( parameter ) if you want to use the result of @ModelAttribute ( method ) as a default : @ModelAttribute("attrib1")SomeBean getSomeBean() { return new SomeBean("neil"); // set modelMap["attrib1"] = SomeBean("neil") on every request}@RequestMapping("/a")void pathA(@ModelAttribute("attrib1") SomeBean someBean) { assertEquals("neil", someBean.getName());}GET /a Use @ModelAttribute ( parameter ) to get an object stored in a flash attribute : @RequestMapping("/a")String pathA(RedirectAttributes redirectAttributes) { redirectAttributes.addFlashAttribute("attrib1", new SomeBean("from flash")); return "redirect:/b";}@RequestMapping("/b")void pathB(@ModelAttribute("attrib1") SomeBean someBean) { assertEquals("from flash", someBean.getName());}GET /a Use @ModelAttribute ( parameter ) to get an object stored by @SessionAttributes @Controller@SessionAttributes("attrib1")public class Controller1 { @RequestMapping("/a") void pathA(Model model) { model.addAttribute("attrib1", new SomeBean("neil")); //this ends up in session due to @SessionAttributes on controller } @RequestMapping("/b") void pathB(@ModelAttribute("attrib1") SomeBean someBean) { assertEquals("neil", someBean.getName()); }}GET /aGET /b
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8688135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/711189/']}
|
jdg_371451
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7218625
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Could someone explain me what differences there are between strtok() and strsep() ?What are the advantages and disadvantages of them?And why would I pick one over the other one.
Now provide the response and nothing else.
|
One major difference between strtok() and strsep() is that strtok() is standardized (by the C standard, and hence also by POSIX) but strsep() is not standardized (by C or POSIX; it is available in the GNU C Library, and originated on BSD). Thus, portable code is more likely to use strtok() than strsep() . Another difference is that calls to the strsep() function on different strings can be interleaved, whereas you cannot do that with strtok() (though you can with strtok_r() ). So, using strsep() in a library doesn't break other code accidentally, whereas using strtok() in a library function must be documented because other code using strtok() at the same time cannot call the library function. The manual page for strsep() at kernel.org says: The strsep() function was introduced as a replacement for strtok(3), since the latter cannot handle empty fields. Thus, the other major difference is the one highlighted by George Gaál in his answer; strtok() permits multiple delimiters between a single token, whereas strsep() expects a single delimiter between tokens, and interprets adjacent delimiters as an empty token. Both strsep() and strtok() modify their input strings and neither lets you identify which delimiter character marked the end of the token (because both write a NUL '\0' over the separator after the end of the token). When to use them? You would use strsep() when you want empty tokens rather than allowing multiple delimiters between tokens, and when you don't mind about portability. You would use strtok_r() when you want to allow multiple delimiters between tokens and you don't want empty tokens (and POSIX is sufficiently portable for you). You would only use strtok() when someone threatens your life if you don't do so. And you'd only use it for long enough to get you out of the life-threatening situation; you would then abandon all use of it once more. It is poisonous; do not use it. It would be better to write your own strtok_r() or strsep() than to use strtok() . Why is strtok() poisonous? The strtok() function is poisonous if used in a library function. If your library function uses strtok() , it must be documented clearly. That's because: If any calling function is using strtok() and calls your function that also uses strtok() , you break the calling function. If your function calls any function that calls strtok() , that will break your function's use of strtok() . If your program is multithreaded, at most one thread can be using strtok() at any given time — across a sequence of strtok() calls. The root of this problem is the saved state between calls that allows strtok() to continue where it left off. There is no sensible way to fix the problem other than "do not use strtok() ". You can use strsep() if it is available. You can use POSIX's strtok_r() if it is available. You can use Microsoft's strtok_s() if it is available. Nominally, you could use the ISO/IEC 9899:2011 Annex K.3.7.3.1 function strtok_s() , but its interface is different from both strtok_r() and Microsoft's strtok_s() . BSD strsep() : char *strsep(char **stringp, const char *delim); POSIX strtok_r() : char *strtok_r(char *restrict s, const char *restrict sep, char **restrict state); Microsoft strtok_s() : char *strtok_s(char *strToken, const char *strDelimit, char **context); Annex K strtok_s() : char *strtok_s(char * restrict s1, rsize_t * restrict s1max, const char * restrict s2, char ** restrict ptr); Note that this has 4 arguments, not 3 as in the other two variants on strtok() .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7218625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/916050/']}
|
jdg_371452
|
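Editorial note for the row above (jdg_371452): a small demonstration of the empty-field difference the answer highlights. It assumes strsep() is available (a BSD/glibc extension, hence the feature-test macro); the input string is made up.

```c
#define _DEFAULT_SOURCE        /* expose strsep() on glibc */
#include <stdio.h>
#include <string.h>

int main(void) {
    char a[] = "one,,three";
    char b[] = "one,,three";

    /* strtok_r: runs of delimiters collapse, so the empty field disappears. */
    char *save = NULL;
    for (char *t = strtok_r(a, ",", &save); t; t = strtok_r(NULL, ",", &save))
        printf("strtok_r: [%s]\n", t);         /* prints [one] then [three] */

    /* strsep: every delimiter ends a token, so the empty field is reported. */
    char *rest = b;
    for (char *t; (t = strsep(&rest, ",")) != NULL; )
        printf("strsep:   [%s]\n", t);         /* prints [one], [], [three] */
    return 0;
}
```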
stackexchange
|
llm_judgeable_groundtruth_similarity
|
547995
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $n=p^eq^f$ where $p$ and $q$ are distinct primes and $e$ and $f$ are positive integers. Show that $n$ has $(e + 1)(f + 1)$ distinct factors in $N$, and that the sum of all these factors is$(p^{e+1} −1)(q^{f+1} −1) / (p − 1)(q − 1)$(You may assume the uniqueness of prime factorization). What if $n$ has more than two distinct prime factors?(Hint: you may wish to warm up by doing n = pe first.) Sorry I have no idea where to start :( Would doing $n=p^e$ first be easiest?
Now provide the response and nothing else.
|
I'll do a warm up for you. Warm-ups help us understand the nature of a problem and how all of the relevant parts interact. Therefore when an exercise comes with warm-ups and you want to understand how to do the exercise, it's a good idea to try your hand at the warm-ups. Just for future reference. If you followed the directions and did work on the warm-up, then note you should talk about what you've tried and anywhere you got stuck whenever you post a question here. Here are all factors of $648=2^33^4$ $$\begin{array}{|c|c|c|c|}\hline \color{Red}{2^0} \color{Blue}{3^0} & \color{Red}{2^1} 3^0 & \color{Red}{2^2}3^0 & \color{Red}{2^3} 3^0 \\ \hline 2^0 \color{Blue}{3^1} & 2^1 3^1 & 2^23^1 & 2^3 3^1 \\ \hline 2^0 \color{Blue}{3^2} & 2^13^2 & 2^23^2 & 2^3 3^2 \\ \hline 2^0 \color{Blue}{3^3} & 2^1 3^3 & 2^2 3^3 & 2^3 3^3 \\ \hline 2^0 \color{Blue}{3^4} & 2^1 3^4 & 2^23^4 & 2^3 3^4 \\ \hline \end{array}$$ Make sure you understand why the divisors can all be listed in this way (via prime factorization). Homework . How many columns are there (red)? How many rows are there (blue)? Entries total? How are the number of rows and columns related to the powers of $2$ and $3$ in $648$? Can you now argue why the number of divisors of $p^eq^f$ is $(e+1)(f+1)$? Generalize to $p_1^{e_1}\cdots p_g^{e_g}$: how many divisors do you think it will have and why? In the blanks below, fill each entry of the bottom row (except the last) with the sum of the divisors in that column, fill each entry of the right column (except the last) with the sum of the divisors in that row, and fill the corner in with the sum of all of the divisors of the whole array. How would you sum $p^rq^0+p^rq^1+\cdots+p^rq^f$ in general, where $r$ remains fixed? Or what about $p^0q^s+p^1q^s+\cdots+p^eq^s$ where $s$ remains fixed? Write the sum of all divisors of $p^eq^f$ as a double summation . Evaluate the double summation using the geometric sum formula . Note $\sum_{i,j}a_ib_j=(\sum_ia_i)(\sum_jb_j)$. $$\begin{array}{|c|c|c|c|c|}\hline \color{Red}{2^0} \color{Blue}{3^0} & \color{Red}{2^1} 3^0 & \color{Red}{2^2}3^0 & \color{Red}{2^3} 3^0 & \phantom{--} \\ \hline 2^0 \color{Blue}{3^1} & 2^1 3^1 & 2^23^1 & 2^3 3^1 \\ \hline 2^0 \color{Blue}{3^2} & 2^13^2 & 2^23^2 & 2^3 3^2 \\ \hline 2^0 \color{Blue}{3^3} & 2^1 3^3 & 2^2 3^3 & 2^3 3^3 \\ \hline 2^0 \color{Blue}{3^4} & 2^1 3^4 & 2^23^4 & 2^3 3^4 \\ \hline \phantom{r} \\ \hline \end{array}$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/547995', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/104705/']}
|
jdg_371453
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1289801
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $f:X \to [0,1]$ is an onto continuous mapand $\{f^{-1} (y)\}$ is compact for every $y \in [0,1]$, then is X necessarily compact? Now continuous image of a compact set is compact. Again $X$ is the uncountable union of compact sets but to get any conclusion?
Now provide the response and nothing else.
|
No, it doesn't have to be. Counterexample: endow $[0,1]$ with the discrete topology, $\mathcal T_{dis}$. Call the standard topology on $[0,1]$ $\mathcal T_{Euc}$. Now consider the identity function: $$\mathrm{id}:([0,1],\mathcal T_{dis})\longrightarrow ([0,1],\mathcal T_{Euc}) \atop \qquad x\mapsto x$$Then $f^{-1}(\{y\})=\{y\}$ for all $y\in[0,1]$ yet $([0,1],\mathcal T_{dis})$ is not compact.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1289801', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152715/']}
|
jdg_371454
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38894
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a set of data in an uptrend. I would like to find two BEST parallel lines to contain all the data but I don't know how to approach it, anyone can help ? Thanks.
Now provide the response and nothing else.
|
This is a computational geometry problem. I'll illustrate a solution using some sample data: Needs["ComputationalGeometry`"]pts = Table[{x, Sin[x*3] + x/2. + RandomReal[.5]}, {x, 0, 10, .1}];ListPlot[pts] You want to find 3 "support points" in your data, that touch these two lines. Now imagine you had a "convex hull", i.e. the smallest convex polygon that contains all your data points. Calculating a convex hull is cheap (at least in 2D) and it eliminates most of the points in your data: hull = ConvexHull[pts];ListPlot[pts, Epilog -> {Red, Point[pts[[hull]]], Opacity[0.2], Line[pts[[Append[hull, hull[[1]]]]]]}] Obviously, the support points you're looking for are all on the convex hull. And two of them are next to each other on the convex hull. EDIT : I'll shamelessly copy @Kuba's idea to minimize m first, as it makes the solution so much simpler: (* find the possible value for m - the slopes between each adjacent \pair of points on the convex hull *)possibleMs = Divide @@@ Differences[Reverse /@ pts[[Append[hull, hull[[1]]]]]];(* find the m with the smallest c2-c1 difference *)m = SortBy[possibleMs, Max[#] - Min[#] &[pts[[hull]].{-#, 1}] &][[1]];(* find c1 and c2 *){c1, c2} = {Max[#], Min[#]} &[pts[[hull]].{-m, 1}];ListPlot[pts, Epilog -> {Red, Dashed, Line[{{0, c1}, {10, 10 m + c1}}], Line[{{0, c2}, {10, 10 m + c2}}]}]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/38894', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/11169/']}
|
jdg_371455
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5832094
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like a Covariant collection whose items can be retrieved by index. IEnumerable is the only .net collection that I'm aware of that is Covariant, but it does not have this index support. Specifically, I'd like to do this: List<Dog> dogs = new List<Dog>();IEnumerable<Animal> animals = dogs;IList<Animal> animalList = dogs; // This line does not compile Now, I'm aware of why this is a problem. List implements ICollection that has an Add method. By up casting to IList of Animals, it would allow subsequent code to add any type of animal which is not allowed in the "real" List<Dog> collection. So is anyone aware of a collection that supports index lookups that is also covariant? I would like to not create my own.
Now provide the response and nothing else.
|
Update: from .NET 4.5 onwards there is IReadOnlyList<out T> and IReadOnlyCollection<out T> which are both covariant; The latter is basically IEnumerable<out T> plus Count ; the former adds T this[int index] {get;} . It should also be noted that IEnumerable<out T> is covariant from .NET 4.0 onwards. Both List<T> and ReadOnlyCollection<T> (via List<T>.AsReadOnly() ) implement both of these. It can only be covariant if it only has a get indexer, i.e. public T this[int index] { get; } But all main collections have {get;set;} , which makes that awkward. I'm not aware of any that would suffice there, but you could wrap it, i.e. write an extension method: var covariant = list.AsCovariant(); which is a wrapper around an IList<T> that only exposes the IEnumerable<T> and the get indexer...? should be only a few minutes work... public static class Covariance{ public static IIndexedEnumerable<T> AsCovariant<T>(this IList<T> tail) { return new CovariantList<T>(tail); } private class CovariantList<T> : IIndexedEnumerable<T> { private readonly IList<T> tail; public CovariantList(IList<T> tail) { this.tail = tail; } public T this[int index] { get { return tail[index]; } } public IEnumerator<T> GetEnumerator() { return tail.GetEnumerator();} IEnumerator IEnumerable.GetEnumerator() { return tail.GetEnumerator(); } public int Count { get { return tail.Count; } } }}public interface IIndexedEnumerable<out T> : IEnumerable<T>{ T this[int index] { get; } int Count { get; }}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5832094', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/731000/']}
|
jdg_371456
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
377713
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Write out $x_n(t)$ for the following differential equation: $$\frac{dx}{dt} = f(t,x) = x^2, x(0)=1$$ So using the Picard Iteration : $x_n(t) = x_0 + \int_{0}^t f(s,x_{n-1}(s)) ds$, we have $x_0(t) = 1 \quad \forall t$, $x_1= 1+ \int_0^t (1)^2 ds = 1+t$ $x_2 = 1+ \int_0^t (1+s)^2 ds = 1+t+t^2+ \frac{t^3}{3}$ $x_3 = 1+ \int_0^t (1+s+s^2+ \frac{s^3}{3})^2 ds = 1+t+t^2+ \frac{t^3}{3}+ \frac{2}{3}t^4+ \frac{1}{3}t^5+ \frac{1}{9}t^6+ \frac{1}{56}t^7$ I am stuck in giving a general $x_n(t)$ since I could not figure out the pattern. EDIT: Yes I do know the solution is $\frac{1}{1-t}$, which is why I wonder there should be a recognizable pattern $1+t+t^2+t^3+...$ or something but yet my calculation seems to lead me to nowhere.
Now provide the response and nothing else.
|
I don't see any simple pattern in the coefficients of $t^k$ in polynomial $x_n$ for $k>n$. But there is such a pattern when $k\le n$: the coefficients are all $1$. That is, $$x_n=1+t+t^2+\dots+t^n +O(t^{n+1})\tag1$$where $O(t^{n+1})$ stands for powers $n+1$ and higher. It's not hard to prove (1) by induction. The base case is $x_0=1$. If (1) holds for $n$, then squaring (1) yields $$x_n^2=1+2t+3t^2+\dots+(n+1)t^n+O(t^{n+1}) \tag2$$ which integrates to $$x_{n+1}=1+t+t^2+\dots+t^{n+1} +O(t^{n+2})\tag3$$ At least in the sense of convergence of coefficients, (1) is enough to tell you what happens as $n\to\infty$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/377713', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33464/']}
|
jdg_371457
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
448573
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In this youtube video it is claimed that electrons orbit their atom's nucleus not in well-known fixed orbits, but within "clouds of probability", i.e., spaces around the nucleus where they can lie with a probability of 95%, called "orbitals". It is also claimed that the further away one looks for the electron from the nucleus, the more this probability decreases, yet it never reaches 0 . The authors of the video conclude that there is a non-zero probability for an atom to have its electron "on the other side of the Universe". If this is true, then there must be a portion of all atoms on Earth whose electron lies outside the Milky Way. Which portion of atoms has this property?
Now provide the response and nothing else.
|
The quantity you should consider first is the Bohr radius , this tells you an idea of the relevant atomic scales, $$a_0 = 5.29\times 10^{-11} ~{\rm m}$$ For hydrogen (the most abundant element), in its ground state, the probability of finding an electron beyond a distance $r$ from the center looks something like (for $r \gg a_0$ ) $$P(r) \approx e^{-2r/a_0}$$ Now let's plug in some numbers. The virial radius of the Milky Way is around $200 ~{\rm kpc} \approx 6\times 10^{21}~{\rm m}$ , so the probability of finding an electron outside the galaxy from an atom on Earth is around $$P \sim e^{-10^{32}}$$ that's ... pretty low. But you don't need to go that far to show this effect, the probability that an electron of an atom in your foot is found in your hand is $\sim 10^{-10^{10}}$ .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/448573', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/81683/']}
|
jdg_371458
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
289495
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Assume I need to solve an NP-complete problem, for which problem-specific methods (e.g. efficient heuristics or exponential algorithms faster than naive one) are not well developed. If the size of input is n, then, in theory, I could reduce the problem to SAT of size P(n), where P is some polynomial, and use SAT solvers. Or I could reduce it to other NP-complete problem with well-developed algorithms available. Of course, I would like to use reduction with P(n) being polynomial with as low degree as possible. 1) Is there a (reasonably recent) book/survey/webpage in which I can learn what are the most efficient known reductions from some (as many as possible) NP-complete problems to (say) SAT? 2) I am sure many such reductions has already been implemented, some of them open source. Is there a webpage collecting links to such implementations?
Now provide the response and nothing else.
|
Lurie characterizes the symmetric monoidal structure on $\mathsf{Sp}$ by a universal property (HA.4.8.2.19): it is uniquely determined up to a contractible space of choices by the property that $S^0$ is the unit and $\wedge$ commutes with homotopy colimits in both variables. I think on first glance this sounds like one of those formal things that doesn't help you 'compute' anything, but I claim that (i) you can get all the computational mileage out of this that you usually get from model categories of spectra and (ii) if you like, you can easily show that your favorite symmetric monoidal model category of spectra models this symmetric monoidal structure. We'll start with (ii) since that's what you asked about, and it's easier. Suppose you have a model category $\mathbf{M}$ which models spectra (which is something else that can be checked easily from universal properties by its relationship to some model for $\mathsf{Spaces}$, for example). Suppose further that it's a symmetric monoidal model category. Then the underlying $\infty$-category $\mathsf{M}[W^{-1}] \cong \mathsf{Sp}$ inherits a symmetric monoidal structure. To compare it with the 'universal' one, one needs to check that the tensor product commutes with homotopy colimits and that the unit is weakly equivalent to the sphere spectrum. The unit requirement is just always satisfied (otherwise what sort of spectra are you using??) and the homotopy colimit requirement is also always satisfied, because part of the definition of being a symmetric monoidal model category is that $\otimes$ be a left Quillen bifunctor- and that forces the tensor product to commute with homotopy colimits in both variables. Now let's talk about (i). What does it even mean to 'understand' the smash product of spectra? Well, usually the way a spectrum is handed to us is as a sequence of (say, pointed) spaces $(X_k)$ and maps $\Sigma X_k \to X_{k+1}$. Every model ever includes at least that much data, often with all sorts of extra requirements and structure. But in any case, this data presents a spectrum $X = \mathrm{hocolim}_k \Sigma^{-k}\Sigma^{\infty}X_k$. So, from the universal property we learn that $$ X \wedge Y = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}X_j \wedge \Sigma^{\infty}Y_k\right). $$ So now we'd better figure out how to compute $\Sigma^{\infty}A \wedge \Sigma^{\infty} B$ for pointed spaces $A$ and $B$. It's possible to argue very abstractly that $\Sigma^{\infty}$ must be symmetric monoidal, using the methods in HA.4.8.2, but you can also argue as follows: first reduce to the unpointed case, so we want to compute $\Sigma^{\infty}_+A \wedge \Sigma^{\infty}_+B$. But $A$ and $B$ are homotopy colimits of constant diagrams shaped like $A$ and $B$, respectively. And $\Sigma^{\infty}_+$ commutes with homotopy colimits. A little string of equivalences and the fact that $\Sigma^{\infty}_+(*) = S^0$ is the unit gives the result. Excellent, so just from nonsense we learn that $\Sigma^{\infty}$ is symmetric monoidal. 
Moving back to our original formula we learn that a 'concrete' computation for the smash product is: $$X \wedge Y = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}X_j \wedge \Sigma^{\infty}Y_k\right) = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}(X_j \wedge Y_k)\right).\quad (1)$$ If you like, then at this point you could pick your favorite cofinal copy of $\mathbb{N}$ inside $\mathbb{N} \times \mathbb{N}$ and present this smash product as a sequence of spaces with maps from the suspension of one to the next. This way of thinking about the smash product is the very original one, going back to Boardman and Adams. Of course they ran into all sorts of technical issues verifying all the properties they wanted out of a symmetric monoidal structure. What happened to those? Well, when dealing with a symmetric monoidal structure one would like (a) properties and (b) the ability to 'compute' what the thing does. The original approach was to begin with (b) and then work hard to verify (a). In the present situation, one begins with (a) using various levels of sophistication and then deduces (b). Of course, lots of technical work went into producing the symmetric monoidal $\infty$-category $\mathsf{Sp}$! But the work pays off: you get a much stronger theorem and a more flexible theory. Let me go into a bit more detail about comparing with, say, the formula for the smash product one finds for orthogonal spectra. I claim that it is a particular model-categorical presentation of precisely the formula (1). To justify that, I'm going to compare the two formulae directly. So I'll need to review a bit about orthogonal spectra. Recall that an orthogonal spectrum consists of a sequence of pointed spaces $X_n$ equipped with $O(n)$-actions together with compatible, $O(n)\times O(m)$-equivariant, based maps $X_n \wedge S^m \to X_{n+m}$. Given an orthogonal spectrum, one would like to know how to describe the corresponding object in $\mathsf{Sp}$ and how to understand smash products. We'll take the $\infty$-category $\mathsf{Spaces}$ as 'understood' and the functor $\Sigma^{\infty}$ as also understood. Then an orthogonal spectrum $(X_k)$ presents an object of $\mathsf{Sp}$ by the formula $$X = \underset{\mathbb{N}}{\mathrm{hocolim}}\, \Sigma^{-k}\Sigma^{\infty}X_k$$ Of course, we ignored the orthogonal group action. Luckily, here's a fun fact: Fun fact. Let $\mathsf{Orth}$ denote the $\infty$-category of real inner product spaces and isometric embeddings. Then the inclusion $\mathbb{N} \to \mathsf{Orth}$ is homotopy final. It follows that we may compute the homotopy colimit either over $\mathsf{Orth}$ or over $\mathbb{N}$. That's important, because the formula for the smash product of orthogonal spectra is really trying to be a formula for a homotopy colimit over $\mathsf{Orth} \times \mathsf{Orth}$. I won't bother typesetting the formula here (see page 5 of Schwede's book , for example) but unless I'm mistaken one arrives at this formula as follows: Take the formula in (1) and replace the homotopy colimit over $\mathbb{N} \times \mathbb{N}$ by a homotopy colimit over $\mathsf{Orth} \times \mathsf{Orth}$. To compute that homotopy colimit, we are free to first left Kan extend along $\oplus: \mathsf{Orth} \times \mathsf{Orth} \to \mathsf{Orth}$. Stare at the pointwise formula for the left Kan extension evaluated at $\mathbb{R}^n$, and you become interested in a homotopy colimit along all pairs $p+q = n$ together with actions of $O(p) \times O(q)$ etc. 
Now suspend everything n times, erase the $\Sigma^{\infty}$'s, and use one of the standard formulas for computing a homotopy colimit as an ordinary colimit in some model category, and you should get (with cofibrancy conditions if you want the right homotopy type) the previously mentioned formula in the linked book. The same thing works in symmetric spectra, except that the claim about finality is false. Instead, one uses a cofibrancy condition that allows a lemma of Bökstedt to apply so you can replace $\mathbb{N}$ with the category of finite sets and injections.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/289495', 'https://mathoverflow.net', 'https://mathoverflow.net/users/31472/']}
|
jdg_371459
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6273289
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a form with a mobile/cell number and a home phone number. I want to have only validate presence of mobile/cell number if the phone number has been left blank or vice versa. My current validations for these fields are as follows. validates_presence_of :mobile_numbervalidates_presence_of :home_phonevalidates_length_of :home_phone, :minimum => 12, :maximum => 12validates_length_of :mobile_number, :minimum => 10, :maximum => 10, :allow_blank => truevalidates_format_of :home_phone, :with => /\A[0-9]{2}\s[0-9]{4}\s[0-9]{4}/, :message => "format should be 02 9999 9999" I thought I could have something like the following but not sure how to do this exactly. validates_presence_of :mobile_number, :unless => :home_phone.blank? I'm using Rails 3.
Now provide the response and nothing else.
|
You don't need a lambda. This will do: validates_presence_of :mobile_number, :unless => :home_phone? Also, all of the validators take the same if/unless options, so you can make them conditional at will. Update: Looking back at this answer a few days later, I see that I should explain why it works: If you set a validator's :unless option to be a symbol, Rails will look for an instance method of that name, invoke that method on the instance that's being validated -- at validation time -- and only perform the validation if the method returns false. ActiveRecord automatically creates question mark methods for each of your model's attributes, so the existence of a home_phone column in your model's table causes Rails to create a handy #home_phone? method. This method returns true if and only if home_phone is present (i.e. not blank). If the home_phone attribute is nil or an empty string or a bunch of white space, home_phone? will return false. UPDATE: Confirmed that this old technique continues to work in Rails 5.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6273289', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/733471/']}
|
jdg_371460
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
696356
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I figure that Lagrange's theorem and the fact that the characteristic of an integral domain is either $0$ or prime should be used, but just can't figure it out exactly.
Now provide the response and nothing else.
|
$F$ cannot have characteristic $0$, because it is a finite field. If $F$ has characteristic $p>0$, then $1$ generates an additive subgroup of order $p$. By Lagrange's theorem, $p$ divides the order of the whole group $F$, which is $2^n$. Since $p$ is prime, $p=2$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/696356', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/99084/']}
|
jdg_371461
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2554166
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
"A linear transformation between polynomial spaces: $f: P_2(\mathbb{R}) \to P_2(\mathbb{R})$ is given by: $$f(p(x))=3 \cdot p(1)-x^2 \cdot p(0)+(x-1) \cdot p'(1)$$ Provide the transformation matrix with respect to the monomial basis $(1,x,x^2)$ How can one determine this?
Now provide the response and nothing else.
|
A commutative ring can be embedded in a field iff it is an integral domain. Indeed, if a ring can be embedded in a field then it cannot have zero divisors because fields cannot have zero divisors. Conversely, every integral domain can be embedded in a field, namely, its field of fractions.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2554166', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/501670/']}
|
jdg_371462
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3954514
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the recommended way of doing date arithmetic in Perl? Say, for example, that I want to know the date three days before today (where today = 2010-10-17 and today - 3 days = 2010-10-14). How would you do that in Perl?
Now provide the response and nothing else.
|
You can use DateTime and DateTime::Duration http://search.cpan.org/dist/DateTime/lib/DateTime/Duration.pm Or work with unix timestamps: my $now = time();my $threeDaysAgo = $now - 3 * 86400;my ($day, $mon, $year) = (localtime($threeDaysAgo))[3, 4, 5];printf("Three days ago was %04d-%02d-%02d", $year+1900, $mon+1, $day);
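A small sketch of the DateTime route, which sidesteps the DST edge cases of subtracting raw 86400-second days (assumes the DateTime module from CPAN is installed):

use strict;
use warnings;
use DateTime;

# today's date in the local time zone, minus three calendar days
my $date = DateTime->today( time_zone => 'local' )->subtract( days => 3 );
print $date->ymd, "\n";   # prints 2010-10-14 when run on 2010-10-17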
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3954514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58394/']}
|
jdg_371463
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
67564496
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why is the last operation returning 20? console.log(2 + 2); // equals 4console.log("2" + "2"); // equals "22"console.log(2 + 2 - 2); // equals 2console.log("2" + "2" - "2"); // equals 20
Now provide the response and nothing else.
|
+ and - evaluate left-to-right. For +, when either operand is a string the result is concatenation, and when both are numbers the result is addition. In contrast, - always coerces both sides to numbers. '2' + '2' - '2' does // left-to-right('2' + '2') - '2'// both sides are strings, so concatenate'22' - '2'// operator is -, so coerce both sides to numbers22 - 220
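If numeric addition is what you actually want, convert the strings before applying +; a small sketch:

// explicit conversion keeps + from concatenating
console.log(Number("2") + Number("2") - Number("2")); // 2

// the unary plus operator is a common shorthand for the same conversion
console.log(+"2" + +"2" - +"2"); // 2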
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/67564496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6219957/']}
|
jdg_371464
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10501121
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a reduced example of the structure of my code: int increment(int j);int main(){ int i = 0; while(1) { i = increment(i); } return 0;}int increment(int j){ return j + 1;} And here is the corresponding GDB script: b incrementcommand 1finishprint icontinueend The problem is that the finish command prevents the commands that come after it (namely print i and continue) from being called. Is there a way to tell GDB to print i right after any increment call?
Now provide the response and nothing else.
|
You can apparently work around this bug by wrapping all the commands in a single python invocation e.g. (gdb) break doSomethingBreakpoint 1 at 0x400478: file iter.c, line 5.(gdb) commandsType commands for breakpoint(s) 1, one per line.End with a line saying just "end".>python gdb.execute("print i"); gdb.execute("finish"); gdb.execute("print i");>endBreakpoint 1, doSomething () at iter.c:55 while (i < 5)$1 = 0main (argc=1, argv=0x7fffffffe178) at iter.c:1313 return 0;$2 = 5 edit: a 2nd work around that doesn't require python appears to be defining a new gdb commandand running that in commands: define fooprint *iset $addrOfI = ifinishprint *$addrOfIendbreak doSomethingcommandsfooend
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10501121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/707381/']}
|
jdg_371465
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33312
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For any arbitrary NP complete language is there always a polytime superset the complement of which is also infinite? A trivial version which does not stipulate the superset to have infinite complement has been asked at https://cs.stackexchange.com/q/50123/42961 For purposes of this question, you can assume that $P \ne NP$. As Vor explained, if $P = NP$ then the answer is "No". (If $P = NP$, then $X = \{x \mid x \in \mathbb{N^+} \land x > 1\}$ is NP-complete. Clearly there is no superset of $X$ which is infinite and has an infinite complement, as the complement of $X$ has only a single element.) Thus we can focus on the case $P \ne NP$.
Now provide the response and nothing else.
|
Every $\mathsf{coNP}$-complete set contains an infinite subset in $\mathsf{P}$ assuming that pseudorandom generators exist, and secure one-way permutations exist. In other words, assuming that these two conjectures are true, no $\mathsf{coNP}$-complete set is P- immune .As pointed out in the comments by Lance, this is implied by Theorem 4.4 of Glasser, Pavan, Selman, and Sengupta, " Properties of NP-complete sets ",SIAM J. Comput. 36(2), 516–542. (Kaveh has already shown that your question is equivalent to whether every $\mathsf{coNP}$-complete sets contains an infinite $\mathsf{P}$ subset. In other language, this is saying that no $\mathsf{coNP}$-complete set is "$\mathsf{P}$-immune." This is the language used in the above-referenced theorem.)
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/33312', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/17763/']}
|
jdg_371466
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
257979
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to use I2C signals on the FPGA from a Cypress 2.0 module, with the I2C pins as below, but it does not work well. Xilinx FPGA <--------------------> MCU module (I2C) .UCFNET in_scl LOC = "A1";NET in_sda LOC = "A2";NET out_scl LOC = "C1";NET out_sda LOC = "C2"; And I have implemented it in Verilog as below. module top();inout wire in_sda;inout wire in_scl;inout wire out_sda;inout wire out_scl;assign out_sda = in_sda;assign out_scl = in_scl;endmodule Can you help me with what I am missing?
Now provide the response and nothing else.
|
In I2C communication, open collector drivers are used. The protocol depends on the level on the line being the 'wire-OR' of all the drivers. This means that, on a local interface, a pin may be low because it is being pulled low by a remote driver, or it's being pulled low by itself. Obviously any buffer between that sits between this interface and the remote ones must be able to distinguish why the line is low. If it's low because a remote is pulling it low, then it too must pull low to transmit that. If it's low because the local driver is pulling it low, then it must not, otherwise the line would get stuck low, and nothing more would happen. Various strategies are used by vendors of (for instance) opto isolators designed for I2C. Their output pulls low to a different level to the typical local interface, and voltage sensors can distinguish which driver is pulling it low. But it's all rather flaky and depends on the local driver having 'typical' good performance. As an I2C bus is designed to be bus connected, by far the best way to connect the FPGA to the bus is to implement an input and an o/c driver for each of the two pins, and connect to the two I2C bus wires. This retains the direct connection of all the items already on the bus. The minimal programming so that the FPGA does not disrupt bus traffic is to not drive the o/c outputs. To participate in bus traffic will require an I2C protocol stack in the FPGA. A robust way to break the path through the FPGA is to implement two I2C stacks in the FPGA, one facing each way, to relay messages. While this may seem quite a heavy option, I2C interfaces are available as pre-written IP for FPGAs, and it eliminates the flakiness of trying to implement a dumb pass-through.
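For the "input plus open-collector driver per pin" suggestion, a minimal open-drain pad sketch in Verilog (port names are illustrative, not taken from the question; the external pull-up resistors supply the high level):

module i2c_open_drain_pad (
    inout  wire pin,        // SDA or SCL pad on the shared bus
    input  wire drive_low,  // 1 = pull the line low, 0 = release it
    output wire pin_in      // level currently seen on the bus
);
    // never drive the line high; only pull low or go high-impedance
    assign pin    = drive_low ? 1'b0 : 1'bz;
    assign pin_in = pin;
endmodule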
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/257979', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/123772/']}
|
jdg_371467
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47955558
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote a script to scrape some web data that I would like to run every 10 minutes. The script runs fine when I do it from the command line but I get an error when I try to run it using a cronjob every 10 minutes because it can't import bs4'. I get this error: No module named bs4 My crontab: */10 * * * * PYTHONPATH=/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages python /Users/{username}/Documents/scripts/cot.py I got the PYTHONPATH variable by finding where 'bs4' was downloaded using this command: python -c 'import bs4; print(bs4.__file__)' Output in /var/{username}/mail: X-Cron-Env: <SHELL=/bin/sh>X-Cron-Env: <PATH=/usr/bin:/bin>X-Cron-Env: <LOGNAME={username}>X-Cron-Env: <USER={username}>X-Cron-Env: <HOME=/Users/{username}>Message-Id: <20171223194001.0783F8639E1@mbp-0500.localdomain>Date: Sat, 23 Dec 2017 13:40:00 -0600 (CST)Traceback (most recent call last): File "/Users/{username}/Documents/scripts/cot.py", line 3, in <module> from bs4 import BeautifulSoupImportError: No module named bs4
Now provide the response and nothing else.
|
Try https://github.com/SeleniumHQ/docker-selenium . It has python installed: $ docker run selenium/standalone-chrome python3 --versionPython 3.5.2 The instructions indicate you start it with docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome Edit: To allow selenium to run through python it appears you need to install the packages. Create this Dockerfile : FROM selenium/standalone-chromeUSER rootRUN wget https://bootstrap.pypa.io/get-pip.pyRUN python3 get-pip.pyRUN python3 -m pip install selenium Then you could run it with docker build . -t selenium-chrome && \ docker run -it selenium-chrome python3 The advantage compared to the plain python docker image is that you won't need to install the chromedriver itself since it comes from selenium/standalone-chrome .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47955558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2636200/']}
|
jdg_371468
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
158781
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When reading about web servers, frameworks, etc., most of the time I notice that the goal is to have a technology with the following features: able to handle as many connections as possible, and fit an I/O-bound model (connections to DBs and other web services). Those features fit the typical web model, but I am interested in knowing which technologies will fit a CPU-heavy use case. For example, Node.js is a technology that really shines when you have to write an application that does a lot of I/O. On the other hand, due to Node.js's evented nature, it is not suitable for use in CPU-heavy cases (video encoding, machine learning, graphics). I have also taken a look at Haskell web frameworks like Snap and Warp, and in the benchmarks they really are fast. Are Haskell web frameworks suitable for CPU-heavy problems? Which other languages/technologies are candidates?
Now provide the response and nothing else.
|
What you're asking for is typical of an n-tier architecture. You want a web tier that is designed for IO, that's its primary task. When it needs to perform some heavy processing then you need to farm that out to another tier that is designed for CPU crunching. A lot of websites do this (at least the really big ones), they have a farm of web servers spreading the IO load, but these are there solely for presentation work - handing back static files and taking in requests. When a request for work comes in, the web server passes the request to an application tier, where app servers crunch the data and pass the results back to the webserver to return to the user. The benefits are in scalability - you can add a new app server anytime you like if you have more cpu crunching to do, similarly, if the amount of IO requests gets too much, you can add a new web server. What you must not do is think of a web server as an applications server. Whilst it can perform that role in a small application, that's only there as a convenience. So node.js can perform the task you want, you just need to send any CPU-heavy work off somewhere. It's easy to send requests (on an async method!) to another service or process. As node runs on a single thread, that means you have several others on the same box available, assuming you've not filled them with DB work ( which is a CPU-heavy task itself ).
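A bare-bones sketch of that hand-off in Node.js: the web tier stays I/O-only and streams each heavy request to a separate app-tier service (the host name and path here are invented for illustration):

const http = require('http');

http.createServer((req, res) => {
  // forward the CPU-heavy work to a hypothetical app-tier box and stream the result back
  const upstream = http.request(
    { host: 'app-tier.internal', port: 8080, method: req.method, path: '/crunch' },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(3000);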
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/158781', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/32976/']}
|
jdg_371469
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7283156
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem while upserting to mongo db using the official C# driver. public abstract class AggregateRoot{ /// <summary> /// All mongoDb documents must have an id, we specify it here /// </summary> protected AggregateRoot() { Id = ObjectId.GenerateNewId(); } [BsonId] public ObjectId Id { get; set; }} My entities already have ids, but I had to create the mongo-specific Id for it to work, as all the documents in a collection should have one. Now when I receive a new entity in my system, a new Mongo Id is generated and I get the old mongo cannot change _id of a document exception. Is there some work-around? Let me describe the design a bit. All the entities which would be stored as documents were inheriting from AggregateRoot, which had the id generation in it. Every sub-document had its id generated automatically and I had no problem with this. The id in AggregateRoot was introduced to correct the problem when retrieving data from MongoCollection to List, and the generation was introduced so the ids are different. We could move that id generation to the save methods, because the new entity for update had a new id generated. But it would mean that every dev on the team must not forget to generate ids in the repository, which is risky. It would be nicer just to ignore the id when mapping from mongo, if that is possible, and not to have the AggregateRoot class at all.
Now provide the response and nothing else.
|
I've encountered similar problem.I wanted to upsert documents using official C# driver. I had a class like this: public class MyClass{ public ObjectId Id { get; set; } public int Field1 { get; set; } public string Field2 { get; set; }} In console I would write: db.collection.update({Field1: 3},{Field1: 3, Field2: "value"}) and it would work. In C# I wrote: collection.Update(Query.EQ("Field1", 3), Update.Replace(new MyClass { Field1 = 3, Field2 = "value" }), UpdateFlags.Upsert); and it didn't work! Because driver includes empty id in update statement and when I upsert second document with different value of Field1 exception E11000 duplicate key error index is thrown (in this case Mongo tries to insert a document with _id that already exists in db). When I generated _id by myself (like topic starter) I've encountered the same exception ( mongo cannot change _id of a document ) on upserting objects with existing value of Field1. Solution is to mark Id property by attribute [BsonIgnoreIfDefault] (and not initialize it). In this case driver omits _id field in update statement and MongoDb generates Id if it necessary.
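A sketch of the class with the attribute applied (the MongoDB.Bson namespaces are assumed to be referenced):

using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

public class MyClass
{
    // left at its default value, so the driver omits _id on upsert and MongoDB generates it
    [BsonIgnoreIfDefault]
    public ObjectId Id { get; set; }

    public int Field1 { get; set; }
    public string Field2 { get; set; }
}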
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7283156', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/855959/']}
|
jdg_371470
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
117623
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Backstory: I'm using log4net to handle all logging for a project I'm working on. One particular method can be called under several different circumstances -- some that warrant the log messages to be errors and others that warrant the log messages to be warnings. So, as an example, how could I turn Public Sub CheckDifference(ByVal A As Integer, ByVal B As Integer) If (B - A) > 5 Then log.ErrorFormat("Difference ({0}) is outside of acceptable range.", (B - A)) End IfEnd Sub Into something more along the lines of: Public Sub CheckDifference(ByVal A As Integer, ByVal B As Integer, "Some delegate info here") If (B - A) > 5 Then **delegateinfo**.Invoke("Difference ({0}) is outside of acceptable range.", (B - A)) End IfEnd Sub So that I could call it and pass either log.ErrorFormat or log.WarnFormat as the delegate? I'm using VB.NET with VS 2008 and .NET 3.5 SP1. Also, I'm fairly new to delegates in general, so if this question should be worded differently to remove any ambiguities, let me know. EDIT: Also, how could I initialize the delegate to either the ErrorFormat or the WarnFormat in the class constructor? Would it be as easy as myDelegate = log.ErrorFormat ? I would imagine there is more to it than that (pardon my ignorance on the subject -- delegates are really something I want to learn more about, but so far they have eluded my understanding).
Now provide the response and nothing else.
|
Declare your delegate signature: Public Delegate Sub Format(ByVal value As String) Define your Test function: Public Sub CheckDifference(ByVal A As Integer, _ ByVal B As Integer, _ ByVal format As Format) If (B - A) > 5 Then format.Invoke(String.Format( _ "Difference ({0}) is outside of acceptable range.", (B - A))) End IfEnd Sub Somewhere in your code call your Test function: CheckDifference(Foo, Bar, AddressOf log.WriteWarn) Or CheckDifference(Foo, Bar, AddressOf log.WriteError)
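To the follow-up about initialising the delegate in a constructor: a rough sketch, assuming the same log object with WriteError/WriteWarn methods used above (relaxed delegate conversion lets you assign AddressOf directly to the field):

Public Class DifferenceChecker
    Private ReadOnly _format As Format

    Public Sub New(ByVal treatAsError As Boolean)
        ' pick the logging method once, at construction time
        If treatAsError Then
            _format = AddressOf log.WriteError
        Else
            _format = AddressOf log.WriteWarn
        End If
    End Sub

    Public Sub CheckDifference(ByVal A As Integer, ByVal B As Integer)
        If (B - A) > 5 Then
            _format.Invoke(String.Format("Difference ({0}) is outside of acceptable range.", (B - A)))
        End If
    End Sub
End Class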
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/117623', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1108/']}
|
jdg_371471
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41385403
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to automatically remove completed Jobs besides making a CronJob to clean up completed Jobs? The K8s Job Documentation states that the intended behavior of completed Jobs is for them to remain in a completed state until manually deleted. Because I am running thousands of Jobs a day via CronJobs and I don't want to keep completed Jobs around.
Now provide the response and nothing else.
|
You can now set history limits, or disable history altogether, so that failed or successful CronJobs are not kept around indefinitely. See my answer here . Documentation is here . To set the history limits : The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. The config with 0 limits would look like: apiVersion: batch/v1beta1kind: CronJobmetadata: name: hellospec: schedule: "*/1 * * * *" successfulJobsHistoryLimit: 0 failedJobsHistoryLimit: 0 jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: OnFailure
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/41385403', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4647873/']}
|
jdg_371472
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
201939
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
At my office, there is a Windows Server 2008 router, 192.168.16.1, which is our default gateway. This sends data through our main internet connection. This works perfectly. 192.168.16.1 also has another internet connection that is currently sitting unused. This connects to a router, 192.168.1.254 which then connects to our secondary internet connection. I have set up Routing and Remote Access to route the 192.168.1.* range to our internal network, and I am able to ping 192.168.1.254 from my desktop that has an IP address of 192.168.16.22. I have a very intensive upload and download task for a linux box in the same office, so I want to set up a static route so I do not tie up our main internet connection and affect my co-workers. So, my plan is to set up this box with a static route. From the linux box, I am able to ping 192.168.1.254, but I am unable to add a static route to the destination server in our data center: $ sudo route add -host 123.123.123.123 gw 192.168.1.254 dev eth0SIOCADDRT: No such process From extensive googling, this error message seems to indicate that I am unable to access 192.168.1.254 to set up a route, yet I can ping it. I attempted to add a static route to 192.168.1.254, which worked perfectly, but this did not solve the issue (as expected). I am not advanced enough at networking to diagnose this issue further as my background is software development, not system administration. I would be very grateful for any advice given. Thank you for your time.
Now provide the response and nothing else.
|
Is the desktop on 192.168.16.0/24? You cannot add a route (to any target) via a gateway that you are not directly connected to, because your computer does not know how to find its way to that router without going through the default route.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/201939', 'https://serverfault.com', 'https://serverfault.com/users/60304/']}
|
jdg_371473
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8771167
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing a small Java app using JavaMail that sends the user an automated email. They can choose between (for now) two ports: 25 and 587. The port can be selected via a radio button on the GUI. I added a test button to allow the user to test the email settings (including port). However, for some reason, once the user tries to send a test email, the port can't be changed. Javamail will always use the port of the original test email. Example: User tries to send an email on port 25 and JavaMail says it can not connect on port 25 (for example, the SMTP host uses another port). User clicks port 587, and tries to send a new email. JavaMail throws an error saying it can not connect on port 25, again. I'm kind of stumped as to why. Every time a new test email is sent an entirely new SendMailUsingAuthentication object is created. Within that class the properties are always reset to the proper port. Whenever I debug, as far as I can see, all variables are correct and associated with the correct port. Is there something going on inside of Transport that I'm missing? In the front end GUI: private void testButtonActionPerformed(java.awt.event.ActionEvent evt) { int port = port25RadioButton.isSelected() ? PORT_25 : PORT_587; notifier = new SendMailUsingAuthentication(hostNameTextField.getText(), userTextField.getText(), getPassword(), emailTextField.getText().split(","),port); Thread wait = new Thread(new Runnable() { public void run() { try { changeStatusText("Sending test email..."); notifier.postTestMail(); changeStatusText("Test email sent."); } catch (AddressException ex) { changeStatusText("Error. Invalid email address name."); } catch (MessagingException ex) { changeStatusText("SMTP host connection refused."); System.err.println(ex.getMessage()); } catch (Exception ex) { System.err.println(ex); } } }); wait.start();} In the email sender class: public void postTestMail() throws MessagingException, AddressException{ String[] testReciever = new String[1]; testReciever[0] = emailList[0]; postMail(testReciever, "Test email.", "Your email settings are successfully set up.", emailFromAddress);}private void postMail(String recipients[], String subject, String message, String from) throws MessagingException, AddressException { //Set the host smtp address Properties props = new Properties(); props.put("mail.smtp.port", smtpPort); props.put("mail.smtp.host", smtpHostName); props.put("mail.smtp.auth", "true"); props.put("mail.smtp.starttls.enable", true); Authenticator auth = new SMTPAuthenticator(); Session session = Session.getDefaultInstance(props, auth); session.setDebug(false); // create a message Message msg = new MimeMessage(session); // set the from and to address InternetAddress addressFrom = new InternetAddress(from); msg.setFrom(addressFrom); InternetAddress[] addressTo = new InternetAddress[recipients.length]; for (int i = 0; i < recipients.length; i++) { addressTo[i] = new InternetAddress(recipients[i]); } msg.setRecipients(Message.RecipientType.TO, addressTo); // Setting the Subject and Content Type msg.setSubject(subject); msg.setContent(message, "text/plain"); Transport.send(msg);}
Now provide the response and nothing else.
|
This happens because you're using getDefaultInstance() which says : Get the default Session object. If a default has not yet been setup, a new Session object is created and installed as the default. And that the Properties argument is "used only if a new Session object is created." So the first time you invoke getDefaultInstance it uses your specified port. After that, the Session has already been created, and subsequent calls to getDefaultInstance will return that same session, and ignore the changed properties. Try using Session.getInstance() instead of getDefaultInstance() , which creates a new Session each time, using the supplied properties. It pays to read the javadocs very carefully.
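The change inside the question's postMail method is a one-liner; a sketch using the question's own variable names:

// getDefaultInstance caches the first Session ever built, so the original port sticks around;
// getInstance builds a fresh Session from the current Properties on every call
Properties props = new Properties();
props.put("mail.smtp.host", smtpHostName);
props.put("mail.smtp.port", smtpPort);
props.put("mail.smtp.auth", "true");
props.put("mail.smtp.starttls.enable", "true");

Authenticator auth = new SMTPAuthenticator();
Session session = Session.getInstance(props, auth);   // was Session.getDefaultInstance(props, auth)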
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8771167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/759553/']}
|
jdg_371474
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31121513
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a class where every method starts the same way: class Foo { public void bar() { if (!fooIsEnabled) return; //... } public void baz() { if (!fooIsEnabled) return; //... } public void bat() { if (!fooIsEnabled) return; //... }} Is there a nice way to require (and hopefully not write each time) the fooIsEnabled part for every public method in the class?
Now provide the response and nothing else.
|
I don't know about elegant, but here is a working implementation using Java's built-in java.lang.reflect.Proxy that enforces that all method invocations on Foo begin by checking the enabled state. main method: public static void main(String[] args) { Foo foo = Foo.newFoo(); foo.setEnabled(false); foo.bar(); // won't print anything. foo.setEnabled(true); foo.bar(); // prints "Executing method bar"} Foo interface: public interface Foo { boolean getEnabled(); void setEnabled(boolean enable); void bar(); void baz(); void bat(); // Needs Java 8 to have this convenience method here. static Foo newFoo() { FooFactory fooFactory = new FooFactory(); return fooFactory.makeFoo(); }} FooFactory class: import java.lang.reflect.InvocationHandler;import java.lang.reflect.Method;import java.lang.reflect.Proxy;public class FooFactory { public Foo makeFoo() { return (Foo) Proxy.newProxyInstance( this.getClass().getClassLoader(), new Class[]{Foo.class}, new FooInvocationHandler(new FooImpl())); } private static class FooImpl implements Foo { private boolean enabled = false; @Override public boolean getEnabled() { return this.enabled; } @Override public void setEnabled(boolean enable) { this.enabled = enable; } @Override public void bar() { System.out.println("Executing method bar"); } @Override public void baz() { System.out.println("Executing method baz"); } @Override public void bat() { System.out.println("Executing method bat"); } } private static class FooInvocationHandler implements InvocationHandler { private FooImpl fooImpl; public FooInvocationHandler(FooImpl fooImpl) { this.fooImpl = fooImpl; } @Override public Object invoke(Object proxy, Method method, Object[] args) throws Throwable { if (method.getDeclaringClass() == Foo.class && !method.getName().equals("getEnabled") && !method.getName().equals("setEnabled")) { if (!this.fooImpl.getEnabled()) { return null; } } return method.invoke(this.fooImpl, args); } }} As others have pointed out, it does seem like overkill for what you need if you only have a handful of methods to worry about. That said, there certainly are benefits: A certain separation of concerns is achieved, because Foo 's method implementations don't have to worry about the enabled check cross-cutting concern. Instead, the method's code only needs to worry about what the method's primary purpose is, nothing more. There is no way for an innocent developer to add a new method to the Foo class and mistakenly "forget" to add the enabled check. The enabled check behavior is automatically inherited by any newly added method. If you need to add another cross-cutting concern, or if you need to enhance the enabled check, it's very easy to do so safely and in one place. It is kind of nice that you can get this AOP-like behavior with built-in Java functionality. You are not forced into having to integrate some other framework like Spring , though they can definitely be good options too. To be fair, some of the downsides are: Some of the implementation code that handles the proxy invocations is ugly. Some would also say that having inner classes to prevent instantiation of the FooImpl class is ugly. If you want to add a new method to Foo , you have to make a change in 2 spots: the implementation class and the interface. Not a big deal, but it's still a bit more work. Proxy invocations are not free. There is a certain performance overhead. For general use though, it won't be noticeable. See here for more information. 
EDIT: Fabian Streitel's comment got me thinking about 2 annoyances with my above solution that, I'll admit, I'm not happy about myself: The invocation handler uses magic strings to skip the "enabled-check" on the "getEnabled" and "setEnabled" methods. This can easily break if the method names are refactored. If there was a case where new methods need to be added that should not inherit the "enabled-check" behavior, then it can be pretty easy for the developer to get this wrong, and at the very least, it would mean adding more magic strings. To resolve point #1, and to at least ease the problem with point #2, I would create an annotation BypassCheck (or something similar) that I could use to mark the methods in the Foo interface for which I don't want to perform the "enabled check". This way, I don't need magic strings at all, and it becomes a lot easier for a developer to correctly add a new method in this special case. Using the annotation solution, the code would look like this: main method: public static void main(String[] args) { Foo foo = Foo.newFoo(); foo.setEnabled(false); foo.bar(); // won't print anything. foo.setEnabled(true); foo.bar(); // prints "Executing method bar"} BypassCheck annotation: import java.lang.annotation.*;@Target(ElementType.METHOD)@Retention(RetentionPolicy.RUNTIME)public @interface BypassCheck {} Foo interface: public interface Foo { @BypassCheck boolean getEnabled(); @BypassCheck void setEnabled(boolean enable); void bar(); void baz(); void bat(); // Needs Java 8 to have this convenience method here. static Foo newFoo() { FooFactory fooFactory = new FooFactory(); return fooFactory.makeFoo(); }} FooFactory class: import java.lang.reflect.InvocationHandler;import java.lang.reflect.Method;import java.lang.reflect.Proxy;public class FooFactory { public Foo makeFoo() { return (Foo) Proxy.newProxyInstance( this.getClass().getClassLoader(), new Class[]{Foo.class}, new FooInvocationHandler(new FooImpl())); } private static class FooImpl implements Foo { private boolean enabled = false; @Override public boolean getEnabled() { return this.enabled; } @Override public void setEnabled(boolean enable) { this.enabled = enable; } @Override public void bar() { System.out.println("Executing method bar"); } @Override public void baz() { System.out.println("Executing method baz"); } @Override public void bat() { System.out.println("Executing method bat"); } } private static class FooInvocationHandler implements InvocationHandler { private FooImpl fooImpl; public FooInvocationHandler(FooImpl fooImpl) { this.fooImpl = fooImpl; } @Override public Object invoke(Object proxy, Method method, Object[] args) throws Throwable { if (method.getDeclaringClass() == Foo.class && !method.isAnnotationPresent(BypassCheck.class) // no magic strings && !this.fooImpl.getEnabled()) { return null; } return method.invoke(this.fooImpl, args); } }}
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31121513', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4243/']}
|
jdg_371475
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12901742
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have made a jsonarray of jsonobjects. Now I need to sort the JSONArray on base of a value from the jsonobjects. Formerly I sorted ArrayLists of custom objects like this: Comparators: public class KreeftenComparatorLA implements Comparator<Kreeft> { public int compare(Kreeft left, Kreeft right) { return left.latijnseNaam.compareTo(right.latijnseNaam); }}public class KreeftenComparatorNL implements Comparator<Kreeft> { public int compare(Kreeft left, Kreeft right) { return left.naam.compareTo(right.naam); }} And then sort the arraylist: Collections.sort(db.lijst, new KreeftenComparatorLA()); or: Collections.sort(db.lijst, new KreeftenComparatorNL()); But when I try the same thing with the JSONArray like this (JA = my jsonarray) Collections.sort(JA, new KreeftenComparatorNL()); the Collections.sort gives an error: The method sort(List, Comparator) in the type Collections is not applicable for the arguments (JSONArray, ThisActicity.KreeftenComparatorNL) Does anybody know how to sort the JSONArray?
Now provide the response and nothing else.
|
The issue is that JSONArray more or less holds JSONObjects (and other JSONArrays) which ultimately are strings. Deserializing the strings entirely into POJOs, sorting those, then back into JSON is fairly heavy. The second issue is that a JSONArray can contain: Boolean, JSONArray, JSONObject, Number, String, or the JSONObject.NULL object; i.e. it is mixed types, making it hard to just dump the elements into a List of some type and sort that, then pass through the list dumping sorted items back into the JSON array. the only certain way to get a common type of each element from the JSONArray is using the Object get() method.. of course then all you have is Object objects and won't be able to do any meaningful sorting on them without revisiting the serialization issue. Assuming your JSONArray contains homogeneously structured values, you could iterate through the JSONArray, calling one of the typed get() methods on each one, dumping them into a List type, then sorting on that. If your JSONArray just holds "simple" type like String or numbers, this is relatively easy. This isn't exact code but something like: List<String> jsonValues = new ArrayList<String>();for (int i = 0; i < myJsonArray.length(); i++) jsonValues.add(myJsonArray.getString(i));Collections.sort(jsonValues);JSONArray sortedJsonArray = new JSONArray(jsonValues); Of course, if you have nested objects this can get a little trickier. If the value(s) you want to sort on live in the top level, it may not be soo bad... List<JSONObject> jsonValues = new ArrayList<JSONObject>();for (int i = 0; i < myJsonArray.length(); i++) jsonValues.add(myJsonArray.getJSONObject(i)); Then use a comparator like this to sort: class JSONComparator implements Comparator<JSONObject>{ public int compare(JSONObject a, JSONObject b) { //valA and valB could be any simple type, such as number, string, whatever String valA = a.get("keyOfValueToSortBy"); String valB = b.get("keyOfValueToSortBy"); return valA.compareTo(valB); //if your value is numeric: //if(valA > valB) // return 1; //if(valA < valB) // return -1; //return 0; }} Again, this makes some assumptions about the homogeneity of the data in your JSONArray. Adjust to your case if possible. Also you will need to add your exception handling, etc. Happy coding! edit fixed based on comments
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12901742', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1380611/']}
|
jdg_371476
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11686208
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a shell script where I have to check whether an environment variable is set; if it is not set, then I have to set it. Is there any way in a shell script to check whether an environment variable is already set or not?
Now provide the response and nothing else.
|
[ -z "$VARIABLE" ] && VARIABLE="abc"if env | grep -q ^VARIABLE=then echo env variable is already exportedelse echo env variable was not exported, but now it is export VARIABLEfi I want to stress that [ -z $VARIABLE ] is not enough, because you can have VARIABLE but it was not exported. That means that it is not an environment variable at all.
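For the "set it if it is not already set" half of the question, a small sketch:

# keep the existing value when present, otherwise fall back to a default, and export it
export VARIABLE="${VARIABLE:-abc}"

# to distinguish "unset" from "set but empty", test with the +x expansion
if [ -z "${VARIABLE+x}" ]; then
    echo "VARIABLE is unset"
elif [ -z "$VARIABLE" ]; then
    echo "VARIABLE is set but empty"
fi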
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11686208', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/646276/']}
|
jdg_371477
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8470889
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to install sbt-assembly by following the instructions in order to make a stand-alone jar that can run on a computer without scala installed. So far these are the steps I've taken. I created a plugins.sbt file: $ cat sbt/project/plugins.sbt addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.7.2") And I added the following to the beginning of my build.sbt file: $ head -n3 sbt/build.sbt import AssemblyKeys._ // put this at the top of the fileseq(assemblySettings: _*) But when I run sbt, I get the following error: sbt/build.sbt:1: error: not found: value AssemblyKeysimport AssemblyKeys._
Now provide the response and nothing else.
|
Make sure you are running sbt version at least 0.11 by typing $ sbt sbt-version at the bash prompt. Make sure you have the plugins file set up as follows: $ cat sbt/project/plugins.sbtaddSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.7.2") Make your build file ( build.sbt ) look like this: import AssemblyKeys._ seq(assemblySettings: _*)name := "my_project"version := "1.0"scalaVersion := "2.9.1"libraryDependencies ++= Seq( "org.scalatest" %% "scalatest" % "1.6.1" % "test", "commons-lang" % "commons-lang" % "2.6")traceLevel in run := 0fork in run := truescalacOptions ++= Seq("-optimize")// The following is the class that will run when the jar is compiled!mainClass in assembly := Some("MyMain")
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8470889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/244526/']}
|
jdg_371478
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
432063
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity. Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced as preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs - however this does leave the array degraded temporarily, so for this example suppose we install D without offlining or removing C. Solaris docs indicate that we can replace a disk without first offlining it, using a command such as: zpool replace pool C D This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor." (I don't know the actual terminology used in the internal implementation.) Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't state in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast. Some experimenting could answer this question, but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!
Now provide the response and nothing else.
|
Testing with a file based pool (v28 on FreeBSD 8.3 using file-backed md devices) suggests that it should work. I was able to offline one of the remaining disks while the resilver was in progress. Ideally it'd need testing with real disks and to actually pull one to be 100% sure but ZFS was perfectly happy to let me offline the disk. Before offlining md0, the pool was still fully ONLINE so it appears to me that ZFS is just mirroring the replaced disk to the new disk, but still treating the whole lot as available during the process. NAME STATE READ WRITE CKSUM test DEGRADED 0 0 0 raidz1-0 DEGRADED 0 0 0 8480467682579886773 OFFLINE 0 0 0 was /dev/md0 md1 ONLINE 0 0 0 replacing-2 ONLINE 0 0 0 md2 ONLINE 0 0 0 md3 ONLINE 0 0 0 (resilvering)
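Roughly how a file-backed test like that can be reproduced (FreeBSD mdconfig syntax; file sizes and paths are arbitrary):

# four small backing files become md0..md3
truncate -s 256m /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
for f in /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3; do mdconfig -a -t vnode -f "$f"; done

zpool create test raidz1 md0 md1 md2   # three-disk single-parity pool
zpool replace test md2 md3             # start resilvering md2 onto md3
zpool offline test md0                 # take another member out mid-resilver
zpool status test                      # the pool should report DEGRADED, not FAULTED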
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/432063', 'https://serverfault.com', 'https://serverfault.com/users/137215/']}
|
jdg_371479
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3239768
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A mass $M$ is drawn up a straight incline of given height $h$ by a mass $m$ which is attached to the first mass by a string passing from it over a pulley at the top of the incline (Fig 10-10) and which hangs vertically. Find the angle of the incline in order that the time of ascent be minimum. I know that the net force acting on $M$ is $32m - 32M\sin A$ (assuming that the acceleration due to gravity is 32 ft/s$^2$), but I'm not sure how to use this information to minimize the time of ascent. I've tried applying the work formula as follows: $$W = Fd = (32m-32M\sin A) \frac{h}{\sin A}$$ $$W = \frac{32mh}{\sin A} - 32Mh$$ I then decided to take the derivative with respect to $h$ and set it to zero. By minimizing $h$, the hypotenuse is minimized, which means that the time of ascent will be less since less distance has to be covered by the mass $M$. The derivative of the above gave me the following: $$ \frac{32m}{\sin A} - 32M = 0$$ $$\sin A = \frac{m}{M}$$ However, my book said that the answer is $\sin A = \frac{m}{2M}$. I'm not really sure why. Any help would be appreciated. Thanks in advance.
Now provide the response and nothing else.
|
The trick is to look at the actual arrows involved. Also for convenience sake, I will assume $N$ is actually $M$ delete an open ball, since they're homotopy equivalent. The $0$ , $1$ exact sequence Consider the exact sequence $$ 0 \to H^0(M)\to H^0(N)\oplus \Bbb{R}\to \Bbb{R}\to H^1(M)\to H^1(N)$$ The trick is to observe that $H^0(M)=\Bbb{R}$ , since $M$ is connected. Thus we have $$ 0 \to \Bbb{R} \to H^0(N)\oplus \Bbb{R}\to \Bbb{R},$$ and since $N\ne \varnothing$ , we see that the dimension of $H^0(N) \oplus \Bbb{R}$ is greater than or equal to 2, but it's also at most 2, since it fits into this exact sequence. Thus $H^0(N)=\Bbb{R}$ and the map $H^0(N)\oplus \Bbb{R}\to \Bbb{R}$ is surjective. We could also show this geometrically ( $N$ must be connected, so $H^0(N)=\Bbb{R}$ from that consideration alone, and surjectivity of the map is clear geometrically). Regardless, surjectivity of this map allows us to split the exact sequence as $$ 0\to H^0(M)\to H^0(N)\oplus \Bbb{R} \to \Bbb{R} \to 0$$ and $$ 0\to H^1(M)\to H^1(N).$$ If $n> 2$ , then this fits in as $0\to H^1(M)\to H^1(N)\to 0$ , giving $H^1(M)\simeq H^1(N)$ . Otherwise this is part of the other exact sequence we don't understand, so let's look at that one next. The $n$ , $n-1$ exact sequence We have $$ 0 \to H^{n-1}(M) \to H^{n-1}(N) \to \Bbb{R} \to H^n(M)\to H^n(N) \to 0$$ Consider the map $H^{n-1}(N)\to \Bbb{R}$ . This is induced by restricting a differential form $\omega$ representing some cohomology class $[\omega]$ , defined on $N$ to the boundary sphere of $N$ . Then observe that $$\int_{\partial N} \omega = \int_N d\omega = \int_N 0 = 0,$$ by Stokes' theorem and that $\omega$ is closed. Therefore the image of $[\omega]$ in $\Bbb{R} = H^{n-1}(\partial N)$ is $0$ . Thus the map $H^{n-1}(N)\to \Bbb{R}$ is $0$ . Therefore we can split the exact sequence again into the pieces $$0\to H^{n-1}(M)\to H^{n-1}(N)\to 0$$ and $$0\to \Bbb{R}\to H^n(M)\to H^n(N)\to 0,$$ and using that $H^n(M)=\Bbb{R}$ , since $M$ is connected, compact, and orientable, we see that $H^n(N)=0$ , as desired. Alternatively without using that $H^n(M)=\Bbb{R}$ , we already have $b_n(N) = b_n(M)-1$ .
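For the incline-and-pulley minimisation that the question asks about, a compact worked sketch (assuming a frictionless incline, a massless string, and writing $g$ for the gravitational acceleration): $$a=\frac{g\,(m-M\sin A)}{m+M},\qquad \frac{h}{\sin A}=\tfrac12\,a\,t^{2}\ \Longrightarrow\ t^{2}=\frac{2h\,(m+M)}{g\,\sin A\,(m-M\sin A)} .$$ $$\text{Minimising } t \text{ means maximising } \sin A\,(m-M\sin A);\ \text{with } s=\sin A,\ \frac{d}{ds}\bigl(ms-Ms^{2}\bigr)=m-2Ms=0\ \Longrightarrow\ \sin A=\frac{m}{2M}.$$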
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3239768', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/559192/']}
|
jdg_371480
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33058590
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to learn pandas but I have been puzzled with the following. I want to replace NaNs in a DataFrame with the row average. Hence something like df.fillna(df.mean(axis=1)) should work but for some reason it fails for me. Am I missing anything, is there something wrong with what I'm doing? Is it because its not implemented? see link here import pandas as pdimport numpy as nppd.__version__Out[44]:'0.15.2'In [45]:df = pd.DataFrame()df['c1'] = [1, 2, 3]df['c2'] = [4, 5, 6]df['c3'] = [7, np.nan, 9]dfOut[45]: c1 c2 c30 1 4 71 2 5 NaN2 3 6 9In [46]: df.fillna(df.mean(axis=1)) Out[46]: c1 c2 c30 1 4 71 2 5 NaN2 3 6 9 However something like this looks to work fine df.fillna(df.mean(axis=0)) Out[47]: c1 c2 c30 1 4 71 2 5 82 3 6 9
Now provide the response and nothing else.
|
As commented the axis argument to fillna is NotImplemented . df.fillna(df.mean(axis=1), axis=1) Note: this would be critical here as you don't want to fill in your nth columns with the nth row average. For now you'll need to iterate through: m = df.mean(axis=1)for i, col in enumerate(df): # using i allows for duplicate columns # inplace *may* not always work here, so IMO the next line is preferred # df.iloc[:, i].fillna(m, inplace=True) df.iloc[:, i] = df.iloc[:, i].fillna(m)print(df) c1 c2 c30 1 4 7.01 2 5 3.52 3 6 9.0 An alternative is to fillna the transpose and then transpose, which may be more efficient... df.T.fillna(df.mean(axis=1)).T
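An equivalent row-wise formulation without the explicit column loop — a sketch using apply, which is compact though slower on large frames:

import numpy as np
import pandas as pd

df = pd.DataFrame({'c1': [1, 2, 3], 'c2': [4, 5, 6], 'c3': [7, np.nan, 9]})

# fill each NaN with the mean of its own row; row 1's NaN becomes 3.5 (mean of 2 and 5)
filled = df.apply(lambda row: row.fillna(row.mean()), axis=1)
print(filled)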
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33058590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5431734/']}
|
jdg_371481
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
539970
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
One coin is labelled with the number $1$, two different coins are labelled with the number $2$, three different coins are labelled with the number $3$, $\ldots$ , forty-nine different coins are labelled with the number $49$, fifty different coins are labelled with the number $50$. All of these coins are then put into a black bag. The coins are then randomly drawn one by one. We need $10$ coins of the same type. What is the minimum number of coins that must be drawn to make sure that we have at least $10$ coins of the same type?
Now provide the response and nothing else.
|
This is a Pigeonhole Principle problem. First select the coins labeled $\{1,2,3,...,9\}$ since none of these types can reach $10$. So we have selected $1+2+3+\cdots+9=45$ coins so far. Now we have $41$ coins left. If we select $9$ coins of each type we will have selected $41\cdot 9=369$ coins. Thus the number of coins we can select before we select $10$ of the same type is $45+369=414$ and so the very next coin that is selected will give us $10$ of the same type. Hence $415$ coins need to be selected in order to guarantee that we have $10$ of the same type.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/539970', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/53489/']}
|
jdg_371482
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30545198
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an HMSegmentedControl with 4 segments. When a segment is selected, it should show a popup view. When the popup is dismissed and the same segment index is tapped again, it should show the popup again. With the following, nothing happens on a tap of the same segment index after the popup is dismissed. segmetedControl.addTarget(self, action: "segmentedControlValueChanged:", forControlEvents: UIControlEvents.ValueChanged)
Now provide the response and nothing else.
|
You can add the same target for multiple events. So let's say your segmentedControlValueChanged: looks like this: @objc func segmentedControlValueChanged(_ sender: UISegmentedControl) { if sender.selectedSegmentIndex == 0 { // value for first index selected here }} Then you can add targets for more than one event to call this function: segmentedControl.addTarget(self, action: #selector(segmentedControlValueChanged(_:)), for: .valueChanged)segmentedControl.addTarget(self, action: #selector(segmentedControlValueChanged(_:)), for: .touchUpInside) Now your function will get called when the value changes and when the user releases their finger.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30545198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4861280/']}
|
jdg_371483
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47134698
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have got a problem with getting excel file and opening download window in the browser after getting a response (in success ajax method) with that file. I have got appropriate Content-Type and Content-Disposition headers , I tried using Blob in js and I couldn't achieve what I want - simple file downloading. I accomplished few versions of my ajax, one of them is below. I developed ajax which returns excel file which I couldn't open properly because it's corrupted (despite .xlsx extension). Maybe the problem is with inappropriate data type used in Blob constructor? I tried using "xhr.response" instead of "data" from success method arguments but it doesn't work too. I checked Response Headers in Developer Tools in Chrome and they are set properly. The important thing - all the excel workbooks created on the server side is correct because it worked in the previous version when data was sent in URL, not in ajax post. Controller method in Java/Spring server side below: response.reset();response.setContentType("application/vnd.ms-excel");response.addHeader("Content-Disposition","attachment;filename=\"" + className + " " + title + ".xlsx\""); try (ServletOutputStream output = response.getOutputStream()){ workbook.write(output); output.flush(); } catch (Exception e) { throw new RuntimeException(e); } My Ajax to download file and open download window: $.ajax({ url: myUrl, type: 'POST', data: myData, success: function(data, status, xhr) { var contentType = 'application/vnd.ms-excel'; var filename = ""; var disposition = xhr.getResponseHeader('Content-Disposition'); if (disposition && disposition.indexOf('attachment') !== -1) { var filenameRegex = /filename[^;=\n]*=((['"]).*?\2|[^;\n]*)/; var matches = filenameRegex.exec(disposition); if (matches != null && matches[1]) filename = matches[1].replace(/['"]/g, ''); } console.log("FILENAME: " + filename); try { var blob = new Blob([data], { type: contentType }); var downloadUrl = URL.createObjectURL(blob); var a = document.createElement("a"); a.href = downloadUrl; a.download = filename; document.body.appendChild(a); a.click(); } catch (exc) { console.log("Save Blob method failed with the following exception."); console.log(exc); }
Now provide the response and nothing else.
|
It looks like JQuery have got some problem with dealing with the binary data from the response. I used simply XMLHttpRequest and I add all data to the URL. var request = new XMLHttpRequest();request.open('POST', url, true);request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8');request.responseType = 'blob';request.onload = function(e) { if (this.status === 200) { var blob = this.response; if(window.navigator.msSaveOrOpenBlob) { window.navigator.msSaveBlob(blob, fileName); } else{ var downloadLink = window.document.createElement('a'); var contentTypeHeader = request.getResponseHeader("Content-Type"); downloadLink.href = window.URL.createObjectURL(new Blob([blob], { type: contentTypeHeader })); downloadLink.download = fileName; document.body.appendChild(downloadLink); downloadLink.click(); document.body.removeChild(downloadLink); } } }; request.send();
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47134698', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6054528/']}
|
jdg_371484
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
321336
|
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My intentions have been leaning towards "gaining reputation points" more than "help people" recently. I still have intentions to help, but my urge to gain more reputation points has been outweighing my urge to help. These actions include: Answering duplicates before they are marked duplicate , giving answers that are specific to the question being asked, albeit the "broader" answer in another post explains the situation pretty well. Answering questions that lack detail , then editing my answer as details are gained (rather than asking the asker to include more details, confirming the problem, before answering) Advertising my previous answers by slightly manipulating the subject at hand to make my advertised answer seem relevant, while understanding that the advertised answer may not be of any use to the OP (in hopes that bored wanderers will find it interesting) Boosting older posts by editing them , allowing them to appear on the first page of "active" in hopes of gaining more attention/reputation. Although the edits are justified (includes more, relevant details), the intention for editing is "game" related. Asking questions I already know the answer to , but have yet to seen them anywhere on the site solely to gain reputation, knowing that there is a "community wiki" option. ...[Insert "rep-aholic" action here]... You get the idea: I (try to) stay within the rules of the site. But I'm not focusing on the primary purpose of the site : Build a library of detailed answers to every question about programming . Instead, I focus on gaining badges/reputation to gain privileges . Although I still help developers, I'm not helping the site itself by working towards keeping it clean. Some reasons why I think I may have gamified Stack Overflow: Reputation seems a lot harder to come by without taking "the extra step" , due to many questions in the language I favor already being answered, thus making privileges harder to unlock I see others doing it (conformity) I currently lack privileges I desire Gamification seems to be my best form of motivation towards bringing myself to contribute (rather than taking the "Well someone else will most likely/is bound to do it" route), since a reachable goal with an end in sight is created as soon as I start gamifying, as opposed to the somewhat overwhelming, opinion-based and never-ending goal of "A well documented Q&A site" No one has actually asked me to stop I feel the competitive nature when gaming leads me to write higher quality posts I want to stop , but I know it would strongly impact my usage of the site, to the point where I find myself constantly flagging posts (mostly for duplicates) rather than helping a developer get from point A to point B. I came here to help developers with their programming problems, not teach people how to write well-received posts. I know there are others out there who gamify Stack Overflow as well (50k+ reputation points, answering questions they know should be flagged/improved, for the sake of gaining rep), so this is not a rare problem. Some of the "solutions" I've come up with, each with a description of why it may not have effect: Find a new language to work with if you feel all the good questions in your favored language have been spent. Not everyone is a hobbyist. You may assume " if they have time to gain reputation points on Stack Overflow, they have time to delve into a new language ", but that doesn't always apply. 
Stack Overflow has been implementing systems to allow developers to advertise their accounts to employers - Stack Overflow has been attempting to become a source of credential. Reputation could be a goal to gain moderation privileges, such as re-opening a good question (after it has been updated) that you are interested in. If (some peoples') competitive nature of gamification leads to a trashy site, all the progress you have now could be lost How could we know for sure how many people actually gamify Stack Overflow? For all we know, the ones gamifying the site could be the ones keeping it alive. I find gamification to have quite an impact on how "laid-back" my posts are. I'm more determined to push out wiki-like posts if I feel there is competition, and that I must give a well-formatted answer. There are many answers that get chosen due to publicity from outside viewers, albeit the answer that helped the OP was written in a form easier for the OP to read, rather than being seen as reference material. Those are the only two things I can think of that'll lead to me no longer gamifying. I should be helping programmers with code, but instead, I'm moderating with tool restrictions. I don't mind moderating, as it does help programmers understand concepts like the XY problem, how to express your problems better, how to avoid writing unclear posts, etc... It also contributes towards creating a hearty Q&A site for programmers. But 90% of my time on Stack Overflow would be moderation if I didn't perform the actions listed at the top of this post . Rather than helping people with programming problems, I feel more like I'm working as an overqualified janitor that's not getting paid. The worst part is, I'm not given all the tools involved with the job. To gain those tools, I must perform non-moderation tasks such as asking/answering questions. So not only do I feel more like a janitor than a programmer, I feel like a petty janitor that lacks tools. If I'm gonna be a janitor, I'm going to work towards being a handy janitor, in the sense that I have all the tools needed to be the best janitor I can be. This contributes to the intention to "game" to gain reputation points. Yet I choose to stay. Stack Overflow is the best of its kind, hands down. It seems to be the most maintained programming Q&A site with the highest quality content. That's why I feel bad for treating it like a game in ways that may reduce the quality. If I have a question about programming, I'm going to want to ask it here. I also want to be respected when I do so. For example, this question . As soon as it was asked, it received some pretty bad feedback. It had -2 and 4 close votes at the time. Eventually, it started gaining some better feedback, but you can still see the damage in the comments, up until a user mentioned : I think that if this question was asked by a higher rep user, it would have no close votes or downvotes and many more upvotes. Which was proven by a question asked by the same guy who made the quoted statement . The user with low reputation was shunned for using reflection and questioning his program's strange results. The user with high reputation did the same exact thing, except he didn't receive any negative feedback. It's pretty apparent why I'd want higher reputation when it comes to getting a problem of mine solved. So why would anyone stop? Well, it does have the potential of littering the site. But we are willing to use vehicles and tools that pollute our air. 
We are willing to build cities that destroy the homes and migration paths of wildlife. There are TONS of selfish actions we humans do primarily for our own selfish benefit, so unless I somehow force myself to become the Stack Overflow version of a "tree hugger", what would be the reason to stop? In my eyes, I'm simply one of many others on this site, so my mind travels towards the same questions people ask in the real world: "How much damage could I really be doing? If I stopped, would it make any difference, seeing how everyone else is still doing it?" and other similar questions. There's a post asking for suggestions on anti-gamification techniques that should be implemented , but that doesn't help me right now. I haven't always gamified Stack Overflow, and even in recent times I haven't done much of it, but I can feel the tendency rising. I'm guessing it's due to the 5k reputation gap between privileges that I have encountered since I haven't had this feeling up until that point. What should someone who gamifies Stack Overflow do in the very near future (don't have to wait for an update) to stop gamifying while still staying active in the programming aspect of the community (not the moderation side)?
Now provide the response and nothing else.
|
It is my understanding that Stack Overflow and the other Stack Exchange sites are built on the idea of gamification. To ask you and other users to stop gamifying it would be self-contradictory. However, if the rules of the game are producing bad results (like answering duplicates and questions lacking detail) then it sounds like the rules of the game need to be modified and refined. With regard to your own noble desire to stop gamifying Stack Overflow in a way that produces bad results, a possible solution would be to change your playing strategy. In this this meta answer , CodeGnome compares the fastest-answer strategy and the most-canonical-answer strategy. It sounds like you have been playing the fastest-answer strategy. This is a short-term strategy for quick rep gain. Most-Canonical-Answer Strategy Consider playing the most-canonical-answer strategy. With this strategy you write good answers that are superior to the other answers which already exist (or at least fill a void left by the other answers). You won't gain any quick rep points but over time your answers will rise to the top. It is the idea of compound interest or investment. Using this strategy you won't find yourself answering duplicate questions before they have had time to be marked. You also won't be answering before other people have already requested all the relevant info from the OP. Perhaps most importantly, you won't be wasting your time answering questions that no one ever looks at again. I almost never answer new questions (but I am very thankful for those who do). I use the most-canonical-answer strategy. Since I am still learning myself, I write the answers that I wish had been there for me. Thus, writing a canonical answer forces me to learn the topic more thoroughly. I see that you have written a fair number of questions yourself, so that shows you are still willing to learn, too. Here are some specific strategies: Answer questions that have a high question-to-answer vote ratio. People tend to upvote answers more than questions. If a question has a good answer, the ratio is usually 1:2 or 1:3. That is, the top voted answer has 2 to 3 times more votes than the question does. If the ratio is 1:1 or if the question has more votes than the top answer, then this probably means that a lot of people come here but don't find the answer they were looking for. ( example ) Answer questions that have a lot of views. These are usually answers that come up first in a search. A general rule is about 10,000 views (or at least thousands of views a year). A question may be lacking a good canonical answer, but if no one is visiting it don't waste your time. I make exceptions for low traffic tags, though. ( example ) You are not answering the question for the OP but for the masses. Sometimes this means you are answering the question asked in the title and not the specific details of the OP's problem. This point may be controversial, but I think it provides answers that help more people.( example ) Q&A format. When both the question and the canonical answer that you are looking for don't exist, then add your own Q&A pair. If it is something you wished had been there when you were searching, then probably other people in the future will, too. ( example ) Supplemental answers may be helpful if the current answers lack details. The current answers may answer the question adequately as far as getting to code to work goes, but some people are looking for a more detailed explanation of why it works. 
( example ) A picture is worth a thousand words. This is related to supplemental answers. For some topics, adding a graphic can help people understand a concept much more quickly. ( example ) A note about advertising Your third point was about advertising your previous answers. Using the most-canonical-answer strategy does not prevent this, but at least it makes the answers you link to more valuable. I'm not opposed to people linking to their own answers if those answers are actually helpful. Just the other day MartinR linked to one of his previous answers. I thought that was great because his answers are some of the best around. It wasn't a duplicate, but it was similar enough to be very useful. As my answers increase, I also link related ones together for easy navigation ( example ). But I also link to other people's answers. Links help people. Summary Stack Overflow is built on gaming theory, but it intends to provide quality answers to developers. Using the most-canonical-answer strategy to play the game is a viable option to merge these two aspects.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://meta.stackoverflow.com/questions/321336', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/2398375/']}
|
jdg_371485
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9896950
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have opened a sample ASP.NET MVC project. In HomeController I have created a method (action) named MethodA public ActionResult MethodA(){ return View();} I have right clicked on MethodA and created a new view called MethodA1 Re-did it and created a new view called MethodA2 . How is this magical relationship done? I looked for the config to tell the compiler that views MethodAX are related to action MethodA , but found none. What view will the controller return when MethodA is called?
Now provide the response and nothing else.
|
The convention is that if you don't specify a view name, the corresponding view will be the name of the action. So: public ActionResult MethodA(){ return View();} will render ~/Views/ControllerName/MethodA.cshtml . But you could also specify a view name: public ActionResult MethodA(){ return View("FooBar");} and now the ~/Views/ControllerName/FooBar.cshtml view will be rendered. Or you could even specify a fully qualified view name which is not inside the views folder of the current controller: public ActionResult MethodA(){ return View("~/Views/Foo/Baz.cshtml");} Now obviously all this assumes Razor as the view engine. If you are using WebForms, replace .cshtml with .aspx or .ascx (if you are working with partials). For example, if the view does not exist, the error page will even tell you where and in what order it looked for views. Remember: ASP.NET MVC is all about convention over configuration.
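One related wrinkle, added here as a hedged aside rather than something stated in the original answer: because View(string viewName) and View(object model) are overloads, passing a bare string model is interpreted as a view name. A small sketch:

public ActionResult MethodA()
{
    string message = "hello";

    // return View(message);        // picks View(string viewName): looks for a view named "hello"
    return View((object)message);   // forces the View(object model) overload
    // or: return View(model: message);
}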
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9896950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/311130/']}
|
jdg_371486
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21069294
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to parse the webpage http://dcsd.nutrislice.com/menu/meadow-view/lunch/ to grab today's lunch menu. (I've built an Adafruit #IoT Thermal Printer and I'd like to automatically print the menu each day.) I initially approached this using BeautifulSoup but it turns out that most of the data is loaded in JavaScript and I'm not sure BeautifulSoup can handle it. If you view source you'll see the relevant data stored in bootstrapData['menuMonthWeeks'] . import urllib2from BeautifulSoup import BeautifulSoupurl = "http://dcsd.nutrislice.com/menu/meadow-view/lunch/"soup = BeautifulSoup(urllib2.urlopen(url).read()) This is an easy way to get the source and review. My question is: what is the easiest way to extract this data so that I can do something with it? Literally, all I want is a string something like: Southwest Cheese Omelet, Potato Wedges, The Harvest Bar (THB), THB - Cheesy Pesto Bread, Ham Deli Sandwich, Red Pepper Sticks, Strawberries I've thought about using webkit to process the page and get the HTML (i.e. what a browser does) but that seems unnecessarily complex. I'd rather simply find something that can parse the bootstrapData['menuMonthWeeks'] data.
Now provide the response and nothing else.
|
Something like PhantomJS may be more robust, but here's some basic Python code to extract the full menu: import jsonimport reimport urllib2text = urllib2.urlopen('http://dcsd.nutrislice.com/menu/meadow-view/lunch/').read()menu = json.loads(re.search(r"bootstrapData\['menuMonthWeeks'\]\s*=\s*(.*);", text).group(1))print menu After that, you'll want to search through the menu for the date you're interested in. EDIT : Some overkill on my part: import itertoolsimport jsonimport reimport urllib2text = urllib2.urlopen('http://dcsd.nutrislice.com/menu/meadow-view/lunch/').read()menus = json.loads(re.search(r"bootstrapData\['menuMonthWeeks'\]\s*=\s*(.*);", text).group(1))days = itertools.chain.from_iterable(menu['days'] for menu in menus)day = next(itertools.dropwhile(lambda day: day['date'] != '2014-01-13', days), None)if day: print '\n'.join(item['food']['description'] for item in day['menu_items'])else: print 'Day not found.'
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21069294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1043146/']}
|
jdg_371487
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1703952
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to test a Controllerthat has a Command object with data binding. The Command Object has a Service injected into it. But When I try test the command object the injected service methodis never found as it is never "injected" Is there a way to mock a service inside a command object? Test method void testLoginPasswordInvalid() { mockRequest.method = 'POST' mockDomain(User, [new User(login:"freddy", password:"realpassword")]) mockLogging(UserService) // userService mocked MockUtils.prepareForConstraintsTests(LoginCommand) def userService = new UserService() def user = userService.getUser("freddy")//Gets called and returns the mockDomain assert userService.getUser("freddy")//Passes def cmd = new LoginCommand(login:"freddy", password:"letmein") cmd.validate() // Fails (userService is nevr injected) controller.login(cmd) assertTrue cmd.hasErrors() assertEquals "user.password.invalid", cmd.errors.password assertEquals "/store/index", renderArgs.view} The getUser() method of the userService isn't found Cannot invoke method getUser() on null objectjava.lang.NullPointerException: Cannot invoke method getUser() on null object Code The login method of the controller being called, def login = { LoginCommand cmd -> if(request.method == 'POST') { if(!cmd.hasErrors()){ session.user = cmd.getUser() redirect(controller:'store') } else{ render(view:'/store/index', model:[loginCmd:cmd]) } }else{ render(view:'/store/index') }} The Command Object has a "userService" injected into it. The validator calls this userService to find a user class LoginCommand { def userService String login String password static constraints = { login blank:false, validator:{ val, cmd -> if(!cmd.userService.getUser()){ return "user.not.found" } } } The userService.getUser() looks like this. class UserService { boolean transactional = true User getUser(String login) { return User.findByLogin(login) }}
Now provide the response and nothing else.
|
Service injection is done using Spring autowire-by-name. (Grep the Grails source tree for autowire to find a nice code fragment you can use to get it to autowire your controllers for you in integration tests.) This only functions in integration tests, where there's a Spring application context around that has the beans that can be injected. In unit tests, you have to do this yourself since there's no Spring-land surrounding your stuff. This can be a pain, but gives you some benefits: 1) It's easy to inject mock versions of services - for example, using an Expando - in order to more closely specify the behavior of your controller's collaborating services, and to allow you to test only the controller logic rather than the controller and service together. (You can certainly do the latter in a unit test as well, but you have the choice of how to wire it up.) 2) It forces you to be explicit about the dependencies of your controller - if you depend on it, your tests will show it. This makes them a better specification for the behavior of your controller. 3) You can mock only the pieces of external collaborators your controller depends on. This helps your tests be less fragile - less likely to need to change when things change. Short answer: your test method needs a cmd.userService = userService line.
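To make the "short answer" concrete, here is a minimal sketch of the wiring in the unit test from the question (only the relevant lines are shown; names are taken from the question, and this assumes the Grails 1.x-style mock helpers already being used there):

void testLoginPasswordInvalid() {
    mockLogging(UserService)
    def userService = new UserService()

    def cmd = new LoginCommand(login: "freddy", password: "letmein")
    cmd.userService = userService   // manual "injection" -- unit tests have no Spring context
    cmd.validate()                  // the custom validator can now call userService.getUser()

    controller.login(cmd)
    assertTrue cmd.hasErrors()
}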
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1703952', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149289/']}
|
jdg_371488
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
45538960
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on Azure functions timer Job , i need to get the cron expression from the appsettings. Please let me know, how can i get the value from the appsettings in the Azure functions. I want to run my azure function starting from 9:00 AM to 12:00 PM for every 30 minutes\ { "disabled": false, "bindings": [ { "name": "timerInfo", "type": "timerTrigger", "direction": "in", "schedule": "0 * * * * *" } ]}
Now provide the response and nothing else.
|
Set your schedule as "schedule": "%EmailScheduleTriggerTime%" and then in the appsetting.json or local.settings.json you can set EmailScheduleTriggerTime value as "0 30 9-12 * * *" { "IsEncrypted": false, "Values": { "EmailScheduleTriggerTime": "0 30 9-12 * * *", //Run every 30 minutes from 9:00 to 12:00 }, "ConnectionStrings": { "DefaultConnection": "" }}[FunctionName("TimerfunctionApp")] public static void Run([TimerTrigger("%EmailScheduleTriggerTime%")] TimerInfo TInfo, TraceWriter log)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45538960', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1260791/']}
|
jdg_371489
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
168688
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been practicing coding for a while, but the problem is that it takes me a lot of time to write a solution for the problems. I want to ask if competitive programming can help me in improving this. If yes, then how should I start and from which web sites could I use (like TopCoder )? I obviously won't be able to solve very hard problems for now. What should I do? If no, what else should I do? I also have another problem that I want to learn coding but the thing is that I feel that I am not very good at it. What should I do? It's like bugging me from inside. I know some people may not find this question informative but please at least allow me to get an answer.
Now provide the response and nothing else.
|
[can] competitive programming help me in improving [the speed]? Yes, it can, and given sufficient time, it certainly will. However, speed is not the most important skill that you are going to improve. From my experience of participating with TopCoder for 10+ years, the most important skill that you are going to take from competitive programming is an ability to write simple code snippets correctly the first time, without using a debugger. How should I start and from what site like TopCoder? Use practice rooms for Division 2. Level 1 of Div 2 are typically "typing contests", in the sense that you figure out a solution right away, so your task becomes coding it as quickly as possible. Most real-life assignments are like that. You will quickly realize that debugger is the biggest "time drain", so you will naturally attempt to complete your code without debugging it. This will be frustrating in the beginning, but it will improve with time. It will take a few hundred level-1 problems to get good at it, so don't give up. Debugging in your head is the skill that is going to help you in your real job: while the coding speed is not important, an ability to write correct code snippets without debugging will help you make a transition from good to outstanding.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/168688', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/67444/']}
|
jdg_371490
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21192659
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given the following templated function, how can I change it to take advantage of variadic templates? That is to say, to replace std::bind placeholders with a variadic parameter instead of P1 and P2? At the moment I have one of these functions per arity, with arity zero having no P parameter, up to arity 9 having P1 to P9 parameters. I was hoping to collapse this into a single function if possible. template<typename R, typename T, typename U, typename P1, typename P2>void Attach(R (T::*f)(P1, P2), U p){ AttachInternal(p, std::bind(f, p.get(), std::placeholders::_1, std::placeholders::_2));}
Now provide the response and nothing else.
|
You can (partially) specialize std::is_placeholder for specializations of a custom template. This way, you can introduce a placeholder generator via the usual int_sequence technique. From [func.bind.isplace]/2 The implementation shall provide a definition that has the BaseCharacteristic of integral_constant<int, J> if T is the type of std::placeholders::_J , otherwise it shall have a BaseCharacteristic of integral_constant<int, 0> . A program may specialize this template for a user-defined type T to have a BaseCharacteristic of integral_constant<int, N> with N > 0 to indicate that T should be treated as a placeholder type. The usual int_sequence : #include <cstddef>template<int...> struct int_sequence {};template<int N, int... Is> struct make_int_sequence : make_int_sequence<N-1, N-1, Is...> {};template<int... Is> struct make_int_sequence<0, Is...> : int_sequence<Is...> {}; The custom placeholder template and specialization of is_placeholder : template<int> // begin with 0 here!struct placeholder_template{};#include <functional>#include <type_traits>namespace std{ template<int N> struct is_placeholder< placeholder_template<N> > : integral_constant<int, N+1> // the one is important {};} I'm not sure where to introduce the 1 ; the places I considered are all not optimal. Using it to write some binder: template<class Ret, class... Args, int... Is>void my_bind(Ret (*p)(Args...), int_sequence<Is...>){ auto x = std::bind(p, placeholder_template<Is>{}...); x( Args(42)... );}template<class Ret, class... Args>void my_bind(Ret (*p)(Args...)){ my_bind(p, make_int_sequence< sizeof...(Args) >{});} Usage example of the binder: #include <iostream>void foo(double, char, int) { std::cout << __PRETTY_FUNCTION__ << "\n"; }void bar(bool, short) { std::cout << __PRETTY_FUNCTION__ << "\n"; }int main(){ my_bind(foo); my_bind(bar);}
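As a side note (not part of the original answer): on C++14 and later, the hand-rolled int_sequence machinery can be replaced by std::integer_sequence from <utility>; the placeholder_template and is_placeholder specialization stay exactly as above. A sketch:

#include <functional>
#include <utility>   // std::integer_sequence, std::make_integer_sequence

template<class Ret, class... Args, int... Is>
void my_bind(Ret (*p)(Args...), std::integer_sequence<int, Is...>)
{
    auto x = std::bind(p, placeholder_template<Is>{}...);   // expands to _1, _2, ...
    x( Args(42)... );
}

template<class Ret, class... Args>
void my_bind(Ret (*p)(Args...))
{
    my_bind(p, std::make_integer_sequence<int, sizeof...(Args)>{});
}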
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21192659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/416274/']}
|
jdg_371491
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
102754
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to setup 2 Glassfish servers in a load balanced configuration using UCARP and HAProxy Server1 has 2 IPs x.x.x.17 and x.x.x.18 HAProxy is listening on only x.x.x.18 and Glassfish listening on only x.x.x.17 running with the following configuration... globalmaxconn 4096debuguser haproxygroup haproxydefaultsmode httpretries 3option redispatchlisten wms x.x.x.18:8080source x.x.x.18option httpchkbalance leastconnserver Server1 x.x.x.17:8080 check inter 2000 fastinter 500 fall 2 weight 50server Server2 x.x.x.19:8080 check inter 2000 fastinter 500 fall 2 weight 50 Server2 with 1 IP x.x.x.19 is running Glassfish Even though I can manually wget the page from x.x.x.17:8080 and receive a 200 OK response, HAProxy says Server1 is DOWN and doesn't direct any requests to it. I can't find any reason why. Here is an excerpt from the Server1 access log with the checks... "x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:44:23 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:44:23 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:44:23 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:44:29 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:44:29 +0000" "OPTIONS / HTTP/1.0" 200 0 Here is an excerpt from the Server2 access log with the checks... "x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:58:25 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:58:25 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:58:31 +0000" "OPTIONS / HTTP/1.0" 200 0"x.x.x.18" "NULL-AUTH-USER" "14/Jan/2010:14:58:31 +0000" "OPTIONS / HTTP/1.0" 200 0 If I remove the httpchk option then Server1 checks as UP, however this is not a permanent solution because we need it to fail over properly if the response really fails. Any ideas? (HAProxy is v1.3.22) Addn: I just tried adding server3 x.x.x.13 running Glassfish but on Windows and that also says down when it is up and accessible from the proxy machine. Addn2: After installing v1.4 of haproxy to get error codes, the error is Layer7 invalid response info: "HTTP/1.1 ". When we retrieve the page manually both the UP and DOWN server return HTTP/1.1 200 OK as the first line. So after running wireshark to see what is going on. On the glassfish server which works (and all the other webservers I've checked) the response HTTP/1.1 200 OK all comes in the first packet. On the glassfish servers that don't work the response comes in 3 packets of HTTP/1.1 then 200 then OK. So any idea why HAProxy is not dealing with multiple packets or how to configure glassfish not to split it? (maxKeepAliveRequests=1 already)
Now provide the response and nothing else.
|
The answer is that Glassfish in the latest versions splits the response into multiple packets. I posted on the haproxy mailing list and had a remarkably quick response. Krzysztof Oledzki confirmed that haproxy assumes that the response will all be contained within the the first packet as that is the behavior of most known web servers. He built a patch with a quick and dirty fix which is available in the mailing list archives if you search for Glassfish and can be applied to the beta or latest stable version 1.3.22 I also tried to find out why Glassfish has started to behave this way but without paid support I got nowhere. If anyone can answer that, the bounty is still open.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/102754', 'https://serverfault.com', 'https://serverfault.com/users/15008/']}
|
jdg_371492
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1195813
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let us have a regular decagon. Prove that the radius of the circle that can be drawn around it (if I have drawn correctly, that's the blue line) equals $|ad|-|ab|$. How can I prove this statement, any ideas? :)
Now provide the response and nothing else.
|
Geometrically: the angles $a, b, c$ of the triangle are $$a = \frac{\pi - 3\pi/5}{2} = \frac{\pi}{5}, \qquad b = \frac{2\pi}{5}, \qquad \therefore\ c = \pi - (a+b) = \frac{2\pi}{5},$$ so $\Delta abc$ is isosceles and so is the similar triangle above, giving the result.
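As a cross-check with chord lengths (assuming, as the figure suggests, that $ab$ is a side of the decagon and $ad$ the chord spanning three sides): a chord subtending $k$ sides of a regular decagon inscribed in a circle of radius $R$ has length $2R\sin(k\pi/10)$, so $$|ad|-|ab| = 2R\sin\tfrac{3\pi}{10} - 2R\sin\tfrac{\pi}{10} = 4R\cos\tfrac{\pi}{5}\sin\tfrac{\pi}{10} = R,$$ using $\sin A - \sin B = 2\cos\tfrac{A+B}{2}\sin\tfrac{A-B}{2}$ together with the exact values $\sin\tfrac{\pi}{10} = \tfrac{\sqrt5-1}{4}$ and $\cos\tfrac{\pi}{5} = \tfrac{\sqrt5+1}{4}$.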
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1195813', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/215617/']}
|
jdg_371493
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
417324
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Motivation : The following is a theorem of Berrick-Hesselholt (essentially also due to Linnell, though not in this form): Let $G$ be a group. Suppose that for every subgroup of $G$ isomorphic to $\mathbb Q$ , $G$ has a quotient in which the image of this subgroup is central and nontrivial. In this case the Bass trace conjecture holds for $G$ . I can add it for context, but for my question it is not important to know what this conjecture is - I just state this as motivation. Question : What are some examples of finitely presented groups which do not have this property ? That is, they have a subgroup isomorphic to $\mathbb Q$ such that for each quotient of $G$ , its image is central only if it is trivial.
Now provide the response and nothing else.
|
a) There are old results which directly imply the existence of such groups: (1) Boone Higman 1972: every f.g. group with solvable word problem embeds into a simple subgroup of a finitely presented group. (2) Every countable group with solvable word problem embeds into a f.g. group with solvable word problem (reference? the original HNN construction directly works since it consists of explicit amalgams — alternatively here Ph. Hall produced in the 50s an explicit 3-generated metabelian group with solvable word problem, with copies of $\mathbf{Q}$ ). b) A more explicit example is the group $\tilde{T}$ obtained as the set of self-homeomorphisms of $\mathbf{R}/\mathbf{Z}$ commuting with $\sigma:n\mapsto n+1$ , that are piecewise affine with dyadic slopes and breakpoints. The center of $\tilde{T}$ is the infinite cyclic group $\langle\sigma\rangle$ and the quotient is naturally identified with Thompson's group $T$ , which is a finitely presented simple group. The normal proper subgroups of $\tilde{T}$ are precisely the subgroups of $\langle\sigma\rangle$ . If $Q$ is any copy of $\mathbf{Q}$ in $\tilde{T}$ , it follows that the image of $Q$ in $T$ is non-central (since $\mathbf{Q}$ has no nontrivial cyclic quotient). Hence the same holds in every nontrivial quotient of $\tilde{T}$ . Finally, that $\tilde{T}$ contains a copy of $\mathbf{Q}$ (and even continuum many such copies) is an original observation of Belk, Matucci, Hyde ( arXiv ). ( The other answer is closely related as it refers to a more complicated finitely presented simple group containing $\tilde{T}$ . The group $\tilde{T}$ itself is not finitely presented but has few enough normal subgroups for the condition to hold.)
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/417324', 'https://mathoverflow.net', 'https://mathoverflow.net/users/102343/']}
|
jdg_371494
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4665510
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In following code, the usage of the string "“" (i.e. a left double quotation mark inside a string) results in a compile error in VB.NET: StringVar = Replace(StringVar, "“", "“") What’s going on here?
Now provide the response and nothing else.
|
It seems as if you want to replace curly quotes with their HTML code equivalent. At first glance, your code is absolutely correct. The problem is that VB allows curly quotes in place of regular quotes in code (because Unicode is great, right?). That is, the following codes are all equivalent: Dim str = "hello"Dim str = “hello”Dim str = "hello“ Now, if you want to use a quotation mark inside a string, VB doesn’t know whether the quotation mark is supposed to end the string or not. In C#, this would be fixed by escaping the quotation mark, i.e. in place of """ you’d write "\"" . In VB, the same is done by doubling the quotation mark, i.e. """" . Back to your curly quote. The same as for straight quotes applies according to the VB language specification (¶1.6.4). So to write a curly quote in code, try the following: StringVar = Replace(StringVar, "““", "“") Unfortunately, I cannot try this code now and it’s altogether possible that the IDE simply replaces this by straight quotes. If that’s the case, an alternative is to use Chr or ChrW with the character code of the “left double quotation mark”: StringVar = Replace(StringVar, ChrW(&H201C), "“") Or, for symmetry, written in decimal (but I prefer hexadecimal for character codes): StringVar = Replace(StringVar, ChrW(8220), "“") Something else: the Replace function will probably soon be deprecated and doesn’t work everywhere (e.g. Windows Phone 7 ). Instead, use the Replace method of the String class: StringVar = StringVar.Replace(ChrW(8220), "“")
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4665510', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/461322/']}
|
jdg_371495
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22999
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My interest in combinatorics was recently sparked when I read about the many things that the Catalan numbers count, as found by Richard Stanley. I picked up a copy of Brualdi's Combinatorics, and while browsing the section on counting sequences I found a nice little puzzle that has definitely puzzled me. Let $m$ and $n$ be nonnegative integers with $n\geq m$. There are $m+n$ people in line to get into a theater for which admission if $50$ cents. Of the $m+n$ people, $n$ have a $50$-cent piece and $m$ have a $\$ 1$ dollar bill. The box offices opens with an empty cash register. Show that the number of ways the people can line up so that change is available when needed is $$\frac{n-m+1}{n+1}\binom{m+n}{m}.$$ I first noted that the first person to enter must be one of the $n$ with a half-dollar. Now the register has a half-dollar change. The second person can be either a person with a half-dollar or a dollar. In the first case, the register will now have two half-dollars, in the second case, the register will now have one dollar bill. So it seems to me that when one of the $n$ people with a half-dollar enters, the number of half-dollars in the register increases by $1$, and when one of the $m$ people with a bill enters, the number of half-dollars decreases by $1$ but the number of bills increases by $1$. I tried to model this by looking at paths in $\mathbb{Z}^2$. The $x$-axis is like the number of half-dollars, and the $y$-axis is the number of bills. You start at $(0,0)$, and you can take steps forward $(1,0)$ or backwards diagonally $(-1,1)$ corresponding to who enters, but you must always stay in the first quadrant of the plane without crossing over the axes. The goal is to make $m+n$ moves, and I figured maybe the number of such paths is counted by $\frac{n-m+1}{n+1}\binom{m+n}{m}$, but I'm not sure how to show this. I don't know if this observation simplifies the problem at all, as I don't know how to finish up. I'd be happy to see how this problem is done, thank you.
Now provide the response and nothing else.
|
You are correct that you can think of this as a problem of counting (restricted) paths on the $\mathbb{Z}^2$, and that this is probably a good way to think about it. But I think it is easier if you think of a square array, and you are trying to get from the bottom left, $(0,0)$, to the upper right $(n,m)$, and your steps must be from $(k,\ell)$ to $(k+1,\ell)$ or from $(k,\ell)$ to $(k,\ell+1)$: If $n$ is the number of people with $50$ cent pieces, and $m$ is the number of people with dollar bills. When a person with a 50 cent piece enters, you take a step right. When a person with a dollar bill enters, you take a step up. If you try to take a step from $(k,\ell)$ to $(k,\ell+1)$, you will only have enough change in the till if $\ell+1\leq k$. Otherwise, you'll be out of luck (because you need one person with 50 cent piece for every person with a dollar that has managed to come in). So, you must always stay at or below the diagonal. So the paths you want to count are the paths from $(0,0)$ to $(n,m)$ that stay at or below the main diagonal. (Notice that $\binom{n+m}{m}$ is the number of total paths from $(0,0)$ to $(n,m)$ taking only steps right or up: you must take $n+m$ steps total, and of those $m$ will be steps up; so $\binom{n+m}{m}$ picks which of the $n+m$ steps will be steps up. So the factor $\frac{n+1-m}{n+1}$ must be the fraction of the paths that stay at or below the main diagonal.) Does that help sufficiently, or should I expand more?
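To close the remaining gap, here is a sketch of the standard reflection argument for that fraction. A path is "bad" exactly when it touches the line $y = x + 1$; reflecting the part of a bad path after its first touch of that line across it sends the endpoint $(n,m)$ to $(m-1,\,n+1)$, and this gives a bijection between bad paths and all unrestricted paths from $(0,0)$ to $(m-1,\,n+1)$, of which there are $\binom{n+m}{m-1}$. Hence $$\#\{\text{good paths}\} \;=\; \binom{n+m}{m}-\binom{n+m}{m-1} \;=\; \binom{n+m}{m}\Bigl(1-\frac{m}{n+1}\Bigr) \;=\; \frac{n-m+1}{n+1}\binom{n+m}{m}.$$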
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/22999', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1583/']}
|
jdg_371496
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25029743
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently (After July 1st), I have a new app in Google play. But I came to know that from July 1st onwards Samsung is not going to accept the App which are not using the Samsung SDK. Question Though I do not need any special features in the Samsung SDK they forcing me to use it.I have to utilize the user base in Samsung Store. What is the easy way to make my app to be compatible with Samsung Store. Share your experience. Can we simply use the SDK and not implement any features will work ? Thanks in Advance !
Now provide the response and nothing else.
|
After spending hours trying to resolve the problem, I finally got the solution after contacting the Samsung Engineers. They checked out my APK's content and according to them: "Regarding your inquiry, we would like to inform you that if the following use-feature exists in the manifest file, it is considered that the application uses S Pen SDK.(uses-feature name="com.sec.feature.spen_usp" android:required="true"/)Please note that you need to change ( and ) to < and > in your manifest file.However the uses-feature does not exist in the manifest of your application, so it is considered that the application does not use the S Pen SDK." After adding the feature to the app's manifest, the portal detected the SDK (Please See image), and I was able to publish the app. Note that I didn't include the SPen sdk jars to the app, neither did I reference any Samsung library within the app. <uses-feature android:name="com.sec.feature.spen_usp" android:required="true"/> EDIT: The app has been accepted but after second try. The first was rejected because under the Category section, I chose "Galaxy Specials > S Pen". They told me to change it to "Galaxy Specials > Other" since it had no S Pen functionality. So make sure you set accordingly. Update 2020: Under the Category section in "App Information" just choose "Galaxy Specials > Multi Window". Then in your manifest add these: <uses-feature android:name="com.sec.feature.spen_usp" android:required="true"/> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" ......> <!-- for samsung multiwindow --> <uses-library android:name="com.sec.android.app.multiwindow" android:required="false" /> <meta-data android:name="com.sec.android.support.multiwindow" android:value="true" /> <meta-data android:name="com.sec.android.multiwindow.DEFAULT_SIZE_W" android:value="480dip" /> <meta-data android:name="com.sec.android.multiwindow.DEFAULT_SIZE_H" android:value="800dip" /> <meta-data android:name="com.sec.android.multiwindow.MINIMUM_SIZE_W" android:value="200dip" /> <meta-data android:name="com.sec.android.multiwindow.MINIMUM_SIZE_H" android:value="400dip" /> <meta-data android:name="com.samsung.android.sdk.multiwindow.enable" android:value="true" /> <meta-data android:name="com.samsung.android.sdk.multiwindow.multiinstance.enable" android:value="true" /> <meta-data android:name="android.intent.category.MULTIWINDOW_LAUNCHER" android:value="true" /> <meta-data android:name="com.sec.android.intent.category.MULTIWINDOW_LAUNCHER" android:value="true" /> <activity android:name=".SplashActivity" android:label="@string/app_name" android:theme="@style/AppTheme" android:configChanges="keyboard|keyboardHidden|orientation|screenSize"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> <!-- for samsung multiwindow --> <category android:name="android.intent.category.MULTIWINDOW_LAUNCHER" /> </intent-filter> </activity> ..... Your app should be approved now using this approach. Thanks
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25029743', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2155425/']}
|
jdg_371497
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51568826
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using one api in which the response is coming in two columns but in one column there are many parameters, I need to decode one column and want to show in different columns. Column name coming from api timestamp, dataFrame, I need to show in 3 columns timestamp, oil temperature and winding temperature. I need to add one more column in datatable to show the values in the datatable.sample json data [{ "timestamp": "2018-07-21T07:56:23.838Z", "dataFrame": "HA==" }, { "timestamp": "2018-07-21T08:16:23.902Z", "dataFrame": "HA==" }] output Expected Output Timestamp, Oil Temp, winding temp in 3 separate columns code <script> window.onload = getddata(); function getddata() { var xmlhttp = new XMLHttpRequest(); xmlhttp.onreadystatechange = function () { if (xmlhttp.readyState == 4 && xmlhttp.status == 200) { var Readings = JSON.parse(xmlhttp.responseText); //jquery data table - start $(document).ready(function () { $('#example').DataTable({ columnDefs: [{ "targets": 1, "render": function (data, type, row, meta) { var hexvalue =base64toHEX(data); var deviceid= hexToDec(hexvalue.substr(1,1)); var oil_temp= hexToDec(hexvalue.substr(2,2)); var winding_temp= hexToDec(hexvalue.substr(4,2)); return 'oil temp: ' + oil_temp + ', Winding Temp: ' + winding_temp + ' ; } }, { "targets": 0, "render": function (data, type, row, meta) { var updDate= UtcToIst(data).toLocaleDateString(); var updTime= UtcToIst(data).toLocaleTimeString(); var updDateTime=updDate + ", " + updTime; return updDateTime ; } } } ], data: Readings, columns: [{ "data": "timestamp" }, { "data": "dataFrame" } ] }); }); //data table code -close } }; xmlhttp.open("GET", "https://.., true); xmlhttp.setRequestHeader("Authorization", "Basic a2VybmV"); xmlhttp.setRequestHeader("Content-type", "Application/json"); xmlhttp.send(); } </script>
Now provide the response and nothing else.
|
Looks like Safari doesn't support lookbehind yet (that is, your (?<=\/) ). One alternative would be to put the / that comes before in a non-captured group, and then extract only the first group (the content after the / and before the # ). /(?:\/)([^#]+)(?=#*)/ Also, (?=#*) is odd - you probably want to lookahead for something (such as # or the end of the string), rather than a * quantifier ( zero or more occurrences of # ). It might be better to use something like /(?:\/)([^#]+)(?=#|$)/ or just omit the lookahead entirely (because the ([^#]+) is greedy), depending on your circumstances.
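For a quick sanity check of the non-lookbehind form, something like the following can be run in a browser console (the URL string is just a made-up example, not taken from the question):

// Group 1 holds everything after the first "/" up to the "#" (or end of string).
var url = "/products/42#reviews";
var re = /(?:\/)([^#]+)(?=#|$)/;
console.log(url.match(re)[1]);   // "products/42"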
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/51568826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10100512/']}
|
jdg_371498
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2245185
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a view model like this: public class SignUpViewModel{ [Required(ErrorMessage = "Bitte lesen und akzeptieren Sie die AGB.")] [DisplayName("Ich habe die AGB gelesen und akzeptiere diese.")] public bool AgreesWithTerms { get; set; }} The view markup code: <%= Html.CheckBoxFor(m => m.AgreesWithTerms) %><%= Html.LabelFor(m => m.AgreesWithTerms)%> The result: No validation is executed. That's okay so far because bool is a value type and never null. But even if I make AgreesWithTerms nullable it won't work because the compiler shouts "Templates can be used only with field access, property access, single-dimension array index, or single-parameter custom indexer expressions." So, what's the correct way to handle this?
Now provide the response and nothing else.
|
My Solution is as follows (it's not much different to the answers already submitted, but I believe it's named better): /// <summary>/// Validation attribute that demands that a boolean value must be true./// </summary>[AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = false)]public class MustBeTrueAttribute : ValidationAttribute{ public override bool IsValid(object value) { return value != null && value is bool && (bool)value; }} Then you can use it like this in your model: [MustBeTrue(ErrorMessage = "You must accept the terms and conditions")][DisplayName("Accept terms and conditions")]public bool AcceptsTerms { get; set; }
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2245185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/271150/']}
|
jdg_371499
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4000593
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have always been taught to almost never to use goto statements in programming. However we are required to do so as part of my most recent programming project. I have an if/else statement with various goto statements, and the goto statements are failing to execute. I have no idea why. Any help would be appreciated. int myInt = XXXXXXX; if((myInt>>22) & 7 == X) goto a; else if((myInt>>22) & 7 == Y) goto b; else if((myInt>>22) & 7 == Z) goto c;a: printf("this always executes\n"); goto end;b: printf("this never executes\n"); goto end;c: printf("nor does this\n"); goto end;end: //more code A brief explanation of the bit shifting and such: We are implementing a computer processer, and need to look at the first 3 bits of a 25-bit opcode. So (myInt >> 22) & 7 isolates the 3 bits in the opcode. Any ideas as to what is going on here?
Now provide the response and nothing else.
|
This actually has nothing to do with goto. You've got an operator precedence problem. Bitwise and (&) has lower precedence than equality (==). As a result, you're actually doing if ((myInt>>22) & (7 == X)) . To fix it, just add some parens: if (((myInt>>22) & 7) == X) .
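For illustration, a hedged sketch of the corrected tests in plain C (X, Y and Z as in the question); hoisting the masked value into a variable also makes the precedence mistake impossible to reintroduce:

unsigned int opcode = (myInt >> 22) & 7;   /* isolate the top 3 bits once */

if (opcode == X)
    goto a;
else if (opcode == Y)
    goto b;
else if (opcode == Z)
    goto c;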
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4000593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/239242/']}
|
jdg_371500
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
212880
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to write a simple API for a project I'm working on. This API will be used internally to perform some server side actions that were triggered by AJAX calls. To make things easier for me, I thought of using the Chain-of-Command/Responsibility pattern. Here are more details about what I want to do: I am creating a "dashboard" where admins of my app will be able to update information (meta data) about items stored in a database. To make things easy and simple for the admins, I chose to use AJAX. So, if an admin wants to delete an item, (s)he clicks on the "delete button". A POST request is then sent to edit.php page with all the information needed to perform the deletion (action : delete, element : link, id : xx ...). Where action , element and of course id can change. This is why I opted for a mini-API that will, depending on the action and the element data, call a different function from a different class. Now, to implement the API, I decided to use Chain-of-Responsibility design pattern. Why? Because I can easily add, delete or update classes without having to modify the API itself. Is the Chain of Responsibility design pattern good fit for my case?
Now provide the response and nothing else.
|
The idea of the chain-of-command pattern is to build a chain of handlers and pass a command along this chain until one of the handlers handles the command. This behavior is typically found in event processing, where, say, a click event from a UI button bubbles up the hierarchy of UI elements till it reaches an element that has a respective handler attached. This handler can then decided whether it handles the command -- effectively ending the event processing -- or not -- in which case the event is propagated further along the chain. Let's now assume we use that pattern for you web API. What you describe sounds like a classical CRUD(L) interface to me, where your actions are (a subset of) create, read, update, and delete. You say you have delete requests and I'm assume that you want some kind of update requests, too. Let's assume further that you wrote respective handlers hupd and hdel for these kinds requests. Following the chain-of-command pattern, you then build the chain [hupd,hdel] to handle requests to your API. What happens is that every update request passed into the chain gets immediately handled by hupd , while every delete request is rejected by hupd and passed along to hdel , which handles it. This behavior shows a fixed mapping between actions and handlers that actually makes the chain unnecessary. (In fact, the chain even lowers your system's performance, because of the check and the passing along of every delete request). Why does this happen? Because there are no two handlers responsible for different subsets of requests with the same action type. What you really want to have here is a direct mapping ["update" => hupd, "delete" => hdel] and a dispatcher that takes respective requests and passes them directly to the respective handler. Such a design can still be extendible with regard to new actions, if there is a dynamic registry holding the mapping. Now you could say, that you want to have different handlers for, say, the deletion of elements of type A and B. What gives you handlers for subsets of requests with the same action type. But once again, you have a direct and fixed mapping between the handlers and the element type, i.e., you can repeatedly dispatch requests based on the target element type. This gives you a two-level dispatch, where with a chain-of-command you would pass the request through number-of-actions times number-of-element-types handlers, in the worst case. Conclusion: I would not recommend the chain-of-command pattern to implement this kind of API. For the pattern to have value, you need a scenario where you want to dynamically add and remove handlers and where the condition of when a handler actually handles an event is not expressible by a simple mapping from constants.
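For illustration only, a minimal sketch of the dispatcher-plus-mapping idea in the PHP setting of the question; the handler bodies and the 400 response are assumptions, not part of the original design:

<?php
// Map each action directly to its handler -- no chain to walk through.
$handlers = array(
    'update' => function (array $request) { /* update the item's metadata */ },
    'delete' => function (array $request) { /* delete the item */ },
);

$action = isset($_POST['action']) ? $_POST['action'] : '';

if (isset($handlers[$action])) {
    $handlers[$action]($_POST);   // single, direct dispatch
} else {
    http_response_code(400);      // unknown action
}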
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/212880', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/90792/']}
|
jdg_371501
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43761
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to give someone permission to host some simple files on my Ubuntu webserver. This is for an IRC logging bot. I'd like to give them passwordless ssh/scp/rsync/sftp access. However I want to be able to restrict what they can do, I don't want to give them a full ssh shell in otherwords. I'd also like to be able to limit the amount of disk space they can take up, so they can't use more than (say) 200MB of my disk in total. Bonus options that aren't required but might be nice: Limit what sort of files they can upload? (Mime type maybe?) Log when they upload a file? Limit their bandwidth? What are my options?
Now provide the response and nothing else.
|
On Ubuntu/Debian, you can install the scponly shell to restrict them to scp/sftp only. Just install the package and change their shell to /usr/bin/scponly : sudo aptitude install scponlysudo usermod -s /usr/bin/scponly USERNAME The disk space problem is likely best solved with filesystem quotas. Unfortunately I have little experience with them. If you want to get fancier (restricting filetypes and such) you'll probably need to write your own script to validate the commands being passed from the client. The simplest script would be a shell script that performs a test similar to this one: if [[ $SSH_ORIGINAL_COMMAND = "scp -f *txt" ]]then exec $SSH_ORIGINAL_COMMANDfi That would allow downloading .txt files. Quick tests indicate the filename is not passed as part of the scp command when sending. So it may not be possible in that case. The script would be set as the forced command for the user's key. (The ``command="blah"'' field in authorized_keys.) It can also be set as the "ForceCommand" option for a Match group in sshd_config, like so: Match group scponly ForceCommand /usr/local/sbin/scpwrapper Then add the users to be so restricted to the "scponly" group. EDIT:By request, here's the script I use to enforce rsync-only access: #!/bin/bash# # Verify that rsync command appears to be a legitimate rsnapshot command.# Requires >= bash 3.x## Ben Beuchler# 6/4/07# rsync needs to be operating in "server" mode.re_server='rsync --server'# Match both -x and --word optionsre_options=' +-{1,2}[[:alpha:]-]+'# Match legal pathsre_paths=' +[-[:alnum:]_./]+'# Build the full regexr="^${re_server}(${re_options})+(${re_paths}){2}$"echo $SSH_ORIGINAL_COMMAND >> ssh_logif [[ $SSH_ORIGINAL_COMMAND =~ $r ]]then exec $SSH_ORIGINAL_COMMANDelse echo "Invalid rsync command."fi Use that script as a forced command (either by "command=" in ~/.ssh/authorized_keys or using a "Match" block in sshd_config as noted above) and it will reject everything except rsync commands. With a little tweaking it could be made to only accept specific rsync commands.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/43761', 'https://serverfault.com', 'https://serverfault.com/users/8950/']}
|
jdg_371502
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
957390
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Integrating by parts: I'm having a hard time choosing the $u$, $du$, $v$ and $dv$...I gave it a shot. $u = \ln x \implies du = 1/x \ dx$ $v= \ ?$$dv = \cos \ dx$
Now provide the response and nothing else.
|
Choose $u = \cos\log x$ and $dv = dx$. $$\int \cos\log x \; dx = x\cos\log x + \int \sin\log x \; dx.$$ For the second integral choose $u = \sin \log x$ and $dv = dx$. $$\int \sin\log x \; dx = x\sin\log x-\int \cos\log x \; dx.$$ We then have $$\int \cos\log x \; dx = x\cos\log x + x\sin\log x- \int \cos\log x \; dx.$$ $$= \frac{x}{2} \sin \log x + \frac{x}{2}\cos\log x + K.$$
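As a quick check, differentiating the result recovers the integrand: $$\frac{d}{dx}\Bigl[\frac{x}{2}\bigl(\sin\log x+\cos\log x\bigr)\Bigr] = \frac12\bigl(\sin\log x+\cos\log x\bigr) + \frac{x}{2}\cdot\frac{1}{x}\bigl(\cos\log x-\sin\log x\bigr) = \cos\log x.$$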
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/957390', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/115078/']}
|
jdg_371503
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14362429
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My redirect process is showing some crazy stuff. The first part of the whole loop works just fine (if only the first element is typed in). Possible url's look like: www.site.com/category www.site.com/category/product But also: www.site.com/cart Using site.com/jeans works just fine. But when you click on a product, something strange happens. The categorie.php file (used to display categories) is STILL included and after that one, the product.php file is included. Same story with the cart page (http://www.site.com/winkelwagen/). So my includes are wrong at some point. Winkelwagen is a folder on my site which has an index file. It should include http://www.site.com/winkelwagen/index.php and not categorie.php as well. The route code : <?php$mult = Array();if( ! empty( $_SERVER[ 'REQUEST_URI' ] ) ) {$mult = explode ( '/', substr ( $_SERVER[ 'REQUEST_URI' ], 1 ) );} else if( ! empty( $_SERVER[ 'ORIG_PATH_INFO' ] ) ) {$mult = explode ( '/', substr ( $_SERVER[ 'ORIG_PATH_INFO' ], 1 ) ); } else if( ! empty( $_SERVER[ 'PATH_INFO' ] ) ) {$mult = explode ( '/', substr ( $_SERVER[ 'PATH_INFO' ], 1 ) );}if(empty($mult[0])){include("comingsoon/index.html");}if(!empty($mult[0]) && empty($mult[1])){$file = "$mult[0].php";if($mult[0] == "index2"){ include("index2.php"); die;}// if file exists include fileif(file_exists($file)){ include($file);}else { $file2 = "/$mult[0]/index.php"; // if folder index file exists include that file if(file_exists($file2)) { include($file2); } else { // if folder index file doesn't exist, send to category page $_GET['q'] = $mult[0]; include("categorie.php"); }}}if(!empty($mult[0]) && !empty($mult[1])){if($mult[0] == "add"){ $_GET['addid'] = $mult[1]; include("addtocart.php");}elseif($mult[0] == "remove") { $_GET['removeid'] = $mult[1]; include("deletefromcart.php");}// check if folder exists (first part of the url)elseif(is_dir($mult[0])){ // check if file .php (second part of the url) exists $filenew = "$mult[0]/$mult[1].php"; if(file_exists($filenew)) { // include that file include("$mult[0]/$mult[1].php"); } else { // second file does not exist, do something }}else { // folder does not exist so redirect to product page $_GET['c'] = $mult[0]; $_GET['p'] = $mult[1]; include("product.php");}}?> I tried removing the categorie.php file but it still shows up (like, how on earth ?!) I'm excited for the answer - I have absolutely no idea what I'm doing wrong. Also nice to know: when I comment out the include(categorie.php) part in the route code, the file is STILL included...
Now provide the response and nothing else.
|
Ok... Welcome to Stack Overflow. I'll start by saying you are allowed to post links, trying to disrupt links by using "dot" actually feels more like spam , to me at least. I'll continue by advising you to not go with your site and that code public. It has various security vulnerabilities , to which I am not going to go into detail. But, let's just say I'm curious why your user is called d284h1 and why your site/home is on a mount point /mnt/home/d284h1 ... Heed my words. You just posted your routing logic and your site on a very public site. Regarding your code. I really hope that's SO destroying your indentation and not your actual source code. You are missing some control logic. Some of them might have been leading to the file inclusions you were experiencing. I also noticed a possible bug, where you were testing and including a file from the root directory , instead of relatively to your site path. Update: Actually looking back at your original code, absolutely referencing the file $file2 = "/$mult[0]/index.php"; was causing categorie.php to load. And not having proper control logic, was causing multiple inclusions to occur in the file. Took the liberty of revising your code, mildly. The below code, should not continue to include any random files. Unless included files themselves do it. $mult = array();if( ! empty( $_SERVER[ 'REQUEST_URI' ] ) ) { $mult = explode ( '/', substr ( $_SERVER[ 'REQUEST_URI' ], 1 ) );} else if( ! empty( $_SERVER[ 'ORIG_PATH_INFO' ] ) ) { $mult = explode ( '/', substr ( $_SERVER[ 'ORIG_PATH_INFO' ], 1 ) ); } else if( ! empty( $_SERVER[ 'PATH_INFO' ] ) ) { $mult = explode ( '/', substr ( $_SERVER[ 'PATH_INFO' ], 1 ) );}if (empty($mult[0])) { include("comingsoon/index.html"); die; #missing}# no need to test for !empty($mult[0]), if it were empty, the above die would fireif (empty($mult[1])) { $file = "$mult[0].php"; if($mult[0] == "index2") { include("index2.php"); die; } // if file exists include file if (file_exists($file)) { include($file); die; # missing die } # no need for else, you just die'd # renamed $file2 to $file, don't use temporary variable names in global scope. It clutters your application $file = "$mult[0]/index.php";# are you sure you meant to include from the root level? // if folder index file exists include that file if (file_exists($file)) { include($file); die;# missing die } # no need for else, you just die'd // if folder index file doesn't exist, send to category page $_GET['q'] = $mult[0]; include("categorie.php"); die;# missing die}# don't do succesive if/elseif on the same variable, use a switch!switch($mult[0]) { case'add': $_GET['addid'] = $mult[1]; include('addtocart.php'); break; case'remove': $_GET['removeid'] = $mult[1]; include('deletefromcart.php'); break;}if (is_dir($mult[0])) { // check if file .php (second part of the url) exists $filenew = "$mult[0]/$mult[1].php"; if(file_exists($filenew)) { // include that file include("$mult[0]/$mult[1].php"); die; # missing die }} else { // folder does not exist so redirect to product page $_GET['c'] = $mult[0]; $_GET['p'] = $mult[1]; include("product.php");} My updates are commented with # and this is in no way the final form it should look like. Take a look at PSR1 for a mild idea, on what coding standards are. They are meant to help and make you more proficient in your quest for the ultimate code, despite initially feeling cumbersome. 
Other things I'd continue on doing are: swapping !empty($var) with isset($var[0]) , if $var is a string swapping include($file);die; with return include $file; , if you're in the main scope swapping if/elseif blocks with ternary operators Actually regarding #3, here's an example: $mult = isset($_SERVER['REQUEST_URI'][0]) ? $_SERVER['REQUEST_URI'] : isset($_SERVER['ORIG_PATH_INFO'][0]) ? $_SERVER['ORIG_PATH_INFO'] : isset($_SERVER['PATH_INFO'][0]) ? $_SERVER['PATH_INFO'] : false ;$mult = $mult ? explode('/', substr($mult, 1)) : array(); P.S. I did not fix the security issues you were having, as I believe the code you are using should not be used. Consider using a framework or at least learning from one . Routing is the corner stone of good MVC , you're on the right path, go one step beyond.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14362429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/430979/']}
|
jdg_371504
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14608783
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a JSON string from PHP's json_encode() . This is how it looks in JSONViewer.stack.hu , and this is how it looks in the browser . Is it possible to use NSJSONSerialization to download the JSON data directly? If so, I am going to save the downloaded JSON data to SQLite by using FMDB .
Now provide the response and nothing else.
|
Your domain syntax is wrong. It should be [('user_ids', '=' , user.id)] Each tuple in the search domain needs to have 3 elements, in the form: ('field_name', 'operator', value) , where: field_name must be a valid name of field of the object model, possibly following many-to-one relationships using dot-notation, e.g 'street' or 'partner_id.country' are valid values. operator must be a string with a valid comparison operator from this list: =, !=, >, >=, <, <=, like, ilike, in, not in, child_of, parent_left, parent_right The semantics of most of these operators are obvious. The child_of operator will look for records who are children or grand-children of a given record, according to the semantics of this model (i.e following the relationship field named by self._parent_name , by default parent_id . value must be a valid value to compare with the values of field_name , depending on its type. Domain criteria can be combined using 3 logical operators than can be added between tuples: ' & ' (logical AND, default), ' | ' (logical OR), ' ! ' (logical NOT). These are prefix operators and the arity of the ' & ' and ' | ' operator is 2, while the arity of the ' ! ' is just 1. Be very careful about this when you combine them the first time. Here is an example of searching for Partners named ABC from Belgium and Germany whose language is not english :: [('name','=','ABC'),'!',('language.code','=','en_US'),'|',('country_id.code','=','be'),('country_id.code','=','de')] The '&' is omitted as it is the default, and of course we could have used '!=' for the language, but what this domain really represents is:: (name is 'ABC' AND (language is NOT english) AND (country is Belgium OR Germany))
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14608783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1732875/']}
|
jdg_371505
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40025319
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
All this time, when any Haskell lecture spoke of "flat map", usually in relation to Monads, I thought it was called "flat" for a reason, i.e. it flattens out the container. So [[1,2],[3,4]] would be processed just as if it were [1,2,3,4] But now I discover that fmap and map are basically the same thing, the only difference being the application of one for functors and the other for just lists. And that was only done, in the end, to avoid confusing error messages when using map. Is that true? And if so why did f in fmap come to mean "flat", why not "functor map"?
Now provide the response and nothing else.
|
And if so, why did f in fmap come to mean “flat”, why not “functor map”? Your intuition is right: the f in fmap does stand for “functor map”, not “flat map” at all. In fact, in newer, similar languages, such as PureScript, the name is just map . The Haskell map was defined first for lists, though, so coming up with a new name was difficult. Using the F from Functor was an easy, if not particularly creative, choice. It is more likely that the lecturer was referring to the monadic bind function, >>= . Due to x >>= f ’s equivalence to join (fmap f x) , bind is also sometimes called flatMap in other languages. It has the behavior you expect on lists, for example: > [1,2,3] >>= \x -> [x,x][1,1,2,2,3,3] It’s important to keep in mind, though, that this “flat map” does not recursively flatten to an arbitrary depth. In fact, writing such a function isn’t really possible in Haskell without some complicated typeclass trickery. Try it yourself: what would the type signature for a flatten function look like, even one that operates directly on lists? flatten :: ??? -> [a] The >>= function is very simple in comparison: it is like fmap , but every output element must be wrapped in the functor, and >>= shallowly “flattens” the results into a single wrapper. This operation is the essence of what a monad is, which is why the >>= function lives in the Monad typeclass, but fmap is in Functor . This answer is taken from some of the comments on the original question, so I have marked it community wiki. Edits and improvements are welcome.
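To see the join (fmap f x) decomposition concretely, here is a short GHCi-style sketch (join comes from Control.Monad); note that exactly one layer of structure is removed, never more:
> import Control.Monad (join)
> fmap (\x -> [x, x]) [1,2,3]
[[1,1],[2,2],[3,3]]
> join (fmap (\x -> [x, x]) [1,2,3])
[1,1,2,2,3,3]
> join [[[1]]]
[[1]]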
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40025319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1086638/']}
|
jdg_371506
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
54181797
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why is a constexpr function not evaluated at compile time but at runtime in the return statement of the main function? I tried template<int x>constexpr int fac() { return fac<x - 1>() * x; } template<>constexpr int fac<1>() { return 1; } int main() { const int x = fac<3>(); return x;} and the result is main: push rbp mov rbp, rsp mov DWORD PTR [rbp-4], 6 mov eax, 6 pop rbp ret with gcc 8.2. But when I call the function in the return statement template<int x>constexpr int fac() { return fac<x - 1>() * x; } template<>constexpr int fac<1>() { return 1; } int main() { return fac<3>();} I get int fac<1>(): push rbp mov rbp, rsp mov eax, 1 pop rbp ret main: push rbp mov rbp, rsp call int fac<3>() nop pop rbp ret int fac<2>(): push rbp mov rbp, rsp call int fac<1>() add eax, eax pop rbp ret int fac<3>(): push rbp mov rbp, rsp call int fac<2>() mov edx, eax mov eax, edx add eax, eax add eax, edx pop rbp ret Why is the first code evaluated at compile time and the second at runtime? Also, I tried both snippets with clang 7.0.0 and they are evaluated at runtime. Why is this not valid constexpr for clang? All evaluation was done in the Godbolt compiler explorer.
Now provide the response and nothing else.
|
A common misconception with regard to constexpr is that it means "this will be evaluated at compile time" 1 . It is not. constexpr was introduced to let us write natural code that may produce constant expressions in contexts that need them. It means "this must be evaluatable at compile time" , which is what the compiler will check. So if you wrote a constexpr function returning an int, you can use it to calculate a template argument, an initializer for a constexpr variable (also const if it's an integral type) or an array size. You can use the function to obtain natural, declarative, readable code instead of the old meta-programming tricks one needed to resort to in the past. But a constexpr function is still a regular function. The constexpr specifier doesn't mean a compiler has 2 to optimize it to heck and do constant folding at compile time. It's best not to confuse it for such a hint. 1 - Thanks user463035818 for the phrasing. 2 - c++20 and consteval is a different story however :)
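To make the distinction concrete, here is a minimal sketch reusing the fac template from the question (the names a, table and runtime_value are just illustrative): contexts that require a constant expression force compile-time evaluation, while an ordinary call may or may not be folded, at the optimizer's discretion.
constexpr int a = fac<4>();        // initializer of a constexpr variable: must be evaluated at compile time
int table[fac<3>()];               // array bound: compile time as well
static_assert(fac<3>() == 6, "");  // checked entirely by the compiler

int runtime_value() {
    return fac<3>();               // regular call; folding it to 6 is optional
}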
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54181797', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4645334/']}
|
jdg_371507
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13522
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Page 580, Chapter 12 in Jackson's 3rd edition text carries the statement: From the first postulate of special relativity the action integral must be a Lorentz scalar because the equations of motion are determined by the extremum condition, $\delta A = 0$ Certainly the extremum condition must be an invariant for the equation of motion between $t_1$ and $t_2$, whereas I don't see how the action integral must be a Lorentz scalar. Using basic classical mechanics as a guide, the action for a free particle isn't a Galilean scalar but still gives the correct equations of motion.
Now provide the response and nothing else.
|
First, observe that although the non-relativistic Lagrangian is not invariant, it changes only by a total derivative, and thus the equations of motion remain invariant. The reason for the difference between the Lorentzian and the Galilean cases is that the group action of the Lorentz group on the classical variables (positions and momenta) is by means of a true representation, while in the case of the Galilean group the representation is projective. In the language of geometric quantization, $\exp(i \frac{S}{\hbar})$, where $S$ is the action, is a section of $L \otimes \bar{L}$, where $L$ is the prequantization line bundle and $\bar{L}$ its dual. In other words, the action need not be a scalar; only an expression of the form $\bar{\psi}(t_2)\exp(i \frac{S(t_1, t_2)}{\hbar})\psi(t_1)$ needs to be invariant, where $\psi(t)$ is the wavefunction at time $t$ and $S(t_1, t_2)$ is the classical action between $t_1$ and $t_2$. The reason that the representation in the Galilean case is projective is related to the nontriviality of the cohomology group $H^2(G, U(1))$ in the Galilean case, in contrast to the Lorentz case. I have given a more detailed answer on a very similar subject in my answer to Anirbit: Poincare group vs Galilean group, and in the comments therein.
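To make the first sentence explicit for the free particle, under a Galilean boost $\vec{x} \to \vec{x} + \vec{u}t$, $\vec{v} \to \vec{v} + \vec{u}$ (with $\vec{u}$ constant), the Lagrangian transforms as $$\frac{m v^2}{2} \;\to\; \frac{m(\vec{v}+\vec{u})^2}{2} = \frac{m v^2}{2} + \frac{d}{dt}\left(m\,\vec{u}\cdot\vec{x} + \frac{m u^2}{2}\,t\right),$$ so it is shifted by a total time derivative: the Euler-Lagrange equations are unchanged, but the action itself picks up boundary terms and is not a Galilean scalar.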
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/13522', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1345/']}
|
jdg_371508
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23267119
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What I am trying to do is to show a div as a pop-up when I push a button and disable the background. My pop-up is working perfectly fine but the problem comes when I try to disable the background. To do this, I make use of the div called 'mask' which must to take up all the body. This div must be hidden at the beginning and to show it when somebody push the button. The thing is that this div (mask) is shown all the time, since the beginning. I have been trying to find a solution in internet and I found, among others, the following links: CSS Disable background when div popup and disable background using css when popup appear The first one doesn't have a solution and the solution of the second one doesn't fix my problem. This is my .jsp file: <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta content="text/html; charset=utf-8" http-equiv="Content-Type"> <link href="css/styles.css" rel="stylesheet" type="text/css"> <link href="css/popup.css" rel="stylesheet" type="text/css"> <script src="js/jquery-2.1.0.js" type="text/javascript"></script> <script language="javascript" type="text/javascript" src="js/popup.js"></script> </head> <body> <div id="pop"> <div id="close">X</div> <div id="contentPop"></div> </div> <div id="mask"> <div id="page-wrap"> ... <a class="minibtn" onclick="show();">Show pop-up</a> ... </div> </div> </body> </html> I have omitted all the code that it is alien to the pop-up and I have replaced it for "...". The popup.css file: #mask{ z-index: 500; position: fixed; display: none; /* removes the element completely from the document, it doesn't take up any space. */ /* visibility: hidden; -- hides the element, but it still takes up space in the layout. */ background: transparent; background-color: #ccc; height: 100%; width: 100%; top: 0px; left: 0px;}#pop { z-index:2; position:absolute; border: 1px solid #333333; text-align:center; background:#ffffff;}#close { float:right; margin-right:5px; cursor:pointer; font:Verdana, Arial, Helvetica, sans-serif; font-size:12px; font-weight:bold; color:#FFFFFF; background-color:#666666; width:12px; position:relative; margin-top:-1px; text-align:center;} And the popup.js file: function show() { // Show pop-up and disable background using #mask $("#pop").fadeIn('slow'); $("#mask").fadeIn('slow'); // Load content. $.post("contentPopup.html", function(data) { $("#contentPop").html(data); });} $(document).ready(function() { // Hide pop-up and mask $("#mask").hide(); $("#pop").hide(); // Size pop-up var img_w = 600; var img_h = 300; // width and height in css. $("#pop").css('width', img_w + 'px'); $("#pop").css('height', img_h + 'px'); // Get values from the browser window var w = $(this).width(); var h = $(this).height(); // Centers the popup w = (w / 2) - (img_w / 2); h = (h / 2) - (img_h / 2); $("#pop").css("left", w + "px"); $("#pop").css("top", h + "px"); // Function to close the pop-up $("#close").click(function() { $("#pop").fadeOut('slow'); $("#mask").fadeOut('slow'); });}); Thank you very much for your time and help. If there is any doubt, just let me know and I will try to explain it in a better way.
Now provide the response and nothing else.
|
I need this list of employee in ajax In spring when you need object serialization, de-serialization and message conversion. in that case you need to annotate your controller handler method with @RequestBody and @ResponseBody . Where: @ResponseBody : will inform spring that try to convert its return value and write it to the http response automatically. @RequestBody : will inform spring that try to convert the content of the incoming request body to your parameter object on the fly. in your case you need JSON type, you have to add @ResponseBody to your method signature or just above the method, and produces and consumes which are optional, like: @RequestMapping(value="phcheck", method=RequestMethod.GET produces="application/json")public @ResponseBody List<Employee> pay(@RequestParam("empid") int empid, String fdate, String tdate) { //get your employee list here return empList;} and in AJAX call use: contentType: 'application/json' attribute tells the type of data you're sending. and dataType: json attribute tells jquery what content type of response will receive. in your case contentType: 'application/json' is not needed, default one i.e. 'application/x-www-form-urlencoded; charset=UTF-8' is enough. and you can receive list of employees in your AJAX success, to iterate over it do like: success: function (data) { $.each(data, function(index, currEmp) { console.log(currEmp.name); //to print name of employee }); }, Note: Jackson mapper or any other mapper should be available on buildpath in order to work JSON serialize and deserialize. See Also: New features in spring mvc 3.1
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23267119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3257646/']}
|
jdg_371509
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4964255
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a time series for which I want to intelligently interpolate the missing values. The value at a particular time is influenced by a multi-day trend, as well as its position in the daily cycle. Here is an example in which the tenth observation is missing from myzoo start <- as.POSIXct("2010-01-01") freq <- as.difftime(6, units = "hours") dayvals <- (1:4)*10 timevals <- c(3, 1, 2, 4) index <- seq(from = start, by = freq, length.out = 16)obs <- (rep(dayvals, each = 4) + rep(timevals, times = 4))myzoo <- zoo(obs, index)myzoo[10] <- NA If I had to implement this, I'd use some kind of weighted mean of close times on nearby days, or add a value for the day to a function line fitted to the larger trend, but I hope there already exist some package or functions that apply to this situation? EDIT: Modified the code slightly to clarify my problem. There are na.* methods that interpolate from nearest neighbors, but in this case they do not recognize that the missing value is at the time that is the lowest value of the day. Maybe the solution is to reshape the data to wide format and then interpolate, but I wouldn't like to completely disregard the contiguous values from the same day. It is worth noting that diff(myzoo, lag = 4) returns a vector of 10's. The solution may lie with some combination of reshape , na.spline , and diff.inv , but I just can't figure it out. Here are three approaches that don't work: EDIT2. Image produced using the following code. myzoo <- zoo(obs, index)myzoo[10] <- NA # knock out the missing pointplot(myzoo, type="o", pch=16) # plot solid linepoints(na.approx(myzoo)[10], col = "red")points(na.locf(myzoo)[10], col = "blue")points(na.spline(myzoo)[10], col = "green")myzoo[10] <- 31 # replace the missing pointlines(myzoo, type = "o", lty=3, pch=16) # dashed line over the gaplegend(x = "topleft", legend = c("na.spline", "na.locf", "na.approx"), col=c("green","blue","red"), pch = 1)
Now provide the response and nothing else.
|
Try this: x <- ts(myzoo,f=4)fit <- ts(rowSums(tsSmooth(StructTS(x))[,-2]))tsp(fit) <- tsp(x)plot(x)lines(fit,col=2) The idea is to use a basic structural model for the time series, which handles the missing value fine using a Kalman filter. Then a Kalman smooth is used to estimate each point in the time series, including any omitted. I had to convert your zoo object to a ts object with frequency 4 in order to use StructTS. You may want to change the fitted values back to zoo again.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4964255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/573546/']}
|
jdg_371510
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
65977705
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dictionary with key-value pairs like: dict = {'a11': 1, 'a21': 0, 'a12': 1, 'a14': 2, 'a41': 1 . . . . . .. . 'a67':99, 'a23':98, 'a19':99} All the keys are of the form a11, a12, a13 and so on, and the values range from 0 to 99. Now, from the dict, I want to build lists such that list lst0 holds the keys whose value is 0, lst1 holds the keys whose value is 1, and so on, up to lst99 holding the keys whose value is 99. I am able to achieve this when there are fewer than 5 distinct values, using the code below: for key,value in d.items(): if value==1: lst.append(key) but how do I achieve the result when there are many more distinct values, say 100 or more, without having to define all 100 lists explicitly?
Now provide the response and nothing else.
|
On top of dragging the object around, you can also make it zoomable with the help of a GestureDetector . I applied the GestureDetector to the main Stack so that you can pinch to zoom in/out anywhere on the screen. It makes it somewhat easier to see what you are doing. HookWidget version class DragArea extends HookWidget { final Widget child; const DragArea({Key key, this.child}) : super(key: key); @override Widget build(BuildContext context) { final position = useState(Offset(100, 100)); final prevScale = useState(1.0); final scale = useState(1.0); return GestureDetector( onScaleUpdate: (details) => scale.value = prevScale.value * details.scale, onScaleEnd: (_) => prevScale.value = scale.value, child: Stack( children: [ Positioned.fill( child: Container(color: Colors.amber.withOpacity(.4))), Positioned( left: position.value.dx, top: position.value.dy, child: Draggable( maxSimultaneousDrags: 1, feedback: Transform.scale( scale: scale.value, child: child, ), childWhenDragging: Opacity( opacity: .3, child: Transform.scale( scale: scale.value, child: child, ), ), onDragEnd: (details) => position.value = details.offset, child: Transform.scale( scale: scale.value, child: child, ), ), ) ], ), ); }} StatefulWidget version class StatefulDragArea extends StatefulWidget { final Widget child; const StatefulDragArea({Key key, this.child}) : super(key: key); @override _DragAreaStateStateful createState() => _DragAreaStateStateful();}class _DragAreaStateStateful extends State<StatefulDragArea> { Offset position = Offset(100, 100); double prevScale = 1; double scale = 1; void updateScale(double zoom) => setState(() => scale = prevScale * zoom); void commitScale() => setState(() => prevScale = scale); void updatePosition(Offset newPosition) => setState(() => position = newPosition); @override Widget build(BuildContext context) { return GestureDetector( onScaleUpdate: (details) => updateScale(details.scale), onScaleEnd: (_) => commitScale(), child: Stack( children: [ Positioned.fill( child: Container(color: Colors.amber.withOpacity(.4))), Positioned( left: position.dx, top: position.dy, child: Draggable( maxSimultaneousDrags: 1, feedback: widget.child, childWhenDragging: Opacity( opacity: .3, child: widget.child, ), onDragEnd: (details) => updatePosition(details.offset), child: Transform.scale( scale: scale, child: widget.child, ), ), ), ], ), ); }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65977705', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2194036/']}
|
jdg_371511
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42226351
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to convert the following query into Sequelize code: select * from table_a inner join table_b on table_a.column_1 = table_b.column_1 and table_a.column_2 = table_b.column_2 I have tried many approaches and followed many of the provided solutions, but I am unable to produce the desired query from Sequelize code. The closest I have got is the following: select * from table_a inner join table_b on table_a.column_1 = table_b.column_1 I want the second condition as well: and table_a.column_2 = table_b.column_2 Is there a proper way to achieve this?
Now provide the response and nothing else.
|
You need to define your own on clause of the JOIN statement ModelA.findAll({ include: [ { model: ModelB, on: { col1: sequelize.where(sequelize.col("ModelA.col1"), "=", sequelize.col("ModelB.col1")), col2: sequelize.where(sequelize.col("ModelA.col2"), "=", sequelize.col("ModelB.col2")) }, attributes: [] // empty array means that no column from ModelB will be returned } ]}).then((modelAInstances) => { // result...});
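One extra detail, since the original query is an inner join: by default Sequelize generates an include as a LEFT OUTER JOIN, so you will likely also want required: true on the include. A sketch, with the same hypothetical ModelA/ModelB as above:
include: [
  {
    model: ModelB,
    required: true, // switches the generated join from LEFT OUTER JOIN to INNER JOIN
    on: { /* same custom on clause as above */ },
    attributes: []
  }
]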
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42226351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1844897/']}
|
jdg_371512
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
423868
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Looking at the datasheet for the MC74VHC1G08 , under the features section, it states Chip Complexity: FETs = 62 . Why does this IC need 62 transistors, while an AND gate can be made with only 6 transistors? What are the other 56 transistors being used for?My guess would be some sort of protection circuitry, but I am not sure.
Now provide the response and nothing else.
|
There can be several reasons why more than the minimum 6 MOSFETs (4 for a NAND + 2 for an inverter) are used in this IC: As stated in the datasheet: The internal circuit is composed of multiple stages, including a buffer output which provides high noise immunity and stable output. The output will be made using fairly large (not minimum size) transistors. These are always "folded", meaning multiple transistors are combined into one large one where drain and source diffusion areas are shared between two transistors. This behaves as one large transistor but could be counted as many if you want a higher transistor count. The ESD protection at inputs and outputs of ICs fabricated in modern CMOS processes often uses "grounded-gate MOSFETs" instead of the more traditional diodes. An "ESD clamp" circuit is needed between the supply pins; such a circuit consists of a couple of transistors. Digital circuits (like this AND gate) often need on-chip supply decoupling. These are called "decap cells". These are capacitors between the supply rails. These capacitors are mostly made by using the gate-drain/source capacitance of transistors. In CMOS processes the MOSFETs are the most "basic" components; they are also the most controlled and most flexible ones, so IC designers prefer to use a MOSFET whenever possible. All in all, it is "quite easy" to need 62 transistors to make a seemingly simple function like an AND gate. That's also because this IC is "a bit more" than just a simple AND gate. The AND gates in more complex circuits like CPUs, microcontrollers, etc. will often use only 6 transistors. But these aren't "stand-alone" AND gates like this IC.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/423868', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/173925/']}
|
jdg_371513
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8841195
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to automate the set up of site creation for our in-house development server. Currently, this consists of creating a system user, mysql user, database, and apache config. I know how I can do everything in a single bash file, but I wanted to ask if there was a way to more cleanly generate the apache config. Essentially what I want to do is generate a conf file based on a template, similar to using printf. I could certainly use printf, but I thought there might be a cleaner way, using sed or awk. The reason I don't just want to use printf is because the apache config is about 20 lines long, and will take up most of the bash script, as well as make it harder to read. Any help is appreciated.
Now provide the response and nothing else.
|
Choose a way of marking parameters. One possibility is :parameter: , but any similar pair of markers that won't be confused with legitimate text for the template file(s) is good. Write a sed script (in sed , awk , perl , ...) similar to the following: sed -e "s/:param1:/$param1/g" \ -e "s/:param2:/$param2/g" \ -e "s/:param3:/$param3/g" \ httpd.conf.template > $HTTPDHOME/etc/httpd.conf If you get to a point where you need sometimes to edit something and sometimes don't, you may find it easier to create the relevant sed commands in a command file and then execute that: {echo "s/:param1:/$param1/g"echo "s/:param2:/$param2/g"echo "s/:param3:/$param3/g"if [ "$somevariable" = "somevalue" ]then echo "s/normaldefault/somethingspecial/g"fi} >/tmp/sed.$$sed -f /tmp/sed.$$ httpd.conf.template > $HTTPDHOME/etc/httpd.conf Note that you should use a trap to ensure the temporary doesn't outlive its usefulness: tmp=/tmp/sed.$$ # Consider using more secure alternative schemestrap "rm -f $tmp; exit 1" 0 1 2 3 13 15 # aka EXIT HUP INT QUIT PIPE TERM...code above...rm -f $tmptrap 0 This ensures that your temporary file is removed when the script exits for most plausible signals. You can preserve a non-zero exit status from previous commands and use exit $exit_status after the trap 0 command.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8841195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245163/']}
|
jdg_371514
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
88642
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm following through a tutorial and it mentions to run this command: sudo chmod 700 !$ I'm not familiar with !$ . What does it mean?
Now provide the response and nothing else.
|
Basically, it's the last argument to the previous command. !$ is the "end" of the previous command. Consider the following example: We start by looking for a word in a file: grep -i joe /some/long/directory/structure/user-lists/list-15 if joe is in that userlist, we want to remove him from it. We can either fire up vi with that long directory tree as the argument, or as simply as vi !$ Which bash expands to: vi /some/long/directory/structure/user-lists/list-15 ( source ; handy guide, by the way) It's worth noting the distinction between this !$ token and the special shell variable $_ . Indeed, both expand to the last argument of the previous command. However, !$ is expanded during history expansion , while $_ is expanded during parameter expansion . One important consequence of this is that, when you use !$ , the expanded command is saved in your history. For example, consider the keystrokes echo Foo Enter echo !$ Jar Enter Up Enter ; and echo Foo Enter echo $_ Jar Enter Up Enter . (The only characters changed are the !$ and $_ in the middle.) In the former, when you press Up , the command line reads echo Foo Jar , so the last line written to stdout is Foo Jar . In the latter, when you press Up , the command line reads echo $_ Jar , but now $_ has a different value than it did previously; indeed, $_ is now Jar , so the last line written to stdout is Jar Jar . Another consequence is that _ can be used in other parameter expansions; for example, the sequence of commands printf '%s ' isomorphism ; printf '%s\n' ${_%morphism}sceles prints isomorphism isosceles . But there's no analogous " ${!$%morphism} " expansion. For more information about the phases of expansion in Bash, see the EXPANSION section of man 1 bash (this is called Shell Expansions in the online edition). The HISTORY EXPANSION section is separate.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/88642', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/40509/']}
|
jdg_371515
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1273509
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to use Maven to build a project. The project builds fine on a few computers I tried, but on one machine I'm having a strange issue. Essentially, Maven is searching for a dependency that is not included in the pom.xml file or in the settings.xml file at all. I have tried mvn dependency:purge-local-repository but to no avail; I'm still stuck with Maven searching for a dependency that I have no idea where it is coming from.
Now provide the response and nothing else.
|
If you run mvn dependency:tree on your project you'll see where the unwanted dependency is being introduced, you can then exclude the dependency. This is an example of the output: [INFO] [dependency:tree][INFO] org.apache.maven.plugins:maven-dependency-plugin:maven-plugin:2.0-alpha-5-SNAPSHOT[INFO] \- org.apache.maven.doxia:doxia-site-renderer:jar:1.0-alpha-8:compile[INFO] \- org.codehaus.plexus:plexus-velocity:jar:1.1.3:compile[INFO] \- velocity:velocity:jar:1.4:compile To exclude a dependency, you'd do something like this: <dependency> <groupId>sample.ProjectA</groupId> <artifactId>Project-A</artifactId> <version>1.0</version> <exclusions> <exclusion> <!-- declare the exclusion here --> <groupId>sample.ProjectB</groupId> <artifactId>Project-B</artifactId> </exclusion> </exclusions> </dependency> For more details on the dependency plugin see the documentation . For more details on exclusions, see the guide . Update: You can also use the m2eclipse plugin to generate a graphical tree of the dependencies or as a graph. dependency tree screenshot http://www.sonatype.com/books/m2eclipse-book/reference/figs/web/eclipse_pom-editor-depend-tree-locate.png dependency graph screenshot http://www.sonatype.com/books/m2eclipse-book/reference/figs/web/eclipse_pom-editor-graph.png
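If the tree is large, you can also ask the plugin to show only the paths that lead to the artifact you are hunting for, using its includes filter, e.g.: mvn dependency:tree -Dincludes=velocity:velocity (the velocity:velocity coordinates here are just taken from the example output above; substitute the groupId:artifactId of your unwanted dependency).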
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1273509', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/413317/']}
|
jdg_371516
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27985300
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm running one of my scripts on the IE 11 browser with Selenium 2.43.1. The script types into a text field using the following: element.sendKeys("string"); In the IE browser, I can see that one character of the string is typed into the text field, and then it waits for 1-2 seconds before typing the next character. That means typing one character takes 1-2 seconds. Why is typing so slow with IE? Is there any alternative way to speed up typing?
Now provide the response and nothing else.
|
My issue was with the driver architecture, and I fixed it by downloading and using a 32-bit one. To switch to 32-bit, here is what you have to do: Download the 32-bit driver from http://selenium-release.storage.googleapis.com/index.html Instantiate your InternetExplorerDriver using the InternetExplorerDriverService class with the path to the 32-bit driver service. InternetExplorerDriver ieDriver = new InternetExplorerDriver("Path to the 32 bit Explorer driver"); OR if using a builder: System.setProperty("webdriver.ie.driver", "C:\\drivers\\IEDriverServer.exe"); DesiredCapabilities ieCapabilities = DesiredCapabilities.internetExplorer(); ieCapabilities.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true); ieCapabilities.setCapability("requireWindowFocus", true); File ie_temp = new File("C:\\Selenium\\IEDrivertemp"); InternetExplorerDriverService.Builder ies = new InternetExplorerDriverService.Builder(); ies.withExtractPath(ie_temp); InternetExplorerDriverService service = ies.build(); WebDriver driver = new InternetExplorerDriver(service, ieCapabilities); The thread that helped me resolve this: http://forumsqa.com/question/typing-too-slow-in-text-fields-while-replaying-tests/
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27985300', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1471417/']}
|
jdg_371517
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
56507839
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's my SwiftUI code: struct ContentView : View { @State var showingTextField = false @State var text = "" var body: some View { return VStack { if showingTextField { TextField($text) } Button(action: { self.showingTextField.toggle() }) { Text ("Show") } } }} What I want is when the text field becomes visible , to make the text field become the first responder (i.e. receive focus & have the keyboard pop up).
Now provide the response and nothing else.
|
Swift UI 3 As of Xcode 13, you can use the focused modifier to make a view become first responder. Swift UI 1/2 It doesn't seem to be possible at the moment, but you can implement something similar yourself. You can create a custom text field and add a value to make it become first responder. struct CustomTextField: UIViewRepresentable { class Coordinator: NSObject, UITextFieldDelegate { @Binding var text: String var didBecomeFirstResponder = false init(text: Binding<String>) { _text = text } func textFieldDidChangeSelection(_ textField: UITextField) { text = textField.text ?? "" } } @Binding var text: String var isFirstResponder: Bool = false func makeUIView(context: UIViewRepresentableContext<CustomTextField>) -> UITextField { let textField = UITextField(frame: .zero) textField.delegate = context.coordinator return textField } func makeCoordinator() -> CustomTextField.Coordinator { return Coordinator(text: $text) } func updateUIView(_ uiView: UITextField, context: UIViewRepresentableContext<CustomTextField>) { uiView.text = text if isFirstResponder && !context.coordinator.didBecomeFirstResponder { uiView.becomeFirstResponder() context.coordinator.didBecomeFirstResponder = true } }} Note: didBecomeFirstResponder is needed to make sure the text field becomes first responder only once, not on every refresh by SwiftUI ! You would use it like this... struct ContentView : View { @State var text: String = "" var body: some View { CustomTextField(text: $text, isFirstResponder: true) .frame(width: 300, height: 50) .background(Color.red) }} P.S. I added a frame as it doesn't behave like the stock TextField , meaning there's more stuff going on behind the scenes. More on Coordinators in this excellent WWDC 19 talk: Integrating SwiftUI Tested on Xcode 11.4
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/56507839', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6583/']}
|
jdg_371518
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1337137
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to Charles Nutter , Duby is a static-typed language with Ruby's syntax and Java's type system. Duby supports all Ruby's literals, uses local type inference (only argument types must be declared), and runs as fast as Java (because it produces nearly identical bytecode). But with the advent of invokedynamic, Duby needed a playmate. 1. What's invokedynamic and why does Duby "need a playmate"? Juby , on the other hand, is intended to be basically like Duby, in that it uses Java's types and Ruby's syntax. But it takes advantage of the new invokedynamic opcode to be 100% dynamic. Juby is a dynamic Duby, or perhaps a dynamic Java with Ruby syntax. It's not hard to comprehend. Actually it is hard to comprehend. 2. Could someone elaborate a little bit more on what this distinction is about? 3. Why do we need ( need! ) another Ruby-related language? Or, rather, two more Ruby-related languages?
Now provide the response and nothing else.
|
I'm going to answer the questions out of order, starting with the simplest one: 2. Could someone elaborate a little bit more on what this distinction is about? Duby is statically typed, Surinx (which is the final name for what was for a short amount of time called Juby ) is dynamically typed. That's already all there is to it. Actually, there is one small detail as a consequence of this: Surinx syntax is a strict subset of Ruby syntax, i.e. every syntactically valid Surinx program is also a syntactically valid Ruby program. Duby OTOH is almost a syntactic subset, except for its mandatory method parameter type annotations: def foo(bar => Integer, baz => String) => Array # ...end That's illegal in Ruby . 3. Why do we need (need!) another Ruby -related language? First off: other than syntactic similarity, these languages are in no way, shape or form related to Ruby . So, why did Charles Oliver Nutter create Duby ? He is the lead developer of the JRuby Ruby implementation, which is an implementation of the Ruby programming language for the JVM. Like most Ruby implementations, it is written in the dominant programming language of the underlying platform: MRI , YARV and tinyrb are implemented 100% in C, MacRuby mostly in C with a bit of Objective-C, Ruby.NET and IronRuby 100% in C#, HotRuby in ECMAScript , Red Sun in ActionScript, Cardinal in PIR and NQP and so on. (The only Ruby implementations that contain a significant amount of Ruby code are Rubinius (about 70% Ruby , 30% C++) and MagLev (unknown amounts of Ruby and Smalltalk ).) And naturally, XRuby and JRuby are implemented 100% in Java. Now, the funny thing is, Charlie came to Ruby , because he didn't like his day job, doing Java development. And now, he still writes Java code all day long! Of course, he doesn't like that, and so he was looking for another programming language in which to implement the core of JRuby . One option would certainly be to just write it all in Ruby , but with metacircular implementations there usually comes a point of diminishing returns, where the implementations degenerates into academic masturbation. It would certainly make sense to rewrite the libraries, the ahead-of-time compiler (actually, that's already being done ) and some of the core classes in Ruby , but some parts of the engine core are better written in something closer to the execution model of the JVM itself. Charlie was looking at the available options: Scala , Groovy , Fan , Clojure , Nice , but all of them had a significant disadvantage: a fairly large language runtime. The size of the JRuby runtime is already a big problem in terms of memory consumption and startup latency (especially compared to MRI or YARV and even more so if you actually include the JVM itself in your measurements), and rewriting it in a language that adds its own runtime to that weight is simply a no-go. Unfortunately, there was no programming language which satisfied the two basic criteria Charlie was looking for: no runtime and compiles to JVM bytecode that is at least as efficient as the equivalent Java. So, he decided to create his own. The reason why he chose to use a syntax similar to Ruby , is actually quite simple: he didn't need to write a parser for it, Duby just uses JRuby 's already existing parser with one minor modification to support method parameter type annotations. (Actually, he also likes Ruby 's syntax, that was of course also a factor.) As you know the syntax is actually the least important part of a programming language. 
(Its irrelevance isn't always obvious from the amount of arguing about it , but that's just because syntax is the only thing you can argue about without having to actually understand what you are talking about.) Much more important than the syntax is the type system and the evaluation semantics. And here comes the trick: Duby doesn't have either! It only has syntax! It's like a parasite: it simply "borrows" the type system and semantics from its underlying platform. That means that on the JVM, Duby 's type system is the Java type system, and Duby 's semantics are Java's semantics. To put it another way: Duby isn't a programming language at all , rather it is "just" an alternative syntax for Java. That means that there is no mapping, no conversion overhead and no speed difference between Duby and Java. And that means that the internals of JRuby could be written in Duby , without losing any features. So, that's Duby . In order to explain Surinx , I'll first answer your first question: 1. What's invokedynamic and why does Duby "need a playmate"? invokedynamic is specifically a new bytecode that is going to be added to the 3rd edition of the JVM specification and that is going to be released in JDK7. However, more generally invokedynamic is usually used as a stand-in to refer to a whole bunch of features, of which the actual invokedynamic bytecode is only one, that are currently being developed under the umbrella of JSR-292 "Supporting Dynamically Typed Languages on the Java Platform" . And even more generally, the name invokedynamic is used as a moniker for the general change of strategy both in Sun and in the JCP as a whole to turn the Java platform into a general purpose language platform for all sorts of languages. The specific purpose of JSR-292 (which is what Charlie alludes to in his blog post), is to make dynamic method dispatch faster – indeed, almost as fast as static dispatch in Java, at least in the best case. Surinx is a dynamically typed programming language which basically does the same thing as Duby : like Duby , it also has only syntax, like Duby , it also used the Java type system. But unlike Duby , it does not use Java's method invocation semantics, instead it uses invokedynamic s method invocation semantics. IOW: it is dynamically typed and uses dynamic dispatch. So, that's Surinx . Now, I can answer the second half of your third question: 3. Why do we need (need!) […] two more Ruby-related languages? I already answered for Duby , here's the answer for Surinx : it's what Groovy should have been – a lightweight (actually, zero -weight) dynamic expressive scripting language for the JVM. Also, it is currently the simplest way to play around with the inner workings of invokedynamic . (The current development snapshots of JRuby 1.4 also support it, but that's a much more complex project.) Two things I left out: Duby actually uses local variable type inference, so, unlike Java, you only have to declare the types of method parameters, but everything inside a method will be type-inferred. And secondly, both Duby and Surinx are not actually tied to the JVM. Since they just steal their semantics and type systems from the underlying platform, they can be ported almost anywhere, where you have a rough mapping from Ruby syntax to platform concepts. Off the top of my head, I could imagine ports of Duby to C, C++, Objective-C (iPhone, anyone?), D , CLI and ActionScript and ports of Surinx to the DLR , Smalltalk , Parrot , ECMAScript , Python , Perl , PHP and Objectice-C. 
In fact, there are already the beginnings of a C port of Duby .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1337137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/163301/']}
|
jdg_371519
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
196929
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The principle is defined as modules having one reason to change . My question is, surely these reasons to change are not known until the code actually starts to change?? Pretty much every piece of code has numerous reasons why it could possibly change but surely attempting to anticipate all of these and design your code with this in mind would end up with very poor code. Isn't it a better idea to only really start to apply SRP when requests to change the code start coming in? More specifically, when a piece of code has changed more than once for more than one reason, thus proving it has more than one reason to change. It sounds very anti-Agile to attempt to guess reasons for change. An example would be a piece of code which prints a document. A request comes in to change it to print to PDF and then a second request is made to change it to apply some different formatting to the document. At this point you have proof of more than a single reason to change (and violation of SRP) and should make the appropriate refactoring.
Now provide the response and nothing else.
|
Of course, the YAGNI principle will tell you to apply SRP not before you really need it. But the question you should ask yourself is: do I need to apply SRP first and only when I have to actually change my code? To my experience, the application of SRP gives you a benefit much earlier: when you have to find out where and how to apply a specific change in your code. For this task, you have to read and understand your existing functions and classes. This gets very much easier when all your functions and classes have a specific responsibility. So IMHO you should apply SRP whenever it makes your code easier to read, whenever it makes your functions smaller and more self-describing. So the answer is yes , it makes sense to apply SRP even for new code. For example, when your printing code reads a document, formats the document and prints the result to a specific device, these are 3 clear separable responsibilities. So make at least 3 functions out of them, give them according names. For example: void RunPrintWorkflow() { var document = ReadDocument(); var formattedDocument = FormatDocument(document); PrintDocumentToScreen(formattedDocument); } Now, when you get a new requirement to change document formatting or another one to print to PDF, you know exactly at which of these functions or locations in code you have to apply changes, and even more important, where not. So, whenever you come to a function you don't understand because the function does "too much", and you are not sure if and where to apply a change, then consider to refactor the function into separate, smaller functions. Don't wait until you have to change something. Code is 10x more often read than changed, and smaller functions are much easier to read. To my experience, when a function has a certain complexity, you can always split the the function into different responsibilities, independent of knowing which changes will come in the future. Bob Martin typically goes a step further, see the link I gave in my comments below. EDIT: to your comment: The main responsibility of the outer function in the example above is not to print to a specific device, or to format the document - it is to integrate the printing workflow . Thus, at the abstraction level of the outer function, a new requirement like "docs should not be formatted anymore" or "doc should be mailed instead of printed" is just "the same reason" - namely "printing workflow has changed". If we talk about things like that, it is important to stick to the right level of abstraction .
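To tie this back to the example in the question: with the workflow split up this way, the "print to PDF" request lands in exactly one place. A sketch with hypothetical names, in the same style as above:
void RunPrintWorkflow()
{
    var document = ReadDocument();
    var formattedDocument = FormatDocument(document);
    PrintDocumentToPdf(formattedDocument); // only this line changes; reading and formatting stay untouched
}
The later formatting request would, in turn, only touch FormatDocument.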
|
{}
|
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/196929', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/85655/']}
|
jdg_371520
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9037492
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to follow the http://wiki.eclipse.org/Jetty/Tutorial/Jetty_HelloWorld tutorial (with Eclipse). The project compiles fine. However, when I hit the localhost server I get: java.lang.ClassNotFoundException: javax.servlet.AsyncContext Now it looks like that is defined in Java EE what do I need to add to the build path to get this to work? I have installed Java EE but I am not sure where to go from there. I am trying this with jetty-all-8.0.4.v20111024.jar servlet-api-2.5.jar
Now provide the response and nothing else.
|
That class is introduced in Servlet 3.0 which is part of Java EE 6. Jetty 7 as mentioned in that tutorial is a Servlet 2.5 container. Servlet 2.5 is part of Java EE 5. You need to upgrade to Jetty 8 which is a Servlet 3.0 container. Downloading and installing the whole Java EE pack as available here makes no sense as that basically gives you the Java EE reference implementation Glassfish back, not Jetty, while you need a newer version of Jetty, not Glassfish. You also need to make sure that you don't have downloaded an arbitrary servlet-api.jar file from somewhere and placed it in the classpath while you already have a fullworthy servlet container like Jetty at your hands (which is a classic beginner's mistake to circumvent compilation errors they faced on the javax.servlet API). See also: What exactly is Java EE? How do I import the javax.servlet API in my Eclipse project?
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9037492', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/833970/']}
|
jdg_371521
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
349845
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Environment: Rails 2.2.2, Oracle 10g Most of the columns declared "date" in my ActiveRecord models are exactly that: dates: they don't care about time at all. So with a model declared thus:# class MyDateOnlyModel < ActiveRecord::Migration def self.up create_table :my_date_only_model do |t| t.date :effective_date t.timestamps end endend writing a test like this: test_date = Date.new(2008,12,05)MyDateOnlyModel.create!(:effective_date => test_date)assert_equal test_date, MyDateOnlyModel.find(:first).effective_date should pass, shouldn't it? (Assuming I didn't mess anything up transcribing the above, of course) But it doesn't - not quite. I get this: <Fri, 05 Dec 2008> expected but was<Fri, 05 Dec 2008 00:00:00 UTC +00:00>. So I put a date into the database and got ... well what did I get? puts MyDateOnlyModel.find(:first).eff_date.class tells me I actually got a ActiveSupport::TimeWithZone . Which wasn't what I wanted at all. Is there a simple way to tell ActiveRecord that some (not all) columns are Date s and only Date s? UPDATE: more complaining... Yes, I could use to_date: assert_equal test_date, MyDateOnlyModel.find(:first).effective_date.to_date works fine. But that's what I'm trying to avoid. I asked AR to make me a date, I want a date back. And I could add a method to my class, effective_date_as_date - that works too. But surely it's not impossible to just get a date, dagnabbit. PRE-ACCEPTANCE UPDATE Eventually I realised why this was a particular problem with Oracle: there is no distinction between DATE and DATETIME, so ActiveRecord can't figure out unaided whether a time of zero means midnight (possibly with time zone corrections) or just the date. Bah. Stupid Oracle. So I'm going to have either to go down the plugin route, change my database (tempting, so very tempting) or continue with the to_date/to_time mess I have at present.
Now provide the response and nothing else.
|
The answer is now to create a @ClassRule within your suite. The rule will be invoked before or after (depending on how you implement it) each test class is run. There are a few different base classes you can extend/implement. What is nice about class rules is that if you do not implement them as anonymous classes then you can reuse the code! Here is an article about them: http://java.dzone.com/articles/junit-49-class-and-suite-level-rules Here is some sample code to illustrate their use. Yes, it is trivial, but it should illustrate the life-cycle well enough for you to get started. First the suite definition: import org.junit.*;import org.junit.rules.ExternalResource;import org.junit.runners.Suite;import org.junit.runner.RunWith;@RunWith( Suite.class )@Suite.SuiteClasses( { RuleTest.class,} )public class RuleSuite{ private static int bCount = 0; private static int aCount = 0; @ClassRule public static ExternalResource testRule = new ExternalResource(){ @Override protected void before() throws Throwable{ System.err.println( "before test class: " + ++bCount ); sss = "asdf"; }; @Override protected void after(){ System.err.println( "after test class: " + ++aCount ); }; }; public static String sss;} And now the test class definition: import static org.junit.Assert.*;import org.junit.ClassRule;import org.junit.Rule;import org.junit.Test;import org.junit.rules.ExternalResource;public class RuleTest { @Test public void asdf1(){ assertNotNull( "A value should've been set by a rule.", RuleSuite.sss ); } @Test public void asdf2(){ assertEquals( "This value should be set by the rule.", "asdf", RuleSuite.sss ); }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/349845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1060/']}
|
jdg_371522
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21822054
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried what's suggested in How to force /bin/bash interpreter for oneliners by doing os.system('GREPDB="my command"')os.system('/bin/bash -c \'$GREPDB\'') However, no luck. Unfortunately I need to run this command with bash, and subprocess isn't an option in this environment; I'm limited to Python 2.4. Any suggestions to point me in the right direction?
Now provide the response and nothing else.
|
Both commands are executed in different subshells. Setting variables in the first system call does not affect the second system call. You need to put the two commands in one string (combining them with ; ). >>> import os>>> os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')1230 NOTE You need to use "$GREPDB" instead of '$GREPDB' . Otherwise it is interpreted literally instead of being expanded. If you can use subprocess : >>> import subprocess>>> subprocess.call('/bin/bash -c "$GREPDB"', shell=True,... env={'GREPDB': 'echo 123'})1230
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21822054', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2453153/']}
|
jdg_371523
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14252180
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to do something that seems relatively simple and logic from a user interface level but I have one bug that is very annoying. I have a ToggleButton and I am trying to show a Popup when the button is toggled in and hide the Popup when the button is toggled out. The Popup also hides when the user clicks away from it. Everything is working as expected with the following XAML except when I click the toggle button after the Popup is shown, the Popup disappears for a split second then reappears. I suspect what's going on here is that clicking away from the Popup is causing it to toggle the button off then immediately after the button is toggled back on as the mouse clicks it. I just don't know how to go about fixing it. Any help is appreciated. Thanks. <ToggleButton x:Name="TogglePopupButton" Content="My Popup Toggle Button" Width="100" /> <Popup StaysOpen="False" IsOpen="{Binding IsChecked, ElementName=TogglePopupButton, Mode=TwoWay}"> <Border Width="100" Height="200" Background="White" BorderThickness="1" BorderBrush="Black"> <TextBlock>This is a test</TextBlock> </Border> </Popup>
Now provide the response and nothing else.
|
Stephan's answer has the disadvantage that the desired behaviour of closing the popup whenever it loses focus also disappears. I solved it by disabling the toggle button when the popup is open. An alternative would be to use the IsHitTestVisible property instead of IsEnabled: <ToggleButton x:Name="TogglePopupButton" Content="My Popup Toggle Button" Width="100" IsEnabled="{Binding ElementName=ToggledPopup, Path=IsOpen, Converter={StaticResource BoolToInvertedBoolConverter}}"/> <Popup x:Name="ToggledPopup" StaysOpen="False" IsOpen="{Binding IsChecked, ElementName=TogglePopupButton, Mode=TwoWay}"> <Border Width="100" Height="200" Background="White" BorderThickness="1" BorderBrush="Black"> <TextBlock>This is a test</TextBlock> </Border> </Popup> The converter looks like this: public class BoolToInvertedBoolConverter : IValueConverter{ public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if (value is bool) { bool boolValue = (bool)value; return !boolValue; } else return false; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException("ConvertBack() of BoolToInvertedBoolConverter is not implemented"); }}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14252180', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1596973/']}
|
jdg_371524
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9373104
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Instead of supporting method overloading Ruby overwrites existing methods. Can anyone explain why the language was designed this way?
Now provide the response and nothing else.
|
Method overloading can be achieved by declaring two methods with the same name but different signatures. These different signatures can be either: Arguments with different data types, e.g.: method(int a, int b) vs method(String a, String b) Variable number of arguments, e.g.: method(a) vs method(a, b) We cannot achieve method overloading the first way because there are no data type declarations in Ruby (a dynamically typed language). So the only way to define the above method is def(a,b) With the second option, it might look like we can achieve method overloading, but we can't. Let's say I have two methods with different numbers of arguments, def method(a); end;def method(a, b = true); end; # second argument has a default valuemethod(10)# Now the method call can match the first one as well as the second one, # so here is the problem. So Ruby needs to maintain one method in the method lookup chain with a unique name. A small sketch of the resulting behaviour follows below.
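A minimal sketch of what actually happens when two definitions share a name (the names here are illustrative and not from the original answer): the second definition silently replaces the first, so a call with the old arity simply fails.

def greet(name)
  "Hello, #{name}"
end

def greet(name, greeting)   # silently replaces the one-argument version above
  "#{greeting}, #{name}"
end

greet("Ann", "Hi")   # => "Hi, Ann"
greet("Ann")         # => raises ArgumentError: wrong number of arguments (given 1, expected 2)

This is why idiomatic Ruby emulates "overloading" with default values, *args, or explicit type checks inside a single method body rather than with multiple definitions.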
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/9373104', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/489559/']}
|
jdg_371525
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7129178
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
QUESTION: What browser versions support CORS (Cross-Origin Resource Sharing) headers for Cross Domain Images used in Canvas? CORS can apply to both cross domain XMLHttpRequests and to image requests. This question is about image requests My normal go to for browser version compatibility http://caniuse.com/cors is unclear on the issue and google search yields no good results. I did find a recent chrome development blog implying that CORS support was wide spread in modern browsers but might break because of WebGL security problems. http://blog.chromium.org/2011/07/using-cross-domain-images-in-webgl-and.html More detail on CORS: We're considering the viability of using canvas & CORS with cross domain image requests as described in the W3C Working Draft http://www.w3.org/TR/cors/#use-cases . CORS is used by html canvas to allow cross domain resource usage in a fashion similar to the way flash uses crossdomain.xml. Basically, we want to read/edit the image data pixels and we don't want to use a same origin proxy server. Normally, if are images loaded cross domain and used with html canvas, accessing pixels using functions like canvas.toDataURL() will throw a security error. However, If the server delivering the image adds a header like this, the cross domain usage should be allowed. access-control-allow-origin: * Browsers We Care Most About: We are planning to work around IE's lack of canvas support using flash, so for desktop browsers with a CORS problem we can do that as well, but on mobile flash is not an option, and using a proxy to make the requests same origin is not an option in our use case. So, I'm particularly interested in Andriod, Iphone, IPAD browser support for CORS.
Now provide the response and nothing else.
|
Test Results : Bad News, it appears to only work in Chrome. All other browsers (including Android Mobile) give an error like this: Failed: DOM Exception: SECURITY_ERR (18) Mobile Devices I tested Android (samsung galaxy kernel version 2.6.32.9), Iphone and IPAD V1 and it failed in all three. You can test your own mobile device with this URL: http://maplarge.com/CrossOriginImageTest.html The Test Script: <!DOCTYPE html><html><head><title>Canvas Cross Origin Image Test: Testing for Canvas Cross Domain Image CORS Support</title><script type="text/javascript"> function initialize() { //will fail here if no canvas support try { var can = document.getElementById('mycanvas'); var ctx = can.getContext('2d'); var img = new Image(); img.crossOrigin = ''; //domain needs to be different from html page domain to test cross origin security img.src = 'http://lobbydata.com/Content/images/bg_price2.gif'; } catch (ex) { document.getElementById("results").innerHTML = "<span style='color:Red;'>Failed: " + ex.Message + "</span>"; } //will fail here if security error img.onload = function () { try { var start = new Date().getTime(); can.width = img.width; can.height = img.height; ctx.drawImage(img, 0, 0, img.width, img.height); var url = can.toDataURL(); // if read succeeds, canvas isn't dirty. //get pixels var imgd = ctx.getImageData(0, 0, img.width, img.width); var pix = imgd.data; var len = pix.length; var argb = []; //pixels as int for (var i = 0; i < len; i += 4) { argb.push((pix[i + 3] << 24) + (pix[i] << 16) + (pix[i + 1] << 8) + pix[i + 2]); } var end = new Date().getTime(); var time = end - start; document.getElementById("results").innerHTML = "<span style='color:Green;'>" + "Success: Your browser supports CORS for cross domain images in Canvas <br>"+ "Read " + argb.length+ " pixels in "+ time+"ms</span>"; } catch (ex) { document.getElementById("results").innerHTML = "<span style='color:Red;'>Failed: " + ex + "</span>"; } } }</script></head><body onload="initialize()"><h2>Canvas Cross Origin Image Test: Testing for Canvas Cross Domain Image CORS Support</h2><h2><a href="http://blog.chromium.org/2011/07/using-cross-domain-images-in-webgl-and.html">What is CORS Image Security?</a></h2><h1 id="results" style="color:Orange;">Testing...</h1><canvas id="mycanvas"></canvas><br /><a href="/Example/List">More Examples</a></body></html>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7129178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/161852/']}
|
jdg_371526
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
316
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Density Functional Theory courses, one is often reminded that Kohn-Sham orbitals are often said to bear no any physical meaning. They only represent a noninteracting reference system which has the same electron density as the real interacting system. That being said, there are plenty of studies in that field’s literature that given KS orbitals a physical interpretation, often after a disclaimer similar to what I said above. To give only two examples, KS orbitals of H 2 O [1] and CO 2 closely resemble the well-known molecular orbitals. Thus, I wonder: What good (by virtue of being intuitive, striking or famous) examples can one give as a warning of interpreting the KS orbitals resulting from a DFT calculation? [1] “What Do the Kohn-Sham Orbitals and Eigenvalues Mean?”, R. Stowasser and R. Hoffmann, J. Am. Chem. Soc. 1999 , 121 , 3414–3420.
Now provide the response and nothing else.
|
When people say that Kohn-Sham orbitals bear no physical meaning, they mean it in the sense that nobody has proved mathematically that they mean anything. However, it has been empirically observed that Kohn-Sham orbitals often do look very much like Hartree-Fock orbitals, which do have accepted physical interpretations in molecular orbital theory. In fact, the reference in the OP lends evidence to precisely this latter viewpoint. To say that orbitals are "good" or "bad" is not really that meaningful in the first place. A basic fact that can be found in any electronic structure textbook is that in theories that use determinantal wavefunctions such as Hartree-Fock theory or Kohn-Sham DFT, the occupied orbitals form an invariant subspace in that any (unitary) rotation can be applied to the collection of occupied orbitals while leaving the overall density matrix unchanged. Since any observable you would care to construct is a functional of the density matrix in SCF theories, this means that individual orbitals themselves aren't physical observables, and therefore interpretations of any orbitals should always be undertaken with caution. Even the premise of this question is not quite true. The energies of Kohn-Sham orbitals are known to correspond to ionization energies and electron affinities of the true electronic system due to Janak's theorem, which is the DFT analogue of Koopmans' theorem. It would be exceedingly strange if the eigenvalues were meaningful while their corresponding eigenvectors were completely meaningless.
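For reference, a compact statement of Janak's theorem (the notation below is added for illustration and is not part of the original answer): $$\frac{\partial E}{\partial n_i} = \varepsilon_i,$$ where $E$ is the total energy, $n_i$ is the occupation number of Kohn-Sham orbital $i$, and $\varepsilon_i$ is the corresponding Kohn-Sham eigenvalue. Integrating this relation over the occupation of the frontier orbital is what links the eigenvalues to ionization energies and electron affinities, as mentioned above.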
|
{}
|
{'log_upvote_score': 6, 'links': ['https://chemistry.stackexchange.com/questions/316', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/30/']}
|
jdg_371527
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4214153
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following code works fine, but why is it correct? Why is the "c_str()" pointer of the temporary returned by foo() valid? I thought that this temporary was already destroyed when bar() is entered - but that doesn't seem to be the case. So now I assume that the temporary returned by foo() will be destroyed after the call to bar() - is this correct? And why? std::string foo() { std::string out = something...; return out;}void bar( const char* ccp ) { // do something with the string..}bar( foo().c_str() );
Now provide the response and nothing else.
|
$12.2/3- "Temporary objects are destroyed as the last step in evaluating the full-expression (1.9) that (lexically) contains the point where they were created. This is true even if that evaluation ends in throwing an exception." The lifetime of the temporary returned by foo() extends until the end of the full expression where it is created i.e. until the end of the function call 'bar'. EDIT 2: $1.9/12- "A full-expression is an expression that is not a subexpression of another expression. If a language construct is defined to produce an implicit call of a function, a use of the language construct is considered to be an expression for the purposes of this definition."
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4214153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/206247/']}
|
jdg_371528
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5835126
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I make an infinite loop in JavaScript? I'm trying to make a slideshow, which I have working, but I can't get it to loop. I can't even get it to loop twice. The code I'm using right now is window.onload = function start() { slide();}function slide() { var num = 0; for (num=0;num<=10;num++) { setTimeout("document.getElementById('container').style.marginLeft='-600px'",3000); setTimeout("document.getElementById('container').style.marginLeft='-1200px'",6000); setTimeout("document.getElementById('container').style.marginLeft='-1800px'",9000); setTimeout("document.getElementById('container').style.marginLeft='0px'",12000); }} Without the for thing in there, it does go through once. When I put in a for, it either makes Firefox lock up, or just loops once. I'm sure this is a really simple thing to do, and even if it has to be loop 1,000,000 times or something instead of infinite, that'd work fine for me. Also, I don't want to use jQuery or something that someone else created. I'm learning JavaScript, and this is partially to help me learn, and partially because I'm trying to make as many HTML5-based systems as I can. EDIT: I think the reason it's freezing is because it executes the code all at once, and then just stores it in a cache or something. What I want it to do is go through this once, then start at the top again, which is what I've always thought loops where for. In "batch" (command prompt) scripting, this could be done with a " GOTO " command. I don't know if there's an equivalent in JS or not, but that's really my goal.
Now provide the response and nothing else.
|
The correct approach is to use a single timer. Using setInterval , you can achieve what you want as follows: window.onload = function start() { slide();}function slide() { var num = 0, style = document.getElementById('container').style; window.setInterval(function () { // increase by num 1, reset to 0 at 4 num = (num + 1) % 4; // -600 * 1 = -600, -600 * 2 = -1200, etc style.marginLeft = (-600 * num) + "px"; }, 3000); // repeat forever, polling every 3 seconds}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5835126', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/654480/']}
|
jdg_371529
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
191725
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a histogram obtained using given data points, how do I randomly sample from the distribution predicted by the histogram? Any conceptual comment / R code would be welcome.
Now provide the response and nothing else.
|
Since sampling from a kernel density estimate has been covered once or twice already, I'll focus on sampling from a histogram-as-population-pdf. The idea is simply For each observation in the new sample 1. choose a histogram bin according to the proportions of the original sample (treated as a discrete pmf) 2. sample uniformly from that bin-interval For example in R: #create an original histogramx=rgamma(200,4)xhist=hist(x,freq=FALSE)#sample from itsamplesize=400bins=with(xhist,sample(length(mids),samplesize,p=density,replace=TRUE)) # choose a binresult=runif(length(bins),xhist$breaks[bins],xhist$breaks[bins+1]) # sample a uniform in ithist(result,freq=FALSE,add=TRUE,bord=3) Just for completeness (since sampling from the kernel density estimate* is very simple): repeat nsim times: sample (with replacement) a random observation from the data sample from the kernel, and add the previously sampled random observation * note that some kernels - like fourth order kernels - are not densities and this assumes that the kernel is a density In R, for a Gaussian kernel and bandwidth h, with data in x: rnorm(nsim,m=sample(x,nsim,replace=TRUE), s=h)
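To see why the kernel recipe works (this explanatory equation is added here and is not part of the original answer): for data $x_1,\dots,x_n$ the kernel density estimate is the equal-weight mixture $$\hat f(x) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{h}\,K\!\left(\frac{x - x_i}{h}\right),$$ so drawing from it amounts to picking a component $i$ uniformly at random (i.e. resampling an observation) and then drawing from the kernel centred at $x_i$ with scale $h$ - exactly what the rnorm call above does for a Gaussian kernel.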
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/191725', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/95931/']}
|
jdg_371530
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11255353
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This question has been bugging me for a long time now but essentially I'm looking for the most efficient way to grab all Strings between two Strings. The way I have been doing it for many months now is through using a bunch of temporary indices, strings, substrings, and it's really messy. (Why does Java not have a native method such as String substring(String start, String end) ? Say I have a String: abcabc [pattern1]foo[pattern2] abcdefg [pattern1]bar[pattern2] morestuff The end goal would be to output foo and bar . (And later to be added into a JList) I've been trying to incorporate regex in .split() but haven't been successful. I've tried syntax using * 's and . 's but I don't think it's quite what my intention is especially since .split() only takes one argument to split against. Otherwise I think another way is to use the Pattern and Matcher classes? But I'm really fuzzy on the appropriate procedure.
Now provide the response and nothing else.
|
You can construct the regex to do this for you: // pattern1 and pattern2 are String objectsString regexString = Pattern.quote(pattern1) + "(.*?)" + Pattern.quote(pattern2); This will treat the pattern1 and pattern2 as literal text, and the text in between the patterns is captured in the first capturing group . You can remove Pattern.quote() if you want to use regex, but I don't guarantee anything if you do that. You can add some customization of how the match should occurs by adding flags to the regexString . If you want Unicode-aware case-insensitive matching, then add (?iu) at the beginning of regexString , or supply Pattern.CASE_INSENSITIVE | Pattern.UNICODE_CASE flag to Pattern.compile method. If you want to capture the content even if the two delimiting strings appear across lines, then add (?s) before (.*?) , i.e. "(?s)(.*?)" , or supply Pattern.DOTALL flag to Pattern.compile method. Then compile the regex, obtain a Matcher object, iterate through the matches and save them into a List (or any Collection , it's up to you). Pattern pattern = Pattern.compile(regexString);// text contains the full text that you want to extract dataMatcher matcher = pattern.matcher(text);while (matcher.find()) { String textInBetween = matcher.group(1); // Since (.*?) is capturing group 1 // You can insert match into a List/Collection here} Testing code: String pattern1 = "hgb";String pattern2 = "|";String text = "sdfjsdkhfkjsdf hgb sdjfkhsdkfsdf |sdfjksdhfjksd sdf sdkjfhsdkf | sdkjfh hgb sdkjfdshfks|";Pattern p = Pattern.compile(Pattern.quote(pattern1) + "(.*?)" + Pattern.quote(pattern2));Matcher m = p.matcher(text);while (m.find()) { System.out.println(m.group(1));} Do note that if you search for the text between foo and bar in this input foo text foo text bar text bar with the method above, you will get one match, which is text foo text .
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11255353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1112835/']}
|
jdg_371531
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55334160
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Some online articles say that there is no standard for the query string and hash in a URL, and that we are just following established convention. So my question is: what is the proper way of having both a query string and a hash in the same URL? The problem, I think, is that if the hash follows the query string, it can become part of some query string value, and if the query string follows the hash, the whole query string may become part of the hash. So, what order should I follow?
Now provide the response and nothing else.
|
Some online articles says, that there is no standard for querystring and hash in URL Either they are wrong or you are misinterpreting them. The query string must appear before the fragment identifier (which you call the hash). The specification shows the format of a URI: URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ] It clearly shows the fragment appearing after the query. if hash follows querystring, it can become a value to some querystring data It can't. The # is a special character that indicates the start of the fragment. To include one in query string data it needs to be escaped as %23 .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55334160', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9965515/']}
|
jdg_371532
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7926680
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I had posted another question with this as one aspect of it. I was told to clarify the question, but that question was already pretty long and complicated, so I created a new one. I want to know if there is standard way of referencing the current node's attribute in an XPath expression testing for another one. For an example, consider the following XSLT <?xml version="1.0" encoding="utf-8"?><xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"><xsl:template match="/"> <xsl:for-each select="potato/stem[eye]"> In session <xsl:value-of select="@sessionID"/>, the potato had <xsl:value-of select="/potato/stem[@sessionID=@sessionID][scc]/scc/@leafnumber"/> s.c.c. leaves. </xsl:for-each></xsl:template> (XML source at bottom of this question.)(Note that the for-each references nodes of type stem[eye] but the second value requested references nodes of type stem[scc], which are on a different branch of the source XML tree.) Now, obviously, the "@sessionID=@sessoinID" part of that is mostly meaningless because XPath perceives this as "The value of the sessionID attribute of the node should equal ... the value of the sessionID attribute of the node." But what I want to say is "Test to make sure the value of the seesionID attribute of that node (the one in the XPath expression) is the same as the sessionID of the node of whatever /stem[eye] node I'm in right now." I can't do this with a variable, because you are not allowed to declare a variable within a for-each clause. For reference, this is the XML source. Its structure is not what one would like, but it is what I have to work with. <?xml version="1.0" encoding="utf-8"?><potato><stem sessionID="1"> <eye number = "25"/></stem><stem sessionID="3"> <eye number = "33"/></stem><stem sessionID="1"> <scc leafnumber = "234" /></stem><stem sessionID="2"> <scc leafnumber = "433"/></stem><stem sessionID="3"> <scc leafnumber = "463"/></stem><stem sessionID="1"> <agd leafnumber = "154"/></stem><stem sessionID="2"> <agd leafnumber = "233"/></stem><stem sessionID="3"> <agd leafnumber = "113"/></stem> </potato> The output I'm looking for is: In session 1, the potato had 234 s.c.c. leaves. In session 3, the potato had 463 s.c.c. leaves. (Of course, this is all just sandbox example. I realize there are probably easy ways to accomplish the above output in a completely different manner, but I hope this example gets across my question, which is how to use values associated with the current node (say, in a for-each) in a predicate for an XPath searching out a different one.)
Now provide the response and nothing else.
|
In XSLT 1.0 you can use the standard function current() which refers to the node that is matched by the current template or the inner-most xsl:for-each : /potato/stem[@sessionID=current()/@sessionID][scc]/scc/@leafnumber or by defining a key (at a global level): <xsl:key name="kPotById" match="stem[scc]" use="@sessionID"/> and referencing this key: key('kPotById', @sessionID)/scc/@leafnumber In XSLT 2.0 / XPath 2.0 you have additional ways to express this (range variables): for $thisSessionID in @sessionId return /potato/stem[@sessionID=$thisSessionID][scc]/scc/@leafnumber
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7926680', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1000026/']}
|
jdg_371533
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8851953
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I read that PhoneGap is approved by Apple as a framework for building native apps, but I'm not clear on how this does not constitute a "website wrapped as an app," something that Apple specifically doesn't like, as per their App Store guidelines. I'm assuming it is because the JS is contained in the application, rather than running on a remote server? Any clarification on this issue would be most appreciated.
Now provide the response and nothing else.
|
but I'm not clear on how this does not constitute a "website wrapped as an app," something that Apple specifically doesn't like, as per their App Store guidelines. By this Apple means Apps that simply show a UIWebView that loads a remote website, so it basically just behaves like Mobile Safari, just without the controls. Also they want Apps to show at least a UI when the device is not connected to the internet, like in the Facebook App where you can still read already downloaded status updates, view photos etc. That's what an "I just wrap a remote website inside a UIWebView"-App can't do. PhoneGap Apps have all the resources they need (HTML, CSS, JavaScript) on the device residing in the App's bundle, so using such an App would not depend on network availability and could at least show a UI that tells you to connect to the Internet in order to use the App, or they could display already loaded content (like Facebook, Twitter and so on).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8851953', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/793972/']}
|
jdg_371534
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
281870
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can you help me with this question: find the sine Fourier transform of $\frac{\sin{x}}{x}$?
Now provide the response and nothing else.
|
I write the Fourier transform as $$\hat{f}(v) = \int_{-\infty}^{\infty} dx \: \frac{\sin{x}}{x} e^{-i 2 \pi v x} $$ Consider, rather, the integral $$ \frac{1}{i 2} \int_{-\infty}^{\infty} dx \: \frac{e^{i x}-e^{-i x}}{x} e^{-i 2 \pi v x} $$ $$ = \frac{1}{i 2} \int_{-\infty}^{\infty} dx \: \frac{e^{i (1-2 \pi v) x}}{x} - \frac{1}{i 2} \int_{-\infty}^{\infty} dx \: \frac{e^{-i (1+2 \pi v) x}}{x} $$ Consider the following integral corresponding to the first integral: $$\oint_C dz \: \frac{e^{i (1-2 \pi v) z}}{z} $$ where $C$ is the contour defined in the illustration below: This integral is zero because there are no poles contained within the contour. Write the integral over the various pieces of the contour: $$\int_{C_R} dz \: \frac{e^{i (1- 2 \pi v)z}}{z} + \int_{C_r} dz \: \frac{e^{i (1- 2 \pi v) z}}{z} + \int_{-R}^{-r} dx \: \frac{e^{i (1- 2 \pi v) x}}{x} + \int_{r}^{R} dx \: \frac{e^{i (1- 2 \pi v) x}}{x} $$ Consider the first part of this integral about $C_R$, the large semicircle of radius $R$: $$\int_{C_R} dz \: \frac{e^{i (1- 2 \pi v)z}}{z} = i \int_0^{\pi} d \theta e^{i (1-2 \pi v) R (\cos{\theta} + i \sin{\theta})} $$ $$ = i \int_0^{\pi} d \theta e^{i (1-2 \pi v) R \cos{\theta}} e^{-(1- 2 \pi v) R \sin{\theta}} $$ By Jordan's lemma , this integral vanishes as $R \rightarrow \infty$ when $1-2 \pi v > 0$. On the other hand, $$ \int_{C_r} dz \: \frac{e^{i (1-2 \pi v) z}}{z} = i \int_{\pi}^0 d \phi \: e^{i (1-2 \pi v) r e^{i \phi}} $$ This integral takes the value $-i \pi$ as $r \rightarrow 0$. We may then say that $$\begin{align} & \int_{-\infty}^{\infty} dx \: \frac{e^{i (1-2 \pi v) x}}{x} = i \pi & 1-2 \pi v > 0\\ \end{align}$$ When $1-2 \pi v < 0$, Jordan's lemma does not apply, and we need to use another contour. A contour for which Jordan's lemma does apply is one flipped about the $\Re{z}=x$ axis. By using similar steps as above, it is straightforward to show that $$\begin{align} & \int_{-\infty}^{\infty} dx \: \frac{e^{i (1-2 \pi v) x}}{x} = -i \pi & 1-2 \pi v < 0\\ \end{align}$$ Using a similar analysis as above, we find that $$\int_{-\infty}^{\infty} dx \: \frac{e^{-i (1+2 \pi v) x}}{x} = \begin{cases} -i \pi & 1+2 \pi v < 0 \\ i \pi & 1+2 \pi v >0 \\ \end{cases} $$ We may now say that $$\hat{f}(v) = \int_{-\infty}^{\infty} dx \: \frac{\sin{x}}{x} e^{-i 2 \pi v x} = \begin{cases} \pi & |v| < \frac{1}{2 \pi} \\ 0 & |v| > \frac{1}{2 \pi} \\ \end{cases} $$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/281870', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58861/']}
|
jdg_371535
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
101418
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been struggling with the command Riffle. list = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {10, 11, 12}, {13,14, 15}} How do I use Riffle like this, but add 1 to "X" every time it gets inserted? So when it has inserted "X" 5 times, it would be X5, instead of just "X". "X" is a string. Riffle[list, "X", 3] {{1, 2, 3}, {4, 5, 6}, X, {7, 8, 9}, {10, 11, 12}, X, {13, 14, 15}} The output I am looking for: {{1, 2, 3}, {4, 5, 6}, X1, {7, 8, 9}, {10, 11, 12}, X2, {13, 14, 15}}
Now provide the response and nothing else.
|
Create a list to riffle with, Riffle[list, Table[StringJoin["X", ToString[i]], {i, 1, Length[list]}], 3] {{1, 2, 3}, {4, 5, 6}, "X1", {7, 8, 9}, {10, 11, 12}, "X2", {13, 14, 15}}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/101418', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/36027/']}
|
jdg_371536
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4850149
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to add a gradient as the background of a label. I used the following code to achieve that, but the problem is that though the gradient color appears on the label, the text is not visible. Please help. lblPatientDetail.text=PatientsDetails;lblPatientDetail.textColor=[UIColor blackColor]; CAGradientLayer *gradient = [CAGradientLayer layer];gradient.frame = lblPatientDetail.bounds;gradient.colors = [NSArray arrayWithObjects:(id)[[UIColor whiteColor] CGColor],(id)[[UIColor colorWithRed:255/255.0 green:239/255.0 blue:215/255.0 alpha:1.0] CGColor],nil]; [lblPatientDetail.layer addSublayer:gradient];lblPatientDetail.backgroundColor=[UIColor clearColor];
Now provide the response and nothing else.
|
Inserting a sublayer to a UILabel hides the text, so the best way to get what you want is to add the label and gradient layer to a UIView. UIView *gradientLabelView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 30)];CAGradientLayer *gradient = [CAGradientLayer layer];gradient.frame = gradientLabelView.bounds;gradient.colors = [NSArray arrayWithObjects:(id)[[UIColor whiteColor] CGColor],(id)[[UIColor colorWithRed:255/255.0 green:239/255.0 blue:215/255.0 alpha:1.0] CGColor],nil];[gradientLabelView.layer addSublayer:gradient];lblPatientDetail.frame = gradientLabelView.bounds;lblPatientDetail.backgroundColor = [UIColor clearColor];[gradientLabelView addSubview:lblPatientDetail];[self addSubview:gradientLabelView];
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4850149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/540176/']}
|
jdg_371537
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43451125
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am reading through the documentation of PyTorch and found an example where they write gradients = torch.FloatTensor([0.1, 1.0, 0.0001])y.backward(gradients)print(x.grad) where x was an initial variable, from which y was constructed (a 3-vector). The question is, what are the 0.1, 1.0 and 0.0001 arguments of the gradients tensor ? The documentation is not very clear on that.
Now provide the response and nothing else.
|
Explanation For neural networks, we usually use loss to assess how well the network has learned to classify the input image (or other tasks). The loss term is usually a scalar value. In order to update the parameters of the network, we need to calculate the gradient of the loss w.r.t. the parameters, which are actually leaf nodes in the computation graph (by the way, these parameters are mostly the weights and biases of various layers such as Convolution, Linear and so on). According to the chain rule, in order to calculate the gradient of the loss w.r.t. a leaf node, we can compute the derivative of the loss w.r.t. some intermediate variable, and the gradient of that intermediate variable w.r.t. the leaf variable, do a dot product and sum all these up. The gradient argument of a Variable 's backward() method is used to calculate a weighted sum of each element of a Variable w.r.t. the leaf Variable . These weights are just the derivatives of the final loss w.r.t. each element of the intermediate variable. A concrete example Let's take a concrete and simple example to understand this. from torch.autograd import Variableimport torchx = Variable(torch.FloatTensor([[1, 2, 3, 4]]), requires_grad=True)z = 2*xloss = z.sum(dim=1)# do backward for first element of zz.backward(torch.FloatTensor([[1, 0, 0, 0]]), retain_graph=True)print(x.grad.data)x.grad.data.zero_() #remove gradient in x.grad, or it will be accumulated# do backward for second element of zz.backward(torch.FloatTensor([[0, 1, 0, 0]]), retain_graph=True)print(x.grad.data)x.grad.data.zero_()# do backward for all elements of z, with weight equal to the derivative of# loss w.r.t z_1, z_2, z_3 and z_4z.backward(torch.FloatTensor([[1, 1, 1, 1]]), retain_graph=True)print(x.grad.data)x.grad.data.zero_()# or we can directly backprop using lossloss.backward() # equivalent to loss.backward(torch.FloatTensor([1.0]))print(x.grad.data) In the above example, the outcome of the first print is 2 0 0 0 [torch.FloatTensor of size 1x4] which is exactly the derivative of z_1 w.r.t. x. The outcome of the second print is : 0 2 0 0 [torch.FloatTensor of size 1x4] which is the derivative of z_2 w.r.t. x. Now if we use a weight of [1, 1, 1, 1] to calculate the derivative of z w.r.t. x, the outcome is 1*dz_1/dx + 1*dz_2/dx + 1*dz_3/dx + 1*dz_4/dx . So, not surprisingly, the output of the 3rd print is: 2 2 2 2 [torch.FloatTensor of size 1x4] It should be noted that the weight vector [1, 1, 1, 1] is exactly the derivative of the loss w.r.t. z_1, z_2, z_3 and z_4. The derivative of the loss w.r.t. x is calculated as: d(loss)/dx = d(loss)/dz_1 * dz_1/dx + d(loss)/dz_2 * dz_2/dx + d(loss)/dz_3 * dz_3/dx + d(loss)/dz_4 * dz_4/dx So the output of the 4th print is the same as the 3rd print : 2 2 2 2 [torch.FloatTensor of size 1x4]
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/43451125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5016028/']}
|
jdg_371538
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5223113
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I use MySQL in R (the statistics language) on Windows (7)? There are no problems when using Linux: install.packages('RMySQL')library(RMySQL)... But I found no such package for Windows on CRAN. There is only a note about this fact. I found a package for version 2.10, but it is not usable in 2.12.1. I also tried install.packages("RMySQL", type="source") but got an error : * installing *source* package 'RMySQL' ... ERROR: configuration failed for package 'RMySQL'* removing 'C:/.../R/win-library/2.12/RMySQL' Is there any way to connect to a MySQL database from R on Windows?
Now provide the response and nothing else.
|
I found the solution with the help of ran2, who gave me a link to a common question . The basic process is described here , but there are several hints, so I will describe the whole solution (please change the R version and paths if needed) : Install the latest RTools from here install MySQL or the header and library files of MySQL create or edit the file C:\Program Files\R\R-2.12.1\etc\Renviron.site and add a line like MYSQL_HOME=C:/mysql (the path to your MySQL files) copy libmysql.lib from mysql/lib to mysql/lib/opt to meet dependencies. copy libmysql.dll to C:\Program Files\R\R-2.12.1\bin or to the windows/system32 directory. run install.packages('RMySQL',type='source') and wait until compilation ends. Thanks to all who tried to answer.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5223113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/614735/']}
|
jdg_371539
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5683111
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was playing with optional parameters to see how they would work with interfaces and I came across a strange warning. The setup I had was the following code: public interface ITestInterface { void TestOptional(int a = 5, int b = 10, object c = null); } public class TestClass : ITestInterface { void ITestInterface.TestOptional(int a = 5, int b = 5, object c = null) { Console.Write("a=" + a + " b=" + b + " c=" + c); } } The compiler gives me the following warnings: The default value specified for parameter 'a' will have no effect because it applies to a member that is used in contexts that do not allow optional arguments The default value specified for parameter 'b' will have no effect because it applies to a member that is used in contexts that do not allow optional arguments The default value specified for parameter 'c' will have no effect because it applies to a member that is used in contexts that do not allow optional arguments If I run this with the following code: class Program{ static void Main(string[] args) { ITestInterface test = new TestClass(); test.TestOptional(); Console.ReadLine(); }} I get the output of " a=5 b=10 c= " as I'd expect. My question is what is warning for? What contexts is it referring to?
Now provide the response and nothing else.
|
The problem with optional arguments in C# is whether the callee sees the object as a TestClass or an ITestInterface . In the first case, the values declared in the class apply. In the second case the values declared in the interface apply. It is because the compiler uses the statically available type information to construct the call. In case of an explicit interface implementation the method is never called 'for a class', always 'for an interface' The C# Language Specification in 10.6.1 states: If optional parameters occur in an implementing partial method declaration (§10.2.7) , an explicit interface member implementation (§13.4.1) or in a single-parameter indexer declaration (§10.9) the compiler should give a warning, since these members can never be invoked in a way that permits arguments to be omitted.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5683111', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/341209/']}
|
jdg_371540
|