source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21335832
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does anyone know if this new feature can be performed on multiple relationships? For example, I have a query where I want to filter not only on the club name ( related question ) but also on the territory name. In this example, I'd like query results where the club (club relationship) name is Arsenal and the region is Australia (territory relationship):

$ret->with('territory')->with('homeClub')->with('awayClub');
$ret->whereHas('territory', function( $query ){
    $query->where('region','Australia');
})->whereHas('homeClub', function ( $query ) {
    $query->where('name', 'Arsenal' );
})->orWhereHas('awayClub', function ( $query ) {
    $query->where('name', 'Arsenal' );
});

When executing this query, the result isn't constrained by the territory whereHas, just the clubs one. Can whereHas be chained to filter the results on a previous relationship's whereHas? Any suggestions if not? thanks jon
Now provide the response and nothing else.
|
Yes, that's possible. The generated SQL will probably be:

SELECT * FROM ... WHERE (territory constraint) AND (homeClub constraint) OR (awayClub constraint)

This means that if the awayClub constraint alone is satisfied, the row will be retrieved. I think you want to add parentheses to the generated SQL:

SELECT * FROM ... WHERE (territory constraint) AND ((homeClub constraint) OR (awayClub constraint))

To do that, you need to nest both queries inside a where:

$ret->with('territory')->with('homeClub')->with('awayClub');
$ret->whereHas('territory', function( $query ){
    $query->where('region','Australia');
})
->where(function($subQuery) {
    $subQuery->whereHas('homeClub', function ( $query ) {
        $query->where('name', 'Arsenal' );
    })
    ->orWhereHas('awayClub', function ( $query ) {
        $query->where('name', 'Arsenal' );
    });
});
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21335832', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1194469/']}
|
jdg_371041
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7142838
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to do something like the following:

var tuple = (1, "test")
tuple._2 = "new"

However, this does not compile; it complains about a val.
Now provide the response and nothing else.
|
You can't reassign tuple values. They're intentionally immutable: once you have created a tuple, you can be confident that it will never change. This is very useful for writing correct code! But what if you want a different tuple? That's where the copy method comes in:

val tuple = (1, "test")
val another = tuple.copy(_2 = "new")

or if you really want to use a var to contain the tuple:

var tuple = (1, "test")
tuple = tuple.copy(_2 = "new")

Alternatively, if you really, really want your values to change individually, you can use a case class instead (probably with an implicit conversion so you can get a tuple when you need it):

case class Doublet[A,B](var _1: A, var _2: B) {}
implicit def doublet_to_tuple[A,B](db: Doublet[A,B]) = (db._1, db._2)
val doublet = Doublet(1, "test")
doublet._2 = "new"
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7142838', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144152/']}
|
jdg_371042
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
267901
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Notes: 1) I know next to nothing about algebraic geometry, although I am greatly interested in the field. 2) I realize that "constructive" might be a technical term; here I am using it only in an informal manner. I hope that this question belongs on this site, since it is not strictly research-level. As an autodidact (I have an ongoing formal education in physics, but the amount of math we learn here is abysmal, so most of my mathematics knowledge is self-taught), I have noted that algebraic geometry seems to be really impenetrable for somebody who has no formal education in the field, unlike, say, differential geometry or functional analysis, which are areas I can effectively learn on my own. Pretty much every time I encounter AG-related stuff on sites such as this one or math.se, I see layers and layers of abstraction on top of one another, to the point where it makes me wonder: is this field of mathematics constructive, in the sense that it can be used to actually calculate anything or have any use outside mathematics? The point I am trying to make is that differential geometry, to use it as an example, is constructive. No matter how abstractly I define manifolds, tensor fields, differential forms, connections, etc., they are always resolvable into component functions in some local trivializations, with which one can actually calculate stuff. Every time I use DG to calculate stellar equilibrium or cosmological evolution or geodesics of some model spacetime, I get actual, direct, palpable, realizable results in terms of real numbers. I can use differential forms to calculate the volumes of geometric shapes, and every time I use a Lagrangian or Hamiltonian formalism to calculate trajectories for classical mechanical systems, I make use of differential geometry to obtain palpable results. On top of that, I know that DG is useful outside physics too; I have heard of uses in economics, music theory, etc. I am curious if there is any real-world application of AG where one can use AG to obtain palpable results. I am not curious (for the purpose of this question) about uses within mathematics itself; I know they are numerous. But every time I try to read about AG I get lost in the infinitude of sheaves, stacks, schemes, functors and other highly abstract objects, which often seem impossible to me to resolve into calculable numbers. The final point is: I would like to hear about some interesting applications of algebraic geometry outside mathematics, if there are any.
Now provide the response and nothing else.
|
If you forget about all the layers of abstraction, algebraic geometry is, ultimately (and very roughly speaking), the study of polynomial equations in several variables, and of the geometric objects they define. So in a certain sense, whenever you're doing anything with multivariate polynomials, there's probably some algebraic geometry behind it; and conversely, algebraic geometry questions can generally be reduced, at least in principle, to "does this system of polynomial equations have a solution?". Now to go a (little) bit further, one should consider the field upon which these equations are defined, and more importantly, in which the solutions are sought. The whole point of algebraic geometry is that most of the formalism can be made uniform in the base field, or indeed, ring. But there are at least two main flavors: Geometric questions are over algebraically closed fields (and all that really matters, then, is the characteristic of the field). These questions are, in principle, algorithmically decidable (although the complexity can be very bad), at least if we bound every degree involved in the problem; Gröbner bases are the key tool to solve these geometric problems in practice. Arithmetic questions are over any other field, typically the rational numbers (→ diophantine equations); for example, an arithmetic question could be "does the curve $x^n + y^n = z^n$ (where $x,y,z$ are homogeneous coordinates) have rational solutions beyond the obvious ones?". Arithmetic questions can be undecidable, so there is no universal tool like Gröbner bases to solve them. There is a subtle interplay between geometry and arithmetic (for example, in the simplest nontrivial case, that of curves, the fundamental geometric invariant, the genus $g$, determines very different behaviors of the rational points according as $g=0$, $g=1$ or $g\geq 2$). Then there are some fields which are "not too far" from being algebraically closed, like the reals, the finite fields, and the $p$-adics. Here, it is still decidable in principle whether a system of polynomial equations has a solution, but the complexity is even worse than for algebraically closed fields (for finite fields, there is the obvious algorithm consisting of trying every possible value). Some theory can help bring it down to a manageable level. As for applications outside mathematics, they mostly fall in this "not too far from algebraically closed" region: Algebraic geometry over the reals has applications in robotics, algebraic statistics (which is part of mathematics but itself has applications to a wide variety of sciences), and computer graphics, for example. Algebraic geometry over finite fields has applications in cryptography (and perhaps more generally boolean circuits) and the construction of error-correcting codes. But I would like to emphasize that the notion of "applications" is not quite clear-cut. Part of classical algebraic geometry is the theory of elimination (i.e., essentially: given a system of polynomial equations in $n+k$ variables, find the equations in the $n$ first variables defining whether there exists a solution in the $k$ last): this is a very useful computational tool in a huge range of situations where polynomials or polynomial equations play any kind of rôle.
For example, a number of years ago, I did some basic computations on the Kerr metric in general relativity (ultimately to produce such videos as this one ): the computations themselves were differential-geometric in nature (and not at all sophisticated), but by remembering that, in the right coordinate system, everything is an algebraic function, and by using some elimination theory, I was able to considerably simplify some symbolic manipulations in those computations. I wouldn't call it an application of algebraic geometry to physics, but knowing algebraic geometry definitely helped me not make a mess of the computations.
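As a concrete illustration of the "Gröbner bases let you decide geometric questions in practice" point above, here is a small sketch using Python's SymPy; the particular system of equations is just an invented example, not one from the answer.

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# Ask whether the circle x^2 + y^2 = 1 meets the line y = 2x,
# i.e. whether the system {x^2 + y^2 - 1, y - 2x} has solutions.
system = [x**2 + y**2 - 1, y - 2*x]

# A lexicographic Groebner basis "triangularizes" the system,
# much like Gaussian elimination does for linear equations.
G = groebner(system, x, y, order='lex')
print(list(G))   # a triangular basis equivalent to {2x - y, 5y^2 - 4}

# If the basis were just [1], the variety would be empty (no solutions
# over an algebraically closed field). Here it isn't, and the solutions
# can be read off the triangular form:
print(solve(system, [x, y]))
```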
|
{}
|
{'log_upvote_score': 7, 'links': ['https://mathoverflow.net/questions/267901', 'https://mathoverflow.net', 'https://mathoverflow.net/users/85500/']}
|
jdg_371043
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34512482
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’m trying to increment a pointer. I was sure that it is similar to doing i += 1, but I’m getting an address.

#include "stdafx.h"
#include <iostream>
using namespace std;

int main()
{
    int i = 42;
    int *a = &i;
    *a++;
    cout << *a;
    cin.get();
    return 0;
}

Can anybody explain why?
Now provide the response and nothing else.
|
++ has a higher operator precedence than the pointer dereference operator * . So what *a++ does is to return the value of i (the old value of *a ) before incrementing the pointer value. After that expression has been evaluated, a now points to something other than the address of i , and the behaviour of a subsequent *a is undefined . If you want to increment i via the pointer, then use (*a)++;
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34512482', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4329657/']}
|
jdg_371044
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
62453399
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use Sceneform in my ARCore project. It seems that this project is now marked as Archived by Google. More info can be found here or on this page . I don't understand if Google has really abandoned this SDK, or if it is - or will be - directly integrated into the ARCore SDK. Thanks for any information concerning the future of this SDK and potential replacements.
Now provide the response and nothing else.
|
Updated: May 12, 2022.

About the Sceneform replacement at Google I/O '21: Let's listen to what Fred Sauer (Developer Advocate at Google on the ARCore team) told us about replacements for Sceneform. Here's a link to the video called AMA: ARCore (time 06:25).

Sceneform fork (Unofficial Sceneform 1.21): This is the continuation of the archived Sceneform.

About the last official versions: There are three official versions to choose from: Sceneform 1.17.1, Sceneform 1.16 and Sceneform 1.15. Several months ago Google recommended not to use the latest version, Sceneform 1.17, due to Maven artifacts (now it might work normally, but it's still a question). Sceneform 1.16 isn't available via Android Studio's plugins menu because, as you said, its status is Archived (I should say Abandoned). Release 1.16 supports glTF / glb formats instead of SFA and SFB assets. And Sceneform 1.16 went open-source, hence you can download it from GitHub. The only thing I don't understand is why Google released Sceneform 1.17 if Sceneform 1.16 was archived?! So there are two 100%-working versions at the moment – Sceneform 1.16 and Sceneform 1.15.

Sceneform 1.16 installation: For downloading Sceneform 1.16 for ARCore Android, use this link on GitHub. Use these thorough instructions to include and build the Sceneform 1.16 SDK with your app.

Google discontinued Sceneform: Google didn't prepare a release of a new official 3D rendering engine (even using Filament as a base). The original ARCore team fell apart, which is why new ARCore features are implemented so slowly.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62453399', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/395869/']}
|
jdg_371045
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
74570
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Trivial thought ... Materials may be broadly superconductive, diamagnetic, paramagnetic, or ferromagnetic. An object is magnetized by repetitive motion of a magnetic field across its surface. Say a field of strength 1 T were to be moved across a steel cylinder. Could the field created in the cylinder be greater than 1 T? What if the same field were to be applied to a more strongly ferromagnetic material?
Now provide the response and nothing else.
|
This is a really rough calculation that doesn't take into account the realistic direction of the bow shock, or a proper calculation of the drag force. I just take the net momentum flow in the solar wind and direct it so as to produce the maximum deceleration and see what happens. Apparently the solar wind pressure is of the order of a nanopascal. As I write this it's about $0.5\ \mathrm{nPa}$ . You can get real-time data from NASA's ACE satellite or spaceweather.com (click through "More data" under "Solar wind"). During periods of intense solar activity it can get up to an order of magnitude or so more than this. Let's take this worst case and assume, unrealistically, that all of the pressure is directed retrograde along the Earth's orbit. This will give the maximum decelerating effect. I get a net force of $\sim 10^6\ \mathrm{N}$ . Dividing by the Earth's mass gives a net acceleration $2\times 10^{-19}\ \mathrm{m/s^2}$ . Let's fudge up again and call it $10^{-18}\ \mathrm{m/s^2}$ . The time it would take for this to make a significant dent in the Earth's orbital velocity ( $30\ \mathrm{km/s}$ ) is of the order of $10^{15}\ \mathrm{yr}$ . I think we're safe. For the other planets there is a $1/r^2$ scaling of the solar wind with the distance from the sun (assuming the solar wind is uniformly distributed) and an $R^2$ scaling with the size of the planet. So for Mercury the former effect gives an order of magnitude increase in drag and the latter effect takes most of that increase away again. There is an additional $R^{-3}$ increase in effect due to the decreased mass of a smaller body (assuming density is similar to the Earth's). Then there is the $r^{-1/2}$ increase in orbit velocity due to being closer to the sun. So the total scaling factor for the time is $ R r^{3/2} $ , which for Mercury is about 0.1. So the end result is not much different for Mercury. This site always causes me to learn new Mathematica features. It made really quick work of this since it has all sorts of astronomical data built in: Note that the number of digits displayed in the final column is ludicrous. :)
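To make the order-of-magnitude arithmetic above easy to reproduce, here is a short Python sketch using round numbers for the solar wind pressure and Earth's parameters; the exact constants are assumptions, only the orders of magnitude matter.

```python
import math

# Worst-case solar wind dynamic pressure, ~10x the quiet value of 0.5 nPa
pressure = 5e-9            # Pa
R_earth  = 6.4e6           # m, Earth's radius (used as the intercepted cross-section)
M_earth  = 6.0e24          # kg
v_orbit  = 3.0e4           # m/s, Earth's orbital speed

force = pressure * math.pi * R_earth**2      # ~1e6 N
accel = force / M_earth                      # ~1e-19 m/s^2, same ballpark as the answer's 2e-19
accel_fudged = 1e-18                         # round up, as in the answer

seconds_per_year = 3.15e7
t_years = (v_orbit / accel_fudged) / seconds_per_year
print(f"force ~ {force:.1e} N, accel ~ {accel:.1e} m/s^2, "
      f"time to remove orbital speed ~ {t_years:.1e} yr")   # ~1e15 yr
```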
|
{}
|
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/74570', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/5265/']}
|
jdg_371046
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1802500
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using AntiForgeryToken requires each request to pass a valid token, so malicious web pages with a simple script posting data to my web application won't succeed. But what if a malicious script first makes some simple GET request (by Ajax) in order to download the page containing the antiforgery token in a hidden input field, extracts it, and uses it to make a valid POST? Is this possible, or am I missing something?
Now provide the response and nothing else.
|
Yes, this is all you need to do, as long as you generate a new token on each protected page with <%= Html.AntiForgeryToken() %> and always ensure it is checked in any protected action using [ValidateAntiForgeryToken]. This implements the Synchronizer Token Pattern as discussed in the CSRF Prevention Cheat Sheet at OWASP. In order for a script to succeed in making an acceptable request, it would have to first get the form, read the token and then post the token. The Same Origin Policy will stop this from being allowed in a browser: a site cannot make an AJAX-style HTTP request to another site; only to itself. If for some reason the same-origin policy can be breached, then you will become vulnerable. Note that if you have a cross-site scripting vulnerability, then an attacker can abuse the XSS vulnerability to circumvent the protection provided by the same-origin policy (because the script is now running from your own site, so SOP succeeds). The injected script can then happily read and resubmit the token. This technique of getting past CSRF protection via XSS has been common in some worms recently. Basically, if you have XSS, your CSRF protection is a waste of time, so ensure you are not vulnerable to either. Another thing to watch out for is Flash and Silverlight. Both of these technologies do not subscribe to the same-origin policy and instead use cross-domain policy files to restrict access to remote resources. Flash/Silverlight script can only access resources on your site if you publish a cross-domain policy XML file on your own site. If you do publish this file, only ever allow a whitelist of trusted third-party servers and never allow *. Read more about CSRF at OWASP. See also: XSS Prevention Cheat Sheet
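For intuition only, here is a minimal, framework-free sketch of the synchronizer token pattern in Python; the names issue_token and validate_token are hypothetical and not part of ASP.NET MVC or any library, and a real application should rely on its framework's built-in CSRF protection rather than hand-rolled code.

```python
import hmac
import secrets

# Hypothetical per-session store: session id -> issued CSRF token.
_session_tokens = {}

def issue_token(session_id: str) -> str:
    """Generate a fresh random token, remember it server-side, and
    embed it in the rendered form as a hidden input."""
    token = secrets.token_urlsafe(32)
    _session_tokens[session_id] = token
    return token

def validate_token(session_id: str, submitted: str) -> bool:
    """On POST, compare the submitted hidden-field value with the token
    stored for this session, using a constant-time comparison."""
    expected = _session_tokens.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted)

# Sketch of usage:
sid = "abc123"                      # normally taken from the session cookie
form_token = issue_token(sid)       # rendered into <input type="hidden" ...>
assert validate_token(sid, form_token)          # legitimate same-site POST
assert not validate_token(sid, "forged-value")  # cross-site forgery fails
```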
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1802500', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/207717/']}
|
jdg_371047
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
632093
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I came across a transistor circuit as shown below. What is the name of this circuit? Also, what is its working principle?
Now provide the response and nothing else.
|
This goes by the name of 'amplified diode'. Consider what would happen if there was enough voltage across the collector and emitter terminals, VCE, to put 0.7 V across the base-emitter junction by voltage divider action between R59 and R37. The transistor would be conducting a small amount of current. With 10 k and 33 k, that voltage is about 3 V. If VCE now dropped a little, the collector current would drop a lot. If VCE increased a little, the collector current would increase a lot. Both of these behaviours are much the same as if you replaced these components with a diode with a 3 V forward voltage drop, with its VI plot voltage scaled accordingly. This circuit is frequently used to bias the output stage of a class B amplifier, where it has to track the voltage drop of the several stacked VBEs that you get with complementary darlington transistors.
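As a quick sanity check of the "about 3 V" figure above, here is a small Python sketch of the usual VBE-multiplier estimate. It assumes the 10 k resistor sits across the base-emitter junction and the 33 k resistor sits from collector to base; which schematic designator (R37 or R59) is which is an assumption, since the schematic itself isn't reproduced here.

```python
# VBE-multiplier (amplified diode) estimate:
# the divider forces VBE across the lower resistor, so
# VCE ~ VBE * (R_upper + R_lower) / R_lower, ignoring base current.
VBE = 0.7          # volts, typical silicon base-emitter drop
R_lower = 10e3     # ohms, assumed base-emitter resistor
R_upper = 33e3     # ohms, assumed collector-base resistor

VCE = VBE * (R_upper + R_lower) / R_lower
print(f"Estimated 'diode' drop: {VCE:.2f} V")   # about 3.0 V
```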
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/632093', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/320370/']}
|
jdg_371048
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
408646
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone give me a reasonable definition of $C^k$ boundary, i.e., state the definition and then give a brief explanation of it? I need this because I'm not understanding what Evans's book says. Thanks!
Now provide the response and nothing else.
|
In $\mathbb{R^n}$, the boundary of a subset is $C^k$ if it's locally the graph of a $C^k$ function in some direction. So a circle has $C^{\infty}$ boundary because at all points in the positive upper half plane, it's the graph of the function $y=\sqrt{1-x^2}$, which has infinitely many derivatives at every point but the two end points. But those end points are in the graph of $x=\sqrt{1-y^2}$ or $x=-\sqrt{1-y^2}$, which also has infinitely many derivatives.
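For reference, the formal statement that the informal description above is paraphrasing (roughly as it appears in the appendix of Evans's PDE book) can be written as:

$$\partial U \text{ is } C^k \;\;\text{if for each } x^0 \in \partial U \text{ there exist } r>0 \text{ and a } C^k \text{ function } \gamma:\mathbb{R}^{n-1}\to\mathbb{R}$$
$$\text{such that, after relabeling and reorienting the coordinate axes if necessary,}$$
$$U \cap B(x^0,r) = \{\, x \in B(x^0,r) : x_n > \gamma(x_1,\dots,x_{n-1}) \,\}.$$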
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/408646', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/78993/']}
|
jdg_371049
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
456807
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ASP MVC App with some seemingly simple code to save and retrieve cookies but for some reason they won't persist. The code in the controller is : if (System.Web.HttpContext.Current.Response.Cookies["CountryPreference"] == null){ HttpCookie cookie = new HttpCookie("CountryPreference"); cookie.Value = country; cookie.Expires = DateTime.Now.AddYears(1); System.Web.HttpContext.Current.Response.Cookies.Add(cookie);} And to load it again : if (System.Web.HttpContext.Current.Request.Cookies["CountryPreference"] != null){ System.Web.HttpContext.Current.Request.Cookies["CountryPreference"].Expires = DateTime.Now.AddYears(1); data.Country = System.Web.HttpContext.Current.Request.Cookies["CountryPreference"].Value;} For some reason the cookie is always null?
Now provide the response and nothing else.
|
The problem lies in following code: if (System.Web.HttpContext.Current.Response.Cookies["CountryPreference"] == null) When you try to check existence of a cookie using Response object rather than Request, ASP.net automatically creates a cookie. Check this detailed post here: http://chwe.at/blog/post/2009/01/26/Done28099t-use-ResponseCookiesstring-to-check-if-a-cookie-exists!.aspx Quote from the article in case the link goes down again .... The short explanation, if you don’t like to read the entire story If you use code like “if (Response.Cookies[“mycookie”] != null) { … }”, ASP.Net automatically generates a new cookie with the name “mycookie” in the background and overwrites your old cookie! Always use the Request.Cookies-Collection to read cookies! [ More detail in the article ]
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/456807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/27294/']}
|
jdg_371050
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12201292
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ViewController in Objective-C and most of my code is C++ (.mm). I'd like to set up some callbacks to Objective-C member functions (in C++) and call them from C++. Something like this (it's very simplified):

@interface MyClass{
}
-(void)my_callback;
@end

@implementation MyClass
-(void)my_callback{
    printf("called!\n");
}
-(void)viewDidLoad{
    // setup_callback( "to my_callback ?" );
}
@end

and:

void setup_callback(void(*func)()) { func(); }

This is not correct, of course. Any advice on how I can do it, please?
Now provide the response and nothing else.
|
You have a few options. Using blocks You may use blocks to convey your callback work. This is probably the simplest solution as it allows you to call your code without having to pass any parameter to the callback "function". Blocks work in C and all its supersets with Clang, and Clang++ even allows implicit casts between blocks and lambdas. #include <dispatch/dispatch.h>void setup_callback(dispatch_block_t block){ // required to copy the block to the heap, otherwise it's on the stack dispatch_block_t copy = [block copy]; // setup stuff here // when you want to call the callback, do as if it was a function pointer: // block();}int main(){ MyClass* instance = [[MyClass alloc] init]; setup_callback(^{ [instance callback_method]; });} That might require some reworking on the C++ end to accept functors (or just blocks if it's simpler) instead of function pointers. Since blocks create closures, they're very convenient for that kind of works. Blocks are an Apple extension to C, C++ and Objective-C. See more about them here . Use the Objective-C runtime to acquire the function pointer to the method you want to call Use the Objective-C runtime to access the function pointer of your selector. This is more tedious and requires you to keep track of three variables (the object to call the method on, the selector to use, and the method implementation), but it actually works even in the case you can't use the Objective-C syntax. Objective-C method implementations are function pointers with this signature: typedef void (*IMP)(id self, SEL _cmd, ...); Where self is what you'd expect, _cmd is the selector that caused this method call (the _cmd variable is actually available in all Objective-C methods, try it), and the rest is considered variadic. You need to cast IMP variables into the proper function signature because the calling convention for variadic C functions doesn't always match the calling convention for Objective-C method calls (the Objective-C method call is the standard function calling convention for your compiler, probably either cdecl or the amd64 calling convention, and the variadic calling convention is not always the same). A reinterpret_cast will be able to do it. Here's some code I put together for similar intents. It uses C++11 variadic templates to help with getting the proper function signature. #include <objc/runtime.h>template<typename TReturnType, typename... TArguments>auto GetInstanceMethodPointer(Class class, SEL selector) -> TReturnType (*)(id, SEL, TArguments...){ Method m = class_getInstanceMethod(class, selector); IMP imp = method_getImplementation(m); return reinterpret_cast<TReturnType (*)(id, SEL, TArguments...)>(imp);}int main(){ MyClass* instance = [[MyClass alloc] init]; auto foo = GetInstanceMethodPointer<void>( [MyClass class], @selector(my_callback)); // foo is a void (*)(id, SEL) function pointer foo(instance, @selector(my_callback));} Also take care that your instance is not nil before using the function call, because nil checking is handled by the Objective-C runtime. In this case, we're bypassing it. Keep track of an object and a SEL Use -[NSObject performSelector:] to perform your callback. Basically a simpler version of the Objective-C runtime solution. 
void setup_callback(id object, SEL selector){ // do stuff // to execute the callback: // [object performSelector:selector];}int main(){ MyClass* instance = [[MyClass alloc] init]; setup_callback(instance, @selector(my_callback));} Wrapping your call inside a C++ function I think this one doesn't really need any example. Create a function that accepts your object type as the first parameter and call the method you want on it. Similarly to the SEL solution, you then need to separately keep track of the function to call and the object on which to call it.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12201292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1341278/']}
|
jdg_371051
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1055758
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm creating a Brainfuck parser (in a BASIC dialect), ultimately to create an interpreter, but I've realised it's not as straightforward as I first thought. My problem is that I need a way to accurately parse the matching loop operators within a Brainfuck program. This is an example program:

,>,>++++++++[<------<------>>-]<<[>[>+>+<<-]>>[<<+>>-]<<<-]>>>++++++[<++++++++>-],<.>.

'[' = start of loop, ']' = end of loop. I need to record the start and end point of each matching loop operator so I can jump around the source as needed. Some loops are alone, some are nested. What would be the best way to parse this? I was thinking of maybe moving through the source file creating a 2D array (or suchlike) recording the start and end positions of each matching operator, but this seems like a lot of to'ing and fro'ing through the source. Is this the best way to do it? More info: Brainfuck homepage EDIT: Sample code in any language greatly appreciated.
Now provide the response and nothing else.
|
Have you considered using a Stack data structure to record "jump points" (i.e. the location of the instruction pointer). So basically, every time you encounter a "[" you push the current location of the instruction pointer on this stack. Whenever you encounter a "]" you reset the instruction pointer to the value that's currently on the top of the stack. When a loop is complete, you pop it off the stack. Here is an example in C++ with 100 memory cells. The code handles nested loops recursively and although it is not refined it should illustrate the concepts.. char cells[100] = {0}; // define 100 memory cellschar* cell = cells; // set memory pointer to first cellchar* ip = 0; // define variable used as "instruction pointer"void interpret(static char* program, int* stack, int sp){ int tmp; if(ip == 0) // if the instruction pointer hasn't been initialized ip = program; // now would be a good time while(*ip) // this runs for as long as there is valid brainF**k 'code' { if(*ip == ',') *cell = getch(); else if(*ip == '.') putch(*cell); else if(*ip == '>') cell++; else if(*ip == '<') cell--; else if(*ip == '+') *cell = *cell + 1; else if(*ip == '-') *cell = *cell - 1; else if(*ip == '[') { stack[sp+1] = ip - program; *ip++; while(*cell != 0) { interpret(program, stack, sp + 1); } tmp = sp + 1; while((tmp >= (sp + 1)) || *ip != ']') { *ip++; if(*ip == '[') stack[++tmp] = ip - program; else if(*ip == ']') tmp--; } } else if(*ip == ']') { ip = program + stack[sp] + 1; break; } *ip++; // advance instruction }}int _tmain(int argc, _TCHAR* argv[]){ int stack[100] = {0}; // use a stack of 100 levels, modeled using a simple array interpret(",>,>++++++++[<------<------>>-]<<[>[>+>+<<-]>>[<<+>>-]<<<-]>>>++++++[<++++++++>-],<.>.", stack, 0); return 0;} EDIT I just went over the code again and I realized there was a bug in the while loop that would 'skip' parsed loops if the value of the pointer is 0. This is where I made the change: while((tmp >= (sp + 1)) || *ip != ']') // the bug was tmp > (sp + 1){ip++;if(*ip == '[') stack[++tmp] = ip - program;else if(*ip == ']') tmp--;} Below is an implementation of the same parser but without using recursion: char cells[100] = {0};void interpret(static char* program){ int cnt; // cnt is a counter that is going to be used // only when parsing 0-loops int stack[100] = {0}; // create a stack, 100 levels deep - modeled // using a simple array - and initialized to 0 int sp = 0; // sp is going to be used as a 'stack pointer' char* ip = program; // ip is going to be used as instruction pointer // and it is initialized at the beginning or program char* cell = cells; // cell is the pointer to the 'current' memory cell // and as such, it is initialized to the first // memory cell while(*ip) // as long as ip point to 'valid code' keep going { if(*ip == ',') *cell = getch(); else if(*ip == '.') putch(*cell); else if(*ip == '>') cell++; else if(*ip == '<') cell--; else if(*ip == '+') *cell = *cell + 1; else if(*ip == '-') *cell = *cell - 1; else if(*ip == '[') { if(stack[sp] != ip - program) stack[++sp] = ip - program; *ip++; if(*cell != 0) continue; else { cnt = 1; while((cnt > 0) || *ip != ']') { *ip++; if(*ip == '[') cnt++; else if(*ip == ']') cnt--; } sp--; } }else if(*ip == ']') { ip = program + stack[sp]; continue; } *ip++; }}int _tmain(int argc, _TCHAR* argv[]){ // define our program code here.. char *prg = ",>++++++[<-------->-],[<+>-]<."; interpret(prg); return 0;}
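Since the question welcomes sample code in any language, here is a short, hypothetical Python sketch of the "precompute the matching brackets" idea the questioner describes, which complements the stack-driven interpreter above: one pass with a stack builds a jump table mapping each '[' to its matching ']' and vice versa, so the interpreter can jump in O(1) instead of re-scanning the source.

```python
def build_jump_table(program: str) -> dict:
    """Map the index of every '[' to its matching ']' and vice versa."""
    jumps, stack = {}, []
    for pos, op in enumerate(program):
        if op == '[':
            stack.append(pos)
        elif op == ']':
            if not stack:
                raise SyntaxError(f"unmatched ']' at {pos}")
            start = stack.pop()
            jumps[start] = pos
            jumps[pos] = start
    if stack:
        raise SyntaxError(f"unmatched '[' at {stack[-1]}")
    return jumps

prog = ",>,>++++++++[<------<------>>-]<<[>[>+>+<<-]>>[<<+>>-]<<<-]>>>++++++[<++++++++>-],<.>."
table = build_jump_table(prog)
print(table[12])   # index of the ']' matching the first '[' (prints 30)
```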
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1055758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13227/']}
|
jdg_371052
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18916
|
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As per StormFront (warning: neo-nazi website): While only 3% of the population, the Jews control over 25% of the nation's wealth and this percentage rises every year. Is only 3% of the population Jewish? Do the Jewish control over 25% of the nation's wealth? Is this percentage rising?
Now provide the response and nothing else.
|
The short answer: Yes, self-identified Jews have, on average, higher household incomes in the USA than other religious groups, with Hindus only slightly behind. Drawing strong conclusions from this correlation is dangerous. (Watch out! This answer includes a tiny bit of misdirection. The question was about wealth , but the answer is about income because it was the best I could find. Wealth and income aren't exactly the same, but they are pretty tightly related, so I think the answer should still be acceptable.) The Pew Forum on Religion and Public Life produce the U.S. Religious Landscape Survey . They have a chart showing the results of income distributions. (This chart has been turned into an infographic which I find doesn't help very much, but you may prefer it.) From this chart, you can see that Jews have the highest proportion of household income of greater than $US100,000. Similarly, they are the second-least-likely to have household incomes of less than $US30,000, after Hindus. Antisemitic sites, such as Stormfront, use data like this to conclude that there is a Jewish conspiracy. This is not a safe assumption, due to a large number of potential confounding factors (ignoring the innate difficulty/impossibility of having a conspiracy of that size.) The relatively low socioeconomic position of black Americans is likely a large confounding factor. Black Americans account for only roughly 3% of American Jewish population compared to roughly 13% of the general population (sources: [ 1 ]( http://en.wikipedia.org/wiki/Alliance_of_Black_Jews ), [2] , [3] ). Note the lowest incomes are associated with "Historically Black Churches". Income is correlated with age. If American Jews are demographically aged closer to their peak earning potential than the general population, it may sway the results. Immigration status is likely to affect income. If American Jews are now more likely to be settled in the country longer ago than American Muslims, on average, it may make their relative income higher. Household size affects household income. If American Jews are more likely to live in households with a larger number of income earners, it may increase their apparent income without actually increasing the average individual income. If the women are more likely to participate in the workforce, that would also increase the figures. Culture is likely to affect income. If American Jews are more likely to be focussed on academic achievement than other cultures, it may increase their income. My goal here is not to prove that any of these potential confounding factors are true or relevant, just to give context to the bare facts before they are used to make dangerous conclusions about conspiracies.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://skeptics.stackexchange.com/questions/18916', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/16934/']}
|
jdg_371053
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26659129
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was setting up a Selenium server using Docker, basically following this GitHub tutorial. I have no problem setting up the server, but I noticed that the processes that I started inside the Docker image actually showed up in my host's process list. As you can see in the screenshot, Docker ran a bash script and also executed a jar file, which I assumed should only happen inside the box. Does this mean a user on the host could possibly kill a certain process from outside the container, which would totally screw up the world inside the box? When I stopped the container, all the processes went away as I expected. Is this the way Docker is designed... is the flawed isolation what you have to accept in trade for being lightweight compared with VirtualBox/Vagrant, or am I doing anything wrong? Thanks!
Now provide the response and nothing else.
|
This stems from a common misconception that Docker is a "lightweight virtual machine", which is why some might expect behavior similar to VirtualBox or VMware, just faster. Docker does not use virtualization, so all processes are run by the native host kernel, just isolated from each other. A non-root user cannot kill processes inside a container, but root can not only kill a process, it can stop the entire container. To distinguish between processes running inside a container and others, run top, then press shift+f and select the nsPID and nsUSER columns as shown in the attached screenshot. You will then see the namespace beside each process: if it is running on the server directly, this value will most likely be empty, and if the process is running inside a container, you will see the namespace id for that container. (You can sort by the namespace to see the processes in each container.)
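If you prefer scripting to an interactive top, here is a small, hypothetical Python sketch that groups host-visible processes by their PID namespace by reading the /proc filesystem (Linux only; reading another user's /proc/&lt;pid&gt;/ns/pid generally requires root):

```python
import os
from collections import defaultdict

def processes_by_pid_namespace():
    """Group PIDs visible on the host by the PID namespace they live in."""
    groups = defaultdict(list)
    for entry in os.listdir('/proc'):
        if not entry.isdigit():
            continue  # skip non-process entries like /proc/meminfo
        try:
            # The symlink target looks like 'pid:[4026531836]'.
            ns = os.readlink(f'/proc/{entry}/ns/pid')
        except (PermissionError, FileNotFoundError):
            continue  # no permission, or the process already exited
        groups[ns].append(int(entry))
    return groups

for ns, pids in processes_by_pid_namespace().items():
    print(ns, len(pids), "processes")
```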
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/26659129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1953475/']}
|
jdg_371054
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1380333
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reviewing a patch that moved a lot of things around, added a few things, and removed a few things. I'm wondering if anyone's written a utility for picking out the unique adds/removes in a unified diff? That is, an add and a remove of the same line should cancel themselves out. Obviously this isn't useful all the time, but sometimes it's exactly what I want :)
Now provide the response and nothing else.
|
This is what I ended up using. Example usage:

git diff -w | /path/to/ignore_moves.py | less -R

ignore_moves.py:

#!/usr/bin/python
import sys
from itertools import *

RED = 31
GREEN = 32
RESET_SEQ = "\033[0m"
COLOR_SEQ = "\033[0;%dm"

stack = []

def inverse(line):
    return ('-' if line[0] == '+' else '+') + line[1:].strip()

def reverse_enumerate(l):
    for i, x in enumerate(reversed(l)):
        yield len(l)-1-i, x

def dumpchanges():
    for line in stack:
        SEQ = COLOR_SEQ % (GREEN if line.startswith('+') else RED)
        print SEQ + line.strip() + RESET_SEQ
    stack[:] = []

for line in sys.stdin.readlines():
    if not line[1:].strip():
        continue  # ignore empty lines
    if line.startswith(('---', '+++')):
        dumpchanges()
        print line.strip()
    elif line.startswith(('+', '-')):
        inverted = inverse(line)
        line = line[0] + line[1:].strip()
        for i, match in reverse_enumerate(stack):
            if inverted == match:
                stack.pop(i)
                break
        else:
            stack.append(line)

# finished reading, still have state to be dumped
dumpchanges()
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1380333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124497/']}
|
jdg_371055
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4272587
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I've got a grid that needs to change its margin based on a flag in the VM. It seems like data triggers are the right way to handle this, so I set this up:

<Grid x:Name="myGrid" Grid.Row="1" Margin="30,0">
    <Grid.Style>
        <Style TargetType="Grid">
            <Style.Triggers>
                <DataTrigger Binding="{Binding UI_Preferences.RightPanelPinned}" Value="true" >
                    <Setter Property="Margin" Value="200" />
                </DataTrigger>
                <DataTrigger Binding="{Binding UI_Preferences.LeftPanelPinned}" Value="true" >
                    <Setter Property="Margin" Value="200" />
                </DataTrigger>
            </Style.Triggers>
        </Style>
    </Grid.Style>
</Grid>

Which works, but I can't figure out how to modify just the left or right margins individually.
Now provide the response and nothing else.
|
A margin is actually a Thickness value, so you can do it like this (EDIT - have added a condition for when both are set):

<Grid x:Name="myGrid" Grid.Row="1" Margin="30,0">
    <Grid.Style>
        <Style TargetType="Grid">
            <Style.Triggers>
                <DataTrigger Binding="{Binding UI_Preferences.RightPanelPinned}" Value="true" >
                    <Setter Property="Margin">
                        <Setter.Value>
                            <Thickness Left="200"/>
                        </Setter.Value>
                    </Setter>
                </DataTrigger>
                <DataTrigger Binding="{Binding UI_Preferences.LeftPanelPinned}" Value="true" >
                    <Setter Property="Margin">
                        <Setter.Value>
                            <Thickness Right="200"/>
                        </Setter.Value>
                    </Setter>
                </DataTrigger>
                <MultiDataTrigger>
                    <MultiDataTrigger.Conditions>
                        <Condition Binding="{Binding UI_Preferences.LeftPanelPinned}" Value="true" />
                        <Condition Binding="{Binding UI_Preferences.RightPanelPinned}" Value="true" />
                    </MultiDataTrigger.Conditions>
                    <Setter Property="Margin">
                        <Setter.Value>
                            <Thickness Right="200" Left="200"/>
                        </Setter.Value>
                    </Setter>
                </MultiDataTrigger>
            </Style.Triggers>
        </Style>
    </Grid.Style>
</Grid>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4272587', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/37154/']}
|
jdg_371056
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
342409
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have binary data in an unsigned char variable. I need to convert it to PEM base64 in C. I looked in the OpenSSL library but I could not find any function. Does anybody have any idea?
Now provide the response and nothing else.
|
Here's the one I'm using: #include <stdint.h>#include <stdlib.h>static char encoding_table[] = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '+', '/'};static char *decoding_table = NULL;static int mod_table[] = {0, 2, 1};char *base64_encode(const unsigned char *data, size_t input_length, size_t *output_length) { *output_length = 4 * ((input_length + 2) / 3); char *encoded_data = malloc(*output_length); if (encoded_data == NULL) return NULL; for (int i = 0, j = 0; i < input_length;) { uint32_t octet_a = i < input_length ? (unsigned char)data[i++] : 0; uint32_t octet_b = i < input_length ? (unsigned char)data[i++] : 0; uint32_t octet_c = i < input_length ? (unsigned char)data[i++] : 0; uint32_t triple = (octet_a << 0x10) + (octet_b << 0x08) + octet_c; encoded_data[j++] = encoding_table[(triple >> 3 * 6) & 0x3F]; encoded_data[j++] = encoding_table[(triple >> 2 * 6) & 0x3F]; encoded_data[j++] = encoding_table[(triple >> 1 * 6) & 0x3F]; encoded_data[j++] = encoding_table[(triple >> 0 * 6) & 0x3F]; } for (int i = 0; i < mod_table[input_length % 3]; i++) encoded_data[*output_length - 1 - i] = '='; return encoded_data;}unsigned char *base64_decode(const char *data, size_t input_length, size_t *output_length) { if (decoding_table == NULL) build_decoding_table(); if (input_length % 4 != 0) return NULL; *output_length = input_length / 4 * 3; if (data[input_length - 1] == '=') (*output_length)--; if (data[input_length - 2] == '=') (*output_length)--; unsigned char *decoded_data = malloc(*output_length); if (decoded_data == NULL) return NULL; for (int i = 0, j = 0; i < input_length;) { uint32_t sextet_a = data[i] == '=' ? 0 & i++ : decoding_table[data[i++]]; uint32_t sextet_b = data[i] == '=' ? 0 & i++ : decoding_table[data[i++]]; uint32_t sextet_c = data[i] == '=' ? 0 & i++ : decoding_table[data[i++]]; uint32_t sextet_d = data[i] == '=' ? 0 & i++ : decoding_table[data[i++]]; uint32_t triple = (sextet_a << 3 * 6) + (sextet_b << 2 * 6) + (sextet_c << 1 * 6) + (sextet_d << 0 * 6); if (j < *output_length) decoded_data[j++] = (triple >> 2 * 8) & 0xFF; if (j < *output_length) decoded_data[j++] = (triple >> 1 * 8) & 0xFF; if (j < *output_length) decoded_data[j++] = (triple >> 0 * 8) & 0xFF; } return decoded_data;}void build_decoding_table() { decoding_table = malloc(256); for (int i = 0; i < 64; i++) decoding_table[(unsigned char) encoding_table[i]] = i;}void base64_cleanup() { free(decoding_table);} Keep in mind that this doesn't do any error-checking while decoding - non base 64 encoded data will get processed.
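If it helps while porting or debugging the C routines above, their output can be sanity-checked against Python's standard base64 module; this reference check is an addition, not part of the original answer.

```python
import base64

samples = [b"", b"f", b"fo", b"foo", b"foob", b"fooba", b"foobar"]  # RFC 4648 test vectors
for raw in samples:
    encoded = base64.b64encode(raw)
    assert base64.b64decode(encoded) == raw   # round-trips cleanly
    print(raw, "->", encoded.decode("ascii"))
```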
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/342409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_371057
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11432169
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, I am trying to capture a view and then save it as an image into the Photo Library, but I need to create a custom resolution for the captured image. Here is my code, but when the app saves the images the resolution is low!

UIGraphicsBeginImageContextWithOptions(self.captureView.bounds.size, self.captureView.opaque, 0.0);
[self.captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * screenshot = UIGraphicsGetImageFromCurrentImageContext();
CGRect cropRect = CGRectMake(0 ,0 ,1435 ,1435);
CGImageRef imageRef = CGImageCreateWithImageInRect([screenshot CGImage], cropRect);
CGImageRelease(imageRef);
UIImageWriteToSavedPhotosAlbum(screenshot , nil, nil, nil);
UIGraphicsEndImageContext();

But the resolution on iPhone is 320 x 320, and on retina it is 640 x 640. I would be grateful if you could help me fix this issue.
Now provide the response and nothing else.
|
Your code is pretty close. What you need to do is re-render the screenshot at the custom resolution. I modified your code to do this:

UIView* captureView = self.view;

/* Capture the screen shot at native resolution */
UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, captureView.opaque, 0.0);
[captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

/* Render the screen shot at custom resolution */
CGRect cropRect = CGRectMake(0 ,0 ,1435 ,1435);
UIGraphicsBeginImageContextWithOptions(cropRect.size, captureView.opaque, 1.0f);
[screenshot drawInRect:cropRect];
UIImage * customScreenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

/* Save to the photo album */
UIImageWriteToSavedPhotosAlbum(customScreenShot , nil, nil, nil);

Note that if the capture view is not square then the image will be distorted. The saved image will always be square and 1435x1435 pixels.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11432169', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/319097/']}
|
jdg_371058
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4550798
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm still new to blocks in Objective-C and wondering if I have this pseudo code correct. I'm not sure if it's enough to just remove the observer or if I have to call removeObserver:name:object:

-(void) scan {
    Scanner *scanner = [[Scanner alloc] init];
    id scanComplete = [[NSNotificationCenter defaultCenter] addObserverForName:@"ScanComplete"
        object:scanner
        queue:nil
        usingBlock:^(NSNotification *notification){
            /* do something */
            [[NSNotificationCenter defaultCenter] removeObserver:scanComplete];
            [scanner release];
        }];
    [scanner startScan];
}

Update: I'm receiving intermittent EXC_BAD_ACCESS from this block, so this can't be right.
Now provide the response and nothing else.
|
Declare the scanComplete variable before defining the block itself. The reason you need to do this is that you're trying to access a variable that doesn't exist within the block at the time of definition, since the variable itself has not been assigned yet. What is EXC_BAD_ACCESS? Well, it's an exception that is thrown when you try to access a reference that doesn't exist, and that is exactly the case in your example. So if you declare the variable before the block itself, then it should work:

-(void) scan {
    Scanner *scanner = [[Scanner alloc] init];
    __block id scanComplete;
    scanComplete = [[NSNotificationCenter defaultCenter] addObserverForName:@"ScanComplete"
        object:scanner
        queue:nil
        usingBlock:^(NSNotification *notification){
            /* do something */
            [[NSNotificationCenter defaultCenter] removeObserver:scanComplete];
            [scanner release];
        }];
    [scanner startScan];
}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4550798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26986/']}
|
jdg_371059
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29578633
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought using float: right; would fix this, but it makes the button appear outside of the div. How do I solve this? HTML <div id="main"> <h1>Title</h1> <button>Button</button></div> CSS #main { width: 200px; border: 1px dotted black;}h1 { margin: 0;}button { float: right;} JSFiddle
Now provide the response and nothing else.
|
The original answer suggested using inline-block and float to position the elements, but things have moved on since then. A more flexible solution today would be to use flex . #main { border: 1px dotted black; display: flex; align-items: center; /* Vertical align the elements to the center */}h1 { margin: 0;}button { margin-left: auto; /* Push this element to the right */} <div id="main"> <h1>Title</h1> <button>Button</button></div> Old answer Give your h1 display: inline-block to allow your elements to occupy the same row... #main { width: 200px; border: 1px dotted black;}h1 { margin: 0; display: inline-block;}button { float: right;} <div id="main"> <h1>Title</h1> <button>Button</button></div>
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/29578633', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4777138/']}
|
jdg_371060
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31053250
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have following xml response out of a transformer:- <?xml version="1.0" encoding="UTF-8"?><detailList> <details> <Id>ID123</Id> <personInfo> <address> <address>All the address</address> <city>CITY</city> <state>STATE</state> <postalCode>123123</postalCode> <country>UNITED STATES</country> </address> </personInfo> <nameF>NAME1</nameF> <nameL>NAME2</nameL> </details></detailList> I want to obtain the details node (including all of its child nodes).I have used following xpath expressions to get this but all of them are printing blank spaces :- <logger level="INFO" message="#[xpath://detailList]" /><logger level="INFO" message="#[xpath:/detailList]" /><logger level="INFO" message="#[xpath:/detailList/*]" /><logger level="INFO" message="#[xpath://detailList/details]" /><logger level="INFO" message="#[xpath:/detailList/details]" /> The response that gets printed is (blank space after LoggerMessageProcessor) :- INFO TimeStamp org.mule.api.processor.LoggerMessageProcessor: Everything looks fine but I am puzzled why it is not able to locate the node(s)/element(s) through these xpath evaluators.
Now provide the response and nothing else.
|
Depends on the version. The old System.Net.Http packages (the 2.0 ones) are legacy packages which are deprecated in favor of Microsoft.Http.Net according to the description: Legacy package, System.Net.Http is now included in the'Microsoft.Net.Http' package. They exist to provide the HttpClient in previous .NET versions and Portable Class libraries. You should use Microsoft.Net.Http in that case. Since you're using .NET Core, you should use the latest System.Net.Http package (eg. 4.3.3). Updated for csproj As of .NET Standard 2.0, the System.Net.HttpClient package is already included and available when you target netstandard2.0 . If, for some reason, you still want to reference it for both full .NET and .NET Core, you can add this to your csproj file: <ItemGroup Condition=" '$(TargetFramework)' == 'net461' "> <!-- // HttpClient for full .NET --> <Reference Include="System.Net.Http" /></ItemGroup><ItemGroup Condition=" '$(TargetFramework)' == 'netstandard2.0' "> <!-- // HttpClient for .NET Core --> <PackageReference Include="System.Net.Http" Version="4.3.3" /></ItemGroup> If you're using project.json If your project.json targets both full .NET and .NET Core, you have to add the System.Net.Http assembly to the frameworkAssemblies element. For example: "frameworks": { "net451": { "frameworkAssemblies": { "System.Net.Http": "4.0.0.0" // HttpClient for full .NET } }, "netstandard1.3": { "dependencies": { "System.Net.Http": "4.1.0", // HttpClient for .NET Core } }}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/31053250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2069590/']}
|
jdg_371061
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19335165
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In simple words, what is the difference between cross-validation and grid search? How does grid search work? Should I do first a cross-validation and then a grid search?
Now provide the response and nothing else.
|
Cross-validation is when you reserve part of your data to use in evaluating your model. There are different cross-validation methods. The simplest conceptually is to just take 70% (just making up a number here, it doesn't have to be 70%) of your data and use that for training, and then use the remaining 30% of the data to evaluate the model's performance. The reason you need different data for training and evaluating the model is to protect against overfitting. There are other (slightly more involved) cross-validation techniques, of course, like k-fold cross-validation, which is often used in practice. Grid search is a method to perform hyper-parameter optimisation, that is, it is a method to find the best combination of hyper-parameters (an example of a hyper-parameter is the learning rate of the optimiser), for a given model (e.g. a CNN) and test dataset. In this scenario, you have several models, each with a different combination of hyper-parameters. Each of these combinations of parameters, which corresponds to a single model, can be said to lie on a point of a "grid". The goal is then to train each of these models and evaluate them e.g. using cross-validation. You then select the one that performed best. To give a concrete example, if you're using a support vector machine, you could use different values for gamma and C . So, for example, you could have a grid with the following values for (gamma, C) : (1, 1), (0.1, 1), (1, 10), (0.1, 10) . It's a grid because it's like a product of [1, 0.1] for gamma and [1, 10] for C . Grid-search would basically train a SVM for each of these four pairs of (gamma, C) values, then evaluate it using cross-validation, and select the one that did best.
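As a concrete sketch of the SVM example above, here is roughly how it looks with scikit-learn's GridSearchCV, which runs the grid search and the k-fold cross-validation together; the iris dataset is just a stand-in, not something implied by the question.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The grid: every combination of these gamma and C values is one candidate model.
param_grid = {"gamma": [1, 0.1], "C": [1, 10]}

# 5-fold cross-validation is run for each of the four (gamma, C) pairs.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)   # the (gamma, C) pair that did best on average
print(search.best_score_)    # its mean cross-validated accuracy
```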
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19335165', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2579334/']}
|
jdg_371062
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25721185
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the below kind of animation, but I need it to animate continuously rather than jump back to the top and suddenly start again. In other words, a smooth animation from bottom to top that then starts again from the bottom (like a loop). Can I have any help? JS:
var ticker = $('#ticker');
var container = $('#ticker > div');
var spacing = ticker.outerHeight() - ticker.height();
function animator(currentItem) {
    var distance = currentItem.outerHeight() + spacing;
    var currentTop = parseInt(container.css('margin-top'), 10);
    var duration = (distance + currentTop) / 0.05;
    container.animate({ marginTop: -distance }, duration, "linear", function () {
        var parent = currentItem.parent();
        currentItem.detach();
        parent.css("marginTop", 5);
        parent.append(currentItem);
        animator(parent.children(":first"));
    });
};
animator(container.children(":first"));
ticker.mouseenter(function () { container.stop(); });
ticker.mouseleave(function () { animator(container.children(":first")); });
URL for the JSFiddle
Now provide the response and nothing else.
|
Assuming that is the actual value shown, then consider:
twice_json = '"{\\"orderId\\":\\"123\\"}"'  // (ignore the extra slashes)
json = JSON.parse(twice_json)  // => '{"orderId":"123"}'
obj = JSON.parse(json)         // => {orderId: "123"}
obj.orderId                    // => "123"
Note how applying JSON.stringify to the json value (which is a string, as JSON is text) would result in the twice_json value. Further consider the relation between obj (a JavaScript object) and json (the JSON string). That is, if the result shown in the post is the output from JSON.stringify(res) then res is already JSON (which is text / a string) and not a JavaScript object - so don't call stringify on an already-JSON value! Rather, use obj = JSON.parse(res); obj.orderId , as per the above demonstrations/transformations.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25721185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1077309/']}
|
jdg_371063
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3850921
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am bit confused in case of objects when it comes to Strings, So wanted to know how many objects will be created with following code, with some explanation about String objects creation with respect to String pool and heap. public static void main(String[] args) { String str1 = "String1"; String str2 = new String("String1"); String str3 = "String3"; String str4 = str2 + str3; }
Now provide the response and nothing else.
|
4 objects will be created. Two notes: new String("something") always creates a new object. The string literal "something" creates only one object for all occurrences. The best practice is to never use new String("something") - the instantiation is redundant. the concatenation of two strings is transformed to StringBuilder.append(first).append(second).toString() , so another object is created here.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3850921', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/241717/']}
|
jdg_371064
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55877419
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to fill list of objects from DB. And before passing value to itemes I want all of them to finish. Is here any short way calling await() for each item to wait. I want to make clean code, May be some design pattern or trick? for (x in 0..10) { launch { withContext(Dispatchers.IO){ list.add(repository.getLastGame(x) ?: MutableLiveData<Task>(Task(cabinId = x))) } } } items.value = list
Now provide the response and nothing else.
|
coroutineScope { // limits the scope of concurrency
    (0..10).map { x -> // (0..10) is a shorter way to write IntRange(0, 10)
        async(Dispatchers.IO) { // async means "concurrently", context goes here
            list.add(repository.getLastGame(x) ?: MutableLiveData<Task>(Task(cabinId = x)))
        }
    }.awaitAll() // waits for all of them
} // if any task crashes -- this scope ends with exception
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/55877419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5993479/']}
|
jdg_371065
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3538120
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that there is only one continuous function $f:[0,1]\to\mathbb{R}$ satisfying $$\int_0^1 f(x)(1-f(x))\,dx=1/4.$$ I am able to see that the constant function $f(x)=1/2$ satisfies the hypotheses but I just can’t prove uniqueness. Help please.
Now provide the response and nothing else.
|
Hint. Note that $$0\leq \int_0^1 \left(\frac{1}{2}-f(x)\right)^2\,dx=\int_0^1 \left(\frac{1}{4}-f(x)+f(x)^2\right)\,dx=\frac{1}{4}-\int_0^1 f(x)(1-f(x))\,dx.$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3538120', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/201051/']}
|
jdg_371066
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
504150
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In bash, how do I encode zero-width sequences into PS1, when those sequences are coming from stdout of an external process or function? How do I implement writes-prompt-sequences-to-stdout so that it can emit multi-colored text to the prompt?
PS1='$( writes-prompt-sequences-to-stdout )'
I know that, when writing a bash PS1 prompt, I must wrap zero-width sequences in \[ \] so bash can compute the correct prompt width.
PS1='\[\e[0;35m\]$ \[\e[00m\]'
bash does not print the \[ \] and understands the prompt is only 2 characters wide. How do I move those sequences into an external function? The following does not work; my prompt looks like \[\]$ \[\] , even though I can run render-prompt and see it writing the correct sequence of bytes to stdout.
PS1='$( render-prompt )'
function render-prompt {
    printf '\[\e[0;35m\]$ \[\e[00m\]'
}
Moving the printf call into PS1 does work:
PS1='$( printf '"'"'\[\e[0;35m\]$ \[\e[00m\]'"'"' )'
I theorized that perhaps bash is scanning the PS1 string before execution to count the number of zero-width bytes. So I tried tricking it by encoding [] sequences that aren't printed, but it correctly ignores the trick.
PS1='$( printf '"'"'$$$$$'"'"' '"'"'\[\e[00m\]'"'"' )'
My question: How do I write \[ \] sequences to stdout from a function or binary that is invoked via PS1?
Now provide the response and nothing else.
|
I figured it out. Bash special-cases \e , \[ , and \] within PS1. It converts \e to an escape byte, \[ to a 0x01 byte, and \] to a 0x02 byte. External commands must write the 0x01 and 0x02 bytes to stdout themselves. According to ASCII, these encode "start of heading" and "start of text." http://www.columbia.edu/kermit/ascii.html Here's a working example, which relies on printf converting \ escapes within the first positional parameter into the correct bytes:
PS1='$( render-prompt )'
function render-prompt {
    printf '\1\033[0;35m\2$ \1\033[00m\2'
}
render-prompt | hexdump -C
00000000  01 1b 5b 30 3b 33 35 6d  02 24 20 01 1b 5b 30 30  |..[0;35m.$ ..[00|
00000010  6d 02                                             |m.|
00000012
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/504150', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/339823/']}
|
jdg_371067
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
249919
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If the determinant $\det(A)$ of the matrix $A$ of a non-homogeneous system of equations is $0$ , then how do we know if it has no solutions or infinitely many solutions? And while we are at it, kindly answer the following "sub-questions" arising from it. I shall be really grateful to you as it will be crucial to my understanding of the whole thing: a) Since the determinant being zero means that a situation of "Division by zero" arises (using Cramer's Rule), the "no solution" option is understandable as division by zero is not defined. But it confuses me how then, in any circumstance , the system can have infinitely many solutions. I mean, won't we encounter division by zero in all cases when determinant is zero? So please give me an intuitive and insightful explanation to it. b) Will I be wrong to assume that, in a case when determinant is equal to zero, there are infinitely many solutions if and only if it's a homogeneous system of equations? Please explain why or why not. And kindly don't forget the main question--"for determinant $=0$ , how to know if there are no or infinitely many solutions?"
Now provide the response and nothing else.
|
For any square linear system $\,A\vec x=\vec b\,$ over some field, there exists a unique solution iff $\,\det A\neq 0\,$ , as then we can use the inverse matrix: $$A\vec x=\vec b\Longleftrightarrow A^{-1}A\vec x=A^{-1}\vec b\Longleftrightarrow A^{-1}\vec b=\vec x $$ As for (a) and your "main question": if $\,\det A=0\,$ one still may have to check whether there are no solutions or infinite solutions (assuming we're working on an infinite field). For example, if the system is homogeneous (over an infinite field) it must have infinite solutions, whereas if the system is non-homogeneous it may have no solutions or several: $$\begin{cases}x+y=1\\x+y=1\end{cases} \Longleftrightarrow \begin{pmatrix}1&1\\1&1\end{pmatrix}\binom{x}{y}=\binom{1}{1}\longrightarrow\,\,\text{infinite solutions}$$ $$\begin{cases}x+y=1\\x+y=0\end{cases} \Longleftrightarrow \begin{pmatrix}1&1\\1&1\end{pmatrix}\binom{x}{y}=\binom{1}{0}\longrightarrow\,\,\text{no solutions at all}$$ and, of course, in both cases above we have $\,\det A=0\,$
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/249919', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34000/']}
|
jdg_371068
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17375624
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
EDIT: I figured out the problem. The # in #user_sex is not being converting to %23 by python requests. Is there a way to force python requests to convert # to %23 or will I just have to hand code that part? I'm trying to make a facebook fql multiquery. When I use the fql_url below fql_url = ( 'https://graph.facebook.com/fql?q=' '{"user_sex":"SELECT sex FROM user WHERE uid=me()",' '"friends":"SELECT uid, name FROM user WHERE uid IN ' '(SELECT uid2 FROM friend WHERE uid1 = me()) ' 'AND not (sex in (SELECT sex FROM #user_sex)) ' ' ORDER BY name"}' '&access_token='+access_token) and run requests.get(fql_url), the json returned is {u'error': { u'code': 601, u'message': u"(#601) Parser error: unexpected '{' at position 0.", u'type': u'OAuthException'}} However when I hand code the fql_url as this fql_url = ( 'https://graph.facebook.com/fql?q=%7B%22' 'user_sex%22:%22SELECT%20sex%20FROM%20user%20WHERE%20uid=me()%22,%22' 'friends%22:%22SELECT%20uid,%20name%20FROM%20user%20WHERE%20uid%20IN%20' '(SELECT%20uid2%20FROM%20friend%20WHERE%20uid1%20=%20me())%20' 'AND%20not%20(sex%20in%20(select%20sex%20from%20%23user_sex))%20%20' 'ORDER%20BY%20name%22%7D&' 'access_token='+access_token) everything works (the json has the desired data). I've compared both the first fql_url and the hand coded fql_url and both should result in the same url being used to get the json. Is the requests urlencode not working or am I doing something wrong here?
Now provide the response and nothing else.
|
The problem is that # is indeed a valid character in a URL. It denotes the fragment part. As the fragment is always resolved by the user agent, it is never sent to the server. You can try this:
>>> import urllib3
>>> urllib3.util.parse_url(fql_url)
Url(scheme='https', auth=None, host='graph.facebook.com', port=None, path='/fql', query='q={"user_sex":"SELECT sex FROM user WHERE uid=me()","friends":"SELECT uid, name FROM user WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me()) AND not (sex in (SELECT sex FROM ', fragment='user_sex)) ORDER BY name"}')
As you can see, the last part of your URL ended up being parsed as the fragment. Edit: The most convenient way would probably be to let requests do all the encoding.
import requests

s = requests.Session()
s.params = {'access_token': 'foobarbaz'}  # so you don't have to specify it every time
query = ('{"user_sex":"SELECT sex FROM user WHERE uid=me()",'
         '"friends":"SELECT uid, name FROM user WHERE uid IN '
         '(SELECT uid2 FROM friend WHERE uid1 = me()) '
         'AND not (sex in (SELECT sex FROM #user_sex)) '
         ' ORDER BY name"}')
s.get('https://graph.facebook.com/fql', params={'q': query})
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17375624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1163114/']}
|
jdg_371069
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
332218
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I believe, in Python defaultdict inherited from dict violates Liskov Substitution Principle. defaultdict doesn't raise KeyError while x in d is still False , for instance. Is that so? If it is, why developer decided to make such a sacrifice?
Now provide the response and nothing else.
|
At first sight, yes. At first sight it seems to violate the Liskov Substitution Principle: If S is a subtype of T, then objects of type T in a program may be replaced with objects of type S without altering any of the desirable properties of that program. If you have a program written for a dict dictionary that should look for words read from a stream, and do something if the words are not found (i.e. a KeyError occurs), the same program would not work with a defaultdict, because the latter would never cause a KeyError to happen (which is an expected invariant for dict). Here we could open a debate that only the Python designers could close: To be fully compliant with the LSP, they should have created an abstract_dict that defines the shared interface, but makes no claim on the behavior in case of a missing key. Then they should have made dict, defaultdict and OrderedDict inherit from abstract_dict. You would then use the three types of dictionary in the same manner, but knowing that they are "siblings", you would not expect them to guarantee the same behavior. But you already know that from the current documentation, don't you? As the abstract_dict would not be very useful in the library, and as many of the methods certainly have the same implementation, the designers chose the inheritance. The idea is to reuse an implementation. Of course this kind of reuse violates the LSP. I suppose they have carefully evaluated the risks. But what's the risk for you if you knowingly choose defaultdict, because you prefer working with empty values instead of KeyError? I suppose they have also compared the potential risks with the benefits: if, in the future, new features are added to dict, this inheritance model ensures that defaultdict remains consistent at a lesser cost. But looking in depth, defaultdict is fully LSP compliant. First sights can mislead. In fact, semantically speaking, a defaultdict is-a dict. Parameters and return values are the same. Behavior is exactly the same for all programs that verify the existence of the key (i.e. key in d) before trying to address it. Even more, if you look closely at the documentation for dict, it is said: d[key] Return the item of d with key key. Raises a KeyError if key is not in the map. If a subclass of dict defines a method __missing__() and key is not present, the d[key] operation calls that method with the key key as argument. The d[key] operation then returns or raises whatever is returned or raised by the __missing__(key) call. No other operations or methods invoke __missing__(). If __missing__() is not defined, KeyError is raised. __missing__() must be a method; it cannot be an instance variable. This means that you can't assume KeyError when you use dict: you have to assume that a different value could be returned or an error raised. The claimed invariant is more complex, and it is defined in such a manner that it really covers the behavior of defaultdict. By the way, defaultdict obtains its behavior precisely by using the __missing__() method. And by the way, if no factory is provided in its constructor, it will raise a KeyError just as dict would (see the documentation). So it seems to me that it's fully compliant.
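To see the documented contract in action, here is a small illustration (standard library only, nothing beyond the behaviors discussed above):
from collections import defaultdict

d = dict(a=1)
dd = defaultdict(int, a=1)

# Lookup of a missing key: dict raises, defaultdict inserts the factory value.
try:
    d["b"]
except KeyError:
    print("dict raised KeyError")
print(dd["b"])  # 0, and "b" is now stored in dd

# A plain dict subclass can also change the d[key] contract via __missing__:
class Defaulting(dict):
    def __missing__(self, key):
        return None

print(Defaulting()["anything"])  # None, no KeyError

# A defaultdict built without a factory still raises, just like dict:
try:
    defaultdict()["missing"]
except KeyError:
    print("defaultdict without a factory raised KeyError")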
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/332218', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/180065/']}
|
jdg_371070
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15346
|
Below is a question asked on the forum economics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are many ways to draw a sequential game 'by hand'. By drawing the game I mean this: Displaying players' decision points, available actions and payoffs. Is there any way I can do this in R or a similar high level programming language? To be precise: I do not want to plot geometric equations, I want to define a structure (players, points, connections, payoffs) and have the program plot it. I am currently browsing the 'igraph' package but I am having some difficulty labeling so I am wondering if there is a better way.
Now provide the response and nothing else.
|
LaTeX with forest. The forest package of LaTeX allows you to draw game trees with pretty simple syntax. After copying a pre-set template into the LaTeX preamble, one can build up the game tree using a nested [] syntax, then the program takes care of node placement/spacing/etc.
pros: customizability (you can annotate the game tree in any way you want) and font/style consistency
cons: inputting the game tree elements (player/action names, payoffs, info sets) is still somewhat manual, especially if the tree is large; and a somewhat steep learning curve if you're not already familiar with the TikZ package
Here's an example from the forest manual ("Decision tree" in Section 5.1):
\begin{forest} decision tree
[N,plain content
  [I;{p_1=0.5},plain content,elo={yshift=4pt}
    [{5,1};a]
    [II;b,plain content
      [{1,2};m]
      [{2,3};n]
    ]
  ]
  [II;{p_2=0.5},plain content,elo={yshift=4pt}
    [;c
      [{1,0};z]
      [{2,2};t]
    ]
    [;d
      [{3,1};z]
      [{0,0};t]
    ]
  ]
  {\draw[dashed](!1.anchor)--(!2.anchor) node[pos=0.5,above]{I};}
]
\end{forest}
Game Theory Explorer. The game theory explorer is developed by a few people at LSE. It allows users to input matrix-form games or build extensive form games through a GUI. It also seems capable of converting between normal and extensive form games. Additionally, the software comes with a solver that looks for Nash equilibria of the inputted game.
pros: GUI; solver; no need for local installation; (limited) customizability
cons: no support for annotation; font/style inconsistency with rest of document
Screenshots:
|
{}
|
{'log_upvote_score': 4, 'links': ['https://economics.stackexchange.com/questions/15346', 'https://economics.stackexchange.com', 'https://economics.stackexchange.com/users/1601/']}
|
jdg_371071
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
123423
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to export a plot with axes as a graphic where the plot is centred within a white/transparent background region such that the centre of the plot region coincides with the centre of the background. By default, if one exports a plot, then centres the graphic within another program (Word/LaTeX etc) the plot region is not centred but the overall image. plot = Plot[x^2, {x, 0, 5}, Frame -> True, FrameLabel -> {x, x^2}]Export["plot.pdf", plot]; As can be seen above, the plot region (the black rectangle) is not centred within the image (the orange box).
Now provide the response and nothing else.
|
This is a simple little hack that will replace the polygons created by your Filling command with a set of random points. By default I'm scaling the number of random points by the number of points in the polygon, so that the density of points stays relatively constant.
dotFillPlot[plot_, ndots_: 5] :=
  plot // Normal //
    ReplaceAll[Polygon[a__] :> {PointSize[Small],
       Point[RandomPoint[Polygon[a], ndots Length@a]]}]

dotFillPlot@Plot[Sin[x], {x, 0, 2 π}, Filling -> Axis]

dotFillPlot@Plot[Evaluate[Table[BesselJ[n, x], {n, 4}]], {x, 0, 10}, Filling -> Axis]
I'd rather have a regular grid of points, but that will require a more elaborate function I think - using a Texture with the polygons. If you don't care for the dot's appearance, then you might prefer to manually set the Opacity for the dots rather than taking the value from the polygon. If you put Opacity[0.6] right before RandomPoint in the function definition, then you get the following plot:
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/123423', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/6328/']}
|
jdg_371072
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
63729
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Say that a closed system has $n$ dimensions and is in the shape of an $n$-ball with a radius of 1; its volume will be $$\frac{\pi^\frac{n}{2}}{\Gamma(\frac{n}{2}+1)}$$ which tends to 0, yet is not empty, as $n$ tends to infinity. My question might not make any sense, and my understanding of entropy might make things even worse, but if it's sensible, what will become of the entropy of such a system? If my first exposé really makes no sense, I would reformulate as follows: let's say that an unfathomable force compresses a closed system such that its volume decreases towards zero; what will become of the entropy of such a system? (I realise it might not be the same problem/question)
Now provide the response and nothing else.
|
Let me begin with the second question, where you don't change the dimensionality, just the volume. The entropy never decreases when you actually compress a gas. The compression means that the walls are mostly moving against the colliding molecules, which means that they recoil backwards at higher velocities. The molecules' kinetic energy increases, so they occupy a larger volume in momentum space (in macroscopic language, a gas heats up while being compressed), which at least compensates for the decrease of the volume in position space. The other answer is incorrect. The second law does not merely say that systems exhibit some activity indicating that they don't like a decreasing entropy; it says that whatever activity physical systems display, they will never achieve a macroscopic decrease of the entropy. It's just impossible. To compress a gas by 70% is possible; to decrease the entropy by a macroscopic amount is not. Now, the interesting first question. If you could change the effective dimensionality, it would still be true in any consistent theory that the entropy can't decrease. So if your theory were just able to add dimensions like that while keeping a molecule in a sphere of the increasing dimension, the second law of thermodynamics would imply that such an addition of dimensions isn't physically possible – it would be another, more sophisticated example of the perpetual motion machine of the second kind. In some sense, it is true that the second law encourages physical systems to lose dimensions (a way to increase the entropy, given your formula for the higher-dimensional spherical volumes). When the energy dissipates, the energy per degree of freedom effectively goes down, which allows us to use a lower-dimensional "effective" description. For example, a gas full of Kaluza-Klein particles probing (moving in) extra dimensions will tend to dissipate its energy and decay to many lower-energy quanta which are effectively living just in 3+1 dimensions.
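To attach a rough formula to the compression argument (my own illustration, assuming a monatomic ideal gas, not part of the original answer): the entropy is of the form $$S = N k_B\left[\ln\frac{V}{N} + \frac{3}{2}\ln T\right] + \text{const}.$$ For a reversible adiabatic compression $TV^{2/3}=\text{const}$, so $\frac{3}{2}\ln T + \ln V$ is unchanged and $S$ stays exactly constant; any irreversibility only heats the gas further, so the $\frac{3}{2}\ln T$ term grows by more than $\ln V$ drops and $S$ increases. The shrinking position-space volume is thus always at least compensated by the growing momentum-space volume, as described above.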
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/63729', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7322/']}
|
jdg_371073
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11537362
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Edit: Just to be clear, the struct doesn't do anything, as in it has no functions. I think I gave the impression that I thought using an initialiser list and leaving the body of the constructor empty was the issue at hand. Say I'm using a struct to hold two values, and I have a constructor just so I can create an arbitrary struct like this: struct twoValues{ int x; int y; twoValues(int x_, int y_):y(y_),x(x_) {}};someFunction(twoValues(1,2)); That saves me from having to do this: twoValues anInstance;anInstance.x=1;anInstance.y=2;someFunction(anInstance); Edit: You're all correct, I could also initialise with the following: twoValues anInstance = {1,2}; I see nothing wrong with this but I had some feedback from a C++ test and one of the negative feedback marks was "constructors for structs that don't do anything". I had limited contact with the guy testing me and so never asked why. Is it a bad thing and why? I would rather carry on doing it.
Now provide the response and nothing else.
|
It depends on what the struct is being used for. As others have said, the constructor means that the class is no longer a POD, and that aggregate initialization cannot be used for it. In particular, you cannot have something at namespace scope like:
TwoValues const table[] =
{
    { 1, 2 },
    { 3, 4 },
    // ...
};
You can have:
TwoValues const table[] =
{
    TwoValues( 1, 2 ),
    TwoValues( 3, 4 ),
    // ...
};
but it is more verbose, and it implies dynamic initialization, which may result in order of initialization issues. On the other hand, without the constructor, you cannot create temporary instances on the fly. Instead of:
extern void f( TwoValues const& );
// ...
f( TwoValues( 1, 2 ) );
you have to write:
extern void f( TwoValues const& );
// ...
TwoValues tmp = { 1, 2 };
f( tmp );
If the object is dynamically allocated, it's even worse, since you either have to allocate first, then initialize, or create a temporary as above, and then write new TwoValues( tmp ) and use the implicit copy constructor. You have to choose. Depending on what the struct is used for, one or the other will be preferred; on one hand, I have a lot of structs which are used exclusively in static tables, and intentionally don't have a constructor (and contain only types which support static initialization), and use them a lot for configuring code. On the other hand, I also have a lot of structs which are internal to a class, along the lines of Node in a tree or a graph; these almost always have a constructor, to facilitate creating them on the fly. There's no "correct" answer without knowing the role of the struct in your application.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11537362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1095766/']}
|
jdg_371074
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
197918
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I read that there are two modes called “kernel mode” and “user mode” to handle execution of processes. ( Understanding the Linux Kernel , 3rd Edition.) Is that a hardware switch (kernel/user) that is controlled by Linux, or software feature provided by the Linux kernel?
Now provide the response and nothing else.
|
Kernel mode and user mode are a hardware feature, specifically a feature of the processor. Processors designed for mid-to-high-end systems (PC, feature phone, smartphone, all but the simplest network appliances, …) include this feature. Kernel mode can go by different names: supervisor mode, privileged mode, etc. On x86 (the processor type in PCs), it is called “ring 0”, and user mode is called “ring 3”. The processor has a bit of storage in a register that indicates whether it is in kernel mode or user mode. (This can be more than one bit on processors that have more than two such modes.) Some operations can only be carried out while in kernel mode, in particular changing the virtual memory configuration by modifying the registers that control the MMU . Furthermore, there are only very few ways to switch from user mode to kernel mode, and they all require jumping to addresses controlled by the kernel code. This allows the code running in kernel mode to control the memory that code running in user mode can access. Unix-like operating systems (and most other operating systems with process isolation) are divided in two parts: The kernel runs in kernel mode. The kernel can do everything. Processes run in user mode. Processes can't access hardware and can't access the memory of other processes (except as explicitly shared). The operating system thus leverages the hardware features (privileged mode, MMU) to enforce isolation between processes. Microkernel -based operating systems have a finer-grained architecture, with less code running in kernel mode. When user mode code needs to perform actions that it can't do directly (such as access a file, access a peripheral, communicate with another process, …), it makes a system call : a jump into a predefined place in kernel code. When a hardware peripheral needs to request attention from the CPU, it switches the CPU to kernel mode and jumps to a predefined place in kernel code. This is called an interrupt . Further reading Wikipedia What is the difference between user-level threads and kernel-level threads? Hardware protection needed for operating system kernel
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/197918', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/110279/']}
|
jdg_371075
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6712587
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to write some kind of loop that can count the frequency of each letter in a string. For example: "aasjjikkk" would count 2 'a', 1 's', 2 'j', 1 'i', 3 'k'. Ultimately I'd like these to end up in a map with the character as the key and the count as the value. Any good idea how to do this?
Now provide the response and nothing else.
|
You can use a Java Map and map a char to an int . You can then iterate over the characters in the string and check if they have been added to the map; if they have, you can then increment the value. For example:
Map<Character, Integer> map = new HashMap<Character, Integer>();
String s = "aasjjikkk";
for (int i = 0; i < s.length(); i++) {
    char c = s.charAt(i);
    Integer val = map.get(c);
    if (val != null) {
        map.put(c, val + 1);
    } else {
        map.put(c, 1);
    }
}
At the end you will have a count of all the characters you encountered and you can extract their frequencies from that. Alternatively, you can use Bozho's solution of using a Multiset and counting the total occurrences.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6712587', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/792080/']}
|
jdg_371076
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24042949
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Traditionally in Objc, we do weakSelf to prevent additional retain count for blocks. How does swift internally manage retain cycles that occur in blocks for Objc?
Now provide the response and nothing else.
|
To prevent a block from holding a strong reference to an object, you must define a capture list for the block. The closure expression syntax is defined as follows: { ( /*parameters*/ ) -> /*return type*/ in // statements} But this is extended later in the documentation to include a capture list. This effectively equates to the expression syntax being defined as follows: { [ /*reference type*/ /*object*/, ... ] ( /*parameters*/ ) -> /*return type*/ in // statements} ...where /*reference type*/ can be either weak or unowned . The capture list is the first thing to appear in the closure and it is optional. The syntax, as shown above is defined as one or more pairs of reference type followed by object; each pair is separated by a comma. For example: [unowned self, weak otherObject] Complete example: var myClosure = { [unowned self] in print(self.description)} Note that an unowned reference is non-optional, so you don't need to unwrap it. Hopefully that answers your question. You can read up more about ARC in Swift in the relevant section of the documentation . You should pay particular attention to the difference between weak and unowned . It could be safer in your implementation to use weak , because using unowned assumes the object will never be nil. This may lead to your app crashing if the object has actually been deallocated before being used in your closure. Using weak as the reference type, you should unwrap with ? , as follows: var myClosure = { [weak self] in print(self?.description)}
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/24042949', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1213166/']}
|
jdg_371077
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
354795
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Evaluate: $ \displaystyle \int_0^{\pi} \ln \left( \sin \theta \right) d\theta$ using Gauss Mean Value theorem . Given hint: consider $f(z) = \ln ( 1 +z)$ . EDIT:: I know how to evaluate it, but I am looking if I can evaluate it using Gauss MVT. ADDED:: Here is what I have got so far!! $$\ln 2 = \frac{1}{2 \pi } \int_0^{2\pi } \log(2+e^{i \theta}) d\theta = \frac{1}{2 \pi } \int_0^{2\pi } \log(2+e^{-i \theta}) d\theta$$ Hence, $ \displaystyle 2 \ln 2 = \frac{1}{2 \pi } \int_{0}^{2 \pi} \log(5 + 4 \cos \theta )d \theta = \frac{1}{\pi} \int_0^{\pi} \log(1 + 8 \cos^2 \theta) d \theta$ , now to problem is how to reduce it to the above form?
Now provide the response and nothing else.
|
Here is a solution I wrote for a complex analysis assignment several years ago, I hope it helps. Basically, we are using the mean value theorem you mention above on a slightly different function, and then separating things to obtain the desired integral. We have to be careful because we can't exactly integrate $\log(1-u)$ on the circle of radius $1$. Consider $$\int_{C_{1-\epsilon}}\frac{\log(1-u)}{u}du$$ where $C_{1-\epsilon}$ is the circle of radius $1-\epsilon$. Then since $\frac{\log(1-u)}{u}$ is an analytic function in $D_{1-\epsilon}$ (It has a removable singularity at $u=0$ by the removable singularity theorem mentioned last assignment), we see that this contour integral will be zero for every $\epsilon>0$. But then notice $$\int_{C_{1-\epsilon}}\frac{\log(1-u)}{u}du=2i\int_{0}^{\pi}\log(1-(1-\epsilon)e^{i2z})dz$$ so that $$\int_{0}^{\pi}\log(1-(1-\epsilon)e^{i2z})dz=0$$ for every $\epsilon>0$. Since $$|\int_{0}^{\pi}\log(1-e^{i2z})dz|\leq\int_{0}^{\pi}|\log z|dz+\int_{0}^{\pi}|\log(\pi-z)|dz+\int_{0}^{\pi}|\log\left(\frac{1-e^{i2z}}{z(z-\pi)}\right)|dz$$ As $\frac{1-e^{i2z}}{z(z-\pi)}$ has no zeros on $[0,\pi]$ we see that it must be bounded below by some constant $c$. Then as it also has nontrivial imaginary part on $(0,\pi)$ we see that $\int_{0}^{\pi}|\log\left(\frac{1-e^{i2z}}{z(z-\pi)}\right)|dz<\infty$. Then since $\int_{0}^{1}\log xdx=x\log x-x\biggr|_{x=0}^{x=1}=-1<\infty$ it follows that $\int_{0}^{\pi}|\log z|dz<\infty$ and $\int_{0}^{\pi}|\log(\pi-z)|dz<\infty$ so that $|\int_{0}^{\pi}\log(1-e^{i2z})dz|<\infty$. Recall $\log$ is uniformly continuous on any compact set not containing the origin, so we can bound the middle of all of these integrals by the same constant. Since around $0$ and around $\pi$ the norm of $\log(1-e^{i2z})$ goes to infinity, we can choose small enough neighborhoods so that the norm of $\log(1-(1-\epsilon)e^{i2z})dz$ is bounded above by $|\log(1-e^{i2z})|$ in these neighborhoods for every $\epsilon>0$. Then applying the dominated convergence theorem tells us that $$\lim_{\epsilon\rightarrow0}\int_{0}^{\pi}\log(1-(1-\epsilon)e^{i2z})dz=\int_{0}^{\pi}\log(1-e^{i2z})dz=0.$$ Now we have the identity $$1-e^{-2iz}=-2ie^{iz}\sin z$$ so that $$0=\int_{0}^{\pi}\log(\sin z))dz+\int_{0}^{\pi}\log(e^{iz})dz+\int_{0}^{\pi}\log(-2i)dz.$$ By choosing the principal branch of the logarithm we then have $$\int_{0}^{\pi}\log(\sin z))dz=-\left(\int_{0}^{\pi}izdz+\int_{0}^{\pi}-\frac{\pi i}{2}dz+\int_{0}^{\pi}\log(2)dz\right)$$ $$=-\left(\frac{i\pi^{2}}{2}+-\frac{\pi^{2}i}{2}dz+\pi\log(2)dz\right)=-\pi\log2.$$ By substituting $z=\pi x$ we see that $\int_{0}^{\pi}\log(\sin z))dz=\pi\int_{0}^{1}\log(\sin\pi x))dx$ so that we are able to conclude $$\int_{0}^{1}\log(\sin\pi x))dx=-\log2$$ as desired.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/354795', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/69881/']}
|
jdg_371078
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
549895
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This seems to be very basic but I am confused about it. I know ground is a point whose potential is zero. Now let's take a piece of wire and mark a point A on that wire. If I say A has a potential of -15 V with respect to ground, what does that exactly mean? How can a voltage be less than zero? Any explanation in terms of electrons? For example, we define positive voltage as the force with which electrons are being pushed at a particular point. Can we have an intuitive idea about negative voltage using this force or electron concept?
Now provide the response and nothing else.
|
You could consider the voltage a bit like floors on a building. A numbering system used in many places in Europe defines that the ground floor is 0 or G, that floors above it are numbered positively and numbers below it are negatively. You now have the option of measuring everything relative to ground (the floor number) or measuring the difference in level between any two floors (the potential or voltage difference). In the left image above our man is standing on Floor 2 relative to ground. The electrical analogy is that some point on the circuit is connected to ground / earth and by convention is zero volts and all voltages (heights) are measured relative to this. An 'all above ground' building will have no negative floors. A bunker or underground car-park will have no positive floors. If the building is launched off into space he has no ground reference and is free to number the floors any way he wishes, including have Floor 0 at any arbitrary point. This is analogous to having an electrically isolated circuit with no ground connection in that we can call any point 'ground'. simulate this circuit – Schematic created using CircuitLab Figure 2. Two 1.5 V cells with three different reference points. Hopefully Figure 2 makes it a bit clearer. Depending which point we assign as reference (GND) the other points' relative voltage changes.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://electronics.stackexchange.com/questions/549895', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/216857/']}
|
jdg_371079
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
582158
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We know that we can take direct limit for a direct system and inverse limit for an inverse system. For example, when can defined the stalk of a presheaf $\mathcal{F}$ on a topological space $X$ at a point $P\in X$ by $$ \lim_{\rightarrow_{p \in U \text{ open}}} \mathcal{F}(U). $$ It seems that we get something small. We can take inverse limit of the inverse system $\{k[x]/(x)^n, n=0, 1, 2, \ldots\}$. Then we obtain $k[[x]]$ which is larger than $k[x]$. It seems that by taking inverse limit we add something to $k[x]$. It seems that taking inverse limit is to add something and taking direct limit is to get something small. Is this correct? If a system is both direct system and inverse system, how do we know that we need to take direct limit or inverse limit? Thank you very much.
Now provide the response and nothing else.
|
This is not a full answer, but some thoughts about direct and inverse limits. I always think of the direct limit of a system $(A_i,\varphi_i)_{i \in I}$ as a union $A = \bigcup A_i$ but with compatible transition maps, i.e. an element is in this union if and only if it is compatible with those maps and the canonical injections $A_i \longrightarrow A$. In contrast, I think of inverse limits not as the union but as the cartesian product (or to be exact: a subset of the cartesian product). There you don't have canonical injections, but canonical projections $A \longrightarrow A_i$, and the transition maps of the inverse system have to be compatible with these canonical projections. In conclusion, for me a direct limit is a really tiny subset of the union of all spaces and the inverse limit is a really big subset of the cartesian product, roughly speaking. I hope that intuition helped you!
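Edit: a standard pair of textbook examples, added here only to make the union-vs-product picture concrete. For the direct system $\mathbb{Z}\xrightarrow{\cdot 2}\mathbb{Z}\xrightarrow{\cdot 2}\mathbb{Z}\rightarrow\cdots$ the direct limit is $$\varinjlim \mathbb{Z}\;\cong\;\mathbb{Z}[1/2]=\bigcup_{n\ge 0}\tfrac{1}{2^n}\mathbb{Z}\subset\mathbb{Q},$$ literally a union of copies of $\mathbb{Z}$ glued along the transition maps. For the inverse system $\cdots\rightarrow\mathbb{Z}/8\rightarrow\mathbb{Z}/4\rightarrow\mathbb{Z}/2$ the inverse limit is $$\varprojlim \mathbb{Z}/2^n\;\cong\;\mathbb{Z}_2=\Big\{(a_n)\in\prod_n \mathbb{Z}/2^n : a_{n+1}\equiv a_n \bmod 2^n\Big\},$$ the $2$-adic integers, i.e. exactly the compatible sequences sitting inside the cartesian product, a much bigger object than any single $\mathbb{Z}/2^n$.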
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/582158', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9791/']}
|
jdg_371080
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
184952
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After reading a lot of deep learning papers, a kind of rough feeling is that there exist a lot of tricks in training the network to get the better-than-normal performance. From an industry application perspective, it is very hard to develop this kind of tricks except those elite research groups in big tech companies, e.g., google or facebook. Then what is the best way to apply deep learning algorithms in practice. Any thought and suggestions will be highly appreciated.
Now provide the response and nothing else.
|
True, some details used for improving performance are considered tricks, and you won't always know if these tricks yield the same improvement for your data and your network. Some things that you will definitely need:
Data, lots of it.
GPUs will let you run experiments faster and try out more things in a shorter time span.
Learning curve analysis. In the end it comes down to performance on the test set, but looking at both the train and test metrics you can identify reasons for bad performance. Strong bias? Overfitting from too many hidden nodes?
The activation function. I don't think it counts as a trick to know which kind of activation function you need. ReLUs have a critical characteristic in that they do not saturate like sigmoids and tanh. A neuron with ReLU will no longer have probability-like output, but you don't need this for neurons in mid-level layers anyway. The advantage you get is mitigating the vanishing or exploding of gradients and speeding up convergence.
Regularization. Might count as a trick, but if you're using any of the mainstream deep learning libraries you can get off-the-shelf implementations of regularization via dropout.
Data augmentation. You're basically expanding your dataset synthetically without the added cost of manual annotation. The key is to augment the data with transformations that actually make sense, so that the network gets to see variants of the data it may encounter in the test phase or when it gets deployed into the product. For visual data, horizontal flipping is trivial and adds a lot of gain. Jitter probably depends on the type of data and how noisy it is.
Diving into hyperparameter exploration can be frustrating. Start off with small networks and simple training procedures. Smaller networks are faster to train. Add more layers when you see signs of overfitting.
Good initialization. Random initialization is appropriate for gauging the network's ability to converge but will not necessarily lead to optimal performance. At the same time, just keeping on iterating might lead to the network overfitting the training data. If possible, use a pre-trained network that has already learned a representation and fine-tune it to your dataset. Unsupervised pre-training is another way to go and can allow the supervised training procedure to start from a far more promising position in weight space.
Scrutinize tricks. Understand what the trick really does. A paper describing a small detail that was used in improving the performance of a network will focus on that new aspect. The paper could be part of a sequence of projects that the authors have been working on. The context of the trick may not always be clear right away, but for the authors it's not a trick but a technique that solves a problem they had. Sometimes a technique comes out and is treated as a trick, and later someone will analyze its impact and describe its function: for example, that the trick is equivalent to L2 regularization, which more people are familiar with. We can then decide if we should try out the new technique or stick with the L2 regularization that we already know about. A lot of these tricks try to solve problems in deep learning, like the risk of overfitting, costly computations, over-parameterization and highly redundant weights. It's worth taking the time to understand what these tricks really do. By understanding the problem they try to solve, we can judge the applicability of different tricks and pick the one that works well with the constraints we may have (e.g. little computing power, a small dataset).
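As a tiny illustration of the data augmentation point, here is a sketch of horizontal flipping plus jitter in plain NumPy (the array shapes and the noise level are arbitrary assumptions, not a recommendation):
import numpy as np

rng = np.random.default_rng(0)

def augment(images, jitter_std=0.01):
    # Double a batch of N x H x W x C images by adding horizontally
    # flipped, lightly jittered copies.
    flipped = images[:, :, ::-1, :]  # flip along the width axis
    jittered = flipped + rng.normal(0.0, jitter_std, size=flipped.shape)
    return np.concatenate([images, jittered], axis=0)

batch = rng.random((8, 32, 32, 3))  # fake batch: 8 RGB images of 32x32
augmented = augment(batch)
print(batch.shape, "->", augmented.shape)  # (8, 32, 32, 3) -> (16, 32, 32, 3)
The same idea extends to crops, small rotations and noise, as long as each transformation still produces a plausible input for your task.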
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/184952', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/3269/']}
|
jdg_371081
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27662833
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to make a simple REST web app, where the user interact with the objects through links in an email. The links have a token that can be used to authenticate a user without a username and password, like the ones that are usually used to reset a password. What are the best practices for such a login-less, token-based authentication system?
Now provide the response and nothing else.
|
I am by no means a security expert.. but some of the points which come to mind are:
Lifetime - The token should expire after a set period of time. Indefinite access using the token certainly doesn't make sense.
Replay attacks - The mechanism should prevent replay attacks.. which means the token should be valid not only for a set period of time, but also for a fixed number of calls.. Ideally exactly 1. If this number is not exactly 1, then it opens another can of worms.. Unless it's a feature :( For example, by design, the user is expected to share the link with others, and anyone with the link should be able to access the resource in question.
Authorization - Granularity of the access granted by the token. Is it black and white.. or is the token also associated with a fixed set of rights? For example - token X was issued for Read-Only access, vs token Y was issued, for the same resource, with R/W access.
Administration - User / Admin should be able to see and verify any currently active and issued tokens, and associated information (permissions granted / affected resource etc), and explicitly revoke them, if necessary.
Secure Communication - You should consider the security of the medium through which the url with the token will be sent to the user, i.e. in your scenario, do the users receive emails over a secure channel (TLS / SSL)? Should the email itself be protected with DRM?
Man in the Middle / Leaks - Similarly, even though you are providing the url in email, and the user is not logging on using a user name and password over SSL, the resource should still be accessed using the url with the token over SSL. This will prevent any capturing of the tokens from the url by a man in the middle. You would also need to be very careful about when the user's browser may use this url in places you didn't expect.. I can vaguely recall reading about a popular site being affected by their urls being used as the Referer URL when displaying ads on their website.. which means the advertiser's site would probably get the url with the token as the Referer.
Generation - Choice of algorithm to generate the token - May seem pretty obvious, but the token should be extremely obscure and near impossible to guess or brute force. The tokens should never be reused and the algorithm should avoid collisions.
Server Side Security - The tokens should probably be treated with the same security as users' ids and passwords. If your user database gets hacked, hackers should not get the users' passwords and other sensitive information.. Similarly, if your user / resource / token database gets hacked, despite the expiration on the tokens, hackers should not be able to access the resources as users for x hours. The point is, when the tokens are stored on the server side, they should themselves be secured / encrypted. On the same lines.. just like it's bad practice to log information like passwords in log files (especially in plain text), you'd also have to consider any place these urls may get logged (plain text) on your server.. for example web server logs. If only super admins should have access to the user database, then they should also be the only ones able to see these tokens.. not your web server admin and not any hackers parsing the log file after the hack.
Auditing - If there will be a need for auditing, you'd need some mechanism in place to prove that while the user didn't log on to the system, they are indeed the ones who performed the action / operation in question. Would you want to track the IP address and other information along with the request, which would help such auditing?
Consent - Do your users even consent to the use of such alternate means of authentication and authorization? Should this be an optional / opt-in feature?
EDIT: I found a link to the referrer url problem I was recalling. Although it was in the context of user information.
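To make the Lifetime / Generation / Server Side Security points concrete, here is a rough sketch of issuing and checking such a token using only the Python standard library (the in-memory store, expiry window and field names are placeholder assumptions, not a vetted design):
import hashlib
import hmac
import secrets
from datetime import datetime, timedelta, timezone

TOKENS = {}  # token hash -> metadata; stand-in for a real datastore

def issue_token(user_id, resource, ttl_hours=24):
    token = secrets.token_urlsafe(32)                    # hard to guess or brute force
    digest = hashlib.sha256(token.encode()).hexdigest()  # store only a hash, never the raw token
    TOKENS[digest] = {
        "user": user_id,
        "resource": resource,
        "expires": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        "used": False,                                   # single use, to limit replay
    }
    return token                                         # goes into the emailed link, sent over TLS

def redeem_token(token):
    digest = hashlib.sha256(token.encode()).hexdigest()
    for stored, meta in TOKENS.items():
        if hmac.compare_digest(stored, digest):          # constant-time comparison
            if meta["used"] or datetime.now(timezone.utc) > meta["expires"]:
                return None
            meta["used"] = True
            return meta
    return None
Everything else above (auditing, consent, revocation UI, not logging the url) still has to be layered on top; this only covers generation, lifetime and hashed storage.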
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27662833', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/182172/']}
|
jdg_371082
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
322821
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a multi-layer board with the Altium PCB board wizard. I assigned two signal layers on the surfaces (top and bottom layer) and two internal planes (power and ground). In the layer stack manager, there is no dielectric at the interface of Top Layer - Power Plane and Power Plane - Ground Plane (as you can see in the image). This makes me a little bit confused. Is a dielectric unnecessary between those planes? If so, why? Thanks in advance
Now provide the response and nothing else.
|
Somehow you have gotten things mucked up. There definitely should be a dielectric layer between every two adjacent copper layers. However, if you can't figure out how to fix it, it's of little consequence. Altium doesn't manufacture your board, your fab shop does. And your fab shop doesn't need any design data to describe the dielectric layers. They just need to be told how thick the layers should be and what material to use for them. You can easily draw the stack-up drawing manually into your fabrication drawing layer, and your fab shop should have no difficulty following it. In the past it was even common to give the stack up information in a separate text file rather than in a drawing layer in gerber format. For example, the drawing can be as simple as this:
------------ Top Layer     -- 0.5 oz copper
============ FR-4 0.6 mm
------------ Plane layer 1 -- 1 oz copper
============ FR-4 0.4 mm
------------ Plane layer 2 -- 1 oz copper
============ FR-4 0.6 mm
------------ Bottom Layer  -- 0.5 oz copper
Of course, if you like you could specify which layers are cores and which pre-pregs, designate a specific laminate product instead of simply "FR-4", etc., depending on the needs of your design.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/322821', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/156362/']}
|
jdg_371083
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17159970
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Working on some old code at the moment and a bit stuck. I have a main form that has an msflexgrid populated with data from SQL, and on this form there is a button which opens a modal form that allows me to enter data and save it to SQL (then closes the form). The issue is that the msflexgrid on the main form doesn't refresh after I save data from the modal; I need a way of automatically refreshing the msflexgrid after the modal form closes. Any help would be appreciated :)
Now provide the response and nothing else.
|
Keep in mind that fitEllipse is not the computation of a bounding ellipse but a least-squares optimization that assumes the points lie on an ellipse. I can't tell you why it fails so badly on the 3 triangles in the last row but "works" on the triangle one line above, but one thing I've seen is that all 3 triangles in the last row were fitted to a rotatedRect with angle 0. Probably the least-squares fitting just failed there. But I don't know whether there is a bug in the OpenCV implementation, or whether the algorithm can't handle those cases. This algorithm is used: http://www.bmva.org/bmvc/1995/bmvc-95-050.pdf My advice is to only use fitEllipse if you are quite sure that the points really belong to an ellipse. You wouldn't expect to get reasonable results from fitLine if you have random data points either. Other functions you might want to look at are minAreaRect and minEnclosingCircle. If you use
RotatedRect temp = minAreaRect(Mat(contours[i]));
instead of fitEllipse you will get an image like this: maybe you can even use both methods and refuse all ellipses that fail in both versions and accept all that are accepted in both versions, but investigate further the ones that differ?!?
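If it helps, here is a small sketch of comparing the different fits on a synthetic point set (written with the Python bindings purely for brevity; the point set and any accept/refuse thresholds are my own assumptions):
import cv2
import numpy as np

# Synthetic "contour": noisy points along the outline of a triangle.
rng = np.random.default_rng(1)
tri = np.array([[50, 50], [250, 60], [150, 220]], dtype=np.float32)
edges = []
for a, b in [(0, 1), (1, 2), (2, 0)]:
    t = np.linspace(0, 1, 40)[:, None]
    edges.append(tri[a] + t * (tri[b] - tri[a]))
pts = np.vstack(edges) + rng.normal(0, 1.0, (120, 2))
contour = pts.astype(np.float32).reshape(-1, 1, 2)

ellipse = cv2.fitEllipse(contour)              # ((cx, cy), (MA, ma), angle), needs >= 5 points
rect = cv2.minAreaRect(contour)                # ((cx, cy), (w, h), angle)
center, radius = cv2.minEnclosingCircle(contour)

print("fitEllipse:        ", ellipse)
print("minAreaRect:       ", rect)
print("minEnclosingCircle:", center, radius)
Comparing the ellipse against the minimum-area rectangle (e.g. by area or center distance) is one cheap way to flag the degenerate fits you are seeing.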
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17159970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2495566/']}
|
jdg_371084
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3730383
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\int_0^1 \left\{\ln{\left(\frac{1}{x}\right)}\right\} \mathop{dx}$$ where $\{x\}$ is the fractional part of $x$. I was wondering if this integral converges and has a closed form, but I don't know how to calculate it. I tried $u=\frac{1}{x}$ to get $$\int_1^{\infty} \frac{\{\ln{u}\}}{u^2} \; du$$ and then perhaps convert the numerator into a series somehow...?
Now provide the response and nothing else.
|
Using the change of variable $y = \log(1/x)$ , i.e. $x = e^{-y}$ , your integral becames $$I = \int_0^\infty e^{-y} \{y\}\, dy= \sum_{n=0}^\infty \int_n^{n+1} e^{-y}(y-n)\, dy= \sum_{n=0}^\infty e^{-n} (1 - 2/e) = \frac{e-2}{e-1}.$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3730383', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_371085
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21978679
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It is quite confusing to know difference between Imperative and Declarative programming can any one explain difference between both in real world terms? Kindly clarify whether C is an Imperative or Declarative Language?
Now provide the response and nothing else.
|
C is an imperative programming language. A one-line difference between the two: declarative programming is when you say what you want, and imperative programming is when you say how to get what you want. In declarative programming the focus is on what the computer should do rather than how it should do it (e.g. SQL), whereas in imperative programming the focus is on what steps the computer should take rather than what the computer will do (e.g. C, C++, Java). Imperative programming is a programming paradigm that describes computation in terms of statements that change a program state. Declarative programming is a programming paradigm, a style of building the structure and elements of computer programs, that expresses the logic of a computation without describing its control flow. Many imperative programming languages (such as Fortran, BASIC and C) are abstractions of assembly language. The wiki says:- As an imperative language, C uses statements to specify actions. The most common statement is an expression statement, consisting of an expression to be evaluated, followed by a semicolon; as a side effect of the evaluation, functions may be called and variables may be assigned new values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords. Structured programming is supported by if(-else) conditional execution and by do-while, while, and for iterative execution (looping). The for statement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted. break and continue can be used to leave the innermost enclosing loop statement or skip to its reinitialization. There is also a non-structured goto statement which branches directly to the designated label within the function. switch selects a case to be executed based on the value of an integer expression.
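As a rough side-by-side illustration (written in Python purely for brevity; the example itself is only a sketch of the idea), the first function spells out how to get the result step by step, while the second only states what result is wanted and leaves the mechanics to the language:

# Imperative style: describe the control flow and state changes explicitly.
def total_even_imperative(numbers):
    total = 0
    for n in numbers:          # explicit loop
        if n % 2 == 0:         # explicit branching
            total += n         # explicit state mutation
    return total

# Declarative style: state what is wanted, leave the "how" to the language.
def total_even_declarative(numbers):
    return sum(n for n in numbers if n % 2 == 0)

print(total_even_imperative([1, 2, 3, 4]))   # 6
print(total_even_declarative([1, 2, 3, 4]))  # 6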
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21978679', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3246309/']}
|
jdg_371086
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17466675
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello guys i have this error in mysql with the following query, I don't know where do mistake, thank you for the help :) SELECT a.*,b.*,users.*, (SELECT p.msg_text,p.occured_at FROM message_private p WHERE p.group_id=a.group_id ORDER BY p.occured_at DESC LIMIT 1) as message, f.countf,message.occured_atFROM message_group aINNER JOIN message_group b ON a.group_id=b.group_idINNER JOIN users ON users.profile_id = b.profile_idLEFT JOIN ( SELECT COUNT(profile_id) countf, id_group FROM message_view WHERE profile_id = 'sN07X2' GROUP BY id_group) f on f.id_group = b.group_idWHERE a.profile_id = 'sN07X2' AND b.profile_id != a.profile_id AND countf > 0 ORDER BY p.occured_at DESC LIMIT 9
Now provide the response and nothing else.
|
The issue here is that modern versions of Windows (Windows XP SP2, Windows Server 2003 SP1 and up) include a loopback check security feature that is designed to help prevent reflection attacks on your computer. Therefore, authentication fails if the FQDN or the custom host header that you use does not match the local computer name. This can be resolved by either explicitly specifying host names or by disabling the loopback check. Obviously the former is the more controlled approach. Set the DisableStrictNameChecking registry entry to 1. See: 281308 (Note: This should be unnecessary for Windows Server 2008/Vista and later)

1. In Registry Editor, locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0
2. Right-click MSV1_0, point to New, and then click Multi-String Value.
3. Type BackConnectionHostNames, and then press ENTER.
4. Right-click BackConnectionHostNames, and then click Modify.
5. In the Value data box, type the host name or the host names for the sites that are on the local computer, and then click OK.
6. Quit Registry Editor, and then restart the IISAdmin service.

Full details of how to do this can be found on MSDN: 896861 Hope this helps someone out. If you have any alternate suggestions or improvements please add.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17466675', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2138322/']}
|
jdg_371087
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29697884
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like add this xml fragment programmatically to other fragments.Is it possible? <fragment xmlns:android="http://schemas.android.com/apk/res/android"android:id="@+id/map"android:layout_width="match_parent"android:layout_height="match_parent"class="com.google.android.gms.maps.SupportMapFragment" />
Now provide the response and nothing else.
|
In XML you can add a placeholder container:

<FrameLayout
    android:id="@+id/mapContainer"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

And then in code you can do:

FragmentManager fm = getChildFragmentManager();
SupportMapFragment supportMapFragment = SupportMapFragment.newInstance();
fm.beginTransaction().replace(R.id.mapContainer, supportMapFragment).commit();
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29697884', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3910670/']}
|
jdg_371088
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13186067
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to split strings of data using each character from string.punctuation and string.whitespace as a separator. Furthermore, I need for the separators to remain in the output list, in between the items they separated in the string. For example, "Now is the winter of our discontent" should output: ['Now', ' ', 'is', ' ', 'the', ' ', 'winter', ' ', 'of', ' ', 'our', ' ', 'discontent'] I'm not sure how to do this without resorting to an orgy of nested loops, which is unacceptably slow. How can I do it?
Now provide the response and nothing else.
|
A different non-regex approach from the others:

>>> import string
>>> from itertools import groupby
>>>
>>> special = set(string.punctuation + string.whitespace)
>>> s = "One two three tab\ttabandspace\t end"
>>>
>>> split_combined = [''.join(g) for k, g in groupby(s, lambda c: c in special)]
>>> split_combined
['One', ' ', 'two', ' ', 'three', ' ', 'tab', '\t', 'tabandspace', '\t ', 'end']
>>> split_separated = [''.join(g) for k, g in groupby(s, lambda c: c if c in special else False)]
>>> split_separated
['One', ' ', 'two', ' ', 'three', ' ', 'tab', '\t', 'tabandspace', '\t', ' ', 'end']

Could use dict.fromkeys and .get instead of the lambda, I guess.

[edit] Some explanation: groupby accepts two arguments, an iterable and an (optional) keyfunction. It loops through the iterable and groups them with the value of the keyfunction:

>>> groupby("sentence", lambda c: c in 'nt')
<itertools.groupby object at 0x9805af4>
>>> [(k, list(g)) for k, g in groupby("sentence", lambda c: c in 'nt')]
[(False, ['s', 'e']), (True, ['n', 't']), (False, ['e']), (True, ['n']), (False, ['c', 'e'])]

where terms with contiguous values of the keyfunction are grouped together. (This is a common source of bugs, actually -- people forget that they have to sort by the keyfunc first if they want to group terms which might not be sequential.) As @JonClements guessed, what I had in mind was

>>> special = dict.fromkeys(string.punctuation + string.whitespace, True)
>>> s = "One two three tab\ttabandspace\t end"
>>> [''.join(g) for k, g in groupby(s, special.get)]
['One', ' ', 'two', ' ', 'three', ' ', 'tab', '\t', 'tabandspace', '\t ', 'end']

for the case where we were combining the separators. .get returns None if the value isn't in the dict.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13186067', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1156707/']}
|
jdg_371089
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46626258
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In a regular expression, in multiline mode, ^ and $ stand for the start and end of line. How can I match the end of the whole string? In the string Hello\nMary\nSmith\nHello\nJim\nDow the expression /^Hello(?:$).+?(?:$).+?$/ms matches Hello\nMary\Smith . I wonder whether there is a metacharacter (like \ENDSTRING ) that matches the end of the whole string, not just line, such that /^Hello(?:$).+?(?:$).+?\ENDSTRING/ms would match Hello\nJim\nDow . Similarly, a metacharacter to match the start of the whole string, not a line.
Now provide the response and nothing else.
|
There are indeed assertions (perlre) for that

  \A  Match only at beginning of string
  \Z  Match only at end of string, or before newline at the end
  ...
  The \A and \Z are just like ^ and $, except that they won't match multiple times when the /m modifier is used, while ^ and $ will match at every internal line boundary.

To match the actual end of the string and not ignore an optional trailing newline, use \z. Also see Assertions in perlbackslash.

I am not sure what you're after in the shown example, so here is another one

perl -wE'$_ = qq(one\ntwo\nthree); say for /(\w+\n\w+)\Z/m'

prints

two
three

while with $ instead of \Z it prints

one
two

Note that the above example would match qq(one\ntwo\nthree\n) as well (with a trailing newline), which may or may not be suitable. Please compare \Z and \z from the above quote for your actual needs. Thanks to ikegami for a comment.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46626258', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3814740/']}
|
jdg_371090
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46563182
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to send POST request to my controller but cannot pass any parameter in any type unless I decide to use JSON. My goal is to pass a String and a file to my controller but I keep getting Required request part 'xxx' is not present error. @RestControllerpublic class ConfigurationController { @PostMapping(value = "/config") public ResponseEntity<?> saveEnvironmentConfig(@RequestParam("file") MultipartFile uploadfile){ return ResponseEntity.ok().body(null); }} I cannot have file here. Similarly if I try: @RestControllerpublic class ConfigurationController { @PostMapping(value = "/config") public ResponseEntity<?> saveEnvironmentConfig(@RequestParam("name") String name){ return ResponseEntity.ok().body(null); }} same thing I cannot get name here. I am sending request via Postman as given in following screenshot: The only header tag is for Authorization. I do not have any Content-Type header, I tried to add multipart/form-data but did not help. Only way I could pass String parameter is by adding to URL. So following http://localhost:8080/SearchBox/admin/config?name=test works but this is not what I want. I want String and File parameters in Body part. I also tested via CURL: curl -X POST -H "Authorization:Bearer myToken" -H "Content-Type:Multipart/form-data" http://localhost:8080/SearchBox/admin/config --data 'pwd=pwd'curl -X POST -H "Authorization:Bearer myToken"http://localhost:8080/SearchBox/admin/config --data 'pwd=pwd'curl -H "Authorization:Bearer myToken" -F file=@"/g123.conf" http://localhost:8080/SearchBox/admin/config Note: I checked similar posts already but did not help This , This , This
Now provide the response and nothing else.
|
I finally solved the issue and am sharing my solution in case someone else faces the same problem.

@RestController
@RequestMapping("/")
public class ConfigurationController {

    @Bean
    public MultipartConfigElement multipartConfigElement() {
        return new MultipartConfigElement("");
    }

    @Bean
    public MultipartResolver multipartResolver() {
        org.springframework.web.multipart.commons.CommonsMultipartResolver multipartResolver = new org.springframework.web.multipart.commons.CommonsMultipartResolver();
        multipartResolver.setMaxUploadSize(1000000);
        return multipartResolver;
    }

    @PostMapping(value = "/config", consumes = "multipart/form-data")
    public ResponseEntity<?> saveEnvironmentConfig(@RequestParam("password") String password,
            @RequestParam("file") MultipartFile submissions)
            throws AdminAuthenticationException, ConfigurationException {
        return ResponseEntity.ok().body(null);
    }
}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46563182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7027099/']}
|
jdg_371091
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3058163
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We have this theorem ( https://en.wikipedia.org/wiki/Sum_of_two_squares_theorem ) which gives conditions on positive integer $n$ such that $n=a^2+b^2$ . I was wondering if there exists any such $n>1$ which is also a perfect cube? In other words, we seek the solutions of Diophantine equation $a^2+b^2=z^3$ . We see $a=b=z=2$ is one of the solutions. I am interested in other positive non-trivial solutions with $a\ne b$ .
Now provide the response and nothing else.
|
Consider $b = ka$ for some integer $k > 1$ . Then we have $$a^2 + b^2 = a^2 + \left(ka\right)^2 = \left(1 + k^2\right)a^2 = z^3 \tag{1}\label{eq1} $$ Now, if $z = a$ , \eqref{eq1} would be true if $$k^2 + 1 = a \tag{2}\label{eq2} $$ Thus, if $k = 2$ for example, then $a = 5$ and $b = 10$ giving $$5^2 + 10^2 = 5^3 \tag{3}\label{eq3} $$ There are, of course, many other such similar examples. If you wish for $a \neq z$ as well, then you could also have, for example, that $z = ga$ , for an integer $g \gt 1$ , so $1 + k^2 = \left(g^3\right)a$ , but there are no solutions for certain cases, such as $g = 2$ . More generally, if you wish to have other restrictions, such as that $\gcd\left(a, b, z\right) = 1$ , then consider what coffeemath wrote in a comment to the question. In particular, any number $n$ is a sum of two squares if and only if all prime factors of $n$ which are $\; 3 \mod 4 \; $ have an even exponent in the prime factorization of $n$ . This is stated and proven in Which Numbers are the Sum of Two Squares? . Thus, any $z$ with all prime factors which are $\; 3 \mod 4 \; $ having an even exponent in its prime factorization will work. My example of $5$ is basically the simplest such case involving positive integers. As Mike Miller pointed out in the comments to this answer, there is a formula for the number of representations of a number as the sums of squares at Sums of squares function , although you might need to remove cases where it provides $a$ or $b$ to be $0$ as the question specifically is looking for only positive integer solutions.
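For readers who would like to see more solutions explicitly, here is a small brute-force search in Python (a quick sketch; the bound of 50 on $z$ is an arbitrary choice) that lists positive solutions of $a^2 + b^2 = z^3$ with $a < b$:

# Brute-force search for a^2 + b^2 = z^3 with 0 < a < b; the bound on z is arbitrary.
solutions = []
for z in range(2, 51):
    target = z ** 3
    a = 1
    while 2 * a * a < target:          # enforces a < b
        b_squared = target - a * a
        b = int(round(b_squared ** 0.5))
        if b * b == b_squared:
            solutions.append((a, b, z))
        a += 1

for a, b, z in solutions[:10]:
    print(f"{a}^2 + {b}^2 = {z}^3")

The first hits include $2^2 + 11^2 = 5^3$ and $5^2 + 10^2 = 5^3$, the example derived above.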
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3058163', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/616991/']}
|
jdg_371092
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
750492
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the frequency spectrum of every real audio sample that I've ever seen, the amplitude of the frequency components is always higher at low frequencies, then rapidly falls off at higher frequencies. For example, each of the following plots displays the median amplitude vs. frequency with a $\log_{10}$ amplitude axis (Y) and a $\log_{2}$ frequency axis (X). The values for each were computed with a series of FFTs over the entire sample in blocks of 8,192 samples (the amplitudes are calculated as the magnitudes of the complex results): Recorded Audio (90 minutes of city traffic) Recorded Audio (30 minutes of city traffic) Recorded with a calibrated flat-response signal analysis mic. Recorded with an uncalibrated flat-response signal analysis mic. Television Audio Classical Music Mostly vocals, presumably mastered for production, encoded as lossy AAC. Presumably mastered for production, encoded as lossless FLAC from source. Note that in each plot, there is a steep fall-off of signal amplitude (remember these are logarithmic axes) as the frequency increases. Why is this the case? Does it have something to do with properties of sound in air? Or is it somehow related to a connection between power, amplitude, and frequency? Or is it just some consequence of DFTs that I don't understand? I see it consistently, all the time. Also, is there predictable math behind the falloff that I can use to "normalize" the results i.e. flatten the curves for analysis purposes? I know it's not just a result of production mastering because it appears in unmodified signals. I know it's not just a characteristic of the city noise I recorded because I see it regardless of the sound source (I've recorded ambient sounds in nature that also show the same profile). I know it's not just wind noise (except perhaps the very bottom end) because I see it in studio recordings as well. Interestingly, in the two charts on top – which are signals I recorded myself with calibrated flat-response signal analysis microphones (in a gain range with minimal distortion) and no further filtering applied — the falloff seems linear in the $\log_{10}$ amplitude and $\log_2$ frequency space, but I don't know if this is a hint to what's going on or not. Note: The mics on the recorded audio are electret mics with fairly flat response up to about 24kHz, with blips in the response compensated for by a post-recording filter, available from the manufacturer, specific to each mic's serial #. They're designed for signal analysis rather than general recording. I've got the response graphs laying around somewhere I'll scan them if I find them. But it seems like it doesn't affect the answers. In my experience they pick up highs (bird calls, mechanical squeaks, electronic coil ring) with good accuracy.
Now provide the response and nothing else.
|
Additionally to the answer by Bulbasaur , it is important to highlight that you look at the amplitude $A$ of frequency components, not their power $P$ . The relation between them is [ 1 ] $$P(\omega) = \frac{1}{2} \mu v \omega^2 A^2,$$ where $\mu$ is the mass density of the medium (e.g. air), $v$ is the speed of sound in that medium and $\omega$ is the angular frequency ( $2\pi$ times the frequency) of the wave. Assuming you had a source which emits sound at all frequencies with the same power $P(\omega) = P$ , the amplitude would drop as $A \sim \frac{1}{\omega}$ . One can see this by solving the above equation for $A$ : $$A = \frac{1}{\omega} \sqrt{\frac{2 P}{\mu v}}$$ Note that in this 1D example the mass density has units of $[\mu] = \frac{\text{kg}}{\text{m}}$ . Together with the units of the other quantities $[v] = \frac{\text{m}}{\text{s}}$ , $[\omega] = \frac{1}{\text{s}}$ and $[A] = \text{m}$ , the power has units of $[P] = \frac{\text{kg}}{\text{m}} \frac{\text{m}}{\text{s}} \frac{\text{m}^2}{\text{s}^2} = \left( \text{kg} \frac{\text{m}^2}{\text{s}^2} \right) / \text{s} = \frac{\text{J}}{\text{s}}$ . In 3D, the mass density would have units of $[\rho] = \frac{\text{kg}}{\text{m}^3}$ and the above equation calculates the sound intensity $I(\omega)$ , i.e. power per area $[I] = \frac{\text{J}}{\text{s} \cdot \text{m}^2}$ .
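On the normalization question: under the equal-power assumption above, amplitude scales as $1/\omega$, so one rough way to flatten the curves is simply to weight each FFT amplitude bin by its angular frequency. A minimal Python sketch (the function and variable names are placeholders, not tied to any particular recording setup):

import numpy as np

def power_weighted_spectrum(samples, sample_rate):
    """Scale FFT amplitudes by angular frequency, so a constant-power
    source (A ~ 1/omega) comes out roughly flat."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    omega = 2 * np.pi * freqs
    amplitudes = np.abs(spectrum)
    # Skip the DC bin (omega = 0), which would otherwise be multiplied by zero.
    return freqs[1:], amplitudes[1:] * omega[1:]

Any residual tilt left after this weighting then reflects how the source's power is actually distributed over frequency, rather than the amplitude-versus-power conversion.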
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/750492', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/41010/']}
|
jdg_371093
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5826701
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
how could I convert an audio file such as a aiff into a svg using gnuplot? I used sox (sound exchange) to convert an .aiff into a .dat, which I can load now in gnuplot. I did something similar to: set terminal svgset output "test.svg"plot "test.dat" I get a svg file, but only with dots / or a lot of x.How could I connect the dots?
Now provide the response and nothing else.
|
To draw lines between the points, use

plot "test.dat" with lines

Or to keep the point markers as well as the lines, use

plot "test.dat" with linespoints

So your example becomes

set terminal svg
set output "test.svg"
plot "test.dat" with lines

Further tips:

Don't consider every sample: With large files you may also find it useful to plot only every nth sample with "every n". This will make the plot much faster to generate and will also yield a smaller (but less detailed) svg file. e.g.

plot "test.dat" every 100 with lines

Ignore .dat file header: If your sox-produced .dat file has some lines of introductory metadata, such as

; Sample Rate 44100
; Channels 2

you can add the following to have gnuplot consider those lines comments and ignore them.

set datafile commentschars ";"

This will save you having to pre-process your .dat file in order to remove those lines before gnuplot chokes on them.

Plot both left and right channels of stereo audio: If you're working with a stereo file, you probably want to see both channels. We can use "multiplot" to lay out the following two plots (of left then right channel) one above the other on a shared x-axis, as many sound-editing programs do.

set multiplot layout 2,1
plot "test.dat" using 1:2 with lines
plot "" using 1:3 with lines

The 1:2 and 1:3 instruct gnuplot which columns of the dat file to use as x and y sources. I'm assuming your stereo .dat file produced by sox looks as mine does, with columns for
- 1: time since beginning of first sample
- 2: normalized sample value of left channel
- 3: normalized sample value of right channel

example snippet:

 10.840113     0.20101929     0.17840576
 10.840136     0.26062012     0.14831543
 10.840159     0.23779297     0.13146973

Putting it together: Here's a script which puts all of the above together. If you don't have a stereo data file to try this with, you'll want to remove the plot of 1:3 and the multiplot setting.

#!/usr/bin/env gnuplot
set datafile commentschars ";"
set terminal svg
set output "test.svg"
set multiplot layout 2,1
plot "test.dat" using 1:2 every 100 with lines
plot "" using 1:3 every 100 with lines
unset multiplot

Prettification
Finally, I've tweaked the script for presentation (borrowing heavily from the excellent "gnuplot in action" book by Philipp K. Janert):

#!/usr/bin/env gnuplot
set datafile commentschars ";"
set terminal svg
set output "test.svg"
set multiplot layout 2,1
set ylabel "sample value"
set bmargin 0
set format x ""
set ytics -0.8,0.2
set key bottom
plot "test.dat" using 1:2 every 100 with lines lc rgbcolor "#a0a0b0" title "left channel"
set xlabel "time (s)"
set bmargin
set tmargin 0
set format x "%g"
set ytics -1.0,0.2,0.8
set key top
plot "" using 1:3 every 100 with lines lc rgbcolor "#a0a0b0" title "right channel"
unset multiplot

Here's an example output (albeit png):

How to make a .dat file
For anyone following along at home, you can use sox to generate a .dat file from an audio file with the following command:

sox input.wav output.dat

Big file warning: Converting even just 10 seconds of stereo audio at 40kHz will produce a 25Mb output file.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5826701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/347432/']}
|
jdg_371094
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29942541
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Redis, keys user* will print all keys starting with user .For example: keys user*1) "user2"2) "user1" Now, I want all keys that don't start with user to be printed.How could I do that?
Now provide the response and nothing else.
|
IMPORTANT: always use SCAN instead of (the evil) KEYS

Redis' pattern matching is somewhat functionally limited (see the implementation of stringmatchlen in util.c) and does not provide that which you seek ATM. That said, consider the following possible routes:

- Extend stringmatchlen to match your requirements, possibly submitting it as a PR.
- Consider what you're trying to do - fetching a subset of keys is always going to be inefficient unless you index them; consider tracking the names of all non-user keys (e.g. in a Redis Set) instead.
- If you are really insistent on scanning the entire keyspace and matching against negative patterns, one way to accomplish that is with a little bit of Lua magic.

Consider the following dataset and script:

127.0.0.1:6379> dbsize
(integer) 0
127.0.0.1:6379> set user:1 1
OK
127.0.0.1:6379> set use:the:force luke
OK
127.0.0.1:6379> set non:user a
OK

Lua (save this as scanregex.lua):

local re = ARGV[1]
local nt = ARGV[2]
local cur = 0
local rep = {}
local tmp

if not re then
  re = ".*"
end

repeat
  tmp = redis.call("SCAN", cur, "MATCH", "*")
  cur = tonumber(tmp[1])
  if tmp[2] then
    for k, v in pairs(tmp[2]) do
      local fi = v:find(re)
      if (fi and not nt) or (not fi and nt) then
        rep[#rep+1] = v
      end
    end
  end
until cur == 0

return rep

Output - first time regular matching, 2nd time the complement:

foo@bar:~$ redis-cli --eval scanregex.lua , "^user"
1) "user:1"
foo@bar:~$ redis-cli --eval scanregex.lua , "^user" 1
1) "use:the:force"
2) "non:user"
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29942541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4483094/']}
|
jdg_371095
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2225343
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to parse command line arguments in an F# application. I'm using pattern matching over parameters list to accomplish it. Something like: let rec parseCmdLnArgs = function | [] -> { OutputFile = None ; OtherParam = None } | "/out" :: fileName :: rest -> let parsedRest = parseCmdLnArgs rest { OutputFile = Some(fileName) with parsedRest } The problem is I want to make "/out" match case insensitive while preserving the case of other stuff. That means I can't alter the input and match the lowercase version of the input against it (this will lose the fileName case information). I have thought about several solutions: Resort to when clauses which is less than ideal. Match a tuple each time, the first would be the actual parameter (which I'll just save for further processing and will wildcard match it) and the second would be the lowercased version used in such matchings. This looks worse than the first. Use active patterns but that looks too verbose. I'll have to repeat things like ToLower "/out" before every item. Is there a better option/pattern for doing these kind of stuff? I think this is a common problem and there should be a good way to handle it.
Now provide the response and nothing else.
|
I quite like your idea of using F# active patterns to solve this. It is a bit more verbose than using pre-processing, but I think it's quite elegant. Also, according to some BCL guidelines, you shouldn't be using ToLower when comparing strings (ignoring the case). The right approach is to use the OrdinalIgnoreCase flag. You can still define a nice active pattern to do this for you:

open System

let (|InvariantEqual|_|) (str:string) arg =
  if String.Compare(str, arg, StringComparison.OrdinalIgnoreCase) = 0
  then Some() else None

match "HellO" with
| InvariantEqual "hello" -> printfn "yep!"
| _ -> printfn "Nop!"

You're right that it's more verbose, but it nicely hides the logic and it gives you enough power to use the recommended coding style (I'm not sure how this could be done using pre-processing).
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2225343', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/33708/']}
|
jdg_371096
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12097
|
Below is a question asked on the forum softwarerecs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recorded my 1 year old son in sleep with a night vision camera (to monitor twitching). I now want to automatically create a video containing all movement, e.g. I want to filter only the frames that are affected by a detected motion. I tried VLC with motion detection and it works, but there is no way I can tell VLC to just show me the frames that contain motion.
Now provide the response and nothing else.
|
This is the sort of thing that you can knock up quite quickly using OpenCV and Python.

- Free (both gratis and FLOSS)
- Cross platform: Windows, Linux, Android and Mac OS
- Motion detection algorithms built in (including being able to set thresholds)
- VideoWriter class to save your results
- Active user community

Just download and install: Python, OpenCV, and the Python libraries Numpy and pyopencv.

Example of motion detection in OpenCV in Python (this example came from the blog of Matthias Stein):

import cv2

def diffImg(t0, t1, t2):
  d1 = cv2.absdiff(t2, t1)
  d2 = cv2.absdiff(t1, t0)
  return cv2.bitwise_and(d1, d2)

cam = cv2.VideoCapture(0)

winName = "Movement Indicator"
cv2.namedWindow(winName, cv2.CV_WINDOW_AUTOSIZE)

# Read three images first:
t_minus = cv2.cvtColor(cam.read()[1], cv2.COLOR_RGB2GRAY)
t = cv2.cvtColor(cam.read()[1], cv2.COLOR_RGB2GRAY)
t_plus = cv2.cvtColor(cam.read()[1], cv2.COLOR_RGB2GRAY)

while True:
  diff = diffImg(t_minus, t, t_plus)
  cv2.imshow(winName, diff)  # This shows the delta image

  # Here you would use diffImg to save the frame if the difference is bigger than some threshold

  # Read next image
  t_minus = t
  t = t_plus
  t_plus = cv2.cvtColor(cam.read()[1], cv2.COLOR_RGB2GRAY)

  key = cv2.waitKey(10)
  if key == 27:
    cv2.destroyWindow(winName)
    break

print "Goodbye"
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwarerecs.stackexchange.com/questions/12097', 'https://softwarerecs.stackexchange.com', 'https://softwarerecs.stackexchange.com/users/8202/']}
|
jdg_371097
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
92802
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Bash shell script in which I would like to pause execution until the user presses a key. In DOS, this is easily accomplished with the pause command. Is there a Linux equivalent I can use in my script?
Now provide the response and nothing else.
|
read does this:

user@host:~$ read -n1 -r -p "Press any key to continue..." key
[...]
user@host:~$

The -n1 specifies that it only waits for a single character. The -r puts it into raw mode, which is necessary because otherwise, if you press something like backslash, it doesn't register until you hit the next key. The -p specifies the prompt, which must be quoted if it contains spaces. The key argument is only necessary if you want to know which key they pressed, in which case you can access it through $key. If you are using Bash, you can also specify a timeout with -t, which causes read to return a failure when a key isn't pressed. So for example:

read -t5 -n1 -r -p 'Press any key in the next five seconds...' key
if [ "$?" -eq "0" ]; then
    echo 'A key was pressed.'
else
    echo 'No key was pressed.'
fi
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/92802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4362/']}
|
jdg_371098
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
150139
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to learn physics by myself as I do not have a good teacher at school. I've been reading Feynman Lectures on Physics and I can't figure out how he found out this number. Here is an excerpt of the book : Let us now illustrate the energy principle with a more complicated problem, the screw jack shown in Fig. 4–5. A handle $20$ inches long is used to turn the screw, which has $10$ threads to the inch. We would like to know how much force would be needed at the handle to lift one ton ($2000$ pounds). If we want to lift the ton one inch, say, then we must turn the handle around ten times. When it goes around once it goes approximately $126$ inches. The handle must thus travel $1260$ inches, and if we used various pulleys, etc., we would be lifting our one ton with an unknown smaller weight $W$ applied to the end of the handle. So we find out that W is about $1.6$ pounds. This is a result of the conservation of energy. If you divide $2000$ pounds by $1260$ in you get $\frac{100}{63}$, which is $1.58$, a number close enough to $1.6$ pounds, which according to Feynman is the weight that we need to apply to the end of the handle. I believe, however, that this calculation is wrong given a dimensional analysis of such operation. Here is my dimensional analysis: $\frac{2000 pounds * 1 in}{ 1260 inches }$ = $\frac{[M] * [L] }{[L]}$ = ${[M]}$And this is wrong, as we are suppose to get a force, not a mass. I found this website , where you can find what I think is an email from an student reporting the teacher that he believes he got the number diving $\frac{5}{\pi}$ . Can you also explain me this? Finally, in a thread in a Physics Forum question,they concluded that $1.6$ is just the force to keep the system at equilibrium, and you need $\text{a force} > 1.6$ to actually lift the ton. After, someone said that Feynman would have said this answer is wrong. Why is that?
Now provide the response and nothing else.
|
Work is calculated as force times distance.$$W = Fd$$The purpose of a simple machine like a screw jack is to lessen the force required. However, the work needed is still the same, so the distance over which you exert the force has to increase. Halving the force requires doubling the distance. In this problem, you want to lift 2000 lbs a distance of 1 inch. So, the amount of work is 2000 inch-lbf (lbf = pound-force, or the amount of force required to lift one pound). To lift the weight with the screw requires 10 turns, so with a handle of 20 inches, this is a distance of $10*2\pi*(20 in) \approx 1256\,in$. Putting it all together:$$W = Fd$$$$2000in \cdot lbf = F*(1256 in)$$$$F = \frac{2000in \cdot lbs}{1256 in} = 1.6 lbs$$Notice that this force works for any lifting distance. If you want to lift the weight 2 inches, then you have to do twice the work over twice the distance, which results in the same force. Those who say that you need more than this force know that you need a short push to get the screw turning from a dead stop. This is similar to the fact that, even though you don't need any force to keep an object moving, you have to give it a push to get it moving in the first place. Once you've got the screw turning, you only need 1.6 lbs of force to keep turning it (ignoring friction). For the $5/\pi$ answer, it amounts to the same thing, just doing the math in a different order:$$F=\frac{2000 in\cdot lbf}{10*2\pi*20in}=\frac{2000}{400\pi}\,lbf =\frac{5}{\pi}\,lbf = 1.59\,lbf$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/150139', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/65159/']}
|
jdg_371099
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27153166
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Sometimes, when using React libraries, such as react-router , I get this error: Uncaught TypeError: Cannot read property 'firstChild' of undefined /~/react-router/~/react/lib/ReactMount.js?:606 How do I fix it?
Now provide the response and nothing else.
|
This error is commonly caused by two versions of React loaded alongside . For example, if you npm install a package that requires a different React version and puts it into dependencies instead of peerDependencies , it might install a separate React into node_modules/<some library using React>/node_modules/react . Two different Reacts won't play nicely together (at least yet). To fix it, just delete node_modules/<some library using React>/node_modules/react . If you see a library putting React in dependencies instead of peerDependencies , file an issue.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27153166', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/458193/']}
|
jdg_371100
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12759042
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to be able to search every word so I have cleared the stop word list. Than I have rebuilt the index. But unfortunately if I type in a search expression with stop word in it it still returns no row. If I leave out just the stop word I do get the results. E.g. "double wear stay in place" - no result, "double wear stay place" - I get the results that actually contain "in" as well. Does anyone know why this can be? I am using SQL Server 2012 Express. Thanks a lot!
Now provide the response and nothing else.
|
Meanwhile I have managed to solve the issue. The problem was that I had my own stop list which was indeed empty but my fulltext catalog was associated not with my own stoplist but with the system one. Here are a couple of useful queries for solving stopword and full text search issues: Query stopwords (does not give back the system ones!): select * from sys.fulltext_stopwords Query stop lists (does not give back the system list!): select * from sys.fulltext_stoplists Check what words are included in the catalog: SELECT * FROM sys.dm_fts_index_keywords(DB_ID('dbname'), OBJECT_ID('tablename')) Check the association: select fulltext_catalog_id,stoplist_id, * from sys.fulltext_indexes; Turn stoplist off: ALTER FULLTEXT INDEX ON CremeSearchFT SET STOPLIST = OFF I hope it helps someone. :)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12759042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/372915/']}
|
jdg_371101
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30691150
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My RecyclerView and item has match_parent width but the result is : <view class="android.support.v7.widget.RecyclerView" android:layout_width="match_parent" and items: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"xmlns:tools="http://schemas.android.com/tools"xmlns:fab="http://schemas.android.com/apk/res-auto"android:id="@+id/ll_itm"android:orientation="horizontal"android:layout_width="match_parent" full: <?xml version="1.0" encoding="utf-8"?><LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"xmlns:tools="http://schemas.android.com/tools"xmlns:fab="http://schemas.android.com/apk/res-auto"android:id="@+id/ll_itm"android:orientation="horizontal"android:layout_width="match_parent"android:layout_height="wrap_content"android:weightSum="100"android:gravity="right"><Button android:layout_width="0dp" android:layout_weight="15" android:layout_height="fill_parent" android:text="ملاحظات" android:id="@+id/button" /><LinearLayout android:layout_width="0dp" android:layout_height="fill_parent" android:layout_weight="20" android:gravity="center" > <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" > <com.getbase.floatingactionbutton.FloatingActionButton android:layout_width="fill_parent" android:layout_height="fill_parent" fab:fab_plusIconColor="#ff56ff83" fab:fab_colorNormal="@color/d_red" fab:fab_colorPressed="#ff5c86ff" fab:fab_size="mini" fab:fab_icon="@drawable/ic_remove_white" android:id="@+id/fab_rmv" /> <esfandune.ir.elmikarbordiardakan.other.CustomTxtView android:layout_weight="25" android:layout_width="0dp" android:layout_height="fill_parent" android:textAppearance="?android:attr/textAppearanceLarge" android:text="0" android:gravity="right|center_vertical" android:id="@+id/txt_takhir_itm" /> <com.getbase.floatingactionbutton.FloatingActionButton android:layout_width="fill_parent" android:layout_height="fill_parent" fab:fab_plusIconColor="@color/colorprimarylight" fab:fab_colorNormal="@color/colorprimarydark" fab:fab_colorPressed="@color/colorprimary" fab:fab_size="mini" fab:fab_icon="@drawable/ic_add_white" android:id="@+id/fab_add" /> </LinearLayout></LinearLayout> <Spinner android:layout_width="0dp" android:layout_height="fill_parent" android:layout_weight="10" android:id="@+id/sp_nomre_itm" android:entries="@array/degrees"/><LinearLayout android:layout_width="0dp" android:layout_height="fill_parent" android:layout_weight="10" android:gravity="center" > <!--LinearLayout baraye ine ke nameshod fab ro weight behosh dad--> <com.getbase.floatingactionbutton.FloatingActionButton android:layout_width="fill_parent" android:layout_height="fill_parent" fab:fab_plusIconColor="#ff56ff83" fab:fab_colorNormal="@color/d_green" fab:fab_colorPressed="@color/d_orange" fab:fab_size="normal" fab:fab_icon="@drawable/ic_done_white" android:id="@+id/fab_hazr" /></LinearLayout><esfandune.ir.elmikarbordiardakan.other.CustomTxtView android:layout_weight="5" android:layout_width="0dp" android:layout_height="fill_parent" android:textAppearance="?android:attr/textAppearanceLarge" android:text="100" android:gravity="right|center_vertical" android:id="@+id/txt_ghybtNumber_itm" /><esfandune.ir.elmikarbordiardakan.other.CustomTxtView android:layout_weight="30" android:layout_width="0dp" android:layout_height="fill_parent" android:textAppearance="?android:attr/textAppearanceLarge" android:text="عباسعلی ملاحسینی اردکانی" android:gravity="right|center_vertical" android:id="@+id/txt_title_itm" 
android:layout_marginRight="10dp" /><view android:layout_width="0dp" android:layout_height="fill_parent" android:layout_weight="10" class="de.hdodenhof.circleimageview.CircleImageView" android:id="@+id/view" android:src="@drawable/mmrdf" /></LinearLayout>
Now provide the response and nothing else.
|
In your adapter, where you are inflating the item in onCreateViewHolder, is the second parameter of the inflate call null? If so, change it to parent, which is the first parameter in the onCreateViewHolder function signature.

View rootView = LayoutInflater.from(context).inflate(R.layout.itemLayout, parent, false);

If you need the second parameter to be null, then when you get the view reference on inflating, do the following:

View rootView = LayoutInflater.from(context).inflate(R.layout.itemLayout, null, false);
RecyclerView.LayoutParams lp = new RecyclerView.LayoutParams(ViewGroup.LayoutParams.MATCH_PARENT,
        ViewGroup.LayoutParams.WRAP_CONTENT);
rootView.setLayoutParams(lp);
return new RecyclerViewHolder(rootView);
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/30691150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1718223/']}
|
jdg_371102
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
247725
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know why does electrons flow through a wire connected to a battery as said in AC/DC: What's the Difference? .It tells that "the electrons that are stripped from the carbon electrode is collected on the zinc can" and then tells The electrons at the negative terminal want to go to positive terminal, they just need a way to get there. In our light bulb circuit, the way to get there is through the wire. Why couldn't the electrons go to the positive terminal through the electrolyte(potassium hydroxide) instead of going through the wire?
Now provide the response and nothing else.
|
Batteries use a type of reaction called a redox reaction that involves the transport of electrons. Rather then the carbon zinc battery, which is a bit complicated consider the simpler example of a zinc copper battery as taught in school science lessons across the world. The reaction is: $$ Zn + Cu^{2+} \rightarrow Zn^{2+} + Cu $$ So the reaction dissolves the zinc electrode and produces copper metal at the copper electrode. The reaction goes this way because the overall free energy of the Zn/Cu system is reduced in doing so. If we look more closely the reaction involves three steps: $Zn \rightarrow Zn^{2+} + 2e$ transport of the electrons to the copper $Cu^{2+} + 2e \rightarrow Cu$ So as the reaction goes electrons flow from the zinc through the battery to the copper. In effect the reaction acts as an electron pump that pumps electrons from the zinc end to the copper end. So if you connect an external wire from the copper to the zinc the electrons flow out of the copper, through the wire and back to the zinc, then complete the loop by flowing from the zinc to the copper inside the battery. Electrons flow in that direction because the chemical reaction forces them to. The traditional zinc-carbon battery uses a reaction between zinc and manganese - the carbon is actually just an electrode and doesn't take part in the reaction. While the reaction is more complicated the basic principle is the same. The zinc reacts to form $Zn^{2+}$ and electrons and the manganese absorbs the electrons. So just like the zinc-copper battery the electron flow is driven by the chemical reaction.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/247725', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/62945/']}
|
jdg_371103
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3969440
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\oint \vec{E}^{\,} \cdot \vec{dA}^{\,}$$ This is Gauss' law from physics but my question is more maths related. Say I have $\vec{E}^{\,}=3.5\times 10^3N/C\times e_x$ and $A = 0,35\times0,70m^2$ . The plane that has that area, where the electric field $E$ is applied, is parallel to $YZ$ . So it has a coordinate in $e_x$ . Right? What does the circle in the integral mean in practice? What is the value of $dA$ ? Why is it a vector? How do I solve this integral?
Now provide the response and nothing else.
|
It looks like you're overcomplicating things massively for this simple example. In this case, the answer is just multiply the two numbers together. No integral solving, nothing sophisticated. This is because you just have a constant electric field passing perpendicular through an area. I am not sure how your course defines electric flux, but for a uniform field passing through a flat surface, you multiply the area by the component of your field that is perpendicular to your surface. In your question, field and surface are perpendicular, so you don't even need to worry about any potential $\cos\theta$ factors. For your other questions, these are all things from vector calculus, which I'm guessing you'll learn about sooner or later. The above result is a very special case of the general Gauss law with vector integrals etc. For the purpose of this question, you don't need to worry about all that. I can expand more on the mathematics/vector calculus if you wish. OK so the following is a super rapid overview of vector integrals. You no doubt know how differentiation/integration for functions $f:\mathbb R\to\mathbb R$ works. The idea of vector calculus is to generalise things to functions $\mathbb R^3\to\mathbb R$ , $\mathbb R\to\mathbb R^3$ , $\mathbb R^3\to\mathbb R^3$ etc. There's nothing special about $\mathbb R^3$ , it just so happens that the world we live in has three spatial dimensions, so physics people care an awful lot about $\mathbb R^3$ . Let's consider a concrete example. Suppose we have some material contained in a volume $V$ , with surface $S$ . We are interested in finding how much heat flows out of this block. We can find this by computing the heat flow out of the surface $S$ . To do this, we somehow need to integrate the heat flow over the surface. We write $\mathop{}\!\mathrm{d}S$ for the differential corresponding to a tiny chunk of this surface (similar to how $\mathop{}\!\mathrm{d}x$ represents a small change in $x$ in single variable calculus). If our surface happened to be the $yz$ -plane, then we would have $\mathop{}\!\mathrm{d}S=\mathop{}\!\mathrm{d}y\mathop{}\!\mathrm{d}z$ ; I hope it's intuitively clear why. Suppose $\mathbf H(\mathbf v)$ represents the heat flow at some point $\mathbf v$ in space. Note that $\mathbf H$ takes a vector for the position as input and spits out a vector for the heat flow at that position (per unit area and time) as output. The heat flow out of a chunk of surface $\mathop{}\!\mathrm{d}S$ is the area times the component of $\mathbf H$ perpendicular to $\mathop{}\!\mathrm{d}S$ . The reason we only take the perpendicular component is that the component of $\mathbf H$ that flows parallel to the surface doesn't flow in or out of the material, so doesn't contribute to the quantity we care about. If $\mathbf n$ represents the unit length vector normal to $S$ (so $\mathbf n$ may vary depending on where on $S$ you are standing), then the component of $\mathbf H$ perpendicular to the surface is just $\mathbf H\cdot\mathbf n$ , so the heat flow through that little chunk is $\mathbf H\cdot\mathbf n\mathop{}\!\mathrm{d}S$ . To find the total heat flow, we need to sum the contributions from all chunks of our surface, i.e. we integrate over the surface: $$\int\mathbf H\cdot\mathbf n\mathop{}\!\mathrm{d}S.$$ Since $\mathbf n$ is intrinsic to the surface $S$ , we often write as an abbreviation $\mathop{}\!\mathrm{d}\mathbf S=\mathbf n\mathop{}\!\mathrm{d}S$ . This is the vector area element. 
With this notation, the total heat flow is $$\int\mathbf H\cdot\mathop{}\!\mathrm{d}\mathbf S.$$ There's nothing special about heat of course: I just picked it because it's easy to visualise heat flowing out of stuff. The same expression above, applied to the electric field $\mathbf E$, just tells you how much of the electric field is flowing through the surface. We call this quantity the (electric) flux through the surface. Let's apply all this to the question at hand. Your electric field is uniform $\mathbf E=3500\mathbf e_x$. Let's say the surface is a rectangle with vertices $(0,0,0)$, $(0,0,0.35)$, $(0,0.7,0.35)$, $(0,0.7,0)$. The normal vector is clearly just $\mathbf e_x$, anywhere on the surface. So $\mathop{}\!\mathrm{d}\mathbf S=\mathbf e_x\mathop{}\!\mathrm{d}S=\mathbf e_x\mathop{}\!\mathrm{d}y\mathop{}\!\mathrm{d}z$, and applying the above, \begin{align}\int\mathbf E\cdot\mathop{}\!\mathrm{d}\mathbf S&=\int_0^{0.35}\int_0^{0.7}3500\mathbf e_x\cdot\mathbf e_x\mathop{}\!\mathrm{d}y\mathop{}\!\mathrm{d}z\\&=3500\cdot\int_0^{0.35}\int_0^{0.7}1\mathop{}\!\mathrm{d}y\mathop{}\!\mathrm{d}z\\&=3500\cdot\int_0^{0.35}0.7\mathop{}\!\mathrm{d}z\\&=3500\times0.35\times0.7 \\&=857.5.\end{align} So this integral is the long way of saying "just multiply everything". Of course, when the field $\mathbf E$ or the surface $S$ are more complicated, the integrals won't be nearly so trivial. But the basic method is the same:

1. Take some parametrisation of your surface, say $(u,v)$. In principle, you can just use Cartesians for everything, but it's often easier to use other coordinate systems, e.g. polars.
2. Use the parametrisation to find an expression for the normal to your surface.
3. Dot the normal with the vector field, and integrate over the values of $u$ and $v$.
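For readers who like to sanity-check such results numerically, here is a small Riemann-sum sketch in Python of that three-step recipe (the grid resolution is an arbitrary choice, and the field function is just the uniform field from the question):

import numpy as np

def E(y, z):
    # Uniform field from the question: 3500 N/C in the x-direction.
    return np.array([3500.0, 0.0, 0.0])

n = np.array([1.0, 0.0, 0.0])     # unit normal of the rectangle in the yz-plane
ny, nz = 100, 100                 # arbitrary grid resolution
dy, dz = 0.7 / ny, 0.35 / nz      # cell sizes on the 0.7 m x 0.35 m rectangle

flux = 0.0
for i in range(ny):
    for j in range(nz):
        y = (i + 0.5) * dy        # midpoint of the cell
        z = (j + 0.5) * dz
        flux += E(y, z).dot(n) * dy * dz

print(flux)                       # 857.5 (up to floating point), i.e. 3500 * 0.35 * 0.7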
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3969440', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/373635/']}
|
jdg_371104
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
221520
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't understand, why does the following regular expression: ^*$ Match the string "127.0.0.1"? Using Regex.IsMatch("127.0.0.1", "^*$"); Using Expresso, it does not match, which is also what I would expect. Using the expression ^.*$ does match the string, which I would also expect. Technically, ^*$ should match the beginning of a string/line any number of times, followed by the ending of the string/line. It seems * is implicitly treated as a .* What am I missing? EDIT:Run the following to see an example of the problem. using System;using System.Text.RegularExpressions;namespace RegexFubar{ class Program { static void Main(string[] args) { Console.WriteLine(Regex.IsMatch("127.0.0.1", "^*$")); Console.Read(); } }} I do not wish to have ^*$ match my string, I am wondering why it does match it. I would think that the expression should result in an exception being thrown, or at least a non-match. EDIT2:To clear up any confusion. I did not write this regex with the intention of having it match "127.0.0.1". A user of our application entered the expression and wondered why it matched the string when it should not. After looking at it, I could not come up with an explanation for why it matched - especially not since Expresso and .NET seems to handle it differently. I guess the question is answered by it being due to the .NET implementation avoiding throwing an exception, even thought it's technically an incorrect expression. But is this really what we want?
Now provide the response and nothing else.
|
Well, theoretically you are right, it should not match. But this depends on how the implementation works internally. Most regex impl. will take your regex and strip ^ from the front (taking note that it must match from start of the string) and strip $ from the end (noting that it must to the end of the string), what is left over is just "*" and "*" on its own is a valid regex. The implementation you are using is just wrong regarding how to handle it. You could try what happens if you replace "^*$" just with "*"; I guess it will also match everything. It seems like the implementation treats a single asterisk like a ".*". According to ISO/IEC 9945-2:1993 standard, which is also described in the POSIX standard , it is broken. It is broken because the standard says that after a ^ character, an asterisk has no special meaning at all. That means "^*$" should actually only match a single string and this string is "*" ! To quote the standard: The asterisk is special except when used: in a bracket expression as the first character of an entire BRE (after an initial ^, if any) as the first character of a subexpression (after an initial ^, if any); see BREs Matching Multiple Characters . So if it is the first character (and ^ doesn't count as first character if present) it has no special meaning. That means in this case an asterisk should only match one character and that is an asterisk. Update Microsoft says Microsoft .NET Framework regularexpressions incorporate the mostpopular features of other regularexpression implementations such asthose in Perl and awk. Designed to becompatible with Perl 5 regularexpressions, .NET Framework regularexpressions include features not yetseen in other implementations, such asright-to-left matching and on-the-flycompilation. Source: http://msdn.microsoft.com/en-us/library/hs600312.aspx Okay, let's test this: # echo -n 127.0.0.1 | perl -n -e 'print (($_ =~ m/(^.*$)/)[0]),"\n";'-> 127.0.0.1# echo -n 127.0.0.1 | perl -n -e 'print (($_ =~ m/(^*$)/)[0]),"\n";'-> Nope, it does not. Perl works correctly. ^.*$ matches the string, ^*$ doesn't => .NET's regex implementation is broken and it does not work like Perl 5 as MS claims.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/221520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12469/']}
|
jdg_371105
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16626075
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
AngularJS clearly states in its documentation that Services are Singletons: AngularJS services are singletons Counterintuitively, module.factory also returns a Singleton instance. Given that there are plenty of use-cases for non-singleton services, what is the best way to implement the factory method to return instances of a Service, so that each time an ExampleService dependency is declared, it is satisfied by a different instance of ExampleService ?
Now provide the response and nothing else.
|
I'm not entirely sure what use case you are trying to satisfy. But it is possible to have a factory return instances of an object. You should be able to modify this to suit your needs. var ExampleApplication = angular.module('ExampleApplication', []);ExampleApplication.factory('InstancedService', function(){ function Instance(name, type){ this.name = name; this.type = type; } return { Instance: Instance }});ExampleApplication.controller('InstanceController', function($scope, InstancedService){ var instanceA = new InstancedService.Instance('A','string'), instanceB = new InstancedService.Instance('B','object'); console.log(angular.equals(instanceA, instanceB));}); JsFiddle Updated Consider the following request for non-singleton services . In which Brian Ford notes: The idea that all services are singletons does not stop you from writing singleton factories that can instantiate new objects. and his example of returning instances from factories: myApp.factory('myService', function () { var MyThing = function () {}; MyThing.prototype.foo = function () {}; return { getInstance: function () { return new MyThing(); } };}); I would also argue his example is superior due to the fact that you do not have to use the new keyword in your controller. It is encapsulated within the getInstance method of the service.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16626075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138601/']}
|
jdg_371106
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33549084
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've found large number of examples, but nothing on how to do it "properly" from STM32MXCube. How do I create skeleton code from STM32CubeMX for USB CDC virtual COM port communications (if possible STM32F4 Discovery)?
Now provide the response and nothing else.
|
An STM32CubeMX project for the Discovery F4 with CDC as a USB device should work out of the box. Assuming you use an up-to-date STM32CubeMX and library: Start STM32CubeMX. Select the board Discovery F4. Enable peripheral USB_OTG_FS device only (leave the other options unchecked). Enable middleware USB_Device Communication class, a.k.a. CDC. In the clock tab check the clock source is HSE HCLK. It shall give 168 MHz HCLK and 48 MHz for the 48 MHz (USB) clock. Check there is no red anywhere. Save the project. Generate code (I used the SW4STM32 toolchain). Build (you may need to switch to the internal CDT builder vs. GNU make). Now add some code to send data over the COM port and voila, it should work. Actually, the tricky part is not to make any CDC access until the host USB connects (no CDC setup yet). Here is how I did it for a quick transmit test: In file usbd_cdc_if.c uint8_t CDC_Transmit_FS(uint8_t* Buf, uint16_t Len){ uint8_t result = USBD_OK; /* USER CODE BEGIN 7 */ if (hUsbDevice_0 == NULL) return -1; USBD_CDC_SetTxBuffer(hUsbDevice_0, Buf, Len); result = USBD_CDC_TransmitPacket(hUsbDevice_0); /* USER CODE END 7 */ return result;}static int8_t CDC_DeInit_FS(void){ /* USER CODE BEGIN 4 */ hUsbDevice_0 = NULL; return (USBD_OK); /* USER CODE END 4 */} In file main.c /* USER CODE BEGIN Includes */#include "usbd_cdc_if.h"/* USER CODE END Includes */..../* USER CODE BEGIN WHILE */while (1){ /* USER CODE END WHILE */ /* USER CODE BEGIN 3 */ uint8_t HiMsg[] = "hello\r\n"; CDC_Transmit_FS(HiMsg, strlen(HiMsg)); HAL_Delay(200);} As soon as you plug in the micro USB (CN5), CDC data will start to show on the host terminal. That works. I can see "hello" on the terminal (you may need to install a driver, http://www.st.com/web/en/catalog/tools/PF257938 ). For reception, it first needs to be armed, i.e. started by an initial call to USBD_CDC_ReceivePacket() in a suitable place; CDC_Init_FS works well for that. Then you can handle data as it arrives in CDC_Receive_FS and re-arm reception from there. That works for me. static int8_t CDC_Receive_FS (uint8_t* Buf, uint32_t *Len){ /* USER CODE BEGIN 6 */ USBD_CDC_ReceivePacket(hUsbDevice_0); return (USBD_OK); /* USER CODE END 6 */}static int8_t CDC_Init_FS(void){ hUsbDevice_0 = &hUsbDeviceFS; /* USER CODE BEGIN 3 */ /* Set Application Buffers */ USBD_CDC_SetTxBuffer(hUsbDevice_0, UserTxBufferFS, 0); USBD_CDC_SetRxBuffer(hUsbDevice_0, UserRxBufferFS); USBD_CDC_ReceivePacket(hUsbDevice_0); return (USBD_OK); /* USER CODE END 3 */}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33549084', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/505160/']}
|
jdg_371107
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2865090
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While trying to understand how a web server worked, I came accross this: //myfile.jsfunction donothing(){};//myfile.htmljavascript:donothing(open('http://www.acme.com/whatever.jpg','','left=100, right=0, top=100, scrollbars=no, status=no, titlebar=no, resizable=no, toolbar=no, menubar=no, width=255, height=255')) I'm no JavaScript expert, so I don't get how an empty function can be made to work. Does someone know? Thank you.
Now provide the response and nothing else.
|
This is a homemade void substitute to avoid having the expression return a value. window.open will return a reference to the opened window, and this can have unexpected results. For instance, try pasting javascript:a=1 into the address field - this will result in a blank screen with the number 1 in it, as the browser will by default try to use the result of any expression run as the new document. To avoid this you use javascript:void(a=1), as void will not return anything, and so the result isn't used as the new document. Using donothing(foo=bar) or the equivalent Function.prototype(foo=bar) is not needed, as the built-in void does exactly the same thing. But mind, the use of void is only needed when copying text into the address field; it's not necessary when you use the pseudo-protocol javascript: in links (which you should never do anyway).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2865090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1196889/']}
|
jdg_371108
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27857058
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
From json, i am getting the value as "createdOn": "Jan 08 2015 20:40:56 GMT+0530 (IST)", I am Accessing in FTL <#list variables as variable> <div class="reply"> ${variable.createdOn} </div></#list> The result i am getting is Jan 09 2015 12:36:18 GMT+0530 (IST) My preferable format is 09-01-2015 I need to remove rest of the time GMT, IST and so on. How to convert this in Freemarker template or javascript. Update I tried to pass below like this ${variable.createdOn?datetime?string("dd-MM-yyyy")} but it is giving error as Exception: java.text.ParseException - Unparseable date: "Jan 09 2015 12:36:18 GMT+0530 (IST)" Any help is Appreciated. Thanks
Now provide the response and nothing else.
|
First of all, what format is that at all? I mean, if you can influence someone to use a standard format instead (ISO, mostly) that will help everyone. Anyway, FreeMarker isn't a date parser library, but actually you can do something like this: <#-- Settings you need --><#setting date_format="dd-MM-yyyy"><#setting locale="en_US"><#-- The string that comes from somewhere: --><#assign createdOn = 'Jan 08 2015 20:40:56 GMT+0530 (IST)'><#-- 1. Tell FreeMarker to convert string to real date-time value 2. Convert date-time value to date-only value 3. Let FreeMarker format it according the date_format setting-->${createdOn?datetime("MMM dd yyyy HH:mm:ss 'GMT'Z")?date}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27857058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4231070/']}
|
jdg_371109
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7554
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there an algorithm that will allow me to numerically compute the limit of a function f(x) in a principled way? The most naive algorithm would be to continue to compute the function for larger values of x. The first problem is how to figure out the 'large' values for x to compute the function for. How do I know when to stop? Can we construct some error bars for this calculation perhaps based on some kind of statistical rationale? Any books for further reading will be much appreciated.
Now provide the response and nothing else.
|
To expand on T's answer, convergence acceleration methods assume that the error terms of the sequence you are interested in are of a certain form. One of the general-purpose methods for accelerating the convergence of a sequence is the Shanks transformation. For the case of a sequence of partial sums of a series, the Shanks transformation amounts to constructing a sequence of Padé approximants, which hopefully converge faster to the limit of the sequence of partial sums. (The justly famous Wynn ε algorithm is an efficient realization of the Shanks transformation) Another popular technique is Richardson extrapolation, which assumes that the error can be expressed as a power series of some form. This is in effect the application of the usual algorithms for polynomial interpolation to estimate the limit of a sequence. (Richardson extrapolation is the machinery behind the Romberg algorithm for accelerating the convergence of the trapezoidal rule, since the error of the trapezoidal rule is expressible as a series in powers of the panel size.) I have been deliberately vague here since you have given absolutely no information on the nature of your sequence. There are many convergence acceleration methods to choose from (a good reference is Brezinski and Redivo-Zaglia's Extrapolation Methods: Theory and Practice ), and the best sequence transformation to use depends a lot on the provenance of your sequence.
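As a concrete (if toy) illustration of the idea, here is a minimal Python sketch of Aitken's delta-squared process, the simplest member of the Shanks-transformation family, applied to the slowly converging partial sums of the Leibniz series for pi/4; which transformation is actually appropriate depends on your own sequence:

import math

def aitken(seq):
    """One pass of Aitken's delta-squared acceleration over a list of sequence values."""
    accelerated = []
    for i in range(len(seq) - 2):
        s0, s1, s2 = seq[i], seq[i + 1], seq[i + 2]
        denom = s2 - 2 * s1 + s0
        # If the second difference vanishes, the sequence is already (numerically) converged.
        accelerated.append(s2 if denom == 0 else s2 - (s2 - s1) ** 2 / denom)
    return accelerated

# Partial sums of 1 - 1/3 + 1/5 - ...  ->  pi/4  (converges very slowly).
partial, total = [], 0.0
for n in range(12):
    total += (-1) ** n / (2 * n + 1)
    partial.append(total)

print(abs(partial[-1] - math.pi / 4))          # error of the raw partial sum
print(abs(aitken(partial)[-1] - math.pi / 4))  # much smaller error after one acceleration pass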
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/7554', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1004/']}
|
jdg_371110
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1936098
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can one find $f: \mathbb{N}\to\mathbb{N}$ so that $f(f(n))=n+1\quad \forall n \in\mathbb{N} $ ? My intuition says it should not be possible, but I don't really see a way to prove that, right now.
Now provide the response and nothing else.
|
Since $f(f(n))=n+1$, we have $f(f(f(n)))=f(n+1)=f(n)+1$. So we see $$f(n)=f(1)+n-1.$$Now we look at what this means for $f(1)$ by using $f(f(n))=n+1$. Applying the formula twice gives $$f(f(n))=f(f(1)+n-1)=f(1)+(f(1)+n-1)-1=2f(1)+n-2=n+1.$$ So $f(1)=\frac{3}{2}$, which is not a natural number, a contradiction.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1936098', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/369033/']}
|
jdg_371111
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5220511
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just started to play with the new unit in Delphi 2010 IOUtils.pas , and I found they put all the methods inside Records(TFile, TPath, TDirectory) as class functions and procedure. Is there any benefits of doing that inside records instead of classes?In both cases there's no need for any variables or instance, but I'm not sure if there any real benefits regarding memory consuming or performance improvement.
Now provide the response and nothing else.
|
Class methods in records are used to group different methods into a common namespace. Thus you can have similar named methods for different purposes. For an example in IOUtils.pas look at the Exists function available in TFile and in TDirectory. The older approach was to have distinct function names for FileExists and DirectoryExists (which the implementations actually call). While class methods inside classes can be used in the same way, they can in addition have another goal: they can be virtual. Called from a class variable this can lead to different implementations depending on the current content of that variable. This is not possible for records. As a consequence class methods in records are always static.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5220511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/24462/']}
|
jdg_371112
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35956045
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this from urllib import requesturl = "http://www.bbc.co.uk/news/election-us-2016-35791008"html = request.urlopen(url).read().decode('utf8')html[:60]from bs4 import BeautifulSoupraw = BeautifulSoup(html, 'html.parser').get_text()raw.find_all('title', limit=1)print (raw.find_all("title"))'<!doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN' I want to extract the title of the page using BeautifulSoup but getting this error Traceback (most recent call last): File "C:\Users\Passanova\AppData\Local\Programs\Python\Python35-32\test.py", line 8, in <module> raw.find_all('title', limit=1)AttributeError: 'str' object has no attribute 'find_all' Please any suggestions
Now provide the response and nothing else.
|
To navigate the soup, you need a BeautifulSoup object, not a string. So remove your get_text() call to the soup. Moreover, you can replace raw.find_all('title', limit=1) with find('title') which is equivalent. Try this : from urllib import requesturl = "http://www.bbc.co.uk/news/election-us-2016-35791008"html = request.urlopen(url).read().decode('utf8')html[:60]from bs4 import BeautifulSoupsoup = BeautifulSoup(html, 'html.parser')title = soup.find('title')print(title) # Prints the tagprint(title.string) # Prints the tag string content
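As a small aside (still assuming BeautifulSoup 4), the title tag is common enough that the parsed soup exposes it directly as an attribute, which gives the same result as the find call above:

title = soup.title     # equivalent to soup.find('title')
print(title.string)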
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35956045', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5735928/']}
|
jdg_371113
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
679544
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to prove this for positive real numbers?$$a+b+c\leq\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}$$ I tried AM-GM, CS inequality but all failed.
Now provide the response and nothing else.
|
Using the Cauchy-Schwarz inequality twice: $a^4 + b^4 + c^4 \geq a^2b^2 + b^2c^2 + c^2a^2 \geq ab^2c + a^2bc + abc^2 = abc(a+b+c)$. Dividing both sides by $abc>0$ gives exactly $a+b+c\leq\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}$.
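For completeness, each comparison can also be justified term by term with AM-GM on pairs (a sketch equivalent to the Cauchy-Schwarz argument above): $$a^4+b^4+c^4=\tfrac{a^4+b^4}{2}+\tfrac{b^4+c^4}{2}+\tfrac{c^4+a^4}{2}\geq a^2b^2+b^2c^2+c^2a^2,$$ $$a^2b^2+b^2c^2+c^2a^2=\tfrac{a^2b^2+b^2c^2}{2}+\tfrac{b^2c^2+c^2a^2}{2}+\tfrac{c^2a^2+a^2b^2}{2}\geq ab^2c+abc^2+a^2bc=abc(a+b+c).$$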
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/679544', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/115670/']}
|
jdg_371114
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11203195
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I find this article to be useful for non-ajax request How to handle session expiration and ViewExpiredException in JSF 2? but I can't make use of this when I am submitting using an AJAX call. Suppose in a primefaces dialog, I am making a post request using AJAX and session has already timed out.I see my page getting stuck. How to fix this kind of scenario such that when I post using AJAX, I could redirect him to my view expired page andthen forward him to the login page similar to the solution in the link above? JSF2/Primefaces/Glassfish
Now provide the response and nothing else.
|
Exceptions which are thrown during ajax requests have by default totally no feedback in the client side. Only when you run Mojarra with project stage set to Development and use <f:ajax> , then you will get a bare JavaScript alert with the exception type and message. But other than that, and in PrimeFaces, there's by default no feedback at all. You can however see the exception in the server log and in the ajax response (in the webbrowser's developer toolset's "Network" section). You need to implement a custom ExceptionHandler which does basically the following job when there's a ViewExpiredException in the queue: String errorPageLocation = "/WEB-INF/errorpages/expired.xhtml";context.setViewRoot(context.getApplication().getViewHandler().createView(context, errorPageLocation));context.getPartialViewContext().setRenderAll(true);context.renderResponse(); Alternatively, you could use the JSF utility library OmniFaces . It has a FullAjaxExceptionHandler for exactly this purpose (source code here , showcase demo here ). See also: Why use a JSF ExceptionHandlerFactory instead of <error-page> redirection? What is the correct way to deal with JSF 2.0 exceptions for AJAXified components?
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11203195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/373784/']}
|
jdg_371115
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2390604
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to write a PHP script that uses the pdftk app to merge an XFDF with a PDF form and output the merged PDF to the user. According to the pdftk documentation, I can pass the form data in via stdin and have the PDF output to the stdout stream. The normal, file-not-stream way to use pdftk from the command line is: pdftk blankform.pdf fill_form formdata.xfdf output filledform.pdf to use streams on the command line, you'd enter: pdftk blankform.pdf fill_form - output - I have a couple of problems: 1) I have gotten pdftk to return output via stdout using an xfdf file (instead of stdin ) like so: exec("pdftk blankform.pdf fill_form formdata.xfdf output -", $pdf_output); file_put_contents("filledform.pdf",$pdf_output); But the pdf that it creates is corrupt, according to Adobe Reader and a quick peek at the file with a text editor shows that, at the very least, it is not setting the line endings where they should be. I have an identical PDF created by pdftk where it output to a file, and the pdf looks fine in the text editor, so I know that it's not pdftk that's outputting bad data. 2) I can not for the life of me figure out how to set the stdin stream in PHP so that I can use that stream as my input for pdftk. From what I'm reading on the PHP documentation, stdin is read-only, so how does anything ever get into that stream? Ideally, I would like to keep this really simple and avoid using proc_open() . I attempted to use that function and wasn't very sucessful, which is probably my fault, not the function's, but really my goals are simple enough I'd rather avoid using robust functions I don't need. Ideally my code would look something like: $form_data_raw = $_POST; $form_data_xfdf = raw2xfdf($form_data_raw); //some function that turns HTML-form data to XFDF $blank_pdf_form = "blankform.pdf"; header('Content-type: application/pdf'); header('Content-Disposition: attachment; filename="output.pdf"'); passthru("pdftk $blank_pdf_form fill_form $form_data_xfdf output -); Just a heads up, it is possible to put the actual xml string in the command line, but I've had very unreliable results with this. Edit With much help, I now understand that my real question was "how can pipe a variable to a command line execution in PHP". Apparently proc_open is the best way to go, or at least the most straightforward. Since it took me forever to figure this out and since my research on Google suggests others may be struggling, I'll post the code that specifically worked for my problem: $blank_pdf_form = "blankform.pdf";$cmd = "pdftk $blank_pdf_form fill_form - output -";$descriptorspec = array( 0 => array("pipe", "r"), 1 => array("pipe", "w"));$process = proc_open($cmd, $descriptorspec, $pipes);if (is_resource($process)) { //row2xfdf is made-up function that turns HTML-form data to XFDF fwrite($pipes[0], raw2xfdf($_POST)); fclose($pipes[0]); $pdf_content = stream_get_contents($pipes[1]); fclose($pipes[1]); $return_value = proc_close($process); header('Content-type: application/pdf'); header('Content-Disposition: attachment; filename="output.pdf"'); echo $pdf_content;}
Now provide the response and nothing else.
|
I'm not sure about what you're trying to achieve. You can read stdin with the URL php://stdin. But that's the stdin from the PHP command line, not the one from pdftk (through exec). But I'll give a +1 for proc_open() <?php$cmd = sprintf('pdftk %s fill_form %s output -','blank_form.pdf', raw2xfdf($_POST));$descriptorspec = array( 0 => array("pipe", "r"), // stdin is a pipe that the child will read from 1 => array("pipe", "w"), // stdout is a pipe that the child will write to 2 => null,);$process = proc_open($cmd, $descriptorspec, $pipes);if (is_resource($process)) { // $pipes now looks like this: // 0 => writeable handle connected to child stdin // 1 => readable handle connected to child stdout fwrite($pipes[0], stream_get_contents(STDIN)); // file_get_contents('php://stdin') fclose($pipes[0]); $pdf_content = stream_get_contents($pipes[1]); fclose($pipes[1]); // It is important that you close any pipes before calling // proc_close in order to avoid a deadlock $return_value = proc_close($process); header('Content-type: application/pdf'); header('Content-Disposition: attachment; filename="output.pdf"'); echo $pdf_content;}?>
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2390604', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49478/']}
|
jdg_371116
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1099412
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Lets say you have a sequence of numbers $S$, for example: $1, 3, 2, 5, 2, 7, 3, 10, 5, 4$ and create a new sequence $A$ such that $A_i = S_{S_i}$ where $S_1$ and $A_1$ denote the first elements of $S$ and $A$ respectively. For the given example above, $A$ is: $1, 2, 3, 2, 3, 3, 2, 4, 2, 5$. Given the sequence $A$ is it possible to reconstruct a sequence $X$ that satisfies the criteria used to generate $A$? If so, how would you go about doing it? Is the reconstructed sequence unique, and will it be equal to $S$?
Now provide the response and nothing else.
|
Denote the sum as $\mathcal{S}$. Then\begin{align}\small\mathcal{S}&\small\ =-2\mathcal{Re}\sum^\infty_{n=1}\frac{H_n^{(2)}}{n}i^n\\&\small\ =-2\mathcal{Re}\left[\mathcal{Li}_3(z)+2\mathcal{Li}_3(1-z)-\mathcal{Li}_2(z)\ln(1-z)-2\mathcal{Li}_2(1-z)\ln(1-z)-\ln{z}\ln^2(1-z)-2\zeta(3)\vphantom{\frac{}{}}\right]\Bigg{|}_{z=i}\\&\small\ =\boxed{\displaystyle2\zeta(3)-\frac{\pi\mathbf{G}}{2}-\frac{\pi^\color{red}{2}}{48}\ln{2}}\end{align}The second equality can be derived by computing the integral $\small\displaystyle\int\frac{\mathcal{Li}_2(z)}{z(1-z)}dz$. To arrive at the third equality from the second, it suffices to know that\begin{align}\small\mathcal{Li}_\nu(i)&\small\ =(2^{1-2\nu}-2^{-\nu})\zeta(\nu)+i\beta(\nu)\\\small\mathcal{Re}\ \mathcal{Li}_3(1\pm i)&\small\ =\frac{35}{64}\zeta(3)+\frac{\pi^2}{32}\ln{2} \end{align}The first identity follows from the series representation of the polylogarithm. For the second, setting $z=\pm i$ in Landen's trilogarithm identity yields$$\small\mathcal{Re}\ \mathcal{Li}_3(1\pm i)=-\mathcal{Re}\ \mathcal{Li}_3\left(\frac{1\mp i}{2}\right)+\frac{35}{32}\zeta(3)+\frac{\pi^2}{192}\ln{2}+\frac{\ln^3{2}}{48}$$Setting $z=1\pm i$ in the inversion formula,$$\small\mathcal{Re}\ \mathcal{Li}_3(1\pm i)=\mathcal{Re}\ \mathcal{Li}_3\left(\frac{1\mp i}{2}\right)+\frac{11\pi^2}{192}\ln{2}-\frac{\ln^3{2}}{48}$$Averaging these two equalities gives us the closed form for $\small\mathcal{Re}\ \mathcal{Li}_3(1\pm i)$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1099412', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/87845/']}
|
jdg_371117
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
313988
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Robert Martin's Clean Code , I encountered the following method on p. 195: private void parseSchemaElement(String element) throws ArgsException { char elementId = element.charAt(0); String elementTail = element.substring(1); validateSchemaElementId(elementId); if (elementTail.length() == 0) marshalers.put(elementId, new BooleanArgumentMarshaler()); else if (elementTail.equals("*")) marshalers.put(elementId, new StringArgumentMarshaler()); else if (elementTail.equals("#")) marshalers.put(elementId, new IntegerArgumentMarshaler()); else if (elementTail.equals("##")) marshalers.put(elementId, new DoubleArgumentMarshaler()); else if (elementTail.equals("[*]")) marshalers.put(elementId, new StringArrayArgumentMarshaler()); else throw new ArgsException(INVALID_ARGUMENT_FORMAT, elementId, elementTail);} Just a few pages earlier (p. 157), the book advocates use of dependency injection (DI) in order to structure our program better and facilitate testing in isolation. Would the above example be the case where dependency injection should be used in order to test the given class in isolation , i.e. without also testing all the *ArgumentMarshaller classes? We could have a factory object, which would create an appropriate *ArgumentMarshaller where needed - this would be consistent with the DI recommendations and also facilitate testing with mocks/doubles. ..or perhaps did I misunderstand something and the dependency injection applied here is not a good idea?
Now provide the response and nothing else.
|
Simply put, it depends. Even though I am all for dependency injection in classes containing business logic, for this specific case and without knowing the context, even though the Marshalers are instantiated directly, it seems the only work that is done with them is adding them to a dictionary based on some rule, but not actually performing any operations on them. And for that I would be OK with instantiating the classes directly in place. If you, however, wanted to decouple the creation of the Marshalers in question, the factory approach is the correct one. Dependency injection is generally recommended for classes which contain methods performing operations with side effects, such as writing to a database. In that case you want to know what the dependencies are so you are not surprised when a method call suddenly wipes your entire database even though, based on the public API of the class, it shouldn't have access to it. The only real improvement I'd have for this code would be to rename the method to parseSchemaElementAndAddItToMarshalerCollection.
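If you did want that decoupling, a minimal Java sketch of the factory variant could look like the following (the marshaler and exception names come from the book's Args example; the factory interface itself is a hypothetical addition):

public interface ArgumentMarshalerFactory {
    ArgumentMarshaler createFor(char elementId, String elementTail) throws ArgsException;
}

public class DefaultArgumentMarshalerFactory implements ArgumentMarshalerFactory {
    @Override
    public ArgumentMarshaler createFor(char elementId, String elementTail) throws ArgsException {
        switch (elementTail) {
            case "":    return new BooleanArgumentMarshaler();
            case "*":   return new StringArgumentMarshaler();
            case "#":   return new IntegerArgumentMarshaler();
            case "##":  return new DoubleArgumentMarshaler();
            case "[*]": return new StringArrayArgumentMarshaler();
            default:    throw new ArgsException(INVALID_ARGUMENT_FORMAT, elementId, elementTail);
        }
    }
}

// With the factory injected through the constructor, parseSchemaElement shrinks to:
//   validateSchemaElementId(elementId);
//   marshalers.put(elementId, factory.createFor(elementId, elementTail));
// and the class can then be unit tested with a stub factory instead of the real marshalers.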
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/313988', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/222036/']}
|
jdg_371118
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2073942
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a GUI program using PyQt4. There is a button in my main windowand by clicking this button.I hope to launch a background process which is an instance of a class derived from processing.Process. class BackgroundTask(processing.Process): def __init__(self, input): processing.Process.__init__(self) ... def run(self): ... (Note that I am using the Python2.5 portof the python-multiprocessing obtainedfrom http://code.google.com/p/python-multiprocessing/ that is why it is processing.Processinstead of multiprocessing.Process.I guess this should not make a difference.Am I right?) The code connected to the button click signal is something simply like processing.freezeSupport() task = BackgroundTask(input) task.start() The program works as expected under the python intepreter, i.e.if it is started from the command line "python myapp.py". However, after I package the program using py2exe, everytime when I click that button, insteadof starting the background task, a copyof the main window pops up. I am not sure what is the reason of this behavior. I guessit is related to the following note addressedat http://docs.python.org/library/multiprocessing.html#multiprocessing-programming "Functionality within this package requires that the main method be importable by the children. This is covered in Programming guidelines however it is worth pointing out here. This means that some examples, such as the multiprocessing.Pool examples will not work in the interactive interpreter" The only place I have if name == " main " is in the main module as in a typical pyqt program if __name__ == "__main__": a = QApplication(sys.argv) QObject.connect(a,SIGNAL("lastWindowClosed()"),a,SLOT("quit()")) w = MainWindow() w.show() a.exec_() Any solutions on how to fix this problem? Thanks!
Now provide the response and nothing else.
|
I think your actual problem has to do with this: The program works as expected under the python intepreter, i.e. if it is started from the command line "python myapp.py". However, after I package the program using py2exe, every time when I click that button, > instead of starting the background task, a copy of the main window pops up. You need to add a special call to the freeze_support() function to make the multiprocessing module work with "frozen" executables (eg, those made with py2exe): if __name__ == "__main__": # add freeze support processing.freeze_support() a = QApplication(sys.argv) QObject.connect(a,SIGNAL("lastWindowClosed()"),a,SLOT("quit()")) w = MainWindow() w.show() a.exec_() Reference: http://docs.python.org/library/multiprocessing.html#multiprocessing.freeze_support
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2073942', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34829/']}
|
jdg_371119
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37831477
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
After discussion with college, I wonder if it would be possible (even if completely does not make any sense) to deallocate memory manually in go (ie. by using unsafe package). Is it?
Now provide the response and nothing else.
|
Here is a thread that may interest you: Add runtime.Free() for GOGC=off Interesting part: The Go GC does not have the ability to manually deallocate blocks anymore. And besides, runtime.Free is unsafe (people might free still in use pointers or double free) and then all sorts of C memory problem that Go tries hard to get rid of will come back. The other reason is that runtime sometimes allocates behind your back and there is no way for the program to explicitly free memory. If you really want to manually manage memory with Go, implement your own memory allocator based on syscall.Mmap or cgo malloc/free. Disabling GC for extended period of time is generally a bad solution for a concurrent language like Go. And Go's GC will only be better down the road. TL;DR: Yes, but don't do it
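For the syscall.Mmap route mentioned in that quote, a rough sketch could look like this (Unix-only; on other platforms you would reach for golang.org/x/sys/unix or cgo, and this deliberately bypasses the garbage-collected heap):

package main

import (
	"fmt"
	"syscall"
)

// manualAlloc asks the OS for an anonymous, page-backed buffer that the Go GC never sees.
func manualAlloc(size int) ([]byte, error) {
	return syscall.Mmap(-1, 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
}

// manualFree hands the buffer back to the OS; touching the slice afterwards is undefined behaviour.
func manualFree(buf []byte) error {
	return syscall.Munmap(buf)
}

func main() {
	buf, err := manualAlloc(1 << 20) // 1 MiB
	if err != nil {
		panic(err)
	}
	buf[0] = 42
	fmt.Println(buf[0])
	if err := manualFree(buf); err != nil {
		panic(err)
	}
}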
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37831477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/190446/']}
|
jdg_371120
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
436476
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let's consider two ODEs $\tag{1}\label{1}\frac{du}{dt}=\gamma(u(t))\ F(u(t))$ and $\tag{2}\label{2}\frac{dv}{d\tau}=F(v(\tau))$ where $f\in C^\infty(\mathbb R^n,\mathbb R^n)$ and $\gamma\in C^\infty(\mathbb R^{n}, \mathbb R_+)$ is a positive scalar. Defining the time change $\tau_u(t) = \int_0^t\gamma(u(s))ds$ , there is a mapping between the solutions of the two ODEs, namely $$u(t) = v(\tau_u(t)). $$ We now assume that \eqref{1} has a $T$ -periodic orbit $\bar u(t)$ , which implies that \eqref{2} has a $T_v$ -periodic orbit $\bar v$ with period $T_v=\int_0^T\gamma\circ u$ . By the time change, the stability of the two periodic solutions is the same. (Question) However I found no easy link between the Fourier multipliers / exponents of the two ODEs, is it obvious to some of you? Variational equations and Floquet multipliers For \eqref{1}, the variational equations reads $$\tag{VE1}\label{VE1}\frac{du}{dt} = (d\gamma(\bar u(t))\cdot u(t))\ F(\bar u(t)) + \gamma(\bar u(t))\ dF(\bar u(t))\cdot u(t) $$ and the Floquet multipliers are the eigenvalues of $u(0)\to u(T)$ . For \eqref{2}, the variational equations reads $$\frac{dv}{d\tau} = dF(\bar v(\tau))\cdot v(\tau) $$ which after a time change with $\tau(t)=\int_0^t\gamma\circ\bar u$ , gives $$\tag{VE2}\frac{dv}{dt} = \gamma(\bar u(t))\ dF(\bar u(t))\cdot v(t) $$ which is really close to \eqref{VE1}.
Now provide the response and nothing else.
|
Here are two examples showing that none of your candidate notions work. First, we can observe that every Quasi-Polish space ( https://doi.org/10.1016/j.apal.2012.11.001 ) admits a Baire class 1 isomorphism to a Polish space, and thus has a standard Borel $\sigma$ -algebra. However, take e.g. the Scott domain $\mathcal{O}(\mathbb{N})$ , with underlying set $\mathcal{P}(\mathbb{N})$ and the topology generated by $\{U \subseteq \mathbb{N} \mid n \in U\}$ . This space is not Hausdorff, so clearly not metrizable and not isomorphic to any subspace of a Polish space. For our second example, let us consider the space $\mathbb{R}[X]$ of polynomials of the reals. It is topologized as the limit of the compact Polish space of polynomials of degree up to $n$ and coefficients bounded by $n$ . This space is not second-countable, but it is separable, so again, it is not metrizable. As there is a $\Delta^0_2$ -bijection between $\mathbb{R}[X]$ and the Polish space $\mathbb{R}^*$ , it again has a standard Borel $\sigma$ -algebra.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/436476', 'https://mathoverflow.net', 'https://mathoverflow.net/users/496297/']}
|
jdg_371121
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43319451
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are trying to decide which DNS hosting solution to use. Today we use Power DNS and we want to move to a hosted DNS solution. The best solution for us would be using Amazon's Route 53 for this.We are mandated to use DNSSEC for our DNS solution and I have been trying to understand what Amazon's DNS supports and what it doesn't. Amazon's site says: Amazon Route 53 supports DNSSEC for domain registration but does not support DNSSEC for DNS service. If you want to configure DNSSEC for a domain that is registered with Amazon Route 53, you must use another DNS service provider. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-configure-dnssec.html Can anyone explain what this means? In particular what is supported and what isn't as well as what does using another DNS service provider for a domain that is registered with Route 53 mean.
Now provide the response and nothing else.
|
Route 53 offers two¹ different services: a DNS hosting provider, providing authoritative DNS hosting in hosted zones a domain registrar, allowing you to register new domains for use on the Internet (or transfer the registration of existing domains so that your annual registration fees are consolidated into your AWS account bill) Those two services have no necessary connection to each other. You can register a domain with any accredited registrar (for example, let's say GoDaddy), and still host the DNS with Route 53... or you can register a domain with Route 53 and still host the DNS elsewhere (for example, let's say Dyn)... or you can use Route 53 for both services, since they are independent. Amazon Route 53 supports DNSSEC for domain registration So, if you register a domain with the Route 53 Registrar, it can be configured to use DNSSEC... but does not support DNSSEC for DNS service. ...but not if you use Route 53 hosted zones for authoritative DNS hosting, which does not support DNSSEC, regardless of who the registrar is. Therefore... If you want to configure DNSSEC for a domain that is registered with Amazon Route 53, you must use another DNS service provider ...to host your authoritative DNS records. You can't use a Route 53 hosted zone with DNSSEC. ¹ two different services that are relevant here. The emphasis is intended to be on different , because many other service providers blur the distinction between domain registration and authoritative DNS hosting to the point that many users seem unaware that they can almost always be decoupled, in at least one direction, regardless of the providers in question. Also under the "Route 53" banner are other services like Route 53 Resolver (which deals primarily with recursive querying in VPC and/or on-premise) and Route 53 Health Checks (which can be used as a basis for DNS failover as well as for other health-checking and latency-measuring purposes that can be but aren't necessarily even DNS related).
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43319451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1303317/']}
|
jdg_371122
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25132
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with some unpleasantly tedious polynomials, which need to be manipulated in various ways (integrate with respect to some variable, differentiate with respect to another). Since these will eventually be part of numeric routines, I'd like to pull out common subexpressions that I can reuse, e.g. (G u^2 (6 p (2 h + p) - 8 (h + p) u + 3 u^2))/(12 h^2)(G (3 h + 3 p - 2 u) u^2)/(3 h^2) should at least note that both have a common factor of (G u^2)/(3 h^2) Is there a convenient way to instruct Mathematica to look for this sort of computational-expense-reduction in pairs of expressions? Ideally it would notice even in the case where it's not just a factor multiplied by both, e.g. if I added + 1 to the second equation, I'd still like it to find that common subexpression. Just for clarification, this is what I do by hand: e1 = (G u^2)/(12 h^2) * (6 p (2 h + p) - 8 (h + p) u + 3 u^2))e2 = (G u^2)/( 3 h^2) * (3 h + 3 p - 2 u)A = (G u^2)/(12 h^2)e1 = A * (6 p (2 h + p) - 8 (h + p) u + 3 u^2))e2 = A * (12 (h + p) - 6 u)B = u^2C = h + pA = (G B)/(12 h^2)e1 = A * (6 p (2 h + p) - 8 C u + 3 B)e2 = A * (12 C - 6 u) to go from 6 additions, 20 multiplications, 2 divisions, and 2 assignments, to 5 additions, 14 multiplications, 1 division, and 5 assignments (the extra three of which are temporary so can be in registers and cost essentially nothing).
Now provide the response and nothing else.
|
The engine behind this inside Compile is a well-hidden function called OptimizeExpression . it has two levels, 1 and 2. Setting to 2 makes it work harder to find CSEs. e1 = (G u^2 (6 p (2 h + p) - 8 (h + p) u + 3 u^2))/(12 h^2);e2 = (G (3 h + 3 p - 2 u) u^2)/(3 h^2);Experimental`OptimizeExpression[{e1, e2}, OptimizationLevel -> 2](* Out[40]= Experimental`OptimizedExpression[ Block[{Compile`$7, Compile`$9, Compile`$10}, Compile`$7 = h^2; Compile`$9 = 1/Compile`$7; Compile`$10 = u^2; {1/12 G Compile`$9 Compile`$10 (6 p (2 h + p) - 8 (h + p) u + 3 Compile`$10), 1/3 G Compile`$9 (3 h + 3 p - 2 u) Compile`$10}]] *)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/25132', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/471/']}
|
jdg_371123
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27485990
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I ensure that data from a controller has been loaded in a directive before the link function is run? Using psuedo-code, I could have: <my-map id="map-canvas" class="map-canvas"></my-map> for my html. In my directive I might have something like this: app.directive('myMap', [function() {return{ restrict: 'AE', template: '<div></div>', replace: true, controller: function ($scope, PathService) { $scope.paths = []; PathService.getPaths().then(function(data){ $scope.paths = data; }); }, link: function(scope, element, attrs){ console.log($scope.paths.length); }}}]); The above won't work because console.log($scope.paths.length); will get called before the service has returned any data. I know I can call the service from the link function but would like to know if there is a way to "wait" for the service call before firing the link function.
Now provide the response and nothing else.
|
The easiest solution would be to use ng-if since the element and directive would be rendered only when the ng-if is resolved as true <my-map id="map-canvas" class="map-canvas" ng-if="dataHasLoaded"></my-map>app.controller('MyCtrl', function($scope, service){ $scope.dataHasLoaded = false; service.loadData().then( function (data) { //doSomethingAmazing $scope.dataHasLoaded = true } )}) or use promises return { restrict: 'AE', template: '<div></div>', replace: true, controller: function ($scope, PathService) { $scope.paths = []; $scope.servicePromise = PathService.getPaths() }, link: function (scope, element, attrs) { scope.servicePromise.then(function (data) { scope.paths = data; console.log(scope.paths) }); }}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27485990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2166381/']}
|
jdg_371124
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29959
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The human body seems to be a good antenna to pick up 50Hz, at least touching a scope's probe shows a 50Hz waveform more than a volt in amplitude. Is it possible to construct an electronic antenna that does the same thing? (Wavelength at 50Hz is 6000km) And if there's so much energy in it (1V in 1M\$\Omega\$ is 1\$\mu\$W), then why don't high-impedance pins pick this up?
Now provide the response and nothing else.
|
A human body is a really crappy antenna at 50 Hz. What you are seeing is capacitive pickup, nothing like an antenna picking up a self-propagating wave. Yes, any hunk of conductive material the size of a human body will pick up 50 Hz about the same. Since it's all capacitive coupling, it's the area that matters, not the bulk. A properly sized sheet of aluminum foil would do just as well, probably better. Just about anything large and metal will pick up 50 Hz, assuming this is coming from your power line and this is indoors where power lines are all around in the walls, cords running to appliances, etc. A metal desk that is not connected to ground is a good example.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/29959', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/3920/']}
|
jdg_371125
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43566044
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Python does a lot with magic methods and most of these are part of some protocol. I am familiar with the "iterator protocol" and the "number protocol" but recently stumbled over the term "sequence protocol" . But even after some research I'm not exactly sure what the "sequence protocol" is. For example the C API function PySequence_Check checks (according to the documentation) if some object implements the "sequence protocol". The source code indicates that this is a class that's not a dict but implements a __getitem__ method which is roughly identical to what the documentation on iter also states: [...]must support the sequence protocol (the __getitem__() method with integer arguments starting at 0).[...] But the requirement to start with 0 isn't something that's "implemented" in PySequence_Check . Then there is also the collections.abc.Sequence type, which basically says the instance has to implement __reversed__ , __contains__ , __iter__ and __len__ . But by that definition a class implementing the "sequence protocol" isn't necessarily a Sequence, for example the "data model" and the abstract class guarantee that a sequence has a length. But a class just implementing __getitem__ (passing the PySequence_Check ) throws an exception when using len(an_instance_of_that_class) . Could someone please clarify for me the difference between a sequence and the sequence protocol (if there's a definition for the protocol besides reading the source code) and when to use which definition?
Now provide the response and nothing else.
|
It's not really consistent. Here's PySequence_Check: int PySequence_Check(PyObject *s){ if (PyDict_Check(s)) return 0; return s != NULL && s->ob_type->tp_as_sequence && s->ob_type->tp_as_sequence->sq_item != NULL;} PySequence_Check checks if an object provides the C sequence protocol, implemented through a tp_as_sequence member in the PyTypeObject representing the object's type. This tp_as_sequence member is a pointer to a struct containing a bunch of functions for sequence behavior, such as sq_item for item retrieval by numeric index and sq_ass_item for item assignment. Specifically, PySequence_Check requires that its argument is not a dict, and that it provides sq_item. Types with a __getitem__ written in Python will provide sq_item regardless of whether they're conceptually sequences or mappings, so a mapping written in Python that doesn't inherit from dict will pass PySequence_Check. On the other hand, collections.abc.Sequence only checks whether an object concretely inherits from collections.abc.Sequence or whether its class (or a superclass) is explicitly registered with collections.abc.Sequence. If you just implement a sequence yourself without doing either of those things, it won't pass isinstance(your_sequence, Sequence). Also, most classes registered with collections.abc.Sequence don't support all of collections.abc.Sequence's methods. Overall, collections.abc.Sequence is a lot less reliable than people commonly expect it to be. As for what counts as a sequence in practice, it's usually anything that supports __len__ and __getitem__ with integer indexes starting at 0 and isn't a mapping. If the docs for a function say it takes any sequence, that's almost always all it needs. Unfortunately, "isn't a mapping" is hard to test for, for reasons similar to how "is a sequence" is hard to pin down.
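A small Python 3 sketch of the practical upshot (the class is hypothetical):

from collections.abc import Sequence

class Squares:
    """Supports only __getitem__ with 0-based integer indexes."""
    def __getitem__(self, index):
        if index >= 5:
            raise IndexError(index)
        return index * index

s = Squares()
print(list(s))                  # [0, 1, 4, 9, 16]: the legacy __getitem__ iteration protocol kicks in
print(isinstance(s, Sequence))  # False: neither inherits from nor is registered with the ABC
try:
    len(s)
except TypeError as e:
    print("no __len__:", e)     # yet C-level PySequence_Check() would still report it as a sequence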
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43566044', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5393381/']}
|
jdg_371126
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2602500
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A recent question mentioned an integral identity involving Dedekind $\eta$ function and a special value for the complete elliptic integral of the first kind. I refrained from providing a complete answer, I rather tried to guide the OP through some hints, but we apparently reached a dead spot, concerning the following simplified version of the original problem:$$\boxed{ \int_{0}^{+\infty}\left[\sum_{n\geq 1}(-1)^n e^{-n^2 x}\right]^2\,dx = \frac{\pi^2-3\pi\log 2}{12}.} \tag{A}$$My solution goes as follows: The LHS of $(A)$ can be written in terms of $\sum_{m,n\geq 1}\frac{(-1)^{m+n}}{m^2+n^2}$, to be dealt with care since it is not absolutely convergent; We have $\frac{(-1)^{m+n}}{m^2+n^2}=\int_{0}^{+\infty}\frac{(-1)^n\sin(nx)}{n}(-1)^m e^{-mx}\,dx$, where $\sum_{m\geq 1}(-1)^m e^{-mx}$ is a simple geometric series and $\sum_{n\geq 1}\frac{(-1)^n \sin(nx)}{n}=-\arctan\tan\frac{x}{2}$ almost everywhere; The problem boils down to integrating over $\mathbb{R}^+$ the product between a sawtooth wave and the function $\frac{1}{e^x+1}$. Through the dilogarithms machinery or the residue theorem, to reach the RHS of $(A)$ is not difficult. I would use this question for collecting alternative/shorter/slicker proofs .
Now provide the response and nothing else.
|
Using the Poisson summation formula, one can show that\begin{equation}\sum_{n=1}^\infty (-1)^n e^{-n^2x}=-\frac{1}{2}+\sqrt{\frac{\pi}{x}}\sum_{n=1}^\infty e^{-\frac{(2n-1)^2\pi^2}{4x}}\end{equation} then,\begin{align}I&=\int_{0}^{+\infty}\left[\sum_{n\geq 1}(-1)^n e^{-n^2 x}\right]^2\,dx\\&=-\frac{1}{2}\int_{0}^{+\infty}\sum_{p\geq 1}(-1)^p e^{-p^2 x}\,dx+\sqrt{\pi}\sum_{n,p\geq 1}\int_{0}^{+\infty}(-1)^p e^{-p^2 x-\frac{(2n-1)^2\pi^2}{4x}}\frac{dx}{\sqrt{x}}\\&=\frac{\pi^2}{24}+\pi\sum_{n,p\geq 1}\frac{(-1)^p}{p}e^{-(2n-1)p\pi}\\&=\frac{\pi^2}{24}-\pi\sum_{n\geq 1}\ln\left(1+e^{-(2n-1)\pi} \right)\\&=\frac{\pi^2}{24}+\frac \pi 2 \sum_{p\geq 1}\frac{(-1)^p}{p\sinh p\pi}\end{align}The integral representation of $K_{-1/2}(.)$ was used to evaluate the last integral. The obtained results seem numerically correct; however, I couldn't succeed in expressing them as $I=\pi^2/12-\pi/4\ln(2)$, for example showing that$$\prod_{n\geq 1}\left(1+e^{-(2n-1)\pi} \right)\stackrel{?}{=}2^{1/4}e^{-\pi/24}$$ Edit: Finally, a proof of this last identity can be found in this article by Xu Ce (expression (6.3)).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2602500', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/44121/']}
|
jdg_371127
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
198631
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
For my work I'll need to provide my customer a specific file which will be the result of the work I have done for them. To protect the integrity of the work I have done and to guarantee it has never been modified, I intend to add a checksum to my documentation which will be provided with the file. Since MD5 and SHA-1 are not secure for a long time now, I was wondering if we were still using them for this purpose or if there are better algorithms which could do the same job but more safely. I'm looking for the best reliable solution. I'm aware that a 100% proof will never be possible but I was wondering if MD5 was still rated "good" for this purpose or if there are really new and safer tools.
Now provide the response and nothing else.
|
Choice of hash algorithm Use SHA-256 or SHA-512: either of the two “main” members of the SHA-2 family. SHA-2 is the successor of SHA-1 and is considered secure. It's the hash to choose unless you have a good reason to choose otherwise. In your case the choice between SHA-256 and SHA-512 makes little difference. There is a SHA-3 but it isn't very widely supported yet and it isn't more secure (or less secure) than SHA-2, it's just a different design. Do not use MD5 or SHA-1. They are not obviously unsuitable in your scenario, but they could be exploited with a bit of extra work. Furthermore the fact that these algorithms are already partially broken makes them more at risk of getting more broken over time. More precisely, for both of these hashes, it is possible to find collisions: it is possible to find two documents D1 and D2 such that MD5(D1) = MD5(D2) (or SHA-1(D1) = SHA-1(D2)), and such that D1 and D2 each end with a small bit that needs to be calculated and optionally a common chosen suffix. The bit that needs to be calculated will look like garbage, but it can be hidden in a comment, in an image that's shifted off-page, etc. Producing such collisions is trivial on a PC for MD5 and is doable but expensive for SHA-1 (unless you want it for two PDF files, in which case researchers have already spent the money on the calculation to find one and published it). In your scenario, you mostly don't care about collisions, because you'll be producing D1. You aren't going to craft this bit in the middle. However, there's a risk that somebody could trick you into injecting this bit, for example by supplying an image to include in the document. It would be pretty tricky to achieve a collision that way, but it's doable in principle. Since there's risk in using MD5, and zero benefit compared to using SHA-256, use SHA-256. What to do with a hash With a non-broken cryptographic hash like SHA-256, what you know is that if two files have the same hash then they're identical. Conversely, this means that if two files have different hashes, then they're different. This means that if you keep a trusted copy of the hash (for example you print it out and store it, or notarize it), then you can tell later “yes, this file you're showing me is the same file” or “no, this file you're showing me is different”. Knowing the hash of the file doesn't prove that you wrote it. There's no cryptographic way to prove authorship. The best you can do is to prove that you had the file earlier than anyone else who can prove it. You can do that without revealing the file by communicating the hash to a third party who everyone trusts to correctly remember the date at which you showed them the hash; this third party could be a public notary, or the Wayback Machine if you put the hash on a web page that it indexes. (If you publish the hash, then in theory someone could figure out the file from it, but there's no better way to do that than to try all plausible files until they find the right one. If you are concerned about this then use a signature of the file instead of a hash, and notarize the signature and the public key but keep the private key to yourself.) Example of something a hash is good for: your customer wants support, but you're only prepared to support your original product and not a modified product. So you get them to calculate the hash of what they want you to support. If the hash value is not what you provided, you refuse to provide support. Note that you need to trust the customer to calculate the hash of the product, and not calculate the hash of some copy of the original or read it off the delivery slip. Example of something a hash is not good for: somebody else claims that they're the author of the document. You say “no, look, I know its hash, it's 1234…”. That doesn't help: anybody can calculate the hash. Example of something a hash is good for if used appropriately: somebody else claims that they just wrote the document. You say “no, look, I notarized the hash last year, so you can't have written it last week”. Example of something a hash is not good for: somebody makes a slight modification of the document. It'll then have a different hash. All you can say is that the document is now different, but that doesn't convey any information about how different they are. The hash of a completely different document is just as different as the hash of a version with a typo fix, or a version that's encoded differently.
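If it helps, producing such a checksum is a one-liner in most environments; here is a small Python sketch (the file name is just a placeholder), whose output can be cross-checked with the usual sha256sum command-line tool:

import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream the file in chunks so arbitrarily large deliverables can be hashed."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(sha256_of_file("deliverable.pdf"))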
Note that you need to trust the customer to calculate the hash of the product, and not calculate the hash of some copy of the original or read it off the delivery slip. Example of something a hash is not good for: somebody else claims that they're the author of the document. You say “no, look, I know its hash, it's 1234…”. That doesn't help: anybody can calculate the hash. Example of something a hash is good for if used appropriately: somebody else claims that they just wrote the document. You say “no, look, I notarized the hash 6 last year, so you can't have written it last week”. Example of something a hash is not good for: somebody makes a slight modification of the document. It'll then have a different hash. All you can say is that the document is now different, but that doesn't convey any information about how different they are. The hash of a completely different document is just as different as the hash of a version with a typo fix, or a version that's encoded differently.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/198631', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/192390/']}
|
jdg_371128
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
140062
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was told at university that there exists a difference between the terms Multi Single Factor and Multi-Factor: Multiple single-factor-authenticator are presented to a verifier. Each of them will need to be a single authenticator (and therefore contain a secret). Multi-factor authenticator are used to protect a single secret, that will be presented to the verifier during an authentication protocol. The additional factors are used to protect the authenticator (secret) and need not themselves be authenticators. I'm having troubles understanding the differences and consequences security wise. Do I understand it correctly, that under the definition above, the typical ebanking login process by using a password (shared secret 1) and a hardware crypto token (RSA token) (shared secret 2) would be a multiple single factor authentication. On the other hand, under Multi-Factor-Authentication, I can only imagine a hardware token that needs a fingerprint and a pin to generate a one time password. Is that correct? Question : What is more secure and why? Is the fact, that Multiple Single Factors use multiple shared secrets better or worse than protecting a single shared secret with multiple factors?
Now provide the response and nothing else.
|
This definition you share for multi-factor authentication does not meet the commonly accepted industry definition. Multi-factor just means that two or more factors, typically from different categories of authenticators, are used to authenticate. It isn't specific regarding whether these factors unlock access to a local 'secret' that is used for further authentication or sent directly to the authentication server. Either scenario still involves multi-factor authentication. What these factors unlock as far as secrets, session tokens, or the like shouldn't really affect their name. I can't say I've ever heard the term "multiple single-factor-authenticator" before. I'd interpret this to mean you have a system that uses two factors from the same category, e.g. a password and a PIN. Multi-factor authentication is generally going to be more secure than single factor because it should be more difficult for an attacker to compromise multiple factors. They might be able to guess your password, but it is more difficult to both guess it and obtain a valid OTP tokencode.
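As a rough illustration of two factors from different categories (something you know plus something you have), a server-side check might look like the following Python sketch. It uses the third-party pyotp library; the secret generation, password handling, and in-memory "user record" are placeholders for illustration only, not a storage recommendation.

import pyotp
from hashlib import pbkdf2_hmac
from os import urandom

# Placeholder user record: a salted password hash plus a TOTP secret
salt = urandom(16)
user = {
    "pwd_hash": pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 100_000),
    "totp_secret": pyotp.random_base32(),  # shared once with the user's authenticator app
}

def authenticate(password: str, otp_code: str) -> bool:
    knows = pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == user["pwd_hash"]
    has = pyotp.TOTP(user["totp_secret"]).verify(otp_code)  # checks the current time window
    return knows and has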
|
{}
|
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/140062', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/111122/']}
|
jdg_371129
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41830647
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to pandas in python and I would be grateful for any help on this. I have been googling and googling but can't seem to crack it. For example, I have a csv file with 6 columns. I am trying to group together the rows so that all the data for each row is flattened into one row. So if my data looks like this: event event_date event_time name height age 1 2015-05-06 14:00 J Bloggs 185 24 1 2015-05-06 14:00 P Smith 176 55 1 2015-05-06 14:00 T Kirk 193 22 2 2015-05-14 17:00 B Gates 178 72 2 2015-05-14 17:00 J Mayer 184 42 and what I want to end up with it flattened like this event event_date event_time name_1 height_1 age_1 name_2 height_2 age_2 name_3 height_3 age_3 1 2015-05-06 14:00 J Bloggs 185 24 P Smith 176 55 T Kirk 193 22 2 2015-05-14 17:00 B Gates 178 72 J Mayer 184 42 . So as you can see above the first event in the first 3 rows have been flattened into one and the columns expanded to accomodate the row data. The second event has been flattened and the columns filled with the data. Any help would be appreicated.
Now provide the response and nothing else.
|
Steps:
1) Compute the cumulative counts for the Groupby object. Add 1 so that the headers are formatted as per the desired DF.
2) Set the same grouped columns as the index axis along with the computed cumcounts and then unstack it. Additionally, sort the header according to the lowermost level.
3) Rename the multi-index columns and flatten accordingly to obtain a single header.

cc = df.groupby(['event','event_date','event_time']).cumcount() + 1
df = df.set_index(['event','event_date','event_time', cc]).unstack().sort_index(1, level=1)
df.columns = ['_'.join(map(str,i)) for i in df.columns]
df.reset_index()
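For readers who want to run this end to end, here is a self-contained sketch that rebuilds the sample rows from the question as a hypothetical DataFrame and applies the same steps. The keyword form sort_index(axis=1, ...) is used because newer pandas versions prefer it over the positional form; the column order within each numbered group may differ slightly from the asker's mock-up.

import pandas as pd

df = pd.DataFrame({
    'event':      [1, 1, 1, 2, 2],
    'event_date': ['2015-05-06'] * 3 + ['2015-05-14'] * 2,
    'event_time': ['14:00'] * 3 + ['17:00'] * 2,
    'name':   ['J Bloggs', 'P Smith', 'T Kirk', 'B Gates', 'J Mayer'],
    'height': [185, 176, 193, 178, 184],
    'age':    [24, 55, 22, 72, 42],
})

# 1) number the rows within each event, starting at 1
cc = df.groupby(['event', 'event_date', 'event_time']).cumcount() + 1

# 2) pivot those numbered rows out into columns
wide = df.set_index(['event', 'event_date', 'event_time', cc]).unstack().sort_index(axis=1, level=1)

# 3) flatten the resulting MultiIndex header into name_1, height_1, ...
wide.columns = ['_'.join(map(str, col)) for col in wide.columns]
print(wide.reset_index())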
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41830647', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/654242/']}
|
jdg_371130
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18980781
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
#include <iostream>
#include <vector>
#include <string>
#include <cstring>
#include <functional>

using namespace std;

template <typename Object, typename Comparator>
const Object &findMax(const vector<Object> &arr, const Comparator &isLessThan = less<Object>())
{
    int maxIndex = 0;
    for (int i = 1; i < arr.size(); i++)
    {
        if (isLessThan(arr[maxIndex], arr[i]))
        {
            maxIndex = i;
        }
    }
    return arr[maxIndex];
}

int main()
{
    vector<string> arr(3);
    arr[0] = "ZED";
    arr[1] = "alli";
    arr[2] = "crocode";
    //...
    cout << findMax(arr) << endl;
    return 0;
}

When I compile it with g++, it gives the following error:

test4.cpp: In function ‘int main()’:
test4.cpp:48:24: error: no matching function for call to ‘findMax(std::vector<std::basic_string<char> >&)’
test4.cpp:48:24: note: candidate is:
test4.cpp:10:15: note: template<class Object, class Comparator> const Object& findMax(const std::vector<Object>&, const Comparator&)
Now provide the response and nothing else.
|
Template parameters cannot be deduced from default arguments. C++11, [temp.deduct.type]§5:

The non-deduced contexts are:
...
A template parameter used in the parameter type of a function parameter that has a default argument that is being used in the call for which argument deduction is being done.
...

You can get around this using overloading:

template <typename Object, typename Comparator>
const Object &findMax(const vector<Object> &arr, const Comparator &isLessThan)
{
    int maxIndex = 0;
    for (int i = 1; i < arr.size(); i++)
    {
        if (isLessThan(arr[maxIndex], arr[i]))
        {
            maxIndex = i;
        }
    }
    return arr[maxIndex];
}

template <typename Object>
const Object &findMax(const vector<Object> &arr)
{
    return findMax(arr, std::less<Object>());
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18980781', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2810733/']}
|
jdg_371131
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2309
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble with a non-linear fit: fit = NonlinearModelFit[data, y0 + A Sin[\[Pi] (x - xc)/w], {y0, xc, A, w}, x] where data has about 15 thousand points and looks like this: The data can be downloaded here . Mathematica gives the following adjusted parameters: {y0 -> 30.4428, xc -> 1.54318, A -> -0.000528519, w -> 0.999975} However, this is a terrible fit: I wouldn't complain, except that there is an obviously better fit: with the parameters: {y0 -> 30.45775, w -> 752.71185, A -> 3.62443, xc -> 872.72035} This best fit is returned by Origin without effort. What can I do to achieve the same with Mathematica? (I like Mathematica!)
Now provide the response and nothing else.
|
I have mentioned this in a comment already, but this seems like a good opportunity to provide some related discussion in the form of a full-fledged answer. In Mathematica 8, we can take advantage of NMinimize to fit this data, using the Method -> NMinimize option of NonlinearModelFit . (This should also have worked in Mathematica 7, but unfortunately NMinimize was not recognised as a valid Method setting until version 8 due to a bug.) In particular, Storn-Price differential evolution, available to NonlinearModelFit using the option Method -> {NMinimize, Method -> "DifferentialEvolution"} has a lot to offer in this case, especially if you know a bit about how differential evolution works. This algorithm, as implemented in Mathematica , is documented at tutorial/ConstrainedOptimizationGlobalNumerical#24713453 . From the documentation, we see that the scaling factor $s$ (called $F$ by Storn and Price in their publication on the method and usually elsewhere) acts as an amplification factor on the scale of the global search. Thus, a large value of $s$ encourages more expansive searching of the parameter space, while small values encourage more intense exploration around local minima. Classically, $s$ can take values between 0 and 2, although Mathematica doesn't enforce this restriction. In practice one finds that values larger than unity cause an extreme expansion of the parameter space under search, which may be counterproductive. A "large" value of $s$, then, is something close to 1, and this is what we need in the current case since we may suspect that the initial values chosen for the parameters might be rather far from the global optimum, and do not want to risk falling into some local minimum along the way. The behaviour of differential evolution with respect to crossover probability, $\rho$ (which, as pointed out by Daniel Lichtblau, is equal to Storn and Price's $1 - CR$), is also very important. Noting that two of the parameters, w and xc , are strongly correlated, and knowing that in such cases vigorous mutation is usually the most effective strategy, we might also consider setting $CR \approx 1$, i.e. $\rho \approx 0$. While the default value of $\rho = 0.5$ does work for this example, if more sine functions are introduced into the model, reducing $\rho$ will be practically mandatory. Plenty of discussion (indeed, an extensive literature) on tuning the differential evolution parameters, including the (usually) less critical population size parameter, $m$ (a.k.a. $NP$), can be found elsewhere, if necessary. However, it's worth noting that the "correct" values may differ between Mathematica 's implementation and others, especially for small populations, due to slight differences in the way that the three existing random points are chosen to produce new trial search points. So, writing down our conclusions from the above, we have: data = Import["dat.csv"]; (* with thanks to @Szabolcs *)fit = NonlinearModelFit[ data, y0 + A Sin[Pi (x - xc)/w], {y0, xc, A, w}, x, Method -> {NMinimize, Method -> {"DifferentialEvolution", "ScalingFactor" -> 0.9, "CrossProbability" -> 0.1, "PostProcess" -> {FindMinimum, Method -> "QuasiNewton"} } }] Where one should note the undocumented in this context, albeit rather obviously existent, Method option for FindMinimum as used by NMinimize as used by NonlinearModelFit (yes, that's right: we are setting a Method 's Method 's Method !). 
This serves to hone the parameter values produced by differential evolution given that the latter is, by design in this case, not as efficient for local optimization as other methods. Here "QuasiNewton" corresponds to the method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS), but "LevenbergMarquardt" could also have been used. This gives us: Or, as a list of rules: {y0 -> 30.4578, xc -> 120.008, A -> 3.62443, w -> -752.712} This a result consistent (up to the sign of w and the value of the phase factor xc ) with that given by Origin . Was it achieved without effort (if this is considered important)? While this is inherently a subjective question, in my opinion, the answer is yes. No manually chosen initial values in sight! A plot of the resulting model also makes it clear that this is a reasonable outcome (although evidently one could do better with a more involved model):
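For readers working outside Mathematica, a rough Python analogue of the same strategy (a global differential-evolution search followed by a local quasi-Newton polish) is sketched below with SciPy. The bounds, tuning values, and the "dat.csv" layout are assumptions for illustration; the correspondence between SciPy's mutation/recombination settings and Mathematica's ScalingFactor/CrossProbability is only approximate.

import numpy as np
from scipy.optimize import differential_evolution, minimize

data = np.loadtxt("dat.csv", delimiter=",")   # assumed: two numeric columns, x and y
x, y = data[:, 0], data[:, 1]

def sse(params):
    """Sum of squared residuals for the model y0 + A*sin(pi*(x - xc)/w)."""
    y0, xc, A, w = params
    return np.sum((y - (y0 + A * np.sin(np.pi * (x - xc) / w))) ** 2)

bounds = [(0, 60), (-2000, 2000), (0, 20), (1, 2000)]           # rough guesses, not from the question
coarse = differential_evolution(sse, bounds, mutation=0.9,      # tuning knobs, analogous in spirit only
                                recombination=0.9, seed=0)
fit = minimize(sse, coarse.x, method="BFGS")                    # local polish, like the QuasiNewton step
print(fit.x)                                                    # y0, xc, A, w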
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/2309', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/534/']}
|
jdg_371132
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44505
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My question here is fairly basic, and I'm sure I'm overlooking something basic. I know about CountryData for creating shapes of entire countries, and it works, to an extent; I can plot geographical data on it with the coordinates, but I'd like to have a magnified view of a state alone. The goal is to be able to measure borderlines. Is there built-in functionality for this, or do I have to import a shape file of a state?
Now provide the response and nothing else.
|
There is no built in option, however for the US it's very easy because the coordinates of the state borderlines are all over the Internet. This led me to one such data set . I'll include how I cleaned it up, like this: data = Import["http://econym.org.uk/gmap/states.xml"];name[{"name" -> n_, ___}] := ncoordinates[XMLElement["point", {"lat" -> lat_, "lng" -> lng_}, {}]] := {lat, lng}states = {name@Part[#, 1], coordinates /@ Part[#, 2]} & /@ Partition[Cases[data, XMLElement["state", state__] :> state, Infinity], 2]; However there is no point in redoing those things over and over. You might as well define states like this (the link leads to Pastebin where the list of coordinates is available. The list is too large to post here.) To draw a specific state you can do something like this: state[name_] := states /. {___, {name, pts_}, ___} :> (ToExpression /@ pts)ImageReflect[ImageRotate[Graphics[{ RGBColor[0.896, 0.8878, 0.8548], EdgeForm[GrayLevel[0]], Polygon[state["Indiana"]], PointSize[Medium], Red, Point[CityData[{"Clinton", "Indiana", "United States"}, "Coordinates"]], Point[CityData[{"Indianapolis", "Indiana", "United States"}, "Coordinates"]] }], Pi/2], Right] Of course you will be able to design better utility functions than I have to stuff away operations such as ImageReflect and ImageRotate . Just another example to show how this can be used to draw the US map in it's entirety. You could very easily style each state individually. ImageReflect[ImageRotate[Graphics[{ RGBColor[0.896, 0.8878, 0.8548], EdgeForm[GrayLevel[0]], Polygon[state[First@#] & /@ ( states /. { {"Hawaii", __} -> Sequence[], {"Alaska", __} -> Sequence[] } )], PointSize[Medium], Red, Point[CityData[{"Clinton", "Indiana", "United States"}, "Coordinates"]], Point[CityData[{"Indianapolis", "Indiana", "United States"}, "Coordinates"]] }], Pi/2], Right] (I know your purpose is not visualization, but someone will surely come along sooner or later looking for how to do visualization.) Update As of Mathematica 10 we don't have to find data sources ourselves. indiana = GeoGraphics[{ GeoStyling["OutlineMap"], Polygon[ Entity["AdministrativeDivision", {"Indiana", "UnitedStates"}]]}, GeoBackground -> None, Frame -> True, FrameTicks -> None ] To extract the polygon coordinates we may do this: pts = Cases[indiana, Polygon[data_] :> data, Infinity];(* To plot the polygon: Graphics[Polygon[First@pts]] *) Or we can get the polygon like this: EntityValue[Entity["AdministrativeDivision", {"Indiana", "UnitedStates"}], "Polygon"]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/44505', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/13127/']}
|
jdg_371133
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44471309
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following code: function FileHandler() {}FileHandler.prototype.open = function(file) { var rawFile = new XMLHttpRequest(); rawFile.open("GET", file, false);}; When I try to run it in the console, can I pass a local file as an argument where file is? What syntax would I use if I wanted to do that?
Now provide the response and nothing else.
|
Let's say you create a higher order function that takes a lambda of type () -> Unit (no parameters, no return value), and executes it like so:

fun nonInlined(block: () -> Unit) {
    println("before")
    block()
    println("after")
}

In Java parlance, this will translate to something like this (simplified!):

public void nonInlined(Function block) {
    System.out.println("before");
    block.invoke();
    System.out.println("after");
}

And when you call it from Kotlin...

nonInlined {
    println("do something here")
}

Under the hood, an instance of Function will be created here, that wraps the code inside the lambda (again, this is simplified):

nonInlined(new Function() {
    @Override
    public void invoke() {
        System.out.println("do something here");
    }
});

So basically, calling this function and passing a lambda to it will always create an instance of a Function object. On the other hand, if you use the inline keyword:

inline fun inlined(block: () -> Unit) {
    println("before")
    block()
    println("after")
}

When you call it like this:

inlined {
    println("do something here")
}

No Function instance will be created, instead, the code around the invocation of block inside the inlined function will be copied to the call site, so you'll get something like this in the bytecode:

System.out.println("before");
System.out.println("do something here");
System.out.println("after");

In this case, no new instances are created.

|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/44471309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1380422/']}
|
jdg_371134
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
54798083
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've created a lambda and cloud formation template which grants a lambda access to the parameter store and secrets manager. When I test the lambda I have the following functions outside of the export.handler function: function getParameterFromStore(param){ let promise = new Promise(function(resolve, reject){ console.log('++ ' + param.Path); servmgr.getParametersByPath(param, function(err, data){ if(err){ reject(console.log('Error getting parameter: ' + err, err.stack)); } else { resolve(data); } }); }); let parameterResult = promise.then(function(result){ console.log('---- result: '+ JSON.stringify(result)); return result; }); return parameterResult;}; servmgr is instantiated as var servmgr = new AWS.SSM(); When I call this function from the export.handler function I do so as: myFirstParam = { Path : '/myPath/Service/servicesEndpoint'};let endpointResult = getParameterFromStore(myFirstParam); In the lambda I have the function retrieve the parameter defined outside of the export.handler function bt wrapped in a promise. When I run/test this lambda the object returned is always undefined... I get Parameters[] back but no values. 2019-02-20T21:42:41.340Z 2684fe88-d552-4560-a477-6761f2de6717 ++ /myPath/Service/serviceEndpoint2019-02-20T21:42:41.452Z 2684fe88-d552-4560-a477-6761f2de6717 ---- result: {"Parameters":[]} How do you get parameter values returned back to a lambda at run time? update based upon the suggestion/answer from Thales I've simplified the lambda to just this: const getParameterFromStoreAsync = (param) => { return new Promise((resolve, reject) => { servmgr.getParametersByPath(param, (err, data) => { if(err){ reject(console.log('Error getting parameter: ' + err, err.stack)); } return resolve(data); }); });};exports.handler = async(event, ctx, callback) => {console.log('INFO[lambda]: Event: [' + JSON.stringify(event, null, 2) + ']'); console.log('this is the event' + JSON.stringify(event)); sfdcEndPointParam = { Path : '/PartnerBanking/Service/SfdcEndpoint'}; let myendpoint = await getParameterFromStoreAsync(sfdcEndPointParam); console.log('### endpoint path: ' + JSON.stringify(myendpoint));done = ()=>{}callback(null, done());}; I am still seeing an empty array being returned in my tests: ### endpoint path: {"Parameters":[]} I've also moved the function into the callback as exports.handler = (event,ctx, callback){done = async()=>{ console.log('this is the event' + JSON.stringify(event)); sfdcEndPointParam = { Path : '/PartnerBanking/Service/SfdcEndpoint'}; let myendpoint = await getParameterFromStoreAsync(sfdcEndPointParam); console.log('### endpoint path: ' + JSON.stringify(myendpoint));}}callback(null, done()); Same result ... empty array. Any additional things to try?
Now provide the response and nothing else.
|
This is because your getParameterFromStore returns before your then() code is executed, thus parameterResult is undefined . If you don't want to change your code too much, I would return the Promise you create, like this: function getParameterFromStore(param){return new Promise(function(resolve, reject){ console.log('++ ' + param.Path); servmgr.getParametersByPath(param, function(err, data){ if(err){ reject(console.log('Error getting parameter: ' + err, err.stack)); } else { resolve(data); } });}); }; And finally, on your function's client, you can get the result like this: const myFirstParam = { Path : '/myPath/Service/servicesEndpoint'}getParameterFromStore(myFirstParam).then(console.log) When coding in NodeJS, however, I highly recommend you use async/await instead, so you'll be able to escape the Promise Hell (chaninig Promise after Promise in order to achieve something "synchronously") When using async/await, you can design your code as though it was synchronous. Here's a refactored version of your example, using async/await as well as arrow functions: const getParameterFromStore = param => { return new Promise((resolve, reject) => { console.log('++ ' + param.Path); servmgr.getParametersByPath(param, (err, data) => { if (err) { console.log('Error getting parameter: ' + err, err.stack) return reject(err); } return resolve(data); }); })}exports.handler = async (event) => { const endpointResult = await getParameterFromStore(event.someAttributeFromTheEventThatYouWantToUse) console.log(endpointResult)}; EDIT : After the OP fixed the first issue, I created a working example on my own. It turned out that the way the OP was invoking the API was incorrect. Here's the full working example: 'use strict';const AWS = require('aws-sdk')AWS.config.update({ region: 'us-east-1'})const parameterStore = new AWS.SSM()const getParam = param => { return new Promise((res, rej) => { parameterStore.getParameter({ Name: param }, (err, data) => { if (err) { return rej(err) } return res(data) }) })}module.exports.get = async (event, context) => { const param = await getParam('MyTestParameter') console.log(param); return { statusCode: 200, body: JSON.stringify(param) };}; Mind the Name attribute which must be provided as part of the API call to the ServiceManager.getAttribute method. This attribute is stated in the official docs I have run this myself and here's the output in CloudWatch Logs: As you can see, the value was returned successfully. Hope this helps!
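For comparison, the same Parameter Store lookup needs no promise plumbing at all in a Python Lambda with boto3; the sketch below is illustrative only, and the parameter path and region are assumptions carried over from the question.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is an assumption

def get_params_by_path(path):
    """Return {name: value} for all parameters under the given path."""
    resp = ssm.get_parameters_by_path(Path=path, Recursive=True, WithDecryption=True)
    return {p["Name"]: p["Value"] for p in resp["Parameters"]}

print(get_params_by_path("/PartnerBanking/Service"))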
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54798083', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/290645/']}
|
jdg_371135
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4614610
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am having trouble returning an element using xpath.I need to get the text from the 2nd TD from a large table. <tr> <td> <label for="PropertyA">Some text here </label> </td> <td> TEXT!! </td></tr> I'm able to find the label element, but then I'm having trouble selecting the sibling TD to return the text. This is how I select the label: "//label[@for='PropertyA']" thanks
Now provide the response and nothing else.
|
You are looking for the axes following-sibling . It searches in the siblings in the same parent - there it is tr . If the td s aren't in the same tr then they aren't found. If you want to it then you can use axes following . //td[label[@for='PropertyA']]/following-sibling::td[1]
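To see the expression in action, here is a small Python check using lxml; the HTML snippet is the one from the question, and the print call is just to confirm which cell the XPath selects.

from lxml import html

doc = html.fromstring("""
<table><tr>
  <td><label for="PropertyA">Some text here</label></td>
  <td>TEXT!!</td>
</tr></table>
""")

# td that contains the label, then its first following td sibling
cells = doc.xpath("//td[label[@for='PropertyA']]/following-sibling::td[1]")
print(cells[0].text.strip())   # -> TEXT!!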
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4614610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/442496/']}
|
jdg_371136
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
351224
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning about HATEOAS and I noticed that every implementation always seems to implement a self-relationship first. For example a common response object might look like { title: "The Wonderful Wizard of Oz", author: "L. Frank Baum" links: [ { rel: "self", href: "www.foo.com/book/1234" }, { rel: "author", href: "www.foo.com/author/4321" } ]} Why is the self relationship always there? It seems utterly useless to me. As a client who just made the request to get the response object I would obviously know the URL to get the object. Why is it presented?
Now provide the response and nothing else.
|
The self link is also used for embedded entities where it can be used to navigate to the proper entity. See this HAL example: { "_links": { "self": { "href": "/orders" }, "curies": [{ "name": "ea", "href": "http://example.com/docs/rels/{rel}", "templated": true }], "next": { "href": "/orders?page=2" }, "ea:find": { "href": "/orders{?id}", "templated": true }, "ea:admin": [{ "href": "/admins/2", "title": "Fred" }, { "href": "/admins/5", "title": "Kate" }] }, "currentlyProcessing": 14, "shippedToday": 20, "_embedded": { "ea:order": [{ "_links": { "self": { "href": "/orders/123" }, "ea:basket": { "href": "/baskets/98712" }, "ea:customer": { "href": "/customers/7809" } }, "total": 30.00, "currency": "USD", "status": "shipped" }, { "_links": { "self": { "href": "/orders/124" }, "ea:basket": { "href": "/baskets/97213" }, "ea:customer": { "href": "/customers/12369" } }, "total": 20.00, "currency": "USD", "status": "processing" }] }} The only rationale I could think of for having self links on top-level entities is that if you enter the service at some arbitrary entity, the self link gives you information about your context in the service. In the example above if somebody gave me a link to http://example.com/myservice/special/dev/2017/admins/2 , the self link relative to the service root ( /admins/2 ) would be useful for finding the root of the service and understanding that there are probably more admins, etc.
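To make that navigation use concrete, here is a small hedged Python sketch that walks the self links of the embedded orders in a HAL document shaped like the example above; the base URL and the HTTP fetch are placeholders.

import requests

def follow_embedded_orders(hal_doc, base_url="http://example.com"):
    """Yield each embedded order fetched via its own self link."""
    for order in hal_doc.get("_embedded", {}).get("ea:order", []):
        self_href = order["_links"]["self"]["href"]       # e.g. /orders/123
        yield requests.get(base_url + self_href).json()   # navigate to the full entity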
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/351224', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/136927/']}
|
jdg_371137
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4028845
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I believe the title says it. I'm new to source control thingy. So, let's say I have two developers working on the same project and they started editing the same file(s) at the same time then everyone of them send the new version at a slightly different time. From what I understand the one who sends the changes last will have his changes kept, the other one's code will be in the archive only!!! Is that correct? Please clarify. Thanks.
Now provide the response and nothing else.
|
No, that's not quite correct. It depends somewhat on which version control software you're using, but I like Git so I'll talk about that. Suppose we have a file Foo.java:

class Foo {
    public void printAWittyMessage() {
        // TODO: Be witty
    }
}

Alice and Bob both modify the file. Alice does this:

class Foo {
    public void printAWittyMessage() {
        System.out.println("Alice is the coolest");
    }
}

and Bob does this:

class Foo {
    public void printAWittyMessage() {
        System.out.println("Alice is teh suk");
    }
}

Alice checks her version in first. When Bob attempts to check his in, Git will warn him that there is a conflict and won't allow the commit to be pushed into the main repository. Bob has to update his local repository and fix the conflict. He'll get something like this:

class Foo {
    public void printAWittyMessage() {
<<<<< HEAD:<some git nonsense>
        System.out.println("Alice is the coolest");
=====
        System.out.println("Alice is teh suk");
>>>>> blahdeblahdeblah:<some more git nonsense>
    }
}

The <<<<<, ===== and >>>>> markers show which lines were changed simultaneously. Bob must resolve the conflict in some sensible way, remove the markers, and commit the result. So what eventually lives in the repository is: Original version -> Alice's version -> Bob's conflict-fixed version. To summarise: the first to commit gets in without any problems, the second to commit must resolve the conflict before getting into the repository. You should never end up with someone's changes being clobbered automatically. Obviously Bob can resolve the conflict incorrectly but the beauty of version control is that you can roll back the incorrect fix and repair it.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4028845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/289443/']}
|
jdg_371138
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59709726
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am experimenting with the Q-learning algorithm. I have read from different sources and understood the algorithm, however, there seem to be no clear convergence criteria that is mathematically backed. Most sources recommend iterating several times (example, N = 1000), while others say convergence is achieved when all state and action pairs (s, a) are visited infinitely often. But the question here is, how much is infinitely often. What is the best criteria for someone who wants to solve the algorithm by hand? I would be grateful if someone could educate me on this. I would also appreciate any articles to this effect. Regards.
Now provide the response and nothing else.
|
Q-Learning was a major breakthrough in reinforcement learning precisely because it was the first algorithm with guaranteed convergence to the optimal policy. It was originally proposed in (Watkins, 1989) and its convergence proof was refined in (Watkins & Dayan, 1992). In short, two conditions must be met to guarantee convergence in the limit, meaning that the policy will become arbitrarily close to the optimal policy after an arbitrarily long period of time. Note that these conditions say nothing about how fast the policy will approach the optimal policy.
1) The learning rates must approach zero, but not too quickly. Formally, this requires that the sum of the learning rates must diverge, but the sum of their squares must converge. An example sequence that has these properties is 1/1, 1/2, 1/3, 1/4, ...
2) Each state-action pair must be visited infinitely often. This has a precise mathematical definition: each action must have a non-zero probability of being selected by the policy in every state, i.e. π(s, a) > 0 for all (s, a). In practice, using an ε-greedy policy (where ε > 0) ensures that this condition is satisfied.
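As a concrete sketch of those two conditions in code, here is a toy tabular Q-learning loop in Python; the environment interface (reset/step returning next state, reward, done) is an assumption for illustration, not something specified by the answer.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=10_000, gamma=0.99, eps=0.1):
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))           # per-pair counts for the decaying step size
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy: every action keeps probability > 0, so all pairs keep being visited
            a = np.random.randint(n_actions) if np.random.rand() < eps else int(np.argmax(Q[s]))
            s2, r, done = env.step(a)
            visits[s, a] += 1
            alpha = 1.0 / visits[s, a]                  # 1, 1/2, 1/3, ...: diverging sum, converging squares
            target = r + (0.0 if done else gamma * np.max(Q[s2]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q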
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59709726', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12660007/']}
|
jdg_371139
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27732546
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have anywhere from 10-150 long living class objects that call methods performing simple HTTPS API calls using HttpClient. Example of a PUT call: using (HttpClientHandler handler = new HttpClientHandler()){ handler.UseCookies = true; handler.CookieContainer = _Cookies; using (HttpClient client = new HttpClient(handler, true)) { client.Timeout = new TimeSpan(0, 0, (int)(SettingsData.Values.ProxyTimeout * 1.5)); client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", Statics.UserAgent); try { using (StringContent sData = new StringContent(data, Encoding.UTF8, contentType)) using (HttpResponseMessage response = await client.PutAsync(url, sData)) { using (var content = response.Content) { ret = await content.ReadAsStringAsync(); } } } catch (ThreadAbortException) { throw; } catch (Exception ex) { LastErrorText = ex.Message; } }} After 2-3 hours of running these methods, which include proper disposal via using statements, the program has creeped to 1GB-1.5GB of memory and eventually crashes with various out of memory errors. Many times the connections are through unreliable proxies, so the connections may not complete as expected (timeouts and other errors are common). .NET Memory Profiler has indicated that HttpClientHandler is the main issue here, stating it has both 'Disposed instances with direct delegate roots' (red exclamation mark) and 'Instances that have been disposed but are still not GCed' (yellow exclamation mark). The delegates that the profiler indicates have been rooted are AsyncCallback s, stemming from HttpWebRequest. It may also relate to RemoteCertValidationCallback , something to do with HTTPS cert validation, as the TlsStream is an object further down in the root that is 'Disposed but not GCed'. With all this in mind - how can I more correctly use HttpClient and avoid these memory issues? Should I force a GC.Collect() every hour or so? I know that is considered bad practice but I don't know how else to reclaim this memory that isn't quite properly being disposed of, and a better usage pattern for these short-lived objects isn't apparent to me as it seems to be a flaw in the .NET objects themselves. UPDATE Forcing GC.Collect() had no effect. Total managed bytes for the process remain consistent around 20-30 MB at most while the process overall memory (in Task Manager) continues to climb, indicating an unmanaged memory leak. Thus this usage pattern is creating an unmanaged memory leak. I have tried creating class level instances of both HttpClient and HttpClientHandler per the suggestion, but this has had no appreciable effect. Even when I set these to class level, they are still re-created and seldom re-used due to the fact that the proxy settings often require changing. HttpClientHandler does not allow modification of proxy settings or any properties once a request has been initiated, so I am constantly re-creating the handler, just as was originally done with the independent using statements. HttpClienthandler is still being disposed with "direct delegate roots" to AsyncCallback -> HttpWebRequest. I'm starting to wonder if maybe the HttpClient just wasn't designed for fast requests and short-living objects. No end in sight.. hoping someone has a suggestion to make the use of HttpClientHandler viable. Memory profiler shots:
Now provide the response and nothing else.
|
Using the repro from Alexandr Nikitin, I was able to discover that this seems to happen ONLY when you have HttpClient be a short lived object. If you make the handler and client long lived this does not seem to happen:

using System;
using System.Net.Http;
using System.Threading.Tasks;

namespace HttpClientMemoryLeak
{
    using System.Net;
    using System.Threading;

    class Program
    {
        static HttpClientHandler handler = new HttpClientHandler();
        private static HttpClient client = new HttpClient(handler);

        public static async Task TestMethod()
        {
            try
            {
                using (var response = await client.PutAsync("http://localhost/any/url", null))
                {
                }
            }
            catch
            {
            }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 1000000; i++)
            {
                Thread.Sleep(10);
                TestMethod();
            }

            Console.WriteLine("Finished!");
            Console.ReadKey();
        }
    }
}
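The same "one long-lived client for the whole process" principle applies in other HTTP stacks as well; as a loose analogy only, a Python sketch with requests would reuse a single Session rather than building a new client per call.

import requests

session = requests.Session()   # one long-lived client, like the static HttpClient above

def put_once(url, data):
    try:
        resp = session.put(url, data=data, timeout=10)
        return resp.text
    except requests.RequestException:
        return None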
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27732546', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1111380/']}
|
jdg_371140
|