| column | type | length / values |
| --- | --- | --- |
| source | stringclasses | 1 value |
| task_type | stringclasses | 1 value |
| in_source_id | stringlengths | 1–8 |
| prompt | stringlengths | 209–40.4k |
| gold_standard_solution | stringlengths | 0–56.7k |
| verification_info | stringclasses | 1 value |
| metadata | stringlengths | 138–225 |
| problem_id | stringlengths | 9–10 |
stackexchange
llm_judgeable_groundtruth_similarity
18041622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I am a novice with Win32, and I have been pursuing a problem (if it can be called a problem at all) with Windows blocking your program's flow during the event when a user grabs the window title bar and moves it around the screen. I have no legitimate reason to solve this problem, except that it bothers me. A few possibilities include removing the frame altogether, but it seems an inconvenient hack. Some games (single player) do not find this a problem at all. I have read, however, that multiplayer games might experience problems when the program freezes, as they expect a continuous flow of information and can be overwhelmed after such a delay.

I have tried adding this to my WindowProc:

```c
switch (uMsg) {
case WM_SYSCOMMAND:
    if (wParam == SC_CLOSE)
        PostQuitMessage(0);
    return 0;
...
...
default:
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
return 0;
```

And this seems a quick hack, except that when I mouse-down over the close icon I can pull the mouse away and let go without closing the program, and during that time, while the close icon is held down, the program once again is blocked. Furthermore, I do not know how to manually include the code necessary to move the window when the user clicks the title bar and drags the mouse. For starters, I do not know which uMsg's and wParam's to handle.

My question is then: how do I disallow blocking during the case when the user clicks down the exit button (or minimize/maximize buttons) while still handling the case when the mouse is clicked and released over the button, and how do I allow the user to move/drag the window without it blocking the program (or what message is sent when the title bar is clicked without it being a button or menu)? I am creating the window with WS_SYSMENU | WS_MINIMIZEBOX. I still want the program to respond to minimize, maximize, and exit commands. If multi-threading can fix it, then that is interesting, but I wonder if I can get it to work on single-core processors. And I have read about hooks, but the MSDN page is still hard for me to interpret.

Now provide the response and nothing else.
Why Is My App Freezing? An Introduction to Message Loops & Threads

This phenomenon is not isolated to any particular message. It's a fundamental property of the Windows message loop: when one message is being processed, no other message can be processed at the same time. It's not exactly implemented this way, but you can think of it as a queue, where your app pulls the messages out of the queue to process them in the order that they were inserted. Therefore, spending too long processing any message is going to suspend the processing of other messages, effectively freezing your application (because it cannot process any input). The only way to solve this problem is the obvious one: don't spend too long processing any one message.

Often that will mean delegating the processing to a background thread. You will still need to handle all messages on the main thread, and the background worker threads need to report back to the main thread when they are finished. All interaction with the GUI needs to happen on a single thread, and that is almost always the main thread in your application (which is why it is often called the UI thread).

(And to answer an objection raised in your question: yes, you can operate multiple threads on single-processor machines. You won't necessarily see any performance improvements, but it will make the UI more responsive. The logic here is that a thread can only do one thing at a time, but a processor can switch between threads extremely rapidly, effectively simulating doing more than one thing at a time.)

More useful information is available here in this MSDN article: Preventing Hangs in Windows Applications

Special Cases: Modal Event Processing Loops

Certain window operations on Windows are modal operations. Modal is a common word in computing that basically refers to locking the user into a particular mode where they cannot do anything else until they change (i.e. get out of that) mode. Whenever a modal operation is begun, a separate new message processing loop is spun up and message handling happens there (instead of your main message loop) for the duration of the mode. Common examples of these modal operations are drag-and-drop, window resizing, and message boxes.

Considering the example here of window resizing: your window receives a WM_NCLBUTTONDOWN message, which you pass to DefWindowProc for default processing. DefWindowProc figures out that the user intends to start a move or resize operation, and enters a moving/sizing message loop located somewhere deep in the bowels of Windows' own code. Thus, your application's message loop is no longer running because you've entered into a new moving/sizing mode. Windows runs this moving/sizing loop as long as the user is interactively moving/sizing the window. It does this so that it can intercept mouse messages and process them accordingly. When the moving/sizing operation completes (e.g., when the user releases the mouse button or presses the Esc key), control will return to your application code.

It is worth pointing out that you are notified that this mode change has occurred via the WM_ENTERSIZEMOVE message; the corresponding WM_EXITSIZEMOVE message indicates that the modal event-processing loop has exited. That allows you to create a timer that will continue to generate WM_TIMER messages that your application can process. The actual details of how this is implemented are relatively unimportant, but the quick explanation is that DefWindowProc continues to dispatch WM_TIMER messages to your application inside of its own modal event processing loop. Use the SetTimer function to create a timer in response to the WM_ENTERSIZEMOVE message, and the KillTimer function to destroy it in response to the WM_EXITSIZEMOVE message. I only point that out for completeness, though. In the majority of Windows apps that I've written, I've never needed to do that.

So, What Is Wrong With My Code?

Aside from all of that, the behavior you describe in the question is unusual. If you create a new, blank Win32 application using the Visual Studio template, I doubt you will be able to replicate this behavior. Without seeing the rest of your window procedure, I can't tell if you're blocking on any messages (as discussed above), but the part I can see in the question is wrong.

You must always call DefWindowProc for messages that you do not explicitly process yourself. In this case, you might be fooled into thinking that you're doing that, but WM_SYSCOMMAND can have lots of different values for its wParam. You only handle one of those, SC_CLOSE. All of the rest of them just get ignored because you return 0. That includes all of the window moving and resizing functionality (e.g. SC_MOVE, SC_SIZE, SC_MINIMIZE, SC_RESTORE, SC_MAXIMIZE, etc.). And there's really no good reason to handle WM_SYSCOMMAND yourself; just let DefWindowProc take care of it for you. The only time you need to handle WM_SYSCOMMAND is when you've added custom items to the window menu, and even then, you should pass every command that you do not recognize on to DefWindowProc.

A basic window procedure should look like this:

```c
LRESULT CALLBACK WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch (uMsg)
    {
    case WM_CLOSE:
        DestroyWindow(hWnd);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}
```

It is also possible that your message loop is wrong. The idiomatic Win32 message loop (located near the bottom of your WinMain function) looks like this:

```c
BOOL ret;
MSG msg;
while ((ret = GetMessage(&msg, nullptr, 0, 0)) != 0)
{
    if (ret != -1)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    else
    {
        // An error occurred! Handle it and bail out.
        MessageBox(nullptr, L"Unexpected Error", nullptr, MB_OK | MB_ICONERROR);
        return 1;
    }
}
```

You do not need hooks of any kind. The MSDN documentation on these is very good, but you're right: they're complicated. Stay away until you have a better understanding of the Win32 programming model. It is a rare case indeed where you need the functionality provided by a hook.
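To make that last point concrete, here is a minimal sketch of the timer approach described above. The timer ID and the 16 ms interval are arbitrary illustrative choices, not anything the API mandates; these cases go inside the window procedure's switch alongside the handlers shown above.

```c
#define IDT_SIZEMOVE_TIMER 1   // arbitrary timer ID for this example

case WM_ENTERSIZEMOVE:
    // DefWindowProc's modal loop still dispatches WM_TIMER messages,
    // so this timer keeps ticking while the user drags the window.
    SetTimer(hWnd, IDT_SIZEMOVE_TIMER, 16, nullptr);
    return 0;

case WM_EXITSIZEMOVE:
    KillTimer(hWnd, IDT_SIZEMOVE_TIMER);
    return 0;

case WM_TIMER:
    if (wParam == IDT_SIZEMOVE_TIMER)
    {
        // e.g., advance the game/simulation one step and repaint
        InvalidateRect(hWnd, nullptr, FALSE);
    }
    return 0;
```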
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18041622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1667131/']}
jdg_377741
stackexchange
llm_judgeable_groundtruth_similarity
29658240
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

The application uses JDK 8, Spring Boot & Spring Boot Jersey starter and is packaged as a WAR (although it is locally run via the Spring Boot Maven plugin). What I would like to do is to get the documentation I generate on the fly (at build time) as a welcome page. I tried several approaches:

- letting Jersey serve the static contents by configuring in application.properties the proper init parameter as described here
- introducing a metadata-complete=false web.xml in order to list the generated HTML document as a welcome-file.

None of that worked out. I would like to avoid having to enable Spring MVC or creating a Jersey resource just for serving a static file. Any idea?

Here is the Jersey configuration class (I unsuccessfully tried to add a ServletProperties.FILTER_STATIC_CONTENT_REGEX there):

```java
@ApplicationPath("/")
@ExposedApplication
@Component
public class ResourceConfiguration extends ResourceConfig {

    public ResourceConfiguration() {
        packages("xxx.api");
        packages("xxx.config");
        property(ServerProperties.BV_DISABLE_VALIDATE_ON_EXECUTABLE_OVERRIDE_CHECK, true);
        property(ServerProperties.BV_SEND_ERROR_IN_RESPONSE, true);
    }
}
```

And here is the Spring Boot application class (I tried adding an application.properties with spring.jersey.init.jersey.config.servlet.filter.staticContentRegex=/.*html but it didn't work; I'm not exactly sure what the property key should be here):

```java
@SpringBootApplication
@ComponentScan
@Import(DataConfiguration.class)
public class Application extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Application.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```

Now provide the response and nothing else.
Let me just first state that the reason the static content won't be served is because of the default servlet mapping of the Jersey servlet, which is /*, and hogs up all the requests. So the default servlet that serves the static content can't be reached. Besides the solution below, the other solution is to simply change the servlet mapping. You can do that by either annotating your ResourceConfig subclass with @ApplicationPath("/another-mapping") or setting the application.properties property spring.jersey.applicationPath.

In regards to your first approach, take a look at the Jersey ServletProperties. The property you are trying to configure is FILTER_STATIC_CONTENT_REGEX. It states:

> The property is only applicable when Jersey servlet container is configured to run as a Filter, otherwise this property will be ignored

Spring Boot by default configures the Jersey servlet container as a Servlet (as mentioned here):

> By default Jersey will be set up as a Servlet in a @Bean of type ServletRegistrationBean named jerseyServletRegistration. You can disable or override that bean by creating one of your own with the same name. You can also use a Filter instead of a Servlet by setting spring.jersey.type=filter (in which case the @Bean to replace or override is jerseyFilterRegistration).

So just set the property spring.jersey.type=filter in your application.properties, and it should work. I've tested this. And FYI, whether configured as a Servlet Filter or a Servlet, as far as Jersey is concerned, the functionality is the same.

As an aside, rather than using the FILTER_STATIC_CONTENT_REGEX, where you need to set up some complex regex to handle all static files, you can use the FILTER_FORWARD_ON_404. This is actually what I used to test. I just set it up in my ResourceConfig:

```java
@Component
public class JerseyConfig extends ResourceConfig {

    public JerseyConfig() {
        packages("...");
        property(ServletProperties.FILTER_FORWARD_ON_404, true);
    }
}
```
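For reference, a sketch of the corresponding application.properties (assuming no other Jersey settings are in play; the /api path in the commented-out alternative is a hypothetical choice, not something the question requires):

```properties
# Run Jersey as a Filter instead of a Servlet so that the
# ServletProperties.FILTER_* options above are honored
spring.jersey.type=filter

# Alternative approach: move Jersey off the root mapping entirely,
# leaving "/" free for the container's default (static) servlet
#spring.jersey.applicationPath=/api
```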
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29658240', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/277128/']}
jdg_377742
stackexchange
llm_judgeable_groundtruth_similarity
82243
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question:

I'm building RPMs in a continuous integration environment, and I need to distinguish between successive builds, but obviously the spec file does not get modified on each checkin. The spec file header contains something like

```
Version: 1.0beta7
Release: 1.1
```

I'd like to override it so that the built RPM will have a qualifier appended to the version, e.g.

```
Version: 1.0.beta7.2913
```

Assuming that the qualifier is already available as an environment variable, what's the easiest way of appending a qualifier to the version?

Now provide the response and nothing else.
Fairly easy:

```
Version: 1.0.beta7.%(echo $ENVVAR)
```

You can run whatever command you want inside %(); just make sure there's no whitespace in the output. Personally, I think it's a touch cleaner to stick it in a variable at the top:

```
%define buildnumber %(whatever command to get the build number)
Version: 1.0.beta7.%{buildnumber}
```

Or to make it conditional on the variable being set:

```
Version: 1.0.beta7%{?buildnumber:.%{buildnumber}}
```

That should only print the '.' if %buildnumber is set.
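As a quick end-to-end sketch (the BUILD_NUMBER variable name and the package name are hypothetical; substitute whatever your CI system exports):

```sh
# Spec header, with a shell fallback of 0 when the variable is unset:
#   %define buildnumber %(echo ${BUILD_NUMBER:-0})
#   Version: 1.0.beta7.%{buildnumber}

export BUILD_NUMBER=2913
rpmbuild -ba mypackage.spec
# yields mypackage-1.0.beta7.2913-1.1.<arch>.rpm
```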
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/82243', 'https://serverfault.com', 'https://serverfault.com/users/4947/']}
jdg_377743
stackexchange
llm_judgeable_groundtruth_similarity
32940684
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I'm using bootstrap and jQuery CDNs for my web app, and I'm just wondering: is there a way to avoid having to go to a bunch of my files and paste the CDN links and scripts, and instead just put it all in one place? Let's say that I can't save bootstrap or jQuery locally or make the web app a single-page web app. I believe Require.js can do this, but I'm not sure how, or are there other JavaScript libraries that can do this?

Now provide the response and nothing else.
git subtree split doesn't appear to offer an option for what you want (but it sounds useful, so maybe you could contribute one to the project!) So, there are two ways to do this, depending what you want.

1) Export a single directory (simpler option)

This takes advantage of the fact you want to move to another repo, so we can extract the subtree, and then relocate it in separate steps.

Use git subtree split to extract the files you want to an intermediate branch in your repository (you have already done this):

```sh
git subtree split -P lib3 -b new-branch
```

Create a new, empty repository:

```sh
git init lib3-repo
cd lib3-repo
git commit --allow-empty -m 'Initial commit'
```

Add the contents of the intermediate branch as a subtree:

```sh
git subtree add -P lib3 repo new-branch
```

This should rewrite the history again and reinsert the missing directory level. Every time you want to exchange history between the two repos you'll have to go through the intermediate branch (i.e. subtree split, then subtree pull), but it ought to work.

2) Export any set of files (more complex)

To keep multiple, specific subtrees, you'll need git filter-branch. There are lots of ways to pick and choose which commits and files to keep or discard, but this recipe uses --index-filter to select files without having any access to the contents of the files.

To keep all files in the "lib3" and "src/core" directories, without editing their locations in any way:

```sh
git checkout -b new-branch
git filter-branch --index-filter \
    'git ls-files \
        | grep -v "^lib3/\|^src/core/" \
        | xargs --no-run-if-empty git rm --cached' \
    HEAD
```

The filter code is a shell script that edits the Git index (we're using --index-filter, remember).

- git ls-files is the same as ls except that it lists files in the repo, not in the working tree.
- grep -v <pattern> gives everything that does not match the pattern, and \| in the pattern is an alternative, so we get the list of files to delete.
- xargs --no-run-if-empty runs a command for each filename in the input from the pipe (unless there aren't any).
- git rm --cached deletes files from the index.

This creates a branch (new-branch) that has the filtered files you want. You can import them into another repo using a normal pull command:

```sh
git init new-repo
cd new-repo
git remote add origin /path/to/old-repo
git pull origin new-branch
```
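For the single-directory route, a sketch of the round trip mentioned above for when the two repos need to stay in sync later (all paths here are hypothetical placeholders):

```sh
# Back in the original repo: regenerate the intermediate branch
cd /path/to/old-repo
git subtree split -P lib3 -b new-branch

# In the extracted repo: pull the refreshed history into its subtree
cd /path/to/lib3-repo
git subtree pull -P lib3 /path/to/old-repo new-branch
```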
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32940684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3128376/']}
jdg_377744
stackexchange
llm_judgeable_groundtruth_similarity
62210294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

As part of learning how to integrate OneLogin SSO in my ColdFusion app, I pulled this git repo - https://github.com/GiancarloGomez/ColdFusion-OneLogin - and set it up locally. But, while sending the auth request to OneLogin, we are getting an error message saying "We're sorry, but something went wrong. We've been notified about this issue and we'll take a look at it shortly." I could not find the root cause of this issue. Appreciate your timely help on this.

Configuration on OneLogin looks like below. Note that I modified the consumer URL to http://127.0.0.1:8500/coldfusion-onelogin/consume.cfm instead of the actual format mentioned (http://127.0.0.1:8500/coldfusion-onelogin/consume/) in the YouTube video provided in the readme file of this git repo. I had tried changing the consumer URL format to http://127.0.0.1:8500/coldfusion-onelogin/consume/ but we are still getting the error message.

The Access Tab in OneLogin looks like below.

Below is the code which sends the auth request to OneLogin:

```cfm
<cfscript>
try {
    // used to encode string - chose to use Java version just in case CF did not encode correctly
    // encodeForURL appears to work but to keep the same as the samples from OneLogin I will use the Java reference
    urlEncoder = createObject("java","java.net.URLEncoder");
    // the appSettings object contain application specific settings used by the SAML library
    appSettings = createObject("java","com.onelogin.AppSettings");
    // set the URL of the consume file for this app. The SAML Response will be posted to this URL
    appSettings.setAssertionConsumerServiceUrl(request.company.getConsumeUrl());
    // set the issuer of the authentication request. This would usually be the URL of the issuing web application
    appSettings.setIssuer(request.company.getIssuerUrl());
    // the accSettings object contains settings specific to the users account.
    accSettings = createObject("java","com.onelogin.AccountSettings");
    // The URL at the Identity Provider where to the authentication request should be sent
    accSettings.setIdpSsoTargetUrl("https://app.onelogin.com/saml/signon/" & request.company.getIssuerID());
    // Generate an AuthRequest and send it to the identity provider
    authReq = createObject("java","com.onelogin.saml.AuthRequest").init(appSettings, accSettings);
    // now send to one login
    location( accSettings.getIdp_sso_target_url() & "?SAMLRequest=" & authReq.getRidOfCRLF(urlEncoder.encode(authReq.getRequest(authReq.base64),"UTF-8")), false);
} catch(Any e) {
    writeDump(e);
}
</cfscript>
```

Below is the format of the auth request URL: https://app.onelogin.com/saml/signon/[issuerId]?SAMLRequest=[SamlRequest]. I am not providing the actual URL here since I am not sure whether someone can tamper with it or not, but please do let us know if it is really required to solve this issue.

Below is the screenshot of the SAML Login Page; from here I am clicking on the button and sending the auth request to OneLogin.

Also, in index.cfm, the form's action attribute is "/post/". Since it was throwing an error, I had to replace it with "/coldfusion-onelogin/post.cfm". Here coldfusion-onelogin is a folder under wwwroot. Are there any settings in ColdFusion to be modified so that it will not throw any error if we keep the form action attribute as "/post/"?

Now provide the response and nothing else.
I recommend you use type definitions more explicitly. For example, with the React DefinitelyTyped typings, you can define a ref exotic component with ForwardRefRenderFunction instead of FC:

```tsx
type CountdownProps = {};

type CountdownHandle = {
  start: () => void,
};

const Countdown: React.ForwardRefRenderFunction<CountdownHandle, CountdownProps> = (
  props,
  forwardedRef,
) => {
  React.useImperativeHandle(forwardedRef, () => ({
    start() {
      alert('Start');
    },
  }));

  return <div>Countdown</div>;
};

export default React.forwardRef(Countdown);
```

and then use the React utility ElementRef; TypeScript can infer the exact ref type of your component:

```tsx
const App: React.FC = () => {
  // this will be inferred as `CountdownHandle`
  type CountdownHandle = React.ElementRef<typeof Countdown>;

  const ref = React.useRef<CountdownHandle>(null); // assigning null makes it compatible with elements

  return (
    <Countdown ref={ref} />
  );
};
```
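To round this out, a small sketch of actually invoking the imperative handle from the parent; the button is a hypothetical trigger, and the optional chaining guards against the ref being null before the child mounts:

```tsx
const App: React.FC = () => {
  const ref = React.useRef<React.ElementRef<typeof Countdown>>(null);

  return (
    <>
      <Countdown ref={ref} />
      {/* calls the start() exposed via useImperativeHandle */}
      <button onClick={() => ref.current?.start()}>Start</button>
    </>
  );
};
```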
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/62210294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3742125/']}
jdg_377745
stackexchange
llm_judgeable_groundtruth_similarity
544945
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question:

I deployed an application a while ago and it is currently in production. I learned about and attempted to set up code reloading. I read and followed the documentation, and now I'm looking for a way to check whether I have configured everything correctly without making an obvious change to the production website. How can I go about testing this?

Now provide the response and nothing else.
Yes, you can!

For general help with SmartOS: Google treats SmartOS as a synonym for Solaris. Always use the search located at smartos.org; do not use Google directly. #SmartOS on Freenode and the mailing list are also invaluable resources.

Security concerns with a single IP setup at a datacenter: In SmartOS you typically have an admin interface (private/LAN) and a public one (public/WAN). I realize this may not be possible in your deployment, but it's very important to know that KVM virtual hosts will run unprotected VNC on the admin interface. You will need to secure that using a firewall or configuration change.

What you'll want to do is set up a zone that will act as a firewall, gateway, load balancer, etc. That firewall zone will have two NICs: one on the admin interface, one on the WAN. You can place all of your guests on a virtual switch, which will be like having them connected to a physical one. Since you only have one IP, which I would try to rectify, this can be tricky to set up in a zone; if you mess up your network configuration you may lose access to the machine. Although generally not advised, you may need to run your firewall in the global zone.

For the guests to get out to the internet, set up NAT: http://wiki.smartos.org/display/DOC/NAT+using+Etherstubs

For incoming https/http: I run an nginx instance and use a file for each service/website in the sites-available/sites-enabled folders. The default configuration of NGINX with SmartOS is minimalistic and will not contain these folders.

For incoming tcp/udp services: You can use the built-in firewall (see man fwadm) in your firewall zone or use HAProxy if you want to do virtual hosts. (You could use HAProxy exclusively and eliminate NGINX.)
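As an illustration of the nginx setup described above, a hypothetical sites-available entry; the server name and the guest's internal address are placeholders for whatever your etherstub network uses:

```nginx
# Hypothetical sites-available/example.com: proxy HTTP traffic from
# the firewall zone to a guest on the internal etherstub network.
server {
    listen       80;
    server_name  example.com;

    location / {
        proxy_pass        http://10.0.0.10:8080;
        proxy_set_header  Host             $host;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
    }
}
```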
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/544945', 'https://serverfault.com', 'https://serverfault.com/users/147142/']}
jdg_377746
stackexchange
llm_judgeable_groundtruth_similarity
27633297
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

Is it possible to write C++ code where we rely on the return value optimization (RVO) when possible, but fall back on move semantics when not? For example, the following code can not use the RVO due to the conditional, so it copies the result back:

```cpp
#include <iostream>

struct Foo {
    Foo()              { std::cout << "constructor" << std::endl; }
    Foo(Foo && x)      { std::cout << "move" << std::endl; }
    Foo(Foo const & x) { std::cout << "copy" << std::endl; }
    ~Foo()             { std::cout << "destructor" << std::endl; }
};

Foo f(bool b) {
    Foo x;
    Foo y;
    return b ? x : y;
}

int main() {
    Foo x(f(true));
    std::cout << "fin" << std::endl;
}
```

This yields

```
constructor
constructor
copy
destructor
destructor
fin
destructor
```

which makes sense. Now, I could force the move constructor to be called in the above code by changing the line

```cpp
return b ? x : y;
```

to

```cpp
return std::move(b ? x : y);
```

This gives the output

```
constructor
constructor
move
destructor
destructor
fin
destructor
```

However, I don't really like to call std::move directly. Really, the issue is that I'm in a situation where I absolutely, positively, can not call the copy constructor even when the constructor exists. In my use case, there's too much memory to copy and although it'd be nice to just delete the copy constructor, it's not an option for a variety of reasons. At the same time, I'd like to return these objects from a function and would prefer to use the RVO. Now, I don't really want to have to remember all of the nuances of the RVO when coding and when it's applied and when it's not applied. Mostly, I want the object to be returned and I don't want the copy constructor called. Certainly, the RVO is better, but the move semantics are fine. Is there a way to get the RVO when possible and the move semantics when not?

Edit 1

The following question helped me figure out what's going on. Basically, 12.8.32 of the standard states:

> When the criteria for elision of a copy operation are met or would be met save for the fact that the source object is a function parameter, and the object to be copied is designated by an lvalue, overload resolution to select the constructor for the copy is first performed as if the object were designated by an rvalue. If overload resolution fails, or if the type of the first parameter of the selected constructor is not an rvalue reference to the object's type (possibly cv-qualified), overload resolution is performed again, considering the object as an lvalue. [ Note: This two-stage overload resolution must be performed regardless of whether copy elision will occur. It determines the constructor to be called if elision is not performed, and the selected constructor must be accessible even if the call is elided. —end note ]

Alright, so to figure out what the criteria for copy elision are, we look at 12.8.31:

> in a return statement in a function with a class return type, when the expression is the name of a non-volatile automatic object (other than a function or catch-clause parameter) with the same cv-unqualified type as the function return type, the copy/move operation can be omitted by constructing the automatic object directly into the function's return value

As such, if we define the code for f as:

```cpp
Foo f(bool b) {
    Foo x;
    Foo y;
    if (b) return x;
    return y;
}
```

Then, each of our return values is an automatic object, so 12.8.31 says that it qualifies for copy elision. That kicks over to 12.8.32, which says that the copy is performed as if it were an rvalue. Now, the RVO doesn't happen because we don't know a priori which path to take, but the move constructor is called due to the requirements in 12.8.32. Technically, one move constructor is avoided when copying into x. Basically, when running, we get:

```
constructor
constructor
move
destructor
destructor
fin
destructor
```

Turning off elide on constructors generates:

```
constructor
constructor
move
destructor
destructor
move
destructor
fin
destructor
```

Now, say we go back to

```cpp
Foo f(bool b) {
    Foo x;
    Foo y;
    return b ? x : y;
}
```

We have to look at the semantics for the conditional operator in 5.16.4:

> If the second and third operands are glvalues of the same value category and have the same type, the result is of that type and value category and it is a bit-field if the second or the third operand is a bit-field, or if both are bit-fields.

Since both x and y are lvalues, the conditional operator is an lvalue, but not an automatic object. Therefore, 12.8.32 doesn't kick in and we treat the return value as an lvalue and not an rvalue. This requires that the copy constructor be called. Hence, we get

```
constructor
constructor
copy
destructor
destructor
fin
destructor
```

Now, since the conditional operator in this case is basically copying out the value category, that means that the code

```cpp
Foo f(bool b) {
    return b ? Foo() : Foo();
}
```

will return an rvalue because both branches of the conditional operator are rvalues. We see this with:

```
constructor
fin
destructor
```

If we turn off elide on constructors, we see the moves:

```
constructor
move
destructor
move
destructor
fin
destructor
```

Basically, the idea is that if we return an rvalue we'll call the move constructor. If we return an lvalue, we'll call the copy constructor. When we return a non-volatile automatic object whose type matches that of the return type, we return an rvalue. If we have a decent compiler, these copies and moves may be elided with the RVO. However, at the very least, we know what constructor is called in case the RVO can't be applied.

Now provide the response and nothing else.
When the expression in the return statement is a non-volatile automatic duration object, and not a function or catch-clause parameter, with the same cv-unqualified type as the function return type, the resulting copy/move is eligible for copy elision. The standard also goes on to say that, if the only reason copy elision was forbidden was that the source object was a function parameter, and if the compiler is unable to elide a copy, the overload resolution for the copy should be done as if the expression was an rvalue. Thus, it would prefer the move constructor.

OTOH, since you are using the ternary expression, none of the conditions hold and you are stuck with a regular copy. Changing your code to

```cpp
if (b) return x;
return y;
```

calls the move constructor.

Note that there is a distinction between RVO and copy elision - copy elision is what the standard allows, while RVO is a technique commonly used to elide copies in a subset of the cases where the standard allows copy elision.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27633297', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1932452/']}
jdg_377747
stackexchange
llm_judgeable_groundtruth_similarity
12521798
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

From the python documentation docs.python.org/tutorial/introduction.html#strings:

> Slice indices have useful defaults; an omitted first index defaults to zero, an omitted second index defaults to the size of the string being sliced.

For the standard case, this makes a lot of sense:

```python
>>> s = 'mystring'
>>> s[1:]
'ystring'
>>> s[:3]
'mys'
>>> s[:-2]
'mystri'
>>> s[-1:]
'g'
>>>
```

So far, so good. However, using a negative step value seems to suggest slightly different defaults:

```python
>>> s[:3:-1]
'gnir'
>>> s[0:3:-1]
''
>>> s[2::-1]
'sym'
```

Fine, perhaps if the step is negative, the defaults reverse. An omitted first index defaults to the size of the string being sliced, an omitted second index defaults to zero:

```python
>>> s[len(s):3:-1]
'gnir'
```

Looking good!

```python
>>> s[2:0:-1]
'sy'
```

Whoops. Missed that 'm'. Then there is everyone's favorite string reverse statement. And sweet it is:

```python
>>> s[::-1]
'gnirtsym'
```

However:

```python
>>> s[len(s):0:-1]
'gnirtsy'
```

The slice never includes the value of the second index in the slice. I can see the consistency of doing it that way. So I think I am beginning to understand the behavior of slice in its various permutations. However, I get the feeling that the second index is somewhat special, and that the default value of the second index for a negative step can not actually be defined in terms of a number. Can anyone concisely define the default slice indices that can account for the provided examples? Documentation would be a huge plus.

Now provide the response and nothing else.
There actually aren't any defaults; omitted values are treated specially. However, in every case, omitted values happen to be treated in exactly the same way as None. This means that, unless you're hacking the interpreter (or using the parser, ast, etc. modules), you can just pretend that the defaults are None (as recursive's answer says), and you'll always get the right answers.

The informal documentation cited isn't quite accurate—which is reasonable for something that's meant to be part of a tutorial. For the real answers, you have to turn to the reference documentation.

For 2.7.3, Sequence Types describes slicing in notes 3, 4, and 5. For [i:j]:

> … If i is omitted or None, use 0. If j is omitted or None, use len(s).

And for [i:j:k]:

> If i or j are omitted or None, they become "end" values (which end depends on the sign of k). Note, k cannot be zero. If k is None, it is treated like 1.

For 3.3, Sequence Types has the exact same wording as 2.7.3.
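A quick way to convince yourself of the None equivalence, using the question's own string (a minimal sketch for any recent Python):

```python
s = 'mystring'

# Omitted values and None are interchangeable in slices:
assert s[::-1]  == s[None:None:-1] == 'gnirtsym'
assert s[:3]    == s[None:3]       == 'mys'
assert s[2::-1] == s[2:None:-1]    == 'sym'

# With a negative step there is no *number* that reproduces the
# omitted second index: stopping at 0 still excludes s[0] ...
assert s[len(s):0:-1] == 'gnirtsy'   # missing the 'm'
# ... while None (or omission) reaches all the way down.
assert s[len(s)::-1]  == 'gnirtsym'
```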
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12521798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/714293/']}
jdg_377748
stackexchange
llm_judgeable_groundtruth_similarity
15717
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question:

This is my first question. It appeared while solving a research problem in cryptography. I am a computer science student, so I apologize for the lack of mathematical rigor in this question. Thanks for any help.

Consider the Riemann zeta function at $s = 1$. It diverges, but the expression for the function is $\zeta(1) = \lim_{n \rightarrow \infty} \sum_{i = 1}^{n} \frac{1}{i}$, whose truncated sums are the $n$-th harmonic numbers $\mathcal{H}(n)$.

The question is: how about the expression $\zeta(1) = \lim_{n \rightarrow \infty} \prod_{\textrm{primes } p_i \leq n} \frac{1}{1-p_i^{-1}}$? Is the value of the truncated product $\mathcal{H}(n)$ too? My simulations for large values of $n$ tell me that it is some function of $\log n$ (for example, comparing the ratio of the function for $n$ and $n^2$ and $n^3$, etc.). How do we prove this?

In summary, what is the value of $\prod_{\textrm{primes } p_i \leq n} \frac{1}{1-p_i^{-1}}$? Thanks

Now provide the response and nothing else.
Formula (8) on this page gives the result $$\prod_{p \le n} \frac1{1-p^{-1}} = e^\gamma \log n \,(1 + o(1)).$$
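For a quick numerical check of that asymptotic (this is Mertens' third theorem), a small Python sketch; sympy's primerange is used purely for convenience:

```python
# Numerically compare prod_{p <= n} 1/(1 - 1/p) with e^gamma * log(n).
from math import exp, log
from sympy import primerange, EulerGamma

gamma = float(EulerGamma)  # Euler-Mascheroni constant

for n in (10**3, 10**4, 10**5, 10**6):
    prod = 1.0
    for p in primerange(2, n + 1):
        prod *= 1.0 / (1.0 - 1.0 / p)
    # the ratio should drift toward 1 as n grows
    print(n, prod / (exp(gamma) * log(n)))
```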
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/15717', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4074/']}
jdg_377749
stackexchange
llm_judgeable_groundtruth_similarity
48944819
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I am trying to open a geotiff file with PIL's Image function. It raises the error:

```
OSError: cannot identify image file 'Whatever\\image\\I\\use.tiff'
```

I saw the question asked here, for example; the solutions are to either use import Image instead of from PIL import Image, which is I think an outdated solution (I can't import Image), or to update pillow to 2.9 — but 5 years later we are on 5.0.0. I tried 4.0.0 as well, where I receive the same error. Is there an up-to-date solution for this? Here is my code, and here is a link to a file:

```python
image_path = 'each\\image\\I\\use.tiff'
from PIL import Image
Image.open(image_path)
```

Now provide the response and nothing else.
You have a multi-channel 32-bit TIFF image, and Pillow doesn't yet support that format. See issue #1888:

> Pillow (and PIL) is currently able to open 8 bit per channel multi-channel images (such as RGB) but is able to open higher bit depth images (e.g. I16, I32, or Float32 images) if they are single channel (e.g., grayscale). [...]
>
> Requirements
>
> We should be able to support common GIS formats as well as high bit depth RGB(A) images. At least 4 channels, but potentially more (see #1839) Different pixel formats, including I16, I32, and Float.

I determined this by using the TIFF plugin image reader directly, with debug mode enabled:

```python
from PIL import TiffImagePlugin
TiffImagePlugin.DEBUG = True
with open(image_path, 'rb') as f:
    TiffImagePlugin.TiffImageFile(f)
```

which includes the output:

```
tag: BitsPerSample (258) - type: short (3)
  Tag Location: 46 - Data Location: 218 - value: (32, 32, 32)
```

(full debug output below)

You can use the Python GDAL bindings to read this format. You can also use the gdal_translate command line utility to convert your files to a format that Pillow can handle; for multiband, you'd have to go down to 8 bits, or move to grayscale. For example, to translate your input file to PNG, you can use:

```
gdal_translate -of PNG rgb_CGI.tiff rgb_CGI.png
```

after which Pillow can open the PNG file.

Full debug output from Pillow's TIFF plugin:

```
>>> from PIL import TiffImagePlugin
>>> TiffImagePlugin.DEBUG = True
>>> with open(image_path, 'rb') as f:
...     TiffImagePlugin.TiffImageFile(f)
...
*** TiffImageFile._open ***
- __first: 8
- ifh: b'II*\x00\x08\x00\x00\x00'
Seeking to frame 0, on frame -1, __next 8, location: 8
Loading tags, location: 8
tag: ImageWidth (256) - type: short (3) - value: 2924
tag: ImageLength (257) - type: short (3) - value: 2088
tag: BitsPerSample (258) - type: short (3) Tag Location: 46 - Data Location: 218 - value: (32, 32, 32)
tag: Compression (259) - type: short (3) - value: 1
tag: PhotometricInterpretation (262) - type: short (3) - value: 1
tag: StripOffsets (273) - type: long (4) Tag Location: 82 - Data Location: 8576 - value: <table: 8352 bytes>
tag: SamplesPerPixel (277) - type: short (3) - value: 3
tag: RowsPerStrip (278) - type: short (3) - value: 1
tag: StripByteCounts (279) - type: long (4) Tag Location: 118 - Data Location: 224 - value: <table: 8352 bytes>
tag: PlanarConfiguration (284) - type: short (3) - value: 1
tag: ExtraSamples (338) - type: short (3) - value: (0, 0)
tag: SampleFormat (339) - type: short (3) Tag Location: 154 - Data Location: 16928 - value: (2, 2, 2)
tag: ModelPixelScaleTag (33550) - type: double (12) Tag Location: 166 - Data Location: 16934 - value: (0.25, 0.25, 0.0)
tag: ModelTiepointTag (33922) - type: double (12) Tag Location: 178 - Data Location: 16958 - value: <table: 48 bytes>
tag: GeoKeyDirectoryTag (34735) - type: short (3) Tag Location: 190 - Data Location: 17006 - value: <table: 72 bytes>
tag: GeoDoubleParamsTag (34736) - type: double (12) Tag Location: 202 - Data Location: 17078 - value: <table: 56 bytes>
tag: GeoAsciiParamsTag (34737) - type: string (2) Tag Location: 214 - Data Location: 17134 - value: Amersfoort / RD New|Amersfoort|
tag: ImageWidth (256) - type: short (3) - value: 2924
tag: ImageLength (257) - type: short (3) - value: 2088
tag: BitsPerSample (258) - type: short (3) Tag Location: 46 - Data Location: 218 - value: (32, 32, 32)
tag: Compression (259) - type: short (3) - value: 1
tag: PhotometricInterpretation (262) - type: short (3) - value: 1
tag: StripOffsets (273) - type: long (4) Tag Location: 82 - Data Location: 8576 - value: <table: 8352 bytes>
tag: SamplesPerPixel (277) - type: short (3) - value: 3
tag: RowsPerStrip (278) - type: short (3) - value: 1
tag: StripByteCounts (279) - type: long (4) Tag Location: 118 - Data Location: 224 - value: <table: 8352 bytes>
tag: PlanarConfiguration (284) - type: short (3) - value: 1
tag: ExtraSamples (338) - type: short (3) - value: (0, 0)
tag: SampleFormat (339) - type: short (3) Tag Location: 154 - Data Location: 16928 - value: (2, 2, 2)
tag: ModelPixelScaleTag (33550) - type: double (12) Tag Location: 166 - Data Location: 16934 - value: (0.25, 0.25, 0.0)
tag: ModelTiepointTag (33922) - type: double (12) Tag Location: 178 - Data Location: 16958 - value: <table: 48 bytes>
tag: GeoKeyDirectoryTag (34735) - type: short (3) Tag Location: 190 - Data Location: 17006 - value: <table: 72 bytes>
tag: GeoDoubleParamsTag (34736) - type: double (12) Tag Location: 202 - Data Location: 17078 - value: <table: 56 bytes>
tag: GeoAsciiParamsTag (34737) - type: string (2) Tag Location: 214 - Data Location: 17134 - value: Amersfoort / RD New|Amersfoort|
*** Summary ***
- compression: raw
- photometric_interpretation: 1
- planar_configuration: 1
- fill_order: 1
- size: (2924, 2088)
format key: (b'II', 1, (2, 2, 2), 1, (32, 32, 32), (0, 0))
- unsupported format
Traceback (most recent call last):
  File "/Users/mjpieters/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/PIL/TiffImagePlugin.py", line 1196, in _setup
    self.mode, rawmode = OPEN_INFO[key]
KeyError: (b'II', 1, (2, 2, 2), 1, (32, 32, 32), (0, 0))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/mjpieters/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/PIL/ImageFile.py", line 102, in __init__
    self._open()
  File "/Users/mjpieters/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/PIL/TiffImagePlugin.py", line 950, in _open
    self._seek(0)
  File "/Users/mjpieters/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/PIL/TiffImagePlugin.py", line 1017, in _seek
    self._setup()
  File "/Users/mjpieters/Development/venvs/stackoverflow-3.6/lib/python3.6/site-packages/PIL/TiffImagePlugin.py", line 1200, in _setup
    raise SyntaxError("unknown pixel mode")
SyntaxError: unknown pixel mode
```
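As for the GDAL route mentioned above, a minimal sketch with the Python bindings (assumes the osgeo package is installed; the file name is the one from the question):

```python
from osgeo import gdal

ds = gdal.Open('rgb_CGI.tiff')
print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)

# Each band is one 32-bit channel; ReadAsArray yields a numpy array.
band = ds.GetRasterBand(1)
arr = band.ReadAsArray()
print(arr.dtype, arr.shape)
```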
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48944819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5396681/']}
jdg_377750
stackexchange
llm_judgeable_groundtruth_similarity
1322380
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

Folks, is there a collection of gotchas where Numpy differs from python, points that have puzzled and cost time?

"The horror of that moment I shall never never forget!" "You will, though," the Queen said, "if you don't make a memorandum of it."

For example, NaNs are always trouble, anywhere. If you can explain this without running it, give yourself a point --

```python
from numpy import array, NaN, isnan

pynan = float("nan")
print pynan is pynan, pynan is NaN, NaN is NaN

a = (0, pynan)
print a, a[1] is pynan, any([aa is pynan for aa in a])

a = array(( 0, NaN ))
print a, a[1] is NaN, isnan( a[1] )
```

(I'm not knocking numpy, lots of good work there, just think a FAQ or Wiki of gotchas would be useful.)

Edit: I was hoping to collect half a dozen gotchas (surprises for people learning Numpy). Then, if there are common gotchas or, better, common explanations, we could talk about adding them to a community Wiki (where?). It doesn't look like we have enough so far.

Now provide the response and nothing else.
The biggest gotcha for me was that almost every standard operator is overloaded to distribute across the array.

Define a list and an array:

```python
>>> l = range(10)
>>> l
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> import numpy
>>> a = numpy.array(l)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```

Multiplication duplicates the python list, but distributes over the numpy array:

```python
>>> l * 2
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> a * 2
array([ 0,  2,  4,  6,  8, 10, 12, 14, 16, 18])
```

Addition and division are not defined on python lists:

```python
>>> l + 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "int") to list
>>> a + 2
array([ 2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
>>> l / 2.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for /: 'list' and 'float'
>>> a / 2.0
array([ 0. ,  0.5,  1. ,  1.5,  2. ,  2.5,  3. ,  3.5,  4. ,  4.5])
```

Numpy overloads to treat lists like arrays sometimes:

```python
>>> a + a
array([ 0,  2,  4,  6,  8, 10, 12, 14, 16, 18])
>>> a + l
array([ 0,  2,  4,  6,  8, 10, 12, 14, 16, 18])
```
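The same distribution applies to comparison operators, which is its own common gotcha when the result lands in an if statement; a small sketch extending the session above:

```python
>>> a == 3                # comparisons distribute elementwise too
array([False, False, False,  True, False, False, False, False, False, False])
>>> (a == 3).any()        # collapse to a single bool explicitly
True
>>> bool(a == 3)          # using the array directly as a bool fails
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```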
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1322380', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/86643/']}
jdg_377751
stackexchange
llm_judgeable_groundtruth_similarity
133409
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This is a crosspost from math.SE . Suppose $G$ and $H$ are discrete groups. Is it always the case that any finite dimensional complex representation of $G\times H$ is of the form$$\bigoplus_i V_i \otimes W_i,$$where $V_i, W_i$ are reps of $G$ and $H$, respectively? I know this is true when $G$ and $H$ are finite and when the representation of $G\times H$ is completely reducible, but is there a simple counterexample to the general case? I'm also curious if it is ``usually true," in some sense, that any rep of $G\times H$ has the above form. Now provide the response and nothing else.
Let $G = H = \mathbb{Z}$. Now a $G \times H$ representation is a pair of commuting invertible matrices. Let's try$$\left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \\\end{array}\right)\mbox{ and } \left(\begin{array}{cc} -1 & 1 \\ 0 & -1 \\\end{array}\right).$$Certainly this rep is indecomposable since it already is after restricting to either group. Why can't it be a tensor product of two representations of $\mathbb{Z}$? One representation would have to be one-dimensional, given by a scalar multiplication. But this is impossible since neither matrix is a scalar. I think it is true that there is an open set on the representation variety where things decompose into a sum of tensor products as you say but I do not see why exactly.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/133409', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4622/']}
jdg_377752
stackexchange
llm_judgeable_groundtruth_similarity
1092859
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I am refactoring some legacy code. The app was not using querystrings; the previous developer was hard coding some variables that the app uses in other places, like this (using VB.NET):

```vb
so.Cpage = "ContractChange.aspx"
```

My question is: can I programmatically set this value and include the current querystring? I want so.Cpage to be something like

```
ContractChange.aspx?d=1&b=2
```

Can I do this with the request object or something? Note, I don't need the domain.

Now provide the response and nothing else.
To get the current query string you would simply do something like the following:

```vb
Dim query as String = Request.QueryString("d")
```

This will assign the value of the "d" querystring to the string variable "query". Note that all query string values are strings, so if you're passing numbers around, you'll need to "cast" or convert those string values to numerics (be careful of exceptions when casting, though). For example:

```vb
Dim query as String = Request.QueryString("d")
Dim iquery as Integer = CType(query, Integer)
```

The QueryString property of the Request object is a collection of name/value key pairs. Specifically, it's of type System.Collections.Specialized.NameValueCollection, and you can iterate through each of the name/value pairs as so:

```vb
Dim coll As System.Collections.Specialized.NameValueCollection = Request.QueryString
Dim value As String

For Each key As String In coll.AllKeys
    value = coll(key)
Next
```

Using either of these mechanisms (or something very similar) should enable you to construct a string variable which contains the full url (page and querystrings) that you wish to navigate to.
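Building on that last paragraph, one possible way to assemble the value the question asks for. This is a sketch only: so.Cpage is the variable from the question, and Request.QueryString.ToString() re-serializes the collection in d=1&b=2 form:

```vb
Dim page As String = "ContractChange.aspx"
Dim qs As String = Request.QueryString.ToString()

' Avoid a dangling "?" when there is no query string at all
If String.IsNullOrEmpty(qs) Then
    so.Cpage = page
Else
    so.Cpage = page & "?" & qs
End If
```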
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1092859', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105035/']}
jdg_377753
stackexchange
llm_judgeable_groundtruth_similarity
224701
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a dataset whose mathematical model is unknown. There is my code. ClearAll["Global`*"]data = {{0., 0.25}, {0.05, 0.04535957501876631}, {0.1, -0.22959683050251156}, \{0.15000000000000002, -0.5158795465693689}, {0.2, \-0.7301957913348458}, {0.25, -0.8011017169956666}, \{0.30000000000000004, -0.7044608640550588}, {0.35000000000000003, \-0.4765378605500501}, {0.4, -0.196100747567105}, {0.45, 0.05366775689601985}, {0.5, 0.2206242256461488}, {0.55, 0.29732043449572876}, {0.6000000000000001, 0.3095987116186817}, {0.65, 0.29278966180408206}, {0.7000000000000001, 0.27284518841568056}, {0.75, 0.261387359850386}, {0.8, 0.2606941881651699}, {0.8500000000000001, 0.26817927575775835}, {0.9, 0.27446733372325716}, {0.9500000000000001, 0.2592721592749264}, {1., 0.1947001957678513}, {1.05, 0.06047018239905877}, {1.1, -0.1362654725629964}, \{1.1500000000000001, -0.3517586997981886}, {1.2000000000000002, \-0.5183389114682548}, {1.25, -0.5755346214961556}, {1.3, \-0.5019115834325736}, {1.35, -0.3272715823202753}, \{1.4000000000000001, -0.11688696144090496}, {1.4500000000000002, 0.06223484823497412}, {1.5, 0.17182231969774298}, {1.55, 0.21215161276646213}, {1.6, 0.2092238920099766}, {1.6500000000000001, 0.1923165533808785}, {1.7000000000000002, 0.17854999898792742}, {1.75, 0.1720188782322812}, {1.8, 0.17190219696043543}, {1.85, 0.17794131796485643}, {1.9000000000000001, 0.18708794903860115}, {1.9500000000000002, 0.18618443013013833}, {2., 0.15163266492815844}, {2.0500000000000003, 0.062023864530717204}, {2.1, -0.08252831016642331}, {2.15, \-0.24937887284517374}, {2.2, -0.38250999453025925}, {2.25, \-0.42977193986190576}, {2.3000000000000003, -0.3719090193656952}, \{2.35, -0.23403979566383987}, {2.4000000000000004, \-0.07150507862117503}, {2.45, 0.060544004366313964}, {2.5, 0.1338153571297475}, {2.5500000000000003, 0.15377710551268767}, {2.6, 0.1458096898465104}, {2.6500000000000004, 0.1338788539262672}, {2.7, 0.12782685785681855}, {2.75, 0.12571190340963798}, {2.8000000000000003, 0.12435717478341927}, {2.85, 0.12558951152535677}, {2.9000000000000004, 0.13173895990209336}, {2.95, 0.13577668352986438}, {3., 0.11809163818525376}, {3.0500000000000003, 0.05703848372174477}, {3.1, -0.052171141264986065}, \{3.1500000000000004, -0.18507407808261417}, {3.2, \-0.2944160290738003}, {3.25, -0.3343926651223669}, \{3.3000000000000003, -0.28748988516911944}, {3.35, \-0.17539126742962546}, {3.4000000000000004, -0.04605407250026007}, \{3.45, 0.054154817787421754}, {3.5, 0.10421550491962703}, {3.5500000000000003, 0.11313996787827663}, {3.6, 0.10537370919096235}, {3.6500000000000004, 0.10038340729687355}, {3.7, 0.10210038914116974}, {3.75, 0.10377607675326449}, {3.8000000000000003, 0.10043392741012536}, {3.85, 0.09569030102719618}, {3.9000000000000004, 0.09633736364909878}, {3.95, 0.10046944586840713}, {4., 0.09196986029286065}, {4.05, 0.04939817146023659}, {4.1000000000000005, -0.035500038827273483}, {4.15, -0.14435142990428168}, {4.2, -0.23653772606398482}, {4.25, \-0.2711141722776035}, {4.3, -0.23195026722976075}, \{4.3500000000000005, -0.13818319486254826}, {4.4, \-0.03223287701492064}, {4.45, 0.0461023748153309}, {4.5, 0.08116311683958738}, {4.55, 0.08441822812800531}, {4.6000000000000005, 0.0793112218398028}, {4.65, 0.08158512653257002}, {4.7, 0.09042061843787946}, {4.75, 0.09526155242756693}, {4.800000000000001, 0.08975097883615274}, {4.8500000000000005, 
0.07899922694726703}, {4.9, 0.07343943043119187}, {4.95, 0.07536815287115564}, {5., 0.07162619921504758}, {5.050000000000001, 0.04116873379851412}, {5.1000000000000005, \-0.026744747659383464}, {5.15, -0.11831249917611593}, {5.2, \-0.1979697358294582}, {5.25, -0.22850646569942168}, \{5.300000000000001, -0.19488626358397199}, {5.3500000000000005, \-0.11434128511531431}, {5.4, -0.025108144357342724}, {5.45, 0.03796511041395856}, {5.5, 0.06320989895118646}, {5.550000000000001, 0.06382472357812302}, {5.6000000000000005, 0.06230664342404383}, {5.65, 0.07136540577634531}, {5.7, 0.08639258817951433}, {5.75, 0.09382786497802414}, {5.800000000000001, 0.08625209616707691}, {5.8500000000000005, 0.06999994420319763}, {5.9, 0.058440733553212644}, {5.95, 0.05727250056847864}, {6., 0.05578254003710749}, {6.050000000000001, 0.033377263014135045}, {6.1000000000000005, \-0.022490638702671296}, {6.15, -0.10147596664039486}, {6.2, \-0.1718800607438274}, {6.25, -0.19937131359947818}, \{6.300000000000001, -0.1697753918045676}, {6.3500000000000005, \-0.09888818685910143}, {6.4, -0.02176669865153779}, {6.45, 0.03049600690836037}, {6.5, 0.04922791880104855}, {6.550000000000001, 0.04886308340261533}, {6.6000000000000005, 0.05106071357932963}, {6.65, 0.06608738435836632}, {6.7, 0.08632956423403415}, {6.75, 0.09586370917900694}, {6.800000000000001, 0.08645125158199114}, {6.8500000000000005, 0.0654172599800155}, {6.9, 0.048478784946397874}, {6.95, 0.044061035698346705}, {7., 0.04344348586261131}, {7.050000000000001, 0.026470804650265044}, {7.1000000000000005, \-0.020732994511128804}, {7.15, -0.09045170867633029}, {7.2, \-0.1539554421089161}, {7.25, -0.1791359267816674}, \{7.300000000000001, -0.15249630720578145}, {7.3500000000000005, \-0.08874264467440107}, {7.4, -0.02050317425559245}, {7.45, 0.023992594506665153}, {7.5, 0.038338741711232055}, {7.550000000000001, 0.03786393558494788}, {7.6000000000000005, 0.04351376744916997}, {7.65, 0.0636030295959461}, {7.7, 0.08814496194819264}, {7.75, 0.09936125416148656}, {7.800000000000001, 0.08837990298510912}, {7.8500000000000005, 0.0633196836002487}, {7.9, 0.0417630670498372}, {7.95, 0.034306558906331726}, {8., 0.033833820809153196}, {8.05, 0.020583500915716844}, {8.1, -0.020307573530223846}, {8.15, \-0.08313247231935975}, {8.200000000000001, -0.1414477941963944}, \{8.25, -0.16486568052709186}, {8.3, -0.14042058394731663}, {8.35, \-0.0819872328343009}, {8.4, -0.020331161370387024}, \{8.450000000000001, 0.018511367496210886}, {8.5, 0.029858242066679915}, {8.55, 0.029693848662359766}, {8.6, 0.038370946791177404}, {8.65, 0.06265453561103906}, {8.700000000000001, 0.09068965988664124}, {8.75, 0.10324484849113105}, {8.8, 0.09095765005921527}, {8.85, 0.06257855183652253}, {8.9, 0.03716526320884847}, {8.950000000000001, 0.027034029615809366}, {9., 0.026349806140466097}, {9.05, 0.015690013868017962}, {9.1, -0.02054847679666216}, {9.15, \-0.07820039478789328}, {9.200000000000001, -0.13258754658167485}, \{9.25, -0.1546551796088675}, {9.3, -0.13185376644945565}, {9.35, \-0.07742116319072777}, {9.4, -0.020689713175164805}, \{9.450000000000001, 0.013990045870832553}, {9.5, 0.02325362230266585}, {9.55, 0.02357119989594019}, {9.600000000000001, 0.034811360556287134}, {9.65, 0.06251408316536598}, {9.700000000000001, 0.09335737679307088}, {9.75, 0.10697279056057489}, {9.8, 0.09361765371778913}, {9.850000000000001, 0.06254266362460573}, {9.9, 0.03396806149215307}, {9.950000000000001, 0.021566854697117256}, {10., 0.020521249655974707}};lp = ListPlot[data, PlotStyle -> {PointSize[0.01]}, 
DisplayFunction -> Identity];Show[lp, DisplayFunction -> $DisplayFunction, PlotRange -> Full] By a very long manual selection of functions, I was able to establish that this curve is well described by the following function. Plot[0.25` E^(-0.245` t) - 0.48` E^(-0.47` t) Sin[2 \[Pi] t] - 0.51` E^(-0.253` t) Sin[2 \[Pi] t]^2 - 0.126` Sin[2 \[Pi] t]^3, {t, 0, 10}] Very similar, isn't it? I decided to try using the FindFormula command. But the result that I got does not make me happy. It turns out that this curve is described by a set of not the most complex functions, and FindFormula cannot determine this. fit = FindFormula[data, t, 5, TargetFunctions -> {Exp, Sin}]Out[548]= {-0.00731088, Sin[21.096^(-23. t)], Sin[18.2321^(-15 t)], Sin[10.8084^(-7 t)], Sin[Sin[t]]}Show[ListPlot[data], Plot[fits, {x, 0, 10}, PlotRange -> All]] How to choose the model structure for NonlinearFitModel ? I would be grateful for any help. Now provide the response and nothing else.
I'm afraid that what you are asking is impossible. Let me explain: if you do not have a mathematical model restricting the set of possible functions in some way, but only require the function to be e.g. continuous, you have a few problems:

- You optimize over an infinite-dimensional space, which is generally a hard task.
- You only have a finite amount of information (= data points) to pick an element from an infinite-dimensional space. Thus, your optimal solution will not be unique (as @MarcoB said, you could interpolate the points, but you can also find infinitely many other functions that go perfectly through these points). So which of the functions do you choose?
- Most continuous functions cannot easily be described using mathematical formulas. Therefore you have many solutions but you cannot write them down.
- These solutions most certainly do not give you any useful insight into what is going on in your problem, because all the optimal solutions are too different from each other.

Therefore you should restrict the set of functions under consideration to functions of a certain form, as in the answer of @Bob Hanlon: https://mathematica.stackexchange.com/a/224707/67019. Even a choice of functions like this is often not enough to solve all the problems above (e.g. to get a unique solution). Therefore one often chooses to regularize the parameters (i.e. penalizing weird parameter values). However, then you have to decide how exactly you want to penalize the parameters, and there is also no perfect answer to this task.
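As a practical illustration of restricting the set of functions: one could hand NonlinearModelFit the very structure the question discovered by hand, seeded with the hand-tuned values as starting points (a sketch; the parameter names a through g are arbitrary):

```mathematica
(* model structure and starting values taken from the question's
   manually discovered formula *)
model = a E^(-b t) + c E^(-d t) Sin[2 Pi t] +
        e E^(-f t) Sin[2 Pi t]^2 + g Sin[2 Pi t]^3;

nlm = NonlinearModelFit[data, model,
   {{a, 0.25}, {b, 0.245}, {c, -0.48}, {d, 0.47},
    {e, -0.51}, {f, 0.253}, {g, -0.126}}, t];

Show[ListPlot[data], Plot[nlm[t], {t, 0, 10}], PlotRange -> All]
```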
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/224701', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/67019/']}
jdg_377754
stackexchange
llm_judgeable_groundtruth_similarity
8074665
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a common authentication form across multiple subdomains (example.com, blog.example.com, and app.example.com). The login form must submit this data to example.com irrespective of where it is shown, so I thought of using CORS, but this:

header("Access-Control-Allow-Origin: http://example.com http://blog.example.com http://app.example.com")

does not work. So I thought of checking the Origin header manually on the server side and allowing Access-Control-Allow-Origin: * so that requests might be made, but unfortunately, this crops up in the MDN: Important note: when responding to a credentialed request, server must specify a domain, and cannot use wild carding. Is there any way to make my request work across multiple domains, and still send credentials using CORS? Now provide the response and nothing else.
A few thoughts: 1) Are you also including the "Access-Control-Allow-Credentials: true" header? This is needed for passing cookie credentials (and the corresponding XHR client must set .withCredentials = true). 2) Have you tried the suggestion from your link and only included the origin for the current request? For example, if a request comes in with the header "Origin: http://blog.example.com", you would respond with "Access-Control-Allow-Origin: http://blog.example.com", and not a list of origins. This requires a little more work on your server side implementation. 3) One other thought: you mention that you have a single login form that must be shared by various domains. Well, if it is a standard HTML form, you can do a regular form-post across domains. You don't need to use CORS. Just set the "action" property of the form to the url you wish to post to. For example:

<form name="login" action="http://login.example.com/doLogin">
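For thought 2, a minimal server-side sketch in PHP might look like the following; the whitelist array is an assumption, so adapt it to your actual subdomains:

<?php
// Hypothetical sketch: echo the request's Origin back only when it is whitelisted.
$allowed = array('http://example.com', 'http://blog.example.com', 'http://app.example.com');
$origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';
if (in_array($origin, $allowed, true)) {
    header('Access-Control-Allow-Origin: ' . $origin);
    header('Access-Control-Allow-Credentials: true');
    header('Vary: Origin'); // keep caches from mixing responses for different origins
}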
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8074665', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/368328/']}
jdg_377755
stackexchange
llm_judgeable_groundtruth_similarity
33684663
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a series of div's whose heights will be dynamically generated based on the content inside them. Right now I'm able to align them horizontally using float: left but as soon as the div's go onto the next line, there is white space in between the divs that don't have matching heights. Here's what I'm trying to accomplish: Oh and I'm looking for a pure CSS solution. The project I'm currently working on is archaic and I can't use any new frameworks on it. Now provide the response and nothing else.
There are two issues that are causing problems here: The first issue is that the Session.run() call only accepts a small number of types as the keys of the feed_dict. In particular, lists of tensors are not supported as keys, so you have to put each tensor as a separate key.* One convenient way to do this is using a dictionary comprehension:

inputs = [tf.placeholder(...), ...]
data = [np.array(...), ...]
sess.run(y, feed_dict={i: d for i, d in zip(inputs, data)})

The second issue is that the 10 * [tf.placeholder(...)] syntax in Python creates a list with ten elements, where each element is the same tensor object (i.e. has the same name property, the same id property, and is reference-identical if you compare two elements from the list using inputs[i] is inputs[j]). This explains why, when you tried to create a dictionary using the list elements as keys, you ended up with a dictionary with a single element - because all of the list elements were identical. To create 10 different placeholder tensors, as you intended, you should instead do the following:

inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size)) for _ in xrange(10)]

If you print the elements of this list, you'll see that each element is a tensor with a different name. EDIT: * You can now pass tuples as the keys of a feed_dict, because these may be used as dictionary keys.
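Putting both fixes together, here is a minimal end-to-end sketch using the TF 1.x graph/session API from the answer; the tf.add_n op and the all-ones data are stand-ins for whatever your graph actually computes:

import numpy as np
import tensorflow as tf

batch_size, input_size = 4, 3
# Ten *distinct* placeholders, created in a list comprehension.
inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size)) for _ in range(10)]
y = tf.add_n(inputs)  # hypothetical op that consumes all ten inputs

data = [np.ones((batch_size, input_size), dtype=np.float32) for _ in range(10)]
with tf.Session() as sess:
    # One dictionary key per placeholder, built with a dict comprehension.
    result = sess.run(y, feed_dict={i: d for i, d in zip(inputs, data)})
    print(result.shape)  # (4, 3)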
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/33684663', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3642000/']}
jdg_377756
stackexchange
llm_judgeable_groundtruth_similarity
57814195
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently came across a piece of python code that looked like this:

groups = {}
for d, *v in dishes:
    for x in v:
        groups.setdefault(x, []).append(d)

dishes represents a 2d array. What does the first for loop statement mean? What is *v? What does the asterisk before v indicate? In what other situations is an asterisk before a variable used? Now provide the response and nothing else.
It's essentially a combination of tuple/list unpacking and *args iterable unpacking. Each iterable is getting unpacked on each iteration of the for loop. First let's look at a simple tuple/list unpacking:

>>> x, y = (1, 2)
>>> x
1
>>> y
2

And now in the context of a loop:

>>> for x, y in [(1, 2), (3, 4)]:
...     print(f'x={x}, y={y}')
x=1, y=2
x=3, y=4

Now consider the following (and imagine the same concept within the loop as shown above):

>>> x, y = (1, 2, 3)
ValueError: too many values to unpack (expected 2)
>>> x, *y = 1, 2, 3
>>> x
1
>>> y
[2, 3]

Note how * allows y to consume all remaining values. This is similar to how you would use * in a function - it allows an unspecified number of arguments and consumes them all. You can see more examples of (*args) usage here.

>>> def foo(x, *args):
...     print(x)
...     print(args)
...
>>> foo(1, 2, 3, 4)
1
(2, 3, 4)

As for practical examples, here is a quick one:

>>> names = ("Jack", "Johnson", "Senior")
>>> first_name, *surnames = names
>>> print(surnames)
['Johnson', 'Senior']
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57814195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11531123/']}
jdg_377757
stackexchange
llm_judgeable_groundtruth_similarity
11191264
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider this simple function:

def foo(l=[]):
    if not l:
        print "List is empty"
    else:
        print "List is not empty"

Now let's call foo:

x = []
foo(x)
# List is empty
foo('')
# List is empty

But if x = [''], the list is not considered empty!

x = ['']
foo(x)
# List is not empty

Questions - Why is a list of empty values not considered empty? (In the case of a plain variable it is considered empty, e.g.)

x = ''
if x:
    print 'not empty!!'
else:
    print 'empty'

How can I modify function foo() so that the list will be considered empty in all these cases: x=[], x=[''], x=['', '']? Now provide the response and nothing else.
Using the built-in any():

def foo(l=[]):
    if any(l):
        print 'List is not empty'
    else:
        print 'List is empty'

foo([''])
# List is empty
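A few more checks of how this behaves, following standard Python truthiness (any returns True only if at least one element is truthy):

foo([])        # List is empty
foo([''])      # List is empty
foo(['', ''])  # List is empty
foo(['a'])     # List is not empty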
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11191264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1099709/']}
jdg_377758
stackexchange
llm_judgeable_groundtruth_similarity
36992922
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have found similar questions in Stack Overflow, but all of them were addressed one and two years ago. Now Chart.js has come up in version 2, and lots of the documentation changes. Can someone please help me showing an example of pie chart with labels - or pie chart with all of its segment's tooltips are visible? UPDATE Thanks to @potatopeelings, his answer works perfectly for Chart.js v2.1. Although I initially asked how to permanently show tooltips on pie chart here, I found a better solution: showing values as labels in percentages! It is now enabled for pie chart in Chart.js v2.1. In the chart options: animation: { duration: 0, onComplete: function () { var self = this, chartInstance = this.chart, ctx = chartInstance.ctx; ctx.font = '18px Arial'; ctx.textAlign = "center"; ctx.fillStyle = "#ffffff"; Chart.helpers.each(self.data.datasets.forEach(function (dataset, datasetIndex) { var meta = self.getDatasetMeta(datasetIndex), total = 0, //total values to compute fraction labelxy = [], offset = Math.PI / 2, //start sector from top radius, centerx, centery, lastend = 0; //prev arc's end line: starting with 0 for (var val of dataset.data) { total += val; } Chart.helpers.each(meta.data.forEach( function (element, index) { radius = 0.9 * element._model.outerRadius - element._model.innerRadius; centerx = element._model.x; centery = element._model.y; var thispart = dataset.data[index], arcsector = Math.PI * (2 * thispart / total); if (element.hasValue() && dataset.data[index] > 0) { labelxy.push(lastend + arcsector / 2 + Math.PI + offset); } else { labelxy.push(-1); } lastend += arcsector; }), self) var lradius = radius * 3 / 4; for (var idx in labelxy) { if (labelxy[idx] === -1) continue; var langle = labelxy[idx], dx = centerx + lradius * Math.cos(langle), dy = centery + lradius * Math.sin(langle), val = Math.round(dataset.data[idx] / total * 100); ctx.fillText(val + '%', dx, dy); } }), self); }}, Now provide the response and nothing else.
Solution for ChartJs Version > 2.1.5:

Chart.pluginService.register({
    beforeRender: function (chart) {
        if (chart.config.options.showAllTooltips) {
            // create an array of tooltips
            // we can't use the chart tooltip because there is only one tooltip per chart
            chart.pluginTooltips = [];
            chart.config.data.datasets.forEach(function (dataset, i) {
                chart.getDatasetMeta(i).data.forEach(function (sector, j) {
                    chart.pluginTooltips.push(new Chart.Tooltip({
                        _chart: chart.chart,
                        _chartInstance: chart,
                        _data: chart.data,
                        _options: chart.options.tooltips,
                        _active: [sector]
                    }, chart));
                });
            });
            // turn off normal tooltips
            chart.options.tooltips.enabled = false;
        }
    },
    afterDraw: function (chart, easing) {
        if (chart.config.options.showAllTooltips) {
            // we don't want the permanent tooltips to animate, so don't do anything till the animation runs at least once
            if (!chart.allTooltipsOnce) {
                if (easing !== 1) return;
                chart.allTooltipsOnce = true;
            }
            // turn on tooltips
            chart.options.tooltips.enabled = true;
            Chart.helpers.each(chart.pluginTooltips, function (tooltip) {
                tooltip.initialize();
                tooltip.update();
                // we don't actually need this since we are not animating tooltips
                tooltip.pivot();
                tooltip.transition(easing).draw();
            });
            chart.options.tooltips.enabled = false;
        }
    }
});
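A chart then opts into this behaviour via the custom showAllTooltips flag the plugin checks; the flag name is whatever you register in your own options, not a built-in Chart.js option:

var chart = new Chart(ctx, {
    type: 'pie',
    data: data,
    options: { showAllTooltips: true }
});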
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36992922', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1912746/']}
jdg_377759
stackexchange
llm_judgeable_groundtruth_similarity
722327
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a scenario where I need to print a line, using an awk if to search for a number that is followed by a colon and more numbers. See the example below:

test1 test2 37:375003 test3 test4
test1 test2 38:375004 test3 test4
test1 test2 39:375005 test3 test4
test1 test2 40:375006 test3 test4
test1 test2 41:375007 test3 test4

What I want to achieve is using a command like below:

cat test_out.txt | awk "{if ($3 == 37~/\:*/ ) print $0;}"

The above should give me the line below:

test1 test2 37:375003 test3 test4

Instead I get the syntax error below:

Syntax Error The source line is 1.
The error context is
{if ( >>> == <<<
awk: 0602-502 The statement cannot be correctly parsed. The source line is 1.

Now provide the response and nothing else.
You need to use the ~ binary operator whose syntax is:

string ~ regexp

to match a string against a regular expression, so:

<test_out.txt awk '$3 ~ /^37:[[:digit:]]+$/'

to print the records ({print}, short for {print $0}, being the default action) whose third field matches the ^37:[[:digit:]]+$ extended regexp. In ERE syntax:

^ matches at the start of the subject.
[...] matches any character or collating element in the set. [:digit:] in the set above means any character classified as a decimal digit in the locale (on most systems, that's limited to 0123456789). Change it to 0123456789 in mawk, which doesn't support those POSIX character classes, or if you don't want to match other decimal digits. 0-9 would also work in mawk but could also match on other characters in some awk implementations.
+ is for one-or-more of the preceding thing, so here one or more digits.
$ matches at the end of the subject.

If you don't care whether the part after 37: is made of digits or not, then the regexp is just ^37: (37: at the start of the subject). Another approach would be:

<test_out.txt awk '$3 + 0 == 37'

where the + 0 numeric operation forces awk to try and convert $3 to a number, ignoring anything past the initial number. That would then match on 37:anything, but also 37.0;whatever¹, 3.7e+1¹, possibly 0x25#xxx with some awk implementations, +37+38...

Using +$3 == 37, though standard, doesn't work with some awk implementations. For the value (here 37) to come from a shell variable, you could construct the regexp in the shell and pass it to awk via an ENVIRONment variable:

var=37
ERE='^'$var':[[:digit:]]+$' <test_out.txt awk '$3 ~ ENVIRON["ERE"]'

Or make an awk variable out of the shell variable²:

var=37
<test_out.txt awk -v n="$var" '$3 ~ "^" n ":[[:digit:]]+"'

Avoid expanding the shell variable into the awk code as in:

<test_out.txt awk '$3 ~ /^'"$var"':[[:digit:]]+$/'

as that typically introduces command injection vulnerabilities (the worst type of vulnerability).

Some comments on your attempt:

As already noted by @RudyC, you used double quotes around your awk code. Shells perform parameter expansion inside those, so the $3 would be expanded to the value of the third argument to the shell script, and $0 to the name of the script.

$3 == 37 ~ /\:*/: == has higher precedence than ~, so that's ($3 == 37) ~ /\:*/. So that's matching the \:* regexp against the result of that comparison (1 or 0 depending on whether $3 is 37 or not).

\:* as a regexp is unspecified, as \: is unspecified. To match a literal :, it's : alone. :* would be 0 or more :s, so it matches anything, since any string contains at least zero :s. * in regexps matches 0 or more of the previous thing. You may be confusing it with the * of shell wildcards, which matches 0 or more characters; in regexps, 0 or more characters is .*, . being the operator that matches a single character.

awk statements are of the form condition {action}, where either the condition or the action can be omitted. In your case, you omitted the condition and used if in the action, and used {print $0}, which happens to be the default action. While that works, it will look very awk ward to awk users.

You used cat to con cat enate a single file, which hardly makes sense. The shell can open the file by itself and make it the stdin of awk using a redirection, which saves a process and the need to shove the contents through a pipe. You could also pass the file name as an argument to awk, which can also open it by itself.

¹ assuming the decimal radix character is "." and not "," in the locale, at least with some awk implementations such as GNU awk in POSIX mode.
² beware that -v mangles backslashes, so using ENVIRON is safer in the general case.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/722327', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/186444/']}
jdg_377760
stackexchange
llm_judgeable_groundtruth_similarity
8637889
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using Delphi 2010, I am wondering if there is some way to escape the following string to make it safe from SQL injection attacks. My string:

SQLQuery1.SQL.Text := 'SELECT * FROM registered WHERE email="'+ email+'" and login_pass="'+password+'"';

How do I rewrite this string to make it safer than it is when someone types " in my TEditbox as his email or password? Now provide the response and nothing else.
Use parameters, and let the database drivers handle that stuff.

SQLQuery1.SQL.Text := 'SELECT * FROM registered WHERE email = :email'
  + ' and login_pass = :password';
SQLQuery1.ParamByName('email').AsString := EMail;
SQLQuery1.ParamByName('password').AsString := Password;
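A hedged follow-up sketch (assuming SQLQuery1 is a TSQLQuery-style dataset; the messages are illustrative):

// Hypothetical usage: run the parameterized query and test for a match.
SQLQuery1.Open;
if not SQLQuery1.IsEmpty then
  ShowMessage('Login OK')
else
  ShowMessage('Invalid credentials');
SQLQuery1.Close;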
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8637889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/783410/']}
jdg_377761
stackexchange
llm_judgeable_groundtruth_similarity
12663091
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm on Windows 7 using Visual Studio 2012. When I compile, I get a lot of macro redefinition warnings caused by winerror.h versus dxgi.h, dxgitype.h, d3d11.h, d3d10.h; for example DXGI_STATUS_OCCLUDED, DXGI_STATUS_CLIPPED, DXGI_STATUS_NO_REDIRECTION etc. I suppose this is because the Windows SDK 8 is installed. How can I avoid this? Is there a way I can exclude these new libraries from my project? Now provide the response and nothing else.
I ran into this problem using Visual Studio 2012 Express on Windows 8; however, my errors were almost exactly the same, and the fix is centered around the Windows SDK. This was in MSDN (see #5): D3DX is not considered the canonical API for using Direct3D in Windows 8 and therefore isn't included with the corresponding Windows SDK. Investigate alternate solutions for working with the Direct3D API. For legacy projects, such as the Windows 7 (and earlier) DirectX SDK samples, the following steps are necessary to build applications with D3DX using the DirectX SDK: a. Modify the project's VC++ directories as follows to use the right order for SDK headers and libraries. Open Properties for the project and select the VC++ Directories page. Select All Configurations and All Platforms. Set these directories as follows:

Include Directories: $(IncludePath);$(DXSDK_DIR)Include
Library Directories: $(LibraryPath);$(DXSDK_DIR)Lib\x86

Click Apply. Choose the x64 Platform. Set the Library Directory as follows:

Library Directories: $(LibraryPath);$(DXSDK_DIR)Lib\x64

b. Wherever "d3dx9.h", "d3dx10.h", or "d3dx11.h" are included in your project, be sure to explicitly include "d3d9.h", "d3d10.h" and "dxgi.h", or "d3d11.h" and "dxgi.h" first to ensure you are picking up the newer version. You can disable warning C4005 if needed; however, this warning indicates you are using the older version of these headers. c. Remove all references to DXGIType.h in your project. This header doesn't exist in the Windows SDK, and the DirectX SDK version conflicts with the new winerror.h. d. All D3DX DLLs are installed onto your development computer by the DirectX SDK installation. Ensure that the necessary D3DX dependencies are redistributed with any sample or with your application if it is moved to another machine. e. Be aware that replacement technologies for current uses of D3DX11 include DirectXTex and DirectXTK. D3DXMath is replaced by DirectXMath. I can't say for sure, but I think the Windows 8 SDK might be your problem. It may be a bit of a pain, but using the fix above will help make your code require fewer dependencies as well as making it compatible with both Windows 7 and Windows 8.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12663091', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/359415/']}
jdg_377762
stackexchange
llm_judgeable_groundtruth_similarity
52855924
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am running the following code to connect to a mqtt server. import paho.mqtt.client as mqttimport sslimport uuidclient = mqtt.Client(str(uuid.uuid1()))client.tls_set( "ca.crt", "client.crt", "client.key", cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1)client.connect( "127.0.0.1", 8883,)client.loop_forever() This code works fine with python2.7 version. But when I run it with python3.7 version I am getting the below error. Traceback (most recent call last): File "test.py", line 29, in <module> 8883, File "virtualenvs/mqtt-xG2h6zri/lib/python3.7/site-packages/paho/mqtt/client.py", line 839, in connect return self.reconnect() File "mqtt-xG2h6zri/lib/python3.7/site-packages/paho/mqtt/client.py", line 994, in reconnect sock.do_handshake() File ".pyenv/versions/3.7.0/lib/python3.7/ssl.py", line 1108, in do_handshake self._sslobj.do_handshake()ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1045) Please help me figure out how to make this work in python 3.7. Now provide the response and nothing else.
Found the answer. According to this link, matching the server's IP address against the certificate's CN field has been deprecated for more than 15 years, but Python versions lower than 3.7 still allow it even though it is deprecated. Therefore I had to create a certificate with the IP address of the server added in the SAN field. Creating certificates with SAN fields is explained in this answer, but the solution there uses domain names. If you are creating certificates with an IP address, use this command to create the certificate instead of the command in that answer:

openssl x509 -req -in server.csr \
  -extfile <(printf "subjectAltName=IP:127.0.0.1") \
  -CA ca.crt \
  -CAkey ca.key \
  -CAcreateserial -out server.crt \
  -days 365

After using these certificates the error is solved.
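To confirm that the SAN actually made it into the issued certificate, you can inspect it with standard openssl tooling:

openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
# should print something like: IP Address:127.0.0.1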
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52855924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8370670/']}
jdg_377763
stackexchange
llm_judgeable_groundtruth_similarity
7755
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm coding a Proximal Policy Optimization (PPO) agent with the Tensorforce library (which is built on top of TensorFlow). The first environment was very simple. Now, I'm diving into a more complex environment, where all the actions are not available at each step. Let's say there are 5 actions and their availability depends on an internal state (which is defined by the previous action and/or the new state/observation space): 2 actions (0 and 1) are always available 2 actions (2 and 3) are only available when the internal state is 0 1 action (4) is only available when the internal state is 1 Hence, there are 4 actions available when the internal state is 0 and 3 actions available when the internal state is 1. I'm thinking of a few possibilities to implement that: Change the action space at each step, depending on the internal state. I assume this is nonsense. Do nothing: let the model understand that choosing an unavailable action has no impact. Do almost nothing: impact slightly negatively the reward when the model chooses an unavailable action. Help the model: by incorporating an integer into the state/observation space that informs the model what's the internal state value + bullet point 2 or 3 Are there other ways to implement this? From your experience, which one would be the best? Now provide the response and nothing else.
The most straightforward solution is to simply make every action "legal", but implementing a consistent, deterministic mapping from potentially illegal actions to different legal actions. Whenever the PPO implementation you are using selects an illegal action, you simply replace it with the legal action that it maps to. Your PPO algorithm can then still update itself as if the illegal action were selected (the illegal action simply becomes like... a "nickname" for the legal action instead). For example, in the situation you describe:

2 actions (0 and 1) are always available
2 actions (2 and 3) are only available when the internal_state == 0
1 action (4) is only available when the internal_state == 1

In cases where internal_state == 0, if action 4 was selected (an illegal action), you can always swap it out for one of the other actions and play that one instead. It doesn't really matter (theoretically) which one you pick, as long as you're consistent about it. The algorithm doesn't have to know that it picked an illegal action; whenever it picks that same illegal action in the future again in similar states, it will consistently get mapped to the same legal action instead, so you just reinforce according to that behaviour. The solution described above is very straightforward, probably the simplest to implement, but of course it... "smells" a bit "hacky". A cleaner solution would involve a step in the Network that sets the probability outputs of illegal actions to $0$, and re-normalizes the rest to sum up to $1$ again. This requires much more care to make sure that your learning updates are still performed correctly though, and is likely a lot more complex to implement on top of an existing framework like Tensorforce (if not already somehow supported in there out of the box). For the first "solution", I wrote above that it does not matter "theoretically" how you choose your mapping. I absolutely do expect your choices here will have an impact on learning speed in practice though. This is because, in the initial stages of your learning process, you'll likely have close-to-random action selection. If some actions "appear multiple times" in the outputs, they will have a greater probability of being selected with the initial close-to-random action selection. So, there will be an impact on your initial behaviour, which has an impact on the experience that you collect, which in turn also has an impact on what you learn. I certainly expect it will be beneficial for performance if you can include input feature(s) for the internal_state variable. If some legal actions can be identified that are somehow "semantically close" to certain illegal actions, it could also be beneficial for performance to specifically connect those "similar" actions in the "mapping" from illegal to legal actions if you choose to go with that solution. For example, if you have a "jump forwards" action that becomes illegal in states where the ceiling is very low (because you'd bump your head), it may be better to map that action to a "move forwards" action (which is still kind of similar, they're both going forwards), than it would be to map it to a "move backwards" action. This idea of "similar" actions will only be applicable to certain domains though; in some domains there may be no such similarities between actions.
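As an illustration of the first solution, here is a minimal Python sketch. The concrete remapping choices (4 -> 2 when internal_state == 0, and 2/3 -> 4 when internal_state == 1) are assumptions; the only real requirement is that the mapping is deterministic and consistent:

def legalize(action, internal_state):
    # Actions 0 and 1 are always legal and pass through unchanged.
    if internal_state == 0:
        # Action 4 is illegal here; always remap it to the same legal action.
        return 2 if action == 4 else action
    # internal_state == 1: actions 2 and 3 are illegal; remap both to action 4.
    return 4 if action in (2, 3) else action

# The environment executes legalize(a, s), while the learner is updated
# as if it had played its original choice a.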
{}
{'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/7755', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/17818/']}
jdg_377764
stackexchange
llm_judgeable_groundtruth_similarity
257956
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This may be a slightly vague question, but if one defines a function (of some arity) recursively on the natural numbers, the "simplest" examples are things like addition, multiplication, or factorial. How do these functions fit into a general scheme for recursively defining functions on the natural numbers, starting only from the successor operator? What exactly is an "arithmetic function"? Now provide the response and nothing else.
It's trivial to show that primes are irreducible. So, assume that $a$ is an irreducible in a UFD (Unique Factorization Domain) $R$ and that $a \mid bc$ in $R$. We must show that $a \mid b$ or $a \mid c$. Since $a\mid bc$, there is an element $d$ in $R$ such that $bc=ad$. Now replace $b$, $c$ and $d$ by their factorizations as products of irreducibles and use uniqueness: the two factorizations of $bc$ must agree up to order and associates, so $a$ is an associate of some irreducible factor of $b$ or of $c$, and hence $a \mid b$ or $a \mid c$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/257956', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/47590/']}
jdg_377765
stackexchange
llm_judgeable_groundtruth_similarity
32075
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm managing a huge forum (over 15k posts and 500 users) and lately some guy uses a VPN (probably) to spam my forums with topics such as:"LOLOLOLOLOLOLOLOL" Now, I'm pretty sure he's using Tor or a proxy, the IP and the countries are always changing but the posts have the same pattern - it is the same guy. However, I can't seem to find a way to identify him. I am willing to take it seriously as a criminal case. The guy created over 40 spam accounts and posted over 250 topics and posts with spam. Here are some posts from whois: - http://puu.sh/2cZRJ - data - http://puu.sh/2cZS4 - whois ___ - http://puu.sh/2cZOZ - data - http://puu.sh/2cZP8 - whois ___ - http://puu.sh/2cZPZ - data - http://puu.sh/2cZQb - whois ___ - http://puu.sh/2cZQX - data - http://puu.sh/2cZQr - whois ___ - http://puu.sh/2cZSm - data - http://puu.sh/2cZSF - whois ___ - http://puu.sh/2cZTh - data - http://puu.sh/2cZT2 - whois ___ - http://puu.sh/2cZTK - data - http://puu.sh/2cZTY - whois And loads more, but the list won't end.How can I identify the user? Now provide the response and nothing else.
Your forum accepts posts from anybody. That is your core problem. Connecting to your site from various IPs throughout the world is trivial, if only by using Tor. Tor provides "high anonymity" in that not only is the user's identity hidden, but each request is anonymous -- you cannot, from the outside, tell whether two distinct requests come from the same human user or not. This can be fixed at two levels: Restricted anonymity: enforce user registration and authentication. Users may use a pseudonym, and need not provide an email address or any other identification, but your forum will insist on a login process before posting. That way, you can know whether two comments come from the same person. Note that this does not totally solve the issue; instead, it moves it to the registration process. The poor sob who has nothing better to do with his time than defacing your forum will adapt and engage in mass registration of phony accounts. No anonymity (with regard to you): user registration, this time with an email address which is verified during registration (you make sure that the registrant can read an email sent to the address he provided). The possibility of being identified, if only by law enforcement agencies, could be a powerful deterrent for wannabe spammers (even if the said spam is not necessarily punished by law: to my constant dismay, there is no law against writing "LOLOLOLOL"). If you choose to retain anonymous posting, then, well, welcome to the wonderful world of Mankind. While most humans are civilized, honest and polite, there is always one user who thinks exposing his lack of education is a smart thing to do. The best you can do, then, is to patiently clean up (possibly proactively, by enforcing pre-publication review of all posts by trusted moderators) until the perpetrator loses interest or reaches the age of 14, whichever comes first. This may take a few weeks or months. Historically, most societies have dealt with troublemakers through a mixture of ostracism and actual penalties (up to and including death). The apparent anonymity of the Internet prevents efficient penalties (with a lot of resources, this anonymity can usually be unraveled, but police forces will not do that until an actual crime is committed). Ostracism is social pressure, so it does not work on people who do not feel that the target forum is a "society" they are part of.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/32075', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/21589/']}
jdg_377766
stackexchange
llm_judgeable_groundtruth_similarity
3170589
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a couple of questions about the javascript for loop. First question:

for (i=0; i<=2; i++) {;}
console.log(i);

Output is 3. Shouldn't it be 2? Second question:

for (var i=0; i<=2; i++) {
    setTimeout(function(){console.log(i);}, i*1000);
}

Timeouts are set correctly: 0, 1000 and 2000. But the output is 3,3,3 (it should be 0, 1, 2). Does this mean the delayed functions are executed after the loop exits? Why? What should I read to understand all this mysterious javascript stuff? Thank you. Now provide the response and nothing else.
First question: No, because the i variable is incremented after the last successful iteration; then the condition is checked, evaluates to false, and the loop ends. The for statement is composed of:

for ([initialExpression]; [condition]; [incrementExpression])
    statement

and it is executed in the following steps:

1. The initialExpression is evaluated at the start.
2. The condition is evaluated; if it evaluates to false, the loop ends; if it evaluates to true, continue to step 3.
3. The statement is evaluated.
4. The incrementExpression is evaluated; go back to step 2.

Second question: The function is executed asynchronously after the loop has ended, and at that time, as you now know, i contains 3. The common workaround for this is to use a function to preserve the value of the looping variable on each iteration, for example:

for (var i=0; i<=2; i++) {
    (function (i) {
        setTimeout(function(){console.log(i);}, i*1000);
    })(i);
}
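In modern JavaScript (ES6 and later, so not available when this answer was written), block-scoped let achieves the same effect without the wrapper function, because each iteration gets its own binding of i:

for (let i = 0; i <= 2; i++) {
    setTimeout(function () { console.log(i); }, i * 1000); // logs 0, 1, 2
}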
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3170589', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/270233/']}
jdg_377767
stackexchange
llm_judgeable_groundtruth_similarity
10621099
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Some weeks ago Amazon announced that they have lowered the content expiration period: Amazon CloudFront Lowers Minimum Content Expiration Period So much that you can actually set now TTL in CloudFront to 0. So my question is, why could it be useful to have a CloudFront distribution with TTL set to 0. To me this means no caching at all so every request that gets to CloudFront will end up hitting the origin. What am I missing? Now provide the response and nothing else.
This new feature of Amazon CloudFront is actually extremely useful for many use cases, because hitting the origin works a bit different than it looks at first sight and is not necessarily a problem, to the contrary; while this feature has been released earlier already, it all comes together with the recent release of Amazon CloudFront - Support for Dynamic Content , e.g. for the question at hand: Variable Time-To-Live (TTL) - In many cases, dynamic content is either not cacheable or cacheable for a very short period of time, perhaps just a few seconds. In the past, CloudFront's minimum TTL was 60 minutes since all content was considered static. The new minimum TTL value is 0 seconds. If you set the TTL for a particular origin to 0, CloudFront will still cache the content from that origin. It will then make a GET request with an If-Modified-Since header , thereby giving the origin a chance to signal that CloudFront can continue to use the cached content if it hasn't changed at the origin . [emphasis mine] In other words, using a TTL of 0 primarily means, that CloudFront delegates the authority for cache control to the origin, i.e. the origin server decides whether or not, and if for how long CloudFront caches the objects; please note specifically, that a GET request with an If-Modified-Since header doesn't necessarily mean that the object itself is retrieved from the origin, rather the origin can (and should) return the HTTP status code 304 - Not Modified where applicable: Indicates the resource has not been modified since last requested. [...] Using this saves bandwidth and reprocessing on both the server and client, as only the header data must be sent and received in comparison to the entirety of the page being re-processed by the server, then sent again using more bandwidth of the server and client. [emphasis mine] See Mark Nottingham's excellent Caching Tutorial for details on the mechanics and benefits of HTTP cache control, a really important and effective part of the HTTP architecture. Understanding how all these parts work together can be a bit difficult indeed, accordingly the table in section Specifying the Minimum Time that CloudFront Caches Objects for Download Distributions within Specifying How Long Objects Stay in a CloudFront Edge Cache (Object Expiration) attempts to summarize the effects when applied in the context of CloudFront with or without TTL = 0 specifically.
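Concretely, the revalidation exchange with TTL 0 looks something like this (a schematic HTTP exchange, not literal CloudFront output; the path and date are made up):

GET /latest/data.json HTTP/1.1
Host: origin.example.com
If-Modified-Since: Wed, 16 May 2012 10:00:00 GMT

HTTP/1.1 304 Not Modified

Only headers travel back when nothing has changed, which is why delegating cache control to the origin this way can still be cheap.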
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10621099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1398920/']}
jdg_377768
stackexchange
llm_judgeable_groundtruth_similarity
17480984
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In Qt, what is the slot that corresponds to the event of the user clicking the 'X' (close) button of the window frame i.e. this button: If there isn't a slot for this, is there any other way to trigger a function after the user presses the close button? Now provide the response and nothing else.
If you have a QMainWindow you can override the closeEvent method.

#include <QCloseEvent>

void MainWindow::closeEvent(QCloseEvent *event)
{
    QMessageBox::StandardButton resBtn = QMessageBox::question(
        this, APP_NAME,
        tr("Are you sure?\n"),
        QMessageBox::Cancel | QMessageBox::No | QMessageBox::Yes,
        QMessageBox::Yes);
    if (resBtn != QMessageBox::Yes) {
        event->ignore();
    } else {
        event->accept();
    }
}

If you're subclassing a QDialog, the closeEvent will not be called, and so you have to override reject():

void MyDialog::reject()
{
    QMessageBox::StandardButton resBtn = QMessageBox::Yes;
    if (changes) {
        resBtn = QMessageBox::question(
            this, APP_NAME,
            tr("Are you sure?\n"),
            QMessageBox::Cancel | QMessageBox::No | QMessageBox::Yes,
            QMessageBox::Yes);
    }
    if (resBtn == QMessageBox::Yes) {
        QDialog::reject();
    }
}
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/17480984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2243767/']}
jdg_377769
stackexchange
llm_judgeable_groundtruth_similarity
34283405
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't know how to sort using Realm. My current code is:

data = realm.objects(WorkoutSet)
data = data!.sorted("date")

I want to sort date, an Int, from high numbers to low numbers. The docs need more information and the GitHub link throws a 404 message. Now provide the response and nothing else.
You can add an ascending parameter to the sorted method:

data = data!.sorted("date", ascending: false)

This sorts your WorkoutSet using the date field in descending order. Update: with Swift 3 and the latest RealmSwift version this has now changed to:

data = data!.sorted(byKeyPath: "date", ascending: false)

If you want to evaluate the sort criteria yourself you could use:

data = data!.sorted(by: { (lhsData, rhsData) -> Bool in
    return lhsData.something > rhsData.something
})

But be aware that sorting your results by yourself returns an Array instead of a Realm Results object. That means there will be a performance and memory overhead, because Results is lazy, and if you do the sorting with the above method you will lose that lazy behavior, because Realm has to evaluate each object! You should stick to Results whenever possible. Only use the above method if there really is no other way to sort your items.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34283405', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3475084/']}
jdg_377770
stackexchange
llm_judgeable_groundtruth_similarity
152787
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In this question the following observation was made: Consider a sequence of boxes numbered 0, 1, ... each containing one real number. The real number cannot be seen unless the box is opened. Define a play to be a series of steps followed by a guess. A step opens a set of boxes. A guess guesses the contents of an unopened box. A strategy is a rule that determines the steps and guess in a play, where each step or guess depends only on the values of the previously opened boxes of that play. Then for every positive integer $k$, there is a set $S$ of $k$ strategies such that, for any sequence of (closed) boxes, at most one strategy in $S$ guesses incorrectly. My question is this: Can $k$ be countably infinite (instead of a positive integer)? If not, is there a proof? [Edit: the original question also asked whether $k$ can be uncountable; this was answered by Dan Turetsky in the negative in comments]. The best I have been able to show is that, if the function $f:\mathbb{N}\to\mathbb{R}$ defined by the contents of the initial sequence of boxes is recursive (viewing elements of $\mathbb{R}$ as binary sequences), then $k$ can be countably infinite. To see this, call a subset $X$ of $\mathbb{N}$ signature if two recursive functions on $\mathbb{N}$ that eventually agree on $X$ also eventually agree on $\mathbb{N}$. (Two functions "eventually agree" if they differ in finitely many places.) Call two Turing Machines equivalent if their associated functions on $\mathbb{N}$ are equivalent (that is, eventually agree). A diagonalization argument on the class representatives of the Turing Machines yields an infinite partition $U$ of $\mathbb{N}$ into signature subsets. The $i$'th strategy in $S$ first opens all the boxes whose indices are not in the $i$'th element $U_i$ of $U$, determines the class representative Turing Machine $T$ that generates the resulting values on the opened boxes for boxes whose indices are greater than $N$ (for some positive $N$), and guesses that a box with index greater than $N$ and in $U_i$ has the value specified by $T$. However, I have not been able to modify this for the non-computable case. Now provide the response and nothing else.
It is possible to have every mathematician guess the number in one of the boxes with at most one error. Partition the natural numbers into countably many sets, $\{S_i\}_{i=0}^\infty$, where each $S_i=\{n_{i_1},n_{i_2},\dots\}$ is countably infinite. (There are many ways to do this.) Since we have countably many mathematicians, we may list them, and assign $S_i$ to the $i^{th}$ mathematician. If $u_k$ denotes the real number in the $k^{th}$ box, then the $i^{th}$ mathematician will be assigned the sequence of real numbers $u_{n_{i_j}}$, for $j=1,2,3,\dots$. Using the axiom of choice, we may choose a representative for each equivalence class of sequences of real numbers under the equivalence relation $\{u_n\}_{n=1}^\infty\equiv\{v_n\}_{n=1}^\infty$ if there exists $M>0$ such that $v_n=u_n$ for all $n>M$. Thus, for the $i^{th}$ mathematician there will exist an integer $M_i$ such that for all $j>M_i$, the sequence $u_{n_{i_j}}$ is equal to the representative of its equivalence class. The goal is to have mathematician $i$ guess an integer $H_i>M_i$ by looking at every box except those in the set $S_i$. If this happens, then mathematician $i$ may look at all of the elements of $u_{n_{i_j}}$ with $j\geq H_i+1$, determine the equivalence class, and guess the box with $j=H_i$. Since $H_i>M_i$, his guess will be correct. It follows that we need all but possibly one mathematician to guess an integer $H_i>M_i$. If the sequence $M_i$ is bounded, then the problem is easy. The difficulty is handling an unbounded sequence $M_i$. Under the same system of representatives, the sequence $\{M_i\}_{i=0}^\infty$ lies in some equivalence class of real numbers. Since mathematician $i$ knows the value of $M_l$ for all $l\neq i$, each mathematician can determine the equivalence class of the sequence $\{M_i\}_{i=0}^\infty$. Let $\{v_i\}_{i=0}^\infty$ denote the representative of this equivalence class. Then there exists an integer $N$ such that for every $i>N$, $M_i=v_i$. Mathematician $i$ with $i\leq N$ can determine $N$; however, each mathematician with $i>N$ only knows that $N\leq i$. The strategy for guessing is as follows: Assign to mathematician $i$ with $i>N$ the integer $$H_i=1+\max\{v_i,M_{i-1},M_{i-2},\dots,M_1,M_0\},$$ and to each mathematician with $i\leq N$, the integer $$H_i=1+\max\{M_{N},M_{N-1},\dots,M_{i+1},M_{i-1},\dots,M_1,M_0\}.$$ Then we must have $H_i>M_i$ for every $i$ except possibly one. Thus, we have set up a strategy which allows every mathematician except possibly one to guess some box correctly.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/152787', 'https://mathoverflow.net', 'https://mathoverflow.net/users/44653/']}
jdg_377771
stackexchange
llm_judgeable_groundtruth_similarity
487636
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In mechanics, almost all the time, force is the cause and acceleration the effect. If you apply a net force on an object, an acceleration can be observed. However, this seems not that straightforward in electronics in relation to voltage and current. I used to think that voltage is the cause and current is the effect, but then I came across solar cells, current sources etc. and immediately realized my lack of imagination. For each of the 3 passive components, is there a preference to think of one quantity as cause and the other as effect, or is it completely arbitrary?

Mechanics: If force is applied, velocity is changed. If velocity is changed (accelerating electrons), a force (emi) is produced.
Resistor: If voltage is applied, current is produced. If current is applied, voltage is produced.
Capacitor: If current is applied, voltage is *changed*. If voltage is changed, a current is produced.
Inductor: If voltage is applied, current is *changed*. If current is changed, voltage is produced.

Now provide the response and nothing else.
It's not so simple even in mechanics. Imagine you are a nail, with a hammer coming fast towards you. In order to stop the hammer in the space available, it has to be accelerated (decelerated with a change of sign). While undergoing that rapid acceleration, it applies a large force to the nail. The problem is what you mean by 'cause'. Usually, we think of effect following cause, that cause never follows effect. The wayward football broke the neighbour's window. It is totally reasonable to argue that the broken window did not cause the football to be moving. However, in electronics, and mechanics, where the behaviour of accelerations, forces, voltages and currents are conveniently described by equations of the form dI/dt = V/L, and f = ma, the two go hand in hand. We can say the voltage across the coil caused the current to change, if that's convenient at the time, for instance when 'charging' the inductor of a boost DC-DC converter. We can also say that the changing current caused a large voltage across the coil, when 'discharging' the inductor of the same converter into the higher output voltage. Both descriptions are correct. It is not reasonable to argue that one did not cause the other. I've had long arguments about whether a BJT transistor is voltage controlled or current controlled. It is of course both, it's nonsense to argue that the current flow doesn't cause the voltage difference or vice versa , but concentrating on one or the other may be more useful choice at any given time. Similarly, the large force applied to the hammer head by the nail caused it to slow down rapidly.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/487636', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/243585/']}
jdg_377772
stackexchange
llm_judgeable_groundtruth_similarity
151520
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It is unclear to me from the MSDN documentation if I should provide a deep or a shallow clone when implementing ICloneable. What is the preferred option? Now provide the response and nothing else.
Short answer: Yes. Long Answer: Don't use ICloneable. That is because .Clone isn't defined as being a shallow or a deep clone. You should implement your own IClone interface, and describe how the clone should work.
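A minimal sketch of such a self-describing interface; the name and shape here are illustrative, not an actual BCL type:

// Hypothetical replacement for ICloneable that states its depth explicitly.
public interface IDeepCloneable<T>
{
    // Returns a copy that shares no mutable state with the original.
    T DeepClone();
}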
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/151520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20164/']}
jdg_377773
stackexchange
llm_judgeable_groundtruth_similarity
2762042
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have a ComboBox that has a SelectionChanged event, it fires when I'm loading the control. So at page load I set the SelectedValue and the SelectionChanged event fires which is not what I want to happen. What is the accepted apporach to stopping this? Now provide the response and nothing else.
Two obvious solutions to this would be: 1) Wait until the Loaded event of the Window/Page/UserControl which contains the ComboBox and hook up SelectionChanged there, e.g. in the constructor:

// set the initial selected index for the combo box here...
this.Loaded += (s, args) =>
{
    cmbBox.SelectionChanged += new SelectionChangedEventHandler(HandleChanged);
};

or 2) Check that the ComboBox has loaded in the selection changed handler before doing anything, and return if it hasn't, e.g. in the handler:

if (!cmbBox.IsLoaded)
    return;

I would prefer number 1, as it doesn't require the check every time the SelectionChanged handler is fired.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2762042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129195/']}
jdg_377774
stackexchange
llm_judgeable_groundtruth_similarity
54688
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a function something like this: test[x_] := (Print[1]; Return[2];) Except it prints 1 when executed as test[a] for any a on the notebook, but when used in computations uses the 2 value instead. Thus test[2342] will output just 1 but test[43589] + 1 will output just 2 (because the value returned, 2, is used instead). I have looked into Interpretation but that returns a held object and does not allow you do perform operations like test[1] + 123 without explicitly releasing the hold. Now provide the response and nothing else.
Does this do what you're after?

ClearAll[test];
test[a_] /; Length[Stack[]] == 3 := 1
test[a_] := 2

test[134123]
(* 1 *)

Identity[test[134123]]
(* 2 *)

1 + test[134123]
(* 3 *)

The value returned by Stack[] in the condition is, in the first example,

{test, Equal, Length}

In the second, it is

{Identity, test, Equal, Length}

One can see that if test is called by itself, the length of the stack will be three. If test appears inside other functions, the stack will be longer.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/54688', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/13164/']}
jdg_377775
stackexchange
llm_judgeable_groundtruth_similarity
11569284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have a store backed combobox selection that fires an event under ExtJS 4, how do I go about getting the full object that is represented by that selection? Now provide the response and nothing else.
In general, you can use the findRecordByValue method on the combobox: combobox.on('change', function(combobox, newValue, oldValue) { // Get the old and the new records. // NOTE: The underlying store is not guaranteed to // contain an associated record. var oldRecord = combobox.findRecordByValue(oldValue); if (oldRecord) { // Do something... } var newRecord = combobox.findRecordByValue(newValue); if (newRecord) { // Do something... }});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11569284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20774/']}
jdg_377776
stackexchange
llm_judgeable_groundtruth_similarity
34486346
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My code using json_decode() worked correctly with PHP 5.6. After migration to PHP 7.0, json_decode() returns NULL and json_last_error() tells me that my error is: Control character error, possibly incorrectly encoded After debugging, I found out that my problem are both tabs and new line characters in string values. If I remove them both, it works. If I leave either new lines or tabs, the error occurs. Is json_decode() behavior changed in PHP 7? I would like to keep tabs and new lines in my .json files for better readability. The code works if I replace tabs to \t and new lines to \n. How can I keep new lines and tabs? Now provide the response and nothing else.
Due to a software licensing issue, the old json module was replaced with the jsond module. You can see the discussion of this change and the attached pull request here. Now, there's not much information about the changes or about workarounds, but I can see that all control characters inside strings ([\x00-\x1F]) trigger an error. Unfortunately for you, it seems that this behavior is correct per the JSON standard: Insignificant whitespace is allowed before or after any token. The whitespace characters are: character tabulation (U+0009), line feed (U+000A), carriage return (U+000D), and space (U+0020). Whitespace is not allowed within any token, except that space is allowed in strings. So, in other words, literal tabs are not allowed inside JSON strings at all; they must be \t or \u0009. The JSON you're consuming is thus in direct violation of the standard. Ideally, you should get your JSON source to return standards-compliant JSON. If that won't work, you'll have to pre-process the JSON and convert tabs inside strings to \t.
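If pre-processing is your only option, a hedged PHP sketch along these lines could work; it assumes the input is otherwise valid JSON and only repairs raw control characters inside string literals:

// Hypothetical pre-processor: find JSON string literals, then escape any raw
// control characters (tabs, newlines, ...) found inside them as \uXXXX.
$fixed = preg_replace_callback(
    '/"(?:[^"\\\\]|\\\\.)*"/s',            // matches a JSON string literal
    function ($m) {
        return preg_replace_callback('/[\x00-\x1F]/', function ($c) {
            return sprintf('\\u%04x', ord($c[0]));
        }, $m[0]);
    },
    $raw
);
$data = json_decode($fixed, true);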
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34486346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4403732/']}
jdg_377777
stackexchange
llm_judgeable_groundtruth_similarity
292318
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I log in via ssh on a Linux(Ubuntu) server, I notice that all the bash commands executed by other users on the server are saved in the command history. Is there a way that could allow me to hide the commands that I have typed in the command line from other users on the server? Now provide the response and nothing else.
There are many ways to hide your command history, but it's a bad idea to turn off history altogether, as it is very useful. Here are three good ways to turn it off temporarily.

Quickest solution: type

unset HISTFILE

That will prevent all commands run in the current login session from getting saved to the .bash_history file when you log out. Note that HISTFILE will get reset the next time you log in, so history will be saved as usual. Also note that this removes all commands from the session, including ones run before you typed unset HISTFILE, which may not be what you want. Another downside is that you cannot be sure you did it right until you log out, as bash will still let you use the up arrow to see previous commands.

Best solution: type a space before a command

Try it, and then hit the up arrow to see if it got added to your history. Some sites have it already set up so that such commands are not saved. If it does not work, add the line

export HISTCONTROL=ignoreboth

to your .bashrc file. When you log in in the future, commands that start with a space will be forgotten immediately.

Easiest to remember: type

sh

That will start a subshell with the original Bourne shell. Any commands written in it (until you exit) will not be saved in your history. Anybody looking at your history file will be able to see that you ran sh (which is suspicious), but not what you ran after that.

There are many other ways of doing this. You can even tell bash which commands to never remember (HISTIGNORE). See the man page for bash(1) and search for HIST to see lots of possibilities.
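For instance, the HISTIGNORE route looks like this; the pattern list here is just an example (colon-separated glob patterns matched against the whole command line):

# Never record bare 'ls', any 'cd ...' command, or history invocations.
export HISTIGNORE='ls:cd *:history*'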
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/292318', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/172771/']}
jdg_377778
stackexchange
llm_judgeable_groundtruth_similarity
322225
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can someone give me an example of a function which is absolutely continuous but not Hölder continuous? Thanks. Now provide the response and nothing else.
$$f(x)=\begin{cases} 1/\log x \quad &\text{if } x\in (0,1/2] \\ 0 &\text{if }x=0\end{cases}$$ Consider the behavior of $f(x)/x^\alpha$ at zero. Also notice that $f'$ is bounded on $[1/n,1/2]$ for all $n=3,4,\dots$.
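To spell out the hint (this computation is my addition, not part of the original answer): for any $\alpha>0$,
$$\lim_{x\to 0^+}\frac{|f(x)|}{x^{\alpha}}=\lim_{x\to 0^+}\frac{1}{x^{\alpha}\,|\log x|}=\infty,$$
because $x^{\alpha}|\log x|\to 0$ as $x\to 0^+$; hence $f$ fails the Hölder condition of every order $\alpha$ at $0$. On the other hand, $f'(x)=-\frac{1}{x\log^{2}x}$ is integrable on $(0,1/2]$ (substitute $u=\log x$), which gives the absolute continuity of $f$ on $[0,1/2]$.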
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/322225', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/64736/']}
jdg_377779
stackexchange
llm_judgeable_groundtruth_similarity
8365380
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I convert my date string to a datetime.timedelta() in Python? I have this code:

import datetime

date_select = '2011-12-1'
delta = datetime.timedelta(days=1)
target_date = date_select + delta
print target_date

Thanks in advance ... Now provide the response and nothing else.
You wouldn't convert date_select to a timedelta ; instead, you need a datetime object, which can be added to a timedelta to produce an updated datetime object:

from datetime import datetime, timedelta

date_select = datetime.strptime('2011-12-1', '%Y-%m-%d')
delta = timedelta(days=1)
target_date = date_select + delta
print target_date

Or, if you prefer, without the fancy from ... import ... line:

import datetime  # <- LOOK HERE, same as in your example

date_select = datetime.datetime.strptime('2011-12-1', '%Y-%m-%d')
delta = datetime.timedelta(days=1)
target_date = date_select + delta
print target_date
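The code above is Python 2 ( print is a statement). For completeness, a Python 3 version of the same idea (my adaptation, not part of the original answer):

from datetime import datetime, timedelta

date_select = datetime.strptime('2011-12-1', '%Y-%m-%d')
target_date = date_select + timedelta(days=1)
print(target_date)                       # 2011-12-02 00:00:00
print(target_date.strftime('%Y-%m-%d'))  # back to a string: '2011-12-02'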
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8365380', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1038815/']}
jdg_377780
stackexchange
llm_judgeable_groundtruth_similarity
6821
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Hilbert–Schmidt norm $\mathrm {tr}(A^{\dagger}B)$ works well for unitaries. It has a value of one when the matrices are equal and less than one otherwise. But this norm is absolutely unsuitable for non-unitary matrices. I thought maybe $\frac{\mathrm{tr}(A^{\dagger}B)}{\sqrt{\mathrm{tr}(A^{\dagger}A)} \sqrt{\mathrm{tr}(B^{\dagger}B)}}$ would be a good idea? Now provide the response and nothing else.
When you ask about an 'ideal' fidelity measure, it assumes that there is one measure which inherently is the most meaningful or truest measure. But this isn't really the case. For unitary operators, our analysis of the error used in approximating one unitary by another involves the distance induced by the operator norm : $$ \bigl\lVert U - V \bigr\rVert_\infty := \max_{\substack{\lvert \psi\rangle \ne \mathbf 0}} \frac{\bigl \lVert (U - V) \lvert \psi \rangle \bigr\rVert_2}{\bigl \lVert \lvert \psi \rangle \bigr\rVert_2} $$ That is, it is the greatest factor by which the Euclidean norm (or 2-norm) of a vector will be increased by the action of $(U - V)$ : if the two operators are very nearly equal, this factor will be very small. I know you asked for norms on non-unitary matrices, but if a norm is useful for non-unitary matrices, you might hope that it would also be useful for unitary matrices, and the point here is that the 'operator norm' is. It is also useful for (non-unitary) observables: for two Hermitian operators $E$ and $F$ — representing evolution Hamiltonians, for instance, or measurement projectors — the operator norm $\lVert E - F \rVert$ conveys how similar $E$ and $F$ are in a way which directly relates to how easily you can operationally distinguish one from the other. On the other hand, for density operators $\rho$ and $\sigma$ , the best distance measure to describe how easily you can distinguish them is the trace norm : $$\bigl\lVert \rho - \sigma \bigr\rVert_{\mathrm{tr}} := \mathrm{tr} \Bigl( \sqrt{(\rho - \sigma) ^2} \Bigr)$$ which is the same as (in fact, it's just a fancy way of writing) the sum of the absolute values of the eigenvalues of $(\rho - \sigma) $ : if the two operators are very nearly equal, this sum will be very small. So, which norm you want to use to describe distances on operators, depends on what those operators are and what you would like to say about them.
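As a concrete illustration of the last two paragraphs (this example is my addition): take the qubit density operators $\rho=\mathrm{diag}(1,0)$ and $\sigma=\mathrm{diag}(1/2,1/2)$, so that $\rho-\sigma=\mathrm{diag}(1/2,-1/2)$. Then
$$\lVert \rho-\sigma\rVert_{\mathrm{tr}}=\left|\tfrac{1}{2}\right|+\left|-\tfrac{1}{2}\right|=1,\qquad \lVert \rho-\sigma\rVert_{\infty}=\tfrac{1}{2},$$
so already in the simplest case the two norms assign different distances, and which one is appropriate depends on the operational question you are asking of the operators.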
{}
{'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/6821', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/462/']}
jdg_377781
stackexchange
llm_judgeable_groundtruth_similarity
290319
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Investigations on the Theory of the Brownian Movement , on page 5, Einstein wrote: of all atoms of the system), and if the complete system of the equations of change of these variables of state is given in the form $$\dfrac{\partial p_v}{\partial t}=\phi_v(p_1\ldots p_l)\ (v=1,2,\ldots l)$$ whence $$\sum\frac{\partial\phi_v}{\partial p_v}=0,$$ I assume it is an elementary result, since he gives no explanation of how to deduce it. How can I obtain this relation? Attempt: I tried to consider $$\sum\frac{\partial \phi_v}{\partial p_v} ~=~ \sum\frac{\mathrm{d}t \phi_v}{\mathrm{d}t} \left(\partial_t p_v \right)^{-1} ~=~ \sum \frac{\partial_t \phi_v}{ \phi_v} \,,$$but I couldn't go any further. Now provide the response and nothing else.
The variables $$p^{\nu}, \qquad \nu=1,\ldots, \ell \tag{A}$$ are the phase space coordinates. The derivative $\frac{\partial p^{\nu}}{\partial t}$ in Einstein's paper is a total time derivative. The vector field $$\phi~=~\sum_{\nu=1}^{\ell}\phi^{\nu}\frac{\partial }{\partial p_{\nu}} \tag{B}$$ generates time evolution. The divergence of a vector field is $$ {\rm div}\phi~=~ \frac{1}{\rho}\sum_{\nu=1}^{\ell}\frac{\partial (\rho\phi^{\nu})}{\partial p^{\nu}},\tag{C}$$ where $\rho$ is the density in phase space, which we will assume is constant $$\rho={\rm constant} \tag{D}$$ (wrt. the chosen coordinate system). Apparently Einstein assumes that the vector field $\phi$ is divergence-free, $$ {\rm div}\phi~=~0 .\tag{E}$$ We stress that not all vector fields are divergence-free. Counterexample: The dilation vector field $$\phi~=~\sum_{\nu=1}^{\ell}p^{\nu}\frac{\partial }{\partial p^{\nu}}\tag{F}$$ is not divergence-free. The corresponding flow solution reads$$ p^{\nu}(t)~=~p^{\nu}_{(0)} e^t.\tag{G}$$ Assumptions (D) and (E) follow e.g. in a Hamiltonian formulation because of (among other things) Liouville's theorem . Recall that Hamiltonian vector fields are divergence-free. See also this related Phys.SE post.
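To make the Hamiltonian remark concrete (my addition, not part of the original answer): for one degree of freedom with phase-space coordinates $(q,p)$ and flow $\dot q=\partial H/\partial p$, $\dot p=-\partial H/\partial q$, the generating vector field is $\phi=\frac{\partial H}{\partial p}\frac{\partial}{\partial q}-\frac{\partial H}{\partial q}\frac{\partial}{\partial p}$, and
$$ {\rm div}\phi~=~\frac{\partial}{\partial q}\frac{\partial H}{\partial p}-\frac{\partial}{\partial p}\frac{\partial H}{\partial q}~=~0$$
by equality of mixed partial derivatives. This is Liouville's theorem in miniature, and it is exactly the assumption (E) that Einstein makes.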
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/290319', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/87709/']}
jdg_377782
stackexchange
llm_judgeable_groundtruth_similarity
3415162
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The other day my friend was asked to find $A$ and $B$ in the equation $$(x^3+2x+1)^{17} \equiv Ax+B \pmod {x^2+1}$$ A method was proposed by our teacher to use complex numbers, specifically to let $x=i$ where $i$ is the imaginary unit. We obtain from that substitution $$(i+1)^{17} \equiv Ai+B \pmod {0}$$ which, if we have understood it correctly, is valid if we define $a \equiv b \pmod n$ to mean $a=b+dn$ . Running through with this definition we have $$\begin{align*}(i+1)^{17} &=\left(\sqrt{2}\left(\cos\left(\frac{\pi}{4}\right)+i\sin\left(\frac{\pi}{4}\right)\right)\right)^{17}\\&=\sqrt{2}^{17}\left(\cos\left(\frac{17\pi}{4}\right)+i\sin\left(\frac{17\pi}{4}\right)\right) \tag{De Moivre}\\&=256\left(\sqrt{2}\left(\cos\left(\frac{\pi}{4}\right)+i\sin\left(\frac{\pi}{4}\right)\right)\right)\\&=256\left(1+i\right) \\&=256+256i\end{align*}$$ which gives the correct coefficient values for $A$ and $B$ . Our question is: Why is this substitution valid to begin with? It seems here that the special case ( $x=i$ ) implies the general case (arbitrary $x$ ); why is that valid? Now provide the response and nothing else.
Here I think it's easier to see what's going on if we forgo the modular arithmetic and look at simple factoring and remainder. We have $$(x^3+2x+1)^{17}=(x^2+1)Q(x)+Ax+B$$ for some polynomial $Q$ . Which polynomial? We don't really care. The main point is that the left-hand side and the right-hand side are the same polynomial. And since they are the same, they must give the same value when we evaluate them at $x=i$ . So we insert $x=i$ and get $$(i^3+2i+1)^{17}=(i^2+1)Q(i)+Ai+B\\(i+1)^{17}=0\cdot Q(i)+Ai+B$$ Knowing that $A,B$ are real means we can find them directly from this, as $Q$ disappears.
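As a quick sanity check (my addition, not part of the original answer), the powers of $1+i$ collapse without any trigonometry:
$$(1+i)^2=2i\quad\Rightarrow\quad(1+i)^{16}=(2i)^8=2^8\,i^8=256\quad\Rightarrow\quad(1+i)^{17}=256(1+i),$$
which confirms $A=B=256$.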
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3415162', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_377783
stackexchange
llm_judgeable_groundtruth_similarity
18453771
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In NumPy, x*x*x is an order of magnitude faster than x**3 or even np.power(x, 3).

x = np.random.rand(1e6)

%timeit x**3
100 loops, best of 3: 7.07 ms per loop

%timeit x*x*x
10000 loops, best of 3: 163 µs per loop

%timeit np.power(x, 3)
100 loops, best of 3: 7.15 ms per loop

Any ideas as to why this behavior happens? As far as I can tell all three yield the same output (checked with np.allclose). Now provide the response and nothing else.
As per this answer , it's because the implementation of exponentiation has some overhead that multiplication does not. However, naive multiplication will get slower and slower as the exponent increases. An empirical demonstration:

In [3]: x = np.random.rand(1e6)

In [15]: %timeit x**2
100 loops, best of 3: 11.9 ms per loop

In [16]: %timeit x*x
100 loops, best of 3: 12.7 ms per loop

In [17]: %timeit x**3
10 loops, best of 3: 132 ms per loop

In [18]: %timeit x*x*x
10 loops, best of 3: 27.2 ms per loop

In [19]: %timeit x**4
10 loops, best of 3: 132 ms per loop

In [20]: %timeit x*x*x*x
10 loops, best of 3: 42.4 ms per loop

In [21]: %timeit x**10
10 loops, best of 3: 132 ms per loop

In [22]: %timeit x*x*x*x*x*x*x*x*x*x
10 loops, best of 3: 137 ms per loop

In [24]: %timeit x**15
10 loops, best of 3: 132 ms per loop

In [25]: %timeit x*x*x*x*x*x*x*x*x*x*x*x*x*x*x
1 loops, best of 3: 212 ms per loop

Note the exponentiation time stays more or less constant, except for the x**2 case which I suspect is special-cased, while multiplication gets slower and slower. It seems you could exploit this to get faster integer exponentiation... for example:

In [26]: %timeit x**16
10 loops, best of 3: 132 ms per loop

In [27]: %timeit x*x*x*x*x*x*x*x*x*x*x*x*x*x*x*x
1 loops, best of 3: 225 ms per loop

In [28]: def tosixteenth(x):
   ....:     x2 = x*x
   ....:     x4 = x2*x2
   ....:     x8 = x4*x4
   ....:     x16 = x8*x8
   ....:     return x16
   ....:

In [29]: %timeit tosixteenth(x)
10 loops, best of 3: 49.5 ms per loop

It seems you could apply this technique generically by splitting any integer into a sum of the powers of two, computing each power of two as above, and summing:

In [93]: %paste
def smartintexp(x, exp):
    result = np.ones(len(x))
    curexp = np.array(x)
    while True:
        if exp % 2 == 1:
            result *= curexp
        exp >>= 1
        if not exp:
            break
        curexp *= curexp
    return result
## -- End pasted text --

In [94]: x
Out[94]:
array([ 0.0163407 ,  0.57694587,  0.47336487, ...,  0.70255032,
        0.62043303,  0.0796748 ])

In [99]: x**21
Out[99]:
array([  3.01080670e-38,   9.63466181e-06,   1.51048544e-07, ...,
         6.02873388e-04,   4.43193256e-05,   8.46721060e-24])

In [100]: smartintexp(x, 21)
Out[100]:
array([  3.01080670e-38,   9.63466181e-06,   1.51048544e-07, ...,
         6.02873388e-04,   4.43193256e-05,   8.46721060e-24])

In [101]: %timeit x**21
10 loops, best of 3: 132 ms per loop

In [102]: %timeit smartintexp(x, 21)
10 loops, best of 3: 70.7 ms per loop

It's fast for small even powers of two:

In [106]: %timeit x**32
10 loops, best of 3: 131 ms per loop

In [107]: %timeit smartintexp(x, 32)
10 loops, best of 3: 57.4 ms per loop

But gets slower as the exponent gets larger:

In [97]: %timeit x**63
10 loops, best of 3: 133 ms per loop

In [98]: %timeit smartintexp(x, 63)
10 loops, best of 3: 110 ms per loop

And not faster for large worst-cases:

In [115]: %timeit x**511
10 loops, best of 3: 135 ms per loop

In [114]: %timeit smartintexp(x, 511)
10 loops, best of 3: 192 ms per loop
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18453771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2719458/']}
jdg_377784
stackexchange
llm_judgeable_groundtruth_similarity
226694
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Based on @xzczd's excellent answer on solving an equation system with unknown functions defined on different domains, I've tried to apply the same technique to a similar system shown below: Equations: $$\frac{\partial c(x,z,t)}{\partial t}=D_{eff}\frac{\partial^2c(x,z,t)}{\partial x^2}+D_{eff}\frac{\partial^2c(x,z,t)}{\partial z^2}$$ $$\frac{2*len*k_x(c(l/2,z,t)-Cv(z,t))}{\pi*rad^2-len*l}-v_z\frac{\partial Cv(z,t)}{\partial z}=\frac{\partial Cv(z,t)}{\partial t}$$ Initial conditions: $$c(x,z,0)=1$$ $$Cv(z,0)=0$$ Boundary conditions: $$\frac{\partial c(x,z,t)}{\partial x}\Bigm|_{x=0}=0$$ $$\frac{\partial c(x,z,t)}{\partial z}\Bigm|_{z=0,len}=0$$ $$D_{eff}\frac{\partial c(x,z,t)}{\partial x}\Bigm|_{x=\pm l/2}=k_x(c(\pm l/2,z,t)-Cv(z,t))$$ New possible b.cs for $Cv$ : $$\frac{\partial Cv(z,t)}{\partial z}\Bigm|_{z=0, len}=0$$ This is the code I have so far, using the function pdetoode in this post as well as other functions in @xzczd's post linked at the top. The main ways it differs from the post at the top are that the domain is different in the x and z directions, and obviously the boundary conditions are different.

len = 0.1; l = 0.004; rad = 0.1; vz = 0.0024; kx = 8.6*10^-4;
Deff = 8*10^-9;
domainx = {-l/2, l/2}; domainz = {0, len};
T = 10000;

{eq1, eq2} = {D[c[x, z, t], t] == Deff*D[c[x, z, t], {x, 2}] + Deff*D[c[x, z, t], {z, 2}],
   2*len*kx ((c2[z, t]) - Cv[z, t])/(Pi*rad^2 - len*l) - vz*D[Cv[z, t], {z, 1}] == D[Cv[z, t], {t, 1}]};

{ic1, ic2} = {c[x, z, 0] == 1, Cv[z, 0] == 0};

{bc1, bc2, bc3, bc4, bc5, bc6, bc7} = {(D[c[x, z, t], x] /. x -> 0) == 0,
   (Deff*D[c[x, z, t], x] /. x -> l/2) == kx*((c[l/2, z, t]) - Cv2[x, z, t]),
   (Deff*D[c[x, z, t], x] /. x -> -l/2) == kx*((c[-l/2, z, t]) - Cv2[x, z, t]),
   (D[c[x, z, t], z] /. z -> len) == 0,
   (D[c[x, z, t], z] /. z -> 0) == 0,
   (D[Cv[z, t], z] /. z -> 0) == 0,
   (D[Cv[z, t], z] /. z -> len) == 0};

Then attempting to solve using @xzczd's method (I know there are likely many problems here, especially with how I deal with the boundary conditions):

points = 71;
gridx = Array[# &, points, domainx];
gridz = Array[# &, points, domainz];
difforder = 4;
ptoofunc1 = pdetoode[{c, Cv2}[x, z, t], t, {gridx, gridz}, difforder];
ptoofunc2 = pdetoode[{c2, Cv}[z, t], t, gridz, difforder];
del = #[[2 ;; -2]] &;
rule1 = Cv2[x_, z_][t_] :> Cv[z][t];
rule2 = c2[z_][t_] :> c[l/2, z][t];
ode1 = del /@ del@ptoofunc1@eq1;
ode2 = del@ptoofunc2@eq2 /. rule2;
odeic1 = ptoofunc1@ic1;
odeic2 = ptoofunc2@ic2;
odebc1 = ptoofunc1@bc1;
odebc2 = ptoofunc1@bc2 /. rule1;
odebc3 = ptoofunc1@bc3 /. rule1;
odebc4 = ptoofunc1@bc4;
odebc5 = ptoofunc1@bc5;
odebc6 = ptoofunc2@bc6;
odebc7 = ptoofunc2@bc7;
sol = NDSolveValue[{ode1, ode2, odeic1, odeic2, odebc1, odebc2, odebc3, odebc4, odebc5, odebc6, odebc7},
   {Outer[c, gridx, gridz], Cv /@ gridz}, {t, 0, T}];
solc = rebuild[sol[[1]], {gridx, gridz}, 3];
solCv = rebuild[sol[[2]], gridz, 2];

EDIT: I fixed a silly mistake and am now getting this error from NDSolveValue. I'm wondering if there is a problem with how I'm dealing with the boundary conditions using pdetoode (which I believe is the case) or with other variables and parameters, or if there's a problem in my equation setup to begin with.

NDSolveValue: There are fewer dependent variables, {c[-0.0002, 0.][t], c[-0.002, 0.00142857][t], c[-0.002, 0.00285714][t], <<45>>, c[-0.002, 0.0685714][t], c[-0.002, 0.07][5], <<5062>>}, than equations, so the system is overdetermined.
Thanks so much for reading this long post, and I'd appreciate any insight into how to fix the errors and which parameters I should modify from the post linked up top for this specific system. I'm relatively new to Mathematica and still learning the ropes, so any help would be greatly appreciated! Now provide the response and nothing else.
Observing $D_{eff}$ and $\pi$ in the OP suggests cylinders and porous media are present. When one starts to deviate from rectangular shapes, the FEM is superior. Because the FEM is quite tolerant of mesh cell shape, often it is easier to extend the model to where simpler boundary conditions exist and let Mathematica solve for the interface. I will demonstrate an alternate approach following the documentation for Modeling Mass Transport .

Copy and Modify Operator Functions

The tutorials and verification tests provide helper functions that allow you to generate a well-formed FEM operator. We will reproduce these functions here. Furthermore, we will adapt the functions for generating an axisymmetric operator from the Heat Transfer Verification Tests and also include porosity as shown below:

(* From Mass Transport Tutorial *)
Options[MassTransportModel] = {"ModelForm" -> "NonConservative"};
MassTransportModel[c_, X_List, d_, Velocity_, Rate_, opts : OptionsPattern[]] :=
 Module[{V, R, a = d},
  V = If[Velocity === "NoFlow", 0, Velocity];
  R = If[Rate === "NoReaction", 0, Rate];
  If[FreeQ[a, _?VectorQ], a = a*IdentityMatrix[Length[X]]];
  If[VectorQ[a], a = DiagonalMatrix[a]];
  (* Note the - sign in the operator *)
  a = PiecewiseExpand[Piecewise[{{-a, True}}]];
  If[OptionValue["ModelForm"] === "Conservative",
   Inactive[Div][a.Inactive[Grad][c, X], X] + Inactive[Div][V*c, X] - R,
   Inactive[Div][a.Inactive[Grad][c, X], X] + V.Inactive[Grad][c, X] - R]]

Options[TimeMassTransportModel] = Options[MassTransportModel];
TimeMassTransportModel[c_, TimeVar_, X_List, d_, Velocity_, Rate_, opts : OptionsPattern[]] :=
 D[c, {TimeVar, 1}] + MassTransportModel[c, X, d, Velocity, Rate, opts]

(* Adapted from Heat Transfer Verification Tests *)
MassTransportModelAxisymmetric[c_, {r_, z_}, d_, Velocity_, Rate_, Porosity_ : "NoPorosity"] :=
 Module[{V, R, P},
  P = If[Porosity === "NoPorosity", 1, Porosity];
  V = If[Velocity === "NoFlow", 0, Velocity.Inactive[Grad][c, {r, z}]];
  R = If[Rate === "NoReaction", 0, P Rate];
  1/r*D[-P*d*r*D[c, r], r] + D[-P*d*D[c, z], z] + V - R]

TimeMassTransportModelAxisymmetric[c_, TimeVar_, {r_, z_}, d_, Velocity_, Rate_, Porosity_ : "NoPorosity"] :=
 Module[{P},
  P = If[Porosity === "NoPorosity", 1, Porosity];
  P D[c, {TimeVar, 1}] + MassTransportModelAxisymmetric[c, {r, z}, d, Velocity, Rate, Porosity]]

Estimating the Timescale

Assuming the dimensions are SI, you have a high-aspect-ratio geometry, a small radius (2 mm), and a relatively large $D_{eff}$ for a liquid. Generally, it is not a good idea to simulate greatly beyond the fully responded time, as instabilities can creep in. Let's set up a simple axisymmetric model with the following parameters:

rinner = 0.002;
len = 0.1;
(* No gradients in the z-direction so make len small for now *)
len = rinner/5;
tend = 200;
Deff = 8*10^-9;
(* Porosity *)
epsilon = 0.5;

We will create an operator, initialize the domain to a concentration of 1, impart a DirichletCondition of 0 on the outer wall (named rinner for now), and create a couple of visualizations.
(* Set up the operator *)
op = TimeMassTransportModelAxisymmetric[c[t, r, z], t, {r, z}, Deff, "NoFlow", "NoReaction", epsilon];

(* Create Domain *)
Ω2Daxi = Rectangle[{0, 0}, {rinner, len}];

(* Setup Boundary and Initial Conditions *)
Subscript[Γ, wall] = DirichletCondition[c[t, r, z] == 0, r == rinner];
ic = c[0, r, z] == 1;

(* Solve PDE *)
cfun = NDSolveValue[{op == 0, Subscript[Γ, wall], ic}, c, {t, 0, tend}, {r, z} ∈ Ω2Daxi];

(* Setup ContourPlot Visualization *)
cRange = MinMax[cfun["ValuesOnGrid"]];
legendBar = BarLegend[{"TemperatureMap", cRange(*{0,1}*)}, 10,
   LegendLabel -> Style["[\!\(\*FractionBox[\(mol\), SuperscriptBox[\(m\), \(3\)]]\)]", Opacity[0.6`]]];
options = {PlotRange -> cRange, ColorFunction -> ColorData[{"TemperatureMap", cRange}],
   ContourStyle -> Opacity[0.1`], ColorFunctionScaling -> False, Contours -> 30,
   PlotPoints -> 100, FrameLabel -> {"r", "z"},
   PlotLabel -> Style["Concentration Field: c(t,r,z)", 18], AspectRatio -> 1, ImageSize -> 250};
nframes = 30;
frames = Table[Legended[ContourPlot[cfun[t, r, z], {r, z} ∈ Ω2Daxi, Evaluate[options]], legendBar],
   {t, 0, tend, tend/nframes}];
frames = Rasterize[#1, "Image", ImageResolution -> 100] & /@ frames;
ListAnimate[frames, SaveDefinitions -> True, ControlPlacement -> Top]

(* Setup Fake 3D Visualization *)
nframes = 40;
axisymPlot = Function[{t},
   Legended[
    RegionPlot3D[
     x^2 + y^2 <= (rinner)^2 && 0 <= PlanarAngle[{0, 0} -> {{rinner, 0}, {x, y}}] <= (4 π)/3,
     {x, -rinner, rinner}, {y, -rinner, rinner}, {z, 0, len},
     PerformanceGoal -> "Quality", PlotPoints -> 50, PlotLegends -> None,
     PlotTheme -> "Detailed", Mesh -> None, AxesLabel -> {x, y, z},
     ColorFunctionScaling -> False,
     ColorFunction -> Function[{x, y, z},
       Which[x^2 + y^2 >= (rinner)^2, Blue, True,
        ColorData[{"TemperatureMap", cRange}][cfun[t, Sqrt[x^2 + y^2], z]]]],
     ImageSize -> Medium,
     PlotLabel -> Style[StringTemplate["Concentration Field at t = `` [s]"][
        ToString@PaddedForm[t, {3, 4}]], 12]], legendBar]];
framesaxi = Table[axisymPlot[t], {t, 0, tend, tend/nframes}];
framesaxi = Rasterize[#1, "Image", ImageResolution -> 100] & /@ framesaxi;
ListAnimate[framesaxi, SaveDefinitions -> True, ControlPlacement -> Top]

The system responds in about 200 s, indicating that a 10,000 s end time may be excessive for a small-diameter system.

Modeling Flow

Constant convective heat/mass transfer film coefficients only apply to fully developed thermal and flow boundary layers. Indeed, the film coefficients are infinite at the entrance. Instead of making the assumption that the film coefficients are constant, I will demonstrate a workflow that allows the FEM solver to do the heavy lifting of managing the transport at the interface.

Boundary Layer Meshing

If the mesh is too coarse, the fluxes across interfaces are overpredicted. Therefore, one requires boundary layer meshing to reduce the overprediction error. Unfortunately, you have to roll your own boundary layer meshing for now.

Define Quad Mesh Helper Functions

Here are some helper functions that can be useful in defining an anisotropic quad mesh.
(* Load Required Package *)
Needs["NDSolve`FEM`"]

(* Define Some Helper Functions For Structured Quad Mesh *)
pointsToMesh[data_] :=
 MeshRegion[Transpose[{data}], Line@Table[{i, i + 1}, {i, Length[data] - 1}]];
unitMeshGrowth[n_, r_] := Table[(r^(j/(-1 + n)) - 1.)/(r - 1.), {j, 0, n - 1}]
unitMeshGrowth2Sided[nhalf_, r_] := (1 + Union[-Reverse@#, #])/2 &@unitMeshGrowth[nhalf, r]
meshGrowth[x0_, xf_, n_, r_] := (xf - x0) unitMeshGrowth[n, r] + x0
firstElmHeight[x0_, xf_, n_, r_] := Abs@First@Differences@meshGrowth[x0, xf, n, r]
lastElmHeight[x0_, xf_, n_, r_] := Abs@Last@Differences@meshGrowth[x0, xf, n, r]
findGrowthRate[x0_, xf_, n_, fElm_] :=
 Quiet@Abs@FindRoot[firstElmHeight[x0, xf, n, r] - fElm, {r, 1.0001, 100000}, Method -> "Brent"][[1, 2]]
meshGrowthByElm[x0_, xf_, n_, fElm_] := N@Sort@Chop@meshGrowth[x0, xf, n, findGrowthRate[x0, xf, n, fElm]]
meshGrowthByElmSym[x0_, xf_, n_, fElm_] :=
 With[{mid = Mean[{x0, xf}]}, Union[meshGrowthByElm[mid, x0, n, fElm], meshGrowthByElm[mid, xf, n, fElm]]]
reflectRight[pts_] := With[{rt = ReflectionTransform[{1}, {Last@pts}]}, Union[pts, Flatten[rt /@ Partition[pts, 1]]]]
reflectLeft[pts_] := With[{rt = ReflectionTransform[{-1}, {First@pts}]}, Union[pts, Flatten[rt /@ Partition[pts, 1]]]]
extendMesh[mesh_, newmesh_] := Union[mesh, Max@mesh + newmesh]
uniformPatch[p1_, p2_, ρ_] := With[{d = p2 - p1}, Subdivide[0, d, 2 + Ceiling[d ρ]]]

Build a Two Region Mesh (Porous/Fluid)

The following workflow builds a 2D annular mesh with a green porous inner region and a red outer fluid region. I've adjusted some parameters to slow things down a bit so they can be seen in the animations.

Annular Velocity Profile for Laminar Newtonian Flow

To make things a bit more interesting, we will create a flow field for axial laminar flow in the annular region based on this diagram. For laminar flow in an annulus, the following equation for the velocity profile may be used:

Vannular[vavgz_, Ro_, κ_][r_] :=
 vavgz (2 (Ro^2 (-1 + κ^2) Log[Ro/r] + (-r^2 + Ro^2) Log[1/κ]))/(Ro^2 (-1 + κ^2 + (1 + κ^2) Log[1/κ]))
Plot[Vannular[vzfluid, router, kappa][r], {r, kappa router, router}]
(* Region Dependent Diffusion, Porosity, and Velocity *)
diff = Evaluate[Piecewise[{{Deff, ElementMarker == reg["porous"]}, {Dfluid, True}}]];
porous = Evaluate[Piecewise[{{epsilon, ElementMarker == reg["porous"]}, {1, True}}]];
velocity = Evaluate[Piecewise[{{{{0, 0}}, ElementMarker == reg["porous"]},
    {{{0, Vannular[vzfluid, router, kappa][r]}}, True}}]];

(* Create Operator *)
op = TimeMassTransportModelAxisymmetric[c[t, r, z], t, {r, z}, diff, velocity, "NoReaction", porous];

(* Set up BCs and ICs *)
Subscript[Γ, in] = DirichletCondition[c[t, r, z] == 0, z == 0 && r >= rinner];
ic = c[0, r, z] == 1;

(* Solve *)
cfun = NDSolveValue[{op == 0, Subscript[Γ, in], ic}, c, {t, 0, tend}, {r, z} ∈ mesh];

(* Display ContourPlot Animation *)
cRange = MinMax[cfun["ValuesOnGrid"]];
legendBar = BarLegend[{"TemperatureMap", cRange(*{0,1}*)}, 10,
   LegendLabel -> Style["[\!\(\*FractionBox[\(mol\), SuperscriptBox[\(m\), \(3\)]]\)]", Opacity[0.6`]]];
options = {PlotRange -> cRange, ColorFunction -> ColorData[{"TemperatureMap", cRange}],
   ContourStyle -> Opacity[0.1`], ColorFunctionScaling -> False, Contours -> 20,
   PlotPoints -> All, FrameLabel -> {"r", "z"},
   PlotLabel -> Style["Concentration Field: c(t,r,z)", 18],
   (*AspectRatio\[Rule]Automatic,*)AspectRatio -> 1, ImageSize -> 250};
nframes = 30;
frames = Table[Legended[ContourPlot[cfun[t, r, z], {r, z} ∈ mesh, Evaluate[options]], legendBar],
   {t, 0, tend, tend/nframes}];
frames = Rasterize[#1, "Image", ImageResolution -> 100] & /@ frames;
ListAnimate[frames, SaveDefinitions -> True]

(* Display RegionPlot3D Animation *)
nframes = 40;
axisymPlot2 = Function[{t},
   Legended[
    RegionPlot3D[
     x^2 + y^2 <= (router)^2 && 0 <= PlanarAngle[{0, 0} -> {{router, 0}, {x, y}}] <= (4 π)/3,
     {x, -router, router}, {y, -router, router}, {z, 0, len},
     PerformanceGoal -> "Quality", PlotPoints -> 50, PlotLegends -> None,
     PlotTheme -> "Detailed", Mesh -> None, AxesLabel -> {x, y, z},
     ColorFunctionScaling -> False,
     ColorFunction -> Function[{x, y, z},
       Which[x^2 + y^2 >= (router)^2, Blue, True,
        ColorData[{"TemperatureMap", cRange}][cfun[t, Sqrt[x^2 + y^2], z]]]],
     ImageSize -> Medium,
     PlotLabel -> Style[StringTemplate["Concentration Field at t = `` [s]"][
        ToString@PaddedForm[t, {3, 4}]], 12]], legendBar]];
framesaxi2 = Table[axisymPlot2[t], {t, 0, tend, tend/nframes}];
framesaxi2 = Rasterize[#1, "Image", ImageResolution -> 95] & /@ framesaxi2;
ListAnimate[framesaxi2, SaveDefinitions -> True, ControlPlacement -> Top]

The simulation produces qualitatively reasonable results. The Mass Transport Tutorial also shows how to add an equilibrium condition between the porous and fluid phases by adding a thin interface. I also demonstrated this principle in my Wolfram Community post Modeling jump conditions in interphase mass transfer .

Conclusion

By extending the model to where simple boundary conditions exist, we have obviated the need for complex boundary conditions.

Appendix

As per the OP's request in the comments, the list below shows several examples where I have used anisotropic quad meshing to capture sharp interfaces that would otherwise be very computationally expensive. The code is functional, but not optimal, and some of the functions have been modified over time. Use at your own risk.

2D-Stationary
Mathematica vs. MATLAB: why am I getting different results for PDE with non-constant boundary condition?
Improving mesh and NDSolve solution convergence

2D-Transient
Controlling dynamic time step size in NDSolveValue
How to model diffusion through a membrane?
3D-Stationary
How to Improve FEM Solution with NDSolve?

If you have access to other tools, such as COMSOL, that have boundary layer functionality, you can import meshes via the FEMAddOns resource function . It will not work for 3D meshes, which require additional element types (prisms and pyramids) that are not currently supported in Mathematica's FEM.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/226694', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/73662/']}
jdg_377785
stackexchange
llm_judgeable_groundtruth_similarity
18622372
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a web app serving a WCF REST API for JSON and an ASMX web service. The application has been around for a few years. It's based on ASP.NET 2.0, but upgraded to .NET 4.0 a couple of years ago, and I just upgraded to .NET 4.5 to be able to use the new async framework. Behind the application are some legacy services, and I realized that there is a big potential for increasing the performance by going async. I have implemented async all the way through the application, and everything is working perfectly through the WCF REST API. Too late I discovered that the ASMX API fails. I wanted methods like this:

[WebMethod(Description = "Takes an internal trip ID as parameter.")]
async public Task<Trip> GetTrip(int tripid)
{
    var t = await Trip.GetTrip(tripid);
    return t;
}

I then learned that async/await isn't supported in ASMX at all, and everybody advises to migrate to WCF. I am not too joyful about this. The ASMX (actually three of them) are stuffed with different methods, and there are loads of API consumers that we want to keep serving from the old API. But we need the increased performance! Does anybody know about a workaround so I can keep using async/await behind the ASMX, but expose the ASMX as before? Now provide the response and nothing else.
It may be possible to do this, but it would be a bit awkward. ASMX supports APM-style asynchronous methods , and you can convert TAP to APM (however, note that the MSDN example on that page does not propagate exceptions correctly). I have an example on my blog that shows how to wrap TAP implementations in APM (with exception propagation that keeps the correct exception type but loses the stack; see ExceptionDispatchInfo for fully correct exception propagation). I used this for a while when WCF only supported APM. A very similar approach should work for ASMX. However, note that you will have to target 4.5 (i.e., httpRuntime.targetFramework ) for async / await to work as expected .
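To make the shape of that workaround concrete, here is a minimal sketch of exposing a TAP method through the Begin/End (APM) pair that ASMX understands. The method names and the simplistic exception handling are my own illustration, and the linked blog post covers a production-quality wrapper that preserves exception details properly:

[WebMethod]
public IAsyncResult BeginGetTrip(int tripid, AsyncCallback callback, object state)
{
    // Start the TAP implementation, then surface it as an IAsyncResult.
    Task<Trip> task = Trip.GetTrip(tripid);
    var tcs = new TaskCompletionSource<Trip>(state);
    task.ContinueWith(t =>
    {
        if (t.IsFaulted) tcs.TrySetException(t.Exception.InnerExceptions);
        else if (t.IsCanceled) tcs.TrySetCanceled();
        else tcs.TrySetResult(t.Result);
        if (callback != null) callback(tcs.Task);
    });
    return tcs.Task;  // Task<T> implements IAsyncResult
}

public Trip EndGetTrip(IAsyncResult asyncResult)
{
    return ((Task<Trip>)asyncResult).Result;
}

ASMX pairs BeginXxx/EndXxx methods into a single asynchronous web method, so the request thread is released while the awaited work is in flight.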
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18622372', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302393/']}
jdg_377786
stackexchange
llm_judgeable_groundtruth_similarity
40400634
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have these two Swift classes:

class A {
    static func list(completion: (_ result: [A]?) -> Void) {
        completion(nil)
    }
    static func get(completion: (_ result: A?) -> Void) {
        completion(nil)
    }
}

class B: A {
    static func list(completion: (_ result: [B]?) -> Void) {
        completion(nil)
    }
    static func get(completion: (_ result: B?) -> Void) {
        completion(nil)
    }
}

Trying to compile this raises the error "overriding declaration requires an 'override' keyword", but just for the 'get' method of class B. The 'list' method compiles fine. What is the difference between [B]? and B? for the compiler in this case? Edit: Also notice that adding 'override' is not possible. I get the error 'Cannot override static method'. Now provide the response and nothing else.
In class B , the method list is a separate method from list in class A . They just share the same name, that's all. The parameters of the two list methods are actually different:

// A.list
static func list(completion: (_ result: [A]?) -> Void) {

// B.list
static func list(completion: (_ result: [B]?) -> Void) {

A.list takes an argument of type (_ result: [A]?) -> Void while B.list takes a (_ result: [B]?) -> Void . The array type in the closure type's parameter list is different! So you're not overriding anything, you're just overloading. Note: static methods can never be overridden! If you want to override a method, use class instead of static .

class A {
    class func get(completion: (_ result: A?) -> Void) {
        completion(nil)
    }
}

class B: A {
    override class func get(completion: (_ result: B?) -> Void) {
        completion(nil)
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40400634', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3820161/']}
jdg_377787
stackexchange
llm_judgeable_groundtruth_similarity
391643
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have looked at this question, and also at this one, but I don't understand how the quadratic formula can change from $ax^2+bx+c=0$ to $x = \frac{-b\pm\sqrt{b^2-4ac}}{2a}$. I am not particularly good at maths, so can someone prove the quadratic formula in a simple way, with no complicated words? All help appreciated. Now provide the response and nothing else.
Look at each step here:$$\begin{align*}a x^2 + b x + c &= 0 \\a \left( x^2 + \frac{b}{a} x \right) + c &= 0 \\a \left( x^2 + \frac{b}{a} x + \frac{b^2}{4 a^2} \right) - \frac{b^2}{4 a} + c &= 0 \\a \left( x + \frac{b}{2 a} \right)^2 &= \frac{b^2}{4 a} - c \\\left( x + \frac{b}{2 a} \right)^2 &= \frac{b^2 - 4 a c}{4 a^2} \\x + \frac{b}{2 a} &= \frac{\pm\sqrt{b^2 - 4 a c}}{2 a} \\x &= \frac{-b \pm\sqrt{b^2 - 4 a c}}{2 a}\end{align*}$$
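A quick numerical check of the formula (my addition, not part of the original answer): for $x^2-3x+2=0$, i.e. $a=1$, $b=-3$, $c=2$,
$$x=\frac{3\pm\sqrt{9-8}}{2}=\frac{3\pm 1}{2}\in\{1,2\},$$
and indeed $(x-1)(x-2)=x^2-3x+2$.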
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/391643', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/42078/']}
jdg_377788
stackexchange
llm_judgeable_groundtruth_similarity
34709072
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there an annotation that I can apply to a field so that, when calling createOrUpdate and passing the object, that specific field will not be updated if the object already exists? The use case: I have a creation date in my object (which is set to the current time of Java object creation), but if the object already exists in the database I do not want the date to be updated, though I would want the other fields updated. Would it be better to do a query on the id (create if it doesn't exist) and then in my code just iterate through the other fields and do updates for any that are different? Now provide the response and nothing else.
I am going to suggest 2 solutions. Both approaches will need Post to be Hashable and Equatable.

Conforming Post to Hashable and Equatable

Here I am assuming your Post struct (or class) has an id property of type String .

struct Post: Hashable, Equatable {
    let id: String
    var hashValue: Int {
        get { return id.hashValue }
    }
}

func ==(left: Post, right: Post) -> Bool {
    return left.id == right.id
}

Solution 1 (losing the original order)

To remove duplicates you can use a Set :

let uniquePosts = Array(Set(posts))

Solution 2 (preserving the order)

var alreadyThere = Set<Post>()
let uniquePosts = posts.flatMap { (post) -> Post? in
    guard !alreadyThere.contains(post) else { return nil }
    alreadyThere.insert(post)
    return post
}
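A quick check of both approaches (hypothetical sample data; Post gets a memberwise initializer because it is a struct):

let posts = [Post(id: "a"), Post(id: "b"), Post(id: "a")]
print(Array(Set(posts)).count)  // 2, but order is not guaranteed
// the order-preserving version yields [Post(id: "a"), Post(id: "b")]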
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/34709072', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4958676/']}
jdg_377789
stackexchange
llm_judgeable_groundtruth_similarity
5009869
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I saw some posts about implementing GC in C, and some people said it's impossible because C is weakly typed. I want to know how to implement GC in C++; a general idea of how to do it would be enough. Thank you very much! This is a Bloomberg interview question my friend told me about. He did badly at the time, and we want to know your ideas about it. Now provide the response and nothing else.
Garbage collection in C and C++ are both difficult topics for a few reasons: Pointers can be typecast to integers and vice-versa. This means that I could have a block of memory that is reachable only by taking an integer, typecasting it to a pointer, then dereferencing it. A garbage collector has to be careful not to think a block is unreachable when indeed it still can be reached. Pointers are not opaque. Many garbage collectors, like stop-and-copy collectors, like to move blocks of memory around or compact them to save space. Since you can explicitly look at pointer values in C and C++, this can be difficult to implement correctly. You would have to be sure that if someone was doing something tricky with typecasting to integers that you correctly updated the integer if you moved a block of memory around. Memory management can be done explicitly. Any garbage collector will need to take into account that the user is able to explicitly free blocks of memory at any time. In C++, there is a separation between allocation/deallocation and object construction/destruction. A block of memory can be allocated with sufficient space to hold an object without any object actually being constructed there. A good garbage collector would need to know, when it reclaims memory, whether or not to call the destructor for any objects that might be allocated there. This is especially true for the standard library containers, which often make use of std::allocator to use this trick for efficiency reasons. Memory can be allocated from different areas. C and C++ can get memory either from the built-in freestore (malloc/free or new/delete), or from the OS via mmap or other system calls, and, in the case of C++, from get_temporary_buffer or return_temporary_buffer . The programs might also get memory from some third-party library. A good garbage collector needs to be able to track references to memory in these other pools and (possibly) would have to be responsible for cleaning them up. Pointers can point into the middle of objects or arrays. In many garbage-collected languages like Java, object references always point to the start of the object. In C and C++ pointers can point into the middle of arrays, and in C++ into the middle of objects (if multiple inheritance is used). This can greatly complicate the logic for detecting what's still reachable. So, in short, it's extremely hard to build a garbage collector for C or C++. Most libraries that do garbage collection in C and C++ are extremely conservative in their approach and are technically unsound - they assume that you won't, for example, take a pointer, cast it to an integer, write it to disk, and then load it back in at some later time. They also assume that any value in memory that's the size of a pointer could possibly be a pointer, and so sometimes refuse to free unreachable memory because there's a nonzero chance that there's a pointer to it. As others have pointed out, the Boehm GC does do garbage collection for C and C++, but subject to the aforementioned restrictions. Interestingly, C++11 includes some new library functions that allow the programmer to mark regions of memory as reachable and unreachable in anticipation of future garbage collection efforts. It may be possible in the future to build a really good C++11 garbage collector with this sort of information. In the meantime though, you'll need to be extremely careful not to break any of the above rules.
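For reference, the C++11 library hooks mentioned in the last paragraph look roughly like the sketch below (my illustration; note that mainstream standard libraries declare these functions but implement them as no-ops, and the whole facility was removed again in C++23):

#include <memory>
#include <cstdint>

int main() {
    int* p = new int(42);

    // Promise a (hypothetical) collector that *p stays reachable even if
    // no traceable pointer to it remains visible.
    std::declare_reachable(p);

    // Hide the pointer as an integer -- exactly the trick that defeats
    // conservative collectors.
    std::uintptr_t disguised = reinterpret_cast<std::uintptr_t>(p);

    // ... later: recover the pointer and withdraw the promise.
    int* q = std::undeclare_reachable(reinterpret_cast<int*>(disguised));
    delete q;
}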
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5009869', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/435645/']}
jdg_377790
stackexchange
llm_judgeable_groundtruth_similarity
267357
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Many DACs, like the PCM1794 (current output) and PCM1793 (voltage output), have a differential output. What are the benefits of this, other than providing an easy way to convert the output signal to ground-referenced AC via a differential amplifier? Now provide the response and nothing else.
Other pins on the same chip carry logic level signals, which will cause measurable currents into the input impedances of those pins, as well as further switching activity within the DAC. Those currents will cause voltage drops across the GND bond wires. If it's a high resolution DAC (above 16 bits), those voltage drops can be comparable to the analog output signal, and considerably larger than the output signal by the time you reach 20 bits. Remember that the digital input signals are a million times larger in amplitude (for a 20 bit DAC), with fast switching edges, and in close proximity to the analog output and ground. Now, separating analog and digital grounds can minimise the pollution on the analog ground, but even so, they will be connected at some point, and without extraordinary care, some coupling between them will occur. Providing both true and inverted analog outputs is relatively cheap and simple. They both contain this noise, as they are both referenced to the same analog ground. But it is common mode noise, allowing a differential amplifier to eliminate this noise in a location relatively remote from the DAC itself.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/267357', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/17902/']}
jdg_377791
stackexchange
llm_judgeable_groundtruth_similarity
7746158
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to read and print an object's attributes dynamically (Java)? For example, if I have the following objects:

public class A {
    int age;
    String name;
    float income;
}

public class B {
    int age;
    String name;
}

public class mainA {
    A obj1 = new A();
    method(A);
    method(B);
}

The output should be like:

While running method(A): attributes of the object are age, name, income
While running method(B): attributes of the object are age, name

My question is: I can pass various objects to method(); is there any way I can access the attributes of the different objects in general? Now provide the response and nothing else.
You want to use The Reflection API . Specifically, take a look at discovering class members . You could do something like the following:

import java.lang.reflect.Field;

public void showFields(Object o) {
    Class<?> clazz = o.getClass();
    for (Field field : clazz.getDeclaredFields()) {
        // you can also use .toGenericString() instead of .getName(). This will
        // give you the type information as well.
        System.out.println(field.getName());
    }
}

I just wanted to add a cautionary note that you normally don't need to do anything like this, and for most things you probably shouldn't. Reflection can make the code hard to maintain and read. Of course there are specific cases when you would want to use Reflection, but those are relatively rare.
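A quick usage sketch against the classes from the question (my addition; it assumes the fields shown there):

showFields(new A());  // prints: age, name, income
showFields(new B());  // prints: age, name

Note that getDeclaredFields() returns only the fields declared directly on the class; to include inherited fields, you would also walk up via clazz.getSuperclass().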
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7746158', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/977520/']}
jdg_377792
stackexchange
llm_judgeable_groundtruth_similarity
38878
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: What can be used to distinguish two $\Sigma_g$-bundles over $\Sigma_h$ up to (1) homotopy?(2) homeomorphism?(3) fiberwise homeomorphism?(4) bundle isomorphism? And can these always be computed given 2 specific surface bundles over $\Sigma_h$? Now provide the response and nothing else.
I'm supposing you mean for $g, h > 0$. Associated with a surface bundle, there is a homomorphism of $\pi_1(\Sigma_h)$ to the outer automorphism group of $\pi_1(\Sigma_g)$. This is equivalent (with slight low-genus modifications) to a homotopy class of maps into the modular orbifold, Teichmüller space modulo the mapping class groups. The group of homeomorphisms of a surface homotopic to the identity is contractible, so these bundles are determined up to isomorphisms that act as the identity on the base and on one fiber. The conjugacy problem for the mapping class group is solved, using either the theory of pseudo-Anosov homeomorphisms or automatic group theory, and either of those tools allows you to solve isomorphism up to bundle maps that are the identity on the base. Peter Brinkmann's program xtrain, which you can find online, computes the dilatation constant, which is typically enough to distinguish conjugacy classes in the mapping class group. Snappea, also available online, will usually distinguish homeomorphism classes of the 3-manifolds obtained by an element of the mapping class group (with exceptions that can be analyzed). This will also distinguish conjugacy classes, by looking for homeomorphisms preserving a cohomology class. The action of the mapping class group of the base on bundle maps seems trickier, and I don't think I know an immediate answer for classifying them. The troublesome cases would be where the image of the surface group in the mapping class group is not a quasi-isometric map of groups. A classification of homeomorphism types would include the special case when the surface bundle is induced from a map of the base to a circle, so the bundle comes from a 3-manifold that fibers over a circle. 3-manifolds can fiber in many different ways, so not all homeomorphisms in these cases are fiber preserving, and the homeomorphism classification for these particular cases is solvable, but it gets into a complicated theory that won't usually work for 4-manifolds. I'm not sure what's known about surface fiber bundles over surfaces that fiber in multiple ways, apart from these. One other point: the fundamental group of such a 4-manifold has an action on $S^1$, namely, the circle at infinity of the fibers. The action is faithful if the monodromy of the bundle is faithful. In these cases, the isomorphism class of the 4-manifold I believe is determined by the subgroup of homeomorphisms of the circle, up to conjugacy. For $h > 1$, there is always some branched cover of the base surface so that when you pull the bundle back to the branched cover, there is a section of the bundle, the map to the outer automorphism group of the fiber lifts to the automorphism group, and the fundamental group of the 4-manifold is a semi-direct product. I'm not an expert in these, and I'm sure there is more that is known.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/38878', 'https://mathoverflow.net', 'https://mathoverflow.net/users/7867/']}
jdg_377793
stackexchange
llm_judgeable_groundtruth_similarity
162809
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I came across an if/then statement like this:

if ! foo ; then
    echo "blah"
    exit 1
fi

What specifically does if ! mean? "If the result of foo is not true?" "If the exit code of foo is not 0"? Now provide the response and nothing else.
! inverts the meaning of the exit status of the command -- it's part of POSIX shell syntax, it's not part of if . From the POSIX spec : If the reserved word ! does not precede the pipeline, the exit status shall be the exit status of the last command specified in the pipeline. Otherwise, the exit status shall be the logical NOT of the exit status of the last command. That is, if the last command returns zero, the exit status shall be 1; if the last command returns greater than zero, the exit status shall be zero.
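A couple of lines you can paste into a shell to see it in action (my examples, not part of the original answer):

! true;  echo $?    # prints 1
! false; echo $?    # prints 0

if ! grep -q root /etc/passwd; then
    echo "no root entry"
fi

The if only looks at the exit status, and ! flips that status before if sees it.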
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/162809', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1822/']}
jdg_377794
stackexchange
llm_judgeable_groundtruth_similarity
29528915
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I make ggplot2 give a separate legend for different geoms that both use color to represent 2 different variables? I'd want 3 legends, one each for z , a , and b , but a & b seem to be combined into a single legend even though a & b represent different variables. I'd also like to be able to control the colors in each legend.

dat <- data.frame(
  y = rnorm(200),
  x = sample(c("A", "B"), 200, TRUE),
  z = sample(100:200, 200, TRUE),
  a = sample(c("male", "female"), 200, TRUE),
  b = factor(sample(1:2, 200, TRUE)))

ggplot(dat, aes(y = y, x = x)) +
  geom_point(aes(color = a, size = z)) +
  geom_boxplot(fill = NA, size = .75, aes(color = b)) +
  scale_color_manual(values = c("#F8766D", "#00BFC4", "orange", "purple"))

Now provide the response and nothing else.
If you use a filled plotting symbol, you can map one factor to fill and the other to colour, which then separates them into two scales and, therefore, legends.

ggplot(dat, aes(y = y, x = x)) +
  geom_point(aes(fill = a, size = z), pch = 21) +
  geom_boxplot(fill = NA, size = .75, aes(color = b)) +
  scale_color_manual(values = c("orange", "purple")) +
  scale_fill_manual(values = c("#F8766D", "#00BFC4"))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29528915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1000343/']}
jdg_377795
stackexchange
llm_judgeable_groundtruth_similarity
14214396
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm lost in a twisty maze of certificates and private keys. I am writing a server in C#. I want it to accept SSL connections. For test purposes, I want to generate a certificate for "localhost", and use that certificate for the server. Ideally I don't want to pollute my certificate store, so I just want a file, or files, on disk that the C# code can load to provide the certificate. Everything I have tried (using cookbook approaches suggested by web searches) either gives me "The credentials supplied to the package were not recognized" or "The server mode SSL must use a certificate with the associated private key." Is there a way to generate a certificate and private key, and subsequently to load them into an X509Certificate object without also loading them into my machine certificate store? Now provide the response and nothing else.
In the end, I ran the following to create a server.pfx file:

makecert.exe -r -pe -n "CN=localhost" -sky exchange -sv server.pvk server.cer
pvk2pfx -pvk server.pvk -spc server.cer -pfx server.pfx

Then I loaded it in code with:

certificate = new X509Certificate2("server.pfx", "password");

(I didn't actually hard code the password like that :-) The trick was to know that I needed a pfx file, and that I needed to load it using the X509Certificate2 class, rather than X509Certificate.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14214396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/862344/']}
jdg_377796
stackexchange
llm_judgeable_groundtruth_similarity
54343378
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When using regular expressions, I get:

import re

string = r'http://www.example.com/abc.html'
result = re.search('^.*com', string).group()

In pandas, I write:

df = pd.DataFrame(columns = ['index', 'url'])
df.loc[len(df), :] = [1, 'http://www.example.com/abc.html']
df.loc[len(df), :] = [2, 'http://www.hello.com/def.html']

df.str.extract('^.*com')
ValueError: pattern contains no capture groups

How can I solve the problem? Thanks. Now provide the response and nothing else.
According to the docs , you need to specify a capture group (i.e., parentheses) for str.extract to, well, extract.

Series.str.extract(pat, flags=0, expand=True)

For each subject string in the Series, extract groups from the first match of regular expression pat. Each capture group constitutes its own column in the output.

df.url.str.extract(r'(.*.com)')

                        0
0  http://www.example.com
1    http://www.hello.com

# If you need named capture groups,
df.url.str.extract(r'(?P<URL>.*.com)')

                      URL
0  http://www.example.com
1    http://www.hello.com

Or, if you need a Series,

df.url.str.extract(r'(.*.com)', expand=False)

0    http://www.example.com
1      http://www.hello.com
Name: url, dtype: object
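One refinement worth noting (my addition, not part of the original answer): in the patterns above, the dot before com is an unescaped regex metacharacter, so it matches any character at all. Escaping it is safer:

df.url.str.extract(r'(.*\.com)', expand=False)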
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/54343378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9087866/']}
jdg_377797
stackexchange
llm_judgeable_groundtruth_similarity
11752997
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Extract from the Android Developer Guide linked above: 320dp: a typical phone screen (240x320 ldpi, 320x480 mdpi, 480x800 hdpi, etc). 480dp: a tweener tablet like the Streak (480x800 mdpi). 600dp: a 7” tablet (600x1024 mdpi). 720dp: a 10” tablet (720x1280 mdpi, 800x1280 mdpi, etc). So I got graphics (images) at a resolution of 320 pixels per inch from the designer, in these dimensions only:

480x800 hdpi
720x1280 mdpi
800x1280 mdpi

I am confused about which size of images should be placed in the mdpi, hdpi and xhdpi folders. I want to make one application which can work on most Android phones and tablets. Now provide the response and nothing else.
You can create different graphic objects for use at different pixel densities. Android treats mdpi (160 pixels/inch) as the base density. So for mdpi devices, 1 dp = 1 pixel. At higher densities, there are more pixels per inch (240 for hdpi, 320 for xhdpi). Android attempts to make graphic images occupy the same physical dimensions on the screen regardless of the device pixel density. So if all it finds is an mdpi resource, and the device is hdpi, it will scale the graphic by 240/160 = 150%, and it will double the size of the graphic for xhdpi. If you don't want this automatic scaling (which can make graphics look poor), you can simply supply your own version of graphic resources for use at higher densities. These graphics should be of the same size that Android would scale an mdpi resource. Note that the pixels/inch that was stored in the image file has nothing to do with this. It's all based on where you put the graphics files in the resources directory for your project. Any graphics placed in res/drawable are assumed to be properly sized for mdpi displays, as are graphics placed in res/drawable-mdpi . Image files that it finds in res/drawable-hdpi are assumed to be properly sized for hdpi displays, etc. When your program runs on a particular device, Android will first look for a graphic that matches the display density of that device. If it does not find one but instead finds one for a different density, it will use that and automatically scale the image based on the above rules.
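If you need the same scaling arithmetic in code, the rule the answer describes is px = dp × (dpi / 160). A small sketch using the standard Android API (call it from an Activity or anything with access to a Resources object):

// density is 1.0 on mdpi, 1.5 on hdpi, 2.0 on xhdpi
float density = getResources().getDisplayMetrics().density;
int px = Math.round(48 * density);  // a 48dp asset measured in physical pixels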
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11752997', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/353593/']}
jdg_377798
stackexchange
llm_judgeable_groundtruth_similarity
6275380
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a situation where I am passing a string to a function. I want to convert &nbsp; to " " (a blank space) before passing it to the function. Does html_entity_decode do it? If not, how can I do it? I am aware of str_replace, but is there any other way? Now provide the response and nothing else.
Quote from the html_entity_decode() manual: You might wonder why trim(html_entity_decode('&nbsp;')); doesn't reduce the string to an empty string. That's because the '&nbsp;' entity is not ASCII code 32 (which is stripped by trim()) but ASCII code 160 (0xa0) in the default ISO 8859-1 character set. You can use str_replace() to replace the ASCII character #160 with a space:

<?php
$a = html_entity_decode('>&nbsp;<');
echo 'before ' . $a . PHP_EOL;
$a = str_replace("\xA0", ' ', $a);
echo ' after ' . $a . PHP_EOL;
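One caveat worth adding (my note, not the original answer's): since PHP 5.4 the default charset for html_entity_decode() is UTF-8, where &nbsp; decodes to the two-byte sequence 0xC2 0xA0 rather than a single 0xA0 byte, so the replacement becomes:

$a = html_entity_decode('&nbsp;', ENT_QUOTES, 'UTF-8');
$a = str_replace("\xC2\xA0", ' ', $a);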
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6275380', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/165370/']}
jdg_377799
stackexchange
llm_judgeable_groundtruth_similarity
9402
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that we do not know anything about complex analysis (complex numbers). In this case, how do we calculate the following integral in closed form? $$\int_0^\infty\frac{\cos x}{1+x^2}\,\mathrm{d}x$$ Now provide the response and nothing else.
This can be done by the useful technique of differentiating under the integral sign. In fact, this is exercise 10.23 in the second edition of "Mathematical Analysis" by Tom Apostol. Here is a brief sketch (as laid out in the exercise itself). Let $$ F(y) = \int\limits_{0}^{\infty} \frac{\sin xy}{x(1+x^2)} \ dx \ \ \text{for} \quad\quad y > 0$$ Show that $\displaystyle F''(y) - F(y) + \pi/2 = 0$ and hence deduce that $\displaystyle F(y) = \frac{\pi(1-e^{-y})}{2}$. Use this to deduce that for $y > 0$ and $a > 0$ $$\displaystyle \int_{0}^{\infty} \frac{\sin xy}{x(x^2 + a^2)} \ dx = \frac{\pi(1-e^{-ay})}{2a^2}$$ and $$\int_{0}^{\infty} \frac{\cos xy}{x^2 + a^2} dx = \frac{\pi e^{-ay}}{2a}.$$ In particular, setting $a = y = 1$ in the last formula evaluates the integral in question: $$\int_{0}^{\infty} \frac{\cos x}{1 + x^2}\, dx = \frac{\pi}{2e}.$$
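To fill in the middle step, here is a sketch of the deduction once the ODE is established (standard linear ODE theory, nothing beyond real analysis): the general solution of $F''(y) - F(y) + \pi/2 = 0$ is

$$F(y) = A e^{y} + B e^{-y} + \frac{\pi}{2}.$$

Since $|\sin(xy)| \leq \min(1, xy)$, the integral defining $F$ grows at most linearly in $y$, which rules out the $e^{y}$ term, so $A = 0$; then $F(0) = 0$ forces $B = -\pi/2$, giving $F(y) = \frac{\pi(1-e^{-y})}{2}$.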
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/9402', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2323/']}
jdg_377800
stackexchange
llm_judgeable_groundtruth_similarity
780027
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I noticed some not-so-old VM languages like Lua, NekoVM, and Potion written in C. It looked like they were reimplementing many C++ features. Is there a benefit to writing them in C rather than C++? Now provide the response and nothing else.
I know something about Lua.

Lua is written in pure ANSI Standard C and compiles on any ANSI platform with no errors and no warnings. Thus Lua runs on almost any platform in the world, including things like Canon PowerShot cameras. It's a lot harder to get C++ to run on weird little embedded platforms.

Lua is a high-performance VM, and because C cannot express method calls (which might be virtual or might not) or operator overloading, it is much easier to predict the performance of C code just by looking at the code. C++, especially with the template library, makes it a little too easy to burn resources without being aware of it. (A full implementation of Lua, including not only the VM but the libraries, fits in 145K of x86 object code. The whole language fits even in a tiny 256K cache, which you find at L2 on the Intel i7 and L1 on older chips. Unless you really know what you're doing, it's much harder to write C++ that compiles to something this small.)

These are two good reasons to write a VM in C.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/780027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/57757/']}
jdg_377801
stackexchange
llm_judgeable_groundtruth_similarity
3114543
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As a general rule, are there ever any circumstances in which it's acceptable for a method responsible for listening to an event to throw an exception (or allow one to be thrown) that the class raising the event will have to handle? Given that such an exception would stop other listeners to that event from being called subsequently, it seems a bit 'antisocial' to allow this to happen, but on the other hand, if there is an exception, what should it do? Now provide the response and nothing else.
Throwing an exception from an event handler is in many ways similar to throwing an exception from an IDisposable.Dispose method (or a C++ destructor). Doing so creates havoc for your caller because you leave them with little option:

1. Ignore the exception and let it propagate. This breaks their contract to inform all listeners of an event. This is a very real problem if anyone above them on the stack catches the exception.
2. Catch it, call the other handlers, and rethrow. But what happens if one of the others throws as well?
3. Swallow the exception. This is just bad in general. Event sources should have no knowledge of their callers and hence can't know what they're swallowing.
4. Crash the process, because you're toast.

Of all of these, #4 is the best option. But it is rarely done and can't be counted on. I think in your component you really only have a few options:

- You are calling the code which is throwing, so you are in the best position to handle the exception. If it's not handleable by you, then it's unreasonable to expect it to be handled by anyone else. Hence crash the process and be done with it.
- Don't call the API which throws.

(A small sketch of the dispatch loop where this dilemma arises follows below.)
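To make the dilemma concrete, here is a minimal sketch in Java (the Listener interface and all names are hypothetical, not from any particular framework); it shows why option 1 silently skips later listeners:

import java.util.List;

interface Listener {
    void onEvent(String payload);
}

class EventSource {
    private final List<Listener> listeners;

    EventSource(List<Listener> listeners) {
        this.listeners = listeners;
    }

    void raise(String payload) {
        for (Listener l : listeners) {
            // If one listener throws here, the loop aborts and the
            // remaining listeners are never notified (option 1 above).
            l.onEvent(payload);
        }
    }
}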
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3114543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283505/']}
jdg_377802
stackexchange
llm_judgeable_groundtruth_similarity
6062
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I know that obviously we must avoid SQL injection attacks through user input validation and parametrized queries. There's already a firewall in place on the database server to limit remote connections to be accepted only from the web server. Would it also add value, from a security standpoint, to limit the actual database user account that the ASP.NET website uses to the EXECUTE permission only on the stored procedures that it needs? All database interaction would take place using these stored procedures. It seems to me that even in a scenario where an attacker figures out a way to access the database connection, the attack is limited to executing only predefined queries and no open-ended queries. Now provide the response and nothing else.
There are two main (security) reasons to do this, above and beyond just using parameterized queries:

1. Parameter type enforcement
2. Least privilege

The principle of least privilege requires you to allow any entity (user or application) access only to whatever it needs to do the defined task. If you don't restrict the web app to the SPs alone, the application could potentially execute any arbitrary query. Note that this is relevant in two situations: preventing an attacker, who managed to find a vulnerability in your application (either SQL injection, or any other vuln that could allow him to execute code), from running malicious SQL queries; and, a much lesser risk, developers who are looking for insecure, unapproved shortcuts (or even malicious developers). Granting only EXECUTE privileges on the required SPs will stop the application from running any query that was not predefined.

Wrt enforcing parameter types, while it is possible to implement this in other ways, this brings the type enforcement to the database layer, before the query even hits the db server. I.e. it uses the types that are actually defined on the database, without accidentally skipping a parameter.

Note that in order to do this properly, and avoid some common mistakes, you want to (a sketch of these steps follows below):

- define a specific user account for the ASP.NET application
- assign the account to a custom DB role
- remove the account from all other roles, such as dbo
- grant EXECUTE privileges to the custom DB role you created
- remove all other privileges on the SPs, tables, and other DB objects. This includes default "public" roles, and so forth.
- ensure that the custom DB role has no other privileges
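A minimal T-SQL sketch of those steps, assuming SQL Server; every name here (WebAppLogin, WebAppUser, WebAppRole, dbo.GetCustomer) is hypothetical:

-- Dedicated login and database user for the ASP.NET application
CREATE LOGIN WebAppLogin WITH PASSWORD = '...a strong password...';
CREATE USER WebAppUser FOR LOGIN WebAppLogin;

-- Custom role that will carry only EXECUTE rights
CREATE ROLE WebAppRole;
ALTER ROLE WebAppRole ADD MEMBER WebAppUser;

-- Grant EXECUTE only on the specific stored procedures the app needs
GRANT EXECUTE ON OBJECT::dbo.GetCustomer TO WebAppRole;

-- And make sure nothing else is reachable, e.g.:
REVOKE SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo FROM WebAppUser;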
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/6062', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/2901/']}
jdg_377803
stackexchange
llm_judgeable_groundtruth_similarity
15527133
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: OK, I know that related questions have been asked over and over again, and I have read pretty much everything I found about this, but things are still unclear. Probably also because I found and read things contradicting each other (maybe because, being from different times, they referred to devices with different compute capability, between which there seems to be quite a gap). I am looking to be more efficient, to reduce my execution time, and thus I need to know exactly how many threads/warps/blocks can run at once in parallel. Also, I was thinking of generalizing this and calculating an optimal number of threads and blocks to pass to my kernel based only on the number of operations I know I have to do (for simpler programs) and the system specs. I have a GTX 550 Ti, btw, with compute capability 2.1, and 4 SMs x 48 cores = 192 CUDA cores. OK, so what's unclear to me is: Can more than 1 block run AT ONCE (in parallel) on a multiprocessor (SM)? I read that up to 8 blocks can be assigned to an SM, but nothing as to how they're run. From the fact that my max number of threads per SM (1536) is barely larger than my max number of threads per block (1024) I would think that blocks aren't run in parallel (maybe 1 and a half?). Or at least not if I have a max number of threads on them. Also, if I set the number of blocks to, let's say, 4 (my number of SMs), will they be sent to a different SM each? Or can't I really control how all this is distributed on the hardware, in which case this is a moot point and my execution time will vary based on the whims of my device... Secondly, I know that a block will divide its threads into groups of 32 threads that run in parallel, called warps. Now these warps (presuming they have no relation to each other) can be run in parallel as well? Because in the Fermi architecture it states that 2 warps are executed concurrently, sending one instruction from each warp to a group of 16 (?) cores, while somewhere else I read that each core handles a warp, which would explain the 1536 max threads (32*48) but that seems a bit much. Can 1 CUDA core handle 32 threads concurrently? On a simpler note, what I'm asking is: (for example) if I want to sum 2 vectors into a third one, what length should I give them (number of operations), and how should I split them into blocks and threads for my device to work concurrently (in parallel) at full capacity (without having idle cores or SMs)? I'm sorry if this was asked before and I didn't get it or didn't see it. Hope you can help me. Thank you! Now provide the response and nothing else.
The distribution and parallel execution of work are determined by the launch configuration and the device. The launch configuration states the grid dimensions, block dimensions, registers per thread, and shared memory per block. Based upon this information and the device, you can determine the number of blocks and warps that can execute on the device concurrently.

When developing a kernel, you usually look at the ratio of warps that can be active on the SM to the maximum number of warps per SM for the device. This is called the theoretical occupancy. The CUDA Occupancy Calculator can be used to investigate different launch configurations.

When a grid is launched, the compute work distributor will rasterize the grid and distribute thread blocks to SMs, and SM resources will be allocated for each thread block. Multiple thread blocks can execute simultaneously on the SM if the SM has sufficient resources. In order to launch a warp, the SM assigns the warp to a warp scheduler and allocates registers for the warp. At this point the warp is considered an active warp.

Each warp scheduler manages a set of warps (24 on Fermi, 16 on Kepler). Warps that are not stalled are called eligible warps. On each cycle the warp scheduler picks an eligible warp and issues instruction(s) for the warp to execution units such as int/fp units, double precision floating point units, special function units, branch resolution units, and load store units. The execution units are pipelined, allowing many warps to have 1 or more instructions in flight each cycle. Warps can be stalled on instruction fetch, data dependencies, execution dependencies, barriers, etc.

Each kernel has a different optimal launch configuration. Tools such as Nsight Visual Studio Edition and the NVIDIA Visual Profiler can help you tune your launch configuration. I recommend that you try to write your code in a flexible manner so you can try multiple launch configurations. I would start by using a configuration that gives you at least 50% occupancy, then try increasing and decreasing the occupancy.

Answers to each Question

Q: Can more than 1 block run AT ONCE (in parallel) on a multiprocessor (SM)?

Yes, and the maximum number is based upon the compute capability of the device. See Table 10, Technical Specifications per Compute Capability: "Maximum number of resident blocks per multiprocessor" to determine the value. In general the launch configuration limits the run-time value. See the occupancy calculator or one of the NVIDIA analysis tools for more details.

Q: From the fact that my max number of threads per SM (1536) is barely larger than my max number of threads per block (1024) I would think that blocks aren't run in parallel (maybe 1 and a half?).

The launch configuration determines the number of blocks per SM. The ratio of maximum threads per block to maximum threads per SM is set to allow developers more flexibility in how they partition work.

Q: If I set the number of blocks to, let's say, 4 (my number of SMs), will they be sent to a different SM each? Or can't I really control how all this is distributed on the hardware, so this is a moot point and my execution time will vary based on the whims of my device?

You have limited control of work distribution. You can artificially control this by limiting occupancy by allocating more shared memory, but this is an advanced optimization.

Q: Secondly, I know that a block will divide its threads into groups of 32 threads that run in parallel, called warps. Now these warps (presuming they have no relation to each other) can be run in parallel as well?

Yes, warps can run in parallel.

Q: Because in the Fermi architecture it states that 2 warps are executed concurrently.

Each Fermi SM has 2 warp schedulers. Each warp scheduler can dispatch instruction(s) for 1 warp each cycle. Instruction execution is pipelined, so many warps can have 1 or more instructions in flight every cycle.

Q: Sending one instruction from each warp to a group of 16 (?) cores, while somewhere else I read that each core handles a warp, which would explain the 1536 max threads (32x48) but that seems a bit much. Can 1 CUDA core handle 32 threads concurrently?

Yes. "CUDA cores" is the number of integer and floating point execution units. The SM has other types of execution units, which I listed above. The GTX 550 Ti is a CC 2.1 device. On each cycle an SM has the potential to dispatch at most 4 instructions (128 threads) per cycle. Depending on the definition of execution, the total threads in flight per cycle can range from many hundreds to many thousands.
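For the vector-add part of the question, here is a minimal CUDA sketch of a flexible launch configuration (the vector length and block size are illustrative starting points, not values tuned for any particular device):

#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];  // guard: the grid may overshoot n
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMalloc(&a, bytes); cudaMalloc(&b, bytes); cudaMalloc(&c, bytes);

    // Keep the block size a multiple of the warp size (32); 256 is a
    // common starting point. Round the grid up so all n elements are covered.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Keeping threadsPerBlock a parameter makes it easy to try several launch configurations and measure, as recommended above.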
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15527133', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2025598/']}
jdg_377804
stackexchange
llm_judgeable_groundtruth_similarity
18238227
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some angular factories for making ajax calls towards legacy ASP.NET .asmx web services, like so:

module.factory('productService', ["$http", function ($http) {
    return {
        getSpecialProducts: function (data) {
            return $http.post('/ajax/Products.asmx/GetSpecialProducs', data);
        }
    };
}]);

I'm testing on a local network, so response times are "too" good. Is there a smart way of delaying $http a couple of seconds from making the call, to simulate a bad connection? Or do I need to wrap all calls to the factory methods in a $timeout?

$timeout(function() {
    productService.getSpecialProducts(data).success(success).error(error);
}, $scope.MOCK_ajaxDelay);

Now provide the response and nothing else.
Interesting question! As you mentioned yourself, $timeout is the most logical choice for a delayed call. Instead of having $timeout calls everywhere, you could push a response interceptor that wraps the $http promise in a $timeout promise, as conceptually outlined in the documentation of $http, and register it in one of your configuration blocks. This means all $http calls are affected by the $timeout delay. Something along the lines of:

$httpProvider.interceptors.push(function($timeout) {
    return {
        "response": function (response) {
            return $timeout(function() {
                return response;
            }, 2500);
        }
    };
});

As a bonus to your "to simulate a bad connection?", you could reject or do absolutely nothing randomly, too. Heh heh heh.
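And a sketch of that random "bad connection" bonus (the 10% failure rate is arbitrary; $q is AngularJS's promise service, injectable alongside $timeout):

$httpProvider.interceptors.push(function($timeout, $q) {
    return {
        "response": function (response) {
            if (Math.random() < 0.1) {
                return $q.reject(response);   // ~10% of calls: simulate a failure
            }
            return $timeout(function() {      // otherwise: just delay the response
                return response;
            }, 2500);
        }
    };
});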
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18238227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/416797/']}
jdg_377805
stackexchange
llm_judgeable_groundtruth_similarity
4540
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've got the following Dockerfile:

FROM ubuntu:xenial
RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
WORKDIR /home/ubuntu
USER ubuntu
VOLUME /opt/myvolume

Which I built:

$ docker build -t vol-test .
Sending build context to Docker daemon  2.048kB
Step 1/5 : FROM ubuntu:xenial
 ---> 0b1edfbffd27
Step 2/5 : RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
 ---> Using cache
 ---> d82e3ecc5fe8
Step 3/5 : WORKDIR /home/ubuntu
 ---> Using cache
 ---> ab1db29ee8bf
Step 4/5 : USER ubuntu
 ---> Using cache
 ---> 129393a35d9e
Step 5/5 : VOLUME /opt/myvolume
 ---> Running in 691a4cbd077e
Removing intermediate container 691a4cbd077e
 ---> 11bc9e9db9d3
Successfully built 11bc9e9db9d3
Successfully tagged vol-test:latest

However, when run, the /opt/myvolume directory is owned by root, not ubuntu:

$ docker run vol-test id
uid=1000(ubuntu) gid=0(root) groups=0(root),27(sudo)
$ docker run vol-test find /opt/myvolume -ls
 66659      4 drwxr-xr-x   2 root     root         4096 Jul 18 23:02 /opt/myvolume
$ docker run -u ubuntu vol-test find /opt/myvolume -ls
 66940      4 drwxr-xr-x   2 root     root         4096 Jul 18 23:12 /opt/myvolume

because it's created during the run. Is it possible to define or change the default owner of the VOLUME directory in the Dockerfile? I'm running it on macOS and Linux.

Now provide the response and nothing else.
As stated in the documentation, the VOLUME instruction inherits the directory content and permissions existing in the container, so you can work around the problem with a Dockerfile like this:

FROM ubuntu:xenial
RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
RUN mkdir /opt/myvolume && chown ubuntu /opt/myvolume
WORKDIR /home/ubuntu
VOLUME /opt/myvolume

The creation of the directory has to be done as root (to be able to write within /opt).
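If you also want the container to keep running as the unprivileged user, as in the original Dockerfile, the USER instruction can simply be restored after the chown (a sketch; the RUN steps still execute as root because they precede USER):

FROM ubuntu:xenial
RUN useradd -d /home/ubuntu -ms /bin/bash -g root -G sudo -p ubuntu ubuntu
RUN mkdir /opt/myvolume && chown ubuntu /opt/myvolume
WORKDIR /home/ubuntu
USER ubuntu
VOLUME /opt/myvolume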
{}
{'log_upvote_score': 6, 'links': ['https://devops.stackexchange.com/questions/4540', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/3/']}
jdg_377806
stackexchange
llm_judgeable_groundtruth_similarity
7228076
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to control the ordering of jars in the testRuntime configuration. I must make sure that robolectric-x.x.jar comes before android.jar, or else I get the dreaded RuntimeException("Stub!"). How do I do that? Now provide the response and nothing else.
Here is my complete build.gradle for running Robolectric tests against my Android app, which uses RoboGuice:

apply plugin: 'java'

androidJar = new File(System.getenv('ANDROID_HOME'), '/platforms/android-7/android.jar')

configurations { robo }

dependencies {
    robo('com.pivotallabs:robolectric:1.0-RC1')
    testCompile('org.roboguice:roboguice:1.1.2')
    testCompile('junit:junit:4.8.2')
    testCompile project(':app')
    testCompile files(androidJar)
}

sourceSets.test.compileClasspath = configurations.robo + sourceSets.test.compileClasspath
sourceSets.test.runtimeClasspath = configurations.robo + sourceSets.test.runtimeClasspath

test {
    excludes = ['**/MyRobolectricTestRunner.class']
}

I had to add an exclusion for the test runner, or else Gradle will throw an exception. MyRobolectricTestRunner.java looks like this:

package com.acme.myapp;

import java.io.File;
import org.junit.runners.model.InitializationError;
import roboguice.application.RoboApplication;
import roboguice.inject.ContextScope;
import com.google.inject.Injector;
import com.xtremelabs.robolectric.Robolectric;
import com.xtremelabs.robolectric.RobolectricTestRunner;

public class MyRobolectricTestRunner extends RobolectricTestRunner {

    public MyRobolectricTestRunner(Class<?> testClass) throws InitializationError {
        // Tell Robolectric where to find AndroidManifest.xml and res/
        super(testClass, new File("../app"));
    }

    /**
     * Enable injection into tests as well...
     */
    @Override
    public void prepareTest(Object test) {
        RoboApplication myApplication = (RoboApplication) Robolectric.application;
        Injector injector = myApplication.getInjector();
        ContextScope contextScope = injector.getInstance(ContextScope.class);
        contextScope.enter(myApplication);
        injector.injectMembers(test);
    }
}

And here's a sample test that illustrates injection:

package com.acme.myapp;

import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.junit.runner.RunWith;
import roboguice.inject.InjectResource;

@RunWith(MyRobolectricTestRunner.class)
public class StringFormattingTest {

    @InjectResource(R.string.info_pending_amount)
    private String pendingAmountPattern;

    @Test
    public void testFormatInfoPendingAmount() {
        String s = String.format(pendingAmountPattern, 20.0d, "EUR");
        assertEquals("Only a part of your stake (20,00 EUR) was accepted", s);
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7228076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/870712/']}
jdg_377807
stackexchange
llm_judgeable_groundtruth_similarity
43680712
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I create a component using ViewContainerRef and assign the instance to a property of the parent component (which is responsible for child component creation), do I need to set this property to null after I call ViewContainerRef.clear() if I want the memory to be freed? Now provide the response and nothing else.
No. If you assign a parent component property to the componentRef, Angular won't remove the component from memory. Angular only destroys the component and removes its own references to it. But the reference stored in your component property lives on, so I would assign null to it. That way the garbage collector will be able to clear the memory.

Plunker Example (add => clear => check)

@Component({
  selector: 'my-app',
  template: `
    <div>
      <button (click)="addComponent()">Add component</button>
      <div #container></div>
      <button (click)="clear()">Clear</button>
      <button (click)="check()">check</button>
    </div>
  `,
})
export class App {
  comp: ComponentRef<DynamicComponent>;

  constructor(
    private vcRef: ViewContainerRef,
    private resolver: ComponentFactoryResolver) {}

  addComponent() {
    let factory = this.resolver.resolveComponentFactory(DynamicComponent);
    this.comp = this.vcRef.createComponent(factory);
  }

  clear() {
    this.vcRef.clear();
  }

  check() {
    alert(this.comp);
  }
}

See also https://developer.mozilla.org/en/docs/Web/JavaScript/Memory_Management#Garbage_collection
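A sketch of the clear() method with the extra null assignment this answer recommends:

clear() {
  this.vcRef.clear();  // destroys the component; Angular drops its own references
  this.comp = null;    // drop our reference too, so the GC can reclaim the instance
}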
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43680712', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2693851/']}
jdg_377808
stackexchange
llm_judgeable_groundtruth_similarity
206890
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In this MO post , I ran into the following family of polynomials: $$f_n(x)=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}.$$ In the context of the post, $x$ was a prime number, and $f_n(x)$ counted the number of subspaces of an $n$ -dimensional vector space over $GF(x)$ (which I was using to determine the number of subgroups of an elementary abelian group $E_{x^n}$ ). Anyway, while I was investigating asymptotic behavior of $f_n(x)$ in Mathematica, I got sidetracked and (just for fun) looked at the set of complex roots when I set $f_n(x)=0$ . For $n=24$ , the plot looked like this: (The real and imaginary axes are from $-1$ to $1$ .) Surprised by the unusual symmetry of the solutions, I made the same plot for a few more values of $n$ . Note the clearly defined "tails" (on the left when even, top and bottom when odd) and "cusps" (both sides). You can see that after approximately $n=60$ , the "circle" of solutions starts to expand into a band of solutions with a defined outline. To fully absorb the weirdness of this, I animated the solutions from $n=2$ to $n=112$ . The following is the result: Pretty weird, right!? Anyhow, here are my questions: First, has anybody ever seen anything at all like this before? What's up with those "tails?" They seem to occur only on even $n$ , and they are surely distinguishable from the rest of the solutions. Look how the "enclosed" solutions rotate as $n$ increases. Why does this happen? [Explained in edits.] Anybody have any idea what happens to the solution set as $n\rightarrow \infty$ ? Thanks to @WillSawin, we now know that all the roots are contained in an annulus that converges to the unit circle , which is fantastic. So, the final step in understanding the limit of the solution sets is figuring out what happens on the unit circle. We can see from the animation that there are many gaps, particularly around certain roots of unity; however, they do appear to be closing. The natural question is, which points on the unit circle "are roots in the limit"? In other words, what are the accumulation points of $\{z\left|z\right|^{-1}:z\in\mathbb{C}\text{ and }f_n(z)=0\}$ ? Is the set of accumulation points dense? @NoahSnyder's heuristic of considering these as a random family of polynomials suggests it should be- at least, almost surely. These are polynomials in $\mathbb{Z}[x]$ . Can anybody think of a way to rewrite the formula (perhaps recursively?) for the simplified polynomial, with no denominator? If so, we could use the new formula to prove the series converges to a function on the unit disc, as well as cut computation time in half. [See edits for progress.] Does anybody know a numerical method specifically for finding roots of high degree polynomials? Or any other way to efficiently compute solution sets for high $n$ ? [Thanks @Hooked!] Thanks everyone. This may not turn out to be particularly mathematically profound, but it sure is neat . EDIT : Thanks to suggestions in the comments, I cranked up the working precision to maximum and recalculated the animation. As Hurkyl and mercio suspected, the rotation was indeed a software artifact, and in fact evidently so was the thickening of the solution set. The new animation looks like this: So, that solves one mystery: the rotation and inflation were caused by tiny roundoff errors in the computation. 
With the image clearer, however, I see the behavior of the cusps more clearly. Is there an explanation for the gradual accumulation of "cusps" around the roots of unity? (Especially 1.)

EDIT: Here is an animation of $Arg(f_n)$ up to $n=30$. I think we can see from this that $f_n$ should converge to some function on the unit disk as $n\rightarrow \infty$. I'd love to include higher $n$, but this was already rather computationally exhausting.

Now, I've been tinkering and I may be onto something with respect to point $5$ (i.e. seeking a better formula for $f_n(x)$). The following claims aren't proven yet, but I've checked each up to $n=100$, and they seem inductively consistent. Here denote $\displaystyle f_n(x)=\sum_{m}a_{n,m}x^m$, so that $a_{n,m}\in \mathbb{Z}$ are the coefficients in the simplified expansion of $f_n(x)$.

First, I found $\text{deg}(f_n)=\text{deg}(f_{n-1})+\lfloor \frac{n}{2} \rfloor$. The solution to this recurrence relation is $$\text{deg}(f_n)=\frac{1}{2}\left({\left\lceil\frac{1-n}{2}\right\rceil}^2 -\left\lceil\frac{1-n}{2}\right\rceil+{\left\lfloor \frac{n}{2} \right\rfloor}^2 + \left\lfloor \frac{n}{2} \right\rfloor\right)=\left\lceil\frac{n^2}{4}\right\rceil.$$

If $f_n(x)$ has $r$ more coefficients than $f_{n-1}(x)$, the leading $r$ coefficients are the same as the leading $r$ coefficients of $f_{n-2}(x)$, pairwise. When $n>m$, $a_{n,m}=a_{n-1,m}+\rho(m)$, where $\rho(m)$ is the number of integer partitions of $m$. (This comes from observation, but I bet an actual proof could follow from some of the formulas here.) For $n\leq m$ the $\rho(m)$ formula first fails at $n=m=6$, and not before, for some reason. There is probably a simple correction term I'm not seeing - and whatever that term is, I bet it's what's causing those cusps. Anyhow, with this, we can almost make a recursive relation for $a_{n,m}$, $$a_{n,m}= \left\{ \begin{array}{ll} a_{n-2,m+\left\lceil\frac{n-2}{2}\right\rceil^2-\left\lceil\frac{n}{2}\right\rceil^2} & : \text{deg}(f_{n-1}) < m \leq \text{deg}(f_n)\\ a_{n-1,m}+\rho(m) & : m \leq \text{deg}(f_{n-1}) \text{ and } n > m \\ ? & : m \leq \text{deg}(f_{n-1}) \text{ and } n \leq m \end{array} \right.$$ but I can't figure out the last part yet.

EDIT: Someone pointed out to me that if we write $\lim_{n\rightarrow\infty}f_n(x)=\sum_{m=0}^\infty b_{m} x^m$, then it appears that $f_n(x)=\sum_{m=0}^n b_m x^m + O(x^{n+1})$. The $b_m$ there seem to me to be relatively well approximated by the $\rho(m)$ formula, considering the correction term only applies for a finite number of recursions. So, if we have the coefficients up to an order of $O(x^{n+1})$, we can at least prove the polynomials converge on the open unit disk, which the $Arg$ animation suggests is true. (To be precise, it looks like $f_{2n}$ and $f_{2n+1}$ may have different limit functions, but I suspect the coefficients of both sequences will come from the same recursive formula.) With this in mind, I put a bounty up for the correction term, since from that all the behavior will probably be explained.
Now the question is, how do we determine the solution set at the limit? I speculate that the unit circle may become a dense combination of zeroes and singularities, with fractal-like concentric "circles of singularity" around the roots of unity... :) Now provide the response and nothing else.
First, has anybody ever seen anything at all like this before?

Yes, and in fact the interesting patterns that arise here are more than just a mathematical curiosity; they can be interpreted to have a physical context.

Statistical Mechanics

In a simple spin system, say the Ising model, a discrete set of points is arranged on a grid. In physics, we like to define the energy of the system by the Hamiltonian, which gives the energy of any particular microstate. In this system, if the spins are aligned they form a bond. This is favorable, and the energy is negative. If they are misaligned, the energy is positive. Let's consider a simple system of two points, adjacent to each other. Furthermore, let each site point up (1) or down (-1). For an Ising-like system we would write the Hamiltonian as: $$H = - \sum_{ij} J \sigma_i \sigma_j$$ where $\sigma_i$ is the spin of the $i$th point and the summation runs over all pairs of adjacent sites. $J$ is the strength of the bond (which we can set to one for our example). In our simple system we have only four possible states:

0 - 0    H = -J
1 - 0    H = 0
0 - 1    H = 0
1 - 1    H = -J

Now we can write the partition function $\mathcal{Z}$, a term which encompasses all information of the Hamiltonian from the perspective of statistical mechanics: $$\mathcal{Z} = \sum_s \exp(-H(s)/kT)$$ Here the summation runs over all possible (micro)states of the system. The partition function is really useful, as it is related to the free energy $A = -kT \ln{\mathcal{Z}}$. When the partition function goes to zero, the free energy explodes, and this signifies a phase change - a physically interesting event. What about our simple system? $$\mathcal{Z} = 2 \exp({\beta J}) + 2 = 2x + 2$$ You'll notice that I changed $x=\exp({\beta J})$ to make things a little neater. You may also notice that $\mathcal{Z}$ looks like a polynomial. Which means if we want to find the interesting events in the system, we find the zeros of the partition function, $\mathcal{Z}=0$. This zero will correspond to a particular temperature $T$. In this case the only temperature we get is a complex one: setting $2x + 2 = 0$ gives $e^{\beta J} = -1$, which no real temperature satisfies...

Complex Temperatures?

Before you discount the idea that a temperature not on the real number line is impossible (and that $T<0$ is strange as well), let's see where this takes us. If we continue to add sites to our simple little system, our polynomial will get a bit more complicated, and we will find more roots on the complex plane. In fact, as we take ever more roots, the points appear to form a pattern, much like the pattern you've shown above.

For a finite spin system, you'll never find a zero on the real axis, however...

Anybody have any idea what happens to the solution set as n→∞?

At the thermodynamic limit (which corresponds to an infinite number of sites) the points become dense on the plane. At this limit the points can touch the real axis (corresponding to a phase change in the system). For example, in the 2D Ising model the points do touch the real axis (and make a beautiful circle on the complex plane) where the system undergoes a phase transition from ordered to disordered.

Prior work

The study of these zeros (from a physics perspective) is fascinating and started with the seminal papers by Yang and Lee:

Yang, C. N.; Lee, T. D. (1952), "Statistical Theory of Equations of State and Phase Transitions. I. Theory of Condensation", Physical Review 87: 404–409, doi:10.1103/PhysRev.87.404

Lee, T. D.; Yang, C. N. (1952), "Statistical Theory of Equations of State and Phase Transitions. II. Lattice Gas and Ising Model", Physical Review 87: 410–419, doi:10.1103/PhysRev.87.410

Which are surprisingly accessible. For a good time, search for images of Yang-Lee zeros. In addition, you can extend the fugacity to the complex plane; these are called the Fisher zeros and make even more complex patterns!
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/206890', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12952/']}
jdg_377809
stackexchange
llm_judgeable_groundtruth_similarity
160692
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Calculate the following sum for integers $n\ge2$: $$\sum_{k=0}^{n-2} 2^{k} \tan \left(\frac{\pi}{2^{n-k}}\right)$$ I'm trying to obtain a closed form if that is possible. Now provide the response and nothing else.
We have this nice identity $$\tan(\theta) = \cot(\theta)-2 \cot(2 \theta)$$Making use of this, and denoting $\displaystyle \sum_{k=0}^{m} 2^k \tan(2^k \theta)$ as $S$, we get that \begin{align}S & = \tan(\theta) + 2 \tan(2 \theta) + 4 \tan(4 \theta) + \cdots + 2^m \tan(2^m \theta)\\& = \cot(\theta) -2 \cot(2 \theta) +2 \cot(2\theta) - \cdots + 2^m \cot(2^m \theta) - 2^{m+1} \cot(2^{m+1} \theta)\\& = \cot(\theta) - 2^{m+1} \cot(2^{m+1} \theta)\end{align}In your case, $\theta = \dfrac{\pi}{2^n}$ and $m= n-2$. Hence, we get the sum to be $$S = \cot \left(\dfrac{\pi}{2^n} \right) - 2^{n-1} \cot \left( 2^{n-1} \cdot \dfrac{\pi}{2^n}\right) = \cot \left(\dfrac{\pi}{2^n} \right) - 2^{n-1} \cot \left( \dfrac{\pi}2\right) = \cot \left(\dfrac{\pi}{2^n} \right)$$ Proof for $\tan(\theta) = \cot(\theta)-2 \cot(2 \theta)$ $$\cot(\theta) - \tan(\theta) = \dfrac{\cos(\theta)}{\sin(\theta)} - \dfrac{\sin(\theta)}{\cos(\theta)} = \dfrac{\cos^2(\theta) - \sin^2(\theta)}{\sin(\theta) \cos(\theta)}= 2\dfrac{\cos(2\theta)}{\sin(2 \theta)} = 2 \cot(2 \theta)$$
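As a quick sanity check of the closed form (my addition, not part of the original answer): for $n=2$ the sum has only the $k=0$ term, and indeed $$\sum_{k=0}^{0} 2^{k} \tan\left(\frac{\pi}{2^{2-k}}\right) = \tan\frac{\pi}{4} = 1 = \cot\frac{\pi}{4} = \cot\left(\frac{\pi}{2^{2}}\right).$$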
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/160692', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32016/']}
jdg_377810
stackexchange
llm_judgeable_groundtruth_similarity
3570687
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've seen this done in C before:

#define MY_STRING "12345"
...
#define SOMETHING (MY_STRING + 2)

What does SOMETHING get expanded to here? Is this even legal? Or do they mean this?

#define SOMETHING (MY_STRING[2])

Now provide the response and nothing else.
String literals exist in the fixed data segment of the program, so they appear to the compiler as a type of pointer. Since the preprocessor substitutes text, SOMETHING expands to ("12345" + 2), which is perfectly legal pointer arithmetic yielding a pointer to the substring "345":

+-+-+-+-+-+--+
|1|2|3|4|5|\0|
+-+-+-+-+-+--+
 ^   ^
 |   +-- MY_STRING + 2
 +------ MY_STRING
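A minimal runnable check (my addition) showing the difference between the two macros from the question:

#include <stdio.h>

#define MY_STRING "12345"
#define SOMETHING (MY_STRING + 2)

int main(void) {
    printf("%s\n", SOMETHING);     /* pointer arithmetic: prints "345" */
    printf("%c\n", MY_STRING[2]);  /* indexing: prints the single char '3' */
    return 0;
}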
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3570687', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71391/']}
jdg_377811
stackexchange
llm_judgeable_groundtruth_similarity
30159
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Somebody asked me this question in a job interview, and I replied that their joint distribution is always Gaussian. I thought that I could always write a bivariate Gaussian with their means, variances, and covariance. I am wondering if there can be a case for which the joint probability of two Gaussians is not Gaussian? Now provide the response and nothing else.
The bivariate normal distribution is the exception, not the rule!

It is important to recognize that "almost all" joint distributions with normal marginals are not the bivariate normal distribution. That is, the common viewpoint that joint distributions with normal marginals that are not the bivariate normal are somehow "pathological" is a bit misguided. Certainly, the multivariate normal is extremely important due to its stability under linear transformations, and so receives the bulk of attention in applications.

Examples

It is useful to start with some examples. The figure below contains heatmaps of six bivariate distributions, all of which have standard normal marginals. The left and middle ones in the top row are bivariate normals; the remaining ones are not (as should be apparent). They're described further below.

The bare bones of copulas

Properties of dependence are often efficiently analyzed using copulas. A bivariate copula is just a fancy name for a probability distribution on the unit square $[0,1]^2$ with uniform marginals.

Suppose $C(u,v)$ is a bivariate copula. Then, immediately from the above, we know that $C(u,v) \geq 0$, $C(u,1) = u$ and $C(1,v) = v$, for example.

We can construct bivariate random variables on the Euclidean plane with prespecified marginals by a simple transformation of a bivariate copula. Let $F_1$ and $F_2$ be prescribed marginal distributions for a pair of random variables $(X,Y)$. Then, if $C(u,v)$ is a bivariate copula, $$F(x,y) = C(F_1(x), F_2(y))$$ is a bivariate distribution function with marginals $F_1$ and $F_2$. To see this last fact, just note that $$\renewcommand{\Pr}{\mathbb P}\Pr(X \leq x) = \Pr(X \leq x, Y < \infty) = C(F_1(x), F_2(\infty)) = C(F_1(x),1) = F_1(x) \>.$$ The same argument works for $F_2$.

For continuous $F_1$ and $F_2$, Sklar's theorem asserts a converse implying uniqueness. That is, given a bivariate distribution $F(x,y)$ with continuous marginals $F_1$, $F_2$, the corresponding copula is unique (on the appropriate range space).

The bivariate normal is exceptional

Sklar's theorem tells us (essentially) that there is only one copula that produces the bivariate normal distribution. This is the aptly named Gaussian copula, which has density on $[0,1]^2$ $$c_\rho(u,v) := \frac{\partial^2}{\partial u \, \partial v} C_\rho(u,v) = \frac{\varphi_{2,\rho}(\Phi^{-1}(u),\Phi^{-1}(v))}{\varphi(\Phi^{-1}(u)) \varphi(\Phi^{-1}(v))} \>,$$ where the numerator is the bivariate normal distribution with correlation $\rho$ evaluated at $\Phi^{-1}(u)$ and $\Phi^{-1}(v)$.

But, there are lots of other copulas, and all of them will give a bivariate distribution with normal marginals which is not the bivariate normal, by using the transformation described in the previous section.

Some details on the examples

Note that if $C(u,v)$ is an arbitrary copula with density $c(u,v)$, the corresponding bivariate density with standard normal marginals under the transformation $F(x,y) = C(\Phi(x),\Phi(y))$ is $$f(x,y) = \varphi(x) \varphi(y) c(\Phi(x), \Phi(y)) \>.$$

Note that by applying the Gaussian copula in the above equation, we recover the bivariate normal density: the marginal factors cancel, leaving $f(x,y) = \varphi_{2,\rho}(x,y)$. But, for any other choice of $c(u,v)$, we will not.

The examples in the figure were constructed as follows (going across each row, one column at a time):

1. Bivariate normal with independent components.
2. Bivariate normal with $\rho = -0.4$.
3. The example given in this answer of Dilip Sarwate. It can easily be seen to be induced by the copula $C(u,v)$ with density $c(u,v) = 2 (\mathbf 1_{(0 \leq u \leq 1/2, 0 \leq v \leq 1/2)} + \mathbf 1_{(1/2 < u \leq 1, 1/2 < v \leq 1)})$.
4. Generated from the Frank copula with parameter $\theta = 2$.
5. Generated from the Clayton copula with parameter $\theta = 1$.
6. Generated from an asymmetric modification of the Clayton copula with parameter $\theta = 3$.
{}
{'log_upvote_score': 9, 'links': ['https://stats.stackexchange.com/questions/30159', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/172/']}
jdg_377812
stackexchange
llm_judgeable_groundtruth_similarity
1871676
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following dictionary in the programming language Python:

D = {'A': 1, 'B': 2, 'C': 3}

It is saying that the value of A is 1, the value of B is 2, and the value of C is 3. It also has the property that D['A'] = 1, etc. How would I write such a thing in math? I was thinking about $$D = \{A = 1, B = 2, C = 3\}.$$ However, I am not sure if this is the right or best way to do such a thing. I would like to use the structure for taking sums: e.g. 'AAAA' is interpreted as $1+1+1+1$, etc. What kind of notation should I use?

Now provide the response and nothing else.
A dictionary is just a function $\mathrm{Dict}\colon \mathrm{Keys} \rightarrow \mathrm{Values}\cup\{\epsilon\}$ where $\epsilon$ is a "null character" with the understanding that $\epsilon\notin\mathrm{Values}$. For example, let $\mathrm{Keys}=\{A,B,C,...,Z\}$, and $\mathrm{Values}=\mathbb{Z}$. Then, in your case, $$\mathrm{Dict}(x)=\begin{cases}1 & \text{if }x=A\\2 & \text{if }x=B\\3 & \text{if }x=C\\\epsilon & \text{otherwise}\end{cases}$$
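To cover the asker's summing use case as well (my own extension, not part of the original answer): extend $\mathrm{Dict}$ from characters to strings by summing over the characters of a word $w = w_1 w_2 \cdots w_{|w|}$,

$$\mathrm{Dict}^{*}(w) = \sum_{i=1}^{|w|} \mathrm{Dict}(w_i), \qquad \text{e.g.} \quad \mathrm{Dict}^{*}(AAAA) = 1+1+1+1 = 4.$$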
{}
{'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/1871676', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/356363/']}
jdg_377813
stackexchange
llm_judgeable_groundtruth_similarity
14006290
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My query:

select cu.CustomerID, cu.FirstName, cu.LastName, COUNT(si.InvoiceID) as inv  -- 1
from Customer as cu inner join SalesInvoice as si                            -- 2
on cu.CustomerID = si.CustomerID                                             -- 3
-- put the WHERE clause here!                                                -- 4
group by cu.CustomerID, cu.FirstName, cu.LastName                            -- 5
where cu.FirstName = 'mark'                                                  -- 6

Output with correct code:

Error I get: Incorrect syntax near the keyword 'where'. Can you tell me why I get this error? I want to know why WHERE comes before GROUP BY and not after.

Now provide the response and nothing else.
You have the order wrong. The WHERE clause goes before the GROUP BY:

select cu.CustomerID, cu.FirstName, cu.LastName, COUNT(si.InvoiceID) as inv
from Customer as cu
inner join SalesInvoice as si
    on cu.CustomerID = si.CustomerID
where cu.FirstName = 'mark'
group by cu.CustomerID, cu.FirstName, cu.LastName

If you want to perform a filter after the GROUP BY, then you will use a HAVING clause:

select cu.CustomerID, cu.FirstName, cu.LastName, COUNT(si.InvoiceID) as inv
from Customer as cu
inner join SalesInvoice as si
    on cu.CustomerID = si.CustomerID
group by cu.CustomerID, cu.FirstName, cu.LastName
having cu.FirstName = 'mark'

A HAVING clause is typically used for aggregate function filtering, so it makes sense that it is applied after the GROUP BY (an aggregate example follows below).

To learn about the order of operations, here is an article explaining it. From the article:

To start out, I thought it would be good to look up the order in which SQL directives get executed, as this will change the way I can optimize:

1. FROM clause
2. WHERE clause
3. GROUP BY clause
4. HAVING clause
5. SELECT clause
6. ORDER BY clause

Using this order, you will apply the filter in the WHERE prior to a GROUP BY. The WHERE is used to limit the number of records. Think of it this way: if you were applying the WHERE afterwards, you would return more records than you would want to group on. Applying it first reduces the record set, then applies the grouping.
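A hedged example of the more typical aggregate use of HAVING (the threshold of 5 invoices is arbitrary, purely for illustration):

select cu.CustomerID, cu.FirstName, cu.LastName, COUNT(si.InvoiceID) as inv
from Customer as cu
inner join SalesInvoice as si
    on cu.CustomerID = si.CustomerID
where cu.FirstName = 'mark'
group by cu.CustomerID, cu.FirstName, cu.LastName
having COUNT(si.InvoiceID) > 5

Here the WHERE trims the rows before grouping, while the HAVING filters the groups by their aggregate count - something a WHERE clause cannot do.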
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14006290', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1667147/']}
jdg_377814
stackexchange
llm_judgeable_groundtruth_similarity
2295766
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a scrollable div tag with a fixed height that gets a vertical scroll bar. I'm trying to get this to work in Chrome. This is my CSS:

#designDiv {
    width: 249px;
    height: 299px;
    background-color: Gray;
    overflow-y: scroll;
    max-width: 230px;
    max-height: 100px;
}

It does show the vertical scroll bar, but the problem occurs during run-time when the user adds some content to the #designDiv. It does not scroll, and the #designDiv begins to expand vertically. How do I create a vertically scrollable div tag for Chrome?

Now provide the response and nothing else.
Well, your code worked for me (running Chrome 5.0.307.9 and Firefox 3.5.8 on Ubuntu 9.10), though I switched overflow-y: scroll; to overflow-y: auto;

Demo page over at: http://davidrhysthomas.co.uk/so/tableDiv.html

xhtml below:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
    <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Div in table</title>
    <link rel="stylesheet" type="text/css" href="css/stylesheet.css" />
    <style type="text/css" media="all">
        th {border-bottom: 2px solid #ccc; }
        th, td {padding: 0.5em 1em; margin: 0; border-collapse: collapse; }
        tr td:first-child {border-right: 2px solid #ccc; }
        td > div {width: 249px;
                  height: 299px;
                  background-color: Gray;
                  overflow-y: auto;
                  max-width: 230px;
                  max-height: 100px;
        }
    </style>
    <script type="text/javascript" src="js/jquery.js"></script>
    <script type="text/javascript">
    </script>
</head>
<body>
<div>
    <table>
        <thead>
            <tr><th>This is column one</th><th>This is column two</th><th>This is column three</th>
        </thead>
        <tbody>
            <tr><td>This is row one</td><td>data point 2.1</td><td>data point 3.1</td>
            <tr><td>This is row two</td><td>data point 2.2</td><td>data point 3.2</td>
            <tr><td>This is row three</td><td>data point 2.3</td><td>data point 3.3</td>
            <tr><td>This is row four</td><td><div><p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum ultricies mattis dolor. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Vestibulum a accumsan purus. Vivamus semper tempus nisi et convallis. Aliquam pretium rutrum lacus sed auctor. Phasellus viverra elit vel neque lacinia ut dictum mauris aliquet. Etiam elementum iaculis lectus, laoreet tempor ligula aliquet non. Mauris ornare adipiscing feugiat. Vivamus condimentum luctus tortor venenatis fermentum. Maecenas eu risus nec leo vehicula mattis. In nisi nibh, fermentum vitae tincidunt non, mattis eu metus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Nunc vel est purus. Ut accumsan, elit non lacinia porta, nibh magna pretium ligula, sed iaculis metus tortor aliquam urna. Duis commodo tincidunt aliquam. Maecenas in augue ut ligula sodales elementum quis vitae risus. Vivamus mollis blandit magna, eu fringilla velit auctor sed.</p></div></td><td>data point 3.4</td>
            <tr><td>This is row five</td><td>data point 2.5</td><td>data point 3.5</td>
            <tr><td>This is row six</td><td>data point 2.6</td><td>data point 3.6</td>
            <tr><td>This is row seven</td><td>data point 2.7</td><td>data point 3.7</td>
        </tbody>
    </table>
</div>
</body>
</html>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2295766', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/276889/']}
jdg_377815
stackexchange
llm_judgeable_groundtruth_similarity
306697
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I posted an answer to a question, and when I returned to it I was dumbfounded by what I saw. My description was no longer my own. It had been completely rewritten by another user. Is this acceptable?? I understand when small grammatical errors or formatting improvements should be made. But completely rewriting an explanation?? Here's the revision in question. Now provide the response and nothing else.
I agree that the edit was inappropriate, for two very big reasons:

1. It did not appear to improve the answer appreciably. The first paragraph was reworded, but not in a way that clearly and significantly improves the clarity, nor adds significant new information.
2. It changes the answer. The original answer limits the changes of height and width to the one element in question. The edit modifies this to effect that change on all elements of that class (profile_pic).

Either of these reasons would be cause for a rollback, but the latter is particularly egregious. There is nothing in the question that would lead a reader to believe that the height and width changes must be applied to the entire class. Maybe they should be, but doing so is not a given, nor does failing to do so detract from the original answer.

Someone who feels that the actual code in an answer should be different, where the original answer's code is not clearly wrong per se (e.g. has a serious bug that is inconsistent with the answer author's intent), should be adding a new answer themselves, providing detailed explanation for why the differences in their answer are important. This does two important things: it preserves the original, correct answer for future readers, and it enhances knowledge (potentially) by providing a clear contrast of differing opinions and justification for that difference.

Rolling back to the previous edit was entirely appropriate here. That said, I would refrain from rolling back edits, even on your own posts, unless you can articulate in a clear, specific way why the edit was inappropriate (failing to improve the answer is a perfectly legitimate basis for a rollback, so the bar is not very high here). The default attitude should IMHO be to accept edits; give the editor the benefit of the doubt, until you can (at least to yourself) clearly state why the edit should not be accepted.

Finally, note that rolling back edits may get you into a tug-of-war; I would roll back only once. If the author of the edit persists, that's the time to bring a moderator into the mix. Remember to stay professional and polite to everyone involved, including the edit's author (as you have here).
{}
{'log_upvote_score': 6, 'links': ['https://meta.stackoverflow.com/questions/306697', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/693614/']}
jdg_377816
stackexchange
llm_judgeable_groundtruth_similarity
41450
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to perform a Pearson's $\chi^2$ test to analyse contingency tables, but because I have small numbers, it is recommended to perform instead what is called a Fisher's Exact Test. This requires generating all integer matrices with the same column and row totals as the one given, then computing and summing all p-values from the corresponding distribution which are lower than the one from the data. See Wikipedia and MathWorld for relevant context. Apparently R offers that, but I couldn't find it in Mathematica, and after extensive research I couldn't find an implementation around, so I did my own. The examples in the links are with 2x2 matrices, but I did an n x m implementation and, at least for the MathWorld example, the numbers match. I have one question: the code I wrote uses Reduce, although it seemed to me that generating all matrices was more of a combinatorial problem. I pondered using FrobeniusSolve, but it still seemed far from what's needed. Am I missing something, or is Reduce the way to go? The essential part of the code, which I made available on GitHub here, is that for a matrix like $$\left(\begin{array}{ccc} 1 & 0 & 2 \\ 0 & 1 & 2 \\\end{array}\right)$$ with row sums 3, 3 and column sums 1, 1, 4, it creates a system of linear equations like: $$\begin{array}{c} x_{1,1}+x_{1,2}+x_{1,3}=3 \\ x_{2,1}+x_{2,2}+x_{2,3}=3 \\\end{array}$$$$\begin{array}{c} x_{1,1}+x_{2,1}=1 \\ x_{1,2}+x_{2,2}=1 \\ x_{1,3}+x_{2,3}=4 \\\end{array}$$ subject to the constraints $ x_{1,1}\geq 0$, $x_{1,2}\geq 0$, $x_{1,3}\geq 0$, $x_{2,1}\geq 0$, $x_{2,2}\geq 0$, $ x_{2,3}\geq 0 $, and feeds this into Reduce to solve the system over the Integers. Reduce returns all the solutions, which is what we need to compute Fisher's exact p-value. Note: I just found this advice on how to use GitHub better for Mathematica projects. For the time being, I leave it as-is. I hope it's easy to use and test. You can test the above-mentioned code like so:

FisherExact[{{1, 0, 2}, {0, 0, 2}, {2, 1, 0}, {0, 2, 1}}]

It has some debugging via Print which shows all the generated matrices and their p-value. The last part (the use of Select) to process all found matrices didn't seem very Mathematica-like to me, but it was late and I was tired - feedback is welcome. I would give my tick to the answer with more votes after a couple of days, if anyone bothers to write me two lines :) Thanks in advance! Now provide the response and nothing else.
Maybe you are willing to consider a Bayesian approach to this perennial problem. Beware though: Bayesians have no random variables, no p-values, no null hypotheses, etc. They have probabilities, or ratios thereof. The (out of print) book "Rational Descriptions, Decisions and Designs" by Miron Tribus (1969!) has an excellent chapter on contingency tables. From this book I have copied the solutions below. His solutions are exact and work for small counts as well as non-square tables. He considers two mutially exclusive hypotheses: "the rows and columns are independent" vs "the rows and columns are dependent", under a variety of different types of knowledge. Here I give only two cases: -- Knowledge type 1A, with no specific prior knowledge on the (in-)dependence and no controls, -- Knowledge type 1B, also with no specific prior knowledge but with a control on the counts of the rows (see examples below). Tribus computes the "evidence in favor of the hypothesis of independence of rows and columns" for these types. (The references in the code are to chapters and pages in his book.) The evidence for type 1A is: (* Evidence rows-cols independent: 1A VI-38 p. 194 *)evidence1A[table_] :=Module[{r, s, nidot, ndotj, ntot, ev, prob},(* Table dimensions r=nr of rows, s=nr of cols *){r, s} = Dimensions[table];(* Margin and Total counts *)nidot = Total[table, {2}] ;(* sum in r-direction *)ndotj = Total[table, {1}] ;(* sum in s-direction *)ntot = Total[table, 2]; (* overall total *)(* evidence of VI-38 p.194 *)ev = Log[ ((ntot + r*s - 1)! * ntot!)/ ((ntot + r - 1)!*(ntot + s - 1)!)] -Log[ (r*s - 1)!/((r - 1)!*(s - 1)!) ] +(Total[ Log[ nidot!]] - Log[ntot!]) +(Total[ Log[ ndotj!]] - Log[ntot!]) -(Total[Log[table!], 2] - Log[ntot!]);(* probability from evidence: III-13 p.84 *)prob = (1 + Exp[-ev])^-1 ;{ev // N, prob // N} (* output *)] (* End of Module *) Tribus tests this using an interesting example of eye-color vs hair-color correlation of soldiers in conscription military service (!). Note that this is a 3x4 table. (* Soldier table VI-1 p.183: eye color vs. hair color *)soldier = {(* blonde,brown,black,red *)(* blue *) {1768, 807, 189, 47},(* green *) {946, 1387, 786, 53},(* brown *) {115, 438, 288, 16}};(* Tribus p.197 gives 560 Napiers *)(* prob that the table is row-col independent *)evidence1A[soldier](* output: {-560.661, 3.22157*10^-244} *) The probability of independence of rows and columns is 3.22*10^-244, and thus virtually zero. As expected. The case 1B applies to tests with a pre-set count for the columns. In Tribus' tobacco test flavor example: 250 packages with mixed cigarettes + pipe tobacco vs. 150 packages with only cigarettes. (* Tobacco problem p.198 : solution is independent of s *)tobacco = {(* cigaret+pipe tobacco: mixed, not mixed *)(* no change *) {72, 119},(* change aroma *) {178, 31}(* fixed counts : {250,150} *)}; The evidence for this problem is: (* Evidence rows-cols independent: 1B VI-54 p. 200 *)(* solution is independent of s *)evidence1B[table_] :=Module[ {r, s, nidot, ndotj, ntot, ev, prob},(* Table dimensions r=nr of rows, s=nr of cols *){r, s} = Dimensions[table];(* Margin and Total counts *)nidot = Total[table, {2}] ;(* sum in r-direction *)ndotj = Total[table, {1}] ;(* sum in s-direction *)ntot = Total[table, 2]; (* overall total *)(* evidence Eq.VI-54 p.200 *)ev = Log[(r - 1)!/(ntot + r - 1)!] 
+Total[Log[(ndotj + r - 1)!/(r - 1)!]] +(Total[Log[nidot!]] - Log[ntot!]) -(Total[Log[table!], 2] - Log[ntot!]) ;(* probability from evidence: III-13 p.84 *)prob = (1 + Exp[-ev])^-1 ;{ev // N, prob // N} (* output *)] (* End of Module *) Tribus' solution: (* Tribus p.200 : 1.45 10^-21 *)evidence1B[tobacco](* output: {-47.9818, 1.45138*10^-21} *) Also here the probability for rows and columns to be independent is pretty small: 1.45*10^-21. Your example of a 4x3 table: caya = {{1, 0, 2}, {0, 0, 2}, {2, 1, 0}, {0, 2, 1}};evidence1A[caya](* output: {-2.62276, 0.0676881} *)evidence1B[caya](* output: {-1.7158, 0.152413} *) The probabilities for independence of rows and columns are small-ish. But they are not very small. Depending on the details of your problem, such probability values can signal: inconclusive .
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/41450', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1215/']}
jdg_377817
stackexchange
llm_judgeable_groundtruth_similarity
13988556
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the background, I want it to reload and show the number of unread messages. I want that without refreshing the page. I mean using ajax. If I had this in the menu, how can I refresh only this section every 30 secs? <li><%= link_to sanitize('<i class="icon-envelope"></i> ') + "received messages" + sanitize(' <span class="badge badge-info">'+current_user.mailbox.inbox(:read => false).count(:id, :distinct => true).to_s+'</span>'), messages_received_path %></li> messages_controller.rb def received if params[:search] @messages = current_user.mailbox.inbox.search_messages(@search).page(params[:page]).per(10) else @messages = current_user.mailbox.inbox.page(params[:page]).per(10) end add_crumb 'Messages Received', messages_received_path @box = 'inbox' render :index end UPDATE: _ __ _ __ _ __ _ __ _ __ _ __ _ __ _ __ _ __ _ __ _ assets/javascript/refresh_messages_count.js $(document).ready(function () { // will call refreshPartial every 3 seconds setInterval(refreshPartial, 3000)});function refreshPartial() { $.ajax({ url: "messages/refresh_part" })} messages_controller.rb def refresh_part @message_count = current_user.mailbox.inbox(:read => false).count(:id, :distinct => true) # get whatever data you need to a variable named @data respond_to do |format| format.js {render :action=>"refresh_part.js"} end end views/layouts/_menu.html.erb <span id="message_received_count"><%= render :partial => "layouts/message_received_count" %></span> views/layouts/_message_received_count.html.erb <% if user_signed_in? && current_user.mailbox.inbox(:read => false).count(:id, :distinct => true) > 0 %> <li><%= link_to sanitize('<i class="icon-envelope"></i> ') + "Received" + sanitize(' <span class="badge badge-info">'+@message_count.to_s+'</span>'), messages_received_path %></li><% else %> <li><%= link_to sanitize('<i class="icon-envelope"></i> ') + "Received", messages_received_path %></li><% end %> views/messages/refresh_part.js.erb $('#message_received_count').html("#{escape_javascript(render 'layouts/messages_received_count', data: @message_count)}"); Now provide the response and nothing else.
You will use setInterval to send the ajax request: $(document).ready(function () { // will call refreshPartial every 3 seconds setInterval(refreshPartial, 3000)});
// calls the action that refreshes the partial
function refreshPartial() { $.ajax({ url: "whatever_controller/refresh_part" })} Then you make an action in a controller like this: def refresh_part # get whatever data you need to a variable named @data respond_to do |format| format.js end end Then you will write a js file named refresh_part.js.haml (you could use erb instead of haml). refresh_part.js.haml would look like this: $('#part_you_want_to_refresh').html("#{escape_javascript(render 'name_of_partial', data: @data)}"); Make sure you set the correct routes in routes.rb .
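For completeness: if your project uses ERB rather than Haml (as in the question's setup), a minimal sketch of the same file might look like the following. This is an assumption-based illustration; the partial name, the @data variable and the #part_you_want_to_refresh selector are placeholders, not part of the original answer.

// app/views/whatever_controller/refresh_part.js.erb
// Re-render the partial server-side and swap it into the page element.
$('#part_you_want_to_refresh').html("<%= escape_javascript(render 'name_of_partial', data: @data) %>");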
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13988556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1886175/']}
jdg_377818
stackexchange
llm_judgeable_groundtruth_similarity
2446027
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've encountered the following paragraph: “Debug vs. Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn’t really do any optimization. The C# compiler just spits out IL… and at the runtime it’s the JITer that does all the optimization. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn’t key off whether you run the Debug or Release configuration of your project, that keys off whether a debugger is attached.” The source is here and the podcast is here . Can someone direct me to a Microsoft article that can actually prove this? Googling " C# debug vs release performance " mostly returns results saying " Debug has a lot of performance hit ", " release is optimized ", and " don't deploy debug to production ". Now provide the response and nothing else.
Partially true. In debug mode, the compiler emits debug symbols for all variables and compiles the code as is. In release mode, some optimizations are included:

unused variables do not get compiled at all

some loop variables are taken out of the loop by the compiler if they are proven to be invariants

code written inside #if DEBUG blocks is not included

etc. The rest is up to the JIT. Full list of optimizations here courtesy of Eric Lippert .
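To make the conditional-compilation point concrete, here is a minimal C# sketch (an illustration, not from the original answer; DEBUG is the symbol Visual Studio defines by default in the Debug configuration):

using System;
using System.Diagnostics;

class Program
{
    // Call sites of a [Conditional("DEBUG")] method are removed entirely
    // by the compiler when DEBUG is not defined (typical Release builds).
    [Conditional("DEBUG")]
    static void Trace(string message) => Console.WriteLine(message);

    static void Main()
    {
#if DEBUG
        Console.WriteLine("Compiled only in the Debug configuration");
#endif
        Trace("This call disappears in Release");
    }
}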
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2446027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/241557/']}
jdg_377819
stackexchange
llm_judgeable_groundtruth_similarity
141120
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: One assumption for regression analysis is that $X$ and $Y$ are not intertwined. However, when I think about it, it seems to me that it can make sense. Here is an example. If we have a test with 3 sections (A, B and C). The overall test score is equal to the sum of individual scores for the 3 sections. Now it makes sense to say that $X$ can be the score in section A and $Y$ the overall test score. Then the linear regression can answer this question: what is the variability in the overall test score that is attributable to section A? Here, several scenarios are possible: Section A is the hardest of the 3 sections and students always score lowest on it. In such a case, intuitively $R^2$ would be low. Because most of the overall test score would be determined by B and C. Section A was very easy for students. In this case also the correlation would not be high. Because students always score 100% on this section and therefore this section tells us nothing about the overall test score. Section A has intermediate difficulty. In this case the correlation would be stronger (but this also depends on the other scores, B and C). Another example is this: we analyze the total content of a trace element in urine. And we analyze independently the individual species (chemical forms) of that trace element in urine. There can be many chemical forms. And if our analyses are correct, the sum of chemical forms should give us the same as the total content of an element (analyzed by a different technique). However, it makes sense to ask whether one chemical form is correlated with the total element content in urine, as this total content is an indicator of the total intake from food of that element. Then, if we say that $X$ is the total element in urine and $Y$ is chemical form A in urine, then by studying the correlation we can explore whether this chemical form is the major one that contributes to the overall variability or not. It seems to me that it makes sense sometimes even when $X$ and $Y$ are not independent and that this can in some cases help answer scientific questions. Would you think $R^2$ can be useful or meaningful in the examples above? If we consider the test score example above, I would already say there would be about a 33% contribution of each section had the difficulty been exactly the same for the students. But in practice this is not necessarily true. So I was thinking maybe using regression analysis can help us know the true variability attributed to each section of an exam. So it seems to me that $R^2$ would be meaningful even though we already know the null hypothesis is not true. Are there alternative modified regression methods to account for such situations and provide us with meaningful parameters? Now provide the response and nothing else.
You might want to consider an approach outside of the traditional regression approach. This is comparable to the types of problems that psychometrics is designed to resolve (well, actually your first example is precisely that, since it is a test). In Classical Test Theory , one of the most common metrics is the item-total score correlation, which is essentially the correlation between the item score and the total score. It tells you the item's discrimination - its ability to discriminate between high and low scoring respondents. This is comparable to explaining the variance, like what you are asking about above with $R^2$. There are two ways to calculate this score, either by using the total test score including the item of interest, or excluding it. When you have a lot of items, these two methods are almost the same, but when you have few items, then they can make a big difference. Another approach from Item Response Theory (IRT) is to estimate, either via a 2-parameter item response model or via a confirmatory factor analysis (which statistically are the same, but interpretation-wise are different). A 2-parameter model includes a parameter for the item difficulty (the relative difficulty of the item) and one for item discrimination, which is interpreted very similarly to the item-total score correlation. High discrimination = the item differentiates between high and low scorers well. If you use confirmatory factor analysis (CFA), you have item loadings, which are essentially your discrimination parameters. They tell you how much of the total score is driven by a particular item. Using IRT or CFA assumes you have a latent score, not an observed score, that you are trying to estimate. In the examples you give above, you are concerned with an observed score that isn't latent. So these models wouldn't be what you are after, since they are probabilistic and you kind of have a tautological relationship (your total is by definition made up of the parts, with no error). But I point them out as examples of ways statistics gets at similar answers. The last thing I want to point out (and this is probably something others would argue with): while an assumption is that regressors are independent, when we have a categorical variable and we enter dummies into the model, those dummy variables are, by definition, correlated. So this would seemingly violate assumptions of independence and bring in multicollinearity. If you think of it this way, it would make sense to run your regression of, say, the elements in urine and exclude one; the coefficients would be valid just as if it was a single categorical variable. In that sense, you are getting a number comparable to the item-total correlation from Classical Test Theory I pointed out above.
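To make the item-total correlation concrete, here is a minimal Python sketch with made-up binary item scores (the data and variable names are illustrative only, not from the original answer):

import numpy as np

rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(100, 5))  # 100 respondents x 5 items, fake data
total = scores.sum(axis=1)

item = scores[:, 0]
# uncorrected: correlate the item with the total that includes the item
uncorrected = np.corrcoef(item, total)[0, 1]
# corrected: exclude the item from the total so it does not inflate the correlation
corrected = np.corrcoef(item, total - item)[0, 1]
print(uncorrected, corrected)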
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/141120', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/70803/']}
jdg_377820
stackexchange
llm_judgeable_groundtruth_similarity
127583
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: For example, I have a nested association like this <|"fff" -> <|"2001" -> <|5040.` -> {"S20010037", "S20010038", "S20010039", "S20010040", "S20010041", "S20010042"}|>, "2005" -> <|4350.` -> {"S20050448", "S20050449"}, 3450.` -> {"S20050998", "S20050999"}|>|>|> I want to "Flatten" it like this <|{fff, 2001, 5040.} -> {"S20010037", "S20010038", "S20010039", "S20010040", "S20010041", "S20010042"}, {fff, 2005, 4350.} -> {"S20050448", "S20050449"}, {fff, 2005, 3450.} -> {"S20050998", "S20050999"}|> I can't figure out a good way. How to do it elegantly? Now provide the response and nothing else.
Another idea: FixedPoint[Association[Normal[#] /. Rule[n_, m_Association] :> KeyMap[Append[n, #] &, m]] &, KeyMap[{#} &, asso]] <|{"fff", "2001", 5040.} -> {"S20010037", "S20010038", "S20010039", "S20010040", "S20010041", "S20010042"}, {"fff", "2005", 4350.} -> {"S20050448", "S20050449"}, {"fff", "2005", 3450.} -> {"S20050998", "S20050999"}|> Which is the same as: Association[Normal[KeyMap[List, asso]] //. (n_ -> m_Association) :> Normal[KeyMap[Append[n, #] &, m]]]
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/127583', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/4742/']}
jdg_377821
stackexchange
llm_judgeable_groundtruth_similarity
26408804
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to find what I'm doing wrong regarding an Azure WebJobs QueueTrigger method that should be triggered from an Azure Storage Queue. I've read a couple of documents (as in blog posts / msdn articles). But I'm still not clear. Main question / misunderstood aspect: what should be the name of the connection string for Azure storage, in the console app App.config or in the Windows Azure Configuration (portal)? So far I have the following names set in both places. AzureJobsStorage AzureWebJobsStorage AzureJobsRuntime AzureJobsDashboard AzureJobsData Here's my WebJobs console app code. static void Main(){ JobHost host = new JobHost(); host.RunAndBlock();}public static void CreateLeague([QueueTrigger("temp")] string msg){ var task = JsonConvert.DeserializeObject<QueueTask>(msg); if (task.TaskType == QueueTask.TaskTypes.Pdf) RenderPdf(task.Id);} This console app is continuously running on my Azure Website. I can access its "debug" page where I can toggle output and I see it is started / running. My code to add a queue message (from my ASP.NET MVC app): CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();CloudQueue queue = queueClient.GetQueueReference("temp");queue.CreateIfNotExists();Common.QueueTask task = new Common.QueueTask();task.TaskType = Common.QueueTask.TaskTypes.Pdf;task.Id = p.Id;CloudQueueMessage msg = new CloudQueueMessage(JsonConvert.SerializeObject(task) );queue.AddMessage(msg); This code is executed, and messages are added to my Storage Account. But they do not get dequeued or read by the WebJob. Now provide the response and nothing else.
Hmm, the WebJobs class had to be public. using Microsoft.Azure.WebJobs;using Microsoft.WindowsAzure.Storage;using Microsoft.WindowsAzure.Storage.Blob;using Newtonsoft.Json;using Proceed.Common;using System;using System.Configuration;using System.IO;public class WebJobsTask { public static void Main() { JobHost host = new JobHost(); host.RunAndBlock(); } public static void CreateLeague([QueueTrigger("temp")] string msg) { var task = JsonConvert.DeserializeObject<QueueTask>(msg); if (task.TaskType == QueueTask.TaskTypes.Pdf) RenderPdf(task.Id); }} Also found a quick way to explore my queues: https://azurestorageexplorer.codeplex.com/ .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26408804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316855/']}
jdg_377822
stackexchange
llm_judgeable_groundtruth_similarity
3162446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've given some thought to implementing badges (just like the badges here on Stack Overflow) and think it would be difficult without Windows services, but I'd like to avoid that if possible. I came up with a plan to implement some examples: Audobiographer: Check if all fields in the profile are filled out. Commentor: When making a comment, check if the number of comments equals 10; if so, award the badge. Good Answer: When voting up, check to see if the vote score is 25 or higher. How could this be implemented in the database? Or would another way be better? Now provide the response and nothing else.
A similar-to-Stackoverflow implementation is actually a lot simpler than you have described, based on bits of info dropped by the team every once in a while. In the database, you simply store a collection of BadgeID - UserID pairs to track who has what (and a count or a rowID to allow multiple awards for some badges). In the application, there is a worker object for each badge type. The object is in cache, and when the cache expires, the worker runs its own logic for determining who should get the badge and making the updates, and then it re-inserts itself into the cache: public abstract class BadgeJob{ protected BadgeJob() { //start cycling on initialization Insert(); } //override to provide specific badge logic protected abstract void AwardBadges(); //how long to wait between iterations protected abstract TimeSpan Interval { get; } private void Callback(string key, object value, CacheItemRemovedReason reason) { if (reason == CacheItemRemovedReason.Expired) { this.AwardBadges(); this.Insert(); } } private void Insert() { HttpRuntime.Cache.Add(this.GetType().ToString(), this, null, Cache.NoAbsoluteExpiration, this.Interval, CacheItemPriority.Normal, this.Callback); }} And a concrete implementation: public class CommenterBadge : BadgeJob{ public CommenterBadge() : base() { } protected override void AwardBadges() { //select all users who have more than x comments //and don't have the commenter badge //add badges } //run every 10 minutes protected override TimeSpan Interval { get { return new TimeSpan(0,10,0); } }}
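To flesh out the stubbed AwardBadges body, here is a hypothetical sketch of the set-based award step. The schema (UserBadges and Comments tables and their columns) is invented for illustration and is not part of the original answer:

protected override void AwardBadges()
{
    // Illustrative SQL only -- the table and column names are assumptions.
    const string sql = @"
        INSERT INTO UserBadges (UserId, BadgeId)
        SELECT q.UserId, @BadgeId
        FROM (SELECT UserId FROM Comments
              GROUP BY UserId
              HAVING COUNT(*) >= 10) AS q
        WHERE NOT EXISTS (SELECT 1 FROM UserBadges ub
                          WHERE ub.UserId = q.UserId
                            AND ub.BadgeId = @BadgeId);";
    // Execute with your data access layer of choice, supplying @BadgeId.
}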
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3162446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184773/']}
jdg_377823
stackexchange
llm_judgeable_groundtruth_similarity
20618900
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've done a few searches but haven't seemed to find anything... Using WebApi, I would like to map an input parameter to a header value. E.g. in controller: public User GetUser(int id){ ... return user;} I want WebApi to map the id parameter to a header value (e.g. X-Auth: 1234)... rather than a URL parameter. Is this supported? Now provide the response and nothing else.
I don't think this is supported out of the box, like for example with the [FromBody] attribute. It seems you should be able to achieve this functionality by using Model Binders, as described here . In the model binder you have access to the request and its headers, so you should be able to read the header and set its value to the bindingContext.Model property. Edit: Reading the article further, it seems a custom HttpParameterBinding and a ParameterBindingAttribute is a more appropriate solution, or at least I would go this way. You could implement a generic [FromHeader] attribute, which does the job. I am also fighting the same problem, so I will post my solution once I have it in place. Edit 2: Here is my implementation: public class FromHeaderBinding : HttpParameterBinding{ private string name; public FromHeaderBinding(HttpParameterDescriptor parameter, string headerName) : base(parameter) { if (string.IsNullOrEmpty(headerName)) { throw new ArgumentNullException("headerName"); } this.name = headerName; } public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider, HttpActionContext actionContext, CancellationToken cancellationToken) { IEnumerable<string> values; if (actionContext.Request.Headers.TryGetValues(this.name, out values)) { actionContext.ActionArguments[this.Descriptor.ParameterName] = values.FirstOrDefault(); } var taskSource = new TaskCompletionSource<object>(); taskSource.SetResult(null); return taskSource.Task; }}public abstract class FromHeaderAttribute : ParameterBindingAttribute{ private string name; public FromHeaderAttribute(string headerName) { this.name = headerName; } public override HttpParameterBinding GetBinding(HttpParameterDescriptor parameter) { return new FromHeaderBinding(parameter, this.name); }}public class MyHeaderAttribute : FromHeaderAttribute{ public MyHeaderAttribute() : base("MyHeaderName") { }} Then you can use it like this: [HttpGet]public IHttpActionResult GetItem([MyHeader] string headerValue){ ...} Hope that helps.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20618900', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/848324/']}
jdg_377824
stackexchange
llm_judgeable_groundtruth_similarity
42166492
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to JS development; in an attempt to hot load changes using webpack-dev-server I keep getting the exception below. The exact stack is: Error: `output.path` needs to be an absolute path or `/`.
at Object.Shared.share.setFs (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-middleware/lib/Shared.js:88:11)
at Shared (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-middleware/lib/Shared.js:214:8)
at module.exports (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-middleware/middleware.js:22:15)
at new Server (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-server/lib/Server.js:56:20)
at startDevServer (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-server/bin/webpack-dev-server.js:379:12)
at processOptions (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-server/bin/webpack-dev-server.js:317:3)
at Object.<anonymous> (/Users/mybox/work/day1/ex6/node_modules/webpack-dev-server/bin/webpack-dev-server.js:441:1)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
Here are the webpack config files I have tried already: module.exports = { entry: "./client/app.jsx", output: { path: "dist/js", filename: "bundle.js", publicPath: "http://127.0.0.1:2992/js" }, module: { loaders: [ { test: /.jsx?$/, loader: "babel-loader", include: /client/ } ] }}; And: module.exports = { entry: "./client/app.jsx", output: { path: "/Users/mybox/work/day1/ex6/dist/js", filename: "bundle.js", publicPath: "http://127.0.0.1:2992/js" }, module: { loaders: [ { test: /.jsx?$/, loader: "babel-loader", include: /client/, query: { presets:['react'] } } ] }}; Below is my package.json file { "name": "ex6", "version": "1.0.0", "main": "index.js", "scripts": { "server": "node index.js", "hot": "webpack-dev-server --inline --hot --port 2992 --progress --colors", "dev": "webpack-dev-server --inline --dev --port 2992 --progress --colors" },"keywords": [],"author": "","license": "ISC","dependencies": { "babel-preset-es2015": "^6.22.0", "hapi": "^16.1.0", "inert": "^4.1.0"},"devDependencies": {"babel": "^6.5.2","babel-cli": "^6.22.2","babel-core": "^6.22.1","babel-loader": "^6.2.10","babel-preset-react": "^6.22.0","builder": "^3.2.1","webpack": "^2.2.1","webpack-dev-server": "^2.3.0"},"description": ""} Now provide the response and nothing else.
As the error message says, you need to use absolute path. To get an absolute path for current directory, You can use __dirname to get the current directory and then append dist/js . So it would be something like, output: { path: __dirname + "/dist/js", // or path: path.join(__dirname, "dist/js"), filename: "bundle.js"} Both will work just fine. You can read about webpack configuration here Edit : To use path: path.join(__dirname, "dist/js") you will need to require node's built-in path module. Quoting from the docs: Path module : It provides utilities for working with file and directory paths. Using it with the prefix __dirname global will prevent file path issues between operating systems and will allow relative paths to work as expected. You can require it at the top of your webpack.config.js as var path = require('path');...........output: { path: path.join(__dirname, "dist/js"), filename: "bundle.js"}// rest of the configuration Apart from above two methods, You can also use path.resolve as mentioned here . path: path.resolve(__dirname, "dist/js") Hope it helps :)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42166492', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4528508/']}
jdg_377825
stackexchange
llm_judgeable_groundtruth_similarity
22092863
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having difficulty getting the PHP client libraries package for Windows Azure via Composer. The problem would appear to be around Pear dependencies the package has. The contents of the composer.json file: { "require": { "microsoft/windowsazure": "dev-dev" }, "repositories": [ { "type": "pear", "url": "http://pear.php.net" } ] } The output following running "composer update" reads: Your requirements could not be resolved to an installable set of packages.Problem 1- Installation request for microsoft/windowsazure dev-dev -> satisfiable by microsoft/windowsazure[dev-dev].- microsoft/windowsazure dev-dev requires pear-pear/http_request2 * -> no matching package found. I've never experienced any difficulty getting this package in the past. I can provide more verbose logs on request. Now provide the response and nothing else.
I recall seeing a few issues raised due to changes in how replace works with regard to pear bridging. This issue needs to be fixed upstream to have pear-pear/* replaced with pear-pear.php.net/* , but as a workaround in your root composer.json you can explicitly require the dependencies in order to have them discovered by the solver. { "require": { "microsoft/windowsazure": "dev-dev", "pear-pear.php.net/http_request2": "*", "pear-pear.php.net/mail_mime": "*", "pear-pear.php.net/mail_mimedecode": "*" }, "repositories": [ { "type": "pear", "url": "http://pear.php.net" } ]} The above composer.json should work in the latest and future versions of composer. Tested with Composer version aa9c257f0efd1a54c93ba95282821a497bc15d75 2014-03-09 15:09:15
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22092863', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1746668/']}
jdg_377826
stackexchange
llm_judgeable_groundtruth_similarity
9473
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know: in case an attacker managed to capture some packets from my network traffic, can he build from them a file or anything easy to understand? I know that the TCP/IP protocol is using TCP/UDP for segmenting the files (or whatever is being sent) in order to send them over the network, so if the file is large enough it is segmented into several packets. If all you can get from sniffing is just a needle of text inside the blob (as far as I have seen, that is the result of sniffing), why bother to encrypt the traffic of wireless networks? There is SSL/TLS, which is used mainly to encrypt the initial authentication (username/password), so why encrypt all the traffic? My main concern is about the ability to build a file from the captured packets; is there an easy way to do so? Now provide the response and nothing else.
Capturing packets already produces an output file (a capture file, actually) which includes packet contents, timing information, headers, etc. If you want to separate these packets into individual streams, a program like wireshark can do the appropriate searching and filtering for you. It can even decrypt SSL/TLS traffic if you have the certificate key. If all you want to do is separate out the contents of individual TCP streams, a program called tcpflow will do that for you. Note that doing so discards a lot of useful data, including timing, headers, etc. But it can be very useful if you have a lot of traffic you'd like to grep. And on the subject of grepping traffic, have a look at ngrep to search traffic contents real-time. And this barely scratches the surface of network interception, analysis, modification, and related tools.
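For example (a minimal sketch; the interface, file and pattern names are placeholders, but the flags are the standard ones for each tool):

# Reassemble the TCP streams in a capture file, one output file per flow
tcpflow -r capture.pcap -o flows/

# Search live traffic on eth0 for a string in TCP payloads (needs root)
ngrep -q -d eth0 'password' tcp port 80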
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/9473', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/6024/']}
jdg_377827
stackexchange
llm_judgeable_groundtruth_similarity
385364
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A$ be an integral domain and $\tilde{A}$ be its integral closure in the field of fractions $K$. Assume that $\tilde{A}$ is a finitely generated $A$-module. I want to prove that if $\tilde{A}$ is flat over $A$, then $A$ is integrally closed. I thought the following fact would be useful: Fact: Let $A$ be an integral domain and $K$ be its field of fractions. Also let $B$ be a finitely generated $A$-submodule of $K$. Then $B$ is flat iff $B$ is locally free of rank $1$. By the above fact, I think we may assume that $\tilde{A}$ is locally free of rank $1$, i.e., $\tilde{A}_{\mathfrak{p}}$ is free of rank $1$ over $A_\mathfrak{p}$ for every prime ideal $\mathfrak{p}$ of $A$. However, I don't think that this would immediately imply that $A=\tilde{A}$, and I don't know how to use the fact that $\tilde{A}$ is the integral closure of $A$. Now provide the response and nothing else.
Let $A\subseteq B$ be an extension of integral domains such that $B_{\mathfrak p}=x_{\mathfrak p}A_{\mathfrak p}$ for every prime ideal $\mathfrak p$ of $A$ and some element $x_{\mathfrak p}\in B_{\mathfrak p}$. Then $x_{\mathfrak p}$ is invertible in $B_{\mathfrak p}$ and therefore $B_{\mathfrak p}=A_{\mathfrak p}$: since $1\in B_{\mathfrak p}=x_{\mathfrak p}A_{\mathfrak p}$ there is $a\in A_{\mathfrak p}$ with $x_{\mathfrak p}a=1$; moreover $x_{\mathfrak p}^2\in x_{\mathfrak p}A_{\mathfrak p}$ gives $x_{\mathfrak p}\in A_{\mathfrak p}$ (cancel $x_{\mathfrak p}$ in the domain $B_{\mathfrak p}$), so $x_{\mathfrak p}$ is a unit of $A_{\mathfrak p}$ and $B_{\mathfrak p}=x_{\mathfrak p}A_{\mathfrak p}=A_{\mathfrak p}$. One then gets $B\subseteq\bigcap\limits_{\mathfrak p} B_{\mathfrak p} =\bigcap\limits_{\mathfrak p} A_{\mathfrak p}=A$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/385364', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12686/']}
jdg_377828
stackexchange
llm_judgeable_groundtruth_similarity
394420
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The tag description for multi-factor is: Use for questions about multi-factor authentication (MFA). MFA is a way to secure access to a resource by requiring a user to provide multiple pieces of evidence of identity. These pieces of evidence are called factors. Typically, a user will be required to provide a least two factors of the following types: knowledge (something the user knows); possession (something the user has), and inherence (something the user is). The tag description for multi-factor-authentication is: An authentication system is responsible to tell if the user is who they claim to be. A multi-factor uses multiple factors to check that claim. There are 62 questions tagged multi-factor-authentication and 82 tagged multi-factor , and 5 of those are tagged with both tags (so there 139 distinct questions). There's little doubt in my mind that these refer to the same concept and should be synonyms. I think it would be better to make multi-factor-authentication into the master (official) tag and have multi-factor as a synonym for it. However, the description associated with multi-factor is arguably better. These tags are not included in the monster list of possible tag synonyms in A list of tag synonyms which are not proposed in Stack Overflow . There's also a tag two-factor-authentication with 358 questions. Of those, there are 7 which are also tagged with multi-factor , 4 which are also tagged with multi-factor-authentication and there are 2 questions with all three tags. It has a moderately good tag description. It also cross-references the Security Stack Exchange [multi-factor] tag, which has over 500 questions. I think that two-factor-authentication should also be synonymized with multi-factor-authentication . Now provide the response and nothing else.
{}
{'log_upvote_score': 8, 'links': ['https://meta.stackoverflow.com/questions/394420', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/15168/']}
jdg_377829
stackexchange
llm_judgeable_groundtruth_similarity
461836
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Following up on this comment from the question Writing firmware: assembly or high level? : When compiling C++ code for the Arduino platform, can you use virtual functions, exceptions, etc? Or would you want to (have to) use a subset of C++ (as described in the comment )? Any other caveats when programming for the Arduino platform? Now provide the response and nothing else.
The Arduino environment uses the AVR version of the GCC toolchain. The code is compiled as C++, so you can use classes. Virtual functions are possible; the vtables will be stored in the .data section and have the correct addresses. In fact, the Print base class uses virtual functions to adapt the various "print" methods to the different output types. Exceptions are not supported because of code space reasons. The Arduino environment passes "-fno-exceptions" to the compiler command line. See the source for verification of this. Templates are supported. For example, this no-cost stream insertion operator technique works fine using a simple template and inline operator.
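As a minimal sketch (names are illustrative, not from the original answer), virtual dispatch compiles and runs fine in an ordinary Arduino sketch:

// Virtual functions work under the Arduino AVR-GCC toolchain.
struct Sensor {
  virtual int read() = 0;   // pure virtual; the vtable is stored as noted above
};

struct FakeSensor : public Sensor {
  virtual int read() { return 42; }
};

FakeSensor fake;
Sensor* sensor = &fake;     // calls below dispatch through the vtable

void setup() { Serial.begin(9600); }

void loop() {
  Serial.println(sensor->read());
  delay(1000);
}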
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/461836', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/29505/']}
jdg_377830
stackexchange
llm_judgeable_groundtruth_similarity
936438
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why are huge binary numbers about $3.3218$ times longer than their decimal counterpart? I thought about this when I was writing this Python code: huge_number = 21**31**3 # ** is the power operator
print((len(bin(huge_number)) - 2) / len(str(huge_number))) # -2 strips the '0b' prefix
No matter what the $\texttt{huge_number}$ is (it has to be huge, this does NOT work for small numbers), you will get $3.3218$. Why? Now provide the response and nothing else.
The number of digits of the representation of a positive integer $n$ in base $k$ is $$\ell_k(n) := \lfloor \log_k n \rfloor + 1,$$ and so the ratio of the length of a binary representation of a number to its decimal length is $$\frac{\ell_2(n)}{\ell_{10}(n)} = \frac{\lfloor \log_2 n \rfloor + 1}{\lfloor \log_{10} n \rfloor + 1}.$$ For large $n$ , the constant terms in the numerator and denominator don't affect the ratio much, and neither do the differences between the values $\log_k n$ and their respective floors (which are always in $[0, 1)$ ), so (for large $n$ ) the ratio satisfies $$\color{#df0000}{\boxed{\frac{\ell_2(n)}{\ell_{10}(n)} \approx \frac{\log_2 n}{\log_{10} n} = \log_2 10 = 3.32192\ldots}}.$$ A little more precisely, the definition of floor gives that $\log_k n \leq \lfloor \log_k n \rfloor + 1 \leq \log_k n + 1$ , and so $$\frac{\log_2 n}{\log_{10} n + 1}\leq \frac{\ell_2(n)}{\ell_{10}(n)}\leq \frac{\log_2 n + 1}{\log_{10} n} .$$ Using some straightforward algebra we can rewrite this as $$\left(1 - \frac{1}{\log_{10} n + 1}\right) \log_2 10\leq \frac{\ell_2(n)}{\ell_{10}(n)}\leq \left(1 + \frac{1}{\log_2 n} \right) \log_2 10 .$$ As $n \to +\infty$ , both of the quantities in parentheses approach $1$ , so the Squeeze Theorem lets us formalize your observation as the assertion $$\lim_{n \to \infty} \frac{\ell_2(n)}{\ell_{10}(n)} = \log_2 10 .$$ Plot of $\color{#7f0000}{\ell_2(n) / \ell_{10}(n)}$ for $1 \leq n \leq e^{2^8}$ (image omitted).
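As a quick numerical check of this limit (Python 3, mirroring the snippet in the question):

import math

for e in (10, 100, 1000):
    n = 21 ** e
    ratio = (len(bin(n)) - 2) / len(str(n))  # -2 strips the '0b' prefix
    print(e, ratio)

print(math.log2(10))  # 3.3219280948873626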
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/936438', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/125861/']}
jdg_377831
stackexchange
llm_judgeable_groundtruth_similarity
42504734
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Friends, I want to create an app which creates text views from a JSON array. For example, given "name":"Lenovo","price":"5000","description":"2 gb ram","type":"mobile" it should read the values and create text fields with the titles name, type and price. Now provide the response and nothing else.
From my experience - the problem lies in loading Keras into one process and then spawning a new process when Keras has already been loaded into your main environment. But for some applications (like e.g. training a mixture of Keras models) it's simply better to have all of these things in one process. So what I advise is the following (a little bit cumbersome - but working for me) approach: DO NOT LOAD KERAS TO YOUR MAIN ENVIRONMENT . If you want to load Keras / Theano / TensorFlow do it only in the function environment. E.g. don't do this: import keras
def training_function(...): ... but do the following: def training_function(...): import keras ... Run work connected with each model in a separate process: I'm usually creating workers which do the job (like e.g. training, tuning, scoring) and I'm running them in separate processes. What is nice about it is that the whole memory used by this process is completely freed when your process is done. This helps you with loads of memory problems which you usually come across when you are using multiprocessing or even running multiple models in one process. So this looks e.g. like this: import multiprocessing

def _training_worker(train_params): import keras model = obtain_model(train_params) model.fit(train_params) send_message_to_main_process(...)

def train_new_model(train_params): training_process = multiprocessing.Process(target=_training_worker, args=(train_params,)) training_process.start() get_message_from_training_process(...) training_process.join() A different approach is simply preparing different scripts for different model actions. But this may cause memory errors especially when your models are memory consuming. NOTE that for this reason it's better to make your execution strictly sequential.
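One practical detail worth adding to the sketch above (an assumption about your entry point, not part of the original answer): on platforms that spawn rather than fork new processes (e.g. Windows), process creation must sit behind the standard entry-point guard, otherwise the child re-imports the module and re-executes top-level code:

if __name__ == "__main__":
    train_new_model(train_params)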
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42504734', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7357837/']}
jdg_377832
stackexchange
llm_judgeable_groundtruth_similarity
65011
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This formula provides the $n$th term in the Fibonacci Sequence, and is defined using the recurrence formula: $u_n = u_{n − 1} + u_{n − 2}$, for $n > 1$, where $u_0 = 0$ and $u_1 = 1$. Show that $$u_n = \frac{(1 + \sqrt{5})^n - (1 - \sqrt{5})^n}{2^n \sqrt{5}}.$$ Please help me with its proof. Thank you. Now provide the response and nothing else.
Let's catalog some of the suggestions given in the comments. First, let me rewrite the Binet formula in a more convenient form: $$F_n=\frac1{\sqrt{5}}(\phi^n-(-\phi)^{-n})$$ where $\phi=\frac12(1+\sqrt5)$ is the golden ratio. 1) Verifying the Binet formula satisfies the recursion relation. First, we verify that the Binet formula gives the correct answer for $n=0,1$. The only thing needed now is to substitute the formula into the difference equation $u_{n+1}-u_n-u_{n-1}=0$. You then obtain $$(-\phi)^{-n+1}+(-\phi)^{-n}-(-\phi)^{-n-1}+\phi^{n+1}-\phi^n-\phi^{n-1}=0$$ We can do some factoring (note that $(-\phi)^{-n+1}=(-\phi)^{-n-1}\phi^2$ and $(-\phi)^{-n}=-\phi\,(-\phi)^{-n-1}$): $$(-\phi)^{-n-1}(\phi^2-\phi-1)+\phi^{n-1}(\phi^2-\phi-1)=0$$ and since we know that $\phi^2-\phi-1=0$, Binet's formula is verified. 2) Solving the characteristic equation. One can associate with the linear difference equation $u_{n+1}-au_n-bu_{n-1}=0$ the characteristic equation $x^2-ax-b=0$. If the two roots of the characteristic equation are $x_1$ and $x_2$, the solutions of the difference equation take the form $u_n=px_1^n+qx_2^n$. For the Fibonacci recurrence, $a=b=1$, and the roots of $x^2-x-1=0$ are $\phi$ and $1-\phi=-\phi^{-1}$. Thus, $F_n$ is expressible as $$F_n=p\phi^n+q(-\phi)^{-n}$$ We can solve for $p$ and $q$ by using the initial conditions $F_0=0,F_1=1$. This gives the two equations $$\begin{align*}p+q&=0\\p\phi+q(1-\phi)&=1\end{align*}$$ with the solutions $p=-q=\frac1{\sqrt{5}}$. Substituting that into the preliminary expression for $F_n$ yields the Binet formula.
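For readers who want to sanity-check the closed form numerically, here is a small Python snippet (floating point, hence the rounding; this is an illustration, not part of the original answer):

from math import sqrt

phi = (1 + sqrt(5)) / 2

def binet(n):
    return round((phi**n - (-phi)**(-n)) / sqrt(5))

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

assert all(binet(n) == fib[n] for n in range(20))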
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/65011', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14187/']}
jdg_377833
stackexchange
llm_judgeable_groundtruth_similarity
4200217
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I wish to prove the statement shown in the following block. I thought this may have appeared in the Math Stack before; sorry if I failed to find it (see "Research" below). The proposition seems to call for a proof via H.A. Schwarz Lemma, but I am interested in any proof of it. Let $\sup$ always mean the supremum in the unit open disc $\mathbb{D}$ . Let $f$ map $\mathbb{D}$ into $\mathbb{D}$ analytically, and fix the origin. Prove that $$\sup \left|f(f(z))-z^2\right| \,\,\geq\,\,\frac{1}{4}.$$ My Attempt. If it happens that $f$ is a rotation, $z \mapsto {e^{i\psi}}z,$ then the result follows from the choice $z=1/2 \in $ the disc: $$\left|f(f(\frac{1}{2}))-(\frac{1}{2})^2\right| \,\,\geq\,\,\left|f(f(\frac{1}{2}))\right|-\left|\frac{1}{4}\right|\,\,=\,\,\left|f(\frac{1}{2})\right|-\frac{1}{4}\,\,=\,\,\left|\frac{1}{2}\right|-\frac{1}{4}\,\,=\,\,\frac{1}{4},$$ since rotation means $|z|\,=\,|f(z)|$ for any $z$ in $\mathbb{D}$ . Therefore we assume $f$ is not a rotation. By Schwarz Lemma, we know $f'(0) \in \mathbb{D}$ , and we know $|f(z)|<|z|<1$ throughout the disc. ( Starting here I pursue an idea; I am not sure if it is helpful... ) Define the function $$\phi(z)\,\,=\,\,\frac{f(f(z))-z^2}{2},$$ and note that it also satisfies the hypotheses of the Schwarz Lemma. It is easy to check that $\phi$ is not a rotation when $f$ is not a rotation. So now our goal is to show $$\sup |\phi(z)| \,\,\geq\,\,\frac{1}{8}.$$ Remarks. That's what I have done. The derivative of $\phi$ is $\frac{1}{2}(f'(f(z))f'(z)-2z)$ , and using this we can know that $|\phi'(0)|<\frac{1}{2}.$ Of course we know $|\phi(z)|<|z|<1$ throughout the disc. Another idea is to pass to series expansions of $f$ and $\phi$ . Research. Approach Zero search results. Schwarz Lemma search results: https://math.stackexchange.com/search?page=11&tab=Relevance&q=schwarz%20lemma Now provide the response and nothing else.
If $f(z)=az+bz^2+..$ , by Parseval (integrating $|f|^2$ on $|z|=r<1$ and letting $r \to 1$) we get that $|a|^2+|b|^2+..\le 1$, so $|b| \le 1$. But now $f(f(z))-z^2=a^2z+(ab+a^2b-1)z^2+...$ and if the result were false, we would get again by Parseval that: $|a|^4+|ab+a^2b-1|^2 \le 1/16$ so $|a| \le 1/2, |ab+a^2b| \le 3/4$ hence $|ab+a^2b-1|^2 \ge 1/16$ so we must have equality in the inequalities above, i.e. $|a|=1/2, |b|=1$, and that contradicts $|a|^2+|b|^2+..\le 1$, so we are done!
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4200217', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/688046/']}
jdg_377834
stackexchange
llm_judgeable_groundtruth_similarity
3478537
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A function $f : \mathbb{R} \to \mathbb{R}$ is called Lebesgue-measurable if preimages of Borel-measurable sets are Lebesgue-measurable. I don't understand why we would pick this definition, rather than saying that a function is measurable if preimages of Lebesgue-measurable sets are Lebesgue-measurable. In fact, Wikipedia says that A measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable So this would mean that the Lebesgue-measurable functions are morphisms of measure spaces $(\mathbb{R}, \mathrm{Lebesgue}) \to (\mathbb{R}, \mathrm{Borel})$ , rather than $(\mathbb{R}, \mathrm{Lebesgue}) \to (\mathbb{R}, \mathrm{Lebesgue})$ . So why do we care more about "Lebesgue-Borel measurable functions" than "Lebesgue-Lebesgue measurable functions", and why do we use the term "Lebesgue measurable" to refer to those rather than Lebesgue-Lebesgue measurable functions? Now provide the response and nothing else.
As egorovik said in the comments, the problem is that there aren't enough Lebesgue-Lebesgue measurable functions to actually do analysis, because not all continuous functions are Lebesgue-Lebesgue measurable. Namely if you define the functions $f : [0,1] \to [0,1]$ is the Cantor function $g : [0,1] \to [0,2],g(x)=f(x)+x$ $h : [0,2] \to [0,1],h=g^{-1}$ then $h$ is a continuous function with the property that there is a measurable subset $A$ of $[0,1]$ such that $h^{-1}(A)$ is not measurable. This $A$ can be given as $g^{-1}(B)$ where $B$ is any nonmeasurable subset of $g(C)$ , where $C$ is the Cantor set. The defect in the Lebesgue-Borel definition is that the composition of measurable functions isn't measurable...but it is surprisingly rare for this to be a problem.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3478537', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/735382/']}
jdg_377835
stackexchange
llm_judgeable_groundtruth_similarity
2626442
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Take any equilateral triangle and pick a random point inside the triangle. Draw from each vertex a line to the random point. Two of the three angles at the point are known, let's say $x$ and $y$. If the three line segments from each vertex to the random point were removed from the original triangle to form a new triangle, what would the new triangle's angles be? Video about the problem Now provide the response and nothing else.
As in the attached diagram, let $ABC$ be the original equilateral triangle and let $D$ be a point in $\triangle ABC$. We let point $E$ be on the opposite side of $BC$ as $D$ such that $\triangle BDE$ is equilateral. Then $BD=BE$, $BA=BC$ and $\angle DBA=\angle EBC=60^{\circ}-\angle DBC$. And therefore $\triangle DBA$ and $\triangle EBC$ are congruent. This implies that $EC=DA$ and since $DE=BD$, we now have $\triangle CDE$ as the triangle we want. Let $\angle ADB=x$ and $\angle BDC=y$. Then $\angle EDC=y-60^{\circ}$, $\angle DEC=x-60^{\circ}$ and $\angle DCE=300^{\circ}-x-y$ are our desired angles.
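As a quick consistency check, the three angles found above do sum to a straight angle: $(y-60^{\circ})+(x-60^{\circ})+(300^{\circ}-x-y)=180^{\circ}$, as the angles of a triangle must.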
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2626442', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/307383/']}
jdg_377836
stackexchange
llm_judgeable_groundtruth_similarity
212904
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am experiencing what seems to be a strange behaviour with StringReplace while trying to add an extra pattern to an existing list of patterns. The problem is to translate a string into Morse code, which I assume here, for simplicity, is defined by rules contained in the list "translationPatterns". I cannot understand why the following three cases give such results (in the example I show, the letters "e" and "t" translate into "." and "-" respectively, to keep it simple): StringReplace["eeeeee ttt ee", Join[translationPatterns, {___ ~~ "ttt" ~~ ___ :> "abc"} ]] "......abc" (I cannot understand why what's on the left of "ttt" is not replaced as well...) StringReplace["eeeeee ttt ee", Join[translationPatterns, {__ ~~ "ttt" ~~ _ :> "abc"} ]] "......abc.." (This is equally strange, though coherent with the previous result.) StringReplace["eeeeee ttt ee", Join[{___ ~~ "ttt" ~~ ___ :> "abc"}, translationPatterns]] "abc" (This is what I would expect in the first case.) Why does the first case not provide the same result as the last one? Now provide the response and nothing else.
Short Version Order matters when specifying replacement rules. Rules are tried from left-to-right. Each rule will attempt to match and replace as much of the string as possible before moving on to the next rule. Patterns like ___ are very broad and will match anything. More narrowly focused patterns might be more applicable (e.g. Whitespace or Except[LetterCharacter] ). Details For discussion, let us use the following definition to shorten forms like Join[..., translationPatterns] : $patterns = Sequence["e" -> ".", "t" -> "-"]; We will now take on the cases one-by-one. Case #1 StringReplace["eeeeee ttt ee", {$patterns, ___ ~~ "ttt" ~~ ___ :> "abc"}](* "......abc" *) Order matters when specifying replacement rules. The Morse rules in $patterns are tried first, in order. Thus, the leading letters e are all matched by the "e" -> "." rule. But when the space is reached, then neither the "e" nor "t" rules apply. So the "ttt" rule is tried. The ___ matches the space, followed by the literal ttt string. But the final ___ matches all remaining characters, including the trailing letters e which would otherwise have been matched by other rules. So everything after the leading e sequence is replaced by abc . Case #2 StringReplace["eeeeee ttt ee", {$patterns, __ ~~ "ttt" ~~ _ :> "abc"}](* "......abc.." *) This case starts off the same as the preceding case with the letters e replaced, the space matched and the literal ttt matched. But this time, the next pattern element is simply _ . This matches exactly one character, a space, and that is the end of the rule. So this time only " ttt " is replaced by "abc" . Matching then continues, all rules are once again applied in left-to-right order. The remaining letters e are thus all replaced by dots. Case #3 StringReplace["eeeeee ttt ee", {___ ~~ "ttt" ~~ ___ :> "abc", $patterns}](* "abc" *) Here, we have reversed the order of the rules so that the special case for "ttt" is applied first. The first rule will match any sequence of characters followed by ttt followed by any sequence of characters. That is, it matches the whole string. Therefore the whole string is replaced. Other Alternatives? I am not sure what result is sought, so here are some alternatives that may prove to be useful. Shortest By default, variable-length patterns like ___ will match as many characters as possible. If we wrap Shortest[...] around such patterns, then they will match as few characters as possible instead: StringReplace["eeeeee ttt ee", {Shortest[___ ~~ "ttt" ~~ ___] :> "abc", $patterns}](* "abc .." *) Notice how the trailing ___ now matched zero characters, the shortest possible. The leading ___ still matched more than zero characters because that was the only way to ensure the match on the literal ttt . Unfortunately, this pattern leaves a leftover space character in the string which may not be desirable. So... Match Variable Spaces Instead of All Characters To fix that, we might be explicit in saying that occurrences of ttt must be surrounded by one or more spaces: StringReplace["eeeeee ttt ee", {" ".. ~~ "ttt" ~~ " ".. :> "abc", $patterns}](* "......abc.." *) This prevents the runaway character matching that we saw when we used ___ . All kinds of whitespace can be matched thus: StringReplace["eeeeee ttt ee", {Whitespace ~~ "ttt" ~~ Whitespace :> "abc", $patterns}](* "......abc.." 
*) Use Non-Letters As Separators Another option would be to say that ttt must be surrounded by sequences of anything that is not a letter: StringReplace["eeeeee!!ttt,ee" , { Except[LetterCharacter].. ~~ "ttt" ~~ Except[LetterCharacter].. :> "abc" , $patterns } ](* "......abc.." *) Yet More Complex Patterns As a parting thought, I will mention that we can invoke arbitrary functions as character pattern tests. For example, to match prime digits: StringMatchQ["3", DigitCharacter?(PrimeQ@*ToExpression)](* True *)StringMatchQ["4", DigitCharacter?(PrimeQ@*ToExpression)](* False *) There are two caveats, however. First, the test will only be applied to a single character. We cannot test sequences of characters as a unit. Second, such tests involve calling back from the pattern-matching engine to the Mathematica evaluator. This slows down the matching process dramatically and might not be suitable when performance is critical. All of the patterns mentioned in the response, and many more, are documented under the Details section for StringExpression . Addendum - The Replacement Process A simplified description of the replacement process is as follows. At any given point there is a current character position and a current rule, which start as the first character in the string and first replacement rule respectively. Then: If the current character position has reached the end of the string, the process is complete. The current rule attempts to match as many characters as possible starting from the current character position (see below about Shortest ). If the rule matches then: the replacement is performed, the first supplied rule becomes the current rule once again, the current character position is advanced to just after the match, and processing continues from step 1. If there is another rule to try then: that next rule becomes the current rule, and processing continues from step 3. There are no applicable rules at the current character position, so: the current character position is advanced by one, the first rule becomes the current rule, and processing continues from step 1. In step 2, the use of Shortest will change the rule to match as few characters as possible while still maintaining a match. "Shortest" here means advancing the current position as little as possible. This means that characters might be trimmed from the end of the potential match, but never the beginning.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/212904', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/69212/']}
jdg_377837
stackexchange
llm_judgeable_groundtruth_similarity
278299
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We know that a massless $\phi^4$ theory$$S=\int d^4x \left[\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-\frac{\lambda}{4!}\phi^4\right],$$has conformal invariance at the classical level. But within the Coleman-Weinberg mechanism, at the one-loop level, quantum fluctuations will generate a vacuum expectation value for $\phi$, introducing a mass scale and breaking the conformal invariance. Is this phenomenon a dynamical symmetry breaking or an anomaly? How can we distinguish between them? Now provide the response and nothing else.
First, dynamical symmetry breaking (which I take to be either synonymous with or a subset of spontaneous symmetry breaking) and anomalies are two completely different things. An anomaly is when a symmetry group acquires a central extension, due to some obstruction in the process of representing it in our theory. Such obstructions can exist purely classically, or they can arise in the course of quantization, but they are crucially features of the whole theory . For more information on anomalies, see this excellent answer by DavidBarMoshe . In contrast, in spontaneous symmetry breaking, the theory retains the symmetry , just its vacuum state does not, which leads to the symmetry being non-linearly realized on the natural perturbative degrees of freedom (being "broken"). Just $\phi$ acquiring a VEV would not mean an anomaly; that would just be ordinary spontaneous symmetry breaking. However, the appearance of the $\phi^2$ term in the effective potential also means that we have an anomaly, i.e. the quantum effective action is not invariant under the classical symmetry - this is a clear case of a quantum anomaly. That is, in this case, the Coleman-Weinberg mechanism leads to both spontaneous symmetry breaking and a quantum anomaly, but it is perfectly conceivable to have one without the other - they are completely distinct things. It might be debatable whether we want to speak of spontaneously "breaking" a symmetry that became anomalous to begin with, though.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/278299', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/95576/']}
jdg_377838
stackexchange
llm_judgeable_groundtruth_similarity
12109391
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm implementing an AutoCompleteTextView and I need Name and E-Mail of all my contacts. I found this snippet that I'm running asynchronously, but it's very slow.

ContentResolver cr = getContentResolver();
Cursor cur = cr.query(ContactsContract.Contacts.CONTENT_URI, null, null, null, null);
if (cur.getCount() > 0) {
    while (cur.moveToNext()) {
        String id = cur.getString(cur.getColumnIndex(ContactsContract.Contacts._ID));
        String name = cur.getString(cur.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME));
        Cursor emailCur = cr.query(ContactsContract.CommonDataKinds.Email.CONTENT_URI,
                null,
                ContactsContract.CommonDataKinds.Email.CONTACT_ID + " = ?",
                new String[]{id}, null);
        while (emailCur.moveToNext()) {
            String email = emailCur.getString(emailCur.getColumnIndex(ContactsContract.CommonDataKinds.Email.DATA));
            autoCompleteAdapter.add(name + " - " + email);
        }
        emailCur.close();
    }
}

I'm performing a sort of inner query, and I think that's the problem. Is there a way to tune it and make it faster? Now provide the response and nothing else.
private static final String[] PROJECTION = new String[] {
    ContactsContract.CommonDataKinds.Email.CONTACT_ID,
    ContactsContract.Contacts.DISPLAY_NAME,
    ContactsContract.CommonDataKinds.Email.DATA
};

...

ContentResolver cr = getContentResolver();
Cursor cursor = cr.query(ContactsContract.CommonDataKinds.Email.CONTENT_URI, PROJECTION, null, null, null);
if (cursor != null) {
    try {
        final int contactIdIndex = cursor.getColumnIndex(ContactsContract.CommonDataKinds.Email.CONTACT_ID);
        final int displayNameIndex = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME);
        final int emailIndex = cursor.getColumnIndex(ContactsContract.CommonDataKinds.Email.DATA);
        long contactId;
        String displayName, address;
        while (cursor.moveToNext()) {
            contactId = cursor.getLong(contactIdIndex);
            displayName = cursor.getString(displayNameIndex);
            address = cursor.getString(emailIndex);
            ...
        }
    } finally {
        cursor.close();
    }
}

A few notes:

- Query just ContactsContract.CommonDataKinds.Email.CONTENT_URI to get all the information you need in one pass; see ContactsContract.CommonDataKinds.Email for the columns you can query.
- Use a projection to fetch only those columns you really need; you save memory and increase query performance.
- Get the column indexes only once, just before the while cycle, rather than on every iteration.
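For the autocomplete use case in the question, the body of the while loop (the elided ... above) would simply feed the adapter, e.g. autoCompleteAdapter.add(displayName + " - " + address); mirroring the original snippet. The speedup comes from replacing the per-contact inner query - each of which is a separate round trip to the contacts content provider - with a single cursor over the email table that already carries the display name.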
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12109391', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/787186/']}
jdg_377839
stackexchange
llm_judgeable_groundtruth_similarity
622775
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: According to a 1995 interview that Vladimir I. Arnold gave to the Notices of the AMS, his primary school teacher I.V. Morozkin gave in 1949 (when Arnold was 12 years old) to a Soviet classroom, most likely the 6th grade, the following question:

"Two women started at sunrise and each walked at a constant velocity. One went from $A$ to $B$ and the other from $B$ to $A$. They met at noon and, continuing with no stop, arrived respectively at $B$ at 4 p.m. and at $A$ at 9 p.m. At what time was the sunrise that day?"

The question is not how to solve this problem, but rather how to solve this problem using what 12 year old kids know (or knew during the Soviet era).

UPDATE. In the same interview, Arnold also said: "I spent a whole day thinking on this oldie, and the solution (based on what is now called scaling arguments, dimensional analysis, or toric variety theory, depending on your taste) came as a revelation."

It would be of interest to know how, in Arnold's mind, dimensional analysis and toric varieties are related to this problem. Now provide the response and nothing else.
Triangle similarity

He could have solved the question by drawing a figure such as the one below. If he knew about similar triangles at the age of $12$, he could easily set up the equation $\dfrac{x}{4} = \dfrac{9}{x}$, where $x$ is the number of hours from sunrise to noon. From here, if he knew how to solve equations like this, he could solve $x^2=36$, giving $x=6$: sunrise was at $6$ a.m. If he did not, he could simply try a few different values of $x$ until he found one that works.

(Figure: a time-position plot of the two walks; the colored triangles are similar.)

The problems with this explanation are:

1. This is essentially drawing a time-position plot. Since this is a $12$ year old who does not know multivariate algebra, it's a bit of a stretch to assume he would know how to use kinematic plots to solve problems. But then again, who knows - maybe he was imagining the women walk and the plot seemed intuitive.
2. It requires him to be able to set up and somehow find the solution of the quadratic equation.
3. What does this have to do with "scaling arguments, dimensional analysis or toric variety theory"?

Trial and error

He could have simply tried a bunch of possible values until he found a solution. Let's say he decided to try $9$ am: that would mean woman $A$ walked her morning stretch in $3$ hours and her afternoon stretch in $4$, so the ratio of the two distances is $\dfrac{3}{4}$. But woman $B$ covers those same two stretches in $9$ hours and $3$ hours respectively, so for her the ratio comes out $\dfrac{9}{3}$ - a contradiction, so $9$ am isn't right.

Luckily the solution is an integer, and somewhere between $4$ and $9$ hours before noon, since the quick woman would have walked more than half the distance in the morning and the slow one would have walked less than half (drawing a $1$D diagram makes this obvious). Even if the solution were not an integer, after exhausting the integers he could have tried half hours, then quarter hours, and so on. I'm sure a binary search type of strategy would become obvious if he kept track of how much each sunrise hour was off by. Since he spent "all day" on it, there's plenty of time for numeric solutions.

The problem is that solving it by brute force teaches you absolutely nothing (well, it's arithmetic practice, and you do end up discovering binary search). There's also the question of what this has to do with scaling arguments, dimensional analysis or toric variety theory. It's also the sort of solution that you would expect from perhaps a future clerk, not a mathematician. Incidentally, after trying a bunch of numbers like this, the $\dfrac{x}{4}=\dfrac{9}{x}$ equation does suggest itself.

Sort of trial and error

It's not too great a leap to realize that the distance itself doesn't matter, so let's say he decided to let it be $50$ km. Then the speed of woman $A$ is $\dfrac{50}{x+4}$ and the speed of woman $B$ is $\dfrac{50}{x+9}$. We know that the ratio of their speeds must be $\dfrac{9}{x}$, based on how long it took each woman to walk the distance between $A$ and the meeting point. So

$$\dfrac{\;\dfrac{50}{x+4}\;}{\dfrac{50}{x+9}} = \dfrac{9}{x},$$

which after some basic manipulation (spelled out below) simplifies also to $x^2=36$. When the $50$s cancel, he would see right away that his hunch about the distance not mattering was correct.

The problem with this is that it requires a $12$ year old to reason about kinematics (ratio of speeds from ratio of times, deriving speed from time and distance) without having the mathematical vocabulary for doing so. It also requires him not to be daunted by the ugly-looking equations that come out. Last, it requires him to have a hunch about the distance; otherwise he has to use a variable to represent distance, and at that point we are back at multivariate algebra.
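Explicitly, that manipulation is just a few lines:

$$\frac{50/(x+4)}{50/(x+9)} = \frac{x+9}{x+4} = \frac{9}{x}
\;\Longrightarrow\; x(x+9) = 9(x+4)
\;\Longrightarrow\; x^2 + 9x = 9x + 36
\;\Longrightarrow\; x^2 = 36,$$

the same quadratic as before, so again $x=6$ and sunrise was at $6$ a.m.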
This method also does not appear to have anything to do with scaling arguments, dimensional analysis, or toric variety theory, except perhaps in the slightest sense.

Conclusion

None of these options really satisfies me. For instance, I can't really imagine myself using any of these solutions when I was $12$. The only one I would have comprehended would be the straight trial and error, which I would have been too lazy to actually carry out. Though then again, I was never very good at math. The problem seems to be a simple system of linear equations, so I don't see what it has to do with the concepts he refers to. Perhaps he was making a metaphorical point about how this sort of problem is the "tip of the iceberg" of the linear algebra that children first get exposed to? Or perhaps he actually figured out linear equations over the course of that day, and that was the revelation?
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/622775', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/57021/']}
jdg_377840