source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20372421
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to set a ui:param depending on a bean value and I thought using c:if was a good idea. So I put in my page the following code: <ui:composition xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:c="http://java.sun.com/jsp/jstl/core" xmlns:wai="http://www.id.ethz.ch/wai/jsf" template="/view/listView.xhtml"> <c:if test="#{subscriptionListController.model.listViewName eq 'mySubscriptions'}"> <ui:param name="title" value="#{msg.subscriptionTitleMySubscriptions}"/> </c:if> <c:if test="#{subscriptionListController.model.listViewName eq 'paidSubscriptions'}"> <ui:param name="title" value="#{msg.subscriptionTitlePaidSubscriptions}"/> </c:if> <c:if test="#{subscriptionListController.model.listViewName eq 'allSubscriptions'}"> <ui:param name="title" value="#{msg.subscriptionTitleAllSubscriptions}"/> </c:if> .... but the parameter is not set... If I let print out the value of #{subscriptionListController.model.listViewName eq 'mySubscriptions'} I get true in the corresponding case and false in the other two cases. At the beginning I had only 2 possibilities and solved it with the ternary operator: <ui:param name="title" value="#{subscriptionListController.model.listViewName eq 'mySubscriptions' ? msg.subscriptionTitleMySubscriptions : msg.subscriptionTitlePaidSubscriptions}"/> and it worked. But now I have more possibilities... What am I doing wrong?
Now provide the response and nothing else.
|
As indicated by <ui:composition template>, this page represents a template client. Any <ui:param> outside <ui:define> applies to the master template (the file which you declared in the template attribute) and is ignored inside the template client itself. If you intend to prepare variables for use inside the template client, you should put the <ui:param> inside <ui:define>. But there's another thing: the original purpose of <ui:param> is to pass variables to the file referenced by <ui:composition template>, <ui:decorate template> or <ui:include src>, not to prepare/set variables inside the current Facelet context. For the sole functional requirement of preparing/setting variables in the current EL context, you'd better be using JSTL <c:set> for the job. You can use <ui:param> for this, but this isn't its original intent and it didn't work that way in older MyFaces versions. Thus, like so:

    <ui:define>
        <c:if test="#{subscriptionListController.model.listViewName eq 'mySubscriptions'}">
            <c:set var="title" value="#{msg.subscriptionTitleMySubscriptions}"/>
        </c:if>
        <c:if test="#{subscriptionListController.model.listViewName eq 'paidSubscriptions'}">
            <c:set var="title" value="#{msg.subscriptionTitlePaidSubscriptions}"/>
        </c:if>
        <c:if test="#{subscriptionListController.model.listViewName eq 'allSubscriptions'}">
            <c:set var="title" value="#{msg.subscriptionTitleAllSubscriptions}"/>
        </c:if>
        ...
    </ui:define>

Unrelated to the concrete problem, you can optimize this as follows, without the need for an unmaintainable <c:if> group which would only grow with every subscription type:

    <ui:define>
        <c:set var="subscriptionTitleKey" value="subscriptionTitle.#{subscriptionListController.model.listViewName}"/>
        <c:set var="title" value="#{msg[subscriptionTitleKey]}"/>
        ...
    </ui:define>

with these keys in the message bundle:

    subscriptionTitle.mySubscriptions = Title for my subscriptions
    subscriptionTitle.paidSubscriptions = Title for paid subscriptions
    subscriptionTitle.allSubscriptions = Title for all subscriptions
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20372421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/666040/']}
|
jdg_369841
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13282786
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I display all files greater than 10k bytes in my current directory and its subdirectories? I tried ls -size +10k but that didn't work.
Now provide the response and nothing else.
|
find . -size +10k -exec ls -lh {} \+

The first part of this is identical to @sputnicks answer, and successfully finds all files in the directory over 10k (don't confuse k with K); my addition, the second part, then executes ls -lh, i.e. ls that lists (-l) the files with human-readable sizes (-h). Drop the h if you prefer. Of course the {} is the file itself, and the \+ is simply an alternative to \; . In practice \; would repeat the command per file:

    ls -l found.file; ls -l found.file.2; ls -l found.file.3

whereas \+ runs it as one statement:

    ls -l found.file found.file.2 found.file.3

more on \; vs + with find

Additionally, you may want the listing ordered by size, which is relatively easy to accomplish. I would add the -s option to ls, so ls -ls, and then pipe it to sort -n to sort numerically, which becomes:

    find . -size +10k -exec ls -ls {} \+ | sort -n

or, for reverse order, add an -r:

    find . -size +10k -exec ls -ls {} \+ | sort -nr

Finally, your title says find the biggest file in a directory. You can do that by piping the command to tail:

    find . -size +10k -exec ls -ls {} \+ | sort -n | tail -1

which would find you the largest file in the directory and its subdirectories. Note you could also sort files by size by using -S, and avoid the need for sort, but to find the largest file you would then need to use head:

    find . -size +10k -exec ls -lS {} \+ | head -1

The benefit of doing it with -S and not sort is, one, you don't have to type sort -n, and two, you can also use -h, the human-readable size option, which is one of my favourites to use but is not available with older versions of ls; for example, we have an old CentOS 4 server at work that doesn't have -h.
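If GNU find is available, a variant that avoids parsing ls output entirely is to let find print the size itself. A small sketch, assuming GNU findutils (the -printf option is not in POSIX find):

    # list files over 10 KB with their sizes in bytes, largest last
    find . -size +10k -printf '%s %p\n' | sort -n

    # largest single file under the current directory
    find . -type f -printf '%s %p\n' | sort -n | tail -1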
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13282786', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1273169/']}
|
jdg_369842
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39376840
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to be able to mock the activated route parameters to be able to test my component. Here's my best attempt so far, but it doesn't work. { provide: ActivatedRoute, useValue: { params: [ { 'id': 1 } ] } }, The ActivatedRoute is used in the actual component like this: this.route.params.subscribe(params => { this.stateId = +params['id']; this.stateService.getState(this.stateId).then(state => { this.state = state; });}); The error I get with my current attempt is simply: TypeError: undefined is not a constructor (evaluating 'this.route.params.subscribe') Any help would be greatly appreciated.
Now provide the response and nothing else.
|
Your mock must reflect the shape of the object it replaces. You call .subscribe on params because it is an Observable, not a plain object, so your mock value should provide an Observable too:

    import { Observable } from 'rxjs/Rx';
    ...
    { provide: ActivatedRoute, useValue: { 'params': Observable.from([{ 'id': 1 }]) } }
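On newer Angular/RxJS versions (RxJS 6+), the same idea is usually written with of() and wired into TestBed. A minimal sketch; MyComponent, StateService and mockStateService are hypothetical names standing in for the question's own component and service:

    import { of } from 'rxjs';
    import { TestBed } from '@angular/core/testing';
    import { ActivatedRoute } from '@angular/router';

    TestBed.configureTestingModule({
      declarations: [MyComponent],  // hypothetical component under test
      providers: [
        // params must be an Observable because the component calls .subscribe on it
        { provide: ActivatedRoute, useValue: { params: of({ id: 1 }) } },
        { provide: StateService, useValue: mockStateService },  // hypothetical service mock
      ],
    });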
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39376840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5329320/']}
|
jdg_369843
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1713771
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I detect the page zoom level in all modern browsers? While this thread tells how to do it in IE7 and IE8, I can't find a good cross-browser solution. Firefox stores the page zoom level for future access. On the first page load, would I be able to get the zoom level? Somewhere I read it works when a zoom change occurs after the page is loaded. Is there a way to trap the 'zoom' event? I need this because some of my calculations are pixel-based and they may fluctuate when zoomed. Modified sample given by @tfl This page alerts different height values when zoomed. [jsFiddle] <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js" type="text/javascript"/></script> </head> <body> <div id="xy" style="border:1px solid #f00; width:100px;">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque sollicitudin tortor in lacus tincidunt volutpat. Integer dignissim imperdiet mollis. Suspendisse quis tortor velit, placerat tempor neque. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Praesent bibendum auctor lorem vitae tempor. Nullam condimentum aliquam elementum. Nullam egestas gravida elementum. Maecenas mattis molestie nisl sit amet vehicula. Donec semper tristique blandit. Vestibulum adipiscing placerat mollis.</div> <button onclick="alert($('#xy').height());">Show</button> </body></html>
Now provide the response and nothing else.
|
Now it's an even bigger mess than it was when this question was first asked. From reading all the responses and blog posts I could find, here's a summary. I also set up this page to test all these methods of measuring the zoom level. [↑ broken link. Archived copy → here ]. Edit (2011-12-12): I've added a project that can be cloned: https://github.com/tombigel/detect-zoom IE8 : screen.deviceXDPI / screen.logicalXDPI (or, for the zoom level relative to default zoom, screen.systemXDPI / screen.logicalXDPI ) IE7 : var body = document.body,r = body.getBoundingClientRect(); return (r.left-r.right)/body.offsetWidth; (thanks to this example or this answer ) FF3.5 ONLY : screen.width / media query screen width (see below) (takes advantage of the fact that screen.width uses device pixels but MQ width uses CSS pixels--thanks to Quirksmode widths ) FF3.6 : no known method FF4+ : media queries binary search (see below) WebKit : https://www.chromestatus.com/feature/5737866978131968 (thanks to Teo in the comments) WebKit : measure the preferred size of a div with -webkit-text-size-adjust:none . WebKit : (broken since r72591 ) document.width / jQuery(document).width() (thanks to Dirk van Oosterbosch above ). To get ratio in terms of device pixels (instead of relative to default zoom), multiply by window.devicePixelRatio . Old WebKit? (unverified): parseInt(getComputedStyle(document.documentElement,null).width) / document.documentElement.clientWidth (from this answer ) Opera : document.documentElement.offsetWidth / width of a position:fixed; width:100% div. from here ( Quirksmode's widths table says it's a bug; innerWidth should be CSS px). We use the position:fixed element to get the width of the viewport including the space where the scrollbars are ; document.documentElement.clientWidth excludes this width. This is broken since sometime in 2011; I know no way to get the zoom level in Opera anymore. Other : Flash solution from Sebastian Unreliable: listen to mouse events and measure change in screenX / change in clientX Here's a binary search for Firefox 4, since I don't know of any variable where it is exposed: <style id=binarysearch></style><div id=dummyElement>Dummy element to test media queries.</div><script>var mediaQueryMatches = function(property, r) { var style = document.getElementById('binarysearch'); var dummyElement = document.getElementById('dummyElement'); style.sheet.insertRule('@media (' + property + ':' + r + ') {#dummyElement ' + '{text-decoration: underline} }', 0); var matched = getComputedStyle(dummyElement, null).textDecoration == 'underline'; style.sheet.deleteRule(0); return matched;};var mediaQueryBinarySearch = function( property, unit, a, b, maxIter, epsilon) { var mid = (a + b)/2; if (maxIter == 0 || b - a < epsilon) return mid; if (mediaQueryMatches(property, mid + unit)) { return mediaQueryBinarySearch( property, unit, mid, b, maxIter-1, epsilon); } else { return mediaQueryBinarySearch( property, unit, a, mid, maxIter-1, epsilon); }};var mozDevicePixelRatio = mediaQueryBinarySearch( 'min--moz-device-pixel-ratio', '', a, b, maxIter, epsilon);var ff35DevicePixelRatio = screen.width / mediaQueryBinarySearch( 'min-device-width', 'px', 0, 6000, 25, .0001);</script>
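For what it's worth, on current browsers two standard properties cover many of these cases, though neither is a perfect "page zoom" value; a small sketch (behaviour differs per browser, so treat this as an approximation rather than a definitive detection method):

    // Ratio of device pixels to CSS pixels; on most desktop browsers this
    // changes with full-page zoom (it also reflects high-DPI screens).
    var zoomish = window.devicePixelRatio;

    // Pinch-zoom scale of the visual viewport (1 when not pinch-zoomed),
    // available where the Visual Viewport API is supported.
    var pinchScale = window.visualViewport ? window.visualViewport.scale : 1;

    console.log('devicePixelRatio:', zoomish, 'visualViewport.scale:', pinchScale);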
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1713771', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/189618/']}
|
jdg_369844
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
644902
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need some help here. My function is $f(x) = x^{3}$. I was asked to find the number of iterations needed to reach the precision $10^{-5}$ if $x_{0} = 0.9$. I was wondering whether there is a general formula for the number of iterations; the formula I know is $$x_{n+1} = x_{n} - \frac{f(x_{n})}{f'(x_{n})}.$$ I know it is easy to compute the iterations one by one with this formula, but what if the number of iterations needed to reach my precision is around 40? Do I need to calculate all of those iterations, or is there a general formula? So please, if someone knows, help. BTW, this is for Newton's method. Thank you
Now provide the response and nothing else.
|
When using $f(x)=x^3$ the recursion becomes $x_{n+1}=x_n-\frac{x_n^3}{3 x_n^2}=\frac{2}{3}x_n$ and hence can explicitly be solved as $$ x_n = \left(\frac{2}{3}\right)^n x_0.$$ Now you just have to plug this into the inequality $x_n\leq 10^{-5}$, take the logarithm and solve for $n$, which gives $$n\ln\frac{2}{3}\leq\ln\frac{10^{-5}}{0.9}\quad\Rightarrow\quad n\geq\frac{\ln (10^{-5}/0.9)}{\ln(2/3)}\approx 28.2$$Note that $\leq$ turns to $\geq$ as you divide through a negative logarithm. This means, the 29th iteration is the first one to be inside the $10^{-5}$-neighborhood.
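A quick numerical check of that bound (a small script, assuming the same starting point and tolerance):

    x, n = 0.9, 0
    while abs(x) > 1e-5:
        x *= 2.0 / 3.0   # one Newton step for f(x) = x^3 multiplies x by 2/3
        n += 1
    print(n)  # prints 29, matching the bound n >= 28.2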
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/644902', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/109431/']}
|
jdg_369845
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
984526
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have searched for other posts, as I felt this is a rather common problem, but all other Python exception questions I have found didn't reflect my problem. I will try to be as specific here as I can, so I will give a direct example. And pleeeeease do not post any workarounds for this specific problem. I am not specifically interested how you can send an email much nicer with xyz.I want to know how you generally deal with dependent, error prone statements. My question is, how to handle exceptions nicely, ones that depend on one another, meaning:Only if the first step was successful, try the next, and so on. One more criterion is: All exceptions have to be caught, this code has to be robust. For your consideration, an example: try: server = smtplib.SMTP(host) #can throw an exceptionexcept smtplib.socket.gaierror: #actually it can throw a lot more, this is just an example passelse: #only if no exception was thrown we may continue try: server.login(username, password) except SMTPAuthenticationError: pass # do some stuff here finally: #we can only run this when the first try...except was successful #else this throws an exception itself! server.quit() else: try: # this is already the 3rd nested try...except # for such a simple procedure! horrible server.sendmail(addr, [to], msg.as_string()) return True except Exception: return False finally: server.quit()return False This looks extremely unpythonic to me, and the error handling code is triple the real business code, but on the other hand how can I handle several statements that are dependent on one another, meaning statement1 is prerequisite for statement2 and so on? I am also interested in proper resource cleanup, even Python can manage that for itself. Thanks, Tom
Now provide the response and nothing else.
|
Instead of using the try/except's else block, you could simply return when it errors:

    def send_message(addr, to, msg):
        ## Connect to host
        try:
            server = smtplib.SMTP(host)  # can throw an exception
        except smtplib.socket.gaierror:
            return False

        ## Login
        try:
            server.login(username, password)
        except SMTPAuthenticationError:
            server.quit()
            return False

        ## Send message
        try:
            server.sendmail(addr, [to], msg.as_string())
            return True
        except Exception:  # try to avoid catching Exception unless you have to
            return False
        finally:
            server.quit()

That's perfectly readable and Pythonic. Another way of doing this is, rather than worrying about the specific implementation, to decide how you want your code to look, for example:

    sender = MyMailer("username", "password")  # the except SocketError/AuthError could go here
    try:
        sender.message("addr..", ["to.."], "message...")
    except SocketError:
        print "Couldn't connect to server"
    except AuthError:
        print "Invalid username and/or password!"
    else:
        print "Message sent!"

Then write the code for the message() method, catching any errors you expect, raising your own custom ones, and handling them where it's relevant. Your class may look something like this:

    class ConnectionError(Exception): pass
    class AuthError(Exception): pass
    class SendError(Exception): pass

    class MyMailer:
        def __init__(self, host, username, password):
            self.host = host
            self.username = username
            self.password = password

        def connect(self):
            try:
                self.server = smtplib.SMTP(self.host)
            except smtplib.socket.gaierror:
                raise ConnectionError("Error connecting to %s" % (self.host))

        def auth(self):
            try:
                self.server.login(self.username, self.password)
            except SMTPAuthenticationError:
                raise AuthError("Invalid username (%s) and/or password" % (self.username))

        def message(self, addr, to, msg):
            try:
                self.server.sendmail(addr, [to], msg.as_string())
            except smtplib.something.senderror, errormsg:
                raise SendError("Couldn't send message: %s" % (errormsg))
            except smtplib.socket.timeout:
                raise ConnectionError("Socket error while sending message")
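On Python 3, smtplib.SMTP can also be used as a context manager, which removes most of the explicit quit() bookkeeping; a rough sketch of the same flow (host, username and password are placeholders, just as in the answer above):

    import smtplib

    def send_message(addr, to, msg):
        try:
            # the with-block calls server.quit() for us, even on error
            with smtplib.SMTP(host) as server:
                server.login(username, password)
                server.sendmail(addr, [to], msg.as_string())
            return True
        except (OSError, smtplib.SMTPException):
            # socket.gaierror is a subclass of OSError; SMTPAuthenticationError
            # and the send-related errors are subclasses of SMTPException
            return False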
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/984526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115846/']}
|
jdg_369846
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8130594
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to write a query to insert a value into a timestamp with no timezone data type field. The value is coming from CSV file. The version I am working with is PostgreSQL 8.1.21 . The CSV file upload is done by the client and it has a date column. The date sometimes comes as '28-Sep-13' and sometimes as '28/09/2013' formats. I tried to use the following to cast the string into timestamp: str_date::timestamp . This works fine if str_date is something like '28-Sep-13' but it won't work if the incoming date has the format '28/09/2013' , when this error occurs: ERROR: date/time field value out of range: "28/09/2013" HINT: Perhaps you need a different "datestyle" setting Basically the client keeps changing the date format in the uploaded CSV file. Is there a way to convert the date strings into timestamp depending on its actual format?
Now provide the response and nothing else.
|
You need to set your datestyle to "ISO, DMY". It is set to "ISO, MDY" by default, and would cause your example to fail:

    > show datestyle;
     DateStyle
    -----------
     ISO, MDY
    (1 row)

    > select '28-Sep-13'::date;
        date
    ------------
     2013-09-28
    (1 row)

    > select '28/09/2013'::date;
    ERROR:  date/time field value out of range: "28/09/2013"
    LINE 1: select '28/09/2013'::date;
                   ^
    HINT:  Perhaps you need a different "datestyle" setting.

    > set datestyle = 'ISO, DMY';
    SET

    > select '28-Sep-13'::date;
        date
    ------------
     2013-09-28
    (1 row)

    > select '28/09/2013'::date;
        date
    ------------
     2013-09-28
    (1 row)

(Examples done in PostgreSQL 9.1, but the DateStyle setting and associated behavior are ancient, so it should work fine.)
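If you'd rather not depend on a session setting, an alternative (assuming you know which of the two formats a given value uses) is to parse each format explicitly with to_timestamp:

    -- explicit format masks, independent of the DateStyle setting
    SELECT to_timestamp('28/09/2013', 'DD/MM/YYYY');
    SELECT to_timestamp('28-Sep-13',  'DD-Mon-YY');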
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8130594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/982096/']}
|
jdg_369847
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15979973
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't seem to find a way to set the static int I have created to assign unique ids to every object I save to persistent memory. The following gives me a 'no setter method 'setIdGen' for assignment to property. -(void)viewDidLoad{ PlayerMenuController.idGen = [[NSUserDefaults standardUserDefaults] floatForKey:@"idGen"];} As well as the above I've tried creating a static setIdGen method that would return bad access errors, and making NSIntegers with their own set methods. My static NSMutableArray gave the same errors when I tried to assign it using = but worked fine when using setArray. idGen method: + (int) idGen{ /*static int idGen; if(!idGen) { idGen = 0; NSLog(@"idGen reset"); }*/ return self.idGen;}
Now provide the response and nothing else.
|
Update 2017

Xcode 8 introduced class properties to Objective-C; from the release notes:

Objective-C now supports class properties, which interoperate with Swift type properties. They are declared as @property (class) NSString *someStringProperty; , and are never synthesised.

This means our sample interface below can become:

    @interface PlayerMenuController : NSObject
    @property (class) int idGen;
    @end

However, you must still implement the methods yourself, as shown below, because class properties are never synthesised. Note that this also means that if you specify property attributes, such as copy, your methods must implement those semantics.

Original Answer

It looks like you are trying to implement a class property, but there is no such thing in Objective-C - a property is a pair of instance methods. However, you can fake it... While the @property declaration is not available to you, if you declare class methods which follow the right naming convention then your compiler may allow you to use dot notation (tested on Xcode 4.6.1; "may" as I cannot offhand point to this being supported, but it's simple to test and it will produce a compile-time error if not), i.e. it looks like a class property even though it lacks an @property.

A sample interface:

    @interface PlayerMenuController : NSObject

    // a class "property"
    + (int) idGen;
    + (void) setIdGen:(int)value;

    @end

The implementation:

    @implementation PlayerMenuController

    static int idGen = 0;

    + (int) idGen { return idGen; }
    + (void) setIdGen:(int)value { idGen = value; }

    @end

And test it:

    NSLog(@"initial value: %d", PlayerMenuController.idGen);
    PlayerMenuController.idGen = 42;
    NSLog(@"updated value: %d", PlayerMenuController.idGen);

producing:

    initial value: 0
    updated value: 42

So we have a "class property" - it looks, walks and quacks like a property ;-)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15979973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/957245/']}
|
jdg_369848
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1097785
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand if I have spec'd my database appropriately. Below is a chart showing the WriteIOPS , CPUCreditBalance and BurstBalance for a t3.xlarge instance of SQL Server. It looks like I'm consuming my BurstBalance in another 15 hours or so, given a fairly constant WriteIOPS rate. However, the CPUCreditBalance is steadily increasing. What will happen in +-15 hours - will the database be throttled or not? I've tried to understand the metrics defined here and described here , but I'm not sure exactly what the difference between the two balances is - can someone clarify what the two balance metrics mean?
Now provide the response and nothing else.
|
CPUCreditBalance and BurstBalance are two unrelated metrics. On T type instances, you have a CPUCreditBalance . If you have sustained CPU usage you will deplete your credit balance and the machine will be throttled. T type instances are only good for intermittent workloads. Any process (even an errant process) that continues to consume even small amounts of CPU, can cripple the system if it is not sized properly. The table here shows that a t3.xlarge can run at a baseline of 40% per vCPU to neither gain nor lose credits. Anything that keeps the server running above that rate will consume credits until the system runs out of credits and is throttled to the baseline speed. Essentially your system will be throttle to 40% CPU usage. On the other hand, BurstBalance is a function of the EBS storage volume backing an EC2 or RDS instance. When you provision a standard gp2 storage volume, it provides a baseline of performance. However, you can earn credits to burst above that performance. The larger the volume, the larger the baseline performance. If you have a process consuming disk (read or write), it will run much faster than the baseline performance until the balance is exhausted. It will then be throttled to baseline performance. More info on that here . In your graph, you are missing key values and those are CPUUtilization and ReadIOPS . What you see is that when you have sustained read or write IOPS to disk, your burst balance decreases. When it runs out you will be limited to baseline performance of the disk. Additionally, you see if you have sustained CPU usage your credit balance will decrease. When it runs out your CPU will be throttled to baseline performance. Depending on your workload you may have to adjust the size of your instance, or volume to account for your needs. Or you may have to change to a non-burstable instance type for reliable and consistent CPU performance. Or, you might have to change to a provisioned iops storage volume for reliable and consistent disk performance.
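If it helps to watch the two balances directly, they are both ordinary CloudWatch metrics; a sketch using the AWS CLI (the instance identifier and times are placeholders):

    # EBS burst balance for an RDS instance (percent remaining)
    aws cloudwatch get-metric-statistics \
      --namespace AWS/RDS --metric-name BurstBalance \
      --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
      --start-time 2022-01-01T00:00:00Z --end-time 2022-01-02T00:00:00Z \
      --period 3600 --statistics Average

    # CPU credit balance for the same instance
    aws cloudwatch get-metric-statistics \
      --namespace AWS/RDS --metric-name CPUCreditBalance \
      --dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
      --start-time 2022-01-01T00:00:00Z --end-time 2022-01-02T00:00:00Z \
      --period 3600 --statistics Average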
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/1097785', 'https://serverfault.com', 'https://serverfault.com/users/169840/']}
|
jdg_369849
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
415924
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I'm learning java, and I have a question. It seems that the types int , boolean and string will be good for just about everything I'll ever need in terms of variables, except perhaps float could be used when decimal numbers are needed in a number. My question is, are the other types such as long , double , byte , char etc ever used in normal, everyday programming? What are some practical things these could be used for? What do they exist for?
Now provide the response and nothing else.
|
With the possible exception of "short", which arguably is a bit of a waste of space-- sometimes literally, they're all horses for courses: Use an int when you don't need fractional numbers and you've no reason to use anything else; on most processors/OS configurations, this is the size of number that the machine can deal with most efficiently; Use a double when you need fractional numbers and you've no reason to use anything else; Use a char when you want to represent a character (or possibly rare cases where you need two-byte unsigned arithmetic); Use a byte if either you specifically need to manipulate a signed byte (rare!), or when you need to move around a block of bytes; Use a boolean when you need a simple "yes/no" flag; Use a long for those occasions where you need a whole number, but where the magnitude could exceed 2 billion (file sizes, time measurements in milliseconds/nanoseconds, in advanced uses for compacting several pieces of data into a single number); Use a float for those rare cases where you either (a) are storing a huge number of them and the memory saving is worthwhile, or (b) are performing a massive number of calculations , and can afford the loss in accuracy. For most applications, "float" offers very poor precision, but operations can be twice as fast -- it's worth testing this on your processor, though, to find that it's actually the case! [*] Use a short if you really need 2-byte signed arithmetic. There aren't so many cases... [*] For example, in Hotspot on Pentium architectures, f loat and double operations generally take exactly the same time , except for division. Don't get too bogged down in the memory usage of these types unless you really understand it. For example: every object size is rounded to 16 bytes in Hotspot, so an object with a single byte field will take up precisely the same space as a single object with a long or double field; when passing parameters to a method, every type takes up 4 or 8 bytes on the stack : you won't save anything by changing a method parameter from, say, an int to a short! (I've seen people do this...) Obviously, there are certain API calls (e.g. various calls for non-CPU intensive tasks that for some reason take floats) where you just have to pass it the type that it asks for...! Note that String isn't a primitive type, so it doesn't really belong in this list.
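To make a couple of those points concrete, here is a small sketch showing where long and double earn their keep and where float's precision falls short:

    public class PrimitiveDemo {
        public static void main(String[] args) {
            // Milliseconds since 1970 overflow an int (max ~2.1 billion),
            // which is why this API returns a long.
            long now = System.currentTimeMillis();
            System.out.println(now > Integer.MAX_VALUE);  // true

            // int division truncates; use double when you need the fraction.
            System.out.println(7 / 2);     // 3
            System.out.println(7 / 2.0);   // 3.5

            // float has roughly 7 significant digits, so rounding error accumulates quickly.
            float f = 0.0f;
            double d = 0.0;
            for (int i = 0; i < 1000000; i++) {
                f += 0.1f;
                d += 0.1;
            }
            System.out.println(f);  // noticeably off from 100000
            System.out.println(d);  // much closer to 100000
        }
    }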
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/415924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/49153/']}
|
jdg_369850
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35700281
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How will I convert this datetime from the date? From this: 2016-02-29 12:24:26 to: Feb 29, 2016 So far, this is my code and it returns a nil value: let dateFormatter = NSDateFormatter()dateFormatter.dateFormat = "MM-dd-yyyy"dateFormatter.timeZone = NSTimeZone(name: "UTC")let date: NSDate? = dateFormatter.dateFromString("2016-02-29 12:24:26")print(date)
Now provide the response and nothing else.
|
You have to declare 2 different NSDateFormatters, the first to convert the string to a NSDate and the second to print the date in your format. Try this code:

    let dateFormatterGet = NSDateFormatter()
    dateFormatterGet.dateFormat = "yyyy-MM-dd HH:mm:ss"

    let dateFormatterPrint = NSDateFormatter()
    dateFormatterPrint.dateFormat = "MMM dd,yyyy"

    let date: NSDate? = dateFormatterGet.dateFromString("2016-02-29 12:24:26")
    print(dateFormatterPrint.stringFromDate(date!))

Swift 3 and higher:

From Swift 3, the NSDate class has been changed to Date and NSDateFormatter to DateFormatter.

    let dateFormatterGet = DateFormatter()
    dateFormatterGet.dateFormat = "yyyy-MM-dd HH:mm:ss"

    let dateFormatterPrint = DateFormatter()
    dateFormatterPrint.dateFormat = "MMM dd,yyyy"

    if let date = dateFormatterGet.date(from: "2016-02-29 12:24:26") {
        print(dateFormatterPrint.string(from: date))
    } else {
        print("There was an error decoding the string")
    }
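One extra detail worth considering (it depends on your app's context, but it follows Apple's guidance for fixed-format dates): give the parsing formatter an en_US_POSIX locale so the result doesn't change with the user's device locale or 12/24-hour setting:

    let dateFormatterGet = DateFormatter()
    dateFormatterGet.locale = Locale(identifier: "en_US_POSIX")  // stable parsing of fixed-format strings
    dateFormatterGet.dateFormat = "yyyy-MM-dd HH:mm:ss"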
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/35700281', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4335771/']}
|
jdg_369851
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
307300
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been looking up a formal mathematical definition of the Born-Oppenheimer approximation. I have thus far come across two (my wording): Definition 1 The Born-Oppenheimer approximation is given by: $$\nabla^2\psi_e(\vec r, R)\chi(\vec R) \approx \psi_e(\vec r, \vec R) \nabla^2\chi(\vec R)$$ (from: Linne, M. A. (2002). Spectroscopic measurement: an introduction to the fundamentals . Academic Press. (p224)) Definition 2 The Born-Oppenheimer approximation is given by: $$\psi=\psi_e \chi$$ (from: Das, I. et al. (2005) An Introduction to Physical Chemistry . New Age International.(p105)) In both cases $\psi_e$ is the electron wave function and $\chi$ is the nuclear wave function. My question is are these definitions equivalent. If so how can it be proved and if not which is taken to be the standard definition. (A source describing both would also be great).
Now provide the response and nothing else.
|
The two expressions are clearly not equivalent: the first one assumes the second one, and extends it. Note that writing $f(x,y)=g(x)h(x,y)$ is not an approximation at all, so the second expression is meaningless on its own. The true approximation is $h'\ll g'$, so that you can write $f'\approx g' h$. Therefore, the second expression is just notation; it is the first expression that really introduces an approximation.
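To spell out how the first expression extends the second: applying the product rule for the Laplacian (with $\nabla$ acting on the nuclear coordinates $\vec R$) to the ansatz $\psi = \psi_e \chi$ gives

$$\nabla^2\big(\psi_e(\vec r,\vec R)\,\chi(\vec R)\big) = \chi\,\nabla^2\psi_e + 2\,\nabla\psi_e\cdot\nabla\chi + \psi_e\,\nabla^2\chi .$$

Definition 1 is the statement that the first two terms (the nuclear-coordinate derivatives of $\psi_e$) are negligible compared with the last one, which is the physical content of the Born-Oppenheimer approximation; Definition 2 by itself only names the product ansatz.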
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/307300', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/70392/']}
|
jdg_369852
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5914
|
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Felicia Day asked this on Google+: Is there any scientific basis for drinking extra water to "flush out the toxins" after a massage? and I am curious if someone here can answer this.
Now provide the response and nothing else.
|
No , there is no scientific basis for this claim. Dr Andrew Weil has this to say : Massage certainly can help address the build up of lactic acid in muscles, and promote the clearing of normal byproducts of muscle metabolism, but I know of no evidence suggesting that massage can remove toxins of any kind from the body . This Massage Today article written by Keith Eric Grant, PhD, NCTMB, elaborates: There's a statement, seemingly pervasive throughout massage education and massage books, that unspecified toxins accumulate in the body, and that these toxins can be flushed out by massage. I believe this is yet another myth that continues to be passed on as misinformation to massage students. This is not to dispute that there are very real toxins that accumulate in the body, notably persistent organic pollutants (POPs) in fatty tissues and heavy metals in skeletal tissues. However, these toxins are too chemically bound to their target tissues to be significantly liberated by the mechanical motions of massage. There's an NIH study showing that massage is not effective for removal of lactic acid buildup : This investigation highlights the comparison of blood lactate removal during the period of recovery in which the subjects were required to sit down as a passive rest period, followed by active recovery at 30% VO(2)max, and short term body massage, as the three modes of recovery used.... Analysis of lactate values indicated no remarkable difference between massage and a passive type of sitting recovery period . It was observed that in short term massage recovery, more oxygen was consumed as compared to a passive type of sitting recovery. It is concluded from the study that the short term body massage is ineffective in enhancing the lactate removal and that an active type of recovery is the best modality for enhancing lactate removal after exercise. The primary issue with this claim is that it uses the "alternative medicine" definition of toxin : In the context of alternative medicine the term is often used to refer to any substance claimed to cause ill health, ranging anywhere from trace amounts of pesticides to common food items like refined sugar or additives such as monosodium glutamate (MSG). You will never get a clear definition of what "toxins" are in this sense, and so "flushing toxins" should be bundled in with fad "detox diets" whose claimed benefits are not supported by evidence. Additionally, actual toxins are naturally "flushed" from the body : Most ingested toxins are efficiently and effectively removed by the kidneys and liver and excreted in urine and stool. Skepdic also mirrors my sentiments : Some claim juice fasting is the key to detoxification. Others claim a raw food or vegetarian diet is the best detox therapy. Some swear by enemas; others by ozone therapy, acupuncture, and massage. Others swear by mega-vitamins and antioxidants. Some swear you shouldn't eat anything but fruit until noon so the body can detox properly. Again, none of the proponents of these methods name a single toxin that is removed, in what quantity, or with what specific benefit. And Sense about Science (PDF) has something similar to say - and that, as with everything, the dose makes the poison: Claim 1: Toxins have built up in the body and need to be flushed/cleansed from it The terms ‘toxic’ or ‘toxins’ are used to imply that a chemical is causing you harm. In reality all chemicals can be toxic and it is the dose that is important e.g. 
one 400 g Vitamin A tablet may be beneficial but taking 20 of these at once could damage your liver. Most chemicals do not accumulate in the body – they are removed by the liver and kidney. Many of the detox products which claim to flush the body of chemicals contain diuretics, which increase the amount you urinate. This just removes water and some salt.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/5914', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/4588/']}
|
jdg_369853
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2633848
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reviewing some old notes from my algorithms course and the dynamic programming problems are seeming a bit tricky to me. I have a problem where we have an unlimited supply of coins, with some denominations x1, x2, ... xn and we want to make change for some value X. We are trying to design a dynamic program to decide whether change for X can be made or not (not minimizing the number of coins, or returning which coins, just true or false). I've done some thinking about this problem, and I can see a recursive method of doing this where it's something like... MakeChange(X, x[1..n this is the coins]) for (int i = 1; i < n; i++) { if ( (X - x[i] ==0) || MakeChange(X - x[i]) ) return true; } return false; Converting this a dynamic program is not coming so easily to me. How might I approach this?
Now provide the response and nothing else.
|
Your code is a good start. The usual way to convert a recursive solution to a dynamic-programming one is to do it "bottom-up" instead of "top-down". That is, if your recursive solution calculates something for a particular X using values for smaller x, then instead calculate the same thing starting at smaller x, and put it in a table. In your case, change your MakeChange recursive function into a canMakeChange table:

    canMakeChange[0] = True
    for X = 1 to (your max value):
        canMakeChange[X] = False
        for i = 1 to n:
            if X >= x[i] and canMakeChange[X - x[i]] == True:
                canMakeChange[X] = True
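A direct translation of that table into runnable Python (the coin list and target below are just example values):

    def can_make_change(coins, target):
        can = [False] * (target + 1)
        can[0] = True                      # zero can always be made with no coins
        for amount in range(1, target + 1):
            for c in coins:
                if amount >= c and can[amount - c]:
                    can[amount] = True
                    break
        return can[target]

    print(can_make_change([4, 7], 13))   # False
    print(can_make_change([4, 7], 15))   # True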
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2633848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/315740/']}
|
jdg_369854
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6301
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Linux proc(5) man page tells me that /proc/$pid/mem “can be used to access the pages of a process's memory”. But a straightforward attempt to use it only gives me $ cat /proc/$$/mem /proc/self/memcat: /proc/3065/mem: No such processcat: /proc/self/mem: Input/output error Why isn't cat able to print its own memory ( /proc/self/mem )? And what is this strange “no such process” error when I try to print the shell's memory ( /proc/$$/mem , obviously the process exists)? How can I read from /proc/$pid/mem , then?
Now provide the response and nothing else.
|
/proc/$pid/maps /proc/$pid/mem shows the contents of $pid's memory mapped the same way as in the process, i.e., the byte at offset x in the pseudo-file is the same as the byte at address x in the process. If an address is unmapped in the process, reading from the corresponding offset in the file returns EIO (Input/output error). For example, since the first page in a process is never mapped (so that dereferencing a NULL pointer fails cleanly rather than unintendedly accessing actual memory), reading the first byte of /proc/$pid/mem always yield an I/O error. The way to find out what parts of the process memory are mapped is to read /proc/$pid/maps . This file contains one line per mapped region, looking like this: 08048000-08054000 r-xp 00000000 08:01 828061 /bin/cat08c9b000-08cbc000 rw-p 00000000 00:00 0 [heap] The first two numbers are the boundaries of the region (addresses of the first byte and the byte after last, in hexa). The next column contain the permissions, then there's some information about the file (offset, device, inode and name) if this is a file mapping. See the proc(5) man page or Understanding Linux /proc/id/maps for more information. Here's a proof-of-concept script that dumps the contents of its own memory. #! /usr/bin/env pythonimport remaps_file = open("/proc/self/maps", 'r')mem_file = open("/proc/self/mem", 'rb', 0)output_file = open("self.dump", 'wb')for line in maps_file.readlines(): # for each mapped region m = re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])', line) if m.group(3) == 'r': # if this is a readable region start = int(m.group(1), 16) end = int(m.group(2), 16) mem_file.seek(start) # seek to region start chunk = mem_file.read(end - start) # read region contents output_file.write(chunk) # dump contents to standard outputmaps_file.close()mem_file.close()output_file.close() /proc/$pid/mem [The following is for historical interest. It does not apply to current kernels.] Since version 3.3 of the kernel , you can access /proc/$pid/mem normally as long as you access only access it at mapped offsets and you have permission to trace it (same permissions as ptrace for read-only access). But in older kernels, there were some additional complications. If you try to read from the mem pseudo-file of another process, it doesn't work: you get an ESRCH (No such process) error. The permissions on /proc/$pid/mem ( r-------- ) are more liberal than what should be the case. For example, you shouldn't be able to read a setuid process's memory. Furthermore, trying to read a process's memory while the process is modifying it could give the reader an inconsistent view of the memory, and worse, there were race conditions that could trace older versions of the Linux kernel (according to this lkml thread , though I don't know the details). So additional checks are needed: The process that wants to read from /proc/$pid/mem must attach to the process using ptrace with the PTRACE_ATTACH flag. This is what debuggers do when they start debugging a process; it's also what strace does to a process's system calls. Once the reader has finished reading from /proc/$pid/mem , it should detach by calling ptrace with the PTRACE_DETACH flag. The observed process must not be running. Normally calling ptrace(PTRACE_ATTACH, …) will stop the target process (it sends a STOP signal), but there is a race condition (signal delivery is asynchronous), so the tracer should call wait (as documented in ptrace(2) ). 
A process running as root can read any process's memory, without needing to call ptrace , but the observed process must be stopped, or the read will still return ESRCH . In the Linux kernel source, the code providing per-process entries in /proc is in fs/proc/base.c , and the function to read from /proc/$pid/mem is mem_read . The additional check is performed by check_mem_permission . Here's some sample C code to attach to a process and read a chunk its of mem file (error checking omitted): sprintf(mem_file_name, "/proc/%d/mem", pid);mem_fd = open(mem_file_name, O_RDONLY);ptrace(PTRACE_ATTACH, pid, NULL, NULL);waitpid(pid, NULL, 0);lseek(mem_fd, offset, SEEK_SET);read(mem_fd, buf, _SC_PAGE_SIZE);ptrace(PTRACE_DETACH, pid, NULL, NULL); I've already posted a proof-of-concept script for dumping /proc/$pid/mem on another thread .
|
{}
|
{'log_upvote_score': 9, 'links': ['https://unix.stackexchange.com/questions/6301', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/885/']}
|
jdg_369855
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
182962
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose you are given 2 random variables $X$ and $Y$ following some distribution (say geometric, $p_1$ and $p_2$ respectively). Can anyone give a hint how to calculate $P(XY = z)$? By which I mean the probability of $X$ multiplied by $Y$ equals $z$, for some $z$.
Now provide the response and nothing else.
|
Since the distribution of $XY$ is characterised by its cdf, you want to compute $\mathbb{P}(XY<z)$ for an arbitrary value $z$. Let us assume first that $Y$ is always positive. Then $$\eqalign{\mathbb{P}(XY<z)&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)]\\&=\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mid Y]]\\&=\mathbb{E}[\mathbb{P}(XY<z\mid Y)]\\&=\mathbb{E}[F_X(z/Y)]}$$If $Y$ can take both positive and negative values,$$\eqalign{\mathbb{P}(XY<z)&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)]\\&=\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mid Y]\,\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{E}[\mathbb{I}_{(-\infty,z)}(XY)\mid Y]\,\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{P}(XY<z\mid Y)\,\mathbb{I}_{(-\infty,0)}(Y)]+\mathbb{E}[\mathbb{P}(XY<z\mid Y)\,\mathbb{I}_{(0,\infty)}(Y)]\\&=\mathbb{E}[\mathbb{I}_{(-\infty,0)}(Y)\{1-F_X(z/Y)\}]+\mathbb{E}[\mathbb{I}_{(0,\infty)}(Y)\,F_X(z/Y)]}$$(for negative $Y$, the event $XY<z$ is the event $X>z/Y$). Depending on the setting, the density of $XY$ can then be obtained by differentiating the above.
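For the discrete case actually asked about (positive-integer-valued variables such as geometrics), and under the additional assumption that $X$ and $Y$ are independent, the point mass at $z$ can also be written directly as a sum over divisors:

$$\mathbb{P}(XY=z)=\sum_{d\,\mid\,z}\mathbb{P}(X=d)\,\mathbb{P}(Y=z/d),$$

where the sum runs over the positive divisors $d$ of $z$; for geometric $X$ and $Y$, each term is just the product of the two geometric pmfs evaluated at $d$ and $z/d$.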
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/182962', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/95845/']}
|
jdg_369856
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13908518
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've read that one of the key beliefs of Python is that flat > nested. However, if I have several variables counting up, what is the alternative to multiple for loops? My code is for counting grid sums and goes as follows:

    def horizontal():
        for x in range(20):
            for y in range(17):
                temp = grid[x][y: y + 4]
                sum = 0
                for n in temp:
                    sum += int(n)
                print sum  # EDIT: the return instead of print was a mistype

This seems to me like it is too heavily nested. Firstly, what is considered too many nested loops in Python (I have certainly seen 2 nested loops before)? Secondly, if this is too heavily nested, what is an alternative way to write this code?
Now provide the response and nothing else.
|
    from itertools import product

    def horizontal():
        for x, y in product(range(20), range(17)):
            print sum(int(n) for n in grid[x][y: y + 4])

You should be using the built-in sum function. Of course you can't while you shadow it with a variable of the same name (as your original code does), so either rename that variable (e.g. to my_sum) or, as above, drop it entirely and use a generator expression.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13908518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1859481/']}
|
jdg_369857
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9524309
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to get info on my monitors programmatically. The content of the loops is not important right now, they just contain debug statements that will be printed when the loop condition is satisfied. Right now, the outer loop code executes three times and the inner loop code is never accessed, meaning the while condition of the (inner) loop is never true, meaning that call fails. My problem here is that the Windows API says, regarding this function: To obtain information on a display monitor, first call EnumDisplayDevices with lpDevice >set to NULL. Then call EnumDisplayDevices with lpDevice set to DISPLAY_DEVICE.DeviceName >from the first call to EnumDisplayDevices and with iDevNum set to zero. Then >DISPLAY_DEVICE.DeviceString is the monitor name. ...but even after doing exactly what it says, the second EnumDisplayDevices call always fails? Any insight??? Also I'm doing this as a service level application on both windows xp and windows 7 and getting the same results. When I try to print out dd.DeviceName, it gives me an address (for example: 0x12cfa4), but this must be what the function is expecting on the second call as MSDN says just pass in your display device pointer and append .DeviceName to it... C++ (using Qt), Windows API/MSDN calls are used. DISPLAY_DEVICE dd; dd.cb = sizeof(DISPLAY_DEVICE); DWORD deviceNum = 0; while( EnumDisplayDevices(NULL, deviceNum, &dd, 0) ){ qWarning() << "We've entered the outer loop."; while( EnumDisplayDevices(dd.DeviceName, 0, &dd, 0)){ qWarning() << "We've entered the inner loop."; } deviceNum++; }
Now provide the response and nothing else.
|
The problem is passing 'dd' into the inner call while using a member of 'dd' as the input string. I know this doesn't make sense, but I suspect that since dd is an out parameter the API is writing to it, but then looking that the contents of the input parameter after it has scribbled on it. This could happen if they did something like memset'ing the output parm to 0 before execution for sanity. Just make sure it wasn't something about sending in a non-empty dd, I made a second dd with the exact same bits in it and things still work fine. It's definitely the aliased memory. #include <windows.h>#include <stdio.h>#pragma comment(lib, "user32.lib")void DumpDevice(const DISPLAY_DEVICE& dd, size_t nSpaceCount ){ printf("%*sDevice Name: %s\n", nSpaceCount, "", dd.DeviceName ); printf("%*sDevice String: %s\n", nSpaceCount, "", dd.DeviceString ); printf("%*sState Flags: %x\n", nSpaceCount, "", dd.StateFlags ); printf("%*sDeviceID: %s\n", nSpaceCount, "", dd.DeviceID ); printf("%*sDeviceKey: ...%s\n\n", nSpaceCount, "", dd.DeviceKey+42 );}int main(){ DISPLAY_DEVICE dd; dd.cb = sizeof(DISPLAY_DEVICE); DWORD deviceNum = 0; while( EnumDisplayDevices(NULL, deviceNum, &dd, 0) ){ DumpDevice( dd, 0 ); DISPLAY_DEVICE newdd = {0}; newdd.cb = sizeof(DISPLAY_DEVICE); DWORD monitorNum = 0; while ( EnumDisplayDevices(dd.DeviceName, monitorNum, &newdd, 0)) { DumpDevice( newdd, 4 ); monitorNum++; } puts(""); deviceNum++; } return 0;} The box I'm on right now only has 1 monitor, so I can't verify the inner loop, but I suspect it's fine since 'newdd' never gets aliased into the call. You also say you are in a service context - I'm not sure if that winstation will restrict what you see about the displays - so that could also be a problem; but I suspect you should still be able to at least see the physical device. On my machine I get: Device Name: \\.\DISPLAY1Device String: NVIDIA GeForce GTX 580State Flags: 8000005DeviceID: PCI\VEN_10DE&DEV_1080&SUBSYS_15803842&REV_A1DeviceKey: ...\Control\Video\{B0CDD262-FCFB-4FD4-A03C-54621896C9CD}\0000 Device Name: \\.\DISPLAY1\Monitor0 Device String: Generic PnP Monitor State Flags: 3 DeviceID: MONITOR\DEL4016\{4d36e96e-e325-11ce-bfc1-08002be10318}\0002 DeviceKey: ...\Control\Class\{4d36e96e-e325-11ce-bfc1-08002be10318}\0002Device Name: \\.\DISPLAY2Device String: NVIDIA GeForce GTX 580State Flags: 0DeviceID: PCI\VEN_10DE&DEV_1080&SUBSYS_15803842&REV_A1DeviceKey: ...\Control\Video\{B0CDD262-FCFB-4FD4-A03C-54621896C9CD}\0001Device Name: \\.\DISPLAYV1Device String: RDPDD Chained DDState Flags: 8DeviceID:DeviceKey: ...\Control\Video\{DEB039CC-B704-4F53-B43E-9DD4432FA2E9}\0000Device Name: \\.\DISPLAYV2Device String: RDP Encoder Mirror DriverState Flags: 200008DeviceID:DeviceKey: ...\Control\Video\{42cf9257-1d96-4c9d-87f3-0d8e74595f78}\0000Device Name: \\.\DISPLAYV3Device String: RDP Reflector Display DriverState Flags: 200008DeviceID:DeviceKey: ...\Control\Video\{b043b95c-5670-4f10-b934-8ed0c8eb59a8}\0000
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9524309', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1231316/']}
|
jdg_369858
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3704
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a codebase that needs a code review to evaluate it for backdoors. The code is far too big to review it all, how would you recommend approaching the problem. It is a java web application with an oracle database, the code is customized from a product that is exceedingly large. The customizations cover almost all of the codebase, but I can identify the customised code automatically.
Now provide the response and nothing else.
|
I would start with a structural overview - from a design perspective, are separate parts of the code well defined? E.g. do you have validation code, input and output functions etc. which are used for those purposes throughout the codebase, or is every function individual? Do you have code which is functionally safe (often certain style constructions do not impact the security of data flow)? If you have a security wrapper which carries out authentication for every function, you can possibly shortcut the review of those functions and just check for usage of the wrapper function, for example. If it is a very large codebase, then you will want to run a tool such as Fortify (or others that @AviD will be able to name :-) to make a first pass at the problem, but all tools suffer from a lack of context intelligence. They identify issues based on typical signatures, so you will get false positives (and possibly false negatives - which is why having a good overview can help you identify risks a tool won't spot). The idea is that the tool narrows down the possible risk areas and identifies the vast majority of issues, as tools are relatively cheap; then a human validates and adds to the tool's output, placing it into the context of the application environment. At the risk of sounding overly commercial, I would advise using the services of an experienced security consultant who not only knows the code review tool inside out and is fluent in Java + Oracle, but is also experienced in business and security-risk-based architecture.
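As a cheap first pass before (or alongside) a tool like Fortify, it can help to grep the customised files for constructs that backdoors tend to rely on. A rough sketch, assuming a Unix shell and that the customised files are listed one per line in a hypothetical custom_files.txt; the patterns are only starting points for manual review, not proof of anything:

    # runtime execution, reflection, dynamic class loading, raw sockets, hard-coded credentials
    xargs grep -nE 'Runtime\.getRuntime|ProcessBuilder|Class\.forName|defineClass|URLClassLoader|Socket\(|Base64|password *=' \
        < custom_files.txt

    # dynamic SQL built by string concatenation (possible injection or hidden access paths)
    xargs grep -nE 'createStatement|executeQuery\(.*\+' < custom_files.txt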
|
{}
|
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/3704', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/2210/']}
|
jdg_369859
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3255109
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Making a search result list (like in Google) is not very hard, if you just need something that works. Now, however, I want to do it with perfection, using the benefits of HTML5 semantics. The goal is to define the defacto way of marking up a search result list that potentially could be used by any future search engine. For each hit, I want to order them by increasing number display a clickable title show a short summary display additional data like categories, publishing date and file size My first idea is something like this: <ol> <li> <article> <header> <h1> <a href="url-to-the-page.html"> The Title of the Page </a> </h1> </header> <p>A short summary of the page</p> <footer> <dl> <dt>Categories</dt> <dd> <nav> <ul> <li><a href="first-category.html">First category</a></li> <li><a href="second-category.html">Second category</a></li> </ul> </nav> </dd> <dt>File size</dt> <dd>2 kB</dd> <dt>Published</dt> <dd> <time datetime="2010-07-15T13:15:05-02:00" pubdate>Today</time> </dd> </dl> </footer> </article> </li> <li> ... </li> ...</ol> I am not really happy about the <article/> within the <li/> . First, the search result hit is not an article by itself, but just a very short summary of one. Second, I am not even sure you are allowed to put an article within a list. Maybe the <details/> and <summary/> tags are more suitable than <article/> , but I don't know if I can add a <footer/> inside that? All suggestions and opinions are welcome! I really want every single detail to be perfect.
Now provide the response and nothing else.
|
1) I think you should stick with the article element, as [t]he article element represents a self-contained composition in a document, page, application, or site and that is intended to be independently distributable or reusable [source] You merely have a list of separate documents, so I think this is fully appropriate. The same is true for the front page of a blog, containing several posts with titles and outlines, each in a separate article element. Besides, if you intend to quote a few sentences of the articles (instead of providing summaries), you could even use blockquote elements, like in the example of a forum post showing the original posts a user is replying to. 2) If you're wondering if it's allowed to include article elements inside a li element, just feed it to the validator. As you can see, it is permitted to do so. Moreover, as the Working Draft says: Contexts in which this element may be used: Where flow content is expected. 3) I wouldn't use nav elements for those categories, as those links are not part of the main navigation of the page: only sections that consist of major navigation blocks are appropriate for the nav element. In particular, it is common for footers to have a short list of links to various pages of a site, such as the terms of service, the home page, and a copyright page. The footer element alone is sufficient for such cases, without a nav element. [source] 4) Do not use the details and/or summary elements, as those are used as part of interactive elements and are not intended for plain documents. UPDATE: Regarding whether it's a good idea to use an (un)ordered list to present search results: The ul element represents a list of items, where the order of the items is not important — that is, where changing the order would not materially change the meaning of the document. [source] As a list of search results actually is a list, I think this is the appropriate element to use; however, as it seems to me that the order is important (I expect the best matching result to be on top of the list), I think that you should use an ordered list ( ol ) instead: The ol element represents a list of items, where the items have been intentionally ordered, such that changing the order would change the meaning of the document. [source] Using CSS you can simply hide the numbers. EDIT: Whoops, I just realized you already use an ol (due to my fatigue, I thought you used an ul ). I'll leave my ‘update' as is; after all, it might be useful to someone.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3255109', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300963/']}
|
jdg_369860
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19389070
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm interested in knowing what are the best practices being followed to raise exceptions in the ODataController. If you raise an exception in the method it is translated to response code of 500 by default and the content has details on the error. I would like to be explicit of the response code and send 400 in cases of invalid key. For example: If the input request has an invalid key would like to return a HttpResponseCode of 400 and content should have the error details similar to raising an exception. Thanks a lot for your inputs
Now provide the response and nothing else.
|
OData (at least since v3) uses specific json to represent errors: { "error": { "code": "A custom error code", "message": { "lang": "en-us", "value": "A custom long message for the user." }, "innererror": { "trace": [...], "context": {...} } }} Microsoft .Net contains Microsoft.Data.OData.ODataError and Microsoft.Data.OData.ODataInnerError classes to form OData error on a server side. To form proper OData error response ( HttpResponseMessage ), that contains error details you can: 1) form and return HttpResponseMessage in controller's action using System.Web.OData.Extensions.HttpRequestMessageExtensions.CreateErrorResponse method return Request.CreateErrorResponse(HttpStatusCode.Conflict, new ODataError { ErrorCode="...", Message="...", MessageLanguage="..." })); 2) throw HttpResponseException using the same method for creating HttpResponseMessage throw new HttpResponseException( Request.CreateErrorResponse(HttpStatusCode.NotFound, new ODataError { ErrorCode="...", Message="...", MessageLanguage="..." })); 3) throw custom typed exception and convert it using Web Api action filters public class CustomExceptionFilterAttribute : ExceptionFilterAttribute{ public override void OnException(HttpActionExecutedContext context) { if (context.Exception is CustomException) { var e = (CustomException)context.Exception; var response = context.Request.CreateErrorResponse(e.StatusCode, new ODataError { ErrorCode = e.StatusCodeString, Message = e.Message, MessageLanguage = e.MessageLanguage }); context.Response = response; } else base.OnException(context); }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19389070', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2883779/']}
|
jdg_369861
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2120308
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am being asked to find all fourth roots of $\zeta_3$. My book teaches that $\zeta_3$=$cos\frac{2\pi}{3}$+$isin\frac{2\pi}{3}$. From there I did the following but I'm not sure I am correct. $\zeta_3$=$cos\frac{2\pi}{3}$+$i sin\frac{2\pi}{3}$=($cos\frac{2\pi}{12}$+$isin\frac{2\pi}{12})^4$=$(\zeta_{12})^4$ Am I on track? All of my roots are $\zeta_{12}$, $(\zeta_{12})^4$, $(\zeta_{12})^7$,$(\zeta_{12})^{10}$.
Now provide the response and nothing else.
|
The recurrence can be re-written as $a_{n+1}=a_n+\dfrac1{a_n}$. Let $x_n=a_n^2$; then we have $x_1=1$ and$$x_{n+1} = a_{n+1}^2 = \left(a_n+\dfrac1{a_n}\right)^2 = a_n^2+2+\dfrac1{a_n^2} = x_n+2+\dfrac1{x_n}.$$From this we can conclude $x_{n+1}>x_n+2$, so$$ x_n \ge 2n-1; $$then $x_{n+1}=x_n+2+\dfrac1{x_n}<x_n+2+\dfrac1{2n-1}$, so$$ x_n < 2n-1 + \left(1+\dfrac13+\dfrac15+\ldots+\dfrac1{2n-3}\right)< 2n+\log n.$$Finally,$$\big|a_n-\sqrt{2n}\big| = \dfrac{|a_n^2-2n|}{a_n+\sqrt{2n}} <\dfrac{|x_n-2n|}{\sqrt{2n}} < \dfrac{\log n}{\sqrt{2n}}.$$Therefore,$\big|a_n-\sqrt{2n}\big|\to0$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2120308', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/329622/']}
|
jdg_369862
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32477
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I am building a logistic regression model where the dependent variable is binary and can take the values $0$ or $1$. Let the independent variables be $x_1, x_2, ..., x_m$ - there are $m$ independent variables. Let's say for the $k$th independent variable, the bivariate analysis shows a U-shaped trend - i.e., if I group $x_k$ into $20$ bins each containing roughly equal number of observations and calculate the 'bad rate' for each bin - # observations where y = 0 / total observations in each bin - then I get a U shaped curve. My questions are: Can I directly use $x_k$ as input while estimating the beta parameters? Are any statistical assumptions violated which might cause significant error in estimating the parameters? Is it necessary to 'linearize' this variable through a transformation (log, square, product with itself, etc.)?
Now provide the response and nothing else.
|
You would want to use a flexible formulation that would capture non-linearity automatically, e.g., some version of a generalized additive model . A poor man's choice is a polynomial $x_k$, $x_k^2$, ..., $x_k^{p_k}$, but such polynomials produce terrible overswings at the ends of the range of their respective variables. A much better formulation would be to use (cubic) B-splines (see a random intro note from the first page of Google here , and a good book, here ). B-splines are a sequence of local humps: http://ars.sciencedirect.com/content/image/1-s2.0-S0169743911002292-gr2.jpg The height of the humps is determined from your (linear, logistic, other GLM) regression, as the function you are fitting is simply $$ \theta = \beta_0 + \sum_{k=1}^K \beta_k B\Bigl( \frac{x-x_k}{h_k} \Bigr) $$ for the specified functional form of your hump $B(\cdot)$. By far the most popular version is a bell-shaped smooth cubic spline: $$B(z) = \left\{ \begin{array}{ll} \frac14 (z+2)^3, & -2 \le z \le -1 \\ \frac14 (3|z|^3 - 6z^2 +4 ), & -1 < z < 1 \\ \frac14 (2-z)^3, & 1 \le z \le 2 \\ 0, & \mbox{otherwise} \end{array} \right.$$ On the implementation side, all you need to do is to set up 3-5-10-whatever number of knots $x_k$ would be reasonable for your application and create the corresponding 3-5-10-whatever variables in the data set with the values of $B\Bigl( \frac{x-x_k}{h_k} \Bigr) $. Typically, a simple grid of values is chosen, with $h_k$ being twice the mesh size of the grid, so that at each point, there are two overlapping B-splines, as in the above plot.
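Here is a minimal Python sketch of that recipe (not part of the original answer): the simulated data, the knot grid, the bandwidth choice and the use of scikit-learn's LogisticRegression are all illustrative assumptions; only the hump formula $B(\cdot)$ is taken from above.

# Hedged sketch: build cubic B-spline features for one predictor x and
# fit a logistic regression on them. Knots, bandwidth and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bspline_bump(z):
    """Bell-shaped cubic B-spline B(z) from the piecewise formula above."""
    z = np.asarray(z, dtype=float)
    a = np.abs(z)
    out = np.zeros_like(z)
    out = np.where((a >= 1.0) & (a <= 2.0), 0.25 * (2.0 - a) ** 3, out)
    out = np.where(a < 1.0, 0.25 * (3.0 * a ** 3 - 6.0 * z ** 2 + 4.0), out)
    return out

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=2000)
# Simulate a U-shaped relationship between x and P(y = 1).
p = 1.0 / (1.0 + np.exp(-(0.3 * (x - 5.0) ** 2 - 2.0)))
y = rng.binomial(1, p)

knots = np.linspace(x.min(), x.max(), 6)   # grid of knot locations x_k
h = knots[1] - knots[0]                    # bandwidth h_k (one illustrative choice)
X = np.column_stack([bspline_bump((x - k) / h) for k in knots])

model = LogisticRegression().fit(X, y)     # the fitted betas are the hump heights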
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/32477', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/11821/']}
|
jdg_369863
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
72150
|
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It is clear that oversampling and noise shaping in A/D conversion can help in shapingquantization noise. But for D/A conversion, normally there is no quantizationand hence I did not understand its use.What is the gain (advantage) of oversampling in D/A conversion?
Now provide the response and nothing else.
|
If you play 16-bit audio at 48kHz, you need the DAC analog reconstruction filter to pass 20kHz and attenuate 96dB at 24kHz, which is quite steep and requires a complex multistage analog filter. The advantage of using oversampling is moving the sampling rate much higher; for example, oversampling by 4x means the DAC runs at 192kHz, and the analog filter only needs to pass 20kHz and block 96dB at 96kHz, which allows for a much simpler filter. The noise shaping is much like dithering after signal processing, but instead of adding white noise uniformly to the whole band, it weights the noise towards high frequencies above the audio band, which can then be filtered away by the DAC reconstruction filter, so less noise remains in the audio band, thus increasing the apparent signal-to-noise ratio of the audio band.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/72150', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/54413/']}
|
jdg_369864
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2063930
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Years ago, while playing around with numbers, I came up with a divisibility rule for 7 using the number 315,462; effectively you take the 'dot product' of your number with 315462 (repeated when necessary), and repeat. Example: Is 298,427,052 divisible by 7? We need to use 315462 repeated twice to match or exceed the number of digits, so 315,462,315,462, and multiply matching powers of ten while adding the products. In this case, using our number (298427052) and (462315462): $(2)(4) + (9)(6) + (8)(2) + (4)(3) + (2)(1) + (7)(5) + (0)(4) + (5)(6) + (2)(2) = 161$ Repeat the procedure again: $(1)(4) + (6)(6) + (1)(2) = 42$ $(4)(6) + (2)(2) = 28$ $(2)(6) + (8)(2) = 28$ (it cycles again, but 28 is divisible by 7). The closest I could find online is someone else who also accidentally discovered it. His write-up might be more informative link ; yet neither of us found a proof, only more heuristic testing. Is one available?
Now provide the response and nothing else.
|
Let $\, n = d_0 + d_1 10 + d_2 10^2 + \cdots + d_k 10^k = P(10),\,$ where $P(n)$ is $\,n$ 's $ $ radix $10$ polynomial , with digits $\,d_i.\,$ You are evaluating $\ \color{#0a0}{2\, P(3)}\,$ since your sequence $\, 2, 6, 4, 5,\ldots \equiv 2\cdot 3^k\pmod{\!7}$ . Now $\,{\rm mod}\ 7\!:\ \color{#c00}{10\equiv 3}\,$ $\Rightarrow$ $\,P(\color{#c00}{10})\equiv P(\color{#c00}3),\,$ by the Polynomial Congruence Rule , therefore this congruence implies $\,7\mid P(10)\iff 7\mid P(3)\!\iff\! 7\mid \color{#0a0}{2P(3)},\,$ using divisibility mod reduction for the first $(\!\!\iff\!\!);$ whereas the second arrow follows from $\color{#0a0}{(7,2)=1}$ and Euclid's Lemma. So your method yields a valid divisibility test. If we instead used sequence $\,3^k \equiv 1,3,2,6,4,5\,$ then we'd obtain the standard test $\,7\mid P(10)\iff 7\mid P(3),\,$ which is usually optimized by evaluating $\,P(3)\,$ modulo $7,\,$ i.e. use mod $7$ arithmetic when we compute the "dot product". This method has a big advantage: $\,P(3)\,$ (but not $\,2P(3))\,$ has the same remainder mod $7$ as $\,P(10)\,$ so we can use it to do arithmetic mod $7$ , e.g. as a check of decimal arithmetic - just like casting out nines uses $\,{\rm mod}\ 9\!:\ 10\equiv 1\,$ $\Rightarrow$ $\,P(10)\equiv P(1),\,$ by the Polynomial Congruence Rule. Remark $\ $ There's a better way - a universal divisibility test that is simpler and much easier recalled, viz. evaluate the above radix polynomial $\,P(x)\,$ in nested (Horner) form, using modular arithmetic. For example, consider evaluating a $3$ digit radix $10$ number modulo $7.\,$ In Horner form $\rm\ d_2\ d_1\ d_0 \ $ is $\rm\: (d_2\cdot \color{#c00}{10} + d_1)\ \color{#c00}{10} + d_0\, \equiv\ (d_2\cdot\color{#c00} 3 + d_1)\ \color{#c00} 3 + d_0\pmod 7\ $ by $\rm\ \color{#c00}{10\equiv 3}\pmod{\! 7}.\,$ Thus we compute the remainder $\rm\!\! \mod {\!7}\, $ as follows: start with the leading digit then repeatedly apply the operation: $ $ multiply by $3$ then add the next digit (doing all of the arithmetic $\!\!\mod{\! 7}$). For example, let's use this algorithm to reduce $\rm\ 43211\ \:(mod\ 7)\:.\:$ The algorithm consists of repeatedly replacing the first two leading digits $\rm\ d_n\ d_{n-1}\ $ by $\rm\ \bbox[5px,border:1px solid #c00]{3 \,d_n + d_{n-1}\bmod 7}\ $ viz. $$\begin{array}{rrl} &\color{#0A0}{4\ 3}\ 2\ 1\ 1&\\\equiv\!\!\!\! &\color{#c00}{1\ 2}\ 1\ 1 &\!{\rm by}\, \ \bbox[5px,border:1px solid #c00]{3\cdot \color{#0a0}4 + \color{#0a0}3} \equiv \color{#c00}1\\\equiv\!\!\!\! &\color{#0af}{5\ 1}\ 1&\!{\rm by}\ \ \ 3\cdot \color{#c00}1 + \color{#c00}2\ \equiv\ \color{#0af}5\\\equiv\!\!\!\! & \color{#f60}{2\ 1}&\!{\rm by}\ \ \ 3\cdot \color{#0af}5 + \color{#0af}1\ \equiv\ \color{#f60}2\\\equiv\!\!\!\! &\color{#8d0}0&\!{\rm by}\ \ \ 3\cdot \color{#f60}2 + \color{#f60}1\ \equiv\ \color{#8d0}0\end{array}\qquad\qquad$$ Hence $\rm\ 43211\equiv 0\:\ (mod\ 7),\:$ indeed $\rm\ 43211 = 7\cdot 6173.\:$ Generally the modular arithmetic is simpler if one uses a balanced system of representatives, e.g. $\rm\: \pm\{0,1,2,3\}\ \:(mod\ 7),\,$ e.g. see here . Notice that for modulus $11$ or $9\:$ the above method reduces to the well-known divisibility tests by $11$ or $9\:$ (a.k.a. casting out nines or elevens). The above is much better than a divisibility test since it actually calculates the remainder mod $7$ (unlike the above divisibility test, which can only be used to test if the remainder $= 0).\,$ Thus - as in casting out nines - we can perform further arithmetic with the remainders, e.g.
using them to help check the correctness of calculations.
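Here is a small Python sketch of that Horner scheme (the function name is my own, not from the answer); it reduces a decimal number mod 7 by repeatedly multiplying by 3 and adding the next digit, exactly as described above.

def remainder_mod_7(n):
    # Horner evaluation of the radix-10 digits, using 10 ≡ 3 (mod 7).
    r = 0
    for digit in str(n):               # leading digit first
        r = (3 * r + int(digit)) % 7   # multiply by 3, add next digit, reduce mod 7
    return r

assert remainder_mod_7(43211) == 0         # 43211 = 7 * 6173, as in the example
assert remainder_mod_7(298427052) == 0     # the number from the question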
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2063930', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14397/']}
|
jdg_369865
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
85514
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I remove the edges of a VoronoiMesh? I would like to display only the closed cells not touching the border of the image. This is my binarized source image (imgbw): I am doing this: comp = SelectComponents[imgbw, "Area", # > 10 &]; meanValues = ComponentMeasurements[MaxDetect[DistanceTransform[comp]], "Centroid"];listData = meanValues /. Rule -> List;listData = listData[[All, 2]];vm = VoronoiMesh[listData];Show[imgbw, HighlightMesh[vm, Style[1, Black]], ImageSize -> 720] The output is:
Now provide the response and nothing else.
|
EDIT New Answer Using the RegionBounds and IntersectingQ functions we can easily achieve this. First we collect the cells of the Voronoi diagram and compute their region bounds; then, comparing with those of the overall Voronoi diagram, we can select the interior polygons. (* vm is the Voronoi diagram of your image *)cells = MeshPrimitives[vm, 2]; (* cells of the Voronoi diagram *)regb = RegionBounds[vm]; (* region bounds of the Voronoi diagram *)inout = IntersectingQ[Flatten@regb, Flatten@RegionBounds[#]] & /@ cells;in = Pick[cells, inout, False]; (* select the inner polygons *) Here is the plot: Graphics[{Blue, EdgeForm[Black], in}] Old Answer Here is one approach: I will use my sample data here; see below for your image data. pts = RandomReal[4, {20, 2}];vor = VoronoiMesh[pts] We determine the boundary points using RegionBoundary and we set the points from the Voronoi diagram that are on the boundary to {0,0} . We do this so we can eliminate the Polygons that coincide with the boundary (this is your goal). nobdr = With[{bdr = MeshCoordinates@RegionBoundary@vor, cod = MeshCoordinates[vor]}, If[MemberQ[bdr, #], {0, 0}, #] & /@ cod] We now get the positions of those boundary points: ind = Position[nobdr, {0,0}] // Flatten; And delete the polygons as explained above: pol = DeleteCases[MeshCells[vor, 2], Polygon[{___, Alternatives @@ ind, ___}]] Now the images: gr = Graphics[{LightRed, EdgeForm[Black], GraphicsComplex[nobdr, pol]}] With the Voronoi diagram: Show[vor, gr] The same approach applied to your data gives:
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/85514', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19892/']}
|
jdg_369866
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16873897
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Zend Framework 2, using tableGateway, I want to run the following SQL query: SELECT categories.category_name, COUNT(forums.forum_id) FROM categories LEFT JOIN forums ON categories.category_id = forums.category_idGROUP BY categories.category_name; Problem is that I simply don't know how to do it. I know how to use $select->join() for example, but I can't figure out how to also do a COUNT and GROUP BY . What I want with my SQL: I have 2 tables; categories and forums . I want to select all the categories from categories and for each category I want the amount of forums.
Now provide the response and nothing else.
|
Someone on another forum gave me the correct answer, and this works for me. Thought I would share it in case anyone else is having a similar question. Here is how I have it now: use Zend\Db\Sql\Expression; $resultSet = $this->tableGateway->select(function (Select $select){ // Select columns and count the forums. $select->columns(array( 'category_name', 'forumsCount' => new Expression('COUNT(forums.forum_id)') )); // Left-join with the forums table. $select->join('forums', 'categories.category_id = forums.category_id', array(), 'left'); // Group by the category name. $select->group('categories.category_name');});return $resultSet;
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16873897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1154916/']}
|
jdg_369867
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42962765
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am attempting to "pip install" PyAutoGUI ( among other modules ) and I am getting the following error : Collecting pyautogui Using cached PyAutoGUI-0.9.35.zip Complete output from command python setup.py egg_info:running egg_infocreating pip-egg-info\PyAutoGUI.egg-infowriting top-level names to pip-egg-info\PyAutoGUI.egg-info\top_level.txtwriting requirements to pip-egg-info\PyAutoGUI.egg-info\requires.txtwriting pip-egg-info\PyAutoGUI.egg-info\PKG-INFOwriting dependency_links to pip-egg-info\PyAutoGUI.egg-info\dependency_links.txtwriting manifest file 'pip-egg-info\PyAutoGUI.egg-info\SOURCES.txt'error: [Errno 2] No such file or directory: 'C:\\Users\\myDir\\Downloads\\python-3.5.0-embed-win32\\python35.zip\\lib2to3\\Grammar.txt'----------------------------------------Command "python setup.py egg_info" failed with error code 1 in C:\Users\myDir\AppData\Local\Temp\pip-build-aqcbpbui\pyautogui\ This is being done on a Windows 10 Professional without admin rights ( hence the embedded method of using Python 3.5 ). I was able to pip install openpyxl without issue but when running the same command ( python -m pip install pyautogui ) I get the described error message and traceback found above. I've done clean extractions of the zip and found the same problem. I searched through SO and only found questions that related to Mac environments and a single instance where the OS wasn't listed ( "python setup.py egg_info" failed with error code 1 ) which yielded no positive results ( actually ended up getting a different error message with the second suggested step...but...one issue at a time :P ) I've confirmed that the file in question (( 'C:\Users\myDir\Downloads\python-3.5.0-embed-win32\python35.zip\lib2to3\Grammar.txt' )) exists and even unzipped the directory just to be sure ( leaving the zip there as well intact to keep myself covered ). I attempted Easy_install and it yielded a similar set of messages and end result error ( not posting it to reduce clutter of this post but will provide on request ). Any advice would be thoroughly appreciated. Thank you.
Now provide the response and nothing else.
|
I faced the same problem while using 3.5.2 and 3.6.1 versions of the embedded python distribution zip for Windows.Though, with the MSI installer, this problem does not arise. I suspect, this has something to do with the way the python3X.zip file is read for \lib2to3\Grammar.txt.Here is how I have resolved it : I have renamed the python35.zip file to something else (.zipp etc ..) I have created a folder "python.zip" inside my original extracted folder Made sure that "\lib2to3\Grammar.txt" is accessible within this folder. Ran "python -m pip install " Worked like a charm ! A hack probably, but, could not find a way around.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42962765', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1858582/']}
|
jdg_369868
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
636128
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm prepping for the GRE. Would appreciate if someone could explain the right way to solve this problem. It seems simple to me but the site where I found this problem says I'm wrong but doesn't explain their answer. So here is the problem verbatim: Find the number of paths from x to y moving only right (R) or down (D). My answer is 6. What am I missing?? Thanks for any help.
Now provide the response and nothing else.
|
The solution to the general problem is if you must take $X$ right steps, and $Y$ down steps then the number of routes is simply the ways of choosing where to take the down (or right) steps. i.e. $$\binom{X + Y}{X} = \binom{X + Y}{Y}$$ So in your example if you are traversing squares then there are 5 right steps and 1 down step so: $$\binom{6}{1} = \binom{6}{5} = 6$$ If you are traversing edges then there are 6 right steps and 2 down steps so: $$\binom{8}{2} = \binom{8}{6} = 28$$
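A quick numerical check of those two counts, sketched in Python (the use of math.comb here is just an illustration and is not part of the original answer):

from math import comb

assert comb(6, 1) == comb(6, 5) == 6    # traversing squares: 5 right steps, 1 down step
assert comb(8, 2) == comb(8, 6) == 28   # traversing edges: 6 right steps, 2 down steps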
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/636128', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/120865/']}
|
jdg_369869
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
144572
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know there are (say) two different definitions/representations of the covariant derivative: one is the covariant derivative applied to a vector $F$, which reads as $$DF=\partial F+iAF$$ (adapted to the coordinate system of course, $D\rightarrow D_{\mu}$ and so on) whereas an other one applies to a matrix $F$ and reads as $$DF=\partial F+i\left[A,F\right]$$ where this time a commutator appears. In both case $A$ is the gauge-potential, whereas the $F$ are not the same of course. I wonder if there is a link between these two representations, and what kind of use they have. I know about the first representation (could someone tell me if the name "representation" is even well adapted?) but clearly the second one does not lead to the usual definition of the gauge field, say $B_{\alpha\beta}=\left[D_{\alpha},D_{\beta}\right]$, so I'm pretty much struggling to understand if it has a geometric interpretation. Most of all: What are the different names of these two representations (if one can use this terminology)? Good references about that are warmly welcomed.
Now provide the response and nothing else.
|
(I am dropping the bothersome factors of $\mathrm{i}$ in this answer, they contribute nothing to understanding what is going on) The gauge covariant derivative exists for all forms on the spacetime manifold $\mathcal{M}$ taking value in a representation of the gauge group. (Formally, these are sections of associated vector bundles to the gauge principal bundle) Given a representation $\rho : G \to \mathrm{GL}(V_\rho)$ with $V_\rho$ some vector space, $V_\rho$-valued $n$-forms are elements of $\Omega^n(\mathcal{M}) \otimes V_\rho$. The gauge covariant derivative associated to a (Lie-algebra valued) gauge field $A$ is $$ \mathrm{d}_A = \mathrm{d} + \mathrm{d}\rho(A)$$ where $\mathrm{d}$ is the ordinary exterior derivative and $\mathrm{d}\rho$ is the induced representation of the Lie algebra. Therefore, the explicit form of the gauge covariant derivative depends on the representation $\rho$. For your "matrix", it is implicit that the matrix is actually taking values in the Lie algebra, i.e. the adjoint representation , for which $\mathrm{d}\rho(A)X = [A,X]$ (essentially a consequence of $\rho(g)X = gXg^{-1}$ and the Baker-Campbell-Hausdorff formula ). For your "vector", it is implied that it transforms in the fundamental representation, where, if we initially present $G$ as a matrix group, the representation is actually the identity, leading to $\mathrm{d}\rho(A)X = AX$ (Caveat: Not all Lie group are matrix groups, but we physicists don't care, ours always are)
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/144572', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/16689/']}
|
jdg_369870
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14681654
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to add a response interceptor to my $http service for error handling purposes. The interceptor logic includes sending error messages to the server using $http when necessary, BUT I don't want to send error messages to the server about those error messages; in other words, I want to disable my interceptor while sending an error message to the server. My idea was to create a service named 'remote_log' and put inside it all the code needed to send errors to the server. That service of course will use the $http service and have it in its dependency list. Then I would add the 'remote_log' service as a dependency of the interceptor, and use 'remote_log' inside the interceptor when I need to send errors to the server. The problem is that: Interceptors must be defined using the $httpProvider when the $http service is not yet instantiated/accessible, so the interceptor code can't have a dependency on the $http service, because a "Circular dependency" error happens. I think my only option is to create a separate instance of the $http service inside my 'remote_log', an instance that doesn't use the $httpProvider configuration I set while creating the interceptor. My question is: How can I do that? Any other ideas?
Now provide the response and nothing else.
|
1. Circular dependency problem. So, why does the error appear? Here is a quick overview of the process: $http service is requested. $httpProvider is asked to construct it. During construction you register interceptor, that requests $http service not existing yet. You get "Circular dependency" error. First solution. Create your dependency using angular.injector(). Notice, that you will create another $http service, independent from your app. $httpProvider.interceptors.push(function($q) { $injector = angular.injector(); return { response: function(response) { $injector.invoke(function($http) { // This is the exterior $http service! // This interceptor will not affect it. }); } };}); Second solution (better). Inject $injector in your interceptor and use it to retrieve dependencies after $http initialization, right at the time you need them. These dependencies are registered services of your app and will not be created anew! $httpProvider.interceptors.push(function($q, $injector) { return { response: function(response) { $injector.invoke(function($http, someService) { // $http is already constructed at the time and you may // use it, just as any other service registered in your // app module and modules on which app depends on. }); } };}); 2. Interception prevention problem. If you use the second solution, there are actually two problems: If you utilize $http service inside yourinterceptor, you may end up with infinite interceptions: you sendrequest, interceptor catches it, sends another, catches another,send again, and so on. Sometimes you want just prevent request from being intercepted. The 'config' parameter of $http service is just an object. You may create a convention, providing custom parameters and recognizing them in your interceptors. For example, let's add "nointercept" property to config and try duplicate every user request. This is a silly application, but useful example to understand the behavior: $httpProvider.interceptors.push(function($q, $injector) { return { response: function(response) { if (response.config.nointercept) { return $q.when(response); // let it pass } else { var defer = $q.defer(); $injector.invoke(function($http) { // This modification prevents interception: response.config.nointercept = true; // Reuse modified config and send the same request again: $http(response.config) .then(function(resp) { defer.resolve(resp); }, function(resp) { defer.reject(resp); }); }); return defer.promise; } } };}); Having the testing for property in interceptor, you may prevent the interception in controllers and services: app.controller('myController', function($http) { // The second parameter is actually 'config', see API docs. // This query will not be duplicated by the interceptor. $http.get('/foo/bar', {nointercept: true}) .success(function(data) { // ... });});
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14681654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_369871
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38651692
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following array: a = [0,0,0,1,1,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,1,0,0,0] Each time I have a '1' or a consecutive series of them, this is one event. I need to get, in Python, how many events my array has. So in this case we will have 5 events (that is, 5 times a 1 or a sequence of 1s appears). I need to count such events in order to get: b = [5] Thanks
Now provide the response and nothing else.
|
You could use itertools.groupby (it does exactly what you want - groups consecutive elements) and count all groups which starts with 1 : In [1]: from itertools import groupbyIn [2]: a = [0,0,0,1,1,1,0,0,1,0,0,0,1,1,0,0,0,0,0,0,1,1,0,1,1,1,0,0,0]In [3]: len([k for k, _ in groupby(a) if k == 1])Out[3]: 5 what if I wanted to add a condition that an event is given as long as there are is 2 or more '0's in between. This could be done using groupby and custom key function: from itertools import groupbyclass GrouperFn: def __init__(self): self.prev = None def __call__(self, n): assert n is not None, 'n must not be None' if self.prev is None: self.prev = n return n if self.prev == 1: self.prev = n return 1 self.prev = n return ndef count_events(events): return len([k for k, _ in groupby(events, GrouperFn()) if k == 1])def run_tests(tests): for e, a in tests: c = count_events(e) assert c == a, 'failed for {}, expected {}, given {}'.format(e, a, c) print('All tests passed')def main(): run_tests([ ([0, 1, 1, 1, 0], 1), ([], 0), ([1], 1), ([0], 0), ([0, 0, 0], 0), ([1, 1, 0, 1, 1], 1), ([0, 1, 1, 0, 1, 1, 0], 1), ([1, 0, 1, 1, 0, 1, 1, 0, 0, 1], 2), ([1, 1, 0, 0, 1, 1], 2), ([0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0], 4) ])if __name__ == "__main__": main() The idea is pretty simple - when a 0 goes after a group of 1 's, it could be a part of the group and therefore should be included in that group. The next event either continues the group (if the event is 1 ) or splits it (if the event is 0 ) Note, that presented approach will work only when you need to count a number of events, since it splits [1, 1, 0, 0] as [[1, 1, 0], [0]] .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38651692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5583038/']}
|
jdg_369872
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
60653504
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to set up replication between databases. I followed these tutorials: https://learn.microsoft.com/en-us/sql/relational-databases/replication/tutorial-preparing-the-server-for-replication?view=sql-server-ver15 and after that: https://learn.microsoft.com/en-us/sql/relational-databases/replication/tutorial-replicating-data-between-continuously-connected-servers?view=sql-server-ver15 I went through the first without any problems. I almost completed the second, but I got stuck on the step View the status of snapshot generation. When I try to start the snapshot agent, I get the error: Exception Message: An unspecified error had occurred in the native SQL Server connection component. I tried to search the internet for MSSQL_REPL55012 (that's the error code I get), but didn't succeed. I tried to look in SQL Profiler, job history, Replication Monitor, and even ran the failing job step from PowerShell, but couldn't get any more info on that. What might be the issue? Has anybody else stumbled upon this? I am using: SSMS version 15.0.18206.0 SQL Server engine 15.0.2000.5
Now provide the response and nothing else.
|
I finally got it resolved. The issue was a missing OLE DB driver. I just downloaded the latest driver . Then I had the msoledbsqlr.rll driver in the directory C:\Windows\SysWOW64\1033 . I copied it to the C:\Program Files\Microsoft SQL Server\150\COM directory, so it can be found by snapshot.exe , which is the executable run by the Snapshot Agent. Alternatively, one can define an environment variable to point to the C:\Windows\SysWOW64\1033 directory, so it will be found there. Another issue might be that 64-bit SQL Server tries to use 32-bit drivers ( msoledbsql.dll , msoledbsqlr.rll ), but that wasn't the issue in my case.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60653504', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7132550/']}
|
jdg_369873
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7917625
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I set my service instance as Per Session or Single, can I send some data between service instances in a session? Should it be done with the ASP.NET session (HttpContext.Current.Session), or does WCF have its own session?
Now provide the response and nothing else.
|
As I said - WCF is not ASP.NET and its session handling is vastly different. While ASP.NET sessions and WCF sessions are called the same - they are vastly different in their purpose and usefulness. Read the MSDN page Using Sessions in WCF for more details. One sentence reads: There is no general data store associated with a WCF session. - so the answer is no - sessions in WCF are not meant for data storage. WCF sessions are merely to "tie together" several messages into a conversation. By default, with the "per-call" model, each WCF service request would get its own, freshly instantiated service class instance to handle the request, and that service class instance will be freed after returning the answer. Using sessions avoids this - the service class instance handling the first call of a session will stay alive on the server side (and thus also taking up memory on the server) and will handle all subsequent requests within the same session. WCF and web services in general should however preferably be stateless , so sessions are a bit of an oddball architecture in a proper SOA environment - and that's most likely why sessions in WCF are also not nearly as useful as ASP.NET sessions are for web apps. To remain stateless and support the per-call method (the preferred best practice), if you need to store data between calls, store it in a persistent store (e.g. a database) and fetch it back from there when needed later on.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7917625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259227/']}
|
jdg_369874
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15127466
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I read that <html> and <body> are block level elements, just like <div> and <p> . I understand that block level elements start a new line. For example aaa<div>b</div>ccc renders as three lines: aaa, then b, then ccc. So, why don't <html> and <body> add two lines to the top of your html page?
Now provide the response and nothing else.
|
Block level elements don't "start new lines"... they simply expand to both sides indefinitely until they hit a container element or the sides of the display ( width:100% )... because of this, they have the effect of "pushing" any other content down below them, or dropping below any inline content that immediately precedes them. It also means that block-level elements will only "push down" sibling-level elements. <html> and <body> elements have no siblings, only children, so they don't need to displace anything. Here's a graphical representation of what's happening: Given this markup: <html><head></head><body> <div> </div> <div> </div> <div style='width:45%; float:left;'> <div> </div> </div> <div style='width:45%; float:left;'> </div></body></html>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15127466', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/566261/']}
|
jdg_369875
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
346838
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a homework problem from do Carmo. Given a regular parametrized surface $X(u,v)$ we define the parallel surface $Y(u,v)$ by $$Y(u,v)=X(u,v) + aN(u,v)$$ where $N(u,v)$ is the unit normal on $X$ and $a$ is a constant. I have been asked to compute the Gaussian and mean curvatures $\overline{K}$ and $\overline{H}$ of $Y(u,v)$ in terms of those of X, $K$ and $H$. Now, I know how to do this by brute force: calculate the coefficients of the first and second fundamental forms of $Y$ in terms of those of $X$. However, this is a lengthy and messy calculation. do Carmo says that $$\overline{K}=\frac{K}{1-2Ha+Ka^2}$$ and $$\overline{H}=\frac{H-Ka}{1-2Ha+Ka^2}.$$ The denominator of these fractions is actually something that arose earlier in the problem; I calculated $$Y_u\times Y_v=(1-2Ha+Ka^2)(X_u\times X_v).$$ So, it seems like I should be able to calcuate $\overline{K}$ and $\overline{H}$ from this initial step. Is there something I'm missing? Or, is it actually just a brute force calculation? Thanks.
Now provide the response and nothing else.
|
Yes, you can compute all the coefficients $e,f,g,E,F,G$ and get the gaussian and mean curvature and yes, it's tedious. Here's another way: From the first step we get : $Y_u\times Y_v=(1-2Ha+Ka^2)(X_u\times X_v)$, ie if $N$ and $\overline N$ are the normal vectors of $X$ and $Y$ respectively, then $\overline N\circ Y$ and $N\circ X$ coincide, since they're parallel. If these functions coincide then we have the following relations : $$d\overline N(Y_u)=(\overline N\circ Y)_u=(N\circ X)_u=dN(X_u) \tag1$$$$d\overline N(Y_v)=(\overline N\circ Y)_v=(N\circ X)_v=dN(X_v) \tag2$$ Let $\overline B$ be the matrix of $d\overline N$ with respect to $\{Y_u,Y_v\}$ and $B$ the matrix of $dN$ with respect to $\{X_u,X_v\}$. Now, to compute $\overline K$ and $\overline H$ we need to find the expression of $\overline B$. Put $$B=\begin{bmatrix}b_{11} & b_{12}\\ b_{21} & b_{22}\\ \end{bmatrix}$$ From the definition of $Y$ we have:$$Y_u=X_u+a\cdot N_u=(a\cdot b_{11}+1)\cdot X_u+a\cdot b_{21}\cdot X_v$$ $$Y_v=X_v+a\cdot N_v=a\cdot b_{12}\cdot X_u+(a\cdot b_{22}+1)\cdot X_v$$ From these equations we can get the "change of basis" matrix : $Q=\begin{bmatrix}a\cdot b_{11}+1 & a\cdot b_{12}\\ a\cdot b_{21} & a\cdot b_{22}+1\\ \end{bmatrix}$ from $\{X_u,X_v\}$ to $\{Y_u,Y_v\}$. Then from the initial relations $(1)$ and $(2)$, we have the following equation: $$B=Q\cdot \overline B$$ Since $Q$ is invertible: $$ \overline B=Q^{-1}\cdot B$$ From this point you can compute the entries of $\overline B$ and calculate $\overline H $ and $ \overline K$. You can also notice that, since $Q^{-1}=(I+a\cdot B)^{-1}$, you have $\overline B=(I+a\cdot B)^{-1}\cdot B $. So, if $B$ has eigenvalues $-\lambda_1$ and $-\lambda_2$, then the eigenvalues of $\overline B$ are $\frac{-\lambda_1}{1-a\cdot \lambda_1}$ and $\frac{-\lambda_2}{1-a\cdot \lambda_2}$ and you can easily compute $\overline H$ and $\overline K$.
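Spelling out that final computation, with the usual convention that the principal curvatures $\lambda_1,\lambda_2$ of $X$ are the negatives of the eigenvalues of $dN$ (so $K=\lambda_1\lambda_2$ and $2H=\lambda_1+\lambda_2$): the eigenvalues of $\overline B$ found above say that the principal curvatures of $Y$ are $\frac{\lambda_i}{1-a\lambda_i}$, hence $$\overline K=\frac{\lambda_1}{1-a\lambda_1}\cdot\frac{\lambda_2}{1-a\lambda_2}=\frac{\lambda_1\lambda_2}{1-a(\lambda_1+\lambda_2)+a^2\lambda_1\lambda_2}=\frac{K}{1-2Ha+Ka^2}$$ and $$\overline H=\frac12\left(\frac{\lambda_1}{1-a\lambda_1}+\frac{\lambda_2}{1-a\lambda_2}\right)=\frac{\frac12(\lambda_1+\lambda_2)-a\lambda_1\lambda_2}{1-2Ha+Ka^2}=\frac{H-Ka}{1-2Ha+Ka^2},$$ which are exactly do Carmo's formulas.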
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/346838', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35423/']}
|
jdg_369876
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1720
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
For an algebraic variety X over an algebraically closed field, does there always exist a finite set of (closed) points on X such that the only automorphism of X fixing each of the points is the identity map? If Aut(X) is finite, the answer is obviously yes (so yes for varieties of logarithmic general type in characteristic zero by Iitaka, Algebraic Geometry, 11.12, p340). For abelian varieties, one can take the set of points of order 3 [added: not so, only for polarized abelian varieties]. For P^1 one can take 3 points. Beyond that, I have no idea. The reason I ask is that, for such varieties, descent theory becomes very easy (see Chapter 16 of the notes on algebraic geometry on my website).
Now provide the response and nothing else.
|
I get that the answer is "no" for an abelian variety over the algebraic closure of $\mathbb{F}_p$ with complex multiplication by a ring with a unit of infinite order. Since you say you have already thought through the abelian variety case, I wonder whether I am missing something. More generally, let X be any variety over the algebraic closure of $\mathbb{F}_p$ with an automorphism f of infinite order. A concrete example is to take X an abelian variety with CM by a number ring that contains units other than roots of unity. Any finite collection of closed points of X will lie in $X(\mathbb{F}_q)$ for some $q=p^n$. Since $X(\mathbb{F}_q)$ is finite, some power of f will act trivially on $X(\mathbb{F}_q)$. Thus, any finite set of closed points is fixed by some power of f. As I understand the applications to descent theory, this is still uninteresting. For that purpose, we really only need to kill all automorphisms of finite order, right?
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/1720', 'https://mathoverflow.net', 'https://mathoverflow.net/users/930/']}
|
jdg_369877
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40069264
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an RDD whose elements are of type (Long, String). For some reason, I want to save the whole RDD into the HDFS, and later also read that RDD back in a Spark program. Is it possible to do that? And if so, how?
Now provide the response and nothing else.
|
It is possible. In RDD you have the saveAsObjectFile and saveAsTextFile functions. Tuples are stored as (value1, value2) , so you can later parse them. Reading can be done with the textFile function from SparkContext and then .map to eliminate the () So: Version 1: rdd.saveAsTextFile ("hdfs:///test1/");// later, in other programval newRdds = sparkContext.textFile("hdfs:///test1/part-*").map (x => { // here remove () and parse long / strings}) Version 2: rdd.saveAsObjectFile ("hdfs:///test1/");// later, in other program - watch, you have tuples out of the box :)val newRdds = sparkContext.objectFile[(Long, String)]("hdfs:///test1/part-*")
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40069264', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1018562/']}
|
jdg_369878
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
911719
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a direct way to evaluate: $$\sum_{k=0}^{\infty} (-1)^k \dfrac{\pi^{2k}}{(2k)!}=-1$$ Note that this follows from Euler's identity.
Now provide the response and nothing else.
|
Recall that Euler's formula is $e^{ix}=\cos x+i\sin x$. When $x=\pi$, we have $e^{i\pi}=\cos\pi+i\sin\pi=-1\implies e^{i\pi}+1=0$. From $e^{i\pi}+1=0\implies e^{i\pi}=-1\implies (e^{i\pi})^n=(-1)^n$. From $e^{ix}=\cos x+i\sin x$, when $x=n\pi$, $e^{in\pi}=\cos {n\pi}+i\sin {n\pi}=(-1)^n$. This implies that $\cos {n\pi}=(-1)^n$ and $\sin {n\pi}=0$, for all $n\in \mathbb Z$.
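To connect this back to the series in the question: the sum is just the even-order (real) part of the exponential series for $e^{i\pi}$, i.e. the Maclaurin series of cosine evaluated at $\pi$: $$\sum_{k=0}^{\infty}(-1)^k\frac{\pi^{2k}}{(2k)!}=\sum_{k=0}^{\infty}\frac{(i\pi)^{2k}}{(2k)!}=\operatorname{Re}\,e^{i\pi}=\cos\pi=-1.$$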
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/911719', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/60273/']}
|
jdg_369879
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34767016
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to understand the difference between a container map and a struct in MATLAB. From what I've seen, it seems that the advantage of a container map is that it can use any number or string as a key, while a struct field can only take strings that are legit variable names (e.g. it won't accept mystruct.('123string') ). Are there any other advantages of using a container map over a struct? Thanks
Now provide the response and nothing else.
|
The concepts behind a container map and a struct are quite different: A container map is used to create a mapping or indexing. For example if you have a matrix A and want to index the element in the second row and second column, you will index it using A(2,2) . For numbers, matrices and so on, where you want to index a specific row number, this is convenient. Assume however, you have a situation as shown in the following example by Mathworks : Here you have an array of values, which you could index as value(1) for January, and so on. However this is not very legible. It would be much more convenient if you could index it with value('Jan') . This is exactly what container maps offer. As @marsei remarked in his comment, container maps are a Java-based, unordered, construct, that uses hash-tables for indexing. A struct is a different construct, which is C-based and ordered (thanks for the insignt @marsei). The main use of struct is to help you store data in a more logical way. For example when using images, you often have two variables: one for the image data, and one for the color map. Without structs, you need to keep those two separate variables in the workspace. For multiple images this gets very confusing (e.g. using names like img0_data , img0_map and so on). A struct helps you organize this in a simple way: A struct called img0 with the fields data and map . In the context of dictionaries, both constructs are more or less equivalent, though structs usually seem to be faster than container maps. Plus, as already mentioned in the question, the key of a container map can be any single value, while for structs it has to be a string.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34767016', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2476373/']}
|
jdg_369880
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
837849
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The question is asking me to give an example of a finitely generated $R$-module that is torsion-free but not free. I remember in lecture the lecturer said something about the ideal $(2,X)$ in $\mathbb{Z}[X]$, considered as a $\mathbb{Z}[X]$-module, being torsion-free but not free. But I don't know why. What I know about these objects is: $(2,X)$ is the ideal that contains all polynomials with an even constant term; $(2,X)$ is not a principal ideal, so $\mathbb{Z}[X]$ is not a PID. I am not sure whether these facts help to answer the question.
Now provide the response and nothing else.
|
I'll take tomasz' suggestion and write my comments here, as an answer. Hint: (Exercise) Let $R$ be an integral domain. Then an ideal $I\subset R$ is a free $R$-module if and only if it is principal. Your proof of this fact, @SamC, is essentially solid. However, you should actually be using the fact that $R$ is an integral domain in the $(\Leftarrow)$ direction, not $(\Rightarrow)$; take a quick second look at the definition of linear independence. Also, before applying the hint, one must also show that $(2,X)$ is not a principal ideal in $\mathbb{Z}[X]$. Sounds silly, I know. But it's worthwhile to rigorously show that you can't write $(2,X)=(f)$ for some $f \in \mathbb{Z}[X]$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/837849', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/139011/']}
|
jdg_369881
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51716530
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was trying to use AWS Aurora Serverless for MySQL in my project, but I am unable to connect to it, though I have the endpoint, username, and password. What I have done: From the AWS management console, I select RDS > Instances > Aurora > Serverless, leave the default settings, and create the database; AWS will only create an AWS cluster. I open MySQL Workbench, and use the endpoint, username, and password to connect to the database. Result: Your connection attempt failed for user 'admin' from your host to server at xxxxx.cluster-abcdefg1234.eu-west-1.rds.amazonaws.com:3306: Can't connect to MySQL server on 'xxxxx.cluster-abcdefg1234.eu-west-1.rds.amazonaws.com' (60) Did I make any wrong steps? Please advise me. ****EDIT**** I tried to create another Aurora database with capacity type: Provisioned. I can connect to that endpoint seamlessly with the username and password via MySQL Workbench. It means that port 3306 is open for Workbench. About the security group:
Now provide the response and nothing else.
|
From https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/aurora-serverless.html : You can't give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within avirtual private cloud (VPC) based on the Amazon VPC service. You can't access an Aurora Serverless DB cluster's endpoint through anAWS VPN connection or an inter-region VPC peering connection. Thereare limitations in accessing a cluster's endpoint through anintra-region VPC peering connection; for more information, seeInterface VPC Endpoints (AWS PrivateLink) in the Amazon VPC UserGuide. However, you can access an Aurora Serverless cluster'sendpoint through an AWS Direct Connect connection. So, aside from SSH-ing through an EC2 instance, you can also access your serverless cluster with mySQL Workbench with AWS Direct Connect. You can also set up a mySQL Workbench through a RDP connection to a Windows EC2 and access the Serverless cluster. This instance only needs to be up when you need to access the Aurora. If one of the setups here don't work, the usual suspects are the VPC Security group, firewall rules vs port number configured on the cluster or IAM configuration if connecting using IAM.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51716530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7915196/']}
|
jdg_369882
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10984741
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have a script that has date arguments for different functions and I want it to loop through 01-01-2012 to 06-09-2012 not including weekends. Im trying to figure out a way I can use time delta because my script outputs files with the date used in the name of the file for example: items = (functions.getItems(item,date) print items test = sum(abs(l[-1]) for l in items) total = open('total' +str(datetime.today- datetime.timedelta(1)),'a') I want timedelta(1) to cycle through each date so that the output file would have the format of total2012-01-01 for the first day and cycle through until it created the file total2012-06-09 . Also the date argument for items has the format of MM-DD-YYYY I thought that I could do this: sd = 01-01-2012ed = 06-09-2012delta = datetime.timedelta(days=1)diff = 0while sd != ed # do functions # (have output files (datetime.today - datetime.delta(diff)) diff +=1 sd+=delta So essentially I'm just trying to figure out how can I loop through having the function start with 01-01-2012 and ending with 06-10-2012 excluding weekends. I'm having trouble figuring out how to exclude weekends and how to get it to loop in the proper order Thanks
Now provide the response and nothing else.
|
Use the datetime.weekday() method. It returns values between zero and six, corresponding to the weekdays. Saturday's value is 5 and Sunday's value is 6; so, if you skip the operation when these values appear, you skip weekends: from datetime import datetime, timedelta start = datetime(2012, 1, 1)end = datetime(2012, 10, 6)delta = timedelta(days=1)d = startdiff = 0weekend = set([5, 6])while d <= end: if d.weekday() not in weekend: diff += 1 d += delta
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10984741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1440194/']}
|
jdg_369883
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
222383
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a method to be tested which depends on the OS. What's the best approach for testing a method that depends on the OS? Is it to test on every OS that I require? Is there a better approach that runs the tests on only one system? Some more specific information: I have a method which is dependent on the OS, using Python's platform.system method in an if-else statement, and I need to test all the possible outcomes from this method. This method looks like: def updateString(): if platform.system() == 'Darwin': return "http://download.com/to/OSX/version" elif platform.system() == 'Linux': return "http://download.com/to/Linux/version" elif platform.system() == 'Windows': return "http://download.com/to/Windows/version" else: return "Seek advice on how to update."
Now provide the response and nothing else.
|
If you target multiple OSs, you most probably have to test your program on each of them. These tests should typically include running your test suite on the desired OS. So the straightforward solution for unit testing the above method is to adapt the expected result to platform.system() in exactly the same way you showed above:

def testUpdateString():
    expectedResult = ""
    if platform.system() == 'Darwin':
        expectedResult = "http://download.com/to/OSX/version"
    elif platform.system() == 'Linux':
        expectedResult = "http://download.com/to/Linux/version"
    elif platform.system() == 'Windows':
        expectedResult = "http://download.com/to/Windows/version"
    else:
        expectedResult = "Seek advice on how to update."
    assertEqual(expectedResult, updateString())

Of course, the drawback is that you cannot get full test coverage of your code when you run the tests on just one OS. In this example that is probably acceptable, but if the "real" methods you want to test are more complicated, it may become more practical to unit test the different parts without actually changing the OS. If that's the case, refactor the core part into a separate method which gets platform.system() as a parameter:

def updateStringPerSystem(opSys):
    if opSys == 'Darwin':
        return "http://download.com/to/OSX/version"
    elif opSys == 'Linux':
        return "http://download.com/to/Linux/version"
    elif opSys == 'Windows':
        return "http://download.com/to/Windows/version"
    else:
        return "Seek advice on how to update."

def updateString():
    return updateStringPerSystem(platform.system())

Now you can write unit tests for updateStringPerSystem, passing whatever OS strings you like in different tests, which lets you exercise the branches for OS1, OS2, OS3, etc. even when your test suite only runs under OS1. Note that this works only for basic unit tests. As soon as you want to test a function which actually uses the return value of updateString in an OS-dependent way, you cannot expect to test Windows functionality when running your suite under Linux, or vice versa.
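As a complementary approach that is not part of the answer above: Python's standard-library unittest.mock can stand in for platform.system(), which lets you exercise every branch of the original updateString on a single OS without refactoring. A sketch, where the module name updater holding updateString is a made-up placeholder:

import platform
from unittest import mock
from updater import updateString  # hypothetical module containing updateString

def test_update_string_all_platforms():
    expected = {
        'Darwin': "http://download.com/to/OSX/version",
        'Linux': "http://download.com/to/Linux/version",
        'Windows': "http://download.com/to/Windows/version",
        'SomethingElse': "Seek advice on how to update.",
    }
    for system, url in expected.items():
        # Replace platform.system for the duration of the with-block only.
        with mock.patch('platform.system', return_value=system):
            assert updateString() == url

The caveat from the last paragraph still applies: mocking only covers the branching logic, not real OS-dependent behaviour.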
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/222383', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/11950/']}
|
jdg_369884
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27533912
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've moved to PDO and written a small function for all my queries ( SELECT/DELETE/INSERT/UPDATE ): function pdo_query($query, $parameter){ try { $db=new PDO("mysql:host=localhost;dbname=...","root",""); } catch (PDOException $e) { die("DB-Error!"); } $stmt = $db->prepare($query); $stmt->execute($parameter); if (substr($query,0,6)=="SELECT") { return $stmt->fetchAll(PDO::FETCH_ASSOC); } else { return $stmt->rowCount(); }} I'm using this function several times on some pages and so it produces multiple db connects/objects. Is it possible to put the connection outside the query-function? I thought this could work but it doesn't: global $db;try { $db=new PDO("mysql:host=localhost;dbname=...","root","");} catch (PDOException $e) { die("DB-Error!");}function pdo_query($query, $parameter){ $stmt = $db->prepare($query); $stmt->execute($parameter); if (substr($query,0,6)=="SELECT") { return $stmt->fetchAll(PDO::FETCH_ASSOC); } else { return $stmt->rowCount(); }} Any ideas?
Now provide the response and nothing else.
|
It is a matter of your design decision. Constness is intended to be used as a design concept. It is supposed to help you to implement a higher-level concept of what you consider "modifying" and "non-modifying" access. By properly using const on your class methods you can make the property of being "non-modifiable" to propagate from one object to another through references. "Referencing other objects" can represent at least two different design relationships: It can be used to implement aggregation , in which case the referee is considered to be an integral part of the referrer. In that case you are typically supposed to be enforcing constness of the access to all parts of the complete object: if the referrer is constant, the referee should also be considered constant. It is your responsibility to enforce the latter by properly const-qualifying your class's interface. In order to support that concept you'd normally never attempt to modify the referee inside const methods of the referrer (even though it is formally possible). And the other way around: if some method of referrer modifies the content of referee, you should never declare that method const (even though it is formally possible). Also, const methods of referrer should never return non-constant references to the referee. In such cases the outside world is not even supposed to know that the aggregated object is stored by reference . It is a just an implementation detail. For the outside world everything should look as if the aggregated object is a direct immediate member of the referrer. That is exactly what you observe in case of std::vector . It is designed that way to make sure that the constness of the entire vector propagates to the constness of the vector elements. In order to achieve that it implements two versions of operator [] reference operator[]( size_type pos );const_reference operator[]( size_type pos ) const; It can be used to implement pure referencing that does not imply aggregation. In that case the referee is considered a completely separate, unrelated object. The constness of the referrer is not supposed to propagate to the referee. It is OK in this case to modify the referee in the const methods of the referrer. It is OK to return non-constant references to the referee from const methods of the referrer. An example of this design would be standard smart pointers: a pointer refers to the pointee, but constness of the pointer does not in any way imply constness of the pointee. For example, std::shared_ptr has only one version of operator * T& operator*() const; which is declared const yet returns a non-constant reference to the pointed object.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27533912', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3763555/']}
|
jdg_369885
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
138075
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently I have had ideas of how to build a spectroscope, but I'm not sure if it will work. As can be seen in the diagram, the experiment is simple: it consists of a laser that generates the light that passes over a gas in a closed tube sample and the light goes straight into a prism which, according to one of the most basic laws of physics, decompose light. If you can share your thoughts about my project, if it will work, if it is efficient or other issues, I appreciate it. Felipe
Now provide the response and nothing else.
|
Lasers by definition only emit a single wavelength of light. You use one if you want that wavelength or if you want your photons to be in phase. You don't care about the photon phases, and you want to sample all wavelengths, so a laser is very much the wrong tool. If you just want collimation of the light, mirrors, lenses, or even just well-separated apertures can accomplish that. To see the absorption spectrum, shine any continuum light source at the gas. Note that this is not the same as "white" light, since many modern lightbulbs imitate white by emitting only a few select wavelengths. An incandescent light will work well. Note also that any particular wavelength photon will have a chance of passing through the sample and a chance of being absorbed. As you make the column density of material larger (use a longer path through the sample or increase its density) the photons that "should" be absorbed will in fact be absorbed with higher probability. In other words, this will make your spectral features more clear. Finally, narrow absorption lines can be difficult to see by eye (imperfect optics blur the image, etc.). Emission lines are easier. Practice taking spectra of emission sources: monitors, non-incandescent bulbs, salts thrown into a flame , etc.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/138075', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/30090/']}
|
jdg_369886
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17948369
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it safe to assume that function arguments are evaluated from left to right in Python? Reference states that it happens that way but perhaps there is some way to change this order which may break my code. What I want to do is to add time stamp for function call: l = []l.append(f(), time.time()) I understand that I can evaluate the arguments sequentially: l = []res = f()t = time.time()l.append(res, t) But it looks less elegant so I'd prefer the first way if I can rely on it.
Now provide the response and nothing else.
|
Quoting from the reference documentation : Python evaluates expressions from left to right. So yes, you can count on that (with one exception, see below). A call (the (...) part after a primary, such as a function name) is just another expression primary , and the arguments for the call are just more expressions. Note : There is one exception to this rule. When using *expression in a call (to expand an iterable to form additional positional arguments), then this expression is evaluated before any keyword argument expressions : >>> from itertools import count>>> def bar(n, r=(), c=count()): print(f'{next(c)}: bar({n!r})'); return r...>>> def foo(*args, **kwargs): pass...>>> foo(bar('a1'), spam=bar('a2'), *bar('varargs'), **bar('kwargs', {}))0: bar('a1')1: bar('varargs')2: bar('a2')3: bar('kwargs') The linked documentation states: A consequence of this is that although the *expression syntax may appear after explicit keyword arguments, it is processed before the keyword arguments[.]
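For the use case in the question there is one extra detail worth noting: list.append takes a single argument, so the timestamped entry needs to be wrapped in a tuple. Left-to-right evaluation then guarantees f() runs before time.time() is read, so the timestamp is taken after the call finishes. A minimal sketch:

import time

def f():
    return 42

l = []
# Arguments are evaluated left to right: f() is called first, then time.time().
l.append((f(), time.time()))
print(l)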
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17948369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1493342/']}
|
jdg_369887
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11131020
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using bash, how can one get the number of files in a folder, excluding directories from a shell script without the interpreter complaining? With the help of a friend, I've tried $files=$(find ../ -maxdepth 1 -type f | sort -n)$num=$("ls -l" | "grep ^-" | "wc -l") which returns from the command line: ../1-prefix_blended_fused.jpg: No such file or directoryls -l : command not foundgrep ^-: command not foundwc -l: command not found respectively. These commands work on the command line, but NOT with a bash script. Given a file filled with image files formatted like 1-pano.jpg , I want to grab all the images in the directory to get the largest numbered file to tack onto the next image being processed. Why the discrepancy?
Now provide the response and nothing else.
|
The quotes are causing the error messages. To get a count of files in the directory: shopt -s nullglobnumfiles=(*)numfiles=${#numfiles[@]} which creates an array and then replaces it with the count of its elements. This will include files and directories, but not dotfiles or . or .. or other dotted directories. Use nullglob so an empty directory gives a count of 0 instead of 1. You can instead use find -type f or you can count the directories and subtract: # continuing from abovenumdirs=(*/)numdirs=${#numdirs[@]}(( numfiles -= numdirs )) Also see " How can I find the latest (newest, earliest, oldest) file in a directory? " You can have as many spaces as you want inside an execution block. They often aid in readability. The only downside is that they make the file a little larger and may slow initial parsing (only) slightly. There are a few places that must have spaces (e.g. around [ , [[ , ] , ]] and = in comparisons) and a few that must not (e.g. around = in an assignment.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11131020', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214892/']}
|
jdg_369888
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22709758
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is my code <inputclass="form-control"ng-model="a.newTimes[$index]['time']._dateProxy"ng-change="updateProviderComment(a)"bs-timepickertype="text"> a.newTimes[$index]['time']._dateProxy is a new Date() object the point is: function updateProviderComment running only first time, when change timebug? how to fix?
Now provide the response and nothing else.
|
You want to set an alarm that starts a short-running service once per day. All that service does is to check something like a shared preference for the last-used-time. If that time is more than 3 days old, then the service sends a notification. Either way, the service then exits. Alternatively, your app could submit an alarm, each time it runs, that is defined to fire in 3 days and defined to replace any pre-existing alarm with the same ID. That alarm would be defined to open a short-running service that sends a notification. Here's some sample code that demonstrates the second approach. As written, it only updates the run time upon start-up. If your app is long-running, you'll want to call recordRunTime() periodically. The deprecated method getNotification() could be replaced with build() if the Notification.Builder object was replaced with a NotificationCompat.Builder. You can use the commented out line for delay to test with a shorter delay time. MainActivity.java: package com.example.comeback;import android.app.Activity;import android.content.Intent;import android.content.SharedPreferences;import android.os.Bundle;import android.util.Log;import android.view.View;public class MainActivity extends Activity {private final static String TAG = "MainActivity";public final static String PREFS = "PrefsFile";private SharedPreferences settings = null;private SharedPreferences.Editor editor = null;@Overrideprotected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main_activity); // Save time of run: settings = getSharedPreferences(PREFS, MODE_PRIVATE); editor = settings.edit(); // First time running app? if (!settings.contains("lastRun")) enableNotification(null); else recordRunTime(); Log.v(TAG, "Starting CheckRecentRun service..."); startService(new Intent(this, CheckRecentRun.class));}public void recordRunTime() { editor.putLong("lastRun", System.currentTimeMillis()); editor.commit(); }public void enableNotification(View v) { editor.putLong("lastRun", System.currentTimeMillis()); editor.putBoolean("enabled", true); editor.commit(); Log.v(TAG, "Notifications enabled");}public void disableNotification(View v) { editor.putBoolean("enabled", false); editor.commit(); Log.v(TAG, "Notifications disabled");} } CheckRecentRun.java: package com.example.comeback;import android.app.AlarmManager;import android.app.Notification;import android.app.NotificationManager;import android.app.PendingIntent;import android.app.Service;import android.content.Context;import android.content.Intent;import android.content.SharedPreferences;import android.os.IBinder;import android.util.Log;public class CheckRecentRun extends Service { private final static String TAG = "CheckRecentPlay"; private static Long MILLISECS_PER_DAY = 86400000L; private static Long MILLISECS_PER_MIN = 60000L;// private static long delay = MILLISECS_PER_MIN * 3; // 3 minutes (for testing) private static long delay = MILLISECS_PER_DAY * 3; // 3 days @Override public void onCreate() { super.onCreate(); Log.v(TAG, "Service started"); SharedPreferences settings = getSharedPreferences(MainActivity.PREFS, MODE_PRIVATE); // Are notifications enabled? if (settings.getBoolean("enabled", true)) { // Is it time for a notification? 
if (settings.getLong("lastRun", Long.MAX_VALUE) < System.currentTimeMillis() - delay) sendNotification(); } else { Log.i(TAG, "Notifications are disabled"); } // Set an alarm for the next time this service should run: setAlarm(); Log.v(TAG, "Service stopped"); stopSelf(); } public void setAlarm() { Intent serviceIntent = new Intent(this, CheckRecentRun.class); PendingIntent pi = PendingIntent.getService(this, 131313, serviceIntent, PendingIntent.FLAG_CANCEL_CURRENT); AlarmManager am = (AlarmManager) getSystemService(Context.ALARM_SERVICE); am.set(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + delay, pi); Log.v(TAG, "Alarm set"); } public void sendNotification() { Intent mainIntent = new Intent(this, MainActivity.class); @SuppressWarnings("deprecation") Notification noti = new Notification.Builder(this) .setAutoCancel(true) .setContentIntent(PendingIntent.getActivity(this, 131314, mainIntent, PendingIntent.FLAG_UPDATE_CURRENT)) .setContentTitle("We Miss You!") .setContentText("Please play our game again soon.") .setDefaults(Notification.DEFAULT_ALL) .setSmallIcon(R.drawable.ic_launcher) .setTicker("We Miss You! Please come back and play our game again soon.") .setWhen(System.currentTimeMillis()) .getNotification(); NotificationManager notificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE); notificationManager.notify(131315, noti); Log.v(TAG, "Notification sent"); } @Override public IBinder onBind(Intent intent) { return null; }} main_activity_layout: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > <Button android:layout_width="match_parent" android:layout_height="wrap_content" android:onClick="enableNotification" android:text="@string/enable" /> <Button android:layout_width="match_parent" android:layout_height="wrap_content" android:onClick="disableNotification" android:text="@string/disable" /></LinearLayout> strings.xml: <?xml version="1.0" encoding="utf-8"?><resources> <string name="app_name">Come Back</string> <string name="enable">Enable Notifications</string> <string name="disable">Disable Notifications</string></resources> AndroidManifest.com: <?xml version="1.0" encoding="utf-8"?><manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.comeback" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="11" android:targetSdkVersion="19" /> <uses-permission android:name="android.permission.VIBRATE" /> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name="com.example.comeback.MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <service android:name="com.example.comeback.CheckRecentRun" > </service> </application></manifest>
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22709758', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2996673/']}
|
jdg_369889
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
788411
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a python daemon running as a part of my web app/ How can I quickly check (using python) if my daemon is running and, if not, launch it? I want to do it that way to fix any crashes of the daemon, and so the script does not have to be run manually, it will automatically run as soon as it is called and then stay running. How can i check (using python) if my script is running?
Now provide the response and nothing else.
|
Drop a pidfile somewhere (e.g. /tmp). Then you can check whether the process is running by checking whether the PID in the file still exists. Don't forget to delete the file when you shut down cleanly, and check for it when you start up.

#!/usr/bin/env python
import os
import sys

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"

if os.path.isfile(pidfile):
    print("%s already exists, exiting" % pidfile)
    sys.exit()

with open(pidfile, 'w') as f:
    f.write(pid)

try:
    pass  # Do some actual work here
finally:
    os.unlink(pidfile)

Then you can check whether the daemon is running by checking whether the contents of /tmp/mydaemon.pid name an existing process. Monit (mentioned above) can do this for you, or you can write a simple shell script to check it using the return code from ps:

ps up `cat /tmp/mydaemon.pid` >/dev/null && echo "Running" || echo "Not running"

For extra credit, you can use the atexit module to ensure that your program cleans up its pidfile in most circumstances (normal exit, unhandled exceptions, etc.; nothing can run if the process is killed outright with SIGKILL).
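A small sketch of the atexit variant mentioned above; the pidfile path is just an example:

import atexit
import os
import sys

pidfile = "/tmp/mydaemon.pid"

if os.path.isfile(pidfile):
    print("%s already exists, exiting" % pidfile)
    sys.exit()

with open(pidfile, 'w') as f:
    f.write(str(os.getpid()))

# Remove the pidfile on any normal interpreter exit, including unhandled exceptions.
atexit.register(os.unlink, pidfile)

# ... do the daemon's actual work here ...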
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/788411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2592/']}
|
jdg_369890
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
381472
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have never really understood what generating functions are. I am trying to learn how they work in some counting problems; Take, for example, this problem. If I were to find the number of non-negative integral solutions to the following equation $3a+4b+2c+d=20$, the standard trick (as seen in some places on the internet) is to take the product of the series $$\displaystyle (1+x^3+x^6+\dots)(1+x^4+x^8+\dots )(1+x^2+x^4+\dots)(1+x+x^2+\dots)$$ and find the coefficient of $x^{20}$. I am not able to understand why that is the case. Perhaps, it can be explained in this manner. If $a$ is 0, [let $A(x)=1+x^3+x^6+\dots$] then $A(x)=x^0=1$, if $a=1$, $A(x)=1+x^3$ as each $a$ contributes $3a$ to the sum.But I am still not sure what is being done. I will be grateful if someone could tell me if I am on the right track .What exactly do we mean by the construction of a generating function of a sequence" in a rigorous manner and how are they useful in counting?I am not looking for techniques in manipulating generating functions but I wish to understand in a rigorous manner how they work.
Now provide the response and nothing else.
|
To quote the introductory words of Wilf in Chapter 1 of his excellent book " generatingfunctionology " , A generating function is a clothesline on which we hang up a sequence of numbers for display. Specifically, to answer your question on what it means to talk of the "generating function of a sequence": Given a sequence $a_0, a_1, a_2, \dots$, the generating function of that sequence is the object $A(x) = a_0 + a_1x + a_2x^2 + \dots$. This is a way of encoding all the elements of the sequence into a single object. For instance, consider the very simple sequence defined by $a_n = 1$ for all $n$. Its generating function is the object $A(x) = 1 + x + x^2 + \dots$, which can also (if you wish) be written more concisely as $\dfrac{1}{1-x}$. Or, consider the sequence defined by $a_n = 2^n$. The generating function of this sequence is the object $A(x) = 1 + 2x + 2^2x^2 + 2^3x^3 + \dots$, which can also be written as $\dfrac{1}{1-2x}$. Note that this gives a way to encode all the (infinitely many) elements of the sequence $1, 2, 2^2, 2^3, \dots$ into a single object $A(x)$. (Of course there are other ways too: you could say that the function $n \mapsto 2^n$ is a single object.) Anyway, now that you hopefully understand roughly what generating functions are, we can turn to their use in counting problems. In general, when we want to find an element of some sequence, one way is to find the generating function of the whole sequence first, and then find the particular element we care about. This may seem like more work, but in practice can often be simpler. So, let $a_n$ be the number of solutions to $3a + 4b + 2c + d = n$; we want to find the generating function $A(x) = a_0 + a_1x + a_2x^2 + \dots$ and then read off $a_{20}$ which is the coefficient of $x^{20}$ in it. All this should answer what generating functions are here, why take coefficient of $x^{20}$ etc. The actual part related to the counting problem is relatively simple (though nontrivial) once you understand all this: if you consider $(1 + x^3 + x^6 + \dots)$, the coefficient of $x^k$ in this is $1$ if $k$ can be written as $3a$ for some $a$, and $0$ otherwise. Similarly the other factors. For any solution $(a, b, c, d)$ satisfying $3a + 4b + 2c + d = n$, when you multiply the four factors $(1+x^3+x^6+\dots)(1+x^4+x^8+\dots)(1+x^2+x^4+\dots)(1+x+x^2+\dots)$ together, you'll get a bunch of terms, where each term in the product is got by taking one particular term from each of the factors, and multipying those four terms together. One of the particular products is that of taking $x^{3a}$ from the first factor, $x^{4b}$ from the second, etc., to give $x^{3a}x^{4b}x^{2c}x^{d} = x^{3a + 4b + 2c + d} = x^n$. Each solution $(a, b, c, d)$ contribues one such term $x^n$, and these solutions are the only way to get $x^n$ in the product. So the coefficient of $x^n$ is the number of times it occurs in the product, which is the number of solutions.
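To make the counting concrete, here is a small Python sketch that multiplies the four truncated series and reads off the coefficient of x^20; truncating every series at degree 20 is enough because higher-degree terms cannot contribute to that coefficient:

N = 20

def series(step, n=N):
    # Coefficient list of 1 + x^step + x^(2*step) + ... truncated at degree n.
    return [1 if k % step == 0 else 0 for k in range(n + 1)]

def multiply(p, q, n=N):
    # Polynomial product, truncated at degree n.
    r = [0] * (n + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= n:
                r[i + j] += a * b
    return r

prod = series(3)               # accounts for 3a
for step in (4, 2, 1):         # then 4b, 2c, d
    prod = multiply(prod, series(step))

print(prod[N])  # number of non-negative integer solutions to 3a + 4b + 2c + d = 20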
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/381472', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_369891
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
96843
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The ContourPlot is applied 5 times with this code: i = Range[1, 5]j = Range[1, 5]ContourPlot[x^2/i + y^2/j == 1, {x, -5, 5}, {y, -5, 5}] However, I would like it to be applied 5 separate times for each 5 values of i (so 25 times in total). So a plot for (i=1 and j=1), (i=1 and j=2), (i=1 and j=3)... etc. How do I create such a nested loop?
Now provide the response and nothing else.
|
Show[ContourPlot[x^2/#1 + y^2/#2== 1,{x, -5, 5},{y, -5, 5}] & @@@ Tuples[Range@5, {2}]]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/96843', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/30033/']}
|
jdg_369892
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4982734
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How is Dijkstra algorithm better tham A* algorithm for finding shortest path?
Now provide the response and nothing else.
|
It is not better at finding the shortest path. As long as you have an admissible heuristic for A*, it will typically find the shortest path more quickly than Dijkstra's algorithm would. And as Mehrad pointed out in the comments, A* degrades into Dijkstra's if you give it a heuristic function that returns 0. The Wikipedia article for A* has a ton of good information on all of this.
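A minimal sketch that makes the relationship explicit; the graph is an adjacency dict with non-negative edge weights. With heuristic h = 0 the search is exactly Dijkstra's algorithm, and plugging in a consistent (monotone) heuristic turns it into A*:

import heapq

def a_star(graph, start, goal, h=lambda n: 0):
    # graph: {node: [(neighbour, weight), ...]} with non-negative weights
    dist = {start: 0}
    pq = [(h(start), start)]                  # priority = g + h
    while pq:
        _, u = heapq.heappop(pq)
        if u == goal:
            return dist[u]
        for v, w in graph.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float('inf')

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2)], 'C': []}
print(a_star(graph, 'A', 'C'))   # 3, found with h = 0, i.e. plain Dijkstra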
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4982734', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/514493/']}
|
jdg_369893
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6091468
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can wrap my head around using a 2D Perlin noise function to generate the height value but I don't understand why a 3D Perlin noise function would be used. In Notch's blog , he mentioned using a 3D Perlin noise function for the terrain generation on Minecraft. Does anyone know how that would be done and why it would be useful? If you are passing x , y , and z values doesn't that imply you already have the height?
Now provide the response and nothing else.
|
The article says exactly why he used 3D noise: I used a 2D Perlin noise heightmap... ...but the disadvantage of being rather dull. Specifically, there’s no way for this method to generate any overhangs. So I switched the system over into a similar system based off 3D Perlin noise. Instead of sampling the “ground height”, I treated the noise value as the “density”, where anything lower than 0 would be air, and anything higher than or equal to 0 would be ground.
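To play with the density idea yourself, here is a rough sketch using the third-party noise package (its pnoise3 function); the scale, world size and zero threshold are arbitrary choices for illustration, not Notch's actual values:

from noise import pnoise3  # pip install noise

SCALE = 0.05
WIDTH, HEIGHT, DEPTH = 16, 32, 16

blocks = {}
for x in range(WIDTH):
    for z in range(DEPTH):
        for y in range(HEIGHT):
            density = pnoise3(x * SCALE, y * SCALE, z * SCALE)
            # density >= 0 -> solid, density < 0 -> air. Because y is part of the
            # noise input, overhangs and caves can appear, which a 2D heightmap
            # can never produce.
            blocks[(x, y, z)] = density >= 0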
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6091468', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256062/']}
|
jdg_369894
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18829012
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been trying to use mprotect against reading first, and then writing. Is here my code #include <sys/types.h>#include <sys/mman.h>#include <stdio.h>#include <stdlib.h>#include <unistd.h>int main(void){ int pagesize = sysconf(_SC_PAGE_SIZE); int *a; if (posix_memalign((void**)&a, pagesize, sizeof(int)) != 0) perror("memalign"); *a = 42; if (mprotect(a, pagesize, PROT_WRITE) == -1) /* Resp. PROT_READ */ perror("mprotect"); printf("a = %d\n", *a); *a = 24; printf("a = %d\n", *a); free (a); return 0;} Under Linux here are the results: Here is the output for PROT_WRITE : $ ./main a = 42a = 24 and for PROT_READ $ ./main a = 42Segmentation fault Under Mac OS X 10.7: Here is the output for PROT_WRITE : $ ./main a = 42a = 24 and for PROT_READ $ ./main [1] 2878 bus error ./main So far, I understand that OSX / Linux behavior might be different, but I don't understand why PROT_WRITE does not crash the program when reading the value with printf . Can someone explain this part?
Now provide the response and nothing else.
|
There are two things that you are observing: mprotect was not designed to be used with heap pages. Linux and OS X have slightly different handling of the heap (remember that OS X uses the Mach VM). OS X does not like it's heap pages to be tampered with. You can get identical behaviour on both OSes if you allocate your page via mmap a = mmap(NULL, pagesize, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0);if (a == MAP_FAILED) perror("mmap"); This is a restriction of your MMU (x86 in my case). The MMU in x86 does not support writable, but not readable pages. Thus setting mprotect(a, pagesize, PROT_WRITE) does nothing. while mprotect(a, pagesize, PROT_READ) removed write priveledges and you get a SIGSEGV as expected. Also although it doesn't seem to be an issue here, you should either compile your code with -O0 or set a to volatile int * to avoid any compiler optimisations.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18829012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51760/']}
|
jdg_369895
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
404815
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently, my PHP is on 5.3.3, how can I upgrade it? Also how can I upgrade anything? For example, if I want to upgrade phpMyAdmin as well?
Now provide the response and nothing else.
|
Upgrade all packages:

apt-get update; apt-get upgrade;

If you want to upgrade just one package (e.g. php5):

apt-get update; apt-get install php5;

For the package versions available on Debian, take a look at: http://www.debian.org/distrib/packages

If you want to install php5 5.4.4-2, which is only available on wheezy, you should add wheezy to your /etc/apt/sources.list:

deb http://ftp.us.debian.org/debian/ wheezy main non-free contrib

and then:

apt-get update
apt-get install -t wheezy php5

To avoid any surprises, you should use apt pinning so that your system does not pull other packages from wheezy; just create the file /etc/apt/preferences with:

Package: *
Pin: release n=squeeze
Pin-Priority: 650

Package: *
Pin: release n=wheezy
Pin-Priority: -10

So that when you do apt-get install, if you don't specify -t wheezy it will install the package from squeeze by default.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/404815', 'https://serverfault.com', 'https://serverfault.com/users/56241/']}
|
jdg_369896
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42923371
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there anything wrong or problematic with including spaces as well as a plus symbol (for the international code) in a telephone link? Example: <a href="tel:+46 000 00 00 00">+46 000 00 00 00</a> (I replaced the actual number I want to use with 0s here)
Now provide the response and nothing else.
|
HTML is irrelevant here. A URL is a URL no matter where you put it. The relevant spec says: even though ITU-T E.123 [E.123] recommends the use ofspace characters as visual separators in printed telephone numbers,"tel" URIs MUST NOT use spaces in visual separators to avoidexcessive escaping. So " no ". Even if a space was allowed, the general rules for URLs would mean that you would have to replace it with %20 (hence the comment about excessive escaping).
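In practice this just means stripping the visual separators when building the href while keeping them in the visible link text; a trivial sketch of the string handling (Python used only for illustration):

display = "+46 000 00 00 00"
href = "tel:" + display.replace(" ", "")          # tel:+460000000000
print('<a href="%s">%s</a>' % (href, display))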
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42923371', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/871404/']}
|
jdg_369897
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8443102
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The Eigen library can map existing memory into Eigen matrices. float array[3];Map<Vector3f>(array, 3).fill(10);int data[4] = 1, 2, 3, 4;Matrix2i mat2x2(data);MatrixXi mat2x2 = Map<Matrix2i>(data);MatrixXi mat2x2 = Map<MatrixXi>(data, 2, 2); My question is, how can we get c array (e.g. float[] a) from eigen matrix (e.g. Matrix3f m)? What it the real layout of eigen matrix? Is the real data stored as in normal c array?
Now provide the response and nothing else.
|
You can use the data() member function of the Eigen Matrix class. The layout by default is column-major, not row-major as a multidimensional C array (the layout can be chosen when creating a Matrix object). For sparse matrices the preceding sentence obviously doesn't apply. Example: ArrayXf v = ArrayXf::LinSpaced(11, 0.f, 10.f);// vc is the corresponding C array. Here's how you can use it yourself:float *vc = v.data();cout << vc[3] << endl; // 3.0// Or you can give it to some C api call that takes a C array:some_c_api_call(vc, v.size());// Be careful not to use this pointer after v goes out of scope! If// you still need the data after this point, you must copy vc. This can// be done using in the usual C manner, or with Eigen's Map<> class.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8443102', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/495798/']}
|
jdg_369898
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25068431
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
On my MVC View I have button: <input id="btnSave" type="submit" name="Save" value="Save" /> When I click this button I need call one Action, do some stuff there and then Submit my form. I have this jQuery: $('#btnSave').click(function () { $.ajax({ url: "/Home/SaveDetailedInfo", type: "POST", data: JSON.stringify({ 'Options': someData}), dataType: "json", traditional: true, contentType: "application/json; charset=utf-8", success: function (data) { if (data.status == "Success") { alert("Done"); } else { alert("Error occurs on the Database level!"); } }, error: function () { alert("An error has occured!!!"); } });}); Then I want to submit my form. In Controller I have 2 Actions: public ActionResult SaveDetailedInfo(Option[] Options){ return Json(new { status = "Success", message = "Success" });}[HttpPost]public ActionResult Save(){ return RedirectToAction("Index", "Home");} The problem is when I have type="submit" in my button, I can't reach SaveDetailedInfo Action, cause ajax gives me error , but when I remove type="submit" , ajax works fine, but Save Action never executes. Please, any ideas how to execute both Actions? I thought maybe after Ajax > Success try to add type=submit through jquery and use .click() , but it sounds strange to me.
Now provide the response and nothing else.
|
Use preventDefault() to stop the event of submit button and in ajax call success submit the form using submit() : $('#btnSave').click(function (e) { e.preventDefault(); // <------------------ stop default behaviour of button var element = this; $.ajax({ url: "/Home/SaveDetailedInfo", type: "POST", data: JSON.stringify({ 'Options': someData}), dataType: "json", traditional: true, contentType: "application/json; charset=utf-8", success: function (data) { if (data.status == "Success") { alert("Done"); $(element).closest("form").submit(); //<------------ submit form } else { alert("Error occurs on the Database level!"); } }, error: function () { alert("An error has occured!!!"); } });});
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25068431', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2348899/']}
|
jdg_369899
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
300444
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In analysis today we talked about re-arrangements of sequences, and one student asked how many re-arrangements there are of a given sequence. We were able to very quickly create a one-to-one function from the reals to the set of permutations on $\mathbb{N}$ by simply noting that for any real number, there is a re-arrangement of a conditionally convergent series that converges to that number. What we were not easily able to do was either prove that function was onto, or create an injection from the permutations on $\mathbb{N}$ back to the reals. So we know the number of re-arrangements is at least the cardinality of the reals, can we show it is exactly the same as the cardinality of the reals?
Now provide the response and nothing else.
|
Here you’ll find a proof that the infinite continued fractions with $0$ integer part are precisely the irrationals in $(0,1)$. The map $$\left(\Bbb Z^+\right)^{\Bbb Z^+}\to(0,1):a\mapsto[0;a_1,a_2,a_3,\dots]$$ is therefore an injection.
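A small numerical illustration of this map, evaluating finite truncations of the continued fraction just to make the values concrete:

from fractions import Fraction

def cf_value(terms):
    # Value of the finite continued fraction [0; a_1, ..., a_n].
    x = Fraction(0)
    for a in reversed(terms):
        x = 1 / (a + x)
    return x

print(float(cf_value([1] * 20)))   # ~0.618..., the golden ratio minus 1
print(float(cf_value([2] * 20)))   # ~0.414..., sqrt(2) - 1

Distinct sequences of positive integers give distinct irrationals in (0, 1), which is exactly the injection used above.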
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/300444', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/41686/']}
|
jdg_369900
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6712
|
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use SoftwareSerial with my ATTiny85 and when I google it several projects claim that they are using it... I downloaded the hardware from https://code.google.com/p/arduino-tiny/ , but when I try to add SoftwareSerial and compile I get: fatal error: SoftwareSerial.h: No such file or directory Switching the board to UNO makes the problem go away, so did they leave it out? How can I fix this?
Now provide the response and nothing else.
|
Short answer - no. It's not left out, on the contrary. But before addressing to the compiler error you mentioned, a small clarification: I downloaded the hardware from https://code.google.com/p/arduino-tiny/ The link you provided is not hardware , but software called the ATtiny core files . It contains the necessary files that will help you compile and upload code from Arduino IDE to the ATtiny (even when using ArduinoISP as programmer). The SoftwareSerial is a library that comes with the Arduino IDE zip/installation. If you have downloaded the latest version of Arduino IDE from the Arduino Website , you should be able to find the SoftwareSerial library files located at: [Path to Arduino installation folder]\libraries\SoftwareSerial Example for SoftwareSerial library files you should expect to find there are SoftwareSerial.cpp and SoftwareSerial.h . fatal error: SoftwareSerial.h: No such file or directory The compiler error you receive seems to imply that the compiler cannot find the header file (SoftwareSerial.h) for SoftwareSerial library. This indeed seems quite odd, since this library does not require any special installation, and as mentioned, is part of the libraries that are built into the Arduino IDE installation. So, to mitigate this, I would recommend that you will: Make sure the library folder exists in the location mentioned above If it doesn't, then you should probably re-download the Arduino IDE installation, and after that install the ATtiny core files as per the instructions (important!). Installation instructions are located in the README file inside the ATTiny core zip file (tiny\avr\README). If it does exist, and you still get this compiler error, then you must have a corrupted installation (perhaps the compiler include folders is messed up). I would still recommend following the re-installation as mentioned in the previous bullet. Make sure that you can find the ATtiny 85 in the board list, at Tools->Board and that it is selected. Another angle that might shed more light on this error, would be to turn on the verbose output for the compiler (you can do it in the Arduino IDE, by choosing File->Preferences-> "Show verbose output during" and checking "compilation"). Verbose output can help you track down any compiler command line errors, like missing include folders. Last but not least, if the SoftwareSerial folder exist, you could use the Arduino IDE to check if the IDE can identify its existance, by going to Sketch->Import Library, and looking for SoftwareSerial . If it's not there, then the folder is not located properly under the 'libraries', verify the path as mentioned above (did you move it?) Finally, when you will be able to compile your code, try using the SoftwareSerial example that comes with the library to check it. Note that it is not going to work 'out-of-the-box' as with the Arduino Uno; The ATtiny does not have the same FTDI hardware for communicating directly with USB, so for sending and receiving serial messages using the ATtiny you will have to either use the Arduino Uno as a proxy (using the Uno's pins 0 & 1) or to use dedicated USB to Serial hardware similar to this one . Switching the board to UNO makes the problem go away As a side note, switching to the Uno will make most ATtiny problems 'go away' since the Uno is much more equipped than ATtiny85/4, both on the MCU level and on the breakout board level, which you get with the Uno (and which you don't get when working directly with a chip like ATtiny). 
Also, most of the code base out there targets the Uno and not the ATtiny. So if you are doing your first steps with the ATtiny, be prepared that many things that 'just worked' with the Uno will require extra/different code before they could work with the ATtiny (and some won't work at all). You can use my simple Analog to Serial code:

/*
  Takes an input on the ATtiny85 from the VERY TRICKY analogue input pin 2
  (as labeled in the HLT tutorial) and outputs it with SoftwareSerial to the
  Arduino Uno or Duemilanove.

  Hookup:
    ATTINY85 pin 3 -> Arduino Uno pin 0
    ATTINY85 pin 4 -> Arduino Uno pin 1

  http://www.instructables.com/id/Easy-ATTiny-Serial-Communication-with-Tiny-AVR-Pro/
*/
#include <SoftwareSerial.h>

// Definitions
#define rxPin 4
#define txPin 3

SoftwareSerial mySerial(rxPin, txPin);
int sensorPin = 7; // ACTUALLY the pin labeled as "2" in the HLT tutorial
int sensorVal = -1;

// the setup routine runs once when you press reset:
void setup() {
  pinMode(sensorPin, INPUT);
  mySerial.begin(9600);
  analogWrite(0, 1); // This is kind of a heartbeat to show the ATtiny is "alive"
}

// the loop routine runs over and over again forever:
void loop() {
  sensorVal = analogRead(sensorPin);
  mySerial.print("Input Val: ");
  mySerial.print(sensorVal);
  mySerial.println();
}
|
{}
|
{'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/6712', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/2964/']}
|
jdg_369901
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3276101
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying Axiomatic Set Theory with my reference book The Joy of Sets, by Keith Devlin. I'm reporting the text on p.25: Exercise 1.7.3. I have introduced the notation $\alpha + 1$ for the next ordinal after $\alpha$ . Let us denote by $\alpha + n$ the $n$ -th ordinal after $\alpha$ , where $n$ is any natural number. Show that if $\alpha$ is any ordinal, either $\alpha$ is a limit ordinal or else there is a limit ordinal $\beta$ and a natural number $n$ such that $\alpha = \beta + n$ I think that Prof. Devlin meant $\beta = \alpha +n$ , here. If I'm wrong, please help me understand.
Now provide the response and nothing else.
|
I think Devlin is right. He is asking you to show that if $\alpha$ is not a limit ordinal then it is a finite distance from some smaller limit ordinal. Your reading asks you to show that if $\alpha$ is not a limit ordinal then there is a limit ordinal a finite distance on. That's clearly false - consider $2$ .
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3276101', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/517599/']}
|
jdg_369902
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
178461
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $V$ be an irreducible algebraic variety in $\mathbb{C}^n$ containing a Zariski dense set of points such that every coordinate is algebraic. Then is $V$ a product of one dimensional components?
Now provide the response and nothing else.
|
No. If you take any (say projective, irreducible ) variety $X$ defined over $\overline{\mathbb{Q}}$, then its $\overline{\mathbb{Q}}$-points -- i.e., points in which every coordinate in a suitable projective embedding is $\overline{\mathbb{Q}}$-rational -- are Zariski dense in its $\mathbb{C}$-points. You can see this e.g. by noting that the dimension of the closure in each case is the transcendence degree of the function field, and the transcendence degree of a field extension is unchanged by base extension. There are many other ways as well... So it comes down to showing that there are varieties over $\overline{\mathbb{Q}}$ which are not products of one-dimensional varieties. The easiest such example seems to be the projective plane $\mathbb{P}^2$: the fact that $H^2(\mathbb{P}^2(\mathbb{C}),\mathbb{C}) = 1$ means, by the Kunneth formula, that it cannot be a product of curves.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/178461', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/35783/']}
|
jdg_369903
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5539176
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do i specify a redirection to another servlet, in the doPost() method of a servlet. at the moment I'm using request.getRequestDispatcher("/WEB-INF/products.jsp").forward(request, response); which is wrong since, my parameters in the doGet() method of products are not being called and initialized. So I'm left with an empty products page after logging in :/
Now provide the response and nothing else.
|
You need to use HttpServletResponse#sendRedirect() to send a redirect. Assuming that the servlet is mapped on an URL pattern of /products : response.sendRedirect("/products"); This way the webbrowser will be instructed to fire a new HTTP GET request on the given URL and thus the doGet() method of the servlet instance will be called where you can in turn load the products and forward to a JSP which displays them the usual way.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5539176', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/478636/']}
|
jdg_369904
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4798025
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a PHP script that occasionally needs to write large files to disk. Using file_put_contents() , if the file is large enough (in this case around 2 MB), the PHP script runs out of memory (PHP Fatal error: Allowed memory size of ######## bytes exhausted). I know I could just increase the memory limit, but that doesn't seem like a full solution to me--there has to be a better way, right? What is the best way to write a large file to disk in PHP?
Now provide the response and nothing else.
|
You'll need a temporary file in which you put bits of the source file plus what's to be appended: $sp = fopen('source', 'r');$op = fopen('tempfile', 'w');while (!feof($sp)) { $buffer = fread($sp, 512); // use a buffer of 512 bytes fwrite($op, $buffer);}// append new datafwrite($op, $new_data); // close handlesfclose($op);fclose($sp);// make temporary file the new sourcerename('tempfile', 'source'); That way, the whole contents of source aren't read into memory. When using cURL, you might omit setting CURLOPT_RETURNTRANSFER and instead, add an output buffer that writes to a temporary file: function write_temp($buffer) { global $handle; fwrite($handle, $buffer); return ''; // return EMPTY string, so nothing's internally buffered}$handle = fopen('tempfile', 'w');ob_start('write_temp');$curl_handle = curl_init('http://example.com/');curl_setopt($curl_handle, CURLOPT_BUFFERSIZE, 512);curl_exec($curl_handle);ob_end_clean();fclose($handle); It seems as though I always miss the obvious. As pointed out by Marc, there's CURLOPT_FILE to directly write the response to disk.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4798025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18986/']}
|
jdg_369905
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
351103
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I bumped into this at work today, and I don't have a good answer. To elaborate, we have a windows print server sharing several printers. If you connect to this printer share with a windows client, it wants to install a printer driver. Why is this? One of the appeals of a print server is that you can send it a file, and have the processing done on the print server, rather than at your local machine. This is a problem especially with older printers that do not have 64 bit drivers. As we migrate to windows 7, we are unable to use the printers, even though the print server can talk to them fine . Windows 7 cannot use them, because it can't get a 64 bit driver. The only work around I have been able to find, is to set up a cups server that accepts everything as postscript, then does the correct processing for each printer. Why does windows work this way, and do I have another option?
Now provide the response and nothing else.
|
You've answered this yourself with your CUPS server accepting everything as PostScript. Each printer has different features, capabilities and support which is why different printers require different drivers. Remember, Postscript and PCL aren't mandatory. It's easily circumvented when discussing text and a simple B&W laser printer, but take it to the extreme. You have a 50 page booklet in MS Word and you want to print to a big complicated multi function printer. Firstly, where would you configure Duplexing, which tray to get the paper from? These options come from the print driver - so is the print server expected to interpret the options and display it to the client somehow? Secondly, when you click print, what exactly is MS Word meant to do with this document? Send it as a raw document - imagine the processing overhead? Or maybe MS could develop a custom universal driver - entirely possible, but it's unlikely to support complex features nor have any guaranteed success. One of the appeals of a print server is that you can send it a file, and have the processing done on the print server, rather than at your local machine I'd say this is untrue anyway. Print servers are about centralised management and distribution, not about offloading work. Have you considered simply adding a different basic 64bit postrscript driver on the print server? This would probably get you the same result as the CUPS solution, with less mess.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/351103', 'https://serverfault.com', 'https://serverfault.com/users/79424/']}
|
jdg_369906
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5272228
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to detect or get a notification when user changes the page in a paging-enabled UIScrollView?
Now provide the response and nothing else.
|
Use this to detect which page is currently being shown and perform some action on page change: - (void)scrollViewDidScroll:(UIScrollView *)scrollView { static NSInteger previousPage = 0; CGFloat pageWidth = scrollView.frame.size.width; float fractionalPage = scrollView.contentOffset.x / pageWidth; NSInteger page = lround(fractionalPage); if (previousPage != page) { // Page has changed, do your thing! // ... // Finally, update previous page previousPage = page; }} If it's acceptable for you to only react to the page change once the scrolling has completely stopped, then it would be best to do the above inside the scrollViewDidEndDecelerating: delegate method instead of the scrollViewDidScroll: method.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5272228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/54155/']}
|
jdg_369907
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
113280
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the square root of a Lebesgue integrable function always integrable? Thanks!
Now provide the response and nothing else.
|
Introduction: The solution below is essentially the same as the solution given by Brian M. Scott, but it will take a lot longer. You are expected to assume that $S$ is a finite set. with say $k$ elements. Line them up in order, as $s_1<s_2<\cdots <s_k$. The situation is a little different when $k$ is odd than when $k$ is even. In particular, if $k$ is even there are (depending on the exact definition of median) many medians. We tell the story first for $k$ odd. Recall that $|x-s_i|$ is the distance between $x$ and $s_i$, so we are trying to minimize the sum of the distances. For example, we have $k$ people who live at various points on the $x$-axis. We want to find the point(s) $x$ such that the sum of the travel distances of the $k$ people to $x$ is a minimum. The story: Imagine that the $s_i$ are points on the $x$-axis. For clarity, take $k=7$. Start from well to the left of all the $s_i$, and take a tiny step, say of length $\epsilon$, to the right. Then you have gotten $\epsilon$ closer to every one of the $s_i$, so the sum of the distances has decreased by $7\epsilon$. Keep taking tiny steps to the right, each time getting a decrease of $7\epsilon$. This continues until you hit $s_1$. If you now take a tiny step to the right, then your distance from $s_1$ increases by $\epsilon$, and your distance from each of the remaining $s_i$ decreases by $\epsilon$. What has happened to the sum of the distances? There is a decrease of $6\epsilon$, and an increase of $\epsilon$, for a net decrease of $5\epsilon$ in the sum. This continues until you hit $s_2$. Now, when you take a tiny step to the right, your distance from each of $s_1$ and $s_2$ increases by $\epsilon$, and your distance from each of the five others decreases by $\epsilon$, for a net decrease of $3\epsilon$. This continues until you hit $s_3$. The next tiny step gives an increase of $3\epsilon$, and a decrease of $4\epsilon$, for a net decrease of $\epsilon$. This continues until you hit $s_4$. The next little step brings a total increase of $4\epsilon$, and a total decrease of $3\epsilon$, for an increase of $\epsilon$. Things get even worse when you travel further to the right. So the minimum sum of distances is reached at $s_4$, the median. The situation is quite similar if $k$ is even, say $k=6$. As you travel to the right, there is a net decrease at every step, until you hit $s_3$. When you are between $s_3$ and $s_4$, a tiny step of $\epsilon$ increases your distance from each of $s_1$, $s_2$, and $s_3$ by $\epsilon$. But it decreases your distance from each of the three others, for no net gain. Thus any $x$ in the interval from $s_3$ to $s_4$, including the endpoints, minimizes the sum of the distances. In the even case, I prefer to say that any point between the two "middle" points is a median. So the conclusion is that the points that minimize the sum are the medians. But some people prefer to define the median in the even case to be the average of the two "middle" points. Then the median does minimize the sum of the distances, but some other points also do.
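A quick brute-force check of the claim; the set S and the grid of integer candidate points are arbitrary:

s = [1, 2, 4, 8, 16, 17, 40]          # k = 7, median is s[3] = 8

def total_distance(x, points):
    return sum(abs(x - p) for p in points)

best = min(range(min(s), max(s) + 1), key=lambda x: total_distance(x, s))
print(best, total_distance(best, s))   # 8 minimizes the sum, as the argument predicts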
|
{}
|
{'log_upvote_score': 8, 'links': ['https://math.stackexchange.com/questions/113280', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6879/']}
|
jdg_369908
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29787
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm not an engineer, but my monitor stopped working yesterday (it just wouldn't display anything but still gets power) so I figured I'd like to take it apart and see what went wrong. At the very least I'd get to see what the inside of a monitor looks like :). Anyway, I took it apart, and also read around about my issue and it seems the capacitors have gone bad, at least some of them. However, I also noticed some white elastic stuff all over the board. Some of them are sticking two capacitors together and to the board, but I also see some from where power socket connects to the board. My question is, what is the white stuff? Should I replace all capacitors that have white stuff around it, and should I also replace the power socket part? Or should I just replace the bulging capacitors and leave it at that? UPDATE:
Now provide the response and nothing else.
|
It's very likely to be fastening material for bigger components. Does it look very close to what's on the big capacitor in the middle of this photo, and also at the bottom of this picture, on the blue components? Edit: yes, that very clearly looks like fixating material to me. They may have used it as a strain relief in this case for the mains wiring. This is used to prevent vibrations from damaging the solder joints. The big component has a relatively large weight and must therefore be fastened down more securely. If you want to try and fix the monitor you could look for:

Dodgy capacitors: their tops are bulged. Replace with equal or higher voltage and preferably the same capacitance (amount of uF's). Try to keep it as 'original' as possible; a capacitor with a much higher voltage rating is larger, and less likely to fit (also consider the height, monitors are space constrained).

Burnt components: well, you should smell and see them. Usually black, toasted and not good. If so you may need to google around a bit to see if there is any information on what the component was (like forums, schematics...).

Dodgy transistors or solder joints: I fixed a monitor 2 years ago that had dodgy solder joints on the transformers (I just reheated them and added some new solder, didn't even have to remove anything at all) and 4 broken transistors. I was able to get 4 replacement transistors for $5-10 and it worked for another 1.5 years... it finally died, this time probably the video controller board (couldn't be bothered now, it's an old 17" slow monitor).

Usually I look for forums and other 'fix guides' on the internet to see what they are attempting and see if there is anything I haven't done yet. Usually one or a combination of these three faults is the most likely cause.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/29787', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/6207/']}
|
jdg_369909
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52902909
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two DataFrame objects whose columns are as below. Dataframe 1: df.dtypes Output:
ImageID       object
Source        object
LabelName     object
Confidence     int64
dtype: object
Dataframe 2: a.dtypes Output:
LabelName       object
ReadableName    object
dtype: object
Here, I am trying to combine these two dataframes as below: combined = df.join(a,on='LabelName') But I am getting the following error: ValueError: You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat But I am merging them on LabelName, which has only strings (object datatype). Am I missing something here?
Now provide the response and nothing else.
|
About the on parameter , the documentation says: Column or index level name(s) in the caller to join on the index in other, otherwise joins index-on-index. Note that join() always uses other.index . You can try this: df.join(a.set_index('LabelName'), on='LabelName') Or use df.merge() instead.
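For illustration, a minimal sketch with made-up data (the column names follow the question; the values are invented):

import pandas as pd

# Toy frames mirroring the question's schemas (values are invented).
df = pd.DataFrame({
    "ImageID": ["img1", "img2"],
    "Source": ["xclick", "xclick"],
    "LabelName": ["/m/01g317", "/m/09j2d"],
    "Confidence": [1, 1],
})
a = pd.DataFrame({
    "LabelName": ["/m/01g317", "/m/09j2d"],
    "ReadableName": ["Person", "Clothing"],
})

# join() matches df["LabelName"] against the *index* of the other frame,
# so move LabelName into a's index first:
combined = df.join(a.set_index("LabelName"), on="LabelName")

# Equivalent with merge(), which joins column-on-column directly:
combined_merge = df.merge(a, on="LabelName", how="left")

print(combined)
print(combined_merge)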
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52902909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8898218/']}
|
jdg_369910
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
300215
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am thinking of a farming project that includes using some board like Arduino , Raspberry Pi or Onion Mega (the list is not exclusive). As the system will work with sensors and support the life of its wards, their health will depend on the work of the whole chain of components. I am surely going to keep the main board in a safe place, like putting it in a moisture- and temperature-proof case, and isolate contacts where needed, but I understand that these boards are more for education/experiments than for real daily duty. Also, there is always a factor of defectiveness in the board coming from the manufacturer. So I wonder if there is information about how durable the boards are and whether they are suitable for 24/7 work during weeks/months? How do I make sure that the system has some more or less definite margin of safety and know the moment when I should replace it with a new one?
Now provide the response and nothing else.
|
You have to look for information about RAMS (reliability, availability, maintainability and safety) engineering.

Basic RAMS concepts and techniques

Failure rate: number of expected failures of a component, assembly or product per time unit.

MTTF (mean time to failure) / MTBF (mean time between failures): the inverse of the failure rate. The expected time your component/assembly/unit will be operating under given conditions until a failure happens.

ER (established reliability) vs. non-ER components: so-called hi-rel (high reliability) components are often lot-tested to establish their failure rate, which makes them expensive. On the other hand, for non-ER components a rather pessimistic failure rate is assumed according to tabulated values.

Parts Count Analysis (PCA) / Parts Stress Analysis (PSA): a method to calculate the expected value for the failure rate of an assembly/product, deriving it from the failure rate of each component and its associated stress (temperature, moisture, power/voltage/current derating, etc.).

Derating: the % of the maximum power/voltage/current rating at which the component/assembly/product operates. The higher the derating, the lower the stress and the longer the MTTF.

Bath tub curve: a curve describing how the failure rate changes along the useful life of the component/assembly/product. See image below.

Burn-in: a non-destructive, initial high-temperature (accelerated aging) test intended for precipitating early failures in already defective components/assemblies/products. It's a kind of screening test.

Life test: a destructive, high-temperature (accelerated aging) test intended for establishing the reliability of a whole lot of components/assemblies/products from a reduced sample submitted to this test.

Image source.

Where do I begin?

Download MIL-HDBK-217F, RELIABILITY PREDICTION OF ELECTRONIC EQUIPMENT. There you'll find almost all tabulated values you'll need. You don't need to implement all the methods described in it from the beginning, so don't panic about its complexity.

Create an Excel sheet for basic reliability data from your BOM (bill of materials). The columns must include at least the following information about the components: P/N, description and base failure rate. We'll add more information later, if needed.

Populate the Excel sheet with base failure rate data and carry out a basic PCA to calculate your first rough approximation to the failure rate and MTTF of your assembly/product. Don't forget to include the solder joints in the analysis!

Look at the results of your PCA and compare them with the MTTF required by your application: If the PCA delivers an insufficient MTTF, you're already in trouble and should go back to your design, your parts selection or your calculations to check what's wrong with them. If the PCA delivers an MTTF well above your requirement (by a 1000x margin or more) then you might want to stop here. Just check that there aren't any components operating too close to their maximum ratings. If the PCA delivers an MTTF above your requirement, but without a high enough margin, then you'll have to calculate the actual stresses for the components.

If your PCA was inconclusive, then you'll need to carry out a PSA with the actual stresses and environmental conditions (temperature, moisture) of your assembly/product: Go back to your Excel sheet and add more columns to take into account the pi-factors in MIL-HDBK-217F (temperature, quality, environmental, power rating, voltage stress, etc.). Pi-factors are modifiers of the base failure rate according to actual stress conditions. Populate the new fields in your Excel sheet with data from the component datasheets, but also from your own circuit simulation and calculations. Recalculate the modified failure rates for each component according to their pi-factors. Recalculate the total failure rate and MTTF of your assembly/product.

Look at the results of your PSA and compare them with the MTTF required by your application. If the results are good, then you're all set. If not, look for the components that contribute the most to the total failure rate and address their problems individually: higher power/voltage/current rating replacement component required? Changes in certain design values required to avoid too much power/voltage/current in the problematic component? Heatsinking required? Etc.

If you've done everything in your power to reduce the total failure rate but you still can't get an MTTF compatible with your requirement, then you might want to add redundancy to your design, but specifically targeted at the subassemblies of your product with high partial failure rates. Redundancy must be introduced only when MTTF calculations demand it, and never in a preemptive manner. Why? Because redundancy requires adding switching elements that can fail themselves and introduce unneeded complexity as well.

Even if your PCA/PSA says everything will be OK, keep in mind that that will be true for random failures only! The PCA/PSA doesn't deal with the early failure rates of defective components/assemblies/products. Therefore, a burn-in of your product is highly recommended before deployment in the field.

If you want to have actual statistical data about the useful life of your assembly/product, you might want to do a life test. But that means spending money on the samples that will be destroyed or worn out during life testing, and having the time (usually around 1,000 hours or more, depending on the testing temperature) and means to carry it out.

Notes below: There are also specialised reliability prediction software packages that will make all these calculations easier for you. Only you can decide whether your application and business case call for such an investment. Here's a free reliability prediction software package I've found (disclosure: I've never used it). I've looked for reliability data (MTBF) for Raspberry Pi without any success...
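As a rough illustration of the parts count idea (a hedged sketch in Python; the part names and failure rates below are invented placeholders, not values taken from MIL-HDBK-217F):

# Minimal parts-count sketch: the total failure rate is the sum of the parts'
# failure rates (here in FIT, failures per 1e9 hours); MTTF is its inverse.
bom = [
    # (part, quantity, failure_rate_fit) -- illustrative numbers only
    ("microcontroller", 1, 50.0),
    ("ceramic capacitor", 12, 1.0),
    ("electrolytic capacitor", 2, 20.0),
    ("voltage regulator", 1, 30.0),
    ("solder joint", 150, 0.01),
]

total_fit = sum(qty * fit for _, qty, fit in bom)
total_failure_rate = total_fit / 1e9          # failures per hour
mttf_hours = 1.0 / total_failure_rate

print(f"Total failure rate: {total_fit:.1f} FIT")
print(f"MTTF: {mttf_hours:,.0f} hours ({mttf_hours / 8760:.1f} years)")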
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/300215', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2543/']}
|
jdg_369911
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35884451
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a ListComponent. When an item is clicked in ListComponent, the details of that item should be shown in DetailComponent. Both are on the screen at the same time, so there's no routing involved. How do I tell DetailComponent what item in ListComponent was clicked? I've considered emitting an event up to the parent (AppComponent), and have the parent set the selectedItem.id on DetailComponent with an @Input. Or I could use a shared service with observable subscriptions. EDIT: Setting the selected item via event + @Input doesn't trigger the DetailComponent, though, in case I were to need to execute additional code. So I'm not sure this is an acceptable solution. But both of these methods seem far more complex than the Angular 1 way of doing things which was either through $rootScope.$broadcast or $scope.$parent.$broadcast. With everything in Angular 2 being a component, I'm surprised there's not more information out there about component communication. Is there another/more straightforward way to accomplish this?
Now provide the response and nothing else.
|
Updated to rc.4: When trying to get data passed between sibling components in Angular 2, the simplest way right now (angular.rc.4) is to take advantage of Angular 2's hierarchical dependency injection and create a shared service. Here would be the service:

import {Injectable} from '@angular/core';

@Injectable()
export class SharedService {
    dataArray: string[] = [];

    insertData(data: string){
        this.dataArray.unshift(data);
    }
}

Now, here would be the PARENT component:

import {Component} from '@angular/core';
import {SharedService} from './shared.service';
import {ChildComponent} from './child.component';
import {ChildSiblingComponent} from './child-sibling.component';

@Component({
    selector: 'parent-component',
    template: `
        <h1>Parent</h1>
        <div>
            <child-component></child-component>
            <child-sibling-component></child-sibling-component>
        </div>
    `,
    providers: [SharedService],
    directives: [ChildComponent, ChildSiblingComponent]
})
export class ParentComponent{}

and its two children

child 1

import {Component, OnInit} from '@angular/core';
import {SharedService} from './shared.service';

@Component({
    selector: 'child-component',
    template: `
        <h1>I am a child</h1>
        <div>
            <ul>
                <li *ngFor="let item of data">{{item}}</li>
            </ul>
        </div>
    `
})
export class ChildComponent implements OnInit{
    data: string[] = [];

    constructor(private _sharedService: SharedService) { }

    ngOnInit():any {
        this.data = this._sharedService.dataArray;
    }
}

child 2 (its sibling)

import {Component} from '@angular/core';
import {SharedService} from './shared.service';

@Component({
    selector: 'child-sibling-component',
    template: `
        <h1>I am a child</h1>
        <input type="text" [(ngModel)]="data"/>
        <button (click)="addData()"></button>
    `
})
export class ChildSiblingComponent{
    data: string = 'Testing data';

    constructor(private _sharedService: SharedService){}

    addData(){
        this._sharedService.insertData(this.data);
        this.data = '';
    }
}

NOW: Things to take note of when using this method. Only include the service provider for the shared service in the PARENT component and NOT the children. You still have to include constructors and import the service in the children. This answer was originally written for an early Angular 2 beta version. All that has changed though are the import statements, so that is all you need to update if you used the original version by chance.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35884451', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/493817/']}
|
jdg_369912
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3029321
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting the below error when trying to do a select through a stored procedure in MySQL: Illegal mix of collations (latin1_general_cs,IMPLICIT) and (latin1_general_ci,IMPLICIT) for operation '=' Any idea on what might be going wrong here? The collation of the table is latin1_general_ci and that of the column in the where clause is latin1_general_cs.
Now provide the response and nothing else.
|
This is generally caused by comparing two strings of incompatible collation or by attempting to select data of different collation into a combined column. The clause COLLATE allows you to specify the collation used in the query. For example, the following WHERE clause will always give the error you posted: WHERE 'A' COLLATE latin1_general_ci = 'A' COLLATE latin1_general_cs Your solution is to specify a shared collation for the two columns within the query. Here is an example that uses the COLLATE clause: SELECT * FROM table ORDER BY key COLLATE latin1_general_ci; Another option is to use the BINARY operator: BINARY str is the shorthand for CAST(str AS BINARY). Your solution might look something like this: SELECT * FROM table WHERE BINARY a = BINARY b; or, SELECT * FROM table ORDER BY BINARY a; Please keep in mind that, as pointed out by Jacob Stamm in the comments, "casting columns to compare them will cause any indexing on that column to be ignored". For much greater detail about this collation business, I highly recommend eggyal's excellent answer to this same question .
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3029321', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/355562/']}
|
jdg_369913
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
67122432
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are using the Microsoft Graph API Beta version to retrieve all users from Azure AD using the code below. The API returns only 100 users in the response, and to use the paginated response we tried the NextPageRequest property. But it always returns null for the NextPageRequest property, and due to that it never enters the while loop to retrieve the rest of the users. Beta SDK version: 4.0.1.0 Code: List<User> usersList = new List<User>(); IGraphServiceUsersCollectionPage users = await graphClient.Users.Request().GetAsync(); // Add the first page of results to the user list usersList.AddRange(users.CurrentPage); // Fetch each page and add those results to the list while (users.NextPageRequest != null) { users = await users.NextPageRequest.GetAsync(); usersList.AddRange(users.CurrentPage); } log.Info("Users count: " + usersList.Count.ToString()); return usersList; Reference links that I followed: Microsoft Graph only returning the first 100 Users https://learn.microsoft.com/en-us/graph/api/user-list?view=graph-rest-1.0&tabs=csharp Any help on this will be appreciated!
Now provide the response and nothing else.
|
The below code works perfectly fine for me. public static async Task<List<User>> getUsers() { List<User> usersList = new List<User>(); graphClient.BaseUrl = "https://graph.microsoft.com/beta"; IGraphServiceUsersCollectionPage users = await graphClient.Users .Request() .GetAsync(); usersList.AddRange(users.CurrentPage); while (users.NextPageRequest != null) { users = await users.NextPageRequest.GetAsync(); usersList.AddRange(users.CurrentPage); } return usersList; } Check your users in the Azure Active Directory Users blade and see how many users are present in it. You can also test whether there are more than 100 users by simply extending the code with the $top query parameter, which gives 998 records per request, as shown below. IGraphServiceUsersCollectionPage users = await graphClient.Users .Request() .Top(998) .GetAsync(); You can also test Graph API calls in Graph Explorer . EDIT: After a lot of research I found out that it's a bug in the Microsoft Graph Beta SDK, as it always sends null in the NextPageRequest . But the interesting thing here is, it is sending the odata.nextLink in the AdditionalData property. So use the below code if you are using the Graph Beta SDK. public static async Task<List<User>> getUsers() { List<User> usersList = new List<User>(); IGraphServiceUsersCollectionPage users = await graphClient.Users .Request() .GetAsync(); usersList.AddRange(users.CurrentPage); try { while (users.AdditionalData["@odata.nextLink"].ToString() != null) { users.InitializeNextPageRequest(graphClient, users.AdditionalData["@odata.nextLink"].ToString()); users = await users.NextPageRequest.GetAsync(); usersList.AddRange(users.CurrentPage); } } catch(Exception e) { } return usersList; } Note: Microsoft doesn't suggest using its Graph Beta version in production as it is subject to change.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/67122432', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4742522/']}
|
jdg_369914
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
200848
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I regard mathematics as being built up in the following way: We have some collections of symbols and rules (which are and have to be described in a natural language) to manipulate these symbols. If we now fix certain strings of symbols (which can turn out to be, for example, the ZFC axioms), we are able to derive via our rules all of what we consider to be mathematics. So for me, any piece of mathematics, may it be a proposition or a definition, is just a deducible string (I probably am a formalist, although I don't have a clear grasp on the different philosophical orientations a mathematician can adopt), so I don't have any issues with things like "absolute truth" since I only believe in my system of deduction and my intuition, which helps me accept a proof as correct without writing out/reading the complete deduction of it, but rather only the "cornerstone" deduction steps, which then enable me to "write out the complete proof if I wanted" (which of course I would never do - I deliberately exaggerated a little bit, in hope to clarify my view, since I feel that "platonistic" mathematicians, when talking about things like these, often misunderstand me). Here is now a list of questions, which are all interconnected, and I think are mostly due to the fact that I confuse in which settings we talk about which objects (hence the title): Is my view of mathematics correct? Why is it that in many books on mathematical logic (for example in Ebbinghaus, Flum and Thomas' book) in the first chapters, which describe how we can manipulate these symbols (i.e. the syntax), often the word "set" is used and mathematical operations (like assignments) are performed, where we at this point don't have sets and functions and so on at our disposal (meaning although of course I intuitively know what they are, we just don't have them at our disposal right now)? If I understand all appearances of the word "set" and assignments in those chapters to mean not a set in the sense of ZFC but just a collection of symbols, and all uses of functions as applications of the simple rules of manipulation of those symbols, which permit me to replace some symbols with others, I can make sense of the syntax part. But when semantics come into play and we suddenly deal with structures like $\left(\mathbb{N},R^{\mathbb{N}}\right)$, I am totally thrown. We don't even have ZFC yet, so how can we talk about $\mathbb{N}$? Last but not least, I read that ZFC can be used as a basis for first-order logic? But how can this be if we needed first-order logic in the first place to be able to talk about the strings which make up ZFC? I'm hoping very much for detailed answers, since these questions have bugged me for a long time and I am tired of reading introductions in different logic books without getting these answers.
Now provide the response and nothing else.
|
There are various philosophical approaches to this problem, and it's probably not possible to make any interesting claim about it that every bona-fide logician will agree with. However, I think a reasonably mainstream attitude is that the formal game with symbols and rules that texts in mathematical logic describes is not Mathematics itself . Rather, the formal game is a mathematical model of the kind of reasoning actual working mathematician accept as valid proofs, in the same way that differential equations can be a model of projectile trajectories or graph theory can model electricity distribution grids. First-order logic in general and ZFC in particular comprise a remarkably successful model of mathematical reasoning, in that most arguments employed by actual mathematicians can be modeled exactly in ZFC in a reasonably direct manner (except for some category-theoretic arguments which need some arguably clumsy workarounds to be shoehorned into ZFC), and that most mathematicians would agree that any argument that can be modeled in ZFC is a valid "mainstream mathematics" argument. But still, the model is not the thing itself -- that mathematics can be modeled in ZFC doesn't mean that mathematics is ZFC. What conventional introductions to mathematical logic do is assume that you already have a workable intuitive understanding of ordinary mathematical reasoning, what a valid proof is, how the integers work, and so forth. Then they show you how, using these preexisting tools , you can build a model of mathematical reasoning and use that model to understand it better. Most modern texts will also assume that your existing mathematical education has introduced you to simple "well-typed" uses of sets in mathematical arguments, and therefore feel free to use that in the construction of a model. If you want to minimize the amount of intuitive mathematical concepts you need in order to build the model, you can do it without any concept of "set" -- Gödel showed that arithmetic on the integers is in some technical sense enough, but in practice it can be argued that you need some intuitive sense of what finite symbol strings are, how they concatenate and so forth. However it is not possible to entirely eliminate all prerequisites about mathematical reasoning, because then you wouldn't be able to start saying anything on page 1.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/200848', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8875/']}
|
jdg_369915
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7719273
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a recommended way to determine the maximum height that a DIV can be set and remain visible per browser? This doesn't appear to be documented anywhere and is highly implementation specific. For example, see the following test script: http://jsfiddle.net/NP5Pa/2/ This is a simple test to find the maximum value you can set a DIV style height attribute before the corresponding clientHeight of the element becomes 0. You can confirm this by clicking "Find Max" then incrementing the found height by 1 and clicking "Set Height". Some examples (Win7/64): Chrome (14.0) : 134,217,726 pxSafari (5.1) : 134,217,726 pxIE (9.0) : 10,737,418 pxFF (7.0.1) : 17,895,697 px It's not surprising the WebKit produces the same result, I guess - more surprising that IE and FF are so different. Is there a better way? And do you get different results in 32bit systems? --EDIT: Updated the fiddle to stop at 10,000,000,000 (and get there quicker) for Opera. That's a lot of pixels.
Now provide the response and nothing else.
|
This is your code, modified to use binary search (so it's much quicker). http://jsfiddle.net/thai/zkuGv/4/ It begins at 1 pixel and doubles the size until it either hits the maximum (I use 2^53, which is the biggest integer that can be stored in JavaScript without losing precision; anything larger would make the binary search buggy) or the div collapses to zero pixels. Suppose we set the div to size h and it disappears; then the maximum size must be between h/2 and h. We binary search from there for a height h such that the div does not disappear when set to height h, but disappears when set to h+1. Then we can come to a conclusion for Opera: 2147483583 pixels.
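The search strategy itself is independent of the DOM details; here is a minimal sketch of the doubling-plus-binary-search idea in Python, where the fits predicate is a stand-in for "the div still renders at height h" (the predicate and the example limit below are assumptions for illustration):

def find_max(fits, upper_limit=2**53):
    """Largest h with fits(h) true, assuming fits is monotone (true, then false)."""
    # Phase 1: exponential growth to bracket the answer.
    h = 1
    while h < upper_limit and fits(h):
        h *= 2
    lo, hi = h // 2, min(h, upper_limit)   # fits(lo) is true; fits(hi) may be false

    # Phase 2: binary search inside [lo, hi] for the last h where fits(h) holds.
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Example with a made-up browser limit standing in for the real predicate:
LIMIT = 2147483583
print(find_max(lambda h: h <= LIMIT))   # -> 2147483583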
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7719273', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471800/']}
|
jdg_369916
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21014338
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can you compile a class using Java 8 and then use it in Java 7? I'd like to write a utility JAR with a simple interface (Java 7 compatible) but using lambdas all over the place inside. Then I'd like to use the utility in some code that has to be compiled with JDK 7.. is it possible? As you can see, there are some contradictory answers.. so, what's the truth? ;-)
Now provide the response and nothing else.
|
You can use -source 1.7 -target 1.7 when compiling with the Java 8 javac to make .classes that work on Java 7. However, you don't get Java 8 features such as lambdas then, as you have to use 1.7 as the source value as well. That's not the end of the story though. In this answer it was explained that there aren't actually new JVM instructions, so this should be possible. In this answer the tool retrolambda was introduced, which can be used to run Java 8 code on older JVMs. The author claims that a compilation step is not required, just a java agent transforming the classes. His solution deals with lambdas only, not any other features. Of course, that won't bring you the new APIs coming with Java 8, or other improvements, but at least it would make what you ask possible. Some of the new APIs have also been unofficially backported, namely java.util.stream and java.time . In the pre-Java 8 era, even though -source 1.8 -target 1.7 is not supported by javac, you could still use the Compiler API to do the same. Also the Eclipse compiler has been able to do this, as well as ant and maven tasks that use either of the two mentioned options (compiler-plugin uses javac by default). This no longer seems to be so straightforward, as using other compilers you'd need the implementations of features like lambdas as well, and the Compiler API will throw the same "javac: source release 1.8 requires target release 1.8" as the others. For the interested, here's what happens going that way using the Eclipse compiler & Maven and the Calculator example code : pom.xml:
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>fi.eis.applications.java8</groupId>
  <artifactId>calculator</artifactId>
  <version>1.0-SNAPSHOT</version>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.3</version>
        <configuration>
          <source>1.8</source>
          <target>1.7</target>
          <compilerId>eclipse</compilerId>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>org.codehaus.plexus</groupId>
            <artifactId>plexus-compiler-eclipse</artifactId>
            <version>2.6</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</project>
Execution on Windows:
>mvn clean compile
[..snip..]
[INFO] BUILD SUCCESS
>cd target\classes
>"%JAVA_HOME%\bin\java.exe" Calculator
Exception in thread "main" java.lang.NoClassDefFoundError: java/lang/invoke/LambdaMetafactory
        at Calculator.main(Calculator.java:16)
Caused by: java.lang.ClassNotFoundException: java.lang.invoke.LambdaMetafactory
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 1 more
So one would need to implement those classes in pure Java as a third party library to get that to work.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21014338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2346162/']}
|
jdg_369917
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
129538
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that a CDN is just for static content. But then I still have to spread out my Apache servers to all corners of the world, right? So when I have done that, why don't I just set up some dedicated Apache servers that only serve static content, just like a CDN? Are there real benefits to still using a CDN compared to that scenario?
Now provide the response and nothing else.
|
A CDN is used for any content that is going to be accessed a multitude of times in a lot of various locations. YouTube for example, uses a CDN. The main advantage of a CDN is that you have a local Point of Presence for the majority of your internet users. If you notice that you're getting a lot of traffic from North America and England and Brazil, there's no way that all of them are getting an optimum experience, because if your servers are located in New Zealand, then they're all going to access at different speeds. By putting your servers in the countries where you get the majority of your traffic from, and then directing the user to their closest server, you're eliminating a lot of potential bottlenecks. Simply having "some dedicated servers only serving static content" does not alleviate this issue, as all your servers are going to be in the same place. If you're going to spread your servers out around the world, then your two scenarios are the same. All that said, I would never bother to build my own CDN unless I was someone the size of YouTube or Facebook. There are a few providers out there who already have a massive, robust network that you can use. See the following ServerFault questions for more information: How do CDN (Content Delivery Networks) server work? scalability: when to use CDN? How do I objectively compare the performance of different CDNs? https://serverfault.com/questions/55396/cdn-recommendation When does a CDN (content delivery network) become worth it?
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/129538', 'https://serverfault.com', 'https://serverfault.com/users/39101/']}
|
jdg_369918
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41284528
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm pretty much lost on how to make the added contents of my popup window not disappear every time I open a new link or click it "away". I've read about content script, background script and the like but I don't honestly know how to implement that into my own source code. Below is my popup.html , popup.js and my manifest.js file. { "manifest_version": 2, "name": "URL_save", "description": "This extension saves an URL and renames the title to the user's wishes and hyperlink the title.", "version": "0.1", "browser_action": { "default_icon": "/img/icon.png", "default_popup": "popup.html", "default_title": "See your saved websites!" }, "permissions": [ "tabs" ]} popup html : <html> <head> <title>Your articles</title> <link href="/css/style.css" rel="stylesheet"/> <script src="/js/underscore-min.js"></script> <script src="/js/popup.js"></script> </head> <body> <div id="div">No content yet! Click the button to add the link of the current website!</div> <div><ul id="list"></ul></div> <br/> <button id="button">Add link!</button> </body></html> popup.js : // global variablesvar url;// event listener for the button inside popup windowdocument.addEventListener('DOMContentLoaded', function() { var button = document.getElementById('button'); button.addEventListener('click', function() { addLink(); });});// fetch the URL of the current tab, add inside the windowfunction addLink() {// store info in the the queryInfo object as per: // https://developer.chrome.com/extensions/tabs#method-query var queryInfo = { currentWindow: true, active: true }; chrome.tabs.query(queryInfo, function(tabs) { // tabs is an array so fetch the first (and only) object-elemnt in tab // put URL propery of tab in another variable as per: // https://developer.chrome.com/extensions/tabs#type-Tab url = tabs[0].url; // format html var html = '<li><a href=' + url + " target='_blank'>" + url + '</a><br/></li>'; // change the text message document.getElementById("div").innerHTML = "<h2>Saved pages</h2>"; // get to unordered list and create space for new list item var list = document.getElementById("list"); var newcontent = document.createElement('LI'); newcontent.innerHTML = html; // while loop to remember previous content and append the new ones while (newcontent.firstChild) { list.appendChild(newcontent.firstChild); } });} In this images you see what happens when I first add a link but then close (only) the popup window, opening it again: After adding the current URL: After closing and re-opening the popup:
Now provide the response and nothing else.
|
Similar to a web page, the popup's (or an options/settings page's) scope is created when it is shown and destroyed when it is no longer visible. This means that there is no state stored within the popup itself between the times that it is shown. Any information which you desire to persist after the popup is destroyed, you will need to store somewhere else. Thus, you will need to use JavaScript to store any state which you desire to have be the same the next time the popup is opened. Each time the popup is opened, you will need to retrieve that information and restore it to the DOM. The two most commonly used places are a StorageArea MDN , or the background page. Where you store the information will depend on how long you want the data you store to persist, and where you want the data to be seen. The general locations where you could store data include (other possibilities exist, but the followin are the most common): The background page if you want the data to exist only until Chrome is closed. It will not exist once Chrome is restarted. You can send the data to the background page through a couple/few different methods, including message passing MDN , or directly changing values on the background page MDN . Data stored in the StorageArea (the two options below) is also available to the background page, and content scripts. chrome.storage.local MDN if you want the data to persist on the local machine across Chrome being closed and restarted. chrome.storage.sync MDN if you want the data shared with all instances of Chrome which use the current Chrome account/profile. The data will also persist until changed. It will be available through Chrome being closed and restarted. It will be available on other machines using the same profile. window.localStorage : Prior to the existence of chrome.storage it was popular to store data for the extension in window.localStorage . While this will still work, it is generally preferred to use chrome.storage . One of the advantages of using a chrome.storage StorageArea MDN is that the data is directly available to all portions of your extension without the need to pass the data as a message. 1 Your current code Currently your code is not storing the URLs that are entered anywhere other than in the DOM of the popup. You will need to establish a data structure (e.g. an array) in which you store the list of URLs. This data can then be stored into one of the storage locations mentioned above. Google's example on the Options documentation page 2, MDN shows storing chrome.storage.sync and restoring values into the DOM when the options page is displayed. The code used in this example can for the options page can work exactly as-is for a popup by just defining its HTML page as the default_popup for a browser_action . There are many other examples available. Unfortunately, without more specifics from you as to what you desire, it is difficult to give you specific code. However, couple of suggestions to head in the direction you need to go are: Refactor your code so you have a separate function that you call with a URL as a parameter which just adds this URL to the list you have in the DOM (e.g. addUrlToDom(url) ). This function will be used when the user adds a URL and when the URLs are restored when the page loads. Store your list of URLs in an array (e.g. urlList ). This array will be what you save into the storage location outside of your popup. 
You will read this array from that storage location in your DOMContentLoaded handler and use the refactored addUrlToDom() function to add each value. Restoring it into the DOM could look something like: urlList.forEach(function(url){ addUrlToDom(url);}); Storing your data in chrome.storage.local Assuming you want to store the URLs on the local machine across Chrome shutdown/restart (i.e. use chrome.storage.local ), you code could look something like: manifest.json changes to permissions only: "permissions": [ "tabs", "storage" ] popup.js : // global variablesvar urlList=[];document.addEventListener('DOMContentLoaded', function() { getUrlListAndRestoreInDom(); // event listener for the button inside popup window document.getElementById('button').addEventListener('click', addLink);});// fetch the URL of the current tab, add inside the windowfunction addLink() { chrome.tabs.query({currentWindow: true,active: true}, function(tabs) { // tabs is an array so fetch the first (and only) object-element in tab var url = tabs[0].url; if(urlList.indexOf(url) === -1){ //Don't add duplicates addUrlToListAndSave(url); addUrlToDom(url); } });}function getUrlListAndRestoreInDom(){ chrome.storage.local.get({urlList:[]},function(data){ urlList = data.urlList; urlList.forEach(function(url){ addUrlToDom(url); }); });}function addUrlToDom(url){ // change the text message document.getElementById("div").innerHTML = "<h2>Saved pages</h2>"; //Inserting HTML text here is a bad idea, as it has potential security holes when // including content not sourced entirely from within your extension (e.g. url). // Inserting HTML text is fine if it is _entirely_ sourced from within your // extension. /* // format HTML var html = '<li><a href=' + url + " target='_blank'>" + url + '</a></li>'; //Add URL to DOM document.getElementById("list").insertAdjacentHTML('beforeend',html); */ //Build the new DOM elements programatically instead: var newLine = document.createElement('li'); var newLink = document.createElement('a'); newLink.textContent = url; newLink.setAttribute('href',url); newLink.setAttribute('target','_blank'); newLine.appendChild(newLink); document.getElementById("list").appendChild(newLine);}function addUrlToListAndSave(url){ if(urlList.indexOf(url) === -1){ //URL is not already in list urlList.push(url); saveUrlList(); }}function saveUrlList(callback){ chrome.storage.local.set({urlList},function(){ if(typeof callback === 'function'){ //If there was no callback provided, don't try to call it. callback(); } });} The exception to this is scripts which you insert into the page context. The page context is something you will probably not be running scripts in. To do so you have to use a content script (where your StorageArea MDN data is directly available) to insert a <script> tag into the DOM of a web page. This can be a bit complex, any you probably don't need to be concerned about it. It is mentioned here merely because there is a possible exception to the statement that the StorageArea MDN data is available to all areas of your extension. The example in the Chrome documenation works just fine on Firefox. Yes, Firefox supports both chrome.* , using callbacks, and browser.* , using promises .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41284528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7329422/']}
|
jdg_369919
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2960544
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading "Friendly Introduction to Number Theory". Now I'm working on Primitive Pythagorean Triples Exercises 2.1 (a) on P18. We showed that in any primitive Pythagorean triple $(a, b, c)$ , either $a$ or $b$ is even. Use the same sort of argument to show that either $a$ or $b$ must be a multiple of 3. (1) $a^2 + b^2 = c^2$ with a odd, b even, a,b,c having no common factors (2) $a^2 = c^2 - b^2 = (c-b)(c+b)$ (3) $c + b = s^2$ and $c - b = t^2$ (4) $c = \frac{(s^2 + t^2)}{2}$ and $b = \frac{(s^2 - t^2)}{2}$ (5) $a = \sqrt{(c-b)(c+b)} = st$ (6) $a = st$ , $b = \frac{(s^2 - t^2)}{2}$ , $c = \frac{(s^2 + t^2)}{2}$ https://www.math.brown.edu/~jhs/frintch1ch6.pdf I have no idea how I start doing this. Can you give me a hint? I think I need to show that both the following (1) and (2) are satisfied. $X \neq 0$ (1) $a\equiv 0\pmod 3$ and $b\equiv X\pmod 3$ (2) $b\equiv 0\pmod 3$ and $a\equiv X\pmod 3$
Now provide the response and nothing else.
|
The numbers $a$ and $b$ cannot both be multiples of $3$ , because then $c$ would be a multiple of $3$ and the triple would not be primitive. And if none of them is a multiple of $3$ , then both of them are of the form $3k\pm1$ , for some integer $k$ , from which it follows that both squares $a^2$ and $b^2$ are of the form $3k+1$ . But then $c^2(=a^2+b^2)$ is of the form $3k+2$ . You should be able to show that this is impossible.
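To spell out that last step (a short completion of the hint, not part of the original answer): a square can only be $0$ or $1$ modulo $3$, since $$(3m)^2 = 3(3m^2) \equiv 0 \pmod 3, \qquad (3m\pm1)^2 = 3(3m^2\pm2m)+1 \equiv 1 \pmod 3.$$ So if neither $a$ nor $b$ were a multiple of $3$, we would get $c^2 = a^2+b^2 \equiv 1+1 \equiv 2 \pmod 3$, which no square can satisfy. Hence exactly one of $a$, $b$ is a multiple of $3$.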
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2960544', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/603716/']}
|
jdg_369920
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41097652
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I connect to a docker container ($> docker exec -it my_container zsh) and inside it I want to kill something I started with ctrl+c, I noticed that it takes forever to complete. I've googled around and it seems that ctrl+c works a bit differently than you would expect. My question: how can I fix ctrl+c inside a container?
Now provide the response and nothing else.
|
The problem is that Ctrl-C sends a signal to the top-level process inside the container, but that process doesn't necessarily react as you would expect. The top-level process has ID 1 inside the container, which means that it doesn't get the default signal handlers that processes usually have. If the top-level process is a shell, then it can receive the signal through its own handler, but doesn't forward it to the command that is executed within the shell. Details are explained here . In both cases, the docker container acts as if it simply ignores Ctrl-C. Starting with docker 0.6.5 , you can add -t to the docker run command, which will attach a pseudo-TTY . Then you can type Control-C to detach from the container without terminating it. If you use -t and -i then Control-C will terminate the container. When using -i with -t then you have to use Control-P Control-Q to detach without terminating. Test 1: $ ID=$(sudo docker run -t -d ubuntu /usr/bin/top -b)$ sudo docker attach $IDControl-P Control-Q$ sudo docker ps The container is still listed. Test 2: $ ID=$(sudo docker run -t -i -d ubuntu /usr/bin/top -b)$ sudo docker attach $IDControl-C$ sudo docker ps the container is not there (it has been terminated). If you type Control-P Control-Q instead of Control-C in the 2nd example, the container would still be running. Wrap the program with a docker-entrypoint.sh bash script that blocks the container process and is able to catch ctrl-c. This bash example might help: https://rimuhosting.com/knowledgebase/linux/misc/trapping-ctrl-c-in-bash #!/bin/bash# trap ctrl-c and call ctrl_c()trap ctrl_c INTfunction ctrl_c() { echo "** Trapped CTRL-C"}for i in `seq 1 5`; do sleep 1 echo -n "."done
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41097652', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/419425/']}
|
jdg_369921
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
640895
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm running docker with a host directory mounted: $ docker run -v /usr/groups/thing -ti imagename /bin/bash Files in /usr/groups/thing/foo are accessible: # ls /usr/groups/thing/fooa b c But files in /usr/groups/thing/bar are not: # ls /usr/groups/thing/barls: cannot open directory /usr/groups/thing/bar: Too many levels of symbolic links This is on Debian, and /usr/groups/thing is an automounted NFS volume.
Now provide the response and nothing else.
|
This is caused by directories not being automounted when the container is run. I had thought that /usr/groups/thing was the automount point, but evidently the sub-directories are auto-mounted individually. The solution is to make sure each one is mounted before entering the container: $ (cd /usr/groups/thing/foo; cd /usr/groups/thing/bar)$ docker run -v /usr/groups/thing -ti imagename /bin/bash# ls /usr/groups/thing/bard e f
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/640895', 'https://serverfault.com', 'https://serverfault.com/users/26255/']}
|
jdg_369922
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2193094
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The integral I am trying to compute is: $$\int \dfrac{dx}{\sin x + \sec x}$$ I have tried manipulating trigonometric functions and it took me nowhere. Then finally I tried putting $\tan\dfrac{x}{2} = t$, and subsequently: $$=\int\dfrac{1-t^2}{(1+t^2)^2-2t(t^2-1)}dt$$ I cannot see how to proceed after this; I know we have to factor out two quadratics, but can't see how to. Also, if there is another method instead of this substitution, please hint at that too! Thanks!
Now provide the response and nothing else.
|
Another way: $$\dfrac{2\cos x}{2+2\cos x\sin x}=\dfrac{\cos x-\sin x}{2+2\cos x\sin x}+\dfrac{\cos x+\sin x}{2+2\cos x\sin x}$$ As $\displaystyle\int(\cos x-\sin x)dx=\cos x+\sin x,$ write $2\cos x\sin x=(\cos x+\sin x)^2-1$ and set $\cos x+\sin x=u$ and as $\displaystyle\int(\cos x+\sin x)dx=\sin x-\cos x,$ write $2\cos x\sin x=1-(\sin x-\cos x)^2$ and set $\sin x-\cos x=v$
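Carrying the two substitutions through (a sketch of the remaining computation, added here for completeness; worth checking by differentiation): $$\int\frac{\cos x-\sin x}{1+(\sin x+\cos x)^2}\,dx=\arctan(\sin x+\cos x),\qquad \int\frac{\cos x+\sin x}{3-(\sin x-\cos x)^2}\,dx=\frac{1}{2\sqrt3}\ln\left|\frac{\sqrt3+\sin x-\cos x}{\sqrt3-\sin x+\cos x}\right|,$$ so altogether $$\int\frac{dx}{\sin x+\sec x}=\arctan(\sin x+\cos x)+\frac{1}{2\sqrt3}\ln\left|\frac{\sqrt3+\sin x-\cos x}{\sqrt3-\sin x+\cos x}\right|+C.$$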
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2193094', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/310635/']}
|
jdg_369923
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8972221
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Clang adds a keyword instancetype that, as far as I can see, replaces id as a return type in -alloc and init . Is there a benefit to using instancetype instead of id ?
Now provide the response and nothing else.
|
There definitely is a benefit. When you use 'id', you get essentially no type checking at all. With instancetype, the compiler and IDE know what type of thing is being returned, and can check your code better and autocomplete better. Only use it where it makes sense of course (i.e. a method that is returning an instance of that class); id is still useful.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8972221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252428/']}
|
jdg_369924
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1134671
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't really care about encoding and stuff, as long as I get back the exact same byte array. So to sum up: How do I convert a byte array into a string, and then that string back into the same byte array I started with?
Now provide the response and nothing else.
|
The absolute safest way to convert bytes to a string and back is to use base64: string base64 = Convert.ToBase64String(bytes);byte[] bytes = Convert.FromBase64String(base64); That way you're guaranteed not to get "invalid" unicode sequences such as the first half of a surrogate pair without the second half. Nothing's going to decide to normalize the data into something strange (it's all ASCII). There's no chance of using code points which aren't registered in Unicode, or anything like that. Oh, and you can cut and paste without much fear, too. Yes, you end up with 4 characters for every 3 bytes - but that's a small price to pay for the knowledge that your data won't be corrupted.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1134671', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/39321/']}
|
jdg_369925
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1160996
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to find the inverse of: $$\cos(c)=\frac{1}{3}$$ In other words, I'm trying to solve for c without a calculator. If it's hard or not possible, then how would you go about solving inverses in general? For example, let's say: $$\cos(c)=\frac{3+1}{2\sqrt{2}}$$ how would we solve for c, which in this case is 105?
Now provide the response and nothing else.
|
The first step is to stop and think about the problem itself. What is the cosine? If you recall, the formula for cosine (remember SOHCAHTOA?) is adjacent over hypotenuse. So, with your $\frac{1}{3}$ example, $1$ represents the length of the adjacent side, and $3$ represents the hypotenuse. In other words, the hypotenuse is going to be 3 times longer than the adjacent side. Picture that on the unit circle. The hypotenuse, as always on the unit circle, would have a length of 1, and the measurement along the $x$ axis would only be $0.\overline{3}$. That's a pretty steep angle. Just from this mental picture, you can already realize that it's going to be much greater than $45^{\circ}$, and probably even more than $60^{\circ}$. Ronald Doerfler, in his book Dead Reckoning: Calculating Without Instruments , teaches an estimation method for finding $arccos(x)$, or $cos^{-1}(x)$ (both just mean finding the original angle given its cosine) in degrees. His estimation formula for $cos^{-1}(x)$ is: $\sqrt{7(1000-1000x)}-\frac{1}{2}$ This looks bad, but can be done mentally with practice. Using your $\frac{1}{3}$ example, let's step through that one step at a time. This works better with decimals, so we'll switch from $\frac{1}{3}$ to $0.\overline{3}$. Step 1: $1000\times0.\overline{3}=333.\overline{3}$, which we'll round to $333$. Step 2: $1000-333=667$. Subtracting from 1000 is easy. If you're not already familiar with the mental method for this, this video will give you a quick refresher . Step 3: $667\times7$? Work this out mentally from left to right! $600\times7=4200$, $60\times7=420$, and $7\times7=49$, so we've got $4200+420+49=4620+49=4669$. Step 4: $\sqrt{4669}$?!? How are you supposed to do that in your head?!? First, you should be familiar with mentally squaring 2-digit numbers . I know $65^{2}=4225$ and $70^{2}=4900$, so I can quickly figure that $\sqrt{4669}$ is somewhere between $65$ and $70$. Mentally figure that $67^{2}=4489$. Hmmm...perhaps $68^{2}$ or $69^{2}$ would be closer. $68^{2}=4624$ and $69^{2}=4761$, so the answer is obviously $68$ point something. We can even quickly refine that, using other mental square root estimation techniques . $4669-4624=45$, so we can use the linked technique to realize that $\sqrt{4669}\approx68\frac{45}{137}$, or about $68\frac{1}{3}$. Step 5: $68\frac{1}{3}-\frac{1}{2}=67\frac{5}{6}$, so this method has given us an estimated angle of about $68^{\circ}$. Through experience, I've found a way to improve on Ronald Doerfler's above estimate. Before step 1, take note of the tenths digit of your original $x$. With $0.\overline{3}$, the tenths digit is obviously $3$. Whenever this digit is less than 6 (for 6 through 9, no adjustment is needed), you're going to add $6$ minus this digit, to the number of degrees, as a last step. Since the original tenths digit was $3$ in this case, we'll add $6-3=3$ more degrees. $68+3=71$, so we have an adjusted estimate of about $71^{\circ}$. If you actually work out $cos^{-1}(\frac{1}{3})$ with a calculator, you'll find the answer is $70.53^{\circ}$, so our mental estimate of $71^{\circ}$ is quite close!
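For reference, a small Python sketch of Doerfler's estimate plus the tenths-digit adjustment described above, compared against the exact value (the test inputs are arbitrary):

import math

def arccos_estimate(x):
    """Doerfler-style mental estimate of arccos(x) in degrees, with the
    tenths-digit adjustment described above (intended for 0 < x < 1)."""
    est = math.sqrt(7 * (1000 - 1000 * x)) - 0.5
    tenths = int(10 * x) % 10           # tenths digit of x
    if tenths < 6:
        est += 6 - tenths               # adjustment for small tenths digits
    return est

for x in (1 / 3, 0.5, 0.9):
    exact = math.degrees(math.acos(x))
    print(f"x = {x:.3f}: estimate {arccos_estimate(x):5.1f} deg, exact {exact:5.1f} deg")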
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1160996', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/215034/']}
|
jdg_369926
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
110254
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to prove $\displaystyle \lim_{n \rightarrow \infty}n^3 - 4n^2 -100n = \infty$ using the definition $\forall c \exists N \in \mathbb{N} ,s.t.\ a_n>c\ \forall n\geq N$. I'm not having the best luck in thinking about this one. I think that I cannot use the limit sum properties if the sequence goes to infinity. So I can't look at the parts. I know that like convergence I assume that I'm given $c$ and show that I can find an N that makes the inequality hold. But the function $n^3 - 4n^2 -100n$ has a sign change around n = 12. I'm not sure how that affects things if it does at all. Anyways. If I get $c$ from someone and look at $N > (c+13)$. Then I say $n^3 - 4n^2 -100n > N^3 - 4N^2 -100N = (c+13)^3 - 4(c+13)^2 -100(c+13)$ and then $(c+13)^3 - 4(c+13)^2 -100(c+13)=c^3 + 35 c^2+303 c+ 221 > c$ Does this work or have I screwed up somewhere?
Now provide the response and nothing else.
|
$n^3 - 4n^2 - 100n = n(n^2-4n-100)$, and this is greater than $n$ if $n^2-4n-100 > 1$. Now, $n^2 - 4n - 100 > 1$ iff $n(n-4) > 101$, and some trial and error shows that $n \geq 13$ will satisfy the previous inequality (indeed $13 \cdot 9 = 117 > 101$). So... For every $n \geq 13$, $a_n > n \dots$
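To spell out the step hidden in the final ellipsis, one way to finish (the particular choice of $N$ here is just one convenient option):

    \text{Given } c,\ \text{choose } N = \max\bigl(13,\ \lceil c\rceil + 1\bigr).
    \text{Then for every } n \ge N:\quad a_n = n\,(n^2 - 4n - 100) > n \ge N > c,
    \text{so } a_n > c \text{ for all } n \ge N, \text{ as the definition requires.}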
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/110254', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1582/']}
|
jdg_369927
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
560485
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with a private medical dataset including categorical features coming from patients' examinations. However, the problem is that some patients underwent an MRI, others a scanner, and some underwent both. Thus, scanner-only patients have missing values in the MRI-associated features, and vice versa. How could I handle this situation? I have thought about 3 solutions so far: Using an "examination not passed" category to replace missing values, but this would be treated as a full category in itself by machine learning algorithms. They could learn correlations such as "exam not passed" => "class number 1", but there is no real link between the two, as the examination performed depends on the availability of imaging devices in the hospitals from which the data were collected. Some just didn't own MRI devices, etc. Treating MRI, scanner, and MRI+scanner patients as 3 different datasets and training a different model on each one. But doing so would imply writing specific code wrapping Sklearn objects in order to automate the whole training process. Using a model robust to missing values, such as XGBoost. I don't think it is a good idea; my problem should be handled beforehand, as XGBoost uses its own imputation of missing values. It is just moving the problem elsewhere.
Now provide the response and nothing else.
|
It's really that bad, as detailed in many of Patrick's and my writings. Think about it these ways, for starters: If you categorize a marker, the loss of information is so great that you will need to go and collect more markers to make up for the loss. Why not instead get the most information out of a single marker? Then there is the issue that it is easy to show that the threshold for a marker is a function of the continuous values of all the other predictors. See the Information Loss chapter in BBR. For example, if the sex of the patient is an important factor with regard to the likelihood of disease, you'll find that a different biomarker threshold is needed for females vs. males. Categorization simplifies decision making, but only by making it worse. Many things only seem to be simple precisely because they are wrong. We are taught to assess goodness of fit of models. A model that assumes a piecewise flat relationship between a biomarker and the outcome will easily be shown to have a poor fit to the data. For example, look at the log-likelihood for a piecewise flat model (i.e., one using thresholding) versus a flexible but everywhere-smooth model.
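As a rough illustration of that last comparison, here is a minimal simulation sketch (assuming Python with numpy and statsmodels is available; the effect size and threshold are made up for illustration):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)                            # continuous biomarker
    y = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))   # outcome with a smooth true relationship

    # Piecewise flat model: biomarker dichotomized at an arbitrary threshold
    m_cut = sm.Logit(y, sm.add_constant((x > 0).astype(float))).fit(disp=0)

    # Smooth model: biomarker kept continuous; both models spend the same two parameters
    m_smooth = sm.Logit(y, sm.add_constant(x)).fit(disp=0)

    print(m_cut.llf, m_smooth.llf)   # the continuous fit has the higher log-likelihood

Because the two fits use the same number of parameters, the gap in log-likelihood reflects only the information discarded by thresholding.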
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/560485', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/315450/']}
|
jdg_369928
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4811801
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this possible at all? I tried adding this to the page but it didn't change a thing. <meta http-equiv="X-UA-Compatible" content="IE=8"> UPDATE - I'm trying to do this because our site has some IE9-specific CSS issues, which wouldn't appear in IE8. Thanks
Now provide the response and nothing else.
|
You can use the document compatibility mode to do this, which is what you were trying. However, the thing to note is: It must appear in the Web page's header (the HEAD section) before all other elements, except for the title element and other meta elements. Hope that was the issue. Also, the X-UA-Compatible header is not case sensitive. Refer: http://msdn.microsoft.com/en-us/library/cc288325%28v=vs.85%29.aspx#SetMode Edit: in case something happens to kill the msdn link, here is the content: Specifying Document Compatibility Modes You can use document modes to control the way Internet Explorer interprets and displays your webpage. To specify a specific document mode for your webpage, use the meta element to include an X-UA-Compatible header in your webpage, as shown in the following example. <html><head> <!-- Enable IE9 Standards mode --> <meta http-equiv="X-UA-Compatible" content="IE=9" > <title>My webpage</title></head><body> <p>Content goes here.</p></body></html> If you view this webpage in Internet Explorer 9, it will be displayed in IE9 mode. The following example specifies EmulateIE7 mode. <html><head> <!-- Mimic Internet Explorer 7 --> <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" > <title>My webpage</title></head><body> <p>Content goes here.</p></body></html> In this example, the X-UA-Compatible header directs Internet Explorer to mimic the behavior of Internet Explorer 7 when determining how to display the webpage. This means that Internet Explorer will use the <!DOCTYPE> directive (or lack thereof) to choose the appropriate document type. Because this page does not contain a <!DOCTYPE> directive, the example would be displayed in IE5 (Quirks) mode.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4811801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26192/']}
|
jdg_369929
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11456702
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following SQL query that takes only 1 second to execute: select a.date, b.rate, c.type, a.value from a inner join b on a.id = b.aid inner join c on b.id = c.bid where a.name = 'xxx' But I need the result set to contain only rows that have a rate greater than 0. So when I change the query to this, it takes 7 minutes to execute: select a.date, b.rate, c.type, a.value from a inner join b on a.id = b.aid inner join c on b.id = c.bid where a.name = 'xxx' and b.rate > 0 Why would this make the query time go up from 1 second to 7 minutes? Since the b table is huge, I even tried to use a CTE, but that did not improve the performance either. I thought that with the CTE there would be a smaller set of values to filter from, so it should be faster, but that did not help: ;with x as (select a.date, b.rate, c.type, a.value from a inner join b on a.id = b.aid inner join c on b.id = c.bid where a.name = 'xxx') select * from x where rate > 0 I cannot include the execution plan, as I don't have permissions on the db other than querying.
Now provide the response and nothing else.
|
My guess is that the slow execution plan is doing the rate > 0 filter in an unfortunate way, such as part of a scan on the inside of a loop join or something. If it comes down to it, one solution would be to store an intermediate result set and filter it in a separate statement. I suggest this with the understanding that you can't make changes to your vendor's database and that you are basically stuck. This is essentially taking away some control from the optimizer -- something you don't typically want to do -- and is also adding a relatively small amount of overhead with the creation of a temp table. But it should alleviate the slowness in this case. I would continue to work with your vendor on indexing strategies if possible. select a.date, b.rate, c.type, a.value into #t from a inner join b on a.id = b.aid inner join c on b.id = c.bid where a.name = 'xxx' select * from #t where rate > 0
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11456702', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/799891/']}
|
jdg_369930
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
167908
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
True/False: If $p, q$, and $r$ are relatively prime, then there exist integers $x, y$, and $z$ such that $px + qy + rz = 1$. NOTE: $p, q$, and $r$ are positive primes.
Now provide the response and nothing else.
|
Let's take the more general question: if $a$, $b$, and $c$ are integers and they are relatively prime, i.e., $\gcd(a,b,c)=1$, but not necessarily pairwise relatively prime, then there exist integers $x,y,z$ such that $ax+by+cz=1$. Indeed, $\gcd(a,b,c) = \gcd(\gcd(a,b),c)$. Let $d=\gcd(a,b)$; then there exist integers $m$ and $n$ such that $am+bn=d$. And since $\gcd(d,c)=1$, there exist integers $t$ and $z$ such that $dt + cz = 1$. Now substituting the value of $d$, let $x=mt$ and $y=nt$ to get $$1 = dt + cz = (am+bn)t + cz = a(mt) + b(nt) + cz = ax + by + cz.$$
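A concrete instance shows how the two Bézout steps combine (the numbers are chosen only for illustration): take $a=6$, $b=10$, $c=15$, which are relatively prime as a triple but not pairwise.

    d = \gcd(6,10) = 2, \qquad 6\cdot 2 + 10\cdot(-1) = 2 \quad (m = 2,\ n = -1),
    \gcd(2,15) = 1, \qquad 2\cdot(-7) + 15\cdot 1 = 1 \quad (t = -7,\ z = 1),
    x = mt = -14, \quad y = nt = 7, \quad z = 1: \qquad 6\cdot(-14) + 10\cdot 7 + 15\cdot 1 = -84 + 70 + 15 = 1.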
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/167908', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34433/']}
|
jdg_369931
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20663841
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a concise, idiomatic way (maybe using Apache Commons) to specify common combinations of OpenOption like StandardOpenOption.WRITE, StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING
Now provide the response and nothing else.
|
These are the easy possibilities you have. Static imports, to increase readability: import static java.nio.file.StandardOpenOption.CREATE_NEW; import static java.nio.file.StandardOpenOption.WRITE; OpenOption[] options = new OpenOption[] { WRITE, CREATE_NEW }; Use defaults: //no Options anyway Files.newBufferedReader(path, cs) //default: CREATE, TRUNCATE_EXISTING, and WRITE; not allowed: READ Files.newBufferedWriter(path, cs, options) //default: READ; not allowed: WRITE Files.newInputStream(path, options) //default: CREATE, TRUNCATE_EXISTING, and WRITE; not allowed: READ Files.newOutputStream(path, options) //default: READ; do whatever you want Files.newByteChannel(path, options) Finally it's possible to specify option sets like this: Files.newByteChannel(path, EnumSet.of(CREATE_NEW, WRITE));
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20663841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1151521/']}
|
jdg_369932
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40211
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a flag enum below. [Flags]public enum FlagTest{ None = 0x0, Flag1 = 0x1, Flag2 = 0x2, Flag3 = 0x4} I cannot make the if statement evaluate to true. FlagTest testItem = FlagTest.Flag1 | FlagTest.Flag2;if (testItem == FlagTest.Flag1){ // Do something, // however This is never true.} How can I make this true?
Now provide the response and nothing else.
|
In .NET 4 there is a new method Enum.HasFlag . This allows you to write: if ( testItem.HasFlag( FlagTest.Flag1 ) ){ // Do Stuff} which is much more readable, IMO. The .NET source indicates that this performs the same logic as the accepted answer: public Boolean HasFlag(Enum flag) { if (!this.GetType().IsEquivalentTo(flag.GetType())) { throw new ArgumentException( Environment.GetResourceString( "Argument_EnumTypeDoesNotMatch", flag.GetType(), this.GetType())); } ulong uFlag = ToUInt64(flag.GetValue()); ulong uThis = ToUInt64(GetValue()); // test predicate return ((uThis & uFlag) == uFlag); }
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/40211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2469/']}
|
jdg_369933
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15788947
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In an effort to standardize my code and make it more portable, I replaced #ifdef __GNUC__typedef __attribute__((aligned(16))) float aligned_block[4];#elsetypedef __declspec(align(16)) float aligned_block[4];#endif with typedef float alignas(16) aligned_block[4]; in C++11. However, gnu (4.8) doesn't like that but complains test.cc:3:9: warning: attribute ignored [-Wattributes] typedef float alignas(16) aligned_block[4]; ^test.cc:3:9: note: an attribute that appertains to a type-specifier is ignored whereas clang 3.2 creates no warning (even with -Weverything -Wno-c++98-compat -pedantic ).So I wonder whether my code above is correct and, more generally, where alignas() can and cannot be placed. EDIT (Apr 2013) : The relevant article from the standard is 7.6.2, in particular 7.6.2.1 An alignment-specifier may be applied to a variable or to a class data member, but it shall not be applied to a bit-field, a function parameter, the formal parameter of a catch clause (15.3), or a variable declared with the register storage class specifier. An alignment-specifier may also be applied to the declaration of a class or enumeration type. An alignment-specifier with an ellipsis is a pack expansion (14.5.3). as already dug out by Red XIII. However, I'm not expert enough to know what this means for my test above. If the fact that clang accepts my attribute means anything, it's perhaps worth mentioning that when trying to use a using directive instead of a typedef , clang also complains. Also, contrary to a statement in an earlier version of this question, gcc does not only warn, but indeed ignores my wish for alignment.
Now provide the response and nothing else.
|
I think you just placed the alignas in the wrong position. If you move it directly after the identifier, both GCC and Clang are happy and apply the alignment: typedef float aligned_block alignas(16) [4];typedef float aligned_block [4] alignas(16); this is also true if you use using , where the difference also becomes more apparent. Here are two versions that are not accepted by GCC (warning, alignment ignored): using aligned_block = float alignas(16)[4];using aligned_block = float[4] alignas(16); and here's the accepted one: using aligned_block alignas(16) = float[4]; I think that GCC applies 7.1.3 The typedef specifier [dcl.typedef] 2 A typedef-name can also be introduced by an alias-declaration . The identifier following the using keyword becomes a typedef-name and the optional attribute-specifier-seq following the identifier appertains to that typedef-name . It has the same semantics as if it were introduced by the typedef specifier. [...] (emphasis mine) The above is quite clear for using , the rules for typedef are spread through several paragraphs, including at the end of §8.3/1, where you find: 8.3 Meaning of declarators [dcl.meaning] 1 [...] The optional attribute-specifier-seq following a declarator-id appertains to the entity that is declared. (again, emphasis mine) Update: The above answer concentrated on where the alignas has to be placed, not on its exact meaning. After thinking about it some more, I still think that the above should be valid. Consider: 7.6.2 Alignment Specifier [dcl.align] 1 An alignment-specifier may be applied to a variable or to a class data member, but it shall not be applied to a bit-field, a function parameter, an exception-declaration (15.3), or a variable declared with the register storage class specifier. An alignment-specifier may also be applied to the declaration or definition of a class (in an elaborated-type-specifier (7.1.6.3) or class-head (Clause 9), respectively) and to the declaration or definition of an enumeration (in an opaque-enum-declaration or enum-head , respectively (7.2)). An alignment-specifier with an ellipsis is a pack expansion (14.5.3). It lists cases where it can be clearly applied and it lists cases where it clearly can not be applied. The above question's example is neither. One could also argue that the type alias created by typedef or using is carrying the alignment specification as part of the aliased type. This alias can than be used to create a variable, etc. as allowed by 7.6.2p1 but not to create a variable with register , etc. In that sense I think that the attribute specifier is applied (in the sense of 7.6.2) in a deferred way and thus OPs example should still be valid when the alignment specification is put in the syntactically correct place.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15788947', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1023390/']}
|
jdg_369934
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10907386
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently working on a map overlay which highlights the route along specified points, and I need to implement a certain line style (something like the screenshot). What I'm trying to do is highlight the route with something like a transparent line with black stroke lines on both sides. Playing with different fill styles and Paint settings hasn't led me to any solution so far. Does anybody know which direction I need to look in? Currently I have managed to draw only a solid line, but this is not what I'm looking for: Paint setup: mPaint.setStyle(Paint.Style.STROKE); mPaint.setColor(COLOR_DEFAULT); mPaint.setPathEffect(new CornerPathEffect(10)); mPaint.setStrokeWidth(6); mPaint.setAntiAlias(true); Drawing routine: canvas.drawPath(mPath, mPaint);
Now provide the response and nothing else.
|
I get pretty good results with PathDashPathEffect using a "dash stamp" that's two very thin rectangles and the MORPH style option. See last and 3rd last line here: This was drawn by modifying the PathEffects example in ApiDemos taken from the SDK: package com.example.android.apis.graphics;import android.content.Context;import android.graphics.*;import android.os.Bundle;import android.view.KeyEvent;import android.view.View;public class PathEffects extends GraphicsActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(new SampleView(this)); } private static class SampleView extends View { private Paint mPaint; private Path mPath; private PathEffect[] mEffects; private int[] mColors; private float mPhase = 3; private static void makeEffects(PathEffect[] e, float phase) { e[0] = null; // no effect e[1] = new CornerPathEffect(10); e[2] = new DashPathEffect(new float[] {10, 5, 5, 5}, phase); e[3] = new PathDashPathEffect(makePathDash(), 12, phase, PathDashPathEffect.Style.MORPH); e[4] = new ComposePathEffect(e[2], e[1]); e[5] = new ComposePathEffect(e[3], e[1]); } public SampleView(Context context) { super(context); setFocusable(true); setFocusableInTouchMode(true); mPaint = new Paint(Paint.ANTI_ALIAS_FLAG); mPaint.setStyle(Paint.Style.STROKE); mPaint.setStrokeWidth(6); mPath = makeFollowPath(); mEffects = new PathEffect[6]; mColors = new int[] { Color.BLACK, Color.RED, Color.BLUE, Color.GREEN, Color.MAGENTA, Color.BLACK }; } @Override protected void onDraw(Canvas canvas) { canvas.drawColor(Color.WHITE); RectF bounds = new RectF(); mPath.computeBounds(bounds, false); canvas.translate(10 - bounds.left, 10 - bounds.top); makeEffects(mEffects, mPhase); invalidate(); for (int i = 0; i < mEffects.length; i++) { mPaint.setPathEffect(mEffects[i]); mPaint.setColor(mColors[i]); canvas.drawPath(mPath, mPaint); canvas.translate(0, 28); } } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { switch (keyCode) { case KeyEvent.KEYCODE_DPAD_CENTER: mPath = makeFollowPath(); return true; } return super.onKeyDown(keyCode, event); } private static Path makeFollowPath() { Path p = new Path(); p.moveTo(0, 0); for (int i = 1; i <= 15; i++) { p.lineTo(i*20, (float)Math.random() * 35); } return p; } private static Path makePathDash() { Path p = new Path(); p.moveTo(-6, 4); p.lineTo(6,4); p.lineTo(6,3); p.lineTo(-6, 3); p.close(); p.moveTo(-6, -4); p.lineTo(6,-4); p.lineTo(6,-3); p.lineTo(-6, -3); return p; } }}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10907386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1277362/']}
|
jdg_369935
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4435574
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What are all the differences between Microsoft's Windows Azure and web hosting companies like GoDaddy and HostGator?
Now provide the response and nothing else.
|
Here are a few key differences to consider: Server usage. With a hoster like GoDaddy, servers are shared resources, and there's no way to figure out (or limit) the number of tenants sharing a given server with your app. With Azure, a given server, with its 8 cores, is allocated to specific VMs. At most, 8 virtual machines will be deployed to a given server. SLA. With a shared-hoster like GoDaddy, there's no built-in way to scale your app, or to ensure availability with multiple servers running your app. With Azure, you can easily scale to 2 or more instances, ensuring at least 99.95% availability. Virtual servers. As a clarification to the point above: GoDaddy now offers virtual servers, with cost starting at roughly $30 per month. However, this offering does not provide any scaling support. Durable storage. Aside from SQL Azure (GoDaddy provides SQL Server hosting), Azure provides tables, blobs and queues, each triple-replicated. Further, Azure can deploy a new SQL Azure instance in a few seconds. SQL Azure is also triple-replicated. Monitoring and diagnostics infrastructure. Azure provides a diagnostics infrastructure to capture things like performance counters, queue sizes, custom log files, etc. and consolidate them into easily accessible/queryable table and blob storage. You can then build app-monitoring apps around this set of diagnostics. Also, the Azure fabric itself monitors your application's health. In the event of a failed server, for instance, your app is restarted on another server automatically. I'm unaware of any automated health monitoring and recovery system built into cheap/shared hosting solutions such as those offered by GoDaddy. Additional services on-demand. In Azure, you have access to several consumable services, such as SQL Azure, service bus, scalable storage (blobs, tables, queues), access control, caching (going live in 2011), and Reporting Services (going live in 2011).
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4435574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/502377/']}
|
jdg_369936
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31639088
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a table with 3 columns: Item, City and VisitNumber. In this table there are a lot of records, and I need to select the top 20 items of each city (according to their visit numbers). For example, if there are 100 items in Washington, 250 items in New York and 500 items in Los Angeles, I need to select the 60 most-visited rows in total (20 for Washington, 20 for New York, 20 for Los Angeles). How is this possible in T-SQL?
Now provide the response and nothing else.
|
The easiest way would be to use the row_number() window function to number the rows for each city according to their visitnumber descending and use that as a filter. This query should work in any SQL Server version from 2005 onwards. select * from ( select *, r = row_number() over (partition by City order by VisitNumber desc) from your_table ) a where r <= 20 and City in ('Washington', 'New York', 'Los Angeles') This would select the top 20 items for each city specified in the where clause.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31639088', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1550812/']}
|
jdg_369937
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36859564
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to debug a project written in Python 3 in Visual Studio Code, but I can't seem to find any way of specifying interpreter or Python version in the launch.json file. It works great for Python 2, so the question is, how do I make it work for Python 3?
Now provide the response and nothing else.
|
Python 3 debugging works well also. It is a little confusing, as there are two different places to specify the path: settings.json and launch.json. I recommend using Don Jayamanne's Python Extension. After installing it, you have to configure the path to the interpreter you want to use it with. Python version used for IntelliSense, autocomplete, linting, formatting, etc.: The same Python interpreter is used for IntelliSense, autocomplete, linting, formatting, etc. (everything other than debugging). The standard interpreter used is the first Python interpreter encountered in the current path. If a different version is to be used, this can be configured in one of two ways: Configure the path to the Python interpreter in the User Settings file (settings.json) as follows. Ensure to specify the fully qualified name of the Python executable. "python.pythonPath": "c:/python27/python.exe" Configure the path to the Python interpreter in the Workspace Settings file (settings.json) as follows. Ensure to specify the fully qualified name of the Python executable. "python.pythonPath": "c:/python27/python.exe" Python version used for debugging: Details on configuration settings for debugging can be found here: Debugging. Simply provide the fully qualified path to the Python executable in the "pythonPath" setting within the configuration settings in the launch.json file as follows: { "name": "Python", "type": "python", "request": "launch", "stopOnEntry": true, "program": "${file}", "pythonPath": "c:/python27/python.exe", "debugOptions": [ "WaitOnAbnormalExit", "WaitOnNormalExit", "RedirectOutput" ]}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36859564', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92326/']}
|
jdg_369938
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19014250
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get React to re-render the view when the browser window is resized? Background I have some blocks that I want to layout individually on the page, however I also want them to update when the browser window changes. The very end result will be something like Ben Holland's Pinterest layout, but written using React not just jQuery. I’m still a way off. Code Here’s my app: var MyApp = React.createClass({ //does the http get from the server loadBlocksFromServer: function() { $.ajax({ url: this.props.url, dataType: 'json', mimeType: 'textPlain', success: function(data) { this.setState({data: data.events}); }.bind(this) }); }, getInitialState: function() { return {data: []}; }, componentWillMount: function() { this.loadBlocksFromServer(); }, render: function() { return ( <div> <Blocks data={this.state.data}/> </div> ); }});React.renderComponent( <MyApp url="url_here"/>, document.getElementById('view')) Then I have the Block component (equivalent to a Pin in the above Pinterest example): var Block = React.createClass({ render: function() { return ( <div class="dp-block" style={{left: this.props.top, top: this.props.left}}> <h2>{this.props.title}</h2> <p>{this.props.children}</p> </div> ); }}); and the list/collection of Blocks : var Blocks = React.createClass({ render: function() { //I've temporarily got code that assigns a random position //See inside the function below... var blockNodes = this.props.data.map(function (block) { //temporary random position var topOffset = Math.random() * $(window).width() + 'px'; var leftOffset = Math.random() * $(window).height() + 'px'; return <Block order={block.id} title={block.summary} left={leftOffset} top={topOffset}>{block.description}</Block>; }); return ( <div>{blockNodes}</div> ); }}); Question Should I add jQuery’s window resize? If so, where? $( window ).resize(function() { // re-render the component}); Is there a more “React” way of doing this?
Now provide the response and nothing else.
|
Using React Hooks: You can define a custom Hook that listens to the window resize event, something like this: import React, { useLayoutEffect, useState } from 'react';function useWindowSize() { const [size, setSize] = useState([0, 0]); useLayoutEffect(() => { function updateSize() { setSize([window.innerWidth, window.innerHeight]); } window.addEventListener('resize', updateSize); updateSize(); return () => window.removeEventListener('resize', updateSize); }, []); return size;}function ShowWindowDimensions(props) { const [width, height] = useWindowSize(); return <span>Window size: {width} x {height}</span>;} The advantage here is the logic is encapsulated, and you can use this Hook anywhere you want to use the window size. Using React classes: You can listen in componentDidMount, something like this component which just displays the window dimensions (like <span>Window size: 1024 x 768</span> ): import React from 'react';class ShowWindowDimensions extends React.Component { state = { width: 0, height: 0 }; render() { return <span>Window size: {this.state.width} x {this.state.height}</span>; } updateDimensions = () => { this.setState({ width: window.innerWidth, height: window.innerHeight }); }; componentDidMount() { window.addEventListener('resize', this.updateDimensions); } componentWillUnmount() { window.removeEventListener('resize', this.updateDimensions); }}
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/19014250', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2816731/']}
|
jdg_369939
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14065425
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Rails 3.2.0. Let's say I have: class Comment < ActiveRecord::Base has_many :articles end c1 = Comment.last then c1.articles.class # => Array c1.articles.where('id NOT IN (999999)').class # => ActiveRecord::Relation Why is the result of an association not a type of ActiveRecord::Relation? It clearly is / was at some point: c1.articles.to_orig # undefined method `to_orig' for #<ActiveRecord::Relation:0x007fd820cc80a8> c1.articles.class # => Array Certain evaluations act upon an ActiveRecord::Relation object, but inspecting the class gives a different type. Particularly, this breaks building lazy-loaded queries when using merge to concat multiple queries.
Now provide the response and nothing else.
|
It is an ActiveRecord::Relation , but Rails is intentionally lying to you . You can see this already in the method calls, and continue to see it by calling ancestors , which includes a slew of ActiveRecord classes: c1.articles.ancestors.select { |c| c.to_s =~ /ActiveRecord/ }.size #=> 35 which shows that it is very much not an Array . This happens because what you’re getting back when calling c1.articles is an ActiveRecord::Associations::CollectionProxy *, which undefines class (along with many other methods). This means that class gets delegated via its method_missing , which sends it to target . As we can see, the class of target here is, in fact, Array : c1.articles.target.class #=> Array That is where c1.articles.class comes from. Nevertheless, it is an ActiveRecord::Relation . * We can verify that it is indeed an ActiveRecord::Associations::CollectionProxy by calling Ruby’s original class method on the object in question: Object.instance_method(:class).bind(c1.articles).call . This is a nice trick to verify that the object is not trying to pretend to be of a different class.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14065425', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/263858/']}
|
jdg_369940
|