source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
40450027
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For an assignment I got from school we need to make a chess game. One task is that we need to make a class called chessboard with a couple of methods like place, delete and replace. My chessboard is a dictionary with the keys as place, in the form of a tuple, and the value as the chess-piece as value. But if I want to give a tuple as argument to my methods it fails. This is my code: class ChessBoard: def __init__(self): DICT={ (A,1):None,(A,2):None,(A,3):None,(A,4):None,(A,5):None,(A,6):None,(A,7):None,(A,8):None, (B,1):None,(B,2):None,(B,3):None,(B,4):None,(B,5):None,(B,6):None,(B,7):None,(B,8):None, (C,1):None,(C,2):None,(C,3):None,(C,4):None,(C,5):None,(C,6):None,(C,7):None,(C,8):None, (D,1):None,(D,2):None,(D,3):None,(D,4):None,(D,5):None,(D,6):None,(D,7):None,(D,8):None, (E,1):None,(E,2):None,(E,3):None,(E,4):None,(E,5):None,(E,6):None,(E,7):None,(E,8):None, (F,1):None,(F,2):None,(F,3):None,(F,4):None,(F,5):None,(F,6):None,(F,7):None,(F,8):None, (G,1):None,(G,2):None,(G,3):None,(G,4):None,(G,5):None,(G,6):None,(G,7):None,(G,8):None, (H,1):None,(H,2):None,(H,3):None,(H,4):None,(H,5):None,(H,6):None,(H,7):None,(H,8):None } def place(self, piece,(row,column)): self.piece=piece self.(row,column)=(row,column) DICT[(row,column)]=self.piece Now provide the response and nothing else.
The problem you are facing is that you are trying to getWidth() on your unscaledBitmap in the createScaledBitmap function. Clearly, your unscaledBitmap is null sometimes; and calling getWidth() is causing the Null Pointer exception. The root cause is that decodeResource is returning you a null for whatever reason. The reasons can include - No read permission The image file is corrupt There is not enough memory to decode the file The resource does not exist Invalid options specified in the options variable. I'd suggest that you modify your code to include a null-check on the decoded bitmap, log it and debug from there on the specific devices that you see the error occurring. It may also be that your options variable that you are re-using is being interpreted differently in the second call to decodeResource . You might try passing a null there. The modified code should be as follows - public class BitmapScalingHelper{ public static Bitmap decodeResource(Resources res, int resId, int dstWidth, int dstHeight) { Options options = new Options(); options.inJustDecodeBounds = true; BitmapFactory.decodeResource(res, resId, options); options.inJustDecodeBounds = false; options.inSampleSize = calculateSampleSize(options.outWidth, options.outHeight, dstWidth, dstHeight); options = new Options(); //May use null here as well. The funciton may interpret the pre-used options variable in ways hard to tell. Bitmap unscaledBitmap = BitmapFactory.decodeResource(res, resId, options); if(unscaledBitmap == null) { Log.e("ERR","Failed to decode resource - " + resId + " " + res.toString()); return null; } return unscaledBitmap; }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40450027', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7057792/']}
jdg_369241
stackexchange
llm_judgeable_groundtruth_similarity
231987
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: On periodic domain, people always use Fourier basis, which eigenvectors of Laplace operator. On sphere, people use spherical harmonics, which also are eigenvectors of Laplace operator. In applied science, people decompose functions on a graph using eigenvectors of graph laplacian. What makes eigenvectors of Laplace operator widely used compared to other orthogonal basis? Are there any other operators also provide orthogonal basis which are also useful? Are there any example that we are not using Laplace operator? On non-periodic domain, we have many orthogonal polynomial systems, say, Legendre polynomials, Chebyshev polynomials, Jacobi polynomials. So, we have more than just one set of orthogonal basis, in this case. It motivates me to ask those above questions. Now provide the response and nothing else.
The exponentials used in Fourier series are eigenfunctions of shifts, and thus of any operator commuting with shifts, not just the Laplacian. Similarly, spherical harmonics carry irreducible representations of $SO(3)$, and so they are eigenfunctions of any rotationally invariant operator. If the underlying space has symmetries, it's no wonder that a basis respecting those symmetries has some nice properties.
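To spell out the first sentence with a standard one-line computation (my addition, not text from the original answer): on the circle, the shift operator $T_a$ commutes with the Laplacian, and the exponentials are simultaneous eigenfunctions of both:
$$ (T_a f)(x) = f(x+a), \qquad T_a\, e^{ikx} = e^{ika}\, e^{ikx}, \qquad -\tfrac{d^2}{dx^2}\, e^{ikx} = k^2\, e^{ikx}, \qquad T_a \Delta = \Delta T_a . $$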
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/231987', 'https://mathoverflow.net', 'https://mathoverflow.net/users/13838/']}
jdg_369242
stackexchange
llm_judgeable_groundtruth_similarity
128233
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: On $n$ nodes, we have $2^{n(n-1)/2}$ graphs. Asymmetric graph is a graph that has only trivial automorphism. We known that asymptotically almost all finite graphs are asymmetric. Therefore, in the limit, the ratio of asymmetric graphs approaches 1. However, I did not find any reference that provides lower bound on the number of asymmetric graphs on $n$ nodes. What is known about the density of asymmetric graphs as a function of the number of nodes $n$? EDIT 1-3-2014 Thanks to both answers. I received two estimates of the number of symmetric graphs. I'm still hoping for a better tight asymptotic upper-bound such as $O(f(n))$ where $n= |V(G)|$ or even better a tight asymptotic lower-bound on the number of asymmetric graphs $\omega(g(n))$. Now provide the response and nothing else.
Almost all non-asymmetric graphs have exactly one non-trivial automorphism, namely a transposition swapping two vertices. So, an accurate estimate of their number is obtained by taking an arbitrary graph with one vertex less, choosing a vertex $v$, adding a new vertex $w$ with the same neighbours as $v$, then either joining or not joining $v$ to $w$. For labelled graphs, if $g_n=2^{\binom n2}$ is the number of them, we have asymptotically $2ng_{n-1}$ non-asymmetric graphs. This is a small fraction. For unlabelled graphs, divide the total by $n!$ and the non-asymmetric ones by $n!/2$.
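To make the closing "small fraction" remark quantitative (a short computation from the figures given in the answer, not text from the original): with $g_n = 2^{\binom n2}$ labelled graphs on $n$ vertices, the estimated share of non-asymmetric ones is
$$ \frac{2n\,g_{n-1}}{g_n} = 2n\, 2^{\binom{n-1}{2}-\binom{n}{2}} = \frac{2n}{2^{\,n-1}} \longrightarrow 0 , $$
so, asymptotically, all but a fraction $2n/2^{n-1}$ of the $2^{\binom n2}$ labelled graphs are asymmetric.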
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/128233', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8784/']}
jdg_369243
stackexchange
llm_judgeable_groundtruth_similarity
43384804
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to do validation for checkboxes here without form tag. At least one checkbox should be selected. <div *ngFor="let item of officeLIST"> <div *ngIf=" item.officeID == 1"> <input #off type="checkbox" id="off" name="off" value="1" [(ngModel)]="item.checked"> <label>{{item.officename}}</label> </div> <div *ngIf="item.officeID== 2"> <input #off type="checkbox" id="off" name="off" value="2" [(ngModel)]="item.checked"> <label>{{item.officename}}</label> </div> <div *ngIf="item.officeID== 3"> <input #off type="checkbox" id="off" name="off" value="3" [(ngModel)]="item.checked"> <label>{{item.officename}}</label> </div></div> for other field I will put required and do the error|touched|valid etc. but since checkbox is not single input, I cannot put required in every checkbox because all checkbox will be compulsory to checked. so how do I do the validation to alert user atleast one should be checked? Now provide the response and nothing else.
consider creating a FormGroup which contains your check-box group and bind the group's checked value to a hidden formcontrol with a required validator. Assume that you have three check boxes items = [ {key: 'item1', text: 'value1'}, // checkbox1 (label: value1) {key: 'item2', text: 'value2'}, // checkbox2 (label: value2) {key: 'item3', text: 'value3'}, // checkbox3 (label: value3)]; Step1: define FormArray for your check boxes let checkboxGroup = new FormArray(this.items.map(item => new FormGroup({ id: new FormControl(item.key), // id of checkbox(only use its value and won't show in html) text: new FormControl(item.text), // text of checkbox(show its value as checkbox's label) checkbox: new FormControl(false) // checkbox itself}))); *easy to show via ngFor Step2: create a hidden required formControl to keep status of checkbox group let hiddenControl = new FormControl(this.mapItems(checkboxGroup.value), Validators.required);// update checkbox group's value to hidden formcontrolcheckboxGroup.valueChanges.subscribe((v) => { hiddenControl.setValue(this.mapItems(v));}); we only care about hidden control's required validate status and won't show this hidden control in html. Step3: create final form group contains below checkbox group and hidden formControl this.form = new FormGroup({ items: checkboxGroup, selectedItems: hiddenControl}); Html Template: <form [formGroup]="form"> <div [formArrayName]="'items'" [class.invalid]="!form.controls.selectedItems.valid"> <div *ngFor="let control of form.controls.items.controls; let i = index;" [formGroup]="control"> <input type="checkbox" formControlName="checkbox" id="{{ control.controls.id.value }}"> <label attr.for="{{ control.controls.id.value }}">{{ control.controls.text.value }}</label> </div> </div> <div [class.invalid]="!form.controls.selectedItems.valid" *ngIf="!form.controls.selectedItems.valid"> checkbox group is required! </div> <hr> <pre>{{form.controls.selectedItems.value | json}}</pre></form> refer this demo .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43384804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3431310/']}
jdg_369244
stackexchange
llm_judgeable_groundtruth_similarity
132039
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: With $N$ integers, how many different GCDs can you make by taking subsets of them? More formally: Let $S$ be a set of non-negative integers. Define $G(S)$ as $\{gcd(T) : T \subseteq S \}$ Define $f(N)$ as $\max_{|S| = N} \{|G(S)|\}$ For example: $f(0) = 1$ $f(1) = 2$ $f(2) = 4$ (e.g. $S = \{4, 6\}$, $G(S) = \{0, 2, 4, 6\}$) $f(3) = 8$ (e.g. $S = \{12, 20, 30\}$, $G(S) = \{0, 2, 4, 6, 10, 12, 20, 30\}$) Is it the case that $f(N) = 2^N$? Now provide the response and nothing else.
Yes, it is possible for each subset to have a different gcd, giving $2^{N}$ distinct gcds. Let $\mathcal{P}$ be the set consisting of the first $N$ odd primes and let $P$ be their product. Then if $S = \lbrace \frac{2P}{q} : q\in \mathcal{P}\rbrace$, each subset $T\subset S$ has a unique gcd (we can recover $T$ from its gcd $x$ by seeing which primes do not divide $x$; i.e., $\frac{2P}{q}\in T\Leftrightarrow q\nmid x$). (The factor of $2$ takes care of the weird convention that $\gcd(\emptyset)=1$; otherwise the empty set and the whole set would have the same gcd.)
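As a quick sanity check (my own sketch, not part of the original answer), here is a small Python snippet that builds the set $S$ above for $N = 4$ and verifies that all $2^N$ subset gcds are distinct; it follows the question's convention $\gcd(\emptyset) = 0$:

    from itertools import combinations
    from functools import reduce
    from math import gcd

    def distinct_gcds(odd_primes):
        # S = { 2P/q : q in odd_primes }, with P the product of the primes
        P = reduce(lambda a, b: a * b, odd_primes, 1)
        S = [2 * P // q for q in odd_primes]
        # gcd of the empty subset is taken as 0 (the question's convention)
        return {reduce(gcd, T, 0) for r in range(len(S) + 1) for T in combinations(S, r)}

    print(len(distinct_gcds([3, 5, 7, 11])))  # 16, i.e. 2**4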
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/132039', 'https://mathoverflow.net', 'https://mathoverflow.net/users/9714/']}
jdg_369245
stackexchange
llm_judgeable_groundtruth_similarity
23787
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: If I change a computer's IP address, it can take a long time for ISP's to stop caching the results. Is there a way to mitigate this, if I plan ahead? Now provide the response and nothing else.
If we KNOW we will be moving an IP address, we turn down the TTLs for that record. That tells the DNS servers on the internet how long to keep that IP in cache. When we move it over, the rest of the 'net catches up fairly quickly. Once the move is done, we move the TTLs back to their normal value. We did this a couple of weeks back when we needed to change our MX record. It uses the same steps as an IP change. First, we lowered the TTLs for the MX records from 1 day to 5 minutes. We then waited 2 days for the change to propagate. We updated the MX records on our DNS servers, and within 10 minutes every client on our network was using the new data. Looking at mail logs, it took about 2-3 hours for the rest of the internet to catch up. Once the change was made, we raised the TTLs back to their previous default.
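If you want to check what TTL resolvers currently see for a record while doing this, a small script helps. The sketch below is my own illustration, assuming the third-party dnspython package is installed (the exact resolver call differs slightly between dnspython 1.x and 2.x) and using a placeholder domain:

    import dns.resolver  # third-party "dnspython" package, assumed installed

    def current_ttl(name, rtype="MX"):
        # The TTL on the returned record set is how long resolvers may keep it cached.
        answer = dns.resolver.resolve(name, rtype)  # dnspython >= 2.0; use dns.resolver.query on 1.x
        return answer.rrset.ttl

    print(current_ttl("example.com"))  # e.g. 300 once the record has been turned down to 5 minutes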
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/23787', 'https://serverfault.com', 'https://serverfault.com/users/919/']}
jdg_369246
stackexchange
llm_judgeable_groundtruth_similarity
6637811
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am from Java background. I am going through the official Python tutorials but can't seem to find the information in relation to Python source file names and classes. In Java, file name is the same as main class name plus the .java extension. In Python what's the case? In the examples of official tutorials, they are writing multiple classes and there's no mention of the file name. I am kind of lost. I have a file name called test_pie.py. The content is- class ListTest: list1 = [2, 'a', 'ab', 'c', 'aa', 0] list2 = ['b', list1[-2:-5]] def PrintList(self): print list1 print list2 For list1 and list2: I get- Undefined variable: list1list Found at: test_pie Undefined variable: list2list Found at: test_pie Now provide the response and nothing else.
There's a file. Period. Whatever is contained in it is of no interest for imports, and the file name or location doesn't have any effect on the contained code (generally - it is accessible during execution, so some metaprogramming makes use of it, but code should be agnostic w.r.t. the actual value). The contents of a file are not restricted to a single class, and few people impose such a restriction on themselves. Python isn't exclusively an OO language; you can and should have free functions whenever it's sensible, and modules sit one level above classes in code organization - if several classes are closely related, they should probably go in one module. Your example code/problem is unrelated to this; it's a matter of scoping inside a given file. Classes do have their own scope, but you can't and shouldn't be using the class variables of the containing class in methods like this (directly) - it would make the code oblivious to a new value set in a subclass. Instead, you either use class methods (by the way, you should probably read http://dirtsimple.org/2004/12/python-is-not-java.html ) or make use of the fact that instances inherit all members of the class and just prefix them with self.
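For the concrete snippet in the question, a minimal corrected sketch of the "prefix it with self." advice could look like this (Python 3 syntax, adapted rather than taken from the original):

    class ListTest:
        # Class attributes may refer to each other as bare names *inside the class body*.
        list1 = [2, 'a', 'ab', 'c', 'aa', 0]
        list2 = ['b', 'c']

        def print_lists(self):
            # Inside a method, reach class attributes through self (or ListTest),
            # never as bare names.
            print(self.list1)
            print(self.list2)

    ListTest().print_lists()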
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6637811', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/837153/']}
jdg_369247
stackexchange
llm_judgeable_groundtruth_similarity
1265887
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am dynamically creating a hyperlink in the c# code behind file of ASP.NET. I need to call a JavaScript function on client click. how do i accomplish this? Now provide the response and nothing else.
Neater still, instead of the typical href="#" or href="javascript:void" or href="whatever" , I think this makes much more sense: var el = document.getElementById('foo');el.onclick = showFoo;function showFoo() { alert('I am foo!'); return false;}<a href="no-javascript.html" title="Get some foo!" id="foo">Show me some foo</a> If Javascript fails, there is some feedback. Furthermore, erratic behavior (page jumping in the case of href="#" , visiting the same page in the case of href="" ) is eliminated.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1265887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142122/']}
jdg_369248
stackexchange
llm_judgeable_groundtruth_similarity
46301473
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following byte array that I would like to obtain hex format in order to decrypt an encrypted string using aes-256-ecb. (PowerShell uses AES encryption if you specify key argument in ConvertFrom-SecureString function) In order to check this I verify by using openssl: echo 'mysecretdata' | openssl enc -d -aes-256-ecb -K 303534303438303439303939303438303938303937303435303530303530303937303537303435303439303439303130310a hex string is too longinvalid hex key value What am I missing? Now provide the response and nothing else.
You can use the X2 format string on each individual byte to get its hex representation, then use the -join operator to concatenate the strings: $bytes = 0,54,0,48,0,49,0,99,0,48,0,98,0,97,0,45,0,50,0,50,0,97,0,57,0,45,0,49,0,49,0,101; $hexString = ($bytes|ForEach-Object ToString X2) -join '' (If that's your actual key, you might want to refrain from ever using it again, now that it's public knowledge ;-) )
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46301473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5632722/']}
jdg_369249
stackexchange
llm_judgeable_groundtruth_similarity
15611568
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to get previous week and month from current date. So I found solution that can recalculate current date adding interval - dateByAddingTimeInterval And this params for it: [[NSDate date] dateByAddingTimeInterval: -604800.0] (for getting previous week) [[NSDate date] dateByAddingTimeInterval: -2629743.83] (for getting previous month) As I think for getting week this method works good without any problem, because each week has seven days and interval doesn't change. But for month we have a problem because each month has different number of days. Now provide the response and nothing else.
Using NSDateComponents it would be easy and accurate NSCalendar *calendar = [NSCalendar currentCalendar];NSDateComponents *comps = [NSDateComponents new];comps.month = -1;comps.day = -1;NSDate *date = [calendar dateByAddingComponents:comps toDate:[NSDate date] options:0];NSDateComponents *components = [calendar components:NSMonthCalendarUnit|NSDayCalendarUnit fromDate:date]; // Get necessary date componentsNSLog(@"Previous month: %d",[components month]);NSLog(@"Previous day : %d",[components day]);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15611568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587415/']}
jdg_369250
stackexchange
llm_judgeable_groundtruth_similarity
154461
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have had this question in my mind for a long time, I thought you guys might enlighten me easily. I am confused about some space photographs and claims like "this galaxy is 13 billions light years away from us.": how we can take the photograph of something that far, if it is 13 billion light years away wouldn't it take 26 billion light years to take those pictures? today this post led me to ask this question, at last: a space picture There certainly is something I don't know about photography or light years; if you could tell me the logic behind this, I would appreciate it. I am not a physicist or any science guy, so please tolerate my ignorance. Now provide the response and nothing else.
The error is probably in this statement if it is 13 billion light years away wouldn't it take 26 billion light years to take those pictures? I think you are imagining that cameras send out light to the objects, and when this light comes back records the light as an image. Not really. Cameras merely record the light they see from that area. So if that area is 13 billion light years away (not sure how credible source is) then all that means is that the light you are capturing today is the light that galaxy emitted 13 billion years ago. Imagine for instance Anna and Bob are playing catch with a ball. Anna throws the ball to Bob. Bob receives the ball, and says the ball came at 3:00pm sharp. But the ball was in the air for 1 minute (anna is a slow thrower). That means Anna threw the ball at 2:59, even if Bob recorded it at 3:00. In this scenario, Bob is acting much like a camera acts, by receiving information (in this case a ball, in a camera's case it would be light from galaxies) The reason that Hubble took photos for 4 months (this might be wrong, I'm no good with photography) is that the longer it receives the information, the more 'background' light that we don't want to capture can be removed. Hoped this makes sense. P.S. may have misunderstood the question. You say if it is 13 billion light years away wouldn't it take 26 billion light years to take those pictures? as if light years are a measure of time. A light year is a measure of distance, the distance light travels in a year in a vacuum.
{}
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/154461', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/68194/']}
jdg_369251
stackexchange
llm_judgeable_groundtruth_similarity
56350912
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building a Nuxt application. I've done some research but found no definitive solution. I've found a GitHub issue with something similar ( https://github.com/nuxt/nuxt.js/issues/3486 ) but wasn't able to find a definitive solution: It was compiling "normally", not taking more than 1 minute. I've just added around 300 lines of html to a Vue component. Suddenly went extremely low. There are no explicit errors, alerts or warning messages, only the performance went extremely low. How to track this performance decrease? So this is the nuxt.config.js file const pkg = require('./package')const webpack = require("webpack")module.exports = { mode: 'universal', debug: true, prettify: false, /* ** Headers of the page */ head: { title: pkg.name, meta: [ { charset: 'utf-8' }, { name: 'viewport', content: 'width=device-width, initial-scale=1' }, { hid: 'description', name: 'description', content: pkg.description } ], script: [ { src: "https://cdn.jsdelivr.net/npm/sweetalert2@8" }, ], link: [ { rel: 'icon', type: 'image/x-icon', href: '/favicon.ico' } ] }, /* ** Customize the progress-bar color */ loading: { color: '#fff' }, buildDir: '../functions/nuxt', build:{ publicPath: '/', vendor: ['axios','firebase', "jquery", 'popper', "bootstrap", 'bootbox'], extractCSS: true, babel: { presets: [ 'es2015', 'stage-0' ], plugins: [ [ "transform-runtime", { "polyfill":true, "regenerator":true }, "~/plugins/firebase.js", "~/plugins/bootboxPlugin.js" ], new webpack.ProvidePlugin({ jQuery: 'jquery', $: 'jquery', jquery: 'jquery' }) ] }, prettify: false }, /* ** Global CSS */ css: [ 'bootstrap/dist/css/bootstrap.css' ], /* ** Plugins to load before mounting the App */ plugins: [ ], /* ** Nuxt.js modules */ modules: [ // Doc: https://bootstrap-vue.js.org/docs/ 'bootstrap-vue/nuxt', '@nuxtjs/pwa', ], /* ** Build configuration */ build: { prettify: false, /* ** You can extend webpack config here */ extend(config, ctx) { config.devtool = ctx.isClient ? 'eval-source-map' : 'inline-source-map' prettify = false } }} I'm not sure where the prettify : false directive should go, so I've tried in many places, because I'm not sure where the vueLoader is happening. Also in the Nuxt documentation says Note: This config has been removed since Nuxt 2.0, please use build.loaders.vue instead. So this made me more confused. Where this build.loaders.vue is happening? Now provide the response and nothing else.
It's not Nuxt's fault; blame Prettier. Here is the issue that is causing this: https://github.com/prettier/prettier/issues/4784 Solutions: 1) Don't use a large nested template; split it into several components -> that's the best solution in terms of code quality 2) Set prettify: false in the loaders options https://nuxtjs.org/api/configuration-build/#loaders https://github.com/vuejs/component-compiler-utils/blob/master/lib/compileTemplate.ts#L173 example nuxt config: export default { build: { loaders: { vue: { prettify: false } } }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56350912', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10149636/']}
jdg_369252
stackexchange
llm_judgeable_groundtruth_similarity
5044477
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to ask a followup question to a previously posed question . I've got the code to create an NSURLRequest/Connection, run it and have the callback methods for authentication get called. Here's the specific code: - (BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace { return [protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust] || [protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodDefault];}-(void)connection:(NSURLConnection *)connectiondidReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge{ if ([challenge previousFailureCount] > 0) { [[challenge sender] cancelAuthenticationChallenge:challenge]; NSLog(@"Bad Username Or Password"); badUsernameAndPassword = YES; finished = YES; return; } if ([challenge.protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]) { if (appDelegate._allowInvalidCert) { // Go ahead...trust me! [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } else { TrustGenerator *tg = [[TrustGenerator alloc] init]; if ([tg getTrust:challenge.protectionSpace]) { // Go ahead...trust me! [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } else { [[challenge sender] cancelAuthenticationChallenge:challenge]; } } } else if ([[challenge protectionSpace] authenticationMethod] == NSURLAuthenticationMethodDefault) { NSURLCredential *newCredential = [NSURLCredential credentialWithUser:_username password:_password persistence:NSURLCredentialPersistenceNone]; [[challenge sender] useCredential:newCredential forAuthenticationChallenge:challenge]; }} What I'm running into is that "didReceiveAuthenticationChallenge" with "[challenge.protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]" is ALWAYS being called, even when the certificate on the server I'm attempting to connect to is trusted (doing testing with a Verisign cert). So what I'm seeing is my application is always prompting the end user to trust even when the website is trusted. Bad karma considering that's what's suppose to happen with a man in the middle attack, etc. What I'm really looking for is some code like this: if (appDelegate._allowInvalidCert) { // Go ahead...trust me! [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } else if(The OS trusts the cert on the server) { [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } else{... Now provide the response and nothing else.
So I spent a few days researching this. It looks like while the NSURLConnection API cannot determine if a certificate is trusted, there's a method in the Security Framework that handels that. So here's the code I came up with: -(void)connection:(NSURLConnection *)connectiondidReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge{ if ([challenge previousFailureCount] > 0) { [[challenge sender] cancelAuthenticationChallenge:challenge]; NSLog(@"Bad Username Or Password"); badUsernameAndPassword = YES; finished = YES; return; } if ([challenge.protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]) { SecTrustResultType result; //This takes the serverTrust object and checkes it against your keychain SecTrustEvaluate(challenge.protectionSpace.serverTrust, &result); if (appDelegate._allowInvalidCert) { [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } //When testing this against a trusted server I got kSecTrustResultUnspecified every time. But the other two match the description of a trusted server else if(result == kSecTrustResultProceed || result == kSecTrustResultConfirm || result == kSecTrustResultUnspecified){ [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } else { //Asks the user for trust TrustGenerator *tg = [[TrustGenerator alloc] init]; if ([tg getTrust:challenge.protectionSpace]) { //May need to add a method to add serverTrust to the keychain like Firefox's "Add Excpetion" [challenge.sender useCredential: [NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge]; } else { [[challenge sender] cancelAuthenticationChallenge:challenge]; } } } else if ([[challenge protectionSpace] authenticationMethod] == NSURLAuthenticationMethodDefault) { NSURLCredential *newCredential = [NSURLCredential credentialWithUser:_username password:_password persistence:NSURLCredentialPersistenceNone]; [[challenge sender] useCredential:newCredential forAuthenticationChallenge:challenge]; }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5044477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256828/']}
jdg_369253
stackexchange
llm_judgeable_groundtruth_similarity
16999361
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some string and I want to hash it with the SHA-256 hash function using C#. I want something like this: string hashString = sha256_hash("samplestring"); Is there something built into the framework to do this? Now provide the response and nothing else.
The implementation could be like that public static String sha256_hash(String value) { StringBuilder Sb = new StringBuilder(); using (SHA256 hash = SHA256Managed.Create()) { Encoding enc = Encoding.UTF8; Byte[] result = hash.ComputeHash(enc.GetBytes(value)); foreach (Byte b in result) Sb.Append(b.ToString("x2")); } return Sb.ToString();} Edit: Linq implementation is more concise , but, probably, less readable : public static String sha256_hash(String value) { using (SHA256 hash = SHA256Managed.Create()) { return String.Concat(hash .ComputeHash(Encoding.UTF8.GetBytes(value)) .Select(item => item.ToString("x2"))); }} Edit 2: .NET Core , .NET5, .NET6 ... public static String sha256_hash(string value){ StringBuilder Sb = new StringBuilder(); using (var hash = SHA256.Create()) { Encoding enc = Encoding.UTF8; byte[] result = hash.ComputeHash(enc.GetBytes(value)); foreach (byte b in result) Sb.Append(b.ToString("x2")); } return Sb.ToString();}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16999361', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1583946/']}
jdg_369254
stackexchange
llm_judgeable_groundtruth_similarity
34278474
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: React with babel. I have this confusion with imports and module.exports. I assume babel when converting the ES6 code to ES5 converts the imports and exports to require and module.exports respectively. If I export a function from one module and import the function in another module, the code executes fine. But if I export the function with module.exports and import using "import" the error is thrown at runtime saying it is not a function. I cooked up an example. // Tiger.jsfunction Tiger() { function roar(terrian){ console.log('Hey i am in ' + terrian + ' and i am roaing'); }; return roar;}module.exports = Tiger;// animal.jsimport { Tiger } from './animals';var animal = Tiger();animal("jungle"); I used babel with preset es2015 to transcompile it. This gives me the following error Uncaught TypeError: (0 , _animals.Tiger) is not a function But if I remove the module.exports = Tiger; And replace it with export { Tiger }; It works fine. What am I missing here?? EDIT: I am using browserify as the module bundler. Now provide the response and nothing else.
export { Tiger } would be equivalent to module.exports.Tiger = Tiger . Conversely, module.exports = Tiger would be equivalent to export default Tiger . So when you use module.exports = Tiger and then attempt import { Tiger } from './animals' you're effectively asking for Tiger.Tiger .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34278474', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3277003/']}
jdg_369255
stackexchange
llm_judgeable_groundtruth_similarity
8719635
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I often see examples of using the keyword this in jquery. Sometimes I see it used with the $ and parenthesis, other times without. And I thought I saw it used with a little of each. So, var id = this.attr('id'); var id = $(this).attr('id'); var id = $this.attr('id'); Are these all the same? Is there a preferred way? Is this a javascript thing and $(this) a jQuery thing? If so, where does $this fall? I know this is probably a total newbie question, but I haven't been able to get the simple, this , by itself, to work. I can only get $(this) to work. I'm not sure if I'm doing something wrong, or if I've been reading examples with typos. Now provide the response and nothing else.
this is a JavaScript thing. It refers to the "context" a function is running in. For most event handlers, it is the ("raw") DOM element that is listening to an event. In other situations it will mean other things; Googling " this in JavaScript" might be enlightening. I say it is the "raw" DOM element because jQuery is often used to wrap plain DOM elements in a jQuery wrapper, so you can use jQuery methods like attr instead of the usual ones ( getAttribute , setAttribute , etc.). This wrapping is accomplished with the $ function, and that's where you see $(this) . For example: this.getAttribute("href")/* or */ someElement.getAttribute("href") is the same as $(this).attr("href")/* or */ $(someElement).attr("href") $this or this$ is just a variable name. But, it is often conventional to do an assignment like var $this = $(this); The reason for this is to avoid continually invoking the $ function, which is somewhat expensive as it creates a new jQuery wrapper object every time. If you store the wrapped element in a variable, you gain slightly in efficiency. In rare cases, this might already be a jQuery wrapper. The case that comes up often for me is when writing jQuery plugins. In that case you can do things like this.attr("id") directly, without wrapping it up first, because it's already wrapped. In the usual cases (event handlers, $.each , etc.) the wrapper is necessary.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8719635', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/854331/']}
jdg_369256
stackexchange
llm_judgeable_groundtruth_similarity
10261265
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to show a .gif animation in a QLabel widget, alongside text. The following code won't work: self.status_txt = QtGui.QLabel('Loading... <img src="etc/loading.gif">') as the image won't animate. I tried achiving it by using a QMovie object: self.status_txt = QtGui.QLabel("Loading...")movie = QtGui.QMovie("etc/loading.gif")self.status_txt.setMovie(movie)movie.start() But then I can't put the animation and the text together.Is there a different solution besides using two different labels? Now provide the response and nothing else.
you can add a Layout to the label, and then add another Label with the text to that... self.status_txt = QtGui.QLabel()movie = QtGui.QMovie("etc/loading.gif")self.status_txt.setMovie(movie)movie.start()self.status_txt.setLayout(QtGui.QHBoxLayout())self.status_txt.layout().addWidget(QLabel('Loading...')) edit : it's possible if you use your own version of a QLabel and a QPainter to paint the text yourself: from PyQt4.QtCore import QSizefrom PyQt4.QtGui import QApplication, QLabel, QMovie, QPainter, QFontMetricsclass QTextMovieLabel(QLabel): def __init__(self, text, fileName): QLabel.__init__(self) self._text = text m = QMovie(fileName) m.start() self.setMovie(m) def setMovie(self, movie): QLabel.setMovie(self, movie) s=movie.currentImage().size() self._movieWidth = s.width() self._movieHeight = s.height() def paintEvent(self, evt): QLabel.paintEvent(self, evt) p = QPainter(self) p.setFont(self.font()) x = self._movieWidth + 6 y = (self.height() + p.fontMetrics().xHeight()) / 2 p.drawText(x, y, self._text) p.end() def sizeHint(self): fm = QFontMetrics(self.font()) return QSize(self._movieWidth + 6 + fm.width(self._text), self._movieHeight) def setText(self, text): self._text = textif __name__ == '__main__': import sys app = QApplication(sys.argv) l = QTextMovieLabel('Loading...', 'loading.gif') l.show() app.exec_()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10261265', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/288280/']}
jdg_369257
stackexchange
llm_judgeable_groundtruth_similarity
3697
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What does the "/16" mean here, in "192.168.0.0/16", for example? Now provide the response and nothing else.
A simple explanation: The /<number> is how a computer can quickly calculate what is part of its network and what is not. It represents the bit length of the subnet mask, as indicated above. The subnet mask is like masking when painting. You place a mask over what you DO NOT want to paint on. The subnet mask is a way to calculate the network portion of the address space and the host address space. The network address space is assigned to you, the host address space you define which device receives what address in the host space. A computer performs binary math of ANDing the IP address and the network mask. IP Address 10.10.15.10/16 Step 1)Translate the IP address 4 octets into binary: 00001010.00001010.00001111.00001010 Step 2)Translate the Subnet mask 4 octets into binary, which is easy using CIDR : 11111111.11111111.00000000.00000000 Step 3) Perform the ANDing operation on the two addresses. This will give you the Network Address for the subnet. ANDing rules are as follows: 1 AND 1 = 1 0 AND 1 = 0 0 AND 0 = 0 1 AND 0 = 0 00001010.00001010.00001111.00001010 AND 11111111.11111111.00000000.00000000 ——————————————————————————————————— 00001010.00001010.00000000.00000000 Step 4)Convert it back to decimal: 10.10.0.0 So now a computer knows that any address from 10.10.0.0 to 10.10.255.255 is part of its network and any other address is not. It's like a binary network gang thing, you are in or you are not.
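The same masking arithmetic is easy to reproduce with Python's standard-library ipaddress module, which is a handy way to double-check a /16 by hand (a small sketch I'm adding, not part of the original answer):

    import ipaddress

    # strict=False masks off the host bits for us, i.e. it performs the ANDing shown above
    net = ipaddress.ip_network("10.10.15.10/16", strict=False)
    print(net.network_address)                            # 10.10.0.0
    print(net.netmask)                                    # 255.255.0.0
    print(net.broadcast_address)                          # 10.10.255.255
    print(ipaddress.ip_address("10.10.200.7") in net)     # True  -- part of the network
    print(ipaddress.ip_address("10.11.0.1") in net)       # False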
{}
{'log_upvote_score': 9, 'links': ['https://networkengineering.stackexchange.com/questions/3697', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/3021/']}
jdg_369258
stackexchange
llm_judgeable_groundtruth_similarity
36477223
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a snippet of my code here. I'm fairly new to Javascript and I for the life of me can't figure this out. So what happens is the bottom part of the code pulls in data and put its into a table. What I want to do is when I click the button in the last column it takes the item.ReviewId and automatically populates a hidden input in a modal to match the item that the person is reviewing. Any help would be appreciated. When I click it it automatically populates with the last number on the list. For example if there are 3 items, 20,21,22 and I click the first one and it populates with 22. var success = function (result) { $.each(result.Items, function (index, item) { $('#reviewresults').dataTable().fnAddData([razorHelpers.formatDate(item.DateVisited), item.clientProfile.CompanyName, razorHelpers.formatCurrency(item.AmountSpent), item.ReviewId, '<button class="btn btn-blue" data-toggle="modal" data-target="#myModal" title="View" id="review">Review This Transaction <i class="fa fa-thumbs-up"></i></button>']); console.log(item.ReviewId) $("#review").on("click", function () { test(item.ReviewId) }); function test(number) { $("#reviewinput").val(number); } }); }; ClientOpenReviewService.ListReviewsByConsumer(razorStorage.ExtractSessionData('userId'), 0, 0).done(success); Now provide the response and nothing else.
You're trying to use multer 's old API. It changed awhile back, see the documentation for more information. In your particular case of uploading a single file, you would remove the app.use(multer({ ... })) line and instead use .single() and req.file like: var upload = multer({ dest: '/tmp/'});// File input field name is simply 'file'app.post('/file_upload', upload.single('file'), function(req, res) { var file = __dirname + '/' + req.file.filename; fs.rename(req.file.path, file, function(err) { if (err) { console.log(err); res.send(500); } else { res.json({ message: 'File uploaded successfully', filename: req.file.filename }); } });});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36477223', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6156730/']}
jdg_369259
stackexchange
llm_judgeable_groundtruth_similarity
1116678
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When should anonymous methods be used when defining a delegate and when should formally defined methods be used when defining a delegate ? Now provide the response and nothing else.
If you need to use the same logic in more than one place, it makes sense to use a separate method. If you only need to use the logic once and it's fairly short, it makes sense to use an anonymous function. If the delegate needs access to local variables in the method which is creating it, anonymous functions act as closures which can also be very handy. Additionally, an anonymous function can be useful even if it's reasonably long if it's used for something like parallelization with Parallel Extensions - part of the point of that is that you can take existing serial code and parallelise it "in place" to a large extent. You might also want to consider testability - if your delegate's code is sufficiently complicated that it warrants its own unit tests, exposing it as a method makes a lot of sense. (Unfortunately it would have to be either an internal method using InternalsVisibleTo or a public method, where often you'd normally want it to be private, but such is life.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1116678', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/135306/']}
jdg_369260
stackexchange
llm_judgeable_groundtruth_similarity
14542232
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to use single annotation text to annotate several data points with several arrows. I made a simple workaround: ax = plt.gca()ax.plot([1,2,3,4],[1,4,2,6])an1 = ax.annotate('Test', xy=(2,4), xycoords='data', xytext=(30,-80), textcoords='offset points', arrowprops=dict(arrowstyle="-|>", connectionstyle="arc3,rad=0.2", fc="w"))an2 = ax.annotate('Test', xy=(3,2), xycoords='data', xytext=(0,0), textcoords=an1, arrowprops=dict(arrowstyle="-|>", connectionstyle="arc3,rad=0.2", fc="w"))plt.show() Producing following result: But I don't really like this solution because it is... well, an ugly dirty hack. Besides that, it affects the appearance of annotation (mainly if using semi-transparent bboxes etc). So, if anyone got an actual solution or at least an idea how to implement it, please share. Now provide the response and nothing else.
I guess the proper solution will require too much effort - subclassing _AnnotateBase and adding support for multiple arrows all by yourself. But I managed to eliminate that issue with second annotate affecting visual appearance simply by adding alpha=0.0 . So the updated solution here if no one will provide anything better: def my_annotate(ax, s, xy_arr=[], *args, **kwargs): ans = [] an = ax.annotate(s, xy_arr[0], *args, **kwargs) ans.append(an) d = {} try: d['xycoords'] = kwargs['xycoords'] except KeyError: pass try: d['arrowprops'] = kwargs['arrowprops'] except KeyError: pass for xy in xy_arr[1:]: an = ax.annotate(s, xy, alpha=0.0, xytext=(0,0), textcoords=an, **d) ans.append(an) return ansax = plt.gca()ax.plot([1,2,3,4],[1,4,2,6])my_annotate(ax, 'Test', xy_arr=[(2,4), (3,2), (4,6)], xycoords='data', xytext=(30, -80), textcoords='offset points', bbox=dict(boxstyle='round,pad=0.2', fc='yellow', alpha=0.3), arrowprops=dict(arrowstyle="-|>", connectionstyle="arc3,rad=0.2", fc="w"))plt.show() Resulting picture:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14542232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/997994/']}
jdg_369261
stackexchange
llm_judgeable_groundtruth_similarity
38502749
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two front-end application on Angular. And I made some common library for them. Before I used git submodules, but I want to move to npm. I rewritten that library as node package, and installing it with npm from github repo. Then I want to pipe it through browserify and integrate with the rest of my Angular code. I am able to require('MyUtils') , but then I don't know how to get file of that module to pass to browserify. Is there some property like __file__ in python? Or is browserify able to take module instead of filename? Now provide the response and nothing else.
You need to call expire(…) and set(…) in a different order. The SET command removes any timeout that was previously applied: From the documentation at http://redis.io/commands/set : Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type. Any previous time to live associated with the key is discarded on successful SET operation. In your case you just need to switch the order of expire(…) and set(…) to set(…) and expire(…) . @Overridepublic void save(Integer fixtureId, Market market) { String key = buildKey(fixtureId, market.getId()); BoundValueOperations<String, Market> boundValueOperations = marketTemplate.boundValueOps(key); boundValueOperations.set(market); boundValueOperations.expire(eventsLifeTime, TimeUnit.MINUTES);} Besides that, you could improve the code by setting the value and expiry in one call. ValueOperations ( RedisOperations.opsForValue() ) provides a set method that sets the key and timeout with the signature void set(K key, V value, long timeout, TimeUnit unit);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38502749', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/816449/']}
jdg_369262
stackexchange
llm_judgeable_groundtruth_similarity
27099615
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating a PHP based web application using Amazon's S3 and glacier services. Now I want to give my site users a feature that they can choose any file and make it archive (means move file from S3 to Glacier) and unarchive (means move file from Glacier to S3). I have done some research and didn't find any possible way using Amazon's API. PROBLEM How can I move files between S3 and glacier using API? Now provide the response and nothing else.
You could use the Glacier API to upload a file to a Glacier vault, but I don't recommend it. The previous version of our backup app did that. When you upload a file it gets a randomly-assigned name. You can add put your filename in the metadata of the file, but if you want a list of what's in the Glacier vault you have to query and then wait 3-5 hours for the list. Lifecycle policies are the other way to use Glacier. The current version of Arq uses them because each object still looks like an S3 object (no random object names, no delays in getting object lists), but the object contents are in Glacier storage. The only difference is that getting the object contents is a 2-step process: you have to make an API call to request that the object be made downloadable ; when it's ready, you can download it. Also there's a "peak hourly request fee" that comes into play if you request objects be made downloadable at too fast a rate. Amazon Glacier pricing is complex. Once an object is "Glacier storage class" there's no way to change it back to "Standard storage class". You have to make a copy of the object that's "Standard storage class" and delete the Glacier object. So maybe a simple solution to your problem is: Store the data in 2 "folders" in S3, "standard" and "glacier". Set a lifecycle policy to push all objects in the "glacier" folder to Glacier data storage ASAP. When you want to move an object from standard to glacier, copy it to the glacier folder and delete the object in the standard folder (there's no "move" API). When you want to move an object from glacier to standard, do a POST request to restore it; when it's restored, copy it to the standard folder and delete it from the glacier folder.
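The question is about a PHP application, but the folder-based workflow in the four steps above is SDK-agnostic. Purely as an illustration, here is a rough boto3 (Python) sketch of it, with a hypothetical bucket name and prefixes; the PHP SDK has equivalent calls:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-example-bucket"   # hypothetical; a lifecycle rule on the "glacier/" prefix does the transition

    def archive(key):
        # "Move" standard/<key> to glacier/<key> (S3 has no move API, so copy + delete).
        s3.copy_object(Bucket=BUCKET, Key="glacier/" + key,
                       CopySource={"Bucket": BUCKET, "Key": "standard/" + key})
        s3.delete_object(Bucket=BUCKET, Key="standard/" + key)

    def request_unarchive(key, days=7):
        # Step 1 of restoring a Glacier-class object: ask S3 to make it downloadable.
        s3.restore_object(Bucket=BUCKET, Key="glacier/" + key,
                          RestoreRequest={"Days": days})
        # Once the restore finishes (hours later), copy it back to standard/ and delete glacier/<key>.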
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27099615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1058406/']}
jdg_369263
stackexchange
llm_judgeable_groundtruth_similarity
14343844
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So, my question has been asked by someone else in it's Java form here: Java - Create a new String instance with specified length and filled with specific character. Best solution? . . . but I'm looking for its JavaScript equivalent. Basically, I'm wanting to dynamically fill text fields with "#" characters, based on the "maxlength" attribute of each field. So, if an input has has maxlength="3" , then the field would be filled with "###". Ideally there would be something like the Java StringUtils.repeat("#", 10); , but, so far, the best option that I can think of is to loop through and append the "#" characters, one at a time, until the max length is reached. I can't shake the feeling that there is a more efficient way to do it than that. Any ideas? FYI - I can't simply set a default value in the input, because the "#" characters need to clear on focus, and, if the user didn't enter a value, will need to be "refilled" on blur. It's the "refill" step that I'm concerned with Now provide the response and nothing else.
The best way to do this (that I've seen) is var str = new Array(len + 1).join( character ); That creates an array with the given length, and then joins it with the given string to repeat. The .join() function honors the array length regardless of whether the elements have values assigned, and undefined values are rendered as empty strings. You have to add 1 to the desired length because the separator string goes between the array elements.
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/14343844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1281907/']}
jdg_369264
stackexchange
llm_judgeable_groundtruth_similarity
23382
Below is a question asked on the forum hermeneutics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: "When Jesus came into the coasts of Caesarea Philippi, he asked his disciples, saying, Whom do men say that [ I ] the Son of man [ am ] is ? And they said, Some say [ that thou art ] John the Baptist: some, Elias; and others, Jeremias, or one of the prophets. He saith unto them, But whom say ye that I am? And Simon Peter answered and said, Thou art the Christ, the Son of the living God. Nestle GNT: Ἐλθὼν δὲ ὁ Ἰησοῦς εἰς τὰ μέρη Καισαρίας τῆς Φιλίππου ἠρώτα τοὺς μαθητὰς αὐτοῦ λέγων Τίνα λέγουσιν οἱ ἄνθρωποι εἶναι τὸν Υἱὸν τοῦ ἀνθρώπου; οἱ δὲ εἶπαν Οἱ μὲν Ἰωάνην τὸν Βαπτιστήν, ἄλλοι δὲ Ἡλείαν, ἕτεροι δὲ Ἱερεμίαν ἢ ἕνα τῶν προφητῶν. λέγει αὐτοῖς Ὑμεῖς δὲ τίνα με λέγετε εἶναι; ἀποκριθεὶς δὲ Σίμων Πέτρος εἶπεν Σὺ εἶ ὁ Χριστὸς ὁ Υἱὸς τοῦ Θεοῦ τοῦ ζῶντος. I'm one of the few that still believe Matthew was the first gospel written, though originally in the Hebrew language. However, even if Mark 8:27 and Luke 9:18 are also valid, I would say Yeshua just asked all three questions. I have two questions... If we read "But whom say ye that I am?" as "But whom say ye that I am?" instead of "But whom say ye that I am?" does this change the question? And... Who is the son of man in Matthew 16:13? Thank you. Now provide the response and nothing else.
OP asks: If we read "But whom say ye that I am?" as "But whom say ye that I am?" instead of "But whom say ye that I am?" does this change the question? Perhaps, but this is not a valid reading of the Greek (For those who missed it as I did the first three times through, the difference is in the bolding of " I " in the OP's proposed translation rather than the bolded " ye ", the traditional understanding.) ὑμεῖς δὲ τίνα με λέγετε εἶναι; hymeis de tina me legete einai? But you [pl], whom do you say [pl] that I am? The fronting of the personal pronoun ( hymeis = you, plural) at the beginning of the clause, its redundant presence as the subject of a clause with a verb already marked for person/number (2nd, plural), and its slot alone before the postpositive de all serve as markers that the pronoun "you", rather than "I", is the focus of the question. (Since the OP apparently prefers Matthew in Hebrew, hymeis de corresponds nicely to the disjunctive waw + "nominative absolute" personal pronoun – here, hypothetically, וְאַתֶּ֖ם ( wᵉʾattem ) – also "focus-marking" the pronoun.) Who is the son of man in Matthew 16:13? The son of man is Jesus. Mark 8:27 and Luke 9:18 , mentioned by the OP, are parallel passages that use "I" rather than "son of man"; this title is also frequently used by Jesus of himself throughout Matthew. Verse 15 is then a repetition of the question is v. 13 . Hagner sums up the relationship between these two in his comment on v. 15:* Jesus repeats the question (this time in verbatim agreement with Mark), now directing it to the disciples (note the emphatic ὑμεῖς, "you yourselves"). The first-person pronoun με stands in place of "Son of Man" in v. 13. The question is asked not so much for information but to elicit from the disciples an explicit confession of his messianic identity. *Donald A. Hagner, Matthew 14–28 , (WBC: Zondervan, 1995), 467-468.
{}
{'log_upvote_score': 4, 'links': ['https://hermeneutics.stackexchange.com/questions/23382', 'https://hermeneutics.stackexchange.com', 'https://hermeneutics.stackexchange.com/users/15608/']}
jdg_369265
stackexchange
llm_judgeable_groundtruth_similarity
9711917
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the correct jquery syntax for a getElementsByName call? Here is my javascript code: var test = document.getElementsByName(tableName)[0]; using this is returning a different value: var test = $("[name=tableName]"); Thanks in advance Now provide the response and nothing else.
Use quotes around the attribute selector: $('[name="somenamehere"]'); If you need to use a variable within a selector, you need to use string concatenation to get the value of the variable: $('[name="' + tableName + '"]'); Typically one should avoid using the [name] attribute in favor of the [id] attribute, because selection would be simpler as: $('#someidhere'); or $('#' + tableID);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9711917', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1258430/']}
jdg_369266
stackexchange
llm_judgeable_groundtruth_similarity
24154384
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I setup a new Gitlab on CentOs on /opt/gitlab-6.9.2-0/apps/gitlab/ and created a new repository under continuous-delivery group. The full path is /opt/gitlab-6.9.2-0/apps/gitlab/gitlab-satellites/continuous-delivery/cd-test . There is only one file under this path which is README.txt. What I try to achieve is to create a new file when somebody pushes changes to the server. Below are what I have done on the server: Create post-update and update files under .git/hooks/' each file creates a new file using echo "text" >> file_name` chmod them to 775. When I push changes from my local to the server, there is no file being created. So, I would like to know what I have to do to fix this problem. Update 1 I added post-receive and post-update to repositories path as VonC suggested [root@git-cd hooks]# pwd/opt/gitlab-6.9.2-0/apps/gitlab/repositories/continuous-delivery/cd-test.git/hooks[root@git-cd hooks]# lltotal 48-rwxrwxr-x. 1 git git 452 Jun 10 06:01 applypatch-msg.sample-rwxrwxr-x. 1 git git 896 Jun 10 06:01 commit-msg.sample-rwxrwxr-x. 1 git git 44 Jun 11 00:37 post-receive-rwxrwxr-x. 1 git git 41 Jun 11 00:38 post-update-rwxrwxr-x. 1 git git 189 Jun 10 06:01 post-update.sample-rwxrwxr-x. 1 git git 398 Jun 10 06:01 pre-applypatch.sample-rwxrwxr-x. 1 git git 1642 Jun 10 06:01 pre-commit.sample-rwxrwxr-x. 1 git git 1281 Jun 10 06:01 prepare-commit-msg.sample-rwxrwxr-x. 1 git git 1352 Jun 10 06:01 pre-push.sample-rwxrwxr-x. 1 git git 4972 Jun 10 06:01 pre-rebase.samplelrwxrwxrwx. 1 git git 57 Jun 10 06:01 update -> /opt/gitlab-6.9.2-0/apps/gitlab/gitlab-shell/hooks/update-rwxrwxr-x. 1 git git 3611 Jun 10 06:01 update.sample Both file contains a script that adds a new line to an existing file, "post-receive-2" >> /var/log/hooks_test.log . then pushed changes from my local machine to the server. But it still doesn't append the text. Update 2 Script in post-receive was wrong, it didn't have echo. After I added echo ( echo "post-receive-2" >> /var/log/hooks_test.log then it works as expected! Now provide the response and nothing else.
That would be because those satellite repos aren't the ones you push to, so their hooks aren't triggered when you would expect (i.e., not when someone pushes to the GitLab server). PR 6185 introduced the architecture overview documentation: /home/git/gitlab-satellites - checked out repositories for merge requests and file editing from web UI. This can be treated as a temporary files directory. The satellite repository is used by the web interface for editing repositories and the wiki, which is also a git repository. You should add your hooks to the bare repos under ~git/repositories . Or (update Q4 2014, from GitLab 7.5+, Nov 2014), you can use custom hooks (instead of webhooks ), as mentioned below by Doka. Custom git hooks must be configured on the filesystem of the GitLab server. Only GitLab server administrators will be able to complete these tasks. Please explore webhooks as an option if you do not have filesystem access. On the GitLab server, navigate to the project's repository directory. For a manual install the path is usually /home/git/repositories/<group>/<project>.git . For Omnibus installs the path is usually /var/opt/gitlab/git-data/repositories/<group>/<project>.git . Create a new directory in this location called custom_hooks . Inside the new custom_hooks directory, create a file with a name matching the hook type. For a pre-receive hook the file name should be pre-receive with no extension. Make the hook file executable and make sure it's owned by git .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24154384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/53261/']}
jdg_369267
stackexchange
llm_judgeable_groundtruth_similarity
8679499
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to know is it possible to show only specific region on map not the full world map using Map Kit.Like if i want to show Asia map in my application then map kit hides remaining part of the map. Now provide the response and nothing else.
To handle the "map kit hides remaining part of the map" requirement, one thing you can do is create a black polygon overlay that covers the whole world with a cutout over Asia (or wherever you like). For example, where you initialize the map (eg. in viewDidLoad): CLLocationCoordinate2D asiaCoords[4] = { {55,60}, {55,150}, {0,150}, {0,60} }; //change or add coordinates (and update count below) as needed self.asiaOverlay = [MKPolygon polygonWithCoordinates:asiaCoords count:4];CLLocationCoordinate2D worldCoords[4] = { {90,-180}, {90,180}, {-90,180}, {-90,-180} };MKPolygon *worldOverlay = [MKPolygon polygonWithCoordinates:worldCoords count:4 interiorPolygons:[NSArray arrayWithObject:asiaOverlay]]; //the array can have more than one "cutout" if needed[myMapView addOverlay:worldOverlay]; and implement the viewForOverlay delegate method: -(MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id<MKOverlay>)overlay{ if ([overlay isKindOfClass:[MKPolygon class]]) { MKPolygonView *pv = [[[MKPolygonView alloc] initWithPolygon:overlay] autorelease]; pv.fillColor = [UIColor blackColor]; pv.alpha = 1.0; return pv; } return nil;} This looks like this: If you also want to restrict the user from scrolling beyond Asia or zooming too far out, then you'll need to do that manually as well. One possible way is described in Restrict MKMapView scrolling . Replace theOverlay in that answer with asiaOverlay .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8679499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/644149/']}
jdg_369268
stackexchange
llm_judgeable_groundtruth_similarity
15756075
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In a recent discussion about how to optimize some code, I was told that breaking code up into lots of small methods can significantly increase performance, because the JIT compiler doesn't like to optimize large methods. I wasn't sure about this since it seems that the JIT compiler should itself be able to identify self-contained segments of code, irrespective of whether they are in their own method or not. Can anyone confirm or refute this claim? Now provide the response and nothing else.
The Hotspot JIT only inlines methods that are less than a certain (configurable) size. So using smaller methods allows more inlining, which is good. See the various inlining options on this page . EDIT To elaborate a little: if a method is small it will get inlined so there is little chance to get penalised for splitting the code in small methods. in some instances, splitting methods may result in more inlining. Example (full code to have the same line numbers if you try it) package javaapplication27;public class TestInline { private int count = 0; public static void main(String[] args) throws Exception { TestInline t = new TestInline(); int sum = 0; for (int i = 0; i < 1000000; i++) { sum += t.m(); } System.out.println(sum); } public int m() { int i = count; if (i % 10 == 0) { i += 1; } else if (i % 10 == 1) { i += 2; } else if (i % 10 == 2) { i += 3; } i += count; i *= count; i++; return i; }} When running this code with the following JVM flags: -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation -XX:FreqInlineSize=50 -XX:MaxInlineSize=50 -XX:+PrintInlining (yes I have used values that prove my case: m is too big but both the refactored m and m2 are below the threshold - with other values you might get a different output). You will see that m() and main() get compiled, but m() does not get inlined: 56 1 javaapplication27.TestInline::m (62 bytes) 57 1 % javaapplication27.TestInline::main @ 12 (53 bytes) @ 20 javaapplication27.TestInline::m (62 bytes) too big You can also inspect the generated assembly to confirm that m is not inlined (I used these JVM flags: -XX:+PrintAssembly -XX:PrintAssemblyOptions=intel ) - it will look like this: 0x0000000002780624: int3 ;*invokevirtual m ; - javaapplication27.TestInline::main@20 (line 10) If you refactor the code like this (I have extracted the if/else in a separate method): public int m() { int i = count; i = m2(i); i += count; i *= count; i++; return i;}public int m2(int i) { if (i % 10 == 0) { i += 1; } else if (i % 10 == 1) { i += 2; } else if (i % 10 == 2) { i += 3; } return i;} You will see the following compilation actions: 60 1 javaapplication27.TestInline::m (30 bytes) 60 2 javaapplication27.TestInline::m2 (40 bytes) @ 7 javaapplication27.TestInline::m2 (40 bytes) inline (hot) 63 1 % javaapplication27.TestInline::main @ 12 (53 bytes) @ 20 javaapplication27.TestInline::m (30 bytes) inline (hot) @ 7 javaapplication27.TestInline::m2 (40 bytes) inline (hot) So m2 gets inlined into m , which you would expect so we are back to the original scenario. But when main gets compiled, it actually inlines the whole thing. At the assembly level, it means you won't find any invokevirtual instructions any more. You will find lines like this: 0x00000000026d0121: add ecx,edi ;*iinc ; - javaapplication27.TestInline::m2@7 (line 33) ; - javaapplication27.TestInline::m@7 (line 24) ; - javaapplication27.TestInline::main@20 (line 10) where basically common instructions are "mutualised". 
Conclusion I am not saying that this example is representative, but it seems to prove a few points: (i) using smaller methods improves the readability of your code; (ii) smaller methods will generally be inlined, so you will most likely not pay the cost of the extra method call (it will be performance neutral); (iii) using smaller methods might improve inlining globally in some circumstances, as shown by the example above. And finally: if a portion of your code is so performance-critical that these considerations matter, you should examine the JIT output to fine-tune your code and, importantly, profile before and after.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15756075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/16050/']}
jdg_369269
stackexchange
llm_judgeable_groundtruth_similarity
27793761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I discovered that a single HttpClient could be shared by multiple requests. If shared, and the requests are to the same destination, multiple requests could reuse the connections. WebRequest needs to recreate the connection for each request. I also looked up some documentation on other ways to use HttpClient in examples. The following article summarizes the high-speed NTLM-authenticated connection sharing: HttpWebRequest.UnsafeAuthenticatedConnectionSharing Possible implementations that I tried out are shown below A) private WebRequestHandler GetWebRequestHandler(){ CredentialCache credentialCache = new CredentialCache(); credentialCache.Add(ResourceUriCanBeAnyUri, "NTLM", CredentialCache.DefaultNetworkCredentials); WebRequestHandler handler = new WebRequestHandler { UnsafeAuthenticatedConnectionSharing = true, Credentials = credentialCache }; return handler;}using (HttpClient client = new HttpClient(GetWebRequestHandler(), false)){} B) using (HttpClient client = new HttpClient){} C) HttpWebRequest req = (HttpWebRequest)WebRequest.Create("some uri string") I would appreciate any help in making me understand which approach I should take so as to achieve max performance, minimizing connections and making sure security is not impacted. Now provide the response and nothing else.
If you use either of them with async, performance should be good: the call will not block resources while waiting for the response, so you will get good throughput. HttpClient is preferred over HttpWebRequest because async methods are available out of the box and you do not have to worry about writing begin/end methods. Basically, when you make an async call (using either class), it does not block resources while waiting for the response, and other requests can use those resources to make further calls. Another thing to keep in mind is that you should not wrap HttpClient in a 'using' block; keeping a single instance around lets the same resources be reused again and again for other web requests. See the following thread for more information: Do HttpClient and HttpClientHandler have to be disposed?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27793761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1650470/']}
jdg_369270
stackexchange
llm_judgeable_groundtruth_similarity
27714969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to create public and private RSA keys for a client/server application, and I'm using the JSch library to do so. I've been generating 4096-bit keys up until now, as I'd like to have the best security possible. However, this takes 3~5 minutes, whereas generating a 2048-bit key takes something to the tune of 10 seconds. Have an sscce: import com.jcraft.jsch.JSch;import com.jcraft.jsch.JSchException;import com.jcraft.jsch.KeyPair;public class KeyGenerator { public static void main(String[] args) { JSch jsch = new JSch(); System.out.println("Starting..."); try { KeyPair keyPair = KeyPair.genKeyPair(jsch, KeyPair.RSA, 4096); } catch (JSchException e) { e.printStackTrace(); } System.out.println("Done."); }} Would this huge difference in generation time be expected? I'm not super clear on how RSA keys are generated (hence using a library) but I suppose the time required might be exponential? It just seems...too exponential. Here's the JSch API (since the library itself and the website it comes from have next to no documentation). Update: I did some profiling. Here's a chart of the keygen times, starting at 512 bits and going up to 4096, with 30 samples per key size. And here's a similar chart with the 4096-bit trials excluded (same dataset): These look pretty similar, which denotes a fairly smooth exponential increase in time. I guess I'm just impatient! Now provide the response and nothing else.
Generating an RSA key requires finding two large, random prime numbers that satisfy certain criteria. Finding such primes is essentially a matter of picking random numbers and then checking whether they are prime by performing certain tests. The Prime Number Theorem tells us that as prime numbers get bigger, they also get rarer, so you have to generate more random numbers in order to find one that's prime. The checking to determine whether a number is prime also takes longer for bigger numbers. All of the above factors contribute to the increased time it takes to generate larger keys; that aside, however, it sounds like this library just isn't particularly fast. Using OpenSSL on a reasonably modern PC I can generate a 2048-bit key in ~1 second and a 4096-bit key in <10 seconds, so your times of 10 secs and 3-5 mins seem excessive. If performance is an issue, I'd suggest trying a different library, with the understanding that any library is going to be slower to generate big keys than small ones!
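If generation time is the main concern and you are not tied to JSch's own generator, one option worth benchmarking is the JDK's built-in KeyPairGenerator. The sketch below is only a timing experiment (whether the resulting keys can be fed back into your JSch workflow is an assumption you would need to check separately); on typical desktop hardware it produces a 4096-bit key in seconds, and timings still vary run to run because the prime search is randomized.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

public class RsaKeygenTiming {
    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Time RSA key generation for a few modulus sizes; expect a steep
        // (but not pathological) increase as the key size grows.
        for (int bits : new int[] {1024, 2048, 4096}) {
            KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
            generator.initialize(bits);
            long start = System.nanoTime();
            KeyPair pair = generator.generateKeyPair();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(bits + "-bit " + pair.getPublic().getAlgorithm()
                    + " key generated in " + elapsedMs + " ms");
        }
    }
}
```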
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27714969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3264657/']}
jdg_369271
stackexchange
llm_judgeable_groundtruth_similarity
28953805
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i am very very new to Java and i would like to know how can i compare 2 integers? I know == gets the job done.. but what about equals? Can this compare 2 integers? (when i say integers i mean "int" not "Integer").My code is: import java.lang.*;import java.util.Scanner;//i read 2 integers the first_int and second_int//Code aboveif(first_int.equals(second_int)){//do smth}//Other Code but for some reason this does not work.. i mean the Netbeans gives me an error: "int cannot be dereferenced" Why? Now provide the response and nothing else.
int is a primitive. You can use the wrapper Integer like Integer first_int = 1;Integer second_int = 1;if(first_int.equals(second_int)){ // <-- Integer is a wrapper. or you can compare by value (since it is a primitive type) like int first_int = 1;int second_int = 1;if(first_int == second_int){ // <-- int is a primitive. JLS-4.1. The Kinds of Types and Values says (in part) There are two kinds of types in the Java programming language: primitive types ( §4.2 ) and reference types ( §4.3 ). There are, correspondingly, two kinds of data values that can be stored in variables, passed as arguments, returned by methods, and operated on: primitive values ( §4.2 ) and reference values ( §4.3 ).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28953805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4316269/']}
jdg_369272
stackexchange
llm_judgeable_groundtruth_similarity
1399406
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $F$ is a finite field of order $q$. What is the size of $\operatorname{GL}_n(F)$ ? I am reading Dummit and Foote "Abstract Algebra". The following formula is given: $(q^n - 1)(q^n - q)\cdots(q^n - q^{n-1})$. The case for $n = 1$ is trivial. I understand that for $n = 2$ the first row of the matrix can be any ordered pair of field elements except for $0,0$. and the second row can be any ordered pair of field elements that is not a multiple of the first row. So for $n = 2$ there are $(q^n - 1)(q^n - q)$ invertible matrices. For $n\geq 3$, I cannot seem to understand why the formula works. I have looked at Sloane's OEIS A002884. I have also constructed and stared at a list of all $168$ $3\times 3$ invertible matrices over $GF(2)$. I would most appreciate a concrete and detailed explanation of how say $(2^3 - 1)(2^3 - 2)(2^3 - 2^2)$ counts these $168$ matrices. Now provide the response and nothing else.
In order for an $n \times n$ matrix to be invertible, we need the rows to be linearly independent. As you note, we have $q^n - 1$ choices for the first row; now, there are $q$ vectors in the span of the first row, so we have $q^n - q$ choices for the second row. Now, let $v_1, v_2$ be the first two rows. Then the set of vectors in the span of $v_1, v_2$ is of the form $\{c_1 v_1 + c_2 v_2 | c_1,c_2 \in F\}$. This set is of size $q^2$, as we have $q$ choices for $c_1$ and $q$ choices for $c_2$. Thus, we have $q^n - q^2$ choices for the third row. Continuing this gives the desired formula.
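To spell out the concrete case the question asks about ($n = 3$, $q = 2$): the first row can be any of the $2^3 - 1 = 7$ nonzero vectors; the span of the first row contains $2$ vectors (the zero vector and the row itself), leaving $2^3 - 2 = 6$ choices for the second row; and the span of the first two rows contains $2^2 = 4$ vectors, leaving $2^3 - 2^2 = 4$ choices for the third row. Multiplying, $$|\operatorname{GL}_3(\mathbb{F}_2)| = (2^3-1)(2^3-2)(2^3-2^2) = 7\cdot 6\cdot 4 = 168,$$ which is exactly the count of $168$ invertible $3\times 3$ matrices over $\mathrm{GF}(2)$ mentioned in the question.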
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1399406', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/139257/']}
jdg_369273
stackexchange
llm_judgeable_groundtruth_similarity
4868
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Source There seems to be (anecdotal) evidence that putting your hard drive in the freezer can make it readable again , at least for a short time: Lifehacker - Save a Failed Hard Drive in Your Freezer Geeks are Sexy - Freeze your hard drive to recover data: Myth or reality? Data Recovery Pros - Freezing Method I guess that by doing this, some metal parts in the HD could contract, putting back in place defective parts, and making everything work again for a few minutes. [ Source ] But data recovery expert Scott Holewinski advises against it in this video (YouTube), saying ... putting your hard drive into the freezer is the WORST thing you could possibly do in your attempt to recover your data as it can actually render your HD completely unrecoverable (due to water condensation) But people who say it does work usually recommend wrapping the HD in a sealed bag to avoid condensation (see picture above). My question: Is the method of freezing your HD for data recovery viable? Will sealing the HD in a ziploc bag actually help avoid the condensation problem? (AFAIK there is air inside the HD) Now provide the response and nothing else.
I speak as a professional who has worked as a Data Recovery specialist for a firm that specialised in this. There are three common types of hard drive failure: Control Board failure : This is where there is a short or surge that damages the controller board of your hard drive. The symptoms of this are the drive not being found by the BIOS, the drive not "spinning up", sometimes an odor of ozone (if the short is active). Freezing will not fix this failure. Replacing the board usually allows for full recovery of the data. The difficulty is finding a board that works with your exact model of hard drive. If you have one, it's very simple to replace. Motor Seized : This is the motor that causes the hard drive to spin. It was fairly common in the early hard drives. Parts overheat and expand, causing them to wedge themselves in a position that doesn't allow the drive to spin freely. Often you could put these in the freezer and it would cause those parts to contract and release, allowing the drive to work again. The miniaturization of the motors has reduced the frequency of this failure, and made it so that the damage from it is often irreversible. The data is still on the platter and can be recovered. Symptoms: the drive is recognized by the BIOS but does not spin up. Sometimes you can hear the motor trying to move, but it is a constant whine (older drives only). Data recovery places do not want you freezing these for several reasons: it is an easy fix if you know how to fix it and have the time, so they can charge a lot of money and recover all of your data (win-win); and if it is not a motor failure but some other mechanical failure, you can cause data corruption (see below). Head Crash : This is the noticeable kerthunk kerthunk kerthunk that anyone who has ever heard it remembers. The drive seems to spin up, thunks 3 or 4 times, then spins down - rinse & repeat. Sometimes it will even boot up. This thunk is the drive head making contact with the platter and being propelled away (momentum); every time the head touches the platter it scrambles some of the data on it. The more often the head crashes, the more data is destroyed and the less likely you are to recover what you want. No amount of freezing is going to help here . There is physical damage to the parts that are responsible for reading the data. In order to recover data from this we have to replace the drive heads in a positive flowbench (a poor man's clean room) and hope the disk damage does not cause them to crash again. These are generally a bit more interchangeable. Evidence: http://www.dataclinic.co.uk/hard-disk-failures.htm
{}
{'log_upvote_score': 5, 'links': ['https://skeptics.stackexchange.com/questions/4868', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/454/']}
jdg_369274
stackexchange
llm_judgeable_groundtruth_similarity
25064347
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a reporting application for my Django powered website. I want to run several reports and have each report generate a .csv file in memory that can be downloaded in batch as a .zip. I would like to do this without storing any files to disk. So far, to generate a single .csv file, I am following the common operation: mem_file = StringIO.StringIO()writer = csv.writer(mem_file)writer.writerow(["My content", my_value])mem_file.seek(0)response = HttpResponse(mem_file, content_type='text/csv')response['Content-Disposition'] = 'attachment; filename=my_file.csv' This works fine, but only for a single, unzipped .csv. If I had, for example, a list of .csv files created with a StringIO stream: firstFile = StringIO.StringIO()# write some data to the filesecondFile = StringIO.StringIO()# write some data to the filethirdFile = StringIO.StringIO()# write some data to the filemyFiles = [firstFile, secondFile, thirdFile] How could I return a compressed file that contains all objects in myFiles and can be properly unzipped to reveal three .csv files? Now provide the response and nothing else.
zipfile is a standard library module that does exactly what you're looking for. For your use-case, the meat and potatoes is a method called "writestr" that takes a name of a file and the data contained within it that you'd like to zip. In the code below, I've used a sequential naming scheme for the files when they're unzipped, but this can be switched to whatever you'd like. import zipfileimport StringIOzipped_file = StringIO.StringIO()with zipfile.ZipFile(zipped_file, 'w') as zip: for i, file in enumerate(files): file.seek(0) zip.writestr("{}.csv".format(i), file.read())zipped_file.seek(0) If you want to future-proof your code (hint hint Python 3 hint hint), you might want to switch over to using io.BytesIO instead of StringIO, since Python 3 is all about the bytes. Another bonus is that explicit seeks are not necessary with io.BytesIO before reads (I haven't tested this behavior with Django's HttpResponse, so I've left that final seek in there just in case). import ioimport zipfilezipped_file = io.BytesIO()with zipfile.ZipFile(zipped_file, 'w') as f: for i, file in enumerate(files): f.writestr("{}.csv".format(i), file.getvalue())zipped_file.seek(0)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25064347', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3768332/']}
jdg_369275
stackexchange
llm_judgeable_groundtruth_similarity
4106711
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I implement a file's struct in my program but for some arrays in the struct I don't know the size. The size of the array is stored in another variable but it's unknown before the struct is filled in. struct Vertex { float x; float y; float z;};struct myFile { ulong nVertices; Vertex vertices[nVertices];}; That gives an error: "error C2065: 'nVertices' : undeclared identifier". Now provide the response and nothing else.
You should store a pointer in your struct: Vertex *vertices; Then allocate the memory at runtime: myFile f;f.vertices = malloc(nVertices * sizeof(Vertex));if (f.vertices == 0) handle_out_of_memory();f.nVertices = nVertices; Remember to free the memory when done: free(f.vertices);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4106711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/192077/']}
jdg_369276
stackexchange
llm_judgeable_groundtruth_similarity
382849
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: During a course I took on General Relativity, the professor started with an introduction on differential geometry. Vectors were properly motivated: he said that since the differential manifold doesn't have distances it didn't make sense to define vectors as the displacement between two points; we had to use something infinitesimal instead. Then, he gave several advantages for using derivatives as vectors instead of the classic euclidean two-point arrows. However, when he starting explaining 1-forms... he just said what they were and move on. I don't really understand why we need 1-forms. Also, I don't know if 1-forms are also a generalization of an euclidean concept (such as vectors). I've read many questions about 1-forms but none of them asked about clear motivation for introducing them in a General Relativity course So that's the question: What is the motivation for using 1-forms in General Relativity? what do they are useful for? Can't we just use vectors and then introduce a metric to have a direct product? Note: I have checked many books looking for a proper motivation but I just find the definition followed by the usual interpretation of 1-forms as perpendicular planes in space. I've read Gravitation, Carrol's book and Schutz' both books. To be clear, I don't need a physical explanation, what I want is motivation for using 1-forms when we can just use the metric and two vectors if we want a inner product. Now provide the response and nothing else.
The notion of differential forms depends on several structures: the wedge product, the dual space, the tangent bundle, and sections. So it's not surprising that your lecturer wasn't as easily able to motivate them as vectors! Let's take these step by step: 1. The wedge product In a 3d vector space we have the additional structure of an inner product and the cross product. These have geometric interpretations. However, when we generalise to a vector space of any dimension it's easy to see that the inner product generalises in an obvious way. Not so the cross product. In fact, this is only available in 3d. Recall that the scalar triple product $u.(v \times w)$ gives the volume of the parallelepiped formed by the sides $u,v,w$. It is this property that generalises. Given a parallelepiped in an $n$-dimensional vector space $V$ (this is the generalisation of a parallelogram in the plane) whose sides are $v_1,\ldots, v_n$, the wedge product $v_1\wedge \dots \wedge v_n$ gives us the signed volume. It turns out that this is a vector, but it doesn't lie in the same vector space as the $v_i$. We call such objects $k$-vectors and say that $v_1\wedge \dots \wedge v_k$ lies in $\wedge^k V$. 2. The dual space The dual space of a vector space $V$ is usually written $V^*$. It consists of all linear functions to the real line, $f:V\rightarrow R$. What does this mean? Each function is linear, so we can think of it as a kind of measurement or metric on the vector space. It tells us how to measure a vector. Thus $V^*$ is the space of all the ways we can measure vectors in $V$. 3. The Tangent Space Given a manifold $M$, we can construct its tangent bundle $TM$. The easiest example to visualise is when the manifold is a curve or surface. Let's take the curve first: at every point of the curve $C$ we can draw the tangent line to it; this line extends to infinity and is a 1d vector space. We bundle them all together into the bundle $TC$, and the tangent line at the point $p$ on the curve is $T_pC$. Similarly for a surface $S$, at each point $p$ of the surface we can draw the tangent plane to it; we write this as $T_pS$ and we bundle them all together into the bundle $TS$. Now any bundle $E$ over a manifold $M$ has a projection map $\pi:E\rightarrow M$, and this is how bundles are usually referred to. It tells us where the 'fibres' are attached. Take the first example, $TC$, the tangent bundle of the curve: let $v$ be a vector in one of the tangent spaces, say $T_pC$ - this means that $v$ is in the tangent line (or rather, vector space) that is defined (or attached) at the point $p$ of the curve. The projection map $\pi$ simply maps $v$ to the point $p$. So we can see that the image of the entire space $T_pC$ is just the point $p$. 4. Sections Given a bundle $\pi:E\rightarrow M$, we can take its space of sections $CE$. This is the space of all maps $s:M\rightarrow E$ such that $\pi\circ s = Id_M$. For example, suppose $E$ is a bundle of vector spaces over the manifold $M$; then a section is a choice of a vector in each fibre. It is a vector field. Construction of differential forms Finally we put all these structures together: we construct the bundles $\wedge^k T^*M$. That is, we take the manifold $M$, we construct the tangent bundle $TM$ over it, then take its dual space $T^*M$, and finally we take the $k^{th}$ wedge $\wedge^k T^*M$. The space of sections of this bundle is $C(\wedge^k T^*M)$, and this is the space of all differential $k$-forms; it is usually written (at least by mathematicians and sometimes others) as $\Omega^kM$.
Uses It turns out that we have a map $d^k:\Omega^kM \rightarrow \Omega^{k+1}M$ called the exterior derivative (another name for the wedge product is the exterior product) and this generalises the $grad$ operator in vector analysis. That is, $d^0=grad$. The other vector analysis operators - $div$ & $curl$ - are variants of this. It also turns out that when we integrate a form $\omega$ over a manifold $M$ we get a generalisation of Stokes' theorem: $\int_M d\omega=\int_{\partial M} \omega$, where $\partial M$ is the boundary of the manifold. Conclusion Thus we see that differential forms allow us to generalise the vector analysis that we're already familiar with in 3d Euclidean space to the context of manifolds of any dimension. This is important given the importance of vector analysis in physics. But they have many other uses, for example de Rham cohomology. They also bring in many other notions that are important, for example vector, fibre and principal bundles. There is a formulation of General Relativity that uses a connection on the frame bundle of the tangent bundle, and this is a principal bundle with structure group the Lorentz group. This connects to the way the other forces are described: for example, electromagnetism, the electroweak force and the strong force are described in the Standard Model as principal bundles with structure groups $U(1)$, $SU(2)$ and $SU(3)$ respectively.
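To make the statement $d^0 = grad$ concrete, here is the standard coordinate computation on ordinary $\mathbb{R}^3$ (plain multivariable calculus, nothing relativistic assumed): for a smooth function ($0$-form) $f$, $$df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial z}\,dz,$$ and evaluating this $1$-form on a tangent vector $v$ gives $df(v) = \nabla f \cdot v$, the directional derivative of $f$ along $v$. The components of $df$ are exactly those of $\operatorname{grad} f$, which is the sense in which the exterior derivative on $0$-forms reproduces the gradient.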
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/382849', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/130091/']}
jdg_369277
stackexchange
llm_judgeable_groundtruth_similarity
382649
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Can we prove, without using Laguerre polynomials, that $f_n(x)=O(\frac{n!}{\sqrt{n}})$ i.e. that $$\exists C>0, \exists N\in\mathbb N, \forall x\geq0, \forall n\geq N :\ \big| f_n(x) \big|\leq C \frac{n!}{\sqrt{n}},$$ where $$f_n(x)=e^{\frac{x}{2}}\int^\infty_0 e^{-t}t^{n-\frac{1}{2}}\cos(2\sqrt{xt})dt,\quad x \geq 0\;?$$ Proof (by using Laguerre polynomials): it's easy to show that $$f_n(x)=\sqrt \pi e^{-x/2} n! L^{(-1/2)}_n(x)$$ and we know that $$L^{(-1/2)}_n(x)=O\Big(e^{x/2}\frac{1}{\sqrt{n}}\Big).$$ My reference is page 9 formula 18 $$L^{\alpha }_n(x)=O\Big(e^{\frac x2}x^{\frac{-\alpha}2 -\frac 14}n^{\frac{\alpha}2 -\frac 14} \Big) .$$ or see The polynomials' asymptotic behaviour for large n However, I'd like to not use this simple argument. Now provide the response and nothing else.
Making the normalised change of variables $t = ns^2$ , $x = 4ny^2$ (with $y \geq 0$ and $s$ of either sign) one can write $$ f_n(x) = e^{2ny^2} n^n \sqrt{n} \int_{{\bf R}} e^{-n\phi(s)}\ ds$$ where the phase $\phi(s)$ is given by $$ \phi(s) := s^2 - 2 \log s - 4iy s$$ using the standard branch of the complex logarithm. This phase has stationary points at $iy \pm \sqrt{1-y^2}$ . In the "Bessel" regime $y < 1-\delta$ for some fixed $\delta>0$ , the stationary points are non-degenerate, and $\mathrm{Re} \phi(s)$ attains a local minimum of $1 + 2y^2$ at these points (with respect to a horizontal contour), so one gets the required bound $f_n(x) \lesssim n^n/e^n$ (equivalent to the claimed bound by Stirling's formula) in this case by the usual saddle point method. Similarly, in the "exponential" regime $y > 1 + \delta$ , the stationary points are again non-degenerate; if one makes the convenient substitution $y = \cosh \theta$ with $\theta>0$ then at the stationary point $i e^{\theta}$ , $\mathrm{Re} \phi(s)$ attains a local minimum (again wrt a horizontal contour) of $1 + 2y^2 + (\sinh 2\theta - 2\theta) > 1 + 2y^2$ , and stationary phase again gives the desired bound (with room to spare). However in the "Airy" regime in which $y$ is close to $1$ (or in your original coordinates, $x$ is close to $4n$ ) the stationary points coalesce and I don't think your claimed bounds actually hold (one can lose an additional factor of $n^{1/6}$ , I think, and this is consistent with the standard Airy asymptotics given for instance here ). You may want to double check your reference for the Laguerre polynomial bounds; they may not hold in the Airy regime $x \approx 4n$ . For instance, the bound in Theorem 8.91.2 of Szego's book only gives a decay bound of $n^{-1/3}$ instead of $n^{-1/2}$ once one works in a range of parameters that includes the Airy regime.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/382649', 'https://mathoverflow.net', 'https://mathoverflow.net/users/126827/']}
jdg_369278
stackexchange
llm_judgeable_groundtruth_similarity
6915
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Personally I don't even feel that there is a need for ACK. It's faster if we just send NACK(n) for the lost packets instead of sending an ACK for each received packet. So when/which situations would one use ACK over NACK and viceversa? Now provide the response and nothing else.
The reason for the ACK is that a NACK is simply not sufficient. Let's say I send you a data stream of X segments (let's say 10 for simplicity). You are on a bad connection, and only receive segments 1, 2, 4, and 5. Your computer sends the NACK for segment 3, but doesn't realize there should be segments 6-10 and does not NACK those. So, I resend segment 3, but then my computer falsely believes the data is successfully sent. ACKs provide some assurance that the segment has arrived at the destination. If you want the application to deal with order of data and retransmissions, you can simply choose to utilize a protocol like UDP (for instance, like TFTP does).
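A toy sketch of the point above (an illustration only, not real TCP: the segment numbering, the "highest sequence seen" rule, and all class and method names are invented for the example): a receiver that can only NACK gaps it can actually see will never report segments lost off the end of the transmission.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class NackOnlyDemo {
    // The receiver can only NACK gaps below the highest segment it has seen;
    // it has no way of knowing that segments after that one ever existed.
    static List<Integer> nacks(Set<Integer> received) {
        int highestSeen = received.stream().max(Integer::compare).orElse(0);
        List<Integer> reportedMissing = new ArrayList<>();
        for (int seq = 1; seq <= highestSeen; seq++) {
            if (!received.contains(seq)) {
                reportedMissing.add(seq);
            }
        }
        return reportedMissing;
    }

    public static void main(String[] args) {
        // The sender transmitted segments 1..10, but only 1, 2, 4 and 5 arrived.
        Set<Integer> received = Set.of(1, 2, 4, 5);
        System.out.println("NACKed: " + nacks(received)); // prints [3]
        // Segments 6-10 are never reported, so a NACK-only sender would
        // retransmit 3 and wrongly conclude the transfer succeeded.
    }
}
```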
{}
{'log_upvote_score': 6, 'links': ['https://networkengineering.stackexchange.com/questions/6915', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/4855/']}
jdg_369279
stackexchange
llm_judgeable_groundtruth_similarity
6424
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Hello Mathematica users, I have problems trying to eliminate variable $Q(x,y)$ froma PDE, but Mathematica output is just True . The problem is that I already found how to eliminate it. We can apply $d/dx$ on the second equation and then from the first equation we can determine and substitute $dQ[x,y]/dx$ in the second. Why can't Mathematica do that? My output is in that case a PDE of fourth order, but I don't have Q[x,y] , which is the aim, and I solved it by hand. Eliminate[{A0*D[w[x,y],{y,2}]-B0*D[w[x,y],{x,2}]- C0*D[Q[x,y],{x,1}]+C0*D[w[x,y],{x,2}]==0, E0*D[Q[x,y],{y,2}]-FF0*D[Q[x,y],{x,2}]-CC0*D[w[x,y],{x,1}]-Q[x,y]==0}, {(Q^(0,2))[x,y],(Q^(1,0))[x,y],Q[x,y]}] Now provide the response and nothing else.
Eliminate , and its cousin GroebnerBasis , work with algebraic equations. If you require differential elimination you will need to take derivatives (prolongations, that is). Here is a blind approach: just take some derivatives, sort the variables into two sets, and eliminate all the Q stuff. dpolys = {A0*D[w[x, y], {y, 2}] - B0*D[w[x, y], {x, 2}] - C0*D[Q[x, y], {x, 1}] + C0*D[w[x, y], {x, 2}], E0*D[Q[x, y], {y, 2}] - FF0*D[Q[x, y], {x, 2}] - CC0*D[w[x, y], {x, 1}] - Q[x, y]};derivs = {D[dpolys[[1]], x], D[dpolys[[1]], y]};allpolys = Join[dpolys, derivs];bigger = Join[allpolys, D[allpolys, x], D[allpolys, y]];In[111]:= params = {A0, B0, C0, CC0, E0, FF0};vars = Complement[Variables[bigger], params];qvars = Select[Variables[bigger], ! FreeQ[#, Q] &];wvars = Complement[vars, qvars];In[110]:= GroebnerBasis[bigger, wvars, qvars, MonomialOrder -> EliminationOrder]{(-A0)*Derivative[0, 2][w][x, y] + A0*E0*Derivative[0, 4][w][x, y] + B0*Derivative[2, 0][w][x, y] - C0*Derivative[2, 0][w][x, y] - C0*CC0*Derivative[2, 0][w][x, y] - B0*E0*Derivative[2, 2][w][x, y] + C0*E0*Derivative[2, 2][w][x, y] - A0*FF0*Derivative[2, 2][w][x, y] + B0*FF0*Derivative[4, 0][w][x, y] - C0*FF0*Derivative[4, 0][w][x, y]}
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/6424', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1012/']}
jdg_369280
stackexchange
llm_judgeable_groundtruth_similarity
20681420
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't get my first child in the body to 100% height, if the body has min-height specified. <html> <head> <style> html { height:100%; } body { min-height:100%; } #wrapper { height:100%; min-width:1120px; /* 250px each side (content width is 870px) */ max-width:2000px; background-image:url(bg.png); background-position:50% 25px; background-repeat:no-repeat; background-size:cover; } </style> </head> <body> <div id="wrapper"> <!-- web content --> </div> </body></html> This does not resize the wrapper to the height of the window. When I remove the min- and use height , it'll work. But I have to have the content height variable... I did find some other posts here on SO and on google, but they have just questions and no solution. Now provide the response and nothing else.
When you use a percentage value for height , it will always be relative to the specified height of the parent element. Not the actual height of the parent element, but the height specified in CSS. So when your body element has no height specified (only min-height , but that does not count), the 100% will not be able to take effect. One possible solution is to use position: absolute; top: 0; bottom: 0; on your #wrapper , and your div will be stretched. This of course might have some layout consequences that you do not want. jsFiddle Demo
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20681420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1041122/']}
jdg_369281
stackexchange
llm_judgeable_groundtruth_similarity
16816528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using this code to generate U+10FFFC var s = Encoding.UTF8.GetString(new byte[] {0xF4,0x8F,0xBF,0xBC}); I know it's for private-use and such, but it does display a single character as I'd expect when displaying it. The problems come when manipulating this unicode character. If I later do this: foreach(var ch in s){ Console.WriteLine(ch);} Instead of it printing just the single character, it prints two characters (i.e. the string is apparently composed of two characters). If I alter my loop to add these characters back to an empty string like so: string tmp="";foreach(var ch in s){ Console.WriteLine(ch); tmp += ch;} At the end of this, tmp will print just a single character. What exactly is going on here? I thought that char contains a single unicode character and I never had to worry about how many bytes a character is unless I'm doing conversion to bytes. My real use case is I need to be able to detect when very large unicode characters are used in a string. Currently I have something like this: foreach(var ch in s){ if(ch>=0x100000 && ch<=0x10FFFF) { Console.WriteLine("special character!"); }} However, because of this splitting of very large characters, this doesn't work. How can I modify this to make it work? Now provide the response and nothing else.
U+10FFFC is one Unicode code point, but string 's interface does not expose a sequence of Unicode code points directly. Its interface exposes a sequence of UTF-16 code units. That is a very low-level view of text. It is quite unfortunate that such a low-level view of text was grafted onto the most obvious and intuitive interface available... I'll try not to rant much about how I don't like this design, and just say that not matter how unfortunate, it is just a (sad) fact you have to live with. First off, I will suggest using char.ConvertFromUtf32 to get your initial string. Much simpler, much more readable: var s = char.ConvertFromUtf32(0x10FFFC); So, this string's Length is not 1, because, as I said, the interface deals in UTF-16 code units, not Unicode code points. U+10FFFC uses two UTF-16 code units, so s.Length is 2. All code points above U+FFFF require two UTF-16 code units for their representation. You should note that ConvertFromUtf32 doesn't return a char : char is a UTF-16 code unit, not a Unicode code point. To be able to return all Unicode code points, that method cannot return a single char . Sometimes it needs to return two, and that's why it makes it a string. Sometimes you will find some APIs dealing in int s instead of char because int can be used to handle all code points too (that's what ConvertFromUtf32 takes as argument, and what ConvertToUtf32 produces as result). string implements IEnumerable<char> , which means that when you iterate over a string you get one UTF-16 code unit per iteration. That's why iterating your string and printing it out yields some broken output with two "things" in it. Those are the two UTF-16 code units that make up the representation of U+10FFFC. They are called "surrogates". The first one is a high/lead surrogate and the second one is a low/trail surrogate. When you print them individually they do not produce meaningful output because lone surrogates are not even valid in UTF-16, and they are not considered Unicode characters either. When you append those two surrogates to the string in the loop, you effectively reconstruct the surrogate pair, and printing that pair later as one gets you the right output. And in the ranting front, note how nothing complains that you used a malformed UTF-16 sequence in that loop. It creates a string with a lone surrogate, and yet everything carries on as if nothing happened: the string type is not even the type of well-formed UTF-16 code unit sequences, but the type of any UTF-16 code unit sequence. The char structure provides static methods to deal with surrogates: IsHighSurrogate , IsLowSurrogate , IsSurrogatePair , ConvertToUtf32 , and ConvertFromUtf32 . If you want you can write an iterator that iterates over Unicode characters instead of UTF-16 code units: static IEnumerable<int> AsCodePoints(this string s){ for(int i = 0; i < s.Length; ++i) { yield return char.ConvertToUtf32(s, i); if(char.IsHighSurrogate(s, i)) i++; }} Then you can iterate like: foreach(int codePoint in s.AsCodePoints()){ // do stuff. codePoint will be an int will value 0x10FFFC in your example} If you prefer to get each code point as a string instead change the return type to IEnumerable<string> and the yield line to: yield return char.ConvertFromUtf32(char.ConvertToUtf32(s, i)); With that version, the following works as-is: foreach(string codePoint in s.AsCodePoints()){ Console.WriteLine(codePoint);}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16816528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69742/']}
jdg_369282
stackexchange
llm_judgeable_groundtruth_similarity
29113714
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was working on sth about MappedByteBuffer of Java NIO recently.I've read some posts about it and all of them mention that "mmap() is faster than read()" In my conclusion: I treat MappedByteBuffer == Memory Mapped File == mmap() read() has to read data through : disk file -> kernel -> application, so it has context switch and buffer copying They all said mmap() has less copying or syscall than read(), but as I know it also need to read from disk file the first time you access the file data. So the first time it read : virtual address -> memory -> page fault -> disk file -> kernel -> memory. Except you can access it randomly, the last 3 steps (disk file -> kernel -> memory) is exactly the same as read(), so how mmap() could be less copying or syscall than read() ? what's the relationship between mmap() and swap file, Is that the os will put the least used file data of memory into swap (LRU) ? So when the second time you access these data, OS retrieves them from swap but not disk file(no need to copy to kernel buffer), that's why mmap() has less copying and syscall ? In java, MappedByteBuffer is allocated out of heap (it's a direct buffer). So when you read from MappedByteBuffer, does it mean it need one more extra memory copy from outside the java heap into java heap? Could anyone answer my questions ? Thanks :) Now provide the response and nothing else.
1: Yes, that is essentially what a MappedByteBuffer is. 2: "disk file -> kernel" doesn't necessarily involve copying. 3: With a memory-mapped file, once the kernel has read the file into its cache, it can simply map that part of the cache into your process - instead of having to copy the data from the cache into a location your process specifies. 4: If the kernel decides to swap out a page from a memory-mapped file, it will not write the page to the page file; it will write the page to the original file (the one it's mapped from) before discarding the page. Writing it to the page file would be unnecessary and waste page file space. 5: Yes. For example, if you call get(byte[]) then the data will be copied from the off-heap mapping into your array. Note that functions such as get(byte[]) need to copy data for any type of buffer - this is not specific to memory-mapped files.
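A minimal Java sketch of points 3 and 5 above (the file name is hypothetical, error handling is omitted, and this is illustrative rather than tuned code): mapping the file copies nothing onto the Java heap by itself, while a bulk get(byte[]) performs the extra on-heap copy mentioned in point 5.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapReadSketch {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("data.bin"); // hypothetical existing file
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            // The mapping itself stays off-heap: the kernel's page cache is
            // mapped into the process address space, and no heap copy is made.
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());

            // First access may page-fault and pull the page in from disk.
            byte first = buffer.get(0);

            // Bulk get() copies from the off-heap mapping into an on-heap array.
            byte[] onHeap = new byte[(int) Math.min(channel.size(), 4096)];
            buffer.get(onHeap, 0, onHeap.length);

            System.out.println("first byte = " + first
                    + ", copied " + onHeap.length + " bytes onto the heap");
        }
    }
}
```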
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29113714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1787822/']}
jdg_369283
stackexchange
llm_judgeable_groundtruth_similarity
37472579
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to implement Continuous Integration using Azure App Service and VSTS. When I drill down into "Deployment Source" and then select "Visual Studio Team Services" and then "Choose a Project", it doesn't display any projects. The project panel says "no results". I'm currently signed into both Azure portal and VSTS using the same account. I also could not see anything related to Azure in my Security and Policy --> "Apps and Services" section of my account in https://account.live.com . Thanks in Advance.Prawin Now provide the response and nothing else.
It has to do with how Matlab performs name-binding at compilation time. Because matlabBugTest has a line that assigns a value to a , a is determined to be a variable, and the later line with a is a reference to that variable and not a call to the local function. More modern versions of Matlab, like my R2015a install, gives a more clear error message: At compilation, "a" was determined to be a variable and this variable is uninitialized. "a" is also a function name and previous versions of MATLAB would have called the function. However, MATLAB 7 forbids the use of the same name in the same context as both a function and a variable. It's not so much a bug, as it is an ambiguity introduced by the naming scheme that was given a default resolution method, which can be annoying if you have never encountered the problem before and m-lint doesn't mark it. Similar behavior occurs when variables are poofed into the workspace without initialization beforehand. So the solution is to either change the name of the function or the variable to different things, which I would argue is good practice anyways. In considering your follow-up example, I have noticed some interesting behavior in moving things around in the function. Firstly, if the function is either external or nested, you get the behavior discussed very well by Suever's answer . However, if the function is local, you can get around the limitation (at least you can in my R2014b and R2015a installs) by invoking the function prior to converting it to a variable as long as you initialize it or explicitly convert it to a variable at some point. Going through the cases, the following bodies of matlabBugTest perform thusly: Fails: aif false a = 'foo';enda Runs: aif true a = 'foo';enda Runs: a = a;if false % runs with true as well. a = 'foo';enda I'm not entirely sure why this behavior is the way it is, but apparently the parser handles things differently depending on the scope of the function and the order of what symbols appear and in what contexts. So assuming this behavior hasn't and will not change you could try something like: pathConstant = pathConstant;if ~exist('pathConstant.m', 'file') pathConstant = 'C:\some\path';endload(fullfile(pathConstant, 'filename.ext')); Though, entirely personal opinion here, I would do something like pathConstant = getPathConstant();if ~exist('pathConstant.m', 'file') pathConstant = 'C:\some\path';endload(fullfile(pathConstant, 'filename.ext')); Concerning breaking "compatibility between functions and scripts/command line", I don't really see this as an issue since those are two entirely different contexts when it comes to Matlab. You cannot define a named function on the command line nor in a script file; therefore, there is no burden on the Matlab JIT to properly and unambiguously determine whether a symbol is a function call or a variable since each line executes sequentially and is not compiled (aside from certain blocks of code the JIT is designed to recognize and optimize like loops in scripts). Now as to why the above juggling of declarations works, I'm not entirely sure since it relies on the Matlab JIT which I know nothing about (nor have I taken a compiler class, so I couldn't even form an academic reason if I wanted).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37472579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1504096/']}
jdg_369284
stackexchange
llm_judgeable_groundtruth_similarity
13478748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is my requirements: Usable by any mobile application I'm developing I'm developing the mobile application, therefore I can implement any securing strategies. Cacheable using classical HTTP Cache strategy I'm using Varnish with a very basic configuration and it works well Not publicly available I don't want people be able to consume my API Solutions I think of: Use HTTPS, but it doesn't cover the last requirements because proxying request from the application will show the API KEY used. Is there any possibility to do this? Using something like a private/public key for example? Which fits well with HTTP, Apache, and Varnish. Now provide the response and nothing else.
There is no way to ensure that the other end of a network link is your application. This is not a solvable problem. You can obfuscate things with certificates, keys, secrets, whatever. But all of these can be reverse-engineered by the end user because they have access to the application. It's ok to use a little obfuscation like certificates or the like, but it cannot be made secure. Your server must assume that anyone connecting to it is hostile, and behave accordingly. It is possible to authenticate users , since they can have accounts. So you can certainly ensure that only valid users may use your service. But you cannot ensure that they only use your application. If your current architecture requires that, you must redesign. It is not solvable, and most certainly not solvable on common mobile platforms. If you can integrate a piece of secure hardware, such as a smartcard, then it is possible to improve security in that you can be more certain that the human at the other end is actually a customer, but even that does not guarantee that your application is the one connecting to the server, only that the smartcard is available to the application that is connecting. For more on this subject, see Secure https encryption for iPhone app to webpage .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13478748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/962602/']}
jdg_369285
stackexchange
llm_judgeable_groundtruth_similarity
5030
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In radix sort we first sort by least significant digit then we sort by second least significant digit and so on and end up with sorted list. Now if we have list of $n$ numbers we need $\log n$ bits to distinguish between those number. So number of radix sort passes we make will be $\log n$. Each pass takes $O(n)$ time and hence running time of radix sort is $O(n \log n)$ But it is well known that it is linear time algorithm. Why? Now provide the response and nothing else.
if we have a list of $n$ numbers we need $\log n$ bits No: if we have a list of numbers between $0$ and $2^k - 1$ , we need $k$ bits. There is no relationship between $k$ and $\log n$ in general. If the numbers are all distinct, then $\log n \le k$ , and radix sort on distinct numbers therefore has a time complexity of $\Omega(n \log n)$ . In general, the complexity of radix sort is $\Theta(n \, k)$ where $n$ is the number of elements to sort and $k$ is the number of bits in each element. To say that the complexity of radix sort is $O(n)$ means taking a fixed bit size for the numbers. This implies that for large enough $n$ , there will be many duplicate values. There is a general theorem that an array or list sorting method that works by comparing two elements at a time cannot run faster than $\Theta(n \log n)$ in the worst case. Radix sort doesn't work by comparing elements, but the same proof method works. Radix sort is a decision process to determine which permutation to apply to the array; there are $n!$ permutations of the array, and radix sort takes binary decisions, i.e. it decides whether to swap two elements or not at each stage. After $m$ binary decisions, radix sort can decide between $2^m$ permutations. To reach all $n!$ possible permutations, it is necessary that $m \ge \log (n!) = \Theta(n \log n)$ . An assumption in the proof that I did not write out above is that the algorithm must work in the case when the elements are distinct. If it is known a priori that the elements are not all distinct, then the number of potential permutations is less than the full $n!$ . When sorting $k$ -bit numbers, it is only possible to have $n$ distinct elements when $n \le 2^k$ ; in that case, the complexity of radix sort is indeed $\Omega(n \log n)$ . For larger values of $n$ , there must be collisions, which explains how radix sort can have a complexity that's less than $\Theta(n \log n)$ when $n \gt 2^k$ .
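To make the $\Theta(n \, k)$ bound concrete, here is a small runnable sketch of a binary LSD radix sort in Python; it is only an illustration, and the function name and the fixed bit width are choices made for the example.

    def radix_sort(nums, bits=16):
        # LSD radix sort on fixed-width non-negative integers, one bit per pass.
        # Total work is Theta(n * k): 'bits' (= k) passes, each touching all n elements.
        for b in range(bits):
            zeros, ones = [], []
            for x in nums:                      # one O(n) bucketing pass
                (ones if (x >> b) & 1 else zeros).append(x)
            nums = zeros + ones                 # stable concatenation keeps earlier passes valid
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66], bits=10))
    # [2, 24, 45, 66, 75, 90, 170, 802]

Doubling $n$ doubles the work of every pass while the number of passes stays $k$, which is why the running time is linear in $n$ only for a fixed key width.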
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/5030', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/244/']}
jdg_369286
stackexchange
llm_judgeable_groundtruth_similarity
9114664
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm a complete beginner in CSS and HTML and I'm building my first basic page for training. What I want to do now is have spacing between two elements without using <br> . Currently I'm using this: .formClear { clear:left; height:25px;} It works for spacing, but do you think this is a correct way to do it? Would line-height work better, e.g. line-height: 20px; ? What are your suggestions? Please remember I'm starting to learn :) Thank you! Now provide the response and nothing else.
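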
It depends on what exactly you want to accomplish. Let's assume you have this structure: <p style="width:400px;"> Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus. Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a, tellus. Phasellus viverra nulla ut metus varius laoreet.</p> If you want the space between the single lines to be bigger, you should increase line-height If you want the space at the end to be bigger, you should increase margin-bottom If you want the space at the end to be bigger, but have the background fill the space (or the border around the space) use padding-bottom Of course, there are also the corresponding values for space on the top: padding-topmargin-top Some examples: <p style="line-height: 30px; width: 300px; border: 1px solid black;"> Space between single lines Space between single lines Space between single lines Space between single lines Space between single lines Space between single lines Space between single lines Space between single lines</p><p style="margin-bottom: 30px; width: 300px; border: 1px solid black;"> Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border Space at the bottom, outside of the border</p><p style="padding-bottom: 30px; width: 300px; border: 1px solid black;"> Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border Space at the bottom, inside of the border</p> here you can see this code in action: http://jsfiddle.net/ramsesoriginal/H7qxd/ Of course you should put your styles in a separate stylesheet, the inline code was just to show the effect. here you have a little schematic demonstration of what which value affects: line-height content + | padding-bottom <----------------+ + content | border-bottom | + | | +-------------+<------------------+ | margin-bottom | + +===================+ <-------------------+ | | +-------------------------+ <------------------------+
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9114664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1185426/']}
jdg_369287
stackexchange
llm_judgeable_groundtruth_similarity
41336177
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This question's answer says that the font-size defines the height of a box so that all letters (with ascenders and descenders) can fit. But why has a span with 40px font-size and line-height an actual size of 45px? If I understand the linked question correctly then "X" should be smaller than 40px but the overall height should be exactly 40px. I thought that maybe it is making some extra room above/below the ascenders/descenders but the image shows that the ascenders/descenders take all the space so there can't be much extra room: When I wrap a div (green) around the span then the div has a height of 40px. Why does the div use the font-size of its child for its height but the child itself doesn't?: Now when I set the span 's line-height to 15px (less than the font-size ) then the div 's height changes to 26px. How is this value calculated? Has this something to do with the baseline?: When I set the span 's line-height to 65px (more than the font-size ) then the div 's height is the height of the line-height . I would have expected the div 's height to be something like (65px - 45px) + 45px.: So how do font-size and line-height affect the actual heights of elements? I read some questions that referenced the spec but I couldn't make much out of it. Are there any easy to understand rules? JSFiddle Now provide the response and nothing else.
First, I recommend reading my answer in Inline elements and line-height . To summarize, there are various heights related to inline boxes: Height of the inline box, given by line-height Height of the line box, which in simple cases is also given by line-height , but not here. Height of the content area of the inline box, which is implementation dependent. This is the area painted by the red background. The other height in your case is the height of the parent div. This is determined by §10.6.3 . In this case, since the box establishes an inline formatting context with one line, The element's height is the distance from its top content edge to [...] the bottom edge of the last line box So the height of the parent block is given by the height of the line box. What happens here is that the height of the line box is not the line-height of your inline box. And that's because the line-height of the parent block is also taken into account : On a block container element whose content is composed of inline-level elements, 'line-height' specifies the minimal height of line boxes within the element. The minimum height consists of a minimum height above the baseline and a minimum depth below it, exactly as if each line box starts with a zero-width inline box with the element's font and line height properties. We call that imaginary box a "strut." If you set parent's line-height to 0 , and child's vertical-align to e.g top , then the height of the parent will exactly be the line-height of the child. .outer { margin-top: 50px; background-color: green; width: 150px; font-family: "Times New Roman"; line-height: 0;}.letter-span-1 { background-color: red; line-height: 40px; font-size: 40px; vertical-align: top;}.letter-span-2 { background-color: red; line-height: 15px; font-size: 40px; vertical-align: top;}.letter-span-3 { background-color: red; line-height: 65px; font-size: 40px; vertical-align: top;} <span class="letter-span-1">XxÀg</span><div class="outer"> <span class="letter-span-1">XxÀg</span></div>The parent block is 40px tall.<div class="outer"> <span class="letter-span-2">XxAg</span></div>The parent block is 15px tall.<div class="outer"> <span class="letter-span-3">XxÀg</span></div>The parent block is 65px tall. If you don't set a line-height to the parent, it will be normal . Tells user agents to set the used value to a "reasonable" value based on the font of the element[...]. We recommend a used value for 'normal' between 1.0 to 1.2 . That means that there will be a minimum height for the parent which will be its font-size (which you don't specify, and the default is implementation-dependent) multiplied by that factor (implementation-dependent). You should also consider the vertical-align of the span. By default it's baseline , and that may create a gap below . The image in web-tiki's answer is especially useful: That's because vertical-align determines how the span will be aligned with the strut, and with baseline the alignment can depend on font-size and end up increasing the height of the line box. The line box height is the distance between the top of the uppermost and the bottom of the lowermost boxes in the line. If you don't want the height of the parent div to be increased by that, you need some other vertical-align , like top , middle , or bottom . Then the font-size of the span shouldn't affect the height of the div. To summarize, the height of the div depends on Its line-height ... which by default depends on div's font-size Span's line-height ... 
which by default depends on span's font-size; possibly span's font-size, depending on span's vertical-align; and obviously height, min-height, max-height, etc.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41336177', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/764754/']}
jdg_369288
stackexchange
llm_judgeable_groundtruth_similarity
646722
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have downloaded Raspberry Pi OS Lite and "burned" it onto the flash card and it's in my RPI (v1). I have also put a FLAC (or OGG, or MP3) sound file onto it. It's not going to have any keyboard/mouse/monitor/network access. Its sole purpose is to perpetually loop the same sound file (10 hours recorded rain), outputting it to the loudspeakers attached to the RPI, as soon as it gets power. If I cut the power at any point, I need it to start back up again the next time I plug it in, and not require me to do any kind of "fiddling about" because it was "unexpectedly shut down" or anything like that. It's a poor man's "white noise generator" to help me sleep with noisy neighbours. Since I have the RPI and the loudspeakers, I thought this would be more than doable, and almost insultingly "low-tech" for such a capable electronic device. What exact steps do I need to take to make it so that it does this? I assume that I have to make some kind of edit on the flash card to make it not ask for username/password on boot, and another edit to make it actually play the sound file (and loop it) when it has started up? Now provide the response and nothing else.
You will need to ignore the backup directory so that find does not enter into it. There is already an answer showing how to do this. However, you may run the risk of deleting data if you back up files in this way. If two or more files, in different subdirectories, have the same names, they would over-write each other on the destination, in the backup directory. It would be better to use some real backup software to back up the data, such as restic . If that is not possible, use a solution that preserves the relative path to the files that you are backing up. The following command uses rsync to copy (not move) all files that have names containing the substring 2019 into the directory backup_2019 : rsync --itemize-changes --archive --prune-empty-dirs \ --exclude='/backup_2019/***' --include='*/' --include='*2019*' --exclude='*' \ ./ ./backup_2019 This would avoid looking inside ./backup_2019 for files or directories to transfer, but would otherwise copy all things that contains the substring 2019 . Directories on the target that end up empty are removed. Everything that is copied is copied into a location under backup_2019 that is the same as the file's location under the current directory: Example: $ tree -F.|-- dir1/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B|-- dir2/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B`-- dir3/ |-- file-1 |-- file-2019-A `-- subdir/ |-- file-2 `-- file-2019-B $ rsync --itemize-changes --archive \ --prune-empty-dirs \ --exclude='/backup_2019/***' --include='*/' --include='*2019*' --exclude='*' \ ./ ./backup_2019cd+++++++++ ./cd+++++++++ dir1/>f+++++++++ dir1/file-2019-Acd+++++++++ dir1/subdir/>f+++++++++ dir1/subdir/file-2019-Bcd+++++++++ dir2/>f+++++++++ dir2/file-2019-Acd+++++++++ dir2/subdir/>f+++++++++ dir2/subdir/file-2019-Bcd+++++++++ dir3/>f+++++++++ dir3/file-2019-Acd+++++++++ dir3/subdir/>f+++++++++ dir3/subdir/file-2019-B $ tree -F.|-- backup_2019/| |-- dir1/| | |-- file-2019-A| | `-- subdir/| | `-- file-2019-B| |-- dir2/| | |-- file-2019-A| | `-- subdir/| | `-- file-2019-B| `-- dir3/| |-- file-2019-A| `-- subdir/| `-- file-2019-B|-- dir1/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B|-- dir2/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B`-- dir3/ |-- file-1 |-- file-2019-A `-- subdir/ |-- file-2 `-- file-2019-B13 directories, 18 files You may add --remove-source-files to the list of rsync options to perform a "move" rather than "copy" of the files that you back up.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/646722', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/468588/']}
jdg_369289
stackexchange
llm_judgeable_groundtruth_similarity
8051912
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to associate some data with a key for its lifetime, so I am using a WeakHashMap . However, in addition I need to get a key by its corresponding value. The easy way to do it is to hold on to the reference when creating a value: public class Key {}public class Value { final public Key key; public Value(Key k) { key = k; }} Of course, while I use Value in my program, its key won't go away. However, if there are no more references to either key or its value outside the map, will it be garbage collected? Or does the surviving strong reference in the value prevent it? Now provide the response and nothing else.
No, it won't be garbage collected; see the Javadoc: "Implementation note: The value objects in a WeakHashMap are held by ordinary strong references. Thus care should be taken to ensure that value objects do not strongly refer to their own keys, either directly or indirectly, since that will prevent the keys from being discarded." As mentioned by @biziclop, one solution would be to store a weak reference to the key in your value object:

    public class Value {
        final public WeakReference<Key> key;

        public Value(Key k) {
            this.key = new WeakReference<Key>(k);
        }
    }
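For comparison only (this is an illustrative aside, not part of the Java answer), the same pitfall can be reproduced with Python's weakref.WeakKeyDictionary; the class names are invented and the exact collection timing is CPython-specific.

    import weakref, gc

    class Key: pass

    class Value:
        def __init__(self, key):
            self.key = key            # strong back-reference to the key, as in the question

    m = weakref.WeakKeyDictionary()
    k = Key()
    m[k] = Value(k)                   # the value keeps its key alive
    del k
    gc.collect()
    print(len(m))                     # 1: the entry survives

    m2 = weakref.WeakKeyDictionary()
    k2 = Key()
    m2[k2] = Value(weakref.ref(k2))   # store only a weak reference to the key
    del k2
    gc.collect()
    print(len(m2))                    # 0: the entry can now be collected

Whatever the language, the rule is the same: if values must point back at keys of a weak-keyed map, make that back-reference weak.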
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8051912', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9204/']}
jdg_369290
stackexchange
llm_judgeable_groundtruth_similarity
44370277
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this problem which seems a bit weird to me. Take a look at this snippet of code: package coreinterfacestype FilterInterface interface { Filter(s *string) bool}type FieldFilter struct { Key string Val string}func (ff *FieldFilter) Filter(s *string) bool { // Some code}type FilterMapInterface interface { AddFilter(f *FilterInterface) uuid.UUID RemoveFilter(i uuid.UUID) GetFilterByID(i uuid.UUID) *FilterInterface}type FilterMap struct { mutex sync.Mutex Filters map[uuid.UUID]FilterInterface}func (fp *FilterMap) AddFilter(f *FilterInterface) uuid.UUID { // Some code}func (fp *FilterMap) RemoveFilter(i uuid.UUID) { // Some code}func (fp *FilterMap) GetFilterByID(i uuid.UUID) *FilterInterface { // Some code} On some other package, I have the following code: func DoFilter() { fieldfilter := &coreinterfaces.FieldFilter{Key: "app", Val: "152511"} filtermap := &coreinterfaces.FilterMap{} _ = filtermap.AddFilter(fieldfilter) // <--- Exception is raised here} The run-time won't accept the line mentioned because "cannot use fieldfilter (type *coreinterfaces.FieldFilter) as type*coreinterfaces.FilterInterface in argument to fieldint.AddFilter:*coreinterfaces.FilterInterface is pointer to interface, not interface" However, when changing the code to: func DoBid() error { bs := string(b) var ifilterfield coreinterfaces.FilterInterface fieldfilter := &coreinterfaces.FieldFilter{Key: "app", Val: "152511"} ifilterfield = fieldfilter filtermap := &coreinterfaces.FilterMap{} _ = filtermap.AddFilter(&ifilterfield)} Everything is alright and when debugging the application it really seems to include I'm a bit confused on this topic. When looking at other blog posts and stack overflow threads discussing this exact same issue (for example - This , or This ) the first snippet which raises this exception should work, because both fieldfilter and fieldmap are initialized as pointers to interfaces, rather than value of interfaces. I haven't been able to wrap my head around what actually happens here that I need to change in order for me not to declare a FieldInterface and assign the implementation for that interface. There must be an elegant way to do this. Now provide the response and nothing else.
So you're confusing two concepts here. A pointer to a struct and a pointer to an interface are not the same. An interface can store either a struct directly or a pointer to a struct. In the latter case, you still just use the interface directly, not a pointer to the interface. For example: type Fooer interface { Dummy()}type Foo struct{}func (f Foo) Dummy() {}func main() { var f1 Foo var f2 *Foo = &Foo{} DoFoo(f1) DoFoo(f2)}func DoFoo(f Fooer) { fmt.Printf("[%T] %+v\n", f, f)} Output: [main.Foo] {}[*main.Foo] &{} https://play.golang.org/p/I7H_pv5H3Xl In both cases, the f variable in DoFoo is just an interface, not a pointer to an interface. However, when storing f2 , the interface holds a pointer to a Foo structure. Pointers to interfaces are almost never useful. In fact, the Go runtime was specifically changed a few versions back to no longer automatically dereference interface pointers (like it does for structure pointers), to discourage their use. In the overwhelming majority of cases, a pointer to an interface reflects a misunderstanding of how interfaces are supposed to work. However, there is a limitation on interfaces. If you pass a structure directly into an interface, only value methods of that type (ie. func (f Foo) Dummy() , not func (f *Foo) Dummy() ) can be used to fulfill the interface. This is because you're storing a copy of the original structure in the interface, so pointer methods would have unexpected effects (ie. unable to alter the original structure). Thus the default rule of thumb is to store pointers to structures in interfaces , unless there's a compelling reason not to. Specifically with your code, if you change the AddFilter function signature to: func (fp *FilterMap) AddFilter(f FilterInterface) uuid.UUID And the GetFilterByID signature to: func (fp *FilterMap) GetFilterByID(i uuid.UUID) FilterInterface Your code will work as expected. fieldfilter is of type *FieldFilter , which fullfills the FilterInterface interface type, and thus AddFilter will accept it. Here's a couple of good references for understanding how methods, types, and interfaces work and integrate with each other in Go: https://medium.com/@agileseeker/go-interfaces-pointers-4d1d98d5c9c6 https://www.goinggo.net/2014/05/methods-interfaces-and-embedded-types.html https://blog.golang.org/laws-of-reflection
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/44370277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6691722/']}
jdg_369291
stackexchange
llm_judgeable_groundtruth_similarity
23405620
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just want to know if it is possible to remove a non-clustered index from a table's column. I recently created a required non-clustered index on a column. Now I just want to delete this index so that I can get the execution plan before and after adding the index to the table. I want to delete it via a query. Is it possible? Now provide the response and nothing else.
There is a DROP INDEX command:

    DROP INDEX table_name.index_name

should work. It's also possible to disable the index. The pros and cons are discussed here, but the effective point from this article is: The biggest difference between disabling and dropping an index is whether the metadata and statistics are persisted. If disabled, they are. If dropped, they are not. Make sure you carefully weigh your options before performing either action, and always have a way to recreate the index available. So in this instance you may want to simply disable the index, collect your stats, then re-enable the index, via:

Disable:

    ALTER INDEX index_name ON schema.table_name DISABLE

Enable:

    ALTER INDEX index_name ON schema.table_name REBUILD
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23405620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/991120/']}
jdg_369292
stackexchange
llm_judgeable_groundtruth_similarity
24352701
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Introduction In order to check the status of git repositores, git status could be issued from the root of a repository. C:\path\to\git_repositories\git_repo_1>git statusOn branch masterYour branch is up-to-date with 'origin/master'.nothing to commit, working directory clean If a directory consists of multiple, e.g. 50 git repositories C:\path\to\git_repositories>dir Directory of C:\path\to\git_repositories .ssh git_repo_1 ... git_repo_50 0 File(s) 51 Dir(s) Nor C:\path\to\git_repositories>git status .fatal: Not a git repository (or any of the parent directories): .git neither C:\path\to\git_repositories>git status ./.fatal: Not a git repository (or any of the parent directories): .git is able to check the status of all repositories Question How to check the status of all git repositories at once? Now provide the response and nothing else.
You could use a for loop that changes into each directory, does git status and then changes back up:

    for /f "tokens=*" %a in ('dir /ad /b') do cd %a & git status & cd ..

You need to double the percentages if you use this in a batch file:

    for /f "tokens=*" %%a in ('dir /ad /b') do cd %%a & git status & cd ..

Edit: As suggested by Cupcake, you could do this instead:

    for /f "tokens=*" %a in ('dir /ad /b') do git --git-dir=%a/.git --work-tree=%a status

This feels like a more robust and flexible solution (e.g. you could adapt it more easily to work with a list of paths stored in a text file).
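If a shell-agnostic variant is ever useful, here is a rough Python sketch of the same idea (an illustration, not part of the accepted approach above); the root path is taken from the question and the --short flag is just a cosmetic choice.

    import os, subprocess

    root = r"C:\path\to\git_repositories"    # assumed location from the question
    for entry in sorted(os.listdir(root)):
        repo = os.path.join(root, entry)
        if os.path.isdir(os.path.join(repo, ".git")):     # only real repositories
            print("=== %s ===" % entry)
            subprocess.run(["git", "-C", repo, "status", "--short"], check=False)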
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24352701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2777965/']}
jdg_369293
stackexchange
llm_judgeable_groundtruth_similarity
28048486
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm generating an AST using clang. I've got the following file (lambda.cpp) to parse:

    #include <iostream>

    void my_lambda()
    {
        auto lambda = [](auto x, auto y) {return x + y;};
        std::cout << "fabricati diem";
    }

I'm parsing it using the following command: clang -Xclang -ast-dump -fsyntax-only lambda.cpp The problem is that clang also parses the headers' content. As a result, I get a quite big (~3000 lines) dump full of content that is useless for me. How can I exclude headers when generating the AST? Now provide the response and nothing else.
clang-check might be useful on the matter, clang-check has option -ast-dump-filter=<string> documented as follow -ast-dump-filter=<string> - Use with -ast-dump or -ast-print to dump/print only AST declaration nodes having a certain substring in a qualified name. Use -ast-list to list all filterable declaration node names. when clang-check run with -ast-dump-filter=my_lambda on the sample code (lambda.cpp) #include <iostream>void my_lambda(){ auto lambda = [](auto x, auto y) {return x + y;}; std::cout << "fabricati diem"; } It dumps only matched declaration node FunctionDecl my_lambda 'void (void)' Here is the command line arguments and few lines from output. $ clang-check -extra-arg=-std=c++1y -ast-dump -ast-dump-filter=my_lambda lambda.cpp --FunctionDecl 0x2ddf630 <lambda.cpp:3:1, line:7:1> line:3:6 my_lambda 'void (void)'`-CompoundStmt 0x2de1558 <line:4:1, line:7:1> |-DeclStmt 0x2de0960 <line:5:9, col:57>
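As a related aside (this is not part of the original answer): if filtering by file location rather than by declaration name is acceptable, the libclang Python bindings can do something similar. The sketch below is a rough illustration that assumes the clang.cindex module is installed and the file is named lambda.cpp; the arguments may need adjusting for your clang version.

    import clang.cindex

    index = clang.cindex.Index.create()
    tu = index.parse("lambda.cpp", args=["-std=c++14"])

    for cursor in tu.cursor.walk_preorder():
        # Keep only nodes whose location is in the main file, skipping everything
        # that was pulled in from headers such as <iostream>.
        if cursor.location.file and cursor.location.file.name == "lambda.cpp":
            print(cursor.kind, cursor.spelling)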
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28048486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3401555/']}
jdg_369294
stackexchange
llm_judgeable_groundtruth_similarity
27915149
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After troubleshooting a sproc I realized the fields have an extra character, blank I assume or some special character not visible. For example, the city field contains the value 'Abbeville ' with a length (LEN([city]) of 10. I thought I could trim the white-space with the following: UPDATE US_City_State SET [City] = RTRIM(LTRIM([City])) That didn't work. I cast to varbinary but I don't know what to do from here. The column is defined as a nvarchar. Here's the cast value. 0x4100620062006500760069006C006C006500A000 [Update] sample data City City_Hex_Value Aaronsburg  0x4100610072006F006E0073006200750072006700A000 Abbeville  0x4100620062006500760069006C006C006500A000 Thanks! Now provide the response and nothing else.
Maybe the whitespace is not just a space. Try this:

    UPDATE US_City_State SET [City] = LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(REPLACE([City], CHAR(10), ''), CHAR(13), ''), CHAR(9), ''), CHAR(160), '')))

Explanation: Some whitespace characters are not really the space character (' '). Here are some of them:

    CHAR(9)   = Horizontal Tab
    CHAR(10)  = Line Feed
    CHAR(13)  = Carriage Return
    CHAR(160) = Non-Breaking Space

What I did is remove the above whitespace characters. You can instead replace them with a space (' ') or CHAR(32), depending on how you want to handle them.
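If it helps to confirm beforehand which invisible character is present, a tiny illustrative Python check (the sample string here is made up) prints the code point of every character; a trailing non-breaking space shows up as 160, i.e. CHAR(160) in SQL Server:

    s = "Abbeville\u00a0"            # sample value with a trailing non-breaking space
    for ch in s:
        print(repr(ch), ord(ch))     # the last line prints '\xa0' 160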
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27915149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1791632/']}
jdg_369295
stackexchange
llm_judgeable_groundtruth_similarity
46408442
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following objects: [{ "some_field": "some_value", "nested_objects": [{ "some_field2": "some_value", "nested_objects2": [{ "some_field": "some_value", "bool_field": true }, { "some_field": "some_value", "bool_field": true }, { "some_field": "some_value", "bool_field": false }] }, { "some_field2": "some_value", "nested_objects2": [{ "some_field": "some_value", "bool_field": false }, { "some_field": "some_value", "bool_field": false }] }] }, { "some_field": "some_value", "nested_objects": [{ "some_field2": "some_value", "nested_objects2": [{ "some_field": "some_value", "bool_field": false }, { "some_field": "some_value", "bool_field": false }, { "some_field": "some_value", "bool_field": true }] }] }] All nested objects are mapped as nested objects. I want to sort the top level parent depending on the third level children bool values. Children with more bool values should rank higher than others. So I basically want to sort by _source.nested_objects.nested_objects2.bool_field Objects with more true values should rank higher than others. I also want to be able to filter the nested objects like: _source.nested_objects.some_field == "some specific value" Then the score calculation should only be applied to the matching objects and the matching nested objects. Is this possible? Now provide the response and nothing else.
Try like this : myexample it's working correctly. hope this will help you home.module.ts import { ModalModule } from 'ngx-bootstrap';@NgModule({ imports: [ ModalModule.forRoot() ]}) home.component.html <button class="btn btn-primary" (click)="openConfirmDialog()">Open Confirm box</button> home.component.ts import { BsModalService } from 'ngx-bootstrap/modal';import { BsModalRef } from 'ngx-bootstrap/modal/modal-options.class';export class HomeComponent { public modalRef: BsModalRef; constructor( private homeService: HomeService, private modalService: BsModalService ) { } openConfirmDialog() { this.modalRef = this.modalService.show(HomeModalComponent); this.modalRef.content.onClose.subscribe(result => { console.log('results', result); }) }} home-modal.component.html <div class="alert-box"> <div class="modal-header"> <h4 class="modal-title">Confirm</h4> <button type="button" class="close" aria-label="Close" (click)="bsModalRef.hide()"> <span aria-hidden="true">&times;</span> </button> </div> <div class="modal-body"> Are you sure want to delete this node? </div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" (click)="onConfirm()">Yes</button> <button type="button" class="btn btn-secondary" (click)="onCancel()">No</button> </div></div> home-modal.component.ts import { Subject } from 'rxjs/Subject';import { BsModalRef } from 'ngx-bootstrap/modal';export class HomeModalComponent { public onClose: Subject<boolean>; constructor(private _bsModalRef: BsModalRef) { } public ngOnInit(): void { this.onClose = new Subject(); } public onConfirm(): void { this.onClose.next(true); this._bsModalRef.hide(); } public onCancel(): void { this.onClose.next(false); this._bsModalRef.hide(); }}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/46408442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/734035/']}
jdg_369296
stackexchange
llm_judgeable_groundtruth_similarity
4464073
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Problem 1 : Let $x_i \ge 0, \, i=1, 2, \cdots, n$ with $\sum_{i=1}^n x_i = \frac12$ . Prove or disprove that $$\sum_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac18.$$ This is related to the following problem: Problem 2 : Let $x_i \ge 0, \, i=1, 2, \cdots, n$ with $\sum_{i=1}^n x_i = \frac12$ . Prove that $$\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}\le \frac{n(n-1)}{2(2n-1)^2}.$$ Problem 2 is in "Problems From the Book", 2008, Ch. 2, which was proposed by Vasile Cartoaje. See: Prove that $\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)} \le \frac{n(n-1)}{2(2n-1)^2}$ Background : I proposed Problem 1 when I tried to find my 2nd proof for Problem 2. It is not difficult to prove that $$\frac{1}{(2n-1)^4} + \frac{16n^2(n-1)^2}{(2n-1)^4}\cdot \frac{x_ix_j}{1-x_i-x_j}\ge \frac{x_ix_j}{(1-x_i)(1-x_j)}.$$ ( Hint : Use $\frac{x_ix_j}{(1-x_i)(1-x_j)}= 1 - \frac{1}{1 + x_ix_j/(1-x_i-x_j)}$ and $\frac{1}{1+u} \ge \frac{1}{1+v} - \frac{1}{(1+v)^2}(u-v)$ for $u = x_ix_j/(1-x_i-x_j)$ and $v=\frac{1}{4n(n-1)}$ . Or simply $\mathrm{LHS} - \mathrm{RHS} = \frac{(4x_ix_jn^2 - 4x_ix_j n + x_i + x_j - 1)^2}{(2n-1)^4(1-x_i-x_j)(1-x_i)(1-x_j)}\ge 0$ .) To prove Problem 2, it suffices to prove that $$\frac{1}{(2n-1)^4}\cdot \frac{n(n-1)}{2} + \frac{16n^2(n-1)^2}{(2n-1)^4}\sum_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac{n(n-1)}{2(2n-1)^2} $$ or $$\sum_{1\le i < j \le n} \frac{x_ix_j}{1-x_i-x_j} \le \frac18.$$ For $n=2, 3, 4$ , the inequality is true. For $n=5, 6$ , numerical evidence supports the statement. Any comments and solutions are welcome and appreciated. Now provide the response and nothing else.
Write $p_i = 2x_i$ and note that $\sum_i p_i = 1$ . Then \begin{align*}1 + \sum_i \frac{p_i^2}{1 - p_i}&= \sum_i \frac{p_i}{1 - p_i} \\&= \sum_{i,j} \frac{1}{2} \left( \frac{1}{1 - p_i} + \frac{1}{1 - p_j} \right) p_i p_j \\&\geq \sum_{i,j} \left( \frac{2}{2-p_i-p_j} \right) p_i p_j. \tag{by AM–HM}\end{align*} Rearranging this inequality, we get $$ 1 \geq \sum_{i \neq j} \frac{2p_i p_j}{2 - p_i - p_j} = 8 \sum_{i < j} \frac{x_i x_j}{1 - x_i - x_j},$$ completing the proof.
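As a quick numerical sanity check of the bound (an illustration alongside the proof, not part of it), the following Python snippet rescales random nonnegative vectors so that $\sum_i x_i = \frac12$ and evaluates the left-hand side; the number of trials and the range of $n$ are arbitrary choices.

    import random

    def lhs(x):
        return sum(x[i] * x[j] / (1 - x[i] - x[j])
                   for i in range(len(x)) for j in range(i + 1, len(x)))

    random.seed(0)
    worst = 0.0
    for _ in range(10000):
        n = random.randint(2, 8)
        x = [random.random() for _ in range(n)]
        s = sum(x)
        x = [xi / (2 * s) for xi in x]      # rescale so the coordinates sum to 1/2
        worst = max(worst, lhs(x))

    print(worst)    # stays below 0.125, consistent with the bound 1/8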
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4464073', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/584414/']}
jdg_369297
stackexchange
llm_judgeable_groundtruth_similarity
13705578
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to implement a function that receives a string containing the bytes of an image (received via a boost socket connection) and converts the info into an OpenCV cv::Mat. I also know the width and height of the image and its size in bytes. My function looks like this:

    void createImageFromBytes(const std::string& name, std::pair<int,int> dimensions, const std::string& data)
    {
        int width, height;
        width = dimensions.first;
        height = dimensions.second;

        //convert data to cv::Mat image

        std::string filepng = DATA_PATH"/" + name + ".png";
        imwrite(filepng, image);
    }

What is the best method for doing this? Does OpenCV have a constructor for Mat from a string? Now provide the response and nothing else.
OpenCV Mat has a constructor from vector<byte>, but this is not so intuitive. You need to convert from string to vector this way first:

    std::vector<byte> vectordata(data.begin(),data.end());

Then you can create a cv::Mat from the vector:

    cv::Mat data_mat(vectordata,true);

You also need to decode the image (check the documentation for which types are allowed: png, jpg, etc., depending on the OpenCV version):

    cv::Mat image(cv::imdecode(data_mat,1)); //put 0 if you want greyscale

Now you can check whether the resulting size of the image is the same as the one you sent:

    cout<<"Height: " << image.rows <<" Width: "<<image.cols<<endl;
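For readers using the Python bindings instead of C++, the equivalent decode step looks roughly like the sketch below (the file name is hypothetical and only serves the example): the raw bytes go into a 1-D uint8 array and cv2.imdecode turns them into an image, returning None on failure.

    import numpy as np
    import cv2

    def mat_from_bytes(data):
        buf = np.frombuffer(data, dtype=np.uint8)     # raw bytes -> 1-D array
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)    # decode png/jpg; None on failure

    with open("example.png", "rb") as f:              # hypothetical input file
        img = mat_from_bytes(f.read())
    print(img.shape if img is not None else "decode failed")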
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13705578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1524433/']}
jdg_369298
stackexchange
llm_judgeable_groundtruth_similarity
3431419
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to load pickled objects in IPython. The error I'm getting is: AttributeError: 'FakeModule' object has no attribute 'World' Does anybody know how to get it to work, or at least a workaround for loading objects in IPython in order to interactively browse them? Thanks. Edited to add: I have a script called world.py that basically does:

    import pickle

    class World:
        ""

    if __name__ == '__main__':
        w = World()
        pickle.dump(w, open("file", "wb"))

Then in a REPL I do:

    import pickle
    from world import World
    w = pickle.load(open("file", "rb"))

which works in the vanilla Python REPL but not with IPython. I'm using Python 2.6.5 and IPython 0.10, both from the Enthought Python Distribution, but I was also having the problem with previous versions. Now provide the response and nothing else.
Looks like you've modified FakeModule between the time you pickled your data, and the time you're trying to unpickle it: specifically, you have removed from that module some top-level object named World (perhaps a class, perhaps a function). Pickling serializes classes and function "by name", so they need to be names at their module's top level and that module must not be modified (at least not in such way to affect those names badly -- definitely not by removing those names from the module!) between pickling time and unpickling time. Once you've identified exactly what change you've done that impedes the unpickling, it can often be hacked around if for other reasons you can't just revert the change. For example, if you've just moved World from FakeModule to CoolModule , do: import FakeModuleimport CoolModuleFakeModule.World = CoolModule.World just before unpickling (and remember to pickle again with the new structure so you won't have to keep repeating these hacks every time you unpickle;-). Edit : the OP's edit of the Q makes his error much easier to understand. Since he's now testing if __name__ equals '__main__' , this makes it obvious that the pickle, when written, will be saving an object of class __main__.World . Since he's using ASCII pickles (a very bad choice for performance and disk space, by the way), it's trivial to check: $ cat file(i__main__Worldp0(dp1 the module being looked up is (clearly and obviously) __main__ . Now, without even bothering ipython but with a simple Python interactive interpreter: $ py26Python 2.6.5 (r265:79359, Mar 24 2010, 01:32:55) [GCC 4.0.1 (Apple Inc. build 5493)] on darwinType "help", "copyright", "credits" or "license" for more information.>>> import world>>> import pickle>>> pickle.load(open("file", "rb"))Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pickle.py", line 1370, in load return Unpickler(file).load() File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pickle.py", line 858, in load dispatch[key](self) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pickle.py", line 1069, in load_inst klass = self.find_class(module, name) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/pickle.py", line 1126, in find_class klass = getattr(mod, name)AttributeError: 'module' object has no attribute 'World'>>> the error can be easily reproduced, and its reason is just as obvious: the module in which the class name's lookup is performed (that is, __main__ ) does indeed have no attribute named "World". Module world does have one, but the OP has not "connected the dots" as I explained in the previous part of the answer, putting a reference with the right name in the module in which the pickled file needs it. That is: >>> World = world.World>>> pickle.load(open("file", "rb"))<world.World instance at 0xf5300>>>> now this works just perfectly, of course (and as I'd said earlier). Perhaps the OP is not seeing this problem because he's using the form of import I detest, from world import World (importing directly a function or class from within a module, rather than the module itself). 
The hack to work around the problem in ipython is exactly the same in terms of underlying Python architecture -- just requires a couple more lines of code because ipython, to supply all of its extra services, does not make module __main__ directly available to record directly what happens at the interactive command line, but rather interposes one (called FakeModule, as the OP found out from the error msg;-) and does black magic with it in order to be "cool" &c. Still, whenever you want to get directly to a module with a given name, it's pretty trivial in Python, of course: In [1]: import worldIn [2]: import pickleIn [3]: import sysIn [4]: sys.modules['__main__'].World = world.WorldIn [5]: pickle.load(open("file", "rb"))Out[5]: <world.World instance at 0x118fc10>In [6]: Lesson to retain, number one: avoid black magic, at least unless and until you're good enough as a sorcerer's apprentice to be able to spot and fix its occasional runaway situations (otherwise, those bucket-carrying brooms may end up flooding the world while you nap;-). Or, alternative reading: to properly use a certain layer of abstraction (such as the "cool" ones ipython puts on top of Python) you need strong understanding of the underlying layer (here, Python itself and its core mechanisms such as pickling and sys.modules). Lesson number two: that pickle file is essentially broken, due to the way you've written it, because it can be loaded only when module __main__ has a class by name Word , which of course it normally will not have without some hacks like the above. The pickle file should instead record the class as living in module world . If you absolutely feel you must produce the file on an if __name__ == '__main__': clause in world.py , then use some redundancy for the purpose: import pickleclass World: ""if __name__ == '__main__': import world w = world.World() pickle.dump(w, open("file", "wb")) this works fine and without hacks (at least if you follow the Python best practice of never having any substantial code at module top level -- only imports, class, def, and trivial assignments -- everything else belongs in functions; if you haven't followed this best practice, then edit your code to do so, it will make you much happier in terms of both flexibility and performance).
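For readers on current Python 3 (not the Python 2.6 setup in the question), the same "wrong module recorded in the pickle" situation can also be handled by overriding Unpickler.find_class; the class name and the specific remapping below are assumptions for illustration, not something from the original answer.

    import pickle

    class RemappingUnpickler(pickle.Unpickler):
        # Redirect lookups of "__main__.World" to "world.World" when loading old pickles.
        def find_class(self, module, name):
            if module == "__main__" and name == "World":
                module = "world"
            return super().find_class(module, name)

    with open("file", "rb") as f:
        w = RemappingUnpickler(f).load()
    print(type(w))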
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3431419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/413960/']}
jdg_369299
stackexchange
llm_judgeable_groundtruth_similarity
799238
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Workstation: Windows 7 (x64) [Install target for printer] Server: Windows Server 2012 R2 (x64) [Active Directory, Print Server] I have been bashing my head on the desk trying to get this printer to be installed via group policy! For some reason, I simply can NOT get this printer deployed with GPO. I have tried setting it up to deploy via Computer Configuration->Policies->Windows Settings->Deployed Printers , as well as Computer Configuration->Preferences->Control Panel Settings->Printers and User Configuration->Preferences->Control Panel Settings->Printers . I have also tried going through my Print Server Management console to add it via user and/or computer targeting. I have tried ALL KINDS of ways and nothing is working. I followed a bunch of tutorials and watched a bunch of videos just to make sure I was not missing something but it really is a simple task (in theory)... It just will not work. In trying to debug the issue I found that if I went to \\myserver\ and double clicked on the printer it would try to install the printer and then prompt me to install the drivers with a UAC type prompt. I have tried everything I can think of to get that message box to stop popping up. I dug into it and found that if I was to edit a GPO called Point and Print Restrictions located at Computer Configuration->Policies->Administrative Templates->Printers and was as at User Configuration->Policies->Administrative Templates->Control Panel->Printers you could try setting the policy to Disabled or Enabled and choose Do not show warning or elevation prompt for the two Security Prompts listed at the bottom of the policy settings. Well that was a bust too... I did find that if I tried to manually install the printer by going to the unc and typing in my Administrator credentials it would download the drivers from the server and install the printer (as expected). If the user tried to remove the printer and was successful somehow as soon as they would log off and back on the GPO would do what I want and add the printer back. But it required me to add it manually the first time on EVERY PC. After testing this and then removing the printer from the GPO, then logging off and on again. I could run the command printui /s /t2 to bring up a GUI that would allow me to remove the installed drivers with ease to put the PC back to it's original state (asking for Administrator credentials). Also something else I learned was the printers were stored in the registry located at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Print\Connections . When I was trying to remove a printer and it told me I couldn't, I just went to that registry key and removed the GUID key of the printer I was trying to delete. Then just restarted the Print Spooler service and boom it was gone. This was no help getting me get to where I wanted but it was helpful in removing the printer during debugging the issue. I read somewhere that maybe the cause is some type of windows security update that changed something. It was released because of some article showing how you could pwn an entire network if you were able to pwn a single printer. Something about when users connected to the printer and downloaded the drivers it would install injected software and run on the machine, etc... My main goal is to be able to deploy this printer to a set of users in this OU with the GPO I'm using. 
But everything I try requires an Administrator to be logged on to do it (at least the first time). Does anyone have any idea why my printer will not automatically add itself via the GPO and also how do I get that dang "Do you trust this printer?" message to go away? Now provide the response and nothing else.
The "fix" is to download trusted, package-aware print drivers from the printer manufacturer; however, since not all manufacturers will produce these drivers, there is a work around that I found here: Cannon Forum - Package-Aware Print Drivers (Note: This does not work for unsigned drivers, but there are plenty of tutorials out there for self-signing a print driver.) Here are the steps to get around the issue: Install the required drivers on the print server Note any drivers that have "false" listed under the "Packaged" column. All of these will have to be modified to deploy via Group Policy. Edit the registry on your print server and go to the following locations: For 64-bit drivers: HKLM\System\CurrentControlSet\Control\Print\Enviroments\ Windowsx64 \Drivers\Version- X \{Driver Name} For 32-bit drivers: HKLM\System\CurrentControlSet\Control\Print\Enviroments\ Windows NT x86 \Drivers\Version- X \{Driver Name} Where "X" is the print driver "Type", usually either "3" or "4" Edit the key named "PrinterDriverAttributes" by adding 1 to whatever value is currently set. (Example: if the current value is "6", change it to "7".) This will make the print server believe that these drivers are packaged. Do this for every driver that is not listed as a "Packaged" driver. Restart the print server. Everything should now deploy through Group Policy (provided you have all the usual GPO settings configured correctly). I am in the process of deploying this fix myself; however, since it requires a restart of the print server I cannot test it until this evening since our print server also runs a few networked applications. An alternate solution to the registry edit is to edit the printer driver INF file and add the following: For 32-bit drivers: [PrinterPackageInstallation.x86]PackageAware=TRUE For 64-bit driver [PrinterPackageInstallation.amd64]PackageAware=TRUE If you decide to edit the INF file, it'll be easier to remove the driver from the print server, edit the INF file from a clean download, and then install the edited driver. Aside from this, double check the Group Policy settings for Point and Print Restrictions and Package Point and print - Approved Servers . Background Info Microsoft Security Bulletin MS16-087 detailed a security issue where a rogue print server could inject malicious code through a "man in the middle" style attack. Security update KB3170455 was issued on July 12, 2016 to fix it, which then messed with the distribution of print drivers from the print server.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/799238', 'https://serverfault.com', 'https://serverfault.com/users/169771/']}
jdg_369300
stackexchange
llm_judgeable_groundtruth_similarity
14715678
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: EDIT 2 : TL;DR: the answer was yes in 2013, but this flaw has been fixed By following the Getting Started instructions on vagrantup.com, I seem to end up with a virtual machine that is accepting SSH connections on port 2222 so that anyone can get root access to my VM and read my host working directory using the default credentials (username=password=vagrant or vagrant_insecure_private_key). Is this true? If yes, why is it not considered a gaping security vulnerability? What if I had copied sensitive data to the VM? EDIT : and for those who think anyone on the internet being able to read your sources and executing arbitrary code on your VM is not that bad, I recommend reading the "Breaking out" section in this blog post http://blog.ontoillogical.com/blog/2012/10/31/breaking-in-and-out-of-vagrant/ In a nutshell: running Vagrant "as intended" can also enable anyone to break into your host/development machine (e.g., by using a malicious git post-commit hook). Now provide the response and nothing else.
The short answer is YES . Why? When building Vagrant base boxes (manually or using tools like Veewee to automate), builders follow the vagrant base boxes specifications which defines the following: User root and vagrant use vagrant as password Public key authentication (password-less) for the user vagrant . Vagrant project provides an insecure key pair for SSH Public Key Authentication so that vagrant ssh works. Because everyone has access to the private key, anyone can use the private key to login to your VMs (suppose they know your IP of the host machine, port is by default 2222 as forwarding rules in place.) It is NOT secure OOTB. However, you can remove the trusted key from ~vagrant/.ssh/authorized_keys and add your own, change password for vagrant and root , then it's considered relatively safe. Update Since Vagrant 1.2.3, by default SSH forwarded port binds to 127.0.0.1 so only local connections are allowed [GH-1785]. IMPORTANT Update Since Vagrant 1.7.0 ( PR #4707 ) Vagrant will replace the default insecure ssh keypair with randomly generated keypair on first vagrant up . See in the CHANGELOG : the default insecure keypair is used, Vagrant will automatically replace it with a randomly generated keypair on first vagrant up . GH-2608
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/14715678', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1426569/']}
jdg_369301
stackexchange
llm_judgeable_groundtruth_similarity
30828879
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am running kinesis plus spark application https://spark.apache.org/docs/1.2.0/streaming-kinesis-integration.html I am running as below command on ec2 instance : ./spark/bin/spark-submit --class org.apache.spark.examples.streaming.myclassname --master yarn-cluster --num-executors 2 --driver-memory 1g --executor-memory 1g --executor-cores 1 /home/hadoop/test.jar I have installed spark on EMR. EMR detailsMaster instance group - 1 Running MASTER m1.medium 1Core instance group - 2 Running CORE m1.medium I am getting below INFO and it never ends. 15/06/14 11:33:23 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers15/06/14 11:33:23 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2048 MB per container)15/06/14 11:33:23 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead15/06/14 11:33:23 INFO yarn.Client: Setting up container launch context for our AM15/06/14 11:33:23 INFO yarn.Client: Preparing resources for our AM container15/06/14 11:33:24 INFO yarn.Client: Uploading resource file:/home/hadoop/.versions/spark-1.3.1.e/lib/spark-assembly-1.3.1-hadoop2.4.0.jar -> hdfs://172.31.13.68:9000/user/hadoop/.sparkStaging/application_1434263747091_0023/spark-assembly-1.3.1-hadoop2.4.0.jar15/06/14 11:33:29 INFO yarn.Client: Uploading resource file:/home/hadoop/test.jar -> hdfs://172.31.13.68:9000/user/hadoop/.sparkStaging/application_1434263747091_0023/test.jar15/06/14 11:33:31 INFO yarn.Client: Setting up the launch environment for our AM container15/06/14 11:33:31 INFO spark.SecurityManager: Changing view acls to: hadoop15/06/14 11:33:31 INFO spark.SecurityManager: Changing modify acls to: hadoop15/06/14 11:33:31 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)15/06/14 11:33:31 INFO yarn.Client: Submitting application 23 to ResourceManager15/06/14 11:33:31 INFO impl.YarnClientImpl: Submitted application application_1434263747091_002315/06/14 11:33:32 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:32 INFO yarn.Client: client token: N/A diagnostics: N/A ApplicationMaster host: N/A ApplicationMaster RPC port: -1 queue: default start time: 1434281611893 final status: UNDEFINED tracking URL: http://172.31.13.68:9046/proxy/application_1434263747091_0023/ user: hadoop15/06/14 11:33:33 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:34 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:35 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:36 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:37 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:38 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:39 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:40 INFO yarn.Client: Application report for application_1434263747091_0023 (state: ACCEPTED)15/06/14 11:33:41 INFO yarn.Client: Application 
report for application_1434263747091_0023 (state: ACCEPTED) Could somebody please let me know why it's not working? Now provide the response and nothing else.
I had this exact problem when multiple users were trying to run on our cluster at once. The fix was to change a setting of the scheduler. In the file /etc/hadoop/conf/capacity-scheduler.xml we changed the property yarn.scheduler.capacity.maximum-am-resource-percent from 0.1 to 0.5. Changing this setting increases the fraction of the resources made available for allocation to application masters, which increases the number of masters that can run at once and hence the number of possible concurrent applications.
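For concreteness, the relevant entry in capacity-scheduler.xml looks roughly like the excerpt below (a sketch based on the property named above; the surrounding <configuration> element is omitted). Depending on the distribution, the change may need a ResourceManager restart or a queue refresh such as yarn rmadmin -refreshQueues to take effect.

    <property>
      <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
      <value>0.5</value>
    </property>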
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30828879', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3655695/']}
jdg_369302
stackexchange
llm_judgeable_groundtruth_similarity
53849829
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a RGB image of shape (256,256,3) and I have a weight mask of shape (256,256) . How do I perform the element-wise multiplication between them with Keras? (all channels share the same mask) Now provide the response and nothing else.
You need a Reshape so both tensors have the same number of dimensions, and a Multiply layer mask = Reshape((256,256,1))(mask) out = Multiply()([image,mask]) If you have variable shapes, you can use a single Lambda layer like this: import keras.backend as K def multiply(x): image,mask = x mask = K.expand_dims(mask, axis=-1) #could be K.stack([mask]*3, axis=-1) too return mask*image out = Lambda(multiply)([image,mask])
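For a fuller picture, here is a minimal end-to-end sketch of that first approach (the input and variable names are illustrative assumptions, not taken from the question), relying on the backend broadcasting the singleton channel axis just as the snippet above does:

from keras.layers import Input, Reshape, Multiply
from keras.models import Model

image = Input(shape=(256, 256, 3))            # RGB image
mask = Input(shape=(256, 256))                # per-pixel weight mask
mask_channel = Reshape((256, 256, 1))(mask)   # add a channel axis so the shapes line up
masked = Multiply()([image, mask_channel])    # element-wise product; the mask is shared across channels
model = Model(inputs=[image, mask], outputs=masked)
model.summary()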
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53849829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9673730/']}
jdg_369303
stackexchange
llm_judgeable_groundtruth_similarity
2597637
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering if anybody could help me rework this method to find the height of a binary search tree. So far, my code looks like this. However, the answer I'm getting is larger than the actual height by 1. But when I remove the +1 from my return statements, it's less than the actual height by 1. I'm still trying to wrap my head around recursion with these BST. Any help would be much appreciated. public int findHeight(){ if(this.isEmpty()){ return 0; } else{ TreeNode<T> node = root; return findHeight(node); }}private int findHeight(TreeNode<T> aNode){ int heightLeft = 0; int heightRight = 0; if(aNode.left!=null) heightLeft = findHeight(aNode.left); if(aNode.right!=null) heightRight = findHeight(aNode.right); if(heightLeft > heightRight){ return heightLeft+1; } else{ return heightRight+1; }} Now provide the response and nothing else.
The problem lies in your base case. "The height of a tree is the length of the path from the root to the deepest node in the tree. A (rooted) tree with only a node (the root) has a height of zero." - Wikipedia If there is no node, you want to return -1 not 0. This is because you are adding 1 at the end. So if there isn't a node, you return -1 which cancels out the +1. int findHeight(TreeNode<T> aNode) { if (aNode == null) { return -1; } int lefth = findHeight(aNode.left); int righth = findHeight(aNode.right); if (lefth > righth) { return lefth + 1; } else { return righth + 1; }}
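The same idea as a short Python sketch (the node class is a made-up stand-in, just to show the -1 base case cancelling the +1 added per level):

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def find_height(node):
    if node is None:
        return -1                 # empty subtree: -1 cancels the +1 added below
    return 1 + max(find_height(node.left), find_height(node.right))

print(find_height(TreeNode(1)))                                               # 0: a lone root has height 0
print(find_height(TreeNode(5, TreeNode(3), TreeNode(8, right=TreeNode(9)))))  # 2: longest root-to-leaf path has 2 edges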
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2597637', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/311600/']}
jdg_369304
stackexchange
llm_judgeable_groundtruth_similarity
69913
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the reason browsers do not correctly recognize: <script src="foobar.js" /> <!-- self-closing script element --> Only this is recognized: <script src="foobar.js"></script> Does this break the concept of XHTML support? Note: This statement is correct at least for all IE (6-8 beta 2). Now provide the response and nothing else.
The non-normative appendix ‘HTML Compatibility Guidelines’ of the XHTML 1 specification says: C.3. Element Minimization and Empty Element Content Given an empty instance of an element whose content model is not EMPTY (for example, an empty title or paragraph) do not use the minimized form (e.g. use <p> </p> and not <p /> ). XHTML DTD specifies script elements as: <!-- script statements, which may include CDATA sections --><!ELEMENT script (#PCDATA)>
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/69913', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10778/']}
jdg_369305
stackexchange
llm_judgeable_groundtruth_similarity
46815
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Question: Assuming finiteness of the Tate-Shafarevich group, is there an algorithm to determine whether a curve $C$ defined over a number field $K$ has infinitely many $K$-rational points? I believe that this is (a) true and (b) sufficiently important that it has been carefully explained somewhere, but I don't know a reference. Any help from the MO community would be very much appreciated! P.S. To make the question precise, $C$ is specified, say, by its function field (a simple extension of $K(x)$), and all abelian varieties over all number fields have finite Sha. (And if the algorithm takes $10^{10}$ years for C: y=3x+1 over the rationals, I don't care.) Now provide the response and nothing else.
I believe the following is an algorithm, albeit a horrible one. First, as the OP surely knows, it comes down entirely to curves of genus one. Indeed, if the genus is at least $2$ then by Faltings' Theorem there are only finitely many $K$-rational points, whereas if the genus is zero, there are infinitely many rational points iff the curve is isomorphic over $K$ to the projective line iff a certain Hilbert symbol vanishes. This is all very well understood. Step 1: If for an elliptic curve $E_{/K}$ the group $Sha(K,E)$ is finite, then there is an algorithm to compute the Mordell-Weil group $E(K)$. Indeed, it's enough to know that there exists some prime number $p$ such that $Sha(K,E)[p] = 0$. Then the weak Mordell-Weil group $E(K)/pE(K)$ is isomorphic to the $p$-Selmer group, which is known to be (in principle!) effectively computable. Since the torsion subgroup is well-known to be effectively computable, knowing $E(K)/pE(K)$ gives us the Mordell-Weil rank, and if you know the rank then by enough searching you can find a basis for the free part of the Mordell-Weil group. [ Added : You don't actually need to know an explicit value of such a prime number $p$. You can compute the $p$-Selmer group for any value of $p$ you want and you can set up a program that given infinite time will compute $E(K)/pE(K)$. By running these programs on enough primes simultaneously, in finite time you will find a prime $p$ such that $E(K)/pE(K) = \operatorname{Sel}(K,E)[p]$.] Step 2: Suppose that $C_{/K}$ is a genus one curve over $K$. One may effectively decide (Hensel's Lemma, Weil bounds...) whether or not $C$ has points over every completion of $K$. If not, then certainly $C(K)$ is empty and hence finite. Step 3: Next compute the Mordell-Weil group of the Jacobian elliptic curve of $C$ using Step 1. If this group is finite, then $C(K)$ is finite -- possibly empty. Step 4: Suppose that $C$ has points everywhere locally and the Jacobian $E$ has positive rank. Then $C$ represents an element of $Sha(K,E)[n]$ for some $n \in \mathbb{Z}$. Since we can effectively compute the weak Mordell-Weil and Selmer groups of $E$, we can compute $Sha(K,E)[n]$. If it happens to be trivial then $C$ is necessarily isomorphic to $E$ so has infinitely many rational points. Step 5: Finally, suppose that $Sha(K,E)[n]$ is nontrivial. Thus the question is whether $C$ represents a nontrivial element of this group. But one can compute defining systems of equations for each of the curves $C_i$ representing the elements of this group (I am pretty sure, anyway; if this is the sticking point, let me know and I'll think about it more). Now one can do the following ridiculous thing: search for an isomorphism between $C$ and $C_i$ by trying all possible maps. We know that $C$ is isomorphic to one of these curves -- possibly $C_1 = E$ -- so eventually we will find it! [ Added : the explicit geometric realization of elements of the $n$-Selmer group is discussed in this important paper .]
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/46815', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3132/']}
jdg_369306
stackexchange
llm_judgeable_groundtruth_similarity
13507698
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a doubt on returning std::string as const reference. class sample{public: std::string mString; void Set(const std::string& s) { mString = s; } std::string Get() { return mString; } }; In the Set function I am passing the std::string as const reference, const because its value is not changing inside the function. And In Get function, actually I am confused here. Return std::string as value makes more sense. But I am not sure that, by passing the string as const reference makes any advantages. By returing string as reference will increase the exectuion speed, I think So, but I am not sure. But returning it as 'const makes any benefit for this? Now provide the response and nothing else.
Returning by reference or const reference has no speed difference - both are very fast as they just return a reference to the original object, no copying is involved. An object returned by (non-const) reference can be modified through that reference. In your specific example, mString is public, so it can be modified anyway (and directly). However, the usual approach with getters and setters (and the primary reason for their introduction) is encapsulation - you only allow access to your data members through the getter/setter, so that you can detect invalid values being set, respond to value changes and just generally keep the implementation details of your class hidden inside it. So getters normally return by const reference or by value. However, if you return by const reference, it binds you to always keep an instance of std::string in your class to back up the reference. That is, even if you later want to redesign your class so that it computes the string on the fly in the getter instead of storing it internally, you can't. You'd have to change your public interface at the same time, which can break code using the class. For example, as long as you return by const-reference, this is perfectly valid code: const std::string *result = &aSample.Get(); This code will of course no longer compile (rather than merely produce a dangling pointer, as an earlier version of this answer said) if Get() is changed to return by value instead of const reference (thanks to Steve Jessop for correcting me). To sum up, the approach I would take is to make mString private. Get() can return by value or by const-reference, depending on how certain you are that you'll always have a string stored. The class would then look like this: class sample{ std::string mString; public: void Set(const std::string &s) { mString = s; } std::string Get() const { return mString; }};
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13507698', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1199019/']}
jdg_369307
stackexchange
llm_judgeable_groundtruth_similarity
65163014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to parse a quoted string. Something like this: say '"in quotes"' ~~ / '"' <-[ " ]> * '"'/; (From https://docs.raku.org/language/regexes "Enumerated character classes and ranges")But... I want more that one type of quote. Something like this made up syntax that doesn't work: token attribute_value { <quote> ($<-quote>) $<quote> }; token quote { <["']> }; I found this discussion which is another approach, but it didn't seem to go anywhere: https://github.com/Raku/problem-solving/issues/97 .Is there any way of doing this kind of thing? Thanks! Update 1 I was not able to get @user0721090601's "multi token" solution to work. My first attempt yielded: $ ./multi-token.raku No such method 'quoted_string' for invocant of type 'QuotedString' in block <unit> at ./multi-token.raku line 16 After doing some research I added proto token quoted_string {*} : #!/usr/bin/env rakuuse Grammar::Tracer;grammar QuotedString { proto token quoted_string {*} multi token quoted_string:sym<'> { <sym> ~ <sym> <-[']> } multi token quoted_string:sym<"> { <sym> ~ <sym> <-["]> } token quote { <["']> }}my $string = '"foo"';my $quoted-string = QuotedString.parse($string, :rule<quoted_string>);say $quoted-string; $ ./multi-token.raku quoted_string* FAIL(Any) I'm still learning Raku, so I could be doing something wrong. Update 2 D'oh! Thanks to @raiph for pointing this out. I forgot to put a quantifier on <-[']> and <-["]> . That's what I get for copy/pasting without thinking! Works find when you do it right: #!/usr/bin/env rakuuse Grammar::Tracer;grammar QuotedString { proto token quoted_string (|) {*} multi token quoted_string:sym<'> { <sym> ~ <sym> <-[']>+ } multi token quoted_string:sym<"> { <sym> ~ <sym> <-["]>+ } token quote { <["']> }}my $string = '"foo"';my $quoted-string = QuotedString.parse($string, :rule<quoted_string>);say $quoted-string; Update 3 Just to put a bow on this... #!/usr/bin/env rakugrammar NegativeLookahead { token quoted_string { <quote> $<string>=([<!quote> .]+) $<quote> } token quote { <["']> }}grammar MultiToken { proto token quoted_string (|) {*} multi token quoted_string:sym<'> { <sym> ~ <sym> $<string>=(<-[']>+) } multi token quoted_string:sym<"> { <sym> ~ <sym> $<string>=(<-["]>+) }}use Bench;my $string = "'foo'";my $bench = Bench.new;$bench.cmpthese(10000, { negative-lookahead => sub { NegativeLookahead.parse($string, :rule<quoted_string>); }, multi-token => sub { MultiToken.parse($string, :rule<quoted_string>); },}); $ ./bench.rakuBenchmark: Timing 10000 iterations of multi-token, negative-lookahead...multi-token: 0.779 wallclock secs (0.759 usr 0.033 sys 0.792 cpu) @ 12838.058/s (n=10000)negative-lookahead: 0.912 wallclock secs (0.861 usr 0.048 sys 0.909 cpu) @ 10967.522/s (n=10000)O--------------------O---------O-------------O--------------------O| | Rate | multi-token | negative-lookahead |O====================O=========O=============O====================O| multi-token | 12838/s | -- | -20% || negative-lookahead | 10968/s | 25% | -- |O--------------------O---------O-------------O--------------------O I'll be going with the "multi token" solution.Thanks everyone! Now provide the response and nothing else.
There are a few different approaches that you can take — which one is best will probably depend on the rest of the structure you're employing. But first an observation on your current solution and why opening it up to others won't work this way. Consider the string 'value" . Should that parse? The structure you laid out actually would match it! That's because each <quote> token will match either a single or double quote. Dealing with the inner The simplest solution is to make your inner part a non-greedy wildcard: <quote> (.*?) <quote> This will stop the match as soon as you reach quote again. Also note the alternative syntax using a tilde that lets the two terminal bits be closer together: <quote> ~ <quote> (.*?) Your initial attempt wanted to use a sort of non-match. This does exist in the form of an assertion, <!quote> which will fail if a <quote> is found (which needn't be just a character, by any thing arbitrarily complex). It doesn't consume, though, so you need to provide that separately. For instance [<!quote> .]* Will check that something is NOT a quote, and then consume the next character. Lastly, you could use either of the two approaches and use a <content> token that handles in the inside. This is actually a great approach if you intend to later do more complex things (e.g. escape characters). Avoiding a mismatch As I noted, your solution would parse mismatched quotes. So we need to have a way to ensure that the quote we are (not) matching is the same as the start one. One way to do this is using a multi token : proto token attribute_value (|) { * }multi token attribute_value:sym<'> { <sym> ~ <sym> <-[']> }multi token attribute_value:sym<"> { <sym> ~ <sym> <-["]> } (Using the actual token <sym> is not require, you could write it as { \' <-[']> \'} if you wanted). Another way you could do this is by passing a parameter (either literally, or via dynamic variables). For example, you could make write the attribute_value as token attribute_value { $<start-quote>=<quote> # your actual start quote :my $*end-quote; # define the variable in the regex scope { $*end-quote = ... } # determine the requisite end quote (e.g. ” for “) <attribute_value_contents> # handle actual content $*end-quote # fancy end quote}token attribute_value_contents { # We have access to $*end-quote here, so we can use # either of the techniques we've described before # (a) using a look ahead [<!before $*end-quote> .]* # (b) being lazy (the easier) .*? # (c) using another token (described below) <attr_value_content_char>+} I mention the last one because you can even further delegate if you ultimately decide to allow for escape characters. For example, you could then do proto token attr_value_content_char (|) { * }multi token attr_value_content_char:sym<escaped> { \\ $*end-quote }multi token attr_value_content_char:sym<literal> { . <?{ $/ ne $*end-quote }> } But if that's overkill for what you're doing, ah well :-) Anyways, there are probably other ways that didn't jump to my mind that others can think of, but that should hopefully put you on the right path. (also some of this code is untested, so there may be slight errors, apologies for that)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65163014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4651077/']}
jdg_369308
stackexchange
llm_judgeable_groundtruth_similarity
32857029
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When choosing the number of principal components (k), we choose k to be the smallest value so that for example, 99% of variance, is retained. However, in the Python Scikit learn, I am not 100% sure pca.explained_variance_ratio_ = 0.99 is equal to "99% of variance is retained"? Could anyone enlighten? Thanks. The Python Scikit learn PCA manual is here http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA Now provide the response and nothing else.
Yes, you are nearly right. The pca.explained_variance_ratio_ parameter returns a vector of the variance explained by each dimension. Thus pca.explained_variance_ratio_[i] gives the variance explained solely by the i+1st dimension. You probably want to do pca.explained_variance_ratio_.cumsum() . That will return a vector x such that x[i] returns the cumulative variance explained by the first i+1 dimensions. import numpy as np from sklearn.decomposition import PCA np.random.seed(0) my_matrix = np.random.randn(20, 5) my_model = PCA(n_components=5) my_model.fit_transform(my_matrix) print my_model.explained_variance_ print my_model.explained_variance_ratio_ print my_model.explained_variance_ratio_.cumsum() [ 1.50756565 1.29374452 0.97042041 0.61712667 0.31529082] [ 0.32047581 0.27502207 0.20629036 0.13118776 0.067024 ] [ 0.32047581 0.59549787 0.80178824 0.932976 1. ] So in my random toy data, if I picked k=4 I would retain 93.3% of the variance.
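To get the "smallest k that retains at least 99% of the variance" rule from the question, one straightforward sketch (Python 3; the threshold handling and variable names are mine):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(200, 10)
pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumulative, 0.99)) + 1   # first k whose cumulative ratio reaches 0.99
print(k, cumulative[k - 1])
# Recent scikit-learn versions also accept a float, e.g. PCA(n_components=0.99),
# which chooses the number of components for you.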
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/32857029', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2736467/']}
jdg_369309
stackexchange
llm_judgeable_groundtruth_similarity
67829
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to tell when a NFA will use at least half the power set when converted to a DFA. I tried to create a few examples, but i just can't see a pattern that would say whether an NFA will use at least half of it's power set. Thanks, Matt Now provide the response and nothing else.
$7^a+7^b\equiv 2^a+2^b\pmod{5}$. Since $2$ generates the multiplicative group of integers modulo $5$, then for each $a$ there is a $b\in \{1,2,3,4\}$ such that $2^b\equiv -2^a\pmod{5}$. Now notice that $b\equiv b'\pmod{4}$ if and only if $2^b\equiv 2^{b'}\pmod{5}$.
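A quick brute-force sanity check of that argument (a throwaway Python sketch): for every $a$ there is a $b\in\{1,2,3,4\}$ making the sum divisible by $5$, and that $b$ depends only on $a \bmod 4$.

for a in range(1, 13):
    b = next(b for b in range(1, 5) if (2**a + 2**b) % 5 == 0)
    assert (7**a + 7**b) % 5 == 0     # 7 is congruent to 2 mod 5, so the same b works
    print(a % 4, b)                   # the printed b is determined by a mod 4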
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/67829', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3961/']}
jdg_369310
stackexchange
llm_judgeable_groundtruth_similarity
5345294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have something like this: namespace MyNamespace { public partial class MyClass: UserControl { public static const String MYCONST = "MyConstant"; I can't see MYCONST from anywhere even from MyClass, why ? Now provide the response and nothing else.
A constant is available in a static context anyway, so remove the static keyword and you'll be fine. MSDN docs: The static modifier is not allowed in a constant declaration. The reason is that a constant's value has to be fully evaluated at compile time and what the compiler does is that it takes that value and replaces all the usages of the constant throughout the code with the constant value. That is why it sometimes can be better to use a public readonly value instead as the compiler does not replace the usages with the value but instead links to the readonly variable. This is especially something to think about when using constants from another assembly since you might not update all assemblies at once and you might end up with assmblies using the old constant value. Ref: http://msdn.microsoft.com/en-us/library/e6w8fe1b(v=vs.80).aspx
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5345294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310291/']}
jdg_369311
stackexchange
llm_judgeable_groundtruth_similarity
3490358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there anyone who can teach me how to show an UISegmentedControl object in vertical direction, instead of in horizontal direction? Now provide the response and nothing else.
+1 for Ben's answer, although rotating the segmented control also rotates the text inside. Never fear! We just have to rotate the inside labels, like so: NSArray *arr = [segmentedControl subviews]; for (int i = 0; i < [arr count]; i++) { UIView *v = (UIView*) [arr objectAtIndex:i]; NSArray *subarr = [v subviews]; for (int j = 0; j < [subarr count]; j++) { if ([[subarr objectAtIndex:j] isKindOfClass:[UILabel class]]) { UILabel *l = (UILabel*) [subarr objectAtIndex:j]; l.transform = CGAffineTransformMakeRotation(- M_PI / 2.0); //do the reverse of what Ben did } } } Swift 2 version: for view in segmentedControl.subviews { for subview in view.subviews { if subview.isKindOfClass(UILabel) { subview.transform = CGAffineTransformMakeRotation(CGFloat(-M_PI / 2.0)) } }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3490358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/415799/']}
jdg_369312
stackexchange
llm_judgeable_groundtruth_similarity
3445132
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Solve: $$\tan 2x=1+2\sin 4x$$ My work: $$\left(\frac{\sin2x}{\cos2x}\right)(1+2\cos2x)(1-2\cos2x)=1$$ $$\frac{(\sin2x+\sin4x)(\sin2x-\sin4x)}{\cos2x}=1$$ $$\frac{-6\sin x \sin2x}{\cos2x}=1$$ $$\tan2x+\csc6x=0$$ How to proceed after this? Now provide the response and nothing else.
By tangent half angle identities we have that by $t=\tan (2x)$ $$\tan 2x=1+2\sin 4x \iff t=1+\frac{4t}{1+t^2} \iff t^3-t^2-3t-1=0$$ $$ \iff (t+1)(t^2-2t-1)=0$$
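Finishing the computation for completeness: the roots are $t=-1$ and $t=1\pm\sqrt{2}$, and since $\tan\frac{3\pi}{8}=1+\sqrt{2}$ and $\tan\frac{\pi}{8}=\sqrt{2}-1$, this gives $$\tan 2x\in\left\{\tan\left(-\tfrac{\pi}{4}\right),\ \tan\tfrac{3\pi}{8},\ \tan\left(-\tfrac{\pi}{8}\right)\right\}$$ and hence $$x=-\frac{\pi}{8}+\frac{n\pi}{2},\qquad x=\frac{3\pi}{16}+\frac{n\pi}{2},\qquad x=-\frac{\pi}{16}+\frac{n\pi}{2},\qquad n\in\mathbb{Z}.$$ All three families are genuine solutions, since $\cos 2x\neq 0$ there and the substitution $\sin 4x=\frac{2t}{1+t^2}$ is exact.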
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3445132', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/655037/']}
jdg_369313
stackexchange
llm_judgeable_groundtruth_similarity
296471
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am doing an introduction to ML with tensorflow and I came across softmax activation function. Why is in the softmax formula e ? Why not 2? 3? 7? $$\text{softmax}(x)_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}$$ $$\begin{eqnarray} \sum_j a^L_j & = & \frac{\sum_j e^{z^L_j}}{\sum_k e^{z^L_k}} = 1.\tag{79}\end{eqnarray}$$ Tensorflow tutorial NN book Now provide the response and nothing else.
Using a different base is equivalent to scaling your data Let $\mathbf{z} = \left(\ln a\right) \mathbf{y}$ Now observe that $e^{z_i} = a^{y_i}$ hence: $$ \frac{e^{z_i}}{\sum_j e^{z_j}} = \frac{a^{y_i}}{\sum_j a^{y_j}}$$ Multiplying vector $\mathbf{y}$ by the natural logarithm of $a$ is equivalent to switching the softmax function to base $a$ instead of base $e$ . You often have a linear model inside the softmax function (eg. $z_i = \mathbf{x}' \mathbf{w}_i$ ). The $\mathbf{w}$ in $\mathbf{x}' \mathbf{w}$ can scale the data so allowing a different base wouldn't add any explanatory power. If the scaling can change, there's a sense in which different base $a$ are all equivalent models. So why base $e$ ? In exponential settings, $e$ is typically the most aesthetically beautiful, natural base to use: $\frac{d}{dx} e^x = e^x$ . A lot of math can look prettier on the page when you use base $e$ .
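A tiny numerical illustration of that equivalence, as a Python/NumPy sketch (the scores and the base are arbitrary):

import numpy as np

def softmax(z, base=np.e):
    w = base ** (z - np.max(z))   # shift by the max for numerical stability; the shift cancels in the ratio
    return w / np.sum(w)

y = np.array([0.5, 1.0, 2.0])
a = 3.0
print(softmax(y, base=a))          # softmax computed in base a ...
print(softmax(np.log(a) * y))      # ... equals the base-e softmax of the rescaled scores ln(a) * y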
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/296471', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/172615/']}
jdg_369314
stackexchange
llm_judgeable_groundtruth_similarity
86878
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having problem that even though I specify the level to ERROR in the root tag, the specified appender logs all levels (debug, info, warn) to the file regardless the settings. I am not a Log4j expert so any help is appreciated. I have checked the classpath for log4j.properties (there is none) except the log4j.xml. Here is the log4j.xml file: <?xml version="1.0" encoding="UTF-8" ?><!DOCTYPE log4j:configuration SYSTEM "log4j.dtd"><log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'> <!-- ============================== --> <!-- Append messages to the console --> <!-- ============================== --> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <!-- The default pattern: Date Priority [Category] Message\n --> <param name="ConversionPattern" value="[AC - %5p] [%d{ISO8601}] [%t] [%c{1} - %L] %m%n" /> </layout> </appender> <appender name="logfile" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/server.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="2" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" /> </layout> </appender> <appender name="payloadAppender" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/payload.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="10" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" /> </layout> </appender> <appender name="errorLog" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/error.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="10" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" /> </layout> </appender> <appender name="traceLog" class="org.apache.log4j.RollingFileAppender"> <param name="File" value="./logs/trace.log" /> <param name="MaxFileSize" value="1000KB" /> <param name="MaxBackupIndex" value="20" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="[AccessControl - %-5p] {%t: %d{dd.MM.yyyy - HH.mm.ss,SSS}} %m%n" /> </layout> </appender> <appender name="traceSocketAppender" class="org.apache.log4j.net.SocketAppender"> <param name="remoteHost" value="localhost" /> <param name="port" value="4445" /> <param name="locationInfo" value="true" /> </appender> <logger name="TraceLogger"> <level value="trace" /> <!-- Set level to trace to activate tracing --> <appender-ref ref="traceLog" /> </logger> <logger name="org.springframework.ws.server.endpoint.interceptor"> <level value="DEBUG" /> <appender-ref ref="payloadAppender" /> </logger> <root> <level value="error" /> <appender-ref ref="errorLog" /> </root></log4j:configuration> If I replace the root with another logger, then nothing gets logged at all to the specified appender. <logger name="com.mydomain.logic"> <level value="error" /> <appender-ref ref="errorLog" /></logger> Now provide the response and nothing else.
The root logger resides at the top of the logger hierarchy. It is exceptional in three ways: it always exists, its level cannot be set to null it cannot be retrieved by name. The rootLogger is the father of all appenders. Each enabled logging request for a given logger will be forwarded to all the appenders in that logger as well as the appenders higher in the hierarchy (including rootLogger) For example, if the console appender is added to the root logger , then all enabled logging requests will at least print on the console. If in addition a file appender is added to a logger, say L , then enabled logging requests for L and L's children will print on a file and on the console . It is possible to override this default behavior so that appender accumulation is no longer additive by setting the additivity flag to false . From the log4j manual To sum up: If you want not to propagate a logging event to the parents loggers (say rootLogger) then add the additivity flag to false in those loggers. In your case: <logger name="org.springframework.ws.server.endpoint.interceptor" additivity="false"> <level value="DEBUG" /> <appender-ref ref="payloadAppender" /></logger> In standard log4j config style (which I prefer to XML): log4j.logger.org.springframework.ws.server.endpoint.interceptor = INFO, payloadAppenderlog4j.additivity.org.springframework.ws.server.endpoint.interceptor = false Hope this helps.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/86878', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15045/']}
jdg_369315
stackexchange
llm_judgeable_groundtruth_similarity
25168086
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a array of objects for use in D3 e.g var cities = [ { city: "London", country: "United Kingdom", index: 280 }, { city: "Geneva", country: "Switzerland", index: 259 }, { city: "New York City", country: "United States", index: 237 }, { city: "Singapore", country: "Singapore", index: 228 }, { city: "Paris", country: "France", index: 219 }, { city: "San Francisco", country: "United States", index: 218 }, { city: "Copenhagen", country: "Denmark", index: 217 }, { city: "Sydney", country: "Australia", index: 215 }, { city: "Hong Kong", country: "Hong Kong", index: 214 }, { city: "Brisbane", country: "Australia", index: 208 }} I would like to order the objects in ascending order based on their cities.index property. So that I can display them as such in D3.js . Im sure there is a way of doing this in D3 but I am yet to figure it out when dealing with an array of objects. Any help? Now provide the response and nothing else.
You can pass an anonymous function to the Javascript Array.prototype.sort to sort by index . D3 has a function d3.ascending (v 3.x) that makes it easy to sort ascending: cities.sort(function(x, y){ return d3.ascending(x.index, y.index);}) And here's the output: [ {"city":"Brisbane","country":"Australia","index":208}, {"city":"Hong Kong","country":"Hong Kong","index":214}, {"city":"Sydney","country":"Australia","index":215}, {"city":"Copenhagen","country":"Denmark","index":217}, {"city":"San Francisco","country":"United States","index":218}, {"city":"Paris","country":"France","index":219}, {"city":"Singapore","country":"Singapore","index":228}, {"city":"New York City","country":"United States","index":237}, {"city":"Geneva","country":"Switzerland","index":259}, {"city":"London","country":"United Kingdom","index":280}]
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/25168086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1945168/']}
jdg_369316
stackexchange
llm_judgeable_groundtruth_similarity
296038
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $X=\{x_1,x_2,...,x_n\}$ and $Y=\{y_1,y_2,...,y_n\}$ be sets over a finite field $F$ with $p=char(F)>2$. Assume $$x_1^k+x_2^k+...+x_n^k=y_1^k+y_2^k+...+y_n^k,\ 1\leq k\leq n$$ I wanna show that $X=Y$. Now provide the response and nothing else.
Use the standard notations $e_k=\sum_{A\subset \{1,\dots,n\}, |A|=k} \prod_{i\in A} x_i$, with the conventions $e_0=1$ and $e_m=0$ for $m>n$; $p_k=\sum_{i=1}^n x_i^k$. If $n=p$, the statement is true if you require your conditions for all $k$, not just $k\le n$. Indeed, Newton's identities say that $$ke_k=\sum_{i=1}^k (-1)^{i-1} p_ie_{k-i} $$for all $k$Of course for $k=p$ we shall have an issue with finding $e_p$ from the $p$-th equation. But, if you look at the $p+1$-st equation, you can recover $e_p$ from it unless $p_1=0$, or if $p_1=p_2=\cdots=p_{m-1}=0$ and $p_m\ne0$, then the $m+p$-th equation will give you $e_p$, the $m+p+1$st equation will give you $e_{p+1}$ etc. Thus, you recover all the elementary symmetric polynomials unless $p_k=0$ for all $k$. In this case, Newton's identities tell us that $e_k=0$ for $k$ not divisible by $p$, so the polynomial $$(x-x_1)(x-x_2)\cdots(x-x_n) $$is a polynomial in $x^p$. Over a finite field of characteristic $p$, we have $g(x^p)=(h(x))^p$ for some $h(x)$, since every element is a $p$-th power. Thus, there will be repetitions among $x_i$, which is what you do not allow. Originally, I claimed this for all $n$, but as noted in comments, for $n>p$ one has to be more careful. In addition, for $n=3=p$, the formula given by @zibadawatimmy here can be made explicit: $(0,1,-1)$ and $(-1+i,i,1+i)$ have the same first three power sums in $\mathbb{F}_9=\mathbb{F}_3[i]/(i^2+1)$. Indeed: \begin{gather}0+1+(-1)=0=(-1+i)+i+(1+i),\\0^2+1^2+(-1)^2=-1=(-1+i)^2+i^2+(1+i)^2,\\0^3+1^3+(-1)^3=0=(-1+i)^3+i^3+(1+i)^3.\end{gather}Thus, it is not sufficient to ask for these formulas for $1\le k\le n$.
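The $\mathbb{F}_9$ example is easy to verify by brute force. A small Python sketch, writing each element as a pair $(u,v)$ standing for $u+vi$ with coefficients mod 3 and $i^2=-1$:

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a*c - b*d) % 3, (a*d + b*c) % 3)   # (a+bi)(c+di) with i^2 = -1

def power_sum(elems, k):
    total = (0, 0)
    for e in elems:
        p = (1, 0)
        for _ in range(k):
            p = mul(p, e)
        total = ((total[0] + p[0]) % 3, (total[1] + p[1]) % 3)
    return total

X = [(0, 0), (1, 0), (2, 0)]   # 0, 1, -1
Y = [(2, 1), (0, 1), (1, 1)]   # -1+i, i, 1+i
for k in (1, 2, 3):
    print(k, power_sum(X, k), power_sum(Y, k))   # the two power sums agree for k = 1, 2, 3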
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/296038', 'https://mathoverflow.net', 'https://mathoverflow.net/users/84871/']}
jdg_369317
stackexchange
llm_judgeable_groundtruth_similarity
19964198
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a URL http://www.hdwallpapers.in/walls/honda_v4_concept_widescreen_bike-wide.jpg I want to extract the file name which is "honda_v4_concept_widescreen_bike-wide.jpg" How can I can do this? Now provide the response and nothing else.
The code below should work. Updated it so I removed the top statement. I could've used NSString vs const char * or std::string from C++ but thought C Character Pointers would be quite appropriate for this case in point. Also revamped this so it's in it's own concise function: -(NSString*) extractFile:(const char*) url { NSURL *yourURL = [NSURL URLWithString: [NSString stringWithCString:url encoding:NSUTF8StringEncoding]]; return [yourURL lastPathComponent];} to use: const char *path_ = "http://www.hdwallpapers.in/walls/honda_v4_concept_widescreen_bike-wide.jpg";NSLog(@"\t\tYour Extracted file: \n\t%@", [self extractFile:path_]);
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19964198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2888113/']}
jdg_369318
stackexchange
llm_judgeable_groundtruth_similarity
90751
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a bit of trouble understanding the semiconductor band gap diagram on Wikipedia: (from Band gap article ). Why is the size of the band gap increasing with the Density of States (DOS) in the semiconductor material? I would expect the opposite given the intuiton that more occupiable energy states would mean a higher probability of an electron moving from a valence band to a conduction band? Should the x-axis here maybe be something like "# electrons in the conducting band" to show the increasing amount of energy needed to promote electrons to the conduction band as a function of the electrons already promoted to the conduction band? And why the ovoid geometry, which I've seen elsewhere? Now provide the response and nothing else.
Not only the position in the gravitational field is important, but also the velocity. Consider the Schwarzschild metric$$\text{d}\tau^2 = \left(1 - \frac{2GM}{rc^2}\right)\text{d}t^2 - \frac{1}{c^2}\left(1 - \frac{2GM}{rc^2}\right)^{-1}\left(\text{d}x^2 + \text{d}y^2 +\text{d}z^2\right),$$where $\text{d}\tau$ is the time measured by a moving clock at radius $r$, and $\text{d}t$ is the coordinate time measured by a hypothetical stationary clock infinitely far from the gravitational field. We get$$\frac{\text{d}\tau}{\text{d}t} = \sqrt{\left(1 - \frac{2GM}{rc^2}\right) - \left(1 - \frac{2GM}{rc^2}\right)^{-1}\frac{v^2}{c^2}},$$with $$v = \sqrt{\frac{\text{d}x^2}{\text{d}t^2} + \frac{\text{d}y^2}{\text{d}t^2} + \frac{\text{d}z^2}{\text{d}t^2}}$$the orbital speed of the clock in the gravitational field (assuming a circular orbit, so that $r$ remains constant). For Earth, $GM=398600\;\text{km}^3/\text{s}^2$ (see wiki ). Let us first calculate the time dilation experienced by someone standing on the equator. We have $r_\text{eq}=6371\,\text{km}$ and an orbital speed (due to the Earth's rotation) of $v_\text{eq}=0.465\,\text{km/s}$. Plugging in the numbers, we find$$\frac{\text{d}\tau_\text{eq}}{\text{d}t} = \sqrt{\left(1 - \frac{2GM}{r_\text{eq}\,c^2}\right) - \left(1 - \frac{2GM}{r_\text{eq}\,c^2}\right)^{-1}\frac{v_\text{eq}^2}{c^2}}=0.99999999930267,$$so 1 second outside Earth's gravity corresponds with 0.99999999930267 seconds on the equator. The ISS orbits the Earth at an altitude of $410\,\text{km}$, so that $r_\text{ISS}=6781\,\text{km}$, and it orbits the Earth with a speed of $v_\text{ISS}=7.7\,\text{km/s}$, and we get$$\frac{\text{d}\tau_\text{ISS}}{\text{d}t} = \sqrt{\left(1 - \frac{2GM}{r_\text{ISS}\,c^2}\right) - \left(1 - \frac{2GM}{r_\text{ISS}\,c^2}\right)^{-1}\frac{v_\text{ISS}^2}{c^2}}=0.999999999016118.$$The relative time dilation between someone on the equator and someone in the ISS is thus$$\frac{\text{d}\tau_\text{eq}}{\text{d}\tau_\text{ISS}} = \frac{0.99999999930267}{0.999999999016118} = 1.00000000028655,$$so 1 second in the ISS corresponds with 1.00000000028655 seconds on Earth. In other words, ISS astronauts age slightly less than people on Earth.
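Plugging the same constants into a few lines of Python reproduces those figures (a sketch; $c=299792.458$ km/s is assumed):

import math

GM = 398600.0      # km^3/s^2
c = 299792.458     # km/s

def clock_rate(r, v):
    # d(tau)/d(t) for a clock at radius r (km) moving at speed v (km/s) on a circular orbit
    s = 2.0 * GM / (r * c**2)
    return math.sqrt((1.0 - s) - v**2 / ((1.0 - s) * c**2))

eq = clock_rate(6371.0, 0.465)   # standing on the equator
iss = clock_rate(6781.0, 7.7)    # orbiting in the ISS
print(eq, iss, eq / iss)         # the ratio comes out around 1.00000000028655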
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/90751', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/36056/']}
jdg_369319
stackexchange
llm_judgeable_groundtruth_similarity
9966760
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How I get the web page's source from WebView? I want to only enter www.google.com in my webview and When I entered this site, I want to get the source for example String a=........;(source) Now provide the response and nothing else.
I am not sure how far this is going to be helpful. But I have used the below snippet to fetch a small html page's data. I hope it helps you. Create a class like the one below, class MyJavaScriptInterface { @SuppressWarnings("unused") public void processHTML(final String html) { Log.i("processed html",html); Thread OauthFetcher=new Thread(new Runnable() { @Override public void run() { String oAuthDetails=null; oAuthDetails=Html.fromHtml(html).toString(); Log.i("oAuthDetails",oAuthDetails); } });OauthFetcher.start(); } } Now in your onCreate(), webview.getSettings().setJavaScriptEnabled(true);webview.addJavascriptInterface(new MyJavaScriptInterface(), "HTMLOUT"); webview.setWebViewClient(new WebViewClient(){ @Override public void onPageFinished(WebView view, final String url) { String oAuthUrl=getString("www.google.com"); if(url.contains(oAuthUrl)) { Log.i("Contains","Auth URL"); twitter_webview.loadUrl("javascript:window.HTMLOUT.processHTML('<html>'+document.getElementsByTagName('html')[0].innerHTML+'</html>');"); } } @Override public void onPageStarted(WebView view, String url, Bitmap favicon) { progressDialog.show(); } }); And now what happens is that, when your page finishes loading, the JavaScript class will be called, which would retrieve the page source and store it in a String as your requirement.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9966760', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1332870/']}
jdg_369320
stackexchange
llm_judgeable_groundtruth_similarity
161868
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have started reading a book on differential equations and it says something like: $$\frac{dx}{x} = k \, dt$$ Integrating both sides gives $$\log x = kt + c$$ How is it that I can 'integrate both sides here' when I am integrating one side with respect to $x$ yet I am integrating the other side with respect to $t$ ? Now provide the response and nothing else.
What is going on there is what is called an abuse of notation. What you really have there is an equation in $t$. $x=x(t)$ is a function dependent on $t$. So what we're doing is the following - I presume this is the original equation: $$\frac{dx}{dt}=k x $$ This is the same as $$x'(t)=k x(t) $$ $$\frac{x'(t)}{x(t)}=k $$ Now we integrate with respect to $t$: $$\int\frac{x'(t)}{x(t)}dt=kt+C $$ But letting $X=x(t)$, so that $dX = x'(t) dt$, gives $$\int\frac{dX}{X}=kt+C $$ $$\log X = kt+C$$ So switching back $$\log x(t) = kt+C$$ $$x(t)=C e^{kt}$$ (where the constant has been relabelled: this last $C$ is really $e^{C}$). What we actually do, in some sense, is integrate with respect to "only" $t$ on one side, and with respect to "$x(t)$" on the other (which is done implicitly). The notation is very useful and suggestive, so we use it, understanding what we're doing is the above.
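For instance, pinning down the constant with an initial condition: if $x(0)=x_0$, then setting $t=0$ in $\log x(t)=kt+C$ gives $C=\log x_0$, so $$x(t)=e^{kt+\log x_0}=x_0e^{kt},$$ which indeed satisfies $\frac{dx}{dt}=kx$.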
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/161868', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23519/']}
jdg_369321
stackexchange
llm_judgeable_groundtruth_similarity
490393
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have encountered comparisions of variables to string literals multiple times over the years which had one character prefixing the variable and the literal, e.g. if [ "x$A" = "xtrue" ]; then in order to check whether $A is "true" . I assume this is done to achieve shell compatibility or to work around a longterm bug, an unintuitive behavior, etc. Nothing obvious comes to mind. Today I figured I want to know the reason, but my research didn't turn up anything. Or maybe it's just me making something out of a rather frequent exposure to rare occurances. Is this practice still useful, maybe even best? Now provide the response and nothing else.
The important thing to understand here is that in most shells¹, [ is just an ordinary command parsed by the shell like any other ordinary command. Then the shell invokes that [ (aka test ) command with a list of arguments, and then it's up to [ to interpret them as a conditional expression. At that point, those are just a list of strings and the information about which ones resulted from some form of expansion is lost, even in those shells where [ is built-in (all Bourne-like ones these days). The [ utility used to have a hard time telling which ones of its arguments were operators and which ones were operands (the thing operators work on). It didn't help that the syntax was intrinsically ambiguous. For instance: [ -t ] used to be (and still is in some shells/ [ s) to test whether stdout is a terminal. [ x ] is short for [ -n x ] : test whether x is a non-empty string (so you can see there's a conflict with the above). in some shells/ [ s, -a and -o can be both unary ( [ -a file ] for accessible file (now replaced by [ -e file ] ), [ -o option ] for is the option enabled? ) and binary operators ( and and or ). Again, ! -a x can be either and(nonempty("!"), nonempty("x")) or not(isaccessible("x")) . ( , ) and ! add more problems. In normal programming languages like C or perl , in: if ($a eq $b) {...} There's no way the content of $a or $b will be taken as operators because the conditional expression is parsed before those $a and $b are expanded. But in shells, in: [ "$a" = "$b" ] The shell expands the variables first ². For instance, if $a contains ( and $b contains ) , all the [ command sees is [ , ( , = , ) and ] arguments. So does that means "(" = ")" (are ( and ) lexically equal) or ( -n = ) (is = a non-empty string). Historical implementations ( test appeared in Unix V7 in the late 70s) used to fail even in cases where it was not ambiguous just because of the order in which they were processing their arguments. Here with version 7 Unix in a PDP11 emulator: $ ls -l /bin/[-rwxr-xr-x 2 bin 2876 Jun 8 1979 /bin/[$ [ ! = x ]test: argument expected$ [ "(" = x ]test: argument expected Most shell and [ implementations have or have had problems with those or variants thereof . With bash 4.4 today: bash-4.4$ a='(' b=-o c=xbash-4.4$ [ "$a" = "$b" -o "$a" = "$c" ]bash: [: `)' expected, found = POSIX.2 (published in the early 90s) devised an algorithm that would make [ 's behaviour unambiguous and deterministic when passed at most 4 arguments (beside [ and ] ) in the most common usage patterns ( [ -f "$a" -o "$b" ] still unspecified for instance). It deprecated ( , ) , -a and -o , and dropped -t without operand. bash did implement that algorithm (or at least tried to) in bash 2.0. So, in POSIX compliant [ implementations, [ "$a" = "$b" ] is guaranteed to compare the content of $a and $b for equality, whatever they are. Without -o , we would write: [ "$a" = "$b" ] || [ "$a" = "$c" ] That is, call [ twice, each time with fewer than 5 arguments. But it took quite a while for all [ implementations to become compliant. bash 's was not compliant until 4.4 (though the last problem was for [ '(' ! "$var" ')' ] which nobody would really use in real life) The /bin/sh of Solaris 10 and older, which is not a POSIX shell, but a Bourne shell still has problems with [ "$a" = "$b" ] : $ a='!' b='!'$ [ "$a" = "$b" ]test: argument expected Using [ "x$a" = "x$b" ] works around the problem as there is no [ operator that starts with x . 
Another option is to use case instead: case "$a" in "$b") echo same;; *) echo different;;esac (quoting is necessary around $b , not around $a ). In any case, it is not and never has been about empty values. People have problems with empty values in [ when they forget to quote their variables, but that's not a problem with [ then. $ a= b='-o x'[ $a = $b ] with the default value of $IFS becomes: [ = -o x ] Which is a test of whether = or x is a non-empty string, but no amount of prefixing will help³ as [ x$a = x$b ] will still be: [ x = x-o x ] which would cause an error, and it could get a lot worse including DoS and arbitrary command injection with other values like in bash : bash-4.4$ a= b='x -o -v a[`uname>&2`]'bash-4.4$ [ x$a = x$b ]Linux The correct solution is to always quote : [ "$a" = "$b" ] # OK in POSIX compliant [ / shells[ "x$a" = "x$b" ] # OK in all Bourne-like shells Note that expr has similar (and even worse) problems. expr also has a = operator, though it's for testing whether the two operands are equal integers when they look like decimal integer numbers, or sort the same when not. In many implementations, expr + = + , or expr '(' = ')' or expr index = index don't do equality comparison. expr "x$a" = "x$b" would work around it for string comparison, but prefixing with an x could affect the sorting (in locales that have collating elements starting with x for instance) and obviously can't be used for number comparison expr "0$a" = "0$b" doesn't work for comparing negative integers. expr " $a" = " $b" works for integer comparison in some implementations, but not others (for a=01 b=1 , some would return true, some false). ¹ ksh93 is an exception. In ksh93 , [ can be seen as a reserved word in that [ -t ] is actually different from var=-t; [ "$var" ] , or from ""[ -t ] or cmd='['; "$cmd" -t ] . That's to preserve backward compatibility and still be POSIX compliant in cases where it matters. The -t is only taken as an operator here if it's literal, and ksh93 detects that you're calling the [ command. ² ksh added a [[...]] conditional expression operator with its own syntax parsing rules (and some problems of its own) to address that (also found in some other shells, with some differences). ³ except in zsh where split+glob is not invoked upon parameter expansion, but empty removal still is, or in other shells when disabling split+glob globally with set -o noglob; IFS=
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/490393', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/63502/']}
jdg_369322
stackexchange
llm_judgeable_groundtruth_similarity
19213552
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I came across a strage problem, which worked but now it doesn't. I run an OpenMPI program with tau profiling among 2 computers. It seems that mpirun can't run tau_exec program on a remote host, maybe it's a permission issue? cluster@master:~/software/mpi_in_30_source/test2$ mpirun -np 2 --hostfile hostfile -d tau_exec -v -T MPI,TRACE,PROFILE ./hello.exe [master:19319] procdir: /tmp/openmpi-sessions-cluster@master_0/4568/0/0 [master:19319] jobdir: /tmp/openmpi-sessions-cluster@master_0/4568/0 [master:19319] top: openmpi-sessions-cluster@master_0 [master:19319] tmp: /tmp [slave2:06777] procdir: /tmp/openmpi-sessions-cluster@slave2_0/4568/0/1 [slave2:06777] jobdir: /tmp/openmpi-sessions-cluster@slave2_0/4568/0 [slave2:06777] top: openmpi-sessions-cluster@slave2_0 [slave2:06777] tmp: /tmp [master:19319] [[4568,0],0] node[0].name master daemon 0 arch ff000200 [master:19319] [[4568,0],0] node[1].name slave2 daemon 1 arch ff000200 [slave2:06777] [[4568,0],1] node[0].name master daemon 0 arch ff000200 [slave2:06777] [[4568,0],1] node[1].name slave2 daemon 1 arch ff000200 [master:19319] Info: Setting up debugger process table for applications MPIR_being_debugged = 0 MPIR_debug_state = 1 MPIR_partial_attach_ok = 1 MPIR_i_am_starter = 0 MPIR_proctable_size = 2 MPIR_proctable: (i, host, exe, pid) = (0, master, /home/cluster/software/mpi_in_30_source/test2/tau_exec, 19321) (i, host, exe, pid) = (1, slave2, /home/cluster/software/mpi_in_30_source/test2/tau_exec, 0) -------------------------------------------------------------------------- mpirun was unable to launch the specified application as it could not find an executable: Executable: tau_exec Node: slave2 while attempting to start process rank 1. -------------------------------------------------------------------------- [slave2:06777] sess_dir_finalize: job session dir not empty - leaving [slave2:06777] sess_dir_finalize: job session dir not empty - leaving [master:19319] sess_dir_finalize: job session dir not empty - leaving [master:19319] sess_dir_finalize: proc session dir not empty - leaving orterun: exiting with status -123 On slave2: cluster@slave2:~/software/mpi_in_30_source/test2$ tau_exec -T MPI,TRACE,PROFILE ./hello.exehello MPI user: from process = 0 on machine=slave2, of NCPU=1 processescluster@slave2:~/software/mpi_in_30_source/test2$ which tau_exec/home/cluster/tools/tau-2.22.2/arm_linux/bin/tau_exec So there is a working tau_exec on both nodes. When I run mpirun without tau_exec everything works. cluster@master:~/software/mpi_in_30_source/test2$ mpirun -np 2 --hostfile hostfile ./hello.exehello MPI user: from process = 0 on machine=master, of NCPU=2 processeshello MPI user: from process = 1 on machine=slave2, of NCPU=2 processes Now provide the response and nothing else.
Suppose you are using the value of parameter , but if the value is say None , then you would rather like to have an empty string "" instead of None . What would you do in general? if parameter: # use parameter (well your expression using `" " + parameter` in this caseelse: # use "" This is what that expression is doing. First you should understand what and and or operator does: a and b returns b if a is True , else returns a . a or b returns a if a is True , else returns b . So, your expression: parameter and (" " + parameter) or "" which is effectively equivalent to: (parameter and (" " + parameter)) or ""# A1 A2 B# A or B How the expression is evaluated if: parameter - A1 is evaluated to True : result = (True and " " + parameter) or ""result = (" " + parameter) or ""result = " " + parameter parameter - A1 is None : result = (None and " " + parameter) or ""result = None or ""result = "" As a general suggestion, it's better and more readable to use A if C else B form expression for conditional expression. So, you should better use: " " + parameter if parameter else "" instead of the given expression. See PEP 308 - Conditional Expression for motivation behind the if-else expression.
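A small side-by-side sketch (the function names are invented) showing the old and-or idiom and the clearer conditional expression agreeing on the interesting cases:

def greet_old(parameter):
    return "Hello" + (parameter and (" " + parameter) or "")

def greet_new(parameter):
    return "Hello" + (" " + parameter if parameter else "")

for value in ("world", "", None):
    assert greet_old(value) == greet_new(value)
    print(repr(greet_new(value)))   # 'Hello world', 'Hello', 'Hello'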
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19213552', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1607072/']}
jdg_369323
stackexchange
llm_judgeable_groundtruth_similarity
38884534
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: SampleData.xlsx Type A1001 A1002 A1003 A1004 A1005 TotalTypeA TypeB TypeC TypeD Total I have to refine the data set in the above form using R programming. Looking for an IDEA to count each type of communication w.r.t UserID *data11 <- read.xlsx("SampleData.xlsx", sheetName = "SampleData1", header = TRUE)data22 <- read.xlsx("SampleData.xlsx", sheetName = "SampleData2", header = TRUE)dim(data11)dim(data22)data2 = data22[complete.cases(data22),]data2$User <- as.character(data2$User)data1$User = NAfor(i in data1[,1]){ for(q in data2[,1]) { if(data1$Id[i] == data2$Id[q]) { data1$User[i] = data2$User[q] } }}data.1 = data1[complete.cases(data1),] x1 <- subset(data.1, Type == "Email" ) x2 <- subset(data.1, Type == "Update" )x3 <- subset(data.1, Type == "NA" )x4 <- subset(data.1, Type == "Call" )x5 <- subset(data.1, Type == "Visit" )p1 = as.list(as.data.frame(count(x1, "User")))Type1 = p1$freq * As you can see from the last two lines of CODE. I tried something but it is the wrong way. Help me out Now provide the response and nothing else.
The promise will always log pending as long as its results are not resolved yet. You must call .then on the promise to capture the results regardless of the promise state (resolved or still pending): let AuthUser = function(data) { return google.login(data.username, data.password).then(token => { return token } )}let userToken = AuthUser(data)console.log(userToken) // Promise { <pending> }userToken.then(function(result) { console.log(result) // "Some User token"}) Why is that? Promises are forward direction only; You can only resolve them once. The resolved value of a Promise is passed to its .then or .catch methods. Details According to the Promises/A+ spec: The promise resolution procedure is an abstract operation taking as input a promise and a value, which we denote as [[Resolve]](promise, x). If x is a thenable, it attempts to make promise adopt the state of x, under the assumption that x behaves at least somewhat like a promise. Otherwise, it fulfills promise with the value x. This treatment of thenables allows promise implementations to interoperate, as long as they expose a Promises/A+-compliant then method. It also allows Promises/A+ implementations to “assimilate” nonconformant implementations with reasonable then methods. This spec is a little hard to parse, so let's break it down. The rule is: If the function in the .then handler returns a value, then the Promise resolves with that value. If the handler returns another Promise , then the original Promise resolves with the resolved value of the chained Promise . The next .then handler will always contain the resolved value of the chained promise returned in the preceding .then . The way it actually works is described below in more detail: 1. The return of the .then function will be the resolved value of the promise. function initPromise() { return new Promise(function(res, rej) { res("initResolve"); })}initPromise() .then(function(result) { console.log(result); // "initResolve" return "normalReturn"; }) .then(function(result) { console.log(result); // "normalReturn" }); 2. If the .then function returns a Promise , then the resolved value of that chained promise is passed to the following .then . function initPromise() { return new Promise(function(res, rej) { res("initResolve"); })}initPromise() .then(function(result) { console.log(result); // "initResolve" return new Promise(function(resolve, reject) { setTimeout(function() { resolve("secondPromise"); }, 1000) }) }) .then(function(result) { console.log(result); // "secondPromise" });
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/38884534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6702419/']}
jdg_369324
stackexchange
llm_judgeable_groundtruth_similarity
17079
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the time complexity (not query complexity) of Grover's algorithm? It seems clear to me that it is $\Omega(\log(N) \sqrt{N})$ since there are $\Omega(\sqrt{N})$ iterations and each iteration requires use of the reflection operation which in turn takes time $\Omega(\log(N))$ using any standard set of universal gates. The problem is, I can't find even a single reference which says the time complexity of Grover's algorithm is $\Omega(\log(N) \sqrt{N})$. Wikipedia, and several other web pages, say $O(\sqrt{N})$ time complexity. Grover's paper claims $O(\sqrt{N})$ "steps". Am I missing something? Perhaps people define the reflection operation to take unit time. But that doesn't make sense to me because if we can play the game of allowing arbitrary unitaries to take unit time then there would be no difference between query complexity and time complexity. Now provide the response and nothing else.
The question is usually taken to be moot, for the following reason. Grover's algorithm is a combinatorial search algorithm to find a solution to an arbitrary predicate. While, yes, $\Theta(\log N)$ is the quantum gate complexity in each stage of the black-box algorithm, the predicate needs to be computed too. The quantum gate complexity of that is $\Omega(\log N)$, because otherwise it wouldn't read the whole input and you could discard some of the input bits from the search. On the other hand, an interesting predicate could take a lot more time than that. Hence, the number of calls to the predicate is taken to be the standard coin, just as it is for the classical analogue of Grover's algorithm, namely random guessing.
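To put a rough formula on that (my own paraphrase of the argument above, not a quotation): if a single call to the predicate costs $C_P$ gates, Grover's algorithm uses on the order of

$$\sqrt{N}\,\bigl(C_P + \log N\bigr)$$

gates in total, and since any predicate that reads its whole input already has $C_P = \Omega(\log N)$, the $\log N$ from the reflection is absorbed into the oracle cost. That is why the complexity is usually quoted as $O(\sqrt N)$ in units of oracle calls.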
{}
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/17079', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/7092/']}
jdg_369325
stackexchange
llm_judgeable_groundtruth_similarity
234464
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question:

Clear[y, z, x, eq1, eq2]
eq1 = (x^2/400) + (y^2/256) == 1
eq2 = (x^2/144) + (-(y^2/289)) == 1
ContourPlot[Evaluate[{eq1, eq2}], {x, -60, 60}, {y, -60, 60}, AspectRatio -> Automatic]

I'm trying to plot these two curves and find the intersections between them (I need to mark each intersection with a dot). My teacher does this by zooming in on the graph, but I find that way too difficult. How can you extract the exact values of the intersections instead of reading them off by zooming in? Now provide the response and nothing else.
Graphics`Mesh`FindIntersections

cp = ContourPlot[Evaluate[{eq1, eq2}], {x, -60, 60}, {y, -60, 60}];
intersections = Graphics`Mesh`FindIntersections[cp, Graphics`Mesh`AllPoints -> False]

{{-14.3143, -11.0858}, {-14.3143, 11.0858}, {14.3143, -11.0858}, {14.3143, 11.0858}}

Show[cp, Graphics[{Red, PointSize[Large], Point @ intersections}]]

MeshFunctions + Mesh + MeshStyle

ContourPlot[Evaluate[{eq1, eq2}], {x, -60, 60}, {y, -60, 60},
  MeshFunctions -> Function[{x, y}, Evaluate[eq1[[1]] - eq2[[1]]]],
  Mesh -> {{0}}, MeshStyle -> Directive[Red, PointSize[Large]]]

ImplicitRegion + Region + Show

{ir1, ir2, ir12} = ImplicitRegion[#, {{x, -60, 60}, {y, -60, 60}}] & /@ {eq1, eq2, And[eq1, eq2]};

Show[MapThread[Region[#, BaseStyle -> #2] &][{{ir1, ir2, ir12},
   {Blue, Orange, Directive[Red, PointSize[Large]]}}], Frame -> True, AspectRatio -> 1]

ImplicitRegion + RegionPlot

RegionPlot[{ir1, ir2, ir12}, BaseStyle -> PointSize[Large]] /. p_Point :> {Red, p}
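If exact coordinates are wanted rather than the numeric mesh points, the system can also be handed to Solve; this is a small addition of mine, assuming eq1 and eq2 are the two equations from the question:

Solve[{eq1, eq2}, {x, y}]    (* exact solutions in radicals *)
N[%]                         (* numeric values, matching the four points listed above *)

Show[cp, Graphics[{Red, PointSize[Large], Point[{x, y} /. Solve[{eq1, eq2}, {x, y}]]}]]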
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/234464', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/74896/']}
jdg_369326
stackexchange
llm_judgeable_groundtruth_similarity
4520781
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Java abstract class called ImmutableEntity and several subclasses that contain a class-level annotation called @DBTable. I am trying to search a class hierarchy for the annotation using a tail-recursive Scala method:

def getDbTableForClass[A <: ImmutableEntity](cls: Class[A]): String = {
  @tailrec
  def getDbTableAnnotation[B >: A](cls: Class[B]): DBTable = {
    if (cls == null) {
      null
    } else {
      val dbTable = cls.getAnnotation(classOf[DBTable])
      if (dbTable != null) {
        dbTable
      } else {
        getDbTableAnnotation(cls.getSuperclass)
      }
    }
  }

  val dbTable = getDbTableAnnotation(cls)
  if (dbTable == null) {
    throw new IllegalArgumentException("No DBTable annotation on class " + cls.getName)
  } else {
    val value = dbTable.value
    if (value != null) {
      value
    } else {
      throw new IllegalArgumentException("No DBTable.value annotation on class " + cls.getName)
    }
  }
}

When I compile this code, I am getting the error: "could not optimize @tailrec annotated method: it is called recursively with different type arguments". What is wrong with my inner method? Thanks. Now provide the response and nothing else.
It's because of the way the compiler implements tail recursion: by rewriting it as a loop. This is done as one step in a chain of transformations from Scala to Java bytecode, and each transformation must produce a program that is again type-correct. Since the type of a variable cannot change in mid-loop, a recursive call whose type arguments differ from iteration to iteration cannot be expanded into a type-correct loop, which is why the compiler rejects it.
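One common way around it, sketched under the assumption that the rest of the original method stays as is: make the inner helper non-generic, for instance by taking a Class[_], so the recursive call no longer changes its type arguments and @tailrec can do its job:

@tailrec
def getDbTableAnnotation(cls: Class[_]): DBTable = {
  if (cls == null) null
  else {
    val dbTable = cls.getAnnotation(classOf[DBTable])
    if (dbTable != null) dbTable
    else getDbTableAnnotation(cls.getSuperclass)   // same (absent) type arguments on every call
  }
}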
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4520781', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/96233/']}
jdg_369327
stackexchange
llm_judgeable_groundtruth_similarity
44224227
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In d3, when is it appropriate to use d3.select("foo").attr('class', 'bar'); as opposed to d3.select("foo").classed('bar', true); ? Is one recommended or expected to be deprecated? What is industry standard? Now provide the response and nothing else.
I think of classed as a kind of conditional toggle. For example, to add a class, the second parameter to classed must be true, as in this code:

d3.selectAll(".bar")
    .classed("my-selector", true);

To remove a class, the second parameter to classed must be false:

d3.selectAll(".bar")
    .classed("my-selector", false);

To flip a class to the opposite state – remove it if it exists already, add it if it does not yet exist – you can do one of the following. For a single element, the code might look like this:

var oneBar = d3.select(".bar")
oneBar.classed("my-selector", !oneBar.classed("my-selector"));

Both classed and attr are equally legitimate to use; attr also has other uses that classed cannot cover.
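One practical difference worth spelling out (my addition, not part of the answer above): attr('class', ...) overwrites the element's whole class attribute, while classed only adds or removes the single class you name.

// suppose the element matched by "foo" already has class="bar chart"  (hypothetical markup)
d3.select("foo").attr("class", "highlight");    // class attribute is now just "highlight"
d3.select("foo").classed("highlight", true);    // class attribute becomes "bar chart highlight"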
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44224227', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7046012/']}
jdg_369328
stackexchange
llm_judgeable_groundtruth_similarity
30611801
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to share a database between two apps on Heroku according to Share database between 2 apps in Heroku but setting up DATABASE_URL on the second app gives an error:

$ heroku config:add DATABASE_URL=postgres://...
Setting config vars and restarting pacific-headland-1960... failed
 !    Cannot destroy last attachment to billing app for resource loving-subtly-5807

Now provide the response and nothing else.
DATABASE_URL is what's currently storing your second app's connection to its provisioned DB, which Heroku is kindly preventing you from deleting because there are no other references to it. First, remove the second application's DB. Anything in it will be destroyed. heroku addons:destroy heroku-postgresql:<your DB tier> --app <your second app> If these are new apps, your DB tier is probably hobby-dev , but you can check for it by running heroku addons --app <your second app> . Then, you'll be able to set DATABASE_URL on the second app. If you want to have the second app connected to both DBs, you'll need to store the first app's DB URL in a different environment variable and update your second app's code to use it. Unrelated to your question, you've just pasted your DB credentials into a public space. You should roll them with heroku pg:credentials --reset --app <your first application> . Docs: https://devcenter.heroku.com/articles/heroku-postgresql
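If the aim is for the second app to reach the first app's database under its own variable name, a sketch of that last step might look like this (the app names and the variable name SHARED_DATABASE_URL are placeholders of my choosing, and the second app's code would have to read that variable):

heroku config:get DATABASE_URL --app first-app
heroku config:set SHARED_DATABASE_URL="postgres://<value copied from the previous command>" --app second-app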
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/30611801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1167937/']}
jdg_369329
stackexchange
llm_judgeable_groundtruth_similarity
39585573
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't know how to get the result of .map into an array. I want to get the values of all the child inputs except two and then put them into an array. Here's the code I'm attempting it with:

$("#schoolSupplies").submit(function() {
  var test = $(":input").not("#numOfSupplies, #submitBtn").map(function() {
    return $(this).val();
  })
  .get()
  .join( "\", \"" );
  console.log(test);
});

And this is the output:

Billy", "John

I have been working on this for about an hour and I have no idea how to fix it. Now provide the response and nothing else.
.get() returns an array - so just take out the .join() call; otherwise you would have a string (since that is what .join() returns).

$("#schoolSupplies").submit(function() {
  var arrayOfValues = $(":input").not("#numOfSupplies, #submitBtn").map(function() {
    return $(this).val();
  })
  .get()
  //.join( "\", \"" )
  ;
  console.log('Array.isArray(arrayOfValues): ', Array.isArray(arrayOfValues)?'yes':'no',
    ' contents of arrayOfValues: ', arrayOfValues);
  return false; //for demonstration purposes, don't submit form normally
});

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<form id="schoolSupplies">
  Supply Name: <input id="name" type="text" value="Tables" /><br />
  Student Name: <input id="studentName" type="text" value="Bobby"/><br />
  # of Supplies: <input id="numOfSupplies" type="number" value="3" /><br />
  <input type="submit" id="submitBtn" />
</form>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39585573', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6846544/']}
jdg_369330
stackexchange
llm_judgeable_groundtruth_similarity
527799
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In Hatcher's Algebraic Topology section 1.3, Cayley complexes are explained. The book states that we get a Cayley complex out of a Cayley graph by attaching a 2-cell to each loop. There is an example showing the Cayley complex for $\mathbb{Z}\times\mathbb{Z}$ (the fundamental group of the torus). We attach one 2-cell to each loop and we get $\mathbb{R}^{2}$ with vertical and horizontal tiling. I understand this. The book then says (example 1.47) that the Cayley complex of a cyclic group of order $ n $ is $n$ disks with boundaries identified. I can't for the life of me figure out where the $n$ disks come from. In the Cayley graph, we have one loop $e \to x \to x^2 \to \cdots \to x^n = e$. I guess the relation $x^n = e$ somehow generates $n$ loops, but I don't understand why. The next example is for $\mathbb{Z}_2*\mathbb{Z}_2$ in which two 2-cells are attached to each loop. I also don't understand why two. I'm looking for a canonical description of the algorithm to build Cayley complexes, and the application of the algorithm to build Cayley complexes for finite cyclic groups and $\mathbb{Z}_2*\mathbb{Z}_2$. Thank you. Now provide the response and nothing else.
This is essentially the same answer as user32240 but I will try to explain it differently. Hatcher's description is a bit sloppy. The correct thing to say is that if $R$ is the set of defining relators for $G$, then each element $r\in R$ labels a loop based at every vertex of the Cayley graph. To each of these based loops, you add a 2-cell. The reason for this is you want the group $G$ to act freely on the Cayley complex. Now if you have a relator of the form $r=s^n$ where $s$ is not a proper power, then each loop labeled by $s^n$ in the Cayley graph can be read from $n$ different starting vertices and so you need a 2-cell for each one. So, for example, if $G=\mathbb Z_2$ with presentation $\langle a\mid a^2=1\rangle$, then you want to have 2 2-cells and have $\mathbb Z_2$ permute them so that you have a free action. The two 2-cells come from the loop labelled $a^2$ at 1 and the looped labeled by $a^2$ at $a$. You can think of the 2 2-cells as the northern and southern hemisphere of a sphere. If you attached only one $2$-cell, you would get a disk. The group $G$ would fix the center of this disk and so the action is not free. Although the projective plane is a disk with antipodal points identified, the quotient map is not a covering map. By attaching 2 disks you get a covering map. Incidentally, this issue is not handled properly in the book of Lyndon and Schupp if memory recalls. Cohen makes a big point of this in his book and on the necessity of using $n$ disks for relators $r^n$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/527799', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38011/']}
jdg_369331
stackexchange
llm_judgeable_groundtruth_similarity
45001
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've seen a few questions that speculate as to how much current is available from the HDMI socket but I can't find any concrete values. I've gotten away with using a couple of different HDMI to VGA/3.5mm analogue audio converters, but it would be useful to know a definitive value. Related but unconfirmed reading seems to suggest that the HDMI socket supplies power directly from the 5V rail: Brian's answer: Total speculation: it's reasonably likely that the Raspberry Pi wires the 5V HDMI directly to the Pi's 5V input. If so, the VGA converter will have power equal to 1A minus whatever the Pi uses. afsantos's question: [Using a 1200mA PSU] The Pi grabs 500mA for itself, the HDMI adapter grabs another 200mA, and I intend to connect a 100mA keyboard, yielding a total of 800mA, which should still be in the safe 1A limit the Pi can handle. Is this right, or is available HDMI current determined by something else? Do values vary between Pi models and, if so, how? Now provide the response and nothing else.
I think I might have figured this out as far as is possible. The schematics for various Pis are provided by the RPi Foundation here . The schematics for all revisions (1.0, 2.0 and 2.1) of the Raspberry Pi 1 Model A and B indicate that the HDMI power circuitry on each model is identical: Each revision incorporates a BAT54 diode (D1) which is widely documented (see elinux.org here and here , raspberrypi.org/forums here and here , github.com/raspberrypi ) to max out at 200mA. The schematic for the Pi B+ shows that this arrangement was changed to incorporate an AP23331W 0.2A single channel current-limited load switch : Finally, the schematic for the Pi 3B shows that it uses yet another component, an RT9741CGV 100mΩ, 0.7A High-Side Power Switch : It looks to me as if the original model boards and the Pi B+ will allow up to 200mA through the HDMI socket. The Pi 3 should allow up to 700mA through the HDMI socket. As far as I'm aware there are no schematics available for the Pi 2 or the Pi Zero, which leaves them in uncertain territory. UPDATE A very close look at a couple of product shots suggests that the Pi 2 uses the same RT9741CGV component as the Pi 3: You can just about make out the '4D=***' marking detailed in the datasheet . That suggests that the Pi 2 should also be capable of supplying 700mA through its HDMI socket.
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/45001', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/32756/']}
jdg_369332
stackexchange
llm_judgeable_groundtruth_similarity
1833959
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I was answering a question here, I found a sequence as a recursive one as given below. $a_1=1$, and for $n>1$, $$a_n=\begin{cases}2a_{n-1} & \text{if } n\ \text{ is even, }\\2a_{n-1}+1 & \text{if } n\ \text{ is odd. }\end{cases}$$ I need to find a common term for this sequence. For example, for the sequence $a_1=2$ and $a_n=2a_{n-1}$, for $n>1$, the common term is $a_n=2^n$. I appreciate any answer or hint in advance. Now provide the response and nothing else.
Note that in binary we have $$\begin{align*}a_1&=1\\a_2&=10\\a_3&=101\\a_4&=1010\\a_5&=10101\;,\end{align*}$$ displaying a pattern easily shown by induction to be real. Now note that the binary expansion of $\frac23$ is $\frac23=0.\overline{10}_{\text{two}}$, so that $$\begin{align*}2\cdot\frac23&=1.\overline{01}_{\text{two}}\\2^2\cdot\frac23&=10.\overline{10}_{\text{two}}\\2^3\cdot\frac23&=101.\overline{01}_{\text{two}}\\2^4\cdot\frac23&=1010.\overline{10}_{\text{two}}\\2^5\cdot\frac23&=10101.\overline{01}_{\text{two}}\;,\end{align*}$$ and therefore $$a_n=\left\lfloor 2^n\cdot\frac23\right\rfloor=\left\lfloor\frac{2^{n+1}}3\right\rfloor\;.$$ If you really want to get rid of the floor function, observe that $2^{n+1}\equiv 1\pmod3$ when $n$ is odd, and $2^{n+1}\equiv2\pmod3$ when $n$ is even, so $$\left\lfloor\frac{2^{n+1}}3\right\rfloor=\begin{cases}\dfrac{2^{n+1}-1}3,&\text{if }n\text{ is odd}\\\dfrac{2^{n+1}-2}3,&\text{if }n\text{ is even}\;.\end{cases}$$ Now $$\frac12\big(1+(-1)^n\big)=\begin{cases}0,&\text{if }n\text{ is odd}\\1,&\text{if }n\text{ is even}\;,\end{cases}$$ so $$\begin{align*}\left\lfloor\frac{2^{n+1}}3\right\rfloor&=\frac13\left(2^{n+1}-1-\frac12\big(1+(-1)^n\big)\right)\\&=\frac13\left(2^{n+1}-\frac12\left(3+(-1)^n\right)\right)\\&=\frac16\left(2^{n+2}-3-(-1)^n\right)\;.\end{align*}$$
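A quick sanity check of the closed form against the original recurrence (a throwaway Python script, not part of the derivation):

def a(n):
    # the recurrence from the question
    if n == 1:
        return 1
    return 2*a(n - 1) if n % 2 == 0 else 2*a(n - 1) + 1

closed = lambda n: (2**(n + 2) - 3 - (-1)**n) // 6
print(all(a(n) == closed(n) for n in range(1, 30)))   # True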
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1833959', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/254604/']}
jdg_369333
stackexchange
llm_judgeable_groundtruth_similarity
58179
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am continuing to brush up my statistical physics. I just want to gain a better understanding. I have gone through the derivation of the classical virial theorem once more. I have thought about it, googled it and slept about it. The statement: $$\langle x^i \frac{\partial \cal H}{\partial x^j} \rangle= kT \delta^i_j$$ is still just counter-intuitive to me. So I am at a fixed position in phase space and I'm looking at my Hamiltonian. Then I step away from my current position and watch how the Hamiltonian changes and multiply that knowledge by how far away from my initial position I have moved. I do this a lot in a random way and then I take an average. Et voilá, I have arrived at the equilibrium temperature of a system. Right now this is just some math to me (which I totally get) to calculate the temperature of a system of particles in thermal equilibrium. Is there more to it? Am I not getting it? What is the intuition behind this? Now provide the response and nothing else.
The conclusion – the claim of the virial theorem – is not "just some math" because all the objects in the claim have a physical interpretation. So it's physics and it has big implications in theoretical physics as well as applied physics. The derivation is a mathematical derivation but it's not right to attach the disrespectful word "just" to a mathematical derivation. Mathematical derivations are the most solid and the only truly solid derivations one may have in science. On the contrary, it's derivations and intuitions that are not mathematical that should be accompanied by the word "just" because they are inferior. Instead, the right way is to adjust one's intuition so that it's compatible with the most solid results in physics – and they're the mathematically formulated results. Incidentally, there are various derivations – dealing with the microcanonical ensemble, canonical ensemble etc. The details of the proof differ in these variations but the overall physical conclusion is shared and important. The exact proof of the theorem can't be simplified too much – otherwise people would do so – but one may offer heuristic, approximate proofs for approximate versions of the virial theorem and its special cases. For example, the quantity in the expectation value contains the derivative of $H$ with respect to a coordinate. The larger the derivative is, the more the Hamiltonian increases with the coordinate, and the more the Boltzmann factor $\exp(-H/kT)$ of the canonical distribution decreases with the coordinate which makes the expectation value of the coordinate smaller. So if we multiply the quantity by the coordinate again, we get something that behaves constantly, independently of the slope. And indeed, the expectation value of the product only depends on the temperature. This theorem is important in statistical physics because statistical physics is all about the computation of statistical averages of various quantities, the theorem allows us to express some expectation values in a simpler way, and $x_i \cdot \partial H / \partial x_j$ are among the simplest and most important quantities whose statistical averages may be computed or interesting. So we should better know how they behave. An important special case of the theorem you mentioned deals with the calculation of the expectation value of the kinetic energy and the potential energy. The former is $n/2$ times the latter for power-law potentials of the form $ar^n$, for example. So we know how big a percentage of the energy is stored in the kinetic one and how big portion is the potential energy. For example, both the kinetic and potential energy contribute 50% for harmonic-oscillator-like $r^2$ potentials. For the Keplerian or Coulomb $-C/r$ potential, i.e. $n=-1$, the potential energy is negative, $-|V|$, and the kinetic energy is $+|V|/2$, reducing the potential one by 50% while keeping the total energy negative. There are many other things we may learn from the theorem in various situations – and in classes of situations.
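To make that last paragraph concrete (a standard special case, added here for illustration rather than taken from the text above): for a particle in a power-law potential $V(r)=a r^n$, Euler's relation $\sum_i x_i\,\partial V/\partial x_i = nV$ combined with the theorem gives $$2\langle T\rangle = \Big\langle \sum_i x_i \frac{\partial V}{\partial x_i}\Big\rangle = n\langle V\rangle,$$ so $\langle T\rangle=\langle V\rangle$ for the harmonic case $n=2$ and $\langle T\rangle=-\tfrac12\langle V\rangle$ for the Kepler/Coulomb case $n=-1$, matching the fractions quoted above.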
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/58179', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/20985/']}
jdg_369334
stackexchange
llm_judgeable_groundtruth_similarity
2648364
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got a simple class Currency with an overloaded operator<<. I don't know how I can separate the number with spaces every 3 digits, so that it looks like "1 234 567 ISK".

#include <cstdlib>
#include <cstring>
#include <iostream>
using namespace std;

class Currency
{
    int val;
    char curr[4];
  public:
    Currency(int _val, const char * _curr)
    {
        val = _val;
        strcpy(curr, _curr);
    }
    friend ostream & operator<< (ostream & out, const Currency & c);
};

ostream & operator<< (ostream & out, const Currency & c)
{
    out << c.val << " " << c.curr;
    return out;
}

int main(int argc, char *argv[])
{
    Currency c(2354123, "ISK");
    cout << c;
}

What interests me is the simplest solution for this particular situation. Now provide the response and nothing else.
This can be done with facets:

struct myseps : numpunct<char> {
    /* use space as separator */
    char do_thousands_sep() const { return ' '; }

    /* digits are grouped by 3 digits each */
    string do_grouping() const { return "\3"; }
};

int main() {
    std::cout.imbue(std::locale(std::locale(), new myseps));
    std::cout << 10000; // 10 000
}

Alternatively, you may code your own loop:

void printGrouped(ostream &out, int n) {
    if(n < 0) { out << "-"; return printGrouped(out, -n); }
    if(n < 1000) {
        out << n;
    } else {
        printGrouped(out, n / 1000);
        out << " " << setw(3) << setfill('0') << (n % 1000);
    }
}

ostream & operator<< (ostream & out, const Currency & c) {
    printGrouped(out, c.val);
    out << " " << c.curr;
    return out;
}
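One way to combine the facet approach with the Currency class from the question, so that the grouping applies only to currency values and not to every number written to cout: this is a sketch of my own, which reuses the myseps facet above and additionally needs <sstream> and <locale>:

ostream & operator<< (ostream & out, const Currency & c) {
    std::ostringstream tmp;                              // local stream, so cout's locale is untouched
    tmp.imbue(std::locale(tmp.getloc(), new myseps));    // grouping and separator apply only here
    tmp << c.val;
    out << tmp.str() << " " << c.curr;
    return out;
}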
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2648364', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/244599/']}
jdg_369335
stackexchange
llm_judgeable_groundtruth_similarity
26034246
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I recently ran (OS X 10.6.8) brew update and brew upgrade and am working in a virtualenv that now fails. I've tried:

$ brew unlink python && brew link python
Unlinking /usr/local/Cellar/python/2.7.8_1... 38 symlinks removed
Linking /usr/local/Cellar/python/2.7.8_1... 35 symlinks created

But in the virtualenv, I still get this:

$ python --version
dyld: Library not loaded: @executable_path/../.Python
  Referenced from: /Users/admin/.virtualenvs/saves/bin/python
  Reason: image not found
Trace/BPT trap

I've found another post on this, but the solution still seems unclear. I was in a hurry, so I uninstalled the Python version and ran brew doctor, which returned the error above as a warning:

sh: line 1: 40991 Trace/BPT trap python -V 2>&1

Then brew install python. I'm still getting the above error in the virtualenv. Related questions and a post on debugfix aren't definitive: Broken references in Virtualenvs, dyld-library-loaded-executable_path-python.

Update: Creating a new virtualenv solved this specific problem. Now provide the response and nothing else.
I'll put my comment as an answer, for clarity for future visitors with the same problem. The related links you gave tell you exactly what's the problem: because virtualenv creates symbolic links to files, and homebrew replaces those files when upgrading with differently named files, you end up with broken links. Both related links give suggestions how to solve that: one to create a new virtualenv, the other to fix the broken links. All in all though, when Python upgrades again in homebrew, you may end up with the same situation. This was apparently realised by the virtualenv developers , and there is an option --always-copy to avoid problems like this: $ virtualenv --helpUsage: virtualenv-3.4 [OPTIONS] DEST_DIROptions: ... --always-copy Always copy files rather than symlinking. ... This should prevent problems when upgrading Python through Homebrew in the future. Though then, of course, your virtualenv will have an older version of Python. Which may at times be exactly what you want.
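In practice that usually means recreating the affected environment. A sketch of what that might look like for the environment from the question (the paths and the requirements file are assumptions, adjust to your setup):

rm -rf ~/.virtualenvs/saves
virtualenv --always-copy -p /usr/local/bin/python ~/.virtualenvs/saves
source ~/.virtualenvs/saves/bin/activate
pip install -r requirements.txt    # reinstall whatever the project needs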
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26034246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/123033/']}
jdg_369336
stackexchange
llm_judgeable_groundtruth_similarity
16855957
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know how we can use the Google API to return image results in AJAX, but I want to be able to return images for a specific query and then output them into HTML on my page. For example:

http://ajax.googleapis.com/ajax/services/search/images?v=1.0&q=sausages

returns results with information and images for the top 10 results for the keyword sausages. How can I query this URL to output the images and their titles on my page using PHP in HTML? I am using the following at the top of the function to return the title:

$tit = get_the_title();

Then I am appending it here:

$json = get_url_contents('http://ajax.googleapis.com/ajax/services/search/images?v=1.0&q='.$tit.'');

But it won't recognize the title. Now provide the response and nothing else.
function get_url_contents($url) {
    $crl = curl_init();
    curl_setopt($crl, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)');
    curl_setopt($crl, CURLOPT_URL, $url);
    curl_setopt($crl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($crl, CURLOPT_CONNECTTIMEOUT, 5);
    $ret = curl_exec($crl);
    curl_close($crl);
    return $ret;
}

$json = get_url_contents('http://ajax.googleapis.com/ajax/services/search/images?v=1.0&q=sausages');
$data = json_decode($json);

foreach ($data->responseData->results as $result) {
    $results[] = array('url' => $result->url, 'alt' => $result->title);
}

print_r($results);

Output:

Array
(
    [0] => Array
        (
            [url] => http://upload.wikimedia.org/wikipedia/commons/thumb/c/c4/Salchicha_oaxaque%25C3%25B1a.png/220px-Salchicha_oaxaque%25C3%25B1a.png
            [alt] => Sausage - Wikipedia, the free encyclopedia
        )
    [1] => Array
        (
            [url] => http://upload.wikimedia.org/wikipedia/commons/c/c1/Reunion_sausages_dsc07796.jpg
            [alt] => File:Reunion sausages dsc07796.jpg - Wikimedia Commons
        )
    [2] => Array
        (
            [url] => http://1.bp.blogspot.com/-zDyoLPoM1Zg/ULXDPba_2iI/AAAAAAAAAAs/QzfNNmDFmzc/s1600/shop_sausages.jpg
            [alt] => Maik's Yummy German Sausage
        )
    [3] => Array
        (
            [url] => http://sparseuropeansausage.com/images/sausage-web/sausagesBiggrilling2.jpg
            [alt] => Spar's European Sausage Shop
        )
)

Showing the images:

<?php foreach($results as $image): ?>
    <img src="<?php echo $image['url']; ?>" alt="<?php echo $image['alt']; ?>"/><br/>
<?php endforeach; ?>

Edit after comments:

$url = 'http://ajax.googleapis.com/ajax/services/search/images?v=1.0&q=' . get_the_title();
$json = get_url_contents($url);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16855957', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2095885/']}
jdg_369337
stackexchange
llm_judgeable_groundtruth_similarity
418282
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that $x=2\ln(3x-2)$ can be written as $x=\dfrac{1}{3}(e^{x/2}+2)$. Is there a rule for this? Now provide the response and nothing else.
Solve for the "other" $x$. Notice that:$$ \begin{align*}x &= 2\ln(3x-2) \\\dfrac{x}{2} &= \ln(3x-2) \\e^{x/2} &= 3x-2 \\e^{x/2}+2 &= 3x \\\dfrac{1}{3}(e^{x/2}+2) &= x \\\end {align*} $$as desired.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/418282', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/55507/']}
jdg_369338
stackexchange
llm_judgeable_groundtruth_similarity
120102
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a set of 500 points in 5D. Each point belongs to one of five classes, and the class labels are known. I’d like to visualise the dataset in 2D such that the classes would be separated as much as possible. I am currently using PCA and doing a scatterplot of the first two principal components. This works quite well for some datasets, but not as well for others. This makes intuitive sense, since PCA maximises explained variance rather than separability. Are there any known methods for finding a 2D projection that would maximise separability? I don’t have any specific measure in mind and am open to suggestions. (Tagging with [r] as I'd love to see some R code or pointers.) Now provide the response and nothing else.
You may want to try linear discriminant analysis (LDA). The basic idea of LDA is to project your data into a space where the within-class variance is minimized and the between-class variance is maximized. Unlike PCA, LDA uses the labels to reduce the dimensionality. Indeed, the direction that maximizes the overall variance, which is what PCA finds, is not necessarily a direction along which the classes separate well. Take the example of classifying pictures: the average brightness may vary a lot between your images, yet that feature might not help you much, depending on your goal. See this post on how to do it with R.
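Since the question asks for R pointers, here is a minimal sketch with MASS::lda; it assumes the 500 x 5 features sit in a data frame df and the labels in a factor cls (both names are mine):

library(MASS)
fit  <- lda(x = df, grouping = cls)    # fit LDA using the known class labels
proj <- predict(fit)$x                 # discriminant scores for every point
plot(proj[, 1], proj[, 2], col = as.integer(cls), pch = 19,
     xlab = "LD1", ylab = "LD2")       # with 5 classes there are at most 4 discriminants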
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/120102', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/439/']}
jdg_369339
stackexchange
llm_judgeable_groundtruth_similarity
460595
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a scenario like the one below:

if [file exists]; then
    exit
elif
    recheck if file exists (max 10 times)
    if found, exit
    else recheck again as per counter
fi

Now provide the response and nothing else.
There are many ways to do this loop.

With ksh93 syntax (also supported by zsh and bash):

for (( i=0; i<10; ++i)); do
    [ -e filename ] && break
    sleep 10
done

For any POSIX-like shell:

n=0
while [ "$n" -lt 10 ] && [ ! -e filename ]; do
    n=$(( n + 1 ))
    sleep 10
done

Both of the loops sleep 10 seconds in each iteration before testing the existence of the file again. After the loop has finished, you will have to test for existence of the file a last time to figure out whether the loop exited due to running 10 times or due to the file appearing.

If you wish, and if you have access to inotify-tools, you may replace the sleep 10 call with

inotifywait -q -t 10 -e create ./ >/dev/null

This would wait for a file creation event to occur in the current directory, but would time out after 10 seconds. This way your loop would exit as soon as the given filename appeared (if it appeared). The full code, with inotifywait (replace with sleep 10 if you don't want that), may look like

for (( i=0; i<10; ++i)); do
    [ -e filename ] && break
    inotifywait -q -t 10 -e create ./ >/dev/null
done

if [ -e filename ]; then
    echo 'file appeared!'
else
    echo 'file did not turn up in time'
fi
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/460595', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/190945/']}
jdg_369340