Q:
Strange issue with android volume
I have created an Android application which works just fine.
In the application I am using MediaPlayer to play .wav files, which also works fine.
The only problem comes when I try to increase the volume of the application:
as soon as I touch the volume-up button, the call volume increases,
but the application volume remains the same.
Any suggestion as to what the affecting factor could be?
Here is my code to play audio:
MediaPlayer mediaPlayer;//as global var in activity
mediaPlayer=new MediaPlayer(); // inside onCreate() method
Whenever I need to play audio I call this method:
private void playAudio() {
    Log.d("hussi", "before the media ");
    try {
        mediaPlayer.stop();
        Log.d("hussi", "so the file name passed is" + gif_char + ".wav");
        AssetFileDescriptor descriptor = getApplicationContext().getAssets().openFd(gif_char + ".wav");
        long start = descriptor.getStartOffset();
        long end = descriptor.getLength();
        mediaPlayer.reset();
        mediaPlayer.setDataSource(descriptor.getFileDescriptor(), start, end);
        mediaPlayer.prepare();
        mediaPlayer.start();
    } catch (IllegalArgumentException e) {
        Log.d("hussi", "1===>>>" + e);
        e.printStackTrace();
        Log.d("hussi", e.getMessage());
    } catch (SecurityException e) {
        Log.d("hussi", "2===>>>" + e);
        e.printStackTrace();
    } catch (IllegalStateException e) {
        Log.d("hussi", "3===>>>" + e);
        e.printStackTrace();
    } catch (IOException e) {
        Log.d("hussi", "4===>>>" + e);
        e.printStackTrace();
    } catch (Exception e) {
        Log.d("hussi", "5===>>>" + e);
        e.printStackTrace();
    }
}
A:
Call setVolumeControlStream(AudioManager.STREAM_MUSIC); in your onCreate() method. That tells the OS that the volume buttons should affect the "media" volume while your application is visible, and that is the stream your application's audio uses.
Remember that your application needs focus.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is the proper reading order for the Corpse Party manga?
Of the manga that are part of the Corpse Party franchise, in what order should they be read? Which of the series are one-shots or not relevant to the main plot?
Release order seems to be:
Blood Covered
Musume
Another Child
Book of Shadows
Sachiko's Game of Love – Hysteric Birthday 2U and Cemetery 0: Kaibyaku no Ars Moriendi (released on the same day)
Should they be read in release order, or is there a chronological order that should be used instead?
A:
According to this document uploaded to scribd, there is no 'true' order. Some of the manga are concurrent or they follow their own timeline. This is the (scribd document) author's suggested reading order:
Corpse Party CEMETERY0 ~The Genesis of Ars Moriendi~
This manga gives you Naho Saenoki's backstory
Corpse Party: Musume
A separate timeline from Blood Covered. It is based off of the PC-98 version of Corpse Party with influences from the remakes.
Corpse Party: Blood Covered
Mainly based off of the PC and PSP versions.
Corpse Party: Another Child
This manga is possibly concurrent with Blood Covered's timeline. It is recommended to read Blood Covered before Another Child in order to avoid spoilers.
Corpse Party: Coupling x Anthology
This manga is a collection of random stories that does not follow any specific timeline. Suggested reading after Blood Covered since it is based off of that.
Corpse Party: Book of Shadows
This is the sequel to the Blood Covered manga.
Corpse Party: Sachiko's Game of Love? Hysteric Birthday 2U
The events in this manga do not follow any specific timeline. Suggested reading after CEMETERY0, Blood Covered, and Book of Shadows to understand the cast of characters better.
Q:
I want to authenticate an admin using OAuth2 and access data for accounts that are under this admin
I want to authenticate an admin using OAuth2 and access data for accounts that are under this admin.
Primarily something like this, I have an organisation : A --> B, C, D where A is the admin.
If I authenticate A, can I access data from B, C, D?
We were able to do this in OAuth 1.0 by appending email IDs to request URLs.
How do we achieve it in OAuth 2.0?
A:
(assuming your users are under a Google Apps domain)
This can be achieved in OAuth 2.0 using service accounts. You need to:
Create a service account and download its private key.
Delegate domain-wide authority to your service account (see the link below for instructions).
Use a signed assertion requesting access to the users data to receive an access token for use in subsequent API calls.
See here for an example using Google Drive API:
https://developers.google.com/drive/delegation
See also the "Additional Claims" section here:
https://developers.google.com/accounts/docs/OAuth2ServiceAccount#jwtcontents
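For step 3, the assertion is a signed JWT whose claim set names the user to impersonate via the sub claim (the "additional claim" from the second link). Below is a sketch of just the claim set; the account names are placeholders, and signing with the private key and exchanging the assertion for an access token are omitted:

```python
import time

def build_claim_set(service_account_email, user_email, scopes):
    """Build the JWT claim set for a domain-wide-delegation assertion (sketch)."""
    now = int(time.time())
    return {
        "iss": service_account_email,  # identity of the service account
        "sub": user_email,             # the domain user whose data is accessed
        "scope": " ".join(scopes),     # space-delimited OAuth scopes
        "aud": "https://accounts.google.com/o/oauth2/token",
        "iat": now,
        "exp": now + 3600,             # assertions are short-lived
    }

claims = build_claim_set(
    "svc@project.iam.gserviceaccount.com",   # placeholder service account
    "b@yourdomain.com",                      # placeholder: user B under admin A
    ["https://www.googleapis.com/auth/drive"],
)
```

Repeating the exchange with sub set to C or D yields tokens for those accounts in turn.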
Q:
Best practise for PostgreSQL backup
I'm writing a script for backing up PostgreSQL each night, and I'm happy with doing a full database dump. I'm curious about how I should go about it, though. Is it wise to first do a VACUUM and then a full dump? Does this reduce the size of the backed-up file? (I will be compressing the file into a tar, so I don't know if it even matters.)
Since the script will be backing up nightly, is there such a thing as too much VACUUMing? Or should I leave VACUUM to another script that runs, say, once a month?
A:
VACUUM only affects the size of physical backups (pg_basebackup, etc.), not logical backups (dumps), so you don't need to run it before a dump.
There's no such thing as too much VACUUM. It's harmless. You shouldn't need manual VACUUM though, just make sure autovacuum is enabled and set to run enough.
I strongly advise that you use point-in-time recovery as well as logical backups though. See the manual. There are helper tools like pgbarman and WAL-E for this.
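A minimal sketch of such a nightly script (the paths and database name are assumptions; pg_dump's custom format is already compressed, so a separate tar step adds little). The pg_dump call is only echoed here since it needs a running server:

```shell
#!/bin/sh
# Nightly logical backup sketch. Adjust DB and OUTDIR for your setup.
DB="mydb"
OUTDIR="/var/backups/postgres"
STAMP="$(date +%F)"                      # e.g. 2024-01-31
OUTFILE="$OUTDIR/$DB-$STAMP.dump"
# -Fc: custom format, compressed, restorable selectively with pg_restore
echo "would run: pg_dump -Fc -f $OUTFILE $DB"
```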
Q:
Show that $\sum_{i=0}^{\frac{p-1}{2}} {{\frac{p-1}{2}}\choose {i}}^2 x^{\frac{p-1}{2}-i}$ is separable
This is a problem that occurs in my research. Let $k$ be an algebraically closed field of characteristic $p$. I want to show that $\sum_{i=0}^{\frac{p-1}{2}} {{\frac{p-1}{2}}\choose {i}}^2 x^{\frac{p-1}{2}-i}$ is a separable polynomial over this field. I am trying to prove that $p$ does not divide the discriminant of the polynomial. It seems that all the prime factors of the discriminant are less than $p$.
A:
Let me elaborate on Noam D. Elkies' comment. If we denote $n=(p-1)/2$, the discriminant of this polynomial $g(x)$ is non-zero modulo $p$ if and only if the discriminant of Legendre's polynomial $f(x)=2^{-n}\sum_{k=0}^n \binom{n}{k}^2(x-1)^{n-k}(x+1)^k=2^{-n}(x-1)^ng((x+1)/(x-1))$ is non-zero modulo $p$ (the roots of $f$ and $g$ are obtained from each other by fractional linear functions, thus if $g$ has only simple roots, so does $f$ and vice versa). The discriminant of Legendre's polynomial and even of Jacobi's polynomial is known, see, for example, the formula on page 5 here. Indeed the prime divisors do not exceed $2n<p$.
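The claim about the prime factors can be spot-checked numerically; here is a sketch with SymPy (an illustration, not a proof) for $p = 5$:

```python
from sympy import binomial, discriminant, symbols

x = symbols("x")
p = 5
n = (p - 1) // 2  # n = 2

# g(x) = sum of C(n, i)^2 * x^(n - i); here g = x**2 + 4*x + 1
g = sum(binomial(n, i) ** 2 * x ** (n - i) for i in range(n + 1))

d = discriminant(g, x)  # 12 = 2**2 * 3; every prime factor is < p
assert d % p != 0       # so g has no repeated roots in characteristic p
```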
Q:
Parse text file in Perl and get a specific string
I have a branch_properties.txt file located in my $ENV{"buildPath"} which contains the string TEST_SEQUENCE=Basic or TEST_SEQUENCE=Extended.
I need to take the value after TEST_SEQUENCE= and put it in a variable.
sub GetValueForTestSequenceSplit {
my $filePath = $ENV{"buildPath"} . "\\" . "branch_properties.txt";
my $fileContent = "";
open(my $fileHandle, "<", $filePath)
or die("Cannot open '" . $filePath . "' for reading! " . $! . "!");
while (my $line = <$fileHandle>) {
chomp $line;
my @strings = $line =~ /sequence/;
foreach my $s (@strings) {
print $s;
}
}
close($fileHandle);
}
Where did I go wrong? The console line output in Jenkins shows nothing.
A:
Use a regexp that captures the value (your pattern /sequence/ looks for the literal lowercase string "sequence", which never appears in the file):
my $variable;
if ($line =~ /TEST_SEQUENCE=(\w+)/){
$variable = $1;
}
Q:
using Java to download file
I am trying to download a file from this URL, but the code hangs at getInputStream().
If I type this URL in the browser, the URL is accessible:
http://filehost.blob.core.windows.net/firmware/version.txt
What is the cause of it?
URL url = new URL("http://filehost.blob.core.windows.net/firmware/version.txt");
HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
urlConnection.setRequestMethod("GET");
urlConnection.setDoOutput(true);
urlConnection.connect();
InputStream inputStream = urlConnection.getInputStream(); //hang at this line
int totalSize = urlConnection.getContentLength();
A:
READING THE FILE CONTENT
SOLUTION
Use URL with Scanner.
CODE
URL url = new URL("http://filehost.blob.core.windows.net/firmware/version.txt");
Scanner s = new Scanner(url.openStream());
while (s.hasNextLine())
System.out.println(s.nextLine());
s.close();
OUTPUT
1.016
NOTE MalformedURLException and IOException must be thrown or handled.
DOWNLOADING THE FILE
SOLUTION
Use JAVA NIO.
CODE
URL website = new URL("http://filehost.blob.core.windows.net/firmware/version.txt");
ReadableByteChannel rbc = Channels.newChannel(website.openStream());
FileOutputStream fos = new FileOutputStream("C:/temp/version.txt");
fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE);
fos.close();
OUTPUT file has been created at C:\temp\version.txt with a size of 5 bytes
NOTE MalformedURLException, FileNotFoundException and IOException must be thrown or handled.
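The Scanner approach can also be wrapped in a small helper (a sketch; downloadString is a name chosen here, not a library API). Since java.net.URL understands file:// URLs as well, the same method works on local files, which makes it easy to try without network access:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

public class Downloader {
    // Read the whole resource behind the given URL into a String.
    static String downloadString(String urlStr) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (InputStream in = new URL(urlStr).openStream();
             Scanner s = new Scanner(in)) {
            while (s.hasNextLine()) {
                sb.append(s.nextLine()).append('\n');
            }
        }
        return sb.toString();
    }
}
```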
Q:
How to make Bootstrap buttons inactive dynamically?
I'm working on a webpage that uses jquery to show and hide content depending on the user selection.
The site contains three main containers: one for a search, one for the results of the search, and another for the visualization of the results of the search.
I'm using buttons to switch from one 'page' to another. The problem is that the results and visualization buttons are active even before a search is done, which is not correct. I want to adapt my code so the only way the button with results works is when there is a search done. The same would be true for the visualization button. How can I do this?
I would appreciate any help!
Thanks!
A:
To disable it:
$('button').prop('disabled', true);
Example : http://jsfiddle.net/48HKD/
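The gating logic the question describes can be sketched framework-free (so it runs anywhere); in the page you would pass the real DOM buttons and call it again once the search completes:

```javascript
// Enable button-like objects only once a search has completed (sketch).
function updateButtons(buttons, searchDone) {
  buttons.forEach(function (b) { b.disabled = !searchDone; });
  return buttons;
}

// With jQuery the same idea is:
//   $('#results-btn, #viz-btn').prop('disabled', true);   // before any search
//   $('#results-btn, #viz-btn').prop('disabled', false);  // after a search
// (#results-btn / #viz-btn are assumed ids, not from the question.)
```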
Q:
Basic Word Counter in C
Working on a C programming question where I have to write a function which will determine how many words are in a given string. Assume that one or more consecutive white spaces is a delimiter between words, and that the string you pass to your function is null terminated.
Have to use pointers and only the #include <stdio.h> library. Wondering how this can be improved upon or if there are any possible errors. Here it is:
#include <stdio.h>
int word_counter(char string[])
{
//We start with first word unless we have a empty string then we have no words
int count;
if(*string!='\0'){
count=1;
}
else{
count=0;
return 0;
}
//while we dont reach the end of the string
while(*string!='\0'){
//if we detect a whitespace
if(*string==' '){
//get previous character
string--;
// If previous character is not a space we increase the count
// Otherwise we dont since we already counted a word
if(*string!=' '){
count++;
}
//return pointer to current character
string++;
}
// set pointer to next character
string++;
}
return count;
}
//just to test if it works
int main(void)
{
char str[] = "Hello World!";
printf("How many words? = %i\n", word_counter(str));
return 0;
}
A:
isspace
There are other whitespace characters besides spaces. What happens if I do:
Hello<tab><tab>world!
Your code will report that there is one word. I would rewrite these:
if(*string==' '){
//get previous character
string--;
// If previous character is not a space we increase the count
// Otherwise we dont since we already counted a word
if(*string!=' '){
count++;
}
You should instead use isspace for these kind of things. ' ' is for explicitly a space.
Indentation
Fix your indentation. Your main uses 4 spaces, your word_counter uses (maybe?) 2. Be sure that it is consistent. Choose one or the other.
Empty case rework
(Actually, as @200_success points out you don't need this corner case. I'm going to leave this up here though, because sometimes you will get cases like this and you should consider reworking them if they appear awkward)
Your empty corner case can be reworked:
int count;
if(*string!='\0'){
count=1;
}
else{
count=0;
return 0;
}
First, you don't need to set count = 0 if you just return immediately. I would restructure your if statement to be:
if (*string == '\0') {
return 0;
}
And from there continue with:
int count = 1;
This means we don't leave count uninitialized either.
A:
This section is verbose and buggy:
if(*string==' '){
//get previous character
string--;
// If previous character is not a space we increase the count
// Otherwise we dont since we already counted a word
if(*string!=' '){
count++;
}
//return pointer to current character
string++;
}
First of all, it could be simplified to this (and written with more conventional whitespace):
if (*string == ' ') {
// If previous character is not a space we increase the count
// Otherwise we don't since we already counted a word
if (*(string - 1) != ' ') {
count++;
}
}
But what if the input to the function begins with a space? You would try to look at the preceding character. There isn't any, so your function would have undefined behaviour (likely crashing).
A good remedy for this bug is to use the logic in @Edward's solution, using a variable to keep track of the class of the previous character.
A:
I see a number of things that may help you improve your code.
Use const where possible
The word_counter function does not (and should not) alter the passed string and so the parameter should therefore be declared const.
int word_counter(const char string[])
Evaluate each character only once
There's no need to back up and test the previous character to see if it's whitespace or not -- it was already evaluated the last time through the loop! What's needed then, is just to remember that result. One way to do that would be to use a boolean variable to keep track of whether we're in a word or not in a word. Here's a rework showing how that might look:
#include <stdio.h>
#include <stdbool.h>
#include <ctype.h>
int word_counter(const char string[])
{
int count = 0;
for (bool inword = false; *string; ++string) {
if (isspace(*string)) {
if (inword) {
inword = false;
}
} else { // not whitespace
if (!inword) {
inword = true;
++count;
}
}
}
return count;
}
As noted in the other review, you should use isspace() because tab, spaces, form-feeds, vertical tabs and newlines are all things that might separate words.
Understand the concept of a locale
It's often ignored or overlooked, but the isspace function and its siblings in ctype.h may change behavior under POSIX or POSIX-like environments, depending on the locale that's currently in use. It's not a reason to avoid using these functions, but it's good to be aware of the subtle details.
Omit return 0
When a C or C++ program reaches the end of main the compiler will automatically generate code to return 0, so there is no need to put return 0; explicitly at the end of main.
Note: when I make this suggestion, it's almost invariably followed by one of two kinds of comments: "I didn't know that." or "That's bad advice!" My rationale is that it's safe and useful to rely on compiler behavior explicitly supported by the standard. For C, since C99; see ISO/IEC 9899:1999 section 5.1.2.2.3:
[...] a return from the initial call to the main function is equivalent to calling the exit function with the value returned by the main function as its argument; reaching the } that terminates the main function returns a value of 0.
For C++, since the first standard in 1998; see ISO/IEC 14882:1998 section 3.6.1:
If control reaches the end of main without encountering a return statement, the effect is that of executing return 0;
All versions of both standards since then (C99 and C++98) have maintained the same idea. We rely on automatically generated member functions in C++, and few people write explicit return; statements at the end of a void function. Reasons against omitting seem to boil down to "it looks weird". If, like me, you're curious about the rationale for the change to the C standard read this question. Also note that in the early 1990s this was considered "sloppy practice" because it was undefined behavior (although widely supported) at the time.
So I advocate omitting it; others disagree (often vehemently!) In any case, if you encounter code that omits it, you'll know that it's explicitly supported by the standard and you'll know what it means.
Q:
Comparing 2 Integers in C++
I get an error when I try to compare two integers in Qt.
if ((modus==2) & (move != -1))
error: invalid operands of types '<unresolved overloaded function type>' and 'int' to binary 'operator!='
Do I need other operators? I have googled, but it seems that Qt uses the same ones. Thanks for your answers!
A:
You should use && for the and-operation:
if ((modus==2) && (move != -1))
A:
If you're using a C++0x compiler, move might conflict with std::move(). I'm thinking that's what's causing the "unresolved overloaded function type" part of the error message.
Q:
Cancellation of Direct Products
Given a finite group $G$ and subgroups $H, K$ such that $$G \times H \cong G \times K,$$ does it follow that $H = K$?
Clearly, one can see that this doesn't work out for all subgroups. Is there any condition under which it remains true?
A:
Yes. This is an easy consequence of the Krull-Schmidt theorem:
http://planetmath.org/encyclopedia/KrullRemakSchmidtTheorem.html
Edit: To be clear, what I am claiming is that if one has finite groups $H,K,G$ such that $G \times H \cong G \times K$, then $H \cong K$. As Steve D points out below, the hypothesis and conclusion of the OP's literal question are a bit different than this. But because of the Krull-Schmidt theorem, the OP's literal question becomes: let $H$ and $K$ be isomorphic subgroups of a finite group $G$. When do we have $H = K$? It is quite clear that the answer is "not always", and I find implausible that there would be a clean necessary and sufficient condition for this. But let's see what transpires...
A:
Here's an entry point into the literature, from the introduction to Lam's paper [1]:
In the study of any algebraic system in which there is a notion of a direct sum, the theme of cancellation arises very naturally: if $A \oplus B \cong A\oplus C$ in the given system, can we conclude that $B \cong C$? (For an early treatment of this problem, see the work of Jonsson and Tarski [JT] in 1947.) The answer is, perhaps not surprisingly, sometimes "yes" and sometimes "no": it all depends on the algebraic system, and it depends heavily on the choice of A as well.
Starting with a simple example, we all know that, by the Fundamental Theorem of Abelian Groups, the category of finitely generated abelian groups satisfies cancellation. But a little more is true, which solved what would have been the "Third Test Problem" for §6 in Kaplansky's book [Ka 1] (see the Notes in [Ka_1:§20]): if A is a f.g. (finitely generated) abelian group, then for any abelian groups $B$ and $C$, $A\oplus B \cong A\oplus C$ still implies $B \cong C$. Thus, f.g. abelian groups A remain "cancellable" (with respect to direct sums) in the category of all abelian groups. This takes a proof, which was first given, independently, by P. M.Cohn [Co] and E. A. Walker [W]. And yet, there exist many torsionfree abelian groups of rank 1 (that is, nonzero subgroups of the rational numbers Q ) that are not cancellable in the category of torsionfree abelian groups of finite rank, according to B. Jonsson [Jo].
[1] T.Y. Lam. A Crash Course on Stable Range, Cancellation, Substitution, and Exchange
University of California, Berkeley, Ca 94720
http://math.berkeley.edu/~lam/ohio.ps
Q:
Download file without asyncTask
I wanted to know if there is a way to download a file on an Android device without using an AsyncTask.
My problem is that I want to use an AsyncTask in my app, and I need to call a download function INSIDE this AsyncTask, but Android's documentation says that it is only possible to create an AsyncTask from the UI thread.
I tried creating a thread by using extends Thread instead of extends AsyncTask, but Android still gave the "Network on UI thread" exception.
I need a function that works like this: String downloadFile(String url){...} that returns the downloaded String (I'm downloading an XML file, nothing fancy).
A:
and i need to call a download function INSIDE this asyncTask
If "INSIDE this asyncTask" really means "from the doInBackground() method of the AsyncTask", then the "download function" (whatever that is) does not need to be asynchronous.
but android still said the "Network on UI thread exception"
Then you are not doing the network I/O from doInBackground() of an AsyncTask or by any background means (e.g., from a regular Thread that you fork).
Q:
Accessing relation data in an on save entry event
I have a front end form that saves an entry with a relation field based on an ID.
<input type="hidden" name="items[{{ index }}][finish]" value="{{ finish.id }}">
When the entry is saved this ID saves the field with the correct category relation.
I then have a plugin which triggers on entry save that sends an email and attaches the $entry.
craft()->on( 'entries.saveEntry', function( Event $event ){
$entry = $event->params['entry'];
if( !$event->params['isNewEntry'] || $entry->section->handle != 'sampleOrders' )
{
return;
}
//die(Craft::dump($entry->samplesOrder[0]->finish->find()));
$email = new EmailModel();
$email->toEmail = 'luke@ten4design.co.uk';
$email->subject = 'Order Received';
$email->body = '';
$email->htmlBody = craft()->templates->render( 'pentagonemailservice/_emails/order', array(
'entry' => $entry
) );
try{
craft()->email->sendEmail( $email );
}
catch( Exception $e ){
Craft::log( 'Could not send email.', LogLevel::Error );
}
} );
For some reason in my email template I cannot access the finish field with the following code. The other fields print correctly. The finish field is a category relation.
{% for item in entry.samplesOrder %}
{% set appr = craft.entries.section( 'appearances' ).apprId( item.appearance ).first() %}
<h3 class="appearance__name h beta"><a class="a" href="{{ appr.url }}">{{ appr.apprId }} - {{ appr.displayName }} - {{ appr.pentagonRange[0].title }}</a></h3>
<h3 class="h h6 caps--spaced m-b--05">Material</h3>
<p class="appearance__material h delta basket-bits">{{ appr.material }}</p>
<h3 class="h h6 caps--spaced m-b--05">Finish</h3>
{#<p class="basket-bits">{{ finish.title }}</p>#}
<h3 class="h h6 caps--spaced m-b--05">Quantity</h3>
<p class="basket-bits">{{ item.quantity }}</p>
{% endfor %}
How do I access the finish field in my email template?
A:
Related elements, such as categories, are stored by Craft as arrays, even if there is only a single element to be stored.
In your entry form, you don't appear to be storing an array, but only the id of the selected category:
<input type="hidden" name="items[{{ index }}][finish]" value="{{ finish.id }}">
This seems to be borne out by what you see when you dump the item variable in your email template. item[finish] should be an array, even if only a single category is assigned to it.
Try using the following in your entry form to see if that helps, which should store the category id as an array:
<input type="hidden" name="items[{{ index }}][finish][]" value="{{ finish.id }}">
You should then be able to access the correct value for the category title using the code in @carlcs answer.
Q:
Sublime text: Do not print path of build command
I have the following build script for my JavaScript files in Sublime Text 3.
{
"shell_cmd": "node --harmony --use-strict --harmony_generators $file"
}
The problem is that when node returns an error, for some reason Sublime will spew out the path, which does not line-wrap, and pollutes the output.
C:\Users\JFD\Desktop\playground.js:2
console.log(b); // ReferenceError: a is not defined
^
ReferenceError: b is not defined
at Object.<anonymous> (C:\Users\JFD\Desktop\playground.js:2:13)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:349:32)
at Function.Module._load (module.js:305:12)
at Function.Module.runMain (module.js:490:10)
at startup (node.js:119:16)
at node.js:827:3
[Finished in 0.1s with exit code 8]
[shell_cmd: node --harmony --use-strict --harmony_generators C:\Users\JFD\Desktop\playground.js]
[dir: C:\Users\JFD\Desktop]
[path: C:\Program Files (x86)\Microchip\xc8\v1.11\bin;C:\Program Files (x86)\CMake 2.8\bin;C:\MinGW\bin;C:\yagarto4.6.0\bin;C:\Python26\;C:\Python26\Scripts;C:\Program Files (x86)\Altium Designer S09 Viewer\System;C:\PROGRA~2\MpAM;C:\windows\system32;C:\Program Files\nodejs\;C:\Cadence\SPB_16.6\tools\bin;C:\Cadence\SPB_16.6\tools\libutil\bin;C:\Cadence\SPB_16.6\tools\fet\bin;C:\Cadence\SPB_16.6\tools\specctra\bin;C:\Cadence\SPB_16.6\tools\pcb\bin;C:\Cadence\SPB_16.6\openaccess\bin\win32\opt;C:\Cadence\SPB_16.6\tools\capture;C:\Users\JFD\AppData\Roaming\npm\;c:\altera\12.1\modelsim_ase\win32aloem;c:\altera\12.1sp1\modelsim_ase\win32aloem;c:\altera\12.1sp1\modelsim_ae\win32aloem]
How can I ask Sublime to not output the path?
A:
A bit of a hack, but the following worked for me. Turns out you can override code in some of the default packages, including the code responsible for the path output:
Go to C:\Program Files\Sublime Text 3\Packages
Extract Default.sublime-package (it's actually a zip file) and get the file exec.py (don't leave the extracted folder hanging around in the directory)
Create the directory Default under C:\Users\USERNAME\AppData\Roaming\Sublime Text 3\Packages, and place exec.py into it
Open exec.py, and comment out (place # at the beginning of the line) the following line, at line 245 for me
self.append_string(proc, self.debug_text)
Restart Sublime Text
A:
Install PackageResourceViewer package
Open PackageResourceViewer:Open Resource using CommandPalette[Ctrl+Shift+P]
Then Select Default -->exec.py
Then Select Sublime Input -->input.py [For Sublime Input]
Comment out (place # at the beginning of the line) the following line, at line 365[ST3 B3126] (383 for Sublime Input) for me
self.append_string(proc, self.debug_text)
This hides not only the path but also the dir and cmd.
To hide only the path, comment out the following block:
if "PATH" in merged_env:
self.debug_text += "[path: " + str(merged_env["PATH"]) + "]"
else:
self.debug_text += "[path: " + str(os.environ["PATH"]) + "]"
Update
To remove the cmd, finished statement, dir, and path output, add "quiet": true to your build file.
Source
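With that flag, the build file from the question would become (a sketch; the quiet key merged into the original shell_cmd):

```json
{
    "shell_cmd": "node --harmony --use-strict --harmony_generators $file",
    "quiet": true
}
```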
Q:
Fixing y-axis labels in `plot.xts` (xts_0.10-0)
Here is a reproducible example:
library(xts)
constr.month.ts<-structure(c(5114.14, 2684.58, 6974.38, 6935.93, 3543.58, 33073.07,
8292.42, 18612.79, 9305.35, 7449.95, 23619.85, 76292.39, 2461.65,
10412.17, 69125.81, 3983.8, 8310.06, 41309.99, 7967.86, 14090.79,
34324.36, 14703.94, 57256.19, 122629.83, 539.7, 5595.65, 52425.89,
34090.27, 18597.61, 43133.51, 50044.64, 24416.35, 37564.54, 47467.72,
35315.63, 95817.74, 8477.28, 22719.4, 28389.55, 36987.2, 17535.29,
44724.55, 9911.84, 53962.46, 25183.81, 27610.91, 27216.94, 48955.66,
13979.56, 7287.34, 22234.38, 14414.43, 20087.2, 18910.02, 19331.26,
16552.08, 18319.97, 6364.54, 27689.13, 52966.76, 208.12, 10888.74,
12694.57, 19398.1, 7042.34, 6866.65, 8685.48, 5689.97, 5790.28,
8965.91, 3100.03, 48924.71, 1358.89, 13742.76, 8267.89, 35099.2,
15977.01, 17338.4, 13166.29, 8146.65, 8098.93, 9448.07, 8878.93,
22057.95, 722.72, 4864.02, 4991.31, 6987.43, 2318.44, 47601.72,
6944.43, 2779.62, 4331.18, 11003.15, 5313.1, 26783.29, 522.68,
5521.02, 4880.33, 9313.16, 3214.67, 8980.91, 23046.09, 15698.68,
8147.7, 16896.7, 25409.96, 41303.58, 4389.25, 0, 29231.19, 28087.86,
9315.08, 17372.04, 16833.7, 21343.68, 4512.8, 9618.17, 19171.28,
55794.29, 6855.21, 8583.31, 18337.2, 34689.21, 11893.71, 25079.03,
15638.56, 32321.35, 6845.77, 19872.21, 12361.9, 94009.41, 6.09,
1268.67, 36323.04, 30750.09, 1744.15, 15738.23, 25677.51, 24835.04,
4179.83), .Tsp = c(2006, 2017.66666666667, 12), class = "ts")
constr.annual.ts <- aggregate(constr.month.ts)/12
This plot looks ok
plot(as.xts(constr.month.ts), ylab="", xlab="", main="My Title",las=0,cex.axis=1.1,cex.main=0.9)
But in this one the y-axis labels are not OK. How can I fix them?
plot(as.xts(constr.annual.ts), ylab="", xlab="",main= "My Title",las=0,cex.axis=1.1,cex.main=0.9)
> sessionInfo()
R version 3.4.2 (2017-09-28)
Platform: i386-w64-mingw32/i386 (32-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=Greek_Greece.1253 LC_CTYPE=Greek_Greece.1253
[3] LC_MONETARY=Greek_Greece.1253 LC_NUMERIC=C
[5] LC_TIME=Greek_Greece.1253
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] xts_0.10-0 zoo_1.8-0
loaded via a namespace (and not attached):
[1] compiler_3.4.2 grid_3.4.2 lattice_0.20-35
A:
It must have been a bug. Install the development version from GitHub to fix it:
devtools::install_github("joshuaulrich/xts")
Q:
Size Parameterization in Scala
Is there any way to parametrize a type via a value in Scala? For example, to parametrize a matrix with its size, so that something like...
val m1 = new Matrix[2,3]()
val m2 = new Matrix[5,1]()
val m3 = m1 multiply m2
would fail to compile because you can't multiply a [2,3] matrix by a [5,1]?
This would also be useful in implementing other types such as Tuples, or Vectors. Does anyone know of a way to achieve this?
A:
Using Peano numbers we can define types that all are natural numbers starting from 0. Here they are all subtypes of Nat but _1 and _2 are distinct types, so they can't be used in place of each other without variance.
Define natural numbers:
scala> sealed trait Nat
defined trait Nat
scala> sealed trait _0 extends Nat
defined trait _0
scala> sealed trait Succ[N <: Nat] extends Nat
defined trait Succ
scala> type _1 = Succ[_0]
defined type alias _1
scala> type _2 = Succ[_1]
defined type alias _2
The Matrix is invariant in its parameter types:
scala> case class Matrix[A <: Nat, B <: Nat](ignoreThis: String)
defined class Matrix
The multiplication function is also invariant:
scala> def multiply[R1 <: Nat, C1 <: Nat, C2 <: Nat](m1: Matrix[R1, C1], m2: Matrix[C1, C2]) = Matrix[R1, C2](m1.ignoreThis + m2.ignoreThis)
multiply: [R1 <: Nat, C1 <: Nat, C2 <: Nat](m1: Matrix[R1,C1], m2: Matrix[C1,C2])Matrix[R1,C2]
Compiler will do the checks for you, dimensions match:
scala> multiply(Matrix[_1, _2]("one"), Matrix[_2, _1]("two"))
res0: Matrix[_1,_1] = Matrix(onetwo)
dimensions don't match, compile time error is much better than runtime:
scala> multiply(Matrix[_1, _2]("one"), Matrix[_1, _1]("two"))
<console>:19: error: type mismatch;
found : Matrix[_1(in object $iw),_2]
(which expands to) Matrix[Succ[_0],Succ[Succ[_0]]]
required: Matrix[_1(in object $iw),Succ[_ >: _0 with _1(in object $iw) <: Nat]]
(which expands to) Matrix[Succ[_0],Succ[_ >: _0 with Succ[_0] <: Nat]]
Note: _2 <: Succ[_ >: _0 with _1 <: Nat], but class Matrix is invariant in type B.
You may wish to define B as +B instead. (SLS 4.5)
multiply(Matrix[_1, _2]("one"), Matrix[_1, _1]("two"))
^
<console>:19: error: type mismatch;
found : Matrix[_1(in object $iw),_1(in object $iw)]
(which expands to) Matrix[Succ[_0],Succ[_0]]
required: Matrix[Succ[_ >: _0 with _1(in object $iw) <: Nat],_1(in object $iw)]
(which expands to) Matrix[Succ[_ >: _0 with Succ[_0] <: Nat],Succ[_0]]
Note: _1 <: Succ[_ >: _0 with _1 <: Nat], but class Matrix is invariant in type A.
You may wish to define A as +A instead. (SLS 4.5)
multiply(Matrix[_1, _2]("one"), Matrix[_1, _1]("two"))
^
I was too lazy to write an actual multiplication implementation, hence the ignoreThis placeholder.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Unable to edit xml file in powershell -WinRM
I am trying to edit an XML file. When I do it on the local machine, the code block works perfectly fine:
[xml]$webXML = Get-Content "C:\Deployment\ABCfolder\web.config"
$mainXML = $WebXml.configuration."system.webServer".httpRedirect
$mainXML.destination -replace "\w.*","http://localhost/ABCname"
$mainXML.Save($webXML)
But when I try to run it on the target servers using WinRM, it ends up giving an error; code below:
$server = Get-Content "C:\Files\server.txt"
foreach ($s in $server){
$session = New-PSSession -ComputerName $s -ThrottleLimit 500
Invoke-Command -Session $session -ScriptBlock {
[xml]$webXML = Get-Content "C:\Deployment\ABCfolder\web.config"
$mainXML = $WebXml.configuration."system.webServer".httpRedirect
$mainXML.destination -replace "\w.*","http://localhost/ABCname"
$mainXML.Save($webXML)
}
}
Note: the web.config file exists on the target server
Error:
Method invocation failed because [System.Xml.XmlElement] does not contain a method
named 'Save'.
+ CategoryInfo : InvalidOperation: (Save:String) [], RuntimeException
+ FullyQualifiedErrorId : MethodNotFound
+ PSComputerName : "SomeRandomIP"
A:
The logic of the code should be:
Read the xml into webXML
Change the required node
Write webXML back to the file
This code should work:
[xml]$webXML = Get-Content "C:\Deployment\ABCfolder\web.config"
$mainXML = $WebXml.configuration."system.webServer".httpRedirect
$mainXML.destination = $mainXML.destination -replace "\w.*","http://localhost/ABCname"
$webXML.Save("C:\Deployment\ABCfolder\web.config")
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL - specific columns on join?
When making a join (inner, left outer, right outer or whatever), how can I specify which columns on the table to join into the original table?
Consider the following example:
SELECT FirstName FROM User LEFT OUTER JOIN Provider ON User.ProviderID = Provider.ID
This would select FirstName from user, but select everything from Provider. How can I specify which parts of Provider should be included in the resultset?
A:
This will only include User.FirstName and Provider.ProviderID in the final resultset:
SELECT User.FirstName, Provider.ProviderID FROM User LEFT OUTER JOIN Provider ON User.ProviderID = Provider.ID
A:
SELECT User.FirstName, Provider.ID, Provider.YourExtraColumnname, Provider.YourExtraColumnname2 FROM User LEFT OUTER JOIN Provider ON User.ProviderID = Provider.ID
A:
SELECT `User`.FirstName, Provider.*
FROM `User`
LEFT OUTER JOIN Provider
ON `User`.ProviderID = Provider.ID
1. You use the table name before the column, or if you alias your tables, you can use the alias.
E.g. LEFT OUTER JOIN Provider p and then you could access providers id on the select clause like this:
SELECT `User`.FirstName, p.ID
2. I added backticks around the table name User, because it is a reserved word for MySQL
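If you want to experiment with this column-selection behaviour, it can be reproduced against any SQL engine; here is a hedged sketch using Python's built-in sqlite3 module with made-up sample data (table and column names follow the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE User (FirstName TEXT, ProviderID INTEGER);
    CREATE TABLE Provider (ID INTEGER, Name TEXT);
    INSERT INTO User VALUES ('Alice', 1), ('Bob', NULL);
    INSERT INTO Provider VALUES (1, 'Acme');
""")
# Qualify each column with its table name to control exactly what
# lands in the result set, instead of pulling every Provider column.
rows = con.execute("""
    SELECT User.FirstName, Provider.ID
    FROM User
    LEFT OUTER JOIN Provider ON User.ProviderID = Provider.ID
""").fetchall()
print(rows)  # two columns per row, e.g. [('Alice', 1), ('Bob', None)]
```

The unmatched left-join row (`Bob`) comes back with `NULL` for the Provider column, just as it would in MySQL.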
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Setting xsl:value-of into an href attribute and the text field of a link in an XSLT
How can I create an a element whose href attribute and whose text are both set from the same value, through an XSLT transformation? Here's what I have so far, which gives me the error "xsl:value-of cannot be a child of the xsl:text element":
<xsl:element name="a">
<xsl:attribute name="href">
<xsl:value-of select="actionUrl"/>
</xsl:attribute>
<xsl:text><xsl:value-of select="actionUrl"/></xsl:text>
</xsl:element>
A:
<xsl:text> defines a text section in an XSL document. Only real, plain text can go here, and not XML nodes. You only need <xsl:value-of select="actionUrl"/>, which will print text anyways.
<xsl:element name="a">
<xsl:attribute name="href">
<xsl:value-of select="actionUrl"/>
</xsl:attribute>
<xsl:value-of select="actionUrl"/>
</xsl:element>
A:
You can also do:
<a href="{actionUrl}"><xsl:value-of select="actionUrl"/></a>
A:
You don't need the xsl:text element:
<xsl:element name="a">
<xsl:attribute name="href">
<xsl:value-of select="actionUrl"/>
</xsl:attribute>
<xsl:value-of select="actionUrl"/>
</xsl:element>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How did the man going from/coming to St Ives have seven wives?
Wikipedia indicates that the classic riddle "As I was Going to St Ives" was written at some point in the 1700s or 1800s.
As I was going to St Ives
I met a man with seven wives
Bigamy, however, has been illegal in the UK since at least the 1600s, and prior to that it was a breach of ecclesiastical law.
Why is this man's bigamy not noted upon by the author? Is there any indication why (or how) he even had seven wives in the first place?
As I was going to St Ives
I met a man with seven wives
He was promptly arrested
For a breach of the Bigamy Act 1603
And subsequently executed.
A:
Back when the rhyme was first created wife also commonly meant woman.
A woman considered without reference to marital status, and related senses.
— "wife, n." OED Online. Oxford University Press, March 2017. Web. 8 April 2017. [link]
The OED says that that meaning is still in use in Scotland. This meaning survives in "standard" English in words like housewife and midwife.
So a man with seven (or nine) wives was a man accompanied by that many women.
They could be maids or other servants, or relatives, or just women travelling with him. No need to worry about bigamy.
A:
Because the poem was never intended to be realistic.
It's a simple nursery rhyme, designed to amuse children and to have an unexpected answer. It's not a complex piece of literature with much thought put into worldbuilding, consistency, and realism.
OK, so why do "wives" appear in the poem at all? Wouldn't it have worked equally well with, say, a man accompanied by seven servants, each carrying seven sacks, and so on? Well, not quite as well. "Wives" is a nice simple word - again, remember that the song is meant for children - and it also rhymes with St. Ives. "Servants" is a longer word, with more syllables (thus wouldn't scan as well), and I can't think of any place names that rhyme with it. For a child learning this rhyme, it's easy to imagine a man, women, sacks, cats, and kittens - why complicate it by saying they were his servants or his mistresses or his aunties or anything less familiar to a child than wives?
Alternatively, going with the lateral-thinking theme of this riddle, it never says they were all his wives. Perhaps he was walking together with seven wives, his own and those of six other men!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What kind of optimization problem is that?
I am new here as a writer but I read this group from time to time.
I am thinking about the following problem.
Problem description:
Assume we have an area $A$ of size $1000 \times 1000$ cells.
One cell is addressed with $(x,y)$ pair, where $x$ and $y$ are positive integers.
We are given a set of points (locations) on area $A$; let's say we have $N$ locations. These locations are fixed and given as an input. In practice $N$ is lower than 1000.
In each location we can build a sender station that sends some signal. Sender stations can have 4 levels: 0, 1, 2, 3. Building a sender station of a given level costs $c_0=0$, $c_1>0$, $c_2>c_1$ and $c_3>c_2$ respectively. Costs are equal for each location; moreover, they are fixed and given as an input. Each sender station covers a circular area with signal, with radii $r_0=0$, $r_1>0$, $r_2>r_1$, $r_3>r_2$. The radii are fixed and given as an input.
Question: what level of sender station should be built in each location in such a way that all locations are covered with signal and the total cost of building is minimized?
My question to that is: what kind of theoretical problem is that? Is there some easy transformation to some well known optimization problem?
This is not a homework from university, just my own riddle :)
A:
As pointed out in the comments already, this is an instance of weighted geometric set cover. Here are some recent relevant references:
http://www.divms.uiowa.edu/~kvaradar/paps/weightedcover.pdf
See also the follow up work:
http://www.cs.uwaterloo.ca/~tmchan/cover_soda.pdf
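Since this is weighted set cover, exact solutions are NP-hard in general, but the classic greedy heuristic (repeatedly pick the (location, level) pair with the lowest cost per newly covered location) gives an O(log N)-approximation. A hedged Python sketch — the function names and the tiny test instance are made up for illustration:

```python
import math

def coverage(locations, i, level, radii):
    """Indices of locations inside the signal circle of station i at `level`."""
    xi, yi = locations[i]
    return {j for j, (xj, yj) in enumerate(locations)
            if math.hypot(xi - xj, yi - yj) <= radii[level]}

def greedy_cover(locations, radii, costs):
    """Greedy weighted set cover over (location, level) candidates.

    Returns {location index: level}; locations absent from the dict get
    level 0.  This is an approximation, not the optimum.
    """
    uncovered = set(range(len(locations)))
    chosen = {}
    while uncovered:
        def price(cand):
            i, l = cand
            return costs[l] / len(coverage(locations, i, l, radii) & uncovered)
        # best cost per newly covered location among all useful candidates;
        # terminates because every location covers itself at level 1 (r1 > 0)
        i, lvl = min(
            ((i, l) for i in range(len(locations))
                    for l in range(1, len(radii))
                    if coverage(locations, i, l, radii) & uncovered),
            key=price)
        chosen[i] = max(chosen.get(i, 0), lvl)
        uncovered -= coverage(locations, i, lvl, radii)
    return chosen

# tiny made-up instance: two nearby points and one far away
stations = greedy_cover([(0, 0), (1, 0), (10, 0)],
                        radii=[0, 1.5, 5, 20],
                        costs=[0, 1, 3, 10])
print(stations)  # {0: 1, 2: 1}: one cheap station covers (0,0),(1,0); (10,0) covers itself
```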
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Am I misunderstanding String#hash in Ruby?
I am processing a bunch of data and I haven't coded a duplicate checker into the data processor yet, so I expected duplicates to occur. I ran the following SQL query:
SELECT body, COUNT(body) AS dup_count
FROM comments
GROUP BY body
HAVING (COUNT(body) > 1)
And get back a list of duplicates. Looking into this I find that these duplicates have multiple hashes. The shortest string of a comment is "[deleted]". So let's use that as an example. In my database there are nine instances of a comment being "[deleted]" and in my database this produces a hash of both 1169143752200809218 and 1738115474508091027. The 116 is found 6 times and 173 is found 3 times. But, when I run it in IRB, I get the following:
a = '[deleted]'.hash # => 811866697208321010
Here is the code I'm using to produce the hash:
def comment_and_hash(chunk)
comment = chunk.at_xpath('*/span[@class="comment"]').text ##Get Comment##
hash = comment.hash
return comment,hash
end
I've confirmed that I don't touch comment anywhere else in my code. Here is my datamapper class.
class Comment
include DataMapper::Resource
property :uid , Serial
property :author , String
property :date , Date
property :body , Text
property :arank , Float
property :srank , Float
property :parent , Integer #Should Be UID of another comment or blank if parent
property :value , Integer #Hash to prevent duplicates from occurring
end
Am I correct in assuming that .hash on a string will return the same value each time it is called on the same string?
Which value is the correct value assuming my string consists of "[deleted]"?
Is there a way I could have different strings inside ruby, but SQL would see them as the same string? That seems to be the most plausible explanation for why this is occurring, but I'm really shooting in the dark.
A:
If you run
ruby -e "puts '[deleted]'.hash"
several times, you will notice that the value is different. In fact, the hash value stays constant only as long as your Ruby process is alive. The reason for this is that String#hash is seeded with a random value. rb_str_hash (the implementing C function) uses rb_hash_start, which uses this random seed; the seed gets initialized every time Ruby is spawned.
You could use a CRC such as Zlib#crc32 for your purposes or you may want to use one of the message digests of OpenSSL::Digest, although the latter is overkill since for detection of duplicates you probably won't need the security properties.
A:
I use the following to create String#hash alternatives that are consistent across time and processes
require 'zlib'
def generate_id(label)
Zlib.crc32(label.to_s) % (2 ** 30 - 1)
end
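As an aside, the same pitfall exists in other languages — for example, Python's built-in str hash is also randomized per process (via PYTHONHASHSEED). A hedged Python sketch of the equivalent stable-ID helper:

```python
import zlib

def generate_id(label):
    """Stable across time and processes, unlike the built-in hash()."""
    return zlib.crc32(str(label).encode("utf-8")) % (2 ** 30 - 1)

# The same input always yields the same id, in any process:
assert generate_id("[deleted]") == generate_id("[deleted]")
```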
|
{
"pile_set_name": "StackExchange"
}
|
Q:
iPhone - core data inspecting inside a NSSet attribute on a fetch
I have a core data structure of Books and Bundles. A book can belong to one or more bundles, so, I have a many-to-many attribute in place between the two. On the book this attribute is called fromBundle.
I have my class for each entity. This class was written when, instead of a many-to-many, I had just a to-one relationship... so, at that time, fromBundle was not an NSSet. My problem is this: I have to build a predicate that can look inside the fromBundle NSSet and see if the set contains the bundle I am looking for.
This is the code I had before the change.
Bundle *aBundle = [Bundle bundleComNumber:aNumber inManagedObjectContext:context];
NSArray *all = nil;
NSFetchRequest *request = [[NSFetchRequest alloc] init];
request.entity = [NSEntityDescription entityForName:@"Book" inManagedObjectContext:context];
// the problem is on the next line... as I see, the line is not looking inside the NSSet
// I may be wrong, but when I run this, it crashes and stops on the executeFetchRequest line
// saying **to-many key not allowed here**
request.predicate = [NSPredicate predicateWithFormat:
@"(fromBundle == %@)", aBundle];
[request setResultType:NSDictionaryResultType];
[request setReturnsDistinctResults:YES];
[request setPropertiesToFetch:[NSArray arrayWithObjects: @"Name", @"Number", nil]];
NSSortDescriptor *sortByItem = [NSSortDescriptor sortDescriptorWithKey:ordem ascending:YES selector:@selector(compare:)];
NSArray *sortDescriptors = [NSArray arrayWithObject:sortByItem];
[request setSortDescriptors:sortDescriptors];
NSError *error = nil;
all = [context executeFetchRequest:request error:&error]; // it is crashing here saying **to-many key not allowed here**
[request release];
return all;
what am I missing?
thanks
A:
Have you tried using a predicate like @"%@ IN fromBundle" instead?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
$f: [0,1]\rightarrow L^1(\Omega)$ as a (measurable?) function from $[0,1]\times \Omega\rightarrow \mathbb{R}$
Given a map from $\big([0,1], \mathcal{B}[0,1], m\big)$ to a Banach space $(X, \|\cdot \|)$, there are strongly measurable functions (pointwise a.e. limits of simple functions) and weakly measurable functions (for each $u^* \in X^*$, the map $t\mapsto \langle u^*, f(t)\rangle$ is measurable as a function from $[0,1]$ to $\mathbb{R}$).
I have two questions:
Why don't we use the normal measurability definition here? That is, a function is measurable if the pre-image of every Borel set $U\in \mathcal{B}(X)$ is in $\mathcal{B}[0,1]$.
Now suppose $X= L^1(\Omega, \mathcal{F}, \mu)$, given a measurable $f:[0,1]\rightarrow L^1$ (either strong, weak or the normal definition), can we say that the function $f(t,\omega)$ is measurable as a function from $[0,1] \times \Omega \rightarrow \mathbb{R}$ with respect to the product sigma algebra on the domain?
Here, assume elements of $L^1$ are just measurable functions, not equivalence classes, so that $f(t,x)$ is well defined.
Thank you for your time.
A:
(1) If the Banach space $X$ is separable, and if you use the Lebesgue-measurable sets on $[0,1]$ rather than the Borel sets, then all three definitions are equivalent.
But of course the main thing of interest is not "measurable function" but "integrable function". When $X$ is not separable, you probably want the Bochner integral, using the pointwise-a.e.-limit-of-simple-functions definition. There is also the Pettis integral, using weak measurability, but its properties are much worse than the Bochner integral's.
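For reference, the separable-case equivalence mentioned in (1) is the Pettis measurability theorem; a sketch of the statement (from memory, so check a reference such as Diestel–Uhl, Vector Measures, before relying on it):

```latex
% Pettis measurability theorem (statement sketch):
f\colon [0,1]\to X \text{ is strongly measurable}
\iff
f \text{ is weakly measurable and essentially separably valued.}
```

Here "essentially separably valued" means there is a null set $N\subset[0,1]$ such that $f([0,1]\setminus N)$ is a separable subset of $X$.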
For (2), it is clear with the Bochner definition of measurable: pointwise a.e. limit of simple functions. Again you want a complete measure (like Lebesgue) so that your sequence can do weird things on a set of measure zero.
Plug:
Edgar, G. A. Measurability in a Banach space. Indiana Univ. Math. J. 26 (1977), no. 4, 663–677. MSN.
Edgar, G. A. Measurability in a Banach space. II. Indiana Univ. Math. J. 28 (1979), no. 4, 559–579. MSN.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Ajax call not accepting Names with apostrophe as a parameter
$(".loadingPnl").removeClass('hdn');
var siteurlA = window.location.protocol + "//" + window.location.host + _spPageContextInfo.siteServerRelativeUrl;
var callUrl = siteurlA + "/_layouts/15/SynchronyFinancial.Intranet/CreateMySite.aspx/SaveAvailableFavoriteItem";
var linkName = $('.txtLinkName').val();
linkName = linkName.replace("'","\'");
$.ajax({
type: "POST",
url: callUrl,
data: "{'linkName': '" + linkName + "', 'webSiteUrl':'" + $('.txtWebAddress').val() + "','iconId':'" + $(".ddlIcons").val() + "'}",
contentType: "application/json; charset=utf-8",
processData: false,
dataType: "json",
success: function (response) {
return true;
},
error: function (response) {
return true;
}
});
return true;
}
A:
The problem is you're building JSON yourself as the request parameters. Moreover, you're building invalid JSON (JSON property names are always with double quotes (")).
Instead, pass an object and let jQuery take care of how to send it - if you pass that instead of a string the server can figure it out. If you really want to do it yourself you can also pass an object to JSON.stringify.
var payload = {
linkName: linkName,
webSiteUrl: $('.txtWebAddress').val(),
iconId: $(".ddlIcons").val()
};
$.ajax({
type: "POST",
url: callUrl,
data: JSON.stringify(payload), // or just payload
contentType: "application/json; charset=utf-8",
processData: false, // if you just pass payload, remove this
dataType: "json"
// you had two `return`s here, but they wouldn't work, make sure
// you understand why
// http://stackoverflow.com/questions/14220321/how-to-return-the-response-from-an-asynchronous-call
});
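The underlying lesson — never concatenate JSON by hand — is language-independent. A hedged Python sketch showing how an apostrophe (and the single-quoted style) breaks hand-built JSON, while a serializer handles it:

```python
import json

link_name = "O'Brien's link"

# Hand-built, single-quoted "JSON", like the original data: string:
hand_built = "{'linkName': '" + link_name + "'}"
try:
    json.loads(hand_built)
    parsed_ok = True
except ValueError:  # json.JSONDecodeError subclasses ValueError
    parsed_ok = False
print(parsed_ok)  # False: single quotes and the stray apostrophe are invalid JSON

# Letting the serializer do the quoting always round-trips:
payload = json.dumps({"linkName": link_name})
assert json.loads(payload)["linkName"] == "O'Brien's link"
```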
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Giving a function within a class method of a PrototypeJS Class access to class members
Let's say I have a very simple PrototypeJS class that looks like this:
var Foo = Class.create({
initialize: function() {
this.bar = 'bar';
},
dostuff: function() {
$$('.enabled').each( function(elem) {
alert(this.bar); //FAIL
});
}
});
This fails because the function being passed to .each() doesn't have any idea what this refers to.
How can I access the bar attribute of the Class from inside that function?
A:
You can use Prototype's bind function, which 'locks [the function's] execution scope to an object'.
var Foo = Class.create({
initialize: function() {
this.bar = 'bar';
},
dostuff: function() {
$$('.enabled').each( function(elem) {
alert(this.bar);
}.bind(this)); // Set the execution scope to Foo
}
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Google is mixing GitHub and Stack Overflow
I don’t know if anybody cares, but Google is apparently mixing up GitHub and Stack Overflow:
You can find this at the bottom of every Google Glass documentation page.
I propose that a Stack Exchange representative contact them and request that the GitHub logo be changed to a Stack Overflow logo; at the very least, they are violating Stack Exchange's trademark.
A:
Director of Ad Sales here. I've reached out to my contacts on the advertising side to see if I can get this changed. I'll keep everyone posted.
Update: This has been corrected!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Picking the best hand out of 7 cards(Poker Texas Hold'em)
I've implemented a Texas Hold'em game using C#.
I wrote classes like Card, Deck, Player, Table etc...
For example:
Player player1 = new Player("player1");
player1.Card1 = new Card(4, Symbol.Clubs, true);
player1.Card2 = new Card(5, Symbol.Clubs, true);
Card card1 = new Card(4, Symbol.Clubs, true);
Card card2 = new Card(7, Symbol.Hearts, true);
Card card3 = new Card(2, Symbol.Spades, true);
Card card4 = new Card(4, Symbol.Diamonds, true);
Card card5 = new Card(4, Symbol.Clubs, true);
Card[] tableCards = {card1, card2, card3, card4, card5};
I've also wrote some methods for evaluate cards array, like IsFlush, IsStraight, IsPair and so on.
My question is: how should I pick the best hand combination if I have 7 cards (2 in hand, 5 from the table)?
In this code example it's {4,4,4,4,7}.
A:
Don't write your code against 5-card hands. Instead, write it in general. So,
ContainsStraightFlush
ContainsFourOfAKind
ContainsFullHouse
etc. would eat a collection of cards and return true if some subset of those cards is a straight flush, four of a kind, etc. respectively.
Then run backwards from the highest-ranking hand to the lowest. If one of these methods returns true, you can easily pick off the best hand that satisfies that condition. For example, on
2h Kh Qh Jh Th 9h 6c
ContainsStraightFlush would return true, and then you can pick off 9h Th Jh Qh Kh as the best hand.
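If you would rather brute-force it than test hand categories in descending order, seven cards only have C(7,5) = 21 five-card subsets, so you can rank every subset and take the maximum. A hedged Python sketch — the card model and the ranking tuple are my own for illustration, not the asker's C# classes:

```python
from collections import Counter
from itertools import combinations

def rank5(cards):
    """Rank a 5-card hand of (value, suit) pairs; a bigger tuple is better."""
    vals = sorted((v for v, _ in cards), reverse=True)
    if vals == [14, 5, 4, 3, 2]:            # wheel: the ace plays low
        vals = [5, 4, 3, 2, 1]
    counts = Counter(vals)
    shape = sorted(counts.values(), reverse=True)
    flush = len({s for _, s in cards}) == 1
    straight = shape == [1, 1, 1, 1, 1] and vals[0] - vals[4] == 4
    cat = (8 if straight and flush else
           7 if shape == [4, 1] else        # four of a kind
           6 if shape == [3, 2] else        # full house
           5 if flush else
           4 if straight else
           3 if shape == [3, 1, 1] else     # three of a kind
           2 if shape == [2, 2, 1] else     # two pair
           1 if shape == [2, 1, 1, 1] else  # one pair
           0)                               # high card
    # break ties by grouped count first, then by value (pairs beat kickers)
    tiebreak = sorted(vals, key=lambda v: (counts[v], v), reverse=True)
    return (cat, tiebreak)

def best_hand(seven_cards):
    """Best 5-card hand out of 7: rank all 21 subsets, keep the maximum."""
    return max(combinations(seven_cards, 5), key=rank5)

# the question's example: hole cards 4c 5c, board 4c 7h 2s 4d 4c
seven = [(4, "c"), (5, "c"), (4, "c"), (7, "h"), (2, "s"), (4, "d"), (4, "c")]
best = best_hand(seven)
print(sorted(v for v, _ in best))  # [4, 4, 4, 4, 7]
```

Twenty-one subsets is small enough that this brute force is perfectly fast; the category-test approach in the answer above just avoids writing a full evaluator.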
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Angular Observable: modify data before resolve
I tried to transform the data before resolving, in the following way, but the subscriber gets the non-transformed data.
@Injectable()
export class OrderService {
getAll(): Observable< Array<Order> > {
let url = 'http://fakeapi.com';
return this.http.get( url )
.pipe(
tap( (data: any) => {
/*
* MAKE DATA TRANSFORMATIONS HERE
*/
}),
catchError( (err: any, caught: Observable<{}>) => {
console.log('error while GET : ' + url);
console.warn(err);
return caught;
})
);
}
}
here is a subscriber:
this._orderService.getAll().subscribe( data => {
this.orders = data;
console.log('\nplanning-overview-page : _orderService.getAll : data ', data);
});
A:
map is the operator for transforming a value in an observable stream (tap is only for side effects and ignores the callback's return value), and you have to make sure it returns a value.
map(data => {
// do your transformations
return data;
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Capturing/Selecting images and returning its type
How can I allow the user to choose between capturing an image and selecting one from the gallery, and afterwards know the type of the photo (PNG/JPG)?
I'm using this code but it is not working well.
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == getActivity().RESULT_OK) {
if (requestCode == REQUEST_CAMERA) {
Bitmap photo = (Bitmap) data.getExtras().get("data");
mImg.setImageBitmap(photo);
} else if (requestCode == SELECT_FILE) {
Uri selectedImageUri = data.getData();
String[] projection = {MediaStore.MediaColumns.DATA};
CursorLoader cursorLoader = new CursorLoader(getActivity(), selectedImageUri, projection, null, null,
null);
Cursor cursor = cursorLoader.loadInBackground();
int column_index = cursor.getColumnIndexOrThrow(MediaStore.MediaColumns.DATA);
cursor.moveToFirst();
String selectedImagePath = cursor.getString(column_index);
Bitmap bm;
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeFile(selectedImagePath, options);
final int REQUIRED_SIZE = 200;
int scale = 1;
while (options.outWidth / scale / 2 >= REQUIRED_SIZE
&& options.outHeight / scale / 2 >= REQUIRED_SIZE)
scale *= 2;
options.inSampleSize = scale;
options.inJustDecodeBounds = false;
bm = BitmapFactory.decodeFile(selectedImagePath, options);
mImg.setImageBitmap(bm);
mImg.setAlpha(1);
}
}
}
A:
I want to know the type of the photo afterwards (PNG/JPG)
Call getType() on a ContentResolver, passing in the Uri.
I'm using this code but it is not working well.
Delete most of what you have, as your selectedImagePath code will fail on most Android devices and you are decoding the bitmap on the main application thread. Use an image-loading library like Picasso to handle the image loading for you, asynchronous, including the scaling. Picasso can use the Uri directly without any of the flawed selectedImagePath stuff. Then, all you need is the getType() call to get the MIME type of the image. Your entire requestCode == SELECT_FILE block will be replaced by 2-3 lines of code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Algorithm to extract words from a string
I’m working with TELON/COBOL at the moment, and I need an algorithm to extract words from a string.
Tried searching but couldn’t find anything similar.
Anyway, the algorithm needs to extract the words and ignore spaces; here is what I have so far (I've put in comments for people who aren’t familiar with the syntax)
WS-STRING is the input string
WS-WORD-LEN is the length of the word to be extracted
WS-LST-WORD-P is the starting position of the current word to be extracted (within the string)
WS-SUB1 is the loop index
PERFORM TEST BEFORE
VARYING WS-SUB1 FROM 1 BY 1
UNTIL WS-SUB1 > WS-STRING-LEN //loop for each char in the string, add 1 to WS-SUB1 in each iteration
EVALUATE TRUE
WHEN WS-STRING(WS-SUB1:1) = SPACES //if the current char is a space
MOVE WS-SUB1 TO WS-SUB1-FRD
ADD 1 TO WS-SUB1-FRD
IF WS-STRING(WS-SUB1-FRD:1) = SPACES //Checks to see if the next char is a space
ADD 1 TO WS-LST-WORD-P
ELSE
MOVE WS-STRING(WS-LST-WORD-P:WS-WORD-LEN) //Substring "WS-WORD-LEN" many chars from "WS-STRING" starting at "WS-LST-WORD-P" into "WS-WORD-OUT"
TO WS-WORD-OUT
ADD 1 TO WS-COUNT(2)
PERFORM Z-400-OUTPUT-WORD //This outputs "WS-WORD-OUT" to a file.
MOVE WS-SUB1 TO WS-LST-WORD-P
MOVE 1 TO WS-WORD-LEN
END-IF
WHEN OTHER
ADD 1 TO WS-WORD-LEN
END-EVALUATE
END-PERFORM
MOVE 1 TO WS-LST-WORD-P
This algorithm sort of works, but some of the output is truncated at the beginning/end.
So, any ideas on what's wrong here, or can you suggest something better?
A:
Have you looked at UNSTRING ? It would seem tailor-made for your situation.
MOVE 1 TO WS-SUB1
PERFORM UNTIL WS-SUB1 >= LENGTH OF WS-STRING
UNSTRING WS-STRING DELIMITED SPACE
INTO WS-WORD-OUT COUNT IN WS-WORD-LEN
POINTER WS-SUB1
END-UNSTRING
ADD 1 TO WS-COUNT(2)
PERFORM Z-400-OUTPUT-WORD
ADD WS-WORD-LEN TO WS-SUB1
END-PERFORM
Nota bene: the code is just freehand, uncompiled and untested.
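For readers comparing against other languages, the same whole-word scan (skip runs of spaces, emit each maximal non-space run) is only a few lines elsewhere; a hedged Python sketch mirroring the pointer-based loop:

```python
def extract_words(s):
    """Scan s left to right: skip spaces, emit each maximal non-space run."""
    words, i = [], 0
    while i < len(s):
        if s[i] == " ":
            i += 1                       # skip over a space
            continue
        j = i
        while j < len(s) and s[j] != " ":
            j += 1                       # extend to the end of the word
        words.append(s[i:j])
        i = j
    return words

print(extract_words("  MOVE 1 TO  WS-SUB1 "))  # ['MOVE', '1', 'TO', 'WS-SUB1']
```

This avoids the off-by-one clipping in the question by always cutting the word at its exact start and end positions, which is essentially what UNSTRING ... DELIMITED SPACE does for you.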
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Sharing images from one app to Whatsapp
I want to share images from my application to WhatsApp, but I am not able to do this. I placed all images into the assets folder. What do I have to change?
public void onClickWhatsApp() {
Intent waIntent = new Intent(Intent.ACTION_SEND);
waIntent.setType("text/plain");
String text = "Hey, check out this cool game for Android ‘Free Ticket Bollywood Quiz’ www.globussoft.com";
waIntent.setPackage("com.whatsapp");
if (waIntent != null) {
waIntent.putExtra(Intent.EXTRA_TEXT, text);//
startActivity(Intent.createChooser(waIntent, "Share with"));
} else {
Toast.makeText(this, "WhatsApp not Installed", Toast.LENGTH_SHORT)
.show();
}
}
A:
Your title is about sharing an image, but you try sharing text. Well, try any of these.
Try this to share text:
Intent whatsappIntent = new Intent(Intent.ACTION_SEND);
whatsappIntent.setType("text/plain");
whatsappIntent.setPackage("com.whatsapp");
whatsappIntent.putExtra(Intent.EXTRA_TEXT, "This is a test text");
try {
activity.startActivity(whatsappIntent);
} catch (android.content.ActivityNotFoundException ex) {
ToastHelper.MakeShortText("Whatsapp have not been installed.");
}
Try this to share image:
Intent whatsappIntent = new Intent(Intent.ACTION_SEND);
Uri uri=Uri.parse("file:///android_asset/myimage.png");
whatsappIntent.setType("image/*");
whatsappIntent.setPackage("com.whatsapp");
whatsappIntent.putExtra(Intent.EXTRA_STREAM, uri);
try {
activity.startActivity(whatsappIntent);
} catch (android.content.ActivityNotFoundException ex) {
ToastHelper.MakeShortText("Whatsapp have not been installed.");
}
Hope it helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How are fields initialized by the default constructor
Many authors have written in their books that the default values of instance variables inside a class are set by the class's default constructor, but I have trouble understanding this fact.
class A {
int x;
A() {}
}
As I have provided the default constructor of class A, how is the value of x initialized to 0?
A:
Explanation
As written in the JLS, fields are always automatically initialized to their default value, before any other assignment.
The default for int is 0. So this is actually part of the Java standard, per definition. Call it magic; it has nothing to do with what is written in the constructor or anything.
So there is nothing in the source code that explicitly does this. It is implemented in the JVM, which must adhere to the JLS in order to be a valid implementation of Java (there is more than one Java implementation).
See §4.12.5:
Initial Values of Variables
Each class variable, instance variable, or array component is initialized with a default value when it is created (§15.9, §15.10.2)
Note
You can even observe that this happens before any assignment. Take a look at the following example:
public static void main(String[] args) {
System.out.println("After: " + x);
}
private static final int x = assign();
private static int assign() {
// Access the value before first assignment
System.out.println("Before: " + x);
return x + 1;
}
which outputs
Before: 0
After: 1
So x is already 0 before the first assignment x = .... It is immediately defaulted to 0 at variable creation, as described in the JLS.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Which philosophers has proven existing is being part of the change in time?
Does anyone know any philosopher(s)/mathematician(s) who has proven that existing is being part of change in time, or a journal article from a credentialed academic/scholar who concludes this with proof or claims it by valid axiom(s)?
Some sources came up, like Kant and Hume, but I seek specific references to sources (and page numbers).
By change I mean that the arrow of life is always moving forward in time, independent of the observer. An axiom would be even better.
A:
Which philosophers have proven existing is being part of the change in time?
Coincidentally, I became interested in the work of Lee Smolin just last evening. A renowned theoretical physicist, he has made major contributions to the philosophy of physics. His areas of research includes cosmology. According to Wikipedia, in an article he wrote for Physics World, The Unique Universe (02 Jun 2009) Smolin shared profound discoveries about the nature of time:
There is only one universe.
All that is real is real in a moment, which is a succession of moments. Anything that is true is true of the present moment. Not only is time real, but everything that is real is situated in time. Nothing exists timelessly.
[Which is just a different way of saying that everything exists within the framework of time.]
Everything that is real in a moment is a process of change leading to the next or future moments. Anything that is true is then a feature of a process in this process causing or implying future moments.
Mathematics is derived from experience as a generalization of observed regularities, when time and particularity are removed. Under this heading, Smolin distances himself from mathematical platonism...
Furthermore:
Smolin views rejecting the idea of a creator as essential to cosmology on similar grounds to his objections against the multiverse. He does not definitively exclude or reject religion or mysticism but rather believes that science should only deal with that of which is observable. He also opposes the anthropic principle, which he claims "cannot help us to do science."
Outlining a review of Smolin's book, Time Reborn (2013), the Perimeter Institute for Theoretical Physics (Canada) presents many intriguing points, including:
Whatever is real is just real in a moment of time, being one in a succession
of moments.
The past was, but no longer is, real. We can, however, interpret the past by
finding evidence of past processes in the present.
The future does not yet exist, and thus it is open. We can, however, make
predictions, yet the future may produce genuinely novel phenomena.
Nothing transcends time, not even the laws of nature, which therefore can
evolve over time.
I easily resonate with Smolin's ideas, except that I happen to believe contrarily that God does fit neatly into the elusive logical equation of ultimate truth and reality; despite the fact that so many philosophers and religious doctrines have traditionally (and probably wrongly, in my humble opinion) concluded that God transcends both space and time. My own belief is rather a little bit more like Spinoza's, and Bernardo Kastrup's, who stated in one of his videos that "the Universe is the body of God". I believe that God actually equals (rather than transcends or exists outside of) space and time, that God = Universe. I also believe God is Righteous (capital R).
In conclusion, to answer your question, "Which philosophers have proven existing is being part of the change in time?"
Lee Smolin has done an excellent job of proving it scientifically, and has given the world a tremendous body of evidence for it in his books, essays, and papers. Smolin is (or has been) also on the faculty of the Perimeter Institute for Theoretical Physics (cited here).
Wikipedia
Physics World, "The Unique Universe" by Lee Smolin (02 Jun 2009)
Academia, presentational review by the Perimeter Institute for Theoretical Physics of Smolin's book, "Time Reborn", and article, "Time, Laws, and the Future of Cosmology", in Physics Today, AIP, March, 2014
Time Reborn by Lee Smolin (2013)
YouTube, "Lee Smolin Public Lecture: Time Reborn"
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I remove index.php/html from the url
I am running into a problem concerning the mod_rewrite.
RewriteCond %{THE_REQUEST} /index\.[php|html] [NC]
RewriteRule (.*)index\.[php|html]$ /$1 [R=301,L]
The idea is to get both domain.com/index.php and domain.com/index.html redirected to domain.com.
My attempts do not seem to work. The above code is one of my attempts.
It would be great if you could help me out!
A:
Your regex is incorrect. You need to use ( ) instead of [ ]: square brackets define a character class, not an alternation.
Use this rule:
RewriteCond %{THE_REQUEST} /index\.(php|html?) [NC]
RewriteRule ^(.*?)index\.(php|html?)$ /$1 [R=301,L]
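As an aside, the difference between [ ] (a character class) and ( ) (a grouped alternation) can be demonstrated with Python's re module; this is just an illustration, since Apache uses its own regex engine, but the two constructs behave the same way here:

```python
import re

# "[php|html]" is a character class: it matches exactly one of the
# characters p, h, |, l, t, m, not the literal words "php" or "html".
char_class = re.compile(r"index\.[php|html]$")

# "(php|html?)" is a grouped alternation: it matches the whole word
# "php", "htm", or "html", which is what the rewrite rule intends.
alternation = re.compile(r"index\.(php|html?)$")

assert char_class.search("index.p") is not None   # one character: matches
assert char_class.search("index.php") is None     # whole word: no match
assert alternation.search("index.php") is not None
assert alternation.search("index.html") is not None
assert alternation.search("index.htm") is not None
```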
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Visual Studio (with TFS) history other than solution explorer?
Has anyone ever found other ways to get a file's commit history outside of solution explorer? It's really annoying that history is so stagnant because it is a really helpful view. I just wish it would show the current file. Here is the use case.
I build my gigantic solution, find random errors in files I have never heard of and want to know who's at fault. I can get to the file by double clicking from the Error List view, but right clicking doesn't work, nor does navigating View->Other Windows->History. If I can even get the history view, I just get the last history that I right-clicked from the Solution Explorer. +1 Also for anyone that has a way to find a file in the solution.
A:
Double-click the error message to open the file. Then File > Source Control > Annotate to put a list of revisions down the left hand side. You can then click a revision number to get the details.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Generating .resx resource files in the correct encoding
I know I can change a resource file's encoding in the Properties tool window (e.g. to Unicode/utf-16), but this only sets it for the existing file.
Can I get Visual Studio's Resource Generator (resgen.exe) to output files of a specific encoding in the first place, so that I don't need to change the encoding type every time I add, remove or update an entry in the file using the resource editor, or do I need to add a pre-build event command in the project properties?
A:
It turns out I was over-complicating matters, I opted to use the UTF-16-to-UTF-8 character converter instead.
This seems to be enough for my needs.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Align a label with textarea
How can I align the position of my "Description" label so that it lines up with the "Enter description" text (Label - TextArea)? These components are inside a GridPane and coded in JavaFX (so no Scene Builder tips can help me here).
Code:
Label descriptionLabel = new Label("Description:");
descriptionLabel.setPadding(new Insets(5, 5, 5, 5));
JFXTextArea descriptionTextArea = new JFXTextArea();
descriptionTextArea.setPadding(new Insets(5, 0, 5, 0));
descriptionTextArea.setPromptText("Enter description...");
gridPane.add(descriptionLabel, 0, 2);
gridPane.add(descriptionTextArea, 1, 2);
I've tried descriptionLabel.setAlignment(Pos.TOP_LEFT); but even that didn't help me out.
A:
You have to use GridPane constraints: valignment set to TOP and halignment set to RIGHT (see the Java Doc).
GridPane.setHalignment(descriptionLabel, HPos.RIGHT);
GridPane.setValignment(descriptionLabel, VPos.TOP);
After looking at your picture again, I think you only need setValignment.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How does the return type of the put method works in Hashtable or HashMap?
In the documentation, the return type of put for HashMap or Hashtable is the value type itself, as follows:
public V put(K key, V value)
and it says the return value is the previous value associated with key, or null if there was no mapping for key. (A null return can also indicate that the map previously associated null with key.)
Say, if I write a Hashtable as follows,
Hashtable<Integer, Character> hash = new Hashtable<Integer, Character>();
System.out.println(hash.put(12, 'c'));
Now, the above code returns null. I'm implementing a Hashtable and I'm slightly confused about when to return null or V, where V is the previous value associated with the key. My code is as follows:
// for dealing w/ the table data
class HashEntry<K, V> {
private final K key;
private V value;
public HashEntry(K key, V value) {
this.key = key;
this.value = value;
}
public K getKey() {
return key;
}
public V getValue() {
return value;
}
public void setValue(V value) {
this.value = value;
}
}
class MyHashTable <K, V> {
private int size;
private int DEFAULT_CAPACITY = 16;
private HashEntry<K, V>[] values = new HashEntry[DEFAULT_CAPACITY];
public V put(K key, V value) {
boolean insert = true;
for (int i = 0; i < size; i++) {
if (values[i].getKey().equals(key)) {
values[i].setValue(value);
insert = false;
}
}
if (insert) {
// method for check or increase capacity
increaseCapacity();
values[size++] = new HashEntry<K, V>(key, value);
}
return null;
// return value;
}
When should I return null or a value (of type V) in the implementation?
A:
In your example this is the first time you assign a value to the key 12, so, as stated in the documentation, null is returned since there was no previous value stored for that key.
If you then did:
System.out.println(hash.put(12, 'd'));
'c' would be printed because it is returned as the previous value for the key 12.
Edit based on additional info in question:
Here is a possible implementation of your put method to return the previous value correctly.
public V put(K key, V value) {
boolean insert = true;
V prevValue = null;
for (int i = 0; i < size; i++) {
if (values[i].getKey().equals(key)) {
prevValue = values[i].getValue();
values[i].setValue(value);
insert = false;
}
}
if (insert) {
// method for check or increase capacity
increaseCapacity();
values[size++] = new HashEntry<K, V>(key, value);
}
return prevValue;
}
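As a language-agnostic sanity check, the same put contract can be sketched in Python (purely illustrative; Python's built-in dict has no method with exactly this behavior):

```python
def put(table, key, value):
    """Mimic java.util.Map.put: store value and return the previous
    value associated with key, or None if the key was absent."""
    prev = table.get(key)
    table[key] = value
    return prev

h = {}
assert put(h, 12, 'c') is None   # first insert: no previous value
assert put(h, 12, 'd') == 'c'    # overwrite: previous value is returned
assert h[12] == 'd'
```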
|
{
"pile_set_name": "StackExchange"
}
|
Q:
tikz: how to create a swimlane diagram in tikz
I was wondering if there is a way to add swimlanes to a flowchart in tikz? There is a flowchart package for tikz and that is great. But for a specific application I am working on, I need to include swimlanes. I have an example of such a diagram from Lucidchart, but was hoping I could do this in tikz too.
Just to clarify, by "swimlanes" I mean the horizontal rows in the diagram that represent some group's ownership of that flowchart block.
I found this post as well, but I am not creating a sankey diagram. Not sure if I can get the lanes with a flowchart from his package or post.
Type of sankey diagram
Sample diagram:
A:
Assuming that you already have some code, like this one
\documentclass[tikz]{standalone}
\usetikzlibrary{shapes}
\tikzset{
recgray/.style={draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=gray!50,font=\sffamily},
rndgray/.style={rounded corners=1cm,draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=gray!50,font=\sffamily},
recgren/.style={draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=green!50,font=\sffamily},
diagren/.style={diamond,draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=green!50,font=\sffamily,aspect=1.5},
}
\begin{document}
\begin{tikzpicture}[x=4.5cm,y=2cm]
\node[rndgray] (cus1) at (0,3) {Order generated};
\node[recgray] (sale1) at (0,1) {Order completed};
\node[recgray] (cre1) at (0,-1) {Order received};
\node[recgray] (ware1) at (0,-3) {Order entered};
\node[recgren] (cre2) at (1,-1) {Check credit};
\node[diagren] (cre3) at (2,-1) {OK?};
\node[recgray] (cre4) at (4,-1) {Invoice prepared};
\node[recgray] (cre5) at (5,-1) {Invoice sent};
\node[recgren] (sale3) at (2,1) {Credit problem\\addressed};
\node[diagren] (sale4) at (3.75,1) {OK?};
\node[rndgray] (sale5) at (4.75,1) {Order generated};
\node[recgray] (ware4) at (4.25,-3) {Packages assembled};
\node[recgray] (ware5) at (5.25,-3) {Order shipped};
\node[rndgray,minimum width=4.5cm,text width=4.5cm] (x) at (5.5,3) {Process payment};
\begin{scope}[every path/.style={-latex}]
\draw (cus1) edge (sale1)
(sale1) edge (cre1)
(cre1) edge (cre2)
(cre2) edge (cre3)
(cre3) edge node[midway,fill=white,inner sep=2pt] {Yes} (cre4)
(cre4) edge (cre5)
(cre3) edge node[midway,fill=white,inner sep=2pt] {No} (sale3)
(sale3) edge (sale4)
(sale4) edge node[midway,above] {No} (sale5)
(sale4) edge node[midway,fill=white,inner sep=2pt] {Yes} ++(0,-1.5)
(cre1) edge (ware1)
(ware1) edge (ware4)
(ware4) edge (ware5);
\draw (cre5.north east) -- ++(0,3);
\draw (ware5.north east) -- ++(0,5);
\end{scope}
\end{tikzpicture}
\end{document}
You only have to add some rectangles and some normal nodes. \foreach may be very helpful here.
\documentclass[tikz]{standalone}
\usetikzlibrary{shapes}
\tikzset{
recgray/.style={draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=gray!50,font=\sffamily},
rndgray/.style={rounded corners=1cm,draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=gray!50,font=\sffamily},
recgren/.style={draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=green!50,font=\sffamily},
diagren/.style={diamond,draw,minimum width=3cm,minimum height=2cm,align=center,text width=3cm,fill=green!50,font=\sffamily,aspect=1.5},
}
\begin{document}
\begin{tikzpicture}[x=4.5cm,y=2cm]
%---
\foreach \i in {-4,-2,0,2} {
\draw (-.75,\i) rectangle (6.25,\i+2);
\draw (-.75,\i) rectangle (-.5,\i+2);
}
\node[rotate=90,font=\sffamily] at (-.625,1) {Sales};
\node[rotate=90,font=\sffamily] at (-.625,3) {Customer};
\node[rotate=90,font=\sffamily] at (-.625,-1) {Credit/Invoicing};
\node[rotate=90,font=\sffamily] at (-.625,-3) {Warehouse};
%---
\node[rndgray] (cus1) at (0,3) {Order generated};
\node[recgray] (sale1) at (0,1) {Order completed};
\node[recgray] (cre1) at (0,-1) {Order received};
\node[recgray] (ware1) at (0,-3) {Order entered};
\node[recgren] (cre2) at (1,-1) {Check credit};
\node[diagren] (cre3) at (2,-1) {OK?};
\node[recgray] (cre4) at (4,-1) {Invoice prepared};
\node[recgray] (cre5) at (5,-1) {Invoice sent};
\node[recgren] (sale3) at (2,1) {Credit problem\\addressed};
\node[diagren] (sale4) at (3.75,1) {OK?};
\node[rndgray] (sale5) at (4.75,1) {Order generated};
\node[recgray] (ware4) at (4.25,-3) {Packages assembled};
\node[recgray] (ware5) at (5.25,-3) {Order shipped};
\node[rndgray,minimum width=4.5cm,text width=4.5cm] (x) at (5.5,3) {Process payment};
\begin{scope}[every path/.style={-latex}]
\draw (cus1) edge (sale1)
(sale1) edge (cre1)
(cre1) edge (cre2)
(cre2) edge (cre3)
(cre3) edge node[midway,fill=white,inner sep=2pt] {Yes} (cre4)
(cre4) edge (cre5)
(cre3) edge node[midway,fill=white,inner sep=2pt] {No} (sale3)
(sale3) edge (sale4)
(sale4) edge node[midway,above] {No} (sale5)
(sale4) edge node[midway,fill=white,inner sep=2pt] {Yes} ++(0,-1.5)
(cre1) edge (ware1)
(ware1) edge (ware4)
(ware4) edge (ware5);
\draw (cre5.north east) -- ++(0,3);
\draw (ware5.north east) -- ++(0,5);
\end{scope}
\end{tikzpicture}
\end{document}
(Click on pictures to have a larger viewing area)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Show where temporaries are created in C++
What is the fastest way to uncover where temporaries are created in my C++ code?
The answer is not always easily deducible from the standard and compiler optimizations can further eliminate temporaries.
I have experimented with godbolt.org and its fantastic. Unfortunately it often hides the trees behind the wood of assembler when it comes to temporaries. Additionally, aggressive compiler optimization options make the assembler totally unreadable.
Any other means to accomplish this?
A:
"compiler optimizations can further eliminate temporaries."
It seems you have a slight misunderstanding of the C++ semantics. The C++ Standard talks about temporaries to define the formal semantics of a program. This is a compact way to describe a large set of possible executions.
An actual compiler doesn't need to behave at all like this. And often, they won't. Real compilers know about registers, real compilers don't pretend that POD's have (trivial) constructors and destructors. This happens already before optimizations. I don't know of any compiler that will generate trivial ctors in debug mode.
Now some semantics described by the Standard can only be achieved by a fairly close approximation. When destructors have visible side effects (think std::cout), temporaries of those types cannot be entirely eliminated. But real compilers might implement the visible side effect while not allocating any storage. The notion of a temporary existing or not existing is a binary view, and in reality there are intermediate forms.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Jquery Cant GET returning value when suming values of a checkbox
I'm passing some values with IDs from selected checkboxes.
I'm collecting the values in an array to post; however, I also want to sum the values in the titles, but I can't get it to do this...
Where am I going wrong? I know it's in the calling of the variable that is returned, but I'm not sure how to GET it.
function doAlloMath(){
var sum=0;
alert($("input[name=alloInv]:checked").map(function () {return this.value;}).get().join(","));
alert($("input[name=alloInv]:checked").map(function () {return this.title;}).get().join(","));
alert($("input[name=alloInv]:checked").each(function (a,b) {sum += parseFloat ($(this.title)); return sum;}).get());
}
A:
A checkbox doesn't have a value unless it's explicitly set. To see whether the checkbox is checked or not you need to use prop('checked');
You could try something more like this:
function doAlloMath() {
var sum = 0,
elm = $("input[name=alloInv]:checked"),
values = elm.map(function() {
return this.value;
}).get().join(","),
titles = elm.map(function() {
return this.title;
}).get().join(",");
elm.each(function(idx, elem) {
sum += parseFloat(this.title);
});
console.log(sum);
console.log(values);
console.log(titles);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
I am not able to retrieve file name after submission
if($_SERVER['REQUEST_METHOD'] == 'POST') {
$image = $_FILES["image"]["name"];
echo "File: " . $image;
}
the echo is "File :" all the time
<form action="" method="post" id="formAddProperty">
<div id="propertyImage">
<label for="image">Upload image:</label>
<input type="file" name="image">
</div>
<input type="submit" value="Add Property" id="propertySubmit">
</form>
I'm running a local server via MAMP; is that an issue? The purpose is to get the file name and then its extension (which is not shown in this example).
A:
You are missing
enctype="multipart/form-data"
in the form
<form action="" method="post" id="formAddProperty" enctype="multipart/form-data">
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Oracle Trigger update column with information from another column in the same table
I need to create a trigger on the yyy table. After an insert, I must update the test1 column with the same information as the test2 column.
Could it be like this?
CREATE OR REPLACE TRIGGER TRG_update
AFTER INSERT ON yyy FOR EACH ROW
BEGIN
UPDATE yyy SET TEST1 = :NEW.TEST2
END
A:
CREATE OR REPLACE TRIGGER TRG_update
BEFORE INSERT ON yyy
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
:NEW.TEST1 := :NEW.TEST2;
END;
/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Acceptable use of os.open/read/write/close?
I intend to frequently read/write small pieces of information from many different files. The following somewhat contrived example shows substantially less time taken when using os operations for acting directly on file descriptors. Am I missing any downside other than the convenience of file objects?
import os
import time
N = 10000
PATH = "/tmp/foo.test"
def testOpen():
for i in range(N):
with open(PATH, "wb") as fh:
fh.write("A")
for i in range(N):
with open(PATH, "rb") as fh:
s = fh.read()
def testOsOpen():
for i in range(N):
fd = os.open(PATH, os.O_CREAT | os.O_WRONLY)
try:
os.write(fd, "A")
finally:
os.close(fd)
for i in range(N):
fd = os.open(PATH, os.O_RDONLY)
try:
s = os.read(fd, 1)
finally:
os.close(fd)
if __name__ == "__main__":
for fn in testOpen, testOsOpen:
start = time.time()
fn()
print fn.func_name, "took", time.time() - start
Sample run:
$ python bench.py
testOpen took 1.82302999496
testOsOpen took 0.436559915543
A:
I'll answer just so this doesn't stay open forever ;-)
There's really little to say: as you already noted, a file object is more convenient. In some cases it's also more functional; for example, it does its own layer of buffering to speed line-oriented text operations (like file_object.readline()) (BTW, that's one reason it's slower too.) And a file object strives to work the same way across all platforms.
But if you don't need/want that, there's nothing at all wrong with using the lower-level & zippier os file descriptor functions instead. There are many of the latter, and not all are supported on all platforms, and not all options are supported on all platforms. Of course you're responsible for restricting yourself to a subset of operations & options in the intersection of the platforms you care about (which is generally true of all functions in os, not just its file descriptor functions - the name os is a strong hint that the stuff it contains may be OS-dependent).
With respect to Pythons 2 and 3, the differences are due to the strong distinction Python 3 makes between "text" and "binary" modes on all platforms. It's a Unicode world, and "text mode" for file objects make no sense without specifying the intended encoding. In Python 3, a file object read method returns a str object (a Unicode string) if the file was opened in "text mode", but a bytes object if in "binary mode". Similarly for write methods.
Because the os file descriptor methods have no notion of encoding, they can only work with bytes-like objects in Python 3 (regardless of whether, e.g., on Windows, the file descriptor was opened with the low-level os.open() O_BINARY or O_TEXT flags).
In practice, in the example you gave, this just means you would have to change instances of
"A"
to
b"A"
Note that you can also use the b"..." literal syntax in a recent-enough version of Python 2, although it's still just a string literal in Python 2. In Python 3 it denotes a different kind of object (bytes), and file descriptor functions are restricted to writing and returning bytes-like objects.
But if you're working with "binary data", that's no restriction at all. If you're working with "text data", it may be (not enough info about your specifics to guess).
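For instance, the write/read loop from the question, ported to Python 3, only needs the bytes literals (a minimal sketch assuming a writable temp directory):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "foo.test")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)
try:
    os.write(fd, b"A")   # bytes literal: passing "A" would raise TypeError
finally:
    os.close(fd)

fd = os.open(path, os.O_RDONLY)
try:
    s = os.read(fd, 1)   # returns bytes, not str
finally:
    os.close(fd)

assert s == b"A"
os.remove(path)
```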
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Command not found error in Matlab, when tried to run system commands
I am trying to run image processing software called Flirt in Matlab. When I try
system'flirt'
I get a /bin/bash: flirt: command not found error.
If I try system('/usr/local/fsl/bin/flirt'); then it works fine. Typing just flirt in terminal also launches the program.
Is there a way of setting Matlab to find this program in path and running it without giving its full address?
A:
Check system path from within MATLAB using:
getenv('PATH')
Set from within MATLAB using:
setenv('PATH', [getenv('PATH') ':/usr/local/fsl/bin']);
If that fixes it, you can add the setenv line to your MATLAB startup file.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to reference Azure table data from a SQL Server table using a GUID
I want to implement a chat feature on my site using both SQL Server and a Azure table.
I want to store chat metadata (like who's talking to whom and when, etc.) in my Azure SQL database and also keep a GUID of the chat in SQL Server, but store the actual text of a chat in an Azure Table.
So, how would this work?
After reading about Azure tables, should I
store a GUID in SQL Server which represents the partition key in my Azure table?
so that all chats between user A and user B have the same GUID/partition?
then I can fetch all the messages by partition filtered by date!
would I need to use the row key in this scenario?
is there a limit on partitions, what if I end up having thousands or even millions?
Store a GUID for each message, storing everything but the actual message contents in SQL Server, therefore leading to possibly billions of rows for all the chats. In this scenario I guess I would only use/need 1 partition?
????
A:
In general this will work.
1 - Yes, using a repeatable, uniquely identifying key as the partition key is what you should do
The partition key can be a GUID. But maybe a hash value of User A&B's id would be better. Then you can still retrieve via partition key but it is not necessary to store it anywhere.
Yes you would still need rowkey as this is the primary key of the record. Partition Key is just a grouping of certain records.
There is no limit on partition keys - in general it should be something that re-occurs often, but then there should/may be thousands of them
2 - The billions of records will still be the case, depending on how you decide to store chat messages (store each chat line or after every x minutes or ...). But I would still suggest something like a Partition Key.
SQL Server 2016 and Azure SQL have a feature called 'Column Store Indexes' which greatly improves queries and optimizes the size of the data written to disk (unfortunately this is only available from the P1 tier in Azure)
Have you considered using Cosmos Db - Throughput would be better. That is if you are going to have lots of traffic. Cosmos Db is very fast and if you use Partitioned Collections then you will have the same features and unlimited storage space.
I am sure you have good reasons but it is a bit odd that you want to use three different storage types for this. Won't a single storage type do (SQL, Azure SQL, Azure Table storage, Cosmos Db,..)?
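The "hash value of User A & B's ids" suggestion in point 1 could be sketched like this (the function name and the choice of SHA-1 are illustrative assumptions, not an Azure API):

```python
import hashlib

def chat_partition_key(user_a, user_b):
    """Derive a repeatable partition key for a chat between two users.
    Sorting the ids first makes the key independent of who initiated
    the chat, so (A, B) and (B, A) land in the same partition."""
    low, high = sorted([str(user_a), str(user_b)])
    return hashlib.sha1(f"{low}|{high}".encode("utf-8")).hexdigest()

# Same partition regardless of argument order:
assert chat_partition_key("alice", "bob") == chat_partition_key("bob", "alice")
# Different user pairs get different partitions:
assert chat_partition_key("alice", "bob") != chat_partition_key("alice", "carol")
```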
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Problem with large index when testing my code
I am trying to learn Haskell, I want to write a recursive function and do not use any library functions. The function
nth :: Integer -> [a] -> Maybe a
takes an index n and a list of elements and returns the n-th element of the list (if the index is valid) or Nothing if
the index is invalid.
My code:
nth :: Integer -> [a] -> Maybe a
nth a [] = Nothing
nth a (x:xs) |a == 1 = Just x
|fromIntegral (length xs) < a = Nothing
|a==0 = Nothing
| otherwise = nth (a-1) xs
I want to run this test on my code:
spec = do
describe "nth" $ do
it "for valid indexes it behaves like (!!)" $
property $ \n xs -> n < 0 || (fromInteger n) >= length (xs::[Integer]) || Lists.nth n xs == Just (xs!!(fromInteger n))
it "for negative indexes it returns Nothing" $
property $ \n xs -> n >= 0 || Lists.nth n (xs::[Integer]) == Nothing
it "for too large indexes it returns Nothing" $
property $ \n xs -> (fromInteger n) < length xs || Lists.nth n (xs::[Integer]) == Nothing
but every time I run the test I get an error:
for valid indexes it behaves like (!!) FAILED [1]
for negative indexes it returns Nothing
+++ OK, passed 100 tests.
for too large indexes it returns Nothing FAILED [2]
1) Lists.nth for valid indexes it behaves like (!!)
Falsified (after 5 tests and 5 shrinks):
0
[0]
To rerun use: --match "/Lists/nth/for valid indexes it behaves like (!!)/"
./ListsSpec.hs:23:9:
2) Lists.nth for too large indexes it returns Nothing
Falsified (after 38 tests):
1
[0]
A:
There are some problems with your function. The reason the first case (behaving like (!!)) fails is that (!!) :: Int -> [a] -> a uses a zero-based index, whereas your function seems to work with a one-based index. That means you will need to decrement the index you give to the function.
Furthermore, in your function you make a comparison between n and fromIntegral (length xs). Since xs is the tail of the list, the check is not correct since it will, in certain circumstances, never consider the last element. Indeed:
Prelude> nth 2 [0, 2]
Nothing
Furthermore it is typically not a good idea to use length in each iteration. length runs in O(n), which means that your algorithm now runs in O(n^2), so as the list grows, this will easily start taking considerable time.
A shorter and more elegant way to fix this is probably:
nth :: Integral i => i -> [a] -> Maybe a
nth 1 (x:_) = Just x
nth i (_:xs) | i < 1 = Nothing
| otherwise = nth (i-1) xs
nth _ [] = Nothing
Here we thus have four cases: in case the index is 1 and the list is non-empty, we return the head of the list, wrapped in a Just. If the index is not one, and it is less than one, then the index is too small, and hence we return Nothing (this case is strictly speaking not necessary). If i is greater than one, then we call nth (i-1) xs. Finally, if we have reached the end of the list (or the list was empty in the first place), we return Nothing as well.
Now in order to test this, we thus need to rewrite these three cases:
describe "nth" $ do
it "for valid indexes it behaves like (!!)" $
property $ \n xs -> n <= 0 || n > length (xs :: [Integer]) || Lists.nth n xs == Just (xs !! (n-1))
it "for negative indexes it returns Nothing" $
property $ \n xs -> n > 0 || Lists.nth n (xs :: [Integer]) == Nothing
it "for too large indexes it returns Nothing" $
property $ \n xs -> n <= length xs || Lists.nth n (xs :: [Integer]) == Nothing
The first one thus excludes n <= 0 (negative or zero indices) as well as n > length xs and thus checks if the value is Just (xs !! (n-1)).
The second case excludes values greater than zero, and checks if all remaining indices map on Nothing.
Finally the last property checks that for values that are higher than length xs, we obtain nothing as well.
Note that here nth uses one-based indexing. I leave it as an exercise to make it zero-based.
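For comparison, the same one-based lookup can be written recursively in Python, mirroring Maybe with None (an illustrative translation, not part of the exercise):

```python
def nth(i, xs):
    """Return the i-th element (one-based) of xs, or None if i is
    out of range, mirroring the Haskell Maybe result."""
    if not xs or i < 1:
        return None
    if i == 1:
        return xs[0]
    return nth(i - 1, xs[1:])

assert nth(1, [0, 2]) == 0
assert nth(2, [0, 2]) == 2     # the case the original buggy version missed
assert nth(3, [0, 2]) is None  # index too large
assert nth(0, [0, 2]) is None  # index too small
assert nth(1, []) is None      # empty list
```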
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can't stop Protractor from displaying file download prompt
Problem
I'm testing downloading a file, but when I trigger the download, the "Save as..." prompt appears.
I saw a solution in this SO question but it doesn't seem to work for me.
Config
My protractor config file looks like this (coffeescript):
exports.config =
capabilities:
browserName: "chrome"
shardTestFiles: true
maxInstances: 2
chromeOptions:
args: ['--no-sandbox', '--test-type=browser']
prefs:
download:
prompt_for_download: false
default_directory: '/'
default_content_settings:
popups: 0
More
On chromeOptions.pref webdriver docs states:
See the 'Preferences' file in Chrome's user data directory for examples.
I can't actually see default_directory in my own Chrome preferences file.
"download": {
"directory_upgrade": true,
"prompt_for_download": false
},
System
Protractor: Version 1.5.0 (pretty new)
Node: 0.10.28, 0.11.8 and 0.11.14
A:
Provide an absolute path to an existing directory in default_directory chrome preference.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
FluentNhibernate automapping a tree (recursive association)
I'm trying to automap a class Code. Codes can have (Sub)Codes.
public class Code
{
public virtual string Key{get;set;}
public virtual Code Parent{get; set;}
public virtual ICollection<Code> SubCodes{get;set;}
private ICollection<Code> subCodes = new Collection<Code>();
}
This works, but I get an IdParent column and an IdCode column in my table.
Naming the Parent property IdCode doesn't help; then I get an IdIdCode column and the IdCode column.
What do I need to do to fix this?
I use Automapping with a Configuration object
A:
Seems like your automapping uses a convention that adds the prefix Id to references as well as to the Id.
If you want, you can override this convention by using your own custom ForeignKeyConvention in the AutoMap configuration.
Otherwise, just name your db table columns accordingly.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I reference an image in my stencil theme
I am wondering how to correctly reference an image in my scss for my stencil theme I am working on. It works locally but when I upload my theme to bigcommerce it gives a 404 error.
background: url('../img/header-bg.png') no-repeat;
background: url('/assets/img/header-bg.png') no-repeat;
Those both work locally but both result in a 404 when I upload my theme. I have included the image in that directory and everything.
A:
I've had the same issue you are experiencing, and I'm not 100% sure what the BC-recommended way to reference background images in CSS is.
Assuming you are placing the images in your assets/img directory, I've found that calling background:url('../img/header-bg.png') like this has worked both locally and in production.
Hope this helps.
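One way to see why the ../img/ form works is to resolve the URL against the stylesheet's own location, as browsers do. A sketch, assuming the compiled CSS ends up under /assets/css/ (that path is an assumption about the theme layout):

```python
import posixpath

# Hypothetical location of the compiled stylesheet in a Stencil theme:
css_url = "/assets/css/theme.css"

# '../img/header-bg.png' is resolved against the stylesheet's directory:
resolved = posixpath.normpath(
    posixpath.join(posixpath.dirname(css_url), "../img/header-bg.png")
)
assert resolved == "/assets/img/header-bg.png"
```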
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Best practices for refactoring parameter file structure in multiple environments
Background info and what I've tried
Within each of two mostly distinct environments, say 'Research' and 'Production', there is a need for a structured parameter file. The file contains things like database connections, tables, and some other parameters, nothing unusual.
The current approach is to have two distinct parameter files, say "params.txt" and "params_prod.txt" (for production). (Also, in case it helps, we're not actually using .txt for this, we're using a markup language, but the details of that don't really matter for this question.)
Most of the information is duplicated between these two files, leading to lots of copy/paste. Plus, in version control, everyone has to manually make changes to two files and we have to trust that they are propagating changes if checking things in.
Needless to say, this leads to headaches to resolve when some divergence happens between the files. We have tried workarounds, like writing tests to check that the files are identical in all the ways they need to be, but this is not a perfect science since the structure of the files can change and the definition of what, exactly, must be identical between them changes too.
I have had one idea:
Just use a single file, but create sub-sections within the file. There can be a sub-section for general parameters that are always shared, another section for things that should be treated as the 'default' parameters (the 'Research' environment settings) and then another section for the 'Production' environment settings.
We already have code that parses the parameter files and instantiates objects, loads data, etc. etc., based on the parameters. We could go back and add some options to that code, such that it loads data according to which sub-section of the parameter file it is told to use and ignores parameters from the other section.
This has some benefits: (1) everything is in one file, so there's no "just trusting" that people are propagating changes. (2) It also does not require copy/paste code since anything that is shared between both environments needs to only appear in one section at the top of one parameter file. (3) If anything, this should make the parameter files themselves more modular and easier to use. (4) We save time/cost that would have been spent creating complicated test-based work-arounds that check whether the files are being propagated together during check-ins with version control. We don't need to do that in this case.
It does have costs too: (1) time spent making the newly formatted parameter file specifications in XSD so we can validate it and ensure it is backwards compatible; (2) costs if we do need to re-code and make our software-that-interfaces-with-parameter-files have options for whether to use 'Research' or 'Production'. (3) If any of the properties that are currently considered to be in the sub-section that is shared, but which suddenly become things where we want to have different options between 'Research' and 'Production', we would then have to refactor all parameter files to move that item down into the other sections.
(3) Seems like the biggest worry, but it also forces us to constantly re-factor these parameter files, which is a good thing in my view. Besides, if we want flexibility about re-assigning a parameter from one sub-section to another, there ought to be better ways to achieve it than duplicating the entire file.
Question
Are there any significant pitfalls that I am failing to realize about the proposed idea regarding don't-repeat-yourself vs. build-in-flexibility trade-offs in parameter file design?
A:
another option is to use a hierarchy of files:
there is a param_prod.txt where all parameters specific to production are set
then there is a param.txt where general parameters are set
to find a parameter, you first look in param_prod.txt; if you don't find it there, you then check param.txt (and if it's not found there either, either use an arbitrary default or raise an error)
pros: DRY, easy to create another branch, extensible to multiple levels by keeping a list of which files to search
cons: not immediately clear where a certain parameter is defined; need to create a new format where parameters can be left out (maybe); searching through all the files can be slow (use caching for a speedup)
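A minimal sketch of this lookup order (the file names are the hypothetical ones above; plain dicts stand in for the parsed parameter files):

```python
_MISSING = object()  # sentinel so None can be a legitimate default

def lookup(name, layers, default=_MISSING):
    """Search each parameter layer in order; the first hit wins."""
    for layer in layers:
        if name in layer:
            return layer[name]
    if default is _MISSING:
        raise KeyError(f"parameter {name!r} not defined in any layer")
    return default

# param_prod.txt: production-specific overrides
param_prod = {"db_host": "prod.example.com"}
# param.txt: general parameters shared by all environments
param_base = {"db_host": "localhost", "timeout": 30}

layers = [param_prod, param_base]  # most specific first
print(lookup("db_host", layers))  # -> prod.example.com
print(lookup("timeout", layers))  # -> 30
```

Python's collections.ChainMap implements this same first-hit-wins search over a list of mappings, so both the "where is this defined?" and the lookup-speed concerns are well-trodden ground.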
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can we release some memory in Objective-c that a variable does not own but points to?
I have some code like this:
NSObject *var1 = [[NSObject alloc] init];
NSObject *var2 = var1;
[var2 release];
var1 = nil;
Is this correct or is this a memory leak?
As far as I know only var1 can release the memory alloc-inited in the first line, as per the Object Ownership policy
A:
Your code will release the memory, because there is a single alloc and a single release; the number of pointers to the object is not a factor.
Ownership is a concept that the Object Ownership policy talks about because following its guidelines makes memory easier to manage and ultimately prevents problems like releasing things you shouldn't (or not releasing things you should).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Hadoop 2.9.2, Spark 2.4.0 access AWS s3a bucket
It's been a couple of days and I still can't download from a public Amazon bucket using Spark :(
Here is spark-shell command:
spark-shell --master yarn
-v
--jars file:/usr/local/hadoop/share/hadoop/tools/lib/hadoop-aws-2.9.2.jar,file:/usr/local/hadoop/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.199.jar
--driver-class-path=/usr/local/hadoop/share/hadoop/tools/lib/hadoop-aws-2.9.2.jar:/usr/local/hadoop/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.199.jar
Application started and shell waiting for prompt:
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.4.0
/_/
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val data1 = sc.textFile("s3a://my-bucket-name/README.md")
18/12/25 13:06:40 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 242.1 KB, free 246.7 MB)
18/12/25 13:06:40 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 24.2 KB, free 246.6 MB)
18/12/25 13:06:40 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop-edge01:3545 (size: 24.2 KB, free: 246.9 MB)
18/12/25 13:06:40 INFO SparkContext: Created broadcast 0 from textFile at <console>:24
data1: org.apache.spark.rdd.RDD[String] = s3a://my-bucket-name/README.md MapPartitionsRDD[1] at textFile at <console>:24
scala> data1.count()
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD.count(RDD.scala:1168)
... 49 elided
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.fs.StorageStatistics
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 77 more
scala>
All AWS keys and secret keys were set in hadoop/core-site.xml as described here: Hadoop-AWS module: Integration with Amazon Web Services
The bucket is public - anyone can download (tested with curl -O)
All the .jars, as you can see, were provided by Hadoop itself from the /usr/local/hadoop/share/hadoop/tools/lib/ folder
There are no additional settings in spark-defaults.conf; only what was sent on the command line
Neither jar provides this class:
jar tf /usr/local/hadoop/share/hadoop/tools/lib/hadoop-aws-2.9.2.jar | grep org/apache/hadoop/fs/StorageStatistics
(no result)
jar tf /usr/local/hadoop/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.199.jar | grep org/apache/hadoop/fs/StorageStatistics
(no result)
What should I do? Did I forget to add another jar? What is the exact configuration of hadoop-aws and aws-java-sdk-bundle? Which versions?
A:
Mmmm... I found the problem, finally.
The main issue is that the Spark I have is pre-built for Hadoop. It's 'v2.4.0 pre-built for Hadoop 2.7 and later'. This is a bit of a misleading title, as you can see from my struggles with it above. Spark actually ships with its own versions of the hadoop jars. The listing of /usr/local/spark/jars/ shows that it has:
hadoop-common-2.7.3.jar
hadoop-client-2.7.3.jar
....
it is only missing hadoop-aws and aws-java-sdk. I did a little bit of digging in the Maven repository: hadoop-aws v2.7.3 and its dependency aws-java-sdk v1.7.4, and voila! I downloaded those jars and sent them as parameters to Spark, like this:
spark-shell
--master yarn
-v
--jars file:/home/aws-java-sdk-1.7.4.jar,file:/home/hadoop-aws-2.7.3.jar
--driver-class-path=/home/aws-java-sdk-1.7.4.jar:/home/hadoop-aws-2.7.3.jar
Did the job!!!
I'm just wondering why all the jars from Hadoop (and I sent all of them as parameters to --jars and --driver-class-path) didn't take effect. Spark somehow automatically chose its own jars instead of what I sent.
A:
I advise you not to do what you did.
You are running pre-built Spark with Hadoop 2.7.x jars on Hadoop 2.9.2, and to solve the issue you added some more jars from the Hadoop 2.7.3 version to the classpath to work with s3.
What you should be doing is working with a "hadoop free" Spark version, providing the Hadoop jars by configuration, as you can see in the following link -
https://spark.apache.org/docs/2.4.0/hadoop-provided.html
The main parts:
in conf/spark-env.sh
If hadoop binary is on your PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
With explicit path to hadoop binary
export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)
Passing a Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
sqlite app keeps crashing
So I'm trying to set up a basic database with clients that have 4 fields: id, firstname, lastname and age. I have one method that puts data in and one that logs it out to make sure it's working. Here is what I have:
Right at the beginning of the MainActivity class:
SQLiteDatabase clientsDatabase;
In my onCreate method:
try
{
clientsDatabase = this.openOrCreateDatabase("Clients", MODE_PRIVATE, null);
clientsDatabase.execSQL("CREATE TABLE IF NOT EXISTS clients (id INT(3), fName VARCHAR, lName VARCHAR, age INT(3))");
}
catch(Exception e)
{
e.printStackTrace();
}
And my method that puts new data in is:
public void addMember(int id, String f, String l, int a)
{
clientsDatabase.execSQL("INSERT INTO clients (id, fName, lName, age) VALUES (" + id + ", '" + f + "', '" + l + "', " + a + ")");
}
And my method that logs the data out based on the id you give to it is:
public void printMember(int id)
{
Cursor c = clientsDatabase.rawQuery("SELECT * FROM clients WHERE id = " + Integer.toString(id), null);
int idIndex = c.getColumnIndex("id");
int fNameIndex = c.getColumnIndex("fName");
int lNameIndex = c.getColumnIndex("lName");
int ageIndex = c.getColumnIndex("age");
c.moveToFirst();
while (c != null)
{
Log.i("Results - id", Integer.toString(c.getInt(idIndex)));
Log.i("Results - First name", c.getString(fNameIndex));
Log.i("Results - Last name", c.getString(lNameIndex));
Log.i("Results - Age", Integer.toString(c.getInt(ageIndex)));
c.moveToNext();
}
c.close();
}
And FINALLY! I set up a button with the 'onClick' method of:
public void logUser(View view)
{
addMember(1, "Clark", "Kent", 30);
printMember(1);
}
The emulator crashes when I press the button, and this is what shows up in the logs (it was A LOT, so I didn't wanna make this post any longer, so I put some screenshots):
http://imgur.com/a/LpDDd
The weird thing is IT IS logging the correct information. It just crashes afterward for some reason.
And I know this isn't the best way to do this, but I really need to get this way to work, so any help is appreciated
A:
You should change your while loop to
while (!c.isAfterLast()) {
...
}
because c never becomes null; it is just moved down and down until it's past the last row of the result set, and calling the cursor's getters there throws an index-out-of-bounds exception, which would explain the crash after the correct rows are logged.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Need to remove specific file from html file input with multiple selection enabled
I have a file input field where I can select multiple files at once, and I show all the selected files in a list with the ability to remove a specific file. I can remove the file from the list, but I couldn't find a way to also remove it from the input field's file list. How can I remove a specific file from the input field's file list using jQuery? I don't want to use any plugin for this.
$('#uploadBtn').change(function(){
$('#attachments').html('');
var attachments = document.getElementById('uploadBtn');
var item = '';
for(var i=0; i<attachments.files.length; i++) {
item += '<li>' + attachments.files.item(i).name +
' <a href="#" id="'+ i +'" class="dlt-attch">Remove</a>' +
'</li>';
console.log(attachments.files.item(i).name);
}
$('#attachments').append(item);
$('.dlt-attch').click(function(e){
e.preventDefault();
var id = $(this).attr('id');
console.log(attachments.files);
$(this).parent().remove();
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input id="uploadBtn" multiple="multiple" type="file" name="attachments[]" class="upload" />
<ul id="attachments" style="margin-top: 10px; list-style-type: decimal;"></ul>
A:
Unfortunately you can't remove a file from that list because they are stored in a read-only FileList object: https://developer.mozilla.org/en-US/docs/Web/API/FileList
As an alternative you can keep your own array of files, but then you will need to use your own implementation to upload the files.
There is a similar question that was asked a few years ago but it is still valid:
How do I remove a file from the FileList - There is an answer that uses XMLHttpRequest to manually upload the files.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Blackberry 10 Installing .bar file
I only have a .bar file and its debug token. I want to run it on another BlackBerry Z10 device. How should I do this? Please help me.
A:
You need to install the debug token on the device. Once that is done you can use the following command to deploy bar files to the device; it can deploy multiple bar files.
./batchbar-deploy ~/Desktop/BARFOLDER 169.254.0.1 DEVICE_DEVELOPMENT_PASSWORD
Here 169.254.0.1 is the device IP when you attach it to the computer.
The device should be in development mode.
You can use the following command too.
./blackberry-deploy -installapp -package ./BAR_FILE.bar -device 169.254.0.1 -password DEVICE_DEVELOPMENT_PASSWORD
You can find more information here.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using ui dialog buttons
I have dialogs on my pages which use the jQuery UI dialog button styling; I use this code:
$('#pop_div').load("mypopname.php").dialog({
width: 880,
height: 650,
modal: true,
draggable: false,
resizable: false,
title: 'page title',
buttons: {
Cancel: function () {
$('#pop_div').dialog("close");
},
Submit: function () {
$("#frmname").submit();
}
}
});
I was wondering: is it possible to use the same (themed) buttons normally on any page (not necessarily in a dialog)?
A:
HTML:
<button>A button</button>
<input type="submit" value="A submit button" />
JavaScript:
$('button').button();
$('input[type=submit]').button();
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Google cloud platform, vm instance's ssh permission
In my Google Cloud Platform, vm instance, I accidentally changed the permission of /etc/ssh, and now I can't access it using ssh nor filezilla.
The log is as below:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0660 for '/etc/ssh/ssh_host_ed25519_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
key_load_private: bad permissions
The only things I can access are the gcloud command and the serial console.
I know I need to change the permissions back to 644 or 400, but I have no idea how, as I can't access SSH.
How do I change the permissions without SSH access?
Any help would be much appreciated!
A:
This problem can be solved by attaching the boot disk to another instance.
STEP 1:
Shutdown your instance with the SSH problem. Login into the Google Cloud Console. Go to Compute Engine -> VM instances. Click on your instance and make note of the "Boot disk" name. This will be the first disk under "Boot disk and local disks".
STEP 2:
Create a snapshot of the boot disk before doing anything further.
While still in Compute Engine -> Disk. Click on your boot disk. Click on "CREATE SNAPSHOT".
STEP 3:
Create a new instance in the same zone. A micro instance will work.
STEP 4:
Open a Cloud Shell prompt (this also works from your desktop if gcloud is setup). Execute this command. Replace NAME with your instance name (broken SSH system) and DISK with the boot disk name and ZONE with the zone that the system is in:
gcloud compute instances detach-disk NAME --disk=DISK --zone=ZONE
Make sure that the previous command did not report an error.
STEP 5:
Now we will attach this disk to the new instance that you created.
Make sure that the repair instance is running. Sometimes an instance can get confused on which disk to boot from if more than one disk is bootable.
Go to Compute Engine -> VM instances. Click on your instance. Click Edit. Under "Additional disks" click "Add item". For name enter/select the disk that you detached from your broken instance. Click Save.
STEP 6:
SSH into your new instance with both disks attached.
STEP 7:
Follow these steps carefully. We will mount the second disk to the root file system. Then change the permissions on the /mnt/repair/etc/ssh directory and contents.
Become superuser. Execute sudo -s
Execute df. Make sure that /dev/sdb1 is not mounted.
Create a directory for the mountpoint: mkdir /mnt/repair
Mount the second disk: mount /dev/sdb1 /mnt/repair
Change directories: cd /mnt/repair/etc
Set permissions for /etc/ssh (notice relative paths here): chmod 755 ssh
Change directories: cd ssh
Execute: chmod 644 *.pub
Execute: chmod 400 *key
ssh_config and sshd_config should still be 644. If not fix them too.
Shutdown the repair system: halt
STEP 8:
Now reverse the procedure and move the second disk back to your original instance and reattach. Start your instance and connect via SSH.
Note: To reattach the boot disk you have to use gcloud with the --boot option.
gcloud beta compute instances attach-disk NAME --disk=DISK --zone=ZONE --boot
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Were magical creatures and magic north of The Wall affected when dragons returned?
In the show it is evident that
With the rebirth of dragons, magic (or at least eastern
magic) seems to be slowly making its way back into the world
Warging is also considered as a magic and it seems it was already strong and popular among the wildlings before the birth of the dragons.
Were northern magics beyond the wall already strong, or did the rebirth of dragons give them a boost in some way?
Are Giants, White Walkers & Children of the Forest just non-human races, or they are also magical in some way and get a power boost when dragons are born?
A:
Northern magic has been boosted, but I doubt it was caused by the rebirth of dragons. Rather the rebirth of dragons seems to have been caused by the recent surge in magic, much like like northern magic in general. I would say the recent surge in magic could be attributed to several things, here are my top three:
The red comet. While it only became visible recently, we can assume it was drawing closer to the fire and ice world, and now that it is 'overhead' magic is at its strongest. Or, it too is the result of the surge in magic, though this seems unlikely.
Natural ebb and flow of this world. Magic rises and recedes and this is the way of things and always has been.
The Night's King. His rise could also have triggered the rise of magic, possibly to provide the means to resist him. The question here is, why didn't he rise sooner? If it is the same Night's King of legend, he has been around for a long time, so why now? If he is a new figure being mistaken for the Night's King, it begs the same question, why now? If he 'unleashed' or 'harnessed' some new power, I would think GRRM would have hinted at it by now. I thought about giving the Others their own category, but, although they seem to have somehow bestowed the Night's King's power on him, he appears to spearhead their advance. It's possible they have done something to open the flood gates of magic, but their absence from the shows and obscurity in the books tells me, no, they are not at the root of the rise in magic.
As far as other creatures/species go, I haven't noticed information pointing to giants being magical, though they may have been the result of magic, like dragons. The wildlings don't appear awed by giants, so it doesn't seem as if they have become more numerous during this time of high magic.
I would guess the Children of the Forest are magical. The tales and myths of man that touch on the children are wrapped in magic. Also, they are said to sing the song of earth in the True Tongue, which sounds like magic. But I wouldn't say their magic is increasing; rather, it appears to be nearly finished. Of the six children known to man, none have been reported to produce magic on a scale with what they are said to have done in past epochs, such as calling down great floods to kill the First Men. They harness magic, but the tone of GRRM's writing makes it sound as if it is the same magic they have always possessed, and their dwindling numbers are in lock-step with their dwindling magic.
The White Walkers, or Wight Walkers (as some call them), appear to be a direct result of the rise in power/magic of the Night's King and the Others (I take their mere return as evidence of a recent rise in the King's power; otherwise, why didn't he return sooner?)
I would say the Others sound the most tightly bound with magic, but I would not go so far as to suggest they are magical in nature. I like the idea of the red comet as the cause of magic in the world.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why can't I capture the type of the parameters of Angular's FormGroup constructor?
I'm trying to make a well-typed wrapper function that ultimately calls new FormGroup(), and I want to pass through parameters to it without just copy-pasting the signature.
TypeScript however complains that the FormGroup class type cannot be used as the type argument to TS's ConstructorParameters:
Type FormGroup does not satisfy the constraint new (...args: any[]) => any.
Type FormGroup provides no match for the signature new (...args: any[]): any.
This can be reproduced by excerpting from the Angular .d.ts:
declare interface AbstractControl { }
declare interface ValidatorFn { }
declare interface AbstractControlOptions { }
declare interface AsyncValidatorFn { }
declare class FormGroup {
constructor(
controls: {[key: string]: AbstractControl},
validatorOrOpts?: ValidatorFn | ValidatorFn[] | AbstractControlOptions | null,
asyncValidator?: AsyncValidatorFn | AsyncValidatorFn[] | null
);
}
type FormGroupParams = ConstructorParameters<FormGroup>; // <= Error occurs here
What gives? Why is something declared as a class with a constructor not newable to TS? And is there a way I can access this signature?
A:
The type signature that is implied by ConstructorParameters<T> (i.e. the presence of new (...args: any[]) => any) is checked only against the instance part of the class. However, the constructor is part of the static part of the class. The type checker does not see the presence of the constructor and hence gives you a type error. See the docs on this.
What you need, however, is a way of referring to the static side rather than the instance side, as the following works as intended:
declare interface FormGroupConstructor {
new (
controls: {[key: string]: AbstractControl},
validatorOrOpts?: ValidatorFn | ValidatorFn[] | AbstractControlOptions | null,
asyncValidator?: AsyncValidatorFn | AsyncValidatorFn[] | null
): FormGroup;
}
type FormGroupParams = ConstructorParameters<FormGroupConstructor>
The interface I've declared here is the type of the constructor function. In other words, you want to access the ConstructorParameters of the type of the constructor. So, instead of
type FormGroupParams = ConstructorParameters<FormGroup>
you'd just do
type FormGroupParams = ConstructorParameters<typeof FormGroup>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Loop in R duplicates columns in weighted average calculations
I've got a list of 137 xts objects, each with 4 variables. Each element is a daily series split by months.
Example code:
recession <- sample(1:100, 4169, replace = TRUE)
unemployed <- sample(1:100, 4169, replace = TRUE)
jobs <- sample(1:100, 4169, replace = TRUE)
insurance <- sample(1:100, 4169, replace = TRUE)
# sequence of daily dates from January 1st 2004 to May 31st 2015:
new1 <- seq(from=as.Date("2004-01-01"), to=as.Date("2015-05-31"), by = "day")
daily_df <- data.frame(date=as.Date(new1), unemployed, jobs, recession, insurance)
library(xts)
daily_series <- xts(daily_df[-1], order.by = as.Date(new1))
# split daily series into monthly elements of daily data:
split_list <- split(daily_series, f = "months", drop = FALSE, k = 1)
What I want to do is calculate a weighted average across all the variables and elements of the list, so I ran the following code:
monthly_av = NULL
for (i in 1:length(split_list)) {
for (j in 1:ncol(split_list[[i]])) {
monthly_av = cbind(monthly_av, xts(weighted.mean(split_list[[i]][,j]), order.by = index(split_list[[i]][1])))
}}
However, the output it gives me is this:
My desired output is an xts object with 137 rows and 4 columns corresponding to the 4 variables. I can't figure out why this is occurring - have I misspecified the loop, or is it the cbind function that's doing it?
A:
I have found a way to generate a table of monthly weighted average data:
data <- do.call(rbind,
lapply(1:length(split_list)
, function(x) apply(split_list[[x]], 2, weighted.mean)))
> dim(data)
[1] 137 4
And this will maintain an xts object:
data_xts <- do.call(cbind, lapply(1:4, function(x)
apply.monthly(daily_series[,x],weighted.mean)))
And using a for loop:
monthly_av = NULL
for (i in 1:ncol(daily_series)){
monthly_av <- cbind(monthly_av, apply.monthly(daily_series[,i],weighted.mean))
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jquery validate remote bug (async=false)
I think I've found a bug in the remote rule functionality of jquery validation (bassistance). I tested it with jquery.validation 1.9.0 and 1.10.0.
Here is my HTML and JS:
<!DOCTYPE html>
<html lang="nl">
<head>
<meta charset="utf-8">
<script src="jquery-1.7.1.min.js"></script>
<script src="jquery.validate.js"></script>
</head>
<body>
<script>
$(document).ready(function() {
window.validater = $("#SignUpForm").validate({
rules: {
"initials": {
required: true
},
"lastname": {
required: true
},
"phonenumber": {
required: true,
remote: { url: "/checkPhoneNumber.php", async:false }
}
},
messages: {
"initials": {
required: "U heeft uw voorletters niet ingevuld"
},
"lastname": {
required: "U heeft uw achternaam niet ingevuld"
},
"phonenumber": {
required: "U heeft uw telefoonnummer niet ingevuld",
remote: "U heeft geen geldig telefoonnummer ingevuld (formaat: +311230123456).",
}
}
});
});
</script>
<form enctype="" name="SignUpForm" method="POST" action="" class="fbForm " id="SignUpForm" novalidate="novalidate">
<div class="fbElement fbTextfield ">
<label for="initials">Voorletters <span class="require">*</span> </label>
<input type="text" name="initials" value="" style="" title="" class="activePlaceholder" id="initials">
<label class="error" generated="true" for="initials" style="display: none;"></label>
</div>
<div class="fbElement fbTextfield ">
<label for="prefix">Tussenvoegsel </label>
<input type="text" name="prefix" value="" style="" title="" class="activePlaceholder" id="prefix">
<label class="error" generated="true" for="prefix" style="display: none;"></label>
</div>
<div class="fbElement fbTextfield ">
<label for="lastname">Achternaam <span class="require">*</span> </label>
<input type="text" name="lastname" value="" style="" title="" class="activePlaceholder" id="lastname">
<label class="error" generated="true" for="lastname" style="display: none;"></label>
</div>
<div class="fbElement fbTextfield ">
<label for="phonenumber">Telefoonnummer <span class="require">*</span> </label>
<input type="text" name="phonenumber" style="" value="+311230123456" class="activePlaceholder" id="phonenumber">
<label class="error" generated="true" for="phonenumber" style="display: none;"></label>
</div>
<div style="" class="fbContainer " id="navContainer">
<button data-loading-text="Laden..." onclick="" value="Verder" name="SignupFB_NextFB_next" class="submit " type="Submit" id="SignupFB_NextFB_next">Verder</button>
</div>
</form>
</body>
</html>
As you may notice, I have a remote rule on the phonenumber field. The remote rule itself works fine. For testing purposes my checkPhoneNumber.php contains:
<?php
$valid = 'false';
echo $valid;
I really need the async:false. Otherwise I can't submit my form unless I first click the phonenumber field manually to trigger the ajax request. This is a known problem on Stack Overflow. However, when I add async:false, no validation messages appear on the fields above. Fields below the phonenumber field do not have this problem. When I move the phonenumber field above the initials field, there is no problem with the validation messages.
Does anyone know how to solve this problem, or know a workaround?
Thanks in advance,
William
A:
I found in jquery.validate.js (v1.10.0 - 9/7/2012) that the showErrors function is called when all rules are validated, and is invoked again separately when the remote rule is finished.
When the remote rule finishes and the showErrors function is called, this.errorList is reset, wiping out the previous validation messages it still holds at that point.
Commenting out the this.errorList = []; line fixes the above problem. I don't know if it breaks anything else.
edit:
It doesn't fail in the provided testcases.
showErrors: function(errors) {
if(errors) {
// add items to error list and map
$.extend( this.errorMap, errors );
//this.errorList = [];
for ( var name in errors ) {
this.errorList.push({
message: errors[name],
element: this.findByName(name)[0]
});
}
// remove items from success list
this.successList = $.grep( this.successList, function(element) {
return !(element.name in errors);
});
}
if (this.settings.showErrors) {
this.settings.showErrors.call( this, this.errorMap, this.errorList );
} else {
this.defaultShowErrors();
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Bootstrap: Use div as modal only for xs screens?
I have a two column Bootstrap layout on small screens and up. For the extra small size class, I'd like to hide the second column from the normal flow, and instead use it as the contents for a modal.
Is that possible in an elegant way (i.e. without moving it around the DOM with JavaScript)?
Example:
<div class="row">
<!-- Main Column -->
<div class="col-sm-8 col-md-9">
Main contents blah blah blah
</div>
<!-- Secondary Column -->
<div class="col-sm-4 col-md-3">
Other contents. These appear normally in sm, md, lg. In xs, it should be the contents of a modal.
</div>
</div>
Sample modal (in which the secondary column should be displayed):
<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button>
<h4 class="modal-title" id="myModalLabel">Modal title</h4>
</div>
<div class="modal-body">
This is where I want the contents from the secondary column
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
<button type="button" class="btn btn-primary">Save changes</button>
</div>
</div>
</div>
</div>
A:
Yes, you can do that by using jQuery's $.html() function to get the html of the column and then using $.html() again to place it in the modal. This, combined with Abdulla's recommendation, should give you what you want. Here is a demo I created:
$("#myModalBtn").click(function() {
$("#myModal .modal-body").html($("#myModalContent").html());
$("#myModal").modal("show");
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet"/>
<div class="container">
<div class="row">
<!-- Main Column -->
<div class="col-sm-8 col-md-9">
Main contents blah blah blah
<button type="button" class="btn btn-primary visible-xs" id="myModalBtn">Launch demo modal</button>
</div>
<!-- Secondary Column -->
<div class="col-sm-4 col-md-3 hidden-xs" id="myModalContent">
Other contents. These appear normally in sm, md, lg. In xs, it should be the contents of a modal.
</div>
</div>
</div>
<div class="modal fade" id="myModal" tabindex="-1" role="dialog" aria-labelledby="myModalLabel">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-header">
<button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button>
<h4 class="modal-title" id="myModalLabel">Modal title</h4>
</div>
<div class="modal-body">
This is where I want the contents from the secondary column
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
<button type="button" class="btn btn-primary">Save changes</button>
</div>
</div>
</div>
</div>
Click on Run Code Snippet then click on the "Full Page" link to see it in action.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I get into the subway without killing NSF?
I am trying to go through Deus Ex without killing anybody, so far I've managed to get through the Liberty Island mission and most of Battery Park.
However, now that UNATCO have cleared out the approach to the subway in Battery Park there are still some NSF in the entrance to the subway;
How do I get into the subway without attracting the attention of these three individuals, who're obviously waiting for me to enter the subway?
A:
As said by cloudymusic there is a steam vent that you can access to enter the subway. There is a website that contains a detailed walkthrough of the whole game which shows this -
Explore the shanty town. Inside (1) is a chest with a Lockpick, a Multitool and a Candy Bar. Inside (2) is a chest with a Medkit, a Prod Charger, and a flare. Inside (3) you'll find a Lockpick in a corner. Inside (4) is a steam vent that we will use to enter the subway. The steam vents can also be accessed by opening the indicated panel (inset).
A:
There is a metal hatch on the ground just outside the subway entrance, on the left side. This hatch leads to a network of ducts that you can use to get onto the subway platform while bypassing the main entrance. Avoiding the attention of the NSF personnel on the subway platform will still be something you have to deal with, however.
You can see the location of the hatch at 7:56 in this video:
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I run all unit tests of Jinja2?
I want to run the unittests of Jinja2 whenever I change something to make sure I'm not breaking something.
There's a package full of unit tests. Basically it's a folder full of Python files with the name "test_xxxxxx.py"
How do I run all of these tests in one command?
A:
It looks like Jinja uses the py.test testing tool. If so you can run all tests by just running py.test from within the tests subdirectory.
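If you'd rather stay entirely in the standard library, unittest's test discovery can also collect a folder of test_*.py files. A runnable sketch — the throwaway test tree built here is only so the snippet is self-contained; in a real checkout you would point discover() at the existing tests directory:

```python
import os
import tempfile
import textwrap
import unittest

# Build a tiny throwaway test tree so the discovery call below can run
# anywhere; replace `root` with the path to the real tests directory.
root = tempfile.mkdtemp()
with open(os.path.join(root, "test_sample.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import unittest

        class Sample(unittest.TestCase):
            def test_truth(self):
                self.assertTrue(True)
    """))

# discover() finds every file matching the pattern and loads its tests
suite = unittest.TestLoader().discover(root, pattern="test_*.py")
result = unittest.TextTestRunner(verbosity=0).run(suite)
```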
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C# How to let processes with different speeds work together
In the test scenario below I'd like to trigger some tasks by using multiple timers. One event can trigger another event.
An event must finish processing before a new process can be started. Events that get triggered while another event is processing shall queue up and start once nothing is processing. The timer doesn't need to be accurate.
Once a line has executed the code, which takes just a few seconds, the line can't take any new orders for minutes. That's the purpose of the timers.
The current problem in the code below is that things are getting mixed up in the real app: Line2 starts processing while Line1 still hasn't finished. How do I make the orders queue up properly and process them?
In the real app, MyTask will start to run the first lines of code back and forth; after a while the last lines of the MyTask code will be executed.
I'm a beginner, so please be patient.
public partial class Form1 : Form
{
readonly System.Windows.Forms.Timer myTimer1 = new System.Windows.Forms.Timer();
readonly System.Windows.Forms.Timer myTimer2 = new System.Windows.Forms.Timer();
int leadTime1 = 100;
int leadTime2 = 100;
public Form1()
{
InitializeComponent();
TaskStarter();
}
private void TaskStarter()
{
myTimer1.Tick += new EventHandler(myEventTimer1);
myTimer2.Tick += new EventHandler(myEventTimer2);
myTimer1.Interval = leadTime1;
myTimer2.Interval = leadTime2;
myTimer1.Start();
}
private void myEventTimer1(object source, EventArgs e)
{
myTimer1.Stop();
Console.WriteLine("Line1 Processing ");
MyTask();
Console.Write(" Line1 Completed");
myTimer1.Interval = 5000; // this leadtime is variable and will show how long the line can't be used again after the code is executed
myTimer2.Start();
myTimer1.Enabled = true;
}
private void myEventTimer2(object source, EventArgs e)
{
myTimer2.Stop();
Console.WriteLine("Line2 Processing ");
MyTask();
Console.Write(" Line2 Completed");
myTimer2.Interval = 5000; // this leadtime is variable
myTimer2.Enabled = true;
}
private void MyTask()
{
Random rnd = new Random();
int timeExecuteCode = rnd.Next(1000, 5000); // This leadtime does reflect the execution of the real code
Thread.Sleep(timeExecuteCode );
}
}
Update
Thanks to the input I was able to sort out the problems, which made me remove all the timers, as they were causing the asynchronous task processing. I now just lock the lines in a while loop until all orders are completed. All is done in a single thread. I suspect that to most pros my code will look very ugly. This solution is understandable with my 4 weeks of C# experience :)
The 2 List i use and the properties
public class Orders
{
public string OrderID { get ; set ; }
public Orders(string orderID) { OrderID = orderID; }
}
public class LineData
{
string lineID;
public string LineID { get { return lineID; } set { lineID = value; } }
private string orderId;
public string OrderID { get { return orderId; } set { orderId = value; } }
public string ID { get { return lineID + OrderID; } private set {; } }
public double TaskTime { get; set; }
}
Creating the Line data with the lead times per Line and Part
Adding some sample orders
A while loop runs until all orders are completed
public class Production
{
readonly static List<LineData> listLineData = new List<LineData>();
readonly static List<Orders> listOrders = new List<Orders>();
static void Main()
{
// List Line Processing Master Data
listLineData.Add(new LineData { LineID = "Line1", OrderID = "SubPart1", TaskTime = 3 });
listLineData.Add(new LineData { LineID = "Line1", OrderID = "SubPart2", TaskTime = 3 });
listLineData.Add(new LineData { LineID = "Line2", OrderID = "Part1", TaskTime = 1 });
listLineData.Add(new LineData { LineID = "Line3", OrderID = "Part1", TaskTime = 1 });
listLineData.Add(new LineData { LineID = "Line3", OrderID = "Part2", TaskTime = 2 });
// Create Order Book
listOrders.Add(new Orders("SubPart1"));
listOrders.Add(new Orders("SubPart2"));
listOrders.Add(new Orders("Part1"));
listOrders.Add(new Orders("Part2"));
listOrders.Add(new Orders("SubPart1"));
listOrders.Add(new Orders("SubPart2"));
listOrders.Add(new Orders("Part1"));
listOrders.Add(new Orders("Part2"));
listOrders.Add(new Orders("SubPart1"));
listOrders.Add(new Orders("SubPart2"));
listOrders.Add(new Orders("Part1"));
listOrders.Add(new Orders("Part2"));
while (listOrders.Count > 0)
{
CheckProductionLines();
Thread.Sleep(100);
}
}
Picking orders from listOrders and assigning them to the correct line.
Using DateTime.Now and adding the taskTime to determine whether a line is busy or not.
Sending the orders to void InitializeProduction(int indexOrder, string line) to process the order.
In a later step I'm going to make a function for Line1-LineX, as it is repetitive.
static DateTime timeLine1Busy = new DateTime();
static DateTime timeLine2Busy = new DateTime();
static DateTime timeLine3Busy = new DateTime();
static void CheckProductionLines()
{
// Line 1
int indexOrderLine1 = listOrders.FindIndex(x => x.OrderID == "SubPart1" || x.OrderID == "SubPart2");
if (indexOrderLine1 >= 0 && timeLine1Busy < DateTime.Now)
{
string id = "Line1" + listOrders[indexOrderLine1].OrderID.ToString();// Construct LineID (Line + Part) for Task
int indexTasktime = listLineData.FindIndex(x => x.ID == id); // Get Index LineData where the tasktime is stored
double taskTime = (listLineData[indexTasktime].TaskTime); // Get the Task Time for the current order (min.)
InitializeProduction(indexOrderLine1, "Line1"); // Push the start button to run the task
timeLine1Busy = DateTime.Now.AddSeconds(taskTime); // Set the Line to busy
}
// Line2
int indexOrderLine2 = listOrders.FindIndex(x => x.OrderID == "Part1"); // Pick order Line2
if (indexOrderLine2 >= 0 && timeLine2Busy < DateTime.Now)
{
string id = "Line2" + listOrders[indexOrderLine2].OrderID.ToString(); // Line2 + Order is unique ID in listLineData List
int indexTasktime = listLineData.FindIndex(x => x.ID == id);// Get Index LineData where the tasktime is stored
double taskTime = (listLineData[indexTasktime].TaskTime); // Get the Task Time for the current order (min.)
InitializeProduction(indexOrderLine2, "Line2"); // Push the start button to run the task
timeLine2Busy = DateTime.Now.AddSeconds(taskTime); // Set the Line to busy
}
// Line 3
int indexOrderLine3 = listOrders.FindIndex(x => x.OrderID == "Part1" || x.OrderID == "Part2"); // Pick order
if (indexOrderLine3 >= 0 && timeLine3Busy < DateTime.Now)
{
string id = "Line3" + listOrders[indexOrderLine3].OrderID.ToString(); // Line3 + Order is unique ID in listLineData List
int indexTasktime = listLineData.FindIndex(x => x.ID == id);// Get Index LineData where the tasktime is stored
double taskTime = (listLineData[indexTasktime].TaskTime); // Get the Task Time for the current order (min.)
InitializeProduction(indexOrderLine3, "Line3"); // Push the start button to run the task
timeLine3Busy = DateTime.Now.AddSeconds(taskTime); // Set the Line to busy
}
}
Here I initialize the production:
Remove the order from listOrders
In the real app, many tasks will be processed here
static void InitializeProduction(int indexOrder, string line)
{
Thread.Sleep(1000); //simulates the inizialsation code
Debug.WriteLine($"{line} {listOrders[indexOrder].OrderID} Completed ");
listOrders.RemoveAt(indexOrder); //Remove Order from List
}
}
I'm sure you will see a lot of room for improvement. If simple things can or even must be applied, I'm listening :)
A:
Addition after comments at the end
Your problem screams for a producer-consumer pattern. This lesser known pattern has a producer who produces things that a consumer consumes.
The speed in which the producer produces items can be different than the speed in which the consumer can consume. Sometimes the producer produces faster, sometimes the producer produces slower.
In your case, the producer produces "requests to execute a task". The consumer will execute a task one at a time.
For this I use Nuget package: Microsoft.Tpl.Dataflow. It can do a lot more, but in your case, usage is simple.
Normally there are a lot of multi-threading issues you have to think about, like critical sections in the send-receive buffer. TPL will handle them for you.
If the Producer is started, it produces requests to do something: tasks to execute and await, represented as Func<Task>. The producer will put these requests in a BufferBlock<Func<Task>>. It will produce as fast as possible.
First a factory that will create a Func<Task> with a random execution time. Note that every created action is not executed yet, thus the task is not running!
class ActionFactory
{
    private readonly Random rnd = new Random();

    public Func<Task> Create()
    {
        TimeSpan timeExecuteCode = TimeSpan.FromMilliseconds(rnd.Next(1000, 5000));
        return () => Task.Delay(timeExecuteCode);
        // if you want, you can use Thread.Sleep instead
    }
}
The producer is fairly simple:
class Producer
{
    private readonly BufferBlock<Func<Task>> buffer = new BufferBlock<Func<Task>>();

    public ActionFactory ActionFactory { get; set; }

    public ISourceBlock<Func<Task>> ProducedActions => buffer;

    public async Task ProduceAsync()
    {
        // Create several tasks and put them on the buffer
        for (int i = 0; i < 10; ++i)
        {
            Func<Task> createdAction = this.ActionFactory.Create();
            await this.buffer.SendAsync(createdAction);
        }

        // notify listeners on my output that I won't produce anything anymore
        this.buffer.Complete();
    }
}
If you want, you can optimize this: while awaiting SendAsync, you could create the next action, then await the SendAsync task before sending the next action. For simplicity I didn't do this.
The Consumer needs an input that accepts Func<Task> objects. It will read this input, execute the action and wait until the action is completed before fetching the next input from the buffer.
class Consumer
{
    public ISourceBlock<Func<Task>> ActionsToConsume { get; set; }

    public async Task ConsumeAsync()
    {
        // wait until the producer has produced something,
        // or says that nothing will be produced anymore
        while (await this.ActionsToConsume.OutputAvailableAsync())
        {
            // the Producer has produced something; fetch it
            Func<Task> actionToExecute = await this.ActionsToConsume.ReceiveAsync();

            // execute the action, and await the returned Task
            await actionToExecute();

            // loop: wait until the Producer produces a new action.
        }

        // if here: producer notified completion: nothing is expected anymore
    }
}
Put it all together:
ActionFactory factory = new ActionFactory();
Producer producer = new Producer
{
    ActionFactory = factory
};
Consumer consumer = new Consumer
{
    ActionsToConsume = producer.ProducedActions
};

// Start Producing and Consuming and wait until everything is ready
var taskProduce = producer.ProduceAsync();
var taskConsume = consumer.ConsumeAsync();

// now the producer is happily producing actions and sending them to the consumer.
// the consumer is waiting for actions to consume

// await until both tasks are finished:
await Task.WhenAll(new Task[] { taskProduce, taskConsume });
Addition after comment: do it with less code
The above seems a lot of work. I created separate classes, so you could see who is responsible for what. If you want, you can do it all with one buffer and two methods: a method that produces and a method that consumes:
private readonly BufferBlock<Func<Task>> buffer = new BufferBlock<Func<Task>>();

public async Task ProduceTasksAsync()
{
    // Create several tasks and put them on the buffer
    for (int i = 0; i < 10; ++i)
    {
        Func<Task> createdAction = ...
        await this.buffer.SendAsync(createdAction);
    }

    // producer will not produce anything anymore:
    buffer.Complete();
}

async Task ConsumeAsync()
{
    while (await this.buffer.OutputAvailableAsync())
    {
        // the Producer has produced something; fetch it, execute it
        Func<Task> actionToExecute = await this.buffer.ReceiveAsync();
        await actionToExecute();
    }
}
Usage:
async Task ProduceAndConsumeAsync()
{
    var taskProduce = ProduceTasksAsync();
    var taskConsume = ConsumeAsync();
    await Task.WhenAll(new Task[] { taskProduce, taskConsume });
}
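For comparison, here is the same shape — a bounded hand-off buffer between one producer and one consumer, with a sentinel value playing the role of Complete() — sketched with Python's standard library rather than TPL Dataflow. The doubling "task" is just a stand-in for real work:

```python
import queue
import threading

def producer(buffer):
    # produce ten work items, then signal completion with a sentinel
    for i in range(10):
        buffer.put(i)
    buffer.put(None)  # plays the role of buffer.Complete()

def consumer(buffer, results):
    # consume one item at a time until the sentinel arrives
    while True:
        item = buffer.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for the real task

buffer = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(buffer,))
t2 = threading.Thread(target=consumer, args=(buffer, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

queue.Queue handles the critical-section bookkeeping for you, just as BufferBlock does in the .NET version.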
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Separator Between Items in LongListSelector on WP
I Have a LongListSelector which is bonded to a contact list , i would like to add a little line to separate each contacts .
Here is my xaml :
<phone:LongListSelector>
<phone:LongListSelector.ItemTemplate>
<DataTemplate>
<StackPanel Orientation = "Horizontal" >
<TextBlock Text="{Binding informations}" Height="120" />
<Image Source="{Binding photo}" Height="90" Width="90" />
<Line Fill="Red" Height="2" />
</StackPanel>
</DataTemplate>
</phone:LongListSelector.ItemTemplate>
</phone:LongListSelector>
But there is no red line between the items, how can I add one?
EDIT :
Does it have to do with the fact that the orientation of my StackPanel is Horizontal?
A:
Yes, it's because of the "Horizontal".
Try this:
<phone:LongListSelector>
<phone:LongListSelector.ItemTemplate>
<DataTemplate>
<StackPanel>
<StackPanel Orientation = "Horizontal" >
<TextBlock Text="{Binding informations}" Height="120" />
<Image Source="{Binding photo}" Height="90" Width="90" />
</StackPanel>
<Rectangle Fill="Red" Height="2" />
</StackPanel>
</DataTemplate>
</phone:LongListSelector.ItemTemplate>
</phone:LongListSelector>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can i access the function of contract from different nodes?
I have two contracts, say A and B, and two nodes running on different machines, Machine1 and Machine2, with the same network id; I have added each peer using its node URL. Contract A is deployed on the blockchain by Machine1, and contract B is deployed by Machine2. Now I want to access the functions of contract A from Machine2 and Machine1, and also the functions of contract B from Machine2 and Machine1. How can I access the functions?
A:
To generate Abi goto https://etherchain.org/solc and place your contract code and get the abi
Use var contract = eth.contract(abi).at(contractaddress)
Replace abi and address with the ABI and address of the contract.
This will allow you to access the contract.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Shift list from given term
Basically, I have a list here:
["a", "b", "c", "d", "e"]
Given a specific term in the list (i.e. "c"), how can I make the list cycle through itself once, returning to the beginning once at the end?
Here's what I mean:
>>> list = ["a", "b", "c", "d", "e"]
>>> letter = "c"
>>> list = magicify(list, letter)
>>> list
["c", "d", "e", "a", "b"]
>>> letter = "a"
>>> magicify(list, letter)
["a", "b", "c", "d", "e"]
A:
You can do
def magicify(list, letter):
return list[list.index(letter):]+list[:list.index(letter)]
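A small variant that computes the index only once and avoids shadowing the built-in list. The behaviour for a missing letter (return an unrotated copy) is my own assumption, not part of the asker's spec:

```python
def magicify(items, letter):
    # rotate items so that `letter` becomes the first element
    try:
        i = items.index(letter)
    except ValueError:
        return list(items)  # letter not present: return an unrotated copy
    return items[i:] + items[:i]

print(magicify(["a", "b", "c", "d", "e"], "c"))  # ['c', 'd', 'e', 'a', 'b']
```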
|
{
"pile_set_name": "StackExchange"
}
|
Q:
quandl get cmd vs. eclipse
i am trying to download some stock data.
pretty easy code:
import quandl
import numpy as np
data = quandl.get("FSE/ADS_X",authtoken='xxx', collapse="monthly")
print(data.head(20).to_string())
when i run this in eclipse, it says
AttributeError: module 'quandl' has no attribute 'get'
running this in the cmd, it works perfectly fine.
my eclipse project is pointing at C:\Users\user\Anaconda3\python.exe. this python installation can use quandl in the cmd.
any ideas?
thanks!!
A:
My program was called quandl.py as well, so renaming it solved the issue.
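A quick way to diagnose this kind of file-shadowing is to ask Python which file a given module name actually resolves to — if your own quandl.py shows up instead of the installed package, rename your file. Shown here with the standard json module, since quandl may not be installed:

```python
import importlib.util

# find_spec resolves a module name the same way `import` would;
# spec.origin is the file that wins the import.
spec = importlib.util.find_spec("json")
print(spec.origin)
```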
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a way to flag the use of non-reentrant C library calls?
I'm working on a project that's heavily multi-threaded, and was wondering if there's a way to have the compiler flag the use of non-reentrant calls to the C library (e.g. strtok instead of strtok_r)? If not, is there a list of calls that are non-reentrant so I can grep through my code base periodically?
A related question is if there's a way to flag 3rd party library use of non-reentrant calls.
I'm assuming reentrancy implies thread-safety, but not necessarily the other way around. Is there a good reason to use non-reentrant calls in a threaded project?
A:
For source, you could possibly insist that every source file contains the line:
#include <beware.h>
after the C headers, and then the beware.h header file contains:
#define strtok unsafe_function_call_detected_strtok
#define getenv unsafe_function_call_detected_getenv
or some other suitable set of names that are unlikely to be real functions. That will result in compilation and/or linker errors.
For libraries, it's a bit more difficult. You can look into using nm to extract all the unresolved names in each object file and ensure none of the unsafe ones are called.
This wouldn't be the compiler doing it but it would be easy enough to incorporate into the build scripts. See the following transcript:
$ cat qq.c
#include <stdio.h>
int main (int argc, char *argv[]) {
printf ("Hello, world.\n");
return 0;
}
$ gcc -c -o qq.o qq.c
$ nm qq.o
00000000 b .bss
00000000 d .data
00000000 r .rdata
00000000 t .text
U ___main
00000000 T _main
U _puts
You can see the unresolved symbols in that output with a U marker (and gcc has very sneakily decided to use puts instead of printf since I gave it a constant string with no formatting commands).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I make Python remember settings?
I wrote the beautiful python example code below. Now how do I make it so when I exit then restart the program it remembers the last position of the scale?
import Tkinter
root = Tkinter.Tk()
root.sclX = Tkinter.Scale(root, from_=0, to=1500, orient='horizontal', resolution=1)
root.sclX.pack(ipadx=75)
root.resizable(False,False)
root.title('Scale')
root.mainloop()
Edit:
I tried the following code
import Tkinter
import cPickle
root = Tkinter.Tk()
root.sclX = Tkinter.Scale(root, from_=0, to=1500, orient='horizontal', resolution=1)
root.sclX.pack(ipadx=75)
root.resizable(False,False)
root.title('Scale')
with open('myconfig.pk', 'wb') as f:
cPickle.dump(f, root.config(), -1)
cPickle.dump(f, root.sclX.config(), -1)
root.mainloop()
But get the following error
Traceback (most recent call last):
File "<string>", line 244, in run_nodebug
File "C:\Python26\pickleexample.py", line 17, in <module>
cPickle.dump(f, root.config(), -1)
TypeError: argument must have 'write' attribute
A:
Write the scale value to a file and read it in on startup. Here's one way to do it (roughly),
CONFIG_FILE = '/path/to/config/file'
root.sclX = ...
try:
with open(CONFIG_FILE, 'r') as f:
root.sclX.set(int(f.read()))
except IOError: # this is what happens if the file doesn't exist
pass
...
root.mainloop()
# this needs to run when your program exits
with open(CONFIG_FILE, 'w') as f:
f.write(str(root.sclX.get()))
Obviously you could make it more robust/intricate/complicated if, for instance, you want to save and restore additional values.
A:
Just before the mainloop:
import cPickle
with open('myconfig.pk', 'wb') as f:
cPickle.dump(root.config(), f, -1)
cPickle.dump(root.sclX.config(), f, -1)
and, on subsequent runs (when the .pk file is already present), the corresponding cPickle.load calls to get it back and set it with ...config(**k) (also needs some trickery to confirm to cPickle that the pickled configuration is safe to reload, unfortunately).
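Note the argument order that caused the traceback in the question: dump takes the object first, then the file, then the protocol. A minimal save/restore roundtrip using the modern pickle module (cPickle was folded into pickle in Python 3); the file path and config dict are illustrative stand-ins for the widget state:

```python
import os
import pickle
import tempfile

config = {"scale": 750, "title": "Scale"}  # stand-in for the widget state

path = os.path.join(tempfile.mkdtemp(), "myconfig.pk")
with open(path, "wb") as f:
    pickle.dump(config, f, pickle.HIGHEST_PROTOCOL)  # object first, then file

with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored)
```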
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ArcGIS - How to programmatically create new dynamic layer for service?
I'm trying to create a number of widgets that will visualise different data by geographic area, e.g., the Potential Losses for Instrument X in Year Y for the Counties of State Z.
Ideally this would be a map of the state (tick, can do that)
showing the county borders (use the JavaScript API to display a basemap layer)
and then colour each county according to loss. :(
Going down the ArcGIS route I am assuming I need to create a dynamic layer here with all of the polygons for each county of each state defined and then somehow colour them according to data pulled out of a DB showing loss for that instrument, for that county for that year.
Could someone please
A) validate that that is the correct way to go and
B) point me in the right direction to some resources to help me learn
how to achieve this.
A:
I may not fully understand your question, but it may be simpler than you think.
You can do a lot with symbolization of the layer in the mxd before publishing as a map service. It's quite easy. Harder to do fancy symbolization with javascript. See "About symbolizing layers to represent quantity" http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#/About_symbolizing_layers_to_represent_quantity/00s500000034000000/
Easiest and best performance would be to have a county polygon layer with the attributes you are describing added to it, but you could also take a county layer and then "join" or "relate" a regular db table to it -- I'm using the ArcGIS terms for a database join.
Then in a web map, you can define a popup that could display additional attributes for each county. See
http://help.arcgis.com/en/webapi/javascript/arcgis/samples/widget_popupfl/index.html and there are several similar examples.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ContainsKey Thread Safe
In the following code:
public class StringCache
{
private readonly object lockobj = new object();
private readonly Dictionary<int, string> cache = new Dictionary<int, string>();
public string GetMemberInfo(int key)
{
if (cache.ContainsKey(key))
return cache[key];
lock (lockobj)
{
if (!cache.ContainsKey(key))
cache[key] = GetString(key);
}
return cache[key];
}
private static string GetString(int key)
{
return "Not Important";
}
}
1) Is ContainsKey thread safe? IOW, what happens if that method is executing when another thread is adding something to the dictionary?
2) For the first return cache[key], is there any chance that it could return a garbled value?
TIA,
MB
A:
The inherent thread safety of ContainsKey doesn't matter, since there is no synchronization between ContainsKey & cache[key].
For example:
if (cache.ContainsKey(key))
// Switch to another thread, which deletes the key.
return cache[key];
MSDN is pretty clear on this point:
To allow the collection to be accessed
by multiple threads for reading and
writing, you must implement your own
synchronization.
For more info, JaredPar posted a great blog entry at http://blogs.msdn.com/jaredpar/archive/2009/02/11/why-are-thread-safe-collections-so-hard.aspx on thread-safe collections.
A:
No, ContainsKey is not thread-safe if you're writing values while you're trying to read.
Yes, there is a chance you could get back invalid results -- but you'll probably start seeing exceptions first.
Take a look at the ReaderWriterLockSlim for locking in situations like this -- it's built to do this kind of stuff.
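The fix is the same in any language: hold the lock around both the membership check and the insert, so no other thread can mutate the dictionary between the two. A sketch of that pattern in Python (the compute callback and key are illustrative):

```python
import threading

class StringCache:
    def __init__(self, compute):
        self._lock = threading.Lock()
        self._cache = {}
        self._compute = compute

    def get(self, key):
        # The lookup and the insert happen under one lock, so no other
        # thread can mutate the dict between the check and the read.
        with self._lock:
            if key not in self._cache:
                self._cache[key] = self._compute(key)
            return self._cache[key]

calls = []
cache = StringCache(lambda k: calls.append(k) or "value-%d" % k)
threads = [threading.Thread(target=cache.get, args=(42,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(cache.get(42), len(calls))
```

Even with eight threads racing on the same key, the compute function runs exactly once.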
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Generating ice-sliding puzzles which spell out words
Let's say I have a puzzle like this:
The idea is that since you're on ice, once you start moving you can't stop until you hit a rock. If you treat the red/blue patches as paint that leaves trails when you move, then trying to collect all the coins in the fewest number of moves spells out a word.
However, as you can see, this puzzle heavily restricts what moves you can perform, making it very easy. I'd like to construct a puzzle which gives more freedom and possibly involves crossing over the same paint patches multiple times, but I'm afraid that the resulting puzzle might have multiple solutions or the painted word might be unrecognisable if you do things in a different order.
So my question is this: Is there a good way of either 1) generating a more open puzzle which still guarantees a unique solution or 2) modifying the puzzle mechanics so that 1) is easier to do?.
Edit: To clarify I'm not asking about generating these sorts of puzzles in general. Instead, given a target word, how can we generate a unique-solution puzzle which spells out that word? (via paint/any other mechanic)
A:
Three Ideas
It seems a bit unimaginative, but why not eliminate the paint blotches and replace them with "glow" tiles that change state from glowing to non-glowing and back (or perhaps switch to glowing and remain glowing) when the puck passes over them? Divide the puzzle into a series of "rooms", with one letter per room or a few letters per room, and engineer bottlenecks so that while it's a minor nightmare to get the puck out of one room into the next one, any solution that does get you out has automatically lit up all the right tiles by running over them the right number of times.
As a second thought, you could take advantage of having multiple pushable blocks that leave colour trails behind them. And while it might seem obvious what direction a block is supposed to be pushed in, such pushes obviously don't always commute. The real challenge then comes in figuring out the order and direction of block pushes to complete the letter in a room and get the puck through to the next room. Non-commutivity obviously isn't an exploitable tool if you're only ever moving one object around.
"Under the hood", puzzles of this type are just directed graphs, or (my preference of abstraction) automata. You have a set of states (position of the puck, position of any other movable objects), and a set of edges or transitions between them. You also have the states of any tiles that get painted, light up, etc. The more complex the topology of states and transitions in the automaton (vertexes, edges in the graph), the more complex the puzzle will be. Ensuring tiles are properly lit up, etc. is accomplished by imposing a guaranteed set of states that must be visited (and potentially, a guaranteed order of visitation) leading up to a new "room" or the end of the puzzle.
There are some wonderful tools out there for doing this using automata and formal languages, but you can also do it by hand. My recommendation would be to start with a state diagram with specific "rooms" as described above. Ignore any correspondence between states and positions for now. For each room, start adding states and transitions to make a room as topologically complex as desired. Add in loops and one-ways and blocking states (i.e. dead ends) and whatever else your heart desires.
With this done, now choose a small, specific set of states that the system "should" pass through in order to reach the "end of room" state. Again, don't worry about the correspondence of these states with the physical reality of the ice puzzle for now. If you find that there are other ways of getting to the end of room state that don't require you to pass through the necessary states (possibly in sequence), you then either have the option of editing the automaton to eliminate erroneous solutions, changing the correct solution, accepting multiple solutions, or, if you're feeling bold, syncing the automaton with a higher order automaton that dictates a broader order of operations. For this last option, think of a dungeon in a Zelda game.
You have to run through a maze (a low-level automaton) to get the dungeon item, then run through the same maze again to get the master key using the dungeon item, then run through the same @#$% maze a third time to get to the boss using the big key. The high-level automaton in this case is the enforced sequence "enter → item → big key → boss". Your high level automaton doesn't necessarily have to be this explicit. It could be based on pushing blocks, running over switches, basically anything that causes a state change.
However complex you ultimately wind up making your room's automaton, at the end you have a certain guaranteed complexity, and an assurance that a specific set of states must be visited (possibly in order) to reach the end-of-room state. Now comes the part where you turn your automaton into an ice rink. You do this by laying out rocks in ways that limit the puck's movement to paths that correspond to state transitions in the automaton.
I admit this second phase can be challenging, especially since the layout phase would have to be built around your "special sequence" of states corresponding to very specific physical transitions (to paint out a letter, light up a letter, etc.) but you have three saving graces:
You can always add as many intermediate conveyor states as needed to connect two states. Just make sure each conveyor state leading to state A can only exit to A (or another conveyor leading to A).
You can always place rocks out in the boonies without affecting any of the intermediate tiles in an ice puzzle.
The sky's the limit on how puck movement can potentially affect the state of the rink tiles. Not only do you have your original idea and the few alternatives I've suggested, there's nothing stopping you from making a puck that passes over tile X paint all the tiles in X's row up until it hits rocks, or paint X and its 4-connected neighbours, or do whatever, so long as the user can establish a consistent pattern between action and consequence.
Both of these processes (automaton design, rock layout) can be automated with a good deal of work, but they can just as easily be done by hand. My preference is always to do a smaller scale example by hand first, get a feel for how easy the process is when done manually and how painful the various steps would be if scaled up, and then assess whether programming an automated solution is worth the effort.
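The state-graph view in the third idea is easy to make concrete: each resting position of the puck is a node, each slide-until-rock move is an edge, and the set of reachable states falls out of a plain BFS. A sketch — the grid encoding ('#' for rocks, '.' for ice) is my own convention, not from the question:

```python
from collections import deque

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def slide(grid, pos, move):
    """Slide from pos until the next cell is a rock or the grid edge."""
    rows, cols = len(grid), len(grid[0])
    r, c = pos
    dr, dc = move
    while 0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc] != "#":
        r, c = r + dr, c + dc
    return (r, c)

def reachable(grid, start):
    """All resting positions reachable from start: the automaton's states."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        pos = frontier.popleft()
        for move in MOVES:
            nxt = slide(grid, pos, move)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

grid = ["....",
        ".#..",
        "...."]
print(sorted(reachable(grid, (0, 0))))
```

Layering painted/lit tiles onto the state (as a frozenset of tile coordinates, say) turns the same BFS into a search over the full puzzle automaton, which is also how you can machine-check that a level has a unique shortest solution.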
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Materialized view taking too much time
I created a fast-refresh materialized view. It takes almost 45 minutes to create, but it did not refresh in 24 hours. I tried it with an index and without an index. I checked the logs of all the tables; the max record count in a log table is 2 lakh (200,000). The query is below; please suggest what changes are needed.
CREATE MATERIALIZED VIEW LOG ON a WITH ROWID, SEQUENCE (COLUMN USED FROM THIS TABLE)
/
CREATE MATERIALIZED VIEW LOG ON P WITH ROWID, SEQUENCE (COLUMN USED FROM THIS TABLE)
/
CREATE MATERIALIZED VIEW LOG ON PG WITH ROWID, SEQUENCE (COLUMN USED FROM THIS TABLE)
/
CREATE MATERIALIZED VIEW LOG ON PN WITH ROWID, SEQUENCE (COLUMN USED FROM THIS TABLE)
/
CREATE MATERIALIZED VIEW LOG ON AP WITH ROWID
/
CREATE MATERIALIZED VIEW C_INFO
NOLOGGING
BUILD IMMEDIATE
refresh fast with rowid
on demand
AS
SELECT
A.ROWID ACTROWID , P.ROWID PREMROWID,
PG.ROWID PGROWID,AP.ROWID APROWID, PN.ROWID PNROWID,
...
FROM A, P, pg, ap, pn
WHERE
p.id = pg.id (+)
and pg.columname (+)= 'Value'
...
A:
Your materialized is not defined with a NEXT clause, therefore it will only refresh when you ask for it explicitely. You can use either DBMS_MVIEW.REFRESH directly or create a refresh group with DBMS_REFRESH.
In order to automate the refresh, you could program a job with DBMS_SCHEDULER or DBMS_JOB (dbms_job is deprecated in 11g).
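For example (treat this as a sketch; the MV name is taken from the question, the job name is invented), a manual fast refresh and an hourly scheduler job could look like:

```sql
-- Manual fast refresh ('F' = fast):
BEGIN
  DBMS_MVIEW.REFRESH('C_INFO', 'F');
END;
/

-- Scheduled hourly refresh via DBMS_SCHEDULER:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_C_INFO',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''C_INFO'', ''F''); END;',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/
```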
You could also define your MV with a NEXT clause, for example this will refresh the MV every hour:
CREATE MATERIALIZED VIEW C_INFO
NOLOGGING
BUILD IMMEDIATE
refresh fast with rowid
on demand
NEXT sysdate + 1/24
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Change text color of a predefined layout
I searched everywhere but I cannot find a solution for this simple question: I'm using a predefined layout R.layout.simple_list_item_1 but I cannot find the way to modify the text color of the TextViews inside it.
A:
If you don't want to use your own XML layout, you can get the TextView from that layout and set the color on that. The ID of the TextView in R.layout.simple_list_item_1 is android.R.id.text1 (see the source for that layout).
TextView tv = (TextView) findViewById(android.R.id.text1);
tv.setTextColor(getColor(R.color.new_text_color));
|
{
"pile_set_name": "StackExchange"
}
|
Q:
In Qt, how to customize a QTabWidget as below via qss?
I'm trying to customize a QTabWidget as below. But I don't know how to show the line marked by red color as below in qss.
A:
You have to style two different subcontrols of QTabWidget: pane and tab-bar.
Give pane a top border and a negative top:
QTabWidget::pane{
border-top: 1px solid red;
margin-top: -1px;
}
Now the selected tab of the tab-bar:
QTabBar::tab:selected{
border-top: 1px solid red;
border-left: 1px solid red;
border-right: 1px solid red;
background-color: rgb(240, 240, 240);
}
Please note that the selected tab can not have transparent background, otherwise the pane top border will show up behind it (here I provided a light gray background, just as an example).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Changing file name and adding some line in it
I have a series of files as below :
000_0123
000_0234
000_0345
000_0456
000_0678
000_0890
000_01123
000_01234
I want to change the names to :
000_123
000_234
000_345
000_456
000_678
000_890
000_1123
000_1234
and I want to add "#include<conio.h>" as the first line in each file. Can anyone help me?
A:
To add your line to the start of each file you could do
for i in 000*; do sed '1i#include<conio.h>' "$i"; done
1i means insert this at the first line (before existing first line). The existing first line becomes line 2. A warning: this command will fail (do nothing) for empty files.
The contents of all the files with the added line will appear in the terminal one after the other. If it looks right, then do again with -i to change the files in place
for i in 000*; do sed -i '1i#include<conio.h>' "$i"; done
If you just want to remove the leading 0 from after _ you could use rename to rename the files...
rename 's/0_0/0_/' 000*
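If rename isn't available (it's a Perl-based tool whose name and behavior vary across distros), plain bash parameter expansion does the same job; a sketch using a scratch directory:

```shell
# Recreate the example files in a scratch directory and rename them.
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
touch 000_0123 000_0234 000_01123
# ${f/0_0/0_} replaces the first occurrence of "0_0" with "0_" (bash syntax)
for f in 000_0*; do mv "$f" "${f/0_0/0_}"; done
ls
```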
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Calling by reference on F#
Is there an option to call a function by reference?
For example:
if I have a variable x in func1 and I want to send it to func2 so it can change it (without returning its new value etc.)
let func1 =
let mutable x = 1
func2 x
System.Console.WriteLine(x)
let func2 x =
x <- x + 1
So calling func1 will print "2"..
Is it possible? If so, how?
Thanks.
A:
You can use the & operator and byref keyword.
let func2 (x : int byref) =
    x <- x + 1

let func1 () =
    let mutable x = 1
    func2 &x
    System.Console.WriteLine(x)
Note that func2 has to be defined before func1, and func1 needs to be a function (note the ()) rather than a plain value binding for this to compile and run as expected.
See MSDN: Reference cells (F#) for more information.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is default shell for users in /etc/passwd for Solaris 11?
For a Solaris 11 Server Config Review that I am doing,
I have following lines in my /etc/passwd file:
root:x:0:0:Super-User:/root:/usr/bin/bash
daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
sys:x:3:3::/:
adm:x:4:4:Admin:/var/adm:
lp:x:71:8:Line Printer Admin:/:
uucp:x:5:5:uucp Admin:/usr/lib/uucp:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
.....
The last field of each line is the shell the user logs in with.
If nothing is mentioned, is it /usr/bin/bash by default?
From above, can I affirm if the accounts daemon, bin, sys, adm, lp, uucp can log in or not?
Please note that I have received this as an output to one of the scripts that my team had run, and hence, I may not be able to look for any info that you ask which is outside the script. But your help is really appreciated.
Thanks
A:
I believe the answers you've got so far are slightly inaccurate or at least incomplete.
You specifically mention that the question is related to Solaris 11 and this is important to the answer.
If no shell is explicitly mentioned in /etc/passwd then, as the man page says, /usr/bin/sh will be used; but this is a logical link to the Korn 93 shell. In other words: for those accounts where no shell is mentioned in /etc/passwd the shell is Korn 93, not Bourne Shell as you might think. Solaris used to have an affinity for the Korn shell (a long time ago), so this is the reason why /usr/bin/sh points to the Korn Shell.
Here's an Oracle link with more info: New shell in Oracle Solaris 11.
Extra info: So does this mean that the Korn shell is the "default shell" on Solaris 11? No!
When you create an account on Solaris 11 using the useradd command and you do not explicitly specify a shell then Bash shell (/usr/bin/bash) will be used. Hence I would say that Bash is the default shell on Solaris.
Hope this helps.
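To answer the last part of the question mechanically: accounts whose 7th colon-separated field is empty fall back to that default shell, and you can list them with awk. A sketch against sample data mirroring the question's lines:

```shell
# Write a sample mirroring the question's /etc/passwd lines, then list
# the accounts whose shell field (7th, colon-separated) is empty.
cat > /tmp/passwd.sample <<'EOF'
root:x:0:0:Super-User:/root:/usr/bin/bash
daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
EOF
awk -F: '$7 == "" {print $1}' /tmp/passwd.sample
```

(Whether those accounts can actually log in also depends on their password entries, which this check doesn't inspect.)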
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Nesting JQuery .click() events
I want to nest one .click() event with another but it's not working. I looked at the .on() event, but I don't think it's what I need. Below is basically what I have so far, but it's not working as intended.
I want to click on the 'adress1' button, get directed to the next page where I either click the 'profession1' button or the 'profession2' button, and depending on which of the last two buttons is clicked, something respective happens.
//HTML code for first button
<a href="ListofDestricts.html" class="adress" id="adress">adress1</a>
//HTML code on a different page for last two buttons
<a href="#" class="prefession-1" id="profession-1">profession1</a>
<a href="#" class="prefession-2" id="profession-1">profession2</a>
//Javascript/JQuery code
$("#adress").click(function(){
//Some action here based on #address click event
$("#profession-1").click(function(){
//Some action if #profession was clicked after #address
});
$("#profession-2").click(function(){
//Some other action if #profession2 was clicked instead of profession1
});
});
Someone had told me to use the following:
$('#adress').on('click', '#profession-1', function() {alert("x1")}).on('click', '#profession-2', function() {alert("x2")});
but its not working either. I feel like my program is not registering the click.
Your help is much appreciated!
A:
The "root" element, in this case #address, isn't a proper element to attach the click event to. You want to attach to some parent element and target a child element. Events in JavaScript bubble up and trickle back down to the element that initiated the event (see Event Propagation).
To remedy the issue:
$('#someParentEl').on(
'click',
'#profession-1',
function() {alert("x1")}
).on(
'click',
'#profession-2',
function() {alert("x2")}
);
Further Reading:
http://www.quirksmode.org/js/events_order.html
https://developer.mozilla.org/en-US/docs/Web/API/event.stopPropagation
http://learn.jquery.com/events/event-delegation/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Unsupported model *settingsmodel.Settings?
I'm using this ORM library for PostgreSQL: https://godoc.org/github.com/go-pg/pg#example-DB-Select and I'm having an odd issue which I don't understand.
I'm trying to SELECT data from my settings table to later update that value with a function.
package settingsmodel
import (
. "database"
)
type Settings struct {
Id int64
SiteName string
}
func Set(newValue string) bool {
site := &Settings {
SiteName: "MySite",
}
err := Db.Select(&site)
if err != nil {
panic(err) // This is where it panics
}
site.SiteName = newValue
err = Db.Update(site)
if err != nil {
panic(err)
}
return true
}
The error I'm getting is panic: pg: Model(unsupported *settingsmodel.Settings)
I have another function (in the same file) where I get the site name and it works perfectly fine:
func Get() string {
var site Settings
err := Db.Model(&site).First()
if err != nil {
panic(err)
}
return site.SiteName
}
I really don't understand why it's not working. Any help? Thanks!
A:
In the working example, you're passing a pointer to settings; in the first (non-working) example, you're passing a pointer to a pointer:
// &Settings - site is a pointer to a Settings struct
site := &Settings {
SiteName: "MySite",
}
// &site - pass a pointer to site, which is already a pointer
err := Db.Select(&site)
vs the working one:
// site is a value, not a pointer
var site Settings
// Pass a pointer to the value
err := Db.Model(&site).First()
|
{
"pile_set_name": "StackExchange"
}
|
Q:
elasticsearch select which field use for boost
Given an elasticsearch document like this:
{
"name": "bob",
"title": "long text",
"text": "long text bla bla...",
"val_a1": 0.3,
"val_a2": 0.7,
"val_a3": 1.1,
...
"val_az": 0.65
}
I need to make a search on Elasticsearch with a given boost value on the text field plus a per-document boost taken from a named field val_xy.
For example, a search could be:
"long" with boost value on text: 2.0 and general boost val_a6
So if "long" is found in the text field I use a boost of 2.0, plus a boost value from the field val_a6.
How can I do this search with a Java Elasticsearch client? Is it possible?
A:
What you want is a function_score query. The documentation isn't the best and can be highly confusing. But using your example above you'd do something like the following:
"function_score": {
"query": {
"term": {
"title": "long"
}
},
"functions": [
{
"filter": {
"term": {
"title": "long"
}
},
"script_score": {
"script": "_score*2.0*doc['val_a6'].value"
}
}
],
"score_mode": "max",
"boost_mode": "replace"
}
My eureka moment with function_score queries was figuring out you could do filters, including bool filters, within the "functions" part.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
A Knotty situation
Given the Dowker notation of a knot and its crossing signs, calculate its bracket polynomial.
Although there are more technical definitions, for this challenge it is enough to think of a knot as something made physically by attaching the two ends of a string together. Since knots exist in three dimensions, when we draw them on paper, we use knot diagrams - two-dimensional projections in which the crossings are of exactly two lines, one over and one under.
Here (b) and (c) are different diagrams of the same knot.
How do we represent a knot diagram on paper? Most of us aren't Rembrandt, so we rely on Dowker notation, which works as follows:
Pick an arbitrary starting point on the knot. Move in an arbitrary direction along the knot and number the crossings you encounter, starting from 1, with the following modification: if it's an even number and you're currently going over the crossing, negate that even number. Finally, pick the even numbers corresponding to 1, 3, 5, etc.
Let's try an example:
Taken with permission from wikimedia user Czupirek
On this knot, we chose "1" as our starting point and proceeded to move up and to the right. Every time we go over or under another piece of the rope, we assign the crossing point the next natural number. We negate the even numbers corresponding to strands that go over a crossing, for example [3,-12] in the diagram. So, this diagram would be represented by [[1,6],[2,5],[3,-12],[-4,9],[7,8],[-10,11]]. Listing the buddies of 1, 3, 5, 7, etc gives us [6,-12,2,8,-4,-10].
There are a few things to note here. First, the Dowker notation is not unique for a given knot, as we can choose an arbitrary starting point and direction. But, given the notation, one can fully determine the structure of the knot (technically, up to reflection of its prime knot components). While not all Dowker notations can form possible knots, in this problem you can assume that the input represents an actual knot.
To avoid the ambiguity between a knot's reflections, and to make the challenge easier to solve, you will also be given a list of crossing signs as input.
In a positive crossing the lower line goes to the left from the point of view of the upper line. In a negative crossing it goes to the right. Note that reversing the direction of going around the knot (i.e. reversing both the over line and under line) doesn't change the crossing signs. In our example the crossing signs are [-1,-1,-1,1,-1,1]. They are given in the same order as the Dowker notation, i.e. for crossings numbered 1, 3, 5, 7, etc.
In this challenge we will be calculating the bracket polynomial of a knot. It's an object that is invariant across most transformation of the knot diagram - a concept which makes it supremely useful in knot theory analysis. (Again, most knot theorists compute the bracket polynomial as an intermediate product on their way to computing the Jones polynomial, which is invariant across all transformations, but we will not be doing that.) So how does it work? The bracket polynomial is a Laurent polynomial - one in which the variable (traditionally named \$A\$) can be raised to negative powers, as well as positive.
For a given knot diagram \$D\$, the three rules for the polynomial, represented as \$\langle D\rangle\$, are:
A sole loop without any crossings has polynomial 1.
If we have a diagram consisting of \$D\$ and a loop disconnected from \$D\$, the polynomial for both is the polynomial for \$D\$ times \$(-A^2-A^{-2})\$.
This rule is the trickiest. It says that if you have a crossing in \$D\$ that looks like , then you can use this rule to simplify the knots in two different ways:
In the image above, the outlined crossing in the first diagram, which is of the form , can be transformed into as in the second figure (a.k.a. positive smoothing), or as in the third figure (negative smoothing).
So, the bracket polynomial of the first diagram is the bracket polynomial of the second times \$A\$ plus the third times \$A^{-1}\$, i.e.,
Confused yet? Let's do an example, trying to find the bracket polynomial of (Note: this is two knots linked together. This sort of diagram will not be a potential input in this challenge since the inputs will only be single knots, but it may appear as an intermediate result in the algorithm.)
We first use rule 3
We use rule 3 again on both of the new knots
We substitute these 4 new knots into the first equation.
Applying rules 1 and 2 to these 4 tell us
So, this tells us
Congrats on completing your brief intro to knot theory!
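Rules 1 and 2 above mechanize nicely; a minimal Python sketch (invented names) representing Laurent polynomials as exponent-to-coefficient dicts:

```python
# Laurent polynomials as {exponent: coefficient} dicts; multiplying by
# LOOP_FACTOR implements rule 2 (each extra disconnected loop contributes
# a factor of -A^2 - A^-2).
LOOP_FACTOR = {2: -1, -2: -1}

def multiply(p, q):
    """Multiply two Laurent polynomials, dropping zero coefficients."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

unknot = {0: 1}  # rule 1: a lone loop has polynomial 1
print(sorted(multiply(unknot, LOOP_FACTOR).items()))       # [(-2, -1), (2, -1)]
print(sorted(multiply(LOOP_FACTOR, LOOP_FACTOR).items()))  # [(-4, 1), (0, 2), (4, 1)]
```

Rule 3 is where the real work lives: each smoothing recursion multiplies one branch by {1: 1} (that is, A) and the other by {-1: 1}, then adds the results coefficient-wise.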
Input
Two lists:
Dowker notation, e.g. [6,-12,2,8,-4,-10]. Numbering of the crossings must start from 1. The corresponding odd numbers [1,3,5,7,...] are implicit and must not be provided as input.
Signs (1/-1 or if you prefer 0/1 or false/true or '+'/'-') for the crossings corresponding to the Dowker notation, e.g. [-1,-1,-1,1,-1,1].
Instead of a pair of lists, you could have a list of pairs, e.g. [[6,-1],[-12,-1],...
Output
Print or return the polynomial, for instance \$A^{-2}+5+A-A^3\$, as a list of coefficient-exponent pairs (or exponent-coefficient pairs) in increasing order of the exponents and without any zero coefficients, e.g. [[1,-2],[5,0],[1,1],[-1,3]].
Alternatively, output an odd-length list of coefficients correspondings to exponents \$-k\ldots k\$ for some \$k\in \mathbb{N}\$, e.g. [0,1,0,5,1,0,-1]. The central element is the constant term (coefficient before \$A^0\$). The leftmost and rightmost elements must not be both 0.
Rules
This is a code-golf challenge. None of the standard loopholes can be used, and libraries that have tools to calculate either Dowker notations, or Bracket polynomials, cannot be used. (A language that contains these libraries still can be used, just not the libraries/packages).
Tests
// 4-tuples of [dowker_notation, crossing_signs, expected_result, description]
[
[[],[],[[1,0]],"unknot"],
[[2],[1],[[-1,3]],"unknot with a half-twist (positive crossing)"],
[[2],[-1],[[-1,-3]],"unknot with a half-twist (negative crossing)"],
[[2,4],[1,1],[[1,6]],"unknot with two half-twists (positive crossings)"],
[[4,6,2],[1,1,1],[[1,-7],[-1,-3],[-1,5]],"right-handed trefoil knot, 3_1"],
[[4,6,2,8],[-1,1,-1,1],[[1,-8],[-1,-4],[1,0],[-1,4],[1,8]],"figure-eight knot, 4_1"],
[[6,8,10,2,4],[-1,-1,-1,-1,-1],[[-1,-7],[-1,1],[1,5],[-1,9],[1,13]],"pentafoil knot, 5_1"],
[[6,8,10,4,2],[-1,-1,-1,-1,-1],[[-1,-11],[1,-7],[-2,-3],[1,1],[-1,5],[1,9]],"three-twist knot, 5_2"],
[[4,8,10,2,12,6],[1,1,-1,1,-1,-1],[[-1,-12],[2,-8],[-2,-4],[3,0],[-2,4],[2,8],[-1,12]],"6_3"],
[[4,6,2,10,12,8],[-1,-1,-1,-1,-1,-1],[[1,-10],[2,-2],[-2,2],[1,6],[-2,10],[1,14]],"granny knot (sum of two identical trefoils)"],
[[4,6,2,-10,-12,-8],[1,1,1,1,1,1],[[1,-14],[-2,-10],[1,-6],[-2,-2],[2,2],[1,10]],"square knot (sum of two mirrored trefoils)"],
[[6,-12,2,8,-4,-10],[-1,-1,-1,1,-1,1],[[1,-2],[1,6],[-1,10]],"example knot"]
]
External resources
Not necessary for the challenge, but if you are interested:
A paper on Knot Polynomials
A paper on Dowker Notation
sandbox posts: 1, 2
thanks @ChasBrown and @H.Pwiz for catching a mistake in my definition of Dowker notation
A:
Brain-Flak, 1316 bytes
(({})<({()<(({}<>))><>}){(({})[()()]<{([{}]({})<>({}<>))}{}(([({}<>)]<<>({}<>)<>((({})<<>{({}<>)<>}<>>))>)){({}<>)<>}<>{}(({}<{}(({}<{({}<>)<>}>))>))<>{({}<>)<>}>)}<>>){(({}){}()<({}<>)>)<>{}(({}){}<>)<>}<>{}{}(()){(<({}<({}<>)>)>)<>((){[()](<(({})<>){({}[({})]<>({}<>))}{}({}<>({}<{}<>{({}<>)<>}>)[()])<>({}({})[()])(([()]{()(<({}[({})]())>)}{})<{(<{}{}>)}{}><>{()((<({}()[({}<>)])<>>))}{}<{}{}>)((){[()]<({}()<({}<({}<<>({()<({}<>)<>>}<>){({}[()]<(({})<({()<({}<>)<>>})<>>)<>{({}[()]<<>({}<>)>)}{}>)}<>>)<>>)>)((){[()](<{}(({})<<>(({})<(<<>({}<<>({}<(()()){({}[()]<([{}]()<>)<>({}<<>{({}({})<>[({}<>)])}{}{}>){({}<>)<>}<>>)}{}>{})>)>)<>{}{({}<>)<>}<>([({}<>)]<((()))>)(())<>({}<>)<>{}({}[()]){<>({}<<>(()()){({}[()]<({}<<>{({}<>)<>}>){({}[({})]<>({}<>))}{}(({})<<>({}<>)<>([{}])>)>)}{}{}>)<>({}<(({})())>[()]<>)}{}({}<<>{}([{}]()<{({}<>)<>}>){({}({})<>[({}<>)])}{}{}>){({}<>)<>}<>{}{}{}>{})>)>)}{}){(<{}(({})<<>(({}{})<<>(<({}<>)>)<>{}{({}<>)<>}<>>(({}){}){})>)>)}>}{}){(<{}([{}]<({}<<>([{}]()<>)<>({}<<>{({}({})<>[({}<>)])}{}{}>){({}<>)<>}<>>({})({}){})>)>)}{}>)}{}){{}(([{}]){}<>{}{}<<>({}<>{}){([{}]({}()()<{}({}<>)(())<>>))}{}{}{}>{})(())<>{{}({}<>)(())<>}(<>)<>}{}}{}{}<>{}{}({}<{{}({}<>)(())<>}<>{{}{((<(())>))}{}}{}{{}({}<>)(())<>}>)<>{{}({}<(<()>)<>([]){{}({}<>)(())<>([])}{}>)<>{{}({}<>)<>}{}{}({}<>)<>}<>
Try it online!
I regret nothing. Input is a flattened list of pairs.
# Part 1: extract edges
(({})<
({()<(({}<>))><>}){
(({})[()()]<
{([{}]({})<>({}<>))}{}(([({}<>)]<<>({}<>)<>((({})<<>{({}<>)<>}<>>))>)){({}<>)<>}
<>{}(({}<{}(({}<{({}<>)<>}>))>))<>{({}<>)<>}
>)}
<>>){(({}){}()<({}<>)>)<>{}(({}){}<>)<>}<>
{}{}(())
# Part 2: Compute bracket polynomial
{
# Move degree/sign to other stack
(<({}<({}<>)>)>)<>
# If current shape has crossings:
((){[()](<
# Consider first currently listed edge in set
# Find the other edge leaving same crossing
(({})<>){({}[({})]<>({}<>))}{}
# Move to top of other stack
# Also check for twist
({}<>({}<{}<>{({}<>)<>}>)[()])
# Check for twist in current edge
<>({}({})[()])
(
# Remove current edge if twist
([()]{()(<({}[({})]())>)}{})<{(<{}{}>)}{}>
# Remove matching edge if twist
<>{()((<({}()[({}<>)])<>>))}{}<{}{}>
# Push 1 minus number of twists from current vertex.
)
# If number of twists is not 1:
((){[()]<
# While testing whether number of twists is 2:
({}()<
# Keep sign/degree on third stack:
({}<({}<
# Duplicate current configuration
<>({()<({}<>)<>>}<>){({}[()]<(({})<({()<({}<>)<>>})<>>)<>{({}[()]<<>({}<>)>)}{}>)}
# Push sign and degree on separate stacks
<>>)<>>)
# If number of twists is not 2: (i.e., no twists)
>)((){[()](<{}
# Make first copy of sign/degree
(({})<<>(({})<
# Make second copy of sign/degree
(<<>({}<<>({}<
# Do twice:
(()()){({}[()]<
# Prepare search for vertex leading into crossing on other side
([{}]()<>)
# While keeping destination on third stack:
<>({}<
# Search for matching edge
<>{({}({})<>[({}<>)])}{}
# Replace old destination
{}>)
# Move back to original stack
{({}<>)<>}<>
>)}{}
# Add orientation to degree
>{})>)>)
# Move duplicate to left stack
<>{}{({}<>)<>}<>
# Create "fake" edges from current crossing as termination conditions
([({}<>)]<((()))>)(())<>
# Create representation of "top" new edge
({}<>)<>{}({}[()])
# While didn't reach initial crossing again:
{
# Keep destination of new edge on third stack
<>({}<<>
# Do twice:
(()()){({}[()]<
# Search for crossing
({}<<>{({}<>)<>}>){({}[({})]<>({}<>))}{}
# Reverse orientation of crossing
(({})<<>({}<>)<>([{}])>)
>)}{}
# Remove extraneous search term
{}
# Push new destination for edge
>)
# Set up next edge
<>({}<(({})())>[()]<>)
}
# Get destination of last edge to link up
{}({}<
# Find edge headed toward original crossing
<>{}([{}]()<{({}<>)<>}>){({}({})<>[({}<>)])}
# Replace destination
{}{}>)
# Move everything to left stack
{({}<>)<>}
# Clean up temporary data
<>{}{}{}
# Push new sign/degree of negatively smoothed knot
>{})>)
# Else (two twists)
# i.e., crossing is the twist in unknot with one half-twist
>)}{}){(<{}
# Copy sign and degree+orientation
(({})<<>(({}{})<
# Move sign to left stack
<>(<({}<>)>)
# Move copy of configuration to left stack
<>{}{({}<>)<>}
# Add an additional 4*orientation to degree
<>>(({}){}){})>)
>)}
# Else (one twist)
>}{}){(<
# Invert sign and get degree
{}([{}]<({}<
# Search term for other edge leading to this crossing
<>([{}]()<>)
# With destination on third stack:
<>({}<
# Find matching edge
<>{({}({})<>[({}<>)])}{}
# Replace destination
{}>)
# Move stuff back to left stack
{({}<>)<>}<>
# Add 3*orientation to degree
>({})({}){})>)
>)}{}
# Else (no crossings)
>)}{}){{}
# If this came from the 2-twist case, undo splitting.
# If this came from an initial empty input, use implicit zeros to not join anything
# New sign = sign - 2 * next entry sign
(([{}]){}<>{}{}<
# New degree = average of both degrees
<>({}<>{})
# Find coefficient corresponding to degree
{([{}]({}()()<{}({}<>)(())<>>))}{}{}
# Add sign to coefficient
{}>{})
# Move rest of polynomial back to right stack
(())<>{{}({}<>)(())<>}
# Set up next configuration
(<>)<>
}{}
}{}{}<>{}
# Step 3: Put polynomial in correct form
# Keeping constant term:
{}({}<
# Move to other stack to get access to terms of highest absolute degree
{{}({}<>)(())<>}<>
# Remove outer zeros
{{}{((<(())>))}{}}
# Move back to right stack to get access to lower order terms
{}{{}({}<>)(())<>}
>)<>
# While terms remain:
{
# Move term with positive coefficient
{}({}<(<()>)<>([]){{}({}<>)(())<>([])}{}>)<>{{}({}<>)<>}{}
# Move term with negative coefficient
{}({}<>)<>
}<>
A:
K (ngn/k), 196 193 bytes
{!N::2*n:#x;{+/d,'x,'d:&:'-2!(|/n)-n:#:'x}(+/1-2*s){j::+,/y;,/(&0|2*x;(-1+#?{x[j]&:x@|j;x}/!N){-(d,x)+x,d:&4}/,1;&0|-2*x)}'(N!{(x,'|1+x;x+/:!2)}'((2*!n),'-1+x|-x)@'0 1=/:x>0)@'/:+~(y<0)=s:!n#2}
Try it online!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to use base workspace variables while initializing the masked block in MATLAB?
I have a structure called "uwb" in the base workspace. Under the uwb structure I have another structure called "channel". Under channel I have got two variables, a and b. Now I want to create a subsystem. I want to mask the block. My problem is I have to use the variables a and b for the initialization of the masked subsystem. How can I include a and b in initialization commands of the subsystem while masking?
A:
After creating a mask for the subsystem:
Select the Parameters tab in the subsystem mask editor to add tunable dialog parameters.
Add the dialog parameters for each variable you need to access (i.e. call them maska and maskb).
Head over to the Initialization tab and add your initialization code referring to the dialog parameter names maska and maskb. Apply your changes and close the mask editor window.
Double click on the masked subsystem and you should be prompted to enter values for the two dialog parameters that were just setup.
In the textfields, type in the workspace variables uwb.channel.a and uwb.channel.b to assign their values to maska and maskb respectively.
As long as the uwb struct is in the base workspace when the model is initialized to run, the masked subsystem will evaluate and assign the a and b appropriately.
(I just tried it out and it seems to work fine, here is the model as a reference: http://sfwn.in/Fejp)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Python boto3 filtering RDS tag
I have created a python script to get my AWS RDS instances Endpoint.
#!/usr/bin/env python
import boto3
rds = boto3.client('rds')
try:
# get all of the db instances
dbs = rds.describe_db_instances()
for db in dbs['DBInstances']:
print ("%s@%s:%s %s") % (
db['MasterUsername'],
db['Endpoint']['Address'],
db['Endpoint']['Port'],
db['DBInstanceStatus'])
except Exception as error:
print error
It connects to RDS and I see data in dbs variable.
{u'DBInstances': [{u'PubliclyAccessible': False, u'MasterUsername': 'dbadmin', u'MonitoringInterval': 0, u'LicenseModel': 'general-public-license', ...
Unfortunately, I got en error:
File "rds2.py", line 7
for db in dbs['DBInstances']:
^
SyntaxError: invalid syntax
Could you tell me whats wrong? My goal is to get Endpoint of RDS with TAG (Name = APP1).
Thanks.
A:
It is a problem with your Python indentation.
import boto3
rds = boto3.client('rds')
try:
    # get all of the db instances
    dbs = rds.describe_db_instances()
    for db in dbs['DBInstances']:
        print ("%s@%s:%s %s") % (
            db['MasterUsername'],
            db['Endpoint']['Address'],
            db['Endpoint']['Port'],
            db['DBInstanceStatus'])
except Exception as error:
    print error
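For the tag part of the goal (Name = APP1): describe_db_instances itself doesn't filter by tag, so filter the result in Python. Newer boto3 releases return a TagList per instance (on older ones you must call rds.list_tags_for_resource(ResourceName=...) yourself); here is a sketch against hypothetical data in that shape:

```python
# Hypothetical describe_db_instances() output, trimmed to the fields used.
dbs = {"DBInstances": [
    {"Endpoint": {"Address": "a.example.com", "Port": 5432},
     "TagList": [{"Key": "Name", "Value": "APP1"}]},
    {"Endpoint": {"Address": "b.example.com", "Port": 5432},
     "TagList": [{"Key": "Name", "Value": "OTHER"}]},
]}

# Keep only the endpoints of instances tagged Name = APP1.
endpoints = [
    db["Endpoint"]["Address"]
    for db in dbs["DBInstances"]
    if {"Key": "Name", "Value": "APP1"} in db.get("TagList", [])
]
print(endpoints)  # ['a.example.com']
```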
|
{
"pile_set_name": "StackExchange"
}
|
Q:
CalledProcessError while installing Tensorflow using Bazel
I am trying to install Tensorflow from source using Bazel on Raspberry pi. I am following the official documentation as given here. When I run the ./configure in Tensorflow directory after completing all the steps written for Bazel, I get the following error
/home/cvit/bin/bazel: line 88: /home/cvit/.bazel/bin/bazel-real: cannot execute binary file: Exec format error
/home/cvit/bin/bazel: line 88: /home/cvit/.bazel/bin/bazel-real: Success
Traceback (most recent call last):
File "./configure.py", line 1552, in <module>
main()
File "./configure.py", line 1432, in main
check_bazel_version('0.15.0')
File "./configure.py", line 450, in check_bazel_version
curr_version = run_shell(['bazel', '--batch', '--bazelrc=/dev/null', 'version'])
File "./configure.py", line 141, in run_shell
output = subprocess.check_output(cmd)
File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['bazel', '--batch', '--bazelrc=/dev/null', 'version']' returned non-zero exit status 1
I didn't put the user flag in the bazel installation. So, I think this might be bazelrc error so I tried to set $PATH=$BAZEL/bin but nothing happened.
Please give any suggestion !!
A:
Probably the problem is that an inappropriate version of Bazel is installed.
Run bazel version in the tensorflow directory, and see if there is an error.
If there is a problem with the Bazel version, then check the .bazelversion file; if it contains a version that isn't installable with apt, then download the installer from https://github.com/bazelbuild/bazel/releases and install it, else install with apt.
After that everything should work fine.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Use replaceAll certain amount of times
I'm using replaceAll to only list the numbers in a certain string and I was wondering if there was a way to limit the number of times it replaced. For example:
String s = "1qwerty2qwerty3";
s = s.replaceAll("[^0-9]+", " ");
System.out.println(Arrays.asList(s.trim().split(" ")));
This will filter out all the numbers in a string, giving the result: [1, 2, 3].
I want to know if there is a way to instead get the result [1, 2]. So, basically the method finds two numbers and stops. Thanks for any help!
A:
Your replaceAll is removing everything that isn't a digit, but you want to limit the numbers returned by the split! I would, instead, stream the result of the split - then you can limit that and collect it to a List. Like,
String s = "1qwerty2qwerty3";
s = s.replaceAll("\\D+", " "); // <-- equivalent to your current regex.
System.out.println(Stream.of(s.split("\\s+")).limit(2).collect(Collectors.toList()));
Outputs (as requested)
[1, 2]
And, we can actually eliminate a step if we split on non-digits to begin with. Like,
System.out.println(Stream.of(s.split("\\D+")).limit(2)
.collect(Collectors.toList()));
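Put together as a self-contained, runnable class (same approach as above; the class and method names are just for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LimitDigits {
    // Split on runs of non-digits and keep only the first n numbers.
    static List<String> firstN(String s, int n) {
        return Stream.of(s.split("\\D+")).limit(n).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(firstN("1qwerty2qwerty3", 2));  // [1, 2]
    }
}
```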
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Storing File Name only in a parameter
I have a requirement in which the parameter is coming as file name that upon debugging I have analyzed, as shown below:
private processfile ( string filepath)
{
}
Now this file path can be like:
C:\abc\file1.txt
or
C:\abc\def\file1.txt
or
C:\ghj\ytr\wer\file1.txt
so I have achieved this with as shown below..
String p = new File(filePath).getName();
Now the issue is that upon printing the parameter p upon console it prints
file1.txt
whereas I was trying to have only the file name stored and not the extension, such that
p should contain only file1 and no extension. Please advise.
A:
There is no built-in API to get the file name without its extension. But why can't you truncate it programmatically, like:
p = p.substring(0, p.lastIndexOf("."));
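One caveat with lastIndexOf: it returns -1 for names without a dot, and substring(0, -1) then throws. A small guard (hypothetical helper name):

```java
public class BaseName {
    // Strip the extension, but tolerate names without a dot
    // (lastIndexOf returns -1 in that case).
    static String baseName(String name) {
        int dot = name.lastIndexOf('.');
        return dot == -1 ? name : name.substring(0, dot);
    }

    public static void main(String[] args) {
        System.out.println(baseName("file1.txt"));      // file1
        System.out.println(baseName("archive.tar.gz")); // archive.tar
        System.out.println(baseName("README"));         // README
    }
}
```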
|
{
"pile_set_name": "StackExchange"
}
|
Q:
3D image (3D array of voxels) filters like skeletonization or HitAndMiss
Is there any free library similar to Aforge which would allow me to do skeletonization and HitAndMiss on a 3D image (3D array of voxels)?
A:
I did not find any C# library, so I used ImageJ resp. Fiji to do the task.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why do class_respondsToSelector and respondsToSelector behave different when sent to Class?
I have spent quite some time trying to figure out how class_respondsToSelector and respondsToSelector can give different results. Consider the following class:
@interface Dummy : NSObject
- (void)test;
@end
@implementation Dummy
- (void)test {}
@end
My scenario is that I try to determine if a class responds to a certain class method. This piece reproduces the problem:
Class class = [Dummy class];
if (class_respondsToSelector(class, @selector(test)))
NSLog(@"class_respondsToSelector: YES");
else
NSLog(@"class_respondsToSelector: NO");
if ([class respondsToSelector:@selector(test)])
NSLog(@"respondsToSelector: YES");
else
NSLog(@"respondsToSelector: NO");
If I remove the declaration and implementation of -test, the output of the above is NO and NO as expected. However, running it as it reads above (including -test), the output produced is the following:
class_respondsToSelector: YES
respondsToSelector: NO
The documentation says nothing about whether respondsToSelector works for instances only, just that it indicates whether the receiver implements..., hence I am unable to determine whether this is correct behavior or not. Am I missing something?
Update
Graham Lee provided this link to a great discussion on the problem.
A:
The question asked by class_respondsToSelector() is "Do instances of this class respond to this selector?"
The question asked by -[NSObject respondsToSelector:] is "Does this particular instance (which is the "receiver") respond to this selector?"
You're sending respondsToSelector: to a class object, which is itself an instance of its metaclass, and asking about that particular object.
To see the same results as class_respondsToSelector(), either use +[NSObject instancesRespondToSelector:] or get an instance of the class.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
I have a MySQL table that contains lots of records
I have a MySQL table containing lots of records. My table has a varchar field and a timestamp field. (I have one record for every minute.)
I want to select records like this:
1,3,5,7,9,11,...
or 1,4,7,10,13,..
or something like this.
I can get it done using a PHP while loop, but it is not a good solution. Is there any MySQL select syntax to get exactly this from MySQL?
P.S.: sorry for the post title; this is the only title Stack Overflow accepted.
A:
select * from table where identity_column % 2 <> 0 -- to select 1,3,5,7,9,...
and for your second condition do this:
select * from table where identity_column % 3 = 1 -- to select 1,4,7,10,13,...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to recalculate automatic weights for single bones?
After I edited an already weighted model and its armature, the pose mode doesn't grab the limbs the right way anymore. Is there a way to recalculate weights for single bones? Because I already edited the weights for other parts of the mesh and don't want to redo that again.
A:
In Weight Paint mode, select the pose bones whose weights you want to recompute and press:
W > Assign automatic from bones
A:
(For August 2019 2.8)
First select your bones, then shift-select your mesh. Make sure your mesh is selected second.
Now go into weight paint mode and ctrl-select whichever bone you want to adjust the weight for.
Press f3 to open the operator search and search for 'Weight From Bones'
Then select it and select 'Automatic'
Since you can select whichever bones you want now during weight painting you can reapply your automatic weights to all your bones this way.
A:
Ctrl-selecting a bone in Weight Paint mode did not work for me in Blender 2.8. The following works for me though:
Select the pose bones in Pose Mode.
Select armature, then mesh in Object Mode.
Goto Weight Paint, select all area by pressing A (or only selected area you want)
At the top left corner of 3D view, look for Weights menu and click Assign Automatic From Bones
Done! You can verify the new weight by selecting the Vertex Group of your bone in Object Data at the Properties window
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is links relative to when used in srcdoc attribute in a iframe element?
If I have an HTML file A.html that loads JS code from an external JS file A.js in another folder, and the JS code creates an iframe with HTML in its srcdoc attribute:
<iframe srcdoc='<script src="file.js"></script>'></iframe>
then inserts that iframe into the DOM. The question is: what is file.js relative to?
Thanks
A:
a.html
<iframe srcdoc='<script src="file.js"></script>'></iframe>
file.js
document.write('fnj');
It will basically render the output of that particular JS file inside the iframe. Relative URLs inside srcdoc resolve against the base URL of the embedding document (a.html here), because a srcdoc document inherits its parent's base URL.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
"Query was empty" in Node.js Sequelize
I'm trying to update data in a Node.js application. I tested it with Postman.
My developing steps were:
Getting data with id 10 from the DB (MySQL) to update >> Unhandled rejection SequelizeDatabaseError: Query was empty
I recognised I had used a wrong id, so I changed it to a correct one, 50 >> some other error
Okay, I fixed that other error (related to my inaccurate work) and used the good id 50 >> Unhandled rejection SequelizeDatabaseError: Query was empty
Strange behavior, because this id was good earlier (in step 3)... Okay, I just changed the id to another correct one, 51 >> some other error
Okay, I fixed the other error and tried the update again with the new id 51 >> Unhandled rejection SequelizeDatabaseError: Query was empty
It seems to me I can get the valid data just one time; after that I cannot reach it...
Full stacktrace:
Unhandled rejection SequelizeDatabaseError: Query was empty
at Query.formatError (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\sequelize\lib\dialects\mysql\query.js:223:16)
at Query.connection.query [as onResult] (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\sequelize\lib\dialects\mysql\query.js:55:23)
at Query.Command.execute (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\mysql2\lib\commands\command.js:30:12)
at Connection.handlePacket (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\mysql2\lib\connection.js:515:28)
at PacketParser.onPacket (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\mysql2\lib\connection.js:94:16)
at PacketParser.executeStart (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\mysql2\lib\packet_parser.js:77:14)
at Socket.<anonymous> (C:\Users\bla_bla_bla\Documents\work\_nodejs\parentfolder\application\application\node_modules\mysql2\lib\connection.js:102:29)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:176:18)
at Socket.Readable.push (_stream_readable.js:134:10)
at TCP.onread (net.js:547:20
Ideas?
Note
My problem is not related to model-DB synchronisation. These are synchronised, because there is a case (the first call) when I can update the object! But if I test that id a second, third, etc. time, I get this error. If they were not synchronised, I could not update the object the first time either.
UPDATE
After debugging for a day, I recognised that the second and third time, Sequelize makes an empty SQL query. Below are the code snippets:
The good SQL update:
Executing (default): UPDATE `cardModules` SET `field1`='xxxxxxxx',`field2`='xxxxxxxxxx',`field3`='MEDIUM',`field4`=false,`field5`=false WHERE `id` = 43
The bad SQL update:
Executing (default):
The question is: why does Sequelize do this?
A:
After a number of hours I recognised the problem. If Sequelize doesn't find any difference between the DB values and the values to update, it generates an empty SQL string and tries to execute it. That same empty query is what triggers the Query was empty message.
So if there are no changes but you attempt an update, you will get this error.
I handle this problem by checking the error message like:
if(err.message == 'Query was empty'){
console.log('There is no changes in the update, lets continue the progress...');
next();
}
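The same guard can also be applied before calling save(), using Sequelize's changed() (with no arguments it returns an array of changed keys, or false when nothing is dirty). A minimal sketch in plain Node.js, with mocked instances standing in for real models:

```javascript
// Skip the save entirely when the instance reports no dirty fields.
// The two instances below are hypothetical mocks, not real Sequelize models.
function saveIfChanged(instance) {
  if (!instance.changed()) {
    console.log('There are no changes in the update, continuing...');
    return false;
  }
  instance.save();
  return true;
}

const clean = { changed: () => false, save: () => { throw new Error('should not be called'); } };
const dirty = { changed: () => ['name'], save: () => {} };

console.log(saveIfChanged(clean)); // false
console.log(saveIfChanged(dirty)); // true
```

Checking before the call avoids relying on matching an error-message string, which is fragile across Sequelize versions.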
A:
To shed more light on how/why this can happen: Generally, Sequelize will track changes to your model and then when you call model.save(), it will only attempt to update SQL columns that have changed.
let person = await Employee.findOne(); // (inside an async function)
person.name = "Jerry Jones";
person.save(); //All good here, and only the name column will be updated
If no properties have changed, it won't even try to make the SQL call
let person = await Employee.findOne();
person.save(); //No errors, but no SQL call will be made
However, any properties set to an 'undefined' value (which is different than null!) will still cause the model to be marked as dirty/changed, but will be ignored when the SQL call is constructed. Thus, if you only change one property and set it to undefined, Sequelize will still attempt to make the call, but the query will be empty.
let userUpdates = {age: 99, title: "Developer"};
let person = await Employee.findOne();
person.name = userUpdates.name; //name is undefined, but property will still be considered changed.
person.save(); //Query will be empty!!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can you explain this C code? (Create a daemon program)
I want to understand the following code well:
/* Become daemon + unstoppable and no zombie children (= no wait()) */
if(fork() != 0) return 0; /* Parent returns OK to shell */
signal(SIGCLD, SIG_IGN); /* ignore child death */
signal(SIGHUP, SIG_IGN); /* ignore terminal hangups */
for(i = 0; i < 32; i++) close(i); /* close open files */
setpgrp(); /* break away from process group */
Here is how I understand the above code:
1st line: Creating a child process and terminating the parent process, so the parent process will go back to the shell and the child process will continue executing the program in the background.
2nd line: Ignore the signal that is supposed to be sent to the parent process (who's controlling the terminal) when the child process (who's executing the program) is terminated. I think this line will avoid the occurrence of zombie children?
3rd line: I read that it ignores POSIX's Hangup and I am not sure what it is exactly.
4th line: closing the open files whose descriptor files are 0 to 31 (I am not sure why from 0 to 31)
5th line: No idea what it does.
Can you please help me to understand this code well? Thanks in advance :)
A:
You are asking what setpgrp does.
Here is the relevant man page
You could type man setpgrp to get it.
Read also the linux daemon howto
You could also use the daemon function
A:
1) fork()ing and returning in the parent has two purposes: A) Run in the background. B) Avoid zombies in a portable way
2) http://en.wikipedia.org/wiki/SIGCHLD
3) SIGHUP is often delivered to a process when a tty is closing. It more or less means "Continue running, even if the associated tty goes away".
4) Closing file descriptors allows starting a daemon from something like an ssh session, without the ssh session waiting around on close for the file descriptors 0-31 to be closed. If you don't do this, daemons may sometimes cause ssh sessions to seem to hang on exit. There's nothing magic about 0-31 - some processes close more file descriptors than that, but of course 0, 1 and 2 have special meanings: stdin, stdout, stderr respectively.
5) http://en.wikipedia.org/wiki/Process_group
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Delete files extracted with xorriso
I was looking for a way to extract an iso file without root access.
I succeeded using xorriso.
I used this command:
xorriso -osirrox on -indev image.iso -extract / extracted_path
Now when I want to delete the extracted files I get a permission denied error.
lsattr lists -------------e-- for all files.
ls -l lists -r-xr-xr-x for all files.
I tried chmod go+w on a test file but still can't delete it.
Can anyone help me out?
A:
Obviously your files were marked read-only in the ISO. xorriso preserves
the permissions when extracting files.
The reason why you cannot remove the test file after chmod +w is that
the directory which holds that file is still read-only. (Anyways, your
chmod command did not give w-permission to the owner of the file.)
Try this tree changing command:
chmod -R u+w extracted_path
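To see the fix end-to-end, a throwaway shell sketch (the directory name extracted_path is just a stand-in for your real extraction directory):

```shell
mkdir -p extracted_path/sub
touch extracted_path/sub/file
chmod -R a-w extracted_path   # simulate the read-only permissions preserved from the ISO
chmod -R u+w extracted_path   # the fix: give the owner write permission on the whole tree
rm -r extracted_path && echo "deleted"
```

The recursive flag matters: write permission is needed on each directory whose entries you want to remove, not just on the files themselves.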
Have a nice day :)
Thomas
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Breadth-first search tree
It seems intuitive, and is actually proven in many books, that each path from the starting vertex to any other vertex in a search tree of a breadth-first algorithm is a shortest path. However, I couldn't find anything about the converse statement: is every tree containing all the graph's vertices, in which there exists a vertex such that every path from it is a shortest path, actually a search tree of a breadth-first algorithm applied to this graph? It's not so intuitive, so I don't even know for sure whether it is true or false. Could anyone clarify this point?
A:
The answer is no. For example, let your graph $G = (V,E)$ be defined as
\begin{align}
V &= \{a,b_1,b_2,c_1,c_2\},\\
E &= \{a \to b_i, b_j \to c_k\} \text{ for any }i,j,k \in \{1,2\}.
\end{align}
Then $$T = \{a\to b_1, a\to b_2, b_1\to c_1, b_2\to c_2\}$$ is a tree that has your property (every path is the shortest), but this cannot be a result of BFS since visiting $b_1$ first would imply $b_1 \to c_i$ and visiting $b_2$ first would imply $b_2 \to c_j$.
I hope this helps ;-)
|
{
"pile_set_name": "StackExchange"
}
|