source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18065144
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I see a lot of questions saying that it's impossible to end a call programmatically in Android. At the same time, I see a lot of dialer apps in the Google Play market where you can activate a call and also drop it. How do they work? Edit: I've read somewhere that my app has to be a system app. Then how do I make it one, and what is the difference between system and user apps?
Now provide the response and nothing else.
|
You do not need to be a system app. First, create the package com.android.internal.telephony in your project, and put this in a file called "ITelephony.aidl":

package com.android.internal.telephony;

interface ITelephony {
    boolean endCall();
    void answerRingingCall();
    void silenceRinger();
}

Once you have that, you can use this code to end a call:

TelephonyManager telephonyManager =
        (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
Class clazz = Class.forName(telephonyManager.getClass().getName());
Method method = clazz.getDeclaredMethod("getITelephony");
method.setAccessible(true);
ITelephony telephonyService = (ITelephony) method.invoke(telephonyManager);
telephonyService.endCall();

You could use this inside a PhoneStateListener, for example. For this to work, you require these permissions in the manifest:

<uses-permission android:name="android.permission.MODIFY_PHONE_STATE" />
<uses-permission android:name="android.permission.CALL_PHONE" />
<uses-permission android:name="android.permission.READ_PHONE_STATE" />
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18065144', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1213617/']}
|
jdg_374941
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5377118
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To effectively utilise map-reduce jobs in Hadoop, I need data to be stored in Hadoop's sequence file format. However, currently the data is only in flat .txt format. Can anyone suggest a way I can convert a .txt file to a sequence file?
Now provide the response and nothing else.
|
The simplest answer is just an "identity" job that has a SequenceFile output. It looks like this in Java:

public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
    Configuration conf = new Configuration();
    Job job = new Job(conf);
    job.setJobName("Convert Text");
    job.setJarByClass(Mapper.class);

    job.setMapperClass(Mapper.class);
    job.setReducerClass(Reducer.class);

    // increase if you need sorting or a special number of files
    job.setNumReduceTasks(0);

    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);

    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setInputFormatClass(TextInputFormat.class);

    TextInputFormat.addInputPath(job, new Path("/lol"));
    SequenceFileOutputFormat.setOutputPath(job, new Path("/lolz"));

    // submit and wait for completion
    job.waitForCompletion(true);
}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5377118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/647952/']}
|
jdg_374942
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
454194
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was reading a note on complex analysis and was stuck on one line: Cauchy-Riemann equations Alternatively, the C-R equations can be written as $$\frac{\partial f}{\partial\bar z}=0,$$ where $z=x+iy$ and $\bar z=x-iy$. In some sense, analytic functions are truly functions of $z$, and not of $\bar z$. $$\begin{array}{c}\frac\partial{\partial z} = \frac 1 2 \left(\frac\partial{\partial x}-i\frac\partial{\partial y}\right)&\frac\partial{\partial \bar z}=\frac 1 2 \left(\frac\partial{\partial x}+i\frac\partial{\partial y}\right)\end{array}$$ I do not understand what it means that "Analytic functions are truly functions of $z$ and not of $\bar{z}$." Can someone explain? Thanks in advance for your time.
Now provide the response and nothing else.
|
This is to be taken "in some sense" or as a mnemonic for CR. As $\frac{\partial f}{\partial \bar z}=0$ suggests, the function value does not change when $\bar z$ changes. Of course this is nonsense insofar as if you know $\bar z$ then you know $z$, and then you can obtain the (changing) value of $f$. But then again, a function such as $z\mapsto az+b\bar z+c$ is analytic if and only if $b=0$; that is, there is only a part depending on $z$ itself, and the possible part depending on $\bar z$ must be constant.
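As a concrete check (added here for illustration, not part of the original answer), applying the $\frac{\partial}{\partial\bar z}$ operator defined in the question to two simple functions shows how the criterion sorts analytic from non-analytic:

```latex
% f(z) = z^2 = (x^2 - y^2) + 2ixy is analytic:
\frac{\partial f}{\partial \bar z}
  = \frac12\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right)
  = \frac12\bigl((2x + 2iy) + i(-2y + 2ix)\bigr) = 0.

% g(z) = |z|^2 = z\bar z = x^2 + y^2 is not analytic (away from 0):
\frac{\partial g}{\partial \bar z}
  = \frac12\bigl(2x + i\,2y\bigr) = x + iy = z,
\quad\text{which is nonzero for } z \neq 0.
```

The second function genuinely "depends on $\bar z$" (indeed $|z|^2 = z\bar z$), and the operator detects exactly that.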
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/454194', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33640/']}
|
jdg_374943
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2072848
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a jQuery function to move table rows up and down. I do not know how to save the data, nor get the position of each row. I am using PHP to show the table rows. How do I get each table row position value when the user reorders the table rows?
Now provide the response and nothing else.
|
The jQuery UI sortable plugin provides drag-and-drop reordering. A save button can extract the IDs of each item to create a comma-delimited string of those IDs, added to a hidden textbox. The textbox is returned to the server using an async postback. This fiddle example reorders table elements, but does not save them to a database. The sortable plugin takes one line of code to turn any list into a sortable list. If you care to use them, it also provides CSS and images to give a visual impact to the sortable list (see the example that I linked to). Developers, however, must provide code to retrieve the items in their new order. I embed the unique ID of each item in the list as an HTML attribute and then retrieve those IDs via jQuery. For example:

// ----- code executed when the document loads
$(function() {
    wireReorderList();
});

function wireReorderList() {
    $("#reorderExampleItems").sortable();
    $("#reorderExampleItems").disableSelection();
}

function saveOrderClick() {
    // ----- Retrieve the li items inside our sortable list
    var items = $("#reorderExampleItems li");
    var linkIDs = [items.size()];
    var index = 0;

    // ----- Iterate through each li, extracting the ID embedded as an attribute
    items.each(function(intIndex) {
        linkIDs[index] = $(this).attr("ExampleItemID");
        index++;
    });

    $get("<%=txtExampleItemsOrder.ClientID %>").value = linkIDs.join(",");
}
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2072848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245872/']}
|
jdg_374944
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
361681
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have $N$ participants, each of whom gives a response $Y$ 20 times, 10 in one condition and 10 in another. I fit a linear mixed effects model comparing $Y$ in each condition. Here's a reproducible example simulating this situation using the lme4 package in R:

library(lme4)
fml <- "~ condition + (condition | participant_id)"
d <- expand.grid(participant_id=1:40, trial_num=1:10)
d <- rbind(cbind(d, condition="control"), cbind(d, condition="experimental"))
set.seed(23432)
d <- cbind(d, simulate(formula(fml),
                       newparams=list(beta=c(0, .5), theta=c(.5, 0, 0), sigma=1),
                       family=gaussian, newdata=d))
m <- lmer(paste("sim_1 ", fml), data=d)
summary(m)

The model m yields two fixed effects (an intercept and slope for condition) and three random effects (a by-participant random intercept, a by-participant random slope for condition, and an intercept-slope correlation). I would like to statistically compare the size of the by-participant random intercept variance across the groups defined by condition (i.e., compute the variance component highlighted in red separately within the control and experimental conditions, then test whether the difference in the size of the components is non-zero). How would I do this (preferably in R)?

BONUS

Let's say the model is slightly more complicated: the participants each experience 10 stimuli 20 times each, 10 in one condition and 10 in another. Thus, there are two sets of crossed random effects: random effects for participant and random effects for stimulus. Here's a reproducible example:

library(lme4)
fml <- "~ condition + (condition | participant_id) + (condition | stimulus_id)"
d <- expand.grid(participant_id=1:40, stimulus_id=1:10, trial_num=1:10)
d <- rbind(cbind(d, condition="control"), cbind(d, condition="experimental"))
set.seed(23432)
d <- cbind(d, simulate(formula(fml),
                       newparams=list(beta=c(0, .5), theta=c(.5, 0, 0, .5, 0, 0), sigma=1),
                       family=gaussian, newdata=d))
m <- lmer(paste("sim_1 ", fml), data=d)
summary(m)

I would like to statistically compare the magnitude of the random by-participant intercept variance across the groups defined by condition. How would I do that, and is the process any different from the one in the situation described above?

EDIT

To be a bit more specific about what I'm looking for, I want to know:

1. Is the question, "are the conditional mean responses within each condition (i.e., random intercept values in each condition) substantially different from each other, beyond what we would expect due to sampling error" a well-defined question (i.e., is this question even theoretically answerable)? If not, why not?

2. If the answer to question (1) is yes, how would I answer it? I would prefer an R implementation, but I'm not tied to the lme4 package -- for example, it seems as though the OpenMx package has the capability to accommodate multi-group and multi-level analyses (https://openmx.ssri.psu.edu/openmx-features), and this seems like the sort of question that ought to be answerable in an SEM framework.
Now provide the response and nothing else.
|
There's more than one way to test this hypothesis. For example, the procedure outlined by @amoeba should work. But it seems to me that the simplest, most expedient way to test it is using a good old likelihood ratio test comparing two nested models. The only potentially tricky part of this approach is knowing how to set up the pair of models so that dropping out a single parameter will cleanly test the desired hypothesis of unequal variances. Below I explain how to do that.

Short answer

Switch to contrast (sum to zero) coding for your independent variable and then do a likelihood ratio test comparing your full model to a model that forces the correlation between random slopes and random intercepts to be 0:

# switch to numeric (not factor) contrast codes
d$contrast <- 2*(d$condition == 'experimental') - 1

# reduced model without correlation parameter
mod1 <- lmer(sim_1 ~ contrast + (contrast || participant_id), data=d)

# full model with correlation parameter
mod2 <- lmer(sim_1 ~ contrast + (contrast | participant_id), data=d)

# likelihood ratio test
anova(mod1, mod2)

Visual explanation / intuition

In order for this answer to make sense, you need to have an intuitive understanding of what different values of the correlation parameter imply for the observed data. Consider the (randomly varying) subject-specific regression lines. Basically, the correlation parameter controls whether the participant regression lines "fan out to the right" (positive correlation) or "fan out to the left" (negative correlation) relative to the point $X=0$, where $X$ is your contrast-coded independent variable. Either of these imply unequal variance in participants' conditional mean responses. This is illustrated below: in this plot, we ignore the multiple observations that we have for each subject in each condition and instead just plot each subject's two random means, with a line connecting them, representing that subject's random slope.
(This is made up data from 10 hypothetical subjects, not the data posted in the OP.) In the column on the left, where there's a strong negative slope-intercept correlation, the regression lines fan out to the left relative to the point $X=0$ . As you can see clearly in the figure, this leads to a greater variance in the subjects' random means in condition $X=-1$ than in condition $X=1$ . The column on the right shows the reverse, mirror image of this pattern. In this case there is greater variance in the subjects' random means in condition $X=1$ than in condition $X=-1$ . The column in the middle shows what happens when the random slopes and random intercepts are uncorrelated. This means that the regression lines fan out to the left exactly as much as they fan out to the right, relative to the point $X=0$ . This implies that the variances of the subjects' means in the two conditions are equal. It's crucial here that we've used a sum-to-zero contrast coding scheme, not dummy codes (that is, not setting the groups at $X=0$ vs. $X=1$ ). It is only under the contrast coding scheme that we have this relationship wherein the variances are equal if and only if the slope-intercept correlation is 0. The figure below tries to build that intuition: What this figure shows is the same exact dataset in both columns, but with the independent variable coded two different ways. In the column on the left we use contrast codes -- this is exactly the situation from the first figure. In the column on the right we use dummy codes. This alters the meaning of the intercepts -- now the intercepts represent the subjects' predicted responses in the control group. The bottom panel shows the consequence of this change, namely, that the slope-intercept correlation is no longer anywhere close to 0, even though the data are the same in a deep sense and the conditional variances are equal in both cases. 
If this still doesn't seem to make much sense, studying this previous answer of mine where I talk more about this phenomenon may help. Proof Let $y_{ijk}$ be the $j$ th response of the $i$ th subject under condition $k$ . (We have only two conditions here, so $k$ is just either 1 or 2.) Then the mixed model can be written $$y_{ijk} = \alpha_i + \beta_ix_k + e_{ijk},$$ where $\alpha_i$ are the subjects' random intercepts and have variance $\sigma^2_\alpha$ , $\beta_i$ are the subjects' random slope and have variance $\sigma^2_\beta$ , $e_{ijk}$ is the observation-level error term, and $\text{cov}(\alpha_i, \beta_i)=\sigma_{\alpha\beta}$ . We wish to show that $$\text{var}(\alpha_i + \beta_ix_1) = \text{var}(\alpha_i + \beta_ix_2) \Leftrightarrow \sigma_{\alpha\beta}=0.$$ Beginning with the left hand side of this implication, we have $$\begin{aligned}\text{var}(\alpha_i + \beta_ix_1) &= \text{var}(\alpha_i + \beta_ix_2) \\\sigma^2_\alpha + x^2_1\sigma^2_\beta + 2x_1\sigma_{\alpha\beta} &= \sigma^2_\alpha + x^2_2\sigma^2_\beta + 2x_2\sigma_{\alpha\beta} \\\sigma^2_\beta(x_1^2 - x_2^2) + 2\sigma_{\alpha\beta}(x_1 - x_2) &= 0.\end{aligned}$$ Sum-to-zero contrast codes imply that $x_1 + x_2 = 0$ and $x_1^2 = x_2^2 = x^2$ . Then we can further reduce the last line of the above to $$\begin{aligned}\sigma^2_\beta(x^2 - x^2) + 2\sigma_{\alpha\beta}(x_1 + x_1) &= 0 \\\sigma_{\alpha\beta} &= 0,\end{aligned}$$ which is what we wanted to prove. (To establish the other direction of the implication, we can just follow these same steps in reverse.) To reiterate, this shows that if the independent variable is contrast (sum to zero) coded , then the variances of the subjects' random means in each condition are equal if and only if the correlation between random slopes and random intercepts is 0. The key take-away point from all this is that testing the null hypothesis that $\sigma_{\alpha\beta} = 0$ will test the null hypothesis of equal variances described by the OP. 
This does NOT work if the independent variable is, say, dummy coded. Specifically, if we plug the values $x_1=0$ and $x_2=1$ into the equations above, we find that $$\text{var}(\alpha_i) = \text{var}(\alpha_i + \beta_i) \Leftrightarrow \sigma_{\alpha\beta} = -\frac{\sigma^2_\beta}{2}.$$
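As a concrete check of the contrast-coded case (added here for illustration, not part of the original answer), plugging $x_1=-1$ and $x_2=+1$ into the same variance expansion gives

```latex
\text{var}(\alpha_i - \beta_i) = \sigma^2_\alpha + \sigma^2_\beta - 2\sigma_{\alpha\beta},
\qquad
\text{var}(\alpha_i + \beta_i) = \sigma^2_\alpha + \sigma^2_\beta + 2\sigma_{\alpha\beta},
```

so the difference between the two conditional variances is $4\sigma_{\alpha\beta}$, which is zero exactly when the slope-intercept covariance is zero, as the proof requires.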
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/361681', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/11091/']}
|
jdg_374945
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28887
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I do not hold a deep understanding of computer science concepts but would like to learn more about how the utility encfs works. I have a few questions regarding the concept of a filesystem with respect to encfs. It is said that encfs is a cryptographic filesystem (wiki link).

1) To encrypt the files, encfs moves around blocks of the files to be encrypted, so am I correct to see this 'scrambled' version of the files as a new perspective, which justifies the term of a new filesystem?

2) In the man pages of encfs, in the section CAVEATS (link to man of encfs online), it says that encfs is not a true file system. How should I understand this? Is it that some necessary feature common to all file systems is missing in encfs' file system? Or is it because of some other more substantial reason?

3) The man pages say that it creates a virtual encrypted file system. There are two questions here: what is it that makes it virtual, is it that it is a file system within a file system? And for encrypted, is it that there is not a straightforward way to map the file blocks into a format to be read by other programs?

4) How does the command fusermount relate to encfs?
Now provide the response and nothing else.
|
I think that behind your description, there is a misconception. The unencrypted data is not stored on the disk at any point. When you write to a file in the encfs filesystem, the write instruction goes to the encfs process; the encfs process encrypts the data (in memory) and writes the ciphertext to a file. The file names, as well as the file contents, are encrypted. Reading a file undergoes the opposite process: encfs reads the encrypted data from the disk file, decrypts it in memory and passes the plaintext to the requesting application. When you run the encfs command, it does not decrypt any data. It only uses the password that you supply to unlock the filesystem's secret key. (This is actually a decryption operation, cryptographically speaking, but a different type from what happens with the file data. I will not go into more details here.)

1) Encfs is not exactly “moving blocks around”; it is decoding blocks when it reads them. Encfs is a filesystem because it behaves like one: you can store files on it, when it's mounted.

2) Encfs is not a “true” filesystem because it doesn't work independently. Encfs only provides an encryption layer; it uses an underlying filesystem to actually store data and metadata (metadata is auxiliary information about files such as permissions and modification times).

3) Virtual filesystem is another way to say that encfs itself doesn't store any data; it needs an underlying filesystem (see (2) above) for that. Encrypted means just that: encfs stores the data that you put in it in an encrypted form, which cannot be decrypted without the key. Another program could read the data stored by encfs if and only if that other program had access to the key (which requires the password that the key is protected with).

4) The fusermount command sets up a FUSE mount point. You would not normally call it directly, because a FUSE filesystem is implemented by a user-mode process which you have to start anyway, and that process (e.g. encfs) will take care of setting up the mount point. Unmounting a FUSE filesystem, on the other hand, is a generic operation; you can always do it by calling fusermount -u.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/28887', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1325/']}
|
jdg_374946
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24198896
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Below code works well and it labels the barplot correctly. However, if I try geom_text for a histogram I fail, since geom_text requires a y component and a histogram's y component is not part of the original data. Labelling an "ordinary" bar plot (geom_bar(stat = "identity")) works well:

ggplot(csub, aes(x = Year, y = Anomaly10y, fill = pos)) +
  geom_bar(stat = "identity", position = "identity") +
  geom_text(aes(label = Anomaly10y, vjust = 1.5))

My problem: how to get the correct y and label (indicated by ?) for geom_text, to put labels on top of the histogram bars:

ggplot(csub, aes(x = Anomaly10y)) +
  geom_histogram() +
  geom_text(aes(label = ?, vjust = 1.5))

geom_text requires x, y and labels. However, y and labels are not in the original data, but are generated by the geom_histogram function. How can I extract the necessary data to position labels on a histogram?
Now provide the response and nothing else.
|
geom_histogram() is just a fancy wrapper around stat_bin, so you can do all that yourself with the bars and text that you like. Here's an example:

# sample data
set.seed(15)
csub <- data.frame(Anomaly10y = rpois(50, 5))

And then we plot it with

ggplot(csub, aes(x = Anomaly10y)) +
  stat_bin(binwidth = 1) +
  ylim(c(0, 12)) +
  stat_bin(binwidth = 1, geom = "text", aes(label = ..count..), vjust = -1.5)

to get the histogram with a count label above each bar.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24198896', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1885727/']}
|
jdg_374947
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44513505
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am really struggling to get working bare-minimum code for a JMS producer. I have my WL JMS server up and running and am ready to test it with a JMS client, but in creating the client I am having issues with the use of the initial context factory, i.e. env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");. I have tried almost all code, using the WL initial context factory and the Oracle J2EE initial context factory (env.put(Context.INITIAL_CONTEXT_FACTORY, "oracle.j2ee.rmi.RMIInitialContextFactory");), but I always get a NoClassDefFoundError when I run my code. For example, for the below code I get that error. I am trying to run the client as a standalone Java program. I understand the exception and have tried to add the relevant JAR; for example, when I was using the WL initial context factory I placed the WL full client JAR on the classpath, but I am still not able to overcome this NoClassDefFoundError. Could someone please link to some repository or blog, or provide me a bare-minimum "working" JMS producer, or point out what I am doing wrong? Please note that my JMS server is WL and I am trying to create a simple JMS client without using the ActiveMQ library, but I am tagging ActiveMQ as well so that the question can get more attention; if anyone thinks that is wrong, please let me know and I will delete the ActiveMQ tag, or feel free to delete it yourself.
My client example:

import java.util.Hashtable;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JmsProducerQueueClient {

    public static void main(String[] args) throws NamingException, JMSException {
        Connection connection = null;
        try {
            System.out.println("Create JNDI Context");
            Context context = getInitialContext();
            System.out.println("Get connection factory");
            ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("/com/jms/ms1/cf1");
            System.out.println("Create connection");
            connection = connectionFactory.createConnection();
            System.out.println("Create session");
            Session session = connection.createSession(false, QueueSession.AUTO_ACKNOWLEDGE);
            System.out.println("Lookup queue");
            Queue queue = (Queue) context.lookup("/com/jms/ms1/q1");
            System.out.println("Start connection");
            connection.start();
            System.out.println("Create producer");
            MessageProducer producer = session.createProducer(queue);
            System.out.println("Create hello world message");
            Message helloWorldText = session.createTextMessage("Hello World!");
            System.out.println("Send hello world message");
            producer.send(helloWorldText);
        } finally {
            if (connection != null) {
                System.out.println("close the connection");
                connection.close();
            }
        }
    }

    public static Context getInitialContext() throws NamingException {
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        //env.put(Context.INITIAL_CONTEXT_FACTORY, "oracle.j2ee.rmi.RMIInitialContextFactory");
        env.put(Context.PROVIDER_URL, "tcp://localhost:6007");
        Context context = new InitialContext(env);
        return context;
    }
}

Logs:

Create JNDI Context
Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/security/service/PrivilegedActions
    at weblogic.jndi.WLSJNDIEnvironmentImpl.<clinit>(WLSJNDIEnvironmentImpl.java:57)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:190)
    at weblogic.jndi.internal.JNDIEnvironment.getJNDIEnvironment(JNDIEnvironment.java:37)
    at weblogic.jndi.Environment.<clinit>(Environment.java:92)
    at weblogic.jndi.WLInitialContextFactory.getInitialContext(WLInitialContextFactory.java:117)
    at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
    at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:307)
    at javax.naming.InitialContext.init(InitialContext.java:242)
    at javax.naming.InitialContext.<init>(InitialContext.java:216)
    at com.learning.so.question.jms.queue.producerconsumer.JmsProducerQueueClient.getInitialContext(JmsProducerQueueClient.java:62)
    at com.learning.so.question.jms.queue.producerconsumer.JmsProducerQueueClient.main(JmsProducerQueueClient.java:22)
Now provide the response and nothing else.
|
You can add the parameter values:

df = pd.pivot_table(df, index="CNTRY", columns="TYPE", values='VALUE').reset_index()
print (df)
TYPE CNTRY  Advisory  Advisory1  Advisory2  Advisory3
0      FRN       NaN        2.0        NaN        4.0
1      IND       1.0        NaN        3.0        NaN

And for removing the columns name, rename_axis:

df = pd.pivot_table(df, index="CNTRY", columns="TYPE", values='VALUE') \
       .reset_index().rename_axis(None, axis=1)
print (df)
  CNTRY  Advisory  Advisory1  Advisory2  Advisory3
0   FRN       NaN        2.0        NaN        4.0
1   IND       1.0        NaN        3.0        NaN

But maybe only pivot is necessary:

df = df.pivot(index="CNTRY", columns="TYPE", values='VALUE') \
       .reset_index().rename_axis(None, axis=1)
print (df)
  CNTRY  Advisory  Advisory1  Advisory2  Advisory3
0   FRN       NaN        2.0        NaN        4.0
1   IND       1.0        NaN        3.0        NaN

because pivot_table aggregates duplicates, by default with the aggregate function mean:

df = {'TYPE' : pd.Series(['Advisory','Advisory1','Advisory2','Advisory1']),
      'CNTRY' : pd.Series(['IND','FRN','IND','FRN']),
      'VALUE' : pd.Series([1., 1., 3., 4.])}
df = pd.DataFrame(df)
print (df)
  CNTRY       TYPE  VALUE
0   IND   Advisory    1.0
1   FRN  Advisory1    1.0  <- same FRN and Advisory1
2   IND  Advisory2    3.0
3   FRN  Advisory1    4.0  <- same FRN and Advisory1

df = df.pivot_table(index="CNTRY", columns="TYPE", values='VALUE', fill_value=0) \
       .reset_index().rename_axis(None, axis=1)
print (df)
  CNTRY  Advisory  Advisory1  Advisory2
0   FRN       0.0        2.5        0.0
1   IND       1.0        0.0        3.0

An alternative with groupby, an aggregate function, and unstack:

df = df.groupby(["CNTRY","TYPE"])['VALUE'].mean().unstack(fill_value=0) \
       .reset_index().rename_axis(None, axis=1)
print (df)
  CNTRY  Advisory  Advisory1  Advisory2
0   FRN       0.0        2.5        0.0
1   IND       1.0        0.0        3.0
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44513505', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5111394/']}
|
jdg_374948
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17954874
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to fade between images in a loop (like the result here: jsfiddle.net/5M2PD), but purely through CSS, no JavaScript. I tried using keyframes but I wasn't successful. Please help.

@keyframes cf3FadeInOut {
  0%   { opacity: 1; }
  45%  { opacity: 1; }
  55%  { opacity: 0; }
  100% { opacity: 0; }
}

#cf3 img.top {
  animation-name: cf3FadeInOut;
  animation-timing-function: ease-in-out;
  animation-iteration-count: infinite;
  animation-duration: 10s;
  animation-direction: alternate;
}
Now provide the response and nothing else.
|
I have taken your fiddle as a base and made it work without script: updated demo. I needed to set an id on each image in the HTML.

.fadein img {
  position: absolute;
  top: 0;
  -webkit-animation-name: fade;
  -webkit-animation-iteration-count: infinite;
  -webkit-animation-duration: 6s;
  animation-name: fade;
  animation-iteration-count: infinite;
  animation-duration: 6s;
}

@-webkit-keyframes fade {
  0%   { opacity: 0; }
  20%  { opacity: 1; }
  33%  { opacity: 1; }
  53%  { opacity: 0; }
  100% { opacity: 0; }
}

@keyframes fade {
  0%   { opacity: 0; }
  20%  { opacity: 1; }
  33%  { opacity: 1; }
  53%  { opacity: 0; }
  100% { opacity: 0; }
}

#f1 {
  background-color: lightblue;
}

#f2 {
  -webkit-animation-delay: -4s;
  background-color: yellow;
}

#f3 {
  -webkit-animation-delay: -2s;
  background-color: lightgreen;
}

<div class="fadein">
  <img id="f3" src="http://i.imgur.com/R7A9JXc.png">
  <img id="f2" src="http://i.imgur.com/D5yaJeW.png">
  <img id="f1" src="http://i.imgur.com/EUqZ1Er.png">
</div>

I am setting the keyframes to give approximately 1/3 of the time visible, with appropriate transitions. Then I set a different delay for every image, so that they alternate. If you want full browser support, you will need more vendor prefixes; I have used -webkit- and the bare property so that you get the idea.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17954874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2617249/']}
|
jdg_374949
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25995
|
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I just want to understand some concepts related to firewalls. Is there any difference between saying firewall policies and firewall rules? Can we say a firewall includes a set of policies, each of which defines a set of special rules? Thank you.
Now provide the response and nothing else.
|
Though there are (as far as I know) no strict definitions, you could say that policies are the abstract, high-level definitions of what traffic should and shouldn't be allowed. Firewall rules are the translation of policies into practical configuration. For example: the policy "allow only management traffic from trusted networks" is translated into rules which allow traffic from 192.2.0.0/24 to TCP/22 and from 192.2.100.0/24 to TCP/3389.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/25995', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/21963/']}
|
jdg_374950
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5210840
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between a byte array & byte buffer ? Also, in what situations should one be preferred over the other? [my usecase is for a web application being developed in java].
Now provide the response and nothing else.
|
There are actually a number of ways to work with bytes. And I agree that it's not always easy to pick the best one:

- the byte[]
- the java.nio.ByteBuffer
- the java.io.ByteArrayOutputStream (in combination with other streams)
- the java.util.BitSet

The byte[] is just a primitive array, just containing the raw data. So, it does not have convenient methods for building or manipulating the content. A ByteBuffer is more like a builder. It creates a byte[]. Unlike arrays, it has more convenient helper methods (e.g. the put(byte) method). It's not that straightforward in terms of usage. (Most tutorials are way too complicated or of poor quality, but this one will get you somewhere. Take it one step further? Then read about the many pitfalls.) You could be tempted to say that a ByteBuffer does for byte[] what a StringBuilder does for String. But there is a specific difference/shortcoming of the ByteBuffer class. Although it may appear that a ByteBuffer resizes automatically while you add elements, the ByteBuffer actually has a fixed capacity. When you instantiate it, you already have to specify the maximum size of the buffer. That's one of the reasons why I often prefer to use the ByteArrayOutputStream, because it automatically resizes, just like an ArrayList does. (It has a toByteArray() method.) Sometimes it's practical to wrap it in a DataOutputStream. The advantage is that you will have some additional convenience calls (e.g. writeShort(int) if you need to write 2 bytes). BitSet comes in handy when you want to perform bit-level operations. You can get/set individual bits, and it has logical operator methods like xor(). (The toByteArray() method was only introduced in Java 7.) Of course, depending on your needs, you can combine all of them to build your byte[].
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5210840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/530153/']}
|
jdg_374951
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35169
|
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wrote a code in Fortran to solve this differential equation using RK4 method: $$\frac{dy}{dx}=A\sqrt{\frac{B}{y}+\frac{C}{y^2}}$$ $A$ , $B$ , and $C$ are some known constants. The problem is that my initial values are $x=0$ and $y=0$ and because $y=0$ is the root of denominator I get a run-time error. Does anybody know how can I fix this problem?
Now provide the response and nothing else.
|
There is no need for numerical computation here. First, $T(q)$ is a well-known function, the logarithmic integral. Repeated integration by parts gives an asymptotic expansion $$\mathrm{Li}(q) = \frac{q}{\log q}\sum_{k=0}^{K-1} \frac{k!}{\log^k q} + O\left(\frac{q}{\log^{K+1}q}\right).$$ There's also a fairly rapidly convergent representation due to Ramanujan, which you can find on Wikipedia. Second, regarding the main integral, it has a different asymptotic. First, using $\sin^2\theta = \frac{1-\cos(2\theta)}{2}$, we remove the $\frac{q-2}{2}$ coming from the constant and concentrate on getting cancellation in the oscillatory part. Second, recall the digamma function (the logarithmic derivative of the gamma function) $\digamma(x) = \frac{\Gamma'(x)}{\Gamma(x)}$, which satisfies $\log x - \frac{1}{x}\leq\digamma(x) \leq \log x - \frac{1}{2x}$ and $\digamma'(x) = \frac1x + \frac{1}{2x^2} +O(x^{-3})$. Letting $u = \frac{\Gamma(x)}x$ we have $\frac{du}{u} = d(\log u) = (\digamma(x)-\frac1x)dx$, so that $$\begin{split}\frac{q-2}{2} - S(q) &= \int_{x=2}^{x=q} \cos\left(\pi\frac{\Gamma(x)}{2x}\right)dx \\ &= \int_{x=2}^{x=q} \frac{\cos(\pi u)}{u}\frac{du}{\digamma(x)-\frac1x}\end{split}$$ We now integrate by parts and get $$\begin{split} &= \left[-\frac{\sin(\pi u)}{\pi u}\frac{1}{\digamma(x)-\frac1x}\right]_{x=2}^{x=q} - \int_{x=2}^{x=q} \frac{\sin(\pi u)}{\pi u^2}\frac{du}{\digamma(x)-\frac1x} \\&- \int_{x=2}^{x=q} \frac{\sin(\pi u)}{\pi u}\frac{\digamma'(x)+\frac1{x^2}}{\left(\digamma(x)-\frac1x\right)^2} \frac{dx}{du}du \\& = \left[-\frac{\sin(\pi u)}{\pi u}\frac{1}{\digamma(x)-\frac1x}\right]_{x=2}^{x=q} - \int_{x=2}^{x=q} \frac{\sin(\pi u)}{\pi u^2}\frac{du}{\digamma(x)-\frac1x} \\&- \int_{x=2}^{x=q} \frac{\sin(\pi u)}{\pi u}\frac{\digamma'(x)+\frac1{x^2}}{\left(\digamma(x)-\frac1x\right)^3} du\end{split}$$ The first term is $O(1) + O\left(\frac{q}{\Gamma(q)}\right)$ and in particular is bounded. 
The second is similarly $O\left(\int_{x=2}^{x=q} \frac{du}{u^2}\right) = O(1)+ O\left(\frac{q}{\Gamma(q)}\right)$. For the last term, divide the interval into two parts: $2\leq x\leq q^\delta$ and $q^\delta \leq x \leq q$ for some $\delta < 1$. On the first interval we use that $\digamma'(x)+\frac1{x^2} = O(\frac{1}{x}) = O(1)$ to bound the integral as $O(1)+O(\log u(q^\delta)) = O(\log \Gamma(q^\delta)) = O(\delta q^\delta \log q)$. On the second interval we have $\digamma'(x)+\frac1{x^2} = O(q^{-\delta})$, so the whole integral is $O(q^{1-\delta}\log q)$. Taking $\delta = \frac12$ we conclude that $$ S(q) = \frac{q-2}{2} + O(q^{1/2}\log q)$$ and in particular it has a different asymptotic. Finally, a more careful analysis using the $\log^3 x$ in the denominator of the third term would give the error term $O\left(\frac{q^{1/2}}{\log^2 q}\right)$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://scicomp.stackexchange.com/questions/35169', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/36053/']}
|
jdg_374952
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27841
|
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Libertarian party is moderate. A survey by David Kirby and David Boaz found a minimum of 14 percent American voters to have libertarian-leaning views.[69][70] ( source ) It's fiscally conservative. It doesn't run others' people business. It's most compatible with free market and capitalism which most people are used to. Yet, it didn't get much seats. In presidential election, that would be understandable. You need to win and only the big 2 parties can win. But in parliament election? We don't have 1% libertarians in most US states ( source ).
Now provide the response and nothing else.
|
Because most (all?) US states vote for individual representatives on a district-by-district basis. In order to get a person into a state senate or state assembly, that person needs to run in one voting district and gain the majority of votes there. That means an overall 14% approval across the whole nation won't get you any seats when it is evenly distributed over all districts. In order to win seats in the United States, a party needs to concentrate supporters in individual districts. Also, I doubt that the Libertarian party of the United States even has a 14% voter potential. "Libertarian-leaning views" and "Supporting the Libertarian party" are two different things. Many people make their voting decision on other factors than just which party ideology aligns most with their own.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/27841', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/5894/']}
|
jdg_374953
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2506776
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am rather new to VIM. I got some source code and this is a mess. At a first sight I would like at least to get a clear and organised view of the code, so I like to get it rightly formatted, I mean indented depending on the depth of the functions and so. I wonder if it can be done with VIM, and otherwise which other commandline tools for that can you recommend. Thanks
Now provide the response and nothing else.
|
While vim is a true Swiss-knife I still prefer external tools for some jobs. This approach is some times much more intuitive and easy to remember than using the built-in equivalent. In the case of indenting, I filter the whole file buffer through astyle. The astyle parameters are much easier to grasp in a couple of minutes, especially if you are not a vim guru. Also astyle provides much more flexibility in fine-tuning the output. First install astyle:

    # apt-get install astyle

Then inside vim:

    :%!astyle

(simple case - astyle default mode is C/C++) or

    :%!astyle --mode=c --style=ansi -s2

(ansi C++ style, use two spaces per indent level) or

    :1,40!astyle --mode=c --style=ansi

(ansi C++ style, filter only lines 1-40)
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2506776', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171546/']}
|
jdg_374954
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4497684
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following models:

class Bill(models.Model):
    date = models.DateTimeField(_("Date of bill"), null=True, blank=True)

class Item(models.Model):
    name = models.CharField(_("Name"), max_length=100)
    price = models.FloatField(_("Price"))
    quantity = models.IntegerField(_("Quantity"))
    bill = models.ForeignKey("Bill", verbose_name=_("Bill"), related_name="billitem")

I know that this is possible:

from django.forms.models import inlineformset_factory
inlineformset_factory(Bill, Item)

and then process this via standard view. Now I was wondering, if there is a way to achieve the same (meaning: using an inline for adding/editing items belonging to a bill) using class based views (not for the admin-interface).
Now provide the response and nothing else.
|
Key points: generated FormSets within forms.py using inlineformset_factory (note that the first argument is the parent model, not a form class):

BookImageFormSet = inlineformset_factory(Book, BookImage, extra=2)
BookPageFormSet = inlineformset_factory(Book, BookPage, extra=5)

returned the FormSets within a CreateView class in views.py (using consistent context keys):

def get_context_data(self, **kwargs):
    context = super(BookCreateView, self).get_context_data(**kwargs)
    if self.request.POST:
        context['bookimage_formset'] = BookImageFormSet(self.request.POST)
        context['bookpage_formset'] = BookPageFormSet(self.request.POST)
    else:
        context['bookimage_formset'] = BookImageFormSet()
        context['bookpage_formset'] = BookPageFormSet()
    return context

Used form_valid to save the form and formsets:

def form_valid(self, form):
    context = self.get_context_data()
    bookimage_formset = context['bookimage_formset']
    bookpage_formset = context['bookpage_formset']
    if bookimage_formset.is_valid() and bookpage_formset.is_valid():
        self.object = form.save()
        bookimage_formset.instance = self.object
        bookimage_formset.save()
        bookpage_formset.instance = self.object
        bookpage_formset.save()
        return HttpResponseRedirect('thanks/')
    else:
        return self.render_to_response(self.get_context_data(form=form))
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4497684', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181210/']}
|
jdg_374955
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10373
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So, I watched Interstellar and if you watched it too you know that there's a planet orbiting a black-hole, they call it Miller's Planet. According to the movie, every hour on Miller's Planet is equivalent to 7 years on Earth due to time dilation from the black-hole. Question: Assuming there are other life forms in the universe, is it really possible for it to be near a black-hole? Is it possible that they came to existence thousands (or even millions) of years before us, but are not as advanced as us because time on our planet is way quicker than that of theirs? If they have an assignment due tomorrow, we have, say, 100 years more time to do it (assume what else we can do in that 100 years). Or they actually are more advanced than us, but from Earth, they somehow live in the past?
Now provide the response and nothing else.
|
Well, first things first. It's not likely to have a planet orbiting near a black hole and in significant time dilation, because the tidal effects would likely tear anything that close apart. Certainly a planet orbiting a stellar mass black hole would need to be quite far away so as to not be torn apart, so any time dilation would be pretty small. Around a super-massive black hole, the tidal effects are smaller and a nearish orbit with some measure of time dilation is possible. (see link below for more specifics) https://physics.stackexchange.com/questions/110044/time-dilation-factor-for-the-circular-orbit-at-3-2-schwarzschild-radius But for a stable planetary orbit, you probably max out at about 20% time dilation, and only around a super-massive black hole, of which there's only 1 per galaxy. The idea of 1 day to 100 years isn't practical; if you're talking about a stable planetary orbit, it's more like 80 days passing for every 100 elsewhere. And I'm not sure you'd want to be that close to the black hole in the center of the galaxy, not because the orbit isn't stable, but because stars are in that orbit too. It might not be a safe place to be. So, in reality, you'd want a stellar mass black hole and a distant orbit, where the time dilation would be quite small, and in that scenario, yes, life is possible because of tidal energy, so a planet could have liquid surface water and an atmosphere, even if the black hole gave off very little light and heat. Such a planet in a tidal-energy orbit would probably be tidally locked, which would protect the far side from any gamma rays the black hole spits out when it eats, so it would in theory be a good place for life. No significant light source, unless it was a binary system, so plants would have it harder, but there would be heat. There's another problem. The creation of black holes tends to blow everything apart in a huge explosion. It's unclear that a planet would survive a black hole's birth, so you might need a captured planet. Finally, intelligent life: we really don't know enough about how common intelligent life is on other planets. Life might be fairly common, but intelligent life is far less clear, and there are other factors than just time. Today, we simply don't know enough to predict if there's intelligent life out there or not. There probably is life elsewhere in the universe, though even that isn't 100% certain, but regarding intelligent life, there are far too many unknowns in that equation. I think a black hole might not be optimal for the formation of intelligent life because of the lack of light, so much less photosynthesis, so slower oxygen formation (if it follows the same pattern the Earth does), and the unlikeliness that a planet would survive the black hole's creation.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/10373', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/6976/']}
|
jdg_374956
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1843121
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Example:

USE AnotherDB
-- This works - same ID as from other DB
SELECT OBJECT_ID('AnotherDB.ASchema.ATable')
-- This works
SELECT OBJECT_NAME(OBJECT_ID('AnotherDB.ASchema.ATable'))

USE ThisDB
-- This works - same ID as from other DB
SELECT OBJECT_ID('AnotherDB.ASchema.ATable')
-- Gives NULL
SELECT OBJECT_NAME(OBJECT_ID('AnotherDB.ASchema.ATable'))

Obviously the metadata functions expect a current database. The BOL entries typically have language like this for functions like OBJECT_NAME etc.: The Microsoft SQL Server 2005 Database Engine assumes that object_id is in the context of the current database. A query that references an object_id in another database returns NULL or incorrect results. The reasons I need to be able to do this: I can't USE the other database from within an SP. I can't create a proxy UDF stub (or alter anything) in the other databases or in master (or any other database besides my own) to help me out. So how can I get the database from OBJECT_ID('AnotherDB.ASchema.ATable') when I'm in ThisDB? My goal is to take a possibly partially qualified name from a configuration table, resolving it in the current context to a fully qualified name, use PARSENAME to get the database name and then dynamic SQL to build a script to be able to get to the meta data tables directly with database.sys.* or USE db; sys.*
Now provide the response and nothing else.
|
You should be able to do this:

SELECT name
FROM AnotherDB.sys.objects --changes context
WHERE object_id = OBJECT_ID('AnotherDB.ASchema.ATable')

This is what you effectively do with OBJECT_ID('AnotherDB.ASchema.ATable'). This means that you could rely on dbname.sys.objects and avoid confusion with metadata functions. Note: the new catalog views are designed to be used and not change from version to version, as per the link. In the old days, it was considered bad practice to use system tables, but the stigma still remains. So, you can safely rely on sys.objects rather than the metadata functions.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1843121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18255/']}
|
jdg_374957
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
252180
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to understand a little bit about symplectic geometry, in particular the tautological 1-form on the cotangent bundle. I'm following Ana Canas Da Silva's notes . On page 10 she describes the coordinate free definitions and gives an exercise to find the expression in the local coordinates $\sum_{i=1}^n \xi_i dx_i$ . I've tried to do this exercise but can't seem to be able to do it which is really annoying since everywhere I look it is said to be trivial and as a consequence never formally proved.
Now provide the response and nothing else.
|
Let $(x^i)$ be local coordinates on our base manifold $M$ and let $(x^i, \xi_j)$ be the induced coordinates on the cotangent bundle $T^* M$. Let $\pi : T^*M \to M$ be the projection $(x^i, \xi_j) \mapsto (x^i)$. It induces a $C^\infty (M)$-linear map on $1$-forms, which I will write as $\pi^* : \Omega^1 (M) \to \Omega^1 (T^* M)$. In coordinates, this sends a $1$-form $\phi = \phi_i \, \mathrm{d} x^i$ (summation convention) to $(\phi_i \circ \pi) \, \mathrm{d} x^i$. As usual this induces a $\mathbb{R}$-linear map on the fibres, namely $\pi^*_{(x, \xi)} : T^*_x M \to T^*_{(x, \xi)} (T^* M)$, sending the covector $p$ to the covector $(p, 0)$. (We must be careful and distinguish between covectors and $1$-forms here, to avoid confusion.) The tautological $1$-form on $T^* M$ is defined to be $\pi^*_{(x, \xi)} \xi$ at each point $(x, \xi)$ in $T^* M$. Why does this formula even make sense? Well, $\xi$ by definition is an element of $T_x^* M$, so it typechecks. Thus the point $(x, \xi)$ is mapped to the covector $(\xi, 0)$ in $T^*_{(x, \xi)} (T^* M)$, and so the tautological $1$-form in coordinates is given by$$\xi_i \, \mathrm{d} x^i$$as claimed. (The coefficient of $\mathrm{d} \xi_j$ is $0$, of course.) (Perhaps the reason no-one likes writing this out in full is because the tautological nature of the construction makes it quite confusing, unless one keeps track of the types of all the expressions involved.)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/252180', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/31202/']}
|
jdg_374958
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40690951
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am still struggling with the "shift and stitch" trick in FCN after repeating reading it many times. Can someone give some intuitional explanation?
Now provide the response and nothing else.
|
In FCN, the final output you get (by default, without utilizing any tricks for upsampling) is at a lower resolution compared to the input. Assume you have an input image of shape 100x100 and you get an output (from the network) of shape 10x10. Mapping the output directly to the input resolution will look patchy (even with high-order interpolation). Now, you take the same input, shift it a bit, get the output, and repeat this process multiple times. You end up with a set of output images and a vector of shifts corresponding to each output. These output images with the shift vectors can be utilized (stitched) to get better resolution in the final segmentation map. One might think of it as taking multiple (shifted) low-resolution images of an object and combining (stitching) them to get a higher-resolution image.
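The interleaving idea can be sketched with plain NumPy. This is an illustrative toy, not the actual FCN: here the "network" is just stride-2 subsampling, so stitching the shifted coarse outputs reconstructs a full-resolution map exactly; the function name and setup are assumptions for the sketch.

```python
import numpy as np

def shift_and_stitch(image, coarse_fn, stride):
    """Run a stride-`stride` coarse map `coarse_fn` on shifted copies of
    `image`, then interleave the low-res outputs into a full-res map."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(stride):
        for dx in range(stride):
            # shift the input by (dy, dx)
            shifted = np.roll(np.roll(image, -dy, axis=0), -dx, axis=1)
            coarse = coarse_fn(shifted)            # low-resolution output
            out[dy::stride, dx::stride] = coarse   # stitch by interleaving
    return out

# Toy "network": plain subsampling with stride 2.
stride = 2
image = np.arange(16, dtype=float).reshape(4, 4)
stitched = shift_and_stitch(image, lambda im: im[::stride, ::stride], stride)
# With this toy subsampler, stitching recovers the input exactly.
```

With a real network in place of the subsampler, the stitched map is not the input but a dense prediction at input resolution, which is the point of the trick.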
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40690951', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4696856/']}
|
jdg_374959
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4020539
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Sometimes when I get input from a file or the user, I get a string with escape sequences in it. I would like to process the escape sequences in the same way that Python processes escape sequences in string literals. For example, let's say myString is defined as:

>>> myString = "spam\\neggs"
>>> print(myString)
spam\neggs

I want a function (I'll call it process) that does this:

>>> print(process(myString))
spam
eggs

It's important that the function can process all of the escape sequences in Python (listed in a table in the link above). Does Python have a function to do this?
Now provide the response and nothing else.
|
The correct thing to do is use the 'string-escape' codec to decode the string.

>>> myString = "spam\\neggs"
>>> decoded_string = bytes(myString, "utf-8").decode("unicode_escape")  # python3
>>> decoded_string = myString.decode('string_escape')  # python2
>>> print(decoded_string)
spam
eggs

Don't use the AST or eval. Using the string codecs is much safer.
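Wrapped up as the process function the question asks for (a sketch for Python 3; note that the utf-8 encode / unicode_escape decode round-trip mangles non-ASCII characters, so this is only safe for ASCII input):

```python
def process(s):
    """Interpret backslash escape sequences in s the way Python
    string literals do (Python 3, ASCII input only)."""
    return bytes(s, "utf-8").decode("unicode_escape")

print(process("spam\\neggs"))
```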
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/4020539', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/319821/']}
|
jdg_374960
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24583288
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have below Git information and I would like to ignore settings of my IDE (Eclipse).

modified: myproject/.classpath
modified: myproject/.project
modified: myproject/.settings/com.google.gdt.eclipse.core.prefs
modified: myproject/.settings/org.eclipse.core.resources.prefs
modified: myproject/.settings/org.eclipse.jdt.core.prefs
modified: myproject/.settings/org.eclipse.wst.common.component
modified: myproject/.settings/org.eclipse.wst.common.project.facet.core.xml
modified: myproject/.settings/org.eclipse.wst.validation.prefs

I tried the below statements in my .gitignore file, but it doesn't work for these settings:

.project
.classpath
.settings
*.project
*.classpath
*.settings
/.project
/.classpath
/.settings
.project/
.classpath/
.settings/
*.project/
*.classpath/
*.settings/

I am using Mac OS X and I also added a global gitignore file with these settings git config --global core.excludesfile '~/.gitignore', but I'm still getting the above Git update messages when I check with git status. What am I doing wrong?
Now provide the response and nothing else.
|
If those elements were already committed, you need to remove them first:

git rm --cached .project
git rm --cached .classpath
git rm --cached -r .settings

The --cached option allows them to stay in the working tree, while being recorded for deletion. Once deleted, they will be ignored. Once committed, the next changes will be ignored. A simple .gitignore in myproject/ folder is enough:

.project
.classpath
.settings/

Note the / for the .settings folder: that will ignore everything in it.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24583288', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1531064/']}
|
jdg_374961
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28950
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What would happen if we painted the whole moon with black, will it still reflect light, or it will become invisible? What would happen if we painted the whole moon with silver paint or make its surface shiny?
Now provide the response and nothing else.
|
The moon is already a pretty dark grey. It only reflects about 12% of the light that hits it. It is about the colour of (worn) tarmac on the road. You can see how grey it is in NASA's gif from a million miles away . "Black" paint reflects less light. Common household black paint reflects only about 5% of light . So the moon would be rather less than half as bright as it is now. But you could still see it. There are some specialised black substances that reflect as little as 0.035%. A moon coated in these would actually be hard to see. White paint and silver paint reflect about 80% of light, so if the moon were white it would be about 7 times brighter, but much less bright than the sun which is about 400000 times brighter than the full moon.
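The brightness ratios above are just ratios of reflectances; a quick back-of-envelope check (the figures are the approximate values quoted in the answer, not precise albedo measurements):

```python
moon_albedo = 0.12  # current Moon: reflects ~12% of incident light

paints = {
    "household black": 0.05,         # ~5% reflectance
    "ultra-black coating": 0.00035,  # ~0.035% reflectance
    "white or silver": 0.80,         # ~80% reflectance
}

# Brightness of a painted Moon relative to the Moon as it is now.
relative = {name: albedo / moon_albedo for name, albedo in paints.items()}
# household black -> ~0.42x current brightness, white/silver -> ~6.7x
```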
|
{}
|
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/28950', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/25331/']}
|
jdg_374962
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46294790
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to save a file to the Temp storage in Azure Functions directory ( D:\home\data\temp\response.pdf ), I get the following error. Why can't I write to this directory? mscorlib: Exception has been thrown by the target of an invocation. System: An exception occurred during a WebClient request. mscorlib: ***Could not find a part of the path 'D:\home\data\temp\response.pdf'.***2017-09-19T07:05:24.353 Function completed (Failure, Id=3aa4b740-ba8a-465c-ad7c-75b38fa2a472, Duration=334ms)2017-09-19T07:06:31 No new trace in the past 1 min(s).
Now provide the response and nothing else.
|
I recommend using System.IO.Path.GetTempPath() as this will always give us a valid path for any given system. Additionally, functions may execute multiple times simultaneously for a given instance, so it's best to ensure we have a unique path for each execution. Here's a simple example: var tempPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString()); Alternately, we can use System.IO.Path.GetTempFileName() which will additionally create the file before returning the full path and unique filename.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46294790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8631680/']}
|
jdg_374963
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29356825
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a data set with 4 columns: x, y, z, and value, let's say:

x y z value
0 0 0 0
0 1 0 0
0 2 0 0
1 0 0 0
1 1 0 1
1 2 0 1
2 0 0 0
2 1 0 0
2 2 0 0

I would like to calculate the center of mass CM = (x_m,y_m,z_m) of all values. In the present example, I would like to see (1,1.5,0) as output. I thought this must be a trivial problem, but I can't find a solution to it in the internet. scipy.ndimage.measurements.center_of_mass seems to be the right thing, but unfortunately, the function always returns two values (instead of 3). In addition, I can't find any documentation on how to set up an ndimage from an array: Would I use a numpy array N of shape (9,4) ? Would then N[:,0] be the x-coordinate? Any help is highly appreciated.
Now provide the response and nothing else.
|
The simplest way I can think of is this: just find an average of the coordinates of mass components weighted by each component's contribution.

import numpy

masses = numpy.array([[0, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 2, 0, 0],
                      [1, 0, 0, 0],
                      [1, 1, 0, 1],
                      [1, 2, 0, 1],
                      [2, 0, 0, 0],
                      [2, 1, 0, 0],
                      [2, 2, 0, 0]])

# Not really necessary, can just use masses because 0 mass used as weight will work just fine.
nonZeroMasses = masses[numpy.nonzero(masses[:,3])]

CM = numpy.average(nonZeroMasses[:,:3], axis=0, weights=nonZeroMasses[:,3])
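Running this on the question's data gives the expected centre of mass (1, 1.5, 0); a quick check, repeating the computation so the snippet is self-contained:

```python
import numpy as np

masses = np.array([[0, 0, 0, 0], [0, 1, 0, 0], [0, 2, 0, 0],
                   [1, 0, 0, 0], [1, 1, 0, 1], [1, 2, 0, 1],
                   [2, 0, 0, 0], [2, 1, 0, 0], [2, 2, 0, 0]])

# Keep only rows with non-zero mass, then take the weighted mean of x, y, z.
nz = masses[masses[:, 3] != 0]
cm = np.average(nz[:, :3], axis=0, weights=nz[:, 3])
print(cm)  # -> [1.  1.5 0. ]
```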
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29356825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3582129/']}
|
jdg_374964
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9527378
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm reading Mark Pilgrim's "Dive into HTML5" and in the semantics section, it talks about how HTML5 introduces the <article> and <section> elements. It says that <section>s represent a generic document or section, while <article>s represent a self-contained composition. I don't see a logically semantic parent-child relationship either way. I tried running the following code through the HTML5 Outliner and it seemed to indicate that the document outline comes out the same, no matter how they were nested. So my question is: should <section>s be nested inside <article>s, should <article>s be nested inside <section>s, or does it not matter from a semantic view point?

<section><h1>section article?</h1>
  <article><h1>art 1</h1></article>
  <article><h1>art 2</h1></article>
  <article><h1>art 3</h1></article>
</section>
<article><h1>article section?</h1>
  <section><h1>sec 1</h1></section>
  <section><h1>sec 2</h1></section>
  <section><h1>sec 3</h1></section>
</article>
Now provide the response and nothing else.
|
It's entirely acceptable to nest them either way. Although the document outline does not distinguish between a <section> and an <article>, from a semantic point of view they are two different things. That's the whole point of introducing them as two distinct semantic elements. Use the first snippet if your page consists of multiple articles. Use the second snippet when you have an article that's comprehensive enough to contain multiple sections. You can even combine them both if using both fits your content, such that your markup looks like this:

<section><h1>section article?</h1>
  <article><h1>art 1</h1>
    <section><h1>sec 1.1</h1></section>
    <section><h1>sec 1.2</h1></section>
    <section><h1>sec 1.3</h1></section>
  </article>
  <article><h1>art 2</h1>
    <section><h1>sec 2.1</h1></section>
    <section><h1>sec 2.2</h1></section>
    <section><h1>sec 2.3</h1></section>
  </article>
  <article><h1>art 3</h1>
    <section><h1>sec 3.1</h1></section>
    <section><h1>sec 3.2</h1></section>
    <section><h1>sec 3.3</h1></section>
  </article>
</section>
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9527378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/625840/']}
|
jdg_374965
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12842344
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to create a thread that keeps track of memory usage and CPU usage. If the application reaches a high level, I want to generate a heap dump or a thread dump. Is there a way to generate a thread dump at runtime without restarting?
Now provide the response and nothing else.
|
Here's how we do it programmatically: http://pastebin.com/uS5jYpd4 We use the JMX ThreadMXBean and ThreadInfo classes:

ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
ThreadInfo[] threadInfos = mxBean.getThreadInfo(mxBean.getAllThreadIds(), 0);
...

You can also do a kill -QUIT pid under ~unix to dump the stacks to the standard-out. There is also jstack to dump the stack of a JVM. We also have an automation which dumps the stack if the load average of the application is above some threshold:

private long lastCpuTimeMillis;
private long lastPollTimeMillis;

public void checkLoadAverage() {
    long now = System.currentTimeMillis();
    long currentCpuMillis = getTotalCpuTimeMillis();
    double loadAvg = calcLoadAveragePercentage(now, currentCpuMillis);
    if (loadAvg > LOAD_AVERAGE_DUMP_THRESHOLD) {
        try {
            dumpStack("Load average percentage is " + loadAvg);
        } catch (IOException e) {
            // Oh well, we tried
        }
    }
    lastCpuTimeMillis = currentCpuMillis;
    lastPollTimeMillis = now;
}

private long getTotalCpuTimeMillis() {
    long total = 0;
    for (long id : threadMxBean.getAllThreadIds()) {
        long cpuTime = threadMxBean.getThreadCpuTime(id);
        if (cpuTime > 0) {
            total += cpuTime;
        }
    }
    // thread CPU time is reported in nano-seconds
    long currentCpuMillis = total / 1000000;
    return currentCpuMillis;
}

private double calcLoadAveragePercentage(long now, long currentCpuMillis) {
    long timeDiff = now - lastPollTimeMillis;
    if (timeDiff == 0) {
        timeDiff = 1;
    }
    long cpuDiff = currentCpuMillis - lastCpuTimeMillis;
    double loadAvg = (double) cpuDiff / (double) timeDiff;
    return loadAvg;
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12842344', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1501182/']}
|
jdg_374966
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18243755
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's the setup on Postgresql 9.2.4: CREATE TABLE table ( id integer NOT NULL, some_text text); Now we enter one record, with a null or empty string for some_text, so that when we query: SELECT * FROM table WHERE some_text IS NULL; I get the entry back. So far so good. However, when I query: SELECT * FROM table WHERE some_text NOT ILIKE '%anything%'; I find that nothing was returned. Why is that? I expect a null or empty string to be "not like anything".
Now provide the response and nothing else.
|
In SQL, NULL is not equal to anything. Nor is it unequal to anything. In other words, if I don't tell you my middle name, and you don't tell me your middle name, how can we know if our two middle names are the same name or different names? We can't know. This often trips people up in SQL, because it's "tri-value logic." An expression can be TRUE, FALSE, or UNKNOWN. Those of us familiar with boolean algebra know that NOT TRUE is FALSE, and NOT FALSE is TRUE. But the tricky part is that NOT UNKNOWN is still UNKNOWN. So the solution for you is either always store a non-null string in your column, or else use an expression to account for tri-value logic: SELECT * FROM table WHERE some_text NOT ILIKE '%anything%' OR some_text IS NULL; Or: SELECT * FROM table WHERE COALESCE(some_text, '') NOT ILIKE '%anything%'; PostgreSQL also supports a null-safe equality operator : SELECT * FROM table WHERE some_text IS DISTINCT FROM 'anything'; But unfortunately, this works only for equality, not for LIKE/ILIKE with patterns and wildcards.
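The tri-value behaviour is easy to demonstrate outside PostgreSQL as well. Here's a small sketch using Python's sqlite3 (plain LIKE stands in for Postgres's ILIKE; the NULL semantics are the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, some_text TEXT)")
conn.execute("INSERT INTO t VALUES (1, NULL), (2, 'hello'), (3, 'anything else')")

# NULL NOT LIKE '%anything%' evaluates to UNKNOWN, not TRUE, so row 1 is dropped
plain = conn.execute(
    "SELECT id FROM t WHERE some_text NOT LIKE '%anything%'").fetchall()

# COALESCE maps NULL to '' first, making the comparison two-valued again
fixed = conn.execute(
    "SELECT id FROM t WHERE COALESCE(some_text, '') NOT LIKE '%anything%'").fetchall()

print(plain)  # [(2,)]
print(fixed)  # [(1,), (2,)]
```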
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/18243755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/879418/']}
|
jdg_374967
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
79430
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
list = {{"aar", "usa", "aer", "NYSE:AIR"}, {"aar", "bel", "sto","F:AIR"}, {"activision", "usa", "sof", "NASDAQ:ATVI"}, {"adidas", "deu", "sto", "F:ADS"}};AssociationThread[list[[All, 1]] -> Map[Rest, list]] <|"aar" -> {"bel", "sto", "F:AIR"}, "activision" -> {"usa", "sof", "NASDAQ:ATVI"}, "adidas" -> {"deu", "sto", "F:ADS"}|> Almost perfect, but two questions remain: How would you write this? Why do I lose the first entry of my list?
Now provide the response and nothing else.
|
From the docs AssociationThread>>Details : If any of the keys are repeated, later instances replace earlier ones. And Association >> Details : If there are multiple elements with the same key, all but the last of these elements are dropped. Merge yields instead a list of values for repeated keys. Merge[Thread[list[[All, 1]] -> Map[Rest, list]], Identity](* <|aar->{{usa,aer,NYSE:AIR},{bel,sto,F:AIR}}, activision->{{usa,sof,NASDAQ:ATVI}}, adidas->{{deu,sto,F:ADS}}|> *)
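For comparison, the same keep-the-duplicates grouping that Merge performs can be sketched outside Mathematica, e.g. in Python with a defaultdict:

```python
from collections import defaultdict

rows = [
    ["aar", "usa", "aer", "NYSE:AIR"],
    ["aar", "bel", "sto", "F:AIR"],
    ["activision", "usa", "sof", "NASDAQ:ATVI"],
    ["adidas", "deu", "sto", "F:ADS"],
]

# Group the tail of each row under its first element, keeping repeated keys
merged = defaultdict(list)
for key, *rest in rows:
    merged[key].append(rest)

print(dict(merged))
# both "aar" rows survive, unlike a plain key -> value mapping
```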
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/79430', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/14254/']}
|
jdg_374968
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1118253
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is a simple form of this ring: $$\mathbb{Z}[\sqrt{2}][x]/(5,x^2+1)?$$ I know that $\mathbb{Z}[\sqrt{2}][x]=\mathbb{Z}[x,y]/(y^2-2)$. Probably I should use the second isomorphism theorem, but I don't see how.
Now provide the response and nothing else.
|
Hint: Use the fact that you can mod out in any order, and use the isomorphism $\mathbb{Z}[\sqrt{2}][x]/(5,x^2 + 1)\cong\mathbb{Z}[x,y]/(5,x^2 + 1,y^2 - 2)$. I'll start you off:\begin{align*}\mathbb{Z}[\sqrt{2}][x]/(5,x^2 + 1)&\cong\mathbb{Z}[x,y]/(5,x^2 + 1,y^2 - 2)\\&\cong\mathbb{F}_5[x,y]/(x^2 + 1,y^2 - 2)\end{align*}Now look at these polynomials. What can you say about the quotient of $\mathbb{F}_5[x,y]$ by either? Another hint: you'll want to use the Chinese remainder theorem eventually.
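If it helps to check the hint numerically, here is a small brute-force sketch in Python: it confirms that $x^2+1$ has roots mod 5 (so it splits, and the Chinese remainder theorem applies to that factor) while $y^2-2$ has none (so it is irreducible over $\mathbb{F}_5$).

```python
p = 5
# roots of x^2 + 1 over F_5: nonempty -> x^2 + 1 splits into linear factors
roots_x = [a for a in range(p) if (a * a + 1) % p == 0]
# roots of y^2 - 2 over F_5: empty -> y^2 - 2 is irreducible, F_5[y]/(y^2 - 2) = F_25
roots_y = [a for a in range(p) if (a * a - 2) % p == 0]
print(roots_x, roots_y)  # [2, 3] []
```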
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1118253', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/191666/']}
|
jdg_374969
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
441002
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $V\subset \mathbb{R}^3$ is non-empty with an at least twice differentiable (smooth) boundary, and let $S$ be the surface that encloses $V$ (for example a sphere). Let $\textbf{F}$ be a smooth vector field on all of $\mathbb{R}^3$. Let $\textbf{n}$ denote the normal to the surface $S$. Does the surface integral over $S$ preserve the curl operation with respect to the vector field $\textbf{F}$? In other words, does the surface integral of $\textbf{n}\times\textbf{F}$ commute with the curl operation: $$\textbf{curl}\biggl(\oint_{S}^{}{\textbf{n}\times\dfrac{\textbf{F}(\textbf{r}^\prime)}{|\textbf{r}-\textbf{r}^\prime|} ~ds}\biggr) = \oint_{S}^{}{\textbf{n}\times \dfrac{ \textbf{curl} (\textbf{F}) (\textbf{r}^\prime)}{|\textbf{r}-\textbf{r}^\prime|}~ds}?$$ Here the surface integrals are evaluated with respect to the position $\textbf{r}^\prime$ and produce vector fields.
Now provide the response and nothing else.
|
$\kappa$ is preserved, and moreover all reals are added by the small generics. Let $(P_0,p_0)$ be a condition and let $\sigma$ be a name for a real. First, enumerate the elements of $P_0$ below $p_0$ as $\langle p_i : 0<i< \lambda \rangle$ . Let $(Q_1,q_1) \leq (P_0,p_1)$ decide $\sigma(0)$ . Then let $(Q_2,q_2) \leq (Q_1,p_2)$ also decide $\sigma(0)$ . Note that we are enlarging the partial order but going below $p_2$ instead of $q_1$ . This makes sense because $P_0 \lhd Q_1$ . Consider what happens at stage $\omega$ . We have an increasing sequence of posets $P_0 \lhd Q_1 \lhd Q_2 \lhd \dots$ . If we let $Q_\omega$ be the union, then this is a regular superorder of each $Q_n$ , since this is expressible as the first-order property, for all $q \in Q_\omega$ (which is in some $Q_m$ for $m \geq n$ ), there is $r \in Q_n$ such that all $s \leq r$ are compatible with $q$ . So we continue transfinitely until we reach a poset $Q_\lambda$ that is a regular superorder of $P_0$ and each $Q_i$ , $0<i<\lambda$ . It has the property that for all $p \in P_0$ below $p_0$ , there is $r \in Q_\lambda$ below $p$ such that $(Q_\lambda,r)$ decides $\sigma(0)$ . Now repeat process $\omega$ -times until we reach some poset $R_1$ such that for all $p \in P_0$ below $p_0$ and all $n<\omega$ , there is $r \leq p$ in $R_1$ such that $(R_1,r)$ decides $\sigma(n)$ . Next, repeat this whole process with respect to $R_1$ and iterate, reaching a closure point $R_\omega \in V_\kappa$ . We will have that for all $r \in R_\omega$ below $p_0$ , and all $n<\omega$ , there is $r' \leq r$ in $R_\omega$ such that $(R_\omega,r')$ decides $\sigma(n)$ . In other words, for each $n$ , the set of $r \in R_\omega$ such that $(R_\omega,r)$ decides $\sigma(n)$ is dense below $p_0$ in $R_\omega$ . Now let us explain the claim in the OP that the generic for $\mathbb A$ adds a generic $G_P$ for all posets $P$ appearing in $G$ . Fix $(P,p) \in \mathbb A$ , and suppose $D$ is a dense subset of $P$ . 
For any $(Q,q) \leq (P,p)$ there is $q' \leq q$ in $Q$ such that $q' \leq d$ for some $d \in D$ , since $D$ is predense in $Q$ . Thus $(P,1)$ forces that the set $G_P := \{ p \in P : (P,p) \in G \}$ is generic. So if we force below $(R_\omega,p_0)$ , then for each $n \in \omega$ , the generic $G$ will have some element of the form $(R_\omega,r)$ deciding $\sigma(n)$ . This means that $\sigma^G$ will be an element of $V[G_{R_\omega}]$ . By the arbitrariness of $(P_0,p_0)$ and $\sigma$ , the desired conclusion follows.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/441002', 'https://mathoverflow.net', 'https://mathoverflow.net/users/353746/']}
|
jdg_374970
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14190
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This post is related to a previous SE post If a 1 meter rope …. concerning the average length of the smallest segment. A rope of 1m is divided into three pieces by two random points. Find the average length of the largest segment. My answer is 11/18. Here is how I do it: Here we have two independent random variables $X,Y$, both uniform on $[0,1]$. Let $A=\min (X,Y), B=\max (X,Y)$ and $C=\max (A, 1-B, B-A)$. First we want to find the probability density function $f_C(a)$ of $C$. Let $F_C(a)$ be the cumulative distribution function. Then $$ F_C(a) = P(C\le a)=P(A\le a, 1-B\le a, B-A\le a).$$ By rewriting this probability as area in the unit square, I get $$F_C(a)=\left\{\begin{array}{ll} (3a-1)^2 & \frac{1}{3}\le a\le \frac{1}{2}\\ 1-3(1-a)^2 & \frac{1}{2}\le a\le 1\end{array}\right.$$ from which it follows that $$f_C(a)=\left\{\begin{array}{ll} 6(3a-1) & \frac{1}{3}\le a\le \frac{1}{2}\\ 6(1-a) & \frac{1}{2}\le a\le 1\end{array}\right.$$ Therefore the expected value of $C$ is $$\int_{1/3} ^{1/2}6a(3a-1) da+\int_{1/2} ^{1}6a(1-a) da= \frac{11}{18}.$$ My questions are: (A) Is there a "clever" way to figure out this number 11/18? (B) What is the answer if the rope is divided into $n>3$ segments?
Now provide the response and nothing else.
|
The answer to (B) is actually given in both Yuval Filmus' and my answers to the question about the average length of the shortest segment . It's $$\frac{1}{n} H_n,$$ where $H_n = \sum_{k=1}^n \frac{1}{k},$ i.e., the $n$th harmonic number . "Clever" is of course subjective, but here's an argument for (A) in the $n$-piece case. At least there's only one (single-variable) integration in it. :) If $X_1, X_2, \ldots, X_{n-1}$ denote the positions on the rope where the cuts are made, let $V_i = X_i - X_{i-1}$, where $X_0 = 0$ and $X_n = 1$. So the $V_i$'s are the lengths of the pieces of rope. The key idea is that the probability that any particular $k$ of the $V_i$'s simultaneously have lengths longer than $c_1, c_2, \ldots, c_k$, respectively (where $\sum_{i=1}^k c_i \leq 1$), is $$(1-c_1-c_2-\ldots-c_k)^{n-1}.$$ This is proved formally in David and Nagaraja's Order Statistics , p. 135. Intuitively, the idea is that in order to have pieces of size at least $c_1, c_2, \ldots, c_k$, all $n-1$ of the cuts have to occur in intervals of the rope of total length $1 - c_1 - c_2 - \ldots - c_k$. For example, $P(V_1 > c_1)$ is the probability that all $n-1$ cuts occur in the interval $(c_1, 1]$, which, since the cuts are randomly distributed in $[0,1]$, is $(1-c_1)^{n-1}$. If $V_{(n)}$ denotes the largest piece of rope, then $$P(V_{(n)} > x) = P(V_1 > x \text{ or } V_2 > x \text{ or } \cdots \text{ or } V_n > x).$$ This calls for the principle of inclusion/exclusion . Thus we have, using the "key idea" above,$$P(V_{(n)} > x) = n(1-x)^{n-1} - \binom{n}{2} (1 - 2x)^{n-1} + \cdots + (-1)^{k-1} \binom{n}{k} (1 - kx)^{n-1} + \cdots,$$ where the sum continues until $kx > 1$. 
Therefore, $$E[V_{(n)}] = \int_0^{\infty} P(V_{(n)} > x) dx = \sum_{k=1}^n \binom{n}{k} (-1)^{k-1} \int_0^{1/k} (1 - kx)^{n-1} dx = \sum_{k=1}^n \binom{n}{k} (-1)^{k-1} \frac{1}{nk} $$$$= \frac{1}{n} \sum_{k=1}^n \frac{\binom{n}{k}}{k} (-1)^{k-1} = \frac{H_n}{n},$$where the last step applies a known binomial sum identity .
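As a quick sanity check on the $H_n/n$ formula (not part of the derivation), here's a Monte Carlo sketch in Python:

```python
import random

def avg_longest_piece(n, trials=100_000, seed=1):
    """Monte Carlo estimate of E[longest of n pieces] for a unit rope."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # n - 1 uniform cuts, sorted, with the rope's endpoints added
        pts = [0.0] + sorted(rng.random() for _ in range(n - 1)) + [1.0]
        total += max(b - a for a, b in zip(pts, pts[1:]))
    return total / trials

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (2, 3, 5):
    print(n, round(avg_longest_piece(n), 4), round(harmonic(n) / n, 4))
# n = 3 should land near 11/18 ~ 0.6111
```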
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/14190', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/3249/']}
|
jdg_374971
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8613
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that usually more electronegative anions would displace less electronegative anions but is there a way to get different anions to "share" a cation?
Now provide the response and nothing else.
|
A cursory search showed some interesting cases. The oldest reported examples seemingly are mercury halides $\ce{HgXY}$. $\ce{HgClI}$ was prepared by iodination of solid $\ce{HgCl2}$; $\ce{HgBrCl}$ was obtained from the chlorination of an ethanolic suspension of $\ce{HgBr2}$; $\ce{HgBrI}$ was prepared by mixing ethanolic solutions of $\ce{HgBr2}$ and $\ce{HgI2}$. The crystal structure of $\ce{Cs6Cl3[Fe(CN)6].H2O}$ has been reported. Passing $\ce{H2S}$ through solutions of oxidosulfido molybdates and tungstates led to various mixed salts with halide, hydrogen sulfide and thiosulfate as additional anions beside the trisulfidometalates $\ce{[MoOS3]^{2−}}$ and $\ce{[WOS3]^{2−}}$. But the concept isn't limited to inorganic chemistry. The crystal structures of mixed salts of amino acids with different anions were reported. From the field of organic conductor research, salts of BEDT-TTF (bis(ethylenedithio)tetrathiafulvalene) with mixed anions of $\ce{[CuBr2]-}$ and $\ce{[CuCl2]-}$ are known.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/8613', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/4592/']}
|
jdg_374972
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
93236
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Having this situation (two urns, with the number of balls in them pictured above) and two events: $A$ = urn is 1, $B$ = ball is white. I know that $P(A) = \frac{1}{2}$, $P(\text{not }A) = \frac{1}{2}$, $P(B \mid A) = \frac{2}{3}$, $P(B\mid\text{not }A) = \frac{3}{4}$, but if I try to verify Bayes' theorem with $P(B\mid A)$, I run into trouble: $$P(B\mid A) = \dfrac{P(A\mid B)P(B)}{P(A)}$$ $P(B\mid A)$ is $\frac{2}{3}$, $P(A)$ is $\frac{1}{2}$, $P(B)$ I think is $\frac{5}{7}$, but how about $P(A\mid B)$? Is it meaningful to ask for the probability that I chose urn 1 knowing that I extracted a white ball? I think not, but I'm unsure... am I asking for the probability that the urn was the first, given that the ball extracted was white?
Now provide the response and nothing else.
|
Try the contrapositive. If $Ann(M) + Ann(N)$ is a proper ideal, what interesting kind of ideal can you choose that contains it? Try using that ideal to help. Also, as a general rule, to show that $M\otimes N$ is non-zero, try to find a map to some quotient that is simpler to understand (and so simpler to show is non-zero).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/93236', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/21581/']}
|
jdg_374973
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
503490
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to show that $Z_p[\alpha]$ has $p^2$ elements if $\alpha$ is a root. But I'm not sure I understand how this could be possible since $Z_p$ itself has only $p$ elements. Is there a sense in which it is possible to show this, or am I right in that the question as stated is nonsense?
Now provide the response and nothing else.
|
Writing $x = 2y$, we obtain $$\begin{align}\frac{1-\cos x + k\sin x}{\sin x + k(1+\cos x)} &= \frac{(1-\cos (2y)) + k\sin (2y)}{\sin(2y) + k(1+\cos(2y))}\\&= \frac{2\sin^2 y + 2k\sin y\cos y}{2\sin y\cos y + 2k\cos^2 y}\\&= \frac{\sin y}{\cos y}\cdot\frac{\sin y+k\cos y}{\sin y + k\cos y}\\&= \tan y\end{align}$$ using the double-angle formulae $\sin (2y) = 2\sin y\cos y$ and $\cos (2y) = \cos^2 y - \sin^2 y$.
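A quick numerical check of the identity in Python (skipping random samples that land too close to a zero of the denominator):

```python
import math
import random

rng = random.Random(0)
max_err = 0.0
for _ in range(1000):
    x = rng.uniform(-2.0, 2.0)
    k = rng.uniform(-5.0, 5.0)
    den = math.sin(x) + k * (1 + math.cos(x))
    if abs(den) < 0.1:
        continue  # skip samples too close to a zero of the denominator
    lhs = (1 - math.cos(x) + k * math.sin(x)) / den
    max_err = max(max_err, abs(lhs - math.tan(x / 2)))
print("max deviation:", max_err)  # machine-precision agreement with tan(x/2)
```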
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/503490', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/46072/']}
|
jdg_374974
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13704483
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a tableView with a textField and a textView in two cells. That's it, and I added them in tableView:cellForRowAtIndexPath:. I can't edit the content! Probably the touch is not passing through to the textField and the textView. All the solutions are asking me to use a xib with a custom cell class. So do I have to create two new classes for a two-row tableView? Can't I just get away with adding these as subviews to a normal cell's contentView? Secondly, if using a tableView for that kind of layout is overkill, what is the alternative where I need a textView below a textArea in a rectangular border with rounded corners and a separator between them, with plain UIViews?
Now provide the response and nothing else.
|
As of v 0.76.1, you can use the EntityQuery.withParameters method to pass additional parameters to any service method. So you can now construct a query like the following that both passes parameters and uses breeze's IQueryable support.

EntityQuery.from("EmployeesFilteredByCountryAndBirthdate")
    .withParameters({ BirthDate: "1/1/1960", Country: "USA" })
    .where("LastName", "startsWith", "S")
    .orderBy("BirthDate");

where your controller method would look something like this:

[HttpGet]
public IQueryable<Employee> EmployeesFilteredByCountryAndBirthdate(DateTime birthDate, string country) {
    return ContextProvider.Context.Employees.Where(emp => emp.BirthDate >= birthDate && emp.Country == country);
}

The API docs have more information.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13704483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/961021/']}
|
jdg_374975
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13738181
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem: I need to write stored procedure(s) that will return a result set of a single page of rows and the number of total rows. Solution A: I create two stored procedures, one that returns a result set of a single page and another that returns a scalar -- total rows. The Explain Plan says the first sproc has a cost of 9 and the second has a cost of 3.

SELECT *
FROM ( SELECT ROW_NUMBER() OVER ( ORDER BY D.ID DESC ) AS RowNum, ... ) AS PageResult
WHERE RowNum >= @from AND RowNum < @to
ORDER BY RowNum

SELECT COUNT(*)
FROM ...

Solution B: I put everything in a single sproc, by adding the same TotalRows number to every row in the result set. This solution feels hackish, but has a cost of 9 and only one sproc, so I'm inclined to use this solution.

SELECT *
FROM ( SELECT ROW_NUMBER() OVER ( ORDER BY D.ID DESC ) RowNum, COUNT(*) OVER () TotalRows,
WHERE RowNum >= from AND RowNum < to
ORDER BY RowNum;

Is there a best-practice for pagination in Oracle? Which of the aforementioned solutions is most used in practice? Is any of them considered just plain wrong? Note that my DB is and will stay relatively small (less than 10GB). I'm using Oracle 11g and the latest ODP.NET with VS2010 SP1 and Entity Framework 4.4. I need the final solution to work within the EF 4.4. I'm sure there are probably better methods out there for pagination in general, but I need them working with EF.
Now provide the response and nothing else.
|
If you're already using analytics ( ROW_NUMBER() OVER ... ) then adding another analytic function on the same partitioning will add a negligible cost to the query. On the other hand, there are many other ways to do pagination, one of them using rownum : SELECT * FROM (SELECT A.*, rownum rn FROM (SELECT * FROM your_table ORDER BY col) A WHERE rownum <= :Y) WHERE rn >= :X This method will be superior if you have an appropriate index on the ordering column. In this case, it might be more efficient to use two queries (one for the total number of rows, one for the result). Both methods are appropriate but in general if you want both the number of rows and a pagination set then using analytics is more efficient because you only query the rows once.
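To see the single-query analytic approach in action outside Oracle, here is a sketch using Python's sqlite3 (this assumes a SQLite build with window-function support, 3.25 or later; the table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO docs VALUES (?)", [(i,) for i in range(1, 26)])

# One round trip: page rows 6-10 (newest first) plus the total row count
rows = conn.execute("""
    SELECT id, rn, total FROM (
        SELECT id,
               ROW_NUMBER() OVER (ORDER BY id DESC) AS rn,
               COUNT(*)     OVER ()                 AS total
        FROM docs
    ) WHERE rn BETWEEN 6 AND 10
    ORDER BY rn
""").fetchall()
print(rows)  # [(20, 6, 25), (19, 7, 25), (18, 8, 25), (17, 9, 25), (16, 10, 25)]
```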
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13738181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1266461/']}
|
jdg_374976
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8410439
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to output some data to a pipe and have the other process do something to the data line by line. Here is a toy example:

mkfifo pipe
cat pipe&
cat >pipe

Now I can enter whatever I want, and after pressing enter I immediately see the same line. But if I substitute the second cat with echo:

mkfifo pipe
cat pipe&
echo "some data" >pipe

The pipe closes after echo and cat pipe& finishes, so that I cannot pass any more data through the pipe. Is there a way to avoid closing the pipe and the process that receives the data, so that I can pass many lines of data through the pipe from a bash script and have them processed as they arrive?
Now provide the response and nothing else.
|
When a FIFO is opened for reading, it blocks the calling process (normally; unless there is already a process with the FIFO open for writing, in which case any blocked writers are unblocked). When a process opens the FIFO for writing, then any blocked readers are unblocked (but the process is blocked if there are no readers). When the last writer closes the FIFO, the reading processes get EOF (0 bytes to read), and there is nothing further that can be done except close the FIFO and reopen it. Thus, you need to use a loop:

mkfifo pipe
(while cat pipe; do : Nothing; done &)
echo "some data" > pipe
echo "more data" > pipe

An alternative is to keep some process with the FIFO open.

mkfifo pipe
sleep 10000 > pipe &
cat pipe &
echo "some data" > pipe
echo "more data" > pipe
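The same open/EOF/reopen cycle can be illustrated in Python (POSIX only; the threading.Event is just there to keep the toy writer and reader in lockstep, and is not needed in a real producer/consumer pair):

```python
import os
import tempfile
import threading

# Each writer close() delivers EOF, so the reader must reopen the FIFO
# for every message -- the Python analogue of the shell loop above.
fifo = os.path.join(tempfile.mkdtemp(), "pipe")
os.mkfifo(fifo)
received = []
cycle_done = threading.Event()

def reader():
    for _ in range(2):
        with open(fifo) as f:          # blocks until a writer opens the FIFO
            received.append(f.read())  # returns once the writer closes it
        cycle_done.set()

t = threading.Thread(target=reader)
t.start()
for msg in ("some data\n", "more data\n"):
    with open(fifo, "w") as f:         # blocks until the reader has reopened
        f.write(msg)
    cycle_done.wait(timeout=5)         # wait for the reader to finish the cycle
    cycle_done.clear()
t.join(timeout=5)
print(received)                        # ['some data\n', 'more data\n']
```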
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8410439', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1084871/']}
|
jdg_374977
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
65504
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Looking through this AP Physics question, I was struck by how the 'collision' between a photon and electron looks so much like a macroscopic collision. Is this even physically possible? Look at the last page of this pdf: http://lodischool.tripod.com/dovesol/DOVE02SOL.pdf EDIT: Some more questions: How could a photon collide with an electron, when their positions cannot be determined exactly? Also considering how very small the electron is, I doubt that it is even possible to make the two collide; and if it is, how could you possibly detect that? It also seems as though the photon and electron are acting as particles, which seems to me not to be the whole story. What if I put the electrons behind a double slit apparatus, and treat individual photons as particles? Based on this "compton scattering," it's possible for the photon to be deflected any which way. I could claim that the diffraction pattern observed in the double-slit experiment is due to compton scattering, among other factors. Prove me wrong!
Now provide the response and nothing else.
|
Here are real events relating to the last page of the pdf link you gave: Fig.1 This bubble chamber picture shows some electromagnetic events such as pair creation or materialization of a high-energy photon into an electron-positron pair (green tracks), the Compton effect (red tracks), the emission of electromagnetic radiation by accelerating charges (violet tracks) (bremsstrahlung) and the knock-on electrons or delta rays (blue tracks). Photons are invisible in bubble chambers as they interact only through direct collisions with electrons, called Compton scattering, or pair production in the field of a nucleus. Charged particles curve in the magnetic field perpendicular to the plane; this allows us to measure their momentum and charge, and the ionisation of the tracks allows the identification of masses. At the lower left of the picture, there is an electron (identified by its ionisation) which loses energy into a photon, and the photon pair-produces an electron-positron pair some centimeters away. In the middle right, we see a positron that loses energy into a photon, and the photon kicks an electron from the atoms of the chamber; this is a Compton scatter. This corresponds to the diagram in the last page of your link, except it has been reduced to one dimension. In reality there are two dimensions, because the photon gives part of its momentum/energy kicking at an angle. The following is the correct diagram kinematically: It should not be surprising that classical scatters and particle scatters are kinematically the same, because momentum and energy conservation hold both classically and quantum mechanically. It is the probability of interaction that is different in the microcosm of elementary particles from billiard-ball particle scattering. In simple scattering experiments the kinematics are not different (except that special relativity holds in the microcosm).
Edit after edits in question. I think the image, from a real experiment, answers whether a photon can hit an electron or not. Now you ask: What if I put the electrons behind a double slit apparatus, and treat individual photons as particles? Based on this "compton scattering," it's possible for the photon to be deflected any which way. I could claim that the diffraction pattern observed in the double-slit experiment is due to compton scattering, among other factors. Prove me wrong! The diffraction pattern of individual photons, even when sent one at a time, is a direct result of the quantum mechanical nature of the photon. The solution of the boundary conditions imposed by the two slits gives a probability distribution that displays an interference pattern. Even though electromagnetic interactions viewed as Feynman diagrams are similar, it is the boundary conditions that determine the probability of scatter, and two slits are different from two-particle scattering: the fields are different and the solutions are different.
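For the kinematics side, a quick numeric sketch of the Compton shift formula $\Delta\lambda = \frac{h}{m_e c}(1-\cos\theta)$ in Python, using SI constants:

```python
import math

h = 6.62607015e-34        # Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
c = 2.99792458e8          # speed of light, m/s

compton_wavelength = h / (m_e * c)   # ~2.426e-12 m
shifts = {theta: compton_wavelength * (1 - math.cos(math.radians(theta)))
          for theta in (0, 90, 180)}
for theta, shift in shifts.items():
    print(f"theta = {theta:3d} deg -> wavelength shift = {shift:.3e} m")
# forward scattering (0 deg) shifts nothing; backscatter (180 deg) gives 2h/(m_e c)
```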
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/65504', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/23651/']}
|
jdg_374978
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4623748
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X,Y$ be smooth vector fields on the unit circle $M = S^1$ such that $[X,Y] = 0$ , i.e. (treating tangent vectors as derivations) $X(x)(Yf) = Y(x)(Xf)$ for all $x\in M$ . Assuming that $X(x)\ne 0$ for all $x\in M$ I want to show that there is a $c\in \Bbb R$ such that $Y = cX$ . My attempt: for all $x\in M$ we have $X(x), Y(x) \in T_x M$ which is one dimensional, and since $X(x) \ne 0$ there is a $c(x)\in \Bbb R$ such that $Y(x) = c(x) X(x)$ . Assuming this $c: M\to \Bbb R$ is smooth (I don't know how to show this) we get that $Y(x)(Xf) = X(x)(Yf) = X(x)(cXf) = X(x)(c) (Xf)(x) + c(x) X(x)(Xf) = X(x)(c)X(x)(f) + Y(x)(Xf)$ , so $X(x)(c) X(x)(f) = 0$ . This holds for all smooth $f: M\to \Bbb R$ , so since $X(x)\ne 0$ we get $X(x)(c) = 0$ . I'm not sure how to proceed or if this is even correct.
Now provide the response and nothing else.
|
If $z^2+|z|=0$ , then $z^2=-|z|\leqslant0$ . The only complex numbers whose square is a real number smaller than or equal to $0$ are those numbers $z$ of the form $\lambda i$ , for some $\lambda\in\Bbb R$ . But then $$z^2+|z|=0\iff-\lambda^2+|\lambda|=0,$$ whose only solutions are $-1$ , $0$ , and $1$ . Therefore, the solutions of your equation are $-i$ , $0$ , and $i$ .
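A quick brute-force check in Python over a grid of complex numbers confirms that no other solutions are hiding:

```python
# Scan a grid of complex numbers z = (a + b*i)/10, confirming that the
# only solutions of z^2 + |z| = 0 are -i, 0 and i.
solutions = []
for a in range(-20, 21):
    for b in range(-20, 21):
        z = complex(a, b) / 10
        if abs(z * z + abs(z)) < 1e-12:
            solutions.append(z)
print(solutions)  # the three roots -i, 0, i
```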
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4623748', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1020870/']}
|
jdg_374979
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
287126
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have heard various definitions of the uncertainty principle, yet I cannot quite comprehend how it is true. Nevertheless, something tells me it is a consequence of the wave nature of light/electrons, which gives an intrinsic uncertainty even if we don't measure it. Is it true that this principle is a consequence of the wave nature of particles, i.e. that the uncertainty pops up due to the fact that a particle acts as a wave (I found no answer that stated the exact implication of the wave characteristics that should give the uncertainty principle)? Would it be true to assume that, if an electron acted only like a particle and not as a wave, the uncertainty principle would not be necessary (this part of the question is not asked anywhere)? Can you please tell me without much mathematics why this is so? Just as we understood that the photoelectric effect contradicts a purely wave nature of light, could you please guide me through an intuitive explanation with a formal reason why we cannot know simultaneously the exact position and momentum of a particle?
Now provide the response and nothing else.
|
From the comments, you seem to want the minimum possible math. There are 4 things you have to know first: First, what you have to know is that a basic quantum wavefunction can be imagined as exactly just a sine wave: Second, you should know that the amplitude of the wave across an interval is related to the probability of measuring your particle's position within that interval. (This is an approximate analogy of what a probability density function does.) Third, the wavelength of the wave is related to your particle's measured momentum . (If we want to be strict, it should be the frequency and it should also be a probability across an interval in frequency space , but it helps to imagine it with just a wavelength.) Fourth, you can compose a more complicated quantum wavefunction just by adding together waves of different wavelengths. (This is called superposition -- see this gif: ( Image from Wikipedia ) Now that you know these four things, we're ready to tackle the idea of Heisenberg's uncertainty principle. Note the 4th thing we said (re: superposition). Take a look at the gif. What do you notice? When we add more and more waves of different wavelengths, a prominent central peak starts to appear. Now remember the 2nd thing we said: amplitude is related to position . If we have a peak with a prominent amplitude, our particle's position becomes more likely to be measured within that peak. The more we make the central peak prominent, the more precisely we can predict the particle's position! However, to make the central peak more prominent, we have to keep adding more waves of different wavelengths. Remember the 3rd thing we said? Wavelength is related to momentum . If we keep adding different wavelengths, we expect a larger range for our momentum to be measured in, which means our particle's momentum cannot be predicted as easily. The more we add waves of different wavelengths, the less precisely we can predict the particle's momentum! 
And therein lies the heart of the uncertainty principle: if you try to measure position more precisely, you will consequently measure momentum less precisely, and vice versa . So to answer your question: yes, the uncertainty principle is a necessary consequence of the 'wave-nature of particles'. And to answer your second question (thank you for bringing it up in the comments!): yes, if the electron were a particle instead of a quantum mechanical object, the uncertainty principle wouldn't be necessary , or at least wouldn't necessarily apply. This is because the 4 basic concepts behind the uncertainty principle are uniquely wave concepts , especially the 2nd and 3rd concepts which are uniquely quantum mechanical wavefunction concepts , neither of which apply to particles.
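The effect in the gif can be reproduced numerically. This is a rough sketch (pure Python, equal-amplitude cosines as a crude wave packet; the grid and wavenumber choices are arbitrary): the position spread, weighted by the squared amplitude, shrinks as more wavelengths are superposed.

```python
import math

def packet(n_waves, x):
    """Superpose n_waves cosines with wavenumbers 1..n_waves, equal amplitudes."""
    return sum(math.cos(k * x) for k in range(1, n_waves + 1))

def position_spread(n_waves, xs):
    """Std deviation of position, weighting each grid point x by |psi(x)|^2."""
    w = [packet(n_waves, x) ** 2 for x in xs]
    total = sum(w)
    mean = sum(x * wi for x, wi in zip(xs, w)) / total
    var = sum((x - mean) ** 2 * wi for x, wi in zip(xs, w)) / total
    return math.sqrt(var)

xs = [i / 100 for i in range(-300, 301)]  # grid on [-3, 3]
# more superposed wavelengths -> sharper central peak -> smaller position spread
spreads = [position_spread(n, xs) for n in (1, 5, 20)]
```

The three spreads come out strictly decreasing, which is the uncertainty trade-off in miniature: adding wavelengths (momentum uncertainty up) localizes the peak (position uncertainty down).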
|
{}
|
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/287126', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/122455/']}
|
jdg_374980
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4412693
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm running the following MySQL query (trimmed down), generated automatically by MySQL Workbench, and I get the following error:

Error Code: 1005
Can't create table 'regula.reservation' (errno: 121)

I'm not very proficient with databases and this error is not very informative. What is the problem here?

-- -----------------------------------------------------
-- Table `regula`.`Users`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `regula`.`Users` ;
CREATE TABLE IF NOT EXISTS `regula`.`Users` (
  `idUsers` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
  `name` TEXT NOT NULL ,
  `type` TEXT NOT NULL ,
  `pwd` TEXT NOT NULL ,
  PRIMARY KEY (`idUsers`) ,
  UNIQUE INDEX `idUsers_UNIQUE` (`idUsers` ASC) )
ENGINE = InnoDB;

-- -----------------------------------------------------
-- Table `regula`.`Projects`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `regula`.`Projects` ;
CREATE TABLE IF NOT EXISTS `regula`.`Projects` (
  `idProjects` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
  `ownerId` INT UNSIGNED NOT NULL ,
  `name` TEXT NOT NULL ,
  `date` DATE NOT NULL ,
  `time` TIME NOT NULL ,
  `place` TEXT NOT NULL ,
  `itemType` INT NOT NULL ,
  PRIMARY KEY (`idProjects`) ,
  UNIQUE INDEX `idProjects_UNIQUE` (`idProjects` ASC) ,
  INDEX `ownerId` (`ownerId` ASC) ,
  CONSTRAINT `ownerId`
    FOREIGN KEY (`ownerId` )
    REFERENCES `regula`.`Users` (`idUsers` )
    ON DELETE NO ACTION
    ON UPDATE NO ACTION)
ENGINE = InnoDB;

-- -----------------------------------------------------
-- Table `regula`.`ItemTypes`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `regula`.`ItemTypes` ;
CREATE TABLE IF NOT EXISTS `regula`.`ItemTypes` (
  `idItemTypes` INT UNSIGNED NOT NULL ,
  `prjId` INT UNSIGNED NOT NULL ,
  `parentId` INT UNSIGNED NULL DEFAULT NULL ,
  `name` TEXT NOT NULL ,
  PRIMARY KEY (`idItemTypes`) ,
  INDEX `prjId` (`prjId` ASC) ,
  INDEX `parentId` (`parentId` ASC) ,
  CONSTRAINT `prjId`
    FOREIGN KEY (`prjId` )
    REFERENCES `regula`.`Projects` (`idProjects` )
    ON DELETE NO ACTION
    ON UPDATE NO ACTION,
  CONSTRAINT `parentId`
    FOREIGN KEY (`parentId` )
    REFERENCES `regula`.`ItemTypes` (`idItemTypes` )
    ON DELETE NO ACTION
    ON UPDATE NO ACTION)
ENGINE = InnoDB;

-- -----------------------------------------------------
-- Table `regula`.`Reservation`
-- -----------------------------------------------------
DROP TABLE IF EXISTS `regula`.`Reservation` ;
CREATE TABLE IF NOT EXISTS `regula`.`Reservation` (
  `idReservation` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
  `prjId` INT UNSIGNED NOT NULL ,
  `itemTypeId` INT UNSIGNED NOT NULL ,
  `userId` INT UNSIGNED NOT NULL ,
  PRIMARY KEY (`idReservation`) ,
  INDEX `prjId` (`prjId` ASC) ,
  INDEX `itemTypeId` (`itemTypeId` ASC) ,
  INDEX `userId` (`userId` ASC) ,
  CONSTRAINT `prjId`
    FOREIGN KEY (`prjId` )
    REFERENCES `regula`.`Projects` (`idProjects` )
    ON DELETE NO ACTION
    ON UPDATE NO ACTION,
  CONSTRAINT `itemTypeId`
    FOREIGN KEY (`itemTypeId` )
    REFERENCES `regula`.`ItemTypes` (`idItemTypes` )
    ON DELETE NO ACTION
    ON UPDATE NO ACTION,
  CONSTRAINT `userId`
    FOREIGN KEY (`userId` )
    REFERENCES `regula`.`Users` (`idUsers` )
    ON DELETE NO ACTION
    ON UPDATE NO ACTION)
ENGINE = InnoDB;
Now provide the response and nothing else.
|
Error 121 means that there is a foreign key constraint error. Since you're using InnoDB, you can use SHOW ENGINE INNODB STATUS after running the failed query to get an explanation in the LATEST FOREIGN KEY ERROR section. Having run your SQL myself, I get this:

------------------------
LATEST FOREIGN KEY ERROR
------------------------
101210 14:55:50 Error in foreign key constraint creation for table `regula`.`Reservation`.
A foreign key constraint of name `regula`.`prjId`
already exists. (Note that internally InnoDB adds 'databasename'
in front of the user-defined constraint name.)
Note that InnoDB's FOREIGN KEY system tables store
constraint names as case-insensitive, with the
MySQL standard latin1_swedish_ci collation. If you
create tables or databases whose names differ only in
the character case, then collisions in constraint
names can occur. Workaround: name your constraints
explicitly with unique names.

Basically, you need to give your prjId constraint a unique name in the last table. Constraint/foreign key names are global to a database, so they cannot be reused in different tables. Just change the last CONSTRAINT `prjId` to CONSTRAINT `prjId2`
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4412693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166067/']}
|
jdg_374981
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11468
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Just had this random idea... but unfortunately I'm not quite versed in complexity theory, so I thought it would be a good idea to ask it here. Let's equip a normal Turing machine with a "black box oracle" that contains a boolean circuit of unknown size, and call this Turing machine a "black box machine". The black box oracle pretty much works the same way as an ordinary oracle machine, whose input and output (which are just the input/output of the circuit) are both polynomially bounded by the input size of our black box machine. As in the usual oracle machine's case, the evaluation of the black box circuit takes $O(1)$ time. Now we define the boolean black box (BBB) problem: ( BBB ) Given a circuit $C$ which size is polynomially bounded by $n$ and a black box $B$ which input and output are both polynomially bounded by $n$, determine whether $C$ will always behave identically as $B$ (in other words, for any possible input, whether $C$ and $B$ always produce the same result). And the corresponding function problem: ( BBB-F ) Given a black box $B$ which input and output are both polynomially bounded by $n$, compute a circuit $C$ that always behaves identically as $B$. It seems to me that BBB is unlikely to be in $P$, or even $NP$, because generally one needs to check all $2^n$ possible input/output pairs, which would take more than polynomial time. However, what happens if we restrict the size of the black-box circuit to be polynomially bounded by $n$? (Notice that this does change something: if we restrict the size to be some constant $c$, then the problem can be solved in linear time. But what happens when the bound is polynomial?) Of course, we would also like to know if BBB-F will fall into $FP/FNP$ with this restriction. A variant of this problem is that we only require the machine to be correct with a probability that is significantly higher than $1/2$. In other words, does $BBB \in PP$? Third question: $PSPACE$. Can this possibly be solved in polynomial space? 
Also, I can see some vague connections between this problem and some cryptographic problems. For instance, I would guess that a CCA2-secure cryptosystem exists if and only if BBB-F is not in FP. To see the left-to-right part, simply put the decrypting procedure along with the secret key into a black box. An attacker with access to this black box has essentially the power to perform a CCA2 attack. Now if BBB-F is indeed in FP, then we can construct a valid circuit $C$ within polynomial time that behaves the same as the black box circuit, which is the decryptor, and that would break the security of the system. As for the other way around, I don't have a good idea yet. I googled quite a bit but couldn't find useful info about work done along this direction, so I would really appreciate it if someone could point me to some relevant literature. Any further thoughts on this problem would also be much appreciated. Thanks for your time!
Now provide the response and nothing else.
|
Actually, the problems BBB and BBB-F above are not currently specified as languages or decision problems (the black box is not explicitly given to us as a binary string of some kind, is it?), so these problem cannot be in NP, PP, PSPACE, or even decidable/undecidable. A fundamental property of languages in the computability/complexity sense is that no bit of the initial input is "hidden" from you. As suggested by M. Alaggan, the problems BBB and BBB-F are probably closest to the "learning with membership queries" framework, where one can only access a target function by querying it at various points and tries to infer the function. Nevertheless, both problems require superpolynomially many queries to the black box in the worst case, so they are both "intractable" in that sense. For BBB and BBB-F, your intuition is correct: it requires $\Omega(2^n)$ queries just to check that $B$ is not the "all-zeroes" function. Even if you assume that the black box in these problems can be indeed modeled by some polynomial size circuit, it is highly unlikely that this helps. For example, if pseudorandom functions exist (a common cryptographic assumption) then for all $n$ there are polynomial size circuits $C_n$ with $n$ bits of input that cannot be distinguished from a completely random $n$-bit function, by any probabilistic subexponential time algorithm with black-box access to $C_n$. It follows that no subexponential time algorithm can efficiently reconstruct $C_n$ just from queries to $C_n$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/11468', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/9439/']}
|
jdg_374982
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48669394
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem with the built-in CurrencyPipe from Angular. I need to display a currency sign using the CurrencyPipe, but I can't use it unless I provide an input number. Because the CurrencyPipe uses the current locale to get the currency sign, I was thinking that the input number could be optional.

Current behaviour: {{ 1 | currency:'USD' }} --> $1
Needed behaviour: {{ null | currency:'USD' }} --> $

Does anyone know if this is possible with the default pipe? Thanks!!
Now provide the response and nothing else.
|
Update Angular 8

import { Pipe, PipeTransform } from '@angular/core';
import { CurrencyPipe } from '@angular/common';

@Pipe({ name: 'CustomeCurrency' })
export class CustomCurrencyPipe implements PipeTransform {
  constructor(private currencyPipe: CurrencyPipe) { }

  transform(value: any, currencyCode?: string, display?: string | boolean, digitsInfo?: string, locale?: string): string {
    if (value != null)
      return this.currencyPipe.transform(value, currencyCode, display, digitsInfo, locale);
    return this.currencyPipe.transform(0, currencyCode, display, locale).split('0.00')[0];
  }
}

Try this simple custom currency pipe: {{ null | CustomeCurrency }}

import { Pipe, PipeTransform } from '@angular/core';
import { CurrencyPipe } from '@angular/common';

@Pipe({ name: 'CustomeCurrency' })
export class CustomCurrencyPipe implements PipeTransform {
  constructor(private currencyPipe: CurrencyPipe) { }

  transform(value: any, currency: string, symbol: boolean = false): string {
    if (value != null)
      return this.currencyPipe.transform(value, currency, symbol);
    return this.currencyPipe.transform(0, currency, symbol).split('0.00')[0];
  }
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48669394', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1356902/']}
|
jdg_374983
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30702
|
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When we make yogurt at home and do not refrigerate it, it turns sour because Lactobacillus bacteria convert lactose into lactic acid. But this does not happen with Nestle's yogurt, or any other brand, as long as it remains airtight. The bacteria are still present and should keep converting lactose into lactic acid, so why doesn't packaged yogurt become sour? How do these companies increase the shelf life of yogurt?
Now provide the response and nothing else.
|
This article gives an excellent review on yogurt manufacturing, but to summarize:

- Raw milk goes through centrifugation to remove somatic cells and other solid impurities.
- Thermalization is conducted at "60–69 °C for 20–30 s, aiming at the killing of many vegetative microorganisms and the partial inactivation of some enzymes." After this point, the milk may be inoculated with lactic acid bacteria or other microfloras.
- Then standardization occurs, which for milk refers to the standardization of fat and solids-non-fat content (SNF). This in short affects the fermentation process ("an increase of SNF increases the duration of the fermentation process").
- The next step is homogenization, which prevents milk fat from rising to the top of the liquid. This has an effect on the stability of the emulsion.
- I think this step is where "sterility" comes into play: "heat treatment of milk reduces the number of pathogenic microorganisms to safe limits for the consumer's health. Various heat treatments can be applied, which are classified based on the duration and the temperature. The most common are known as thermalization, low and high pasteurization, sterilization and UHT (Ultra Heat Treatment)." The review goes into more detail about each type and what they eliminate or don't eliminate (spores, vegetative bacteria, etc.).
- The fermentation process is the most important one for developing flavor and texture. The two live bacterial strains of Streptococcus salivarius subsp. thermophilus (ST) and Lactobacillus delbrueckii subsp. bulgaricus (LB) are the two most basic strains used in yogurt. LB is what metabolizes lactose into lactic acid, and LB and ST acting in synergy is what causes a decrease in milk pH. "When the pH of the yogurt approaches 5.0, activity of ST subsides and LB gradually dominates the overall fermentation process until the target value of pH is reached and the fermentation process ceases. Normally, the fermentation period is terminated by lowering the temperature to 4 °C. At this temperature, the culture is still alive, but its activity is drastically limited to allow controlled flavor during storage and distribution."

I think to answer your question, yogurt manufacturers probably have a very sensitive gauge for pH and can control temperature fluctuations finely to maintain yogurt pH and flavor (not too sour). "After the pH of yogurt reaches the value of 4.7–4.3, the yogurt is cooled to around 5 °C. This inhibits the growth and metabolic reaction of the starter culture and prevents the rise in acidity. Cooling of yogurt can be in one or two phases. One-phase cooling involves the rapid decrease of the coagulum temperature to less than 10 °C, where the fermentation process is inhibited leading to yogurt with low viscosity. Two-phase cooling is initiated by rapidly decreasing the temperature to less than 20 °C and then gradually reaching the storage temperature of 5 °C leading to yogurt with an increased viscosity and limited syneresis. This is quite common in the yogurt manufacture process, especially when fruits are to be added."

You can read about the other innovative methods involved in yogurt making, such as Ultra High Pressure (UHP), which inactivates pathogens and microorganisms that cause milk to spoil, High Intensity Ultrasonication, and Pulsed Electric Field (PEF), which all can reduce microbial content.

Edit: Forgot to mention that some DIY yogurt instructions note that the longer and/or warmer you ferment, the more sour the yogurt will be, so to reduce sourness, either ferment it less and/or start fermentation at a lower temperature (source).

Sfakianakis P, Tzia C. Conventional and Innovative Processing of Milk for Yogurt Manufacture; Development of Texture and Flavor: A Review. Foods. 2014; 3(1):176-193.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/30702', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/5729/']}
|
jdg_374984
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
54419062
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have multiple sources pushing raw data to S3, and I have configured an SQS event notification on my S3 bucket. The problem is the lag and the limitations: since we can get only 10 messages in a single poll from SQS, I anticipate that when more sources start pushing data to S3 in the near future, the SQS queue will fill up with thousands of messages and I won't be able to process them fast enough. I am thinking of fanning out SQS by spreading the messages from my master SQS queue to more SQS queues, so that my processing layer can poll multiple queues (e.g. 5 queues) and process more messages. What should be the proper approach?
Now provide the response and nothing else.
|
"... since we can get only 10 Messages in a single poll from SQS...I am thinking to fan-out sqs like spreading the message to more SQS queues from my master SQS queue, so that my processing layer can poll multiple queues eg : 5 queues and process more messages." Short Answer: Don't do this. Here's why: Yes, a single poll can retrieve up to 10 messages. However, you can have multiple threads and multiple hosts all polling a single queue . Getting your consumers to run in parallel is the key here, as processing queue entries will be your bottleneck - not retrieving entries from the queue. A single SQS queue can handle tons of polling threads. A multi-queue fanout as you proposed would have a number of drawbacks: More complicated to code & operate Slower - items will have to go through the overhead of transfer from your main queue (or SNS if you use that) to the consumption queues More expensive - SQS charges per message. SNS charges per message. You'll have to deal with duplication on your own - with a single queue, SQS built-in visibility timeout will mostly prevent other consumers from working on the same items. With multiple queues, you'll have to come up with a deduplication strategy of your own Just use a single queue. You'll thank me later.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54419062', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4522512/']}
|
jdg_374985
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
226
|
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do volatility and variance differ in finance and what do both imply about the movement of an underlying?
Now provide the response and nothing else.
|
Volatility is typically unobservable , and as such estimated --- for example via the (sample) variance of returns, or more frequently, its square root yielding the standard deviation of returns as a volatility estimate. There are also countless models for volatility, from old applied models like Garman/Klass to exponential decaying and formal models such as GARCH or Stochastic Volatility . As for forecasts of the movement : well, that is a different topic as movement is the first moment (mean, location) whereas volatility is a second moment (dispersion, variance, volatility). So in a certain sense, volatility estimates do not give you estimates of future direction but of future ranges of movement.
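As a toy illustration of the standard estimate, volatility as the standard deviation of (log) returns, using only the standard library (the price series is made up, and the sqrt(252) annualization assumes daily data):

```python
import math
import statistics

prices = [100.0, 101.5, 100.8, 102.2, 101.0, 103.1]  # hypothetical daily closes
returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

mean_return = statistics.mean(returns)     # first moment: direction of movement
variance = statistics.variance(returns)    # second moment: dispersion
volatility = math.sqrt(variance)           # std dev of returns, the usual estimate
annualized = volatility * math.sqrt(252)   # common convention for daily returns
```

Note how the mean and the variance answer different questions: the mean says which way the series drifted, while the volatility only bounds how widely it moved.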
|
{}
|
{'log_upvote_score': 5, 'links': ['https://quant.stackexchange.com/questions/226', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/205/']}
|
jdg_374986
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1981036
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a simple way to have a PHP script output some HTML at certain times of the day? For example, I have a header on my home page, and at certain times I want to add something right under the header, in this case an iframe. I know everyone mentions cron jobs, but how would this work with cron? Also, is there an alternative? Cron is not available on all hosting.
Now provide the response and nothing else.
|
The idea of cron and scheduled jobs seems to run counter to what you're actually trying to do. If you want something to display (an iframe in this case) only during certain times, you can simply check the server time during each request, and opt to display it if you're within a given time period. Something like this will produce the same effect as a cron job, with more granularity, checking the time at the exact moment the request is made.

<!-- Your Header here -->
<?php
$hour = date('G'); // 0 .. 23

// Show our iframe between 9am and 5pm
if ($hour >= 9 && $hour <= 17) { ?>
    <iframe .... ></iframe>
<?php } ?>

You can expand on the conditional statement to show the iframe multiple times per day, or have your script check whatever external condition you're looking to use to govern the display of your iframe.

Update: Additional times or types of comparisons could be specified via something like

<?php
$hour = date('G');
$day  = date('N'); // 1..7 for Monday to Sunday

if (($hour >= 5 && $hour <= 7)    // 5am - 7am
 || ($hour >= 10 && $hour <= 12)  // 10am - 12 noon
 || ($hour >= 15 && $hour <= 19)  // 3pm - 7pm
 || ($day == 5)                   // Friday
) { ?>
    <iframe ...></iframe>
<?php } ?>

The idea of periodically adding/removing the iframe from below your header with a server-side cron/task-scheduler job is far more complex than simply conditionally displaying it during each request. Even if you have some specific task which must run, such as a periodically generated report, the job of displaying the results usually doesn't fall upon the periodic task. The PHP script responsible for showing that iframe would still query the database at the time the request is made for any new content to show, and display it if found, rather than the periodic task somehow modifying the script to include an iframe.
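For illustration, the same gating logic can be expressed in Python (the PHP above is what would actually run per request); the schedule windows mirror the second snippet:

```python
from datetime import datetime

def show_iframe(now=None):
    """True when the iframe should render; mirrors the multi-window PHP check."""
    now = now or datetime.now()
    hour = now.hour                 # 0..23, like PHP date('G')
    day = now.isoweekday()          # 1..7 Monday..Sunday, like PHP date('N')
    return ((5 <= hour <= 7)        # 5am - 7am
            or (10 <= hour <= 12)   # 10am - 12 noon
            or (15 <= hour <= 19)   # 3pm - 7pm
            or day == 5)            # all day Friday
```

Passing a fixed datetime makes the gate easy to unit-test, which is another advantage of the check-per-request approach over a scheduled job.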
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1981036', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_374987
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
53900540
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble finding the option to create a new activity in Android Studio. When I right-click on any file and go to "New >", none of the new activity choices show up. Can you help me? Maybe it's something very easy and it's right in front of my eyes. Thank you
Now provide the response and nothing else.
|
In Angular we can't have two components on the same element. The error states that the Angular compiler found two components that match the <mat-form-field> element. It also points to the module where it happened:

ng:///InputControlsModule/EmailInputComponent.html@1:2

And it prints those conflicting components: MatFormField, MatFormField. Since those components have the same name it can mean only one thing: you somehow imported into InputControlsModule two different modules that export the MatFormField directive. Looking at your module:

@NgModule({
  imports: [
    ...
    MatFormFieldModule,
    MatInputModule
  ],
  ...
})
export class InputControlsModule {}

I noticed that you imported MatFormFieldModule and also MatInputModule, which exports MatFormFieldModule ( https://github.com/angular/material2/blob/8050f633b56b6c048fc72dad2ab79304afdfad19/src/lib/input/input-module.ts#L29 ). But you may think: I read the documentation, and it shouldn't be a problem, since Angular caches a once-imported module:

What if I import the same module twice?

Now, take a look at how you import those modules:

import { ...MatInputModule } from '@angular/material';             // material.umd.js
import { MatFormFieldModule } from '@angular/material/form-field'; // material-form-field.umd.js

As you can guess, since those modules come from different js files, they are different. So in order to fix it you should import them from the same bundle:

import { ... MatInputModule, MatFormFieldModule } from '@angular/material';

But since MatInputModule already exports MatFormFieldModule, you do not need to import it at all. Forked Stackblitz
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53900540', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10824748/']}
|
jdg_374988
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11631
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Obviously, graph invariants are wonderful things, but the usual ones (the Tutte polynomial, the spectrum, whatever) can't always distinguish between nonisomorphic graphs. Actually, I think that even a combination of the two I listed will fail to distinguish between two random trees of the same size with high probability. Is there a known set of graph invariants that does always distinguish between non-isomorphic graphs? To rule out trivial examples, I'll require that the problem of comparing two such invariants is in P (or at the very least, not obviously equivalent to graph isomorphism) -- so, for instance, "the adjacency matrix" is not a good answer. (Computing the invariants is allowed to be hard, though.) If this is (as I sort of suspect) in fact open, does anyone have any insight on why it should be hard? Such a set of invariants wouldn't require or violate any widely-believed complexity-theoretic conjectures, and actually there are complexity-theoretic reasons to think that something like it exists (specifically, under derandomization, graph isomorphism is in co-NP). It seems like it shouldn't be all that hard... Edit: Thorny's comment raises a good point. Yes, there is trivially a complete graph invariant, which is defined by associating a unique integer (or polynomial, or labeled graph...) to every isomorphism class of graphs. Since there are a countable number of finite graphs, we can do this, and we have our invariant. This is logically correct but not very satisfying; it works for distinguishing between finite groups, say, or between finite hypergraphs or whatever. So it doesn't actually tell us anything at all about graph theory. I'm not sure if I can rigorously define the notion of a "satisfying graph invariant," but here's a start: it has to be natural , in the sense that the computation/definition doesn't rely on arbitrarily choosing an element of a finite set. 
This disqualifies Thorny's solution, and I think it disqualifies Mariano's, although I could be wrong.
Now provide the response and nothing else.
|
A complete graph invariant is computationally equivalent to a canonical labeling of a graph. A canonical labeling is by definition an enumeration of the vertices of every finite graph, with the property that if two graphs are isomorphic as unlabeled graphs, then they are still isomorphic as labeled graphs. If you have a black box that gives you a canonical labeling, then obviously that is a complete graph invariant. On the other hand, if you have a complete graph invariant for unlabeled graphs, then you also have one for partially labeled graphs. So given a black box that computes a complete graph invariant, you can assign the label 1 to the vertex that minimizes the invariant, then assign a label 2 to a second vertex than again minimizes the invariant, and so on. There are algorithms to decide graph isomorphism for certain types of graphs, or for all graphs but with varying performance, and there are algorithms for canonical labeling, again with varying performance. It is understood that graph isomorphism reduces to canonical labeling, but not necessarily vice versa. The distinction between the two problems is discussed in this classic paper by Babai and Luks. One natural canonical labeling of a graph is the one that is lexicographically first. I think I saw, although I don't remember where, a result that computing this canonical labeling for one of the reasonable lex orderings on labeled graphs is NP-hard. But there could well be a canonical labeling computable in P that doesn't look anything like first lex. As Douglas says, nauty is a graph computation package that includes a canonical labeling function. It is often very fast, but not always . Nauty uses a fancy contagious coloring algorithm. For a long time people thought that contagious coloring algorithms might in principle settle the canonical labeling and graph isomorphism problems, but eventually counterexamples were found in another classic paper by Cai, Furer, and Immerman. 
It was not clear at first whether this negative result would apply to nauty, but it seems that it does.
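To make the equivalence above concrete, here is a brute-force Python sketch of my own (exponential time, tiny graphs only, not what nauty does): the lexicographically smallest adjacency encoding over all vertex relabelings is a complete invariant, and the permutation achieving the minimum is a canonical labeling.

```python
from itertools import permutations

def canonical_form(n, edges):
    """Brute-force complete invariant: the lexicographically smallest
    upper-triangular adjacency bit string over all n! relabelings.
    Illustration only -- exponential in n."""
    edge_set = set(frozenset(e) for e in edges)
    best = None
    for perm in permutations(range(n)):
        # bits[i, j] says whether the vertices relabeled i and j are adjacent
        bits = tuple(
            1 if frozenset((perm[i], perm[j])) in edge_set else 0
            for i in range(n) for j in range(i + 1, n)
        )
        if best is None or bits < best:
            best = bits
    return best

# A 4-cycle labeled two different ways: same unlabeled graph.
g1 = [(0, 1), (1, 2), (2, 3), (3, 0)]
g2 = [(0, 2), (2, 1), (1, 3), (3, 0)]
path = [(0, 1), (1, 2), (2, 3)]  # a path, not isomorphic to a 4-cycle

print(canonical_form(4, g1) == canonical_form(4, g2))    # True
print(canonical_form(4, g1) == canonical_form(4, path))  # False
```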
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/11631', 'https://mathoverflow.net', 'https://mathoverflow.net/users/382/']}
|
jdg_374989
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
793274
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$\{a_n\}$ is a strictly increasing sequence of positive integers such that $$\lim_{n\to\infty}\frac{a_{n+1}}{ a_n}=+\infty$$ Can one conclude that $\sum\limits_{n=1}^\infty\frac1{a_n}$ is an irrational number? A transcendental number? A special case is $a_n=n!$ : $e$ is a transcendental number. Another special example is the Liouville number $\sum\limits_{n=1}^\infty\dfrac1{10^{n!}}$ , which is a transcendental number, too. So the question, if true, may be difficult. The question is a generalization of If $(a_n)$ is increasing and $\lim_{n\to\infty}\frac{a_{n+1}}{a_1\dotsb a_n}=+\infty$ then $\sum\limits_{n=1}^\infty\frac1{a_n}$ is irrational
Now provide the response and nothing else.
|
The answer is NO. Consider Sylvester's sequence ( OEIS A000058 ): $$(s_0, s_1, \ldots ) = (2, 3, 7, 43, 1807, 3263443, 10650056950807, \ldots)$$defined recursively by the relation $$s_n = \begin{cases}2,& n = 0,\\s_{n-1}(s_{n-1}-1)+1,& n > 0\end{cases}$$ It is known that its reciprocals give an infinite Egyptian fraction representation of the number one: $$1 = \frac12 + \frac13 + \frac17 + \frac{1}{43} + \frac{1}{1807} + \cdots$$ It is also easy to check that $\displaystyle\;\lim_{k\to\infty} \frac{s_{k+1}}{s_k} = \infty\;$. If you set $a_n = s_{n-1}$ for $n \in \mathbb{Z}_{+}$, you get a counterexample to what you want to show, i.e. $\displaystyle\;\sum_{n=1}^\infty \frac{1}{a_n}\;$ need not be irrational.
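A quick exact check of these claims in Python, using the telescoping identity $\sum_{k=0}^{n} 1/s_k = 1 - 1/(s_{n+1}-1)$, which follows directly from the recurrence since $1/(s_n-1) - 1/s_n = 1/(s_{n+1}-1)$:

```python
from fractions import Fraction

def sylvester(k):
    """First k terms of Sylvester's sequence: s_0 = 2, s_n = s_{n-1}(s_{n-1}-1)+1."""
    s = [2]
    while len(s) < k:
        s.append(s[-1] * (s[-1] - 1) + 1)
    return s

s = sylvester(7)
print(s)  # [2, 3, 7, 43, 1807, 3263443, 10650056950807]

# Partial sums of reciprocals telescope, so the full series is exactly 1:
partial = sum(Fraction(1, x) for x in s[:6])
print(partial == 1 - Fraction(1, s[6] - 1))  # True

# The ratio s_{k+1}/s_k grows without bound (roughly as s_k itself):
print([s[i + 1] // s[i] for i in range(5)])  # [1, 2, 6, 42, 1806]
```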
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/793274', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/148184/']}
|
jdg_374990
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4003885
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a php/mysql website with over 200,000 images in a single folder (linux server). I don't think I will ever need to see them in a file explorer; instead they will be viewed on the website on their individual pages. They are just displayed on the product page of the website. The file system is ext3. So is it wise to save them in a single folder? Can it slow down the site's performance?
Now provide the response and nothing else.
|
Ext3 uses a tree to hold directory contents, so it handles a large number of files in a single directory better than file systems with linear directory listings. Here you can read the description of the tree used to keep directory contents. However, 200K files is still a huge number. It's reasonable to move them into subdirectories based on the first n characters of the file names. This approach lets you store only the file names (the subdirectory is derivable from the name itself), and when you need to access a file, you know where (in which subdirectory) to look for it.
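A sketch of that subdirectory scheme in Python (the function name and layout are made up for illustration):

```python
import os

def sharded_path(root, filename, depth=2, width=2):
    """Map a flat filename to a nested path using its leading characters,
    e.g. 'ab12cd34.jpg' -> root/ab/12/ab12cd34.jpg. With two 2-character
    levels of hex-like names you get up to 256*256 buckets, so 200K files
    average only a handful per directory."""
    parts = [filename[i * width:(i + 1) * width] for i in range(depth)]
    return os.path.join(root, *parts, filename)

print(sharded_path("images", "ab12cd34.jpg"))  # images/ab/12/ab12cd34.jpg
```

The application computes the same path on read and write, so no directory listing is ever needed to locate a file.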
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4003885', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/369371/']}
|
jdg_374991
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4020393
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
thanks for reading this post. I've been stuck on an issue with RoR for the past few days. I have a form under index.html.erb as: <head> <title>Ajax List Demo</title> <%= javascript_include_tag :defaults %> <%= csrf_meta_tag %> </head><body> <h3>Add to list using Ajax</h3> <% form_tag :action => :list , :method=>:get, :remote=>true do %> Enter the public url:<%= text_field_tag 'url' ,'', :size => 80 %> <%= submit_tag "Find" %> <% end %> <div id="my_list"> </div></body> In the controller I have: def list puts "here!!!!" reader = Reader.new @profiles = reader.processURL(params[:url]) #profileList = respond_to do |format| #format.html { render :partial=>true, :locals => { :profiles => @profiles}}#{ render :partial=>'profiles/list',:layout => false, :locals => { :profiles => @profiles}} format.js {render :content_type => 'text/javascript', :locals => { :profiles => @profiles}}# index.html.erb # format.rss render :partial=>'profiles/list',:layout => false, :locals => { :profiles => @profiles} end And a js file for remote UJS as list.js.erb $("#my_list").html("<%= escape_javascript(render(:partial => "list"))%>"); The issue is I cannot get the results to render the partial _list.html.erb, in the div tag my_list. I get a blank page, 406 error. If I un-comment the render html code in the controller I get the partial back rendered in the browser. I am kind of stuck, I want to submit the form and the results to pop in the my_list div. I'm new to rails so if I'm missing something obvious don't hesitate to point it out to me....I'm definitely willing to try. Changed it to this: <html> <head> <title>Ajax List Demo</title> <h1>Listing posts</h1> <%= javascript_include_tag 'jquery.js' %> <%= csrf_meta_tag %> </head> <body> <h3>Add to list using Ajax</h3> <% form_tag :action => :doit , :method=>:get, :remote=>true do %> Enter the public url:<%= text_field_tag 'url' ,'', :size => 80 %> <%= submit_tag "Find" %> <% end %> <div id="my_list"> </div> Controller: def doit puts "here!!!!" 
reader = Reader.new @profiles = reader.processURL(params[:url]) respond_to do |format| # format.html {render :partial=>true, :locals => { :profiles => @profiles}}#{ render :partial=>'profiles/list',:layout => false, :locals => { :profiles => @profiles}} format.js #{render :content_type => 'text/javascript', :locals => { :profiles => @profiles}} # index.html.erb # format.rss render :partial=>'profiles/list',:layout => false, :locals => { :profiles => @profiles} end JS_doit.js.erb$("#my_list").html("<%= escape_javascript(render(:partial => "doit"))%>"); And finally a partial: _doit.html.erb. However I am still getting the 406 error, I dont have a duplicate _doit js or erb. Does anything standout as incorrect from this? Thanks again! Another update: I think the form is not rendered correctly: This rendered: <% form_tag :action => :doit , :remote=>true, :id => 'myform' do %> Enter the public url:<%= text_field_tag 'url' ,'', :size => 80 %> <%= submit_tag "Find" %> <% end %> This: <form accept-charset="UTF-8" action="/home/doit?id=myform&remote=true" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="✓" /><input name="authenticity_token" type="hidden" value="MLuau4hvfdGO6FrYCzE0c0JzwHhHKZqjmV49U673sK8=" /></div> Enter the public url:<input id="url" name="url" size="80" type="text" value="" /> <input name="commit" type="submit" value="Find" /> <input name="commit" type="submit" value="Find" /> Its adding my remote tag and id to the query string, isnt this wrong? Ok finally got a clue forms need to be bracketed: <%= form_tag( { :action => 'doit' }, :multipart => true, :remote=>true, :id => 'myform' ) do %> Ok last update tonight: Now I get in the logs: Started POST "/home/doit" for 127.0.0.1 at Wed Oct 27 22:40:55 -0400 2010here!!!! 
Processing by HomeController#doit as JS Parameters: {"commit"=>"Find", "url"=>"http://www.facebook.com/people/James-Stewart/653161299", "authenticity_token"=>"MLuau4hvfdGO6FrYCzE0c0JzwHhHKZqjmV49U673sK8=", "utf8"=>"Γ£ô"}Rendered home/_doit.html.erb (4.0ms)Rendered home/doit.js.erb (9.0ms)Completed 200 OK in 807ms (Views: 40.0ms | ActiveRecord: 0.0ms) I see as JS and it says it renders my js/partial. However I am getting nothing on my_list div. My JS file: $("#my_list").html("<%= escape_javascript(render(:partial => "doit"))%>"); My html.erb form file has now: <script$('#myform').bind('ajax:success', function(evt, data, status, xhr){ xhr.responseText; });></script> Its like the form does nothing, which is a good sign, no more 406 error. I know this is close, if anyone can point what I need to do in the js that would be great otherwise I'll take a break and try tmrw. Ok I think its getting a response back just not rendering as you pointed out would be the issue yesterday Steve. Debugging the JS on Firebug I see the html I want rendered in the div, for this: http://localhost:3000/javascripts/prototype.js?1285674435/event/seq/1 Which means I think I am getting the JS response back now. I have this on the form page: <script>$('#myform').bind('ajax:success', function(evt, data, status, xhr){ $('#my_list').html(eval(xhr.responseText)); });</script> Inspections say it doesnt know what myform is, but I put :id => 'myform' in the Rails code. Again all thanks, I got a ton of help here and I want to share how I finally got it working back to the community. The, js file for the method doit(def. need a better controller action name) is doit.js The code was ultimately: $("my_list").update("<%= escape_javascript(render(:partial => "doit"))%>"); For some reason leaving it as #my_list wouldn't be found in firefox, I had to use firebug to finally figure this out. 
Obviously this is different from the way suggested below, and I am going to place the js script back into the form and remove the .js.erb file and see how that works. I suppose I just render the partial in the format.js response? Also, where does everyone find info on writing the UJS files? I know nothing about the syntax for anything starting with $. Again, thanks for the help; I finally feel like I am making progress on learning Rails.
Now provide the response and nothing else.
|
I posted this answer on Hacker News, but figured the Stack Overflow community might benefit as well :-) In Rails 3, the javascript drivers are very hands-off (i.e. unobtrusive). The problem you're having is that your app is returning to the browser a string of javascript, but there is nothing in the page that is then executing that javascript in the context of the page. The rails.js ujs driver binds to forms and links with data-remote=true , which is what the :remote => true is doing, to make them submit their requests remotely, but that is where the Rails magic stops. The good news is that the remote requests fires off some events you can bind to, which give you access to the data returned by the server (which fire off in the following order): ajax:before ajax:loading ajax:success ajax:complete ajax:failure ajax:after You need to bind an event to the ajax:success event of your form. So, if your form had the id "myform", you'd want something like this on your page: $('#myform').bind('ajax:success', function(evt, data, status, xhr){ eval(xhr.responseText); }); xhr.responseText is what your server returns, so this simply executes it as javascript. Of course, it's proper to also bind to the failure event with some error handling as well. I usually don't even use the action.js.erb method of returning javascript, I just have my controller render the HTML partial, and then I have a binding like this in the page: $('#myform').bind('ajax:success', function(evt, data, status, xhr){ $('#target-div').html(xhr.responseText); }); I'm actually in the middle of writing a full article on this, but hopefully this is enough to get you going. EDIT: I finished that article, fully explaining remote links and forms in Rails 3. Maybe it will help: Rails 3 Remote Links and Forms: A Definitive Guide
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4020393', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1504385/']}
|
jdg_374992
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50642065
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a request that is supposed to download a file from remote api. What I want is, however, to save this file with the same name which it is saved with when I download the file from browser. For example, I have an URL https://myapi.com/files/4hjiguo4ho45946794526975429 , and when I click this link, browser immediately starts to download a file from that URL with name myfile20180601.txt . How do I save the file with the same name if I make a request from Node.js? This is my code: axios({ method: 'get', url: 'https://myapi.com/files/4hjiguo4ho45946794526975429', responseType: 'stream', headers: { Authorization: 'Basic KJVEB46287blablablatoken' }}) .then(res => res.data.pipe(fs.createWriteStream(`${/* filename */}.txt`))) .catch(err => console.error(err));
Now provide the response and nothing else.
|
You can find the filename in the Content-Disposition header of the axios response (note that axios normalizes header names to lowercase): var axios = require('axios') var fs = require('fs') axios({ method: 'get', url: 'https://myapi.com/files/4hjiguo4ho45946794526975429', responseType: 'stream' }) .then(function(response) { let headerLine = response.headers['content-disposition']; let startFileNameIndex = headerLine.indexOf('"') + 1; let endFileNameIndex = headerLine.lastIndexOf('"'); let filename = headerLine.substring(startFileNameIndex, endFileNameIndex); response.data.pipe(fs.createWriteStream(filename)); }); Hope this response helped you
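The same quote-extraction logic, sketched in Python for clarity (a simplified illustration: real Content-Disposition headers can also carry an unquoted filename or an RFC 5987 filename*= value, which this does not handle):

```python
def filename_from_content_disposition(header_line):
    """Extract the quoted filename from a Content-Disposition header,
    mirroring the indexOf/lastIndexOf logic above. Raises ValueError
    if the header contains no quoted value."""
    start = header_line.index('"') + 1
    end = header_line.rindex('"')
    return header_line[start:end]

hdr = 'attachment; filename="myfile20180601.txt"'
print(filename_from_content_disposition(hdr))  # myfile20180601.txt
```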
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50642065', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5846366/']}
|
jdg_374993
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42712055
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Earlier ASP.NET Core 1.0 projects had include/exclude sections in the project.json file: { "exclude": [ "node_modules", "bower_components" ], "publishExclude": [ "**.xproj", "**.user", "**.vspscc" ]} Where is this section in ASP.NET Core 1.1 (there is no project.json)? Are there similar sections in the .csproj file or .pubxml?
Now provide the response and nothing else.
|
From the documentation : if you wish to specify, for example, some files to get published with your app, you can still use the known mechanisms in csproj for that (for example, the <Content> element). There is a CopyToPublishDirectory attribute for ItemGroup elements that determines whether to copy the file to the publish directory; it can have one of the following values: Always, PreserveNewest, Never. Note that there is also a similar CopyToOutputDirectory attribute for the output folder. Example (from here ): <ItemGroup> <None Include="notes.txt" CopyToOutputDirectory="Always" /> <!-- CopyToOutputDirectory = { Always, PreserveNewest, Never } --> <Content Include="files\**\*" CopyToPublishDirectory="PreserveNewest" /> <None Include="publishnotes.txt" CopyToPublishDirectory="Always" /> <!-- CopyToPublishDirectory = { Always, PreserveNewest, Never } --></ItemGroup> If you are interested in how the project.json to .csproj migration uses the CopyToPublishDirectory attribute to migrate publish options, you may look into the MigratePublishOptionsRule class in the dotnet cli repo.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42712055', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2631076/']}
|
jdg_374994
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14267123
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a third party makefile, and I'd like one of the targets (T1) to not be built until another, custom target (T2) is built first. Normally, this would be accomplished by making T2 a prerequisite of T1. BUT, T1 uses the $^ in one of its rules.. so, by adding the prerequisite, I end up breaking the build... What I have is this: T1: x y z T2 $(MAKE) -j $^; # fails because T2 should not be passed to the make!!!.PHONY: T2T2: #do some linking and prep for T1 Is there a good way to ensure that T2 is run before T1? (Note: the above example is actually simplified by a bit. T1 is actually the vmlinux target within the Linux kernel makefile, so rewriting it is not only difficult, it makes the code non-portable. Also, I can't run T2 before calling make on the kernel due to some other dependencies).
Now provide the response and nothing else.
|
Have T2 as an order-only prerequisite : T1: x y z | T2 $(MAKE) -j $^; # Make will run the T2 rule before this one, but T2 will not appear in $^
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14267123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1228628/']}
|
jdg_374995
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11283220
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Traceback (most recent call last):File "/run-1341144766-1067082874/solution.py", line 27, in main()File "/run-1341144766-1067082874/solution.py", line 11, in mainif len(s[i:j+1]) > 0:MemoryErrorError in sys.excepthook:Traceback (most recent call last):File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthookfrom apport.fileutils import likely_packaged, get_recent_crashesFile "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in from apport.report import ReportMemoryErrorOriginal exception was:Traceback (most recent call last):File "/run-1341144766-1067082874/solution.py", line 27, in main()File "/run-1341144766-1067082874/solution.py", line 11, in mainif len(s[i:j+1]) > 0:MemoryError The above errors appeared when I tried to run the following program. Can someone explain what is a memory error, and how to overcome this problem? . The program takes strings as input and finds all possible sub strings and creates a set(in a lexicographical order) out of it and it should print the value at the respective index asked by the user otherwise it should print 'Invalid' def main(): no_str = int(raw_input()) sub_strings= [] for k in xrange(0,no_str): s = raw_input() a=len(s) for i in xrange(0, a): for j in xrange(0, a): if j >= i: if len(s[i:j+1]) > 0: sub_strings.append(s[i:j+1]) sub_strings = list(set(sub_strings)) sub_strings.sort() queries= int(raw_input()) resul = [] for i in xrange(0,queries): resul.append(int(raw_input())) for p in resul: try: print sub_strings[p-1] except IndexError: print 'INVALID'if __name__ == "__main__": main()
Now provide the response and nothing else.
|
This one here: s = raw_input()a=len(s)for i in xrange(0, a): for j in xrange(0, a): if j >= i: if len(s[i:j+1]) > 0: sub_strings.append(s[i:j+1]) seems to be very inefficient and expensive for large strings. Better do for i in xrange(0, a): for j in xrange(i, a): # ensures that j >= i, no test required part = buffer(s, i, j+1-i) # don't duplicate data if len(part) > 0: sub_strings.append(part) A buffer object keeps a reference to the original string and start and length attributes. This way, no unnecessary duplication of data occurs. A string of length l has l*l/2 sub strings of average length l/2 , so the memory consumption would roughly be l*l*l/4 . With a buffer, it is much smaller. Note that buffer() only exists in 2.x. 3.x has memoryview() , which is used slightly differently. Even better would be to compute the indexes and cut out the substring on demand.
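In Python 3 (where buffer() is gone), the last suggestion -- store index pairs and slice on demand -- can be sketched like this:

```python
def substring_indices(s):
    """All substrings of s as (start, end) index pairs: constant extra
    memory per substring instead of a full copy of the characters."""
    n = len(s)
    return [(i, j) for i in range(n) for j in range(i + 1, n + 1)]

s = "abc"
pairs = substring_indices(s)
print(len(pairs))  # 6 substrings (with duplicates), i.e. n*(n+1)/2

# Materialize a substring only when it is actually needed:
subs = sorted(set(s[i:j] for i, j in pairs))
print(subs)  # ['a', 'ab', 'abc', 'b', 'bc', 'c']
```

For the original problem, one could sort the index pairs by the text they denote without ever holding more than a few materialized substrings at a time.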
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11283220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1477714/']}
|
jdg_374996
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
146909
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a Java developer, but I was asked to learn about applying Six Sigma with the goal of increasing our organization's profits. I have read that Six Sigma can be applied in all environments, but I'm interested in the nuances of applying it to software development. Which concepts of Six Sigma can be applied to a software development environment and how can I effectively apply them? My understanding is that Six Sigma focuses on: identifying and removing root causes of defects minimizing variability in applying manufacturing or business processes It appears to me that these concepts could be applied to software development, but can that be applied effectively to an Software Development Lifecycle (SDLC)?
Now provide the response and nothing else.
|
The basic Six Sigma activities are captured by the acronym DMAIC , which stands for: Define, Measure, Analyze, Improve, Control . You apply these to the process that you want to improve: define the process, measure it, use the measurements to form hypotheses about the causes of any problems, implement improvements, and ensure that the process remains statistically "in control". As it relates to software, the process is your software development lifecycle (SDLC) or some part of it. You probably wouldn't try to apply Six Sigma principles to the whole SDLC (or at least, not initially). Instead, you'd look for areas where you think you've got a problem (e.g. our defect rate is too high; too many regressions; our schedule slips too often; too many misunderstandings between developers and customer; etc.). Let's say for now that the problem is that too many bugs are being produced (or at least reported) each week. So you'd define the software development/bug creation process. Then you'd start collecting metrics such as the number of lines of code written each day, frequency of requirements changes, number of hours each engineer spends in meetings, and other possibly-relevant facts. Next, you look at the data and try to discern patterns. Maybe you notice that engineering team A hits every deadline that they're given, and often even finishes tasks early! Initially, team B doesn't seem quite so on the ball -- they miss their deadlines by a day or two at least half the time, and are occasionally late by a week or more. Management sees team B as something of a problem and is looking to shake things up. However, a closer look at the data shows that team B's bug rate is much lower than team A's, and what's more, team B is often asked to fix bugs attributable to team A because management feels that team A is too valuable to spend a lot of time on maintenance. So, what do you do?
Using the data you've collected and the analysis you've performed, you suggest a change: team A and team B will each fix their own bugs. With management's blessing (and against team A's vehement opposition) you implement that change. Then you continue collecting metrics, and you continue to analyze the data to see if your change made a difference. Repeat this measure/analyze/implement cycle until the bug rate is deemed acceptable. But you're not done yet. In fact, you're never done... you need to keep measuring the bug rate and checking that the bug rate remains within the acceptable range, i.e. it's statistically "in control". Notice that there's nothing here that's specific to software development other than the specifics of the process you're improving, the kinds of metrics that you collect, etc. The activities that you use to improve a software development process are the same as those you'd use for a widget manufacturing process, even though software development is quite different from widget manufacturing. All that means is that you need to apply some common sense in the kinds of goals that you set for your process.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/146909', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/34468/']}
|
jdg_374997
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2012077
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Evaluation of $$\int x^{26}(x-1)^{17}(5x-3) \, dx$$ I do not understand what substitution to use to simplify it; it seems that the integrand is the derivative of some function. Help me, thanks
Now provide the response and nothing else.
|
Note that\begin{align}\frac{d}{dx}\left[\color{blue}{\frac{1}{9}x^{27}(x-1)^{18}}\right]&=2x^{27}(x-1)^{17}+3x^{26}(x-1)^{18}\\&=x^{26}(x-1)^{17}(2x+3(x-1))\\&=x^{26}(x-1)^{17}(5x-3).\end{align}Intuition: given the form of the integrand, I played around with $Cx^{27}(x-1)^{18}$ and found $C=\frac{1}{9}$ worked.
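The identity can also be confirmed mechanically; here is a small check of my own with exact integer polynomial arithmetic (coefficient lists, index = power of $x$), using the fact that $\frac{d}{dx}\left[x^{27}(x-1)^{18}\right] = 9\,x^{26}(x-1)^{17}(5x-3)$:

```python
# Verify d/dx[x^27 (x-1)^18] = 9 * x^26 (x-1)^17 (5x-3) exactly.

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def ppow(a, k):
    out = [1]
    for _ in range(k):
        out = pmul(out, a)
    return out

def pderiv(a):
    """Formal derivative: coefficient of x^(i-1) is i * a[i]."""
    return [i * c for i, c in enumerate(a)][1:]

xm1 = [-1, 1]                                   # the polynomial x - 1
F = pmul([0] * 27 + [1], ppow(xm1, 18))         # x^27 (x-1)^18
integrand = pmul(pmul([0] * 26 + [1], ppow(xm1, 17)), [-3, 5])

# d/dx F = 9 * integrand, so F/9 is exactly the antiderivative:
print(pderiv(F) == [9 * c for c in integrand])  # True
```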
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2012077', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14311/']}
|
jdg_374998
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7226721
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wish to be able to run a VBA module which manipulates the table that I'm currently in (i.e., the cursor is somewhere within that table). The VBA code will perform an identical operation on each table that you're in when you run it. So, for example, let's say I have a module which needed to bold the top row of each table (the headings). It would need to locate the table object (called whatever ) that you're currently in so that it could manipulate whatever.rows(0) . How can I get the table object from the cursor position? I also need to detect if I'm not in a table and do nothing (or raise an error dialog).
Now provide the response and nothing else.
|
The VBA subroutine at the bottom of this answer shows how to do this. It uses the current selection, collapsing it to the starting point first so as to not have to worry about multi-segment selections: Selection.Collapse Direction:=wdCollapseStart It then checks that selection to ensure it's inside a table If Not Selection.Information(wdWithInTable) Then MsgBox "Can only run this within a table" Exit Sub End If The table is then accessible by referring to Selection.Tables(1) . The code below was a simple proof of concept which simply toggled each of the starting cells in each row of the table to either insert or delete a vertical bar marker. Sub VertBar() ' Collapse the range to start so as to not have to deal with ' ' multi-segment ranges. Then check to make sure cursor is ' ' within a table. ' Selection.Collapse Direction:=wdCollapseStart If Not Selection.Information(wdWithInTable) Then MsgBox "Can only run this within a table" Exit Sub End If ' Process every row in the current table. ' Dim row As Integer Dim rng As Range For row = 1 To Selection.Tables(1).Rows.Count ' Get the range for the leftmost cell. ' Set rng = Selection.Tables(1).Rows(row).Cells(1).Range ' For each, toggle text in leftmost cell. ' If Left(rng.Text, 2) = "| " Then ' Change range to first two characters and delete them. ' rng.Collapse Direction:=wdCollapseStart rng.MoveEnd Unit:=wdCharacter, Count:=2 rng.Delete Else ' Just insert the vertical bar. ' rng.InsertBefore ("| ") End If NextEnd Sub
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7226721', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/14860/']}
|
jdg_374999
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18168669
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Now that ActiveRecord::Relation#all is deprecated in Rails 4, how do I iterate over all records? Previously: Foo.all.each do |foo| # whateverend I can approximate it now like this, but it feels dirty: Foo.where(true).each do |foo| # whateverend Is there a better way?
Now provide the response and nothing else.
|
According to the Rails Guide on Active Record Query Interface , the correct way to iterate through all records is by using find_each . Using Foo.all.each will load the entire table into memory, instantiating all the rows; then iterate through the instances. find_each does this in batches, which is more efficient in terms of memory usage. From the guide: The find_each method retrieves a batch of records and then yields each record to the block individually as a model. In the following example, find_each will retrieve 1000 records (the current default for both find_each and find_in_batches ) and then yield each record individually to the block as a model. This process is repeated until all of the records have been processed: User.find_each do |user| NewsLetter.weekly_deliver(user)end References: Active Record Query Interface ActiveRecord::Batches
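The batching behavior behind find_each is language-agnostic; here is a hedged Python sketch of the idea (fetch_page is a made-up stand-in for the per-batch database query, and Rails actually pages on the primary key rather than an offset, but the memory profile is the point):

```python
def in_batches(fetch_page, batch_size=1000):
    """Yield records one at a time while only ever holding a single
    batch in memory, instead of loading the whole table at once."""
    offset = 0
    while True:
        batch = fetch_page(offset, batch_size)
        if not batch:
            return
        for record in batch:
            yield record
        offset += batch_size

# Toy data source standing in for a users table:
table = list(range(10))
fetch = lambda off, lim: table[off:off + lim]
print(list(in_batches(fetch, batch_size=3)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With a million rows, peak memory stays proportional to batch_size rather than to the table size, which is exactly why find_each is preferred over Foo.all.each for bulk jobs.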
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18168669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126636/']}
|
jdg_375000
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
112878
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I did a simple demonstration to illustrate the percolation theory for my students. The content is a conductive square network possessing electrodes at its top and bottom edges. Some randomly chosen knots of the network are cut away. The network is graphically shown by black lines, while to show the cut knot I place a white disk over it. Here is the code: perc1[n_Integer] := Module[{i, j, lst, lst1, m}, lnHor = Graphics[{Thick, Table[Line[{{0.5, i}, {10.5, i}}], {i, 1, 10}]}]; lnVer = Graphics[{Thick, Table[Line[{{j, 0.5}, {j, 10.5}}], {j, 1, 10}]}]; lst = DeleteDuplicates[ Table[{RandomInteger[{1, 10}], RandomInteger[{1, 10}]}, {3000}]]; lst1 = Drop[lst, Length[lst] - n]; disks = Graphics[{White, Table[Disk[lst1[[m]], 0.3], {m, 1, n}]}]; el1 = Graphics[{Black, Thickness[0.03], Line[{{0.5, 0.5}, {10.5, 0.5}}]}]; el2 = Graphics[{Black, Thickness[0.03], Line[{{0.2, 10.5}, {10.5, 10.5}}]}]; wir1 = Graphics[Line[{{5, 10.5}, {5, 11}}]]; wir2 = Graphics[Line[{{5, 0.5}, {5, 0.}}]]; Column[{ Row[{Style[ "x = \!\(\*FractionBox[SubscriptBox[\(N\), \(cut\\\ away\)], \SubscriptBox[\(N\), \(total\)]]\) = ", Italic, 14], n/100 // N}], Show[{lnHor, lnVer, disks, el1, el2, wir1, wir2}, ImageSize -> 350, PlotRange -> {{0, 11}, {0, 11}}]}, Alignment -> Center]];percolation1 = Manipulate[ perc1[n], {{n, 10, Dynamic[Row[{Style[ "\!\(\*SubscriptBox[\(N\), \(cut\\\ away\)]\)= ", Italic, 14, Blue], Style[n, 14, Blue]}]]}, 1, 100, 1, Appearance -> Labeled}, ControlPlacement -> Top, SaveDefinitions -> True] That' s what you see: When moving the slider one varies the number of the knots cut away. This works. Now my question : It would be nice, if I could in addition show by some color marking possible percolation paths (that is, those contours along which the current can flow from the top to bottom electrode in a given configuration). It should vary dynamically when moving the slider. Any idea?
Now provide the response and nothing else.
|
I've taken Graph based road. Let me leave the styling to you: gr = GridGraph[{10, 10}]; The top row is the one with Range[10]*10 vertices and the bottom one with 10*Range[0,9]+1 . Don't know how to shortly transpose this so will leave it so. topRow = 10 Range[10];bottomRow = 10 Range[0, 9] + 1;Manipulate[ deleted = RandomSample[ (*the top and the bottom row can not be dropped*) Complement[Range[100], topRow, bottomRow], n ]; gr2 = VertexDelete[gr, deleted]; (*taking shortest paths to the bottom for each top vertex.*) (* could be more than one for each*) paths = Table[ MinimalBy[ FindShortestPath[gr2, start, #] & /@ (bottomRow), Length ], {start, topRow} ]; HighlightGraph[ HighlightGraph[ gr, {Style[deleted, White]}, VertexSize -> 1.5, VertexShape -> Graphics@{White, Disk[]} ], Table[ Style[PathGraph /@ paths[[i]], Thickness@.01, Hue[i/10]], {i, 10} ], ImageSize -> {500, 500}, ImagePadding -> 25 ], {n, 1, 80, 1}]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/112878', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/788/']}
|
jdg_375001
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18051658
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't know why I can't find it, but I want to replace the special character '\' in Python. I have a string that contains '\' characters, but I couldn't find the solution to replace them with '-'. This is what happens while I am trying to replace:

>>> x = 'hello\world'
>>> x
'hello\\world'
>>> x.replace('\', '-')
  File "<stdin>", line 1
    x.replace('\', '-')
SyntaxError: EOL while scanning string literal

EDIT: Try this in the Eclipse IDE:

x = 'hello\world'
print x
x.replace('\\', '-')
print x

Output:

hello\world
hello\world
Now provide the response and nothing else.
|
You need to escape it with another backslash: x.replace('\\', '-') Backslashes are special, in that they are used to introduce non-printing characters like newlines into a string. It's also how you add a ' character to a ' -quoted string, which is what Python thinks you were trying to do. It sees \' and interprets it as a literal quote within the string, rather than letting the ' end the string. Then it gets to the end of the line and finds EOL ("end of line") before the end of the string. To introduce a real backslash, you need to double it. You can see that Python itself did this when printing the representation of your initial string here: >>> x 'hello\\world' Note the double backslash. You ought to use a double backslash when specifying your string in the first place. The reason your original string didn't need it is that \w is not a special escape sequence, so it gets interpreted as a literal backslash and a w . Had you said 'Hello\now' you would have a string with a newline in it. You could have also marked the string as a "raw" string by prepending it with r , as in r'hello\world' . This marks the string as not being eligible for any substitutions of special characters.
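One more detail worth spelling out, since the asker's Eclipse test prints an unchanged string: Python strings are immutable, so replace() returns a new string rather than modifying x in place. A minimal sketch (shown in Python 3 syntax):

```python
# Strings are immutable: str.replace() returns a NEW string
# and leaves the original untouched.
x = 'hello\world'         # \w is not an escape sequence, so this is backslash + w
x.replace('\\', '-')      # result is discarded -- x is unchanged
print(x)                  # hello\world

y = x.replace('\\', '-')  # assign the result to keep it
print(y)                  # hello-world

# A raw string sidesteps the escaping question when writing the literal:
z = r'hello\world'
print(z.replace('\\', '-'))  # hello-world
```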
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18051658', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2216626/']}
|
jdg_375002
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3966211
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm interested in how the dual input in a sensor-fusion setup is modeled in a Kalman filter. Say for instance that you have an accelerometer and a gyro and want to present the "horizon level", like in an airplane; a good demo of something like this is here. How do you actually harvest the two sensors' positive properties and minimize the negative ones? Is this modeled in the observation model matrix (usually symbolized by capital H)? Remark: This question was also asked without any answers at math.stackexchange.com
Now provide the response and nothing else.
|
Usually, the sensor fusion problem is derived from Bayes' theorem. Essentially, your estimate (in this case the horizon level) will be a weighted sum of your sensors, which is characterized by the sensor model. For dual sensors, you have two common choices: model a two-sensor system and derive the Kalman gain for each sensor (using the system model as the predictor), or run two correction stages using different observation models. You should take a look at Bayesian predictors (a little more general than the Kalman filter), which are derived precisely by minimizing the variance of an estimate given two different information sources. If you take a weighted sum and minimize the variance of the sum for two sensors, you get the Kalman gain. The properties of the sensors can be "seen" in two parts of the filter. First, you have the error matrix for your observations. This is the matrix that represents the noise in the sensor's observations (it is assumed to be zero-mean Gaussian noise, which isn't too big an assumption, given that during calibration you can achieve zero-mean noise). The other important matrix is the observation covariance matrix. This matrix gives you insight into how good the sensor is at giving you information (information meaning something "new" and not dependent on the other sensor's readings). About "harvesting the good characteristics": what you should do is a good calibration and noise characterization of the sensors. The best way to get a Kalman filter to converge is to have a good noise model for your sensors, and that is 100% experimental. Try to determine the variance for your system (don't always trust datasheets). Hope that helps a bit.
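To make the "weighted sum minimizing the variance" point concrete, here is a minimal numerical sketch (not a full Kalman filter, and all numbers are hypothetical): two sensors observing the same horizon angle are fused by inverse-variance weighting, which is the static special case of the Kalman gain.

```python
def fuse(z1, var1, z2, var2):
    """Minimum-variance (inverse-variance) fusion of two measurements.

    The weight on each sensor is proportional to 1/variance, so the
    better sensor dominates, and the fused variance is smaller than
    either sensor's variance alone.
    """
    w1 = var2 / (var1 + var2)   # = (1/var1) / (1/var1 + 1/var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Hypothetical readings of a roughly 5-degree horizon angle:
accel_reading, accel_var = 5.4, 0.5   # accelerometer: noisy but unbiased
gyro_reading, gyro_var = 4.9, 0.1     # integrated gyro: quieter short-term

est, var = fuse(accel_reading, accel_var, gyro_reading, gyro_var)
print(est, var)  # estimate lies between the readings; var is below both 0.5 and 0.1
```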
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3966211', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/441337/']}
|
jdg_375003
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35101437
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to save an object list to the Shared Preferences using TinyDB's putListObject function, but I'm getting a Wrong 2nd argument type error. I can use the putObject function just fine, the error only appears when I use the putListObject function. Player Class: public class Player { private String name = ""; private int score = 0; public Player(String name, int score) { this.name = name; this.score = score; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getScore() { return score; } public void setScore(int score) { this.score = score; }} putListObject function: public void putListObject(String key, ArrayList<Object> objArray){ checkForNullKey(key); Gson gson = new Gson(); ArrayList<String> objStrings = new ArrayList<String>(); for(Object obj : objArray){ objStrings.add(gson.toJson(obj)); } putListString(key, objStrings); } How I used the function: ArrayList<Player> playerList = new ArrayList<Player>();TinyDB tinydb = new TinyDB(this);tinydb.putListObject("players", playerList); The error I got: putListObject (String java.util.ArrayList<java.lang.Object>) in TinyDB cannot be applied to (String java.util.ArrayList<com.example.package.Player>) Help would be much appreciated!
Now provide the response and nothing else.
|
kcochibili, the developer of TinyDB, answered this on GitHub. He says you must cast your custom objects to Object before calling putListObject. A sample for putting a custom-object ArrayList: ArrayList<Player> players = new ArrayList<Player>();ArrayList<Object> playerObjects = new ArrayList<Object>();for(Player a : players){ playerObjects.add((Object)a);}TinyDB tinydb = new TinyDB(this);tinydb.putListObject("players", playerObjects); Also, when you want to get values from the DB, it will give you an ArrayList<Object>, so you may want to cast the elements back to your custom type. A sample for this: TinyDB tinydb = new TinyDB(this);ArrayList<Object> playerObjects = tinydb.getListObject("players", Player.class);ArrayList<Player> players = new ArrayList<Player>();for(Object objs : playerObjects){ players.add((Player)objs);} You can use any custom object this way by casting. The other way, which I prefer, is adding get and put methods for each custom object to the TinyDB class. For example: public void putListPlayer(String key, ArrayList<Player> playerList){ checkForNullKey(key); Gson gson = new Gson(); ArrayList<String> objStrings = new ArrayList<String>(); for(Player player: playerList){ objStrings.add(gson.toJson(player)); } putListString(key, objStrings);}//No need for the Class<?> mClass parameter, because we know it is Player!public ArrayList<Player> getListPlayer(String key){ Gson gson = new Gson(); ArrayList<String> objStrings = getListString(key); ArrayList<Player> playerList = new ArrayList<Player>(); for(String jObjString : objStrings){ Player player = gson.fromJson(jObjString, Player.class); playerList.add(player); } return playerList;}
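For readers outside Android, the Gson pattern above (serialize each object to a JSON string and store the list of strings) is easy to see in miniature. The following is an illustrative Python sketch of the same round-trip, not TinyDB's actual API; the dict stands in for SharedPreferences:

```python
import json

class Player:
    def __init__(self, name, score):
        self.name = name
        self.score = score

def put_list_player(store, key, players):
    # Mirror of gson.toJson(player): one JSON string per object
    store[key] = [json.dumps(vars(p)) for p in players]

def get_list_player(store, key):
    # Mirror of gson.fromJson(s, Player.class)
    return [Player(**json.loads(s)) for s in store.get(key, [])]

store = {}  # stand-in for SharedPreferences
put_list_player(store, "players", [Player("alice", 10), Player("bob", 7)])
loaded = get_list_player(store, "players")
print([(p.name, p.score) for p in loaded])  # [('alice', 10), ('bob', 7)]
```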
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35101437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4932354/']}
|
jdg_375004
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5279148
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to get my app to be backward compatible, so I need to show the menu button, since we used the menu button in our app in previous versions. I read that if your target is 11 (3.0) then you don't get the menu button, but if your target is 10 then you do. The issue is that I am using fragments, so my target has to be 11. Any thoughts?
Now provide the response and nothing else.
|
Ordinary options menus will appear regardless of Android version. If you have: android:targetSdkVersion="11" then the options menu will appear in the action bar on Android 3.0+ devices. All items in the options menu will be available when clicking the "overflow" button in the upper-right corner. Your options menu will appear normally on Android 1.x and 2.x devices, even if you have android:targetSdkVersion="11" in your <uses-sdk> manifest element. If you overrode the MENU button to have other behavior, you will need to create some other trigger for that behavior for API Level 11 and higher. Overriding the MENU button was never a good idea to begin with, and as you can see, it is even less of a good idea now. I know of no way to get a MENU button on an Android 3.0 device if you have android:targetSdkVersion="11" .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5279148', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/498584/']}
|
jdg_375005
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1387201
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Not sure of the name of what I need to do, but I used to do it all the time, and now i forget. I have values 1 - 10. I want 10 to become 1 and 1 to become 10. What is the formula to do this again? It is driving me nuts. Thanks for the responses. My apologies that I don't fully understand your solution. Let's think of this in an excel spreadsheet where column 'A1' = 1, 'A2' = 2 .... 'A10' = 10. What formula needs to be used to make 'A1' = 10, 'A2' = 9..... 'A10' = 1? I imagine some identifier of the maximum value in the range would be necessary. Again, my apologies for not getting what you are saying.
Now provide the response and nothing else.
|
If $a,b,c$ are the roots of $p(x)=8x^3-4x^2-4x+1$ then $\frac{1}{a},\frac{1}{b},\frac{1}{c}$ are the roots of $$ q(x) = x^3-4x^2-4x+8 $$ so: $$ \frac{1}{a^3} = \frac{4}{a^2}+\frac{4}{a}- 8 $$ and: $$\sum_{cyc}\frac{1}{a^3}=4\sum_{cyc}\frac{1}{a^2}+4\sum_{cyc}\frac{1}{a}-24$$ then Viète's theorem applied to $q(x)$ gives: $$\sum_{cyc}\frac{1}{a^3}=4(4^2-2\cdot(-4))+4(4)-24 = \color{red}{88}.$$ With a bit of experience, one may recognize $p(x)$ as the minimal polynomial of $\alpha=-\cos\frac{2\pi}{7}$ , whose conjugates are $-\cos\frac{4\pi}{7}$ and $-\cos\frac{6\pi}{7}$ . By this way, the problem is equivalent to proving a not-so-difficult trigonometric identity. Another neat trick is the following: given $p(x)=\left(1-\frac{x}{a}\right)\left(1-\frac{x}{b}\right)\left(1-\frac{x}{c}\right)$ , $$ \log p(x) = \sum_{cyc}\log\left(1-\frac{x}{a}\right) = -\sum_{cyc}\left(\frac{x}{a}+\frac{x^2}{2a^2}+\frac{x^3}{3a^3}+\ldots\right) $$ hence $\sum_{cyc}\frac{1}{a^3}$ is minus three times the coefficient of $x^3$ in the Taylor series of $\log p(x)$ in a neighbourhood of $x=0$ , or: $$ \sum_{cyc}\frac{1}{a^3} = -\frac{1}{2}\left.\frac{d^3}{dx^3}\log p(x)\right|_{x=0}, $$ and we just need to evaluate: $$\frac{\left(4+8 x-24 x^2\right)^3}{\left(1-4 x-4 x^2+8 x^3\right)^3}+\frac{3 (-4+24 x) \left(-4-8 x+24 x^2\right)}{\left(1-4 x-4 x^2+8 x^3\right)^2}-\frac{24}{1-4 x-4 x^2+8 x^3} $$ at $x=0$ to get $4^3+3\cdot 4^2-24=\color{red}{88}$ just as before.
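As a quick numerical sanity check of the value 88 (a sketch using only the standard library, relying on the trigonometric identification of the roots mentioned above):

```python
import math

# The roots of p(x) = 8x^3 - 4x^2 - 4x + 1 are
# -cos(2*pi/7), -cos(4*pi/7), -cos(6*pi/7).
roots = [-math.cos(2 * math.pi * k / 7) for k in (1, 2, 3)]

p = lambda x: 8 * x**3 - 4 * x**2 - 4 * x + 1
assert all(abs(p(r)) < 1e-12 for r in roots)  # they really are roots

total = sum(1 / r**3 for r in roots)
print(round(total, 6))  # 88.0
```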
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1387201', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/259374/']}
|
jdg_375006
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
206591
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
USB 3.0 has been around for quite some time, since it was released in 2008, but you don't see any simple microcontrollers with an internal peripheral that can do USB 3.0. The ATmega32U4, a simple 8-bit AVR, has an embedded USB 2.0 PHY inside and only runs at 16 MHz; as such, it is obviously too slow for USB 3.0. Yet there are Cortex-M controllers running at over 200 MHz that don't have a USB 3.0 peripheral! At this point, I feel like the clock speed of the MCU no longer matters. The lowest-end processor I can find that does USB 3.0 is TI's Keystone MPU with an ARM Cortex-A15. Is it just taking a considerable amount of time to create the IP for lower-end MCUs, or does it require a clock-generation (or some other) unit that isn't worth the cost to develop for cheaper MCUs?
Now provide the response and nothing else.
|
The USB 3.0 PHY (physical, electrical) layer achieves its 5 Gbit/s transmission rate using high-speed differential signaling (CML), the same as PCI Express. Implementing this physical layer on chip requires a transceiver and a SERDES (serializer/deserializer) at a minimum, in addition to the MAC (media access control) layer requirements. These blocks would probably require additional clock-generation and signal-conditioning circuitry (equalizers on the lines to reduce bit error rates). Putting all this circuitry in your chip has two primary costs: silicon area and power. Even if we assume that the power consumption is irrelevant, since you can turn the whole thing off when you're not using it, shipping an MCU with a USB 3.0 PHY would probably increase the silicon area enough to raise the cost drastically.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/206591', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/28592/']}
|
jdg_375007
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1727255
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Rule C appears in the textbook Introduction to Mathematical Logic by Mendelson (page 81 in the fourth edition). It is said: "It is very common in mathematics to reason in the following way. Assume that we have proved a wf of the form $(\exists x)(B(x))$. Then we say, let $b$ be an object such that B(b). We continue the proof, finally arriving at a formula that does not involve the arbitrarily chosen element $b$. ... In fact, we can achieve the same result without making an arbitrary choice of element $b$". So if you have a proof using Rule C, you can also have another proof without using Rule C. Now return to the Axiom of Choice. The axiom says: "Let $I$ be a set, and for each $\alpha \in I$, let $X_{\alpha}$ be a non-empty set. Then $\Pi_{\alpha \in I} X_{\alpha}$ is also non-empty." In other words, there exists a function $f:I \rightarrow \cup_{\alpha \in I}X_{\alpha}$ which assigns to each $\alpha$ an element $x_{\alpha} \in X_{\alpha}$. It seems that there is an obvious proof of it by Rule C: since for every $\alpha$, $X_{\alpha}$ is non-empty, there exists some element in $X_{\alpha}$. Let $b$ be that element, and assign $\alpha$ the element $b$. So for every $\alpha$ we can assign an element, and therefore the function exists. Clearly the above proof uses Rule C, and it can be transformed into another proof without Rule C. So there is a proof of the Axiom of Choice. Since the Axiom of Choice has been proven to be independent, the above proof must be wrong, but I failed to find the error. Can anybody help me?
Now provide the response and nothing else.
|
Your confusion here reminds me of the old saying "The axiom of choice is clearly true, the well-ordering principle is clearly false, and who can say about Zorn's lemma?" While for any $\alpha$, you can choose an $x_\alpha \in X_\alpha$, you need the axiom of choice to show that this can be stitched together into a full choice function that is simultaneously valid for all $\alpha$. In other words, the axiom of choice takes you from the statement $\forall \alpha(\exists x_\alpha \in X_\alpha)$ to the statement $\exists f(\forall\alpha(f(\alpha) \in X_\alpha))$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1727255', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/7992/']}
|
jdg_375008
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26923182
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using a pair of integer template parameters to specify a ratio, since I can't use a double as a template parameter. The conversion into a double is protected against divide-by-zero with a ternary. This worked in an earlier version of the compiler, but Visual Studio 2013 gives an error: error C2124: divide or mod by zero Here's a simplified version of the code: template<int B1, int B2>class MyClass{ const double B = (B2 == 0) ? 0.0 : (double) B1 / (double) B2; // ...};MyClass<0, 0> myobj; I really want B to be optimized out of expressions that use it when it's zero, so I need the single-line definition. I know I can just use template parameters <0, 1> to get around it, but I wonder if there's a way to just convince the compiler that my expression is safe?
Now provide the response and nothing else.
|
What I'm told worked: const double B = (B2 == 0 ? 0.0 : (double) B1) / (B2 == 0 ? 1.0 : (double) B2); This avoids relying on short-circuit evaluation to prevent the divide by 0, since the conditional selections happen before the division. Original idea / Perhaps something like this...? (I think B should be static const or constexpr , but I'm sure you can sort that...) template<int B1, int B2>struct MyClass{ const double B = (double) B1 / (double) B2;};template <int B1>struct MyClass<B1, 0>{ const double B = 0.0;}; If there's lots of other stuff you want in MyClass and don't want to duplicate or put in a base etc., you could move the B calculation into a supporting template using the specialisation approach above.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26923182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5987/']}
|
jdg_375009
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
704730
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that a discrete and compact subset $D \subset \mathbb{C}$ must be finite. Does this conclusion hold if $D$ is just discrete and bounded? How about discrete and closed? Compact means the usual thing (for these simple spaces): closed and bounded, where closed means the set is closed under the limit operation / contains all its limit points. Bounded means it can be contained in a ball of some radius around the origin. $D \subset \mathbb{C}$ is a discrete subset if $\forall z \in D$ there exists a ball of radius $r>0$ such that $D \cap B_r(z) = \{z\}$. Okay, for a bounded set: why is discrete required? Can you give me an example of a bounded set that is NOT finite?
Now provide the response and nothing else.
|
Proof without open covers: Assume that $D$ is not finite and take an infinite sequence of distinct elements in $D$. The Bolzano-Weierstrass theorem (I hope you know this one) states that there is a subsequence, say $A_{n}$, that converges. But this is impossible: a convergent sequence has its limit $L$ in $D$ (because $D$ is closed), and since $L\in D$ and $D$ is a discrete set, there is an $r \in \mathbb{R}$ such that $B_{r}(L) \cap D = \{L\}$. This contradicts the epsilon-delta definition of convergence: take $\epsilon = r/2$; then there is an $n \in \mathbb{N}$ such that the norm of $(L-A_{i})$ is less than $r$ for each $i>n$, so $A_{i} \in B_{r}(L) \cap D = \{L\}$ for each such $i$, contradicting the fact that the $A_{i}$ are distinct. So $D$ cannot be infinite.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/704730', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/125459/']}
|
jdg_375010
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
61417499
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering if some more experienced PHP coders can help me resolve this issue: I have a edit.php page which displays current client data to update in form fields. the update button that when clicked at the moment is just going to a page clients.php (Showing all clients data including the updated client data) My goal is when the update button is clicked instead of going to clients.php as a redirect and showing a table full of clients, i wish it to redirect to just the individual client info on the report.php page and have it echo the current updated client data in a table. All i get is a syntax error below. syntax error, unexpected '' (T_ENCAPSED_AND_WHITESPACE), expecting '-' or identifier (T_STRING) or variable (T_VARIABLE) or number (T_NUM_STRING) Below is the redirect code on the edit.php page, then below that the reports.php page i want to redirect to but only echo current Client ID's data. if($run_update){ header("Location: reports.php?cus_id=<?php echo $result['id'] ?>"); keepmsg('<div class="alert alert-success text-center"> <a href="#" class="close" data-dismiss="alert" aria-label="close">×</a> <strong>Success!</strong> Client updated successfully. </div>');} else { keepmsg('<div class="alert alert-danger text-center"> <a href="#" class="close" data-dismiss="alert" aria-label="close">×</a> <strong>Sorry!</strong> Client could not be updated. </div>');} Below is the simple reports.php page. 
This is were I'm trying to redirect to and display only the current clients data by ID <div class="container"><div class="jumbotron"> <div class="container"> <div class="row"> <div class="col-md-12"> <small class="pull-right"><a href="edit.php?cus_id=<?php echo $row['id'] ?>" class='btn btn-danger'>Edit Client</a> </small> <h2 class="text-center invert"> Client Info: <span class="font-weight-light"><?php echo $row['deceased'] ?></span></h2> <hr> </div> </div> </div> <br> <div class="table-responsive table-wrapper"> <table class="table table-bordered table-hover text-center"> <thead> <tr> <th class="text-center">Job Type</th> <th class="text-center">Name of Deceased</th> <th class="text-center">Plot Number</th> <th class="text-center">Cemetery</th> </tr> </thead> <tbody class="invert_td"> <tr> <td><?php echo $row['jobtype'] ?></td> <td><?php echo $row['deceased'] ?></td> <td><?php echo $row['plot'] ?></td> <td><?php echo $row['cemetery'] ?></td> </tr> </tbody> </table> </div></div> Below if the 1st line of code from my form on the edit.php page if that helps <form class="form-horizontal" role="form" method="post" action="edit.php?cus_id=<?php echo $client_id ?>">
Now provide the response and nothing else.
|
The bad news is: It's a nightly-only feature for now. The good news is: docs.rs uses nightly by default. To get this to work, all you need is to enable the doc_cfg feature and apply #[doc(cfg(...))] to the item being documented: #![feature(doc_cfg)]#[doc(cfg(feature = "macros"))]pub fn test() {} Because this is a nightly-only feature, you probably don't want to enable it all the time. tokio defines the following in its Cargo.toml to only enable this feature on docs.rs: # docs.rs-specific configuration[package.metadata.docs.rs]# document all featuresall-features = true# defines the configuration attribute `docsrs`rustdoc-args = ["--cfg", "docsrs"] and then they use // only enables the `doc_cfg` feature when// the `docsrs` configuration attribute is defined#![cfg_attr(docsrs, feature(doc_cfg))]#[cfg_attr(docsrs, doc(cfg(feature = "macros")))]pub fn test() {}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/61417499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12946293/']}
|
jdg_375011
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
677492
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A ball rolls from point $A$ ( $v_0 = 0 \; m / s$ ) to the right along a frictionless path. We can then calculate the speed of the ball at points $B$ and $C$ using the law of conservation of mechanical energy. Does the mathematics of the Newtonian model of mechanics allow us to calculate the speed, even if we don't have information about the time required to travel between the points or the mass of the ball?
Now provide the response and nothing else.
|
Using conservation of energy, time does not enter into the relationships, and both gravitational potential energy and kinetic energy are proportional to the mass, so mass cancels out. Therefore as you indicate you can calculate the speed at B and C, given the speed at A (zero here) using the conservation of energy. Since gravity is a conservative force, the work done between two points is independent of the path taken. Using a force/acceleration approach (Newtonian approach), you need to know the force along the path and the distance over which the force acts. Your diagram provides the $x$ position as well as the elevations of A, B, and C, but not the shape of the path along these points. To calculate the speed and the time of travel, you need the equation of the path along A to B to C to evaluate the component of the gravitational force along the path and the distance traveled along the path.
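A short sketch of the energy-conservation calculation (the elevations here are hypothetical, since the figure's values aren't reproduced in the text): with $v_0 = 0$ at A, conservation of energy gives $v = \sqrt{2g\,\Delta h}$, and neither the mass nor the travel time appears.

```python
import math

g = 9.81  # m/s^2

def speed_after_drop(v0, dh):
    """Speed after a frictionless height drop dh (metres).

    From (1/2) m v^2 + m g h = const: the mass m cancels out,
    and the travel time never enters the calculation.
    """
    return math.sqrt(v0**2 + 2 * g * dh)

# Hypothetical elevations (m) of the three points; the real figure's
# values would simply be substituted here:
h = {"A": 10.0, "B": 4.0, "C": 7.0}

v_B = speed_after_drop(0.0, h["A"] - h["B"])
v_C = speed_after_drop(0.0, h["A"] - h["C"])
print(f"v_B = {v_B:.2f} m/s, v_C = {v_C:.2f} m/s")
```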
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/677492', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/319463/']}
|
jdg_375012
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6953286
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are using bcrypt for hashing passwords and data that never needs to be decrypted. What should we do to protect other user information that does need to be decrypted? For example, let's say that we didn't want a user's real name to be in plain text in case someone was to obtain access to the database. This is somewhat sensitive data but also needs to be called from time to time and displayed in plain text. Is there a simple way to do this?
Now provide the response and nothing else.
|
You can use the crypto module: var crypto = require('crypto');var assert = require('assert');var algorithm = 'aes256'; // or any other algorithm supported by OpenSSLvar key = 'password';var text = 'I love kittens';var cipher = crypto.createCipher(algorithm, key); var encrypted = cipher.update(text, 'utf8', 'hex') + cipher.final('hex');var decipher = crypto.createDecipher(algorithm, key);var decrypted = decipher.update(encrypted, 'hex', 'utf8') + decipher.final('utf8');assert.equal(decrypted, text); Edit: createCipher and createDecipher are now deprecated; use createCipheriv and createDecipheriv instead.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/6953286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/666564/']}
|
jdg_375013
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19256
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently, when talking to a physicist, I claimed that in my experience, when a problem that naively seems like it should take exponential time turns out nontrivially to be in P or BPP, an "overarching reason" why the reduction happens can typically be identified---and almost always, that reason belongs to a list of a dozen or fewer "usual suspects" (for example: dynamic programming, linear algebra...). However, that then got me to thinking: can we actually write down a decent list of such reasons? Here's a first, incomplete attempt at one: (0) Mathematical characterization. Problem has a non-obvious "purely-mathematical" characterization that, once known, makes it immediate that you can just do exhaustive search over a list of poly(n) possibilities. Example: graph planarity, for which an O(n 6 ) algorithm follows from Kuratowski's theorem. (As "planar" points out below, this was a bad example: even once you know a combinatorial characterization of planarity, giving a polynomial-time algorithm for it is still quite nontrivial. So, let me substitute a better example here: how about, say, "given an input n written in binary, compute how many colors are needed to color an arbitrary map embedded on a surface with n holes." It's not obvious a priori that this is computable at all (or even finite!). But there's a known formula giving the answer, and once you know the formula, it's trivial to compute in polynomial time. Meanwhile, "reduces to excluded minors / Robertson-Seymour theory" should probably be added as a separate overarching reason why something can be in P.) Anyway, this is specifically not the sort of situation that most interests me. (1) Dynamic programming. Problem can be broken up in a way that enables recursive solution without exponential blowup -- often because the constraints to be satisfied are arranged in a linear or other simple order. "Purely combinatorial"; no algebraic structure needed. 
Arguably, graph reachability (and hence 2SAT) are special cases. (2) Matroids. Problem has a matroid structure, enabling a greedy algorithm to work. Examples: matching, minimum spanning tree. (3) Linear algebra. Problem can be reduced to solving a linear system, computing a determinant, computing eigenvalues, etc. Arguably, most problems involving "miraculous cancellations," including those solvable by Valiant's matchgate formalism, also fall under the linear-algebraic umbrella. (4) Convexity. Problem can be expressed as some sort of convex optimization. Semidefinite programming, linear programming, and zero-sum games are common (increasingly-)special cases. (5) Polynomial identity testing. Problem can be reduced to checking a polynomial identity, so that the Fundamental Theorem of Algebra leads to an efficient randomized algorithm -- and in some cases, like primality, even a provably-deterministic algorithm. (6) Markov Chain Monte Carlo. Problem can be reduced to sampling from the outcome of a rapidly-mixing walk. (Example: approximately counting perfect matchings.) (7) Euclidean algorithm. GCD, continued fractions... Miscellaneous / Not obvious exactly how to classify: Stable marriage, polynomial factoring, membership problem for permutation groups, various other problems in number theory and group theory, low-dimensional lattice problems... My question is: what are the most important things I've left out? To clarify: I realize that no list can possibly be complete: whatever finite number of reasons you give, someone will be able to find an exotic problem that's in P but not for any of those reasons. Partly for that reason, I'm more interested in ideas that put lots of different, seemingly-unrelated problems in P or BPP, than in ideas that only work for one problem. I also realize that it's subjective how to divide things up. For example, should matroids just be a special case of dynamic programming? 
Is solvability by depth-first search important enough to be its own reason, separate from dynamic programming? Also, often the same problem can be in P for multiple reasons, depending on how you look at it: for example, finding a principal eigenvalue is in P because of linear algebra, but also because it's a convex optimization problem. In short, I'm not hoping for a "classification theorem" -- just for a list that usefully reflects what we currently know about efficient algorithms. And that's why what interests me most are the techniques for putting things in P or BPP that have broad applicability but that don't fit into the above list -- or other ideas for improving my crude first attempt to make good on my boast to the physicist.
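As a concrete illustration of reason (1), here is longest increasing subsequence (my own choice of example, not one from the list above): the naive search over all 2^n subsequences collapses to O(n^2) once the subproblems are ordered by the index of the last element.

```python
# Reason (1) in miniature: dynamic programming over a simple linear order.
def lis_length(a):
    if not a:
        return 0
    best = [1] * len(a)  # best[i] = length of the longest increasing subsequence ending at i
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # 4  (e.g. 1, 4, 5, 9)
```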
Now provide the response and nothing else.
|
Some graph classes allow polynomial-time algorithms for problems that are NP-hard for the class of all graphs. For instance, for perfect graphs, one can find a largest independent set in polynomial time (thanks to vzn in a comment for jogging my memory). Via a product construction, this also allows a unified explanation for several apparently quite different CSPs being tractable (such as those with tree structure which are usually solved by hierarchical decomposition, and the All-Different constraint that is usually solved by perfect matching). It could be argued that perfect graphs are "easy" because they allow nice semidefinite programming formulations of the problems in question (and therefore fall under linear algebra and/or convexity). However, I'm not sure that completely captures what is going on. András Z. Salamon and Peter G. Jeavons, Perfect constraints are tractable ,CP 2008,LNCS 5202,524–528.doi: 10.1007/978-3-540-85958-1_35 Meinolf Sellmann, The Polytope of Tree-Structured Binary Constraint Satisfaction Problems ,CPAIOR 2008,LNCS 5015,367–371.doi: 10.1007/978-3-540-68155-7_39 As noted by Gil Kalai, properties of graphs that form minor-closed classes can be defined by a finite set of forbidden minors (this is the Robertson-Seymour theorem ). Another result of Robertson and Seymour is that testing for the presence of a minor can be done in cubic time. Together these lead to a polynomial-time algorithm to decide properties that are minor-closed. Neil Robertson and P. D. Seymour, Graph Minors. XIII. The disjoint paths problem , Journal of Combinatorial Theory, Series B 63 (1) 65–110, 1995. doi: 10.1006/jctb.1995.1006 One problem with minor-closed graph properties is that they are "small"; excluding even one minor excludes lots of graphs. This is perhaps one reason Robertson-Seymour structural decomposition works: there are few enough remaining graphs for them to have a nice structure. 
Serguei Norine, Paul Seymour, Robin Thomas, and Paul Wollan, Proper minor-closed families are small , Journal of Combinatorial Theory, Series B 96 (5) 754–757, 2006.doi: 10.1016/j.jctb.2006.01.006 ( preprint ) One attempt to go beyond minor-closed classes is via classes defined by forbidden subgraphs or forbidden induced subgraphs. Graph properties defined by a finite set of forbidden subgraphs or induced subgraphs are decidable in polynomial time, by examining all possible subgraphs. I find the really interesting case to be hereditary graph properties where the forbidden set is infinite . A hereditary property is closed under taking of induced substructures, or equivalently consists of the $F$-free structures, where $F$ is a set of forbidden induced substructures, not necessarily finite. For $F$-free classes, an infinite set $F$ doesn't lead to a recognition algorithm in any obvious way. It is also not clear why for some $F$-free graph classes one should be able to find largest independent sets in polynomial time. Trees are the cycle-free graphs; bipartite graphs are the odd-cycle-free graphs; perfect graphs are the (odd-hole,odd-antihole)-free graphs. In each of these cases the forbidden set is infinite yet there is a polynomial-time algorithm to find largest independent sets, and such graphs can also be recognised in polynomial time. There is only partial progress so far on understanding why some $F$-free classes (with $F$ infinite) are decidable in polynomial time.This progress consists of structural decomposition theorems that lead to polynomial-time recognition algorithms for such classes. Perfect graphs are (odd-hole,odd-antihole)-free, yet can be recognised in polynomial time by the Chudnovsky-Cournéjols-Liu-Seymour-Vušković algorithm . (This remains rather messy after a long period of cleaning.) 
There are also results if $F$ is the set of all even cycles, or the set of all odd holes, and significant progress has been made on the case where $F$ contains the claw graph. Maria Chudnovsky and Paul Seymour, Excluding induced subgraphs , Surveys in Combinatorics 2007, 99–119, Cambridge University Press, ISBN 9780521698238. ( preprint ) The hereditary case shares some of the difficulty of the case of minors. For minor-closed graph classes, it is usually not known what the finite set of forbidden minors is, even though it must be finite. For $F$-free graph classes, if the set $F$ is infinite then the class might be nice or it might not be, and we currently have no way to tell other than to try to understand the decomposition structure of the $F$-free graphs.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/19256', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/1575/']}
|
jdg_375014
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
53553
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a program which spits out output to stdout like:

[OK] Something is ok
[OK] Something else is also ok
[FAIL] This does not look good
[FAIL] Neither does this
[OK] At least this is fine

To more easily spot the failures in the output, I would like to mark occurrences of the keyword FAIL in red without discarding the other messages. It would be nice if I could also highlight the OK keywords in green :). Piping to egrep --color FAIL will only show the lines where the FAIL keyword is present.
Now provide the response and nothing else.
|
Based on manatwork 's brilliant little "or nothing" hack in the other question, I offer this, which adds the ability to get two highlight colors: $ myprogram | \ GREP_COLORS='mt=01;32' egrep --color=always '\[OK\]|' | \ GREP_COLORS='mt=01;31' egrep --color=always '\[FAIL\]|' That is, you run grep over the output twice, with a different GREP_COLORS environment variable each time. The '01;32' value means "bold green", and the other means "bold red". See the Wikipedia article on ANSI codes for more ideas. Alas, this requires GNU grep version 2.5.3 or higher. Older versions of GNU grep supported a different coloring mechanism, but those versions have a bug that prevents you from using the feature in a pipeline more than once. BSD grep emulates the older GNU grep coloring mechanism, and it will work twice in a pipeline, but it yells about "empty sub-expression" due to the manatwork hack. You can remove the backslashes at the end of the first two lines to make it all a single line. I've just split it here for clarity and to play nice with the SE page format.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/53553', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/26675/']}
|
jdg_375015
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14489
|
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
CO 2 levels are increasing, they have crossed 400 ppm, which means that of every million gas molecules in the air, 400 are of CO 2 . It has been increasing. Does that mean the total volume of air in the atmosphere is increasing or that some other gas is decreasing as CO 2 increases?
Now provide the response and nothing else.
|
[ Major edits below ] In short the answer is NO. Before we get into volume changes, I have to say that volume is a tricky measure to apply to the atmosphere as a whole, because there is no clear limit between the atmosphere and outer space. Also, the thickness of the atmosphere is quite insensitive to changes in atmospheric mass. For example, the atmosphere of Venus is much more massive than Earth's ( 93 times more massive ), leading to surface pressures 92 times those of Earth. However, the thickness of the atmosphere is quite similar, with the mesosphere extending to 120 km , only 41% higher than on the Earth, despite having 9300% the mass. This planetary comparison suggests that more CO$_2$ will lead to a more massive atmosphere and higher atmospheric pressures, but little change in its actual volume (if you somehow manage to define how to measure the volume of the atmosphere). As an analogy, imagine you have one meter of snow cover, and there is one meter of snowfall. You won't end up with two meters of snow cover. This is because the new snow will compact the snow below, leading to something like 1.7 meters of snow cover. Therefore, more gases in the atmosphere will lead to a denser atmosphere, but have little impact on its thickness. Now we have to make a very important distinction: adding CO$_2$ from a CO$_2$ source (like a volcanic eruption) is not the same as adding it through combustion of fossil fuels. This is because the latter does not add molecules to the atmosphere, it just turns O$_2$ molecules into CO$_2$. Now, if you look into what this transformation means in terms of volume, you will get a surprise (at least it was a surprise to me): given standard conditions (20°C and 1 atm), one cubic meter of O$_2$ weighs 1.331 kg . When you use that oxygen to burn fossil fuels, all the molecules in it will go from a molecular weight of 32 (O$_2$) to 44.01 (CO$_2$), meaning you end up with 1.831 kg of CO$_2$.
Now comes the interesting part: under the same conditions, one cubic meter of CO$_2$ weighs 1.842 kg ; consequently, our initial cubic meter of O$_2$ turned into 0.994 m$^3$ of CO$_2$, equivalent to a volume reduction of 0.6%. Therefore, burning fossil fuels reduces the volume of the atmosphere . I was surprised at first by this result, but it nicely explains the classic experiment where a boiled egg gets sucked into a bottle: This is a screenshot of this video where you can see the trick in action (skip to minute 1:10). Now, although burning stuff decreases the volume of the atmosphere, it is important to note that if you add greenhouse gases like CO$_2$ or methane, you are not only increasing the amount of those gases in the atmosphere, because the increase in temperature will lead to an increase in water vapor too. So the atmospheric mass increase will be larger than the mass of CO$_2$ or methane added. And that extra mass corresponds to additional molecules, so it will lead to extra volume (contrary to the combustion case, where the number of molecules is constant). Thus, as we burn fossil fuels the mass of the atmosphere increases (as well as the atmospheric pressure), but the volume decreases, and since CO$_2$ is the product of combustion, that increase comes at the expense of reducing the amount of molecular oxygen (O$_2$). This decrease is very small, though: the atmosphere has ~21% oxygen and 0.04% CO$_2$, so a 30% increase in CO$_2$ (similar to going from 300 to 400 ppm) means a reduction of O$_2$ of about 0.01%, a relatively small change (using Wikipedia numbers it would go from 20.946% oxygen to 20.934%). Also, that's strictly true only for an instantaneous combustion, because over time there could be negative feedbacks working against this imbalance.
For instance, plants will increase their rate of photosynthesis ( CO$_2$ fertilization effect ). That said, the mass increase (and volume decrease) associated with the CO$_2$ concentration jumping from 300 ppm (pre-industrial levels) to 400 ppm is negligible, and probably dwarfed by day-to-day changes in particulate material or water vapor. As a side note, the book " Red Mars " gives a fairly scientifically sound description of how to "terraform" Mars, and how adding CO$_2$ or water vapor increases the atmospheric pressure and surface temperature.
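The combustion arithmetic above can be checked with a few lines of Python (a sketch using the same quoted densities and molecular weights cited earlier in this answer):

```python
# Volume change when 1 m^3 of O2 (at 20 degC, 1 atm) is consumed by combustion
# and replaced by CO2, using the measured densities quoted in the answer.
M_O2 = 32.00     # g/mol, molecular weight of O2
M_CO2 = 44.01    # g/mol, molecular weight of CO2
rho_O2 = 1.331   # kg/m^3 at 20 degC and 1 atm
rho_CO2 = 1.842  # kg/m^3 at 20 degC and 1 atm

mass_O2 = rho_O2 * 1.0               # kg of O2 in one cubic meter
mass_CO2 = mass_O2 * (M_CO2 / M_O2)  # each O2 molecule becomes one heavier CO2 molecule
vol_CO2 = mass_CO2 / rho_CO2         # m^3 occupied by the resulting CO2

print(f"{mass_CO2:.3f} kg of CO2 occupying {vol_CO2:.3f} m^3")
# 1.831 kg of CO2 occupying 0.994 m^3, i.e. roughly a 0.6% volume reduction
```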
|
{}
|
{'log_upvote_score': 5, 'links': ['https://earthscience.stackexchange.com/questions/14489', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/11519/']}
|
jdg_375016
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47961245
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a quick ssh config to JSON processor in golang. I have the following struct:

type SshConfig struct {
    Host         string
    Port         string
    User         string
    LocalForward string
    ...
}

I am currently looping over every line of my ssh config file, splitting the line on spaces, and checking which property to update:

if split[0] == "Port" {
    sshConfig.Port = strings.Join(split[1:], " ")
}

Is there a way to check whether a property exists and then set it dynamically?
Now provide the response and nothing else.
|
Use the reflect package to set a field by name: // setField sets field of v with given name to given value.func setField(v interface{}, name string, value string) error { // v must be a pointer to a struct rv := reflect.ValueOf(v) if rv.Kind() != reflect.Ptr || rv.Elem().Kind() != reflect.Struct { return errors.New("v must be pointer to struct") } // Dereference pointer rv = rv.Elem() // Lookup field by name fv := rv.FieldByName(name) if !fv.IsValid() { return fmt.Errorf("not a field name: %s", name) } // Field must be exported if !fv.CanSet() { return fmt.Errorf("cannot set field %s", name) } // We expect a string field if fv.Kind() != reflect.String { return fmt.Errorf("%s is not a string field", name) } // Set the value fv.SetString(value) return nil} Call it like this: var config SshConfig...err := setField(&config, split[0], strings.Join(split[1:], " "))if err != nil { // handle error} playground example
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47961245', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1105727/']}
|
jdg_375017
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5272
|
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently took over managing my church's IT resources. Previously it was managed by nobody and so was subject to any random person or vendor's half-baked idea for building out the network. Right now we have a wired network and a wireless network on separate partitions; the wireless one is for public use and the wired is for our actual work network. The biggest problem right now is that anybody with an ethernet cable can plug in to any port and get instant access to our internet connection and any unsecured resources on the network, including printers, non-password-protected shares, domain directory listings, etc. I've tried to look up some information on this but apparently I don't know the right terminology and so haven't found much of anything helpful. What are my options for locking down our wired network so that users who don't log in to the domain can't access any of these things? (If it's of relevance, our server is running Windows Server 2k3 and does contain an Active Directory controller; our network hardware consists of a set of Cisco switches and a Cisco 881 router, which pumps both the wired and wireless network.)
Now provide the response and nothing else.
|
There are a couple of options for you, depending on how much money and time you want to invest. At the complex/high end, you can configure 802.1x authentication so that only domain users can access the wired network. This involves buying/building a radius server, tying that to your domain controller and configuring the switch to query the radius server whenever a port is connected. I am assuming you have a small network with fewer than 30 users. I also assume that your "office" users don't move around very much (i.e., their computers are on desks). In this case, a simpler option would be to disable all the unused ports, or configure them for the "guest" network. Either way, a visitor will not get access to your servers. The downside, of course, is that when you have new users, or you move furniture around, you will have to reconfigure those ports. For a small number of infrequent moves, this shouldn't be a burden. If I may move slightly off-topic, I'm willing to bet your biggest security concern isn't really unprotected ethernet ports. People who attack networks usually do so to steal money, so your efforts should be directed at protecting the church's financial assets. Make sure your online banking is secure and money can't be transferred out without some sort of verification. You might also invest in some anti-malware/spam filtering to reduce the chance of inadvertently downloading software designed to steal bank info. The case of a person plugging into your network in order to steal something isn't very likely to happen. There are more important threats out there to worry about. Sorry for the digression.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/5272', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/3401/']}
|
jdg_375018
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24048019
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In order to generate XML documentation using Web API Help Pages for my Web API project, I had to check the "XML documentation file" option under the "Builds" section of my project's properties. This correctly generates the documentation when I build locally. However, when I check in, I get the following error on the build server: CSC: Error generating XML documentation file 'c:\Builds\3...\Sources\CurrentVersion...\ProjectName\App_Data\XmlDocument.xml' ('Access is denied. ') I'm not sure why this happens, since it creates a new directory for each build anyway. I googled this issue and searched Stack Overflow, but the only solutions I found were to remove the XML documentation file (which defeats the purpose of using Web API Help Pages). How can I resolve this?
Now provide the response and nothing else.
|
If the XML file is checked in to TFS, then when TFS gets the files to the workspace on the build server, the file will have the read-only attribute set. When the build then generates the new XML file, it won't be able to overwrite the old XML file since it has the read-only attribute. The solution is to either: a) use your build scripts to modify the file attribute and make it read-write, or b) remove the XML file checked in to TFS so that the build will be able to generate the XML without conflict. Update: if you are using solution b and the file is part of your project, make sure that you remove the file from the csproj file as well.
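For option (a), a pre-build step just needs to clear the read-only flag before the compiler regenerates the file. As a language-agnostic sketch (the helper name and example path are hypothetical; a real TFS build would typically use `attrib -r` or an MSBuild task instead):

```python
import os
import stat

def make_writable(path):
    # Hypothetical pre-build helper: clear the read-only attribute so the
    # build can overwrite the checked-in XML documentation file.
    if os.path.exists(path):
        os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)

# e.g. make_writable(r"App_Data\XmlDocument.xml")  # path is illustrative
```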
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24048019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1207963/']}
|
jdg_375019
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
67184531
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have recently migrated my Flutter app to null-safety but WillPopScope in combination with AlertDialog causes a problem. WillPopScope expects Future<bool> but showDialog returns Future<bool?> and I can't figure out how to cast one onto the other.

Widget _buildBody(BuildContext context) {
  return WillPopScope(
    onWillPop: (() => _onBackPressed(context)) as Future<bool> Function(),
    child: new Container([...]),
  );
}

// this should return a Future<bool> but showDialog doesn't allow that
Future<bool?> _onBackPressed(BuildContext context) async {
  if (someCondition) {
    // showDialog returns a Future<T?>
    return showDialog(
        context: context,
        builder: (context) => new AlertDialog(
              [...]
              actions: <Widget>[
                new TextButton(
                  child: Text("cancel"),
                  onPressed: () => Navigator.of(context).pop(false),
                ),
                new TextButton(
                  child: Text("discard"),
                  onPressed: () => Navigator.of(context).pop(true),
                ),
              ],
            ));
  } else {
    return true;
  }
}

The cast (() => _onBackPressed(context)) as Future<bool> Function() in onWillPop as shown in this sample is not working.

The following _CastError was thrown building Builder(dirty):
type '() => Future<bool?>' is not a subtype of type '() => Future<bool>' in type cast

Any idea how I can catch the null value returned by showDialog and make WillPopScope happy again?
Now provide the response and nothing else.
|
I guess the easiest would be:

Future<bool> _onBackPressed(BuildContext context) async {
  ...
  return (await showDialog(...)) ?? false; // if the dialog returns null, return false instead
  ...

or

bool? dialogResp = await showDialog(...);
if (dialogResp != null) return dialogResp;
else return false;

or

Future<bool> _onBackPressed(BuildContext context) async {
  ...
  return showDialog(...).then((x) => x ?? false);
  ...
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/67184531', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/360226/']}
|
jdg_375020
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3291702
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find $f^{(2022)}(0)$ of the function $$f(x)=x^{2019} \cos x.$$ By Taylor series at the point $x =0$ , my answer was $0$ . But someone who gave this question to me said the answer is like the below picture. Who is correct? Did I something wrong?
Now provide the response and nothing else.
|
You are correct; the coefficient of $x^{2022}$ in the Taylor series of $x^{2019}\cos x$ at $x=0$ is $0$ , which implies that the 2022nd derivative at $0$ is $0$ . Your friend made the following error. Yes, all the terms but the $k=2019$th one disappear. But this means s/he should have $\cos^{(2022-2019)}(0) = \cos'''(0) = \sin(0) = 0.$
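The coefficient argument is easy to check mechanically. A small sketch: shifting by 2019 turns the question into asking for the coefficient of $x^3$ in $\cos x$, which is an odd power and hence zero.

```python
from math import factorial

# Coefficient of x^k in the Maclaurin series of cos(x):
# 0 for odd k, (-1)^(k//2) / k! for even k.
def cos_coeff(k):
    return 0 if k % 2 else (-1) ** (k // 2) / factorial(k)

# f(x) = x^2019 * cos(x), so [x^n] f(x) = cos_coeff(n - 2019),
# and f^(n)(0) = n! * [x^n] f(x).
n = 2022
coeff = cos_coeff(n - 2019)  # index 3 is odd, so the coefficient is 0
print(coeff)  # 0, hence f^(2022)(0) = 2022! * 0 = 0
```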
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3291702', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/350635/']}
|
jdg_375021
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34552380
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Maybe it's a lame question, but I don't get it! If I include <string> or <vector> in multiple translation units (different .cpp files), why doesn't it break the ODR? As far as I know, each .cpp is compiled separately, so the code for vector's methods will be generated in each object file, right? So the linker should detect it and complain. Even if it won't (I suspect there's a special case for templates), will the program use one copy of the code or a different cloned set from each unit when I link it all together???
Now provide the response and nothing else.
|
The same way any template definitions don't break the ODR — the ODR specifically says that template definitions may be duplicated across translation units, as long as they are literally duplicates (and, since they are duplicates, no conflict or ambiguity is possible). [C++14: 3.2/6]: There can be more than one definition of a class type (Clause 9), enumeration type (7.2), inline function with external linkage (7.1.2), class template (Clause 14), non-static function template (14.5.6), static data member of a class template (14.5.1.3), member function of a class template (14.5.1.1), or template specialization for which some template parameters are not specified (14.7, 14.5.5) in a program provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements [..] Multiple inclusions of <vector> within the same translation unit are expressly permitted and effectively elided, more than likely by " #ifndef " header guards.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34552380', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5302690/']}
|
jdg_375022
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26642720
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to format a Date in Russian using the code below:

SimpleDateFormat.getDateInstance(SimpleDateFormat.LONG, locale).format(date);

where locale is of type Locale. The problem is that the months are not formatted correctly. January comes out as "январь" when it should be "января", February comes out as "февраль" when it should be "февраля", and so on... One idea is to convert the incorrect months to the proper ones in my own logic. Is there anything by which Java does this automatically? Thanks
Now provide the response and nothing else.
|
On my JDK-6 -installation I can reproduce your problem: Date jud = new SimpleDateFormat("yyyy-MM-dd").parse("2014-02-28");String month = DateFormat.getDateInstance(SimpleDateFormat.LONG, new Locale("ru")).format(jud);System.out.println(month); // output: 28 Февраль 2014 г. Java-8 offers you a solution. It seems that the JDK has changed the internal default from "standalone-style" (nominative) to "format-style" (genitive). String date = DateTimeFormatter.ofLocalizedDate(FormatStyle.FULL) .withLocale(new Locale("ru")) .format(LocalDate.of(2014, 2, 28));System.out.println(date); // output: 28 февраля 2014 г. If you need to apply standalone textstyle then you have to set up your own DateTimeFormatterBuilder which requires a little bit more effort, else TextStyle.FULL should be the default. String m = Month.FEBRUARY.getDisplayName(TextStyle.FULL , new Locale("ru")); // февраля (first and last char are different)String s = Month.FEBRUARY.getDisplayName(TextStyle.FULL_STANDALONE , new Locale("ru")); // Февраль (this style can be used in DateTimeFormatterBuilder for the month field, too) Workaround for Java-pre-8 using old style: Define your own text resources (troublesome)! Locale russian = new Locale("ru");String[] newMonths = { "января", "февраля", "марта", "апреля", "мая", "июня", "июля", "августа", "сентября", "октября", "ноября", "декабря"};DateFormatSymbols dfs = DateFormatSymbols.getInstance(russian);dfs.setMonths(newMonths);DateFormat df = DateFormat.getDateInstance(DateFormat.LONG, russian);SimpleDateFormat sdf = (SimpleDateFormat) df;sdf.setDateFormatSymbols(dfs);Date jud = new SimpleDateFormat("yyyy-MM-dd").parse("2014-02-28");String month = sdf.format(jud);System.out.println(month); // output: 28 февраля 2014 г. Joda-Time does not offer a good solution in a Java-pre-8 environment because it only delegates to JDK. See also a similar issue on Joda-site . 
Finally there is also my library Time4J which can solve the problem like Java-8, but uses its own text resources for Russian and understands both forms (old style and standalone-style), so this is a simple solution for older Java-versions (and will of course not be obsoleted by Java-8 due to many other feature enhancements). System.out.println( ChronoFormatter.ofDateStyle(DisplayMode.FULL, new Locale("ru")).format( PlainDate.of(2014, Month.FEBRUARY, 28) )); // output: 28 февраля 2014 г.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26642720', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1596606/']}
|
jdg_375023
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
66795
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to check the progress of running cp process? Some processes respond to various KILL signals so that you can check what is their status. I know that I can run cp with parameter -v but what if forgot to do that, cp is running for a very long time and I want to know which file is being copied, or how many were already copied.
Now provide the response and nothing else.
|
Yes, by running stat on the target file and the source file to get their sizes, i.e.:

stat -c "%s" /bin/ls

You then get the percentage of data copied by comparing the two values, that's it. A very basic implementation will look like this:

function cpstat()
{
  local pid="${1:-$(pgrep -xn cp)}" src dst
  [[ "$pid" ]] || return
  while [[ -f "/proc/$pid/fd/3" ]]; do
    read src dst < <(stat -L --printf '%s ' "/proc/$pid/fd/"{3,4})
    (( src )) || break
    printf 'cp %d%%\r' $((dst*100/src))
    sleep 1
  done
  echo
}
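The same /proc trick can be sketched in Python for comparison (this assumes Linux, and that cp holds its source on fd 3 and its destination on fd 4, as the shell version above does; the percent helper is the only portable part):

```python
import os
import time

def percent(copied, total):
    # Integer percentage, guarding against a zero-sized source.
    return copied * 100 // total if total else 0

def cp_progress(pid, interval=1.0):
    # Poll the sizes of cp's open files via /proc (Linux only).
    # fd 3 is assumed to be the source, fd 4 the destination.
    while True:
        try:
            src = os.stat(f"/proc/{pid}/fd/3").st_size
            dst = os.stat(f"/proc/{pid}/fd/4").st_size
        except OSError:
            break  # cp exited or closed its files
        print(f"cp {percent(dst, src)}%", end="\r")
        if src and dst >= src:
            break
        time.sleep(interval)
    print()
```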
|
{}
|
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/66795', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/30258/']}
|
jdg_375024
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2085313
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to obtain a closed form for the following limit: $$I_2=\lim_{k\to \infty} \left ( - (\ln k)^2/2 +\sum_{n=1}^{k} \psi(n) \, \psi'(n) \right)$$ Here $\psi(n)$ is the digamma function. Using the method detailed in this answer , I was able to compute simpler, related series: $$\lim_{k\to \infty} \sum_{n=1}^{k} \left (\psi'(n) -1/n \right) =1 $$ $$\sum_{n=1}^{\infty} \psi''(n) =-\frac{\pi^2}{3} $$ But $I_2$ seems to be tougher because of the product of two digammas. The divergence of $(\ln k)^2$ is matched by the first terms of the asymptotic series for $\psi(n) \psi'(n)$, via the definition of the Stieltjes constant, $$ \lim_{k\to \infty} \left ( \sum_{n=1}^{k} \frac{\ln n}{n} - (\ln k)^2/2 \right ) =\gamma_1 $$ but I am stuck with the remainder term. Side remark : the problem originates in physics, see my older question . In particular, I was able to show that $\langle x \rangle \approx -0.251022$ defined in that question actually equals exactly $-(1+\gamma_0)/(2 \pi)$ where $\gamma_0$ is Euler's constant. The answer I seek here is the only piece missing on my path to a closed form for $\langle x^2 \rangle$.
Now provide the response and nothing else.
|
Hint . One may obtain, for $N\ge1$, $$\begin{align}\sum_{n=1}^{N} \psi(n) \, \psi'(n)&=\left(N\psi(N)-N+\frac12\right)\psi'(N)+\frac{\psi(N)^2}2- \psi(N)+\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2 \tag1\end{align}$$ equivalently, for $N\ge1$,$$\small{\begin{align}&\sum_{n=1}^{N} \left(H_{n-1}-\gamma\right)\left(\frac{\pi^2}{6}-H_{n-1,2}\right)\\&=\left(NH_{N-1}-(\gamma+1)N+\frac12\right)\left(\frac{\pi^2}{6}-H_{N-1,2}\right)+\frac{\left(H_{N-1}-\gamma\right)^2}2-\frac{H_{N-1,2}}2-H_{N-1}+\frac{\pi^2}{6}-\frac{\gamma^2}2, \end{align}}$$ then, one may recall the standard asymptotic expansions of the polygamma functions $\psi=\Gamma'/\Gamma$ and $\psi'$, as $X \to \infty$,$$\begin{align}\psi(X)&= \ln X- \frac{1}{2X}-\frac{1}{12X^2}+\mathcal{O}\left(\frac{1}{X^4}\right)\\\psi'(X)&= \frac{1}{X}+\frac{1}{2 X^2}+\frac{1}{6 X^3}+\mathcal{O}\left(\frac{1}{X^4}\right)\end{align}$$ yielding, as $N \to \infty$, $$-\frac{(\ln N)^2}2 +\sum_{n=1}^{N} \psi(n) \, \psi'(n)=\color{#9E15E3}{\frac{\pi^2}{12}-\gamma-\frac{\gamma^2}2-1}+\frac{\ln N}{12N^2}-\frac{1}{24N^2}+\mathcal{O}\left(\frac{\ln N}{N^4}\right) \tag2$$ then one gets the desired limit. To prove $(1)$ one may use a summation by parts with$$f_n=\psi(n),\quad g_{n+1}-g_n=\psi'(n),\quad g_1=\frac{\pi^2}6,\quad n\ge1.$$The above asymptotic expansion can be obtained at any order.
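A quick numerical sanity check of the limit, using only the standard library (a sketch: for integer arguments $\psi(n)=H_{n-1}-\gamma$ and $\psi'(n)=\pi^2/6-H_{n-1,2}$, with $\gamma$ hard-coded):

```python
import math

# Numerical check of the closed form pi^2/12 - gamma - gamma^2/2 - 1.
gamma = 0.5772156649015329  # Euler-Mascheroni constant
N = 5000
H = 0.0   # harmonic number H_{n-1}
H2 = 0.0  # generalized harmonic number H_{n-1,2}
s = 0.0
for n in range(1, N + 1):
    s += (H - gamma) * (math.pi ** 2 / 6 - H2)  # psi(n) * psi'(n)
    H += 1.0 / n
    H2 += 1.0 / n ** 2

numeric = s - math.log(N) ** 2 / 2
closed = math.pi ** 2 / 12 - gamma - gamma ** 2 / 2 - 1
print(numeric, closed)  # both approximately -0.9213376
```

By the error term in $(2)$, the truncation error at $N=5000$ is of order $\ln N/(12N^2)\approx 3\cdot10^{-8}$, so the two printed values agree to many digits.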
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2085313', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14825/']}
|
jdg_375025
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15124034
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm running into the strangest thing that I can't figure out. I have a SQL table with a bunch of reports stored in an ntext field. When I copied and pasted the value of one of them into notepad and saved it (used Visual Studio to grab the value from a smaller report in a differente row), the raw txt file was about 5Mb. When I try to get this same data using SqlDataReader and convert it to a string, I get an out of memory exception. Here is how I am trying to do it: string output = "";string cmdtext = "SELECT ReportData FROM Reporting_Compiled WHERE CompiledReportTimeID = @CompiledReportTimeID";SqlCommand cmd = new SqlCommand(cmdtext, conn);cmd.Parameters.Add(new SqlParameter("CompiledReportTimeID", CompiledReportTimeID));SqlDataReader reader = cmd.ExecuteReader();while (reader.Read()){ output = reader.GetString(0); // <--- exception happens here}reader.Close(); I tried creating an object and a stringbuilder to grab the data, but I still get the same out of memory exception. I've also tried using reader.GetValue(0).ToString() as well to no avail. The query only returns 1 row, and when I run it in SQL Management Studio its as happy as can be. The exception thrown is: System.OutOfMemoryException was unhandled by user code Message=Exception of type 'System.OutOfMemoryException' was thrown. 
Source=mscorlib StackTrace: at System.String.CreateStringFromEncoding(Byte* bytes, Int32 byteLength, Encoding encoding) at System.Text.UnicodeEncoding.GetString(Byte[] bytes, Int32 index, Int32 count) at System.Data.SqlClient.TdsParserStateObject.ReadString(Int32 length) at System.Data.SqlClient.TdsParser.ReadSqlStringValue(SqlBuffer value, Byte type, Int32 length, Encoding encoding, Boolean isPlp, TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.ReadSqlValue(SqlBuffer value, SqlMetaDataPriv md, Int32 length, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ReadColumnData() at System.Data.SqlClient.SqlDataReader.ReadColumn(Int32 i, Boolean setTimeout) at System.Data.SqlClient.SqlDataReader.GetString(Int32 i) at Reporting.Web.Services.InventoryService.GetPrecompiledReportingData(DateTime ReportTime, String ReportType) in C:\Projects\Reporting\Reporting.Web\Services\InventoryService.svc.cs:line 3244 at SyncInvokeGetPrecompiledReportingData(Object , Object[] , Object[] ) at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs) at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc) InnerException: null I had tested with other row numbers that appeared to work, but that was a false positive as those test ID's had no data. I pulled some other test ID's after looking at the table that contain reports that are near identical, and I get the same exception. Maybe its how the string is encoded? The data stored in the table is a JSON encoded string that was generated out of a really gnarly class I made somewhere else, in case that helps. 
Here is the preceding code block: // get the report time IDint CompiledReportTimeTypeID = CompiledReportTypeIDs[ReportType];int CompiledReportTimeID = -1;cmdtext = "SELECT CompiledReportTimeID FROM Reporting_CompiledReportTime WHERE CompiledReportTimeTypeID = @CompiledReportTimeTypeID AND CompiledReportTime = @ReportTime";cmd = new SqlCommand(cmdtext, conn);cmd.Parameters.Add(new SqlParameter("CompiledReportTimeTypeID", CompiledReportTimeTypeID));cmd.Parameters.Add(new SqlParameter("ReportTime", ReportTime));reader = cmd.ExecuteReader();while (reader.Read()){ CompiledReportTimeID = Convert.ToInt32(reader.GetValue(0));}reader.Close(); CompiledReportTypeIDs is a dictionary that gets the correct CompiledReportTimeTypeID based on a string parameter that's fed in at the beginning of the method. ReportTime is a DateTime that is fed in earlier. Edit:I am going to drop the table and recreate it with the ReportData field as nvarchar(MAX) instead of ntext, just to rule out a SQL data type issue. It's a long shot and I'll update again with what I find. Edit2:Changing the field in the table to nvarchar(max) had no effect. I also tried using output = cmd.ExecuteScalar().ToString() as well, with no impact. I'm trying to see if there is a max size for SqlDataReader. When I copied the value of the text from SQL Mgmt Studio, it was only 43Kb when saved in notepad. To verify this, I pulled a report with a known working ID (a smaller report), and when I copied the value straight out of Visual Studio and dumped it in notepad it was around 5MB! That means these big reports are probably in the ~20MB range sitting in a nvarchar(max) field. Edit3:I rebooted everything, to include my dev IIS server, the SQL server, and my dev laptop. Now it seems to be working. This isn't the answer as to why this happened though. I'm leaving this question open for explanations as to what happened, and I'll mark one of those as an answer. 
Edit4:Having said that, I ran another test without changing a thing and the same exception has returned. I'm really starting to think that this is a SQL issue. I'm updating the tags on this question. I made a separate app that runs the exact same query and it runs fine. Edit5:I have implemented sequential access as per one of the answers below. Everything gets read into a stream properly, but when I try to write it out to a string I'm still getting the out of memory exception. Would this indicate the issue of getting a contiguous block of memory? Here is how I implemented the buffering: reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess); long startIndex = 0; long retval = 0; int bufferSize = 100; byte[] buffer = new byte[bufferSize]; MemoryStream stream = new MemoryStream(); BinaryWriter writer = new BinaryWriter(stream); while (reader.Read()) { // Reset the starting byte for the new CLOB. startIndex = 0; // Read bytes into buffer[] and retain the number of bytes returned. retval = reader.GetBytes(0, startIndex, buffer, 0, bufferSize); // Continue while there are bytes beyond the size of the buffer. while (retval == bufferSize) { writer.Write(buffer); writer.Flush(); // Reposition start index to end of last buffer and fill buffer. startIndex += bufferSize; retval = reader.GetBytes(0, startIndex, buffer, 0, bufferSize); } //output = reader.GetString(0); } reader.Close(); stream.Position = 0L; StreamReader sr = new StreamReader(stream); output = sr.ReadToEnd(); <---- Exception happens here //output = new string(buffer); Edit6:To add to this, when OOM exception happens I see the IIS worker process (which holds the method that is running) hit almost 700MB. This is running on IIS Express and not the full IIS on the production server. Would this have anything to do with it? Also when I call Byte[] data = stream.ToArray() I intermittently get the OOM as well. 
I think what I really need is a way to give more memory to this process, but I don't know where to configure this. Edit7:I just changed my dev server from using IIS Express on my local machine to the built-in Visual Studio web server. The OOM exception is now gone. I really think it was the allocating a contiguous block of memory issue, and for whatever reason IIS Express wouldn't fork it over. Now that it is running fine, I will publish to my full blown server on 2008R2 running the regular IIS7 to see how it goes.
Now provide the response and nothing else.
|
You should try to read the data sequentially by specifying the command behavior when you execute the reader. Per the documentation, Use SequentialAccess to retrieve large values and binary data. Otherwise, an OutOfMemoryException might occur and the connection will be closed . While sequential access is typically used on large binary data, based on the MSDN documentation you can use it to read large amounts of character data as well. When accessing the data in the BLOB field, use the GetBytes or GetChars typed accessors of the DataReader, which fill an array with data. You can also use GetString for character data; however, to conserve system resources you might not want to load an entire BLOB value into a single string variable. You can instead specify a specific buffer size of data to be returned, and a starting location for the first byte or character to be read from the returned data. GetBytes and GetChars will return a long value, which represents the number of bytes or characters returned. If you pass a null array to GetBytes or GetChars, the long value returned will be the total number of bytes or characters in the BLOB. You can optionally specify an index in the array as a starting position for the data being read. This MSDN example shows how to perform sequential access. I believe you can use the GetChars method to read the textual data.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15124034', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/108584/']}
|
jdg_375026
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
49903028
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm creating my source observable like this (make an API call every 5s): const obs$ = Observable.interval(5000).switchMap(() => makeApiCall()); And I want to modify $obs so that it has the following characteristics: start the observable only when there's at least 1 subscriber multicast. I.e. if I obs$.subscribe(...) twice, the underlying code makeApiCall() should only run once. any subscriber which subscribes at any time should immediately get the last emitted value (and not wait ~5s until the next value emits) retryable. If one makeApiCall() errors, I want (if possible) all subscribers to get an error notification, but reconnect to $obs , and continue doing makeApiCall() every 5s So far I found the following leads: It seems like I'd need to create a BehaviorSubject myBehaviorSubject , do a single subscription obs$.subscribe(myBehaviorSubject) , and any other observers should subscribe to myBehaviorSubject . Not sure if that answers the "retryable" part. I also looked at shareReplay, seems like $obs.shareReplay(1) would do the trick (for the 4 requirements). If I understood correctly it subscribes a ReplaySubject(1) to the source observable, and future observers subscribe to this ReplaySubject. Is there an equivalent shareBehavior? In RxSwift, I found shareReplayLatestWhileConnected, which seems like the shareBehavior I was imagining. But it doesn't exist in RxJS. Any ideas what is the best way to achieve this?
Now provide the response and nothing else.
|
As you mentioned, shareReplay(1) pretty much gets you there. It will multicast the response to current subscribers and replay the last value (if there is one) to new subscribers. That seems like what you would want rather than shareBehavior (if it existed) since you are calling an API and there isn't an initial value. You should know that shareReplay will create a subscription to the source stream but will only unsubscribe when refCount === 0 AND the source stream terminates (error or complete). This means that after the first subscription the interval will start, and even when there are no more subscriptions it will continue. If you want to stop the interval when no-one is subscribed then use multicast(new ReplaySubject(1)).refCount() . The multicast operator will create a single subscription to the source stream and push all values into the subject provided as an instance ( multicast(new Subject()) ) or by the factory ( multicast(() => new Subject()) ). All subscribers to the stream after the multicast will subscribe to the multicast subject. So when a value flows through the multicast operator all of its subscribers will get that value. You can change the type of subject that you pass to multicast to change its behavior. In your case you probably want a ReplaySubject so that it will replay the last value to a new subscriber. You could use a BehaviorSubject too if you felt that met your need. Now the multicast operator is connectable, meaning that you would have to call connect() on the stream to make it hot. The refCount operator basically makes a connectable observable act like an ordinary observable in that it will become hot when subscribed but will become cold when there are no subscribers. It does this by keeping an internal reference count (hence the name refCount ). When refCount === 0 it will disconnect.
This is the same thing as shareReplay(1) with one minor but important difference, which is that when there are no more subscribers it will unsubscribe from the source stream. If you are using a factory method to create a new subject when subscribing to the source (ex: multicast(() => new ReplaySubject(1)) ) then you will lose your value when the stream goes from hot to cold to hot since it will create a new subject each time it goes hot. If you want to keep the same subject between source subscriptions then you can pass in a subject instead of a factory (ex: multicast(new ReplaySubject(1)) ) or use its alias publishReplay(1) . As far as your last requirement of providing errors to your subscribers and then resubscribing, you can't call the error callback on a subscription and then continue getting values on the next callback. An unhandled error will end a subscription if it reaches it. So you have to catch it before it gets there and turn it into a normal message if you want your subscription to see it and still live. You can do this like so: catch((err) => of(err)) and just flag it somehow. If you want to mute it then return empty() . If you want to retry immediately then you could use the retryWhen operator but you probably want to put that before the sharing operator to make it universal. However this also prevents your subscribers from knowing about an error. Since the root of your stream is an interval and the error came from the inner observable returned from the switchMap , the error will not kill the source of the stream but it could kill the subscription. So as long as you handle the error ( catch/catchError ) the API call will be retried on the next interval. Also, you may want timer(0, 5000) instead of interval so that your API call immediately fires and then fires on a 5 second interval after that.
So I would suggest something like the following: let count = 0;function makeApiCall() { return Rx.Observable.of(count++).delay(1000);}const obs$ = Rx.Observable.timer(0, 5000) .switchMap(() => makeApiCall().catch(() => Rx.Observable.empty())) .publishReplay(1) .refCount(); console.log('1 subscribe');let firstSub = obs$.subscribe((x) => { console.log('1', x); });let secondSub;let thirdSub;setTimeout(() => { console.log('2 subscribe'); secondSub = obs$.subscribe((x) => { console.log('2', x); });}, 7500);setTimeout(() => { console.log('1 unsubscribe'); firstSub.unsubscribe(); console.log('2 unsubscribe'); secondSub.unsubscribe();}, 12000);setTimeout(() => { console.log('3 subscribe'); thirdSub = obs$.subscribe((x) => { console.log('3', x); });}, 17000);setTimeout(() => { console.log('3 unsubscribe'); thirdSub.unsubscribe();}, 30000); <script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.10/Rx.min.js"></script> For convenience, here are aliases for multicast: publish() === multicast(new Subject())publishReplay(#) === multicast(new ReplaySubject(#))publishBehavior(value) === multicast(new BehaviorSubject(value))
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49903028', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1935318/']}
|
jdg_375027
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1068473
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How to solve this linear system using the Laplace transform?$$\mathbf X'(t)=\left[\begin{array}{rrr}-3&0&2\\1&-1&0\\-2&-1&0\end{array}\right]\mathbf X(t); ~~~~~~~~\mathbf X(0)=\left[\begin{array}{r}4\\-1\\2\end{array}\right]$$I am struggling with this problem. I tried to write it as $$\begin{cases}x_1' &= -3x_1+2x_3,\quad x_1(0)=4\\x_2'&= x_1-x_2, \quad x_2(0)=-1\\x_3'&= -2x_1 -x_2, \quad x_3(0)=2 \end{cases}$$ Then I Laplace transformed both sides, but I have hit a dead end. Is there a better way of solving this problem? I would be grateful for help. I have only done simple problems so far. This is very complicated for me.
Now provide the response and nothing else.
|
(In my version of Awodey this is page 78-9, not 91.) Freyd is just making a linguistic point, as made clear by what he says next: "Indeed, subobjects, as we have defined them, do not have subobjects" -- only objects have subobjects. A subobject is not an object, but rather a monomorphism (or, depending on your definition, an equivalence class of monomorphisms). So you can't even state transitivity, strictly speaking. As Freyd puts it, "This is a baroque consideration". The transitivity result that I think you're driving at is that if $M' \overset{f}{\to}M$ is a subobject of $M$ and $M \overset{m}{\to} X$ is a subobject of $X$, then since monomorphisms are closed under composition, $M' \overset{m \circ f}{\to} X$ is a subobject of $X$. By abuse of language, omitting mention of the arrows involved, one might just say: if $M'$ is a subobject of $M$ and $M$ is a subobject of $X$, then $M'$ is a subobject of $X$. But technically it's only an abuse of language that allows one to say this.
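The composition fact invoked in that last paragraph (monomorphisms are closed under composition) is a one-line chase; sketched here in LaTeX, in my wording rather than Freyd's or Awodey's:

```latex
\begin{align*}
(m\circ f)\circ g = (m\circ f)\circ h
  &\;\Longrightarrow\; m\circ(f\circ g) = m\circ(f\circ h) && \text{(associativity)}\\
  &\;\Longrightarrow\; f\circ g = f\circ h && \text{($m$ monic)}\\
  &\;\Longrightarrow\; g = h && \text{($f$ monic)}
\end{align*}
```

So $m\circ f$ is left-cancellable, i.e. monic.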
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1068473', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/152063/']}
|
jdg_375028
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17617858
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to create a post from the front end of my WordPress site. When people add posts with the same post_title I want that post to get updated, rather than creating a new post. I have the following: if (!get_page_by_title($post_title, 'OBJECT', 'post') ){$my_post = array( 'post_title' => $post_title, 'post_content' => $post_content, 'post_status' => 'publish', 'post_author' => $post_author, 'post_category' => $post_categories);wp_insert_post( $my_post );}else {$page = get_page_by_title($post_title);$page_id = $page->ID;$my_post = array( 'ID' => $page_id, 'post_title' => $post_title, 'post_content' => $post_content, 'post_status' => 'publish', 'post_author' => $post_author, 'post_category' => $post_categories);wp_update_post( $my_post );} The above works fine until the post title is the same. It will still duplicate in the database and won't even consider the "else" statement. Does the above look OK, or am I doing something wrong?
Now provide the response and nothing else.
|
What if you use an empty() check on the result? Please be aware that it gets the first post/page item in the database even if the post is trashed. $check_title=get_page_by_title($post_title, 'OBJECT', 'post');//also var_dump($check_title) for testing onlyif (empty($check_title) ){$my_post = array( 'post_title' => $post_title, 'post_content' => $post_content, 'post_status' => 'publish', 'post_author' => $post_author, 'post_category' => $post_categories);wp_insert_post( $my_post );}else {$my_post = array( 'ID' => $check_title->ID, 'post_title' => $post_title, 'post_content' => $post_content, 'post_status' => 'publish', 'post_author' => $post_author, 'post_category' => $post_categories);wp_update_post( $my_post );}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17617858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1214660/']}
|
jdg_375029
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8266
|
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There is a package named segmented in R. Is there a similar package in Python?
Now provide the response and nothing else.
|
No, currently there isn't a package in Python that does segmented linear regression as thoroughly as those in R (e.g. R packages listed in this blog post ). Alternatively, you can use a Bayesian Markov Chain Monte Carlo algorithm in Python to create your segmented model. Segmented linear regression, as implemented by all the R packages in the above link, doesn't permit extra parameter constraints (i.e. priors), and because these packages take a frequentist approach, the resulting model doesn't give you probability distributions for the model parameters (i.e. breakpoints, slopes, etc). Defining a segmented model in statsmodels , which is frequentist, is even more restrictive because the model requires a fixed x-coordinate breakpoint. You can design a segmented model in Python using the Bayesian Markov Chain Monte Carlo algorithm emcee . Jake Vanderplas wrote a useful blog post and paper for how to implement emcee with comparisons to PyMC and PyStan. Example: Segmented model with data: Probability distributions of fit parameters: Link to code for segmented model . Link to (large) ipython notebook .
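If a full Bayesian treatment is more than you need, a bare-bones frequentist workaround takes only a few lines of NumPy: grid-search the breakpoint and, for each candidate, fit a continuous hinge basis by ordinary least squares. This is my own sketch on synthetic data, not an equivalent of R's segmented:

```python
import numpy as np

# Synthetic data: slope 2 before x = 4, slope -0.5 after, plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 4, 1.0 + 2.0 * x, 9.0 - 0.5 * (x - 4)) + rng.normal(0, 0.3, x.size)

def fit_segmented(x, y, candidates):
    """Grid-search the breakpoint; OLS on [1, x, max(x - b, 0)] per candidate."""
    best = None
    for b in candidates:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - b, 0.0, None)])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = float(np.sum((y - X @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, b, coef)
    return best

rss, b, coef = fit_segmented(x, y, np.linspace(1, 9, 81))
print(b)  # should land near the true breakpoint at x = 4
```

The hinge column max(x - b, 0) keeps the two segments continuous at the breakpoint; coef holds the intercept, the left slope, and the slope change at b. Unlike the R packages, this gives point estimates only, no standard errors for the breakpoint.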
|
{}
|
{'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/8266', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/12613/']}
|
jdg_375030
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2155292
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In LaTeX, I have two ineqaulites e.g. a \leq b and c \leq d. I want to have a numbered line which has both of these inequalities on it: a \leq b c \leq d (1) Like this. What's the easiest way to get the spacing to behave itself? Which environment should I use?
Now provide the response and nothing else.
|
The answer is, of course, to use the amsmath package. A perhaps less-known feature of the align environment is to place equations side-by-side, exactly as you are trying to do: \documentclass{article}\usepackage{amsmath}\begin{document}\begin{align}a &\leq b & c &\leq d \end{align}\end{document} And if you add multiple lines they'll look good: \begin{align}a &\leq b & c &\leq d \\ a+1 &\leq b+1 & c+1 &\leq d+1 \end{align} Which is the whole reason, really, for not using \quad and other manual spacing commands.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2155292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/219352/']}
|
jdg_375031
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17383716
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have two classes A and B: Class A: # A's attributes and methods hereClass B: # B's attributes and methods here Now I can access A's properties in an object of class B as follows: a_obj = A()b_obj = B(a_obj) What I need is two-way access. How do I access A's properties in B and B's properties in A?
Now provide the response and nothing else.
|
You need to create pointers either way: class A(object): parent = Noneclass B(object): def __init__(self, child): self.child = child child.parent = self Now A can refer to self.parent (provided it is not None ), and B can refer to self.child . If you try to make an instance of A the child of more than one B , the last 'parent' wins.
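A quick usage sketch of the pattern above (same class names, including the doubled-parent caveat from the last sentence):

```python
class A:
    parent = None  # back-pointer, filled in when an A becomes some B's child

class B:
    def __init__(self, child):
        self.child = child   # forward pointer
        child.parent = self  # back-pointer

a = A()
b = B(a)
print(b.child is a, a.parent is b)  # True True

# Making the same A the child of a second B: the last 'parent' wins.
b2 = B(a)
print(a.parent is b2, a.parent is b)  # True False
```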
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17383716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2348704/']}
|
jdg_375032
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24352975
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am quite new at using passport for authentication over node, hence the lot of code snippets. My server is configured as: var router = require('./app/config/routes');var googleStrategy = require('./app/config/passport');var session = require("express-session");var passport = require('passport');app.use(session({secret : '<secret-key>'}));app.use(passport.initialize());app.use(passport.session());googleStrategy(passport); my routes are configured as module.exports = function(app, passport) { app.get('/auth/google', function() { passport.authenticate('google', {scope: ['profile', 'email']}); }); app.get('/auth/google/callback', function() { passport.authenticate('google', { successRedirect: '/profile', failureRedirect: '/fail' }); }); .... ALSO configured /profile and /fail}; my passport is configured as passport.serializeUser(function(user, callback){ console.log('serializing user.'); callback(null, user); }); passport.deserializeUser(function(user, callback){ console.log('deserialize user.'); callback(null, user); }); var processRequest = function(token, refreshToken, profile, callback){ process.nextTick(function(){ console.log('id : '+ profile.id); console.log('name :'+ profile.displayName); console.log('email :' + profile.emails); console.log('token : '+ token); }); }; passport.use(new GoogleStrategy({ clientID: 'client ID', clientSecret : 'client SECRET', callbackURL : 'http://127.0.0.1:8080/auth/google/callback', realm : 'http://127.0.0.1:8080' }, processRequest)); Problem : on going to /auth/google , I never get a confirmation screen. What should I be looking at? Update : changing the routes to the configuration shown below made it work. app.get('/auth/google', passport.authenticate('google', {scope: ['profile', 'email']}) ); app.get('/auth/google/callback', passport.authenticate('google', { successRedirect: '/profile', failureRedirect: '/fail' }) );
Now provide the response and nothing else.
|
Currently the OAuth2 protocol for authentication and authorization is well supported by Google, so it's better to use the same. Here is Google's documentation on it. Use the 'passport-google-oauth' module. Here is the implementation. This should be the app object's configuration; also note that the OAuth2Strategy object comes from the passport-google-oauth module, and check out the scopes in the app.get route registration. var googleStrategy = require('passport-google-oauth').OAuth2Strategy; app.configure(function() { app.set('views', './views'); app.set('view engine', 'jade'); app.use(express.favicon()); app.use(express.logger('dev')); app.use(express.cookieParser()); app.use(express.bodyParser()); app.use(express.session({secret:'MySecret'})); app.use(passport.initialize()); app.use(passport.session()); app.use(express.methodOverride()); app.use(app.router); app.use(express.static('./public'));});app.get('/auth/google', passport.authenticate('google',{scope: 'https://www.googleapis.com/auth/plus.me https://www.google.com/m8/feeds https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/userinfo.profile'}));app.get('/auth/google/callback', function() { passport.authenticate('google', { successRedirect: '/profile', failureRedirect: '/fail' });});app.get('/logout', function (req, res) { req.logOut(); res.redirect('/'); }); But before creating a new strategy, go to Google's developer console and get a client ID and secret. Here are the steps: go to this link and create a project; here is the snapshot of the same. Give a new project name and ID; here is the snapshot. It'll roughly take a minute to create your new project; once your new project is created it'll redirect you to the application configuration of your app.
In the redirected page select APIS AND AUTH -> APIs. In the APIs page enable the Google+ API; here is the snapshot of it. Then go to Credentials (below APIs), click on Create New Client ID, and register the domains and callback for your app (configure the domain to be localhost); here is its snapshot! 5. Then you'll get your new ID and secret. Use them to create the new strategy: passport.use(new googleStrategy({ clientID: '<TheNewclientID>', clientSecret: '<The New Secret>', callbackURL: "http://localhost:8080/auth/google/callback"},function (accessToken, refreshToken, profile, done) { console.log(profile); //profile contains all the personal data returned done(null, profile)})); 6. Now serialize and deserialize: passport.serializeUser(function(user, callback){ console.log('serializing user.'); callback(null, user.id); });passport.deserializeUser(function(user, callback){ console.log('deserialize user.'); callback(null, user.id); }); Run the server and go to localhost:8080/auth/google (don't use 127.0.0.1:8080 instead of localhost). This should get it working :) [Other useful links: check out the first comment by kvcrawford on the repo of the module on this page. Passport-google is another popular module which is used to provide login using Google; it's kind of outdated now; here is the link with respect to its recent issues.]
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24352975', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/314763/']}
|
jdg_375033
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6884830
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone explain to me the use of Me.prototype.constructor = Me; and why it is needed, when this code works even without it? In the code, a new prototype object (an instance of You) is created and replaces Me's old prototype object. Why do I need to point to the Me constructor in the given code? function Me(){ this.name = 'Dejan';}function You(){ this.name = 'Ivan';}Me.prototype = new You();somebody = new Me();Me.prototype.constructor = Me; // Why?Me.prototype.foo = function(){ alert('Proto Me!'); // It always fires this alert, whether constructor is pointing to Me or not... !}You.prototype.foo = function(){ alert('Proto You!');}somebody.foo();alert(somebody.name); // Alert 'Dejan'
Now provide the response and nothing else.
|
It's not needed, and it's not even needed for instanceof contrary to popular belief (instanceof internally checks the prototype chain and does not need a constructor property). Normally, constructor is inherently a non-enumerable property on a constructor's prototype, thus giving any object instantiated by that constructor a non-enumerable constructor property pointing to that constructor. It's good to put it there if you need to, ideally non-enumerable. Some code will assume the existence of .constructor on objects. In the code you posted, yes, when doing inheritance that way, it's necessary to reset constructor (if you want it there), because the object you instantiated to act as the child prototype has a constructor property pointing to the wrong constructor (its own constructor). In ES5, you would do: Child.prototype = Object.create(Parent.prototype, { constructor: { value: Child, enumerable: false }}); edit: Also, it might be worth mentioning that when doing inheritance using the non-standard __proto__ , it's not necessary to reset constructor because __proto__ merely specifies an object's prototype, which is to say, the object on which lookups will be performed when an own property doesn't exist. A new prototype will always have a property called constructor . So in doing: var child = function() {};child.prototype.__proto__ = parent.prototype; You do not have to set constructor because child.prototype's base constructor property is still there. If accessed, no prototype chain lookups need to be performed.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6884830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/822198/']}
|
jdg_375034
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4160010
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I added the ADBannerView to a view and when I load the app I get the following message: ADBannerView: WARNING A banner view (0x7a023c0) has an ad but may be obscured. This message is only printed once per banner view. As far as I can see the entire banner is visible on the screen. Is this really a problem? Or is it only a warning that I can ignore?
Now provide the response and nothing else.
|
As Stephen Darlington says, it's a good idea to figure out what the issue is. An easy way to double-check this in code (from a view controller) would be:

// bring your bannerView to the front
[self.view bringSubviewToFront:bannerView];
// and make sure it's positioned onscreen.
bannerView.frame = CGRectMake(0.0, 0.0, bannerView.frame.size.width, bannerView.frame.size.height);

Assuming you had an iVar / IBOutlet to your AdBannerView called bannerView, this would take care of any Interface Builder positioning issues and make sure bannerView wasn't covered by anything. From my experience, nothing bad happens if the ad is offscreen; however, the iAd will not load new ads until it knows it is fully onscreen. So, as you start up your app:

1. Your AdBannerView will attempt to load an advertisement, whether it is onscreen or not.
2. Depending on whether or not it is successful, your AdBannerViewDelegate will receive either a) bannerViewDidLoadAd: (proceed to step 3), or b) bannerView:didFailToReceiveAdWithError: (the AdBannerView will try again on its own).
3. At that point, the ball is in your court as to what to do with said bannerView, if in fact it did load an ad. An easy way to check for this in code is yourBannerView.bannerLoaded, which will return YES if it has an ad, or NO if it doesn't.
4. How you handle the AdBannerView after it successfully loads its initial ad determines how it will behave in the future. You do not have to place it onscreen immediately -- choose a time that makes sense within your application. However, a banner view that has successfully loaded an ad will NOT try to load another one until it is onscreen. (Makes sense, right?) The tricky part is:
4b. You also won't get any new delegate messages from that bannerView, so if you're not moving the bannerView onscreen immediately upon getting the bannerViewDidLoadAd delegate message, you'll have to implement some kind of control structure on your own to handle when, if at all, you DO move it onscreen, at which point it will begin asking the ad server for more ads, you'll get more delegate messages, and the cycle begins anew.

So, to sum up: it's only a problem if your iAd is obscured if you'd like to serve more iAds and get paid. However, eCPM has been very, very low lately, so maybe that's not such an issue after all ;)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4160010', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/359156/']}
|
jdg_375035
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
930260
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a group of two hosts. I want to delegate a task to the one that is not the current host. The following doesn't match anything and just returns a list of my initial two hosts. - debug: msg="{{ groups['web-servers'] | select('match', '^(?!\{\{\ inventory_hostname\ \}\}$).+') | list }}" I think it's escaping that's the issue but I can't figure it out.
Now provide the response and nothing else.
|
An option would be to use the difference filter. Below is an example of how to use the filter:

shell> cat test-difference-01.yml
- hosts: localhost
  gather_facts: no
  vars:
    web_servers:
      - web-server-01
      - web-server-02
      - web-server-09
    inventory_hostname:
      - web-server-02
  tasks:
    - set_fact:
        delegate_to_hosts: "{{ web_servers|difference(inventory_hostname) }}"
    - debug:
        msg: "{{ delegate_to_hosts }}"

shell> ansible-playbook test-difference-01.yml | grep -A 3 msg
    "msg": [
        "web-server-01",
        "web-server-09"
    ]

Q: "group of two hosts ... delegate a task to the one that is not the current host"

Given the inventory

shell> cat hosts
[webservers]
web-server-01
web-server-02

the playbook

- hosts: webservers
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} {{ item }}"
      loop: "{{ groups.webservers|difference([inventory_hostname]) }}"
      delegate_to: "{{ item }}"

gives (abridged)

ok: [web-server-01 -> web-server-02] => (item=web-server-02) =>
  msg: web-server-01 web-server-02
ok: [web-server-02 -> web-server-01] => (item=web-server-01) =>
  msg: web-server-02 web-server-01

The code can be generalized by running on all hosts listed in ansible_play_hosts_all:

- hosts: all
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} {{ item }}"
      loop: "{{ ansible_play_hosts_all|difference([inventory_hostname]) }}"
      delegate_to: "{{ item }}"
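For readers who want to sanity-check the difference logic outside Ansible, here is a minimal Python sketch (plain Python, not Ansible; the host names are taken from the example above, and the function name is made up for illustration):

```python
def delegate_targets(all_hosts, current_host):
    """Return every host in the play except the current one,
    preserving inventory order (mirrors Jinja2's difference filter
    for lists of unique host names)."""
    return [h for h in all_hosts if h != current_host]

web_servers = ["web-server-01", "web-server-02", "web-server-09"]
print(delegate_targets(web_servers, "web-server-02"))
# ['web-server-01', 'web-server-09']
```

Each host delegates only to the others, which is exactly what the loop with delegate_to does in the playbook.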
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/930260', 'https://serverfault.com', 'https://serverfault.com/users/486363/']}
|
jdg_375036
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4846
|
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Are projective measurement bases always orthonormal?
Now provide the response and nothing else.
|
Yes. Remember that a projective measurement requires several properties, including $P_i^2=P_i$ for each projector, and $$\sum_iP_i=\mathbb{I}.$$ The first of these shows you that the $P_i$ have eigenvalues 0 and 1. Now take a $|\phi\rangle$ that is an eigenvector with eigenvalue 1 of a particular projector $P_i$. Use this in the identity relation: $$\left(\sum_jP_j\right)|\phi\rangle=\mathbb{I}|\phi\rangle$$ Clearly, this simplifies to $$|\phi\rangle+\sum_{j\neq i}P_j|\phi\rangle=|\phi\rangle.$$ Hence, $$\sum_{j\neq i}P_j|\phi\rangle=0.$$ The $P_j$ are all non-negative, so the only way that this can be 0 is if $P_j|\phi\rangle=0$ for all $j\neq i$. (To expand upon this, assume there's a $P_k$ such that $P_k|\phi\rangle=|\psi\rangle\neq 0$. This means that $$\sum_{j\neq i,k}\langle\psi|P_j|\phi\rangle=-\langle\psi|P_k|\phi\rangle,$$ so some terms must be negative, which is impossible if the eigenvalues are all 0 and 1.)
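As a quick numerical illustration of these properties (a NumPy sketch, not part of the proof above): for projectors that are idempotent and sum to the identity, the cross-products vanish, i.e. the measurement basis is orthogonal.

```python
import numpy as np

# Two rank-1 projectors onto the orthogonal states |0> and |1>
P0 = np.array([[1, 0], [0, 0]], dtype=float)
P1 = np.array([[0, 0], [0, 1]], dtype=float)

# Idempotent: P^2 = P  (eigenvalues are 0 and 1)
assert np.allclose(P0 @ P0, P0) and np.allclose(P1 @ P1, P1)
# Completeness: the projectors sum to the identity
assert np.allclose(P0 + P1, np.eye(2))
# Orthogonality follows: P0 P1 = 0
assert np.allclose(P0 @ P1, np.zeros((2, 2)))
```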
|
{}
|
{'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/4846', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/4153/']}
|
jdg_375037
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12550210
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have started using MongoDB and I am fairly new to it. Is there any way by which I can apply constraints on documents in MongoDB? Like specifying a primary key, or making an attribute unique? Or specifying that a particular attribute must be greater than a minimum value?
Now provide the response and nothing else.
|
Being a "schemaless" database, some of the things you mention must be constrained from the application side, rather than the db side. (such as "minimum value") However, you can create indexes (keys to query on--remember that a query can only use one index at a time, so it's generally better to design your indexes around your queries, rather than just index each field you might query against): http://www.mongodb.org/display/DOCS/Indexes#Indexes-Basics And you can also create unique indexes, which will enforce uniqueness similar to a unique constraint (it does have some caveats, such as with array fields): http://www.mongodb.org/display/DOCS/Indexes#Indexes-unique%3Atrue
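Since a "minimum value" constraint has to live in the application layer, here is a sketch of what that validation might look like before inserting a document (plain Python; the function and field names are made up for illustration):

```python
def validate_user(doc):
    """Application-side checks that MongoDB itself won't enforce."""
    errors = []
    if doc.get("age", 0) < 18:   # a "greater than a minimum value" rule
        errors.append("age must be at least 18")
    if not doc.get("email"):     # a presence check
        errors.append("email is required")
    return errors

assert validate_user({"age": 25, "email": "a@b.c"}) == []
assert validate_user({"age": 12}) == ["age must be at least 18",
                                      "email is required"]
```

Only if the returned error list is empty would the application go ahead and insert the document; uniqueness, by contrast, can be left to a unique index on the server.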
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12550210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1490228/']}
|
jdg_375038
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1982444
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that the vector space $C(\Bbb R)$ of all continuous functions defined on the real line is infinite-dimensional. I get that if $C(\Bbb R)$ contains an infinite-dimensional subspace, then it is infinite-dimensional, but how do I prove that? Obviously $\Bbb R$ is infinite…
Now provide the response and nothing else.
|
Consider the subspace of $C(\Bbb R)$ whose vectors are the polynomials. This subspace has the following basis:$$\langle1,x,x^2,x^3,x^4,\dots\rangle$$Each element is linearly independent of all the others. To show this, suppose there exists a linear combination that evaluates to zero everywhere:$$\sum_ia_ix^i=0$$The left-hand side is a polynomial with an infinite number of roots. Any non-zero polynomial, however, must have a finite number of roots (no more than its degree). Therefore the left-hand side is the zero polynomial, i.e. $a_i=0$ for all indices $i$. Since there are an infinite number of elements in the basis, the subspace and thus $C(\Bbb R)$ are infinite-dimensional.
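A numerical sanity check of this linear independence (a sketch, not a proof): the Vandermonde matrix whose rows are $1, x, x^2, x^3, x^4$ evaluated at five distinct points has full rank, so the only combination vanishing at all five points is the trivial one, matching the root-counting argument.

```python
import numpy as np

# Rows are [1, x, x^2, x^3, x^4] evaluated at five distinct points.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
V = np.vander(xs, N=5, increasing=True)

# Full rank => 1, x, ..., x^4 are linearly independent as functions.
assert np.linalg.matrix_rank(V) == 5
```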
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1982444', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/381994/']}
|
jdg_375039
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46078046
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
var canvas = new fabric.Canvas('canvas');
canvas.selection = false;
var circle = new fabric.Circle({
  left: 150,
  top: 150,
  radius: 50,
  originX: 'center',
  originY: 'center',
  selectable: false
});
canvas.add(circle);
circle.on('mousedown', function(){
  console.log('down');
  circle.set({ selectable: true });
  canvas.renderAll();
});
circle.on('mouseout', function(){
  console.log('out');
  circle.set({ selectable: false });
  canvas.renderAll();
});

canvas { border: 2px dotted green; }

<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/1.7.16/fabric.min.js" charset="utf-8"></script><canvas id="canvas" width="400" height="400"></canvas>

I want to make the object selectable on mouse down, and on mouse out I want to make it not selectable. If I click inside and drag, it's not moving; but if I click inside more than two times, it works. I want to make this work with a single click. How can I achieve that?

--Update-- As suggested by @AndreaBogazzi, it will work with mousemove on desktop, but the same won't work for touch devices, as only touch start will fire if I touch the object.
Now provide the response and nothing else.
|
Setup

td = {'q1': (111, 222), 'q2': (333, 444)}

Option 1

pd.DataFrame(td).T.rename_axis('Query').add_prefix('Value').reset_index()

  Query  Value0  Value1
0    q1     111     222
1    q2     333     444

Option 2

from cytoolz.dicttoolz import merge
pd.DataFrame(
    [merge(
        {'Query': k},
        {'Value{}'.format(i): x for i, x in enumerate(v, 1)}
    ) for k, v in td.items()])

  Query  Value1  Value2
0    q1     111     222
1    q2     333     444

Response to Comment

df = pd.DataFrame(td).T.rename_axis('Query').add_prefix('Value')
df.assign(PctChg=df.pct_change(axis=1).iloc[:, -1]).reset_index()

  Query  Value0  Value1    PctChg
0    q1     111     222  1.000000
1    q2     333     444  0.333333

Or

df = pd.DataFrame(td).T.rename_axis('Query').add_prefix('Value')
df.eval('PctChg = Value1 / Value0 - 1', inplace=False).reset_index()

  Query  Value0  Value1    PctChg
0    q1     111     222  1.000000
1    q2     333     444  0.333333
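To double-check Option 1's reshaping, here is a self-contained runnable sketch of the same chain with the resulting columns asserted:

```python
import pandas as pd

td = {'q1': (111, 222), 'q2': (333, 444)}

# dict -> DataFrame (keys become columns), transpose so keys become rows,
# name the index, prefix the value columns, then promote the index to a column.
df = (pd.DataFrame(td).T
        .rename_axis('Query')
        .add_prefix('Value')
        .reset_index())

assert list(df.columns) == ['Query', 'Value0', 'Value1']
assert df.loc[df['Query'] == 'q1', 'Value1'].iloc[0] == 222
```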
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46078046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8293819/']}
|
jdg_375040
|