Dataset columns:
  source                  stringclasses (1 value)
  task_type               stringclasses (1 value)
  in_source_id            stringlengths (1 to 8)
  prompt                  stringlengths (209 to 40.4k)
  gold_standard_solution  stringlengths (0 to 56.7k)
  verification_info       stringclasses (1 value)
  metadata                stringlengths (138 to 225)
  problem_id              stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
3460990
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have read that HttpURLConnection supports persistent connections, so that a connection can be reused for multiple requests. I tried it and the only way to send a second POST was by calling openConnection for a second time. Otherwise I got a IllegalStateException("Already connected");I used the following: try{URL url = new URL("http://someconection.com");}catch(Exception e){}HttpURLConnection con = (HttpURLConnection) url.openConnection();//set output, input etc//send POST//Receive response//Read whole response//close input streamcon.disconnect();//have also tested commenting this outcon = (HttpURLConnection) url.openConnection();//Send new POST The second request is send over the same TCP connection (verified it with wireshark) but I can not understand why (although this is what I want) since I have called disconnect.I checked the source code for the HttpURLConnection and the implementation does keep a keepalive cache of connections to the same destinations. My problem is that I can not see how the connection is placed back in the cache after I have send the first request. The disconnect closes the connection and without the disconnect, still I can not see how the connection is placed back in the cache. I saw that the cache has a run method to go through over all idle connections (I am not sure how it is called), but I can not find how the connection is placed back in the cache. The only place that seems to happen is in the finished method of httpClient but this is not called for a POST with a response.Can anyone help me on this? EDIT My interest is, what is the proper handling of an HttpUrlConnection object for tcp connection reuse. Should input/output stream be closed followed by a url.openConnection(); each time to send the new request (avoiding disconnect())? If yes, I can not see how the connection is being reused when I call url.openConnection() for the second time, since the connection has been removed from the cache for the first request and can not find how it is returned back.Is it possible that the connection is not returned back to the keepalive cache (bug?), but the OS has not released the tcp connection yet and on new connection, the OS returns the buffered connection (not yet released) or something similar? EDIT2 The only related i found was from JDK_KeepAlive ...when the application calls close() on the InputStream returned by URLConnection.getInputStream(), the JDK's HTTP protocol handler will try to clean up the connection and if successful, put the connection into a connection cache for reuse by future HTTP requests. But I am not sure which handler is this. sun.net.www.protocol.http.Handler does not do any caching as I sawThanks! Now provide the response and nothing else.
"Should input/output stream be closed followed by a url.openConnection(); each time to send the new request (avoiding disconnect())?"

Yes.

"If yes, I can not see how the connection is being reused when I call url.openConnection() for the second time, since the connection has been removed from the cache for the first request and can not find how it is returned back."

You are confusing the HttpURLConnection with the underlying Socket and its underlying TCP connection. They aren't the same. The HttpURLConnection instances are GC'd; the underlying Socket is pooled, unless you call disconnect().
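A minimal sketch of the pattern described above (the URL and payload are placeholders, not from the question): open a fresh HttpURLConnection per request, read the response to the end, close the stream, and skip disconnect() so the pooled socket can be reused.

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReusePost {
    static String post(URL url, byte[] body) throws IOException {
        // Cheap to create; may hand back a pooled keep-alive socket underneath.
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write(body);
        }
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {  // read the whole response...
                sb.append(line);
            }
        }  // ...then close the stream; note there is no disconnect() call
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        URL url = new URL("http://example.com/endpoint"); // placeholder URL
        post(url, "a=1".getBytes("UTF-8"));
        post(url, "a=2".getBytes("UTF-8")); // second POST should reuse the kept-alive connection
    }
}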
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3460990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/384706/']}
jdg_376941
stackexchange
llm_judgeable_groundtruth_similarity
47150709
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A table of dates with primary keys is sometimes used in database design.

| date_id | Date       | Record_timestamp | Day      | Week | Month | Quarter | Year_half | Year |
|---------+------------+------------------+----------+------+-------+---------+-----------+------|
| 0       | 2000-01-01 | NaN              | Saturday | 52   | 1     | 1       | 1         | 2000 |
| 1       | 2000-01-02 | NaN              | Sunday   | 52   | 1     | 1       | 1         | 2000 |
| 2       | 2000-01-03 | NaN              | Monday   | 1    | 1     | 1       | 1         | 2000 |

How to do it in pandas? Now provide the response and nothing else.
This is a little cleaner with the dt accessor:

In [11]: def create_date_table2(start='2000-01-01', end='2050-12-31'):
    ...:     df = pd.DataFrame({"Date": pd.date_range(start, end)})
    ...:     df["Day"] = df.Date.dt.weekday_name
    ...:     df["Week"] = df.Date.dt.weekofyear
    ...:     df["Quarter"] = df.Date.dt.quarter
    ...:     df["Year"] = df.Date.dt.year
    ...:     df["Year_half"] = (df.Quarter + 1) // 2
    ...:     return df

In [12]: create_date_table2().head()
Out[12]:
        Date        Day  Week  Quarter  Year  Year_half
0 2000-01-01   Saturday    52        1  2000          1
1 2000-01-02     Sunday    52        1  2000          1
2 2000-01-03     Monday     1        1  2000          1
3 2000-01-04    Tuesday     1        1  2000          1
4 2000-01-05  Wednesday     1        1  2000          1

In [13]: create_date_table2().tail()
Out[13]:
            Date        Day  Week  Quarter  Year  Year_half
18623 2050-12-27    Tuesday    52        4  2050          2
18624 2050-12-28  Wednesday    52        4  2050          2
18625 2050-12-29   Thursday    52        4  2050          2
18626 2050-12-30     Friday    52        4  2050          2
18627 2050-12-31   Saturday    52        4  2050          2

Note: you may like to calculate these on the fly rather than store them as columns!
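In newer pandas releases dt.weekday_name and dt.weekofyear are no longer available; a sketch of the same table with the current accessors (assuming pandas 1.1 or later) would be:

import pandas as pd

def create_date_table3(start="2000-01-01", end="2050-12-31"):
    # Same idea as above, using the accessors that replaced the removed ones:
    # .dt.day_name() instead of .dt.weekday_name,
    # .dt.isocalendar().week instead of .dt.weekofyear
    df = pd.DataFrame({"Date": pd.date_range(start, end)})
    df["Day"] = df.Date.dt.day_name()
    df["Week"] = df.Date.dt.isocalendar().week
    df["Quarter"] = df.Date.dt.quarter
    df["Year"] = df.Date.dt.year
    df["Year_half"] = (df.Quarter + 1) // 2
    return df

print(create_date_table3().head())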
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47150709', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4050925/']}
jdg_376942
stackexchange
llm_judgeable_groundtruth_similarity
16032778
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a matrix like:

     [,1] [,2] [,3] [,4]
[1,]   12   32   43   55
[2,]   54   54    7    8
[3,]    2   56   76   88
[4,]   58   99   93   34

I do not know in advance how many rows and columns I will have in the matrix. Thus, I need to create row and column names dynamically. I can name columns (rows) directly like:

colnames(rmatrix) <- c("a", "b", "c", "d")

However, how can I create my names vector dynamically to fit the dimensions of the matrix?

nm <- c("a", "b", "c", "d")
colnames(rmatrix) <- nm

Now provide the response and nothing else.
You can use rownames and colnames, setting do.NULL=FALSE in order to create names dynamically, as in:

set.seed(1)
rmatrix <- matrix(sample(0:100, 16), ncol=4)
dimnames(rmatrix) <- list(rownames(rmatrix, do.NULL = FALSE, prefix = "row"),
                          colnames(rmatrix, do.NULL = FALSE, prefix = "col"))
rmatrix

     col1 col2 col3 col4
row1   26   19   58   61
row2   37   86    5   33
row3   56   97   18   66
row4   89   62   15   42

You can change prefix to name the rows/cols as you want to.
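If all you need is the name vectors themselves, a sketch using paste0 (the "row"/"col" prefixes here are just examples) also scales with whatever dimensions the matrix ends up having:

set.seed(1)
rmatrix <- matrix(sample(0:100, 16), ncol = 4)

# Build name vectors sized to the matrix at hand
rownames(rmatrix) <- paste0("row", seq_len(nrow(rmatrix)))
colnames(rmatrix) <- paste0("col", seq_len(ncol(rmatrix)))
rmatrix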
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16032778', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2285627/']}
jdg_376943
stackexchange
llm_judgeable_groundtruth_similarity
20639622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For what reason is it not allowed for an argument with the same name to appear more than once in the same equation? Examples:

f a a = show a

d && d = d
_ && _ = False

Technically speaking, formal parameters (The Report calls these variables.) are also patterns---it's just that they never fail to match a value. As a "side effect" of the successful match, the formal parameter is bound to the value it is being matched against. For this reason patterns in any one equation are not allowed to have more than one occurrence of the same formal parameter (a property called linearity §3.17, §3.3, §4.4.3). Now provide the response and nothing else.
According to this mailing list post , this was allowed in Miranda and was a common source of bugs: people would name two variables the same way by accident and have trouble discovering such a subtle mistake. So Haskell disallows it in favor of explicitly adding guards. I personally think this is also much gentler on the poor reader of your code, who as a result of this rule need not keep in mind all the different variables that are in scope at the site of a pattern match to see whether the pattern will actually match everything or not.
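As a small illustration (a hypothetical rewrite, not taken from the linked post), the rejected definition f a a = show a becomes an explicit guard:

-- Non-linear pattern "f a a = show a" rewritten with a guard,
-- which is what Haskell asks you to do instead.
f :: (Eq a, Show a) => a -> a -> String
f a b
  | a == b    = show a
  | otherwise = "(arguments differ)"   -- whatever the second case should be

main :: IO ()
main = putStrLn (f 3 (3 :: Int))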
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20639622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1263852/']}
jdg_376944
stackexchange
llm_judgeable_groundtruth_similarity
11392622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been working with Scala for a while now and have written a 10,000+ line program with it, but I'm still confused by some of the inner workings. I came to Scala from Python after already having intimate familiarity with Java, C and Lisp, but even so it's been slow going, and a huge problem is the frustrating difficulty I've often found when trying to investigate the inner workings of objects/types/classes/etc. using the Scala REPL as compared with Python. In Python you can investigate any object foo (type, object in a global variable, built-in function, etc.) using foo to see what the thing evaluates to, type(foo) to show its type, dir(foo) to tell you the methods you can call on it, and help(foo) to get the built-in documentation. You can even do things like help("re") to find out the documentation on the package named re (which holds regular-expression objects and methods), even though there isn't an object associated with it. In Scala, you can try and read the documentation online, go look up the source code to the library, etc., but this can often be very difficult for things where you don't know where or even what they are (and it's often a big chunk to bite off, given the voluminous type hierarchy) -- stuff is floating around in various places (package scala , Predef , various implicit conversions, symbols like :: that are nearly impossible to Google). The REPL should be the way to explore directly, but in reality, things are far more mysterious. Say that I've seen a reference to foo somewhere, but I have no idea what it is. There's apparently no such thing as a "guide to systematically investigating Scala thingies with the REPL", but the following is what I've pieced together after a great deal of trial and error: If foo is a value (which presumably includes things stored in variables plus companion objects and other Scala object s), you can evaluate foo directly. This ought to tell you the type and value of the result. Sometimes the result is helpful, sometimes not. If foo is a value, you can use :type foo to get its type. (Not necessarily enlightening.) If you use this on a function call, you get the type of the return value, without calling the function. If foo is a value, you can use foo.getClass to get its class. (Often more enlightening than the previous, but how does an object's class differ from its type?) For a class foo , you can use classOf[foo] , although it's not obvious what the result means. Theoretically, you can use :javap foo to disassemble a class -- which should be the most useful of all, but fails entirely and uniformly for me. Sometimes you have to piece things together from error messages. Example of failure using :javap : scala> :javap ListFailed: Could not find class bytes for 'List' Example of enlightening error message: scala> assert<console>:8: error: ambiguous reference to overloaded definition,both method assert in object Predef of type (assertion: Boolean, message: => Any)Unitand method assert in object Predef of type (assertion: Boolean)Unitmatch expected type ? assert ^ OK, now let's try a simple example. scala> 5res63: Int = 5scala> :type 5Intscala> 5.getClassres64: java.lang.Class[Int] = int Simple enough ... 
Now, let's try some real cases, where it's not so obvious: scala> Predefres65: type = scala.Predef$@3cd41115scala> :type Predeftypescala> Predef.getClassres66: java.lang.Class[_ <: object Predef] = class scala.Predef$ What does this mean? Why is the type of Predef simply type , whereas the class is scala.Predef$ ? I gather that the $ is the way that companion objects are shoehorned into Java ... but Scala docs on Google tell me that Predef is object Predef extends LowPriorityImplicits -- how can I deduce this from the REPL? And how can I look into what's in it? OK, let's try another confusing thing: scala> `::`res77: collection.immutable.::.type = ::scala> :type `::`collection.immutable.::.typescala> `::`.getClassres79: java.lang.Class[_ <: object scala.collection.immutable.::] = class scala.collection.immutable.$colon$colon$scala> classOf[`::`]<console>:8: error: type :: takes type parameters classOf[`::`] ^scala> classOf[`::`[Int]]res81: java.lang.Class[::[Int]] = class scala.collection.immutable.$colon$colon OK, this left me hopelessly confused, and eventually I had to go read the source code to make sense of this all. So, my questions are: What's the recommended best way from the true Scala experts of using the REPL to make sense of Scala objects, classes, methods, etc., or at least investigate them as best as can be done from the REPL? How do I get :javap working from the REPL for built-in stuff? (Shouldn't it work by default?) Thanks for any enlightenment. Now provide the response and nothing else.
You mentioned an important point which Scala lacks a bit: the documentation. The REPL is a fantastic tool, but it is not as fantastic at it can be. There are too much missing features and features which can be improved - some of them are mentioned in your post. Scaladoc is a nice tool, too, but is far away to be perfect. Furthermore lots of code in the API is not yet or too less documented and code examples are often missing. The IDEs are full ob bugs and compared to the possibilities Java IDEs show us they look like some kindergarten toys. Nevertheless there is a gigantic difference of Scalas current tools compared to the tools available as I started to learn Scala 2-3 years ago. At that time IDEs compiled permanently some trash in the background, the compiler crashed every few minutes and some documentation was absolutely nonexistent. Frequently I got rage attacks and wished death and corruption to Scala authors. And now? I do not have any of these rage attacks any more. Because the tools we currently have are great although the are not perfect! There is docs.scala-lang.org , which summarizes a lot of great documentation. There are Tutorials, Cheat-sheets, Glossaries, Guides and a lot of more great stuff. Another great tools is Scalex , which can find even the weirdest operator one can think of. It is Scalas Hoogle and even though it is not yet as good as his great ideal, it is very useful. Great improvements are coming with Scala2.10 in form of Scalas own Reflection library: // needs Scala2.10M4scala> import scala.reflect.runtime.{universe => u}import scala.reflect.runtime.{universe=>u}scala> val t = u.typeOf[List[_]]t: reflect.runtime.universe.Type = List[Any]scala> t.declarationsres10: Iterable[reflect.runtime.universe.Symbol] = SynchronizedOps(constructor List, method companion, method isEmpty, method head, method tail, method ::, method :::, method reverse_:::, method mapConserve, method ++, method +:, method toList, method take, method drop, method slice, method takeRight, method splitAt, method takeWhile, method dropWhile, method span, method reverse, method stringPrefix, method toStream, method removeDuplicates) Documentation for the new Reflection library is still missing, but in progress. It allows one to use scalac in an easy way inside of the REPL: scala> u reify { List(1,2,3) map (_+1) }res14: reflect.runtime.universe.Expr[List[Int]] = Expr[List[Int]](immutable.this.List.apply(1, 2, 3).map(((x$1) => x$1.$plus(1)))(immutable.this.List.canBuildFrom))scala> import scala.tools.reflect.ToolBoximport scala.tools.reflect.ToolBoxscala> import scala.reflect.runtime.{currentMirror => m}import scala.reflect.runtime.{currentMirror=>m}scala> val tb = m.mkToolBox()tb: scala.tools.reflect.ToolBox[reflect.runtime.universe.type] = scala.tools.reflect.ToolBoxFactory$ToolBoxImpl@32f7fa37scala> tb.parseExpr("List(1,2,3) map (_+1)")res16: tb.u.Tree = List(1, 2, 3).map(((x$1) => x$1.$plus(1)))scala> tb.runExpr(res16)res18: Any = List(2, 3, 4) This is even greater when we want to know how Scala code is translated internally. Formerly wen need to type scala -Xprint:typer -e "List(1,2,3) map (_+1)" to get the internally representation. Furthermore some small improvements found there way to the new release, for example: scala> :type Predefscala.Predef.type Scaladoc will gain some type-hierarchy graph (click on type-hierarchy). With Macros it is possible now, to improve error messages in a great way. 
There is a library called expecty , which does this: // copied from GitHub pageimport org.expecty.Expectycase class Person(name: String = "Fred", age: Int = 42) { def say(words: String*) = words.mkString(" ")}val person = Person()val expect = new Expecty()// Passing expectationsexpect { person.name == "Fred" person.age * 2 == 84 person.say("Hi", "from", "Expecty!") == "Hi from Expecty!"}// Failing expectationval word1 = "ping"val word2 = "pong"expect { person.say(word1, word2) == "pong pong"}/*Output:java.lang.AssertionError:person.say(word1, word2) == "pong pong"| | | | || | ping pong false| ping pongPerson(Fred,42)*/ There is a tool which allows one to find libraries hosted on GitHub, called ls.implicit.ly . The IDEs now have some semantic highlighting, to show if a member is a object/type/method/whatever. The semantic highlighting feature of ScalaIDE . The javap feature of the REPL is only a call to the native javap, therefore it is not a very featue-rich tool. You have to fully qualify the name of a module: scala> :javap scala.collection.immutable.ListCompiled from "List.scala"public abstract class scala.collection.immutable.List extends scala.collection.AbstractSeq implements scala.collection.immutable.LinearSeq,scala.Product,scala.collection.LinearSeqOptimized{... Some time ago I have written a summary of how Scala code is compiled to Bytecode , which offers a lot of things to know. And the best: This is all done in the last few months! So, how to use all of these things inside of the REPL? Well, it is not possible ... not yet. ;) But I can tell you that one day we will have such a REPL. A REPL which shows us documentation if we want to see it. A REPL which let us communicate with it (maybe like lambdabot ). A REPL which let us do cool things we still cannot imagine. I don't know when this will be the case, but I know that a lot of stuff was done in the last years and I know even greater stuff will be done in the next years.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11392622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/907263/']}
jdg_376945
stackexchange
llm_judgeable_groundtruth_similarity
39919815
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any reason why someone would use Bigtable instead of BigQuery? Both seem to support Read and Write operations with the latter offering also advanced 'Query' operations. I need to develop an affiliate network (thus I need to track clicks and 'sales') so I'm quite confused by the difference because BigQuery seems to be just Bigtable with a better API. Now provide the response and nothing else.
The difference is basically this: BigQuery is a query engine for datasets that don't change much, or change by appending. It's a great choice when your queries require a "table scan" or the need to look across the entire database. Think sums, averages, counts, groupings. BigQuery is what you use when you have collected a large amount of data and need to ask questions about it.

BigTable is a database. It is designed to be the foundation for a large, scalable application. Use BigTable when you are making any kind of app that needs to read and write data, and scale is a potential issue.
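To make the "table scan" point concrete, here is a hypothetical BigQuery-style aggregation over an affiliate click log (the table and column names are invented for illustration):

-- Hypothetical affiliate-network report: whole-table aggregation is BigQuery territory.
SELECT
  affiliate_id,
  COUNT(*)            AS clicks,
  COUNTIF(converted)  AS sales,
  SUM(commission)     AS total_commission
FROM clicks
WHERE click_date >= '2024-01-01'
GROUP BY affiliate_id
ORDER BY total_commission DESC;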
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/39919815', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3257971/']}
jdg_376946
stackexchange
llm_judgeable_groundtruth_similarity
44061607
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to write a lambda function in Pandas that checks to see if Col1 is a NaN and if so, uses another column's data. I am having trouble getting the code (below) to compile/execute correctly.

import pandas as pd
import numpy as np

df = pd.DataFrame({'Col1': [1, 2, 3, np.NaN],
                   'Col2': [7, 8, 9, 10]})

df2 = df.apply(lambda x: x['Col2'] if x['Col1'].isnull() else x['Col1'], axis=1)

Does anyone have any good idea on how to write a solution like this with a lambda function, or have I exceeded the abilities of lambda? If not, do you have another solution? Thanks. Now provide the response and nothing else.
You need pandas.isnull for checking if a scalar is NaN:

df = pd.DataFrame({'Col1': [1, 2, 3, np.NaN],
                   'Col2': [8, 9, 7, 10]})

df2 = df.apply(lambda x: x['Col2'] if pd.isnull(x['Col1']) else x['Col1'], axis=1)
print (df)
   Col1  Col2
0   1.0     8
1   2.0     9
2   3.0     7
3   NaN    10

print (df2)
0     1.0
1     2.0
2     3.0
3    10.0
dtype: float64

But better is to use Series.combine_first:

df['Col1'] = df['Col1'].combine_first(df['Col2'])
print (df)
   Col1  Col2
0   1.0     8
1   2.0     9
2   3.0     7
3  10.0    10

Another solution with Series.update:

df['Col1'].update(df['Col2'])
print (df)
   Col1  Col2
0   8.0     8
1   9.0     9
2   7.0     7
3  10.0    10
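A further option in the same spirit (not from the answer above): Series.fillna also accepts a Series and aligns it on the index, so the per-row fallback can be written as:

import pandas as pd
import numpy as np

df = pd.DataFrame({'Col1': [1, 2, 3, np.nan],
                   'Col2': [8, 9, 7, 10]})

# Missing Col1 values are taken from Col2 row by row
df['Col1'] = df['Col1'].fillna(df['Col2'])
print(df)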
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44061607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7823253/']}
jdg_376947
stackexchange
llm_judgeable_groundtruth_similarity
2971593
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table which has a "foreign key" referencing itself. This would be very useful, except I am uncertain how to add the first record to such a table. No matter what I add, I cannot provide a valid "foreign" key to the table itself, having no entries yet. Maybe I'm not going about this correctly, but I want this table to represent something that is always a member of itself. Is there a way to "bootstrap" such a table, or another way to go about self-reference? Now provide the response and nothing else.
One option is to make your field NULL-able, and set the root record's parent key to NULL:

CREATE TABLE tb_1 (
  id int NOT NULL PRIMARY KEY,
  value int NOT NULL,
  parent int NULL,
  FOREIGN KEY (parent) REFERENCES tb_1(id)
) ENGINE=INNODB;
Query OK, 0 rows affected (0.43 sec)

-- This fails:
INSERT INTO tb_1 VALUES (1, 1, 0);
ERROR 1452 (23000): A foreign key constraint fails.

-- This succeeds:
INSERT INTO tb_1 VALUES (1, 1, NULL);
Query OK, 1 row affected (0.08 sec)

Otherwise you could still use a NOT NULL parent key and point it to the root record itself:

CREATE TABLE tb_2 (
  id int NOT NULL PRIMARY KEY,
  value int NOT NULL,
  parent int NOT NULL,
  FOREIGN KEY (parent) REFERENCES tb_2(id)
) ENGINE=INNODB;
Query OK, 0 rows affected (0.43 sec)

-- This fails:
INSERT INTO tb_2 VALUES (1, 1, 0);
ERROR 1452 (23000): A foreign key constraint fails.

-- This succeeds:
INSERT INTO tb_2 VALUES (1, 1, 1);
Query OK, 1 row affected (0.08 sec)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2971593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/247029/']}
jdg_376948
stackexchange
llm_judgeable_groundtruth_similarity
962458
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find the limit: $$ \lim_{x \rightarrow \ + \infty}(\sqrt{x^2 + 2x} - \sqrt{x^2 - 7x})$$ I did the following: \begin{align}(\sqrt{x^2 + 2x} - \sqrt{x^2 - 7x}) = \frac{(\sqrt{x^2 + 2x} - \sqrt{x^2 - 7x})}{1} \cdot \frac{(\sqrt{x^2 + 2x} + \sqrt{x^2 - 7x})}{(\sqrt{x^2 + 2x} + \sqrt{x^2 - 7x})}\end{align} I know the final answer is $\frac{9}{2}$. After multiplying by the conjugate, I see where the $9$ in the numerator comes from. I just can't remember how I solved the rest of the problem. Now provide the response and nothing else.
$$\lim_{x\to \infty}\frac{(\sqrt{x^2+2x}-\sqrt{x^2-7x})(\sqrt{x^2+2x}+\sqrt{x^2-7x})}{\sqrt{x^2+2x}+\sqrt{x^2-7x}}$$

$$=\lim_{x\to \infty}\frac{9x}{\sqrt{x^2+2x}+\sqrt{x^2-7x}}=\lim_{x\to \infty}\frac{\frac{9x}{x}}{\frac{\sqrt{x^2+2x}}{x}+\frac{\sqrt{x^2-7x}}{x}}$$

$$=\lim_{x\to \infty}\frac{9}{\sqrt{\frac{x^2+2x}{x^2}}+\sqrt{\frac{x^2-7x}{x^2}}}=\lim_{x\to \infty}\frac{9}{\sqrt{1+\frac 2x}+\sqrt{1-\frac 7x}}=\frac{9}{1+1}=\frac 92.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/962458', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/180977/']}
jdg_376949
stackexchange
llm_judgeable_groundtruth_similarity
163209
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Having trouble with this problem. Any ideas? Let $\Omega$ be a measure space. Let $f_n$ be a sequence in $L^p(\Omega)$ with $1<p<\infty$ and let $f \in L^p(\Omega)$. Suppose that$$f_n \rightharpoonup f \text{ weakly in } \sigma(L^p,L^{p'})$$and $$\|f_n\|_p \to \|f\|_p.$$ Prove that $\|f_n-f\|_p \to 0$. Also, can you come up with a counter-example for the $L^1$ case? Now provide the response and nothing else.
Since $1<p<\infty$, the space $L^p(\Omega)$ is uniformly convex. This follows from Clarkson's inequalities. Now we use the following theorem, which can be studied in Brezis' book on functional analysis (chapter III). Theorem. Let $E$ be a uniformly convex Banach space, and let $\{x_n\}$ be a weakly convergent sequence in $E$, i.e. $x_n \rightharpoonup x$ for some $x \in E$. If $$\limsup_{n \to +\infty} \|x_n\| \leq \|x\|,$$ then $x_n \to x$ strongly in $E$. Try to construct a counter-example in the $\sigma(L^1,L^\infty)$ topology.
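For the $L^1$ part, one standard counter-example (a sketch added here, not part of the quoted answer): on $\Omega=[0,2\pi]$ with Lebesgue measure take $f_n(x)=1+\sin(nx)$ and $f\equiv 1$. By the Riemann-Lebesgue lemma,
$$\int_0^{2\pi} f_n\,g\,dx \;\longrightarrow\; \int_0^{2\pi} f\,g\,dx \qquad (g\in L^\infty),$$
so $f_n \rightharpoonup f$ in $\sigma(L^1,L^\infty)$, and since $f_n\ge 0$,
$$\|f_n\|_1=\int_0^{2\pi}\bigl(1+\sin(nx)\bigr)\,dx=2\pi=\|f\|_1,$$
yet
$$\|f_n-f\|_1=\int_0^{2\pi}\lvert\sin(nx)\rvert\,dx=4\not\to 0.$$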
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/163209', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20376/']}
jdg_376950
stackexchange
llm_judgeable_groundtruth_similarity
19649967
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am attempting to create a data type, Point , that takes three numbers for its constructor. Initially, I had written data Point = Point Double Double Double but I ran into some issues when certain pieces of code expected Int s. So I changed it to data Point a = Point a a a but now I would like to enforce that a is an instance (?) of Num - I only want to accept numbers in the constructor. Is this possible? If not, what is the accepted practice? How many times did I use the wrong word to describe something? Now provide the response and nothing else.
Yes! At least if you allow yourself some language extensions GHC provide. You basically have four options where one is bad, one is better, one is not as obvious as the other two and one is the Right Way™. 1. The Bad You can write {-# LANGUAGE DatatypeContexts #-}data Num a => Point a = Point a a a This will make it so that the constructor Point can only be called with Num a values. However, it does not limit the contents of a Point value to Num a values. This means that if you further down the road want to add two points, you would still have to do addPoints :: Num a => Point a -> Point a -> Point aaddPoints (Point x1 y1 z1) {- ... -} Do you see the extra Num a declaration? That shouldn't be necessary since we know a Point can only contain Num a anyway, but that's the way DatatypeContexts work! You have to put constraints on every function needing it anyway. This is why, if you enable DatatypeContexts , GHC will scream at you a little for using a "misfeature." 2. The Better The solution involves turning on GADTs. Generalised algebraic datatypes allow you to do what you want. Your declaration would then look like {-# LANGUAGE GADTs #-}data Point a where Point :: Num a => a -> a -> a -> Point a When using GADTs, you declare constructors by stating their type signature instead, almost like when creating typeclasses. Constraints on GADT constructors have the benefit that they carry over to the value that is created – in this case that means both you and the compiler knows that the only existing Point a s have members who are Num a s. You can therefore write your addPoint function as just addPoints :: Point a -> Point a -> Point aaddPoints (Point x1 y1 z1) {- ... -} without the irritating extra constraint. Side note: Deriving Classes for GADTs Deriving classes with GADTs (or any non-Haskell-98 type) requires an extra language extension and it is not as smooth sailing as with normal ADTs. The principle is {-# LANGUAGE StandaloneDeriving #-}deriving instance Show (Point a) This will just blindly generate code for the Show class, and it is up to you to make sure that code typechecks. 3. The Obscure As shachaf points out in the comments to this post, you can get the relevant parts of GADT behaviour while retaining traditional data syntax by enabling ExistentialQuantification in GHC. This makes the data declaration as simple as {-# LANGUAGE ExistentialQuantification #-}data Point a = Num a => Point a a a 4. The Correct However, none of the solutions above is what the consensus in the community is. If you ask knowledgeable people (thanks to edwardk and startling in the #haskell channel for sharing their knowledge), they will tell you not to constrain your types at all . They will tell you that you should define your type as data Point a = Point a a a and then constrain any functions operating on Point s, like for example the one to add two points together: addPoints :: Num a => Point a -> Point a -> Point aaddPoints (Point x1 y1 z1) {- ... -} The reason to not constrain your types is that when doing so, you seriously limit your options for using the types later, in ways you probably don't expect. 
For example, creating a Functor instance for your point might be useful, like so: instance Functor Point where fmap f (Point x y z) = Point (f x) (f y) (f z) and then you can do something like approximating a Point Double with a Point Int by simply evaluating round <$> Point 3.5 9.7 1.3 which will produce Point 4 10 1 This would not be possible if you constrained your Point a to Num a s only, because you can't define a Functor instance for such a constrained type. You woud have to create your own pointFmap function, which would go against all reusability and modularity that Haskell stands for. Perhaps even more convincing, if you ask the user for coordinates but the user only enters two of them, you can model that as a Point (Just 4) (Just 7) Nothing and easily convert it to a point on the XY plane in 3D space by mapping fromMaybe 0 <$> Point (Just 4) (Just 7) Nothing which will return Point 4 7 0 Note here that this latter example wouldn't work for two reasons if you had a Num a constraint on your point: You would not be able to define a Functor instance for your Point, and You would not at all be able to store Maybe a coordinates in your point. And this is just one useful example of the many you would forego if you applied the Num a constraint on the point. On the flip side of this, what do you gain by constraining your types? I can think of three reasons: "I don't want to accidentally create a Point String and try to manipulate it as a number." You won't be able to. The type system will stop you anyway. "But it's for documentation purposes! I want to show that a Point is a collection of numeric values." ...except when it is not, such as Point [-3, 3] [5] [2, 6] which expresses alternative coordinates on the axes, which may or may not all be valid. "I don't want to keep adding Num constraints to all my functions!" Fair enough. You can copy and paste them from ghci in this case. A little keyboard work is worth all the benefits, in my opinion.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19649967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/904316/']}
jdg_376951
stackexchange
llm_judgeable_groundtruth_similarity
3554892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my database at several places developers have used dynamic sql instead of static. And they are saying reason for this is to improve the performance. Can someone tell me can if dynamic sql can really increase the performance in stored procedure or plsql block? Which will execute faster and why ? 1. begin execute immediate 'delete from X'; end; 2. begin delete from X; end; Now provide the response and nothing else.
Your example code is so simple that there will be little difference, but in that case the static version would most likely execute better. The main reason to use dynamic SQL for performance is when the SQL statement can vary in a significant way - i.e. you might be able to add extra code to the WHERE clause at runtime based on the state of the system (restrict by a sub-query on Address, if Address entered, etc). Another reason is that sometimes using Bind variables as parameters can be counter-productive. An example is if you have something like a status field, where data is not evenly distributed (but is indexed). Consider the following 3 statements, when 95% of the data is 'P'rocessed SELECT col FROM table WHERE status = 'U'-- unprocessed AND company = :company SELECT col FROM table WHERE status = 'P' -- processed AND company = :company SELECT col FROM table WHERE status = :status AND company = :company In the final version, Oracle will choose a generic explain plan. In the first version, it may decide the best plan is to start with the index on status (knowing that 'U'nprocessed entries are a very small part of the total). You could implement that through different static statements, but where you have more complex statements which only change by a couple of characters, dynamic SQL may be a better option. Downsides Each repetition of the same dynamic SQL statement incurs a soft parse, which is a small overhead compared to a static statement, but still an overhead. Each NEW sql statement (dynamic or static) also incurs a lock on the SGA (shared memory), and can result in pushing 'old' statements out. A bad, but common, system design is for someone to use dynamic SQL to generate simple selects that only vary by key - i.e. SELECT col FROM table WHERE id = 5SELECT col FROM table WHERE id = 20SELECT col FROM table WHERE id = 7 The individual statements will be quick, but the overall system performance will deteriorate, as it is killing the shared resources. Also - it is far harder to trap errors at compile time with dynamic SQL. If using PL/SQL this is throwing away a good compilation time check. Even when using something like JDBC (where you move all your database code into strings - good idea!) you can get pre-parsers to validate the JDBC content. Dynamic SQL = runtime testing only. Overheads The overhead of execute immediate is small - it is in the thousandths of a second - however, it can add up if this is inside a loop / on a method called once per object / etc. I once got a 10x speed improvement by replacing dynamic SQL with generated static SQL. However, this complicated the code, and was only done because we required the speed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3554892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/167110/']}
jdg_376952
stackexchange
llm_judgeable_groundtruth_similarity
320908
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a project that requires a DC motor, so I salvaged a PC fan from an old computer. The model is AVC DS08025R12U. It needs 12 volts for maximum speed and is rated to draw 700 milliamps. I'm planning to power the motor with a 12 volt 3 amp power brick, which will also help power an Arduino microcontroller and a breadboard power supply module. I'm planning on changing the speed of the motor with a PWM signal from the Arduino, but just to test, I set up the simple circuit that can be found on the sparkFun website. Instead of making a simple code, I am just plugging a wire into ground or 5V manually. I also have the Arduino removed from the circuit. Here's a schematic - simulate this circuit – Schematic created using CircuitLab Instead of powering the breadboard with the Arduino, I used an Elegoo breadboard power supply module. I have a wire going straight from the DC jack (12 volts, 3 amps) to the Collector of an S8050 NPN Bipolar Junction Transistor, which, from the datasheet, is able to handle 12 volts and 700 milliamps. Although the power brick does supply 3 amps, the fan is only rated to draw 700 milliamps, which is the maximum amount of amps the BJT can take, after checking with a multimeter, it only draws a little over 500 milliamps, which should be nowhere near too much for that transistor. The datasheet for the S8050 NPN BJT is right here - http://electronics.se-ed.com/magic/s8050.pdf Is the potential problem in the physical circuit? Am I incorrectly reading the datasheet? Is this normal? Also, for some reason, when I use 5 volts to turn the transistor on and off, the fan doesn't spin very fast. Only when I plug the jumper cable connected to the base of the BJT to 12 volts does it reach maximum speed, in which case the BJT gets much hotter much faster. Thanks so much! Now provide the response and nothing else.
From your description it sounds like you're doing "high-side switching", like this:

[schematic: high-side switching, with the NPN between the 12V supply and the fan and the base driven from the 5V signal]

THIS IS NOT WHAT YOU WANT! Because, as you've discovered, "the fan doesn't spin very fast": this configuration will never deliver the full 12V to your fan. It doesn't matter if the Arduino is there or if you're just pulling that pin manually up to 5V. In this configuration your fan will never get more than about 4.3V (5V - 0.7V Vbe), with the remaining 7.7V dropped across your transistor (just causing it to heat up - as you've also discovered). The voltage at your NPN transistor's emitter can never get higher than the voltage you put on its base (minus the typical 0.7V Vbe).

Rearranging your circuit to do "low-side switching" instead will improve things dramatically:

[schematic: low-side switching, with the fan between +12V and the collector, the emitter grounded, and the base driven through a resistor]

In this configuration, with the emitter tied to ground, applying 5V to the base resistor (not directly to the base) causes the transistor to switch on 'hard', pulling its collector down to about 1V, which means that your fan now gets 11V - much better than before.

While this is a significant improvement, it's still a bit less than ideal. The low-ish current gain (probably around 40) means that your poor Arduino will have to source around 15-20mA to drive the transistor (and you would need to reduce the value of the resistor to 220R or less).

Replacing your S8050 NPN BJT with a small N-Channel logic-level MOSFET (also in the low-side configuration) should give an even further improvement. You can do away with the resistor and connect the N-FET's gate directly to your Arduino while not worrying about the Arduino having to source any current, because the N-FET doesn't really require any (a simplification). IF you choose the right N-FET you'll also get closer to delivering the full 12V from your supply to your fan.
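For what it's worth, a minimal Arduino-style sketch of driving that low-side switch with PWM; the pin number, delays and the 0-255 ramp are assumptions for illustration, not values from the question:

// Low-side switch driven by PWM: the pin feeds the base resistor (BJT) or gate (FET).
const int FAN_PIN = 9;  // assumed PWM-capable pin

void setup() {
  pinMode(FAN_PIN, OUTPUT);
}

void loop() {
  // Ramp the duty cycle up slowly, hold, then ramp back down.
  for (int duty = 0; duty <= 255; duty++) {
    analogWrite(FAN_PIN, duty);
    delay(20);
  }
  delay(2000);
  for (int duty = 255; duty >= 0; duty--) {
    analogWrite(FAN_PIN, duty);
    delay(20);
  }
  delay(2000);
}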
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/320908', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/158237/']}
jdg_376953
stackexchange
llm_judgeable_groundtruth_similarity
2582664
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question : Find all $x\in \mathbb Z$ for which $y=x^3-8x^2+20x-13$ is a prime number I don't know how to start here. By Brute-Forcing, I could start plugging $x=1,2,3,...$ but that's very inefficient. If $y$ was a quadratic expression, I would have easily applied the Sridharacharya Formula, but that's not the case here. Is there any other method or trick for this question, other than hit and trial? Now provide the response and nothing else.
For $x=1$, we get $y=0$. So we see that $y$ is divisible by $x-1$:
$$y=(x-1)(x^2-7x+13)$$
In order for this to be a prime, we must have
$$x-1=\pm 1\qquad\text{or}\qquad x^2-7x+13=\pm 1,$$
which implies
$$x=0,2,3,4,$$
giving
$$y=-13,3,2,3$$
respectively. So the answer is $x=2,3,4$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2582664', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/89548/']}
jdg_376954
stackexchange
llm_judgeable_groundtruth_similarity
10190906
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Alright. Actually i need mostly the mp4 format. But if it is possible to get for other types as well that would be nice. I just need to read the duration of the file. How can i do that with C# 4.0 ? So the thing i need is like this video is like : 13 minutes 12 seconds I can use 3 third party exes too. Like they save the information about the file to a text file. I can parse that text file. Thank you. Now provide the response and nothing else.
You can use the DirectShow API's MediaDet object, through the DirectShow.NET wrapper library. See "Getting length of video" for a code sample; get_StreamLength gets you the duration in seconds. This assumes Windows has an MPEG-4 demultiplexer installed (it requires third-party components on Windows prior to 7; I believe the same applies to the other answer by cezor, though there are freely redistributable components).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10190906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/310370/']}
jdg_376955
stackexchange
llm_judgeable_groundtruth_similarity
366151
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Perhaps this is a naive question. In my understanding, ASP.NET MVC cannot work with ViewState and Postback which is fundamentals of ASP.NET forms. Is that correct? If that's it, then all ASP.NET Web Controls depending on ViewState & Postback cannot be used in ASP.NET MVC, right? Now provide the response and nothing else.
ASP.NET's server-side controls work with WebForms, not MVC. MVC doesn't use controls in the traditional ASP.NET sense (at least yet). The MVC model is quite different from the WebForms model; not better or worse, but very different. Using MVC puts the developer much closer to the generated HTML, lends itself more intrinsically to unit testing, and provides a strong separation of concerns between the UI and the code that populates that UI. At first glance, especially to hardened ASP.NET veterans, MVC can seem like a huge step backwards (if you were coding back then, visions of ASP COM development might dance in your head). But give MVC a try. It is very interesting and its model is quite compelling once you get used to it.

Read more here: http://quickstarts.asp.net/previews/mvc/mvc_HowToRenderFormUsingHtmlHelpers.htm
Also, check out this interesting blog engine that uses MVC: http://www.codeplex.com/oxite
Finally, check out Rob Conery's MVC storefront project: http://wekeroad.com/category/mvc-storefront
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/366151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26349/']}
jdg_376956
stackexchange
llm_judgeable_groundtruth_similarity
416696
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to solve $y'''=x$.I integrated it and I got$$y''=x^2/2+c_1$$ Integrating again$$y'=x^3/12+c_2$$ Integrating for the third time$$y=x^4/48+c_3$$ but the answer on my book is $y=x^2/24+c_1 \cdot x^2/2+c_2 \cdot x+c_3$... where am I wrong? Now provide the response and nothing else.
$\,$ A univariate polynomial ring $\,F[x]\,$ over a field is $\rm ED$ (Euclidean) $\,\Rightarrow\rm PID \Rightarrow UFD$ $\ \bigcup_{n=1}^{\infty}\mathbb{Q}[x^\frac{1}{n}]\,$ fails ACCP by$(x) \subsetneq (x^{1/2}) \subsetneq (x^{1/4}) \subsetneq\,\ldots\ $ so it's $\ \lnot \rm UFD\,\Rightarrow\,\lnot ED$ $\,$ Polynomial rings over UFDs are UFDs, thus so is $\,\Bbb Q[x,y,z],\,$ but it's $\,\lnot\rm PID\,\Rightarrow\,\lnot ED,\,$ since $\,(x,y)\,$ is not principal. $\,$ Localization preserves Euclidean domains. Hint: lift the Euclidean function from $\,\Bbb Z\to \Bbb Z[1/2]\,$ by ignoring all factors of $\,2,\ $ i.e. for odd $\,a,b\in\Bbb Z,\,$ $\ b 2^i\mid a 2^j \in \Bbb Z[1/2]\iff b\mid a\in \Bbb Z.\,$ If $\,b\nmid a\,$ then $\, 0 < r = a-qb < b,\,$ so $\,0 < v(r) \le r < b = v(b 2^j),\,$ where $\,v(r)\,$ is the odd part of $\,r.$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/416696', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/81462/']}
jdg_376957
stackexchange
llm_judgeable_groundtruth_similarity
19392453
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want a panel to slide from left edge of browser when clicking a button and hide the panel when clicking the same button (toggle). Html <div class="panel"> </div> <a href="javascript:void(0);" class="slider-arrow show">&raquo;</a> CSS .panel {width:300px;float:left;height:550px;background:#d9dada;position:relative;left:-300px;}.slider-arrow {padding:5px;width:10px;float:left;background:#d9dada;font:400 12px Arial, Helvetica, sans-serif;color:#000;text-decoration:none;position:relative;left:-300px;} jquery $(function(){$('.slider-arrow.show').click(function(){ $( ".slider-arrow, .panel" ).animate({ left: "+=300" }, 700, function() { // Animation complete. }); $(this).html('&laquo;').removeClass('show').addClass('hide');});$('.slider-arrow.hide').click(function(){ $( ".slider-arrow, .panel" ).animate({ left: "-=300" }, 700, function() { // Animation complete. }); $(this).html('&raquo;').removeClass('hide').addClass('show');});}); It is showing the panel but not hiding the panel. Any problem with the selectors used? http://jsfiddle.net/Paramasivan/eHded/1/ Now provide the response and nothing else.
As others have said, with jQuery, once the document is initialized it's only looking for elements that initially existed. For that reason your .show function was being run every time. Instead of looking for a click event on .slider-arrow.show you can just look at .slider-arrow and then check for the classes once it has been clicked, like in this example.

$(function(){
  $('.slider-arrow').click(function(){
    if($(this).hasClass('show')){
      $( ".slider-arrow, .panel" ).animate({
        left: "+=300"
      }, 700, function() {
        // Animation complete.
      });
      $(this).html('&laquo;').removeClass('show').addClass('hide');
    } else {
      $( ".slider-arrow, .panel" ).animate({
        left: "-=300"
      }, 700, function() {
        // Animation complete.
      });
      $(this).html('&raquo;').removeClass('hide').addClass('show');
    }
  });
});

http://jsfiddle.net/eHded/4/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19392453', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1067586/']}
jdg_376958
stackexchange
llm_judgeable_groundtruth_similarity
20368187
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: #include <iostream>#include <type_traits>int main(){ //creating an integral constant with constexpr constexpr unsigned int speed_of_light{299792458}; //creating an integral constant with std::integral_constant typedef std::integral_constant<unsigned int, 299792458> speed_of_light_2; //using them std::cout << speed_of_light/2 << '\n'; std::cout << speed_of_light_2::value/2 << '\n';} What's special about std::integral_constant that I would choose to use it over constexpr ? Their behaviour and use cases look identical to me. I'm trying to think of some kind of template scenario, where constexpr may not suffice. Now provide the response and nothing else.
Template integral_constant defines a type, keyword constexpr defines a constant. For example std::true_type is std::integral_constant<bool, true>.

One of the usage examples is tag-dispatching:

template<typename T>
void use_impl(const T&, std::false_type)
{
}

template<typename T>
void use_impl(const T&, std::true_type)
{
}

template<typename T>
void use(const T& v)
{
    use_impl(v, typename std::is_integral<T>::type());
}

Live example
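A self-contained variant of that dispatch, in case it helps (the printed messages are additions, just to show which overload is selected):

#include <iostream>
#include <type_traits>

template<typename T>
void use_impl(const T&, std::false_type) { std::cout << "non-integral\n"; }

template<typename T>
void use_impl(const T&, std::true_type)  { std::cout << "integral\n"; }

template<typename T>
void use(const T& v) {
    use_impl(v, typename std::is_integral<T>::type());
}

int main() {
    use(42);    // integral     -> picks the std::true_type overload
    use(3.14);  // non-integral -> picks the std::false_type overload
}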
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20368187', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/908939/']}
jdg_376959
stackexchange
llm_judgeable_groundtruth_similarity
1812472
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do you organize and manage your helper objects like the database engine, user notification, error handling and so on in a PHP based, object oriented project? Say I have a large PHP CMS.The CMS is organized in various classes. A few examples: the database object user management an API to create/modify/delete items a messaging object to display messages to the end user a context handler that takes you to the right page a navigation bar class that shows buttons a logging object possibly, custom error handling etc. I am dealing with the eternal question, how to best make these objects accessible to each part of the system that needs it. my first apporach, many years ago was to have a $application global that contained initialized instances of these classes. global $application;$application->messageHandler->addMessage("Item successfully inserted"); I then changed over to the Singleton pattern and a factory function: $mh =&factory("messageHandler");$mh->addMessage("Item successfully inserted"); but I'm not happy with that either. Unit tests and encapsulation become more and more important to me, and in my understanding the logic behind globals/singletons destroys the basic idea of OOP. Then there is of course the possibility of giving each object a number of pointers to the helper objects it needs, probably the very cleanest, resource-saving and testing-friendly way but I have doubts about the maintainability of this in the long run. Most PHP frameworks I have looked into use either the singleton pattern, or functions that access the initialized objects. Both fine approaches, but as I said I'm happy with neither. I would like to broaden my horizon on what common patterns exist here. I am looking for examples, additional ideas and pointers towards resources that discuss this from a long-term , real-world perspective. Also, I'm interested to hear about specialized, niche or plain weird approaches to the issue. Now provide the response and nothing else.
I would avoid the Singleton approach suggested by Flavius. There are numerous reasons to avoid this approach. It violates good OOP principles. The google testing blog has some good articles on the Singleton and how to avoid it: http://googletesting.blogspot.com/2008/08/by-miko-hevery-so-you-join-new-project.html http://googletesting.blogspot.com/2008/05/tott-using-dependancy-injection-to.html http://googletesting.blogspot.com/2008/08/where-have-all-singletons-gone.html Alternatives a service provider http://java.sun.com/blueprints/corej2eepatterns/Patterns/ServiceLocator.html dependency injection http://en.wikipedia.org/wiki/Dependency_injection and a php explanation: http://components.symfony-project.org/dependency-injection/trunk/book/01-Dependency-Injection This is a good article about these alternatives: http://martinfowler.com/articles/injection.html Implementing dependency injection (DI): I believe you should ask what is needed in the constructor for the object to function : new YourObject($dependencyA, $dependencyB); You can provide the needed objects (dependencies) manually ( $application = new Application(new MessageHandler() ). But you can also use a DI framework (the wikipedia page provides links to PHP DI frameworks ). Important is that you only pass in what you actually use (call an action on), NOT what you simply pass to other objects because they need it. Here's a recent post from 'uncle Bob' (Robert Martin) discussing manual DI vs using framework . Some more thoughts on Flavius's solution. I don't want this post to be an anti-post but I think it's important to see why dependency injection is, at least for me, better than globals. Even though it is not a 'true' Singleton implementation, I still think Flavius got it wrong. Global state is bad . Note that such solutions also use difficult to test static methods . I know a lot of people do it, approve it and use it. But reading Misko Heverys blog articles ( a google testability expert ), rereading it and slowly digesting what he says did alter the way I see design a lot. If you want to be able to test you application, you'll need to adopt a different approach to designing your application. When you do test-first programming, you'll have difficulty with things like this: 'next I want to implement logging in this piece of code; let's write a test first that logs a basic message' and then come up with a test that forces you to write and use a global logger that can't be replaced. I am still struggling with all the information I got from that blog, and it's not always easy to implement, and I have many questions. But there's no way I can go back to what I did before (yes, global state and Singletons (big S)) after I grasped what Misko Hevery was saying :-)
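To make the constructor-injection idea concrete, a small hypothetical PHP sketch (class names are illustrative, not from the CMS described in the question):

<?php
// Hypothetical sketch of constructor injection; names are for illustration only.
class MessageHandler {
    public function addMessage($msg) { echo $msg, "\n"; }
}

class ItemApi {
    private $db;
    private $messages;

    // Ask for what the object actually uses, instead of reaching for a global or singleton.
    public function __construct(PDO $db, MessageHandler $messages) {
        $this->db = $db;
        $this->messages = $messages;
    }

    public function insert($item) {
        // ... $this->db->prepare(...) and execute the insert ...
        $this->messages->addMessage("Item successfully inserted");
    }
}

// Composition root: build the object graph once, near the entry point.
$db  = new PDO('sqlite::memory:');   // requires pdo_sqlite; any PDO connection works here
$api = new ItemApi($db, new MessageHandler());
$api->insert(array('name' => 'example'));

The point is that ItemApi states its needs in its constructor, so a test can hand it a fake MessageHandler and an in-memory database without touching any global state.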
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1812472', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187606/']}
jdg_376960
stackexchange
llm_judgeable_groundtruth_similarity
9508
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Background If you look at my profile description, you'll see that I'm involved in building and wiring up Carnival floats that are powered by an 800KVA, 415v, 3 phase Generator . Electrically, they generally have about 10 or 15 motors, thousands of light bulbs and a 20KW sound system. Problem 3-phase induction motors are used to drive big spinning steel structures which carry a lot of weight and have a lot of momentum, and therefore need to ramp up (and down) the speed over a period of around 10s so as not to put too much stress on gearboxes, chains and bearings, etc. The running speed also needs to be adjusted to fit the float design, which needs to be done electrically as changing gear ratios is not practical once built. To control the motors, an Inverter Drive is needed, which is a very expensive piece of kit that has a huge array of parameters that can be adjusted to control motors in all different situations. We have 2 of these units that control a few motors each. So the problem I have is: A big single point of failure - one unit fault causes many motors to fail. Many of the motors are limited to being driven at the same speed, when individual speed control is sometimes needed. Costly. As we struggle to raise the funds to build our carnival float every year, we cannot afford to buy any more Inverter Drives (unless someone is kind enough to donate one as they have with the two we've got). Solution I would love to be able to build my own for a fraction of the cost. I don't know if I'm setting myself an impossible task and am setting my sights way too high, or if this is indeed achievable. Looking inside an Inverter Drive , I can see several SMD circuit boards, some huge capacitors/inductors, and some heat sunk transistors. I have experience of switching/dimming resistive loads using pic based control circuits, but although I know the theory, controlling a 3 phase inductive load at the moment is beyond my skill level. Question Can any one guide me to where I should start with this. I suppose initially I'm looking for a block diagram of the modules (with a little detail) that I would need to build a home-made inverter drive (e.g. PSU, microcontroller, output stages, etc), and how they fit together. I can then work out if indeed its even worth attempting this, and if so work out which parts I can happily design and make myself, and which parts I'll need more help with. Specs Input: 240v single phase or 415v 3 phase. Output: 415v 3 phase variable frequency motor drive. Controls: Stop/Start, Ramp up speed (seconds), Ramp down speed (seconds), Running speed (Hz), Emergency stop. Display: Current speed (Hz), Load (A) Motor wiring: Star UPDATE (31st Jan): From the information provided by pingswept and bt2 in their answers, I think I've come up with a simulation of the bare bones of what I need ... Rectify the 3-phase to get 586V DC and use 6 IGBTs controlled by an IRS2330 which is controlled by PWM from a microcontroller. Now provide the response and nothing else.
I spent 13 years designing electronics of this exact nature: three-phase induction motor reduced-voltage soft starters and variable frequency AC drives. I spent the last few of those years as a VFD applications engineer, helping customers select and configure this equipment for various loads and industries around the country as well. You will not be able to build something that is both cheap and safe. The voltages and currents involved are well beyond the safety margin of a hobbyist, especially someone who is openly avoiding buying commercial units in order to save money. Don't do it! While the theory behind AC motor control is very straightforward, the detail-level work (heat sink sizing, snubber design, gate drive requirements, de-sat protection, motor overload calculations, bus capacitor protection, etc.) can be quite tricky to get right, especially with heavy duty cycling and regenerative power modes, which a carnival ride will CERTAINLY be generating. I strongly caution you against trying to build something of this nature unless you have significant experience not only in microcontrollers and embedded systems design but also in power electronics and three-phase circuitry. People get hurt and killed building this stuff. My first question for you is whether speed control is really required, or if you only require a soft start-up and slow-down. Do you vary the speed of the motor once it is started? If not, you may be able to get away with a MUCH cheaper reduced-voltage soft starter. These units act like three-phase light dimmers; they only adjust the applied voltage to the motor. You will not have a lot of torque at low speeds, but with the right design of motor (NEMA class D) you can achieve exactly what you're after at a fraction of the cost and maintenance. If you really do need to vary the full-load speed of the motor, then you are more or less stuck using a variable frequency drive. As you are aware, these are expensive, and if you buy cheap you are likely to replace them sooner because of your high surge current ("constant torque") application. What I would definitely recommend in that case is to contact various manufacturers (Allen-Bradley, Cutler-Hammer, SAF Drives, Benshaw, Yaskawa, etc.) and ask for reconditioned units. Ask for a drive capable of delivering 150% rated current for 30 s (usually known as heavy duty), or size the drive 30-50% larger than your nominal current rating. You will also likely be running off generator power, which is notorious for being undersized and prone to brownouts and surges as the load requirements change with the state of the equipment being run. Drives don't like that (voltage sags cause current spikes as the motor starts slipping, and surges can cause you to overvoltage the bus capacitors) and have a tendency to either fault out or blow up. I am all for the little guy building something and saving a buck, but this is not the type of project to do it on. If you really want to build a three-phase AC drive, start with a little 10 HP 480 V motor with a hand brake on a test bench. You get all the potential for experiencing the pants-filling sensation of an H-bridge failure or a bus capacitor explosion two feet from your head, but without the potential lawsuits and loss of life (except perhaps your own).
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/9508', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2426/']}
jdg_376961
stackexchange
llm_judgeable_groundtruth_similarity
42417342
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to send some log messages to the redis server. I want to use udp protocol to do this. Does redis support UDP between server and client? Now provide the response and nothing else.
No, the Redis protocol , RESP, is TCP based: Networking layer A client connects to a Redis server creating a TCP connection to the port 6379. While RESP is technically non-TCP specific, in the context of Redis the protocol is only used with TCP connections (or equivalent stream oriented connections like Unix sockets). What you could do is use a UDP proxy, but I'm not familiar with anything like that.
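As a rough illustration of the UDP-proxy idea (this is a hedged sketch, not an existing tool): a small process can accept log messages as UDP datagrams and forward them to Redis over its normal TCP connection. The example below uses Python with the redis-py client; the port number and the "logs" list key are arbitrary choices.

import socket

import redis  # redis-py client, speaks RESP over TCP

r = redis.Redis(host="localhost", port=6379)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # hypothetical UDP port your apps send logs to

while True:
    datagram, _addr = sock.recvfrom(65535)
    # Treat each datagram as one log message and push it onto a Redis list.
    r.rpush("logs", datagram)

Bear in mind that UDP gives no delivery guarantees, so log messages can be silently lost before they ever reach the proxy.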
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42417342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7611307/']}
jdg_376962
stackexchange
llm_judgeable_groundtruth_similarity
41993515
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In typescript, How to access object key(property) using variable? for example: interface Obj { a: Function; b: string;}let obj: Obj = { a: function() { return 'aaa'; }, b: 'bbbb'}for(let key in obj) { console.log(obj[key]);} but typescript throw below error message: 'TS7017 Element implicitly has an 'any' type because type 'obj' has no index signature' How to fix it? Now provide the response and nothing else.
To compile this code with --noImplicitAny, you need to have some kind of type-checked version of the Object.keys(obj) function, type-checked in the sense that it's known to return only the names of properties defined in obj. There is no such function built into TypeScript AFAIK, but you can provide your own:

interface Obj {
    a: Function;
    b: string;
}

let obj: Obj = {
    a: function() { return 'aaa'; },
    b: 'bbbb'
};

function typedKeys<T>(o: T): (keyof T)[] {
    // the type cast should be safe because that's what Object.keys() really does
    return Object.keys(o) as (keyof T)[];
}

// type-checked dynamic property access
typedKeys(obj).forEach(k => console.log(obj[k]));

// verify that it's indeed type-checked
typedKeys(obj).forEach(k => {
    let a: string = obj[k]; // error TS2322: Type 'string | Function'
                            // is not assignable to type 'string'.
                            // Type 'Function' is not assignable to type 'string'.
});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41993515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5880860/']}
jdg_376963
stackexchange
llm_judgeable_groundtruth_similarity
5494579
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 2 functions declared makeAjaxCall and editOrderDetails editOrderDetails executes makeAjaxCall to go and get a json object with the results of the call. function editOrderDetails(){ makeAjaxCall( baseurl+'/orderoutbound/editorderdetails', 'orderID='+orderID+'&customerReference='+("#orderReference").val()+'&email='+$("#emailAddress").val(), function(data){ if(data.success) { $("#editOrderDetailsErrorDiv").html(successDiv(data.generalMessage)); $(".customerReferenceSpan").html(data.order.customerReference); $(".emailSpan").html(data.order.emailAddress); }else{ $("#editOrderDetailsErrorDiv").html(errorDiv(data.generalMessage)); $("#emailAddressErrorDiv").html(data.errors.emailAddress); } }, function(data) { $("#editOrderDetailsErrorDiv").html(errorDiv("There was an error..")); } );} now i'm using a jquery dialog to work with $("#editOrderDetailsDialog").dialog('destroy').dialog({ autoOpen: false, title: 'Edit Order Details', closeOnEscape: true, width: 500, height: 300, buttons:{ "Save": function() { editOrderDetails(); }, "Cancel": function() { $(this).dialog("close"); } } }); as my save call back function i'm trying to set my editOrderDetails function. This however doesnt work and i'm guessing it has something to do with the scopeing. i have tried declaring var editOrderDetails = function(){}; outside of any and all jquery doc ready functions i have also tried window.editOrderDetails()also instead of making a function wrapping the function call i have tried putting the function into a variable var editOrderDetails = function(){};then "save" : editOrderDetails i'm at a loss. any ideas would be appreciated ? PS yes the dialog works correctly. if i place an alert in the callback function it executes when i click save. <script type="text/javascript"><!--var orderID = '<?= $this->orderID; ?>';var customerID = '<?= $this->customerID; ?>';//################ PAGE FUNCTIONS ################//MAKE AN AJAX CALL function makeAjaxCall(ajaxUrl, data, functionSuccess, functionFailure){ $.ajax( { type: "GET", url: ajaxUrl, contentType: "application/json; charset=utf-8", data: data, dataType: "json", success: functionSuccess, error: functionFailure });}//END MAKE AN AJAX CALL//EDIT ORDER DETAILSfunction editOrderDetails(){ makeAjaxCall( baseurl+'/orderoutbound/editorderdetails', 'orderID='+orderID+'&customerReference='+("#orderReference").val()+'&email='+$("#emailAddress").val(), function(data){ if(data.success) { $("#editOrderDetailsErrorDiv").html(successDiv(data.generalMessage)); $(".customerReferenceSpan").html(data.order.customerReference); $(".emailSpan").html(data.order.emailAddress); }else{ $("#editOrderDetailsErrorDiv").html(errorDiv(data.generalMessage)); $("#emailAddressErrorDiv").html(data.errors.emailAddress); } }, function(data) { $("#editOrderDetailsErrorDiv").html(errorDiv("There was an error..")); } );}//END EDIT ORDER DETAILS//################ END PAGE FUNCTIONS ################$(function() {// EDIT ORDER DETAILS DIALOG $("#editOrderDetailsDialog").dialog('destroy').dialog({ autoOpen: false, title: 'Edit Order Details', closeOnEscape: true, width: 500, height: 300, buttons:{ "Save": function() { editOrderDetails(); }, "Cancel": function() { $(this).dialog("close"); } } });// END EDIT ORDER DETAILS DIALOG});//--></script> Now provide the response and nothing else.
If you want to package up files for deployment, you probably don't need - or want - the repo itself. This is exactly what git archive is for. A couple of examples from the manpage (linked): git archive --format=tar --prefix=junk/ HEAD | (cd /var/tmp/ && tar xf -) Create a tar archive that contains the contents of the latest commit on the current branch, and extract it in the /var/tmp/junk directory. git archive --format=tar --prefix=git-1.4.0/ v1.4.0 | gzip > git-1.4.0.tar.gz Create a compressed tarball for the v1.4.0 release. You ought to be able to get it to do exactly what you want, with the help of the export-ignore attribute: export-ignore Files and directories with the attribute export-ignore won't be added to archive files. See gitattributes(5) for details. For example, to exclude the directory private and the files mine.txt and secret.c , you could put the following in the file .gitattributes :

private/ export-ignore
mine.txt export-ignore
secret.c export-ignore

Just like gitignore files, you can put those anywhere in your repository, and they'll operate from that directory, but starting from the top level is a good bet.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5494579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/684974/']}
jdg_376964
stackexchange
llm_judgeable_groundtruth_similarity
97190
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm in the midst of learning about ACL's for CentOS/Red Hat 6; when I run getfacl using an absolute path, I get among the output: getfacl: Removing leading '/' from absolute path names Why does it need to do this? In what situations would you need to use the -p or --absolute-names switch? My books by Wale Soyinka and Michael Jang don't make even a passing mention of this, I'm not seeing any clues in the man page, and I can't seem to find any sites that directly address this warning. Now provide the response and nothing else.
From the man page of getfacl: -p, --absolute-names Do not strip leading slash characters (`/'). The default behavior is to strip leading slash characters. A warning message is emitted when you supply an absolute path without using the -p switch, and the output differs when an absolute path is given to the getfacl command.

Without the -p switch:

$ getfacl /path/foo/bar
getfacl: Removing leading '/' from absolute path names
# file: path/foo/bar
[Output truncated...]

Note that the leading slash in the file path shows up only when the -p switch is used:

$ getfacl -p /path/foo/bar
# file: /path/foo/bar
[Output truncated...]

-p is useful for keeping the leading slash when you pipe the output on for further processing.

The outputs are the same when a relative path is given to the getfacl command:

$ getfacl bar
# file: bar
[Output truncated...]

No changes:

$ getfacl -p bar
# file: bar
[Output truncated...]
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/97190', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/40966/']}
jdg_376965
stackexchange
llm_judgeable_groundtruth_similarity
14494
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Having the need to attach a column to a matrix or to join matrices to make longer rows is an operation that I use very frequently and I find the Join function ideal for these cases. m1 = {{10, 11, 12}, {21, 22, 23}};m2 = {100, 101};(* Join matrices to make longer rows: *)Join[m1, m1, 2](* --> {{10, 11, 12, 10, 11, 12}, {21, 22, 23, 21, 22, 23}} *)(* Attach a column to a matrix *)Join[m1, List /@ m2, 2](* --> {{10, 11, 12, 100}, {21, 22, 23, 101}} *)(* Join two columns to make a matrix *)Join[List /@ m2, List /@ m2, 2](* --> {{100, 100}, {101, 101}} *) However, I wanted to define my own function that would simplify the notation needed to reach my goal: columnAttach[a1_List, a2_List] := Join[If[VectorQ[a1], List /@ a1, a1], If[VectorQ[a2], List /@ a2, a2], 2]columnAttach[m1, m1](* --> {{10, 11, 12, 10, 11, 12}, {21, 22, 23, 21, 22, 23}} *)columnAttach[m1, m2](* --> {{10, 11, 12, 100}, {21, 22, 23, 101}} *)columnAttach[m2, m2](* --> {{100, 100}, {101, 101}} *) This works as expected, but I would like to generalize it a bit. For instance, Join can take a list of vectors/matrices of any length: Join[m1, m1, List /@ m2, List /@ m2, 2](* --> {{10, 11, 12, 10, 11, 12, 100, 100}, {21, 22, 23, 21, 22, 23, 101, 101}} *) How can I adapt my columnAttach function to achieve the same flexibility? Now provide the response and nothing else.
One possibility: columnAttach[ak__List] := Join[##, 2] & @@ Replace[{ak}, v_?VectorQ :> List /@ v, 1]columnAttach[{{10, 11, 12}, {21, 22, 23}}, {100, 101}, {{10, 11, 12}, {21, 22, 23}}, {100, 101}, {100, 101}] {{10, 11, 12, 100, 10, 11, 12, 100, 100}, {21, 22, 23, 101, 21, 22, 23, 101, 101}} Alternatively: columnAttach[ak__List] := ArrayFlatten[{Replace[{ak}, v_?VectorQ :> List /@ v, 1]}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/14494', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/685/']}
jdg_376966
stackexchange
llm_judgeable_groundtruth_similarity
352136
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $xe^y\cdot y' - 2e^y = x^2$ Solve the equation using the proper substitution Now provide the response and nothing else.
Take two points on the plane: $(x_1,y_1,z_1),(x_2,y_2,z_2)$. Then they both satisfy the plane equation: $$2x_1-y_1+3z_1=8,$$ $$2x_2-y_2+3z_2=8.$$This gives $\left < x_1-x_2,y_1-y_2,z_1-z_2 \right > \cdot \left < 2,-1,3 \right > =0$. In other words, any vector on the plane is perpendicular to the vector $\left < 2,-1,3 \right >$.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/352136', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71058/']}
jdg_376967
stackexchange
llm_judgeable_groundtruth_similarity
442950
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I would like to show $\lim\limits_{r\to\infty}\int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta=0$ . Now, of course, the integrand does not converge uniformly to $0$ on $\theta\in [0, \pi/2]$ , since it has value $1$ at $\theta =0$ for all $r\in \mathbb{R}$ . If $F(r) = \int_{0}^{\pi/2}e^{-r\sin \theta}\text d\theta$ , we can find the $j$ th derivative $F^{(j)}(r) = (-1)^j\int_{0}^{\pi/2}\sin^{j}(\theta)e^{-r\sin\theta}\text d\theta$ , but I don't see how this is helping. The function is strictly decreasing on $[0,\pi/2]$ , since $\partial_{\theta}(e^{-r\sin\theta})=-r\cos\theta e^{-r\sin \theta}$ , which is strictly negative on $(0,\pi/2)$ . Any ideas? Now provide the response and nothing else.
It is enough to note that $\sin\theta \ge \frac{2}{\pi}\theta$ for $\theta\in[0,\pi/2]$ (Jordan's inequality), so that $$ \int\limits_{0}^{\pi/2}{e^{-r\sin\theta}\,\text d\theta}\le \int\limits_{0}^{\pi/2}{e^{-r\frac{2}{\pi}\theta}\,\text d\theta}=\frac{\pi}{2r}\left(1-e^{-r}\right) \to 0 \quad (r \to +\infty)$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/442950', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/76333/']}
jdg_376968
stackexchange
llm_judgeable_groundtruth_similarity
15423597
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Lets say I had code in a controller that did not use Strong Parameters Model.create name: params[:name], alias_id: params[:foreign_id] Is it possible for me to use Strong Parameters here? I cannot do Model.create params.require(:name, :foreign_id) Because foreign_id is not a param I cannot do Model.create params.require(:name, :alias_id) Because alias_id is not on the model. Basically, I want to know if you can alias paramater keys when using Strong Parameters. Now provide the response and nothing else.
Usually if I want to map params in this way (usually due to some external API) I use the alias_attribute method in ActiveRecord So if I have params that come to me as {name: "bob", alias_id: 1234} and ideally I would want to have {name: "bob", foreign_id: 1234} I declare the following in my ActiveRecord model class MyModel < ActiveRecord::Base alias_attribute :alias_id, :foreign_idend Now my model will always recognise alias_id as mapping to foreign_id in any context and I can push in params.require(:name, :alias_id) and my model will recognise it and process it as required without looping over attributes in my controller. This is also simpler if you want to do it on other models as well.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15423597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/388443/']}
jdg_376969
stackexchange
llm_judgeable_groundtruth_similarity
644741
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a trouble with a question and i need help to solve it. Define $A_1$={$f$ $\in C(\overline{\mathbb{D}})$ | f is analytic in $\mathbb{D}\}$ $A_2$=the norm closure of polynomials in $C(\overline{\mathbb{D}})$ i need to show that $A_1$=$A_2$ i have already shown that $A_2 \subseteq A_1$ but i dont know how to do the other direction. thanks! Now provide the response and nothing else.
Let $f \in A_1$, and $\varepsilon > 0$ arbitrary. Since $f$ is uniformly continuous ($\overline{\mathbb{D}}$ is compact), there is an $r \in (0,1)$ with $$\max \left\{ \lvert f(z) - f(rz)\rvert : z \in \overline{\mathbb{D}}\right\} < \frac{\varepsilon}{2}.$$ Now for such an $r$, the function $g(z) := f(rz)$ is analytic on a neighbourhood of the closed unit disk, namely on the disk $D_{1/r}(0) = \left\{ z : \lvert z\rvert < \frac{1}{r}\right\}$. The Taylor series of $g$ about $0$ converges uniformly on every compact subset of $D_{1/r}(0)$, in particular on $\overline{\mathbb{D}}$. That means there is an $n\in\mathbb{N}$ such that $$\left\lvert g(z) - \sum_{k=0}^n \frac{g^{(k)}(0)}{k!}z^k \right\rvert < \frac{\varepsilon}{2}$$ for all $z\in \overline{\mathbb{D}}$. Then $$T_n(z) = \sum_{k=0}^n \frac{g^{(k)}(0)}{k!}z^k$$ is a polynomial with $\lVert f - T_n\rVert < \varepsilon$. Since $\varepsilon > 0$ was arbitrary, that shows $f \in A_2$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/644741', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/86901/']}
jdg_376970
stackexchange
llm_judgeable_groundtruth_similarity
27683759
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just studied about generic programming, the List<E> interface, and ArrayList , so I can understand the statement below. ArrayList<String> list = new ArrayList<String>(); But I don't understand the next statement which I saw while surfing the web. List<String> list2 = Collections.<String>emptyList(); What is Collections ? Why isn't it Collections<E> or Collections<String> ? Why is <String> placed before the method name emptyList ? (Isn't emptyList<String>() correct for Generic?) What does the statement mean? Now provide the response and nothing else.
That line creates an empty list of strings by calling a static method with a generic type parameter. Inside the Collections class, there is a static method emptyList declared like: public static final <T> List<T> emptyList() { return (List<T>) EMPTY_LIST;} This has a generic type parameter T . We can call this method by using: List<String> list = Collections.emptyList(); and T is inferred to be String because of the type of list . We can also specify the type of T by putting it in angle brackets when calling emptyList . This may be needed if we want a more specific type than would be inferred: List<? extends Object> list = Collections.<String>emptyList(); emptyList<String>() is not correct because that placement is only valid when creating instances of generic classes, not when calling methods. When using new there are two possible type parameters: the ones before the class name are for the constructor only, and the ones after the class name are for the whole instance, so with the class: class MyClass<A> { public <B> MyClass(A a, B b) { System.out.println(a + ", " + b); }} we can call its constructor where A is String and B is Integer like: MyClass<String> a = new <Integer>MyClass<String>("a", 3); or by using type inference: MyClass<String> a = new MyClass<>("a", 3); See also: Generic Methods, Type Inference
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27683759', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4397753/']}
jdg_376971
stackexchange
llm_judgeable_groundtruth_similarity
22606792
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As said in the php reference Namespaces are declared using the namespace keyword. A file containing a namespace must declare the namespace at the top of the file before any other code - with one exception: the declare keyword. But further we've the following code snippet in the reference: <?phpnamespace MyProject;const CONNECT_OK = 1;class Connection { /* ... */ }function connect() { /* ... */ }namespace AnotherProject; //This namespace declaration doesn't located at the top of the file. It's unclear.const CONNECT_OK = 1;class Connection { /* ... */ }function connect() { /* ... */ }?> Now provide the response and nothing else.
You can use HiddenInput as the ref1 widget:

class TestModelForm(ModelForm):
    class Meta:
        model = TestModel
        widgets = {
            'ref1': forms.HiddenInput(),
        }

Another option is to save the form with the commit argument set to False. That way you can include only the visible fields in the form and then update the model instance with the needed data:

def some_view(request):
    # ...
    if request.method == 'POST':
        form = TestModelForm(request.POST)
        if form.is_valid():
            instance = form.save(commit=False)
            ref = get_ref_according_to_url()
            instance.ref1 = ref
            instance.save()
    # ...
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22606792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_376972
stackexchange
llm_judgeable_groundtruth_similarity
37121005
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I send an object with an array of objects in it from my express route to my client, I get an [Object object] and then when I try to stringify it, I get this crazy string with this console message var messages = "<%=(JSON.stringify(messages))%>"console.log(messages) Which prints this out to the console ... {&#34;messages&#34;:[{&#34;content&#34;:&#34;cool mane&#34;,&#34;creator&#34;:&#34;joe&#34;},{&#34;content&#34;:&#34;test 4&#34;,&#34;creator&#34;:&#34;joe&#34;},{&#34;content&#34;:&#34; ewgdqf&#34;,&#34;creator&#34;:&#34;joe&#34;}, It should be something so I can iterate through it by doingmessages[0].content but I'm getting this crazy string that won't let me do anything with it... If I try to loop through it, it just prints out each character by itself. Now provide the response and nothing else.
When using <%= ... %> , EJS will encode / escape any output. That's why the " in the JSON are encoded as &#34; . According to this answer , you can prevent escaping by using <%- ... %> instead. There is also no need to put the output inside a string literal. It's actually bad since you can get problems with nested quotes. Just let it output directly into the JS code: var messages = <%-JSON.stringify(messages)%>;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37121005', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5131222/']}
jdg_376973
stackexchange
llm_judgeable_groundtruth_similarity
1846072
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we define a function \begin{align}f(k ;a,b) =\frac{ \int_0^\infty \cos(a x) e^{-x^k} \,dx}{ \int_0^\infty \cos(b x) e^{-x^k} \,dx}\end{align} can we show that \begin{align}|f(k ;a,b)| \le 1\end{align}for $ 0<k \le 2$ and $a\ge b$? This question was motivated by the discussion here . Note that for $k=1$ and $k=2$ this can be done, since \begin{align} \int_0^\infty \cos(a x) e^{-x^1} \,dx=\frac{1}{1+a^2}\\ \int_0^\infty \cos(a x) e^{-x^2} \,dx=\frac{\sqrt{\pi}}{2}e^{-a^2/4}\\\end{align} So, we have that \begin{align}f(1;a,b)&=\frac{1+b^2}{1+a^2} \\f(2;a,b)&=e^{ \frac{b^2-a^2}{4}}\end{align} In which case, we have that the conjectured bound is true. Edit : The bounty was posted specifically to address this question and a question raised by Jack D'Aurizio in the comments. The question is: Let \begin{align}g_k(z)=\int_0^\infty \cos(zx) e^{-x^k} dx\end{align} What is the largest value of $k$ such that $g_k(z)$ is non-negative and decreasing for $z\in \mathbb{R}^{+}$? Now provide the response and nothing else.
Yes, the Fourier transform of $\exp(-|x|^k)$ is positive and decreasingfor all $k$ such that $0 < k \leq 2$. This follows from the known case of $k=2$ (Gaussians) via an argument ofB.F.Logan cited in my 1991 paper with Odlyzko and Rush: Noam D. Elkies, Andrew M. Odlyzko, and Jason A. Rush: On the packing densities of superballs and other bodies, Invent. Math. 105 (1991), 613-639. See Lemma 5 on page 626 (with $k=\sigma$; in that paperwe needed only positivity, not that the Fourier transform is decreasing).The key is that $\exp(-t^{k/2})$ is a "totally monotone" function of $t>0$(its $n$-th derivative has sign $(-1)^n$ for all $t>0$),and decays to zero as $t \to \infty$, whence it isa nonnegative mixture of decreasing exponentials $e^{-ct}$ $(c>0)$ by Bernstein's theorem .Taking $t=x^2$ we deduce that $\exp(-|x|^k)$ is a nonnegative mixture ofGaussians $\exp(-cx^2)$ $(c>0)$. Since the Fourier transform of $\exp(-cx^2)$is positive and decreasing for all $c>0$, the same is true of the Fouriertransform of $\exp(-|x|^k)$, QED .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1846072', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/63050/']}
jdg_376974
stackexchange
llm_judgeable_groundtruth_similarity
28697613
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a CefSharp browser created in my Winform and I need to dynamically build an HTML page in memory and then have CefSharp render it. Ideally I would like to pass the constructor a string with the HTML in it but it is expecting a URL. The answer is probably no, but is there a directive you can prepend the string with to let CefSharp know it is a string that contains a web page? Then CefSharp will create a temp file? If not, where is the Chromium temp folder set to? Will it work if I write a file to there and then pass that as a fully qualified path? I know Chrome will support something like file:///Users/dmacdonald/Documents/myFile.htm as a URL but not sure how to form a URL if using the temp structure. Here is my new code but my browser object doesn't have a ResourceHandler property. I see it has a ResourceHandlerFactory using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms;using CefSharp.WinForms;using CefSharp;namespace DanCefWinForm{ public partial class Form1 : Form { public const string TestResourceUrl = "http://maps/resource/load"; public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { ChromiumWebBrowser browser = new ChromiumWebBrowser("http://maps/resource/load") { Dock = DockStyle.Fill, }; var handler = browser.ResourceHandler; browser.Location = new Point(20, 20); browser.Size = new Size(100, 100); this.Controls.Add(browser); } }} Now provide the response and nothing else.
The Simple Approach (one "file", one page) LoadString() can be used to load directly from a string: ChromiumWebBrowser.LoadString(string html, string url); Alternatively, LoadHtml() can load from a string in a given encoding: ChromiumWebBrowser.LoadHtml(string html, string url, Encoding encoding); I tried both, and they both seem to work, at least with CefSharp.Wpf v51.0.0 . According to WebBrowserExtensions.cs , LoadHtml() uses RegisterHandler() to register a ResourceHandler . It is not clear to me how LoadString() works, but both functions seem to have the same effect. Be sure to use a valid URL format for the fake URL, such as: https://myfakeurl.com The Complex Approach (multiple "files", such as doc + images) Create a class deriving from IResourceHandlerFactory . Using VS2015, mousing over the red-underlined name should give the option of Implement interface . This auto-complete option vastly simplifies creation of the class, so be sure to use it. Similar to in step 1, create a class deriving from IResourceHandler . Be sure to use the Implement interface auto-complete option if you can. In the class created in step 1 (derived from IResourceHandlerFactory ), there is a function called GetResourceHandler() . Within this function, return a new instance of your derived class from step 2 (based on IResourceHandler ). Using new here is essential since the Web browser may request multiple files simultaneously. Each IResourceHandler instance should handle one request from the browser (no worries, this is done for you). As mentioned by OP, the browser control has a member called ResourceHandlerFactory . Set this member equal to a new instance of your class you created in step 1 (deriving from IResourceHandlerFactory ). This is what links the Chromium Web Browser control to your interface classes. In step 3 you linked both your classes, so we have a full chain. Within the class from step 2, there is a function called ProcessRequest() . This is the first function called when a request is made by a Web page. Your goal here is to record the requested URL and any POST data, then decide whether to allow the request, calling either callback.Continue() or callback.Cancel() . Return true to continue. Again in the class from step 2, there is a function called GetResponseHeaders() . This is the second function called. Your goal here is to check the URL, possibly fetching file data from wherever you store it (but not yet sending it), determine the response length (file or string size), and set an appropriate status code within the response object. Be sure to set all these variables so the request can proceed correctly. Your final step, again in the class from step 2, is to complete the request within the third called function: ReadResponse() . Within this function, write your data fetched in step 6 to the dataOut stream. If your data exceeds about 32kB, you may need to send it in multiple chunks. Be absolutely sure to limit the amount you write in a given call to the length of the dataOut stream. Set bytesRead to whatever you wrote in this particular call. On the last call, when no more data remains, simply set bytesRead to zero and return false . Because you may be called upon multiple times for a given file, be sure to track your current read location so you know where you are and how much data has been sent. 
For those unfamiliar with the matter, you can store data files directly compiled into your EXE by adding them to your project and setting their "Build Action" to "Embedded Resource", followed by loading their data programmatically using System.Reflection.Assembly.GetManifestResourceStream() . Using the above methods, there is no need to create or read any files from disk .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28697613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/461051/']}
jdg_376975
stackexchange
llm_judgeable_groundtruth_similarity
624328
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a 6-layer PCB with dual-side component placement. To help assembly, I'd like to keep all of the "large" components on the bottom side of the board. One of those large components is the inductor for a switching boost regulator. Normally I'd follow standard SMPS layout guidelines (e.g. https://www.analog.com/media/en/technical-documentation/application-notes/an136f.pdf ). Typically vias in the power path should be avoided due to the increase in inductance... but for the inductor, does this matter? All other components (IC, passives) would be on the top side of the board, arranged to minimize loops. Why do I not see layouts with the large inductor on the opposite side of the board more often? Now provide the response and nothing else.
For switching regulators, loop area is usually the prime concern.Loops are not only in the XY plane, but are also in the Z axis as well. So, sometimes, you can get a smaller overall loop by placing components on the backside of the board and use multiple vias to connect them (as you have surmised). Via inductance isn't really an issue if you put multiples of them in a grid. If you put 6 or more vias on each leg of the inductor, you'll have negligible inductance from them. If you are concerned about what the actual inductance is, recall that they add like resistors. So in parallel they divide. Assuming that all the vias are the same size, 6 of them in a small grid would divide their effective inductance by 6. How do you calculate the inductance of one? $$L=5.08\cdot h \left[\ln\left(\frac{2h}{r}\right)+1\right],$$ Where: \$L\$ is the via inductance in nH (nano Henries) \$r\$ is the radius of the via in inches \$h\$ is the length of the via (board thickness for a simple via) in inches Why do I not see layouts with the large inductor on the opposite side of the board more often? Mostly because it's cheaper to have all the components placed on one side of the board.
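As a quick worked example of the formula above (the board thickness and drill size here are assumed typical values, not taken from the question): for a 0.062 in (1.6 mm) thick board and a via with $r = 0.010$ in, $$L = 5.08\cdot 0.062\left[\ln\left(\frac{2\cdot 0.062}{0.010}\right)+1\right] \approx 0.31\cdot(2.52+1) \approx 1.1\ \text{nH},$$ so a grid of six such vias in parallel contributes only about $1.1/6 \approx 0.18$ nH, which is negligible next to the inductor itself.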
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/624328', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/54729/']}
jdg_376976
stackexchange
llm_judgeable_groundtruth_similarity
384392
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: still trying to find where i would use the "yield" keyword in a real situation. I see this thread on the subject What is the yield keyword used for in C#? but in the accepted answer, they have this as an example where someone is iterating around Integers() public IEnumerable<int> Integers(){yield return 1;yield return 2;yield return 4;yield return 8;yield return 16;yield return 16777216;} but why not just use list<int> here instead. seems more straightforward.. Now provide the response and nothing else.
If you build and return a List (say it has 1 million elements), that's a big chunk of memory, and also of work to create it. Sometimes the caller may only want to know what the first element is. Or they might want to write the elements to a file as they get them, rather than building the whole list in memory and then writing it out. That's why it makes more sense to use yield return. It doesn't look that different from building the whole list and returning it, but it is very different, because the whole list doesn't have to be created in memory before the caller can look at the first item. When the caller says:

foreach (int i in Integers())
{
    // do something with i
}

each time the loop requires a new i, it runs a bit more of the code in Integers(). The code in that function is "paused" when it hits a yield return statement.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/384392', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4653/']}
jdg_376977
stackexchange
llm_judgeable_groundtruth_similarity
239894
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am currently using JWT implementation for the authentication part of my APIs. A private key is used to sign the token generated and used to make sure it's not tampered with when it's used later for other API. My question is - What is the impact if this private key is leaked? What can the bad guy do with it? From here , my understanding is that the payload can be altered. Hence, in that example, a normal user can be changed to admin. But in my scenario, I don't have any other important fields except expiration date. So other than the bad guy able to forever extend his own token expiry date, what are the other impacts that I am facing? Now provide the response and nothing else.
Whoever possesses the private key can create valid tokens, and your system simply cannot distinguish between a legitimate token and a token created by the attacker. I am guessing you are not just using the expiry field but also the subject field sub, which is, in short, the logged-in user. With the private key, I can create a token with any subject I want and thus sign in as any user of your system. As you stated, I can also add any other claim, and your system has no choice but to trust it, because I was able to produce a valid signature. It cannot be stressed enough: JWT relies heavily on the private key staying absolutely private. Losing the private key is the worst-case scenario.
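To make the threat concrete, here is a minimal sketch of what an attacker can do with the leaked key. It uses Python's PyJWT library and assumes RS256 signing with a PEM-encoded RSA key; the file name and the claims are made up for illustration.

import datetime

import jwt  # PyJWT

with open("leaked_private_key.pem", "rb") as f:
    private_key = f.read()

# Mint a token for any subject, with any expiry, plus any extra claims we like.
forged = jwt.encode(
    {
        "sub": "admin",
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=3650),
    },
    private_key,
    algorithm="RS256",
)

# The server verifies with the matching public key and accepts this token as
# genuine; it has no way to tell it was not issued by the real service.

The only real remedy after a leak is to rotate the key pair and invalidate every token signed with the old key.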
{}
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/239894', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/244547/']}
jdg_376978
stackexchange
llm_judgeable_groundtruth_similarity
5251
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $\ce{[Cr(H2O)4Cl2]Cl}$, Tetraaquadichlorochromium(III) chloride According to a book, the (III) indicates the oxidation state of chromium, but it doesn't explain why. Cr has 6 valence electrons. How does it get to +3? Now provide the response and nothing else.
I think the easiest way to see this is to count the formal charges on the ligands. The four waters do not have net charge. The remaining ligands--the three Cl's--are likely $\ce{Cl-}$, and so their total charge contribution is $-3$. The only way the complex can be neutral is if the chromium has a $+3$ charge: hence Chromium (III).
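Written out as the arithmetic the answer describes, with $x$ the oxidation state of chromium and the whole compound neutral: $$x + 4(0) + 3(-1) = 0 \;\Rightarrow\; x = +3.$$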
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/5251', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/1771/']}
jdg_376979
stackexchange
llm_judgeable_groundtruth_similarity
52907866
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am wondering how I would properly use MySQL when I am scaling my Node.JS app using the cluster module. Currently, I've only come up with two solutions: Solution 1 : Create a database connection on every "worker". Solution 2 : Have the database connection on a master process and whenever one of the workers request some data, the master process will return the data. However, using this solution, I do not know how I would be able to get the worker to retrieve the data from the master process. I (think) I made a "hacky" workaround emitting with a unique number and then waiting for the master process to send the message back to the worker and the event name being the unique number. If you don't understand what I mean by this, here's some code: // Worker processreturn new Promise (function (resolve, reject) { process.send({ // Other data here identifier: <unique number> }) // having a custom event emitter on the worker worker.once(<unique number>, function (data) { // data being the data for the request with the unique number // resolving the promise with returned data resolve(data) })})//////////////////////////// Master process// Custom event emitter on the master processmaster.on(<eventName>, function (data) { // logic // Sending data back to worker master.send(<other args>, data.identifier)} What would be the best approach to this problem? Thank you for reading. Now provide the response and nothing else.
Loading the Google Maps API via a static import: import "https://maps.googleapis.com/maps/api/js?key=MYKEY&libraries=places&callback=initMap"; is not supported; you need to consider different options for that purpose: reference the Google Maps API JS library via the /public/index.html file: <script src="https://maps.googleapis.com/maps/api/js?key=MYKEY&libraries=places"></script> or dynamically load the JS resource, for example using this library. Now, regarding the SearchBar component, the example below demonstrates how to implement a simple version of Place Autocomplete (without a dependency on a Google Map instance), based on this official example: import React from "react";/* global google */class SearchBar extends React.Component { constructor(props) { super(props); this.autocompleteInput = React.createRef(); this.autocomplete = null; this.handlePlaceChanged = this.handlePlaceChanged.bind(this); } componentDidMount() { this.autocomplete = new google.maps.places.Autocomplete(this.autocompleteInput.current, {"types": ["geocode"]}); this.autocomplete.addListener('place_changed', this.handlePlaceChanged); } handlePlaceChanged(){ const place = this.autocomplete.getPlace(); this.props.onPlaceLoaded(place); } render() { return ( <input ref={this.autocompleteInput} id="autocomplete" placeholder="Enter your address" type="text"></input> ); }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52907866', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8967651/']}
jdg_376980
stackexchange
llm_judgeable_groundtruth_similarity
13074428
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been searching for this for a while and so far all I've been able to come up with is how to create a style and apply it to a character like so: StyledDocument doc = (StyledDocument) new DefaultStyledDocument();JTextPane textpane = new JTextPane(doc);textpane.setText("Test");javax.swing.text.Style style = textpane.addStyle("Red", null);StyleConstants.setForeground(style, Color.RED);doc.setCharacterAttributes(0, 1, textpane.getStyle("Red"), true); This is useful if you have only a few styles in your document and want to store them by name so that you can apply them easily later on. In my application I want to be able to set the foreground color (one of only a few values) and the background color (grayscale, many different values) independently for every character in the text. It seems like a huge waste to create potentially hundreds/thousands of different styles for this. Is there a way to set these attributes without having to create a new style each time? It would be much easier if I only had to render the text but I also need to make it editable as well. Is there a way to do this with JTextPane , or is there another swing class that offers this functionality? Now provide the response and nothing else.
If you want to change the style for each character in a textpane, here is a complete random way to do it. You create a different attribute set for each character. Up to you to find appropriate combination (foreground/background contrast, not too much difference in size of the chars, etc...). You could also store the different styles you have already applied so that you don't use the same one twice. import java.awt.Color;import java.util.Random;import javax.swing.JFrame;import javax.swing.JScrollPane;import javax.swing.JTextPane;import javax.swing.text.DefaultStyledDocument;import javax.swing.text.SimpleAttributeSet;import javax.swing.text.StyleConstants;import javax.swing.text.StyledDocument;public class TestDifferentStyles { private void initUI() { JFrame frame = new JFrame(TestDifferentStyles.class.getSimpleName()); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); StyledDocument doc = new DefaultStyledDocument(); JTextPane textPane = new JTextPane(doc); textPane.setText("Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has " + "been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of " + "type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the " + "leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the" + " release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing " + "software like Aldus PageMaker including versions of Lorem Ipsum."); Random random = new Random(); for (int i = 0; i < textPane.getDocument().getLength(); i++) { SimpleAttributeSet set = new SimpleAttributeSet(); // StyleConstants.setBackground(set, new Color(random.nextInt(256), random.nextInt(256), random.nextInt(256))); StyleConstants.setForeground(set, new Color(random.nextInt(256), random.nextInt(256), random.nextInt(256))); StyleConstants.setFontSize(set, random.nextInt(12) + 12); StyleConstants.setBold(set, random.nextBoolean()); StyleConstants.setItalic(set, random.nextBoolean()); StyleConstants.setUnderline(set, random.nextBoolean()); doc.setCharacterAttributes(i, 1, set, true); } frame.add(new JScrollPane(textPane)); frame.setSize(500, 400); frame.setVisible(true); } public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { @Override public void run() { new TestDifferentStyles().initUI(); } }); }}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13074428', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1775059/']}
jdg_376981
stackexchange
llm_judgeable_groundtruth_similarity
20139642
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using VS2008. I'm getting the following error. BUILD: [02:0000000295:ERRORE] c:\wince700\platform\am33x_bsp\src\bootloader\bootpart\bootpart_e.cpp(61) : error C2732: linkage specification contradicts earlier specification for 'SdhcInitialize' {log="C:\WINCE700\platform\AM33X_BSP\SRC\BOOTLOADER\bldsys.log(103)"}BUILD: [02:0000000297:ERRORE] NMAKE : fatal error U1077: 'C:\WINCE700\sdk\bin\i386\ARM\cl.EXE' : return code '0x2' {log="C:\WINCE700\platform\AM33X_BSP\SRC\BOOTLOADER\bldsys.log(104)"}BUILD: [02:0000000299:ERRORE] clean TargetCompilePass -nologo BUILDMSG=Stop. BUILDROOT=C:\WINCE700\platform\AM33X_BSP CLEANBUILD=1 NOLINK=1 NOPASS0=1 failed - rc = 2. {log="C:\WINCE700\platform\AM33X_BSP\SRC\BOOTLOADER\bldsys.log(105)"} file_1.cpp extern "C"{ // some extern declarations extern void SdhcInitialize(DWORD slot);} file_2.c void SdhcInitialize(DWORD slot){//some code} Please guide me how to resolve. Now provide the response and nothing else.
I'm guessing that you have a header that contains a prototype for the SdhcInitialize() function, and that the header was written for use by C programs. So, for example, the header file might include something like the following line: SD_API_STATUS SdhcInitialize(DWORD slot); without it being enclosed in an extern "C" {} block (since the header is intended for C programs). Additionally, I suspect that this header is being included - directly or indirectly - by file_1.cpp. This means that the header cannot be included in a C++ program without some additional work being done; otherwise the C++ program will see the declaration as meaning that SdhcInitialize() has C++ linkage. You have two reasonable approaches to fixing this: If you can modify the header, add the following lines around the declarations in the header:

#if __cplusplus
extern "C" {
#endif

// declarations go here

#if __cplusplus
}
#endif

This way, C++ files will see the declarations enclosed in an extern "C" linkage block, while C programs will not see the extern "C" bits (which would otherwise confuse the C compiler). I think an argument can be made that all C headers should include something like those lines so that the C functions can be consumed by C++ programs without hassle. If you cannot modify the header for some reason, you can work around the problem by including the header in C++ files like so:

extern "C" {
#include "Sdhc-header.h"
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20139642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2039127/']}
jdg_376982
stackexchange
llm_judgeable_groundtruth_similarity
3252189
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I failed at reading RWH; and not one to quit, I ordered Haskell: The Craft of Functional Programming . Now I'm curious about these functional proofs on page 146. Specifically I'm trying to prove 8.5.1 sum (reverse xs) = sum xs . I can do some of the induction proof but then I get stuck.. HYP: sum ( reverse xs ) = sum xs BASE: sum ( reverse [] ) = sum []Left = sum ( [] ) (reverse.1) = 0 (sum.1)Right = 0 (sum.1) INDUCTION: sum ( reverse (x:xs) ) = sum (x:xs) Left = sum ( reverse xs ++ [x] ) (reverse.2)Right = sum (x:xs) = x + sum xs (sum.2) So now I'm just trying ot prove that Left sum ( reverse xs ++ [x] ) is equal to Right x + sum xs , but that isn't too far off from where I started sum ( reverse (x:xs) ) = sum (x:xs) . I'm not quite sure why this needs to be proved, it seems totally reasonable to use the symbolic proof of reverse x:y:z = z:y:x (by defn), and because + is commutative (arth) then reverse 1+2+3 = 3+2+1 , Now provide the response and nothing else.
sum (reverse []) = sum []                         -- def reverse

sum (reverse (x:xs)) = sum (reverse xs ++ [x])    -- def reverse
                     = sum (reverse xs) + sum [x] -- sum lemma below
                     = sum (reverse xs) + x       -- def sum
                     = x + sum (reverse xs)       -- commutativity assumption!
                     = x + sum xs                 -- inductive hypothesis
                     = sum (x:xs)                 -- definition of sum

However, there are underlying assumptions of associativity and commutativity that are not strictly warranted, and this will not work properly for a number of numerical types such as Float and Double, where those assumptions are violated.

Lemma: sum (xs ++ ys) == sum xs + sum ys, given the associativity of (+)

Proof:

sum ([] ++ ys) = sum ys          -- def (++)
               = 0 + sum ys      -- identity of addition
               = sum [] + sum ys -- def sum

sum ((x:xs) ++ ys) = sum (x : (xs ++ ys))  -- def (++)
                   = x + sum (xs ++ ys)    -- def sum
                   = x + (sum xs + sum ys) -- inductive hypothesis
                   = (x + sum xs) + sum ys -- associativity assumption!
                   = sum (x:xs) + sum ys   -- def sum
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3252189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124486/']}
jdg_376983
stackexchange
llm_judgeable_groundtruth_similarity
73795
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Given a list of real numbers $p_1, \dots, p_n$, I am looking for a most efficient algorithm to sort this list in a "probabilistic ascending order", meaning that $p_i < p_j$ implies that it is likely for $i$ to be placed before $j$, but not certain. In principle, every permutation is a possible output, but the less sorted the permutation is, the less likely it is to occur. The best solution I could come up with is to modify selection sort. Instead of selecting the minimal element in every step, you select a random element with probability proportional to $\frac{1}{p_i}$. This has quadratic complexity of course, so I was wondering if there are better alternatives. Now provide the response and nothing else.
One of the popular models for biased permutations is the Mallows model , dating to a paper of Mallows from 1957. Lu and Boutilier , quoting Doignon et al. , give the following recipe for sampling a permutation according to the Mallows distribution, given a parameter $0 < \phi \leq 1$: Start with the permutation 1. Insert 2 into position 1 with probability $\frac{\phi-\phi^2}{1-\phi^2}$, and into position 2 with probability $\frac{1-\phi}{1-\phi^2}$. Insert 3 into positions 1,2,3 with probabilities $\frac{\phi^2-\phi^3}{1-\phi^3},\frac{\phi-\phi^2}{1-\phi^3},\frac{1-\phi}{1-\phi^3}$, respectively. Insert $4,\ldots,n$ in an analogous manner. When inserting $x$ into position $i$, what you do is shift positions $i,\ldots,x-1$ one step forward, and insert $x$ in the resulting empty spot. The probability to obtain a permutation $\pi$ is proportional to $\phi$ raised to the number of inversions in $\pi$ (the Kendall $\tau$ distance between $\pi$ and the identity permutation). Another popular model is the Plackett – Luce model from 1959. There are other models, for example Tallis–Dansie .
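Here is a minimal sketch in Python of the repeated-insertion recipe for the Mallows model described above (written for clarity rather than speed; the function name is mine). Element $i$ is inserted at 1-based position $j$ with weight $\phi^{\,i-j}$, which reproduces the probabilities listed above and gives each permutation probability proportional to $\phi$ raised to its number of inversions.

import random

def sample_mallows(n, phi):
    """Sample a permutation of 1..n from the Mallows model with 0 < phi <= 1."""
    perm = []
    for i in range(1, n + 1):
        weights = [phi ** (i - j) for j in range(1, i + 1)]
        j = random.choices(range(1, i + 1), weights=weights)[0]
        perm.insert(j - 1, i)  # inserting here creates exactly i - j new inversions
    return perm

To produce a "probabilistically ascending" ordering of the values $p_1,\dots,p_n$ from the question, one would sort them normally and then apply such a permutation to the sorted order. As written the sketch is $O(n^2)$ because of the explicit weight vector and the list insert; the position can be drawn in $O(1)$ from its truncated geometric distribution and the insertion done in $O(\log n)$ with an order-statistic tree, bringing the whole thing down to $O(n \log n)$.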
{}
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/73795', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/7226/']}
jdg_376984
stackexchange
llm_judgeable_groundtruth_similarity
11796580
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Full message: error: Ref refs/remotes/origin/user is at 3636498c2ea7735fdcedc9af5ab3c8689e6abe77 but expected a21359c6cc2097c85775cde6a40105f4bd7100ecFrom github.com:{github project url} ! a21359c..6273ffc user -> origin/user (unable to update local ref) Now provide the response and nothing else.
If you are running git under a file system that is not case sensitive (Windows or OS X) this will occur if there are two branches with the same name but different capitalisation, e.g. user_model_changes and User_model_changes as both of the remote branches will match the same tracking ref. Delete the wrong remote branch (you shouldn't have branches that differ only by case) and then git remote prune origin and everything should work
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11796580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/423373/']}
jdg_376985
stackexchange
llm_judgeable_groundtruth_similarity
11786023
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I do not want d3.behavior.zoom to add the ability to double click zoom on my graph. How can I disable this behavior? Here is a JSFiddle with the unwanted behavior. I have tried the following without any luck. d3.behavior.zoom.dblclick = function() {}; Now provide the response and nothing else.
You can disable the double-click behavior by removing the zoom behavior’s dblclick event listener. Looking at your code, you’ve assigned the zoom behavior to the SVG element. So you could say: d3.select("svg").on("dblclick.zoom", null); Or, together with where you register the zoom behavior: .call(d3.behavior.zoom().on("zoom", update)).on("dblclick.zoom", null) You might also want to move the zoom behavior down to a G element rather than putting it on the root SVG element; I’m not sure it will work correctly on the root SVG, since the SVG element doesn’t support the transform attribute .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11786023', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1299073/']}
jdg_376986
stackexchange
llm_judgeable_groundtruth_similarity
15282656
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just cant figure out this error using Spring Web Services. I believe I did everything correctly. Soap Error Response <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header/> <SOAP-ENV:Body> <SOAP-ENV:Fault> <faultcode>SOAP-ENV:Server</faultcode> <faultstring xml:lang="en">No adapter for endpoint [public void org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.PersonManagerSyncSoapBindingImpl.readPerson(org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.ReadPersonRequest,org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.Imsx_RequestHeaderInfoType,org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.holders.ReadPersonResponseHolder,org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.holders.Imsx_ResponseHeaderInfoTypeHolder)]: Is your endpoint annotated with @Endpoint, or does it implement a supported interface like MessageHandler or PayloadEndpoint?</faultstring> </SOAP-ENV:Fault> Annotations @Endpoint public class PersonManagerSyncSoapBindingImpl implements org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.PersonManagerSyncPortType{ @PayloadRoot(localPart = "readPersonRequest", namespace = "http://www.imsglobal.org/services/lis/pms2p0/wsdl11/sync/imspms_v2p0") @ResponsePayload public void readPerson(@RequestPayload org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.ReadPersonRequest parameters, org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.Imsx_RequestHeaderInfoType headerInfoParameters, @RequestPayload org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.holders.ReadPersonResponseHolder response, @RequestPayload org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.holders.Imsx_ResponseHeaderInfoTypeHolder headerInfoResponse) { response.value = new org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.ReadPersonResponse(); headerInfoResponse.value = new org.imsglobal.www.services.lis.pms2p0.wsdl11.sync.imspms_v2p0.Imsx_ResponseHeaderInfoType(); } spring-ws-servlet.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:sws="http://www.springframework.org/schema/web-services" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/web-services http://www.springframework.org/schema/web-services/web-services-2.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <context:component-scan base-package="*"></context:component-scan> <sws:annotation-driven/> <sws:dynamic-wsdl id="personServiceManagement" portTypeName="PersonManagerSyncPortType" locationUri="/endpoints/" targetNamespace="http://www.imsglobal.org/services/lis/pms2p0/wsdl11/sync/imspms_v2p0"> <sws:xsd location="/WEB-INF/wsdl/xsd/PersonManagementService.xsd"/> </sws:dynamic-wsdl></beans> Now provide the response and nothing else.
I think you're missing the return value. Spring-WS uses the method signature to map a request/response combination. For instance, I have the following operation in my generated WSDL:

<wsdl:operation name="GetHiredCandidates"> <wsdl:input message="tns:GetHiredCandidatesRequest" name="GetHiredCandidatesRequest"></wsdl:input> <wsdl:output message="tns:GetHiredCandidatesResponse" name="GetHiredCandidatesResponse"></wsdl:output></wsdl:operation>

To map a method onto this operation, the @ResponsePayload and @RequestPayload need to match the input and output defined in the WSDL:

@ResponsePayload
public GetHiredCandidatesResponse getKandidaat(@RequestPayload GetHiredCandidatesRequest request) {
    // ... build the response ...
    return getHiredCandidatesResponse;
}

Hope this helps!
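Applied to your endpoint, a hedged sketch of the shape Spring-WS expects (the class names are the ones from your generated code; the body is only illustrative):

@PayloadRoot(localPart = "readPersonRequest",
             namespace = "http://www.imsglobal.org/services/lis/pms2p0/wsdl11/sync/imspms_v2p0")
@ResponsePayload
public ReadPersonResponse readPerson(@RequestPayload ReadPersonRequest request) {
    ReadPersonResponse response = new ReadPersonResponse();
    // populate the response here
    return response;
}

The key point is that the method returns the response type instead of filling holder parameters and returning void.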
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15282656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/280602/']}
jdg_376987
stackexchange
llm_judgeable_groundtruth_similarity
50186904
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to remove a directory and its contents in the PathLib module? With path.unlink() it only removes a file, with path.rmdir() the directory has to be empty. Is there no way to do it in one function call? Now provide the response and nothing else.
As you already know, the only two Path methods for removing files/directories are .unlink() and .rmdir(), and neither does what you want. Pathlib is a module that provides object-oriented paths across different OS's; it isn't meant to have lots of diverse methods. The aim of this library is to provide a simple hierarchy of classes to handle filesystem paths and the common operations users do over them. The "uncommon" file system alterations, such as recursively removing a directory, are stored in different modules. If you want to recursively remove a directory, you should use the shutil module. (It works with Path instances too!)

import shutil
import pathlib
import os  # for checking results

print(os.listdir())
# ["a_directory", "foo.py", ...]

path = pathlib.Path("a_directory")
shutil.rmtree(path)

print(os.listdir())
# ["foo.py", ...]
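For comparison, here is roughly what you would have to write yourself using only pathlib. Treat it as a hedged sketch rather than a recommendation, since shutil.rmtree already handles this (and more edge cases) for you:

import pathlib

def rmtree_pathlib(path: pathlib.Path) -> None:
    # Recursively delete a directory tree using only pathlib calls.
    for child in path.iterdir():
        if child.is_dir() and not child.is_symlink():
            rmtree_pathlib(child)
        else:
            child.unlink()
    path.rmdir()

rmtree_pathlib(pathlib.Path("a_directory"))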
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/50186904', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4556675/']}
jdg_376988
stackexchange
llm_judgeable_groundtruth_similarity
3031988
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the boolean hypercube $\{0,1\}^N$. For a set $I \subseteq \{1,2,\ldots,N\}$, we define the parity function $h_I$ as follows. For a binary vector $x = (x_1, x_2, \ldots, x_N) \in \{0,1\}^N$, $h_I(x) = \big(\sum_{i\in I}x_i\big) \bmod 2$. What is the VC-dimension of the class of all such parity functions, $H_{N\text{-parity}} = \{h_I : I\subseteq \{1,2,\ldots, N\}\}$? [Courtesy: Shai Ben-David et al.] Now provide the response and nothing else.
No. Suppose $G$ is an uncountable group. Every element $g$ of $G$ belongs to a countable subgroup of $G$ , namely the cyclic subgroup $\langle g\rangle$ . Thus $G$ is the union of all of its countable subgroups. Since a countable union of countable sets is countable, $G$ must have uncountably many countable subgroups.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/3031988', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/624430/']}
jdg_376989
stackexchange
llm_judgeable_groundtruth_similarity
57921251
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The method List.of(E... elements) in Java does return an immutable list, but this is not visible at all by looking at the created list. The created list simply throws an Exception instead of not exposing the possibility to change the list at all. My point is that List.of(E... elements) should return an ImmutableList that extends List. This way the user can decide whether he cares to show this fact of immutability or not. But I don't find anybody complaining or showing alternative solutions. Even Guava and Apache Commons don't do this by default. Only Guava gives the possibility to create it (albeit with a lot of code):

List<String> list = new ArrayList<String>(Arrays.asList("one", "two", "three"));
ImmutableList<String> unmodifiableList = ImmutableList.<String>builder().addAll(list).build();

But even this class has a (deprecated) add and remove method. Can anyone tell me why nobody cares about this (seemingly fundamental) issue? Now provide the response and nothing else.
It's not that nobody cares; it's that this is a problem of considerable subtlety. The original reason there isn't a family of "immutable" collection interfaces is because of a concern about interface proliferation . There could potentially be interfaces not only for immutability, but synchronized and runtime type-checked collections, and also collections that can have elements set but not added or removed (e.g., Arrays.asList ) or collections from which elements can be removed but not added (e.g., Map.keySet ). But it could also be argued that immutability is so important that it should be special-cased, and that there be support in the type hierarchy for it even if there isn't support for all those other characteristics. Fair enough. The initial suggestion is to have an ImmutableList interface extend List , as ImmutableList <: List <: Collection (Where <: means "is a subtype of".) This can certainly be done, but then ImmutableList would inherit all of the methods from List , including all the mutator methods. Something would have to be done with them; a sub-interface can't "disinherit" methods from a super-interface. The best that could be done is to specify that these methods throw an exception, provide default implementations that do so, and perhaps mark the methods as deprecated so that programmers get a warning at compile time. This works, but it doesn't help much. An implementation of such an interface cannot be guaranteed to be immutable at all. A malicious or buggy implementation could override the mutator methods, or it could simply add more methods that mutate the state. Any programs that used ImmutableList couldn't make any assumptions that the list was, in fact, immutable. A variation on this is to make ImmutableList be a class instead of an interface , to define its mutator methods to throw exceptions, to make them final, and to provide no public constructors, in order to restrict implementations. In fact, this is exactly what Guava's ImmutableList has done. If you trust the Guava developers (I think they're pretty reputable) then if you have a Guava ImmutableList instance, you're assured that it is in fact immutable. For example, you could store it in a field with the knowledge that it won't change out from under you unexpectedly. But this also means that you can't add another ImmutableList implementation, at least not without modifying Guava. A problem that isn't solved by this approach is the "scrubbing" of immutability by upcasting. A lot of existing APIs define methods with parameters of type Collection or Iterable . If you were to pass an ImmutableList to such a method, it would lose the type information indicating that the list is immutable. To benefit from this, you'd have to add immutable-flavored overloads everywhere. Or, you could add instanceof checks everywhere. Both are pretty messy. (Note that the JDK's List.copyOf sidesteps this problem. Even though there are no immutable types , it checks the implementation before making a copy, and avoids making copies unnecessarily. Thus, callers can use List.copyOf to make defensive copies with impunity.) As an alternative, one might argue that we don't want ImmutableList to be a sub-interface of List , we want it to be a super-interface: List <: ImmutableList This way, instead of ImmutableList having to specify that all those mutator methods throw exceptions, they wouldn't be present in the interface at all. This is nice, except that this model is completely wrong. 
Since ArrayList is a List, that means ArrayList is also an ImmutableList, which is clearly nonsensical. The problem is that "immutable" implies a restriction on subtypes, which can't be done in an inheritance hierarchy. Instead, it would need to be renamed to allow capabilities to be added as one goes down the hierarchy, for example, List <: ReadableList, which is more accurate. However, ReadableList is altogether a different thing from an ImmutableList. Finally, there are a bunch of semantic issues that we haven't considered. One concerns immutability vs. unmodifiability. Java has APIs that support unmodifiability, for example:

List<String> alist = new ArrayList<>(...);
??? ulist = Collections.unmodifiableList(alist);

What should the type of ulist be? It's not immutable, since it will change if somebody changes the backing list alist. Now consider:

???<String[]> arlist = List.of(new String[] { ... }, new String[] { ... });

What should the type be? It's certainly not immutable, as it contains arrays, and arrays are always mutable. Thus it's not at all clear that it would be reasonable to say that List.of returns something immutable.
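A small, self-contained sketch that demonstrates the unmodifiability vs. immutability distinction with the standard library (the class name is arbitrary):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableVsCopy {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>(List.of("a", "b"));

        // An unmodifiable *view*: it still reflects changes to the backing list.
        List<String> view = Collections.unmodifiableList(backing);
        // A defensive *copy*: later changes to the backing list are not visible.
        List<String> copy = List.copyOf(backing);

        backing.add("c");

        System.out.println(view);  // [a, b, c] - the view changed underneath us
        System.out.println(copy);  // [a, b]    - the copy did not
    }
}

This is why List.copyOf can be used for defensive copies with impunity, while an "unmodifiable" wrapper gives no such guarantee.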
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57921251', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2351407/']}
jdg_376990
stackexchange
llm_judgeable_groundtruth_similarity
38393
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: J, U, and W are included in the ISO basic Latin alphabet, which consists of 26 letters. However, classical Latin had only 23 letters, and J was used only as a variant of I, much as σ and ς are variants of the same letter. J and U were not distinguished from I and V in Europe until the late Medieval period, and were not regarded as different letters in English as late as the 18th century. Many European languages do not have a J or V (sometimes U instead) in their alphabet. W, first the digraph UU or VV and then a ligature, as implied by its name, is not included in many European alphabets, yet it was included in the ISO basic Latin alphabet. Meanwhile, Æ and Œ, which are also common not only in English but also in Medieval Latin and other European languages, did not survive in the ISO basic Latin alphabet. Ch, a digraph dating back to the 2nd century BC, was included in Gerke's version of Morse code and was standardized by the ITU (as were Ä, Ö, Ü), yet it didn't survive either. Why were J, U, and W included? Is it just a coincidence that English is the only major language that uses exactly these letters and no more in its orthography? Related: Does any language using the Latin alphabet have a unique name for "w"? Now provide the response and nothing else.
Despite its name, the ISO Basic Latin Alphabet isn't particularly concerned with representing Latin. It was developed in the modern day, so the fact that I~J and U~V weren't consistently distinguished until the 18th century isn't relevant—they're consistently distinguished now . But the observation that the ISO Basic Latin Alphabet aligns exactly with what's needed for English and not with what's needed for most other European languages is an important one, and gets at the core of the answer. A lot of early work in electronic transmission of text was done in America, and as such, the early codes used were designed pretty much exclusively for English. It's the same reason why American varieties of Morse code didn't have codes for ß and ø, and why American typewriters didn't have keys for them: they just weren't needed for English, and including them was an additional expense for not much benefit. In the 60s, American manufacturers standardized "ASCII" (the American Standard Code for Information Interchange) to make it easier for their devices to talk to each other—without any particular consideration given to other languages, for the same reason as with typewriters and telegraphs. And due to the significant influence of American tech manufacturers, the original seven-bit ASCII eventually got enshrined in international standards; variations like eight-bit ASCII and eventually Unicode tended to extend it, not modify the core of it, with non-English letters like ß and ø relegated to higher codepoints separate from the English alphabet. And thus, the "ISO Basic Latin Alphabet" is just a fancy name for the English alphabet, circa the 1960s and 1970s when these standards were first devised. It's a historical accident, really, nothing more.
{}
{'log_upvote_score': 6, 'links': ['https://linguistics.stackexchange.com/questions/38393', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/26862/']}
jdg_376991
stackexchange
llm_judgeable_groundtruth_similarity
31587926
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Integrating https://github.com/wdullaer/MaterialDateTimePicker this library in android app,here is my code to select date and time.I'm able to implement enabling future date with in month and disable past date with in month but i want to implement disable option for past months and enable selection option for future months.here is my code private void selectDate() { final TimePickerDialog timePickerDialog12h = TimePickerDialog .newInstance(new OnTimeSetListener() { @Override public void onTimeSet(RadialPickerLayout view, int hourOfDay, int minute) { Object c = pad3(hourOfDay); selectedTime = new StringBuilder() .append(pad2(hourOfDay)).append(":") .append(pad(minute)).append(c); String str_date = selectedDate.toString() + " " + selectedTime + ""; String str_date_current = currentDate.toString() + " " + mCalendar.get(Calendar.HOUR_OF_DAY) + ":" + mCalendar.get(Calendar.MINUTE); DateFormat formatter = new SimpleDateFormat( "dd-MM-yyyy hh:mm aa"); DateFormat formatter2 = new SimpleDateFormat( "dd-MM-yyyy hh:mm"); DateFormat myFormat = new SimpleDateFormat( "yyyy-MM-dd HH:mm:ss"); try { String reformattedStr = myFormat.format(formatter .parse(str_date)); // Date date=formatter.parse(str_date); String reformattedStr2 = myFormat.format(formatter2 .parse(str_date_current)); date = (Date) myFormat.parse(reformattedStr); Date date2 = (Date) myFormat.parse(reformattedStr2); if (date2.before(date)) { bookAppointment(); } else { selectDate(); Toast.makeText( ChatThread.this, "We all wish we could go back in the past ;)", Toast.LENGTH_SHORT).show(); } System.out.println("Today is " + date.getTime()); } catch (ParseException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }, mCalendar.get(Calendar.HOUR_OF_DAY), mCalendar .get(Calendar.MINUTE), false); datePickerDialog = DatePickerDialog.newInstance( new OnDateSetListener() { public void onDateSet(DatePickerDialog datePickerDialog, int year, int month, int day) { selectedDate = new StringBuilder().append(pad(day)) .append("-") .append(mCalendar.get(Calendar.MONTH) + 1) .append("-") .append(mCalendar.get(Calendar.YEAR)); currentDate = new StringBuilder() .append(mCalendar.get(Calendar.DAY_OF_MONTH)) .append("-").append(pad(month + 1)).append("-") .append(pad(year)); DateFormat format = new SimpleDateFormat("dd-MM-yyyy"); try { Date date1 = format.parse(selectedDate.toString()); Date date2 = format.parse(currentDate.toString()); Calendar cal = Calendar.getInstance(); cal.setTime(date2); datePickerDialog.setMinDate(cal); if (date2.equals(date1) || date2.compareTo(date1)<0) { timePickerDialog12h.show(getFragmentManager(), ""); } else { selectDate(); Toast.makeText( ChatThread.this, "We all wish we could go back in the past ;)", Toast.LENGTH_SHORT).show(); } } catch (ParseException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }, mCalendar.get(Calendar.YEAR), mCalendar.get(Calendar.MONTH), mCalendar.get(Calendar.DAY_OF_MONTH)); // datePickerDialog.onCreateDialog(savedInstanceState) datePickerDialog.show(getFragmentManager(), "tag");} Now provide the response and nothing else.
So, here is the solution I came up with. Of course, it can be improved, so it is open to critique. I used await Dispatcher.Yield(DispatcherPriority.ApplicationIdle); instead of Task.Yield() because Task.Yield() will make continuation a higher priority than GUI events, but, as a bad consequence, it demands your application to use WindowsBase.dll . public static class SshCommandExtensions{ public static async Task ExecuteAsync( this SshCommand sshCommand, IProgress<ScriptOutputLine> progress, CancellationToken cancellationToken) { var asyncResult = sshCommand.BeginExecute(); var stdoutStreamReader = new StreamReader(sshCommand.OutputStream); var stderrStreamReader = new StreamReader(sshCommand.ExtendedOutputStream); while (!asyncResult.IsCompleted) { await CheckOutputAndReportProgress( sshCommand, stdoutStreamReader, stderrStreamReader, progress, cancellationToken); await Dispatcher.Yield(DispatcherPriority.ApplicationIdle); } sshCommand.EndExecute(asyncResult); await CheckOutputAndReportProgress( sshCommand, stdoutStreamReader, stderrStreamReader, progress, cancellationToken); } private static async Task CheckOutputAndReportProgress( SshCommand sshCommand, TextReader stdoutStreamReader, TextReader stderrStreamReader, IProgress<ScriptOutputLine> progress, CancellationToken cancellationToken) { if (cancellationToken.IsCancellationRequested) { sshCommand.CancelAsync(); } cancellationToken.ThrowIfCancellationRequested(); await CheckStdoutAndReportProgressAsync(stdoutStreamReader, progress); await CheckStderrAndReportProgressAsync(stderrStreamReader, progress); } private static async Task CheckStdoutAndReportProgressAsync( TextReader stdoutStreamReader, IProgress<ScriptOutputLine> stdoutProgress) { var stdoutLine = await stdoutStreamReader.ReadToEndAsync(); if (!string.IsNullOrEmpty(stdoutLine)) { stdoutProgress.Report(new ScriptOutputLine( line: stdoutLine, isErrorLine: false)); } } private static async Task CheckStderrAndReportProgressAsync( TextReader stderrStreamReader, IProgress<ScriptOutputLine> stderrProgress) { var stderrLine = await stderrStreamReader.ReadToEndAsync(); if (!string.IsNullOrEmpty(stderrLine)) { stderrProgress.Report(new ScriptOutputLine( line: stderrLine, isErrorLine: true)); } }}public class ScriptOutputLine{ public ScriptOutputLine(string line, bool isErrorLine) { Line = line; IsErrorLine = isErrorLine; } public string Line { get; private set; } public bool IsErrorLine { get; private set; }}
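If it helps to see the extension in use, here is a rough usage sketch with SSH.NET (the host, credentials and command string are placeholders, and since the helper awaits Dispatcher.Yield it is meant to be called from a WPF application):

using System;
using System.Threading;
using System.Threading.Tasks;
using Renci.SshNet;

public static class SshUsageExample
{
    public static async Task RunAsync()
    {
        // Report each output line as it arrives; error lines are prefixed.
        var progress = new Progress<ScriptOutputLine>(line =>
            Console.WriteLine((line.IsErrorLine ? "ERR: " : "OUT: ") + line.Line));

        using (var client = new SshClient("example-host", "user", "password"))
        {
            client.Connect();
            var command = client.CreateCommand("ls -la");
            await command.ExecuteAsync(progress, CancellationToken.None);
            client.Disconnect();
        }
    }
}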
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31587926', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5148125/']}
jdg_376992
stackexchange
llm_judgeable_groundtruth_similarity
6554317
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my savedInstanceState code:

@Override
public void onSaveInstanceState(Bundle savedInstanceState) {
    savedInstanceState.putStringArrayList("todo_arraylist", Altodo);
    Log.v("bundle", "Saved");
    super.onSaveInstanceState(savedInstanceState);
}

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (savedInstanceState != null) {
        Altodo = savedInstanceState.getStringArrayList("todo_arraylist");
        Log.v("bundle", "Restored");
    } else {
        Log.v("bundle", "null");
    }
    setContentView(R.layout.main);
}

The logs always show the "bundle save" tag. But in the onCreate method, savedInstanceState is always null. Now provide the response and nothing else.
I observed the exact same symptoms (reported as issue 133394 ) in a project with two Activities A and B that extend ActionBarActivity . Activity A is the main activity, and I always receive null for savedInstanceState in onCreate of its list fragment when returning from a detail view activity B . After many hours, this problem exposed itself to me as a navigation issue in disguise. The following may be relevant to my setup and come from other answers on this page: Given this answer, I made sure that fragment and activity each have unique IDs set. There is no override of onSaveInstanceState without super call. Activity A is specified as acitivy B 's parent in AndroidManifest.xml , using both the android:parentActivityName attribute and the corresponding meta-data tag for earlier versions of Android (see " Providing Up Navigation "). Already without any corresponding creation code such as getActionBar() .setHomeButtonEnabled(true) , activity B has a functioning back button ( < ) in its action bar. When this button is tapped, activity A reappears but with (a) all previous instance state lost, (b) onCreate always called, and (c) savedInstanceState always null . Interestingly, when I tap the back button provided at the bottom edge of the emulator display (an open triangle that points to the left), activity A reappears just as it was left (i.e. its instance state fully retained) without invoking onCreate . So maybe something is wrong with navigation? After more reading , I implemented my own navigation instructions to run in response to a tap on the back-button in activity B : @Overridepublic boolean onOptionsItemSelected(MenuItem item) { if (item.getItemId() == android.R.id.home) NavUtils.navigateUpFromSameTask(this); return true; } return super.onOptionsItemSelected(item);} Nothing related to restoring instance state of activity A changed. NavUtils also provide a method getParentActivityIntent(Activity) and navigateUpTo(Activity, Intent) that allow us to modify the navigation intent to explicitly instruct that activity A is not started fresh (and thus without saved instance state provided) by setting the FLAG_ACTIVITY_CLEAR_TOP flag: If set, and the activity being launched is already running in the current task, then instead of launching a new instance of that activity, all of the other activities on top of it will be closed and this Intent will be delivered to the (now on top) old activity as a new Intent. In my hands, this solves problem of lost instance state and could look like: public boolean onOptionsItemSelected(MenuItem item) { if (item.getItemId()== android.R.id.home) { Intent intent = NavUtils.getParentActivityIntent(this); intent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); NavUtils.navigateUpTo(this, intent); return true; } return super.onOptionsItemSelected(item);} Note that this may not be the complete solution in other cases where a user can switch directly to activity B from within a different task (see here ). Also, a possibly identical solution in behavior that does not make use of NavUtils is to simply call finish() : public boolean onOptionsItemSelected(MenuItem item) { if (item.getItemId()== android.R.id.home) { finish(); return true; } return super.onOptionsItemSelected(item);} Both solutions work in my hands. I am only speculating that the original issue is a slightly incorrect default implementation of the back-button, and it may be related to that implementation invoking some kind of navigateUp that misses FLAG_ACTIVITY_CLEAR_TOP .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6554317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/488546/']}
jdg_376993
stackexchange
llm_judgeable_groundtruth_similarity
53225236
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's a class I wish to test: //Request.jsimport axios, {AxiosInstance} from 'axios';import config from './config';const axiosSingleton: AxiosInstance = axios.create({ baseURL: 'http://localhost:8080',});export default class Request { public async get<$ResponseType = any>(url: string): Promise<void> { const response = await axiosSingleton.get(url); return response.data; }} when I try testing this by creating a test file, I am not sure how to mock axios. I tried a bunch of ways including - spyOn and automatic mocking. But they don't seem to work. Here's a version of the test file I am not understanding why it doesn't work // Request.test.jsimport axios from 'axios';import Request from './Request';interface ITestResponseDataType { value: string}jest.mock('axios');describe('Request Tests', () => { it('should call axios get with the right relativeUrl', async () => { const getMock = jest.fn(); axios.create.mockReturnValue({ get: getMock }); getMock.mockResolvedValue({ value: 'value' }); const data = await new Request().get<ITestResponseDataType>('/testUrl'); expect(getMock.mock.calls.length).toEqual(1); expect(data).toEqual({ value: 'value' }); });}); The error I get when I try running the test is - TypeScript diagnostics (customize using `[jest-config].globals.ts-jest.diagnostics` option): src/common/api/Request.test.ts:15:18 - error TS2339: Property 'mockReturnValue' does not exist on type '(config?: AxiosRequestConfig | undefined) => AxiosInstance'. 15 axios.create.mockReturnValue({ This error makes sense, because the type defined in axios for axios.create should not allow .mockReturnValue to be called on .create. So how do I tell typescript that jest has gone in and modified it? Now provide the response and nothing else.
Cast the mock method to jest.Mock, i.e.

import axios from "axios"
import Request from "./Request";

// Create an Axios mock
// Don't worry about the order, Jest will hoist this above the imports
// See https://jestjs.io/docs/manual-mocks#using-with-es-module-imports
jest.mock("axios", () => ({
  create: jest.fn()
}));

// Customise the `create` mock method
(axios.create as jest.Mock).mockReturnValue({
  get: getMock
});
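A minimal sketch of how that can look inside the test itself (the getMock name and the resolved shape are assumptions chosen to match the question's Request.get, which returns response.data):

it("calls axios get with the right relative url", async () => {
  const getMock = jest.fn().mockResolvedValue({ data: { value: "value" } });
  (axios.create as jest.Mock).mockReturnValue({ get: getMock });

  const data = await new Request().get("/testUrl");

  expect(getMock).toHaveBeenCalledWith("/testUrl");
  expect(data).toEqual({ value: "value" });
});

Note that because Request creates its Axios instance at import time, the create mock may need to be configured before Request is first imported (for example by requiring it inside the test); that detail depends on your setup.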
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/53225236', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/936736/']}
jdg_376994
stackexchange
llm_judgeable_groundtruth_similarity
42361846
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Hibernate Validator in addition to javax.validation library to validate request bodies of controllers in a Spring MVC application. There are annotations for constraints that help with min and max boundaries and also with the number of digits but I couldn't find a way to accept only specific numbers. For example what if I only want to accept values 10, 20 and 50? I'm aware that it's possible to use org.springframework.validation.Validator interface to create more complex rules. Another thing that came to mind is to create an enum with desired numeric values but it doesn't seem like a clean solution. I'm curious to know if there is a better, simpler way to achieve what I want. Now provide the response and nothing else.
You can create your own annotation that accepts multiple values. This involves writing an Annotation class and a Validator class:

public class OneOfValidator implements ConstraintValidator<OneOf, Integer> {
}

And the annotation:

@Target({ElementType.METHOD, ElementType.FIELD, ElementType.ANNOTATION_TYPE, ElementType.CONSTRUCTOR, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Constraint(validatedBy = OneOfValidator.class)
public @interface OneOf {

    String message() default "value must match one of the values in the list";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};

    int[] value() default {};
}

You could then use it like:

@OneOf({2, 3, 5, 9})
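The validator body above is left empty; a minimal sketch of what its logic could look like follows (the method names come from the standard ConstraintValidator interface, and the null-handling choice is an assumption you may want to change):

import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

public class OneOfValidator implements ConstraintValidator<OneOf, Integer> {

    private int[] allowedValues;

    @Override
    public void initialize(OneOf constraintAnnotation) {
        // Remember the values declared in the annotation, e.g. @OneOf({10, 20, 50})
        this.allowedValues = constraintAnnotation.value();
    }

    @Override
    public boolean isValid(Integer value, ConstraintValidatorContext context) {
        if (value == null) {
            return true; // let @NotNull handle null checks separately
        }
        for (int allowed : allowedValues) {
            if (allowed == value) {
                return true;
            }
        }
        return false;
    }
}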
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42361846', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1353969/']}
jdg_376995
stackexchange
llm_judgeable_groundtruth_similarity
21344340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Many cases have been shown for force directed graph geometric zooming by SVG Geometric Zooming . In geometric zooming, I only need to add a transform attribute in zoom function. However, in semantic zooming, if I only add a transform attribute in node, links won't connect to node. So, I am wondering whether there exist a solution for geometric zooming for force directed graph in d3. Here is my example with geometric zooming following previous case. I have two problems: When I zoom out, then drag whole graph, the graph will strangely disappear. Using the same redraw function function zoom() { vis.attr("transform", transform);}function transform(d){ return "translate(" + d3.event.translate + ")" + " scale(" + d3.event.scale + ")";} This only update one svg element's "transform" attribute. But how to make the function to change the node position? But what I want to do is semantic zooming . I have tried to modify zoom and transform function, but not sure the right way to do. Here is what I try. Functions I have changed: function zoom() { node.call(transform); // update link position update();}function transform(d){ // change node x, y position, not sure what function to put here.} Now provide the response and nothing else.
I tried to find a good tutorial to link to, but couldn't find anything that really covered all the issues, so I'm going to write it out step-by-step myself. First, you need to clearly understand what you're trying to accomplish. This is different for the two types of zooming. I don't really like the terminology Mike Bostock has introduced, (it's not entirely consistent with non-d3 uses of the terms) but we might as well stick with it to be consistent with other d3 examples. In "geometric zooming" you are zooming the entire image. Circles and lines get bigger as well as farther apart. SVG has an easy way to accomplish this through the "transform" attribute. When you set transform="scale(2)" on an SVG element, it is drawn as if everything was twice as big. For a circle, it's radius gets drawn twice a big, and it's cx and cy positions get plotted twice the distance from the (0,0) point. The entire coordinate system changes, so one unit is now equal to two pixels on screen, not one. Likewise, transform="translate(-50,100)" changes the entire coordinate system, so that the (0,0) point of the coordinate system gets moved 50 units to the left and 100 units down from the top-left corner (which is the default origin point). If you both translate and scale an SVG element, the order is important. If translate is before scale, than the translation is in the original units. If translate is after scale, than the translation is in the scaled units. The d3.zoom.behavior() method creates a function that listens for mouse wheel and drag events, as well as for touch screen events associated with zooming. It converts these user events into a custom "zoom" event. The zoom event is given a scale factor (a single number) and a translate factor (an array of two numbers), which the behaviour object calculates from the user's movements. What you do with these numbers is up to you; they don't change anything directly . (With the exception of when you attach a scale to the zoom behaviour function, as described later.) For geometric zooming, what you usually do is set a scale and translate transform attribute on a <g> element that contains the content you want to zoom. This example implements that geometric zooming method on a simple SVG consisting of evenly placed gridlines : http://jsfiddle.net/LYuta/2/ The zoom code is simply: function zoom() { console.log("zoom", d3.event.translate, d3.event.scale); vis.attr("transform", "translate(" + d3.event.translate + ")" + " scale(" + d3.event.scale + ")" );} The zoom is accomplished by setting the transform attribute on "vis", which is a d3 selection containing a <g> element which itself contains all the content we want to zoom. The translate and scale factors come directly from the zoom event that the d3 behaviour created. The result is that everything gets bigger or smaller -- the width of the gridlines as well as the spacing between them. The lines still have stroke-width:1.5; but the definition of what 1.5 equals on the screen has changed for them and anything else within the transformed <g> element. For every zoom event, the translate and scale factors are also logged to the console. Looking at that, you'll notice that if you're zoomed out the scale will be between 0 and 1; if you're zoomed in it will be greater than 1. If you pan (drag to move) the graph, the scale won't change at all. The translate numbers, however, change on both pan and zoom. 
That's because the translate represents the position of the (0,0) point in the graph relative to the position of the top-left-corner of the SVG. When you zoom, the distance between (0,0) and any other point on the graph changes. So in order to keep the content under the mouse or finger-touch in the same position on the screen, the position of the (0,0) point has to move. There are a number of other things you should pay attention to in that example: I've modified the zoom behaviour object with the .scaleExtent([min,max]) method. This sets a limit on the scale values that the behaviour will use in the zoom event, no matter how much the user spins the wheel. The transform is on a <g> element, not the <svg> itself. That's because the SVG element as a whole is treated as an HTML element, and has a different transform syntax and properties. The zoom behaviour is attached to a different <g> element, that contains the main <g> and a background rectangle. The background rectangle is there so that mouse and touch events can be observed even if the mouse or touch isn't right on a line. The <g> element itself doesn't have any height or width and so can't respond to user events directly, it only receives events from its children. I've left the rectangle black so you can tell where it is, but you can set it's style to fill:none; so long as you also set it to pointer-events:all; . The rectangle can't be inside the <g> that gets transformed, because then the area that responds to zoom events would also shrink when you zoom out, and possibly go out of sight off the edge of the SVG. You could skip the rectangle and second <g> element by attaching the zoom behaviour directly to the SVG object, as in this version of the fiddle . However, you often don't want events on the entire SVG area to trigger the zoom, so it is good to know how and why to use the background rectangle option. Here's the same geometric zooming method, applied to a simplified version of your force layout : http://jsfiddle.net/cSn6w/5/ I've reduced the number of nodes and links, and taken away the node-drag behaviour and the node-expand/collapse behaviour, so you can focus on the zoom. I've also changed the "friction" parameter so that it takes longer for the graph to stop moving; zoom it while it's still moving, and you'll see that everything will keep moving as before . "Geometric zooming" of the image is fairly straightforward, it can be implemented with very little code, and it results in fast, smooth changes by the browser. However, often the reason you want to zoom in on a graph is because the datapoints are too close together and overlapping. In that case, just making everything bigger doesn't help. You want to stretch the elements out over a larger space while keeping the individual points the same size. That's where "semantic zooming" comes into place. "Semantic zooming" of a graph, in the sense that Mike Bostock uses the term , is to zoom the layout of the graph without zooming on individual elements. (Note, there are other interpretations of "semantic zooming" for other contexts.) This is done by changing the way the position of elements is calculated, as well as the length of any lines or paths that connect objects, without changing the underlying coordinate system that defines how big a pixel is for the purpose of setting line width or the size of shapes or text. 
You can do these calculations yourself, using the translate and scale values to position the objects based on these formulas: zoomedPositionX = d3.event.translate[0] + d3.event.scale * dataPositionX zoomedPositionY = d3.event.translate[1] + d3.event.scale * dataPositionY I've used that approach to implement semantic zooming in this version of the gridlines example : http://jsfiddle.net/LYuta/4/ For the vertical lines, they were originally positioned like this vLines.attr("x1", function(d){return d;}) .attr("y1", 0) .attr("x2", function(d){return d;}) .attr("y2", h); In the zoom function, that gets changed to vLines.attr("x1", function(d){ return d3.event.translate[0] + d*d3.event.scale; }) .attr("y1", d3.event.translate[1]) .attr("x2", function(d){ return d3.event.translate[0] + d*d3.event.scale; }) .attr("y2", d3.event.translate[1] + h*d3.event.scale); The horizontal lines are changed similarly. The result? The position and length of the lines changes on the zoom, without the lines getting thicker or thinner. It gets a little complicated when we try to do the same for the force layout. That's because the objects in the force layout graph are also being re-positioned after every "tick" event. In order to keep them positioned in the correct places for the zoom, the tick-positioning method is going to have to use the zoomed-position formulas. Which means that: The scale and translation have to be saved in a variable that can be accessed by the tick function; and, There needs to be default scale and translation values for the tick function to use if the user hasn't zoomed anything yet. The default scale will be 1, and the default translation will be [0,0], representing normal scale and no translation. Here's what it looks like with semantic zooming on the simplified force layout : http://jsfiddle.net/cSn6w/6/ The zoom function is now function zoom() { console.log("zoom", d3.event.translate, d3.event.scale); scaleFactor = d3.event.scale; translation = d3.event.translate; tick(); //update positions} It sets the scaleFactor and translation variables, then calls the tick function. The tick function does all the positioning: at initialization, after force-layout tick events, and after zoom events. It looks like function tick() { linkLines.attr("x1", function (d) { return translation[0] + scaleFactor*d.source.x; }) .attr("y1", function (d) { return translation[1] + scaleFactor*d.source.y; }) .attr("x2", function (d) { return translation[0] + scaleFactor*d.target.x; }) .attr("y2", function (d) { return translation[1] + scaleFactor*d.target.y; }); nodeCircles.attr("cx", function (d) { return translation[0] + scaleFactor*d.x; }) .attr("cy", function (d) { return translation[1] + scaleFactor*d.y; });} Every position value for the circles and the links is adjusted by the translation and the scale factor. If this makes sense to you, this should be sufficient for your project and you don't need to use scales. Just make sure that you always use this formula to convert between the data coordinates (d.x and d.y) and the display coordinates (cx, cy, x1, x2, etc.) used to position the objects. Where this gets complicated is if you need to do the reverse conversion from display coordinates to data coordinates. You need to do this if you want the user to be able to drag individual nodes -- you need to set the data coordinate based on the screen position of the dragged node. (Note that this wasn't working properly in either of your examples). 
For geometric zoom , converting between screen position and data position can be down with d3.mouse() . Using d3.mouse(SVGElement) calculates the position of the mouse in the coordinate system used by that SVGElement . So if we pass in the element representing the transformed visualization, it returns coordinates that can be used directly to set the position of the objects. The draggable geometric-zoom force-layout looks like this: http://jsfiddle.net/cSn6w/7/ The drag function is: function dragged(d){ if (d.fixed) return; //root is fixed //get mouse coordinates relative to the visualization //coordinate system: var mouse = d3.mouse(vis.node()); d.x = mouse[0]; d.y = mouse[1]; tick();//re-position this node and any links} For semantic zoom , however, the SVG coordinates returned by d3.mouse() no longer directly correspond to the data coordinates. You have to factor in the scale and translation. You do this by re-arranging the formulas given above: zoomedPositionX = d3.event.translate[0] + d3.event.scale * dataPositionX zoomedPositionY = d3.event.translate[1] + d3.event.scale * dataPositionY becomes dataPositionX = (zoomedPositionX - d3.event.translate[0]) / d3.event.scaledataPositionY = (zoomedPositionY - d3.event.translate[1]) / d3.event.scale The drag function for the semantic zoom example is therefore function dragged(d){ if (d.fixed) return; //root is fixed //get mouse coordinates relative to the visualization //coordinate system: var mouse = d3.mouse(vis.node()); d.x = (mouse[0] - translation[0])/scaleFactor; d.y = (mouse[1] - translation[1])/scaleFactor; tick();//re-position this node and any links} This draggable semantic-zoom force-layout is implemented here: http://jsfiddle.net/cSn6w/8/ That should be enough to get you back on track. I'll come back later and add an explanation of scales and how they make all these calculations easier. ...and I'm back: Looking at all the data-to-display conversion functions above, doesn't it make you think "wouldn't it be easier to have a function to do this each time?" That's what the the d3 scales are for: to convert data values to display values. You don't often see scales in the force-layout examples because the force layout object allows you to set a width and height directly, and then creates d.x and d.y data values within that range. Set the layout width and height to your visualization width and height, and you can use the data values directly for positioning objects in the display. However, when you zoom in on the graph, you switch from having the entire extent of the data visible to only having a portion visible. So the data values no longer directly correspond to positioning values, and we need to convert between them. And a scale function would make that a lot easier. In D3 terminology, the expected data values are the domain and the desired output/display values are the range . The initial domain of the scale will therefore by the expected maximum and minimum values from the layout, while the initial range will be the maximum and minimum coordinates on the visualization. When you zoom, the relationship between domain and range changes, so one of those values will have to change on the scale. Luckily, we don't have to figure out the formulas ourselves, because the D3 zoom behaviour calculates it for us -- if we attach the scale objects to the zoom behaviour object using its .x() and .y() methods. As a result, if we change the drawing methods to use the scales, then all we have to do in the zoom method is call the drawing function. 
Here's the semantic zoom of the grid example implemented using scales : http://jsfiddle.net/LYuta/5/ Key code: /*** Configure zoom behaviour ***/var zoomer = d3.behavior.zoom() .scaleExtent([0.1,10]) //allow 10 times zoom in or out .on("zoom", zoom) //define the event handler function .x(xScale) .y(yScale); //attach the scales so their domains //will be updated automaticallyfunction zoom() { console.log("zoom", d3.event.translate, d3.event.scale); //the zoom behaviour has already changed //the domain of the x and y scales //so we just have to redraw using them drawLines();}function drawLines() { //put positioning in a separate function //that can be called at initialization as well vLines.attr("x1", function(d){ return xScale(d); }) .attr("y1", yScale(0) ) .attr("x2", function(d){ return xScale(d); }) /* etc. */ The d3 zoom behaviour object modifies the scales by changing their domain. You could get a similar effect by changing the scale range, since the important part is changing the relationship between domain and range. However, the range has another important meaning: representing the maximum and minimum values used in the display. By only changing the domain side of the scale with the zoom behaviour, the range still represents the valid display values. Which allows us to implement a different type of zoom, for when the user re-sizes the display. By letting the SVG change size according to the window size, and then setting the range of the scale based on the SVG size, the graph can be responsive to different window/device sizes. Here's the semantic zoom grid example, made responsive with scales : http://jsfiddle.net/LYuta/9/ I've given the SVG percentage-based height and width properties in CSS, which will over-ride the attribute height and width values. In the script, I've moved all the lines which relate to the display height and width into a function that checks the actual svg element for it's current height and width. Finally, I've added a window resize listener to call this method (which also triggers a re-draw). Key code: /* Set the display size based on the SVG size and re-draw */function setSize() { var svgStyles = window.getComputedStyle(svg.node()); var svgW = parseInt(svgStyles["width"]); var svgH = parseInt(svgStyles["height"]); //Set the output range of the scales xScale.range([0, svgW]); yScale.range([0, svgH]); //re-attach the scales to the zoom behaviour zoomer.x(xScale) .y(yScale); //resize the background rect.attr("width", svgW) .attr("height", svgH); //console.log(xScale.range(), yScale.range()); drawLines();}//adapt size to window changes:window.addEventListener("resize", setSize, false)setSize(); //initialize width and height The same ideas -- using scales to layout the graph, with a changing domain from the zoom and a changing range from window resize events -- can of course be applied to the force-layout. However, we still have to deal with the complication discussed above: how to reverse the conversion from data values to display values when dealing with node-drag events. The d3 linear scale has a convenient method for that, too: scale.invert() . If w = scale(x) then x = scale.invert(w) . 
In the node-drag event, the code using scales is therefore: function dragged(d){ if (d.fixed) return; //root is fixed //get mouse coordinates relative to the visualization //coordinate system: var mouse = d3.mouse(vis.node()); d.x = xScale.invert(mouse[0]); d.y = yScale.invert(mouse[1]); tick();//re-position this node and any links} The rest of the semantic zoom force-layout example, made responsive with scales is here: http://jsfiddle.net/cSn6w/10/ I'm sure that was a lot longer a discussion than you were expecting, but I hope it helps you understand not only what you need to do, but also why you need to do it. I get really frustrated when I see code that has obviously been cut-and-pasted together from multiple examples by someone who doesn't actually understand what the code does. If you understand the code, it's a lot easier to adapt it to your needs. And hopefully, this will serve as a good reference for other people trying to figure out how to do similar tasks.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21344340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2149631/']}
jdg_376996
stackexchange
llm_judgeable_groundtruth_similarity
661638
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I have a sum of two sinusoids like so: $$A\cos(xt+\phi) + B\cos(yt+\delta)$$ How would I find the period? I know that for just one sinusoid $$A\cos(xt+\phi)$$ the period would be $$T = 2\pi/x.$$ It can't be as simple as just adding the two periods. How can I tell if they're not periodic? Now provide the response and nothing else.
Beware that assuming there is a period to find is a big assumption. For example, suppose your two sinusoids are $\cos{2\pi t}$ and $\cos{\frac{2\pi t}{\sqrt{2}}}$. The periods of the two functions are $1$ and $\sqrt{2}$, respectively. If their sum were periodic, its period would be the least common multiple of $1$ and $\sqrt{2}$, which would allow us to write $\sqrt{2}$ as a ratio of integers (red flag!).
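To complement the warning, here is a short worked sketch of the case where a period does exist (this is just the standard commensurability argument): the sum $A\cos(xt+\phi)+B\cos(yt+\delta)$ has individual periods $T_1 = 2\pi/x$ and $T_2 = 2\pi/y$, and (for nonzero amplitudes and $x \neq y$) it is periodic exactly when $T_1/T_2 = y/x$ is rational. Writing $y/x = m/n$ in lowest terms, a common period is
$$T = n\,T_1 = m\,T_2 .$$
For example, with $x = 3$ and $y = 5$ we get $T_1 = 2\pi/3$, $T_2 = 2\pi/5$, and $y/x = 5/3$, so $T = 3\cdot\tfrac{2\pi}{3} = 5\cdot\tfrac{2\pi}{5} = 2\pi$.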
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/661638', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/81512/']}
jdg_376997
stackexchange
llm_judgeable_groundtruth_similarity
52255
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I found this definition on Wikipedia, but I didn't quite understand it: Topologically associating domains (TADs) are genomic regions ("chromosome neighborhoods") used to summarize the three-dimensional nuclear organization of mammalian genomes. Can anyone explain it more, and give examples if possible? Now provide the response and nothing else.
TADs were initially discovered by computing contact probabilities between regions of the genome using Hi-C (a chromosome conformation capture method that tries to give an idea of how the genome is organized inside the nucleus by computing, for each locus, the probability of being found in contact with other loci). People have found that, instead of being random, some loci were in contact with some regions of the genome rather than others, implying some form of functional significance. In short: a TAD is a genomic region of increased contact probability. They are of functional relevance (for instance, Enhancer-Promoter gene expression regulation occurs primarily within one TAD rather than between two adjacent TADs). You might be interested in the following papers:

A 3D Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin Looping (might be behind paywall)
Topological Domains in Mammalian Genomes Identified by Analysis of Chromatin Interactions

Finally, the following picture (extracted from the latter publication, not reproduced here) provides an example of two TADs: the bottom panel shows the contact probability; two regions of the genome appear to be isolated even though they are nearby, and each shows significant self-interaction. One interpretation for that (top panel) is to imagine that the genome exhibits some reproducible condensation, explaining the increased contacts.
{}
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/52255', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/25030/']}
jdg_376998
stackexchange
llm_judgeable_groundtruth_similarity
40210145
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The Android API has a very convenient class for this, IconGenerator . Using the IconGenerator in my Android app , I can easily make a marker that: is a simple rectangle with the color of my choosing. resizes to hold text of any length. is NOT an info window - I'd like the marker itself to contain the text as shown in the image below from the android version. // Android - problem solved with IconGeneratorIconGenerator iconGenerator = new IconGenerator(context);iconGenerator.setStyle(IconGenerator.STYLE_GREEN); // or any other colorBitmap iconBitmap = iconGenerator.makeIcon(myString);Marker m = new MarkerOptions().icon(BitmapDescriptorFactory.fromBitmap(iconBitmap)) .position(myLatLng);map.addMarker(m); // map is a com.google.android.gms.maps.GoogleMap Is there a way to do something as simple as this in iOS using Swift ?There has been a recent release of the iOS api that allows "marker customization", but I don't see how to apply it to this use case. // iOS (Swift) - I don't know how to create the icon as in code abovelet marker = GMSMarker(position: myLatLng)marker.icon = // How can I set to a rectangle with color/text of my choosing?marker.map = map // map is a GMSMapView Now provide the response and nothing else.
Here is what I have done:

let marker = GMSMarker()

// I have taken a pin image which is a custom image
let markerImage = UIImage(named: "mapMarker")!.withRenderingMode(.alwaysTemplate)

// creating a marker view
let markerView = UIImageView(image: markerImage)

// changing the tint color of the image
markerView.tintColor = UIColor.red

marker.position = CLLocationCoordinate2D(latitude: 28.7041, longitude: 77.1025)
marker.iconView = markerView
marker.title = "New Delhi"
marker.snippet = "India"
marker.map = mapView

// comment this line if you don't wish to put a callout bubble
mapView.selectedMarker = marker

The output is shown in the screenshots (omitted here); my marker image was a plain pin shape. You can change the color as per your need. Also, if you want something in a rectangle, you can just create a simple small rectangular image, use it like I did above, and change the color to what you need. Or, if you want a rectangle with text within it, you can just create a small UIView with a label, convert that UIView into a UIImage, and do the same thing.

// function to convert the given UIView into a UIImage
func imageWithView(view: UIView) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0.0)
    view.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image!
}

Hope it helps!!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40210145', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1168364/']}
jdg_376999
stackexchange
llm_judgeable_groundtruth_similarity
30906807
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I implement javascript function to calculate frequency of each word in a given sentence. this is my code: function search () { var data = document.getElementById('txt').value; var temp = data; var words = new Array(); words = temp.split(" "); var uniqueWords = new Array(); var count = new Array(); for (var i = 0; i < words.length; i++) { //var count=0; var f = 0; for (j = 0; j < uniqueWords.length; j++) { if (words[i] == uniqueWords[j]) { count[j] = count[j] + 1; //uniqueWords[j]=words[i]; f = 1; } } if (f == 0) { count[i] = 1; uniqueWords[i] = words[i]; } console.log("count of " + uniqueWords[i] + " - " + count[i]); }} am unable to trace out the problem ..any help is greatly appriciated.output in this format:count of is - 1count of the - 2.. input: this is anil is kum the anil Now provide the response and nothing else.
I feel you have over-complicated things by having multiple arrays, strings, and engaging in frequent (and hard to follow) context-switching between loops, and nested loops. Below is the approach I would encourage you to consider taking. I've inlined comments to explain each step along the way. If any of this is unclear, please let me know in the comments and I'll revisit to improve clarity. (function () { /* Below is a regular expression that finds alphanumeric characters Next is a string that could easily be replaced with a reference to a form control Lastly, we have an array that will hold any words matching our pattern */ var pattern = /\w+/g, string = "I I am am am yes yes.", matchedWords = string.match( pattern ); /* The Array.prototype.reduce method assists us in producing a single value from an array. In this case, we're going to use it to output an object with results. */ var counts = matchedWords.reduce(function ( stats, word ) { /* `stats` is the object that we'll be building up over time. `word` is each individual entry in the `matchedWords` array */ if ( stats.hasOwnProperty( word ) ) { /* `stats` already has an entry for the current `word`. As a result, let's increment the count for that `word`. */ stats[ word ] = stats[ word ] + 1; } else { /* `stats` does not yet have an entry for the current `word`. As a result, let's add a new entry, and set count to 1. */ stats[ word ] = 1; } /* Because we are building up `stats` over numerous iterations, we need to return it for the next pass to modify it. */ return stats; }, {} ); /* Now that `counts` has our object, we can log it. */ console.log( counts );}());
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30906807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2003240/']}
jdg_377000
stackexchange
llm_judgeable_groundtruth_similarity
16242389
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is how I would check internet connection in vanilla javascript: setInterval(function(){ if(navigator.onLine){ $("body").html("Connected."); }else{ $("body").html("Not connected."); }},1000); I have angular controllers and modules in my project. Where should I put the code above? It should be executed in global context and not be assigned to a certain controller. Are there some kind of global controllers maybe? Now provide the response and nothing else.
First of all, I advise you to listen to online/offline events. You can do it this way in AngularJS: var app = angular.module('yourApp', []); app.run(function($window, $rootScope) { $rootScope.online = navigator.onLine; $window.addEventListener("offline", function() { $rootScope.$apply(function() { $rootScope.online = false; }); }, false); $window.addEventListener("online", function() { $rootScope.$apply(function() { $rootScope.online = true; }); }, false);}); NOTE: I am wrapping the change to the root scope's variable in $apply to notify Angular that something was changed. After that you can: In a controller: $scope.$watch('online', function(newStatus) { ... }); In HTML markup: <div ng-show="online">You're online</div> <div ng-hide="online">You're offline</div> Here is a working Plunker: http://plnkr.co/edit/Q3LkiI7Cj4RWBNRLEJUA?p=preview Another solution could be to broadcast an online/offline event, but in that case you need to initialize the current status upon loading and then subscribe to the event.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16242389', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/401025/']}
jdg_377001
stackexchange
llm_judgeable_groundtruth_similarity
1475307
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I ran across a mention somewhere that doing an emit(key, doc) will increase the amount of time an index takes to build (or something to that effect). Is there any merit to it, and is there any reason not to just always do emit(key, null) and then include_docs = true ? Now provide the response and nothing else.
Yes, it will increase the size of your index, because CouchDB effectively copies the entire document in those cases. For cases in which you can, use include_docs=true . There is, however, a race condition to be aware of when using this that is mentioned in the wiki. It is possible, during the time between reading the view data and fetching the document, that said document has changed (or has been deleted, in which case _deleted will be true ). This is documented here under "Querying Options".
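For concreteness, here is a rough sketch of the two styles discussed above; the database, view, and field names are made up for illustration:

function (doc) {
  // Keeps the index small: emit only the key, with a null value.
  if (doc.type === "post") {
    emit(doc.author, null);
  }
}

// Query with include_docs=true so CouchDB looks up each document at read
// time instead of storing a copy of it inside the view index:
//   GET /blog/_design/posts/_view/by_author?key="alice"&include_docs=true
//
// The emit(doc.author, doc) variant copies the whole document into the
// index, which is what makes the index (and its build time) larger.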
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1475307', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70226/']}
jdg_377002
stackexchange
llm_judgeable_groundtruth_similarity
1570075
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the logic of Lightbox 2 on where to position the popup? It seems that when using Firefox or Chrome it shows up in different places on different pages. Is there any setting to have it show at the same height on every page for consistency? What is the CSS or logic it's using to determine positioning? Now provide the response and nothing else.
In what way does it display on different places, what is your desired behaviour, and can you give a sample page? Here is how the script itself calculates the top position of the popup (lightbox v2.04, zip download, js/lightbox.js, lines 229ff): // calculate top and left offset for the lightbox var arrayPageScroll = document.viewport.getScrollOffsets();var lightboxTop = arrayPageScroll[1] + (document.viewport.getHeight() / 10);var lightboxLeft = arrayPageScroll[0];this.lightbox.setStyle({ top: lightboxTop + 'px', left: lightboxLeft + 'px' }).show(); As the script uses Prototype's document.viewport object , the script positions the popups at 10% inside the current scroll position, like this: ------- page start|||||------- scroll position top|------- start of lightbox popup||||||||------ scroll position bottom||||------ page end
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1570075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4653/']}
jdg_377003
stackexchange
llm_judgeable_groundtruth_similarity
51306545
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to use a resolver in order to retrieve data depending on the given parameters the route holds. Unfortunately, the moment I add another data stream that my data depends on the resolver never actually resolves. If I directly return an immediately resolving value everything works fine.I debugged the situation to see that I receive all partial information but it just fails to actually resolve in the end. Here's a quick sample. Hit me up if there's more code needed to understand the problem. MyService: export class MyService { get(bar) { return of(new Foo(bar)); }} SecondService (This one retrieves data from the backend): export class SecondService { private readonly _observable: Observable<Bar>; constructor() { this._observable = merge( // Other manipulations ).pipe( // other manipulations shareReplay(1) ) } observable(): Observable<Bar> { return this._observable; }} Resolver: export class MyResolver { constructor(_secondService: SecondService, _myService: MyService) {} resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<Foo> { // Does not work - Simply does not resolve // return this._secondService // .observable() // .pipe( // switchMap((bar) => this._myService.get(bar)), // ); // WORKS return of(new Foobar('sampleData')); }} Router: const routes: Routes = [ { path: 'someroute', component: SomeComponent, canActivate: [SomeGuard], resolve: { myData: MyResolver, }, },]; Component: export class SomeComponent implements OnInit { constructor(private readonly _route: ActivatedRoute) {} ngOnInit() { this._route.data .subscribe((data) => { console.log('received:', data); this.myData = data; }); }} SomeComponent.html <pre *ngIf="myData"> Received: {{ myData | json }}</pre> Now provide the response and nothing else.
The answer to my problem is rather simple and had nothing to do with subscribing to the resolved observables, as the framework already did that automagically. In order for a resolver to finish, all the streams it depends on need to complete . If you happen to use a hot observable it is required to use another operator like take so that the stream completes at that location. So, all the code remains the same, except that I changed the resolver to: resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<Foo> { return this._secondService .observable() .pipe( take(1), switchMap((bar) => this._myService.get(bar)), );} @eduPeeth: Thank you for your answer/suggestions, unfortunately, it was a far more minor issue.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51306545', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3686898/']}
jdg_377004
stackexchange
llm_judgeable_groundtruth_similarity
368184
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I found some code in a project which looks like this: int main(int argc, char *argv[]){ // some stuff try { theApp.Run(); } catch (std::exception& exc) { cerr << exc.what() << std::endl; exit(EXIT_FAILURE); } return (EXIT_SUCCESS);} I don't understand why the exceptions are being caught. If they weren't, the application would simply exit and the exception would be printed. Do you see any good reason to catch exceptions here? EDIT: I agree that it is good to print the exception error. However, wouldn't it be better to rethrow the exception? I have the feeling that we are swallowing it here... Now provide the response and nothing else.
If an exception is uncaught, then the standard does not define whether the stack is unwound. So on some platforms destructors will be called, and on others the program will terminate immediately. Catching at the top level ensures that destructors are always called. So, if you aren't running under the debugger, it's probably wise to catch everything: (...) as well as std::exception. Then your application code can clean up with RAII even on a fatal exception. In many such cases you don't actually need to clean up, since the OS will do it for you. But for instance you might prefer to disconnect cleanly from remote services where possible, and there might be resources external to the process, such as named pipes/mutexes, that you'd prefer to destroy rather than leaking. Rethrowing the exception in main seems to me of limited use, since you've already lost the context in which it was originally thrown. I suppose that trapping an uncaught exception in the debugger is noisier than just logging the fault to std::cerr, so rethrowing would be the smart move if there's a chance of missing the logging. If you want the debugger to trap unexpected conditions in debug mode, which in release mode throw an exception that eventually results in an exit, then there are other ways to do that than leaving the exception uncaught so that the debugger sees it. For example, you could use assert macros. Of course, that doesn't help with unexpected and unpredictable conditions, like hardware exceptions if you're using SEH on .NET.
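To make this concrete, a minimal sketch of such a top-level handler (reusing the Run() call from the question) might look like the following:

#include <cstdlib>
#include <exception>
#include <iostream>

int main() {
    try {
        // theApp.Run();  // application entry point goes here
        return EXIT_SUCCESS;
    } catch (const std::exception& exc) {
        // Known exception type: report what() after the stack has unwound.
        std::cerr << "Fatal error: " << exc.what() << std::endl;
    } catch (...) {
        // Anything else, including non-standard exceptions: destructors of
        // local objects up the call chain have still been run.
        std::cerr << "Fatal error: unknown exception" << std::endl;
    }
    return EXIT_FAILURE;
}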
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/368184', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/20986/']}
jdg_377005
stackexchange
llm_judgeable_groundtruth_similarity
12628700
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm designing a library that would greatly benefit from the use of the OverlappingInstances compiler flag. But everyone talks smack about this extension and warns of its dangers. My question is, are there examples of a good use of this extension anywhere on hackage? Is there any rule of thumb on how to encapsulate the badness and use the extension properly? Now provide the response and nothing else.
Perhaps a thought experiment will de-mystify this extension a bit. Let's pretend that we've dropped the restriction that functions defined with multiple pattern cases must be all in one place, so that you can write foo ("bar", Nothing) = ... at the top of a module, and then have cases like foo ("baz", Just x) = ... elsewhere. In fact, let's go even further, and allow cases to be defined in different modules entirely ! If you think that sounds like it would be confusing and error-prone to use, you're correct. To recover some semblance of sanity, we could add some limitations. For instance (ha, ha), we could require the following properties to hold: Anywhere such a function is used, the arguments given must match exactly one pattern. Anything else is a compiler error. Adding new patterns (including by importing another module) should never change the meaning of valid code--either the same patterns are chosen, or a compiler error is produced. It should be clear that matching simple constructors like True or Nothing is straightforward. We also can handwave things a bit and assume that the compiler can disambiguate literals, like "bar" and "baz" above. On the other hand, binding arguments with patterns like (x, Just y) becomes awkward--writing such a pattern means giving up the ability to write patterns like (True, _) or (False, Just "foobar") later, since that would create ambiguity. Worse yet, pattern guards become nearly useless, because they need very general matches. Many common idioms will produce endless ambiguity headaches, and of course writing a "default" fall-through pattern is completely impossible. This is roughly the situation with type class instances. We could regain some expressive power by relaxing the required properties as such: Anywhere such a function is used, it must match at least one pattern. No matches is a compiler error. If a function is used such that multiple patterns match, the most specific pattern will be used. If there is no unique most specific pattern, an error is produced. If a function is used in a way that matches a general instance, but could be applied at run-time to arguments that would match a more specific instance, this is a compiler error. Note that we are now in a situation where merely importing a module can change the behavior of a function, by bringing into scope a new, more specific pattern. Things might get murky in complicated cases involving higher-order functions, as well. Still, in many cases problems are unlikely--say, defining a generic fall-through pattern in a library, while letting client code add specific cases if needed. That's roughly where OverlappingInstances puts you. As suggested in the example above, if creating new overlaps is always either impossible or desired, and different modules won't end up seeing different, conflicting instances, then it's probably fine. What it really comes down to is that the limitations removed by OverlappingInstances are there to make using type classes behave sensibly under the "open world" assumption that any possible instance could later be added. By relaxing those requirements, you're taking on that burden yourself; so think through all the ways that new instances could be added and whether any of those scenarios are a significant problem. If you're convinced that nothing will break even in obscure and devious corner cases, then go ahead and use the extension.
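To make the analogy concrete, here is a small, self-contained sketch of the "generic fall-through plus more specific case" pattern; the class and instance names are invented for illustration:

{-# LANGUAGE FlexibleInstances    #-}
{-# LANGUAGE OverlappingInstances #-}  -- newer GHC prefers per-instance OVERLAPPABLE/OVERLAPPING pragmas

class Describe a where
  describe :: a -> String

-- Generic fall-through instance, as a library might provide.
instance Describe a => Describe [a] where
  describe xs = "a list of " ++ show (length xs) ++ " things"

-- More specific instance added later by client code; it overlaps the one above.
instance Describe [Char] where
  describe s = "the string " ++ show s

instance Describe Int where
  describe n = "the number " ++ show n

main :: IO ()
main = do
  putStrLn (describe [1, 2, 3 :: Int])  -- only the generic [a] instance matches
  putStrLn (describe "hello")           -- both match; the more specific [Char] wins

The failure modes described above show up as soon as another module adds yet another instance that overlaps these in a different way, which is exactly the "open world" burden the answer warns about.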
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12628700', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1241368/']}
jdg_377006
stackexchange
llm_judgeable_groundtruth_similarity
3925433
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was solving a basic math coding problem and found that for any number $N$, the number of ways to express $N$ as a sum of odd numbers is $Fib[N]$, where $Fib$ is the Fibonacci sequence. I don't have a valid proof for this and did not understand how this can be solved using recurrences. Can someone provide one? If that is unclear: suppose $N=4$; the number of ways to write it as a sum of odd numbers is 3, which is the Fibonacci number at $3$: $4=> 1+1+1+1$, $4=> 1+3$, $4=> 3+1$. NOTE: the composition is ordered, so $(1+3)$ and $(3+1)$ are different. UPD: I do not claim that I observed this myself; I found it in the problem's solution, and I am just asking for some valid proof/reason for it. Now provide the response and nothing else.
Let's say $S(n)$ is the set of ways to write $n$ as a sum of odd numbers. We can partition this set into two subsets: $A(n)$ and $B(n)$ , where $A(n)$ is the set of sums where the last summand is a $1$ , and $B(n)$ is the set of all other sums. Can you see why $A(n)$ has the same size as $S(n-1)$ ? Can you see why $B(n)$ has the same size as $S(n-2)$ ? If you prove this, you find that $|S(n)| = |A(n)| + |B(n)| = |S(n-1)| + |S(n-2)|$ , which is the Fibonacci recurrence relation. You can then prove by induction that your sequence is equal to the Fibonacci sequence.
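In case it helps, here is one way to spell out the two bijections (following the notation above): a composition counted by $A(n)$ looks like $a_1 + \dots + a_k + 1 = n$ with all parts odd, and dropping the trailing $1$ gives a composition $a_1 + \dots + a_k = n-1$; appending a $1$ inverts this, so $|A(n)| = |S(n-1)|$. A composition counted by $B(n)$ ends in an odd part $a_k \ge 3$, and replacing $a_k$ by $a_k - 2$ (still odd and positive) gives a composition of $n-2$ into odd parts; adding $2$ back to the last part inverts this, so $|B(n)| = |S(n-2)|$. Hence $|S(n)| = |S(n-1)| + |S(n-2)|$ for $n \ge 3$, and with $|S(1)| = |S(2)| = 1$ this is exactly the Fibonacci recurrence, matching the example $|S(4)| = 3$.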
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3925433', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/716866/']}
jdg_377007
stackexchange
llm_judgeable_groundtruth_similarity
22275610
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When applying gam.check in the mgcv package, R produces some residual plots and basis dimension output. Is there a way to only produce the plots and not the printed output? library(mgcv)set.seed(0)dat <- gamSim(1,n=200)b <- gam(y~s(x0)+s(x1)+s(x2)+s(x3), data=dat)plot(b, pages=1)gam.check(b, pch=19, cex=.3) Now provide the response and nothing else.
There are four plots, from top left, moving down and across we have: A QQ plot of the residuals A histogram of the residuals A plot of residuals vs the linear predictor A plot of observed vs fitted values. In the code below, I assume b contains your fitted model, as per your example. First some things we need type <- "deviance" ## "pearson" & "response" are other valid choicesresid <- residuals(b, type = type)linpred <- napredict(b$na.action, b$linear.predictors)observed.y <- napredict(b$na.action, b$y) Note the last two lines are applying the NA handling method used when the model was fitted to the information on the linear.predictors and y , the stored copy of the response data. The above code and that shown below is all given in the first 10 or so lines of the gam.check() source. To view this, just enter gam.check at the R prompt. Each plot is produced as follows: QQ plot This is produced via qq.gam() : qq.gam(b, rep = 0, level = 0.9, type = type, rl.col = 2, rep.col = "gray80") Histogram of residuals This is produced using hist(resid, xlab = "Residuals", main = "Histogram of residuals") Residuals vs linear predictor This is produced using plot(linpred, resid, main = "Resids vs. linear pred.", xlab = "linear predictor", ylab = "residuals") Observed vs fitted values This is produced using plot(fitted(b), observed.y, xlab = "Fitted Values", ylab = "Response", main = "Response vs. Fitted Values")
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22275610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2564102/']}
jdg_377008
stackexchange
llm_judgeable_groundtruth_similarity
608069
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to manage a process with supervisord , but the process does not have an option to run in foreground: it always daemonizes. (That's Zabbix Server). Is there any way to manage daemons with supervisor? Any tools which will make it run in foreground? Or maybe, use the pidfile somehow? Now provide the response and nothing else.
In order to deal with the problem, we'll need some program running in the foreground, which exits whenever the daemon exits, and which also proxies signals to the daemon. Consider using the following bash script: #! /usr/bin/env bash set -eu pidfile="/var/run/your-daemon.pid" command=/usr/sbin/your-daemon # Proxy signals function kill_app(){ kill $(cat $pidfile) exit 0 # exit okay } trap "kill_app" SIGINT SIGTERM # Launch daemon $command sleep 2 # Loop while the pidfile and the process exist while [ -f $pidfile ] && kill -0 $(cat $pidfile) ; do sleep 0.5 done exit 1000 # exit unexpected
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/608069', 'https://serverfault.com', 'https://serverfault.com/users/12097/']}
jdg_377009
stackexchange
llm_judgeable_groundtruth_similarity
16228484
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I did a quick test of the ZeroMQ PUB/SUB and now have some working code. However, I am a bit confused about the concept of high water mark as applied in zeromq. I have set a HWM in my publisher code which sets a queue length for each subscriber connected to the socket. It is also possible however to set a HWM on the receiving socket of the subscriber. Is there any reason to set a HWM on the subscriber side and how would this differ from setting a publisher HWM? Now provide the response and nothing else.
Short answer: On the publisher we should pretty much always carefully consider the HWM, because there are plenty of ways to crash (out of memory) and affect the system overall (since the publisher serves all the subscribers). On the subscriber there are also cases in which regulating the HWM can be useful, but this depends mostly on the nature of the subscriber (what it does with the received messages, and how likely it is that it cannot process a large number of them in time) and on the expected runtime environment (how much memory is available, number of subscribers, etc.). More detailed answer: ZMQ uses the concept of HWM (high-water mark) to define the capacity of its internal pipes. Each connection out of a socket or into a socket has its own pipe, and an HWM for sending and/or receiving, depending on the socket type. Some sockets (PUB, PUSH) only have send buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both send and receive buffers. The available socket options are: ZMQ_SNDHWM: set the high water mark for outbound messages (... on the publisher socket); ZMQ_RCVHWM: set the high water mark for inbound messages (... on the subscriber socket). ZMQ 3.0+ forces default limits on its internal buffers (the so-called HWM) because the HWM is a great way to reduce memory overflow problems. Both ZMQ_PUB and ZMQ_SUB have the HWM option action set to "Drop", so when the limits are reached the memory of the subscriber or the publisher should stop growing, at least as far as it depends on the ZMQ buffers. Usually the publishers are the ones that need the most protection against indiscriminate use of memory (out-of-memory issues): over the inproc transport, the sender and receiver share the same buffers, so the real HWM is the sum of the HWM set by both sides; but if you're using TCP and a subscriber is slow, messages will queue up on the publisher. Common failure causes of PUB-SUB include: subscribers fetching messages too slowly, so queues build up and then overflow; and networks becoming too slow, so publisher-side queues overflow and publishers crash. Queueing messages on the publisher makes publishers run out of memory and crash, especially if there are lots of subscribers and it's not possible to flush to disk for performance reasons. From the perspective of the publisher, the great strategy that we use, by properly setting the HWM, is to stop queuing new messages after a while: new messages just get rejected or dropped; that is what ØMQ does when the publisher sets an HWM. ZMQ can also queue messages on the subscriber. If anyone is going to run out of memory and crash, it will be the subscriber rather than the publisher, which is fair. This is perfect for "peaky" streams where a subscriber can't keep up for a while, but can catch up when the stream slows down. Note: the HWMs are not exact; while you may get up to 1,000 messages by default, the real buffer size may be much lower (as little as half), due to the way libzmq implements its queues. The primary source of these statements is Pieter Hintjens's book "Code Connected Volume 1", available online in electronic format; it has a chapter dedicated to high-water marks containing further explanations about this topic.
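For reference, here is a minimal sketch of where those two options are set with the plain libzmq C API; the endpoint and the value 1000 are arbitrary, and the options must be set before bind/connect to take effect:

#include <zmq.h>

int main(void) {
    void *ctx = zmq_ctx_new();

    /* Publisher side: bound the per-subscriber send queue. */
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    int snd_hwm = 1000;
    zmq_setsockopt(pub, ZMQ_SNDHWM, &snd_hwm, sizeof snd_hwm);
    zmq_bind(pub, "tcp://*:5556");

    /* Subscriber side: bound the receive queue. */
    void *sub = zmq_socket(ctx, ZMQ_SUB);
    int rcv_hwm = 1000;
    zmq_setsockopt(sub, ZMQ_RCVHWM, &rcv_hwm, sizeof rcv_hwm);
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);  /* subscribe to everything */
    zmq_connect(sub, "tcp://localhost:5556");

    /* ... zmq_send() on pub / zmq_recv() on sub ... */

    zmq_close(sub);
    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}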
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16228484', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/788171/']}
jdg_377010
stackexchange
llm_judgeable_groundtruth_similarity
2288774
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been reading articles on pseudo-randomness in computing when generating a random value. They all state that the generated numbers are pseudo-random because we know all the factors that influence the outcome, and that the roll of a die is considered truly random. But I'm wondering why. Don't we know all the physical forces that influence the die when it's being rolled? Or are there too many of them? Now provide the response and nothing else.
This has to do with chaos theory : the tiniest variation of the initial conditions will cause an enormously different output. For a physical system like a die toss: even from a classical point of view, it is very unlikely that you can know the very exact initial conditions of the throw. And of the environment: the "floor" distance and surface characteristics (think of the abrupt effect of each bounce, that will be very different depending on the most infinitesimal variation of the impact parameters), the air conditions (thermodynamic and kinematic)...! this becomes actually impossible if you include the uncertainty principle (that prevents you from knowing the exact value of certain pairs of variables at the same time, e.g. position and momentum, but see below); it would be impossible from a practical point of view to propagate these initial conditions without introducing round-off errors, that due to the chaotic nature of the problem would make the result completely unreliable; even if you could perform exact calculations, there is still the quantum indeterminacy (again, see below) that affects the development of the status of the die: at each bounce, even when air molecules brake the die rotation, it is impossible even theoretically to predict what will happen in the next instant with absolute certainty. As pointed out in many comments and with many downvotes, the contributions to the randomness of the roll from quantum effects are insignificant from any practical point of view. Nevertheless I do want to mention them since they provide a theoretical watertight border against a deterministic idea of the phenomenon. Taking care of another possible correct objection, I have to underline that my answer holds for a fair throw. If you think of a die "tossed" from, say, $1\,\mathrm{mm}$ above a horizontal flat floor, with negligible initial velocity and a face parallel to the ground, it is obvious that you can predict the outcome with practical certainty. Moving progressively away from this limit situation, you have many halfway toss styles that can influence the probability distribution of the outcomes, if only by a few percent. I'm referring to the opposite limit, when the system can be considered ergodic . When I heard this term applied to the die, maybe not $100\%$ properly, it was with the meaning that the system "scans" over time all the possible outcomes many many times, with equal probability and with no recognizable pattern. Add the fact that a fair throw starts with a random grip of the die, and you really have equal chances for all the outcomes.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2288774', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/333066/']}
jdg_377011
stackexchange
llm_judgeable_groundtruth_similarity
101721
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I found that some companies, such as Blue Coat and Gigamon, claim to offer a service that can eliminate the SSL/TLS blind spot. Are they talking about some way of deciphering the HTTPS content (breaking the encryption algorithms) or just a man-in-the-middle attack? If it is a MITM proxy, can I simply detect it by checking the CA of the certificate installed on my browser? My question is about whether there is a way to monitor HTTPS traffic without a MITM attack, and whether it is possible for a MITM proxy to dupe users by showing certificates with real CAs (DigiCert, Comodo) so that I cannot tell the difference just by looking at the CA. Now provide the response and nothing else.
Installing a root certificate on users' browsers and conducting a MitM attack on employees is unfortunately a standard practice at many companies. There are a few ways you can detect this. One way is looking through the root CA certs installed on your computer and seeing whether there is a CA you don't recognize. This of course requires an in-depth knowledge of which root CAs are real and which belong to MitM proxy providers. Another is simply looking at the certificate an HTTPS website presents and examining who it's signed by. The cert of every HTTPS site will be signed by the company providing the MitM attack proxy. A third way is to install Firefox, preferably a version that doesn't install and runs as a standalone; Firefox doesn't use the system-provided certs, but uses its own. You can get this from http://portableapps.com/. If you then get a security warning about a self-signed certificate, you're being MitMed by your company.
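If you prefer to check programmatically rather than through a browser, one rough approach, sketched here with the standard Java TLS classes and the JVM's default trust store (the host name is just a placeholder), is to connect and print who issued the server's certificate, then compare that with what you see from an outside, non-corporate network:

import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class IssuerCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example.com"; // placeholder host
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            // The first certificate in the chain is the server's own certificate.
            X509Certificate server =
                (X509Certificate) socket.getSession().getPeerCertificates()[0];
            System.out.println("Subject: " + server.getSubjectX500Principal());
            System.out.println("Issuer:  " + server.getIssuerX500Principal());
        }
    }
}

On an intercepted network the printed issuer will typically be the interception product's CA rather than a public CA; if that CA is not in the JVM's trust store at all, the handshake throws instead, which is just as telling.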
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/101721', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/88255/']}
jdg_377012
stackexchange
llm_judgeable_groundtruth_similarity
9209577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have to get all the instances of a class C and of the subclasses (direct or indirect) of C, in SPARQL. I can get all the direct subclasses of C in this way: SELECT ?entity WHERE { ?subclass rdfs:subClassOf :C . ?entity rdf:type ?subclass .} But I can't get the instances of an indirect subclass, nor any instance of C itself. Since I know all the subclasses of C, direct and indirect (I've pre-calculated them), and I can build a dynamic query, is it possible to build a query like the following one? SELECT ?entity WHERE { ?entity rdf:type in <list>.} Thanks to everyone. EDIT: I've just solved it, even if in an inelegant way. SELECT ?entity WHERE { { ?entity rdf:type :C } UNION { ?entity rdf:type :SubClass1 } UNION { ?entity rdf:type :SubClass2 } UNION { ?entity rdf:type :SubClass3 }} Now provide the response and nothing else.
A better solution is to use property path expressions in SPARQL 1.1. This would be rewritten as: SELECT ?entity WHERE { ?entity rdf:type ?type. ?type rdfs:subClassOf* :C.}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9209577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1194426/']}
jdg_377013
stackexchange
llm_judgeable_groundtruth_similarity
22450036
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We are using Retrofit in our Android app to communicate with an OAuth2-secured server. Everything works great; we use the RequestInterceptor to include the access token with each call. However, there will be times when the access token expires and needs to be refreshed. When the token expires, the next call will return with an Unauthorized HTTP code, so that's easy to monitor. We could modify each Retrofit call the following way: in the failure callback, check the error code, and if it equals Unauthorized, refresh the OAuth token and repeat the Retrofit call. However, for this, all calls would have to be modified, which is not an easily maintainable or good solution. Is there a way to do this without modifying all Retrofit calls? Now provide the response and nothing else.
Please do not use Interceptors to deal with authentication. Currently, the best approach to handle authentication is to use the new Authenticator API, designed specifically for this purpose. OkHttp will automatically ask the Authenticator for credentials when a response is 401 (Unauthorized), retrying the last failed request with them. public class TokenAuthenticator implements Authenticator { @Override public Request authenticate(Proxy proxy, Response response) throws IOException { // Refresh your access_token using a synchronous api request newAccessToken = service.refreshToken(); // Add new header to rejected request and retry it return response.request().newBuilder() .header(AUTHORIZATION, newAccessToken) .build(); } @Override public Request authenticateProxy(Proxy proxy, Response response) throws IOException { // Null indicates no attempt to authenticate. return null; } } Attach an Authenticator to an OkHttpClient the same way you do with Interceptors: OkHttpClient okHttpClient = new OkHttpClient(); okHttpClient.setAuthenticator(authAuthenticator); Use this client when creating your Retrofit RestAdapter: RestAdapter restAdapter = new RestAdapter.Builder() .setEndpoint(ENDPOINT) .setClient(new OkClient(okHttpClient)) .build(); return restAdapter.create(API.class);
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/22450036', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1395437/']}
jdg_377014
stackexchange
llm_judgeable_groundtruth_similarity
12373
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: When I started using git I just did a git init and started calling add and commit . Now I am starting to pay attention and I can see that my commits are showing up as cowens@localmachine , rather than the address I want. It appears as if setting GIT_AUTHOR_EMAIL and GIT_COMMITTER_EMAIL will do what I want, but I still have those old commits with the wrong email address/name. How can I correct the old commits? Now provide the response and nothing else.
You can go back and fix all your commits with a single call to git filter-branch. This has the same effect as a rebase, but you only need to run one command to fix all your history, instead of fixing each commit individually. You can fix all the wrong emails with this command: git filter-branch --env-filter ' oldname="(old name)" oldemail="(old email)" newname="(new name)" newemail="(new email)"; if [ "$GIT_AUTHOR_EMAIL" = "$oldemail" ]; then GIT_AUTHOR_EMAIL="$newemail"; fi; if [ "$GIT_COMMITTER_EMAIL" = "$oldemail" ]; then GIT_COMMITTER_EMAIL="$newemail"; fi; if [ "$GIT_AUTHOR_NAME" = "$oldname" ]; then GIT_AUTHOR_NAME="$newname"; fi; if [ "$GIT_COMMITTER_NAME" = "$oldname" ]; then GIT_COMMITTER_NAME="$newname"; fi ' HEAD More information is available from the git docs.
{}
{'log_upvote_score': 7, 'links': ['https://serverfault.com/questions/12373', 'https://serverfault.com', 'https://serverfault.com/users/2706/']}
jdg_377015
stackexchange
llm_judgeable_groundtruth_similarity
1924033
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question. Let $\mathfrak{g}$ be a real semisimple Lie algebra admitting an invariant inner-product. Is every connected Lie group with Lie algebra $\mathfrak{g}$ compact? I know that the converse is true: If $G$ is a compact connected Lie group, then the Haar measure may be used to give an invariant inner-product on $\mathrm{Lie}(G)$. Also, semisimplicity is necessary since $\mathrm{Lie}(\mathbb{R})=\mathbb{R}$ trivially admits an invariant inner-product. Now provide the response and nothing else.
I will assume that gluing data is also meant to include the condition $U_{ii} = U_i$. I'm too tired to organize this all in a narrative, so this will be fairly disjointed. It is fairly common in category theory to consider two families of objects $X_k$ and $X_{ij}$, families of maps $f_{ij} : X_{ij} \to X_i$ and $g_{ij} : X_{ij} \to X_j$, and the corresponding coequalizer $$ \coprod_{ij} X_{ij} \overset{f}{\underset{g}{\rightrightarrows}} \coprod_k X_k \xrightarrow{\rho} X$$ or, if we include 'redundant' $X_{ii} = X_i$, a pushout square $$ \begin{matrix} \coprod_{ij} X_{ij} &\xrightarrow{f}& \coprod_k X_k\\ \!g\!\downarrow & & \downarrow\!\rho\!\\ \coprod_k X_k &\xrightarrow{\rho}& X \end{matrix} $$ The picture here is that the $X_k$'s are a description of an object of interest, and the $X_{ij}$'s describe relations between the descriptions. Geometrically, we might think of the $X_k$'s as a cover, and the $X_{ij}$'s describe the overlap between them. Algebraically, we might think of $X_k$ as being generators, and the $X_{ij}$ as being relations. Either way, it's clear that in the "nicest" arrangement, we want each $X_{ij}$ to to be the pullback of $X_i \to X \leftarrow X_{j}$, so that it truly does describe all relations between $X_i$ and $X_j$, and you only really want one $X_{ij}$ per pair of indices. Gluing data is in this nice situation, but not all such diagrams are: e.g. we might only have a more minimal description of the relations, or the covers might actually self-intersect nontrivially (e.g. taking the interval $X_1 = [0,2\pi]$ as a cover of the circle, with $X_{11}$ being a single point mapped to both ends) or other deficiencies may apply. The gluing data gives an example of this sort of diagram: on each $U_{ij}$, the two maps into $\coprod_k U_k$ come from $ U_{ij} \hookrightarrow U_i $ $ U_{ij} \xrightarrow{\varphi_{ji}} U_{ji} \hookrightarrow U_j $ Furthermore, the data does assert there is "just one" in the sense that the $\varphi_{ji}$ gives a homeomorphism $U_{ij} \to U_{ji}$, and they do so coherently in the sense that $\varphi_{ij} = \varphi_{ji}^{-1}$ and $\varphi_{ii} = 1_{U_i}$, so different "paths" $U_{ij} \to U_{ji}$ (e.g $\varphi_{ji}$ versus $\varphi_{ji} \circ \varphi_{ii} \circ \varphi_{ij} \circ \varphi_{ji}$) all give the same map. The property that Top has that makes this setup convenient to work with is that it is infinitary extensive — i.e. that coproducts really do act like disjoint unions. I do not know if this is actually required , but all the ways I want to reason about gluing data rely on it. Sometimes, we also want to consider another family of $X_{ijk}$, this time with three maps down to the various $X_{mn}$. We have this in gluing data too: we can define $U_{ijk} = U_{ij} \cap U_{ik}$. And again we have "just one" per triple of indices, because we again have coherent homeomorphisms between the different permutations. It is enough to check $U_{ijk} = U_{ikj}$ $\varphi_{ji} : U_{ijk} \to U_{jik}$ is a homeomorphism The two homeomorphism $U_{ijk} \to U_{kji}$ given by $\varphi_{ki}$ and $\varphi_{kj} \circ \varphi_{ji}$ are the same. We could go further. This leads to a simple example of a simplicial object. Top is nice enough to talk about relations. The colimit defining $X$ can be viewed as taking the quotient of $\coprod_k U_k$ by the relation that the two maps $\coprod_{ij} U_{ij} \rightrightarrows \coprod_k U_k$ give equivalent outputs for each input. 
Normally, this relation is not an equivalence relation, and so the quotient is by the equivalence relation generated by this relation. However, the neat thing about having the transition maps $\varphi$ is that the relation really is an equivalence relation, so the colimit is much, much easier to work with. The interesting part is that it is a transitive relation, which you can check by noting that $$ x \sim \varphi_{ji}(x) \quad \text{and} \quad \varphi_{ji}(x) \sim \varphi_{kj}(\varphi_{ji}(x)) $$ only makes sense when $x \in U_{ijk}$, and that the transitive property requires $x \sim \varphi_{ki}(x)$, which we have. In Top , having the transition maps implies that the $\psi_i : U_i \to X$ are monic. I think you can even argue it's regular monic. Off hand I don't know what you want from the category to say such things. Finally, last feature is about open subspaces . While seemingly the part most topological in flavor, it too has an abstract analog. Top has a open subspace classifier . Let $S = \{0,1\}$ be the Sierpinski space, with topology $\{\varnothing, \{1\}, S\}$. Then there is a natural bijection between open subspaces of $X$ and continuous maps $X \to S$: For every open subset $U \subseteq X$, the characteristic function $\chi_U : X \to S$ is continuous For any continuous map $\chi : X \to S$, the inverse image $f^{-1}(1)$ is an open subset You can thus show that $\psi_i(U_i) \subseteq X$ is an open subspace by showing that the maps$$ \chi_{ij} : \coprod_i U_j \to S : x \mapsto \begin{cases}1 & x \in U_{ji} \\ 0 & x \notin U_{ji} \end{cases} $$induce a well-defined map $\chi_i : X \to S$, and that $\chi_i^{-1}(1) = \psi_i(U_i)$. Off hand, I'm not sure what additional properties you need (if any) to be able to make this argument in Top .
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1924033', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/368306/']}
jdg_377016
stackexchange
llm_judgeable_groundtruth_similarity
91643
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible in dhcpd.conf to ignore requests from a set of MAC addresses? Something like this: host vminstances { hardware ethernet d0:0d:*; ignore booting;} Now provide the response and nothing else.
You can use something like: class "ignored" { match if substring(hardware,1,4) = 00:02;} pool { deny members of "ignored"; range 192.168.172.100 192.168.172.149; }
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/91643', 'https://serverfault.com', 'https://serverfault.com/users/28360/']}
jdg_377017
stackexchange
llm_judgeable_groundtruth_similarity
187533
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: By $\mathcal{D}(\mathbb{R})$ we denote linear space of smooth compactly supported functions. We say that $\{\varphi_n:n\in\mathbb{N}\}\subset\mathcal{D}(\mathbb{R})$ converges to $\varphi\in\mathcal{D}(\mathbb{R})$ if for all $k\in\mathbb{Z}_+$ the sequence $\{\varphi_n^{(k)}:n\in\mathbb{N}\}$ uniformly converges to $\varphi^{(k)}$. there exist a compact $K\subset \mathbb{R}$ such that $\mathrm{supp}(\varphi_n)\subset K$ for all $n\in\mathbb{N}$. Could you give me a hint to prove the following well known fact. There is no metric $d$ on $\mathcal{D}(\mathbb{R})$ such that convergence described above is equivalent to convergence in metric space $(\mathcal{D}(\mathbb{R}), d)$. Now provide the response and nothing else.
It is a consequence of the Baire category theorem. Essentially, $\mathcal{D}$ is of first category in itself and Cauchy sequences converge in $\mathcal{D}$, and this prevents metrizability.You can find a complete discussion in paragraph 6.9 of the book Functional analysis by Walter Rudin.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/187533', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19538/']}
jdg_377018
stackexchange
llm_judgeable_groundtruth_similarity
13363534
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Expecting "Hello from the derived." but getting "Hello from the base.". class Program{ interface IBase { void Method(); } public class Base: IBase { public virtual void Method() { Console.WriteLine("Hello from the base."); } } public class Derived : Base { public virtual new void Method() { Console.WriteLine("Hello from the derived."); } } static void Main(string[] args) { IBase x = new Derived(); x.Method(); }} So why isn't the derived class's method called. And more importantly, how can I get the derived classes method to get called without casting x to the Derived type? In my actual application, IBase has several other related methods and Derived only replaces two of the methods in IBase. Now provide the response and nothing else.
When you use the new modifier you are specifically saying that the method is not part of the virtual dispatch chain for that hierarchy, so calling the method by the same name in the base class will not result in redirection to the child class. If you mark the method with override instead of new then you will see the virtual dispatch that you are expecting to see. You will also need to remove virtual from the derived class's method as you cannot mark an override method as virtual (it already is). If you really don't want to override the method then it may be more appropriate, in your situation, to not use inheritance at all. You may simply want to use interfaces exclusively: public interface IFoo{ void Foo();}public class A : IFoo{ public void Foo() { Console.WriteLine("I am A, hear me roar!"); }}public class B : IFoo{ public void Foo() { Console.WriteLine("I am B, hear me roar!"); }}private static void Main(string[] args){ IFoo foo = new A(); foo.Foo(); foo = new B(); foo.Foo(); Console.WriteLine(); Console.WriteLine("Press any key to exit . . ."); Console.ReadKey(true);}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13363534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1224383/']}
jdg_377019
stackexchange
llm_judgeable_groundtruth_similarity
2585055
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to do basic authentication to view a protected url. I want to access the protected url which looks like this: http://api.test.com/userinfo/vid?=1234 So I do the following with a WebView: mWebView.setHttpAuthUsernamePassword("api.test.com", "", "me@test.com", "mypassword");mWebView.loadUrl("http://api.test.com/userinfo/user?uid=53461"); but the authentication doesn't seem to work, I'm just getting an output error page. Am I using the WebView method correctly here? Update:Trying with curl: curl -u me@test.com:mypassword http://api.test.com/userinfo/user?uid=53461 and it pulls the page fine. I tried every combination of the host parameter, the owners of the api don't know what I mean by 'realm' though (and neither do I) - what info could I give them to help this along? Thanks Now provide the response and nothing else.
Another option is to use a WebViewClient: webview.setWebViewClient(new MyWebViewClient()); private class MyWebViewClient extends WebViewClient { @Override public void onReceivedHttpAuthRequest(WebView view, HttpAuthHandler handler, String host, String realm) { handler.proceed("me@test.com", "mypassword"); } }
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2585055', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/246114/']}
jdg_377020
stackexchange
llm_judgeable_groundtruth_similarity
17889062
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing an app that uses PDO. I have a model abstract class that defines a property of id. I create a class that extends the abstract for each object model, with properties and methods unique to each data model, and use PDO fetchObject to get objects from the database. If id is NULL, I know it hasn't been saved. I have a function that worked until I tried passing a serialized object as a property. The problem occurs when a property of the serialized object contains a single quote (ex. Mike's Hard Lemonade). Here is the function that resides in the model abstract class: public function save(DatabaseConnection &$db){ $properties = get_object_vars($this); $table = $this->getTableName(); foreach ($properties as $key => $value) { $cols[] = "`$key`"; $values[] = "'".$value."'"; if ($value != NULL) { $updateCols[] = "`$key`".' = "'.$value.'"'; } } $sql = 'INSERT INTO '.$table.' ('.implode(", ", $cols).') VALUES ('.implode(", ", $values).') ON DUPLICATE KEY UPDATE '.implode(", ", $updateCols); var_dump($sql); $stmnt = $db->prepare($sql); return $stmnt->execute($values);} Now provide the response and nothing else.
There are some preliminary considerations: When you click in a cell for editing, you switch it to edit mode, and that is when the editor function gets executed. If you are not in edit mode, regardless of the HTML used, the changes are not transferred to the model. Kendo UI renders booleans as checkboxes for editing, but not while the cell is not in edit mode. What you need to do is: Define a template for displaying a checkbox. If you do not want to click the checkbox twice (the first click to enter edit mode and the second to change its value), you need to define a checkbox but bind a change event that intercepts clicks on it and changes the model. Template definition: { title : "Fully Paid", field : "fullyPaid", template: "<input name='fullyPaid' class='ob-paid' type='checkbox' data-bind='checked: fullyPaid' #= fullyPaid ? checked='checked' : '' #/>"} As you can see, I'm not defining an editor function, since we will change the value of the checkbox without entering edit mode. Define a handler that detects changes in the checkbox that I defined in the template and updates the model. grid.tbody.on("change", ".ob-paid", function (e) { var row = $(e.target).closest("tr"); var item = grid.dataItem(row); item.set("fullyPaid", $(e.target).is(":checked") ? 1 : 0);}); Your JSBin modified here: http://jsbin.com/ebadaj/12/edit
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17889062', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1952093/']}
jdg_377021
stackexchange
llm_judgeable_groundtruth_similarity
1350171
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have a java class which is package-private (declared with "class", not "public class"), there is really no difference if the methods inside are declared public or protected or package-private, right? So which should I use, or when should I use which? I'm a bit confused. Now provide the response and nothing else.
If I have a java class which is package-private (declared with "class", not "public class"), there is really no difference if the methods inside are declared public or protected or package-private, right? Well maybe not immediately. But if you then (or in the future) declare a 'protected' or 'public' class that inherits from the package-private class, then the visibility of the members of the original class does matter. As @kmccoy points out, declaring the class as final removes the possibility of subclasses. But this is really only window-dressing. If you then decide that you really need to create subclasses, you simply remove the final ... and then you are back in the situation where the choice of access modifiers does matter. IMO, the bottom line is that you should pick the most appropriate modifiers ... even if it is not necessary right now. If nothing else, your choice of modifiers should document your intent as to where the abstraction boundaries lie.
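A small made-up example of the inheritance scenario described above (the class names are invented):

// Helper.java - package-private class; today the member modifiers look irrelevant.
class Helper {
    public String describe() { return "callable from anywhere, via a public subclass"; }
    String internalOnly()    { return "package-private: never visible outside the package"; }
}

// Exposed.java - later, a public subclass is added in the same package.
public class Exposed extends Helper {
    // Code in other packages can now call new Exposed().describe(), because
    // describe() was declared public on the package-private parent class.
    // internalOnly() remains inaccessible outside this package.
}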
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1350171', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/148195/']}
jdg_377022
stackexchange
llm_judgeable_groundtruth_similarity
47824603
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After reading this walkthrough in the official documentation: http://graphql.org/graphql-js/object-types/ I am very confused about how to make custom scalar type resolvers without a third party library. Here is the sample code in the docs: var express = require('express');var graphqlHTTP = require('express-graphql');var { buildSchema } = require('graphql');// Construct a schema, using GraphQL schema languagevar schema = buildSchema(` type RandomDie { numSides: Int! rollOnce: Int! roll(numRolls: Int!): [Int] } type Query { getDie(numSides: Int): RandomDie }`);// This class implements the RandomDie GraphQL typeclass RandomDie { constructor(numSides) { this.numSides = numSides; } rollOnce() { return 1 + Math.floor(Math.random() * this.numSides); } roll({numRolls}) { var output = []; for (var i = 0; i < numRolls; i++) { output.push(this.rollOnce()); } return output; }}// The root provides the top-level API endpointsvar root = { getDie: function ({numSides}) { return new RandomDie(numSides || 6); }}var app = express();app.use('/graphql', graphqlHTTP({ schema: schema, rootValue: root, graphiql: true,}));app.listen(4000);console.log('Running a GraphQL API server at localhost:4000/graphql'); I understand I can use graphql-tools to make "executable schema" from string-based type definitions and a resolvers object. What I'm wondering is why there is no lower level / imperative graphql-js API I can use to define and resolve custom scalar types? In other words, how does graphql-tools even work? Thanks in advance! Edit: Here is some example code outlining the problem. On line 4 you can see that I am importing GraphQLJSON but it is never used. I know what to do to make this work using graphql-tools but I want to learn how it works. In other words, if graphql-tools did not exist, what would I do to inject a custom scalar type while still authoring my schema using graphql syntax? From what I can tell the only graphql-js solution is to use the non-declarative approach to authoring schema (second example below) import express from 'express';import graphqlHTTP from 'express-graphql';import { buildSchema } from 'graphql';import GraphQLJSON from 'graphql-type-json'; // where should I inject this?const schema = buildSchema(` type Image { id: ID! width: Int! height: Int! metadata: JSON! } type Query { getImage(id: ID!): Image! } scalar JSON`);class Image { constructor(id) { this.id = id; this.width = 640; this.height = 480; } metadata() { // what do I need to do in order to have this return value parsed by GraphQLJSON return { foo: 'bar' }; }}const rootValue = { getImage: function({ id }) { return new Image(id); },};const app = express();app.use( '/graphql', graphqlHTTP({ schema: schema, rootValue: rootValue, graphiql: true, }));app.listen(4000); Running this query: { getImage(id: "foo") { id width height metadata }} Results in this error: Expected a value of type \"JSON\" but received: [object Object] The answer I'm seeking would help me to return the JSON type without using graphql-tools . I have nothing against this library, but it seems bizarre to me that I must use a third party library for something so fundamental to the type resolution system in graphql-js . I would like to know more about why this dependency is needed before adopting it. 
Here is another way to make this work: import { GraphQLObjectType, GraphQLInt, GraphQLID } from 'graphql/type';const foo = new GraphQLObjectType({ name: 'Image', fields: { id: { type: GraphQLID }, metadata: { type: GraphQLJSON }, width: { type: GraphQLInt }, height: { type: GraphQLInt }, },}); However this does not allow me to author my schema using the graphql syntax, which is my goal. Now provide the response and nothing else.
UPDATE After some clarification, it looks like you are trying to add a custom scalar to a schema created with schema language. Since schemas built with buildSchema (or other client tools) do not have handler functions for serialize, parseValue, and parseLiteral bound, you need to modify the built schema to include those. You can do something like import { buildSchema } from 'graphql'import GraphQLJSON from 'graphql-type-json'const definition = `type Foo { config: JSON}scalar JSONtype Query { readFoo: Foo}schema { query: Query}`const schema = buildSchema(definition)Object.assign(schema._typeMap.JSON, GraphQLJSON) Alternatively, you can also do the following, which may be useful for renaming the scalar to something else Object.assign(schema._typeMap.JSON, { name: 'JSON', serialize: GraphQLJSON.serialize, parseValue: GraphQLJSON.parseValue, parseLiteral: GraphQLJSON.parseLiteral}) Original Answer buildSchema indeed creates a schema but that schema will have no resolve, serialize, parseLiteral, etc. functions associated with it. I believe graphql-tools only allows you to map resolver functions to fields, which does not help you when you are trying to create a custom scalar. graphql-js has a GraphQLScalarType you can use to build custom scalars. See the official documentation and example at http://graphql.org/graphql-js/type/#graphqlscalartype There are also several packages on npm that you can use as examples; one I find very useful is https://github.com/taion/graphql-type-json/blob/master/src/index.js As an example, if you wanted to create a base64 type that stores a string as base64 and decodes base64 strings before they are returned in the response, you can create a custom base64 scalar like this: import { GraphQLScalarType, GraphQLError, Kind } from 'graphql'const Base64Type = new GraphQLScalarType({ name: 'Base64', description: 'Serializes and Deserializes Base64 strings', serialize (value) { return (new Buffer(value, 'base64')).toString() }, parseValue (value) { return (new Buffer(value)).toString('base64') }, parseLiteral (ast) { if (ast.kind !== Kind.STRING) { throw new GraphQLError('Expected Base64 to be a string but got: ' + ast.kind, [ast]) } return (new Buffer(ast.value)).toString('base64') }})
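For reference, here is a minimal sketch of wiring the patched schema from the update above into an express-graphql server. It reuses the packages the question already imports; the trimmed-down Image type and the getImage resolver body are illustrative, not part of the original setup.

import express from 'express';
import graphqlHTTP from 'express-graphql';
import { buildSchema } from 'graphql';
import GraphQLJSON from 'graphql-type-json';

const schema = buildSchema(`
  type Image {
    id: ID!
    metadata: JSON!
  }

  type Query {
    getImage(id: ID!): Image!
  }

  scalar JSON
`);

// Graft the real scalar implementation onto the placeholder type
// that buildSchema generated for "scalar JSON".
Object.assign(schema._typeMap.JSON, GraphQLJSON);

const rootValue = {
  // Illustrative resolver: the plain object is serialized by GraphQLJSON.
  getImage: ({ id }) => ({ id, metadata: { foo: 'bar' } }),
};

const app = express();
app.use('/graphql', graphqlHTTP({ schema, rootValue, graphiql: true }));
app.listen(4000);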
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47824603', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242944/']}
jdg_377023
stackexchange
llm_judgeable_groundtruth_similarity
2791851
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working with a java.util.concurrent.ThreadPoolExecutor to process a number of items in parallel. Although the threading itself works fine, at times we've run into other resource constraints due to actions happening in the threads, which made us want to dial down the number of Threads in the pool. I'd like to know if there's a way to dial down the number of the threads while the threads are actually working. I know that you can call setMaximumPoolSize() and/or setCorePoolSize() , but these only resize the pool once threads become idle, but they don't become idle until there are no tasks waiting in the queue. Now provide the response and nothing else.
You absolutely can. Calling setCorePoolSize(int) will change the core size of the pool. Calls to this method are thread-safe and override settings provided to the constructor of ThreadPoolExecutor . If you are trimming the pool size, the remaining threads will shut-down once their current job queue is completed (if they are idle, they will shut-down immediately). If you are increasing the pool size, new threads will be allocated as soon as possible. The timeframe for the allocation of new threads is undocumented — but in the implementation, allocation of new threads is performed upon each call to the execute method. To pair this with a runtime-tunable job-farm, you can expose this property (either by wrapper or using a dynamic MBean exporter) as a read-write JMX attribute to create a rather nice, on-the-fly tunable batch processor. To reduce the pool size forcibly in runtime (which is your request), you must subclass the ThreadPoolExecutor and add a disruption to the beforeExecute(Thread,Runnable) method. Interrupting the thread is not a sufficient disruption, since that only interacts with wait-states and during processing the ThreadPoolExecutor task threads do not go into an interruptable state. I recently had the same problem trying to get a thread pool to forcibly terminate before all submitted tasks are executed. To make this happen, I interrupted the thread by throwing a runtime exception only after replacing the UncaughtExceptionHandler of the thread with one that expects my specific exception and discards it. /** * A runtime exception used to prematurely terminate threads in this pool. */static class ShutdownExceptionextends RuntimeException { ShutdownException (String message) { super(message); }}/** * This uncaught exception handler is used only as threads are entered into * their shutdown state. */static class ShutdownHandler implements UncaughtExceptionHandler { private UncaughtExceptionHandler handler; /** * Create a new shutdown handler. * * @param handler The original handler to deligate non-shutdown * exceptions to. */ ShutdownHandler (UncaughtExceptionHandler handler) { this.handler = handler; } /** * Quietly ignore {@link ShutdownException}. * <p> * Do nothing if this is a ShutdownException, this is just to prevent * logging an uncaught exception which is expected. Otherwise forward * it to the thread group handler (which may hand it off to the default * uncaught exception handler). * </p> */ public void uncaughtException (Thread thread, Throwable throwable) { if (!(throwable instanceof ShutdownException)) { /* Use the original exception handler if one is available, * otherwise use the group exception handler. */ if (handler != null) { handler.uncaughtException(thread, throwable); } } }}/** * Configure the given job as a spring bean. * * <p>Given a runnable task, configure it as a prototype spring bean, * injecting any necessary dependencices.</p> * * @param thread The thread the task will be executed in. * @param job The job to configure. * * @throws IllegalStateException if any error occurs. */protected void beforeExecute (final Thread thread, final Runnable job) { /* If we're in shutdown, it's because spring is in singleton shutdown * mode. This means we must not attempt to configure the bean, but * rather we must exit immediately (prematurely, even). 
*/ if (!this.isShutdown()) { if (factory == null) { throw new IllegalStateException( "This class must be instantiated by spring" ); } factory.configureBean(job, job.getClass().getName()); } else { /* If we are in shutdown mode, replace the job on the queue so the * next process will see it and it won't get dropped. Further, * interrupt this thread so it will no longer process jobs. This * deviates from the existing behavior of shutdown(). */ workQueue.add(job); thread.setUncaughtExceptionHandler( new ShutdownHandler(thread.getUncaughtExceptionHandler()) ); /* Throwing a runtime exception is the only way to prematurely * cause a worker thread from the TheadPoolExecutor to exit. */ throw new ShutdownException("Terminating thread"); }} In your case, you may want to create a semaphore (just for use as a threadsafe counter) which has no permits, and when shutting down threads release to it a number of permits that corresponds to the delta of the previous core pool size and the new pool size (requiring you override the setCorePoolSize(int) method). This will allow you to terminate your threads after their current task completes. private Semaphore terminations = new Semaphore(0);protected void beforeExecute (final Thread thread, final Runnable job) { if (terminations.tryAcquire()) { /* Replace this item in the queue so it may be executed by another * thread */ queue.add(job); thread.setUncaughtExceptionHandler( new ShutdownHandler(thread.getUncaughtExceptionHandler()) ); /* Throwing a runtime exception is the only way to prematurely * cause a worker thread from the TheadPoolExecutor to exit. */ throw new ShutdownException("Terminating thread"); }}public void setCorePoolSize (final int size) { int delta = getActiveCount() - size; super.setCorePoolSize(size); if (delta > 0) { terminations.release(delta); }} This should interrupt n threads for f(n) = active - requested . If there is any problem, the ThreadPoolExecutor s allocation strategy is fairly durable. It book-keeps on premature termination using a finally block which guarantees execution. For this reason, even if you terminate too many threads, they will repopulate.
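As a usage note, here is a small self-contained sketch (not taken from the code above) of the non-forcible path: resizing a standard pool at runtime with setCorePoolSize and setMaximumPoolSize, where the surplus workers exit only after finishing their current task.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizeDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Queue up some slow, dummy work.
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            });
        }

        // Dial the pool down while work is in flight. These calls are thread-safe,
        // but the extra workers only terminate once they next become idle.
        pool.setCorePoolSize(2);
        pool.setMaximumPoolSize(2);

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}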
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2791851', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/80163/']}
jdg_377024
stackexchange
llm_judgeable_groundtruth_similarity
5203338
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to C# What is the difference between public string MyValue; and public string MyValue { get; set; } I always assumed that both were same. Something was not working in my code. Using the latter form made it work. But don't know what is the difference. Can you help? Thanks Now provide the response and nothing else.
The first is a public field, the second an automatically implemented public property. They are not the same. With the auto implemented property the compiler will generate a private backing field. Though both can work as a way to expose data from your class, you should be using properties following the principle of information hiding - fields should be private and only accessed through properties. This allows you to make changes to the implementation without breaking the callers.
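To make the "changes without breaking callers" point concrete, here is a small illustrative sketch (the class and member names are made up): code written against the auto-property keeps compiling unchanged when the implementation later gains validation.

using System;

public class Person
{
    // Auto-implemented property: the compiler generates a hidden backing field.
    public string MyValue { get; set; }
}

public class PersonWithValidation
{
    private string myValue;   // explicit backing field

    public string MyValue
    {
        get { return myValue; }
        set
        {
            // The implementation changed, but callers still just write p.MyValue = "...";
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("MyValue must not be empty");
            myValue = value;
        }
    }
}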
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5203338', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15065/']}
jdg_377025
stackexchange
llm_judgeable_groundtruth_similarity
46434964
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: C++ allows to use class and function with the same name in one namespace: struct S {};void S() {} In this case pure name S means function S . To use struct instead you need to explicitly add struct before name. It's also possible to make function template and still use both of them: template <class T>void S() {}struct S {}; But using template struct is forbidden void S() {}template <class T>struct S {}; and gives error like this: error: redefinition of 'S' as different kind of symbol What is the reason for this? Why not to allow use template struct here? Is there any situation where using explicit keyword struct before S (like for non-template version) could not solve name collision if it was allowed? Maybe there is proposal exist? Now provide the response and nothing else.
C++ allows to use class and function with the same name in one namespace. struct S {}; void S() {} Normally when you declare struct S , you can refer to the type in two ways: as S and as struct S . But that's only until you declare something else named S , for example, a function. When you do that, the name of the type is not S any more. It's struct S only. The name S is reserved for the function. This is done for compatibility with C. C code uses this device frequently. Unlike C++, C places struct and union tags in a different name space from normal identifiers, and struct S cannot be referred to as simply S . So C++, in order to be able to compile C code that uses this device, makes an exception for struct tags that are reused as a different kind of identifier. As class is nearly synonymous with struct , this is done for the class keyword too. But C has no templates and there's no need to provide backward compatibility for them, so no such exception for class templates is made.
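A short sketch of the hiding rule in action (my example, not from the original post):

#include <iostream>

struct S { int value = 42; };
void S() { std::cout << "function S\n"; }

int main() {
    S();            // the plain name S now refers to the function
    struct S obj;   // the elaborated type specifier still reaches the struct
    std::cout << obj.value << '\n';
    return 0;
}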
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46434964', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3240681/']}
jdg_377026
stackexchange
llm_judgeable_groundtruth_similarity
38302858
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have in my project list of generated places with mini maps. There should be 2 points on the map and colored road direction between this two points. it should looks somehow like this: This should be static image, because there will be many such pictures with different directions on the page. But as I see, Google Static map didn't allow to draw such image. There can be only direct line between two points, like this: But I need direction on it... I decided to use static map, because in my web application I receive coordinates of those 2 points, and it's easy to put it as variables in my PHP template if I use static maps. But is it possible to receive direction as static image in same way? I have found few solution with JavaScript API, but didn't find how to draw static image as I need... Now provide the response and nothing else.
You can do it in two steps. Execute directions request from the PHP code to get the encoded polyline Use encoded polyline from step 1 with static maps E.g. https://maps.googleapis.com/maps/api/directions/json?origin=Grodno&destination=Minsk&mode=driving&key=YOUR_API_KEY This will return encoded polyline in routes[0]->overview_polyline->points Now use the polyline in static map: https://maps.googleapis.com/maps/api/staticmap?size=600x400&path=enc%3AohqfIc_jpCkE%7DCx%40mJdDa%5BbD%7BM%7D%40e%40_MgKiQuVOoFlF%7DVnCnBn%40aDlDkN%7DDwEt%40%7DM%7DB_TjBy%7C%40lEgr%40lMa%60BhSi%7C%40%7COmuAxb%40k%7BGh%5E_%7BFjRor%40%7CaAq%7DC~iAomDle%40i%7BA~d%40ktBbp%40%7DqCvoA%7DjHpm%40uuDzH%7Dm%40sAg%7DB%60Bgy%40%7CHkv%40tTsxAtCgl%40aBoeAwKwaAqG%7B%5CeBc_%40p%40aZx%60%40gcGpNg%7CBGmWa%5CgpFyZolF%7BFgcDyPy%7CEoK_%7BAwm%40%7BqFqZaiBoNsqCuNq%7BHk%60%40crG%7B%5DqkBul%40guC%7BJ%7D%5DaNo%7B%40waA%7DmFsLc_%40_V%7Dh%40icAopBcd%40i_A_w%40mlBwbAiiBmv%40ajDozBibKsZ%7DvAkLm%5DysAk%7DCyr%40i%60BqUkp%40mj%40uoBex%40koAk_E_hG%7B%60Ac%7DAwp%40soAyk%40ogAml%40%7Bg%40qKsNeJw%5DeuA%7D%60Fkm%40czBmK%7Bg%40wCed%40b%40_e%40dT%7BgCzx%40csJrc%40ejFtGi%60CnB_pFhCa%60Gw%40%7Du%40wFwaAmP%7BoA%7Dj%40etBsRm_AiGos%40aCyy%40Lic%40tFohA~NeoCvC_%7CAWm~%40gb%40w~DuLex%40mUk_Ae_%40o_Aol%40qmAgv%40_%7DAaf%40qhAkMcl%40mHwn%40iCuq%40Nqi%40pF%7D%7CE~CyiDmFkgAoUedAcb%40ku%40ma%40cl%40mUko%40sLwr%40mg%40awIoA_aApDe~%40dKytAfw%40kyFtCib%40%7DA%7Bj%40kd%40usBcRgx%40uFwb%40%7BCulAjJmbC~CumAuGwlA_%5Du_C_PqyB%7BI%7DiAwKik%40%7DUcr%40ya%40up%40%7DkB%7DoCoQ%7Da%40aMyf%40an%40wjEimBuwKiYybC%7DLuyBoJ%7DhBuMieAwd%40i%7BB%7B~%40g%60D_Si%5Dsi%40%7Bk%40cPeSuH_T%7DNct%40kNcmC_Gyr%40mq%40_~AkmA%7DkCksByrE_N%7Bc%40oAcs%40%60J%7Bi%40t%7DByaHxNqt%40tGgxA%7CJ%7BkGeJ_aDsQi_HmFwuAmI%7BdA_XijByFgv%40%7DAiwBxDocAdM%7BlAtSmcAfUmaAptAmbGh~AcvGbwBc%7DHff%40shB~Isp%40nQu%7DB%60UsuCbBok%40l%40%7DzAhIwbA~OuaAnYwp%40rYwe%40%7CNke%40zc%40%7BhBrOwRdo%40sf%40xNaTb_%40uy%40ta%40k~%40xTap%40hl%40uiCre%40unHlIi~AlFsc%40rEkk%40aAce%40mL%7DlAwPcyB_GohBzDsqAtMqtA~h%40weDtFkd%40Bi%60%40_XwfEdAag%40dEkM%60%40zAqApJef%40%7BP_o%40sYys%40ai%40yf%40_j%40y_%40oi%40mVi%5EmFqSwAiPtDuQbc%40_nAtZyaAlEkc%40r%40eq%40%7CAo%5BrTwcAtVuz%40vQ%7Dd%40%7CPmb%40xT%7B%5CzZyd%40jG%7BRzL%7Dh%40jr%40ov%40rFiImFqPiD%7BJ&key=YOUR_API_KEY
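Since the question mentions PHP templates, here is a rough sketch of the two steps in PHP. The request parameters come from the Directions and Static Maps documentation; error handling, caching and the API key are left as assumptions.

<?php
$origin      = 'Grodno';
$destination = 'Minsk';
$key         = 'YOUR_API_KEY';

// Step 1: ask the Directions API for the encoded overview polyline.
$directionsUrl = 'https://maps.googleapis.com/maps/api/directions/json'
    . '?origin=' . urlencode($origin)
    . '&destination=' . urlencode($destination)
    . '&mode=driving&key=' . $key;
$directions = json_decode(file_get_contents($directionsUrl), true);
$polyline   = $directions['routes'][0]['overview_polyline']['points'];

// Step 2: feed the encoded polyline (and two markers) to the Static Maps API.
$staticMapUrl = 'https://maps.googleapis.com/maps/api/staticmap'
    . '?size=600x400'
    . '&markers=' . urlencode($origin)
    . '&markers=' . urlencode($destination)
    . '&path=enc:' . urlencode($polyline)
    . '&key=' . $key;

echo '<img src="' . htmlspecialchars($staticMapUrl) . '" alt="route map">';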
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38302858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5053338/']}
jdg_377027
stackexchange
llm_judgeable_groundtruth_similarity
1662262
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm maintaining a Ruby on Rails site and I'm confused as to how to perform redirects to relative URLs using the https protocol. I can successfully create a redirect to a relative URL using http, for example: redirect_to "/some_directory/" But I cannot discern how to create a redirect to a URL using the https protocol. I have only been able to do so by using absolute URLS, for example: redirect_to "https://mysite.com/some_directory/" I would like to keep my code clean, and using relative URLs seems like a good idea. Does anyone know how to achieve this in Rails? Now provide the response and nothing else.
The ActionController::Base#redirect_to method takes an options hash, one of the parameters of which is :protocol, which allows you to call: redirect_to :protocol => 'https://', :controller => 'some_controller', :action => 'index' See the definition for #redirect_to and #url_for for more info on the options. Alternatively, and especially if SSL is to be used for all your controller actions, you could take a more declarative approach using a before_filter. In ApplicationController you could define the following method: def redirect_to_https redirect_to :protocol => "https://" unless (request.ssl? || request.local?)end You can then add filters in those controllers whose actions require SSL, e.g.: class YourController before_filter :redirect_to_https, :only => ["index", "show"]end Or, if you require SSL across your entire app, declare the filter in ApplicationController: class ApplicationController before_filter :redirect_to_httpsend
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1662262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/201027/']}
jdg_377028
stackexchange
llm_judgeable_groundtruth_similarity
1463284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've always loved trees, that nice O(n*log(n)) and the tidiness of them. However, every software engineer I've ever known has asked me pointedly why I would use a TreeSet . From a CS background, I don't think it matters all that much which you use, and I don't care to mess around with hash functions and buckets (in the case of Java ). In which cases should I use a HashSet over a TreeSet ? Now provide the response and nothing else.
HashSet is much faster than TreeSet (constant-time versus log-time for most operations like add, remove and contains) but offers no ordering guarantees like TreeSet. HashSet the class offers constant time performance for the basic operations (add, remove, contains and size). it does not guarantee that the order of elements will remain constant over time iteration performance depends on the initial capacity and the load factor of the HashSet. It's quite safe to accept default load factor but you may want to specify an initial capacity that's about twice the size to which you expect the set to grow. TreeSet guarantees log(n) time cost for the basic operations (add, remove and contains) guarantees that elements of set will be sorted (ascending, natural, or the one specified by you via its constructor) (implements SortedSet ) doesn't offer any tuning parameters for iteration performance offers a few handy methods to deal with the ordered set like first() , last() , headSet() , and tailSet() etc Important points: Both guarantee duplicate-free collection of elements It is generally faster to add elements to the HashSet and then convert the collection to a TreeSet for a duplicate-free sorted traversal. None of these implementations are synchronized. That is if multiple threads access a set concurrently, and at least one of the threads modifies the set, it must be synchronized externally. LinkedHashSet is in some sense intermediate between HashSet and TreeSet . Implemented as a hash table with a linked list running through it, however, it provides insertion-ordered iteration which is not same as sorted traversal guaranteed by TreeSet . So a choice of usage depends entirely on your needs but I feel that even if you need an ordered collection then you should still prefer HashSet to create the Set and then convert it into TreeSet. e.g. SortedSet<String> s = new TreeSet<String>(hashSet);
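A tiny runnable illustration of the ordering differences and of the add-then-convert pattern described above (the HashSet output is implementation-dependent):

import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;

public class SetDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("pear", "apple", "pear", "banana");

        Set<String> hash   = new HashSet<>(words);        // no ordering guarantee
        Set<String> linked = new LinkedHashSet<>(words);  // insertion order
        SortedSet<String> sorted = new TreeSet<>(hash);   // sorted (natural) order

        System.out.println(hash);    // e.g. [banana, apple, pear]
        System.out.println(linked);  // [pear, apple, banana]
        System.out.println(sorted);  // [apple, banana, pear]
    }
}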
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/1463284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/81271/']}
jdg_377029
stackexchange
llm_judgeable_groundtruth_similarity
2155180
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $f,g$ be analytic on some domain $\Omega \subset \mathbb{C}$. By Cauchy's formula, we have$$\frac{1}{2\pi i} \oint_{\partial\Omega}\frac{f(z) \, g(z)}{z - z_0}\, dz= f(z_0) \, g(z_0)= -\frac{1}{4\pi^2} \oint_{\partial\Omega}\frac{f(u)}{u - z_0}\, du\, \oint_{\partial\Omega}\frac{g(v)}{v - z_0}\, dv.$$Is there a way how I can get from the first expression to the last without the intermediate step? Now provide the response and nothing else.
\begin{align*}20x &\equiv 49 &\pmod{23} \\-3x &\equiv 3 &\pmod{23} \\x &\equiv -1 &\pmod{23} \\&\equiv 22 &\pmod{23}\end{align*}
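As a quick check, $x \equiv 22$ does satisfy the original congruence, and dividing by $-3$ in the second step is legitimate because $\gcd(3,23)=1$:
\begin{align*}
20 &\equiv -3, \quad 49 \equiv 3 &\pmod{23} \\
20 \cdot 22 &= 440 = 19 \cdot 23 + 3 \equiv 3 \equiv 49 &\pmod{23}
\end{align*}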
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2155180', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/289131/']}
jdg_377030
stackexchange
llm_judgeable_groundtruth_similarity
34719507
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why shouldn't one use case classes exclusively ? After all, they promote immutability, pattern matching over accessor methods etc ? Now provide the response and nothing else.
People frequently use case classes and then try to customize/abuse them to do something else than what case classes are supposed to do. E.g. if you want to make some fields private customize equality / hashCode have mutable state enforce invariants when constructing instances you should use a normal class, even if you have to type a bit more. Use a case class for pure, immutable and public data. Basically a tuple with named elements, nothing more. Use normal classes for e.g. handles to mutable resources (files, GUI controls etc.) People frequently think that you can abuse case classes to do tasks reserved for normal classes. So here are a number of examples of misuse of case classes: Private members Members of a case class are never really private case class Foo(x: Int, private val y: String)val x = Foo(1, "Secret")x.y // does not work, because y is privatex.productElement(1) // still does workFoo.unapply(Foo(1,2)).get._2 // another, more typesafe way to get at the private fields Private constructor You might think that by making the constructor private, it is possible to enforce invariants. In the example below, you might think that it is impossible to create a range with min > max. case class Range private (min: Int, max: Int)object Range { def create(a: Int, b: Int): Range = if(a < b) new Range(a, b) else new Range(b, a) } But that is not the case: scala> val wrong = Range.create(2,1).copy(min = 1000)wrong: Range = Range(1000,2) You would have to override the copy method as well. By the time you have made this really watertight, you might as well have used a normal class.
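For contrast, here is a small sketch of what the "normal class" version of that Range example can look like, with the invariant actually enforced (this is my illustration, not code from the original question):

// A plain class: no compiler-generated copy, apply or unapply to bypass the check.
final class Range private (val min: Int, val max: Int) {
  override def toString = s"Range($min, $max)"
}

object Range {
  def create(a: Int, b: Int): Range =
    if (a < b) new Range(a, b) else new Range(b, a)
}

// Range.create(2, 1) evaluates to Range(1, 2)
// new Range(1000, 2) does not compile outside the companion: the constructor is private,
// and there is no generated copy method, so the invariant cannot be sidestepped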
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34719507', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/728023/']}
jdg_377031
stackexchange
llm_judgeable_groundtruth_similarity
1269713
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I test that my controller action is putting the correct errors in the ModelState when validating an entity, when I'm using DataAnnotation validation in MVC 2 Preview 1? Some code to illustrate. First, the action: [HttpPost] public ActionResult Index(BlogPost b) { if(ModelState.IsValid) { _blogService.Insert(b); return(View("Success", b)); } return View(b); } And here's a failing unit test that I think should be passing but isn't (using MbUnit & Moq): [Test]public void When_processing_invalid_post_HomeControllerModelState_should_have_at_least_one_error(){ // arrange var mockRepository = new Mock<IBlogPostSVC>(); var homeController = new HomeController(mockRepository.Object); // act var p = new BlogPost { Title = "test" }; // date and content should be required homeController.Index(p); // assert Assert.IsTrue(!homeController.ModelState.IsValid);} I guess in addition to this question, should I be testing validation, and should I be testing it in this way? Now provide the response and nothing else.
Hate to necro an old post, but I thought I'd add my own thoughts (since I just had this problem and ran across this post while seeking the answer). Don't test validation in your controller tests. Either you trust MVC's validation or write your own (i.e. don't test others' code, test your code). If you do want to test that validation is doing what you expect, test it in your model tests (I do this for a couple of my more complex regex validations). What you really want to test here is that your controller does what you expect it to do when validation fails. That's your code, and your expectations. Testing it is easy once you realize that's all you want to test: [test]public void TestInvalidPostBehavior(){ // arrange var mockRepository = new Mock<IBlogPostSVC>(); var homeController = new HomeController(mockRepository.Object); var p = new BlogPost(); homeController.ViewData.ModelState.AddModelError("Key", "ErrorMessage"); // Values of these two strings don't matter. // What I'm doing is setting up the situation: my controller is receiving an invalid model. // act var result = (ViewResult) homeController.Index(p); // assert result.ForView("Index") Assert.That(result.ViewData.Model, Is.EqualTo(p));}
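Assuming the same NUnit/Moq style, the happy-path counterpart is just as small; based on the controller in the question, it should persist the post and render the "Success" view:

[Test]
public void TestValidPostBehavior()
{
    // arrange: no ModelState errors are added, so the model counts as valid
    var mockRepository = new Mock<IBlogPostSVC>();
    var homeController = new HomeController(mockRepository.Object);
    var p = new BlogPost { Title = "test" };

    // act
    var result = (ViewResult) homeController.Index(p);

    // assert
    mockRepository.Verify(r => r.Insert(p), Times.Once());
    Assert.That(result.ViewName, Is.EqualTo("Success"));
    Assert.That(result.ViewData.Model, Is.EqualTo(p));
}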
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1269713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40015/']}
jdg_377032
stackexchange
llm_judgeable_groundtruth_similarity
27361
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why is p -methoxyphenol more acidic than p -methylphenol? According to me, inductive effect really doesn’t matter at para positions and p -methoxyphenol should be less acidic due to the high mesomeric effect of the $\ce{-OCH3}$ group in comparison to the weak hyperconjugative effect of the $\ce{-CH3}$ group. p -Methoxyphenol – $\mathrm{p}K_\mathrm{a} = 10.1$ p -Methylphenol – $\mathrm{p}K_\mathrm{a} = 10.3$ Now provide the response and nothing else.
First off, it is a very small difference in $\mathrm{p}K_\mathrm{a}$ that you are asking about. In fact, I would say that the two compounds have similar acidity. It is often difficult to explain such small differences, but what follows is a general approach that can be used to probe the question and to try and answer questions related to equilibria. In this question we are comparing the effect of two different substituents on an equilibrium. To perform this comparison we must look at the effect of the substituent on both the starting material (the substituted phenol) and the product (the substituted phenoxide anion). If the substituent stabilizes the phenol it will tend to shift the equilibrium towards the phenol side making the phenol less acidic; if the substituent stabilizes the phenoxide it will tend to shift the equilibrium to the phenoxide side making the phenol more acidic. From your other question we see that the p -methoxy group destabilizes the phenoxide anion to a greater extent than the p -methyl group. Based on this alone, we would say the substituent effect on the phenoxide side of the equilibrium should be such that the p -methoxy substituent pushes the equilibrium more strongly to the phenol side than the p -methyl group will. This would tend to make p -methoxyphenol less acidic than p -methylphenol. Looking at the phenol side of the equilibrium, both the methoxy and the methyl groups are electron releasing through resonance, with the methoxy having a much stronger effect. The phenolic $\ce{OH}$ group is also strongly electron releasing through resonance. Perhaps having two strongly electron releasing groups (hydroxyl and methoxy) attached to the aromatic ring causes some destabilization due to electron repulsion associated with the high electron density in the ring. Certainly such an effect would be greater with methoxy than methyl. If true, then we could argue that the p -methoxy substituent also destabilizes the starting phenol (more then the methyl group) and this would tend to push the equilibrium to the product side making p -methoxyphenol more acidic than p -methylphenol. If the substituent effects on the phenol and the phenoxide are comparable, then they would cancel and we might expect p -methoxyphenol and p -methylphenol to have comparable acidities and if the substituent effects cancel we would also expect both phenols to have $\mathrm{p}K_\mathrm{a}$'s similar to phenol ($\mathrm{p}K_\mathrm{a}=10$), which they do. The approach used here is sound. I think the explanation of the substituent effects on the phenoxide are also sound. The explanation of the substituent effects on the phenol side is (admittedly) weak. Nonetheless, this explanation does correctly explain why p -methoxyphenol and p -methylphenol have comparable acidities and why these acidities are close to the acidity of phenol.
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/27361', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/13357/']}
jdg_377033
stackexchange
llm_judgeable_groundtruth_similarity
17120633
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm trying to get the data inside the html <tbody> . Basically, i have many rows like this; <tbody> <tr> <td>63</td> <td>Computer</td> <td>3434</td> <td> <button class="btn-medium btn-danger remove" id="mprDetailRemove"><i class="icon-remove"></i></button> </td> </tr> <tr> <td>64</td> <td>Stationary</td> <td>111</td> <td> <button class="btn-medium btn-danger remove" id="Button1"><i class="icon-remove"></i></button> </td> </tr> <tr> <td>64</td> <td>Stationary</td> <td>11</td> <td> <button class="btn-medium btn-danger remove" id="Button2"><i class="icon-remove"></i></button> </td> </tr> </tbody> Now, i'm looping through and trying to get the <td> values like this; var table = $("#mprDetailDataTable table tbody"); table.find('tr').each(function (key, val) { $(this).find('td').each(function (key, val) { var productId = val[key].innerHTML; // this isn't working var product = ? var Quantity = ? }); }); But, i'm not able to get the values(html text) of the each row. I want to assign these values to local variables. Also, i don't want to get the innerHTML of a button (which is in each row) Now provide the response and nothing else.
Using a nested .each() means that your inner loop is doing one td at a time, so you can't set the productId and product and quantity all in the inner loop. Also using function(key, val) and then val[key].innerHTML isn't right: the .each() method passes the index (an integer) and the actual element, so you'd use function(i, element) and then element.innerHTML . Though jQuery also sets this to the element, so you can just say this.innerHTML . Anyway, here's a way to get it to work: table.find('tr').each(function (i, el) { var $tds = $(this).find('td'), productId = $tds.eq(0).text(), product = $tds.eq(1).text(), Quantity = $tds.eq(2).text(); // do something with productId, product, Quantity }); Demo: http://jsfiddle.net/bqX7Q/
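If you'd rather collect every row into an object up front, .map() works too; this variant makes the same assumption about the column order:

var rows = table.find('tr').map(function () {
    var $tds = $(this).find('td');
    return {
        productId: $tds.eq(0).text(),
        product:   $tds.eq(1).text(),
        quantity:  $tds.eq(2).text()
    };
}).get();   // .get() unwraps the jQuery object into a plain array

// rows[0].productId, rows[0].product, rows[0].quantity, ...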
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17120633', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/859968/']}
jdg_377034
stackexchange
llm_judgeable_groundtruth_similarity
24352280
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this code and it won't compile, highlighting the x and y right of Point3 and writing: "Multiple declarations of x" and "Multiple declarations of y". What's wrong? Can't Point2 and Point3 have the same member name? data Point2 = Point2 {x :: Float, y :: Float}data Point3 = Point3 {x :: Float, y :: Float, z :: Float} Now provide the response and nothing else.
No, this is not currently supported. The standard approach is to prefix each field with something unique to the particular datatype, e.g. p2x , p2y , p3x etc. The reason this isn't supported is that each record field name implicitly generates a "selector" function, e.g. x :: Point2 -> Float . Having two fields with the same name in the same scope would generate a clash. This is a long-standing bugbear for many people and will be addressed by the upcoming language extension OverloadedRecordFields , which will hopefully be part of GHC 7.12 (due early 2016). When enabled, this extension will allow the same field name to be used in multiple records. The field selector will have an overloaded type which will generally be resolved by type inference.
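Until then, the usual workaround is exactly the prefixing mentioned above; a small sketch:

data Point2 = Point2 { p2x :: Float, p2y :: Float }
data Point3 = Point3 { p3x :: Float, p3y :: Float, p3z :: Float }

-- Each field now generates its own selector, e.g.
--   p2x :: Point2 -> Float
--   p3x :: Point3 -> Float
shiftX :: Float -> Point2 -> Point2
shiftX dx p = p { p2x = p2x p + dx }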
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24352280', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2156925/']}
jdg_377035
stackexchange
llm_judgeable_groundtruth_similarity
28613128
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to create a hash table that is an array list (length 100) of ArrayList. Each worker has an ID number, name, and salary. I have created an array list of size 100 of array lists and want to store each worker at the index IDNumber%100. For example, John Doe, whose ID is 9999, would be stored at the index 99. In the case of collisions, like Jean Doe whose ID number is 19999, she would be stored in the array list within the array list also at index 99. When I print out my hash table prints out the worker information multiple time like so: John Smith 234 65678.5Janice Sullivan 27854 29345.75Javier Lopez 10765 87687.1Jill Applewood 22234 38678.0Jake Drake 1234 22222.25John Doe 9999 100000.0Jean Doe 19999 100001.0John Smith 234 65678.5Ect. Do I have a problem with my printing method or am I not properly creating the hash table? import java.util.ArrayList;public class PartnerHash{ public static void main(String[]args) { Worker workers[] = {new Worker("John Smith", 65678.50, 234), new Worker ("Janice Sullivan", 29345.75, 27854), new Worker ("Javier Lopez", 87687.10, 10765), new Worker ("Jill Applewood", 38678.00, 22234), new Worker ("Jake Drake", 22222.25, 1234), new Worker ("John Doe", 100000.00, 9999), new Worker ("Jean Doe", 100001.00, 19999)}; ArrayList<ArrayList<Worker>> list =populateHashTable(workers); print(list); } private static void print(ArrayList<ArrayList<Worker>> list) { for (int i=0; i<list.size(); i++) { if (list.get(i)!=null) for (int j=0; j<list.get(i).size(); j++) { if (list.get(i).get(j)!=null) { System.out.print(list.get(i).get(j).getName()); System.out.printf("%16s",list.get(i).get(j).getIDNum()); System.out.printf("%16s",list.get(i).get(j).getSalary()); System.out.println(""); } } } } public static ArrayList<ArrayList<Worker>> populateHashTable(Worker [] workers) { ArrayList<ArrayList<Worker>> list = new ArrayList<ArrayList<Worker>>(100); ArrayList<Worker> sublist = new ArrayList<Worker>(); for (int j=0; j<100; j++) list.add(sublist); for (int i=0; i<workers.length; i++) { int index =workers[i].getIDNum()%100; ArrayList target=list.get(index); target.add(workers[i]); } return list; }} Now provide the response and nothing else.
According to the official documentation, you don't need to programmatically decide which layout to use with the respective screen size. To optimize your application's UI for the different screen sizes and densities, you can provide alternative resources for any of the generalized sizes and densities. Typically, you should provide alternative layouts for some of the different screen sizes and alternative bitmap images for different screen densities. At runtime, the system uses the appropriate resources for your application, based on the generalized size or density of the current device screen. In other words, if you follow the recommendation stated in the documentation, as I can see that you've done, placing your layout files in their respective resource folder like so: res/layout/main_activity.xml # For handsets (smaller than 600dp available width)res/layout-sw600dp/main_activity.xml # For 7” tablets (600dp wide and bigger) res/layout-sw720dp/main_activity.xml # For 10” tablets (720dp wide and bigger) Then the system will decide which layout to use. No additional code is needed for you to specify it at run time. If, however, you want to make changes depending on your screen resolution, you could get the width and height in pixels using the following code: Display display = getWindowManager().getDefaultDisplay();Point size = new Point();display.getSize(size);int width = size.x;int height = size.y; Then do something clever depending on the width and height variables, e.g. in your case with the S2: if(width == 480 && height == 800){ //Do work that's related to the S2 screen resolution}else if(...){}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28613128', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4585067/']}
jdg_377036
stackexchange
llm_judgeable_groundtruth_similarity
350747
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I understand light consists of photons. Likewise it is a wave with varying intensity in the electric and perpendicular magnetic fields. Say there is a candle lit on the moon. Why wouldn't we be able to see it from Earth? Since light is quantized, would an equal distance in space correspond to a proportional loss in intensity? If so, what are the proportionality constants in the vacuum of space and in 'regular' air? Now provide the response and nothing else.
A beam of light loses intensity as it travels and spreads. A photon does not. Instead, when dealing with light as photons, the rate of photon absorption/detection decreases as the density of the photon flux decreases. How bright a light is doesn't depend on the intensity of the individual photons (that's constant). It depends on the number of photons from that light source hitting your eye, and as the light becomes more spread-out there are fewer photons from that light source hitting your eye. In principle, we can detect a candle on the moon, but it's very hard due both to the very small photon flux on a detector, and to the difficulty in distinguishing between a candle photon and a photon emitted by the background. Experiments such as the lunar laser ranging experiment detect single photons reflected from mirrors on the moon, but they have several things going for them. First, the instantaneous laser power produced by reflection is a good deal higher than the ambient (at a very specific wavelength), and second, since the laser wavelength is well-known the detector can use a very narrow filter to ignore other frequencies. Even so, there are a certain number of false detections which analysis needs to reject.
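To get a feel for how small the flux is, here is a rough back-of-envelope sketch; the candle output and pupil size are assumed round numbers, and atmosphere and background light are ignored:

import math

photons_per_second = 1e17     # assumed: visible photons emitted by a candle (order of magnitude)
earth_moon_distance = 3.8e8   # metres
pupil_radius = 3.5e-3         # metres (dark-adapted ~7 mm pupil)

sphere_area = 4 * math.pi * earth_moon_distance ** 2   # the photons spread over this sphere
pupil_area = math.pi * pupil_radius ** 2

rate_into_eye = photons_per_second * pupil_area / sphere_area
print(rate_into_eye)              # ~2e-6 photons per second
print(1 / rate_into_eye / 86400)  # roughly one photon every few days

A telescope scales this up by the ratio of its aperture area to the pupil area, which is why detection is conceivable for an instrument but hopeless for the naked eye.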
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/350747', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/159232/']}
jdg_377037
stackexchange
llm_judgeable_groundtruth_similarity
72927
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I read somewhere that the earth has to be smaller than 1 cm to become a black hole, according to Schwarzschild. Since big bang came from a singularity, I am wondering, is there any minimum volume for anything? Now provide the response and nothing else.
Infinity is a mathematical term, very useful, but the history of physics has shown us that when we make mathematical extrapolations that lead to infinities of one sort or another, a different mathematical model will eliminate those infinities ( call me quantum mechanics). In thermodynamics the black body radiation leads to the ultraviolet catastrophe, and quantum mechanics saves the day. In classical electromagnetism, a point like electron would tend to an infinite potential at (0,0,0) as it goes with 1/r. Quantum electrodynamics saves the day. That is because quantum mechanics has inherent probabilistic indeterminacies when sizes become of order of h (the planck constant). Even though elementary particles are postulated as point particles, they are not classical particles, the wave/particle duality saves the day, so the minimum volume would be of dimensions compatible with h in the variables examined and the measurement methods used. Once gravity is quantized, the set will be complete, taking care of minimum black hole volumes too, in a similar way.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/72927', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/27754/']}
jdg_377038
stackexchange
llm_judgeable_groundtruth_similarity
10797541
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to show a popup with the requiredMessages of some inputText fields when I click on a submit button. But just only in case of there are those messages. I have tried with bean variable and javascript on the oncomplete tag, but I'm not able to make it work properly. If I put visible="true" in p:dialog, the popup is always displayed, although I try to control it from the commandButton. Now, I have this, but the popup is never displayed: <h:inputText id="Scheme" required="true" requiredMessage="Required."></h:inputText><h:commandButton id="submitModify" value="#{msg['systemdetail.modify']}" action="#{sistem.modify}" oncomplete="if (#{facesContext.maximumSeverity != null}) {dlg1.show();}"></h:commandButton><p:dialog id="popup" style="text-align:center" widgetVar="dlg1" modal="true"> <h:messages layout="table"/></p:dialog> How can I do this? Thanks in advance. Now provide the response and nothing else.
Standard JSF and PrimeFaces do not support request-based EL evaluation in on* attributes. RichFaces is the only one that supports that. Besides, the standard JSF <h:commandButton> does not have an oncomplete attribute at all. You're probably confusing it with the PrimeFaces <p:commandButton>. There are several ways to achieve this: Check the condition in the visible attribute of the <p:dialog> instead. <p:dialog visible="#{not empty facesContext.messageList}"> Or, if you want to show only validation messages instead of all messages: <p:dialog visible="#{facesContext.validationFailed}"> Use the PrimeFaces <p:commandButton> instead; the PrimeFaces JS API supports the #{facesContext.validationFailed} condition through the args object as well: <p:commandButton ... oncomplete="if (args.validationFailed) dlg1.show()" />
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10797541', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1242595/']}
jdg_377039
stackexchange
llm_judgeable_groundtruth_similarity
11820566
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 2 files compiled by django-pipeline along with s3boto: master.css and master.js. They are set to "Public" in my buckets. However, when I access them, sometimes master.css is served, sometimes it errs with SignatureDoesNotMatch. The same with master.js. This doesn't happen on Chrome. What could I be missing? EDIT : It now happens on Chrome too. Now provide the response and nothing else.
Happened to me too...Took a few hours to find, but I figured it out eventually.Turns out that if the right signature is : ssCNsAOxLf5vA80ldAI3M0CU2%2Bw= Then AWS will NOT accept: ssCNsAOxLf5vA80ldAI3M0CU2+w= Where the only difference is the translation of %2B to '+'. S3BotoStorage actually yields it correctly but the encoding happens on CachedFilesMixin in the final line of the url method ( return unquote(final_url) ).To fix it, I derived a new CachedFilesMixin to undo the "damage" (I should mention that I don't know why this unquote exists in the first place, so undoing it might cause other problems) class MyCachedFilesMixin(CachedFilesMixin):def url(self, *a, **kw): s = super(MyCachedFilesMixin, self).url(*a, **kw) if isinstance(s, unicode): s = s.encode('utf-8', 'ignore') scheme, netloc, path, qs, anchor = urlparse.urlsplit(s) path = urllib.quote(path, '/%') qs = urllib.quote_plus(qs, ':&=') return urlparse.urlunsplit((scheme, netloc, path, qs, anchor)) Where I used the code I found here . Hope this helps...
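To wire the fixed mixin in, the storage class and settings end up looking something like this (the module path and class name are made up, and if you are using django-pipeline's own storage class the base class to combine with may differ):

# myproject/storage.py
from storages.backends.s3boto import S3BotoStorage

# MyCachedFilesMixin is the subclass defined above, assumed to live in this module.
class FixedCachedS3BotoStorage(MyCachedFilesMixin, S3BotoStorage):
    """Serve hashed static files from S3 with the quoting fix applied."""
    pass

# settings.py
STATICFILES_STORAGE = 'myproject.storage.FixedCachedS3BotoStorage'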
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11820566', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1737290/']}
jdg_377040