source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20131553
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Could anyone please suggest how to convert a date from this format, 1387843200000, into this format, 24/12/2013, inside my controller? Just FYI, my dates are stored this way, and when binding to an edit form with an input type="date" field, the field is not being populated at all. Plunker demo here.

EditCtrl:

```js
app.controller("EditCtrl", ["$scope", "$filter", "db",
  function ($scope, $filter, db) {
    // this gets me an item object
    var item = db.readItem();
    // item date = 1387843200000
    // this returns undefined
    item.date = $filter('date')(date[ item.date, "dd/MM/yyyy"]);
}]);
```

Edit.html - template:

```html
<form name="editForm" class="form-validate">
  <div class="form-group">
    <label for="date">Event date.</label>
    <input type="date" class="form-control" ng-model="event.date" id="date" required />
  </div>
  <a href="#/" class="btn btn-danger">Cancel</a>
  <button id="addEvent" class="btn btn-primary pull-right" ng-disabled="isClean() || editForm.$invalid" ng-click="saveEvent()">Save event.</button>
</form>
```
Now provide the response and nothing else.
|
```js
item.date = $filter('date')(item.date, "dd/MM/yyyy"); // for conversion to string
```

http://docs.angularjs.org/api/ng.filter:date

But if you are using HTML5 type="date", then the ISO format yyyy-MM-dd MUST be used.

```js
item.dateAsString = $filter('date')(item.date, "yyyy-MM-dd"); // for type="date" binding
```

```html
<input type="date" ng-model="item.dateAsString" value="{{ item.dateAsString }}" pattern="dd/MM/YYYY"/>
```

http://www.w3.org/TR/html-markup/input.date.html

NOTE: use of pattern="" with type="date" looks non-standard, but it appears to work in the expected way in Chrome 31.
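Putting this together in the question's controller might look like the following. This is a sketch only, assuming the db service and event binding from the question; dateFormatted and dateAsString are illustrative property names.

```js
app.controller("EditCtrl", ["$scope", "$filter", "db",
  function ($scope, $filter, db) {
    var item = db.readItem();                // item.date = 1387843200000 (ms timestamp)
    // For display as text:
    item.dateFormatted = $filter('date')(item.date, "dd/MM/yyyy");
    // For binding to <input type="date">, the ISO format is required:
    item.dateAsString = $filter('date')(item.date, "yyyy-MM-dd");
    $scope.event = item;
  }]);
```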
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20131553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/616643/']}
|
jdg_376641
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48277473
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to horizontally center an image inside a card? I have the following:

```html
<div class='column is-one-quarter has-text-centered'>
  <div class='card equal-height'>
    <div class='card-content'>
      <figure class='image is-64x64'><img src='...'></figure>
    </div>
  </div>
</div>
```

and I cannot center the image. I have tried adding is-centered both to the figure and to the parent div, but nothing changes. Thanks.
Now provide the response and nothing else.
|
Change the display property of card-content to flex by using the .is-flex modifier. Now you can use flexbox properties to horizontally center the figure. There is no modifier class for this in Bulma, so you can make your own:

```css
.is-horizontal-center {
  justify-content: center;
}
```

Add this to card-content and you're done:

```html
<link href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.6.2/css/bulma.min.css" rel="stylesheet"/>
<div class='column is-one-quarter'>
  <div class='card equal-height'>
    <div class='card-content is-flex is-horizontal-center'>
      <figure class='image is-64x64'><img src='https://unsplash.it/64'></figure>
    </div>
  </div>
</div>
```
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/48277473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/284257/']}
|
jdg_376642
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40863517
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anybody help me solve this problem? There are 3 tables with 2 foreign keys:

```php
Schema::create('users', function (Blueprint $table) {
    $table->increments('id');
    $table->string('name');
    $table->string('email')->unique();
    $table->string('password');
    $table->rememberToken();
    $table->timestamps();
});

Schema::create('firms', function (Blueprint $table) {
    $table->increments('id');
    $table->string('title')->nullable();
    $table->integer('user_id')->unsigned()->nullable();
    $table->foreign('user_id')->references('id')->on('users');
    $table->timestamps();
});

Schema::create('jobs', function (Blueprint $table) {
    $table->increments('id');
    $table->string('title')->nullable();
    $table->integer('firm_id')->unsigned()->nullable();
    $table->foreign('firm_id')->references('id')->on('firms');
    $table->timestamps();
});
```

Error after running the migration:

```
[Illuminate\Database\QueryException]
SQLSTATE[HY000]: General error: 1005 Can't create table `job`.`#sql-5fc_a1`
(errno: 150 "Foreign key constraint is incorrectly formed")
(SQL: alter table `firms` add constraint `firms_user_id_foreign`
 foreign key (`user_id`) references `users` (`id`))

[PDOException]
SQLSTATE[HY000]: General error: 1005 Can't create table `job`.`#sql-5fc_a1`
(errno: 150 "Foreign key constraint is incorrectly formed")
```
Now provide the response and nothing else.
|
In the case of foreign keys, the referenced and referencing fields must have exactly the same data type. You create the id fields in both users and firms as signed integers, but you create both foreign keys as unsigned integers, so the creation of the keys fails. You need to either add the unsigned clause to the id field definitions, or remove the unsigned clause from the foreign key fields.
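For concreteness, here is a minimal sketch of the second option applied to the firms table from the question, simply dropping the unsigned modifier from the referencing column so its type matches the id column; the same change would apply to jobs.firm_id:

```php
Schema::create('firms', function (Blueprint $table) {
    $table->increments('id');
    $table->string('title')->nullable();
    // no ->unsigned() here, so the type matches users.id exactly
    $table->integer('user_id')->nullable();
    $table->foreign('user_id')->references('id')->on('users');
    $table->timestamps();
});
```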
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/40863517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7221924/']}
|
jdg_376643
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23317342
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a pandas dataframe with a column named 'City, State, Country'. I want to separate this column into three new columns: 'City', 'State' and 'Country'.

```
0                 HUN
1                 ESP
2                 GBR
3                 ESP
4                 FRA
5             ID, USA
6             GA, USA
7    Hoboken, NJ, USA
8             NJ, USA
9                 AUS
```

Splitting the column into three columns is trivial enough:

```python
location_df = df['City, State, Country'].apply(lambda x: pd.Series(x.split(',')))
```

However, this creates left-aligned data:

```
         0     1     2
0      HUN   NaN   NaN
1      ESP   NaN   NaN
2      GBR   NaN   NaN
3      ESP   NaN   NaN
4      FRA   NaN   NaN
5       ID   USA   NaN
6       GA   USA   NaN
7  Hoboken    NJ   USA
8       NJ   USA   NaN
9      AUS   NaN   NaN
```

How would one go about creating the new columns with the data right-aligned? Would I need to iterate through every row, count the number of commas and handle the contents individually?
Now provide the response and nothing else.
|
I'd do something like the following:

```python
foo = lambda x: pd.Series([i for i in reversed(x.split(','))])
rev = df['City, State, Country'].apply(foo)
print rev

     0    1        2
0  HUN  NaN      NaN
1  ESP  NaN      NaN
2  GBR  NaN      NaN
3  ESP  NaN      NaN
4  FRA  NaN      NaN
5  USA   ID      NaN
6  USA   GA      NaN
7  USA   NJ  Hoboken
8  USA   NJ      NaN
9  AUS  NaN      NaN
```

I think that gets you what you want, but if you also want to pretty things up and get a City, State, Country column order, you could add the following:

```python
rev.rename(columns={0: 'Country', 1: 'State', 2: 'City'}, inplace=True)
rev = rev[['City', 'State', 'Country']]
print rev

      City State Country
0      NaN   NaN     HUN
1      NaN   NaN     ESP
2      NaN   NaN     GBR
3      NaN   NaN     ESP
4      NaN   NaN     FRA
5      NaN    ID     USA
6      NaN    GA     USA
7  Hoboken    NJ     USA
8      NaN    NJ     USA
9      NaN   NaN     AUS
```
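For reference, a similar right-aligned result can be obtained with pandas' built-in string methods; this is a sketch written for Python 3 and a modern pandas, unlike the Python 2 style print statements above:

```python
import pandas as pd

df = pd.DataFrame({'City, State, Country':
                   ['HUN', 'ESP', 'ID, USA', 'Hoboken, NJ, USA']})

# Split on commas, reverse each row so Country comes first, and let
# pd.Series pad the shorter rows with NaN on the right.
rev = (df['City, State, Country']
       .str.split(',')
       .apply(lambda parts: pd.Series([p.strip() for p in reversed(parts)])))

rev = rev.rename(columns={0: 'Country', 1: 'State', 2: 'City'})
print(rev[['City', 'State', 'Country']])
```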
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23317342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3186581/']}
|
jdg_376644
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
282996
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
In answering another MathOverflow question, I stumbled across the sequence of polynomials $Q_n(p)$ defined by the recurrence
$$Q_n(p) = 1-\sum_{k=2}^{n-1} \binom{n-2}{k-2}(1-p)^{k(n-k)}Q_k(p).$$
Thus:

- $Q_{2}(p) = 1$
- $Q_{3}(p) = -p^2 + 2 p$
- $Q_{4}(p) = -2 p^5 + 9 p^4 - 14 p^3 + 8 p^2$
- $Q_{5}(p) = 6 p^9 - 48 p^8 + 162 p^7 - 298 p^6 + 318 p^5 - 189 p^4 + 50 p^3$

Numerical calculations up to $n=60$ suggest that:

1. The lowest-degree term of $Q_n(p)$ is $2n^{n-3}p^{n-2}$.
2. The coefficients of $Q_n(p)$ alternate in sign.

Are these true for all $n$? As the title indicates, I'm especially puzzled about 2. Indeed, the original inspiration for the polynomials $Q_n(p)$ comes from a classic paper of E. N. Gilbert (Random graphs, Ann. Math. Stat. 30, 1141-1144 (1959); ZBL0168.40801) where the author studies the sequence of polynomials $P_n(p)$ given by the similar recurrence $$P_n(p) = 1 - \sum_{k=1}^{n-1} \binom{n-1}{k-1}(1-p)^{k(n-k)}P_k(p),$$ which do not have alternating coefficients.
Now provide the response and nothing else.
|
To illustrate the suggestion of Richard Stanley about positivity of the real parts of the zeroes, here are the zeroes of $Q_{20}$ [plot not reproduced here]. The pattern seems to be the same for all of them. Another empirical observation: it seems that
$$\frac{Q_n(1-x)}{(1-x)^{n-2}(1+x)}=1+(n-3)x+\left(\binom{n-2}2+1\right)x^2+\left(\binom{n-1}3+n-3\right)x^3+\left(\binom n4+\binom{n-2}2+1\right)x^4+\dots+\left(\binom{n+k-4}k+\binom{n+k-6}{k-2}+\binom{n+k-8}{k-4}+\dots\right)x^k+O(x^{k+1})$$
for $n>k+1$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/282996', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2000/']}
|
jdg_376645
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
195504
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem: Prove that for every nonzero linear functional $f:M\to\mathbb{K}$, where $M$ is a vector space over the field $\mathbb{K}$, the subspace $f^{-1}(0)$ has codimension one. Could someone solve this for me?
Now provide the response and nothing else.
|
The following is a proof in the finite-dimensional case. The image of $f$ is a nonzero subspace of $\Bbb{K}$, and $\Bbb{K}$ has dimension 1 over itself, so $\operatorname{im} f$ must be the whole of $\Bbb{K}$ and $\dim \operatorname{im} f = 1$. By rank-nullity, $$1 = \dim \operatorname{im} f = \dim_\Bbb{K} M - \dim \ker f,$$ showing that $\ker f$ has codimension 1.
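For the general (possibly infinite-dimensional) case, rank-nullity is not available, but a direct decomposition works; a sketch:

```latex
\text{Since } f \neq 0, \text{ pick } v \in M \text{ with } f(v) = 1 \text{ (rescale if necessary).} \\
\text{Every } m \in M \text{ splits as } m = \underbrace{\bigl(m - f(m)\,v\bigr)}_{\in\, \ker f} + f(m)\,v, \\
\text{so } M = \ker f \oplus \mathbb{K}v, \text{ and hence } \operatorname{codim}(\ker f) = 1.
```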
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/195504', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34356/']}
|
jdg_376646
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39110801
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a difference between path.join and path.resolve when used with __dirname to resolve an absolute path in Node.js? Should one of them be preferred when used like that (absolute path resolution is 90% of the use cases)? I.e.,

```js
const absolutePath = path.join(__dirname, some, dir);
```

vs.

```js
const absolutePath = path.resolve(__dirname, some, dir);
```

Both methods normalize the path.
Now provide the response and nothing else.
|
Yes, there is a difference between the functions, but the way you are using them in this case will result in the same outcome.

path.join returns a normalized path by merging two paths together. It can return an absolute path, but it doesn't necessarily always do so. For instance, path.join('app/libs/oauth', '/../ssl') resolves to app/libs/ssl.

path.resolve, on the other hand, will resolve to an absolute path. For instance, when you run:

```js
path.resolve('bar', '/foo');
```

the path returned will be /foo, since that is the first absolute path that can be constructed. However, if you run:

```js
path.resolve('/bar/bae', '/foo', 'test');
```

the path returned will be /foo/test, again because that is the first absolute path that can be formed, reading from right to left. If you don't provide a segment that specifies the root directory, the paths given to resolve are appended to the current working directory. So if your working directory were /home/mark/project/, then:

```js
path.resolve('test', 'directory', '../back');
```

resolves to /home/mark/project/test/back.

__dirname is the absolute path to the directory containing the source file. When you give path.resolve or path.join the same paths following __dirname, they will return the same result. In such cases it's really just a matter of preference.
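To see both behaviors side by side, here is a quick runnable sketch (the 'some' and 'dir' segments are just placeholder names):

```js
const path = require('path');

// With __dirname as the first segment, both produce the same absolute path:
console.log(path.join(__dirname, 'some', 'dir'));    // e.g. /home/mark/project/some/dir
console.log(path.resolve(__dirname, 'some', 'dir')); // same result

// Without an absolute first segment, they diverge:
console.log(path.join('some', 'dir'));    // 'some/dir' (relative)
console.log(path.resolve('some', 'dir')); // cwd + '/some/dir' (absolute)
```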
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/39110801', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3731501/']}
|
jdg_376647
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
286249
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to use my NodeMCU to drive a SIM808 module with an integrated LiPo charger: https://www.itead.cc/wiki/SIM808_GSM/GPRS/GPS_Module

There are two problems. First of all, the SIM808 module just provides the LiPo voltage, no steady 3.3V or 5V output. The NodeMCU needs 3.3V (or 5V with its LDO), and the internal LDO has a high quiescent current of >1mA. So the idea would be to replace the onboard 5V-to-3.3V LDO with one with a low quiescent current (<0.1mA) that takes direct LiPo input (3.7-4.3V), outputs 3.3V, and is pin-compatible so it can simply be swapped directly on the NodeMCU. The MCP1700 would be an option as a TO-92 to make the pins fit, because VIN/VOUT is reversed. This would be my easiest idea. Do you have any different ideas?

More about the NodeMCU and the AMS1117 position on the board: https://tinker.yeoman.com.au/2016/05/29/running-nodemcu-on-a-battery-esp8266-low-power-consumption-revisited/
Now provide the response and nothing else.
|
Your choice looks good to me, but only if your NodeMCU is going to spend most of its time sleeping. In that case I'd use an MCP1700 (quiescent current < 4 µA). If it's going to spend a significant percentage of its time up and running, you may be better off using a switching regulator. It's all about weighing the energy saved while sleeping against the energy saved while operating.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/286249', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/138973/']}
|
jdg_376648
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37528076
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I run the following code:

```js
var obj = { 0: 'a', 1: 'b', 2: 'c' };
typeof Object.keys(obj)[0] // returns "string"
```

In the obj object I'm creating number keys. Any reason why it's a string and not a number?
Now provide the response and nothing else.
|
Because Object.keys returns an array of strings. From the documentation:

Object.keys() returns an array whose elements are strings corresponding to the enumerable properties found directly upon the object. The ordering of the properties is the same as that given by looping over the properties of the object manually.

You get an array of strings because property names are strings by definition:

Property names must be strings. This means that non-string objects cannot be used as keys in an object. Any non-string object, including a number, is typecast into a string via the toString method.
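A small sketch illustrating this, and one way to get numbers back if you need them:

```js
const obj = { 0: 'a', 1: 'b', 2: 'c' };

console.log(Object.keys(obj));           // ['0', '1', '2'] — all strings
console.log(typeof Object.keys(obj)[0]); // 'string'

// If numeric keys are needed, convert them explicitly:
const numericKeys = Object.keys(obj).map(Number);
console.log(numericKeys);                // [0, 1, 2]
```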
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37528076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2285490/']}
|
jdg_376649
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
219298
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to understand how a spreadsheet (a group of named or otherwise identified cells containing values or formulas referencing other cells) is solved. I have tried looking at existing projects, but there was so much going on with the GUI, serialization, events, etc. that I couldn't find the spreadsheet itself. At its simplest, how does it work?
Now provide the response and nothing else.
|
At its core, a spreadsheet is a functional language with dynamic typing, where each function or value can be referenced as a cell in the matrix. Instead of something like (defn some-name ...), the some-name part is the cell's own address. If you look at a dynamically updating functional-language IDE (such as Light Table for Clojure), you will see much of the same functionality as a spreadsheet: bind a value to a name, write a function that uses that value, change the value, and the output of the function changes immediately. This is the same as writing =A1 + B2 in cell C3 in Excel.

Thus, functional programmers often like to write spreadsheets as toy programs... and they are the subject of research papers too. (Yes, I'm sorry, they are all behind the ACM.org paywall.)

Spreadsheet functional programming: "The functional programming community has shown some interest in spreadsheets, but surprisingly no one seems to have considered making a standard spreadsheet, such as Excel, work with a standard functional programming language, such as Haskell. In this paper, we show one way that this can be done. Our hope is that by doing so, we might get spreadsheet programmers to give functional programming a try."

Forms/3: A first-order visual language to explore the boundaries of the spreadsheet paradigm: "Although detractors of functional programming sometimes claim that functional programming is too difficult or counter-intuitive for most programmers to understand and use, evidence to the contrary can be found by looking at the popularity of spreadsheets. The spreadsheet paradigm, a first-order subset of the functional programming paradigm, has found wide acceptance among both programmers and end users. Still, there are many limitations with most spreadsheet systems. In this paper, we discuss language features that eliminate several of these limitations without deviating from the first-order, declarative evaluation model."

Implementing function spreadsheets: "A large amount of end-user development is done with spreadsheets. The spreadsheet metaphor is attractive because it is visual and accommodates interactive experimentation, but as observed by Peyton Jones, Blackwell and Burnett, the spreadsheet metaphor does not admit even the most basic abstraction: that of turning an expression into a named function. Hence they proposed a way to define a function in terms of a worksheet with designated input and output cells; we shall call it a function sheet."

The start of the Spreadsheet article on Wikipedia gives some hints as to how to implement one: "A spreadsheet is an interactive computer application program for organization and analysis of data in tabular form. Spreadsheets developed as computerized simulations of paper accounting worksheets. The program operates on data represented as cells of an array, organized in rows and columns. Each cell of the array is a model-view-controller element that can contain either numeric or text data, or the results of formulas that automatically calculate and display a value based on the contents of other cells."

This builds on an outline of the Model-View-Controller paradigm as expressed in the Java libraries. Its author goes on to mention applets (a bit dated; it was written in '93-'96) and points to his web page at http://csis.pace.edu/~bergin/Java/applets.htm (yes, applets) for the corresponding spreadsheet code: http://csis.pace.edu/~bergin/Java/Spreadsheet.java. I will point out that the entirety of the spreadsheet in this applet is not that big: 570 lines including documentation. That said, depending on the language, you could probably do it all with just function pointers in a sparse array.
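To make the core idea concrete, here is a deliberately tiny sketch (my own, not taken from the papers or the applet above) of a spreadsheet engine: cells hold either constants or functions of other cells, and evaluation is just recursive lookup with memoization. A real engine would also need cycle detection and incremental recomputation.

```python
# A toy spreadsheet: a dict mapping cell names to constants or to
# functions that compute a value from other cells.
cells = {
    "A1": 2,
    "B2": 3,
    "C3": lambda get: get("A1") + get("B2"),   # like =A1+B2
}

def evaluate(name, cache=None):
    """Recursively evaluate a cell, memoizing results in `cache`."""
    if cache is None:
        cache = {}
    if name not in cache:
        cell = cells[name]
        cache[name] = cell(lambda n: evaluate(n, cache)) if callable(cell) else cell
    return cache[name]

print(evaluate("C3"))  # 5; change A1 and re-evaluate to see it update
```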
|
{}
|
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/219298', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/109567/']}
|
jdg_376650
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2472578
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using ASP.NET MVC 2, is there any way to use the RedirectToAction() method of the Controller class inside a class based on the AuthorizeAttribute class?

```csharp
public class CustomAttribute : AuthorizeAttribute
{
    protected override bool AuthorizeCore(HttpContextBase context)
    {
        // Custom authentication goes here
        return false;
    }

    public override void OnAuthorization(AuthorizationContext context)
    {
        base.OnAuthorization(context);
        // This would be my ideal result
        context.Result = RedirectToAction("Action", "Controller");
    }
}
```

I'm looking for a way to redirect the user to a specific controller/action when they fail the authentication, instead of returning them to the login page. Is it possible to have the redirect URL generated for that controller/action and then use RedirectResult()? I'm trying to avoid the temptation to just hard-code the URL.
Now provide the response and nothing else.
|
You can/should override HandleUnauthorizedRequest instead of OnAuthorization. Here's the default implementation:

```csharp
protected virtual void HandleUnauthorizedRequest(AuthorizationContext filterContext)
{
    // Returns HTTP 401 - see comment in HttpUnauthorizedResult.cs.
    filterContext.Result = new HttpUnauthorizedResult();
}
```

You can't use Controller.RedirectToAction, but you can return a new RedirectToRouteResult. So you can do:

```csharp
protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
{
    filterContext.Result = new RedirectToRouteResult(
        new RouteValueDictionary
        {
            { "action", "ActionName" },
            { "controller", "ControllerName" }
        });
}
```
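Usage is then just a matter of decorating a controller or action with the attribute; a sketch, assuming the CustomAttribute class from the question with the override above added:

```csharp
[Custom] // shorthand for [CustomAttribute]
public class AccountController : Controller
{
    public ActionResult Secret()
    {
        // Only reached when AuthorizeCore returns true; otherwise
        // HandleUnauthorizedRequest issues the redirect instead.
        return View();
    }
}
```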
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2472578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/25549/']}
|
jdg_376651
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52717
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If the column vectors of a square matrix $A$ are all orthogonal to each other, can I say that the row vectors of $A$ are also orthogonal to each other? From the equation $Q Q^{T}=I$ when $Q$ is an orthogonal square matrix, it seems that this is true, but I still find it hard to believe. I have a feeling that I may be wrong, because those column vectors that are perpendicular are vectors within the column space, while the row vectors point in totally different directions within the row space, so how could they always happen to be perpendicular? Thanks for any help.
Now provide the response and nothing else.
|
Recall that two vectors are orthogonal if and only if their inner product is zero. You are incorrect in asserting that if the columns of $Q$ are orthogonal to each other then $QQ^T = I$; this follows if the columns of $Q$ form an orthonormal set (basis for $\mathbb{R}^n$); orthogonality is not sufficient. Note that "$Q$ is an orthogonal matrix" is not equivalent to "the columns of $Q$ are pairwise orthogonal". With that clarification, the answer is that if you only ask that the columns be pairwise orthogonal, then the rows need not be pairwise orthogonal. For example, take$$A = \left(\begin{array}{ccc}1& 0 & 0\\0& 0 & 1\\1 & 0 & 0\end{array}\right).$$The columns are orthogonal to each other: the middle column is orthogonal to everything (being the zero vector), and the first and third columns are orthogonal. However, the rows are not orthogonal, since the first and third rows are equal and nonzero. On the other hand, if you require that the columns of $Q$ be an orthonormal set (pairwise orthogonal, and the inner product of each column with itself equals $1$), then it does follow: precisely as you argue. That condition is equivalent to "the matrix is orthogonal", and since $I = Q^TQ = QQ^T$ and $(Q^T)^T = Q$, it follows that if $Q$ is orthogonal then so is $Q^T$, hence the columns of $Q^T$ (i.e., the rows of $Q$) form an orthonormal set as well.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/52717', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12426/']}
|
jdg_376652
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5941701
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Given:

```haskell
data Foo = FooString String
…

class Fooable a where -- (is this a good way to name this?)
  toFoo :: a -> Foo
```

I want to make String an instance of Fooable:

```haskell
instance Fooable String where
  toFoo = FooString
```

GHC then complains:

```
Illegal instance declaration for `Fooable String'
    (All instance types must be of the form (T t1 ... tn)
     where T is not a synonym.
     Use -XTypeSynonymInstances if you want to disable this.)
In the instance declaration for `Fooable String'
```

If instead I use [Char]:

```haskell
instance Fooable [Char] where
  toFoo = FooString
```

GHC complains:

```
Illegal instance declaration for `Fooable [Char]'
    (All instance types must be of the form (T a1 ... an)
     where a1 ... an are type *variables*,
     and each type variable appears at most once in the instance head.
     Use -XFlexibleInstances if you want to disable this.)
In the instance declaration for `Fooable [Char]'
```

Question: Why can I not make String an instance of a typeclass? GHC seems willing to let me get away with this if I add an extra flag. Is this a good idea?
Now provide the response and nothing else.
|
This is because String is just a type alias for [Char], which is itself the application of the type constructor [] to the type Char, so it has the form ([] Char). This is not of the form (T a1 .. an), because Char is not a type variable.

The reason for this restriction is to prevent overlapping instances. For example, suppose you had an instance Fooable [Char], and then someone later came along and defined an instance Fooable [a]. Now the compiler won't be able to figure out which one you want to use, and will give you an error. By using -XFlexibleInstances, you're basically promising the compiler that you won't define any such instances.

Depending on what you're trying to accomplish, it might be better to define a wrapper:

```haskell
newtype Wrapper = Wrapper String

instance Fooable Wrapper where
  ...
```
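If you do decide the wrapper is overkill and want the instance on String itself, a sketch of the flag-based version looks like this (enabling the extension via a pragma in the source file rather than on the command line; FlexibleInstances implies TypeSynonymInstances in GHC):

```haskell
{-# LANGUAGE FlexibleInstances #-}

data Foo = FooString String

class Fooable a where
  toFoo :: a -> Foo

instance Fooable String where
  toFoo = FooString
```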
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5941701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/163177/']}
|
jdg_376653
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51708
|
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Netflix gave a public tech talk explaining their network, and one specific design puzzled me. See page 25 of this talk (Netflix's Migration into VPC): they seem to have chosen 100.64.0.0/10 as their private network space, rather than 10.0.0.0/8 (or a subspace of it). What's the purpose? I don't see any obvious benefit, but I may be missing some key points. Can someone give some thoughts on this?
Now provide the response and nothing else.
|
"they seemed to choose to use 100.64.0.0/10 as their private network space, rather than 10.0.0.0/8 (or a subspace from it). What's the purpose? I don't see any obvious benefit, but I may miss some key points there. Can someone give some thoughts on this?"

The slide is very clear on why they are using this space: they are following the current best practice for performing CGN (carrier-grade NAT, or carrier-grade network address translation) as defined by RFC 6598. The slide also notes this.

Why this address space and not another, like RFC 1918 space? Because this is the shared address space the IANA has reserved expressly to be used for CGN. It is the one they should be using for CGN. Let me turn your question on its head: given an address space specifically dedicated to CGN and defined as a best current practice, if you are using CGN, why would you use anything besides 100.64.0.0/10, such as RFC 1918 space?

This is the relevant section from RFC 6598 that explains why providers need this address space rather than just using RFC 1918 space:

"A Service Provider can number the interfaces in question from [RFC1918] space if at least one of the following conditions is true:

- The Service Provider knows that the CPE/NAT works correctly when the same [RFC1918] address block is used on both its inside and outside interfaces.
- The Service Provider knows that the [RFC1918] address block that it uses to number interfaces between the CGN and CPE is not used on the subscriber side of the CPE.

Unless at least one of the conditions above is true, the Service Provider cannot safely use [RFC1918] address space and must resort to Shared Address Space. This is typically the case in an unmanaged service, where subscribers provide their own CPE and number their own internal network."

Once this shared space was assigned for the purpose and became a best practice, there is no reason that anyone performing CGN should not use it for its intended purpose.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/51708', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/34708/']}
|
jdg_376654
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30877019
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I launch

```
ghci> last [0..]
```

I can interrupt it with Ctrl+C. However,

```
ghci> last (repeat 0)
```

cannot be aborted with Ctrl+C; GHCi silently ignores the keystrokes. How do I abort this command in GHCi? Is it a bug?
Now provide the response and nothing else.
|
(Caveat lector: I use Linux, and run zsh on urxvt or gnome-terminal. If you use a different operating system, terminal, or shell, it's possible this will work differently for you.) The way I usually handle this is to hit Ctrl+Z (which puts it in the background, pausing execution entirely as a side-effect) then kill the job. Usually this is kill %1 , though you can run jobs to double-check. You can also start a new terminal and do something like killall -9 ghci , but this has a much higher resource cost: you are spawning a few new processes, opening X connections, doing whatever it is your terminal does when it initializes itself, doing whatever it is your shell does when it initializes itself, etc. If you're in the situation I often find myself in -- ghci is swapping like crazy -- that just gives ghci more time to screw things up. If you can predict this problem, and are compiling, you can use -fno-omit-yields to ask GHC to insert Ctrl+C checks even inside tight, non-allocating loops.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30877019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1717752/']}
|
jdg_376655
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48481003
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to update Bootstrap in ASP.NET Core with NuGet. I used this:

```
Install-Package bootstrap -Version 4.0.0
```

It did add the dependencies, but how do I add it to my project now? What is the path for local NuGet dependencies?
Now provide the response and nothing else.
|
As others have already mentioned, the package manager Bower, which was usually used for dependencies like this in applications that do not rely on heavy client-side scripting, is on the way out and actively recommending a move to other solutions:

"..psst! While Bower is maintained, we recommend yarn and webpack for new front-end projects!"

So although you can still use it right now, Bootstrap has also announced dropping support for it. As a result, the built-in ASP.NET Core templates are slowly being edited to move away from it too.

Unfortunately, there is no clear path forward. This is mostly because web applications are continuously moving further onto the client side, requiring complex client-side build systems and many dependencies. So if you are building something like that, you might already know how to solve this, and you can expand your existing build process to simply include Bootstrap and jQuery there.

But there are still many web applications out there that are not that heavy on the client side, where the application still runs mainly on the server and the server serves static views as a result. Bower previously filled this gap by making it easy to publish client-side dependencies without that much of a process. In the .NET world we also have NuGet, and with previous ASP.NET versions we could use NuGet to add client-side dependencies, since NuGet would just place the content into our project correctly. Unfortunately, with the new .csproj format and the new NuGet, installed packages are located outside of our project, so we cannot simply reference them.

This leaves us with a few options for adding our dependencies:

One-time installation

This is what the ASP.NET Core templates that are not single-page applications are currently doing. When you use those to create a new application, the wwwroot folder simply contains a lib folder that holds the dependencies. If you look closely at the files, you can see that they were originally placed there with Bower to create the template, but that is likely to change soon.

The basic idea is that the files are copied once into the wwwroot folder so you can depend on them. To do this, we can simply follow Bootstrap's introduction and download the compiled files directly. As mentioned on the download site, this does not include jQuery, so we need to download that separately; it does contain Popper.js though, if we choose to use the bootstrap.bundle file later — which we will do. For jQuery, we can simply get a single "compressed, production" file from the download site (right-click the link and select "Save link as..." from the menu).

This leaves us with a few files which we simply extract and copy into the wwwroot folder. We can also make a lib folder to make it clearer that these are external dependencies.

That's all we need, so now we just need to adjust our _Layout.cshtml file to include those dependencies. For that, we add the following block to the <head>:

```html
<environment include="Development">
    <link rel="stylesheet" href="~/lib/css/bootstrap.css" />
</environment>
<environment exclude="Development">
    <link rel="stylesheet" href="~/lib/css/bootstrap.min.css" />
</environment>
```

And the following block at the very end of the <body>:

```html
<environment include="Development">
    <script src="~/lib/js/jquery-3.3.1.js"></script>
    <script src="~/lib/js/bootstrap.bundle.js"></script>
</environment>
<environment exclude="Development">
    <script src="~/lib/js/jquery-3.3.1.min.js"></script>
    <script src="~/lib/js/bootstrap.bundle.min.js"></script>
</environment>
```

You can also just include the minified versions and skip the <environment> tag helpers to make it a bit simpler. But that's all you need to get started.

Dependencies from NPM

The more modern way, especially if you want to keep your dependencies updated, would be to get the dependencies from the NPM package repository. You can use either NPM or Yarn for this; in my example, I'll use NPM.

To start off, we need to create a package.json file for our project so we can specify our dependencies. We simply do that from the "Add New Item" dialog. Once we have it, we edit it to include our dependencies. It should look something like this:

```json
{
  "version": "1.0.0",
  "name": "asp.net",
  "private": true,
  "devDependencies": {
    "bootstrap": "4.0.0",
    "jquery": "3.3.1",
    "popper.js": "1.12.9"
  }
}
```

On saving, Visual Studio will already run NPM to install the dependencies for us. They will be installed into the node_modules folder. So what is left to do is to get the files from there into our wwwroot folder. There are a few options for that:

bundleconfig.json for bundling and minification

We can use one of the various ways to consume a bundleconfig.json for bundling and minification, as explained in the documentation. A very easy way is to simply use the BuildBundlerMinifier NuGet package, which automatically sets up a build task for this. After installing that package, we need to create a bundleconfig.json at the root of the project with the following contents:

```json
[
  {
    "outputFileName": "wwwroot/vendor.min.css",
    "inputFiles": [
      "node_modules/bootstrap/dist/css/bootstrap.min.css"
    ],
    "minify": { "enabled": false }
  },
  {
    "outputFileName": "wwwroot/vendor.min.js",
    "inputFiles": [
      "node_modules/jquery/dist/jquery.min.js",
      "node_modules/popper.js/dist/umd/popper.min.js",
      "node_modules/bootstrap/dist/js/bootstrap.min.js"
    ],
    "minify": { "enabled": false }
  }
]
```

This basically configures which files to combine into what. When we build, we can see that vendor.min.css and vendor.min.js are created correctly. So all we need to do is adjust our _Layout.cshtml again to include those files:

```html
<!-- inside <head> -->
<link rel="stylesheet" href="~/vendor.min.css" />

<!-- at the end of <body> -->
<script src="~/vendor.min.js"></script>
```

Using a task manager like Gulp

If we want to move a bit more into client-side development, we can also start to use the tools we would use there, for example Webpack, which is a very commonly used build tool for really everything. But we can also start with a simpler task manager like Gulp and do the few necessary steps ourselves. For that, we add a gulpfile.js to our project root with the following contents:

```js
const gulp = require('gulp');
const concat = require('gulp-concat');

const vendorStyles = [
    "node_modules/bootstrap/dist/css/bootstrap.min.css"
];
const vendorScripts = [
    "node_modules/jquery/dist/jquery.min.js",
    "node_modules/popper.js/dist/umd/popper.min.js",
    "node_modules/bootstrap/dist/js/bootstrap.min.js",
];

gulp.task('build-vendor-css', () => {
    return gulp.src(vendorStyles)
        .pipe(concat('vendor.min.css'))
        .pipe(gulp.dest('wwwroot'));
});

gulp.task('build-vendor-js', () => {
    return gulp.src(vendorScripts)
        .pipe(concat('vendor.min.js'))
        .pipe(gulp.dest('wwwroot'));
});

gulp.task('build-vendor', gulp.parallel('build-vendor-css', 'build-vendor-js'));
gulp.task('default', gulp.series('build-vendor'));
```

Now we also need to adjust our package.json to have dependencies on gulp and gulp-concat:

```json
{
  "version": "1.0.0",
  "name": "asp.net",
  "private": true,
  "devDependencies": {
    "bootstrap": "4.0.0",
    "gulp": "^4.0.2",
    "gulp-concat": "^2.6.1",
    "jquery": "3.3.1",
    "popper.js": "1.12.9"
  }
}
```

Finally, we edit our .csproj to add the following target, which makes sure that our Gulp task runs when we build the project:

```xml
<Target Name="RunGulp" BeforeTargets="Build">
  <Exec Command="node_modules\.bin\gulp.cmd" />
</Target>
```

Now, when we build, the default Gulp task runs, which runs the build-vendor tasks, which then build our vendor.min.css and vendor.min.js just like before. So after adjusting our _Layout.cshtml just as above, we can make use of jQuery and Bootstrap.

While the initial setup of Gulp is a bit more complicated than the bundleconfig.json approach above, we have now entered the Node world and can start to make use of all the other cool tools there. So it might be worth starting with this.

Conclusion

While this suddenly got a lot more complicated than just using Bower, we also gain a lot of control with these new options. For example, we can now decide exactly which files are included within the wwwroot folder and what they look like. And we can also use this to take the first steps into the client-side development world with Node, which at least should help a bit with the learning curve.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/48481003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4727606/']}
|
jdg_376656
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
211676
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the integral $$\int _{-1}^1\left(|x|-\frac{x^2 \left(x^2 \left(c_1 x^2+c_2\right)+c_3\right)}{x^2 \left(c_4 x^2+c_5\right)+c_6}\right)^2\mathrm{d}x$$ which I express in Mathematica code as

```
Clear[c1, c2, c3, c4, c5, c6];
f[x_] := (RealAbs[x] - (x^2*(x^2*(c1*x^2 + c2) + c3))/(x^2*(c4*x^2 + c5) + c6))^2
f[x]
HoldForm[Integrate[f[x], {x, -1, 1}]]
```

How can I search for real values of $c_i$ that minimize the value of this nonelementary integral?
Now provide the response and nothing else.
|
To be more specific, referring to the answer of @Yusuf, try:

```
func[x_, a_, b_, c_, d_, m_, v_] =
  (-((x^2 (c + x^2 (b + a x^2)))/(m + x^2 (v + d x^2))) + Abs[x])^2;

nint[a_?NumericQ, b_?NumericQ, c_?NumericQ, d_?NumericQ, m_?NumericQ, v_?NumericQ] :=
  NIntegrate[func[x, a, b, c, d, m, v], {x, -1, 1}]

NMinimize[nint[a, b, c, d, m, v],
  {{a, 0, 1}, {b, 0, 1}, {c, 0, 1}, {d, 0, 1}, {m, 0, 1}, {v, 0, 1}}]

(* {0.0000131187, {a -> 1.25015, b -> 2.12136, c -> 0.0859803,
    d -> 2.75847, m -> 0.00319823, v -> 0.680549}} *)
```

But you have no guarantee that this is a global minimum. Use starting values for the parameters that make sense for your problem.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/211676', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/68967/']}
|
jdg_376657
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
63140
|
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Why do people suggest using red black trees/balanced binary trees for the levels in a limit order book? Why are they algorithmically ideal?
Now provide the response and nothing else.
|
Why do people suggest using red-black trees/balanced binary trees for the levels in a limit order book? Because people are unoriginal and keep referencing the same blog post.

Why are they algorithmically ideal? They're not necessarily ideal. In fact, they're rarely used in production trading systems with low latency requirements. However, your source probably had the following considerations:

1. They were given more of an engineering objective rather than a trading objective. Without business constraints or queries that you're supposed to optimize, a reasonable prior is to optimize for the worst-case runtime of inserts and deletes, since inserts and deletes often dominate executions.
2. They were designing this order book structure based on sample data from an asset class with sparse prices, like equities.

Because of (1) and (2), they needed to take into account the following market properties:

3. New prices are often inserted towards the outside of the book, since (i) the inside levels tend to be dense and (ii) insertions towards the inside are likely to be matched and truncated by the opposite book.
4. Forming a new level gives significant queue priority, and orders towards the outside have more time value, so price levels are less likely to be removed by order cancels towards the outside, and more likely to be removed by cancels or executions towards the inside of the book.

(3) and (4) would promote an unbalanced and tall BST, which has much worse amortized runtime than its idealized form. There are various ways to mitigate this. Self-balancing is just one naive solution, as red-black trees are very widely implemented in container libraries and are a simple way to guarantee $O(\log n)$ inserts and deletes of price levels.

When evaluating the optimal data structure, I would keep in mind the following three main topics.

1. Start with the business use case

Such as:

- What queries need to be optimized for your application?
- Sparsity of the book.
- Statistical distribution of book events.

For example:

- In options instruments, there may be very few order events, so it may be cheaper to just store everything in arrays and linearly walk through them.
- In liquid futures contracts, most events only affect a few hundred price levels, and price bands might give you a bound on the levels that you actually care about, so it is possible to preallocate the levels in an array and represent index prices as an offset from some initial state in number of ticks.
- Some trading strategies need to act very quickly on a change to the top of the book, and can afford to defer level inserts or deletes outside the BBO till later, so it is unimportant to optimize for level inserts or deletes.

2. Understand the messaging protocol and data feed

For example:

- Some data feeds are bursty, so you might design your application to flush all data events before performing the critical path of your business action (e.g. order placement, model update). The optimal order book structure may differ if events are batched.
- Successive events in the data feed may have some price ordering.

3. Hardware codesign

In practice, when you're operating at memory or cache access time scales, or dealing with a small number of events relative to cache size, asymptotic time complexity often goes out of the window, and it's more important to look at the actual implementation and real benchmarks, and to codesign your order book for the architecture it runs on. In such cases, a simple array or vector with linear access patterns will often outperform a complex data structure with better asymptotic runtime, because a simple array makes it easier to exploit the hardware optimizations that matter more:

- Locality
- Prefetching
- Instruction pipelining
- Fitting all relevant/qualifying data into fewer "pages" that have to move up the memory hierarchy, e.g. not chasing pointers across non-contiguous regions of memory.
- SIMD intrinsics.

How does this translate to order book design? For example:

- The C++ STL implementation of unordered_map will often have worse performance than map for order ID lookup in instruments with a small number of orders.
- It is possible to represent each price level with an intrusive doubly-linked list, which has $O(1)$ lookup of the neighboring nodes, so you can unlink an order that was deleted in $O(1)$. But you will often get better performance by creating a linked list of preallocated arrays and removing orders by marking them with a tombstone flag.

In many of the situations described above, a linked list of arrays or an array of arrays will outperform a general-purpose design with red-black trees of intrusive doubly-linked lists. A sketch of the preallocated-array idea appears below.
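As a concrete illustration of the array-based approach described above (a sketch only, with made-up field names; a real book needs banding, re-centering, and overflow handling):

```cpp
#include <cstdint>
#include <vector>

// One side of a book over a preallocated band of price levels,
// indexed by tick offset from a reference price.
struct BookSide {
    int64_t ref_price_ticks;        // price corresponding to levels[0]
    std::vector<int64_t> levels;    // aggregate resting quantity per tick

    BookSide(int64_t ref, std::size_t band)
        : ref_price_ticks(ref), levels(band, 0) {}

    // O(1) update; no rebalancing, contiguous memory, prefetch friendly.
    void apply(int64_t price_ticks, int64_t qty_delta) {
        // Negative offsets wrap to huge values and fail the bound check.
        std::size_t i = static_cast<std::size_t>(price_ticks - ref_price_ticks);
        if (i < levels.size()) levels[i] += qty_delta;
        // else: out of band -- a real implementation re-centers the band
    }
};
```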
|
{}
|
{'log_upvote_score': 6, 'links': ['https://quant.stackexchange.com/questions/63140', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/11723/']}
|
jdg_376658
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3682111
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Evaluate the limit $$\lim_{n \to \infty} \int_{0}^{\pi} \frac{\sin x}{1+\cos ^2(nx)}\, dx.$$ Using the property of definite integrals $\int_{0}^{2a} f(x)\,dx=2\int_{0}^{a} f(x)\,dx$ when $f(2a-x)=f(x)$, I got $$\lim_{n \to \infty} \int_{0}^{\pi} \frac{\sin x}{1+\cos ^2(nx)}\, dx=2\lim_{n \to \infty} \int_{0}^{\pi/2} \frac{\sin x}{1+\cos ^2(nx)}\, dx,$$ but I cannot proceed after that. Could someone provide me with some hint? Till now I have only done integration in terms of elementary functions. Any hint would be appreciated.
Now provide the response and nothing else.
|
Here are some big hints: \begin{align}\int_0^{\pi}\frac{\sin x}{1+\cos^2nx}\mathrm{d}x &=\frac{1}{n}\int_0^{n\pi}\frac{\sin(\theta/n)}{1+\cos^2\theta}\mathrm{d}\theta\\&=\frac{1}{n}\sum_{k=0}^{n-1}\int_{k\pi}^{(k+1)\pi}\frac{\sin(\theta/n)}{1+\cos^2\theta}\mathrm{d}\theta\\&=\frac{1}{n}\sum_{k=0}^{n-1}\int_{0}^{\pi}\frac{\sin\big(\frac{\psi+k\pi}{n}\big)}{1+\cos^2\psi}\,d\psi\\&=\frac{1}{\pi}\int_0^{\pi}\frac{1}{1+\cos^2\psi}\Big[\frac{\pi}{n}\sum_{k=0}^{n-1}\sin\Big(\frac{\psi+k\pi}{n}\Big)\Big]\,d\psi \\&\to\frac{1}{\pi}\int_0^{\pi}\frac{1}{1+\cos^2\psi}\,d \psi\cdot\int_0^{\pi}\sin t\,dt\\&\ldots \end{align}
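For completeness, the two integrals in the last line evaluate in closed form (a standard computation, filling in the final ellipsis):

```latex
\int_0^{\pi}\sin t\,dt = 2,
\qquad
\int_0^{\pi}\frac{d\psi}{1+\cos^2\psi} = \frac{\pi}{\sqrt{2}},
\qquad\text{so the limit is }\ \frac{1}{\pi}\cdot\frac{\pi}{\sqrt{2}}\cdot 2 = \sqrt{2}.
```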
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3682111', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/290994/']}
|
jdg_376659
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
142440
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding (from Wikipedia) is that the x64 instruction set supports executing 16-bit protected mode code from long mode, but cannot execute real mode code without being switched out of long mode because long mode lacks virtual 8086 mode. Therefore, it stands to reason that real mode DOS apps can't be run in Win64 w/o software emulation or dynamic translation. However, why was support for Win16 protected-mode apps excluded when support for them seems (at least at first glance) to be reasonably implementable and is included in newer versions of Win32? Was it just a matter of demand not being high enough to justify implementation costs (and the win32 version was already implemented), or is there a good technical reason?
Now provide the response and nothing else.
|
I don't think there are any actual Win16 protected mode apps. I understand Windows/286 and above ran in protected mode (called "standard mode" or "enhanced mode" by Microsoft), but the apps were technically real mode apps. I remember reading that it came as a surprise to Microsoft that the Windows team figured out how to run real mode code under a protected mode operating system. But a similar solution for running real mode code in long mode probably does not exist.

So the question really comes down to a) why not switch to protected mode (or real mode) to run Win16 apps, and b) why not include an emulator (like on other non-x86 NT platforms).

The answer to a) is obvious, I think: switching between long mode and other modes is not really something the CPU supports without a reboot. OS/2 1.x had the same problem with protected mode and real mode, and offered only a very inelegant solution to it.

The answer to b) is more difficult, but I think it comes down to a decision by Microsoft based on these three points (I love lists):

- Few people still run 16 bit apps.
- Those who do can run 32 bit Windows to run them.
- Third party products can cover the remaining market.

And indeed there are several solutions for running 32 bit Windows (and hence 16 bit apps) under 64 bit Windows, including Microsoft's own Virtual PC.

All in all, this was simply a decision to stop supporting compatibility with (what Microsoft considers) a legacy platform, kind of like the removal of 16 bit OS/2 compatibility in Windows XP. (Those were 16 bit protected mode apps.)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/142440', 'https://serverfault.com', 'https://serverfault.com/users/43050/']}
|
jdg_376660
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1372681
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that if $a^3>a$ then $a^5>a$. Velleman gives this "hint": $$\text{One approach is to start by completing the following equation:}\ (a^5-a)=(a^3-a) \cdot x$$ I don't understand this hint. Can someone please explain the hint and do the proof using this hint?
Now provide the response and nothing else.
|
The hint asks you to notice that $a^5-a=(a^3-a)\cdot x$ for $x=a^2+1$; indeed, $(a^3-a)(a^2+1)=a^5+a^3-a^3-a=a^5-a$. Now, the given inequality is equivalent to $a^3-a=a(a^2-1)>0$. Multiplying both sides by $a^2+1$, which is always positive, gives $a(a^2-1)(a^2+1)>0$, that is, $a^5-a>0$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1372681', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/238587/']}
|
jdg_376661
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33280672
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for an answer as to whether there is any difference between these two functions, aside from the constness of the first one:

```cpp
QThread * QObject::thread() const
QThread * QThread::currentThread()
```
Now provide the response and nothing else.
|
They are quite different.

QThread * QObject::thread() const returns the thread in which a particular QObject lives.

QThread * QThread::currentThread() returns a pointer to the QThread which manages the currently executing thread.

```cpp
class MyClass : public QObject
{
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    MyClass *obj = new MyClass();
    QThread thread2;
    obj->moveToThread(&thread2);
    thread2.start();

    qDebug() << "The current thread is " << QThread::currentThread();
    qDebug() << "The thread2 address is " << &thread2;
    qDebug() << "The object is in thread " << obj->thread();

    return app.exec();
}
```

Sample output:

```
The current thread is  QThread(0x1436b20)
The thread2 address is  QThread(0x7fff29753a30)
The object is in thread  QThread(0x7fff29753a30)
```
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33280672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3707709/']}
|
jdg_376662
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17610088
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
```java
public class TestVO {
    public static void main(String[] args) {
        VO vo1 = new VO();
        VO vo2 = new VO();
        VO vo3;
        VO vo4 = new VO();

        vo1.setName("Sourav");
        vo2.setName("Anil");

        vo3 = vo1;
        vo4 = vo1;

        System.out.println(" " + vo4.getName());

        vo1.setName("Abhishek.");

        System.out.println(vo1.getName() + " " + vo2.getName() + " "
                + vo3.getName() + " " + vo4.getName());
    }
}
```

The output is:

```
 Sourav
Abhishek. Anil Abhishek. Abhishek.
```

VO is a simple class containing a String name with a getter and setter. In the first System.out.println, vo4.getName() prints Sourav. That's fine. But in the second System.out.println, vo4.getName() prints Abhishek.

My question is: why does the second print show Abhishek.? I kept a copy of the vo1 object in the vo4 object, not a reference. That means it creates new memory, and vo1 and vo4 are different. Then why is vo4.getName changed the second time? I am setting the vo1 object, but vo4 is automatically changed. Why is this happening?
Now provide the response and nothing else.
|
"I have kept a copy of vo1 object in vo4 object not a reference."

No, you've pointed the variables vo1 and vo4 at the same object, like this:

```
+-----+
| vo1 |--------\
+-----+         \       +----------------+
                 ------->| (object)       |
+-----+         /       | name = Abishek |
| vo4 |--------/         +----------------+
+-----+
```

Let's follow the code through:

```java
VO vo1 = new VO();
```

gives us:

```
+-----+       +----------------+
| vo1 |------>| (object)       |
+-----+       | name = null    |
              +----------------+
```

Then:

```java
VO vo2 = new VO();
```

Now we have:

```
+-----+       +----------------+
| vo1 |------>| (object)       |
+-----+       | name = null    |
              +----------------+

+-----+       +----------------+
| vo2 |------>| (object)       |
+-----+       | name = null    |
              +----------------+
```

Then:

```java
VO vo3;
```

...which just creates vo3 with null (not pointing at any object). Then:

```java
VO vo4 = new VO();
```

So we have:

```
+-----+       +----------------+
| vo1 |------>| (object)       |
+-----+       | name = null    |
              +----------------+

+-----+       +----------------+
| vo2 |------>| (object)       |
+-----+       | name = null    |
              +----------------+

+-----+
| vo3 | (is null)
+-----+

+-----+       +----------------+
| vo4 |------>| (object)       |
+-----+       | name = null    |
              +----------------+
```

Now:

```java
vo1.setName("Sourav");
vo2.setName("Anil");
```

gives us:

```
+-----+       +----------------+
| vo1 |------>| (object)       |
+-----+       | name = Sourav  |  *** change is here ***
              +----------------+

+-----+       +----------------+
| vo2 |------>| (object)       |
+-----+       | name = Anil    |  *** and here ***
              +----------------+

+-----+
| vo3 | (is null)
+-----+

+-----+       +----------------+
| vo4 |------>| (object)       |
+-----+       | name = null    |
              +----------------+
```

Here's where things get interesting:

```java
vo3 = vo1;
vo4 = vo1;
```

That points vo3 at the same object vo1 points to, and points vo4 at that object as well, releasing the object vo4 used to point to (which becomes eligible for garbage collection). Giving us:

```
+-----+
| vo1 |--------\
+-----+         \
+-----+          \      +----------------+
| vo3 |---------------->| (object)       |
+-----+          /      | name = Sourav  |
+-----+         /       +----------------+
| vo4 |--------/
+-----+

+-----+       +----------------+
| vo2 |------>| (object)       |
+-----+       | name = Anil    |
              +----------------+
```

Now

```java
System.out.println(" " + vo4.getName());
```

...gives us "Sourav" as you'd expect. Then

```java
vo1.setName("Abhishek.");
```

...changes the object that vo1, vo3, and vo4 are all pointing to:

```
+-----+
| vo1 |--------\
+-----+         \
+-----+          \      +----------------+
| vo3 |---------------->| (object)       |
+-----+          /      | name = Abishek |
+-----+         /       +----------------+
| vo4 |--------/
+-----+

+-----+       +----------------+
| vo2 |------>| (object)       |
+-----+       | name = Anil    |
              +----------------+
```

...and so getName() on vo1, vo3, or vo4 will give you "Abishek."
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17610088', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2575474/']}
|
jdg_376663
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
78143
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't know how to explain or how to prove the following statement If $n=ab$ and $a,b \in \mathbb{N}$ then $2^a-1 \mid 2^n-1$. Any ideas? Perhaps an induction? Thanks in advance.
Now provide the response and nothing else.
|
Hint: Recall that $$x^u-1=(x-1)(x^{u-1}+x^{u-2}+\cdots+x+1).$$ Then $$2^{ab}-1 =(2^a)^b-1$$ and letting $x=2^a$, $u=b$ we see that $$(2^a)^b-1=(2^a-1)\left((2^a)^{b-1}+(2^a)^{b-2}+\cdots+(2^a)+1\right).$$ This means that $2^a-1$ must divide it. We can use a similar argument for $2^b-1$.
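As a quick numeric sanity check (a worked instance added for illustration, not part of the original hint): take $a=2$, $b=3$, so $n=6$. Then $$2^6-1=63=(2^2-1)\left((2^2)^2+2^2+1\right)=3\cdot 21,$$ and indeed $2^a-1=3$ divides $2^n-1=63$.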
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/78143', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13661/']}
|
jdg_376664
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
93052
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Sorry I have to ask such a simple question, my brain is fried after today. After substituting with a system of equation, I end up with this "simple" logarithmic problem. $$5 - \log_2 (x - 3) = \log_2(x+1)$$ What property of logs am I looking for to solve this? I know the solution is $x = 7$. Properties of logs looks like it might be my downfall tonight.
Now provide the response and nothing else.
|
Put all log terms on one side: $$5=\log_2(x-3)+\log_2(x+1).$$ Then, write the log terms using only one logarithm (use the law $\log_2 a+\log_2 b=\log_2(ab)$: $$5=\log_2( (x-3)(x+1) ).$$ To get rid of the log, recall, $\log_a x=y\iff a^y=x$, so the above equation can be written as $$2^5=(x-3)(x+1).$$ Can you take it from here?
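For completeness, here is the remaining algebra that the answer leaves to the reader: $$32=(x-3)(x+1)=x^2-2x-3 \implies x^2-2x-35=0 \implies (x-7)(x+5)=0.$$ Since $\log_2(x-3)$ requires $x>3$, the root $x=-5$ is rejected, leaving $x=7$, which matches the known solution.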
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/93052', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17475/']}
|
jdg_376665
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2512647
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is $\mathbb{N}\cup \big\{\sqrt{2}\big\}$ an uncountable set? I think it is.
Now provide the response and nothing else.
|
No, it is not, since we can create a bijection between $\mathbb{N} \cup \{ \sqrt{2}\}$ and $\mathbb{N}$. Let $f: \mathbb{N} \to \mathbb{N} \cup \{ \sqrt{2}\}$ be given by: $f(1) = \sqrt{2}$ and $f(n) = n-1$ for $n \ge 2$. It is clear that $f$ is a bijection, which means the two sets must be equal in cardinality. Therefore both sets are countable. Note that the set that results from adding any finite number of points (or even a countable number of points) to a countable set is still countable.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2512647', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/460772/']}
|
jdg_376666
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11412882
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I looked up MSDN and other resources on how to do this but I came up with no clear solutions. This is the best I found: http://blogs.msdn.com/b/shawnfa/archive/2004/04/14/generating-a-key-from-a-password.aspx?Redirected=true I would like to hash passwords in C# using either bcrypt or PBKDF2 (which appears to be bcrypt related). I'd like to experiment with how many rounds it takes for my computer to hash a password. However, everything seems to be about encrypting, while everyone talks about hashing. I can't figure it out. How do I hash a password? It looks more like PBKDF2 (Rfc2898?) is a random number generator, and I use GetBytes(amount) to choose how big my hash size is. I'm confused. How exactly do I hash a password with bcrypt/PBKDF?
Now provide the response and nothing else.
|
PBKDF2 You were really close actually. The link you have given shows you how you can call the Rfc2898DeriveBytes function to get PBKDF2 hash results. However, you were thrown off by the fact that the example was using the derived key for encryption purposes (the original motivation for PBKDF1 and 2 was to create "key" derivation functions suitable for use as encryption keys). Of course, we don't want to use the output for encryption, but as a hash on its own. You can try the SimpleCrypto.Net library, written for exactly this purpose, if you want PBKDF2. If you look at the implementation, you can see that it is actually just a thin wrapper around (you guessed it) Rfc2898DeriveBytes. BCrypt You can try the C# implementation named (what else) BCrypt.NET if you want to experiment with this variant. Disclaimer: I have not used or tested any of the libraries that I have linked to... YMMV
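If you would rather not pull in a library at all, a minimal PBKDF2 password hash can be produced directly with Rfc2898DeriveBytes. This is just a sketch; the salt size, iteration count, and output length below are illustrative choices, not requirements:

using System;
using System.Security.Cryptography;

class Pbkdf2Demo
{
    static void Main()
    {
        // Generate a random 16-byte salt; store it alongside the resulting hash.
        byte[] salt = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }

        // Derive a 20-byte hash from the password, iterating 10000 times.
        // Raise the iteration count until hashing takes as long as you can tolerate.
        var pbkdf2 = new Rfc2898DeriveBytes("my password", salt, 10000);
        byte[] hash = pbkdf2.GetBytes(20);

        Console.WriteLine(Convert.ToBase64String(hash));
    }
}

To verify a login later, re-run GetBytes with the stored salt and the same iteration count, and compare the two results.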
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11412882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376667
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4287082
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to compute $I=\int_0^\infty \frac{\sin{t}}{e^t-1}dt$ ? I encountered this problem while calculating the sum $\sum_{n=1}^\infty \frac{1}{n^2+1}$. This integral converges: writing $I=\int_0^\varepsilon+\int_\varepsilon^\infty$, the first integral is finite since $\sin{t}/(e^t-1)\sim 1$ near $0$, and the second is finite since $\sin{t}/(e^t-1)<e^{-t}$. Any help is welcome. Thanks in advance.
Now provide the response and nothing else.
|
First method. Using Poisson summation formula If a continuous integrable function $\varphi$ and its Fourier transform are rapidly going to zero at infinity (check its Wikipedia page for more details) then $$ \sum_{n=-\infty}^\infty \varphi(n) = \sum_{n=-\infty}^\infty \hat \varphi(n) $$ Since the Fourier transform tends to transform rapidly decreasing (to zero) functions to slowly decreasing (to zero) functions, and vice versa, Poisson's formula is well suited to the calculations of certain series. You can check that for $\varphi(x) = \pi e^{-2\pi |x|}$ we have $\hat \varphi(s) = \frac{1}{1+s^2}$ . Poisson tells us that $$ \sum_{n=-\infty}^\infty \frac{1}{n^2 + 1} = \sum_{n=-\infty}^\infty \pi e^{-2\pi |n|} $$ Use parity and geometric series to complete the computation. Second method. Using Parseval's identity for Fourier series Let $e_n(t) = e^{2\pi i nt}$ be the Fourier orthonormal system of $L^2[0,1]$ and recall Parseval's identity : $$ \forall f,g \in L^2[0,1] ~~,~~ \langle f,g\rangle = \int_0^1 f(t)\overline{g(t)} \,\mathrm dt= \sum_{n=-\infty}^\infty \langle f,e_n\rangle \overline{\langle g,e_n\rangle} $$ Take $s \in \mathbf C\setminus \mathbf Z$ and let $z=\overline{s}$ . Compute the Fourier coefficients of $f(t) = e^{2 \pi i st}$ and $g(t)=e^{2\pi i zt}$ $$ \langle f,e_n\rangle = \frac{1}{2\pi i} \frac{e^{2\pi i s}-1}{s-n} ~~\text{ and } ~~ \overline{\langle g,e_n\rangle} = -\frac{1}{2\pi i} \frac{e^{-2\pi i s}-1}{s-n} $$ Using Parseval's identity, it comes, after some simplifications : $$ \frac{\pi^2}{\sin^2(\pi s)} = \sum_{n=-\infty}^\infty \frac{1}{(s-n)^2} \tag{1} $$ Break the sum into three pieces, and write $$\frac{\pi^2}{\sin^2(\pi s)} = \frac{1}{s^2} + \sum_{n=1}^\infty \frac{1}{(s-n)^2} + \frac{1}{(s+n)^2} \tag{2} $$ Integrate $(2)$ both sides, then multiply by $-1$ to get the famous formula : $$ \pi \cot(\pi s) = \frac{1}{s} + \sum_{n=1}^\infty \frac{2s}{s^2-n^2} $$ Replacing $s$ by $is$ leads to : $$ \pi \coth (\pi s) = \frac{1}{s} + \sum_{n=1}^\infty \frac{2s}{s^2 +n^2} $$ If you really want to avoid the use of any complex variable, replace $s$ by $is$ in $(1)$ or in $(2)$ then integrate in the « real » sense. Again, it is easy to recover the result given by WolframAlpha. Also, using Fourier series and Dirichlet's theorem (for pointwise convergence) you may check other topics on MSE, for example . :) ADDENDUM. We can reach our goal a bit faster. Apply Parseval's identity to $f(t)= e^{2\pi ist}$ and $g(t) = \overline{f(t)}$ : $$ \int_0^1 f(t)^2 \mathrm dt = \sum_{n=-\infty}^\infty \langle f,e_n\rangle \langle f,e_{-n}\rangle $$ Substitute things... $$ \frac{1}{2\pi i} \frac{e^{4 \pi i s}-1}{2s} = \sum_{n=-\infty}^\infty \bigg(\frac{e^{2\pi is} - 1}{2\pi i} \bigg)^2 \frac{1}{s^2-n^2} $$ Simplify some terms... $$ 2\pi \cot(\pi s) = \sum_{n=-\infty}^\infty \frac{2s}{s^2-n^2} $$ Finally, by symmetry in $n\neq 0$ $$ \pi \cot(\pi s) = \frac{1}{s} + \sum_{n=1}^\infty \frac{2s}{s^2-n^2} $$
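For the record, the parity/geometric-series step at the end of the first method works out as follows (a routine completion added for convenience): $$\sum_{n=-\infty}^\infty \pi e^{-2\pi|n|}=\pi\left(1+2\sum_{n=1}^\infty e^{-2\pi n}\right)=\pi\left(1+\frac{2e^{-2\pi}}{1-e^{-2\pi}}\right)=\pi\,\frac{1+e^{-2\pi}}{1-e^{-2\pi}}=\pi\coth(\pi).$$ Since the left-hand side of Poisson's identity is $1+2\sum_{n\ge 1}\frac{1}{n^2+1}$, this gives $$\sum_{n=1}^\infty \frac{1}{n^2+1}=\frac{\pi\coth(\pi)-1}{2}.$$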
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4287082', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/80264/']}
|
jdg_376668
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5196640
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm attempting to validate a string of user input that will be used as a subdomain. The rules are as follows:

1. Between 1 and 63 characters in length (I take 63 from the number of characters Google Chrome appears to allow in a subdomain, not sure if it's actually a server directive. If you have better advice on valid max length, I'm interested in hearing it)
2. May contain a-zA-Z0-9, hyphen, underscore
3. May not begin or end with a hyphen or underscore

EDIT: From input below, I've added the following:

4. Should not contain consecutive hyphens or underscores.

Examples:

a => valid
0 => valid
- => not valid
_ => not valid
a- => not valid
-a => not valid
a_ => not valid
_a => not valid
aa => valid
aaa => valid
a-a-a => valid
0-a => valid
a&a => not valid
a-_0 => not valid
a--a => not valid
aaa- => not valid

My issue is I'm not sure how to specify with a RegEx that the string is allowed to be only one character, while also specifying that it may not begin or end with a hyphen or underscore. Thanks!
Now provide the response and nothing else.
|
You can't have underscores in proper subdomains, but do you need them? After trimming your input, do a simple string length check, then test with this:

/^[a-z\d]+(-[a-z\d]+)*$/i

With the above, you won't get consecutive - characters, e.g. a-bbb-ccc passes and a--d fails.

/^[a-z\d]+([-_][a-z\d]+)*$/i

will allow non-consecutive underscores as well. Update: you'll find that, in practice, underscores are disallowed and all subdomains must start with a letter. The solution above does not allow internationalised subdomains (punycode). You're better off using this:

/\A([a-z][a-z\d]*(-[a-z\d]+)*|xn--[\-a-z\d]+)\z/i
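If you want to sanity-check the middle pattern against the examples in the question, a small test harness (added for illustration) might look like this:

// Non-consecutive hyphens/underscores, no leading or trailing separator.
var re = /^[a-z\d]+([-_][a-z\d]+)*$/i;

var samples = ["a", "0", "aa", "a-a-a", "0-a", "-a", "a-", "a-_0", "a--a", "a&a", "aaa-"];
samples.forEach(function (s) {
    console.log(s + " => " + (re.test(s) ? "valid" : "not valid"));
});

Remember to check the trimmed length (1 to 63 characters) separately, since the regex doesn't enforce an upper bound.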
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5196640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/620012/']}
|
jdg_376669
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2783239
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How are the statements below valid and invalid as shown, and what do they mean? When would such a situation arise, where one would write this piece of code?

++x = 5; // legal
--x = 5; // legal
x++ = 5; // illegal
x-- = 5; // illegal
Now provide the response and nothing else.
|
The postfix (x++/x--) operators do not return an lvalue (a value you can assign into). They return a temporary value which is a copy of the value of the variable before the change. The value is an rvalue, so you can write y = x++ and get the old value of x.
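A minimal C++ sketch of the distinction (added for illustration): prefix ++ returns the variable itself, so it can appear on the left of =, while postfix ++ returns a temporary copy, which cannot:

#include <iostream>

int main() {
    int x = 1;

    ++x = 5;     // legal in C++: prefix ++ yields an lvalue (x itself), so x becomes 5
    // x++ = 5;  // illegal: postfix ++ yields a temporary rvalue copy

    int y = x++; // fine: the temporary can be read; y gets the old value of x
    std::cout << x << " " << y << std::endl; // prints "6 5"
}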
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2783239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/166012/']}
|
jdg_376670
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11334207
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Gingerbread, I had no issues with using smoothScrollToPosition() to scroll across dozens of items at a time. After my Nexus S was upgraded to Ice Cream Sandwich, I noticed that no matter what I put in smoothScrollToPosition(), it will only scroll about a couple of hundred pixels in either direction, and then stop. Is this a known issue with ICS? I've noticed this with the Galaxy Nexus as well. I've looked at a few other questions and tried a few different tricks, such as turning off calls to notifyDataSetChanged(), and posting the smoothScrollToPosition() as a delayed runnable, but unfortunately, it doesn't want to scroll more than a hundred pixels or so before stopping. :(
Now provide the response and nothing else.
|
This appears to be an issue with the duration allotted to finish the animation; the same issue is present with smoothScrollBy(int distance, int duration). At a cursory glance, smoothScrollToPosition() is a friendly wrapper around smoothScrollBy() that does a lot of the legwork. smoothScrollBy() in turn fakes a "fling gesture", as if a user had made the motion. smoothScrollBy() really just posts the fling runnable, which continues to repost itself until the duration runs out. That means it simply computes the next scroll offset based on the offset it previously decided to move to, so if the duration runs out before it reaches the target offset, it stops at the last offset calculated (rather than suddenly jumping to the target offset, which would arguably be more jarring since it would not be animated). The difficulty for the Android developers is determining how much to move on each run() call to reach the required offset, because ListView cells (children) are entirely dynamic in height, so they can't just do a simple distance calculation: the non-visible children's heights are unknown to them. It is the same reason the Android scrollbar can fluctuate in size as you scroll; it has to take a best guess at how big it should be based on what it is currently seeing. Anyway, that doesn't help you solve it, but someone might find it interesting :) If you know you have static cell heights, however, you can write your own method to calculate the distance and duration to pass to smoothScrollBy() yourself, and have a known amount of time to move X distance. If you don't, it will have to suffice to use the solution bigstones posted, which really works because of the high SCROLL_DURATION of 1000ms. You can take the ICS version and change this attribute as well, rather than using the 2.2 version, which is not the root cause. You can also adapt those runnables with your own custom algorithm; it shouldn't be too difficult to tweak things.
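For the static-cell-height case, a sketch of that calculation might look like the following. This is an assumption of how you could wire it up, not code from Android itself; rowHeightPx and msPerRow are illustrative parameters:

import android.view.View;
import android.widget.ListView;

public final class ScrollHelper {
    // Scroll a ListView whose rows all share one known, fixed pixel height.
    public static void smoothScrollToFixedHeightRow(ListView listView, int targetPosition,
                                                    int rowHeightPx, int msPerRow) {
        int firstVisible = listView.getFirstVisiblePosition();
        View firstChild = listView.getChildAt(0);
        int topOffset = (firstChild == null) ? 0 : firstChild.getTop();

        // Distance from the current scroll position to the target row's top edge.
        int distance = (targetPosition - firstVisible) * rowHeightPx - topOffset;

        // Scale the duration with the number of rows so the animation can actually finish.
        int duration = Math.max(1, Math.abs(targetPosition - firstVisible) * msPerRow);

        listView.smoothScrollBy(distance, duration);
    }
}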
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11334207', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1317564/']}
|
jdg_376671
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
265038
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a total noob to ipfw so my apologies in advance if my questions are basic, but I've not found any resources for it that help with the gap between using the GUI and knowing the commandline ipfw really well. I'd like to block ips that are hitting my webserver repeatedly. The firewall is ipfw. I found this command:

ipfw add deny ip from the.ip.to.block to my.computer.ip.address

There are two things I'm unclear about:

1. I understand that the rule will be flushed on restart. What I would like to know is will running this command affect any existing rules that are running? I'm guessing not but I'd like to be sure.
2. Is there any software that interlinks with ipfw to add ip's for a short amount of time and then remove them? If not, I'll knock up a script for it, but is there a built-in way to add from several conf files? It would make it a lot easier to do if I could have one file with the stable rules and another that loads with the dynamic ones.

Any help is much appreciated, even RTFM's if you can give a link to a resource that helps me understand it a bit better. My firewall scripts:

#!/bin/sh
/usr/bin/sudo /sbin/ipfw -q flush
/usr/bin/sudo /sbin/ipfw -q delete set 31
/usr/bin/sudo /sbin/ipfw -q /usr/local/bin/Firewall/default.conf

#Check dynamic rules
#anything that's already had the
# green light can continue
add 1000 check-state
add 1050 allow tcp from any to any established
add 1080 allow tcp from any to any out keep-state
add 1090 allow udp from any to any out keep-state
add 1095 allow icmp from any to any out
#loopback
add 1100 allow ip from 127.0.0.1/8 to 127.0.0.1/8 via lo0
add 1200 deny log ip from 127.0.0.1/8 to any in
add 1300 deny log ip from any to 127.0.0.1/8 in
#allow pings and traceroute
# Ping out; accept ping answers.
add 1400 allow icmp from any to any icmptypes 8 out
add 1410 allow icmp from any to any icmptypes 0 in
# Allow me to traceroute.
add 1420 allow icmp from any to any icmptypes 11 in
add 1500 allow tcp from any to any 11305 keep-state setup
#http and https
add 1600 allow tcp from any to any 80 keep-state setup
# rules for reverse proxying
add 1610 allow tcp from me to any 4567 keep-state setup
add 1611 allow tcp from me to any 4568 keep-state setup
add 1612 allow tcp from me to any 4569 keep-state setup
add 1620 allow tcp from me to any 4577 keep-state setup
add 1621 allow tcp from me to any 4578 keep-state setup
add 1622 allow tcp from me to any 4579 keep-state setup
add 1630 allow tcp from me to any 4560 keep-state setup
add 1631 allow tcp from me to any 4561 keep-state setup
add 1632 allow tcp from me to any 4562 keep-state setup
add 1640 allow tcp from me to any 4570 keep-state setup
add 1641 allow tcp from me to any 4571 keep-state setup
add 1642 allow tcp from me to any 4572 keep-state setup
add 1700 allow tcp from any to any 443 keep-state setup
#Bonjour
#add 2000 allow udp from any to any 5653 keep-state setup
#Everything that isn't in a prior rule
add 65533 reject log udp from any to any in
add 65534 deny log ip from any to any in
Now provide the response and nothing else.
|
1. Correct, rules need to be re-established every reboot. It will not directly affect other rules, but can indirectly (for instance, if another rule allowed the IP for whatever reason, this might block the IP...)
2. You're looking for the ever popular fail2ban, which reads log files and bans IPs of people doing "bad" things.

Also, you don't really want to keep adding rules for each individual ban; this would quickly pollute the rules. You can add a rule to block a table, however, then add the IPs to the table. A table is just a list of IPs, so you can easily apply rules to the whole table rather than specifying them all individually. For example, I have a 'default' firewall script that I use; the first two rules in this script are:

00030 deny ip from "table(1)" to me
00031 deny ip from "table(2)" to me

The keyword "me" means any of my local IP addresses. Table 1 is for Fail2Ban: when it finds an IP it doesn't like, it adds the IP to that table for a while. Table 2 is for Spamhaus's DROP list, a list of known professional spam systems (see their website for details). You can add IPs to a table manually with this command (the address is a placeholder):

ipfw table 2 add <ip-address>

On my servers Table 2 is populated automatically at start-up by a script /usr/local/etc/rc.d/spamhaus-drop as follows:

#!/bin/csh
fetch -i /tmp/drop.lasso -o /tmp/drop.lasso "http://www.spamhaus.org/drop/drop.lasso"
sed -i '' "s/;.*//" /tmp/drop.lasso
ipfw table 2 flush
foreach IP ( `cat /tmp/drop.lasso` )
    ipfw table 2 add $IP
end

I highly encourage you to write your own script to configure your firewall. It's pretty easy in FreeBSD with ipfw, and I wouldn't bother with a GUI (I know that sounds hard when it's all new, but the basics are easier than you think). My configuration script is in /etc/ipfw.rules and goes like this:

#!/bin/sh
#FOR KEAIRA - The computer this script was customized for.
ipfw -q -f flush # Delete all rules
cmd="ipfw add"
# Ban tables
$cmd 00030 deny ip from "table(1)" to me
$cmd 00031 deny ip from "table(2)" to me
# Stateful firewall config, more secure
$cmd 00060 check-state
# Allow outbound traffic
$cmd 00130 allow ip from me to any keep-state
# SSH - I have SSH on port 2222 to keep the script kiddies out.
$cmd 11020 allow tcp from any to me dst-port 2222 setup keep-state
# DNS
$cmd 11090 allow tcp from any to me domain setup keep-state
$cmd 11092 allow udp from any to me domain
# NTP
$cmd 11100 allow tcp from any to me ntp setup keep-state
$cmd 11101 allow udp from any to me ntp
# General Network - ICMP & IGMP
$cmd 61001 allow icmp from any to any
$cmd 61002 allow igmp from any to any
# Deny the rest
$cmd 65500 deny ip from any to any

This server is running SSH (on an alternate port), DNS, and NTP (time). The rest is just generic stuff I put in all of my firewall scripts. If you have other services you need to open, just let me know and I'll customize the example. Most service names you can get from /etc/services though, which makes writing these very easy. It's not strictly necessary for each rule to have a different number, but it makes managing them easier. Rules are processed in order by number, but otherwise there's no significance to the numbers. This script is "activated" by putting these lines in /etc/rc.conf:

firewall_enable="YES"             # Firewall On
firewall_script="/etc/ipfw.rules" # Firewall Script

Setting up Fail2Ban is a bit more work, but it's pretty straightforward too. If you want more details on that, just ask.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/265038', 'https://serverfault.com', 'https://serverfault.com/users/71060/']}
|
jdg_376672
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52033
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking at adding a low-cost variable power supply to my electronics hobby bench. After years of cobbling together LM317's and LM7805's, it's about time... Considerations: I've found a few options which I am unsure about. Dave at EEVBlog states in various videos ( #8 , #30 , #166 (@04:16), #168 , #272 , #314 ) that you should look for the following: Multi-turn knobs for voltage and current adjustment (as opposed to pushbuttons and separate coarse/fine knobs) A power supply that can range from 0 volts on the low end instead of 1 or 1.2 volts A switch to turn on/off the load The supply shouldn't overshoot the target voltage when switched on It should have a separate mains earth ground terminal (to allow combining supplies) Prefer linear for less ripple and noise (though switching offers smaller size and cost for higher currents) Of course some factors depend on the intended use, some depend on preference, and some depend on budget. (My intended usage is primarily microcontroller projects (3.3V-5V, <1A), automation and control projects (solenoid, stepper motor, etc) (12-24V, <5A), and low voltage lighting projects (12-24V, <3A). Budget ideally < $300.) I asked earlier about the last point, linear vs switching, in a previous question . Olin convinced me that the disadvantages of a switching supply are minor, and certainly for my application, make sense for budget, bench space, and other reasons. Thoughts: To touch on the other points: Multi-turn pots must be preference. I think they are better than single-turn pots, but I would prefer coarse/fine adjustments (or direct-entry via buttons) over turning a knob multiple times to make a larger adjustments. I haven't needed to work on projects below 3.3 volts, but I am mindful of the pico-power AVR micros , which can run at 0.7V. Certainly a supply that can go below 1 volt would be useful for this. A load switch is nice, but not a deal-breaker. How essential is it to have separate earth ground? I don't think I need to combine power supplies. Possible Supplies: I've found a few supplies that I like, but I can't commit because none meet all the criteria. BK Precision 1550 , 1-36V 0-3A, $150. Concerns: Doesn't go below 1 volt. Uses up/down buttons for adjustment. Circuit Specialists CSI3005X5 , 0-30V, 0-5A, $130. Concerns: Unfamiliar brand and low price. (quality issue?) BK Precision 1671A , 0-30V 0-5A, $220. Concerns: Unknown adjustment turns; no earth ground terminal. Is there some other supply or brand that meets all of these requirements? What points (if any) above, are truly important?
Now provide the response and nothing else.
|
BK Precision 1550: This is a switching supply. The up-down adjustments would make this a non-starter for me. CSI3005X5: A whole bunch of companies re-brand this unit. They're actually fairly decent. The voltage pot is a 10-turn, the current limit is button-driven in 0.03A increments. The most common reseller of the power supply is MPJA. It also comes in a bunch of voltage and current ranges: 0-30V 5A, 0-60V 3A, 0-120V 1A. One thing you can't see in the pictures is that the unit has a set of screw terminals in parallel with the output banana jacks, below the cover plate labeled "EXT OUTPUT". If you need more permanent connections, you can use the screw terminals. The schematic for the whole supply is available. This makes it enormously more repairable (and hackable) than ANY of the others. BK Precision 1671A: The funky extra output connections on this make me nervous (speaker terminals? really?). I would guess that the potentiometers are single-turn, both from the artwork on the case near the knobs, and from the fact that it does not mention multi-turn knobs, as that's normally a significant selling point at this price range. On the whole, if I had to choose from the supplies listed, I would wholeheartedly recommend the CSI3005X5, more because the alternatives are considerably worse. Anyway, I would say that even if you don't think you need a floating-output power supply (what you really mean when you discuss a separate earth terminal), you almost certainly will find it useful in the future, so I think you shouldn't dismiss it. Just being able to string multiple power supplies in series for higher output voltages is tremendously useful.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/52033', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/2028/']}
|
jdg_376673
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12290046
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Just to see how it worked, I looked at the libstdc++ implementation of std::common_type in the header type_traits. I have to admit that I don't really understand how it works. Here it is:

/// common_type
template<typename... _Tp>
  struct common_type;

template<typename _Tp>
  struct common_type<_Tp>
  { typedef _Tp type; };

template<typename _Tp, typename _Up>
  struct common_type<_Tp, _Up>
  { typedef decltype(true ? declval<_Tp>() : declval<_Up>()) type; };

template<typename _Tp, typename _Up, typename... _Vp>
  struct common_type<_Tp, _Up, _Vp...>
  {
    typedef typename common_type<typename common_type<_Tp, _Up>::type, _Vp...>::type type;
  };

I understand well how the first, second and fourth declarations work. However, I can't manage to understand how the third declaration works. Could someone try to explain the mechanism used here?
Now provide the response and nothing else.
|
First off, std::declval<T>() yields an r-value of type T. Trying to do anything with the value will fail, so it can only be used in an unevaluated context. Next, the ternary operator deduces its type as the most specialized type common to both arguments (if there is no such type, it fails). So, the type of the expression true? declval<T0>(): declval<T1>() is the most specialized common type of T0 and T1. All that remains is to turn this expression into a type while making sure that it isn't evaluated. decltype(expr) does just this. Clearly, the two-argument version is the beef of the logic: the others are there to deal with the corner case (one argument) and to leverage the two-argument version to yield the common type of arbitrarily many types.
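A small usage sketch of what this machinery produces (added for illustration, not part of the original answer):

#include <type_traits>

// Two-argument case: decltype(true ? declval<int>() : declval<double>())
// is double, because int converts to double inside the conditional expression.
static_assert(std::is_same<std::common_type<int, double>::type, double>::value,
              "common type of int and double is double");

// Variadic case folds pairwise: common_type<char, short, int> is
// common_type<common_type<char, short>::type, int>, which is int here.
static_assert(std::is_same<std::common_type<char, short, int>::type, int>::value,
              "common type of char, short and int is int");

int main() {}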
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12290046', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1364752/']}
|
jdg_376674
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25498431
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Referencing @dfeuer's answer to this question: Least expensive way to construct cyclic list in Haskell , which says that using cyclic lists 'defeats' the garbage collector as it has to keep everything you've consumed from a cyclic list allocated till you drop the reference to any cons cells in the list. Apparently in Haskell a cyclic list and an infinite list are two separate things. This blog ( https://unspecified.wordpress.com/2010/03/30/a-doubly-linked-list-in-haskell/ ) says that if you implement cycle as follows: cycle xs = xs ++ cycle xs it is an infinite list, not a cyclic list. To make it cyclic you have to implement it like this (as is found in the Prelude source code): cycle xs = xs' where xs' = xs ++ xs' What exactly is the difference between these two implementations? And why is it that if you are holding onto one cons cell somewhere in a cyclic list, that the garbage collector has to keep everything before it allocated as well?
Now provide the response and nothing else.
|
The difference is entirely in the memory representation. From the point of view of the semantics of the language, they're indistinguishable—you can't write a function that can tell them apart, so your two versions of cycle are considered two implementations of the same function (they're the exact same mapping of arguments to results). In fact, I don't know if the language definition guarantees that one of those is cyclical and the other infinite. But anyway, let's bring out the ASCII art.

Cyclical list:

+----+----+          +----+----+
| x0 | -----> ... -->| xn |  | |
+----+----+          +----+--|-+
  ^                          |
  |                          |
  +--------------------------+

Infinite list:

+----+----+
| x0 | -----> thunk that produces infinite list
+----+----+

The thing with the cyclical list is that from every cons cell in the list there is a path to all of the others and itself. This means that from the point of view of the garbage collector, if one of the cons cells is reachable, then all are. In the plain infinite list, on the other hand, there aren't any cycles, so from a given cons cell only its successors are reachable. Note that the infinite list representation is more powerful than the cyclical one, because the cyclical representation only works with lists that repeat after some number of elements. For example, the list of all prime numbers can be represented as an infinite list, but not as a cyclical one. Note also that this distinction can be generalized into two distinct ways of implementing the fix function:

fix, fix' :: (a -> a) -> a
fix f = let result = f result in result
fix' f = f (fix' f)

-- Circular version of cycle:
cycle xs = fix (xs++)

-- Infinite list version of cycle:
cycle' xs = fix' (xs++)

The GHC libraries go for my fix definition. The way GHC compiles code means that the thunk created for result is used both as the result and the argument of the application of f. I.e., the thunk, when forced, will call the object code for f with the thunk itself as its argument, and replace the thunk's contents with the result.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25498431', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3949424/']}
|
jdg_376675
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
881085
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to count the number of occurrences of a character in a string. For example, suppose my string contains: var mainStr = "str1,str2,str3,str4"; I want to find the count of comma , character, which is 3. And the count of individual strings after the split along comma, which is 4. I also need to validate that each of the strings i.e str1 or str2 or str3 or str4 should not exceed, say, 15 characters.
Now provide the response and nothing else.
|
I have updated this answer. I like the idea of using a match better, but it is slower: console.log(("str1,str2,str3,str4".match(/,/g) || []).length); //logs 3console.log(("str1,str2,str3,str4".match(new RegExp("str", "g")) || []).length); //logs 4 Use a regular expression literal if you know what you are searching for beforehand, if not you can use the RegExp constructor, and pass in the g flag as an argument. match returns null with no results thus the || [] The original answer I made in 2009 is below. It creates an array unnecessarily, but using a split is faster (as of September 2014). I'm ambivalent, if I really needed the speed there would be no question that I would use a split, but I would prefer to use match. Old answer (from 2009): If you're looking for the commas: (mainStr.split(",").length - 1) //3 If you're looking for the str (mainStr.split("str").length - 1) //4 Both in @Lo's answer and in my own silly performance test split comes ahead in speed, at least in Chrome, but again creating the extra array just doesn't seem sane.
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/881085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376676
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41448374
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In several places it's advised to design our Cassandra tables according to the queries we are going to perform on them. In this article by DataScale they state this: The truth is that having many similar tables with similar data is a good thing in Cassandra. Limit the primary key to exactly what you’ll be searching with. If you plan on searching the data with a similar, but different criteria, then make it a separate table. There is no drawback for having the same data stored differently. Duplication of data is your friend in Cassandra. [...] If you need to store the same piece of data in 14 different tables, then write it out 14 times. There isn’t a handicap against multiple writes. I have understood this, and now my question is: provided that I have an existing table, say CREATE TABLE invoices ( id_invoice int PRIMARY KEY, year int, id_client int, type_invoice text) But I want to query by year and type instead, so I'd like to have something like CREATE TABLE invoices_yr ( id_invoice int, year int, id_client int, type_invoice text, PRIMARY KEY (type_invoice, year)) With id_invoice as the partition key and year as the clustering key, what's the preferred way to copy the data from one table to another to perform optimized queries later on? My Cassandra version: user@cqlsh> show version;[cqlsh 5.0.1 | Cassandra 3.5.0 | CQL spec 3.4.0 | Native protocol v4]
Now provide the response and nothing else.
|
To echo what was said about the COPY command, it is a great solution for something like this. However, I will disagree with what was said about the Bulk Loader, as it is infinitely harder to use. Specifically, because you need to run it on every node (whereas COPY needs to only be run on a single node). To help COPY scale for larger data sets, you can use the PAGETIMEOUT and PAGESIZE parameters. COPY invoices(id_invoice, year, id_client, type_invoice) TO 'invoices.csv' WITH PAGETIMEOUT=40 AND PAGESIZE=20; Using these parameters appropriately, I have used COPY to successfully export/import 370 million rows before. For more info, check out this article titled: New options and better performance in cqlsh copy .
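For the table-to-table move in the question, the matching import half would target the new table with the same column list (the WITH options above are illustrative; adjust to taste):

COPY invoices(id_invoice, year, id_client, type_invoice)
  TO 'invoices.csv' WITH PAGETIMEOUT=40 AND PAGESIZE=20;

COPY invoices_yr(id_invoice, year, id_client, type_invoice)
  FROM 'invoices.csv';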
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41448374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/554319/']}
|
jdg_376677
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17014012
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In a UI of mine, I have a PasswordField like so ( urm the one at the bottom! ): I want a user to be able to check the checkbox you see in the picture and have all "secret" password characters displayed. Not much different from the option we get from many modern password-asking UI:s floating around. However, I cannot find anything in the JavaFX API that let me do that? If my worries hold true, then I would like to use a TextField that display the last key pressed for only half a second or until next key is pressed, and then he shall mask all previous user input. This would produce a cool animation effect that one can see sometimes in modern UI:s. However, is there a way for me to get hold of the OS dependent (I think it is OS dependent??) password echo character I should use? If it is not possible to get that OS dependent character, then I'd be glad to use the character you see on the picture (JavaFX on a Windows 8 machine). What is the UTF-8 code point for this stranger?
Now provide the response and nothing else.
|
> However, I cannot find anything in the JavaFX API that let me do that?

The PasswordField component does not support displaying its text unmasked by default. However, you can use a PasswordField together with a TextField and toggle between masked and unmasked text using these components respectively, where the unmasked text is shown by the TextField, as in the example demo below.

> I would like to use a TextField that display the last key pressed for only half a second or until next key is pressed, and then he shall mask all previous user input.

Since PasswordField itself is an extended version of TextField, you can always build your own custom password textbox with the properties you mentioned.

> is there a way for me to get hold of the OS dependent (I think it is OS dependent??) password echo character I should use?

Frankly, I did not quite grasp what you are saying here. You can track text changes by adding a change listener to PasswordField.textProperty() and do animations, timers etc. You can override the default bullet mask by extending PasswordFieldSkin and using it through the CSS -fx-skin property. See the definition of the bullet in its source here:

public class PasswordFieldSkin extends TextFieldSkin {
    public static final char BULLET = '\u2022';

    public PasswordFieldSkin(PasswordField passwordField) {
        super(passwordField, new PasswordFieldBehavior(passwordField));
    }

    @Override
    protected String maskText(String txt) {
        TextField textField = getSkinnable();
        int n = textField.getLength();
        StringBuilder passwordBuilder = new StringBuilder(n);
        for (int i = 0; i < n; i++) {
            passwordBuilder.append(BULLET);
        }
        return passwordBuilder.toString();
    }
}

Finally, here is a kick-off demo app showing password characters using bindings:

@Override
public void start(Stage primaryStage) {
    // text field to show password as unmasked
    final TextField textField = new TextField();
    // Set initial state
    textField.setManaged(false);
    textField.setVisible(false);

    // Actual password field
    final PasswordField passwordField = new PasswordField();

    CheckBox checkBox = new CheckBox("Show/Hide password");

    // Bind properties. Toggle textField and passwordField
    // visibility and managability properties mutually when checkbox's state is changed.
    // Because we want to display only one component (textField or passwordField)
    // on the scene at a time.
    textField.managedProperty().bind(checkBox.selectedProperty());
    textField.visibleProperty().bind(checkBox.selectedProperty());

    passwordField.managedProperty().bind(checkBox.selectedProperty().not());
    passwordField.visibleProperty().bind(checkBox.selectedProperty().not());

    // Bind the textField and passwordField text values bidirectionally.
    textField.textProperty().bindBidirectional(passwordField.textProperty());

    VBox root = new VBox(10);
    root.getChildren().addAll(passwordField, textField, checkBox);
    Scene scene = new Scene(root, 300, 250);
    primaryStage.setTitle("Demo");
    primaryStage.setScene(scene);
    primaryStage.show();
}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17014012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1268003/']}
|
jdg_376678
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
257507
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm building my first MVC application in Visual Studio 2013 (MVC 5) and I'm a bit unclear on the best way to setup my model. I've generated an entity framework model using code-first from an existing database. My first instinct was to create some intermediary classes that would be the model used by the views and have those classes work with the entity framework classes. As I was writing the intermediary classes I realized that I was mostly just re-implementing a lot of the things that the EF classes already did just with the occasional private setter or cast from one datatype to another. So that seemed like a waste. Is the general rule to directly use the entity framework classes as the Model for an MVC application? Or is there some benefit I'm missing for building these intermediary classes?
Now provide the response and nothing else.
|
In my applications I have always separated things out, with different models for the database (Entity Framework) and MVC. I have separated these out into different projects too:

Example.Entities - contains my entities for EF and the DB context for accessing them.
Example.Models - contains MVC models.
Example.Web - web application. Depends on both Example.Entities and Example.Models.

Instead of holding references to other objects like the domain entities do, the MVC models hold IDs as integers. When a GET request for a page comes in, the MVC controller performs the database query, which returns an entity. I have written "Converter" methods that take a domain entity and convert it to an MVC model. There are other methods that do the opposite (from an MVC model to a domain entity). The model then gets passed to the view, and thus to the client. When a POST request comes in, the MVC controller gets an MVC model. A converter method converts this to a domain entity. This method also performs any validations that can't be expressed as attributes, and makes sure that if the domain entity already exists that we are updating it rather than getting a new one. The methods usually look something like this:

public class PersonConverter
{
    public MyDatabaseContext _db;

    public PersonEntity Convert(PersonModel source)
    {
        PersonEntity destination = _db.People.Find(source.ID);
        if (destination == null)
            destination = new PersonEntity();

        destination.Name = source.Name;
        destination.Organisation = _db.Organisations.Find(source.OrganisationID);
        //etc
        return destination;
    }

    public PersonModel Convert(PersonEntity source)
    {
        PersonModel destination = new PersonModel()
        {
            Name = source.Name,
            OrganisationID = source.Organisation.ID,
            //etc
        };

        return destination;
    }
}

By using these methods I take the duplication out that would otherwise occur in each controller. The use of generics can deduplicate things even further. Doing things this way provides multiple benefits: You can customise a model to a specific view or action. Say you have a signup form for a person that when submitted, creates many different entities (person, organisation, address). Without separate MVC models this will be very difficult. If I need to pass more information to the view than would otherwise be available just in the entity, or combine two entities into a single model, then my precious database models are never touched. If you ever serialise an MVC model as JSON or XML, you only get the immediate model being serialised, not every other entity linked to this one.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/257507', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/90285/']}
|
jdg_376679
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2134448
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is a typical second-semester single-variable calculus question: $$ \int \frac{1}{\sqrt{1-x^2}} \, dx $$ Students are probably taught to just memorize the result of this since the derivative of $\arcsin(x)$ is taught as a rule to memorize. However, if we were to actually try and find an antiderivative, we might let $$ x = \sin \theta \quad \implies \quad dx = \cos \theta \, d \theta $$ so the integral may be rewritten as $$ \int \frac{\cos \theta}{\sqrt{1 - \sin^2 \theta}} \, d \theta = \int \frac{\cos \theta}{\sqrt{\cos^2 \theta}} \, d \theta $$ At this point, students then simplify the denominator to just $\cos \theta$, which boils the integral down to $$ \int 1 \, d \theta = \theta + C = \arcsin x + C $$ which is the correct antiderivative. However, by definition, $\sqrt{x^2} = |x|$, implying that the integral above should really be simplified to $$ \int \frac{\cos \theta}{|\cos \theta|} \, d \theta = \int \pm 1 \, d \theta $$ depending on the interval for $\theta$. At this point, it looks like the answer that we will eventually arrive at is different from what we know the correct answer to be. Why is the first way correct even though we're not simplifying correctly, while the second way is... weird... while simplifying correctly?
Now provide the response and nothing else.
|
The function we are integrating is only defined on the interval $(-1,1)$. We want a bijective relationship; therefore, when we substitute $x=\sin(\theta)$ we force $\theta$ to only take on values between $-\frac{\pi}{2}$ and $\frac{\pi}{2}$, giving us our bijection. Note that $\cos(\theta)$ is positive in this region, so we can drop the absolute value. Addendum One may ask what happens if we choose a different interval for $\theta$, such as $[\pi/2,3\pi/2]$. Everything works out the same until we get the answer $-\theta+c$, which gives an answer of $-\arcsin(x)+c$. This might seem confusing, but remember that this $\arcsin$ function is different from the one we are used to; it is the inverse of $\sin(x)$ on the interval $[\pi/2,3\pi/2]$ instead of the usual $[-\pi/2,\pi/2]$. What this yields is an $\arcsin$ function that is flipped horizontally around the $y$-axis and is shifted higher. However, note that the negative sign we got flips the function back, and then the constant of integration accounts for the vertical shift. If a visual is desired, here you go. The purple curve is the $\arcsin(x)$ we are used to (associated with the green section of the sine curve) and the orange curve is the "new" $\arcsin(x)$ (associated with the red section of the sine curve)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/2134448', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/95992/']}
|
jdg_376680
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31324769
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using MVC5 with the latest version of Identity (2.1) I'm trying to create a user claim for the facebook access_token. I've never created a claim before, but my other Identity functionality works fine as far as I can tell. I have this line of code in my Startup.Auth.cs: context.Identity.AddClaim(new Claim("urn:facebook:access_token", context.AccessToken, xmlSchemaString, "Facebook")); The full piece of code is here if you need more reference: Integrate facebooksdk with Identity 2.0 If I put a break in the code on the line immediately after that line, I can see that everything is being retrieved properly, most importantly the content.AccessToken (which is a huge string). However, it never makes it to the database after completing a successful login. As a test, I tried simplifying it, by changing the line to this: context.Identity.AddClaim(new System.Security.Claims.Claim(ClaimTypes.Email, "test@example.com")); Same outcome, no errors, but nothing is added to the database. I then tried adding this line of code in my IdentityModels.cs right where it tells you to put custom claims: // Add custom user claims here userIdentity.AddClaim(new Claim(ClaimTypes.DateOfBirth, "01/01/1972")); Same outcome...no errors and never makes it to the database. Can anyone think of any reason what my issue might be? The only thing custom in my Identity setup is that I followed an article on how to use username instead of email (as the username). Also, I changed the Identity table names (e.g. UserClaims) in the OnModelCreating block which seems to be a fairly standard procedure. I have a feeling it's going to be some rookie move, but at the moment, I'm stumped. Any help is much appreciated.
Now provide the response and nothing else.
|
The database persists your custom claims for the users. If a user has any claims in the DB, they will be applied to the auth cookie when they log in. To add claims into the database you need to use UserManager: await userManager.AddClaimAsync(userId, new Claim("MyClaimType", "MyClaimValue")); If you are adding claims to ClaimsIdentity, then the claims are not persisted in the database, but added to the cookie directly, and they will not be automatically re-added the next time the user logs in.
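To confirm the claim actually reached your (renamed) UserClaims table, you can read it back through the same manager. This is a quick check added for illustration, not part of the original answer:

// Reads the persisted claims for the user back out of the store.
var claims = await userManager.GetClaimsAsync(userId);
foreach (var claim in claims)
{
    System.Diagnostics.Debug.WriteLine(claim.Type + ": " + claim.Value);
}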
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31324769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2300177/']}
|
jdg_376681
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
45780574
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't know much about .NET yet, so I guess I'm missing something obvious. I created a library (targeted as a DLL file, set for .NET standard 2.0), packaged it both as a DLL file and as a NuGet package. Now I want to use the library in another project, on ASP.NET Core 2.0. How should I do it? I am currently on a Linux VM, so I use Visual Studio Code, and therefore I would prefer some solution without using the full Visual Studio. I tried some solutions using the full Visual Studio, but that didn't work for me, because I haven't found a reference explorer anywhere.
Now provide the response and nothing else.
|
You would have to reference your library in the .csproj file: An empty .csproj file would look like this: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp1.1</TargetFramework> </PropertyGroup></Project> Now, you can have two types of references: Project Reference - You have a project that serves as a class library in your solution and you want to reference it directly : <ProjectReference Include="..\..\src\mylib.csproj" /> Package Reference - You have a link to a NuGet package : <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.2" /> Inside your .csproj file, the references should be inside an "ItemGroup" block, and each reference type should have its own "ItemGroup". Here's an example of a .csproj file with some package references and some project references: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>netcoreapp1.1</TargetFramework> </PropertyGroup> <ItemGroup> <PackageReference Include="Autofac.Extensions.DependencyInjection" Version="4.1.0" /> <PackageReference Include="Microsoft.AspNetCore" Version="1.1.1" /> <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="1.1.2" /> <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.0.0" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.1.2" /> <PackageReference Include="Microsoft.EntityFrameworkCore.InMemory" Version="1.1.2" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="1.1.2" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="1.1.2" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.1" /> <PackageReference Include="xunit" Version="2.2.0" /> <PackageReference Include="xunit.runner.visualstudio" Version="2.2.0" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\..\src\mylib.csproj" /> <ProjectReference Include="..\..\src\mylib2.csproj" /> </ItemGroup></Project>
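Since you're working from Visual Studio Code on Linux, note that you don't have to edit the .csproj by hand: the .NET CLI writes these entries for you. The project path and package name below are placeholders:

dotnet add reference ../../src/mylib.csproj
dotnet add package Newtonsoft.Json

The first command adds a ProjectReference element and the second a PackageReference element, exactly like the ones shown above.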
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45780574', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1773504/']}
|
jdg_376682
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
533554
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a c# .net winforms solution and I want to create two different builds: one that supports IE6 and one that supports IE7. A few of the files in one of my projects are different for the IE6 build versus the IE7 build, so I want to include the IE6 files when I build for IE6 and the IE7 files when I build for IE7. What's the best way of structuring my solution for this situation? Due to other constraints I do not want to create a separate assembly that contains the shared items; I want the project to compile to a single assembly 'foo.dll' regardless of which build I'm making. I thought I could just create two separate projects that compile to 'foo.dll', then create two Release Configurations and only include the relevant project in the relevant configuration. However I'd need to include the files that are the same for IE6 and IE7 in both projects, and I can't see how to use a single copy of a file in two projects (when I Add Existing Item it creates a copy in the project directory). I'm using SVN for source control so could perhaps use that to do the 'sharing' between folders, but doubt that's the best way.. NB: Different builds are needed due to API differences in IE, details of which aren't relevant to the question - just believe me that there are two builds required.
Now provide the response and nothing else.
|
In MSBuild, you can specify conditions to item groups. You can then bind those conditions to the target platform. Example:

<!-- Declare the condition property at the beginning of the build file -->
<PropertyGroup Condition="$(Platform) == 'IE7'">
    <UseNewLibrary>true</UseNewLibrary>
</PropertyGroup>
<PropertyGroup Condition="$(Platform) == 'IE6'">
    <UseNewLibrary>false</UseNewLibrary>
</PropertyGroup>

<!-- Then use the property to select the right files -->
<ItemGroup Condition="$(UseNewLibrary)==true">
    <Compile Include="Class1.cs"/>
    <Compile Include="Class2.cs"/>
    <Compile Include="Class3.cs"/>
    <Compile Include="Class4.cs"/>
</ItemGroup>
<ItemGroup Condition="$(UseNewLibrary)==false">
    <Compile Include="Class1Old.cs"/>
    <Compile Include="Class2Old.cs"/>
    <Compile Include="Class3Old.cs"/>
    <Compile Include="Class4Old.cs"/>
</ItemGroup>

<!-- And now references -->
<ItemGroup Condition="$(UseNewLibrary)==true">
    <Reference Include="MyAssembly, Version=1.1.7.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
</ItemGroup>
<ItemGroup Condition="$(UseNewLibrary)==false">
    <Reference Include="MyAssembly, Version=1.0.6.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
</ItemGroup>

Note that all files will appear in the IDE, but at compile time everything should align correctly. Then, all you need to do is create your platforms (IE6 and IE7) in the configuration management. You can also directly use the Platform property instead of creating an intermediate property.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/533554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8479/']}
|
jdg_376683
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35938580
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a game where I need to have a board of letters continuously scrolling, and looping through a set of data ( A, D, X, S, R, P, F, G, H, Y, W, M ) (Like this: https://www.youtube.com/watch?v=z3rO8TbkS-U&feature=youtu.be ). When a user taps on a letter, the letter needs to be removed from the board. I can't have the board stop scrolling, it needs to continuously scroll. I'm not exactly sure how to do this. I've been trying to do this with a UICollectionView, but I'm not exactly sure how to do this. Any help would be greatly appreciated! Thanks :)
Now provide the response and nothing else.
|
Infinite scrolling in a collection view can be achieved with a very easy technique. Note: this technique reportedly stopped working from iOS 12, so for better results I am adding a second method after explaining this one.

1) Return a huge number in the numberOfItemsInSection delegate method of the collection view.

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return Int(INT_MAX)
}

2) Modulo the item index with the count of your array or dictionary, whatever you are using, to get repeating data.

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellIdentifier, for: indexPath)
    let displayText = indexPath.row % 10
    cell.displayLabel.text = String(displayText)
    return cell
}

Here I have no data, so I am using indexPath.row to display the row number in my label. Suppose I have 10 items to display while the collection view reports a huge number of items; then I modulo the current row by 10. With real data you would modulo the row by the count of your array or dictionary, as below:

let displayText = aryData[indexPath.row % aryData.count]

Now, explaining the second technique, which works on any iOS version and gives better output:

1) Double the number of items reported for the array, then play with the content offset of the collection view. The code below shows how to handle this technique.

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    return aryData.count * 2
}

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = colView.dequeueReusableCell(withReuseIdentifier: cellIdentifier, for: indexPath) as! colViewCell
    var index = indexPath.item
    if index > aryData.count - 1 {
        index -= aryData.count
    }
    cell.displayLabel.text = aryData[index % aryData.count]
    return cell
}

func collectionView(_ collectionView: UICollectionView, willDisplay cell: UICollectionViewCell, forItemAt indexPath: IndexPath) {
    // if the collection view scrolls vertically, adjust offset.y; otherwise comment this block out
    var offset = collectionView.contentOffset
    let height = collectionView.contentSize.height
    if offset.y < height/4 {
        offset.y += height/2
        collectionView.setContentOffset(offset, animated: false)
    } else if offset.y > height/4 * 3 {
        offset.y -= height/2
        collectionView.setContentOffset(offset, animated: false)
    }

    // if the collection view scrolls horizontally, adjust offset.x instead.
    // In my case the collection view scrolls vertically, so the code below is commented out.
    // let width = collectionView.contentSize.width
    // if offset.x < width/4 {
    //     offset.x += width/2
    //     collectionView.setContentOffset(offset, animated: false)
    // } else if offset.x > width/4 * 3 {
    //     offset.x -= width/2
    //     collectionView.setContentOffset(offset, animated: false)
    // }
}

Below is the output of this code. Hope this will help you :)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35938580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3949231/']}
|
jdg_376684
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15430110
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can an array of records be stored in JSON via the SuperObject library? For example:

type
  TData = record
    str: string;
    int: Integer;
    bool: Boolean;
    flt: Double;
  end;

var
  DataArray: Array[0..100] of TData;
Now provide the response and nothing else.
|
Just use the SuperObject marshalling class TSuperRttiContext:

program Project1;

{$APPTYPE CONSOLE}

{$R *.res}

uses
  superobject, System.SysUtils;

type
  TData = record
    str: string;
    int: Integer;
    bool: Boolean;
    flt: Double;
  end;

  TDataArray = Array[0..100] of TData;

procedure Test;
var
  DataArray: TDataArray;
  so: ISuperObject;
  ctx: TSuperRttiContext;
begin
  ctx := TSuperRttiContext.Create;
  try
    so := ctx.AsJson<TDataArray>(DataArray);
  finally
    ctx.Free;
  end;
  Writeln(so.AsJson);
end;

begin
  try
    Test;
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  ReadLn;
end.
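Going the other way, reading the JSON back into the record array, runs through the same RTTI context; a minimal sketch, assuming the JSON shape matches TDataArray:

procedure Load(const so: ISuperObject);
var
  ctx: TSuperRttiContext;
  loaded: TDataArray;
begin
  ctx := TSuperRttiContext.Create;
  try
    // AsType is the inverse of AsJson: it maps the ISuperObject back onto the record array
    loaded := ctx.AsType<TDataArray>(so);
  finally
    ctx.Free;
  end;
end;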
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15430110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2153148/']}
|
jdg_376685
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7196889
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My question is related to SwingUtilities.invokeLater . When should I use it? Do I have to use each time I need to update the GUI components? What does it exactly do? Is there an alternative to it since it doesn't sound intuitive and adds seemingly unnecessary code?
Now provide the response and nothing else.
|
Do I have to use each time I need to update the GUI components? No, not if you're already on the event dispatch thread (EDT) which is always the case when responding to user initiated events such as clicks and selections. (The actionPerformed methods etc, are always called by the EDT.) If you're not on the EDT however and want to do GUI updates (if you want to update the GUI from some timer thread, or from some network thread etc), you'll have to schedule the update to be performed by the EDT. That's what this method is for. Swing is basically thread unsafe. I.e., all interaction with that API needs to be performed on a single thread (the EDT). If you need to do GUI updates from another thread (timer thread, networking thread, ...) you need to use methods such as the one you mentioned (SwingUtilities.invokeLater, SwingUtilities.invokeAndWait, ...).
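As a minimal sketch of the off-EDT case (the StatusUpdater class, its label field and the fetchStatus work are hypothetical stand-ins):

import javax.swing.JLabel;
import javax.swing.SwingUtilities;

class StatusUpdater {
    private final JLabel label; // part of an already-built GUI

    StatusUpdater(JLabel label) { this.label = label; }

    void refreshAsync() {
        new Thread(() -> {
            String result = fetchStatus();      // long-running work, safely off the EDT
            SwingUtilities.invokeLater(         // hand only the GUI update to the EDT
                () -> label.setText(result));
        }).start();
    }

    private String fetchStatus() { return "done"; } // stand-in for real work
}

Inside an actionPerformed handler, by contrast, you could call label.setText(...) directly, since that code is already running on the EDT.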
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7196889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/731901/']}
|
jdg_376686
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23848
|
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Visual Studio and Intellij I can press Tab to select the highlighted auto-completion. In Helm: Is there a way to give Tab the same behavior as Return?
Now provide the response and nothing else.
|
You could add this to your init file: (with-eval-after-load 'helm (define-key helm-map (kbd "TAB") #'helm-maybe-exit-minibuffer)) This should work in all cases except where a given helm source binds something different than helm-maybe-exit-minibuffer to RET , but off the top of my head I can't think of any sources that do. Note that if you only want to apply this change for a specific source (rather than for all helm sources) you could see if that source has its own associated keymap, and bind it there instead.
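If you'd rather scope the change to a single source, a hedged sketch for helm-find-files (helm-find-files-map is the keymap Helm uses there, but verify the name in your Helm version):

(with-eval-after-load 'helm-files
  (define-key helm-find-files-map (kbd "TAB") #'helm-maybe-exit-minibuffer))

Keep in mind that TAB has other default uses in Helm (e.g. helm-select-action), so rebinding it trades those away.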
|
{}
|
{'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/23848', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/12284/']}
|
jdg_376687
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
560307
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Prove that the function $\sqrt{x}$ is continuous on its domain $[0,\infty)$. Proof. Since $\sqrt{0} = 0$, we consider the function $\sqrt{x}$ on $[a,\infty)$ where $a$ is a real number and $a \neq 0$. Let $\delta=2\sqrt{a}\,\epsilon$. Then, for all $x$ in the domain with $\left|x-x_0\right| < \delta$, $$\left|\sqrt{x}-\sqrt{x_0}\right| = \left|\frac{x-x_0}{\sqrt{x}+\sqrt{x_0}}\right| < \left|\frac{\delta}{2\sqrt{a}}\right|=\epsilon.$$ Can I do this?
Now provide the response and nothing else.
|
We need to prove that for any point $a \in (0, \infty)$, for every $\varepsilon>0$ there exists a $\delta > 0$ such that $$|x-a|<\delta\implies|\sqrt{x}-\sqrt{a}|<\varepsilon.$$ To find a $\delta$, multiply by the conjugate to remove the square roots: $$|\sqrt{x}-\sqrt{a}|=\frac{|x-a|}{\sqrt{x}+\sqrt{a}}.\tag{1}$$ We now need a lower bound on the denominator (an upper bound would run the inequality the wrong way). Since $\sqrt{x}\ge 0$ for every $x$ in the domain, $$\sqrt{x}+\sqrt{a}\ge\sqrt{a},$$ which combined with $(1)$ gives $$|\sqrt{x}-\sqrt{a}|\le\frac{|x-a|}{\sqrt{a}}.$$ So, let $\delta=\varepsilon\sqrt{a}$; then $|x-a|<\delta$ forces $|\sqrt{x}-\sqrt{a}|<\varepsilon$. This proves that $f(x) = \sqrt{x}$ is continuous on $(0, \infty)$. At $a=0$ the same goal is even easier: $|\sqrt{x}-0|<\varepsilon$ whenever $0\le x<\varepsilon^2$, so $\delta=\varepsilon^2$ works, and $f$ is continuous on all of $[0,\infty)$. $\square$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/560307', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/92974/']}
|
jdg_376688
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43143003
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Since I only have an AMD A10-7850 APU and do not have the funds to spend on an $800-$1200 NVIDIA graphics card, I am trying to make do with the resources I have in order to speed up deep learning via tensorflow/keras. Initially, I used a pre-compiled version of Tensorflow. InceptionV3 would take about 1000-1200 seconds to compute 1 epoch. It has been painfully slow. To speed up calculations, I first self-compiled Tensorflow with optimizers (using AVX and SSE4 instructions). This led to a roughly 40% decrease in computation times. The same computations performed above now only take about 600 seconds to compute. It's almost bearable - kind of like watching paint dry. I am looking for ways to further decrease computation times. I only have an integrated AMD graphics card that is part of the APU. How can I make use of this resource to speed up computation even more? More generally, let's say there are other people with similar monetary restrictions and Intel setups. How can anyone WITHOUT a discrete NVIDIA card make use of their integrated graphics chips or otherwise non-NVIDIA setup to achieve faster-than-CPU-only performance? Is that possible? Why/why not? What needs to be done to achieve this goal? Or will this be possible in the near future (2-6 months)? How?
Now provide the response and nothing else.
|
After researching this topic for a few months, I can see 3.5 possible paths forward:

1.) Tensorflow + OpenCL, as mentioned in the comments above:

There seems to be some movement going on in this field. Over at Codeplay, Lukasz Iwanski just posted a comprehensive answer on how to get tensorflow to run with OpenCL here (I will only provide a link as stated above because the information might change there): https://www.codeplay.com/portal/03-30-17-setting-up-tensorflow-with-opencl-using-sycl

The potential to use integrated graphics is alluring. It's also worth exploring the use of this combination with APUs. But I am not sure how well this will work, since OpenCL support is still early in development and hardware support is very limited. Furthermore, OpenCL is not the same as a handcrafted library of optimized code. (UPDATE 2017-04-24: I have gotten the code to compile after running into some issues here!) Unfortunately, the hoped-for speed improvements ON MY SETUP (iGPU) did not materialize.

CIFAR 10 dataset:

Tensorflow (via pip, i.e. unoptimized): 1700 sec/epoch at 390% CPU utilization.
Tensorflow (SSE4, AVX): 1100 sec/epoch at 390% CPU utilization.
Tensorflow (OpenCL + iGPU): 5800 sec/epoch at 150% CPU and 100% GPU utilization.

Your mileage may vary significantly. So I am wondering what other people are getting, relatively speaking (unoptimized vs optimized vs OpenCL), on your setups?

What should be noted: the OpenCL implementation means that all the heavy computation should be done on the GPU. (Updated on 2017-04-29) But in reality this is not the case yet, because some functions have not been implemented. This leads to unnecessary copying of data back and forth between CPU and GPU RAM. Again, imminent changes should improve the situation. And furthermore, for those interested in helping out and those wanting to speed things up, we can do something that will have a measurable impact on the performance of tensorflow with OpenCL.

But as it stands for now: 1 iGPU << 4 CPUs with SSE+AVX. Perhaps beefier GPUs with larger RAM and/or an OpenCL 2.0 implementation could have made a larger difference.

At this point, I should add that similar efforts have been going on with at least Caffe and/or Theano + OpenCL. The limiting step in all cases appears to be the manual porting of CUDA/cuDNN functionality to the OpenCL paradigm.

2.) RocM + MIOpen

RocM stands for Radeon Open Compute and seems to be a hodgepodge of initiatives that is/will make deep learning possible on non-NVIDIA (mostly Radeon) devices. It includes 3 major components:

HIP: a tool that converts CUDA code to code that can be consumed by AMD GPUs.
ROCk: a 64-bit Linux kernel driver for AMD CPU+GPU devices.
HCC: a C/C++ compiler that compiles code for a heterogeneous system architecture (HSA) environment.

Apparently, RocM is designed to play to AMD's strengths of having both CPU and GPU technology. Their approach to speeding up deep learning makes use of both components. As an APU owner, I am particularly interested in this possibility. But as a cautionary note: Kaveri APUs have limited support (only integrated graphics is supported). Future APUs have not been released yet. And it appears there is still a lot of work being done here to bring this project to a mature state. A lot of work will hopefully make this approach viable within a year, given that AMD has announced their Radeon Instinct cards will be released this year (2017).

The problem here for me is that RocM is providing tools for building deep learning libraries. They do not themselves represent deep learning libraries. As a data scientist who is not focused on tools development, I just want something that works, and am not necessarily interested in building the tooling before I can get to the actual learning. There are not enough hours in the day to do both well at the company I am at.

NVIDIA of course has CUDA and cuDNN, which are libraries of hand-crafted assembler code optimized for NVIDIA GPUs. All major deep learning frameworks build on top of these proprietary libraries. AMD currently does not have anything like that at all.

I am uncertain how successfully AMD will get to where NVIDIA is in this regard. But there is some light being shone on AMD's intentions in an article posted by Carlos Perez on 2017-04-03 here. A recent lecture at Stanford also talks in general terms about how Ryzen, Vega and deep learning fit together. In essence, the article states that MIOpen will represent this hand-crafted library of optimized deep learning functions for AMD devices. This library is set to be released in H1 of 2017. I am uncertain how soon these libraries would be incorporated into the major deep learning frameworks, and what the scope of the implemented functionality will be at that time.

But apparently, AMD has already worked with the developers of Caffe to "hippify" the code base. Basically, CUDA code is converted automatically to C code via HIP. The automation takes care of the vast majority of the code base, leaving less than 0.5% of the code to be changed by hand. Compare that to a manual translation into OpenCL code, and one starts getting the feeling that this approach might be more sustainable. What I am not clear about is where the lower-level assembler language optimizations come in.

(Update 2017-05-19) But with the imminent release of AMD Vega cards (the professional Frontier Edition card, not aimed at consumers, will come first), there are hints that major deep learning frameworks will be supported through the MIOpen framework. A Forbes article released today shows the progress MIOpen has made over just the last couple of months in terms of performance: it appears significant. (Update 2017-08-25) MIOpen has officially been released. We are no longer talking in hypotheticals here. Now we just need to try out how well this framework works.

3.) Neon

Neon is Nervana's (now acquired by Intel) open-source deep learning framework. The reason I mention this framework is that it seems fairly straightforward to use. The syntax is about as easy and intuitive as Keras. More importantly though, this framework has achieved speeds up to 2x faster than Tensorflow on some benchmarks, due to some hand-crafted assembler language optimization for those computations. Potentially, cutting computation times from 500 sec/epoch down to 300 sec/epoch is nothing to sneeze at. 300 sec = 5 minutes. So one could get 12 epochs in an hour, and about 40 epochs in roughly 3.5 hours! But ideally, I want to do these kinds of calculations in under an hour. To get to those levels, I need to use a GPU, and at this point only NVIDIA offers full support in this regard: Neon also uses CUDA and cuDNN when a GPU is available (and of course, it has to be an NVIDIA GPU). If you have access to other Intel hardware, this is of course a valid path to pursue. After all, Neon was developed out of a motivation to get things to work optimally also on non-NVIDIA setups (like Nervana's custom CPUs, and now Intel FPGAs or Xeon Phis).

3.5.) Intel Movidius

Update 2017-08-25: I came across this article. Intel has released a USB3.0-stick-based "deep learning" accelerator. Apparently, it works with Caffe and allows the user to perform common Caffe-based fine-tuning of networks and inference. This is important to stress: if you want to train your own network from scratch, the wording is very ambiguous here. I will therefore assume that, apart from fine-tuning a network, training itself should still be done on something with more parallel compute. The real kicker though is this: when I checked the pricing, this stick costs a mere $79. That's nothing compared to the cost of your average NVIDIA 1070-80(ti) card. If you merely want to tackle some vision problems using common network topologies already available for some related tasks, you can use this stick to fine-tune them to your own use, then compile the code and put it into this stick to do inference quickly. Many use cases can be covered with this stick, and at $79 it could be worth it. This being Intel, they are proposing to go all out on Intel. Their model is to use the cloud (i.e. Nervana Cloud) for training, then use this chip for prototype inference, or inference where energy consumption matters. Whether this is the right approach or not is left for the reader to answer.

At this time, it looks like deep learning without NVIDIA is still difficult to realize. Some limited speed gains are difficult but potentially possible through the use of OpenCL. Other initiatives sound promising, but it will take time to sort out the real impact they will have.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43143003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5848524/']}
|
jdg_376689
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
88501
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $f:(a,b)\to\mathbb{R}$. We are given $(k+1)$ continuous functions $a_0,a_1,\ldots,a_k:(a,b)\to\mathbb{R}$ such that for every $c\in(a,b)$ we can write $f(c+t)=\sum_{i=0}^k a_i(c)t^i+o(t^k)$ (for any $t$ in a neighbourhood of $0$). Can we conclude that $f$ is of class $C^k$?
Now provide the response and nothing else.
|
Yes. It's a classical result that goes back to Marcinkiewicz and Zygmund (On the differentiability of functions and summability of trigonometric series, Fund. Math. 26 (1936)). There is a subtlety in the form of the remainder: a first and natural characterization of $C^k$ is obtained by asking for a remainder of the form $t^k \sigma(c,t)$ with $\sigma$ continuous in the pair $(c,t)$ and $\sigma(c,0)=0$ (i.e. the remainder is "$o(t^k)$ locally uniformly with respect to $c$"). In this form, the proof is very easy, even for vector-valued functions of several variables. But one may state a characterization of $C^k(a,b)$ asking (seemingly) less, that is, for any $c\in(a,b)$ the remainder at $c$ is just $o(t^k)$: it is true, but not a trivial fact, that the remainder then necessarily has the preceding form, so one gets a characterization as well (this is a later result and I'll add references to it as soon as I recall them; or maybe somebody can do it for me).

edit. Given the origin of the question, I'll leave some hints for an elementary proof of: $f\in C^k(a,b)$ if and only if $f$ has a polynomial expansion of order $k$ with continuous coefficients $a_i\in C^0(a,b)$ and remainder of the form $$f(c+t) - \sum_{j=0}^k \frac{a_j(c)}{j!}t^j = t^k\sigma(c,t)\, ,$$ with a continuous $\sigma(c,t)$ vanishing identically for $t=0$. One implication is given of course by Taylor's theorem; for the other, let $A_0,\dots,A_k$ be antiderivatives of $a_0=f,a_1, \dots, a_k$. Fix $a < x < y < b$ and consider the function: $$\phi(t):=A_0(y+t)- A_0(x+t) = \int_x^y f(c + t)\, dc\, .$$ Find two polynomial expansions of order $k$ for the function $\phi$ at zero (you can integrate the polynomial expansion for $f(c+t)$ either with respect to $c$ or with respect to $t$; this is the idea!). Then use the uniqueness of polynomial expansions (that is, if a polynomial $P(t)$ of degree not larger than $k$ is $o(t^k)$ for $t\to0$, then it is the zero polynomial), and deduce $A_j(y)-A_j(x) = a_{j-1}(y) - a_{j-1}(x)$. Since $x$ and $y$ are arbitrary, the thesis follows with $f^{(j)} = a_0^{(j)} = a_j$.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/88501', 'https://mathoverflow.net', 'https://mathoverflow.net/users/36952/']}
|
jdg_376690
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
328064
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(H, \nu,\eta, \Delta, \epsilon, S)$ be a Hopf algebra. $S$ is the antipode. I am reading a proof of the fact $S(xy)=S(y)S(x)$. First, define maps $\nu, \rho$ in $\hom(H \otimes H, H)$ by $\nu(x \otimes y)=S(y)S(x)$ and $\rho(x \otimes y)=S(xy)$. To prove the fact, we have to show that $\rho=\nu$. It is written that to show this it suffices to show that $\rho* \mu=\mu*\nu=\eta\epsilon$, where $*$ is the convolution. I don't understand why this implies that $\rho=\nu$. Please give some advice.
Now provide the response and nothing else.
|
In an associative algebra, if an element has a left inverse and a right inverse, these two are equal. Now $\rho\star\mu=\eta\epsilon$ and $\mu\star\nu=\eta\epsilon$ mean precisely that $\rho$ and $\nu$ are a left and a right inverse of $\mu$, respectively, in the convolution algebra $\hom(H\otimes H,H)$ (recall that $\eta\epsilon$ is the unit element in that algebra). Explicitly, associativity of $\star$ gives $$\rho=\rho\star(\eta\epsilon)=\rho\star(\mu\star\nu)=(\rho\star\mu)\star\nu=(\eta\epsilon)\star\nu=\nu.$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/328064', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_376691
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
630483
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to run a command that looks like this mycli --file test.zip --file another_test.zip How could I run that dynamically with all zip files in a directory? I'm sure I could pipe the files from a find command, but I don't know how to actually append them as arguments to another command and my bash-fu is not great
Now provide the response and nothing else.
|
Using an array:

unset -v args
declare -a args
for file in *.zip
do
  args+=( --file "$file" )
done
mycli "${args[@]}"

Or, POSIXly:

set --
for file in *.zip
do
  set -- "$@" --file "$file"
done
mycli "$@"

Or, assuming GNU tools:

find . -maxdepth 1 -name '*.zip' -printf '--file\0%f\0' | xargs -0 -- mycli

A relevant difference between the array-based approach and the xargs-based one: while the former may fail with an "Argument list too long" error (assuming mycli is not a builtin command), the latter will not, and will run mycli more than once instead. Note, however, that in this last case all but the last invocation's argument lists may end with --file (and the following one start with a file name). Depending on your use case you may be able to use a combination of xargs' options (e.g. -n and -x) to prevent this. Also, note that find will include hidden files in its output, while the array-based alternatives will not, unless the dotglob shell option is set in Bash or, in a POSIX shell, both the *.zip and .*.zip globbing expressions are used. For details and caveats on this: How do you move all files (including hidden) from one directory to another?
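To make the pairing concern concrete: since the stream strictly alternates --file and a name, capping each invocation at an even number of arguments keeps every pair intact; a sketch with an arbitrary cap of 40 (i.e. 20 pairs per run), assuming the names stay well under the system's argument-length limit:

find . -maxdepth 1 -name '*.zip' -printf '--file\0%f\0' | xargs -0 -n 40 -- mycli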
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/630483', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/20536/']}
|
jdg_376692
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26314298
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When creating a new class in .NET, if I declare it "Implements IDisposable" and hit enter, I see that Visual Studio by itself adds different methods and functions, already filled in, to my class. When I try to do the same with my own interfaces, it creates empty methods and functions. Is there any way to provide default implementations of my methods and functions? I have been looking at Link but it didn't resolve my issue. Example of the implementation I'm looking for:

#Region "IDisposable Support"
    Private disposedValue As Boolean ' To detect redundant calls
    ' IDisposable
    Protected Overridable Sub Dispose(ByVal disposing As Boolean)
        If Not Me.disposedValue Then
            If disposing Then
                ' TODO: dispose managed state (managed objects).
            End If
            ' TODO: free unmanaged resources (unmanaged objects) and override Finalize() below.
            ' TODO: set large fields to null.
        End If
        Me.disposedValue = True
    End Sub
    ' TODO: override Finalize() only if Dispose(ByVal disposing As Boolean) above has code to free unmanaged resources.
    'Protected Overrides Sub Finalize()
    '    ' Do not change this code. Put cleanup code in Dispose(ByVal disposing As Boolean) above.
    '    Dispose(False)
    '    MyBase.Finalize()
    'End Sub
    ' This code added by Visual Basic to correctly implement the disposable pattern.
    Public Sub Dispose() Implements IDisposable.Dispose
        ' Do not change this code. Put cleanup code in Dispose(ByVal disposing As Boolean) above.
        Dispose(True)
        GC.SuppressFinalize(Me)
    End Sub
#End Region

Cheers in advance. Further explanations of what I'm looking for: Let's assume the following Interface. Here is what happens and what I'm looking for:
Now provide the response and nothing else.
|
I'm sure by now you have figured this out, but since the question has been left unanswered and I was looking for a way to do just this, I have updated your plunker for future reference if anyone stumbles across this question: http://plnkr.co/edit/PjTlyX?p=preview

There are 2 things to consider: do you wish to check all checkboxes regardless of filtering, or does the checking need to be filter-centric?

Set a $scope variable to the unfiltered list if you wish to ignore filtering on the data source:

var data = [{id: 1, name: "Moroni", age: 50, money: -10},
            {id: 2, name: "Tiancum", age: 43, money: 120}]
$scope.data = data;

or, if you would prefer to only select checkboxes that have been filtered, set the orderedData to another $scope variable within the $scope.tableParams method:

var orderedData = params.sorting() ? $filter('orderBy')(data, params.orderBy()) : data;
orderedData = params.filter() ? $filter('filter')(orderedData, params.filter()) : orderedData;
$scope.orderedData = orderedData;

Then you are free to select the checkboxes whichever way you prefer by simply changing $scope.users to the preferred $scope variable below:

// watch for check all checkbox
$scope.$watch('checkboxes.checked', function(value) {
  angular.forEach($scope.users, function(item) {
    if (angular.isDefined(item.id)) {
      $scope.checkboxes.items[item.id] = value;
    }
  });
});
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26314298', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2635209/']}
|
jdg_376693
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
100020
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a very large dataset and about 5% random values are missing. These variables are correlated with each other. The following example R dataset is just a toy example with dummy correlated data.

set.seed(123)
# matrix of X variables
xmat <- matrix(sample(-1:1, 2000000, replace = TRUE), ncol = 10000)
colnames(xmat) <- paste("M", 1:10000, sep = "")
rownames(xmat) <- paste("sample", 1:200, sep = "")
# M variables are correlated

N <- 2000000*0.05 # 5% random missing values
inds <- round(runif(N, 1, length(xmat)))
xmat[inds] <- NA

> xmat[1:10,1:10]
         M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
sample1  -1 -1  1 NA  0 -1  1 -1  0  -1
sample2   1  1 -1  1  0  0  1 -1 -1   1
sample3   0  0  1 -1 -1 -1  0 -1 -1  -1
sample4   1  0  0 -1 -1  1  1  0  1   1
sample5  NA  0  0 -1 -1  1  0 NA  1  NA
sample6  -1  1  0  1  1  0  1  1 -1  -1
sample7  NA  0  1 -1  0  1 -1  0  1  NA
sample8   1 -1 -1  1  0 -1 -1  1 -1   0
sample9   0 -1  0 -1  1 -1  1 NA  0   1
sample10  0 -1  1  0  1  0  0  1 NA   0

Is there a (best) way to impute missing values in this situation? Is the Random Forest algorithm helpful? Any working solution in R would be much appreciated.

Edits:
(1) Missing values are randomly distributed among the variables and samples. As the number of variables is very large (10000 in this example) while the number of samples is small (about 200 here), any given sample is very likely to have a missing value at some variable, simply because of the large number of variables. So just deleting the sample is not an option.
(2) The variable can be treated as either quantitative or qualitative (binary) in the process of imputing. The only judgement is how well we can predict it (accuracy). So predictions like 0.98 instead of 1 might be acceptable, rather than strictly 0 vs 1 or -1 vs 1. I might need to trade off between computing time and accuracy.
(3) The issue I have is thinking about how overfitting can affect the results, as the number of variables is large compared to the number of samples.
(4) The total quantity of missing values is about 5% and is random (not concentrated in any variables or samples, as precautions were taken to remove the variables or samples with very high missingness).
(5) Making the data complete for analysis is the first objective; accuracy is secondary. So I am not too sensitive to accuracy.
Now provide the response and nothing else.
|
There can be two ways of dealing with the large-variable, small-sample (observation) problem, depending upon your situation and dataset.

(1) Just use the samples (observations) as the variables, provided that the scores across variables are on the same scale or normalized.

(2) Use the variables as variables, but do some random sampling while imputing so that the number of variables is less than the number of samples, and finally merge the data.

The following is a worked example; you can adjust it to your needs. I assume the variables are continuous, but you can work out something similar for discrete variables. Here I am giving a small example for a quick check.

First, generate correlated data for the workout; here the observations (samples) are correlated, which may be realistic in situations where variables are assumed independent while observations are correlated. But in other situations both observations and variables may be correlated.

# example correlated data, correlated by observations
# number of observations
nobs = 200
nvars = 100 # number of variables

# covariance matrix matrixCR to create correlated data
matrixCR <- matrix(NA, nrow = nobs, ncol = nobs)
diag(matrixCR) <- 1
matrixCR[upper.tri(matrixCR, diag = FALSE)] <- 0.5
matrixCR[lower.tri(matrixCR, diag = FALSE)] <- 0.5
matrixCR[1:10, 1:10]
L = chol(matrixCR) # Cholesky decomposition
nvars = dim(L)[1]
set.seed(123)
rM = t(L) %*% matrix(rnorm(nvars*nobs), nrow = nvars, ncol = nobs)
rownames(rM) <- paste("V", 1:nvars, sep = "")
colnames(rM) <- paste("O", 1:nobs, sep = "")
rM[1:10, 1:10]

# introduce missing values in random places
N <- round(nobs*nvars*0.05, 0) # 5% random missing values
set.seed(123)
inds <- round(runif(N, 1, length(rM)))
rM1 <- rM
rM1[inds] <- NA

I am using the missForest package for imputation, which depends upon the randomForest package. You can do parallel computing if you have very many data points to impute.

# now use the rM1 matrix in imputation.
require(missForest)
out.m <- missForest(rM1, maxiter = 10, ntree = 300)

# imputed
imp.rM1 <- out.m$ximp

As this is a simulated dataset, we have the luxury of estimating the accuracy of imputation by comparing the original values (before the missing values were introduced) with the imputed ones.

# actual values that were made missing
aval <- rM[inds]
impv <- imp.rM1[inds]

# accuracy - defined as correlation between actual (before NA introduction) and imputed values
cor(aval, impv)
[1] 0.6759404

You can work around to increase the accuracy. Good luck!
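On the parallelization note: missForest has a parallelize argument ("variables" or "forests") that distributes work over a registered foreach backend; a minimal sketch using doParallel (the core count is arbitrary):

library(doParallel)
registerDoParallel(cores = 4) # register a parallel backend first
out.m <- missForest(rM1, maxiter = 10, ntree = 300, parallelize = "forests")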
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/100020', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/7244/']}
|
jdg_376694
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4579988
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $f, g:\mathbb{R}^n \to\mathbb{R}$ are defined by $f(x) = e^{-a|x|}$ and $g(x) = e^{-b|x|}$ for some $a,b>0$. Is the convolution $f*g(x) = \int_{\mathbb R^n} f(y) g(x-y)\, dy$ also of exponential decay? I think $f * g$ will decay like $e^{-\min(a,b)|x|}$. This guess is based on the Fourier transform $$ \hat f (k) = c_n \frac{a}{(a^2 + |k|^2)^{(n+1)/2}}$$ and similarly for $g$. The Fourier transform of $f*g$ will have its nearest singularity at $-|k|^2 = \min(a,b)^2$. But I don't know how to transfer this information back to $x$-space. It would also be nice to have an argument that does not use the Fourier transform. Edit: Jose27 gave a very clear answer that $f*g(x)$ decays like $e^{-c|x|}$ for any $c<\min(a,b)$. Is there some other way to improve the decay to $c= \min(a,b)$?
Now provide the response and nothing else.
|
For even cases, apply \begin{align}&1-x^{4m}=(1-x^4) \prod_{k=1}^{m-1} \left(1+2x^2\cos\frac{k\pi}{m}+x^4 \right)\\& 1-x^{4m+2}= (1-x^2)\prod_{k=0}^{m-1} \left(1+2x^2\cos\frac{(2k+1)\pi}{2m+1}+x^4\right)\end{align} and $$\int_0^1 \frac{\ln(1+2x^2\cos \theta +x^4)}{1+x^2}dx=\pi \ln\left(2\cos\frac{\theta}4\right)-2G$$ to obtain \begin{align}&\int_0^1\frac{\ln(1-x^{2n})}{1+x^2}dx=-nG+\frac{(2n-1)\pi}4\ln2+\pi \sum_{k=1}^{[\frac{n-1}2]}\ln \cos\frac{(n-2k)\pi}{4n}\end{align} In particular \begin{align} \int_0^1\frac{\ln(1-x^{2})}{1+x^2}dx=& -G+\frac{\pi}4\ln2 \\\int_0^1\frac{\ln(1-x^{4})}{1+x^2}dx=& -2G+\frac{3\pi}4\ln2 \\\int_0^1\frac{\ln(1-x^{6})}{1+x^2}dx=& -3G-\frac{\pi}4\ln2 +\pi \ln(1+\sqrt3)\\ \int_0^1\frac{\ln(1-x^{8})}{1+x^2}dx=& -4G+\frac{3\pi}4\ln2 +\frac\pi2\ln(2+\sqrt2)\\ \int_0^1\frac{\ln(1-x^{10})}{1+x^2}dx =&-5G-\frac{3\pi}4\ln2 +\pi \ln\left(1+\sqrt5+\sqrt{2(5+\sqrt5)}\right)\\\int_0^1\frac{\ln(1-x^{12})}{1+x^2}dx=& -6 G+\frac{\pi}4\ln2 +\pi\ln(3+\sqrt3)\\ \int_0^1\frac{\ln(1-x^{14})}{1+x^2}dx =& -7G+ \frac{13\pi}4\ln2 +\pi \ln\left(\cos\frac\pi{28} \cos\frac{3\pi}{28} \cos\frac{5\pi}{28} \right)\\ \int_0^1\frac{\ln(1-x^{16})}{1+x^2}dx =&-8G+\frac{5\pi}4\ln2 +\pi\ln\left(1+\sqrt2+\sqrt{2+\sqrt2}\right)\\\int_0^1\frac{\ln(1-x^{18})}{1+x^2}dx =& -9G+ \frac{11\pi}4\ln2 +\pi \ln(1+\sqrt3)\\&\ +\pi\ln\left(\cos\frac\pi{36} \cos\frac{5\pi}{36} \cos\frac{7\pi}{36} \right)\\\end{align}
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4579988', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/521597/']}
|
jdg_376695
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4001345
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$x_n = 1 + \frac{2}{4} + \frac{3}{4^2} + \dotsb + \frac{n}{4^{n-1}}$$ So I need to see if the sequence is monotone and bounded. By computing $x_{n+1} - x_n$ I get $\frac{n+1}{4^n} > 0$, so it's increasing. How do I show that it is bounded? Also, I have a hint to use the inequality $2^n > n$.
Now provide the response and nothing else.
|
If you're familiar with the gamma function , then this (or this ... etc) is the way to go. Otherwise, write $$1+\frac{1}{n^3}=\left(1+\frac1n\right)\left(1-\frac{1}{2n}\right)^2\left(1+\frac{3}{(2n-1)^2}\right)$$ and see that $$\prod_{n=1}^N\left(1+\frac1n\right)\left(1-\frac{1}{2n}\right)^2=(N+1)\left(\frac{(2N-1)!!}{(2N)!!}\right)^2=(N+1)\left(\frac{(2N)!}{2^{2N}N!^2}\right)^2$$ tends to $1/\pi$ as $N\to\infty$ (by Wallis product , or Stirling's formula ), and $$\prod_{n=1}^\infty\left(1+\frac{3}{(2n-1)^2}\right)=\cosh\frac{\pi\sqrt3}{2}$$ is a special case of the infinite product for the [hyperbolic] cosine.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4001345', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/743514/']}
|
jdg_376696
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3904783
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to write an Eclipse plug-in that is able to import existing documentation into JavaDoc comments of existing Java code. Background: I've generated Java code to communicate with a third-party (native) program. Now I would like to import the documentation of the third-party program into my generated code. What I've already done: I have developed an Eclipse "QuickAssist" by implementing "IJavaCompletionProposalComputer" that already does the trick for a single Java method or static field. Now I would like to write a plug-in that triggers my implementation for every method or field within some selected source code. I'm not so familiar with the Eclipse SDK. Can someone give me a hint where to start, or which interface(s) I need to implement to achieve this? Thanks in advance.
Now provide the response and nothing else.
|
Is it legal in C++? No. The constant must be initialized in the base class constructor. The solution is to provide an appropriate constructor in your base class; otherwise it cannot be used. Furthermore, there's no reason not to provide that constructor.

class Base {
    int const constant;
public:
    virtual ~Base() = 0; // Makes this an abstract base class.
protected:
    Base(int c) : constant(c) { }
};

// Must be implemented!
Base::~Base() { }

class Derived : public Base {
public:
    Derived() : Base(42) { }
};
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3904783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/472110/']}
|
jdg_376697
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
57322022
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to add a new dashboard to the below helm chart: https://github.com/helm/charts/tree/master/stable/prometheus-operator
The documentation is not very clear. I have added a config map to the namespace like the below -

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-grafana-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  etcd-dashboard.json: |-
    {JSON}

According to the documentation, this should just be "picked" up and added, but it's not: https://github.com/helm/charts/tree/master/stable/grafana#configuration
The sidecar option in my values.yaml looks like -

grafana:
  enabled: true
  ## Deploy default dashboards.
  ##
  defaultDashboardsEnabled: true
  adminPassword: password
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: false
    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    ## Labels to be added to the Ingress
    ##
    labels: {}
    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts: []
    ## Path for grafana ingress
    path: /
    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #     - grafana.example.com
  #dashboardsConfigMaps:
  #sidecarProvider: sample-grafana-dashboard
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard

I have also tried adding this to the values.yaml:

dashboardsConfigMaps:
  - sample-grafana-dashboard

which doesn't work. Does anyone have any experience with adding your own dashboards to this helm chart, as I really am at my wits' end.
Now provide the response and nothing else.
|
To sum up: for the sidecar you need only one option set to true - grafana.sidecar.dashboards.enabled.

Install prometheus-operator with the sidecar enabled:

helm install stable/prometheus-operator --name prometheus-operator --set grafana.sidecar.dashboards.enabled=true --namespace monitoring

Add a new dashboard, for example MongoDB_Overview:

wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json

Now the tricky part: you have to set a correct label for your configmap. By default grafana.sidecar.dashboards.label is set to grafana_dashboard, so:

kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview

Now you should find your newly added dashboard in grafana; moreover, every configmap with the label grafana_dashboard will be processed as a dashboard. The dashboard is persisted and safe, stored in a configmap.

UPDATE: January 2021: The Prometheus operator chart was migrated from the stable repo to the Prometheus Community Kubernetes Helm Charts, and helm v3 was released, so:

Create the namespace:

kubectl create namespace monitoring

Install prometheus-operator from the helm chart:

helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring

Add the MongoDB dashboard as an example:

wget https://raw.githubusercontent.com/percona/grafana-dashboards/master/dashboards/MongoDB_Overview.json
kubectl -n monitoring create cm grafana-mongodb-overview --from-file=MongoDB_Overview.json

Lastly, label the dashboard:

kubectl -n monitoring label cm grafana-mongodb-overview grafana_dashboard=mongodb-overview
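If you prefer declarative manifests over imperative kubectl create/label, the same result can be expressed as a single ConfigMap with the label baked in; a sketch (the dashboard JSON itself is elided, as in the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-mongodb-overview
  namespace: monitoring
  labels:
    grafana_dashboard: mongodb-overview
data:
  MongoDB_Overview.json: |-
    {JSON}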
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57322022', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8065247/']}
|
jdg_376698
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4598120
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to integrate an existing PowerShell script to update its status via a RESTful web service that returns JSON. I'm a bit new to PowerShell, but I was able to find the System.Net.WebRequest object and do something like the following:

$a = [System.Net.WebRequest]::Create("http://intranet/service/object/")
$a.Method = "GET"
$a.GetResponse()

which returns a JSON array of objects:

[ {id:1}, {id:2}] // etc

I'm not sure where to go from here and how to parse this into a native datatype. I'd like to be able to post and delete as well. Any pointers? And are there any JSON/REST libraries or cmdlets?
Now provide the response and nothing else.
|
What you want is PowerShell 3 and its Invoke-RestMethod, ConvertTo-Json, and ConvertFrom-Json cmdlets. Your code will end up looking like:

$stuff = Invoke-RestMethod -Uri $url -Method Get

and there shouldn't even be a need to invoke ConvertFrom-Json on the resulting $stuff => it's already in a usable non-string format. As for POSTs|PUTs, simply use PowerShell hashes and arrays to structure your data and then call ConvertTo-Json on it before passing it to Invoke-RestMethod or Invoke-WebRequest:

Invoke-WebRequest -Uri $url -ContentType application/json -Method Post -Body $objectConvertedToJson

See http://technet.microsoft.com/en-us/Library/hh849971.aspx for details.
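Deletes follow the same pattern as the GET, just with a different verb; a minimal sketch (the /1 resource path is a hypothetical item URL):

Invoke-RestMethod -Uri "$url/1" -Method Delete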
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4598120', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/339/']}
|
jdg_376699
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
907183
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am taking a real analysis course. I have the following statement: Prove that if $a$ belongs to $\mathbb R$, such that $0\leq a < \epsilon$, for all $\epsilon > 0$, then $a = 0$ I know how to prove it (using contradiction). Now, I was wondering, how I prove almost the same thing: Prove that if $a$ belongs to $\mathbb R$, such that $0\leq a \leq \epsilon$ for all $\epsilon > 0$, then $a = 0$ Can someone show me how to do it? I am stuck with the equality to show a contradiction. Note: this is not homework, just for my own knowledge.
Now provide the response and nothing else.
|
If $a>0$ then $0\leq a\leq \epsilon$ is not true for $\epsilon=\frac{a}{2}>0$ If $a<0$ then it is immediate that $0\leq a\leq \epsilon$ is not true. So the only 'candidate' that remains is $a=0$ and it is obvious that the statement is true for $a=0$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/907183', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38675/']}
|
jdg_376700
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6672517
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to take an image and invert the colors in iOS.
Now provide the response and nothing else.
|
To expand on quixoto's answer and because I have relevant source code from a project of my own, if you were to need to drop to on-CPU pixel manipulation then the following, which I've added exposition to, should do the trick:

@implementation UIImage (NegativeImage)

- (UIImage *)negativeImage
{
    // get width and height as integers, since we'll be using them as
    // array subscripts, etc, and this'll save a whole lot of casting
    CGSize size = self.size;
    int width = size.width;
    int height = size.height;

    // Create a suitable RGB+alpha bitmap context in BGRA colour space
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *memoryPool = (unsigned char *)calloc(width*height*4, 1);
    CGContextRef context = CGBitmapContextCreate(memoryPool, width, height, 8, width * 4, colourSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colourSpace);

    // draw the current image to the newly created context
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    // run through every pixel, a scan line at a time...
    for(int y = 0; y < height; y++)
    {
        // get a pointer to the start of this scan line
        unsigned char *linePointer = &memoryPool[y * width * 4];

        // step through the pixels one by one...
        for(int x = 0; x < width; x++)
        {
            // get RGB values. We're dealing with premultiplied alpha
            // here, so we need to divide by the alpha channel (if it
            // isn't zero, of course) to get uninflected RGB. We
            // multiply by 255 to keep precision while still using
            // integers
            int r, g, b;
            if(linePointer[3])
            {
                r = linePointer[0] * 255 / linePointer[3];
                g = linePointer[1] * 255 / linePointer[3];
                b = linePointer[2] * 255 / linePointer[3];
            }
            else
                r = g = b = 0;

            // perform the colour inversion
            r = 255 - r;
            g = 255 - g;
            b = 255 - b;

            // multiply by alpha again, divide by 255 to undo the
            // scaling before, store the new values and advance
            // the pointer we're reading pixel data from
            linePointer[0] = r * linePointer[3] / 255;
            linePointer[1] = g * linePointer[3] / 255;
            linePointer[2] = b * linePointer[3] / 255;
            linePointer += 4;
        }
    }

    // get a CG image from the context, wrap that into a
    // UIImage
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];

    // clean up
    CGImageRelease(cgImage);
    CGContextRelease(context);
    free(memoryPool);

    // and return
    return returnImage;
}

@end

So that adds a category method to UIImage that:

- creates a clear CoreGraphics bitmap context that it can access the memory of
- draws the UIImage to it
- runs through every pixel, converting from premultiplied alpha to uninflected RGB, inverting each channel separately, multiplying by alpha again and storing back
- gets an image from the context and wraps it into a UIImage
- cleans up after itself, and returns the UIImage
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6672517', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/243453/']}
|
jdg_376701
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2628048
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When using the .ToList() extension method on a Stack<T> , is the result the same as popping each element and adding to a new list (reverse of what was pushed)? If so, is this because it really is iterating over each element, or does it store the elements in reverse internally and slip the array into a new List<T> ?
Now provide the response and nothing else.
|
Stack itself does not have a ToList method; it's an extension method from the Enumerable class. As those extension methods only deal with IEnumerable<T>, it's safe to assume that ToList iterates over the items of the stack to create the new list (or at least acts exactly as if it would - the Enumerable methods sometimes test the type of the argument and use an optimized implementation). Interestingly, the documentation does not seem to directly state which order the stack is enumerated in, but the example code does show an order, and the examples are part of the documentation. Also, in practice, changing the iteration order would break so much code that it would be way too risky to change now. I also checked with Reflector; Stack<T> stores its items in an array with the bottommost element at index 0, but its Enumerator iterates the array in reverse order. Therefore the first element that comes out of the iterator is the top of the stack.
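A quick way to convince yourself of the order (a minimal, self-contained sketch):

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var stack = new Stack<int>();
        stack.Push(1);
        stack.Push(2);
        stack.Push(3); // 3 is now on top

        // ToList follows the stack's enumerator: top of the stack first
        Console.WriteLine(string.Join(", ", stack.ToList())); // prints: 3, 2, 1
    }
}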
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2628048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376702
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3371127
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way in CMake to force a path specified via include_directories (or perhaps through a different function) to use the -isystem flag instead of the -I flag when building with gcc? See http://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html#Directory-Options for details on -I and -isystem.
Now provide the response and nothing else.
|
Yes, you can force a path to be treated as a system include by using the optional SYSTEM flag:

include_directories(SYSTEM path)

https://cmake.org/cmake/help/latest/command/include_directories.html

Starting with CMake 2.8.12 you can use the new target_include_directories to add system includes at the target level, while leveraging the new usage-requirement features of cmake:

target_include_directories(foo SYSTEM PUBLIC path)

Now target foo will use path as a system include, and anything that links to foo will also automatically use path as a system include. You can control the propagation of these usage requirements by changing the PUBLIC keyword to PRIVATE or INTERFACE.

https://cmake.org/cmake/help/latest/command/target_include_directories.html
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3371127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/377377/']}
|
jdg_376703
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24931
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
echo '<h1>hello, world</h1>' | firefoxcat index.html | firefox These commands don't work. If firefox can read stdin, I can send HTML to firefox via a pipe. Is it possible to make firefox read stdin?
Now provide the response and nothing else.
|
The short answer is, you're better off writing a temporary file and opening that. Getting pipes to work properly is more complicated and probably won't give you any extra advantages. That said, here's what I've found. If your firefox command is actually starting Firefox instead of talking with an already-running Firefox instance, you can do this: echo '<h1>hello, world</h1>' | firefox /dev/fd/0 Which tells Firefox explicitly to read its standard input, which is where the pipe is putting its data. But if Firefox is already running, the firefox command is just going to pass that name to the main Firefox process, which will read its own standard input, which probably won't give it anything and certainly isn't connected to your pipe. Furthermore, when reading from a pipe, Firefox buffers things pretty heavily, so it's not going to update the page each time you give it a new line of HTML, if that's what you're going for. Try closing Firefox and running: cat | firefox /dev/fd/0 (N.B. you do actually need the cat here.) Paste some long lines into your shell window repeatedly until Firefox decides to update the page, and you can see how much data it takes. Now send an End-Of-File signal by hitting Ctrl+D on a new line, and watch Firefox update instantly. But then you can't add any more data. So best is probably: echo '<h1>hello, world</h1>' >my_temporary_file; firefox my_temporary_file
|
{}
|
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/24931', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/7971/']}
|
jdg_376704
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11545
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Inspired by Sjoerd C. de Vries' nice answer to this question, and the desire to pimp a Graph I did with Mathematica recently, I would like to know if there are ways to customize an arrow's shaft rather than its head (other than using Tube in Graphics3D). I am especially interested in arrows with non-uniform thickness along their length. Consider some examples grabbed from the web: Any chance to come up with a solution that allows one to have Graphs like the following with automatically drawn/scaled arrows? P.S.: This is a 2D question :). I understand that Line (and Tube) have the advantage of being easier to handle in 3D.
Now provide the response and nothing else.
|
Update: added a version using Inset below the original answer Here's an extended version of the arrow heads customization code. There are two pieces. One is the arrow drawing routine. The other one is an arrow editor, similar to my arrowheads editor but with more controls. There is a 'Copy to Clipboard' button to copy the drawArrow function with necessary parameter values filled in to generate the designed arrow. Code is at the bottom of this answer. usage: Graph[{1 -> 2, 2 -> 3, 3 -> 4, 4 -> 1, 4 -> 5, 5 -> 6, 6 -> 7, 7 -> 8, 8 -> 1}, EdgeShapeFunction -> ({drawArrow[{{-6.5`, 1}, {-4, 1/2}, {-6, 0}, {-2, 0.2`}, {-2, 0.5`}, {-2, 1}, {-2, 1.1`}, {-1, 1}, {0, 0}}, #1[[1]], #1[[2]], ArrowFillColor -> RGBColor[1, 1, 0], ArrowFillOpacity -> 0.5`, ArrowEdgeThickness -> 0.1`, ArrowEdgeColor -> RGBColor[1, 0.5`, 0], ArrowEdgeOpacity -> 1, LeftArrowSpacing -> 0.2, RightArrowSpacing -> 0.2]} &), VertexShapeFunction -> None, EdgeStyle -> Automatic] The 2nd and 3rd argument are the start and end positions of the arrow, respectively. Replacing these with #1[[1]] and #1[[2]] and adding an & at the end, turns the drawArrow function into a function that can be used as EdgeShapeFunction in Graph More examples: The code: Options[drawArrow] = {ArrowFillColor -> Black, ArrowEdgeThickness -> 0.02, ArrowEdgeColor -> Black, ArrowFillOpacity -> 1, ArrowEdgeOpacity -> 1, LeftArrowSpacing -> 0, RightArrowSpacing -> 0};drawArrow[{shaftEndLeft_, shaftMidLeft_, shaftEndMid_, baseMidLeft_, innerMidLeft_, innerBaseLeft_, outerBaseLeft_, outerMidLeft_, top_}, pstart_, pend_, OptionsPattern[]] := Module[{baseMidRight, outerMidRight, innerMidRight, innerBaseRight, outerBaseRight, shaftEndRight, shaftMidRight}, shaftEndRight = {1, -1} shaftEndLeft; shaftMidRight = {1, -1} shaftMidLeft; baseMidRight = {1, -1} baseMidLeft; innerBaseRight = {1, -1} innerBaseLeft; outerBaseRight = {1, -1} outerBaseLeft; outerMidRight = {1, -1} outerMidLeft; innerMidRight = {1, -1} innerMidLeft; { If[OptionValue[ArrowEdgeColor] === None, EdgeForm[], EdgeForm[ Directive[Thickness[OptionValue[ArrowEdgeThickness]], OptionValue[ArrowEdgeColor], Opacity[OptionValue[ArrowEdgeOpacity]]]]], If[OptionValue[ArrowFillColor] === None, FaceForm[], FaceForm[ Directive[Opacity[OptionValue[ArrowFillOpacity]], OptionValue[ArrowFillColor]]]], GeometricTransformation[ FilledCurve[ { Line[{shaftEndMid, shaftEndLeft}], BSplineCurve[{shaftEndLeft, shaftMidLeft, baseMidLeft}], BSplineCurve[{baseMidLeft, innerMidLeft, innerBaseLeft}], Line[{innerBaseLeft, outerBaseLeft}], BSplineCurve[{outerBaseLeft, outerMidLeft, top}], BSplineCurve[{top, outerMidRight, outerBaseRight}], Line[{outerBaseRight, innerBaseRight}], BSplineCurve[{innerBaseRight, innerMidRight, baseMidRight}], BSplineCurve[{baseMidRight, shaftMidRight, shaftEndRight}], Line[{shaftEndRight, shaftEndMid}] } ], FindGeometricTransform[{pstart, pend}, {shaftEndMid + {-OptionValue[ LeftArrowSpacing] EuclideanDistance[shaftEndMid, top], 0}, top + {OptionValue[RightArrowSpacing] EuclideanDistance[ shaftEndMid, top], 0}}][[2]] ] } ]DynamicModule[{top, fill, edge, arrowFillColor, arrowEdgeColor, arrowFillOpacity, arrowEdgeThickness, arrowEdgeOpacity}, Manipulate[ top = {0, 0}; shaftEndMid = {1, 0} shaftEndMid; Graphics[ h = drawArrow2[{shaftEndLeft, shaftMidLeft, shaftEndMid, baseMidLeft, innerMidLeft, innerBaseLeft, outerBaseLeft, outerMidLeft, top}, shaftEndMid, top, ArrowFillColor -> If[fill, arrowFillColor, None], ArrowFillOpacity -> arrowFillOpacity, ArrowEdgeThickness -> arrowEdgeThickness, ArrowEdgeColor -> 
If[edge, arrowEdgeColor, None], ArrowEdgeOpacity -> arrowEdgeOpacity ]; h /. {drawArrow2 -> drawArrow}, PlotRange -> {{-7, 2}, {-2, 2}}, GridLines -> {Range[-7, 2, 1/4], Range[-2, 2, 1/4]}, GridLinesStyle -> Dotted, ImageSize -> 800, AspectRatio -> Automatic ], {{shaftEndLeft, {-6.5, 1}}, Locator}, {{shaftMidLeft, {-4, 1/2}}, Locator}, {{shaftEndMid, {-6, 0}}, Locator}, {{baseMidLeft, {-2, 0.2}}, Locator}, {{innerMidLeft, {-2, 0.5}}, Locator}, {{innerBaseLeft, {-2, 1}}, Locator}, {{outerBaseLeft, {-2, 1.1}}, Locator}, {{outerMidLeft, {-1, 1}}, Locator}, Grid[ { {Style["Fill", Bold, 16], Control@{{fill, True, "Fill"}, {True, False}}, " ", Control@{{arrowFillColor, Yellow, "Color"}, Yellow}, " ", Control@{{arrowFillOpacity, 0.5, "Opacity"}, 0, 1}, "", ""}, {Style["Edge", Bold, 16], Control@{{edge, True, "Edge"}, {True, False}}, " ", Control@{{arrowEdgeColor, Orange, "Color"}, Orange}, " ", Control@{{arrowEdgeThickness, 0.02, "Thickness"}, 0, 0.1}, " ", Control@{{arrowEdgeOpacity, 1, "Opacity"}, 0, 1}} }\[Transpose] , Alignment -> Left, Dividers -> {{True, True, {False}, True}, {True, True, {False}, True}} ], Button["Copy to clipboard", CopyToClipboard[ h /. {drawArrow2 -> Defer[drawArrow]} ], ImageSize -> Automatic ] ] ] UPDATE I was not satisfied with the behavior of the line thickness in the arrow definition. The problem was discussed in this question. I implemented the Inset idea of Mr.Wizard and also improved the clipboard copying, based on Simon's idea, but got rid of his Sequence that ended up as junk in the copied code. The new code is at the bottom. A result is shown here: Show[ Graph[GraphData["DodecahedralGraph", "EdgeRules"], VertexShape -> Graphics@{Red, Disk[]}, EdgeShapeFunction -> Function[{p$, v$}, drawArrow @@ {{{-6.2059999999999995`, 0.3650000000000002`}, {-4.052`, 1.045`}, {-6.156`, 0.`}, {-1.5380000000000003`, 0.2549999999999999`}, {-0.9879999999999995`, 0.46499999999999986`}, {-2, 1}, {-1.428`, 1.435`}, {-1, 1}, {0, 0}}, p$[[1]], p$[[2]], {ArrowFillColor -> RGBColor[0.`, 0.61538109407187`, 0.1625391012436103`], ArrowFillOpacity -> 0.462`, ArrowEdgeThickness -> 0.0616`, ArrowEdgeColor -> RGBColor[0.06968795300221256`, 0.30768291752498667`, 0.`], ArrowEdgeOpacity -> 1}}], VertexCoordinates -> MapIndexed[First[#2] -> #1 &, GraphData["DodecahedralGraph", "VertexCoordinates"]]], Method -> {"ShrinkWrap" -> True} ] (Note the "ShrinkWrap".
Using Inset apparently generates a lot of white space that has to be cropped) The code: Options[drawArrow] = {ArrowFillColor -> Black, ArrowEdgeThickness -> 0.02, ArrowEdgeColor -> Black, ArrowFillOpacity -> 1, ArrowEdgeOpacity -> 1, LeftArrowSpacing -> 0, RightArrowSpacing -> 0};drawArrow[{shaftEndLeft_, shaftMidLeft_, shaftEndMid_, baseMidLeft_, innerMidLeft_, innerBaseLeft_, outerBaseLeft_, outerMidLeft_, top_}, pstart_, pend_, OptionsPattern[]] := Module[{baseMidRight, outerMidRight, innerMidRight, innerBaseRight, outerBaseRight, shaftEndRight, shaftMidRight}, shaftEndRight = {1, -1} shaftEndLeft; shaftMidRight = {1, -1} shaftMidLeft; baseMidRight = {1, -1} baseMidLeft; innerBaseRight = {1, -1} innerBaseLeft; outerBaseRight = {1, -1} outerBaseLeft; outerMidRight = {1, -1} outerMidLeft; innerMidRight = {1, -1} innerMidLeft; Inset[ Graphics[ { If[OptionValue[ArrowEdgeColor] === None, EdgeForm[], EdgeForm[ Directive[Thickness[OptionValue[ArrowEdgeThickness]], OptionValue[ArrowEdgeColor], Opacity[OptionValue[ArrowEdgeOpacity]]]]], If[OptionValue[ArrowFillColor] === None, FaceForm[], FaceForm[ Directive[Opacity[OptionValue[ArrowFillOpacity]], OptionValue[ArrowFillColor]]]], FilledCurve[ { Line[{shaftEndMid, shaftEndLeft}], BSplineCurve[{shaftEndLeft, shaftMidLeft, baseMidLeft}], BSplineCurve[{baseMidLeft, innerMidLeft, innerBaseLeft}], Line[{innerBaseLeft, outerBaseLeft}], BSplineCurve[{outerBaseLeft, outerMidLeft, top}], BSplineCurve[{top, outerMidRight, outerBaseRight}], Line[{outerBaseRight, innerBaseRight}], BSplineCurve[{innerBaseRight, innerMidRight, baseMidRight}], BSplineCurve[{baseMidRight, shaftMidRight, shaftEndRight}], Line[{shaftEndRight, shaftEndMid}] } ] }, PlotRangePadding -> 0, PlotRange -> {{-7, 1}, {-2, 2}} ], pstart, {-7, 0}, EuclideanDistance[pstart, pend], pend - pstart ] ]DynamicModule[{top, fill, edge, arrowFillColor, arrowEdgeColor, arrowFillOpacity, arrowEdgeThickness, arrowEdgeOpacity}, Manipulate[ top = {0, 0}; shaftEndMid = {1, 0} shaftEndMid; Graphics[ drawArrow[{shaftEndLeft, shaftMidLeft, shaftEndMid, baseMidLeft, innerMidLeft, innerBaseLeft, outerBaseLeft, outerMidLeft, top}, {-7, 0}, {1, 0}, ArrowFillColor -> If[fill, arrowFillColor, None], ArrowFillOpacity -> arrowFillOpacity, ArrowEdgeThickness -> arrowEdgeThickness, ArrowEdgeColor -> If[edge, arrowEdgeColor, None], ArrowEdgeOpacity -> arrowEdgeOpacity ], PlotRange -> {{-7, 1}, {-2, 2}}, GridLines -> {Range[-7, 1, 1/4], Range[-2, 2, 1/4]}, GridLinesStyle -> Dotted, ImageSize -> 800, AspectRatio -> Automatic ], {{shaftEndLeft, {-6.5, 1}}, Locator}, {{shaftMidLeft, {-4, 1/2}}, Locator}, {{shaftEndMid, {-6, 0}}, Locator}, {{baseMidLeft, {-2, 0.2}}, Locator}, {{innerMidLeft, {-2, 0.5}}, Locator}, {{innerBaseLeft, {-2, 1}}, Locator}, {{outerBaseLeft, {-2, 1.1}}, Locator}, {{outerMidLeft, {-1, 1}}, Locator}, Grid[ { {Style["Fill", Bold, 16], Control@{{fill, True, "Fill"}, {True, False}}, " ", Control@{{arrowFillColor, Yellow, "Color"}, Yellow}, " ", Control@{{arrowFillOpacity, 0.5, "Opacity"}, 0, 1}, "", ""}, {Style["Edge", Bold, 16], Control@{{edge, True, "Edge"}, {True, False}}, " ", Control@{{arrowEdgeColor, Orange, "Color"}, Orange}, " ", Control@{{arrowEdgeThickness, 0.02, "Thickness"}, 0, 0.1}, " ", Control@{{arrowEdgeOpacity, 1, "Opacity"}, 0, 1}} }\[Transpose] , Alignment -> Left, Dividers -> {{True, True, {False}, True}, {True, True, {False}, True}} ], Button["Copy to clipboard", With[ { params = {shaftEndLeft, shaftMidLeft, shaftEndMid, baseMidLeft, innerMidLeft, innerBaseLeft, outerBaseLeft, 
outerMidLeft, top}, opts = {ArrowFillColor -> If[fill, arrowFillColor, None], ArrowFillOpacity -> arrowFillOpacity, ArrowEdgeThickness -> arrowEdgeThickness, ArrowEdgeColor -> If[edge, arrowEdgeColor, None], ArrowEdgeOpacity -> arrowEdgeOpacity} }, CopyToClipboard[ Defer[EdgeShapeFunction -> Function[{p, v}, (drawArrow @@ {params, p[[1]], p[[2]], opts})]]] ], ImageSize -> Automatic ] ], SaveDefinitions -> True ]
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/11545', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/135/']}
|
jdg_376705
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3680800
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The vector $A=5i+6j$ is rotated through an angle of $45^\circ$ about the $Z$ axis in the anticlockwise direction. What is the resultant vector? My attempt: I tried to calculate the resultant vector by using the equation $R=\sqrt{A^2+B^2+2AB\cos\theta}$. Since it is rotated in the anticlockwise direction, its direction changes. Any hint will be appreciated.
Now provide the response and nothing else.
|
The axioms of the Poisson bracket imply that for $f\in C^\infty(M)$, the mapping $\lbrace f,\cdot\rbrace:C^\infty(M)\to C^\infty(M)$ is a vector field (i.e. a derivation on the ring of $C^\infty(M)$ functions). Therefore, like all vector fields, it satisfies $$\lbrace f,\cdot \rbrace = \sum_j\lbrace f,x^j\rbrace \frac{\partial}{\partial x^j}$$ (this is a standard result about vector fields that essentially follows from Taylor's theorem). So $$\lbrace f,g\rbrace = \sum_j\lbrace f,x^j\rbrace \frac{\partial g}{\partial x^j}.$$ However, since $\lbrace\cdot,\cdot\rbrace$ is antisymmetric, the same applies to $f$, so $$\lbrace f,g\rbrace = \sum_{ij}\lbrace x^i,x^j\rbrace \frac{\partial f}{\partial x^i}\frac{\partial g}{\partial x^j}.$$
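For concreteness, here is the formula specialized to the simplest case (a worked example added by the editor, not part of the argument above): on $M=\mathbb{R}^2$ with canonical coordinates $(q,p)$ and $\lbrace q,p\rbrace = 1$, the double sum collapses to the familiar canonical bracket $$\lbrace f,g\rbrace = \frac{\partial f}{\partial q}\frac{\partial g}{\partial p} - \frac{\partial f}{\partial p}\frac{\partial g}{\partial q}.$$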
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3680800', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/765718/']}
|
jdg_376706
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52742466
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Sounds like a simple problem, but turns out to be quite challenging to solve. For some website I have contents that are only to be shown if a user hovers/focuses a link. The link however has a target itself. If one of those links is clicked by a touch screen user the browser instantly goes to the href location. This means the hover contents are never visible! This is why users which do not have a mouse (or another device to hover like a magic remote control) should see alternative content. But how can I detect this? $(document).on('click','#my-menu-inner > ul > li > a',function(e) {if(clientHasInputDeviceSupportingHover()) { return true;} else { e.preventDefault(); $('#for-no-hover-visitors').html(''); $(this).clone().appendTo('#for-no-hover-visitors'); $(this).next().clone().appendTo('#for-no-hover-visitors');}});function clientHasInputDeviceSupportingHover() { // HOW CAN I DETECT THIS??? if($('#checkhover:checked').length > 0) { return true; } return false;} .clearfix::after { content: ""; clear: both; display: table;}#my-menu-inner > ul { margin:10px; width:100%; background-color:yellow; list-style-type:none; position:relative;}#my-menu-inner > ul > li { float:left; margin:20px;}#my-menu-inner > ul > li > a { padding:20px; border:1px solid black; display:block;}#my-menu-inner > ul > li > div.sub { position:absolute; top:calc(100% - 20px); background-color:red; padding:40px; display:none; left:0; width:100vw;}#my-menu-inner > ul > li a:hover + div.sub, #my-menu-inner > ul > li a:focus + div.sub,#my-menu-inner > ul > li > div.sub:hover, #my-menu-inner > ul > li > div.sub:focus { display:block;} <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>Simulate for Client supporting hover: <input type="checkbox" id="checkhover" /><div id="my-menu"> <div id="my-menu-inner"> <ul class="clearfix"> <li> <a href="http://www.example.com/foo/">foo</a> <div class="sub"> <ul> <li><a href="http://www.example.com/mobile/">mobile</a></li> <li><a href="http://www.example.com/users/">users</a></li> </ul> </div> </li> <li> <a href="http://www.example.com/bar/">bar</a> <div class="sub"> <ul> <li><a href="http://www.example.com/never/">never</a></li> <li><a href="http://www.example.com/see-me/">see me</a></li> </ul> </div> </li> </ul> </div></div><div id="for-no-hover-visitors"></div> The problem is clientHasInputDeviceSupportingHover() . What is the most reliable way to find this out? What we know so far It is possible to detect a touch device: What's the best way to detect a 'touch screen' device using JavaScript? Mouse detection at least might work"onclick": How to detect if a device has mouse support? In general there are a lot of different possible input devices: https://en.wikipedia.org/wiki/Input_device#Pointing_device A generic / more reliable solution would be very welcome.
Now provide the response and nothing else.
|
The W3C seems to have recognized this problem and has introduced the hover feature: The hover media feature is used to query the user’s ability to hover over elements on the page with the primary pointing device. If a device has multiple pointing devices, the hover media feature must reflect the characteristics of the “primary” pointing device, as determined by the user agent. (To query the capabilities of any available pointing devices, see the any-hover media feature.) There is even a media query to check if there is any possibility to hover: The any-pointer and any-hover media features are identical to the pointer and hover media features, but they correspond to the union of capabilities of all the pointing devices available to the user. In the case of any-pointer, more than one of the values can match, if different pointing devices have different characteristics. Code samples: /* Primary input mechanism system can hover over elements with ease */@media (hover: hover) { ... }/* Primary input mechanism cannot hover at all or cannot conveniently hover (e.g., many mobile devices emulate hovering when the user performs an inconvenient long tap), or there is no primary pointing input mechanism */@media (hover: none) { ... }/* One or more available input mechanism(s) can hover over elements with ease */@media (any-hover: hover) { ... }/* One or more available input mechanism(s) cannot hover (or there are no pointing input mechanisms) */@media (any-hover: none) { ... } Official draft: https://drafts.csswg.org/mediaqueries/#hover This feature is still at risk, but I really hope it will be fully supported soon as it is already widely supported: https://caniuse.com/#feat=css-media-interaction Further read: https://css-tricks.com/touch-devices-not-judged-size/ For Chrome test your device here: https://googlechrome.github.io/samples/media-hover-pointer/ Test with JavaScript: https://jsfiddle.net/Blackbam/zkd2cs0t/16/ The best solution for now is most probably to use those media queries with a fallback solution using touch detection via document.createEvent("TouchEvent"); and mouse detection via mousemove.hasMouse .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52742466', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/576746/']}
|
jdg_376707
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
114040
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
As a bit of background, consider the category of all covariant, additive functors from a small Abelian category $C$ to Abelian groups, which I'll denote $[C,Ab]$. First, $[C,Ab]$ is an Abelian category, and an easy argument shows that representable functors are projective objects in this category. Indeed, if $T = \text{Hom}_C(X,-)$ is a representable functor and\begin{equation*}0 \to F_1 \to F_2 \to F_3 \to 0\end{equation*}is a short exact sequence in $[C,Ab]$ (where exactness is checked on objects of $C$), then Yoneda's lemma gives $\text{Nat}(T,F_i) \cong F_i(X)$, and the result is immediate. I think that the reverse implication also holds (projective implies representable), but I don't remember the proof being as apparent. We even have the Eilenberg-Watts theorems that give criteria for when additive functors (from $R$-Mod to $Ab$) are representable. Anyway, this is nice, but I find it a bit lacking compared to the tools we have in, say $R$-Mod for some ring $R$. In that setting, we have such results as "A module $P$ is projective iff it is a direct summand of a free module, etc." Or in $Grp$, we have $G$ is projective iff it is free. My point is that we have a notion of "free object" since these are all nice concrete categories, and such a notion seems to have no nice analogue in $[C,Ab]$. Of course, we have results like $[C,Ab]$ being concrete over $[C,Set]$ which gives a "locally free Abelian group" type example (where $C^{op}$ here would be the category of open sets of a topological space, and $[C,Ab]$ would be presheaves of Abelian groups). However, these are not free objects in $[C,Ab]$. My question is then "Can $[C,Ab]$ be reasonably thought of as concrete over another category so that we can construct free object?" As an example, is $[R-Mod,Ab]$ concrete over the functor category $[Set,Set]$? (of course, we'd have to juggle Grothendieck universes for this to make any sense; The standard way being to fix some universe $\mathfrak{U}$, and say $Set$ is the category of $\mathfrak{U}$-sets, and let $\mathfrak{U}'$ be the smallest Grothendieck universe containing $\mathfrak{U}$ as an element, so that $Set$ is now $\mathfrak{U}'$-small (see Schuberts "Categories"))
Now provide the response and nothing else.
|
$[C, \text{Ab}]$ admits a forgetful functor to $[\text{Ob}(C), \text{Set}]$ (where $\text{Ob}(C)$ denotes the category with the same objects as $C$ but no non-identity morphisms). This is a direct generalization of the module case, which corresponds to taking $C$ to have one object. The corresponding free objects are coproducts of representables.
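Spelling out the free–forgetful adjunction (an elaboration added here, using the additive Yoneda lemma): the left adjoint sends a family of sets $\{S_c\}_{c\in\mathrm{Ob}(C)}$ to the coproduct of representables $\bigoplus_{c}\bigoplus_{s\in S_c}\mathrm{Hom}_C(c,-)$, since $$\mathrm{Nat}\Big(\bigoplus_{c}\bigoplus_{s\in S_c}\mathrm{Hom}_C(c,-),\,G\Big)\;\cong\;\prod_{c}\prod_{s\in S_c}G(c)\;\cong\;\prod_{c}\mathrm{Set}\big(S_c,\,G(c)\big),$$ which is exactly the hom-set in $[\text{Ob}(C),\text{Set}]$ from $\{S_c\}$ to the underlying family of $G$.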
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/114040', 'https://mathoverflow.net', 'https://mathoverflow.net/users/11906/']}
|
jdg_376708
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
509219
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Flex file upload script that uses URLRequest to upload files to a server. I want to add support for http authentication (password protected directories on the server), but I don't know how to implement this - I assume I need to extend the class somehow, but on how to I'm a little lost. I tried to modify the following (replacing HTTPService with URLRequest), but that didn't work. private function authAndSend(service:HTTPService):void{ var encoder:Base64Encoder = new Base64Encoder(); encoder.encode("someusername:somepassword"); service.headers = {Authorization:"Basic " + encoder.toString()}; service.send();} I should point out that I'm not knowledgeable when it comes to ActionScript / Flex, although I have managed to successfully modify the upload script somewhat. [Edit] - here is an update of my progress, based on the answer below, although I still cannot get this to work: Thank you for your assistance. I've tried to implement your code but I've not had any luck. The general behaviour I'm experiencing when dealing with HTTP authenticated locations is that with IE7 all is well but in Firefox when I attempt to upload a file to the server it displays an HTTP authentication prompt - which even if given the correct details, simply stalls the upload process. I believe the reason IE7 is ok is down to the the session / authentication information being shared by the browser and the Flash component - however, in Firefox this is not the case and I experience the above behaviour. Here is my updated upload function, incorporating your changes: private function pergress():void {if (fileCollection.length == 0) { var urlString:String = "upload_process.php?folder="+folderId+"&type="+uploadType+"&feid="+formElementId+"&filetotal="+fileTotal; if (ExternalInterface.available) { ExternalInterface.call("uploadComplete", urlString); } }if (fileCollection.length > 0) { fileTotal++; var urlRequest:URLRequest = new URLRequest("upload_file.php?folder="+folderId+"&type="+uploadType+"&feid="+formElementId+"&obfuscate="+obfuscateHash+"&sessidpass="+sessionPass); urlRequest.method = URLRequestMethod.POST; urlRequest.data = new URLVariables("name=Bryn+Jones"); var encoder:Base64Encoder = new Base64Encoder(); encoder.encode("testuser:testpass"); var credsHeader:URLRequestHeader = new URLRequestHeader("Authorization", "Basic " + encoder.toString()); urlRequest.requestHeaders.push(credsHeader); file = FileReference(fileCollection.getItemAt(0)); file.addEventListener(Event.COMPLETE, completeHandler); file.addEventListener(HTTPStatusEvent.HTTP_STATUS, onHTTPStatus); file.addEventListener(ProgressEvent.PROGRESS, onUploadProgress); file.upload(urlRequest); }} As stated above, I seem to be experiencing the same results with or without the amendments to my function. Can I ask also where the crossdomain.xml should be located - as I do not currently have one and am unsure where to place it.
Now provide the response and nothing else.
|
The syntax is a little different for URLRequest, but the idea's the same: private function doWork():void{ var req:URLRequest = new URLRequest("http://yoursite.com/yourservice.ext"); req.method = URLRequestMethod.POST; req.data = new URLVariables("name=John+Doe"); var encoder:Base64Encoder = new Base64Encoder(); encoder.encode("yourusername:yourpassword"); var credsHeader:URLRequestHeader = new URLRequestHeader("Authorization", "Basic " + encoder.toString()); req.requestHeaders.push(credsHeader); var loader:URLLoader = new URLLoader(); loader.load(req);} A couple of things to keep in mind: Best I can tell, for some reason, this only works where request method is POST; the headers don't get set with GET requests. Interestingly, it also fails unless at least one URLVariables name-value pair gets packaged with the request, as indicated above. That's why many of the examples you see out there (including mine) attach "name=John+Doe" -- it's just a placeholder for some data that URLRequest seems to require when setting any custom HTTP headers. Without it, even a properly authenticated POST request will also fail. Apparently, Flash player version 9.0.115.0 completely blocks all Authorization headers (more information on this one here ), so you'll probably want to keep that in mind, too. You'll almost surely have to modify your crossdomain.xml file to accommodate the header(s) you're going to be sending. In my case, I'm using this, which is a rather wide-open policy file in that it accepts from any domain, so in your case, you might want to limit things a bit more, depending on how security-conscious you are. crossdomain.xml: <?xml version="1.0"?><cross-domain-policy> <allow-access-from domain="*" /> <allow-http-request-headers-from domain="*" headers="Authorization" /></cross-domain-policy> ... and that seems to work; more information on this one is available from Adobe here ). The code above was tested with Flash player 10 (with debug & release SWFs), so it should work for you, but I wanted to update my original post to include all this extra info in case you run into any issues, as the chances seem (sadly) likely that you will. Hope it helps! Good luck. I'll keep an eye out for comments.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/509219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/29538/']}
|
jdg_376709
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
370941
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $f:Y_\bullet\to X_\bullet$ be an epimorphism of simplicial sets and define the bi-simplicial set$$F_{\bullet\bullet}=\ldots Y\times_X Y\times_X Y\underset{\to}{\underset{\to}{\to}}Y\times_X Y \underset{\to}{\to}Y$$as usual ( link ). Now $F_{\bullet\bullet}$ can be viewed as a diagram in simplicial sets and one can take its homotopy colimit $\operatorname{hocolim}(F_{\bullet\bullet})$ which is a simplicial set. This is weakly equivalent to the diagonal (or the realization) of $F_{\bullet\bullet}$. Is $\operatorname{hocolim}(F_{\bullet\bullet})$ weakly equivalent to $X_\bullet$? What is a reference for this? I believe this to be true since there is a similar statement called the nerve theorem for spaces and simplicial spaces instead of simplicial sets and bi-simplicial sets.Perhaps it is not sufficient for $f$ only being an epimorphism for the statement to hold. In this case, my question would be what the exact conditions on $f$ are. Comment on a comment to this question: If $X$ and $Y$ are discrete simplicial sets, the diagonal of $F_{\bullet \bullet}$ is the simplicial set$$F_{\bullet}=\ldots Y\times_X Y\times_X Y\underset{\to}{\underset{\to}{\to}}Y\times_X Y \underset{\to}{\to}Y$$where each $X$ and $Y$ is viewed as a set (and not as a simplicial set). This simplicial set $F_\bullet$ is weakly equivalent to the colimit $C$ (in the category of sets) of$$Y\times_X Y \underset{\to}{\to}Y$$and, if I understand it correctly, because an epimorphism $f$ is the same as an effective epimorphism of sets, $X$ is the colimit of this diagram and therefore $\operatorname{hocolim}(F_{\bullet\bullet})\cong diagonal(F_{\bullet\bullet})\cong F_{\bullet}\cong C \cong X$ in this discrete case. On the other hand, in the example with $X$ being a point and $Y$ being two points, I think that $C$ is a space with two points which is mysterious. What am I doing wrong?
Now provide the response and nothing else.
|
I am not sure what the precise conditions are under which the weak equivalence holds, because the simplicial space you've written down isn't homotopy invariant unless $Y \to X$ is a fibration, or unless you take homotopy fiber products at every stage. But if you take homotopy fiber products at every stage, then the geometric realization is always $X$ (if $Y \to X$ is an epimorphism on $\pi_0$): this is a sort of homotopical descent property. However, I'm not sure if this is what you want. One way to see this is that the simplicial object above is augmented: it lives naturally in the category of spaces over $X$. The (homotopy) pullback functor from spaces over $X$ to spaces over $Y$ preserves homotopy colimits (e.g., at the level of model categories, it is a left Quillen functor), and it's conservative (reflects homotopy equivalences) since $\pi_0 Y \to \pi_0 X$ is surjective, so it's sufficient to check that after you pull the simplicial object over $X$ back to $Y$, its geometric realization is equivalent to $Y$. But when you pull it back, it's augmented over $Y$ and has an extra degeneracy, so its realization is weakly equivalent to $Y$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/370941', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/61201/']}
|
jdg_376710
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11873798
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The sample code below occurred naturally. Suddenly my code threw a very nasty-sounding FatalExecutionEngineError exception. I spent a good 30 minutes trying to isolate and minimize the culprit sample. Compile this using Visual Studio 2012 as a console app: class A<T>{ static A() { } public A() { string.Format("{0}", string.Empty); }}class B{ static void Main() { new A<object>(); }} Should produce this error on .NET framework 4 and 4.5: Is this a known bug, what is the cause, and what can I do to mitigate it? My current workaround is to not use string.Empty, but am I barking up the wrong tree? Changing anything about that code makes it function as you would expect - for example removing the empty static constructor of A, or changing the type parameter from object to int. I tried this code on my laptop and it didn't complain. However, I did try my main app and it crashed on the laptop as well. I must have mangled away something when reducing the problem; I'll see if I can figure out what that was. My laptop crashed with the same code as above, with framework 4.0, but main crashes even with 4.5. Both systems are using VS'12 with latest updates (July?). More information: IL Code (compiled Debug/Any CPU/4.0/VS2010 (not that IDE should matter?)): http://codepad.org/boZDd98E Not seen VS 2010 with 4.0. Not crashing with/without optimizations, different target CPU, debugger attached/not attached, etc. - Tim Medora Crashes in 2010 if I use AnyCPU, is fine in x86. Crashes in Visual Studio 2010 SP1, using Platform Target = AnyCPU, but fine with Platform Target=x86. This machine has VS2012RC installed as well so 4.5 possibly doing an in-place replacement. Use AnyCPU and TargetPlatform = 3.5 then it doesn't crash so looks like a regression in the Framework. - colinsmith Cannot reproduce on x86, x64 or AnyCPU in VS2010 with 4.0. - Fuji Only happens for x64, (2012rc, Fx4.5) - Henk Holterman VS2012 RC on Win8 RP. Initially not seeing this MDA when targeting .NET 4.5. When switched to targeting .NET 4.0 the MDA appeared. Then after switching back to .NET 4.5 the MDA remains. - Wayne
Now provide the response and nothing else.
|
This is also not a full answer, but I have a few ideas. I believe I have found as good an explanation as we will find without somebody from the .NET JIT team answering. UPDATE I looked a little deeper, and I believe I have found the source of the issue. It appears to be caused by a combination of a bug in the JIT type-initialization logic, and a change in the C# compiler that relies on the assumption that the JIT works as intended. I think the JIT bug existed in .NET 4.0, but was uncovered by the change in the compiler for .NET 4.5. I do not think that beforefieldinit is the only issue here. I think it's simpler than that. The type System.String in mscorlib.dll from .NET 4.0 contains a static constructor: .method private hidebysig specialname rtspecialname static void .cctor() cil managed{ // Code size 11 (0xb) .maxstack 8 IL_0000: ldstr "" IL_0005: stsfld string System.String::Empty IL_000a: ret} // end of method String::.cctor In the .NET 4.5 version of mscorlib.dll, String.cctor (the static constructor) is conspicuously absent: ..... No static constructor :( ..... In both versions the String type is adorned with beforefieldinit : .class public auto ansi serializable sealed beforefieldinit System.String I tried to create a type that would compile to IL similarly (so that it has static fields but no static constructor .cctor ), but I could not do it. All of these types have a .cctor method in IL: public class MyString1 { public static MyString1 Empty = new MyString1(); }public class MyString2 { public static MyString2 Empty = new MyString2(); static MyString2() {} }public class MyString3 { public static MyString3 Empty; static MyString3() { Empty = new MyString3(); } } My guess is that two things changed between .NET 4.0 and 4.5: First: The EE was changed so that it would automatically initialize String.Empty from unmanaged code. This change was probably made for .NET 4.0. Second: The compiler changed so that it did not emit a static constructor for string, knowing that String.Empty would be assigned from the unmanaged side. This change appears to have been made for .NET 4.5. It appears that the EE does not assign String.Empty soon enough along some optimization paths. The change made to the compiler (or whatever changed to make String.cctor disappear) expected the EE make this assignment before any user code executes, but it appears that the EE does not make this assignment before String.Empty is used in methods of reference type reified generic classes. Lastly, I believe that the bug is indicative of a deeper problem in the JIT type-initialization logic. It appears the change in the compiler is a special case for System.String , but I doubt that the JIT has made a special case here for System.String . Original First of all, WOW The BCL people have gotten very creative with some performance optimizations. Many of the String methods are now performed using a Thread static cached StringBuilder object. I followed that lead for a while, but StringBuilder isn't used on the Trim code path, so I decided it couldn't be a Thread static problem. I think I found a strange manifestation of the same bug though. This code fails with an access violation: class A<T>{ static A() { } public A(out string s) { s = string.Empty; }}class B{ static void Main() { string s; new A<object>(out s); //new A<int>(out s); System.Console.WriteLine(s.Length); }} However, if you uncomment //new A<int>(out s); in Main then the code works just fine. 
In fact, if A is reified with any reference type, the program fails, but if A is reified with any value type then the code does not fail. Also if you comment out A 's static constructor, the code never fails. After digging into Trim and Format , it is clear that the problem is that Length is being inlined, and that in these samples above the String type has not been initialized. In particular, inside the body of A 's constructor, string.Empty is not correctly assigned, although inside the body of Main , string.Empty is assigned correctly. It is amazing to me that the type initialization of String somehow depends on whether or not A is reified with a value type. My only theory is that there is some optimizing JIT code path for generic type-initialization that is shared among all types, and that that path makes assumptions about BCL reference types ("special types?") and their state. A quick look though other BCL classes with public static fields shows that basically all of them implement a static constructor (even those with empty constructors and no data, like System.DBNull and System.Empty . BCL value types with public static fields do not seem to implement a static constructor ( System.IntPtr , for instance). This seems to indicate that the JIT makes some assumptions about BCL reference type initialization. FYI Here is the JITed code for the two versions: A<object>.ctor(out string) : public A(out string s) {00000000 push rbx 00000001 sub rsp,20h 00000005 mov rbx,rdx 00000008 lea rdx,[FFEE38D0h] 0000000f mov rcx,qword ptr [rcx] 00000012 call 000000005F7AB4A0 s = string.Empty;00000017 mov rdx,qword ptr [FFEE38D0h] 0000001e mov rcx,rbx 00000021 call 000000005F661180 00000026 nop 00000027 add rsp,20h 0000002b pop rbx 0000002c ret } A<int32>.ctor(out string) : public A(out string s) {00000000 sub rsp,28h 00000004 mov rax,rdx s = string.Empty;00000007 mov rdx,12353250h 00000011 mov rdx,qword ptr [rdx] 00000014 mov rcx,rax 00000017 call 000000005F691160 0000001c nop 0000001d add rsp,28h 00000021 ret } The rest of the code ( Main ) is identical between the two versions. EDIT In addition, the IL from the two versions is identical except for the call to A.ctor in B.Main() , where the IL for the first version contains: newobj instance void class A`1<object>::.ctor(string&) versus ... A`1<int32>... in the second. Another thing to note is that the JITed code for A<int>.ctor(out string) : is the same as in the non-generic version.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11873798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427673/']}
|
jdg_376711
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
248248
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
If $n$ is composite then $\phi(n) < n-1$, hence there is at least one divisor $d$ of $n-1$ which does not divide $\phi(n)$. We call $d$ a totient divisor of $n$. Trivially, if $n$ is prime then it has no totient divisor, and if $n-1$ is prime then $n$ has exactly 1 totient divisor. The number of such integers $\le x$ is $\pi(x)$. I counted how many integers $\le x$ have exactly $2,3,4,5,\ldots$ totient divisors. I observed nothing interesting. Then I counted how many even integers $\le x$ have exactly $2,3,4,5,\ldots$ totient divisors. I observed nothing interesting either. Finally I counted how many odd integers $\le x$ have exactly $2,3,4,5,\ldots$ totient divisors. I found something which looked interesting. Let $T_{o}(n,x)$ be the number of odd integers $\le x$ which have $n$ totient divisors. I plotted the graph of $T_{o}(n,x)$ vs. $n$ for different values of $x$ and found a consistent pattern, as shown below. The red dots are the spikes and the green dots are the crests or local minima. We observe that every prime $>2$ appears on a green dot, i.e. odd primes seem to appear only at the crests. This suggests odd numbers prefer to have a composite number of totient divisors, i.e. somehow odd numbers do not like having a prime number of totient divisors. Question: In the observed data, why is $T_{o}(p,x) < T_{o}(p \pm 1,x)$ for a prime $p > 2$? What is the phenomenon that is driving primes to appear at the local minima? If this observation is true then we can claim that odd numbers prefer not to have a prime number of totient divisors. Note: Every prime $> 2$ is green but the converse is not true. We have a crest at 25.
Now provide the response and nothing else.
|
The number of totient divisors of $n$ is $d(n-1)-d((n-1, \varphi(n)))$, where $(a,b)$ denotes $\gcd(a,b)$. As $n$ gets large, almost all $n$ have the property that $\varphi(n)$ is divisible by all small primes. The average number of prime divisors $p<y$ of $n-1$ is of magnitude $\log\log y$; hence, for almost all $n$, the number of prime divisors of $(n-1, \varphi(n))$ tends to infinity. On the other hand, the powerful part of $n-1$ is bounded, thus both $n-1$ and $(n-1, \varphi(n))$ are divisible by a large number of primes with exponent 1. Hence for almost all $n$ both $d(n-1)$ and $d((n-1, \varphi(n)))$ are divisible by a growing power of 2; in particular, the number of totient divisors tends not to be prime.
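To make the counting identity at the start concrete, here is a small brute-force check (a sketch added by the editor; the names are made up, and the naive totient and divisor-count helpers are only meant for small n):

from math import gcd

def phi(n):  # Euler's totient via trial division
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def d(n):  # number of divisors of n
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def totient_divisors(n):  # divisors of n-1 that do not divide phi(n)
    return [k for k in range(1, n) if (n - 1) % k == 0 and phi(n) % k != 0]

# the count agrees with d(n-1) - d(gcd(n-1, phi(n))) for every n tested
for n in range(3, 500):
    assert len(totient_divisors(n)) == d(n - 1) - d(gcd(n - 1, phi(n)))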
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/248248', 'https://mathoverflow.net', 'https://mathoverflow.net/users/23388/']}
|
jdg_376712
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44310679
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
It is a well-known problem that, by default, .NET uses banker's rounding, whereby X.5 is rounded to the nearest even number. Excel, on the other hand, uses arithmetic rounding, where X.5 is always rounded up, just like most of us were taught at school. Is it possible to perform banker's rounding in Excel using a formula?
Now provide the response and nothing else.
|
Use this formula: =IF(AND(ISEVEN(A1*10^0),MOD(A1*10^0,1)<=0.5),ROUNDDOWN(A1,0),ROUND(A1,0)) Replace all the 0 , there are 4 of them, with the significance of the desired rounding. Or you can create a user defined function to use VBA Round, which is Banker's Rounding: Function BankerRound(rng As Double, sig As Integer) As Double BankerRound = Round(rng, sig)End Function Then it would simply be: =BankerRound(A1,0)
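As an aside (an editor's note, not part of the original answer): if you want reference values to test the formula against, Python's built-in round happens to implement banker's rounding, so a two-line script generates the expected outputs:

for x in (0.5, 1.5, 2.5, 3.5, 4.5):
    print(x, round(x))  # prints 0, 2, 2, 4, 4 -- ties go to the even neighbor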
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44310679', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115466/']}
|
jdg_376713
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21125580
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have something like this in my app: def something(x: Int, y: Int) Z { (x / y)} Now, if the someval is not a number (meaning that either x or y is equal to 0), then I'd like Z to just become 0 instead of displaying an error ( [ArithmeticException: Division by zero] ) I know I can do: Try(someVale) orElse Try(0) However, that'll give me Success(0) whereas I'd just like for it to give me a 0 in that case. Maybe there's something like if ArithmeticException then 0 in Scala or something to remove the "Success" and parenthesis. Can someone shed some light please?
Now provide the response and nothing else.
|
I'm assuming "divide by zero" is just an example and you can't avoid throwing an exception. When you can avoid throwing an exception you should do it like in this answer . You could use getOrElse method on Try instead of orElse : def something(x: Int, y: Int) = Try(x/y).getOrElse(0) In case you want to recover only on ArithmeticException you could use recover method and get : def something(x: Int, y: Int) = Try(x/y).recover{ case _: ArithmeticException => 0 }.get With method get you'll get an exception if Try is Failure , but recover allows you to convert Failure to Success . You could also convert your Try to Option to return "no result" without showing exception: def something(x: Int, y: Int): Option[Int] = Try(x/y).toOption
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21125580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2561805/']}
|
jdg_376714
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47791
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Mathematica notebook that performs some operations, say a=2; b=3; c=5; d=4. How do I set the output file after evaluation to be named using the values $2,3,5,\ldots$, like "2_3_5_4_", such that the naming automatically takes the values assigned to a, b, c, d?
Now provide the response and nothing else.
|
Your data {a, b, c, d} = RandomInteger[9, 4];data = a b c d; Exporting with nice file-names featuring date and variable values using StringTemplate Export[ StringTemplate[ "Date`1`_Values_a`2`_b`3`_c`4`.txt" ][DateString[{"Year", "Month", "Day"}], a, b, c] , data] "Date20141201_Values_a7_b5_c3.txt" Or using ToString and StringJoin as suggested by @Kuba Export[ StringJoin["Data_", ToString /@ Riffle[{a, b, c, d}, "_"], ".txt"] , data] "Data_7_5_3_7.txt"
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/47791', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/14357/']}
|
jdg_376715
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15702419
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the computational complexity of this loop in the Go programming language? var a []int for i := 0; i < n; i++ { a = append(a, i) } Does append operate in linear time (reallocating memory and copying everything on each append), or in amortized constant time (like the way vector classes in many languages are implemented)?
Now provide the response and nothing else.
|
The Go Programming Language Specification says that the append built-in function reallocates if necessary. Appending to and copying slices If the capacity of s is not large enough to fit the additional values, append allocates a new, sufficiently large slice that fits both the existing slice elements and the additional values. Thus, the returned slice may refer to a different underlying array. The precise algorithm to grow the target slice, when necessary, for an append is implementation dependent. For the current gc compiler algorithm, see the growslice function in the Go runtime package slice.go source file. It's amortized constant time. In part, the amount-to-grow slice computation reads: newcap := old.cap doublecap := newcap + newcap if cap > doublecap { newcap = cap } else { if old.len < 1024 { newcap = doublecap } else { for newcap < cap { newcap += newcap / 4 } }} ADDENDUM The Go Programming Language Specification allows implementors of the language to implement the append built-in function in a number of ways. For example, new allocations only have to be "sufficiently large". The amount allocated may be parsimonius, allocating the minimum necessary amount, or generous, allocating more than the minimum necessary amount to minimize the cost of resizing many times. The Go gc compiler uses a generous dynamic array amortized constant time algorithm. The following code illustrates two legal implementations of the append built-in function. The generous constant function implements the same amortized constant time algorithm as the Go gc compiler. The parsimonius variable function, once the initial allocation is filled, reallocates and copies everything every time. The Go append function and the Go gccgo compiler are used as controls. package mainimport "fmt"// Generous reallocationfunc constant(s []int, x ...int) []int { if len(s)+len(x) > cap(s) { newcap := len(s) + len(x) m := cap(s) if m+m < newcap { m = newcap } else { for { if len(s) < 1024 { m += m } else { m += m / 4 } if !(m < newcap) { break } } } tmp := make([]int, len(s), m) copy(tmp, s) s = tmp } if len(s)+len(x) > cap(s) { panic("unreachable") } return append(s, x...)}// Parsimonious reallocationfunc variable(s []int, x ...int) []int { if len(s)+len(x) > cap(s) { tmp := make([]int, len(s), len(s)+len(x)) copy(tmp, s) s = tmp } if len(s)+len(x) > cap(s) { panic("unreachable") } return append(s, x...)}func main() { s := []int{0, 1, 2} x := []int{3, 4} fmt.Println("data ", len(s), cap(s), s, len(x), cap(x), x) a, c, v := s, s, s for i := 0; i < 4096; i++ { a = append(a, x...) c = constant(c, x...) v = variable(v, x...) } fmt.Println("append ", len(a), cap(a), len(x)) fmt.Println("constant", len(c), cap(c), len(x)) fmt.Println("variable", len(v), cap(v), len(x))} Output: gc: data 3 3 [0 1 2] 2 2 [3 4]append 8195 9152 2constant 8195 9152 2variable 8195 8195 2 gccgo: data 3 3 [0 1 2] 2 2 [3 4]append 8195 9152 2constant 8195 9152 2variable 8195 8195 2 To summarize, depending on the implementation, once the initial capacity is filled, the append built-in function may or may not reallocate on every call. References: Dynamic array Amortized analysis Appending to and copying slices If the capacity of s is not large enough to fit the additional values, append allocates a new, sufficiently large slice that fits both the existing slice elements and the additional values. Thus, the returned slice may refer to a different underlying array. 
Append to a slice specification discussion The spec (at tip and 1.0.3) states: "If the capacity of s is not large enough to fit the additional values, append allocates a new, sufficiently large slice that fits both the existing slice elements and the additional values. Thus, the returned slice may refer to a different underlying array." Should this be an "If and only if"? For example, if I know the capacity of my slice is sufficiently long, am I assured that I will not change the underlying array? Rob Pike Yes you are so assured. runtime slice.go source file Arrays, slices (and strings): The mechanics of 'append'
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15702419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/197788/']}
|
jdg_376716
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8177
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While reading reports about the New Horizons misson , I noticed an odd vertical, black stripe in the images of Pluto. Here is an example: Source: Hubble Discovers a Fifth Moon Orbiting Pluto (07.11.12). Credit: NASA; ESA; M. Showalter, SETI Institute Why is that black stripe there?
Now provide the response and nothing else.
|
That photograph is a composite of two images taken with different exposure times. To be precise, we'd have to say that the exposure of the two photographs is different, i.e. the outer photo was created by absorbing more light. In this case we can assume that the focal ratio (derived from Hubble's lens aperture) and the luminance of the scene (how much light is travelling in the lens' direction) are identical for both photographs, which leaves only the exposure time as a free variable when it comes to determining exposure. This is necessary because we're photographing objects with very different brightness. For Pluto to show up, a relatively short exposure time is required, but its moons reflect much less light and would need a longer exposure time to be visible. As long as the sensor is exposed, Pluto would continue to increase in brightness to the point that it becomes washed out. Objects that are significantly brighter become over-exposed, resulting in a loss of detail and fidelity, known as blown-out highlights in photography. In our case Pluto would turn into a solid white dot compared to the more detailed picture that is now possible. You can draw a parallel with false color images rendered from infrared: this composite is not what the human eye would see if it was capable of picking up this level of light and detail. In another Hubble image, NASA explained the reason why composite imaging is used: This is a composite image because a single exposure of the stellar background, comet Siding Spring, and Mars would be problematic. Mars actually is 10,000 times brighter than the comet, so it could not be properly exposed to show detail in the Red Planet. The comet and Mars also were moving with respect to each other and could not be imaged simultaneously in one exposure without one of the objects being motion blurred. Hubble had to be programmed to track on the comet and Mars separately in two different observations. Source: Hubble Sees Comet Next to Mars Very long exposure times are often necessary since relatively little light is coming our way from distant planets and stars. As the Hubble website explains for its Deep Fields images: Hubble has made a series of very deep observations taken in very dark parts of the sky. Like using a long exposure on a digital camera, these long exposure shots (up to several weeks) reveal very faint details that are not normally visible in shorter exposures. Source: "What are the Hubble Deep Fields?", Spacetelescope.org FAQ. Wikipedia summarises a paper by Robert E. Williams and the HDF team, "The Hubble Deep Field: Observations, Data Reduction, and Galaxy Photometry", as follows: Between December 18 and December 28, 1995—during which time Hubble orbited the Earth about 150 times—342 images of the target area in the chosen filters were taken. The total exposure times at each wavelength were 42.7 hours (300 nm), 33.5 hours (450 nm), 30.3 hours (606 nm) and 34.3 hours (814 nm), divided into 342 individual exposures to prevent significant damage to individual images by cosmic rays, which cause bright streaks to appear when they strike CCD detectors. A further 10 Hubble orbits were used to make short exposures of flanking fields to aid follow-up observations by other instruments. Source: Hubble Deep Field, Wikipedia, retrieved 2014-12-09
|
{}
|
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/8177', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/3001/']}
|
jdg_376717
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20044163
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to start an activity from a static Java method on an Android device. I do not have any context or anything passed as a parameter to the static function. To start the activity I must call "startActivity" with the currently running activity as the "this" pointer. So is there a way to get the currently running activity?
Now provide the response and nothing else.
|
You can only access static variables/objects inside a static method, so you would need to implement it this way: public class MainActivity extends Activity { private static Context mContext; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mContext = this; } public static void goToLoginActivity() { Intent login = new Intent(mContext, LoginActivity.class); mContext.startActivity(login); }} NOTE: This is not the proper way to do it, as it may cause a window-leak issue. The better approach is to pass the activity/context object as a parameter, like this: public static void goToLoginActivity(Context mContext) { Intent login = new Intent(mContext, LoginActivity.class); mContext.startActivity(login); }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20044163', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/491605/']}
|
jdg_376718
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48488178
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following code in C#. public int Foo(int a){ // ...}// in some other methodint? x = 0;x = Foo(x); The last line will return a compilation error cannot convert from 'int?' to 'int' which is fair enough. However, for example in Haskell there is Maybe which is a counterpart to Nullable in C#. Since Maybe is a Functor I would be able to apply Foo to x using fmap . Does C# have a similar mechanism?
Now provide the response and nothing else.
|
We can implement such functionality ourselves: public static class FuncUtils { public static Nullable<R> Fmap<T, R>(this Nullable<T> x, Func<T, R> f) where T : struct where R : struct { if(x != null) { return f(x.Value); } else { return null; } }} Then we can use it with: int? x = 0;x = x.Fmap(Foo); It will thus call the function Foo if x is not null . It will wrap the result back in a Nullable<R> . In case x is null , it will return a Nullable<R> with null . Or we can write a more equivalent function (like fmap in Haskell) where we have a function Fmap that takes as input a Func<T, R> and returns a Func<Nullable<T>, Nullable<R>> so that we can then use it for a certain x : public static class FuncUtils { public static Func<Nullable<T>, Nullable<R>> Fmap<T, R>(Func<T, R> f) where T : struct where R : struct { return delegate (Nullable<T> x) { if(x != null) { return f(x.Value); } else { return null; } }; }} We can then use it like: var fmapf = FuncUtils.Fmap<int, int>(Foo);fmapf(null); // -> nullfmapf(12); // -> Foo(12) as int?
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48488178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6278979/']}
|
jdg_376719
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25065940
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Background: In versions of Mac OS up to version 9, the standard representation for text files used an ASCII CR (carriage return) character, value decimal 13, to mark the end of a line. Mac OS 10, unlike earlier releases, is UNIX-like, and uses the ASCII LF (line feed) character, value decimal 10, to mark the end of a line. The question is, what are the values of the character constants '\n' and '\r' in C and C++ compilers for Mac OS releases prior to OS X? There are (at least) two possible approaches that could have been taken: Treat '\n' as the ASCII LF character, and convert it to and from CR on output to and input from text streams (similar to the conversion between LF and CR-LF on Windows systems); or Treat '\n' as the ASCII CR character, which requires no conversion on input or output. There would be some potential problems with the second approach. One is that code that assumes '\n' is LF could fail. (Such code is inherently non-portable anyway.) The other is that there still needs to be a distinct value for '\r' , and on an ASCII-based system CR is the only sensible value. And the C standard doesn't permit '\n' == '\r' (thanks to mafso for finding the citation, 5.2.2 paragraph 3), so some other value would have to be used for '\r' . What is the output of this C program when compiled and executed under Mac OS N , for N less than 10? #include <stdio.h>int main(void) { printf("'\\n' = %d\n", '\n'); printf("'\\r' = %d\n", '\r'); if ('\n' == '\r') { printf("Hmm, this could be a problem\n"); }} The question applies to both C and C++. I presume the answer would be the same for both. The answer could also vary from one C compiler to another -- but I would hope that compiler implementers would have maintained consistency with each other. To be clear, I am not asking what representation old releases of Mac OS used to represent end-of-line in text files . My question is specifically and only about the values of the constants '\n' and '\r' in C or C++ source code. I'm aware that printing '\n' (whatever its value is) to a text stream causes it to be converted to the system's end-of-line representation (in this case, ASCII CR); that behavior is required by the C standard.
Now provide the response and nothing else.
|
The values of the character constants \r and \n was the exact same in Classic Mac OS environments as it was everywhere else: \r was CR was ASCII 13 ( 0x0d ); \n was LF was ASCII 10 ( 0x0a ). The only thing that was different on Classic Mac OS was that \r was used as the "standard" line ending in text editors, just like \n is used on UNIX systems, or \r\n on DOS and Windows systems. Here's a screenshot of a simple test program running in Metrowerks CodeWarrior on Mac OS 9, for instance: Keep in mind that Classic Mac OS systems didn't have a system-wide standard C library! Functions like printf() were only present as part of compiler-specific libraries like SIOUX for CodeWarrior, which implemented C standard I/O by writing output to a window with a text field in it. As such, some implementations of standard file I/O may have performed some automatic translation between \r and \n , which may be what you're thinking of. (Many Windows systems do similar things for \r\n if you don't pass the "b" flag to fopen() , for instance.) There was certainly nothing like that in the Mac OS Toolbox, though.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25065940', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/827263/']}
|
jdg_376720
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27062
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The classic Minimum Spanning Tree (MST) algorithms can be modified to find the Maximum Spanning Tree instead. Can an algorithm such as Kruskal's be modified to return a spanning tree that is strictly more costly than an MST, but is the second cheapest? For example, if you switch one of the edges in this spanning tree, you end up with an MST and vice versa. My question, though, is simply: How can I find the second-cheapest spanning tree, given a graph $G$ with an MST?
Now provide the response and nothing else.
|
The second-smallest spanning tree differs from the minimum spanning tree by a single edge swap. That is, to get the second-smallest tree, you need to add one edge that's not already in the minimum spanning tree, and then remove the heaviest edge on the cycle that the added edge forms. If you already have the minimum spanning tree, this can all be done in linear time. See e.g. V. King, "A simpler minimum spanning tree verification algorithm", Algorithmica 1997, doi:10.1007/BF02526037 .
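For readers who want the mechanics spelled out, here is a naive sketch of the edge-swap idea in Python (quadratic, not the linear-time method of the King reference; networkx is assumed to be available):

import networkx as nx  # assumed dependency, purely for illustration

def second_best_spanning_tree_weight(G):
    T = nx.minimum_spanning_tree(G)
    mst_weight = T.size(weight='weight')
    best = float('inf')
    for u, v, data in G.edges(data=True):
        if T.has_edge(u, v):
            continue
        # Adding (u, v) to T closes exactly one cycle: the tree path u..v.
        path = nx.shortest_path(T, u, v)
        heaviest = max(T[a][b]['weight'] for a, b in zip(path, path[1:]))
        best = min(best, mst_weight + data['weight'] - heaviest)
    return best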
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/27062', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/28568/']}
|
jdg_376721
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7449196
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a class that inherits from ObservableCollection and adds a few additional methods such as AddRange and RemoveRange. My base method call is this:

public void AddRange(IEnumerable<T> collection)
{
    foreach (var i in collection) Items.Add(i);
    OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}

My problem with this is that I want to access e.NewItems or e.OldItems in the CollectionChanged event to perform an action on whatever item is in the collection, and the NotifyCollectionChangedAction.Reset action does not pass in these values:

void Instances_CollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
{
    if (e.NewItems != null) // e.NewItems is always null
    {
        foreach (var item in e.NewItems)
        {
            if (item is EventInstanceModel)
                ((EventInstanceModel)item).ParentEvent = this;
        }
    }
}

So I thought I could just use NotifyCollectionChangedAction.Add instead of Reset, however that throws a "Range actions are not supported" exception:

public void AddRange(IEnumerable<T> collection)
{
    var addedItems = collection.ToList();
    foreach (var i in addedItems) Items.Add(i);
    OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, addedItems));
}

So my question is, how can I raise a CollectionChanged event, and pass it the new or old item list?
Now provide the response and nothing else.
|
I've been looking into it, and apparently the CollectionChanged event cannot be raised with multiple items. So I can call

OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, singleItem));

but I can't call

OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, listOfItems));

For now what I have done is simply raise the Add event for every item added, but I am still rather unhappy with this since it means I raise the CollectionChanged event for every item in the AddRange method instead of only once.

public void AddRange(IEnumerable<T> collection)
{
    foreach (var i in collection)
    {
        Items.Add(i);
        OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, i));
    }
}
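Stepping back from WPF, the batched-notification pattern the question is really after looks like this; a minimal, language-agnostic sketch in Python (the class and event names are made up for illustration):

class ObservableList:
    def __init__(self):
        self._items = []
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def add_range(self, items):
        added = list(items)
        self._items.extend(added)
        for callback in self._listeners:   # one event, full payload
            callback('add', added)

ol = ObservableList()
ol.subscribe(lambda action, items: print(action, items))
ol.add_range([1, 2, 3])   # prints: add [1, 2, 3]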
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7449196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/302677/']}
|
jdg_376722
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7227814
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that with the instruction: Persistence.createEntityManagerFactory("persistence-unit-name"); the JPA persistence mechanism reads the "persistence.xml" file, looks for the persistence unit called "persistence-unit-name", and constructs the EntityManagerFactory based on it. My question is: how can I force JPA to use a file different from "persistence.xml"? For example, "persistence-test.xml".
Now provide the response and nothing else.
|
There is no standardized JPA way to do this, although individual JPA providers may provide a way. I would suggest the standard approach: use a different classpath for main and test. For example, if you use Maven, and you have two META-INF/persistence.xml files, one in src/main/resources and one in src/test/resources , tests will pick up the one in src/test/resources , because Maven puts test classes / resources ahead of main classes / resources in the classpath. You can easily configure Ant to work in similar ways. If you need more advanced logic, consider using Spring's JPA support . It will let you deal with advanced situations like integrating multiple persistence.xml files .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7227814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/812139/']}
|
jdg_376723
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2070748
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\cos(x) - \frac{1+ax^2}{1+bx^2} \text{ as } x \to 0$$ If $\displaystyle \cos(x) = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \cdots $ then we should choose $a, b$ in such a way that the second term's Taylor series is close to this. However, I'm not sure how to approach this. I tried to take several derivatives of the second term to see its value at $x_0 = 0$, but it becomes complicated and I don't see a general formula for the $n$-th derivative at zero from which to find $a$ and $b$.
Now provide the response and nothing else.
|
The quick-and-dirty method:\begin{align*}f(x) = \frac{1 + a x^2}{1 + bx^2} &= (1 + a x^2) \left( 1 - b x^2 + b^2 x^4 - b^3 x^6 + \cdots \right) \\&= 1 - (b - a) x^2 + (b^2 - ab) x^4 - (b^3 - a b^2) x^6 + \cdots\end{align*}We want $b - a = \frac{1}{2}$ and $b (b-a) = \frac{1}{24}$, so that (at least) the first three terms in the Taylor series of $f(x)$ and $\cos x$ agree. This implies that $b = \frac{1}{12}$ and $a = -\frac{5}{12}$; with this choice, we have$$f(x) = \frac{1 - \frac{5}{12} x^2}{1 + \frac{1}{12} x^2} = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{288} + \cdots$$which agrees with $\cos x$ up to the $\mathcal{O}(x^6)$ term.
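A quick numeric sanity check of this result (a [2/2] Padé approximant of cosine), in Python:

import math

a, b = -5.0 / 12.0, 1.0 / 12.0
f = lambda x: (1 + a * x * x) / (1 + b * x * x)

for x in (0.1, 0.5, 1.0):
    print(x, abs(f(x) - math.cos(x)))
# Near 0 the error behaves like (1/288 - 1/720) * x**6 = x**6 / 480,
# so at x = 0.1 it is about 2e-9.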
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2070748', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/371699/']}
|
jdg_376724
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16925129
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to generate different/unique id per request in django from models field. I did this but I keep getting the same id. class Paid(models.Model): user=models.ForeignKey(User) eyw_transactionref=models.CharField(max_length=100, null=True, blank=True, unique=True, default=uuid.uuid4()) #want to generate new unique id from this field def __unicode__(self): return self.user
Now provide the response and nothing else.
|
If you are using Django 1.8 or later, madzohan's answer is the right one. Do it like this:

# note the uuid without parentheses
eyw_transactionref = models.CharField(max_length=100, blank=True, unique=True, default=uuid.uuid4)

The reason is that with the parentheses you evaluate the function when the model is imported, and this yields a UUID that is then reused as the default for every instance created. Without parentheses you pass just the callable that provides the default value, and it is called each time a default is needed (i.e., for each new instance). You can also take this approach:

class Paid(models.Model):
    user = models.ForeignKey(User)
    eyw_transactionref = models.CharField(max_length=100, null=True, blank=True, unique=True)

    def __init__(self, *args, **kwargs):
        super(Paid, self).__init__(*args, **kwargs)
        if not self.eyw_transactionref:  # don't overwrite values loaded from the DB
            self.eyw_transactionref = str(uuid.uuid4())

    def __unicode__(self):
        return self.user
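The parentheses pitfall is easy to reproduce without Django at all; a small plain-Python sketch (the make_field helper is hypothetical, just mimicking how a field default is stored):

import uuid

def make_field(default):
    # A value default is reused as-is; a callable is invoked per instance.
    return default() if callable(default) else default

frozen = uuid.uuid4()                                   # evaluated once
print(make_field(frozen), make_field(frozen))           # same UUID twice
print(make_field(uuid.uuid4), make_field(uuid.uuid4))   # two fresh UUIDs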
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16925129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1218243/']}
|
jdg_376725
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
396526
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I want to perform some commands given variables aren't set I'm using: if [[ -z "$a" || -z "$v" ]]then echo "a or b are not set"fi Yet the same syntax doesn't work with -v , I have to use: if [[ -v a && -v b ]]then echo "a & b are set"fi What is the history behind this? I don't understand why the syntax wouldn't be the same. I've read that -v is a somewhat recent addition to bash (4.2) ?
Now provide the response and nothing else.
|
Test operators -v and -z are just not the same. Operator -z tests whether a string is empty. So it is true that [[ -z "$a" ]] will give a good approximation of "variable a is unset", but not a perfect one: the expression will yield true if a is set to the empty string rather than unset; the enclosing script will fail if a is unset and the option nounset is enabled. On the other hand, -v a will be exactly "variable a is set", even in edge cases. It should be clear that passing $a rather than a to -v would not be right, as it would expand that possibly-unset variable before the test operator sees it; so it has to be part of that operator's task to inspect that variable, pointed to by its name, and tell whether it is set.
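The unset-versus-empty distinction is the whole point; here is the same idea as a Python analogy (shell variables modeled as keys in a mapping, which is roughly how an environment behaves):

env = {'a': ''}            # a is set but empty; b is unset

def is_set(name):          # analogue of [[ -v name ]]
    return name in env

def looks_unset(name):     # analogue of [[ -z "$name" ]]
    return env.get(name, '') == ''

print(is_set('a'), looks_unset('a'))   # True True  -- -z conflates these...
print(is_set('b'), looks_unset('b'))   # False True -- ...two distinct cases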
|
{}
|
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/396526', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/16792/']}
|
jdg_376726
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52326218
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my react-native applications, I had written code like this. return ( <PersonHandler profilePicture={item.user.profileImage ? {uri: item.user.profileImage} : DefaultUser} firstName={item.user.firstName} lastName={item.user.lastName} buttonBorderColor={item.status === 0 ? "#000000" : "#37CAFA"} buttonBackgroundColor={item.status === 0 ? null : "#37CAFA"} buttonTextColor={item.status === 0 ? "#000000" : "#FFFFFF"} buttonText={item.status === 0 ? USER_STATUS.REQUESTED : USER_STATUS.FOLLOWING} submitting={unfollowIsInProgress && item._id === unfollowingPerson._id} onButtonPress={() => this.onUnfollowPress(item)} /> ); Now I have more than 2 statuses to handle, so a single ternary operator can no longer cover all the cases. What would be the best approach to handle a situation like this? I have 3 statuses now: 0, 1 and 2. According to the status I have to handle the following conditions. buttonBorderColor={item.status === 0 ? "#000000" : "#37CAFA"} buttonBackgroundColor={item.status === 0 ? null : "#37CAFA"} buttonTextColor={item.status === 0 ? "#000000" : "#FFFFFF"} buttonText={item.status === 0 ? USER_STATUS.REQUESTED : USER_STATUS.FOLLOWING}
Now provide the response and nothing else.
|
Sure you can use the ternary operator still, you just have to use it twice , for example: buttonBorderColor={ item.status === 0 ? "#000000" : item.status === 1 ? "#37CAFA" : "#FFFFFF" // if status is 2} That said, it's a bit uncomfortable to read - you might consider using an array indexed by status whose value is the color you want instead: const colors = ["#000000", "#37CAFA", "#FFFFFF"]// ...buttonBorderColor={ colors[item.status] }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52326218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5775871/']}
|
jdg_376727
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
796412
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does $\operatorname{arcsec}(x) = 1 /\arccos(x)$? I have looked in a few books and Google'd it but I am not finding my answer.
Now provide the response and nothing else.
|
If $\sec^{-1} x = \theta$, then $x = \sec\theta$. This means $\frac1x = \cos\theta$, so $\cos^{-1}\frac1x = \theta$. So your equation is wrong; the correct statement is $$\boxed{\sec^{-1} x = \cos^{-1}\tfrac1x}$$
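A numeric spot-check in Python (whose math module has no asec, which is exactly why the identity is useful):

import math

for x in (1.5, 2.0, -3.0):
    theta = math.acos(1.0 / x)        # sec^-1(x) via the identity
    print(x, 1.0 / math.cos(theta))   # recovers x, i.e. sec(theta) == x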
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/796412', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15709/']}
|
jdg_376728
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20818
|
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the task of fully determining an $n$ -qubit state $\rho$ which can be written as \begin{equation}\tag{1} \rho = \frac{1}{2^n}\sum_{p \in \{I, X, Y, Z\}^n} \text{Tr}(\rho P_{p}) P_{p}\end{equation} where each $P_{p} = P_{p_1} \otimes \dots \otimes P_{p_n}$ is a tensor product of Pauli matrices. This suggests that I could perform state tomography by evaluating each expectation value $\langle P_p \rangle = \text{Tr}(\rho P_p)$ . I would plan on having $3^n$ distinct experimental configurations, one for each combination of local measurement bases $\{X, Y, Z\}^n$ . I thought that the discrepancy between $3^n$ measurement configurations and the $4^n-1$ coefficients needed to specify $(1)$ would be resolved because an expectation value of the form $\langle X \otimes I \otimes X \rangle$ could be computed using a marginal distribution over the bitstring results from the experiment that measures $\langle X \otimes X \otimes X\rangle$ (or the experiments used to compute $\langle X \otimes Y \otimes X\rangle$ or $\langle X \otimes Z \otimes X\rangle$ ). So any experiment to determine a term $\text{Tr}(\rho P_p)$ in Equation $(1)$ where $P_p$ contained an $I$ would be redundant with some other experiment. This is one of the features motivating the method of ( Huang, 2021 ): If you instantiate Theorem 1 therein with $L=4^n$ and $w=n$ , it asserts that $4^n$ many $\epsilon$ -accurate estimators for Pauli expectation values can be computed in $M = O(n3^n / \epsilon^2)$ total experiments. But when I look elsewhere in the literature (e.g. Franca, 2017 ) it suggests that for an arbitrary full-rank, $2^n$ -dimensional state $\rho$ you do indeed need $\Omega(4^n)$ measurement configurations for quantum state tomography. How do I resolve the discrepancy between these two scaling behaviors?
Now provide the response and nothing else.
|
Let's say you have a magical machine that gives you $\langle P_{p} \rangle$ (which are expectation values and therefore, well, numbers) and only the $\langle P_{p} \rangle$ . It does this for all the $n$ -fold tensor products of the traceless Paulis (with $n = 3$ in your example). That is: $p \in \{X,Y,Z\}^{\otimes n}$ , for a total of $3^{n}$ Paulis. The process that you describe to obtain e.g. $\langle X \otimes I \otimes X \rangle$ from, say, $\langle X \otimes X \otimes X \rangle$ does not work here. For that, you need more. These $3^{n}$ values are not enough. The trick is, the usual method of obtaining these Pauli expectation values is by measuring every qubit in a Pauli basis separately. A single-qubit Pauli measurement is a projection upon either of its eigenspaces. For instance, the $Z$ operator has projectors $|0\rangle\langle0|$ and $|1\rangle\langle1|$ , and the $X$ operator has projectors $|+\rangle\langle+|$ and $|-\rangle\langle-|$ . You thus gather (and use) the statistics of six operators per qubit to perform full QST! This results in a total of $6^{n}$ (positive) operators that one uses in standard QST - in the case of the Paulis these are projectors, but the more general POVM can also work. With all these operators one can reconstruct all Paulis (including the containing- $I$ -terms) and therefore reconstruct the density matrix. This is exactly possible because these $6^{n}$ operators form a spanning set of the space of density matrices. The Paulis do too $^{1}$ and therefore we can use either of them to reconstruct the density matrix. But wait - now we have $6^{n}$ terms! We only needed $4^{n}$ terms, right? Well, yes! The dimension of the space of density matrices scales as $4^{n}$ , so if you can find a set of (positive) operators of size $4^{n}$ that are independent, this should work. A particularly nice example is a SIC-POVM ; a Symmetric and Informationally-Complete POVM. If you have some magical machine that can perform this SIC-POVM $\{A_{k}\}$ measurement on all your qubits, it would have to perform $4^{n}$ measurements and return single values $\langle A_{k} \rangle = \mathrm{tr}\big[\rho A_{k}\big]$ , which would be enough to reconstruct the entire density matrix. A more straightforward example of a set of four operators (per qubit) is the $+1$ and $-1$ eigenstates of the $Z$ operator together with the $+1$ eigenstates of the $X$ and $Y$ operators: $\{|0\rangle\langle0|,|1\rangle\langle1|,|+\rangle\langle+|,|+i\rangle\langle+i|\}$ . You can check that these form a basis for the space of density matrices. However, please realize that to experimentally implement this set of measurements would be a daunting task (if not completely impossible) without just performing all $3^{n}$ Pauli measurements. But yeah - you could perform Pauli measurements in all $3^{n}$ bases and then only use the statistics for the above projectors, and throw the $-1$ eigenstates of $X$ and $Y$ away! $^{1}$ Actually, they don't. They form a spanning set (and a basis) for the space of Hermitian matrices, of which density matrices are a subset; they're not even density matrices themselves because they're not positive semidefinite nor do they have trace $1$ . But this is not important for the current discussion.
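The marginalization arithmetic from the question can be made concrete with a short counting sketch in Python (mine, not from the answer): every non-identity Pauli expectation is covered by one of only 3^n local settings, because each I slot can be filled by any of X, Y, Z and that qubit's outcome discarded.

from itertools import product

n = 3
paulis = [''.join(p) for p in product('IXYZ', repeat=n)][1:]  # skip I...I
settings = {''.join(s) for s in product('XYZ', repeat=n)}

def covering_setting(pauli):
    # Arbitrary choice: fill every I slot with Z.
    return pauli.replace('I', 'Z')

assert all(covering_setting(p) in settings for p in paulis)
print(len(paulis), 'expectations from', len(settings), 'settings')
# -> 63 expectations from 27 settings (n = 3)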
|
{}
|
{'log_upvote_score': 4, 'links': ['https://quantumcomputing.stackexchange.com/questions/20818', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/1939/']}
|
jdg_376729
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7302891
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To date I use the following implementation of the builder pattern (as opposed to the implementation described here ): public class Widget { public static class Builder { public Builder(String name, double price) { ... } public Widget build() { ... } public Builder manufacturer(String value) { ... } public Builder serialNumber(String value) { ... } public Builder model(String value) { ... } } private Widget(Builder builder) { ... }} This works well for most situations I've encountered where I need to build up a complex object with a variety of required/mandatory and optional parameters. However, I've been struggling lately to understand how the pattern is of any benefit when all your parameters are mandatory (or at least the vast majority are). One means of getting around this has been to logically group the parameters being passed in to their own classes to reduce the number of parameters being passed to the builder constructor. For example, instead of: Widget example = new Widget.Builder(req1, req2, req3,req4,req5,req6,req7,req8) .addOptional(opt9) .build(); becomes grouped as follows: Object1 group1 = new Object1(req1, req2, req3, req4);Object2 group2 = new Object2(req5, req6);Widget example2 = new Widget.Builder(group1, group2, req7, req8) .addOptional(opt9) .build(); While having separate objects simplifies things quite a bit, it also makes things a little difficult to follow if one is not familiar with the code. One thing I considered was moving all parameters into their own addParam(param) methods and then performing validation on required parameters in the build() method. What is best practice and is there perhaps a better approach to this that I haven't considered?
Now provide the response and nothing else.
|
However, I've been struggling lately to understand how the pattern is of any benefit when all your parameters are mandatory (or at least the vast majority are). The fluent builder pattern is still beneficial: It's more readable - it effectively allows named parameters so that the call isn't just a long list of unnamed arguments. It's unordered - this lets you group arguments together into logical groups, either as part of a single builder setter call or simply by letting you use a natural order of calling the builder setter methods that makes the most sense for this particular instantiation. Widget example = new Widget.Builder(req1, req2, req3,req4,req5,req6,req7,req8) .addOptional(opt9) .build(); becomes grouped as follows: Object1 group1 = new Object1(req1, req2, req3, req4);Object2 group2 = new Object2(req5, req6);Widget example2 = new Widget.Builder(group1, group2, req7, req8) .addOptional(opt9) .build(); While having separate objects simplifies things quite a bit, it also makes things a little difficult to follow if one is not familiar with the code. One thing I considered was moving all parameters into their own addParam(param) methods and then performing validation on required parameters in the build() method. I would favor a hybrid when appropriate or natural. It doesn't have to be all in the constructor, nor does each param need its own addParam method. Builder gives you flexibility to do one, the other, something in-between, or a combo:

Widget.Builder builder = new Widget.Builder(Widget.BUTTON);
builder.withWidgetBackingService(url, resource, id);
builder.withWidgetStyle(bgColor, lineWidth, fontStyle);
builder.withMouseover("Not required");
Widget example = builder.build();
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7302891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/928072/']}
|
jdg_376730
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21259954
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to throw a FaultException on the server side of a WCF application, using a DTO as the payload for the exception. At some point (for fairly big objects) I started to receive a "The size necessary to buffer the XML content exceeded the buffer quota" exception on the client side. All binding message size parameters and maxDepth are set to values big enough to rule them out. Has anybody faced this issue? There seems to be no solution on the internet yet. Setting <dataContractSerializer maxItemsInObjectGraph="2147483647" ignoreExtensionDataObject="true" /> did not help.
Now provide the response and nothing else.
|
The problem was in "MaxFaultSize" parameter in ClientRuntime, default value is 65535, so you can't pass large payload in WCF's faults by default. To change this value, you should write custom EndpointBehavior like this: public class MaxFaultSizeBehavior : IEndpointBehavior{ private readonly int _size; public MaxFaultSizeBehavior(int size) { _size = size; } public void Validate(ServiceEndpoint endpoint) { } public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { } public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { } public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { clientRuntime.MaxFaultSize = _size; }} and apply it to endpoint in client code when creating proxy: _clientProxy.Endpoint.Behaviors.Add(new MaxFaultSizeBehavior(1024000)); or, without proxy, just cast the client to add the behavior: _client = new MyServiceClient();((ClientBase<IMyService>) _client).Endpoint.Behaviors.Add(new MaxFaultSizeBehavior(1024000)); After that everything will be fine.I've spent a lot of time searching answer, hope this helps somebody.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21259954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/818116/']}
|
jdg_376731
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
74695
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There are several articles describing the newly discovered Linux-based Turla trojan. But basically, all these articles repeat the same, very limited, information.Can anybody provide more details, such as: How do linux machines get infected Is there any privilege escalation involved, or is the whole thing only happening under the infected user (i.e. uid 1000) Where does the malware code "live" on the infected machine plus any other interesting details
Now provide the response and nothing else.
|
TURLA is the final stage of a large and sophisticated family of malware. There have been known Windows versions since at least 2010. This 40 page presentation is the most comprehensive resource I have seen, for either platform. TURLA - development & operations Some Windows Highlights Stage 0: attack stage - infection vector Stage 1: reconnaissance stage - initial backdoor Stage 2: lateral movements Stage 3: access established stage - TURLA deployed On each stage they can quit if they lose interest in target Stage 0: Injection Vectors Spear Phishing ( CVE-2013-3346 )( CVE-2013-5065 ) Watering Holes [Adobe Update social engineering / Java exploits ( CVE-2012-1723 ), Adobe Flash exploits or Internet Explorer 6,7,8 exploits] Third party supplier compromise Stage 1: Reconaissance Stage Initial backdoor - WipBot/Epic/TavDig WipBot is a combination of a zero-day and a CVE-2013-3346 exploit Exports functions with same names as TURLA. No other similarities Breaks debugging and most malware sandboxes Process hops several times, wipes its own PE section Further described in Kaspersky Lab report Stage 2: Lateral Movements Refine C&C Further penetrate network Utilize new backdoor Gets Domain Admin credentials Stage 3: Turla Dropped on select machines for long-term compromise Machines can be compromised for years without detection Other Resources The 'Penguin Turla' - Kaspersky Lab (linux specific details) Symantec Report - Turla Linux Highlights Turla module written in C/C++ Based on cd00r Executable is statically linked against multiple libraries Its functionality includes hidden network communications, arbitrary remote command execution, and remote management Much of its code is based on public sources Cannot be detected with netstat Does not require root access Linux Executable Characteristics ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, for GNU/Linux 2.2.5, stripped Linux Statically Linked Libraries glibc2.3.2 - the GNU C library openssl v0.9.6 - an older OpenSSL library libpcap - tcpdump's network capture library Linux C&C Details First stage C&C is hardcoded. Known activity @ news-bbc.podzone[.]org pDNS IP: 80.248.65.183 Linux Startup/Execution Details Process requires two parameters: ID (a numeric value used as a part of the "magic packet for authentication") and an existing network interface name The parameters can be inputted two different ways: from STDIN, or from dropper a launching the sample After the ID and interface name are entered and the process launched, the backdoor's process PID is returned Linux Magic Packet Statically links PCAP libraries Gets raw socket, applies filter, captures packets Checks for an ACK number in the TCP header, or the second byte from the UDP packet body If condition is met, execution jumps to packet payload contents and creates regular socket Backdoor uses new socket to connect to source address of Magic Packets Backdoor reports its own PID and IP, waits to receive commands Arriving commands are executed with a "/bin/sh -c " script Final Notes Everything regarding the linux version was from the Kaspersky report. Unfortunately, detecting seems to be very difficult at this point. "Although Linux variants from the Turla framework were known to exist, we haven't seen any in the wild yet." - Kaspersky Lab
|
{}
|
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/74695', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/28654/']}
|
jdg_376732
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
735714
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make a call to an Amazon web service and I'm stuck on generating the signature. I looked at this but I still have a question about it. Using this example, what are NSData *keyData and NSData *clearTextData ? What do I need to pass for these two values?

/* inputs: NSData *keyData; NSData *clearTextData */
uint8_t digest[CC_SHA1_DIGEST_LENGTH] = {0};
CCHmacContext hmacContext;
CCHmacInit(&hmacContext, kCCHmacAlgSHA1, keyData.bytes, keyData.length);
CCHmacUpdate(&hmacContext, clearTextData.bytes, clearTextData.length);
CCHmacFinal(&hmacContext, digest);
NSData *out = [NSData dataWithBytes:digest length:CC_SHA1_DIGEST_LENGTH];

Thanks, Dan
Now provide the response and nothing else.
|
I just spent like 4 hours Googling and looking for ways to calculate an unkeyed SHA1 on the iPhone that would match the results of the sha1() function in php. Here was the result:

#import <CommonCrypto/CommonDigest.h>

NSString *hashkey = <your data here>;
// PHP uses ASCII encoding, not UTF
const char *s = [hashkey cStringUsingEncoding:NSASCIIStringEncoding];
NSData *keyData = [NSData dataWithBytes:s length:strlen(s)];

// This is the destination
uint8_t digest[CC_SHA1_DIGEST_LENGTH] = {0};

// This one function does an unkeyed SHA1 hash of your hash data
CC_SHA1(keyData.bytes, keyData.length, digest);

// Now convert to NSData structure to make it usable again
NSData *out = [NSData dataWithBytes:digest length:CC_SHA1_DIGEST_LENGTH];

// description converts to hex but puts <> around it and spaces every 4 bytes
NSString *hash = [out description];
hash = [hash stringByReplacingOccurrencesOfString:@" " withString:@""];
hash = [hash stringByReplacingOccurrencesOfString:@"<" withString:@""];
hash = [hash stringByReplacingOccurrencesOfString:@">" withString:@""];
// hash is now a string with just the 40char hash value in it

Hopefully this will help others who are struggling with SHA1 on the iPhone
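For anyone cross-checking the output, the same unkeyed SHA-1 in Python (hashlib) matches PHP's sha1() byte for byte, and hmac covers the keyed variant the original AWS question actually needs:

import hashlib
import hmac

msg = 'hello'.encode('ascii')          # PHP strings are raw bytes
print(hashlib.sha1(msg).hexdigest())   # same 40-char hex as PHP sha1('hello')

# The AWS-signature case is an HMAC-SHA1, i.e. a keyed hash:
key = b'secret'
print(hmac.new(key, msg, hashlib.sha1).hexdigest())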
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/735714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376733
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40894385
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The official php7 docker image has the following example:

FROM php:7.0-fpm
RUN apt-get update && apt-get install -y libmemcached-dev \
    && pecl install memcached \
    && docker-php-ext-enable memcached

I'm trying to use FROM php:7.0-fpm-alpine :

RUN apk add --update --no-cache libmemcached-dev
RUN pecl install memcached && docker-php-ext-enable memcached

PECL gives this error: pecl/memcached requires PHP (version >= 5.2.0, version <= 6.0.0, excluded versions: 6.0.0), installed version is 7.0.13
How can I install the memcached php extension on alpine?
Now provide the response and nothing else.
|
Currently the php-memcached-dev:php7 branch contains the source for this extension. To install it you can still use the docker-php-ext-* commands but you need to check out the source yourself.

Base Installation

Assuming everything required to install the extension is already installed, you can do:

RUN git clone -b php7 https://github.com/php-memcached-dev/php-memcached /usr/src/php/ext/memcached \
    && docker-php-ext-configure /usr/src/php/ext/memcached \
       --disable-memcached-sasl \
    && docker-php-ext-install /usr/src/php/ext/memcached \
    && rm -rf /usr/src/php/ext/memcached

This block will clone the repository, configure and install the extension, then clean up after itself.

Pre-requisites

It is most likely that you will need to install a few packages to build the extension; we can add and remove them by doing:

ENV MEMCACHED_DEPS zlib-dev libmemcached-dev cyrus-sasl-dev git
RUN set -xe \
    && apk add --no-cache libmemcached-libs zlib \
    && apk add --no-cache \
       --virtual .memcached-deps \
       $MEMCACHED_DEPS \
    && git clone -b php7 https://github.com/php-memcached-dev/php-memcached /usr/src/php/ext/memcached \
    && docker-php-ext-configure /usr/src/php/ext/memcached \
       --disable-memcached-sasl \
    && docker-php-ext-install /usr/src/php/ext/memcached \
    && rm -rf /usr/src/php/ext/memcached \
    && apk del .memcached-deps

Update 17 May 2017

memcached has been added to the official pecl libraries for php7 now (v3 -> php7/7.1, v2 -> php5). This makes installation a bit different:

FROM php:7.0-alpine

ENV MEMCACHED_DEPS zlib-dev libmemcached-dev cyrus-sasl-dev
RUN apk add --no-cache --update libmemcached-libs zlib
RUN set -xe \
    && apk add --no-cache --update --virtual .phpize-deps $PHPIZE_DEPS \
    && apk add --no-cache --update --virtual .memcached-deps $MEMCACHED_DEPS \
    && pecl install memcached \
    && echo "extension=memcached.so" > /usr/local/etc/php/conf.d/20_memcached.ini \
    && rm -rf /usr/share/php7 \
    && rm -rf /tmp/* \
    && apk del .memcached-deps .phpize-deps
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40894385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/371699/']}
|
jdg_376734
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
744201
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If we take the composite angular momentum of the three valence quarks, we should have a proton/neutron spin angular momentum of, $$S=\{ \frac{1}{2},\frac{3}{2} \}$$ using the general rule for adding angular momentum in quantum mechanics. Evidently, we only ever observe a proton spin of $\frac{1}{2}$ . So my question is, why do we not observe a spin of $\frac{3}{2}$ ?
Now provide the response and nothing else.
|
So my question is, why do we not observe a spin of $\frac{3}{2}$ ? Actually, we did observe such particles. But we don't call them proton and neutron, because they behave differently in some other ways. See the Table of Baryons . The $\Delta^+$ particle has quark composition $uud$ (i.e. the same as the proton) and spin $\frac{3}{2}$ (unlike the proton). The $\Delta^0$ particle has quark composition $udd$ (i.e. the same as the neutron) and spin $\frac{3}{2}$ (unlike the neutron). The $\Delta$ particles are unstable (with very short lifetimes) and have masses around $30$ % larger than the proton/neutron. (See this and this question about why the mass is larger.)
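The bookkeeping behind the question is standard angular-momentum addition (written out here for completeness, not taken from the answer): $$\tfrac{1}{2}\otimes\tfrac{1}{2}\otimes\tfrac{1}{2} = \left(0 \oplus 1\right)\otimes\tfrac{1}{2} = \tfrac{1}{2}\oplus\tfrac{1}{2}\oplus\tfrac{3}{2},$$ i.e. three quark spins yield two spin-$\tfrac{1}{2}$ multiplets and one spin-$\tfrac{3}{2}$ multiplet, with $2+2+4 = 8 = 2^3$ states in total; the nucleons sit in the spin-$\tfrac{1}{2}$ pieces and the $\Delta$'s fill the spin-$\tfrac{3}{2}$ piece.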
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/744201', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/315787/']}
|
jdg_376735
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51841506
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on data preprocessing and want to compare the benefits of Data Standardization vs Normalization vs Robust Scaler practically. In theory, the guidelines are: Advantages: Standardization : scales features such that the distribution is centered around 0, with a standard deviation of 1. Normalization : shrinks the range such that the range is now between 0 and 1 (or -1 to 1 if there are negative values). Robust Scaler : similar to normalization but it instead uses the interquartile range, so that it is robust to outliers. Disadvantages: Standardization : not good if the data is not normally distributed (i.e. no Gaussian Distribution). Normalization : gets influenced heavily by outliers (i.e. extreme values). Robust Scaler : doesn't take the median into account and only focuses on the parts where the bulk data is. I created 20 random numerical inputs and tried the above-mentioned methods ( numbers in red color represent the outliers ): I noticed that - indeed - the Normalization got affected negatively by the outliers and the change of scale between the new values became tiny ( all values almost identical - 6 digits after the decimal point - 0.000000x ) even though there are noticeable differences between the original inputs! My questions are: Am I right to say that Standardization is also affected negatively by the extreme values? If not, why, according to the results provided? I really can't see how the Robust Scaler improved the data, because I still have extreme values in the resulting data set. Any simple, complete interpretation?
Now provide the response and nothing else.
|
Am I right to say that Standardization is also affected negatively by the extreme values? Indeed you are; the scikit-learn docs themselves clearly warn about such a case: However, when data contains outliers, StandardScaler can often be mislead. In such cases, it is better to use a scaler that is robust against outliers. More or less, the same holds true for the MinMaxScaler as well. I really can't see how the Robust Scaler improved the data, because I still have extreme values in the resulting data set. Any simple, complete interpretation? Robust does not mean immune , or invulnerable , and the purpose of scaling is not to "remove" outliers and extreme values - this is a separate task with its own methodologies; this is again clearly mentioned in the relevant scikit-learn docs : RobustScaler [...] Note that the outliers themselves are still present in the transformed data. If a separate outlier clipping is desirable, a non-linear transformation is required (see below). where the "see below" refers to the QuantileTransformer and quantile_transform .
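The effect is easy to reproduce; a minimal Python / scikit-learn sketch (my own toy data, one extreme value):

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])

for scaler in (StandardScaler(), MinMaxScaler(), RobustScaler()):
    print(type(scaler).__name__, scaler.fit_transform(X).ravel().round(3))

# MinMaxScaler squeezes the four inliers into roughly 0.000-0.003;
# StandardScaler's mean and std are dragged by the outlier as well;
# RobustScaler keeps the inliers spread out, but the outlier is still
# present in the output, only rescaled.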
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51841506', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8382692/']}
|
jdg_376736
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33026015
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I use es6/7 (babel - stage 1) instead of TypeScript, how are services, and specifically Http, injected? Here's my component JS:

import {Component, Inject, View, CORE_DIRECTIVES, ViewEncapsulation} from 'angular2/angular2';
import {Http} from 'angular2/http';

@Component({
  selector: 'login'
})
@View({
  templateUrl: './components/login/login.html',
  styleUrls: ['components/login/login.css'],
  directives: [CORE_DIRECTIVES],
  encapsulation: ViewEncapsulation.Emulated
})
export class Login {
  constructor(@Inject(Http) http) {
    console.log('http', http);
  }

  authenticate(username, password) {
    // this.http.get('/login');
  }
}

I have tried:

export class Login {
  constructor(@Inject(Http) http) {
    console.log('http', http);
  }
}

/********************/

@Inject(Http)
export class Login {
  constructor(http) {
    console.log('http', http);
  }
}

/********************/

export class Login {
  constructor(Http: http) {
    console.log('http', http);
  }
}

/********************/

export class Login {
  constructor(http = Http) {
    console.log('http', http);
  }
}

/********************/

export class Login {
  constructor(Http) {
    this.http = new Http()
    console.log('http', this.http);
  }
}

/********************/

export class Login {
  constructor(http = new Http()) {
    console.log('http', http);
  }
}

All but the first compiles. Others give me access to either the Http class or an http instance. But none works. I tried following the discussion referenced by Eric Martinez in his comment. Login.js now:

import {Component, Inject, View, CORE_DIRECTIVES, ViewEncapsulation} from 'angular2/angular2';
import {HTTP_BINDINGS, Http, BaseRequestOptions, RequestOptions, RequestMethods} from 'angular2/http';

@Component({
  selector: 'login'
})
@View({
  templateUrl: './components/login/login.html',
  styleUrls: ['components/login/login.css'],
  directives: [CORE_DIRECTIVES],
  encapsulation: ViewEncapsulation.Emulated,
  bindings: [Http]
})
export class Login {
  constructor(http) {
    this.http = http;
    console.log('http', http);
  }

  authenticate(usernameEl, passwordEl) {
    var username = usernameEl.value;
    var password = passwordEl.value;
    console.log('username', username, password);
    // this.http.get('/login');
  }
}
Login.parameters = [Http];

It compiles now but generates the following error:

Uncaught (in promise) NoBindingError: No provider for Http! (Login -> Http)
Error: DI Exception
    at NoBindingError.BaseException [as constructor] (http://localhost:3000/bundle.js:8400:24)
    at NoBindingError.AbstractBindingError [as constructor] (http://localhost:3000/bundle.js:9066:17)
    at new NoBindingError (http://localhost:3000/bundle.js:9102:17)
    at Injector._throwOrNull (http://localhost:3000/bundle.js:7469:20)
    at Injector._getByKeyDefault (http://localhost:3000/bundle.js:7516:22)
    at Injector._getByKey (http://localhost:3000/bundle.js:7461:26)
    at Injector._getByDependency (http://localhost:3000/bundle.js:7447:26)
    at Injector._instantiate (http://localhost:3000/bundle.js:7339:37)
    at Injector._instantiateBinding (http://localhost:3000/bundle.js:7330:26)
    at Injector._new (http://localhost:3000/bundle.js:7319:22)
Now provide the response and nothing else.
|
Since you have @Decorators enabled in Babel ...I'll fine-tune this answer to work with your specific setup. 1. You're missing HTTP_PROVIDERS The HTTP_PROVIDERS constant includes a number of functions required to handle HTTP requests/responses. import {Http, HTTP_PROVIDERS} from 'angular2/http'; @Component({ selector: 'login', providers: [ HTTP_PROVIDERS ]}) 2. You need to desugar the DI (Dependency Injection) syntax As mentioned in @alexpods' answer . Remove the static typing constructor(http) { @Inject handles DI implicitly but is only supported in Angular2+Typescript. Since you're using Angular2+ES6 you need to attach a static getter parameter to your class to provide the ES6-specific equivalent. static get parameters() { return [[Http]];} 3. You need to bind the Http instance to your class in the constructor By doing this, it will become accessible in your authenticate() method. constructor(http) { this.http = http; console.log('http', this.http);} ...and the full implementation: import {Component, Inject, View, CORE_DIRECTIVES, ViewEncapsulation} from 'angular2/angular2';import {Http, HTTP_PROVIDERS} from 'angular2/http';@Component({ selector: 'login', // required for Http providers: [ HTTP_PROVIDERS ]})@View({ templateUrl: './components/login/login.html', styleUrls: ['components/login/login.css'], directives: [CORE_DIRECTIVES], encapsulation: ViewEncapsulation.Emulated})export class Login { constructor(http) { // bind http to your class during construction // so it's available to authenticate() this.http = http; } // Angular2 DI desugar'd static get parameters() { return [[Http]]; } authenticate(username, password) { this.http.get('/login'); }} Aside: I know for a fact this works because I'm using it for the <ng2-markdown> component on EvanPlaice.com.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33026015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1090221/']}
|
jdg_376737
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51871080
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to do something like this. In the parent component: <child tag={h1}/> In the child component: <this.props.tag /> The problem is that "Unresolved variable or type div" is thrown when I pass one of the HTML components (tags) like div, h1, etc.
Now provide the response and nothing else.
|
UPDATED: Yes, we can pass an HTML tag as a prop. There are several ways, based on what you want. Passing tag as a prop:

<ChildComponent tag="h1" />

And inside the child component, we can use the tag as below.

const Child = ({ tag: Tag }) => (
  <Tag>Hello World</Tag>
)

By setting dangerouslySetInnerHTML:

<Child tags="<h1>Hello world</h1>" />

Inside the child component:

const Child = props => <div dangerouslySetInnerHTML={{ __html: props.tags }} />

Here is what you should know about dangerouslySetInnerHTML: in short, it exposes you to XSS attacks. This one is not related to passing a tag as a prop, but you might want to consider it if you are doing SEO-related work (maybe Next.js) and you need to render a conditional tag (sometimes h2 and sometimes h3). Then you can do as follows!

Conditional statement:

// Parent
const Parent = () => <Child isH3Tag />

// Child
const Child = ({ isH3Tag = false, children }) =>
  isH3Tag ? <h3>{children}</h3> : <h2>{children}</h2>;

Here is a demo. https://codesandbox.io/s/8x65l707yj
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51871080', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376738
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13359
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After reading about Barnard's Star and how it's approaching us 140 km/sec, I'm curious if there are any visualizations of how the nearest stars to Sol will be moving over the next 100k years or so. I'm amazed that Barnard's Star, currently 6 light years away, will be only 3.8 light years away in about 9,800 years. It's really moving!
Now provide the response and nothing else.
|
This image should give you a good idea of the distances of stars relative to the Sun over the next 80,000 years (and the past 20,000 years). Image courtesy of Wikipedia user Lambiam under the Creative Commons Attribution-Share Alike 3.0 Unported license . I've created a modified version below. The nearest star's track in time is in red, and the four blue boxes indicate the five transitions (the second box contains two): Proxima Centauri remains the nearest star to us for about 25 thousand years, at which point Alpha Centauri passes it. ~7.5 thousand years after that, Ross 248 swings in closer. It then moves away after about 10 thousand years, when Alpha Centauri moves just ahead of it for a short period of time. Gliese 445 becomes the nearest star for about another ~7.5 thousand years, until it swings away and Alpha Centauri again becomes the closest star. 80 thousand years in our future, Ross 128 nudges ahead. A few interesting things to note are that for most of this time, the nearest star is closer to the Sun than Proxima Centauri is now - indeed, there are times when quite a few stars are closer simultaneously - and that some of these close approaches show up as brief spikes. However, this is only a short-term pattern. In the past, many stars have come much closer - even inside the Oort Cloud. Also, to address the mention of Barnard's Star: It will swing close to us within ~10,000 years, but will just as suddenly swing away. As an aside, a cool animation shows the movement of stars that are near the Sun throughout their past trip around the galaxy. Animation courtesy of Wikipedia user Henrykus under the Creative Commons Attribution-Share Alike 3.0 Unported license . Data and original information from the European Southern Observatory. The above animation was the result of 1,000+ nights of observation over 15 years by astronomers working at the ESO. They observed or calculated the ages, velocities, and compositions of 14,000+ F and G stars. They then used the information to create orbital trajectories for these stars.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://astronomy.stackexchange.com/questions/13359', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/6870/']}
|
jdg_376739
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
672461
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is stack alignment? Why is it used? Can it be controlled by compiler settings? The details of this question are taken from a problem faced when trying to use ffmpeg libraries with msvc; however, what I'm really interested in is an explanation of what "stack alignment" is. The Details: When running my msvc-compiled program which links to avcodec I get the following error: "Compiler did not align stack variables. Libavcodec has been miscompiled", followed by a crash in avcodec.dll. avcodec.dll was not compiled with msvc, so I'm unable to see what is going on inside. When running ffmpeg.exe and using the same avcodec.dll everything works well. ffmpeg.exe was not compiled with msvc; it was compiled with gcc / mingw (same as avcodec.dll) Thanks, Dan
Now provide the response and nothing else.
|
Alignment of variables in memory (a short history). In the past, computers had an 8-bit data bus. This means that each clock cycle 8 bits of information could be processed. Which was fine then. Then came 16-bit computers. Due to downward compatibility and other issues, the 8-bit byte was kept and the 16-bit word was introduced. Each word was 2 bytes. And each clock cycle 16 bits of information could be processed. But this posed a small problem. Let's look at a memory map:

+----+
|0000|
|0001|
+----+
|0002|
|0003|
+----+
|0004|
|0005|
+----+
| .. |

At each address there is a byte which can be accessed individually. But words can only be fetched at even addresses. So if we read a word at 0000, we read the bytes at 0000 and 0001. But if we want to read the word at position 0001, we need two read accesses. First 0000,0001 and then 0002,0003, and we only keep 0001,0002. Of course this took some extra time and that was not appreciated. So that's why they invented alignment. We store word variables at word boundaries and byte variables at byte boundaries. For example, if we have a structure with a byte field (B) and a word field (W) (and a very naive compiler), we get the following:

+----+
|0000| B
|0001| W
+----+
|0002| W
|0003|
+----+

Which is not fun. But when using word alignment we find:

+----+
|0000| B
|0001| -
+----+
|0002| W
|0003| W
+----+

Here memory is sacrificed for access speed. You can imagine that when using double words (4 bytes) or quad words (8 bytes) this is even more important. That's why with most modern compilers you can choose which alignment you are using while compiling the program.
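The padding trade-off is directly observable from Python's struct module, which exposes both the native aligned layout and a packed one (a handy way to see what a compiler does with the B/W structure above):

import struct

# One byte then a two-byte value, with native ("@") alignment:
print(struct.calcsize('@BH'))  # typically 4: B, one pad byte, aligned H

# The same fields packed ("="), no padding:
print(struct.calcsize('=BH'))  # 3: the H may straddle a word boundary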
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/672461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4697/']}
|
jdg_376740
|