source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
147184
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I defined a function on objects of different types, u[x_] and v[y_], as

    u[x_] := x^2
    v[y_] := y^3
    f[u[x_]] := x + 1
    f[v[y_]] := y/2

When I input f[u[2]], I expect to get 5 as output. But what I get is f[4] instead. Could someone tell me where it went wrong?
Now provide the response and nothing else.
|
You can get the behavior you want by giving f the attribute HoldFirst, which prevents the premature evaluation of u[x] and v[x].

    Remove[f, u, v]
    u[x_] := x^2
    v[x_] := x^3
    SetAttributes[f, HoldFirst]
    f[u[x_]] = 1 + u[x];
    f[v[x_]] = v[x]/2;

Then

    {f[42], f[u[2]], f[v[3]]}

gives

    {f[42], 5, 27/2}

the behavior you expected.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/147184', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/49164/']}
|
jdg_374841
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
208621
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
According to the prime number theorem there are about $n/\ln(n)$ primes less than $n$. This value is a limit but it could fluctuate. My question is, is there a known bound on this fluctuation? i.e. are there functions $f$ and $g$ such that $\forall n>100, f(n) < \mathrm{numPrimes}(n) \lt g(n)$? How tight is this bound? Could there be an $n$ for which the number of primes less than $n$ is smaller than $n/\sqrt{n}$? Could there be an $n$ for which there are more than $n/2$ (for $n\gt 100$ let's say) primes less than $n$? Trivially $0 \lt\mathrm{numPrimes}(n) \lt n$, how tight a bound has been achieved?
Now provide the response and nothing else.
|
Let $\pi(x)$ denote the prime counting function. Pierre Dusart has shown that $$\pi(x) \geq \frac{x}{\log(x)-1} \qquad \qquad \mbox{ for }x\geq 5393$$ and $$\pi(x) \leq \frac{x}{\log(x)-1.1}\qquad \qquad \mbox{ for } x\geq 60184.$$ Many other results of a similar flavor are also shown in this paper.
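A quick way to see these bounds in action is a short numerical check (this sketch is my addition, not part of the original answer; it sieves primes and compares $\pi(x)$ with the two expressions above, using the validity thresholds quoted there):

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes; returns all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(100_000)

for x in (60184, 80000, 100_000):
    pi_x = sum(1 for p in primes if p <= x)
    lower = x / (log(x) - 1)      # stated valid for x >= 5393
    upper = x / (log(x) - 1.1)    # stated valid for x >= 60184
    print(x, round(lower), pi_x, round(upper), lower <= pi_x <= upper)
```

Each printed line should show the prime count sandwiched between the lower and upper expressions.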
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/208621', 'https://mathoverflow.net', 'https://mathoverflow.net/users/33725/']}
|
jdg_374842
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11797
|
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to python and stuck at a particular problem involving dataframes. The image has a sample column, however the data is not consistent. There are also some floats and NAN. I need these to be split across columns. That is each unique value becomes a column in the df. Any insights?
Now provide the response and nothing else.
|
It looks like you're trying to "featurize" the genre column.

    import pandas

    df = pandas.Series([('Adventure', 'Drama', 'Fantasy'),
                        ('Comedy', 'Family'),
                        ('Drama', 'Comedy', 'Romance'),
                        (['Drama']),
                        (['Documentary']),
                        ('Adventure', 'Biography', 'Drama', 'Thriller')]
                       ).apply(frozenset).to_frame(name='genre')

    for genre in frozenset.union(*df.genre):
        df[genre] = df.apply(lambda _: int(genre in _.genre), axis=1)

The output:

| row | genre | Romance | Documentary | Thriller | Biography | Family | Drama | Comedy | Adventure | Fantasy |
|-----|-------|---------|-------------|----------|-----------|--------|-------|--------|-----------|---------|
| 0 | (Drama, Adventure, Fantasy) | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 |
| 1 | (Comedy, Family) | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| 2 | (Drama, Comedy, Romance) | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
| 3 | (Drama) | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 4 | (Documentary) | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 5 | (Drama, Biography, Adventure, Thriller) | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 |
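A shorter alternative (a sketch of my own, not the original answer's approach; it assumes df['genre'] holds iterables of genre strings as built above) is to let pandas create the indicator columns directly:

```python
# Assumes df['genre'] contains sets/lists of genre names, as in the snippet above.
dummies = df['genre'].apply('|'.join).str.get_dummies(sep='|')
df = df.join(dummies)
print(df.head())
```

str.get_dummies does the one-column-per-unique-value expansion in a single call, which tends to be faster than a per-genre apply on large frames.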
|
{}
|
{'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/11797', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/10345/']}
|
jdg_374843
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29446844
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for solutions to speed up a function I have written to loop through a pandas dataframe and compare column values between the current row and the previous row. As an example, this is a simplified version of my problem: User Time Col1 newcol1 newcol2 newcol3 newcol40 1 6 [cat, dog, goat] 0 0 0 01 1 6 [cat, sheep] 0 0 0 02 1 12 [sheep, goat] 0 0 0 03 2 3 [cat, lion] 0 0 0 04 2 5 [fish, goat, lemur] 0 0 0 05 3 9 [cat, dog] 0 0 0 06 4 4 [dog, goat] 0 0 0 07 4 11 [cat] 0 0 0 0 At the moment I have a function which loops through and calculates values for ' newcol1 ' and ' newcol2 ' based on whether the ' User ' has changed since the previous row and also whether the difference in the ' Time ' values is greater than 1. It also looks at the first value in the arrays stored in ' Col1 ' and ' Col2 ' and updates ' newcol3 ' and ' newcol4 ' if these values have changed since the previous row. Here's the pseudo-code for what I'm doing currently (since I've simplified the problem I haven't tested this but it's pretty similar to what I'm actually doing in ipython notebook): def myJFunc(df):... #initialize jnum counter... jnum = 0;... #loop through each row of dataframe (not including the first/zeroeth)... for i in range(1,len(df)):... #has user changed?... if df.User.loc[i] == df.User.loc[i-1]:... #has time increased by more than 1 (hour)?... if abs(df.Time.loc[i]-df.Time.loc[i-1])>1:... #update new columns... df['newcol2'].loc[i-1] = 1;... df['newcol1'].loc[i] = 1;... #increase jnum... jnum += 1;... #has content changed?... if df.Col1.loc[i][0] != df.Col1.loc[i-1][0]:... #record this change... df['newcol4'].loc[i-1] = [df.Col1.loc[i-1][0], df.Col2.loc[i][0]];... #different user?... elif df.User.loc[i] != df.User.loc[i-1]:... #update new columns... df['newcol1'].loc[i] = 1; ... df['newcol2'].loc[i-1] = 1;... #store jnum elsewhere (code not included here) and reset jnum... jnum = 1; I now need to apply this function to several million rows and it's impossibly slow so I'm trying to figure out the best way to speed it up. I've heard that Cython can increase the speed of functions but I have no experience with it (and I'm new to both pandas and python). Is it possible to pass two rows of a dataframe as arguments to the function and then use Cython to speed it up or would it be necessary to create new columns with " diff " values in them so that the function only reads from and writes to one row of the dataframe at a time, in order to benefit from using Cython? Any other speed tricks would be greatly appreciated! (As regards using .loc, I compared .loc, .iloc and .ix and this one was marginally faster so that's the only reason I'm using that currently) (Also, my User column in reality is unicode not int, which could be problematic for speedy comparisons)
Now provide the response and nothing else.
|
I was thinking along the same lines as Andy, just with groupby added, and I think this is complementary to Andy's answer. Adding groupby is just going to have the effect of putting a NaN in the first row whenever you do a diff or shift. (Note that this is not an attempt at an exact answer, just to sketch out some basic techniques.)

    df['time_diff'] = df.groupby('User')['Time'].diff()
    df['Col1_0'] = df['Col1'].apply(lambda x: x[0])
    df['Col1_0_prev'] = df.groupby('User')['Col1_0'].shift()

       User  Time                 Col1  time_diff Col1_0 Col1_0_prev
    0     1     6     [cat, dog, goat]        NaN    cat         NaN
    1     1     6         [cat, sheep]          0    cat         cat
    2     1    12        [sheep, goat]          6  sheep         cat
    3     2     3          [cat, lion]        NaN    cat         NaN
    4     2     5  [fish, goat, lemur]          2   fish         cat
    5     3     9           [cat, dog]        NaN    cat         NaN
    6     4     4          [dog, goat]        NaN    dog         NaN
    7     4    11                [cat]          7    cat         dog

As a followup to Andy's point about storing objects, note that what I did here was to extract the first element of the list column (and add a shifted version also). Doing it like this you only have to do an expensive extraction once and after that can stick to standard pandas methods.
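From those helper columns the flag columns can usually be filled without a Python-level loop. The following is only a sketch under one possible reading of the question's rules (newcol1 marks the row where a "break" starts, newcol2 the row just before it); handling of the very first row may need adjusting to match the original loop exactly:

```python
# Assumption: a "break" happens when the user changes (time_diff is NaN after
# the groupby diff) or the within-user time gap exceeds 1 hour.
is_break = df['time_diff'].isna() | (df['time_diff'].abs() > 1)

df['newcol1'] = is_break.astype(int)                              # row where the break starts
df['newcol2'] = is_break.shift(-1, fill_value=False).astype(int)  # row just before the break
```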
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29446844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4718247/']}
|
jdg_374844
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25022016
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to get all the file names from repository using the GitHub API? I'm currently trying to tinker this using PyGithub , but I'm totally ok with manually doing the request as long as it works. My algorithm so far is: Get the user repo names Get the user repo that matches a certain description ??? get repo file names?
Now provide the response and nothing else.
|
This will have to be relative to a particular commit, as some files may be present in some commits and absent in others, so before you can look at files you'll need to use something like List commits on a repository:

    GET /repos/:owner/:repo/commits

If you're just interested in the latest commit on a branch you can set the sha parameter to the branch name:

    sha    string    SHA or branch to start listing commits from.

Once you have a commit hash, you can inspect that commit:

    GET /repos/:owner/:repo/git/commits/:sha

which should return something like this (truncated from GitHub's documentation):

    {
      "sha": "...",
      "tree": {
        "url": "https://api.github.com/repos/octocat/Hello-World/git/trees/691272480426f78a0138979dd3ce63b77f706feb",
        "sha": "691272480426f78a0138979dd3ce63b77f706feb"
      },
      "...": "..."
    }

Look at the hash of its tree, which is essentially its directory contents. In this case, 691272480426f78a0138979dd3ce63b77f706feb. Now we can finally request the contents of that tree:

    GET /repos/:owner/:repo/git/trees/:sha

The output from GitHub's example is

    {
      "sha": "9fb037999f264ba9a7fc6274d15fa3ae2ab98312",
      "url": "https://api.github.com/repos/octocat/Hello-World/trees/9fb037999f264ba9a7fc6274d15fa3ae2ab98312",
      "tree": [
        {
          "path": "file.rb",
          "mode": "100644",
          "type": "blob",
          "size": 30,
          "sha": "44b4fc6d56897b048c772eb4087f854f46256132",
          "url": "https://api.github.com/repos/octocat/Hello-World/git/blobs/44b4fc6d56897b048c772eb4087f854f46256132"
        },
        {
          "path": "subdir",
          "mode": "040000",
          "type": "tree",
          "sha": "f484d249c660418515fb01c2b9662073663c242e",
          "url": "https://api.github.com/repos/octocat/Hello-World/git/blobs/f484d249c660418515fb01c2b9662073663c242e"
        },
        {
          "path": "exec_file",
          "mode": "100755",
          "type": "blob",
          "size": 75,
          "sha": "45b983be36b73c0788dc9cbcb76cbb80fc7bb057",
          "url": "https://api.github.com/repos/octocat/Hello-World/git/blobs/45b983be36b73c0788dc9cbcb76cbb80fc7bb057"
        }
      ]
    }

As you can see, we have some blobs, which correspond to files, and some additional trees, which correspond to subdirectories. You may want to do this recursively.
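As a rough illustration of that flow in Python (a sketch I'm adding, not part of the original answer: it shortcuts the intermediate commit lookup by reading the tree sha straight from the list-commits response, relies on the documented recursive=1 parameter of the trees endpoint, and uses placeholder owner/repo/branch names):

```python
import requests

OWNER, REPO, BRANCH = "octocat", "Hello-World", "master"  # placeholders
API = "https://api.github.com"

# 1. Latest commit on the branch (the sha parameter accepts a branch name).
commits = requests.get(f"{API}/repos/{OWNER}/{REPO}/commits",
                       params={"sha": BRANCH, "per_page": 1}).json()
tree_sha = commits[0]["commit"]["tree"]["sha"]

# 2. Fetch the tree recursively so nested paths come back in one call.
tree = requests.get(f"{API}/repos/{OWNER}/{REPO}/git/trees/{tree_sha}",
                    params={"recursive": "1"}).json()

# 3. Keep only blobs (files); entries of type "tree" are directories.
filenames = [entry["path"] for entry in tree["tree"] if entry["type"] == "blob"]
print(filenames)
```

For very large repositories the response may set a "truncated" flag, in which case walking subtrees individually (as the answer suggests) is the fallback.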
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25022016', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2980299/']}
|
jdg_374845
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2497627
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in creating a Spring standalone application that will run and wait to receive messages from an ActiveMQ queue using Spring JMS. I have searched a lot of places and cannot find a consistent way of implementing the main method for such a standalone application. There appear to be few examples of Spring standalone applications. I have looked at Tomcat, JBoss, ActiveMQ and other examples from around the web but I have not come to a conclusion, so: what is the best practice for implementing a main method for a Java application (specifically Spring with JMS)?

Update: Here's an example from http://forum.springsource.org/showthread.php?t=48197. Is this the best way of doing this?

    public static void main(String args[]) {
        try {
            ApplicationContext context = new ClassPathXmlApplicationContext("spring.xml");
            . . . . .
            Object lock = new Object();
            synchronized (lock) {
                lock.wait();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
Now provide the response and nothing else.
|
When using Spring JMS you already use components/beans in your configuration that get auto-started and will stay alive (subscribe and read from queue/topic) until you stop the application. To start and keep the application running, loading the application context should therefore be enough. Good practice, though, is to also call registerShutdownHook, so on an application halt (i.e. via Ctrl+C in the console) all your beans are gracefully shut down and disposed :)

    public static void main(String args[]) {
        AbstractApplicationContext context = new ClassPathXmlApplicationContext("spring.xml");
        context.registerShutdownHook();
    }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2497627', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23133/']}
|
jdg_374846
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48865979
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My dev server is running on node live-server. My prod will be a LAMP server. I have normalize.css inside my node_modules folder. In my index.html I have

    <link rel="stylesheet" href="/node_modules/normalize.css">
    <link rel="stylesheet" href="css/styles.css">

I don't want files linked to the node_modules directory. I want something like

    <link rel="stylesheet" href="css/normalize.css">

Is this doable? I have a lot of other css and js files like this.

Update 1: Let me clarify, this app is not a node app nor a php app. It has only html, css and js files. Everything runs on the client side, but we want to leverage the latest client-side dev tools for JS and CSS and upload the final build to the prod server.
Now provide the response and nothing else.
|
There are lots of possible solutions. I can suggest using a task runner (gulp) which will copy these static files to a public directory like dist/assets.

Install gulp on your machine:

    npm install --save-dev gulp

Create a gulpfile.js file in your root directory with the following code:

    var gulp = require('gulp');

    gulp.task('default', function () {
        gulp.src('/node_modules/normalize.css')
            .pipe(gulp.dest('./dist/assets'));
    });

Run gulp:

    gulp

Update your index.html as below:

    <link rel="stylesheet" href="/dist/assets/normalize.css">

Please go through the documentation of Gulp for more information. You can also try Grunt or Webpack.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48865979', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/670702/']}
|
jdg_374847
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4646827
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking for a way to trigger a piece of code when a console application is manually closed (users closes window). Been trying with: AppDomain.CurrentDomain.ProcessExit += new EventHandler(CurrentDomain_ProcessExit); but the above doesn't work if manually closed. Is there any ways to use a .Net call for this or do I need to import the Kernel dll and do it that way?
Now provide the response and nothing else.
|
This code works to catch the user closing the console window:

    using System;
    using System.Runtime.InteropServices;

    class Program {
        static void Main(string[] args) {
            handler = new ConsoleEventDelegate(ConsoleEventCallback);
            SetConsoleCtrlHandler(handler, true);
            Console.ReadLine();
        }

        static bool ConsoleEventCallback(int eventType) {
            if (eventType == 2) {
                Console.WriteLine("Console window closing, death imminent");
            }
            return false;
        }
        static ConsoleEventDelegate handler;   // Keeps it from getting garbage collected

        // Pinvoke
        private delegate bool ConsoleEventDelegate(int eventType);
        [DllImport("kernel32.dll", SetLastError = true)]
        private static extern bool SetConsoleCtrlHandler(ConsoleEventDelegate callback, bool add);
    }

Beware of the restrictions. You have to respond quickly to this notification, you've got 5 seconds to complete the task. Take longer and Windows will kill your code unceremoniously. And your method is called asynchronously on a worker thread, the state of the program is entirely unpredictable so locking is likely to be required. Do make absolutely sure that an abort cannot cause trouble. For example, when saving state into a file, do make sure you save to a temporary file first and use File.Replace().
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4646827', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/526457/']}
|
jdg_374848
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
538760
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a hobbyist trying to diagnose a circuit board for a tape deck. One piece of advice I was given was to "check for any bad solder connections". On this circuit board there are around 200 connections. All of these could be checked individually, but that seems practically infeasible. Is there a practical way to comprehensively check all connections on a circuit board? If not, what is a diagnostic alternative that will achieve a similar result?
Now provide the response and nothing else.
|
Give it a visual inspection with lots of light, and use a head-mounted magnifier or a bino microscope if your eyes are not perfect. Get light from the sides as well as head on. You can inspect a couple hundred joints in a minute or two with experience. Knowing what to look for is the trick- you're looking for joints that have the lead with a hair around it showing they've broken free or pads that have a hair mark to the trace showing the pad has lifted. On dense SMT boards look for joints that are not smooth and rounded with an appropriate amount of solder. Pay particular attention to heavy components, components that interact with the user's ham-fisted motions and around fixing points and connectors. I would not suggest blindly re-soldering connections, you're more likely to cause problems than to fix them and you'll make it impossible to repair later.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/538760', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/143805/']}
|
jdg_374849
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
224276
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There is a known expansion for the Dirac delta function in the interval $(-1, 1)$ in terms of the Legendre polynomials as $$\delta(x) = \sum_{k = 0}^{\infty} (-1)^k \frac{(4k + 1) (2k)!}{2^{2k + 1} (k!)^2} P_{2k}(x).$$ I would like to verify this identity numerically at the endpoint, i.e., $x = 1$. We know that $\delta(1) = 0$, so inserting $x = 1$ in the right-hand side of the above should result in zero. If we add the first two hundred terms of the sum using Mathematica, the result is $7.994$. As we add more terms the value of the sum increases; for example, for the first three hundred terms we obtain $9.784$, and it becomes larger as we add more terms. Thus, it seems that the identity is not valid at $x = 1$. However, if we set the number of terms to infinity, as in

    N[Sum[(-1)^k (4 k + 1) (2 k)! LegendreP[2 k, 1]/(2^(2 k + 1) k! k!), {k, 0, Infinity}]]

it now returns $0$. How can I understand Mathematica's behavior?
Now provide the response and nothing else.
|
As I understood you start from the completeness relation $$\sum_{\ell=0}^\infty \frac{2\ell + 1}{2} P_\ell(x)P_\ell(y) = \delta(x-y)$$ and use that $$P_n(0) = \begin{cases} \frac{(-1)^{m}}{4^m} \tbinom{2m}{m} = \frac{(-1)^{m}}{2^{2m}} \frac{(2m)!}{\left(m!\right)^2} & \text{for} \quad n = 2m \\ 0 & \text{for} \quad n = 2m+1 \,. \end{cases}$$ The convergence is, however, not uniform:

    d[x_, n_] := Sum[(-1)^k (4 k + 1) (2 k)! LegendreP[2 k, x]/(2^(2 k + 1) k! k!), {k, 0, n}]
    Plot[d[x, #], {x, -1, 1}, PlotRange -> All] & /@ Range[5, 25, 5]
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/224276', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/-1/']}
|
jdg_374850
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22352
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can there be such a thing as a classification of classification theorems?
Now provide the response and nothing else.
|
In a sense, yes. The object of descriptive set theory is to understand "definable" sets of reals (as opposed to arbitrary sets). So, for example, we concern ourselves with Borel sets, or their continuous images, or the complements of those images, or countable unions of such things, etc. Also, we study other spaces, not just the set of reals. The basic setting is that of Polish spaces , i.e., complete metrizable spaces with a countable dense subset. This of course includes ${\mathbb R}$, but many other spaces that appear in practice are here. It turns out that many classification problems that occur in practice have the form: We have a Polish space $X$ and an equivalence relation $E$ on $X$ (typically, $E$ is either Borel as a subset of $X\times X$, or the continuous image of a Borel set). We then study the complexity of the quotient space $X/E$. This can be measured in several ways. For example: Can we pick in a "Borel fashion" a representative of each equivalence class? If not, can we in a sense approximate the graphs of "choice functions" even if we cannot actually single out one of them? It turns out that we can prove results that say that certain classification problems are strictly harder than certain others, and we can study the "partial ordering" of classification problems according to complexity. To be a bit more precise, consider the problem: Given two $5\times 5$ matrices with real entries, are they similar? This is a classification problem: We want to pair matrices that are similar, and the question is how hard is it to decide whether they are paired. It turns out that the Jordan canonical form of two matrices is the same iff they are similar, and the Jordan form can be handily coded by a real number, so the problem reduces to "can we identify if two reals are equal?" There are two ways of measuring the complexity of this problem. One is in terms of complexity theory, and then we need to talk about how the reals are "given" to us. The other way is the descriptive set theoretic one: We have a Polish space: ${\mathbb R}$; and an equivalence relation: equality. This is as simple as a problem gets. Another problem is: Given two auto-homeomorphisms of the unit square, when are they conjugate? This is a harder problem, meaning, there is no "Borel" map that to an auto-homeomorphism assigns a real number so that two homeomorphisms are conjugate iff they have been assigned the same real. A good place to learn about this (and about the wide variety of examples that this approach covers) is "A survey of current and recent work on the theory of Borel equivalence relations" by G. Hjorth. It is available from his webpage, at http://www.math.ucla.edu/~greg/research.html
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/22352', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_374851
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11256580
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a C++ DLL in which the following functions are exported.

    double getDouble(std::wstring filename, std::string ID, status &stCode);
    int getInt(std::wstring filename, std::string ID, status &stCode);
    float getFloat(std::wstring filename, std::string ID, status &stCode);
    string getString(std::wstring filename, std::string ID, status &stCode);
    int* getIntArray(std::wstring filename, std::string ID, status &stCode);
    float* getFloatArray(std::wstring filename, std::string ID, status &stCode);
    string* getStringArray(std::wstring filename, std::string ID, status &stCode);

where status is an enum type. Now I want to use this DLL in my C#.NET app. Can anyone tell me how to declare the respective methods in C# and make calls to these methods? Thanks in advance.
Now provide the response and nothing else.
|
This question ranks high with Google but I consider the given answer to be way too complicated. As pointed out in other answers, the desired functionality can be achieved using ArrayAdapter with a very easy trick. You can override the getView method of the ArrayAdapter:

    ArrayAdapter adapter = new ArrayAdapter(context, android.R.layout.simple_list_item_2, android.R.id.text1, list) {
        @Override
        public View getView(int position, View convertView, ViewGroup parent) {
            View view = super.getView(position, convertView, parent);
            TextView text1 = (TextView) view.findViewById(android.R.id.text1);
            TextView text2 = (TextView) view.findViewById(android.R.id.text2);
            text1.setText(persons.get(position).getName());
            text2.setText(persons.get(position).getAge());
            return view;
        }
    };

If you didn't notice: the trick is to supply android.R.id.text1 as a (principally unnecessary) parameter, otherwise the call to super will cause an exception. Also, this solution does not make use of TwoLineListItem, which was deprecated in API 17.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11256580', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/457181/']}
|
jdg_374852
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10496938
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I often see in C# code the following convention:

    some_type val;
    val = something;

like

    DataTable dt;
    dt = some_function_returning_datatable();

or

    DataTable dt = new DataTable();
    dt = some_function_returning_datatable();

instead of

    some_type val = something;
    DataTable dt = some_function_returning_datatable();

I initially assumed that this was a habit left over from the days when you had to declare all local variables at the top of the scope. But I've learned not to dismiss so quickly the habits of veteran developers. (In my 3rd code section will it not be wastage of memory when we assign dt first with new and then from function?) So, is there a good reason for declaring in one line, and assigning afterwards?
Now provide the response and nothing else.
|
In my 3rd code section will it not be wastage of memory when we assign dt first with new and then from function

Yes, it will indeed. Only relatively minor - creating a useless DataTable object - but still wasteful and unclear.

So, is there a good reason for declaring in one line, and assigning afterwards?

Only if you don't have the value immediately. For example:

    string x;
    if (someCondition) {
        // Do some work
        x = someResult;
    } else {
        // Do some other work
        x = someOtherResult;
    }

Often this can be improved either using a conditional operator or extracting that code into a method. Sometimes it doesn't work out that way though. For simple situations of:

    Foo x = SomeInitializationWithNoUsefulSideEffects();
    x = SomeUsefulValue();

Or

    Foo x;
    x = SomeUsefulValue();

You should absolutely refactor to

    Foo x = SomeUsefulValue();

One other interesting situation where the declaration point does make a difference is in captured variables, although normally not the way it's intended:

    int x;
    for (int i = 0; i < 10; i++) {
        x = SomeFunction();
        actions.Add(() => Console.WriteLine(x));
    }

vs

    for (int i = 0; i < 10; i++) {
        int x = SomeFunction();
        actions.Add(() => Console.WriteLine(x));
    }

In the first snippet, each delegate will capture the same variable, so they would all effectively see the last value returned from SomeFunction. In the second snippet, each delegate will capture a separate "instance" of x.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10496938', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1011959/']}
|
jdg_374853
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24388
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am estimating sales with data on customer and store locations and attributes using a Huff-style model, where sales decay with drive time and increase with attractiveness of the store. One hypothetical instance of the problem is illustrated below. The 3 pushpins represent stores and the white flag represents the site where the customers live.** The size of the pins represents the attractiveness of the store (like floor space). The problem I am having is that I would expect red store's sales at the site to be lower than the the green store's even though they are just as far away and have identical attractiveness, because the purple store is somewhat in between the the red store and the site. I would like to translate this intuition into rigorous (but tractable) math so that I add it into my statistical model. I am also having trouble figuring out what this is called in the literature (other than the n-body gravity problem in physics). ** For the daltonic folks, the red store is in the upper left corner. The green store is in the bottom right. Purple store is to the left of the site.
Now provide the response and nothing else.
|
One of the assumptions of the Huff model (which we call multinomial logit in economics) is Independence of Irrelevant Alternatives. IIA says that the ratio of red store to green store sales is independent of the existence and characteristics of all other alternatives --- it only depends on red and green store characteristics. Your intuition is that this assumption should be violated in this application. What you want is one of the alternatives to multinomial logit which relaxes the IIA assumption. There are a number of these, including multinomial probit, nested logit, and logit models using the generalized extreme value distribution, sometimes called generalized logit. There are large literatures in this in both Industrial Organization and in Marketing. Though these models are all defined at the individual level, they can be estimated with data at the market level (like total sales at each store in your example). There is a nice free online book by Kenneth Train . Actually, there are two .
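For concreteness, here is the standard Huff / multinomial-logit share formula the answer is referring to, with the IIA property made explicit (the notation is mine, not the original poster's): the share of customers at site $i$ captured by store $j$ is usually written

$$P_{ij} \;=\; \frac{A_j^{\alpha}\, T_{ij}^{-\beta}}{\sum_{k} A_k^{\alpha}\, T_{ik}^{-\beta}}, \qquad\text{so that}\qquad \frac{P_{i,\text{red}}}{P_{i,\text{green}}} \;=\; \frac{A_{\text{red}}^{\alpha}\, T_{i,\text{red}}^{-\beta}}{A_{\text{green}}^{\alpha}\, T_{i,\text{green}}^{-\beta}},$$

where $A_j$ is store attractiveness and $T_{ij}$ the drive time. The ratio on the right contains nothing about the purple store, which is exactly the IIA restriction the intuition in the question violates, and which nested logit, multinomial probit, or generalized extreme value models relax.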
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/24388', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/7071/']}
|
jdg_374854
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7981840
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to completely remove elements, based on the contents of an attribute, using python's lxml. Example:

    import lxml.etree as et

    xml = """<groceries>
      <fruit state="rotten">apple</fruit>
      <fruit state="fresh">pear</fruit>
      <fruit state="fresh">starfruit</fruit>
      <fruit state="rotten">mango</fruit>
      <fruit state="fresh">peach</fruit>
    </groceries>"""

    tree = et.fromstring(xml)

    for bad in tree.xpath("//fruit[@state='rotten']"):
        # remove this element from the tree

    print et.tostring(tree, pretty_print=True)

I would like this to print:

    <groceries>
      <fruit state="fresh">pear</fruit>
      <fruit state="fresh">starfruit</fruit>
      <fruit state="fresh">peach</fruit>
    </groceries>

Is there a way to do this without storing a temporary variable and printing to it manually, as:

    newxml = "<groceries>\n"
    for elt in tree.xpath("//fruit[@state='fresh']"):
        newxml += et.tostring(elt)
    newxml += "</groceries>"
Now provide the response and nothing else.
|
Use the remove method of an xmlElement:

    tree = et.fromstring(xml)

    for bad in tree.xpath("//fruit[@state='rotten']"):
        bad.getparent().remove(bad)     # here I grab the parent of the element to call the remove directly on it

    print et.tostring(tree, pretty_print=True, xml_declaration=True)

If I had to compare with the @Acorn version, mine will work even if the elements to remove are not directly under the root node of your xml.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7981840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/702948/']}
|
jdg_374855
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50802688
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't seem to get the XmlSerializer attributes to work. I have public class DriveData{ public string Model { get; set; } public string Type { get; set; } public int SizeGB { get; set; } public string SerialNumber { get; set; } public bool IsOK { get; set; }} static List<DriveData> DiskDrives { get; set; } XmlSerializer serializer = new XmlSerializer(typeof(List<DriveData>));FileStream xmlFile = File.Create("DiskDrives.xml");serializer.Serialize(xmlFile, DiskDrives);xmlFile.Close(); What I want is: <?xml version="1.0" encoding="utf-8" ?><HardDrives> <HardDrive> <Model>Seagate1</Model> ...</HardDrive><HardDrive> ...</HardDrive> But what I'm getting is: <?xml version="1.0"?><ArrayOfDriveData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <DriveData> <Model>Seagate1</Model> <Type>SATA</Type> <SizeGB>999</SizeGB> ... </DriveData> <DriveData> ... </DriveData></ArrayOfDriveData> [XmlElement(ElementName = "HardDrives")] on my List<DriveData> does nothing. [XmlElement(ElementName = "HardDrive")] on my DriveData class causes a build error. How do I set the root and the <HardDrive> elements names to what I want?
Now provide the response and nothing else.
|
You can make moveVertically a function. Please consider the code below:

    const moveVertically = (y) => keyframes`
      0% { transform : translateY(0px) }
      100% { transform : translateY(${y}px) }
    `;

    const BallAnimation = styled.g`
      animation : ${props => moveVertically(props.y)} ${props => props.time}s linear
    `;

Here you have y in the props of BallAnimation. So you can extract it and pass it to the moveVertically function, which accepts the y value as a parameter.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50802688', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9926246/']}
|
jdg_374856
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10255280
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to pass a pointer to an array of struct. This code should create an array of struct, writes to the vars in struct, and then prints them out (which works). Then I want to pass a pointer of that array of struct to another function and print out the array of struts. #define PORT_NUMBER 5100#define MAX_CLIENTS 5#include <sys/types.h>#include <sys/socket.h>#include <netinet/in.h>#include <arpa/inet.h>#include <stdio.h>#include <stdlib.h>#include <unistd.h>#include <errno.h>#include <string.h>#include <pthread.h>typedef struct thread_args { int client_number; int connected; char client_name[1024];} client;void pass_func(client* clients[])int main(){ struct thread_args clients[MAX_CLIENTS]; int i; for(i =0; i < MAX_CLIENTS; i++) { clients[i].client_number=i; strcpy(clients[i].client_name, "BOBBY"); } for(i =0; i < MAX_CLIENTS; i++) { printf("%d | %s\n", clients[i].client_number=i, clients[i].client_name); } printf("\n\n"); pass_func(&clients);}void pass_func(client* clients[]){ int i; for(i =0; i < MAX_CLIENTS; i++) { printf("%d | %s\n", clients[i]->client_number=i, clients[i]->client_name); }} And this is the output: $ gcc TEST.c -lpthread -o TEST.outTEST.c: In function ‘main’:TEST.c:41:3: warning: passing argument 1 of ‘pass_func’ from incompatible pointer type [enabled by default]TEST.c:22:6: note: expected ‘struct thread_args **’ but argument is of type ‘struct thread_args (*)[5]’$ ./TEST.out 0 | BOBBY1 | BOBBY2 | BOBBY3 | BOBBY4 | BOBBYSegmentation fault I've done about an hour of research and can't figure out why this is not working. Most of the examples I find are for C++, but not C. (And yes I know many of the header files I've included are not necessary for this code; this is just a segment of my original code.)
Now provide the response and nothing else.
|
pass_func expects an array of pointers to client void pass_func(client* clients[]); but you pass it pass_func(&clients); a pointer to an array of client s. So the client clients[i] is interpreted as a pointer to client in pass_func , but of course the bit-pattern is not a valid pointer to client , hence you're trying to access memory you shouldn't and get a segfault. Either pass an array of pointers, or declare pass_func void pass_func(client *clients); (and then pass pass_func(clients) without the address-operator in main). Your compiler warned you about passing an incompatible pointer type, though.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10255280', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1347705/']}
|
jdg_374857
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6084718
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What HTML tags would you use in for this kind of information: Name : John Age : 40 City : Frankfurt Country : Germany Status : Active I like to use the <label> tag within p tags here, which sounds like good semantics. However, this information is NOT inside a form. And the W3C says: label: The label element represents a caption for a form control. So what should I use then? I can clearly use <strong> , but this has no 'strong importance' and does not feel correct. UPDATE After noticing the DEFINITION LIST suggestion, I'd like to ask if this would still be best if I have several pieces of this information on the same page? It would mean I define "name" with a different "definition" several times. Eg: Name : John Age : 40 City : Frankfurt Country : Germany Status : Active Name : Maria Age : 30 City : Bonn Country : Germany Status : Inactive Is <dl> still the best way to go?
Now provide the response and nothing else.
|
A definition list could be suitable here:

    <dl>
      <dt>Name</dt>
      <dd>John</dd>
      <dt>Age</dt>
      <dd>40</dd>
      <dt>City</dt>
      <dd>Frankfurt</dd>
      <dt>Country</dt>
      <dd>Germany</dd>
      <dt>Status</dt>
      <dd>Active</dd>
    </dl>

http://www.w3.org/TR/html401/struct/lists.html#h-10.3 - Definition lists, created using the DL element, generally consist of a series of term/definition pairs (although definition lists may have other applications).

However, just because you want bold formatting doesn't mean you are restricted to <strong> or <b>. You can use the non-semantic <span> and simply apply the formatting with CSS. In addition, you can use CSS to add the colons, rather than putting them in your markup:

    dt:after { content: ":"; }

Perhaps to help clear up some confusion about its proper usage, it looks like HTML5 will be referring to this tag as a description list.

http://www.w3.org/TR/html5/grouping-content.html#the-dl-element - The dl element represents an association list consisting of zero or more name-value groups (a description list).
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6084718', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_374858
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
68642
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to prove the following statement: If $f \in L^1$, then $\hat f$ is uniformly continuous. The argument given is as follows : $$|\hat f (\xi +h )-\hat f (\xi)| = \left| \int f(x) (e^{-2 \pi i x \cdot (\xi+h)}- e^{-2 \pi i x \cdot (\xi)})\mathrm dx \right| \leq 2 \|f\|_{L^1}$$ Now I suppose we have to use the Dominated Convergence Theorem, but I am unable to see to what sequence of functions we apply the theorem to. Any help is greatly appreciated.
Now provide the response and nothing else.
|
I don't know if your question has been answered in full. For completeness, we apply DCT for the reasons you mentioned in your post. The punchline of the story is: $$\begin{align}\left|\widehat{f}(\xi + h) - \widehat{f}(\xi)\right| &= \left| \int f(x) \left(e^{-2 \pi i x \cdot (\xi + h)} - e^{-2 \pi i \xi \cdot x} \right)dx \right|\\&\leq \int |f(x)| \left|e^{2 \pi i x \cdot h} - 1 \right| dx\end{align}$$ which tends to zero as $h \to 0$, and this is enough to show uniform continuity.
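To spell out the DCT step (my completion of the argument, not part of the original answer): the integrand is dominated by the integrable function $2|f(x)|$, because $\left|e^{2\pi i x\cdot h}-1\right|\le 2$, and it tends to $0$ pointwise as $h\to 0$. Dominated convergence therefore gives

$$\int |f(x)|\,\left|e^{2\pi i x\cdot h}-1\right|\,dx \;\longrightarrow\; 0 \qquad (h\to 0),$$

and since this bound does not involve $\xi$ at all, the modulus of continuity is the same at every point, which is exactly uniform continuity.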
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/68642', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1127/']}
|
jdg_374859
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3382210
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While monads are represented in Haskell using the bind and return functions, they can also have another representation using the join function, such as discussed here . I know the type of this function is M(M(X))->M(X), but what does this actually do?
Now provide the response and nothing else.
|
Actually, in a way, join is where all the magic really happens-- (>>=) is used mostly for convenience. All Functor -based type classes describe additional structure using some type. With Functor this extra structure is often thought of as a "container", while with Monad it tends to be thought of as "side effects", but those are just (occasionally misleading) shorthands--it's the same thing either way and not really anything special [0] . The distinctive feature of Monad compared to other Functor s is that it can embed control flow into the extra structure. The reason it can do this is that, unlike fmap which applies a single flat function over the entire structure, (>>=) inspects individual elements and builds new structure from that. With a plain Functor , building new structure from each piece of the original structure would instead nest the Functor , with each layer representing a point of control flow. This obviously limits the utility, as the result is messy and has a type that reflects the structure of flow control used. Monadic "side effects" are structures that have a few additional properties [1] : Two side effects can be grouped into one (e.g., "do X" and "do Y" become "do X, then Y"), and the order of grouping doesn't matter so long as the order of the effects is maintained. A "do nothing" side effect exists (e.g., "do X" and "do nothing" grouped is the same as just "do X") The join function is nothing more than that grouping operation: A nested monad type like m (m a) describes two side effects and the order they occur in, and join groups them together into a single side effect. So, as far as monadic side effects are concerned, the bind operation is a shorthand for "take a value with associated side effects and a function that introduces new side effects, then apply the function to the value while combining the side effects for each". [0]: Except IO . IO is very special. [1]: If you compare these properties to the rules for an instance of Monoid you'll see close parallels between the two--this is not a coincidence, and is in fact what that "just a monoid in the category of endofunctors, what's the problem?" line is referring to.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3382210', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/165495/']}
|
jdg_374860
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
313137
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my specific case, I have two kinds of "messages" that I need to retrieve and paginate. Let's omit the details, and just say that the first kind is in a model called Msg1 and the other is called Msg2. The fields of these two models are completely different; the only fields that are common to the two models are "date" and "title" (and of course, id). I can get Msg1.objects.all() and Msg2.objects.all(), but can I combine these two queries into one query, sort it by date, and paginate it? I need to preserve the lazy nature of the query. The trivial solution is to list(query) both queries and combine them in a python list, but this is inefficient for obvious reasons. I looked through the django references on models and db-api, but it doesn't seem that there is a way to combine queries of different models/tables into one.
Now provide the response and nothing else.
|
I would suggest that you use Model inheritance . Create a base model that contains date and title. Subclass Msg1 and Msg2 off it as described. Do all your queries (to fill a page) using the base model and then switch to the derived type at the last moment. The really great thing about inheritance is that django then allows you to use the base model in foreign keys from other models, so you can make your whole application more flexible. Under the hood it is just a table for the base model with a table per sub-model containing one-to-one keys.
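A minimal sketch of what that looks like (the model and field names here are illustrative, not taken from the question; this is Django's standard multi-table inheritance):

```python
from django.db import models

class BaseMessage(models.Model):
    # Common fields live on the base model; Django creates one table for these.
    date = models.DateTimeField()
    title = models.CharField(max_length=255)

class Msg1(BaseMessage):
    body = models.TextField()                                 # illustrative field

class Msg2(BaseMessage):
    attachment = models.FileField(upload_to='attachments/')   # illustrative field

# One lazy queryset over both kinds of message, sortable and paginable:
messages = BaseMessage.objects.order_by('-date')
# Hand `messages` to django.core.paginator.Paginator as usual; on an individual
# item, the related attributes .msg1 or .msg2 (whichever exists) reach the
# subclass-specific fields.
```

The queryset stays lazy and sorted in the database, which is what the question asked for; the cost is an extra join when you switch to the derived type.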
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/313137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/35364/']}
|
jdg_374861
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7217625
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my Java Desktop Application I have a TableView in which I want to have a column with CheckBoxes. I did find where this has been done http://www.jonathangiles.net/javafx/2.0/CellFactories/ but as the download is not available and because I don't know how soon Jonathan Giles will answer my email I thought I'd ask... How do I put a CheckBox in a cell of my TableView?
Now provide the response and nothing else.
|
You need to set a CellFactory on the TableColumn. For example: Callback<TableColumn<TableData, Boolean>, TableCell<TableData, Boolean>> booleanCellFactory = new Callback<TableColumn<TableData, Boolean>, TableCell<TableData, Boolean>>() { @Override public TableCell<TableData, Boolean> call(TableColumn<TableData, Boolean> p) { return new BooleanCell(); } }; active.setCellValueFactory(new PropertyValueFactory<TableData,Boolean>("active")); active.setCellFactory(booleanCellFactory);class BooleanCell extends TableCell<TableData, Boolean> { private CheckBox checkBox; public BooleanCell() { checkBox = new CheckBox(); checkBox.setDisable(true); checkBox.selectedProperty().addListener(new ChangeListener<Boolean> () { public void changed(ObservableValue<? extends Boolean> observable, Boolean oldValue, Boolean newValue) { if(isEditing()) commitEdit(newValue == null ? false : newValue); } }); this.setGraphic(checkBox); this.setContentDisplay(ContentDisplay.GRAPHIC_ONLY); this.setEditable(true); } @Override public void startEdit() { super.startEdit(); if (isEmpty()) { return; } checkBox.setDisable(false); checkBox.requestFocus(); } @Override public void cancelEdit() { super.cancelEdit(); checkBox.setDisable(true); } public void commitEdit(Boolean value) { super.commitEdit(value); checkBox.setDisable(true); } @Override public void updateItem(Boolean item, boolean empty) { super.updateItem(item, empty); if (!isEmpty()) { checkBox.setSelected(item); } } }
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7217625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/476218/']}
|
jdg_374862
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20205796
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to upload images from my computer to a website using go. Usually, I use a bash script that sends a file and a key to the server: curl -F "image"=@"IMAGEFILE" -F "key"="KEY" URL it works fine, but I'm trying to convert this request into my golang program. http://matt.aimonetti.net/posts/2013/07/01/golang-multipart-file-upload-example/ I tried this link and many others, but, for each code that I try, the response from the server is "no image sent", and I've no idea why. If someone knows what's happening with the example above.
Now provide the response and nothing else.
|
Here's some sample code. In short, you'll need to use the mime/multipart package to build the form. package mainimport ( "bytes" "fmt" "io" "mime/multipart" "net/http" "net/http/httptest" "net/http/httputil" "os" "strings")func main() { var client *http.Client var remoteURL string { //setup a mocked http client. ts := httptest.NewTLSServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { b, err := httputil.DumpRequest(r, true) if err != nil { panic(err) } fmt.Printf("%s", b) })) defer ts.Close() client = ts.Client() remoteURL = ts.URL } //prepare the reader instances to encode values := map[string]io.Reader{ "file": mustOpen("main.go"), // lets assume its this file "other": strings.NewReader("hello world!"), } err := Upload(client, remoteURL, values) if err != nil { panic(err) }}func Upload(client *http.Client, url string, values map[string]io.Reader) (err error) { // Prepare a form that you will submit to that URL. var b bytes.Buffer w := multipart.NewWriter(&b) for key, r := range values { var fw io.Writer if x, ok := r.(io.Closer); ok { defer x.Close() } // Add an image file if x, ok := r.(*os.File); ok { if fw, err = w.CreateFormFile(key, x.Name()); err != nil { return } } else { // Add other fields if fw, err = w.CreateFormField(key); err != nil { return } } if _, err = io.Copy(fw, r); err != nil { return err } } // Don't forget to close the multipart writer. // If you don't close it, your request will be missing the terminating boundary. w.Close() // Now that you have a form, you can submit it to your handler. req, err := http.NewRequest("POST", url, &b) if err != nil { return } // Don't forget to set the content type, this will contain the boundary. req.Header.Set("Content-Type", w.FormDataContentType()) // Submit the request res, err := client.Do(req) if err != nil { return } // Check the response if res.StatusCode != http.StatusOK { err = fmt.Errorf("bad status: %s", res.Status) } return}func mustOpen(f string) *os.File { r, err := os.Open(f) if err != nil { panic(err) } return r}
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20205796', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2190535/']}
|
jdg_374863
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46712486
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
my model class BaseModel(models.Model): CreatedDate = models.DateTimeField(auto_now_add=True, verbose_name="Oluşturulma Tarihi") ModifiedDate = models.DateTimeField(auto_now=True, verbose_name="Son Güncellenme tarihi") Isdeleted = models.BooleanField(verbose_name="Silindi", default=False)class Case(BaseModel): CaseNumber = models.CharField(max_length=14) Customer = models.ForeignKey(Customer) Title = models.ForeignKey(CaseTitle) CaseCategory = models.ForeignKey(CaseCategory, verbose_name="Kategori") Priority = models.ForeignKey(CasePriority) Status = models.ForeignKey(CaseStatus) Detail = models.TextField() Group = models.ForeignKey(Group) User = models.ForeignKey(User,related_name='User' ) AssignedUser = models.ForeignKey(User,related_name='AssignedUser') CloseDetail = models.TextField blank=True) i just want to give 2 foreign key this my model but error is Traceback (most recent call last): File "/usr/local/lib/python3.6/site-packages/django/db/models/fields/related.py", line 796, in __init__ to._meta.model_name AttributeError: 'ForeignKey' object has no attribute '_meta'During handling of the above exception, another exception occurred:Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 338, in execute django.setup() File "/usr/local/lib/python3.6/site-packages/django/__init__.py", line 27, in setup apps.populate(settings.INSTALLED_APPS) File "/usr/local/lib/python3.6/site-packages/django/apps/registry.py", line 108, in populate app_config.import_models() File "/usr/local/lib/python3.6/site-packages/django/apps/config.py", line 202, in import_models self.models_module = import_module(models_module_name) File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/code/Case/models.py", line 91, in <module> class Case(BaseModel): File "/code/Case/models.py", line 101, in Case AssignedUser = models.ForeignKey(User,related_name='AssignedUser') File "/usr/local/lib/python3.6/site-packages/django/db/models/fields/related.py", line 802, in __init__ RECURSIVE_RELATIONSHIP_CONSTANT, AssertionError: ForeignKey(<django.db.models.fields.related.ForeignKey>) is invalid.First parameter to ForeignKey must be either a model, a model name, or the string 'self'
Now provide the response and nothing else.
|
There is no reason to use a hack as suggested by StefanE.

At this point I'm pretty sure Junit5 Test Parameters does not support anything else than primitive types and CsvSource only one allowing mixing of the types.

Actually, JUnit Jupiter supports parameters of any type. It's just that @CsvSource is limited to a few primitive types and String. Thus instead of using a @CsvSource, you should use a @MethodSource as follows.

    @ParameterizedTest
    @MethodSource("generateData")
    void shouldGetDataBit(int first, String second, List<String> third) {
        System.out.println(first);
        System.out.println(second);
        System.out.println(third);
    }

    static Stream<Arguments> generateData() {
        return Stream.of(
            Arguments.of(1, "foo", Arrays.asList("a", "b", "c")),
            Arguments.of(2, "bar", Arrays.asList("x", "y", "z"))
        );
    }
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46712486', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2579468/']}
|
jdg_374864
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17783467
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently create a rounded version of an image in my app by drawing to a canvas. I would like to draw a faint outershadow around the image, but I cant quite get it right. I have 2 questions:1. How can I draw an outer shadow (I can only seem to draw a shadow with a x or y offset)2. How can I draw the shadow so that it does not have the artifacts shown in the attached image. Code: ![public Bitmap getRoundedCornerBitmap(Bitmap bitmap, float cornerRadius) { Bitmap output = Bitmap.createBitmap(bitmap.getWidth()+6, bitmap.getHeight() +6, Config.ARGB_8888); Canvas canvas = new Canvas(output); final int color = 0xff424242; int shadowRadius = getDipsFromPixel(3); final Rect imageRect = new Rect(shadowRadius, shadowRadius, bitmap.getWidth(), bitmap.getHeight()); final RectF rectF = new RectF(imageRect); // This does not achieve the desired effect Paint shadowPaint = new Paint(); shadowPaint.setAntiAlias(true); shadowPaint.setColor(Color.BLACK); shadowPaint.setShadowLayer((float)shadowRadius, 2.0f, 2.0f,Color.BLACK); canvas.drawOval(rectF, shadowPaint); canvas.drawARGB(0, 0, 0, 0); final Paint paint = new Paint(); paint.setAntiAlias(true); paint.setColor(color); canvas.drawRoundRect(rectF, cornerRadius, cornerRadius, paint); paint.setXfermode(new PorterDuffXfermode(Mode.SRC_IN)); canvas.drawBitmap(bitmap, imageRect, imageRect, paint); return output; }][1] This is an example of the effect I am trying to achieve:
Now provide the response and nothing else.
|
Here we go (yup, I still dig the Nexus S). First of all, please stop masking bitmaps that way; you can accomplish this without allocating another Bitmap. Check out this blog post about how to draw rounded (and actually any shape) images. Second, using that Drawable you can probably figure out how to add your shadow, just make sure it does not get clipped; on 18+ you could use ViewOverlays for that. Also keep in mind that there are several unsupported drawing operations for hardware accelerated layers, and that includes setShadowLayer and BlurMaskFilter. If performance is not an issue for you, you can disable it as always: if (SDK_INT >= HONEYCOMB) { view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);} And use setShadowLayer as you were trying already: somePaint.setShadowLayer(shadowSize, deltaX, deltaY, shadowColor); For a sample please check the link at the end. If you still want to be hardware accelerated you will have to fake it at the risk of overdrawing: you could use a radial gradient, or draw another oval and blur it yourself (as mentioned before you can't use BlurMaskFilter), or use a pre-blurred Bitmap (more masking). For such a subtle shadow I would rather just go flat if performance is required; the full sauce is in the banana stand. Update: Starting with L you can use real shadows.
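A rough sketch of the software-rendered route inside a custom View follows; the view class, shadowRadius field, colors and the final bitmap masking step are placeholders, not code taken from the question.

public ShadowImageView(Context context) {
    super(context);
    // setShadowLayer is one of the operations not supported on hardware layers,
    // so fall back to software rendering for this view.
    setLayerType(View.LAYER_TYPE_SOFTWARE, null);
}

@Override
protected void onDraw(Canvas canvas) {
    Paint shadowPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    shadowPaint.setColor(Color.WHITE);
    // dx = dy = 0 gives an even "outer glow" rather than an offset drop shadow.
    shadowPaint.setShadowLayer(shadowRadius, 0f, 0f, 0x80000000);
    RectF oval = new RectF(shadowRadius, shadowRadius,
            getWidth() - shadowRadius, getHeight() - shadowRadius);
    canvas.drawOval(oval, shadowPaint);
    // ... then draw the masked bitmap on top, as in the linked post.
}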
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17783467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/172131/']}
|
jdg_374865
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14012642
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am modifying my code from using mysql_* to PDO . In my code I had mysql_real_escape_string() . What is the equivalent of this in PDO?
Now provide the response and nothing else.
|
Well No, there is none! Technically there is PDO::quote() but it is rarely ever used and is not the equivalent of mysql_real_escape_string() That's right! If you are already using PDO the proper way as documented using prepared statements , then it will protect you from MySQL injection. # Example: Below is an example of a safe database query using prepared statements (pdo) try { // first connect to database with the PDO object. $db = new \PDO("mysql:host=localhost;dbname=xxx;charset=utf8", "xxx", "xxx", [ PDO::ATTR_EMULATE_PREPARES => false, PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION ]); } catch(\PDOException $e){ // if connection fails, show PDO error. echo "Error connecting to mysql: " . $e->getMessage(); } And, now assuming the connection is established, you can execute your query like this. if($_POST && isset($_POST['color'])){ // preparing a statement $stmt = $db->prepare("SELECT id, name, color FROM Cars WHERE color = ?"); // execute/run the statement. $stmt->execute(array($_POST['color'])); // fetch the result. $cars = $stmt->fetchAll(\PDO::FETCH_ASSOC); var_dump($cars); } Now, as you can probably tell, I haven't used anything to escape/sanitize the value of $_POST["color"] . And this code is secure from myql-injection thanks to PDO and the power of prepared statements. It is worth noting that you should pass a charset=utf8 as attribute, in your DSN as seen above, for security reasons, and always enable PDO to show errors in the form of exceptions. PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION so errors from you database queries won't reveal sensitive data like your directory structure, database username etc. Last but not least, there are moments when you should not trust PDO 100%, and will be bound to take some extra measures to prevent sql injection, one of those cases is, if you are using an outdated versions of mysql [ mysql =< 5.3.6 ] as described in this answer But, using prepared statements as shown above will always be safer, than using any of the functions that start with mysql_ Good reads PDO Tutorial for MySQL Developers
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/14012642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1925118/']}
|
jdg_374866
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
247102
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
How can we prove this inequality or give a more accurate bound?$$1 + x + \frac{{{x^2}}}{{2!}} + ....... + \frac{{{x^n}}}{{n!}} > \frac{{{e^x}}}{2},x \in [0,n]\$$ I came across the equation:$$\mathop {\lim }\limits_{n \to \infty } \frac{{1 + n{\rm{ + }}\frac{{{n^2}}}{{2!}} + ..... + \frac{{{n^n}}}{{n!}}}}{{{e^n}}} = \frac{1}{2} $$Maybe I can use this this inequality to solve this problem
Now provide the response and nothing else.
|
I think this might follow from JSJ theory. Assume that $M$ is irreducible with incompressible boundary. Then any essential annulus is homotopic into an $I$-bundle region or a Seifert-fibered region of the JSJ decomposition. In the Seifert case, the region meets the boundary in annuli, in which case the boundaries of the immersed annulus will be a multiple of the core of the annuli, and I think one can cut and paste to get a simple closed essential core of each annulus, which then cobound an embedded essential annulus. Otherwise, if the annulus lies in a product region, then your immersed annulus goes between two curves on the surface. So this amounts to asking: for a given homotopy class of curve on a surface, can one always cut and paste a subset of the curve to give an embedded curve in the same isotopy class? This seems plausible, e.g., if the curve is not filling (one should be able to cut and paste to get a boundary component of the subsurface that it fills). In any case, it seems that at least this gives a reformulation of the problem (although I'm not sure what happens for twisted $I$-bundles).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/247102', 'https://mathoverflow.net', 'https://mathoverflow.net/users/97016/']}
|
jdg_374867
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1088131
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
An edge set of a graph is a set of doubletons, pairing edges. For example: has an edge set of $\{\{6,4\},\{4,5\},\{4,3\},\{5,2\},\{5,1\},\{3,2\},\{1,2\}\}$. A set, by definition, cannot have duplicate elements, else it is not a set. is a multigraph. All graphs have edge sets, yet the edge set of the multigraph would have to contain duplicate sets in its edge set to properly represent its edges, but then it wouldn't have an edge set, because a set cannot possibly have duplicate elements. I suppose its edge set would be $\{\{1,3\},\{1,2\},\{2,4\},\{2,4\},\{2,4\}\}$, but then it wouldn't be a set, having three instances of the same member. Question How do I represent the edge set of a multigraph, which have multiple edges along the same vertices?
Now provide the response and nothing else.
|
I believe the term multiset is used to refer to a set that may have duplicate elements. This term makes sense here, especially since it is cohesive with the term multigraph . If you really want to keep the edgeset as a set you could let each element of the edgeset be a pair that consists of the edge itself and the multiplicity of the edge. So the edgeset of the multigraph you posted would be $$\{(\{1,2\},1),(\{1,3\},1),(\{2,4\},3)\}$$ If we go with the multiset term, the question then becomes this: should we call it a multiedgeset or an edgemultiset ?
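One common way (among several conventions) to make the pair representation precise is to package the multiset as a multiplicity function on the possible edges: $$E:\binom{V}{2}\to\mathbb{N},\qquad E(\{1,2\})=1,\quad E(\{1,3\})=1,\quad E(\{2,4\})=3,$$ with $E(e)=0$ for every other pair $e$ of vertices.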
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1088131', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/195944/']}
|
jdg_374868
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29223949
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the class: class foo{public: static int bar; //declaration of static data member};int foo::bar = 0; //definition of data member We have to explicitly define the static variable, otherwise it will result in a undefined reference to 'foo::bar' My question is: Why do we have to give an explicit definition of a static variable? Please note that this is NOT a duplicate of previously asked undefined reference to static variable questions. This question intends to ask the reason behind explicit definition of a static variable.
Now provide the response and nothing else.
|
From the beginning of time C++ language, just like C, was built on the principle of independent translation . Each translation unit is compiled by the compiler proper independently, without any knowledge of other translation units. The whole program only comes together later, at linking stage. Linking stage is the earliest stage at which the entire program is seen by linker (it is seen as collection of object files prepared by the compiler proper ). In order to support this principle of independent translation , each entity with external linkage has to be defined in one translation unit, and in only one translation unit. The user is responsible for distributing such entities between different translation units. It is considered a part of user intent , i.e. the user is supposed to decide which translation unit (and object file) will contain each definition. The same applies to static members of the class. Static members of the class are entities with external linkage. The compiler expects you to define that entity in some translation unit. The whole purpose of this feature is to give you the opportunity to choose that translation unit. The compiler cannot choose it for you. It is, again, a part of your intent, something you have to tell the compiler. This is no longer as critical as it used to be a while ago, since the language is now designed to deal with (and eliminate) large amount of identical definitions (templates, inline functions, etc.), but the One Definition Rule is still rooted in the principle of independent translation . In addition to the above, in C++ language the point at which you define your variable will determine the order of its initialization with regard to other variables defined in the same translation unit. This is also a part of user intent , i.e. something the compiler cannot decide without your help. Starting from C++17 you can declare your static members as inline . This eliminates the need for a separate definition. By declaring them in that fashion you effectively tell compiler that you don't care where this member is physically defined and, consequently, don't care about its initialization order.
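A minimal sketch of the two options, using illustrative names:

// Pre-C++17: declaration in the class, exactly one out-of-class definition,
// placed in a translation unit chosen by the user.
struct foo {
    static int bar;        // declaration only
};
int foo::bar = 0;          // definition, typically in foo.cpp

// C++17 and later: an inline static member carries its own definition;
// the linker merges the copies emitted by each translation unit.
struct foo17 {
    inline static int bar = 0;
};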
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29223949', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2842375/']}
|
jdg_374869
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16533
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Two plastic sheets with charged densities as shown: I'm trying to find the field at $B$. I obtained the correct answer by adding up the fields created by each charge density. But I realized that since the field is uniform in the region between the two sheets, I should be able to make a Gaussian surface between the sheets with the shape of a box and with one edge at $B$. Thus, the flux would be $$\int{E \cdot dA} ~=~ EA ~=~ \frac{q_{encl}}{\epsilon_{0}}~=~0.$$ Because there would be no charged enclosed inside the surface. However, that means that $EA$ is $0$ (Note the integral reduced to $EA$ because $E$ is uniform). Since $A$ is not zero (it is the area of two sides of the box), this means that $E$ must be zero. However, $E$ is not zero there, as you can see by adding $\frac{\sigma}{2\epsilon_0}$ for each charge density. What am I doing wrong when using Gauss' law?
Now provide the response and nothing else.
|
You tell us that one surface of the box is at $B$, but you're a little vague on where the opposite face is. You do say that your surface is "between the two sheets", so I think you may mean that the surface is entirely contained in the space between the two sheets. The box does not intersect any charged surface. With that, and a uniform electric field in that region, the flux on the opposite face will have a sign opposite to the flux at the surface at $B$. The total flux is zero. The field and flux at any portion of the surface can be non-zero. Gauss speaks only to the total flux.
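Spelled out for a box with two faces of area $A$ parallel to the sheets and a uniform field $E$ pointing from one sheet toward the other, the bookkeeping is $$\oint \vec E\cdot d\vec A \;=\; \underbrace{+EA}_{\text{face where }\vec E\text{ exits}}\;\underbrace{-\,EA}_{\text{face where }\vec E\text{ enters}}\;+\;\underbrace{0}_{\text{side faces}}\;=\;0\;=\;\frac{q_{\text{encl}}}{\epsilon_0},$$ which is perfectly consistent with $E\neq 0$.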
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/16533', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/5791/']}
|
jdg_374870
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
86731
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried chmod 700 /bin/killchown root /bin/kill but a normal user can still do: kill blah ... and successfully kill the app 'blah' running under the user's account. I'm using Ubuntu (8.10). The bash shell apparently has the kill command built-in and so attempting the above will not work. Using .bashrc the enable command could be used to disable the built-in kill command each time a terminal is launched, but the .bashrc could then simply be edited by the user using a text editor launched via the GUI to re-enable the kill command. Can I somehow disable a built-in bash command permanently without using .bashrc (which is user-editable)? Anyone any ideas how I can disable the built-in kill?
Now provide the response and nothing else.
|
The answer is that short of compiling your own version of bash, you can't. Furthermore, even if you succeeded, if your users can compile programs or move binaries onto the system, they can always bring in a kill binary of their own, and be able to issue kills anyway. Now, the question is: What do you really want to do? Because disabling the kill command isn't, I suspect, your actual goal. You're trying to prevent the users from doing something (presumably kill something they shouldn't), and there's probably a better way to do what you want.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/86731', 'https://serverfault.com', 'https://serverfault.com/users/26784/']}
|
jdg_374871
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1063770
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a username which I must change in numerous (up to ~25) tables. (Yeah, I know.) An atomic transaction seems to be the way to go for this sort of thing. However, I do not know how to do this with pyodbc. I've seen various tutorials on atomic transactions before, but have never used them. The setup: Windows platform, Python 2.6, pyodbc, Microsoft SQL 2005. I've used pyodbc for single SQL statements, but no compound statements or transactions. Best practices for SQL seem to suggest that creating a stored procedure is excellent for this. My fears about doing a stored procedure are as follows, in order of increasing importance: 1) I have never written a stored procedure. 2) I heard that pyodbc does not return results from stored procedures as of yet. 3) This is most definitely Not My Database. It's vendor-supplied, vendor-updated, and so forth. So, what's the best way to go about this?
Now provide the response and nothing else.
|
By its documentation, pyodbc does support transactions, but only if the ODBC driver supports it. Furthermore, as pyodbc is compliant with PEP 249, data is stored only when a manual commit is done. This means that you have to explicitly commit() the transaction, or rollback() the entire transaction. Note that pyodbc also supports an autocommit feature, and in that case you cannot have any transaction. By default, autocommit is off, but your codebase might have turned it on. You should check the connection when it is created: cnxn = pyodbc.connect(cstring, autocommit=True) Alternatively, you can also explicitly turn off the autocommit mode with cnxn.autocommit = False but this might have quite a big impact on your system. Note: you can get more information on the autocommit mode of pyodbc on its wiki
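For the original multi-table rename, a minimal sketch of the commit/rollback pattern could look like the following; the connection string, table names and column names are made up, only the transaction handling matters.

import pyodbc

cnxn = pyodbc.connect(connection_string, autocommit=False)  # autocommit must stay off
cursor = cnxn.cursor()

tables = ["accounts", "orders", "audit_log"]  # ... up to the ~25 tables involved
try:
    for table in tables:
        # table names come from our own list above, user input only goes in parameters
        cursor.execute(
            "UPDATE " + table + " SET username = ? WHERE username = ?",
            new_username, old_username,
        )
    cnxn.commit()    # all updates become visible together
except Exception:
    cnxn.rollback()  # or none of them do
    raise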
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1063770', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_374872
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37785348
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have JSON like this, { "resultFlag": true, "successMessage": "Data Received", "category": [ { "CategoryID": "4", "CategoryTitle": "YSafety", "CategoryImage": "", "Percentage": "25%", "Level": 16, "Days": 60, "Topic": [ { "TopicID": "11", "TopicTitle": "Safety during Online transactions", "TopicImage": "" }, { "TopicID": "12", "TopicTitle": "Anti-Theft mobile Features", "TopicImage": "" }, { "TopicID": "32", "TopicTitle": "Magic of Science", "TopicImage": "" } ], "Sponsor": [ { "SponsorID": 1, "SponsorName": "adidas", "SponsorImage": "http://assets/upload/SPONSOR/adidas.jpg" } ] }, { "CategoryID": "5", "CategoryTitle": "YSoccer", "CategoryImage": "", "Percentage": "25%", "Level": 16, "Days": 60, "Topic": [ { "TopicID": "13", "TopicTitle": "Others", "TopicImage": "" }, { "TopicID": "14", "TopicTitle": "Rules", "TopicImage": "" }, { "TopicID": "33", "TopicTitle": "Magic of Nature", "TopicImage": "" } ], "Sponsor": [ { "SponsorID": 1, "SponsorName": "adidas", "SponsorImage": "http://assets/upload/SPONSOR/adidas.jpg" } ] } ]} my code is to fetch array inside array is below, static ArrayList<String> al_sponsor_image = new ArrayList<String>(); static ArrayList<ArrayList<String>> al_al_topic_id = new ArrayList<ArrayList<String>>(); if (resultFlag == true) { String success = response.getString("successMessage"); JSONArray array_category = response.getJSONArray("category"); for (int i = 0; i < array_category.length(); i++) { JSONObject obj_category = array_category.getJSONObject(i); String category_id = obj_category.getString("CategoryID"); String category_title = obj_category.getString("CategoryTitle"); String category_image = obj_category.getString("CategoryImage"); String percentage = obj_category.getString("Percentage"); String level = obj_category.getString("Level"); String days = obj_category.getString("Days"); // al_category_id.add(category_id); al_category_title.add(category_title); // al_category_image.add(category_image); al_percentage.add(percentage); al_level.add(level); al_days.add(days); JSONArray array_topic = obj_category.getJSONArray("Topic"); JSONArray array_sponsor = obj_category.getJSONArray("Sponsor"); al_topic_id.clear(); al_topic_title.clear(); al_topic_image.clear(); al_sponsor_id.clear(); al_sponsor_name.clear(); al_sponsor_image.clear(); for (int j = 0; j < array_topic.length(); j++) { JSONObject obj_topic = array_topic.getJSONObject(j); String topic_id = obj_topic.getString("TopicID"); String topic_title = obj_topic.getString("TopicTitle"); String topic_image = obj_topic.getString("TopicImage"); al_topic_id.add(topic_id); al_topic_title.add(topic_title); al_topic_image.add(topic_image); /* String[] st_sub_array_topic_title = al_topic_title.toArray(new String[al_topic_title.size()]); String[] st_sub_array_topic_image = al_topic_image.toArray(new String[al_topic_image.size()]); st_array_topic_title = new String[i][]; st_array_topic_title[j] = st_sub_array_topic_title; int[] itemImageList = new int[j]; itemImageList[j] = R.drawable.dummy_flower; st_array_topic_image[j] = itemImageList;*/ /*{itemImageList, itemImageList, itemImageList, itemImageList, itemImageList, itemImageList, itemImageList, itemImageList};*/ /*{R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower, R.drawable.dummy_flower};*/ // st_array_topic_image = new String[i][]; // st_array_topic_image[j] = st_sub_array_topic_image; } 
al_al_topic_id.add(i, al_topic_id); al_al_topic_title.add(i, al_topic_title); al_al_topic_image.add(i, al_topic_image); System.out.println("al_al_topic_id : " + al_al_topic_id); System.out.println("al_al_topic_title : " + al_al_topic_title); System.out.println("al_al_topic_image : " + al_al_topic_image); // String[][] st_array_topic_image for (int j = 0; j < array_sponsor.length(); j++) { JSONObject obj_sponsor = array_sponsor.getJSONObject(j); String sponsor_id = obj_sponsor.getString("SponsorID"); String sponsor_name = obj_sponsor.getString("SponsorName"); String sponsor_image = obj_sponsor.getString("SponsorImage"); al_sponsor_id.add(sponsor_id); al_sponsor_name.add(sponsor_name); al_sponsor_image.add(sponsor_image); /*String[] st_sub_array_sponsor_name = al_sponsor_name.toArray(new String[al_sponsor_name.size()]); String[] st_sub_array_sponsor_image = al_sponsor_image.toArray(new String[al_sponsor_image.size()]); st_array_sponsor_name = new String[i][]; st_array_sponsor_name[j] = st_sub_array_sponsor_name; st_array_sponsor_image = new String[i][]; st_array_sponsor_image[j] = st_sub_array_sponsor_image;*/ } al_al_sponsor_id.add(i, al_sponsor_id); al_al_sponsor_name.add(i, al_sponsor_name); al_al_sponsor_image.add(i, al_sponsor_image); System.out.println("al_al_sponsor_id : " + al_al_sponsor_id); System.out.println("al_al_sponsor_name : " + al_al_sponsor_name); System.out.println("al_al_sponsor_image : " + al_al_sponsor_image); } /* String[] st_array_category_title = al_category_title.toArray(new String[al_category_title.size()]); String[] st_array_percentage = al_percentage.toArray(new String[al_percentage.size()]); String[] st_array_level = al_level.toArray(new String[al_level.size()]); String[] st_array_days = al_days.toArray(new String[al_days.size()]); */ rv_learning_cards.setAdapter(new AdapterLearningCards(activity, al_category_title, al_percentage, al_level, al_days, al_al_topic_image, al_al_topic_title, al_al_sponsor_image)); } When I print the al_al_topic_title, it replaces all array with lat array... al_al_topic_title : [[Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities], [Travel and Activity Gear, Types of Activities]]al_al_topic_title : [[Others, Others], [Others, Others], [Others, Others], [Others, Others], [Others, Others], [Others, Others], [Others, Others], [Others, Others], [Others, Others], [Others, Others]] Can anyone give me suggetion?
Now provide the response and nothing else.
|
The solution: Set margin on the child element instead of padding on your flex item. .Row{ display:flex;}.Item{ display:flex; flex:1; flex-direction:column;}.Item > div{ background:#7ae; margin:0 10px 10px 0;}.Flx2{ flex:2;} <div class="Row"> <div class="Item"> <div>1A</div> </div> <div class="Item"> <div>1B</div> </div> <div class="Item Flx2"> <div>1C</div> </div></div><div class="Row"> <div class="Item"> <div>2A</div> </div> <div class="Item"> <div>2B</div> </div></div> The problem: The calculation is done without padding . So; adding padding to the flex element is not giving you your expected width by the spec . The specific article For example, the available space to a flex item in a floated auto-sized flex container is: the width of the flex container’s containing block minus the flex container’s margin, border, and padding in the horizontal dimension infinite in the vertical dimension Why is the padding not calculated? That's what the spec wants. Determine the available main and cross space for the flex items . For each dimension, if that dimension of the flex container ’s content box is a definite size, use that; if that dimension of the flex container is being sized under a min or max-content constraint, the available space in that dimension is that constraint; otherwise, subtract the flex container’s margin, border, and padding from the space available to the flex container in that dimension and use that value . This might result in an infinite value. If you subtract the padding and margin from the element's size, you get: 1A + 1B = 2A However, after you did that, the padding was added to the element. The more elements, the more padding. That's not being calculated in the width, causing your statement to be false.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/37785348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6159206/']}
|
jdg_374873
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
302613
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a machine that has McAfee SafeBoot Whole Disk Encryption on it. If I don't have the safeboot password, can I boot from a OS installation disk, format the disk and reinstall the OS? Will that corrupt my Master Boot Record since that is also encrypted? Follow-up: I was able to boot from CD, format the disk and reinstall the OS
Now provide the response and nothing else.
|
"Whole Disk" encryption is a little misleading in a way - in a pure software implementation such as SafeBoot there must always be an unencrypted MBR and some form of Pre-boot Authentication within an unencrypted partition on the disk. A better technical description would be "Whole Partition" encryption - though that wouldn't appeal to management types who don't need to know the distinction! If you erase all partitions on your disk, repartition and reformat, you effectively erase the PBA and all data in the OS partition. You will lose all data (as with any other reformat operation), but you will be able to convert the disk back to an unencrypted state.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/302613', 'https://serverfault.com', 'https://serverfault.com/users/4113/']}
|
jdg_374874
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
118184
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a file named : newfile which consist of the following data 1 AC BB CC2 AB CC DD3 CA BB CC4 BE DD EE5 BD AA AA I type the following command in bash to sort the data according to the second character, second field sort -k3 newfile I expected the following results 3 CA BB CC2 AB CC DD1 AC BB CC5 BD AA AA4 BE DD EE why am i getting the following results and how am i suppose to solve according to the third character (ignoring the blank) 5 BD AA AA1 AC BB CC3 CA BB CC2 AB CC DD4 BE DD EE
Now provide the response and nothing else.
|
To sort on a specific character within a field (i.e. a block of character surrounded by blank characters). You can use this specific syntax : sort -k 1.4 file This will sort on the fourth character of the file. See https://stackoverflow.com/questions/12383706/unix-sort-on-column-without-separator for details. If you experience counterintuitive results while playing with -k , add the option -b . This will make sort ignore the blanks. So sort -b -k 2.2 file gives what you want : second character of second field, ignoring blanks.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/118184', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/56421/']}
|
jdg_374875
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37996419
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a simple test to my RestController. I expect that $[1].parent_id returns Long as an object and not integer primitive. It will return Long if parent_id is in a long number range and > integer number range (such as : 2147483650L). @RunWith(SpringJUnit4ClassRunner.class)@SpringApplicationConfiguration(classes = Application.class)@WebAppConfiguration@WebAppConfigurationpublic class TransactionServiceControllerTest {@Beforepublic void setup() throws Exception { this.mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build(); // I copy this from my RestController class this.transactions = Arrays.asList( new Transaction(100d, "car", null), new Transaction(100d, "table", 12L) );}@Testpublic void readSingleBookmark() throws Exception { mockMvc.perform(MockMvcRequestBuilders.get("/transaction/")) .andExpect(content().contentType(contentType)) // ok .andExpect(jsonPath("$", hasSize(2))) // ok //omitted andExpect(jsonPath("$[1].parent_id",is(this.transactions.get(1).getParentId())));} //assertion failExpected: is <12L>but: was <12> Result from another test : Expected: is <12L>but: was <2147483650L> //return Long instead int this is my JacksonConfiguration @Configurationpublic class JacksonConfiguration { @Bean @Primary public ObjectMapper objectMapper() { final ObjectMapper objectMapper = new ObjectMapper(); //supposed to be this is the magic trick but it seems not.. objectMapper.enable(DeserializationFeature.USE_LONG_FOR_INTS); objectMapper.setSerializationInclusion(JsonInclude.Include.NON_ABSENT); objectMapper.setPropertyNamingStrategy(PropertyNamingStrategy.SNAKE_CASE); return objectMapper; }} And my POJO public class Transaction {private double ammount;private String type;private Long parentId;public Transaction(Double ammount, String type, Long parentId) { //omitted}//setter and getter omitted} MyRestController @RestController@RequestMapping("transaction")public class TransactionServiceController {@RequestMapping(method = RequestMethod.GET)List<Transaction> getTransaction() { return Arrays.asList( new Transaction(100d, "car", null), new Transaction(100d, "table", 12L) ); }} And Application.java @SpringBootApplicationpublic class Application { public static void main(String[] args){ SpringApplication.run(Application.class,args); }}
Now provide the response and nothing else.
|
Update Spring Framework 4.3.3 and 5.0.0 added first-class support for explicit conversions for request content for use with MockRestServiceServer . See SPR-14498 for details. Spring Framework 4.3.15 and 5.0.5 will add first-class support for explicit conversions for response content for use with MockMvc . See SPR-16587 for details. Original Answer One option (which I have not personally verified) would be to try a different JsonProvider . This can be set via com.jayway.jsonpath.Configuration.setDefaults(Defaults) . If you are sure that the Long can always be safely narrowed to an int , you could use the following: andExpect(jsonPath("$[1].parent_id",is(this.transactions.get(1).getParentId().intValue()))); And the only other option is to write a custom Matcher that converts the incoming Integer to a Long before performing the actual matching.
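A sketch of that last option with Hamcrest; the helper name is made up.

import org.hamcrest.CustomTypeSafeMatcher;
import org.hamcrest.Matcher;

// Compares the JSON number by its long value, so it matches whether
// json-path handed back an Integer or a Long.
static Matcher<Number> numberWithLongValue(final long expected) {
    return new CustomTypeSafeMatcher<Number>("a number whose long value is " + expected) {
        @Override
        protected boolean matchesSafely(Number actual) {
            return actual.longValue() == expected;
        }
    };
}

// usage:
// .andExpect(jsonPath("$[1].parent_id", numberWithLongValue(this.transactions.get(1).getParentId())))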
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37996419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2870357/']}
|
jdg_374876
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34525917
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I try to understand the expression {} == true following the section 7.2.12 of doc Ecma-262 . If Type(y) is Boolean, return the result of the comparison x == ToNumber(y) The result of ToNumber(true) is 1 , then {} == 1 If Type(x) is Object and Type(y) is either String, Number, or Symbol, then return the result of the comparison ToPrimitive(x) == y . I am confused at the ToPrimitive({}) now. If hint is "string", then a. Let methodNames be «"toString", "valueOf"». Else, a. Let methodNames be «"valueOf", "toString"». Should ToPrimitive({}) be interpreted as {}.toString() or {}.valueOf() ? Suppose the toString() is called. If Type(x) is String and Type(y) is Number, return the result of the comparison ToNumber(x) == y So {} == true could be ToNumber(ToPrimitive({})) == ToNumber(true) ?
Now provide the response and nothing else.
|
The spec says: When ToPrimitive is called with no hint, then it generally behaves as if the hint were Number. Hence, according to the ToPrimitive algorithm, valueOf is called first. But since that returns an object, not a primitive value, toString will be called second, which returns a string. So {} == true could be ToNumber(ToPrimitive({})) == ToNumber(true) ? Yes, that's exactly what it is.
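You can watch the chain happen step by step in a console; the object is stored in a variable so that {} isn't parsed as a block statement.

const obj = {};
obj.valueOf() === obj;   // true  -> valueOf didn't return a primitive, so toString is tried next
obj.toString();          // "[object Object]"
Number(obj.toString());  // NaN
obj == true;             // false, because the comparison ends up as NaN == 1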
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34525917', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3011380/']}
|
jdg_374877
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13861
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given a collection of points $P \subset \mathbb R^3$, a crude characterization of the "shape" of $P$ is sometimes given by the principal components. We construct a covariance matrix, e.g., if $P$ is discrete, $C = \displaystyle\sum_{p\in P} (p - \mu)(p - \mu)^\intercal$, where $\mu = \displaystyle\frac1{|P|}\sum_{p\in P}\, p$ is the centre of mass. This defines an ellipsoid whose semi-axis are defined by the unit eigenvectors of $C$, scaled by the associated eigenvalues. My question is concerned with the following statement: The ellipsoid described by the principal components is the best fit ellipsoid for $P$. Unfortunately, I don't know of any author or resource that I can accuse of explicitly making such a claim$^*$. Anyway, my question is: Is there a natural geometric definition of "best fit ellipsoid" for which the above statement becomes true? For example, some kind of least squares or other variational characterization of this same ellipsoid is what I am looking for. I would also accept an answer that convinces me that this is the wrong way to be looking at the principal components, but that will be a tough sell. If we do a coordinate translation, so that $v_p = (p - \mu)$, and let$\hat{v}_p = \frac{v_p}{\left\|v_p\right\|}$, and look at $C$ as alinear transformation which is the sum of the rank one operators inthis coordinate system, $C = \displaystyle\sum_{p \in P} \left\|v_p\right\|^2\hat{v}_p\hat{v}_p^\intercal$, then the ellipsoid in question is theimage of the unit ball. From this characterization I gain someintuition as to why this particular ellipsoid is a good one. I amlooking for a better understanding, preferably from a geometric perspective. * Wikipedia comes close to such a claim in the description of the moments : "The 'second moment', ... in higher dimensions measures the shape of a cloud of points as it could be fit by an ellipsoid." Edit: Although I feel that the observation that the ellipsoid reflects the variance of the Gaussian distribution that has maximal likelihood to produce $P$ (I haven't rolled up my sleeves and checked), this is not the kind of answer I am looking for. Perhaps I should remove all tags that refer to probability or regression. I will make the question very specific then. From stuff I've seen elsewhere on the web, I get the feeling that this ellipsoid is different from the one that minimizes the sum of squared distances to the points, but I don't know for sure. How about this then: the radial distance from a point $p$ to the ellipsoid is the distance as measured along the line that contains $p$ and $\mu$ (the latter being the origin in our new coordinate system). So let this be my question: Does the ellipsoid defined above minimize the sum of the squared radial distances?
Now provide the response and nothing else.
|
From the principle of the "Jacobi-rotation" method for obtaining the principal components it is clear that some ellipsoid (based on the data) is defined by the following idea: 1) two-dimensional case: rotate the cloud of datapoints such that the sum of squared x-coordinates ("ssq(x)") is maximal; the sum of squared y-coordinates ("ssq(y)") is then minimal. 2) multidimensional case: you do 1) with all pairs of axes x-y, x-z, x-w, ...; this gives a "temporary" maximum of ssq(x). 3) Then you do 1) with all pairs y-z, y-w, ... to obtain a "temporary" maximum of ssq(y), and so on with z-w, .... 4) Then you repeat the whole process until convergence. In principle this only defines the direction of the axes of some ellipsoid. Then, for instance in the monograph "Faktorenanalyse" ("factor analysis") by K. Ueberla, the size of the ellipsoid is defined by: "the lengths of the axes are the roots of ssq(axis)" (paraphrased). Here no direct reference is made to the property of radial deviation (when represented in terms of polar coordinates). In fact, the concept of minimizing the radial distances is different, by a simple consideration. If we have - for instance - only three datapoints which are nearly on one straight line, then the center of the best-fitting circle tends to infinity and not to the mean. The same may occur with a cloud of datapoints whose overall shape is that of a small but sufficient segment of the circumference of an ellipse. Well, we may force the origin into the arithmetic center of the cloud - but this introduces a likely artificial restriction. Actually, if you search for "circular regression" you'll arrive at nonlinear, iterative methods much different from the method of principal components. [update] Here I show the difference between the two concepts in a two-dimensional example. I generated slightly correlated random data, rotated to PCA position, such that the x-axis of the plot is the first principal component (eigenvector) and so on. The blue ellipse is the one using the axis lengths given by the standard deviations of the principal components. Then I computed another ellipse by minimizing the sum of squares of radial distances of datapoints to the ellipse's circumference (just manual checking); this is the red ellipse. Here is the plot: [end update] One more comment for the geometric visualization (it is just for the two-dimensional case). Consider the cloud of data as endpoints of vectors from the origin. Then the common concept of the resultant is the mean of the arithmetic/geometric sum of all vectors. This is also called the "centroid", and in early times of factor analysis, without the help of computers, it was often an accepted approximation for the "central tendency": the mean of the coordinates is the best fit in the sense of least "squares-of-distances". However, if some vectors have "opposing" (better word?) directions, their influence neutralizes, although they would nicely define an axis. In the old centroid method such vectors were "inflected" (sign changed) to correct for that effect. Another option is to double the angle of each vector - then opposing vectors overlay and strengthen each other's influence instead of neutralizing. Then take the mean of the new vectors and halve its angle. This is then the principal component/axis and thus represents another least-squares estimation for the fit. (The computation of the rotation criterion agrees perfectly with the minimizing formula of the Jacobi rotation for PCA.)
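For completeness, the eigen-decomposition route (equivalent in exact arithmetic to the Jacobi rotations described above) can be sketched in a few lines of numpy; the data here is just an arbitrary correlated cloud, not the one from the plot.

import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((200, 2)) @ np.array([[2.0, 0.6], [0.0, 1.0]])  # correlated cloud
mu = P.mean(axis=0)
C = (P - mu).T @ (P - mu) / len(P)      # covariance of the centred cloud

eigvals, eigvecs = np.linalg.eigh(C)     # principal directions and their variances
axis_lengths = np.sqrt(eigvals)          # "roots of ssq(axis)" per point, i.e. the standard deviations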
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/13861', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1728/']}
|
jdg_374878
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8093755
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a main application that spawns a seperate thread to process messages off a queue. I have an issue on AIX when I hit CTRL-C as it seems to make some "connection handles" in the thread become invalid. I do have a shutdown hook in the main program catching the SIGINT but on AIX it seems to somehow send a signal to the thread as well...although that is not really possible from what I hear... Essentially I would like to know if I want the MAIN application to handle ALL signals I am interested in and have the thread/s NEVER handle any signals...is that "good practice"? If so how can I NOT use "sigwait" in the thread...in fact I do not want any "signal code" in the thread/s...they must simply not receive any signals at all. I have emptied out all the signals: sigemptyset(&set); And have set the SIG_BLOCK s = pthread_sigmask(SIG_BLOCK, &set, NULL); So here is a dummy test programe: #include <pthread.h>#include <stdio.h>#include <stdlib.h>#include <unistd.h>#include <signal.h>#include <errno.h>#define handle_error_en(en, msg) do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)static void * threadMainLoop(){ //Here I do not want the thread to use "sigwait".... while(running == TRUE){ //do some thread work and never have any signals come in }}void shutdownHook(int sig){ printf("\nCtrl-C pressed....shutdown hook in main...\n");}void signalErrorHandler(int signum){ printf("\nSignal error handler in main...\n");}int main(int argc, char *argv[]){ pthread_t thread; sigset_t set; int s; //Catch the following signals in the MAIN thread (void) signal(SIGINT, shutdownHook); (void) signal(SIGSEGV, signalErrorHandler); (void) signal(SIGBUS, signalErrorHandler); (void) signal(SIGILL, signalErrorHandler); (void) signal(SIGTERM, signalErrorHandler); (void) signal(SIGABRT, signalErrorHandler); sigemptyset(&set); //BLOCK all signals s = pthread_sigmask(SIG_BLOCK, &set, NULL); if (s != 0) handle_error_en(s, "pthread_sigmask"); s = pthread_create(&thread, NULL, &threadMainLoop, (void *) NULL); if (s != 0) handle_error_en(s, "pthread_create"); pause();} If I just create a thread and have, for example, the SIGINT signal handler in the MAIN thread but do NOT has the SIG_BLOCK set for the thread and the user hits CTRL-C....does the thread get affected at all even though the signal handler in the main thread runs? That seems to be what I am seeing on AIX ;-( Thanks for the help, much appreciated Lynton
Now provide the response and nothing else.
|
With s = pthread_sigmask(SIG_BLOCK, &set, NULL); , you're not blocking anything. Use: sigfillset(&set); s = pthread_sigmask(SIG_SETMASK, &set, NULL); if you want to block every signal, or explicitly add the signals you want to block to the set if you're using SIG_BLOCK. After you've created the threads, you need to restore the signal mask, otherwise no threads will catch any signal. However, looking at your previous question, it might be that the thread catching the signal doesn't handle being interrupted. That is, if you're blocked doing a syscall, and a signal arrives, that syscall gets aborted. Some operating systems default to automatically calling the system call again, some return an error and set errno to EINTR, which the application must handle - and bad things might happen if that's not handled. Instead, install your signal handlers with sigaction() instead of signal() , and set the SA_RESTART flag, which will cause system calls to automatically restart in case they get aborted by a signal.
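Putting the two pieces together, the relevant part of main() might look roughly like this, reusing the names from the question (thread, threadMainLoop, shutdownHook) and omitting error handling:

sigset_t all, old;
struct sigaction sa;

/* Block everything in the main thread; the new thread inherits this mask,
   so it will never run a signal handler. */
sigfillset(&all);
pthread_sigmask(SIG_SETMASK, &all, &old);

pthread_create(&thread, NULL, &threadMainLoop, NULL);

/* Restore the original mask in the main thread so it keeps catching signals. */
pthread_sigmask(SIG_SETMASK, &old, NULL);

/* Install handlers with SA_RESTART so interrupted syscalls are restarted. */
memset(&sa, 0, sizeof(sa));
sa.sa_handler = shutdownHook;
sigemptyset(&sa.sa_mask);
sa.sa_flags = SA_RESTART;
sigaction(SIGINT, &sa, NULL);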
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8093755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/440844/']}
|
jdg_374879
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17106115
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a form that inputs a $_GET and pops this into a MySQL query. It works roughly as I'd expect however: When searching for an item that has spaces in, the $_GET replaces those spaces with "+". This is where it gets hard to explain.... The initial query works fine and those items with the space as well are returned from the database no problems. However: I then have a pop-up calculations window that references to a different php function that concatenates the pages $_GET onto the $_GET of the new url. It is at this stage that the "+" triggers the end of the url and doesn't display anything. For example: This is the $_GET ?dialstring=&groupSelect=uk+vodafone However the popup gets the $_GET as ?dialstring=&groupSelect=uk I think it may be the JS popup code that breaks the $_GET which is as follows: function modalcalc(){ $('<div>').dialog({ title: 'Calculator', height: '350', width: '400', open: function () { $(this).load('<?=base_url();?>ratesheet/calculator/<?=$this->uri->segment(3);?>/?dialstring=<?=$_GET['dialstring']?>&groupSelect=<?=$_GET['groupSelect']?>'); }, modal: true });return false;}` For the sake of the rest of the massive function I need/Wish to keep it a $_GET as a $_POST doesnt seem to be able to do what I need.
Now provide the response and nothing else.
|
The following should work: [DataType(DataType.Date)][DisplayFormat(DataFormatString = "{0:dd.MM.yyyy}", ApplyFormatInEditMode = true)]public DateTime? ServiceCreatedFrom { get; set; } and in your editor template: @model DateTime?@Html.TextBox( string.Empty, ViewData.TemplateInfo.FormattedModelValue, new { @class = "date" }) and then: @Html.EditorFor(x => x.ServiceCreatedFrom) The second argument you were passing to the EditorFor call doesn't do what you think it does. For this custom editor template, since you specified the format explicitly on your view model property the <globalization> element in your web.config and the current thread culture will have 0 effect. The current thread culture is used with the standard templates and when you didn't override the format with the [DisplayFormat] attribute.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17106115', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1344261/']}
|
jdg_374880
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
161749
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is another interesting integral inequality: $$\int_{0}^{1} \frac{x^{4}\log x}{x^2-1}\,dx\le \frac{1}{8}$$ According to W|A the difference between the RHS and the LHS is extremely small, namely 0.00241056. I don't know what would work here since the difference is so small.
Now provide the response and nothing else.
|
You can actually just evaluate the integral explicitly. You can divide $x^2 -1$ into $x^4$ and get$$\frac{x^4}{x^2 - 1} = x^2 + 1 + \frac {1}{x^2 - 1}$$So the integral is the same as$$\int_0^1 (x^2 + 1)\log(x)\,dx + \int_0^1 \frac{\log(x)}{x^2 - 1}\,dx $$The second integral is related to the famous dilogarithm integral, and as explained in Peter Tamaroff's answer can be evaluated to $\frac{\pi^2}{8}$. For the first term, just integrate by parts; you get$$({x^3 \over 3} + x)\log(x)\big|_{x = 0}^{x =1} - \int_0^1 ({x^2 \over 3} + 1)\,dx$$The first term vanishes, while the second term is $-{10 \over 9}$. So the answer is just ${\pi^2 \over 8} - {10 \over 9}$ which is less than ${1 \over 8}$. A way of doing the whole integral in one fell swoop occurs to me. Note that ${\displaystyle {1 \over 1 - x^2} = \sum_{n=0}^{\infty} x^{2n}}$. So the integral is $$-\sum_{n = 0}^{\infty} \int_0^1 x^{2n + 4}\log(x)\,dx$$$$= -\sum_{m = 2}^{\infty} \int_0^1 x^{2m}\log(x)\,dx$$Integrating this by parts this becomes$$\sum_{m = 2}^{\infty} \int_0^1 {x^{2m} \over 2m + 1}$$$$= \sum_{m = 2}^{\infty} {1 \over (2m + 1)^2}$$This is the sum of the reciprocals of the odd squares starting with $5$. The sum of the reciprocals of all odd squares is ${\pi^2 \over 8}$, so one subtracts off $1 + {1 \over 9} = {10 \over 9}$. Hence the result is $ {\pi^2 \over 8} - {10 \over 9} $.
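Numerically, the bound is tight but true: $$\frac{\pi^2}{8}-\frac{10}{9}\approx 1.2337-1.1111=0.1226<0.125=\frac18,$$ and the gap $\frac18-\left(\frac{\pi^2}{8}-\frac{10}{9}\right)\approx 0.00241$ matches the value quoted in the question.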
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/161749', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32016/']}
|
jdg_374881
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23674744
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Python has built in functions any() and all() , which are applied on a list (array in JavaScript) as following- any() : Return True if any element of the iterable is true. If the iterable is empty, return False . all() : Return True if all elements of the iterable are true (or if the iterable is empty). We can create our customized functions for above, but please let me know if there any equivalent built-in functions available in JavaScript.
Now provide the response and nothing else.
|
The Python documentation gives you pure-python equivalents for both functions; they are trivial to translate to JavaScript: function any(iterable) { for (var index = 0; index < iterable.length; index++) { if (iterable[index]) return true; } return false;} and function all(iterable) { for (var index = 0; index < iterable.length; index++) { if (!iterable[index]) return false; } return true;} Recent browser versions (implementing ECMAScript 5.1, Firefox 1.5+, Chrome, Edge 12+ and IE 9) have native support in the form of Array.some and Array.every ; these take a callback that determines if something is 'true' or not: some_array.some((elem) => !!elem );some_array.every((elem) => !!elem ); The Mozilla documentation I linked to has polyfills included to recreate these two methods in other JS implementations.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23674744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2622405/']}
|
jdg_374882
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1587368
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently came across a problem which required some knowledge about the self bijections of $\mathbb{N}$, and after looking up how to construct some different bijections I came across the result that the set of self bijections of $\mathbb{N}$ is uncountable. And this got me wondering, what is the largest set for which its set of self bijections is countable? This obviously holds true for any finite set, but what is the last example of a set whose set of self bijections is countable?
Now provide the response and nothing else.
|
There is no such maximal set. $\aleph_0=|\mathbb{N}|$ is the smallest infinite cardinal , so every infinite set contains a countably infinite subset and therefore already has at least $2^{\aleph_0}$ (uncountably many) self-bijections: permute the countable subset and fix everything else. Hence only finite sets have countably many self-bijections (indeed finitely many, namely $n!$), and among finite sets there is no largest one.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/1587368', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/185237/']}
|
jdg_374883
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32394582
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My senior colleague tells me to wrap every method within a try-catch block so they can trace where exceptions occurs to help debug issues quicker. Is it better to wrap every method in a Try Catch such as this to: Public int foo(){ try { //do something }catch(Exeception ex) { //do something with ex }} Or is it better to catch exceptions where I think they may occur? E.g. doing something with an array may cause the IndexOutOfRangeException will occur. //wrap this in try catch int[] array = new int[3]; array[0] = 1; array[1] = 2; array[2] = 3; array[3] = 4; Thanks.
Now provide the response and nothing else.
|
The try block contains the guarded code that may cause the exception. The block is executed until an exception is thrown or it completes successfully. You can have a look at How often should I use try and catch . The basic rule of thumb for catching exceptions is to catch exceptions if and only if you have a meaningful way of handling them. Don't catch an exception if you're only going to log the exception and throw it up the stack; it serves no purpose and clutters code. Do catch an exception when you are expecting a failure in a specific part of your code and you have a fallback for it. Of course you always have the case of checked exceptions, which require you to use try/catch blocks, in which case you have no other choice. Even with a checked exception, make sure you log properly and handle it as cleanly as possible.
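As an illustration of the difference, here is a small sketch; the logger and the loadConfig/defaultConfig names are placeholders, not from your code.

// Adds nothing: the caller could log and handle this itself.
try {
    config = loadConfig(path);
} catch (IOException e) {
    log.error("failed to load config", e);
    throw e;
}

// Meaningful: there is a real fallback, so catching here makes sense.
try {
    config = loadConfig(path);
} catch (IOException e) {
    log.warn("config not found, falling back to defaults", e);
    config = defaultConfig();
}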
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32394582', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2964415/']}
|
jdg_374884
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7629694
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What's the best way to deploy a stateless Java EE 6 application in a distributed environment in order to achieve high availability and scalability? My application is stateless. Therefore, I don't need to replicate any session state (HTTP session, EJB stateful beans, etc.) Specifically, I'd like to know the following: Do I need the clustering capabilities of Glassfish 3.1 (given that I don't need to replicate session state)? I'm heavily using JMS Queues and Message Driven Beans. How do I setup JMS to make it work in a clustered environment? I'm also using the EJB timer service. How does that work in a clustered environment? Is there anything I need to do besides using a shared DB for storing timers (and not the embedded Derby DB)? I plan to use Amazon AWS (RDS with multi AZ deployment, elastic load balancing, EC2).
Now provide the response and nothing else.
|
I'm in a similar situation, and I'm currently discovering what GF clustering can / cannot do for me. Re 1) Do I need the clustering capabilities of Glassfish 3.1 Since your EJBs are stateless, you don't need a GF cluster for session/state replication (as you say yourself). You could just setup multiple standalone instances and deploy your app to them individually. However, even in a stateless application, I find the benefits of a GF cluster very worthwhile - from an administrative point of view. The GF Cluster will ensure that the configuration is automatically applied to all instances. JNDI is replicated automatically. Applications are deployed automatically. It's a single command to scale out and add an additional instance - a short while later, your cluster is extended and the new instance is configured, deployed, started and ready to go. For me, that's administrative heaven and reason enough to use a GF cluster whenever I have more than 1 instance! One thing to consider (and I'm struggling with this badly at the moment) might be a distributed/coordinated L2 cache, in case your application is talking to a database. Re 2) ... How do I setup JMS to make it work in a clustered environment? Not sure I understand your question... If you want to have a high available message broker outside of GF, you'd need to set it up accordingly and manage it by yourself. For example, ActiveMQ has several ways of setting up clustering/HA/scale-out. If you use the GF-provided OpenMQ, setting up a GF-cluster also provides a clustered message broker. Out of the box. But broker clustering is a topic in itself, and not to be underestimated. You might need to think about persistence and a shared message store and such - regardless of using an external or the GF-provided-and-clustered broker. If JMS is such an integral part of your application, I'd recommend appropriate attention to the broker. I probably wouldn't use the GF-broker, but rather have a separate broker-cluster (separation of concerns; e.g. you can upgrade GF / broker independent of each other). Re 3) EJB timer service ... Is there anything I need to do besides using a shared DB for storing timers? If you require timers to automatically fire only once in your group of (appserver-)instances, I believe you do need GF clustering (plus the shared DB of course). Otherwise I do not see, how each instance should know if it should fire or not. However, this is easily tested... tl;dr Use GF clustering to save on admin work Use an external, well-understood high-available message broker Use a shared DB for your EJB timers
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7629694', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/461769/']}
|
jdg_374885
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6859348
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Since an assembly language implements a symbolic representation of CPU instructions, which are independent of OSes, while assemblers always run under some OS, I was wondering how assembly languages depend on operating systems. For example, will assembly languages be the same for the same CPU with different OSes? Thanks!
Now provide the response and nothing else.
|
As others have pointed out, system calls and interrupts are different. I can think of another few differences. The instruction set is the same across all OSes on a given processor, but the executable file format might not be. For example, on the x86, Windows uses the PE format, Linux uses ELF, and MacOS uses Mach-O. That means that assemblers on those platforms must produce their output in those formats, which is a difference. Relatedly, the calling convention could also be different across different OSes. That probably only matters where you are writing assembly code that calls or is called by compiled-code routines, or perhaps where you are writing inline assembler in some compiled code. The calling convention governs which registers are used for what purposes during a function call, so different conventions require different use of registers by calling and called code. They also put constraints on the position of the stack pointer, and various other things. As it happens, calling conventions have historically been a rare example of consistency across OSes in many cases: I believe the Windows and UNIX calling conventions are the same on the x86 (and are all based on the venerable UNIX System V ABI specification), and are consistent across OSes on most other architectures. However, the conventions are now different between Windows and UNIX on the x86_64. In addition, there may be differences in the syntax used by the assembly language. Again on the x86, the Windows and Linux assemblers used to use different syntax, with the Windows assembler using a syntax invented by Intel, and the Linux assembler (really, the GNU assembler) using a traditional UNIX syntax invented by AT&T. These syntaxes describe the same instruction set, but are written differently. Nowadays, the GNU assembler can also understand the Intel syntax, so there is less of a difference.
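To make the syntax point concrete, here is the same x86 instruction (loading the constant 1 into EAX) written both ways; this fragment is my own illustration rather than something from the original answer:

mov eax, 1          ; Intel syntax (MASM/NASM): destination first, bare register names
movl $1, %eax       # AT&T syntax (GNU as): source first, % on registers, $ on immediates, operand-size suffix on the mnemonic

Both assemble to the same machine code; only the textual representation differs.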
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6859348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/156458/']}
|
jdg_374886
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24617414
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I successfully create a plot using the following: # suppose I have a p <- ggplot(data=df, ...) then the following works # I get those two segments plotted correctlyp <- p + geom_segment(aes(x=1,y=103,xend=1,yend=107))p <- p + geom_segment(aes(x=5,y=103,xend=5,yend=107)) However, if I do: values <- c(1, 5)for (i in values) { p <- p + geom_segment(aes(x=i,y=103,xend=i,yend=107))} It doesn't work, only the last segment is created. Can anyone advise what's wrong here?
Now provide the response and nothing else.
|
It has to do with the lazy evaluation of the aes() values. You are binding to the variable i but not actually doing anything with it in the loop. The mappings aren't resolved till you actually print(p) . Essentially this means they are all being bound to i and after the loop exits, i will have the value it had during the final loop. So the problem really is you shouldn't be using aes() here as you don't really want active binding. Just set the x and xend values outside the aes() . (And since the y 's are constant they should be outside the aes() as well). values <- c(1, 5)for (i in values) { p <- p + geom_segment(x=i, y=103, xend=i, yend=107)}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24617414', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1142881/']}
|
jdg_374887
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37651803
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried to implement soft-max with the following code ( out_vec is a numpy vector of floats): numerator = np.exp(out_vec)denominator = np.sum(np.exp(out_vec))out_vec = numerator/denominator However, I got an overflow error because of np.exp(out_vec) . Therefore, I checked (manually) what the upper limit of np.exp() is, and found that np.exp(709) is a number, but np.exp(710) is considered to be np.inf . Thus, to try to avoid the overflow error, I modified my code as follows: out_vec[out_vec > 709] = 709 #prevent np.exp overflownumerator = np.exp(out_vec)denominator = np.sum(np.exp(out_vec))out_vec = numerator/denominator Now, I get a different error: RuntimeWarning: invalid value encountered in greater out_vec[out_vec > 709] = 709 What's wrong with the line I added? I looked up this specific error and all I found is people's advice on how to ignore the error. Simply ignoring the error won't help me, because every time my code encounters this error it does not give the usual results.
Now provide the response and nothing else.
|
Your problem is caused by the NaN or Inf elements in your out_vec array. You could use the following code to avoid this problem: if np.isnan(np.sum(out_vec)): out_vec = out_vec[~np.isnan(out_vec)] # just remove nan elements from vectorout_vec[out_vec > 709] = 709... or you could use the following code to leave the NaN values in your array: out_vec[ np.array([e > 709 if not np.isnan(e) else False for e in out_vec], dtype=bool) ] = 709
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37651803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2097126/']}
|
jdg_374888
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
358202
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the book Tolman R.C. Relativity, thermodynamics, and cosmology (3pr., Oxford, 1949) I read that the de Sitter Universe is static as well as the Einstein Universe . But the de Sitter Universe is an expanding Universe. I thought "static" meant "not expanding", but perhaps that is wrong. What does static universe mean?
Now provide the response and nothing else.
|
A static spacetime normally means there is an irrotational global timelike Killing vector, and this isn't the case for the de Sitter geometry so the de Sitter geometry wouldn't normally be described as static. However the de Sitter metric can be written using static coordinates : $$ ds^2 = -\left(1 - \frac{\Lambda}{3}r^2\right)dt^2 + \frac{dr^2}{\left(1 - \frac{\Lambda}{3}r^2\right)} + r^2d\Omega^2 $$ I don't have Tolman's book, but I wonder if that's what he means.
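As a small consistency check on those coordinates (my own addition, not something from Tolman): the coefficient of $dt^2$ changes sign at a finite radius, so the Killing vector $\partial_t$ is timelike, and the metric looks static, only inside the de Sitter horizon:

$$ g_{tt} = -\left(1 - \frac{\Lambda}{3}r^2\right) < 0 \quad \Longleftrightarrow \quad r < \sqrt{\frac{3}{\Lambda}}, $$

so the static form covers only a patch of the full de Sitter spacetime rather than making the whole geometry static.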
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/358202', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/690/']}
|
jdg_374889
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2546868
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a pool of MySQL connections for a web-based data service. When it starts to service a request, it takes a connection from the pool to use. The problem is that if there has been a significant pause since that particular connection has been used, the server may have timed it out and closed its end. I'd like to be able to detect this in the pool management code. The trick is this: The environment in which I'm coding gives me only a very abstract API into the connection. I can basically only execute SQL statements. I don't have access to the actual socket or direct access to the MySQL client API. So, the question is: What is the cheapest MySQL statement I can execute on the connection to determine if it is working? For example SELECT 1; should work, but I'm wondering if there is something even cheaper? Perhaps something that doesn't even go across the wire, but is handled in the MySQL client lib and effectively answers the same question? Clarification: I'm not concerned about checking if the MySQL server is running, or if its database configuration is up enough to answer queries. If those things are down, then the subsequent SQL the service executes will get and handle the appropriate error. I'm only really concerned with if the TCP connection is open… since if the server closed it, then the web service's SQL will get an error that means "just reconnect and try again", and that would be inconvenient to do once down in the muck of the service code. Closure: The /* ping */ hack is exactly the sort of thing I was looking for, but alas is only available via JDBC. Reading through the docs for that hack, it is clear it was put there for exactly the same reason I wanted it. For the curious, I'm working in Haskell , using HDBC and HDBC-mysql . I'm going to ask the author of HDBC-mysql to add a way to call mysql_ping() either directly or via a similar hack. Vlad's DO 1 was also the kind of thing I was after, and since the other hack isn't available outside of JDBC, I'll be using it. Thanks for all the great discussion, especially @Vlad!
Now provide the response and nothing else.
|
You will not know the real state of the connection without going over the wire , and SELECT 1 is a good enough candidate (arguably you could come up with a shorter command which takes less time to parse, but compared to network or even loopback latency those savings would be insignificant.) This being said, I would argue that pinging a connection before checking it out from the pool is not the best approach . You should probably simply have your connection pool manager enforce its own keep-alive (timeout) policy to avoid being disconnected by the server (short of a more serious intervening connectivity issue, which could affect you smack in the middle of regular operations anyway -- and which your connection pool manager would be unable to help with anyway), as well as in order not to hog the database (think filehandles and memory usage) needlessly. It is therefore questionable, in my opinion, what value testing for connectivity condition before checking out a connection from the pool really has. It may be worth testing connection status before a connection is checked in back into the pool , but that can be done implicitly by simply marking the connection as dirty when an SQL hard error (or equivalent exception) arises (unless the API you are using already exposes an is-bad-like call for you.) I would therefore recommend: implementing a client-side keep-alive policy; not performing any checks when checking out connections from the pool; performing dirty checks before a connection is returned to the pool; and letting the application code deal with other (non-timeout) exceptional connection conditions. UPDATE It would appear from your comments that you really really want to ping the connection (I assume that is because you don't have full control over, or knowledge of, timeout characteristics on the MySQL server or intervening network equipment such as proxies etc.) In this case you can use DO 1 as an alternative to SELECT 1 ; it is marginally faster -- shorter to parse, and it does not return actual data (although you will get the TCP acks, so you will still do the roundtrip validating that the connection is still established.) UPDATE 2 Regarding Joshua's post , here are packet capture traces for various scenarios: SELECT 1;13:51:01.463112 IP client.45893 > server.mysql: P 2270604498:2270604511(13) ack 2531191393 win 1460 <nop,nop,timestamp 2983462950 59680547>13:51:01.463682 IP server.mysql > client.45893: P 1:57(56) ack 13 win 65306 <nop,nop,timestamp 59680938 2983462950>13:51:01.463698 IP client.45893 > server.mysql: . ack 57 win 1460 <nop,nop,timestamp 2983462951 59680938>DO 1;13:51:27.415520 IP client.45893 > server.mysql: P 13:22(9) ack 57 win 1460 <nop,nop,timestamp 2983488906 59680938>13:51:27.415931 IP server.mysql > client.45893: P 57:68(11) ack 22 win 65297 <nop,nop,timestamp 59681197 2983488906>13:51:27.415948 IP client.45893 > server.mysql: . ack 68 win 1460 <nop,nop,timestamp 2983488907 59681197>mysql_ping14:54:05.545860 IP client.46156 > server.mysql: P 69:74(5) ack 78 win 1460 <nop,nop,timestamp 2987247459 59718745>14:54:05.546076 IP server.mysql > client.46156: P 78:89(11) ack 74 win 65462 <nop,nop,timestamp 59718776 2987247459>14:54:05.546092 IP client.46156 > server.mysql: . ack 89 win 1460 <nop,nop,timestamp 2987247459 59718776> As you can see, except for the fact that the mysql_ping packet is 5 bytes instead of DO 1; 's 9 bytes, the number of roundtrips (and consequently, network-induced latency) is exactly the same.
The only extra cost you are paying with DO 1 as opposed to mysql_ping is the parsing of DO 1 , which is trivial.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2546868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138311/']}
|
jdg_374890
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3463716
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I wish to set the title of my webpage to Ultan.me - Whatever the post title . I want it to display the post title. The posts are submitted to a MySQL database and the title row is called "title". Any help is appreciated with this small question. Update: Here is the page itself now but it doesn't display the title. Should I open the php document and connect to my database somewhere different to it's current locations? The Code (The only necessary piece is the beginning): <html><head><meta name="keywords" content="Mac user Ultan Casey TheCompuGeeks UltanKC"><title>Ultan.me - <?echo $title;?></title><link rel="stylesheet" href="css/styles.css" type="text/css" /><script type="text/javascript"src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script><script type="text/javascript" src="js/jquery.labelify.js"></script><script type="text/javascript">$(document).ready(function(){ $(":text").labelify();});</script><style>a {text-decoration:none}</style></head><body><div id="main"><!-- Menu Start --><div id="menu"><ul><li><a href="index.php">home</a></li><li><a href="index.php">about me</a></li><li><a href="index.php">archives</a></li><li><a href="index.php">contact</a></li><li><a href="index.php">gallery</a></li></ul></div><!-- Menu End --> <img src="images/banner.png" /><div id="content"><div id="posts"><?phpmysql_connect ('localhost', 'root', 'root') ;mysql_select_db ('ultankc');if (!isset($_GET['id']) || !is_numeric($_GET['id'])) { die("Invalid ID specified.");}$id = (int)$_GET['id'];$sql = "SELECT * FROM php_blog WHERE id='$id' LIMIT 1";$result = mysql_query($sql) or print ("Can't select entry from table php_blog.<br />" . $sql . "<br />" . mysql_error());while($row = mysql_fetch_array($result)) { $date = date("l F d Y", $row['timestamp']); $title = stripslashes($row['title']); $entry = stripslashes($row['entry']); $get_categories = mysql_query("SELECT * FROM php_blog_categories WHERE `category_id` = $row[category]"); $category = mysql_fetch_array($get_categories); ?><p><?php echo "<p id='post-title'><strong><a href=\"post.php?id=". $id . "\">" . $title . "</a></strong></p>"; ?><br /><br /><div id="entry"><?php echo $entry; ?></div><br /><br /><p id="date">Posted in <a href="category.php?category=<?php echo $row['category']; ?>"><?php echo $category['category_name']; ?></a> on <?php echo $date; ?></p></p><h2 id="share-title">Share This Post</h2><div id="social-share"><li id="link-right"><a href="http://twitter.com/home?status=I just read <?php echo $title; ?> at http://ultan.me/post.php?id=<?php echo $id; ?>"><center>Twitter</center></a></li><li id="link-left"><a href="http://digg.com/submit?url=http://ultan.me/post.php?id=<?php echo $id; ?>"><center>Digg</center></a></li><br><li id="link-right"><a href="http://www.facebook.com/sharer.php?u=http://ultan.me/post.php?id=<?php echo $id; ?>&t=<?php echo $title; ?>"><center>Facebook</center></a></li><li id="link-left"><a href="http://www.google.com/buzz/post?url=http://ultan.me/post.php?id=<?php echo $id; ?>"><center>Google Buzz</center></a></li><div class="clr"></div></div><h2 id="comments-title">Comments</h2> <div id="comment-list"> <?php}$commenttimestamp = strtotime("now");$sql = "SELECT * FROM php_blog_comments WHERE entry='$id' ORDER BY timestamp";$result = mysql_query ($sql) or print ("Can't select comments from table php_blog_comments.<br />" . $sql . "<br />" . 
mysql_error());while($row = mysql_fetch_array($result)) { $timestamp = date("l F d Y", $row['timestamp']); printf("<div class='comment-ind'><p id='comments'><a id='username' href=\"%s\">%s</a> %s</p>", stripslashes($row['url']), stripslashes($row['name']), $timestamp); print("<p class='comments'>" . stripslashes($row['comment']) . "</p><div class='clr'><br></div></div>"); }?><div class="clr"></div><form id="commentform" method="post" action="process.php"><p><input type="hidden" name="entry" id="entry" value="<?php echo $id; ?>" /><input type="hidden" name="timestamp" id="timestamp" value="<?php echo $commenttimestamp; ?>"><input type="text" name="name" id="name" title="Name (required)" /><br /><input type="text" name="email" id="email" title="Mail (will not be published) (required)" /><br /><input type="text" name="url" id="url" title="Website" value="http://" /><br /><br /><textarea title="Your Comment Goes Here" name="comment" id="comment"></textarea></p><p><input type="submit" name="submit_comment" id="submit_comment" value="Add Comment" /></p></form></div><div id="pages"><?php$total_results = mysql_fetch_array(mysql_query("SELECT COUNT(*) AS num FROM php_blog"));$total_pages = ceil($total_results['num'] / $blog_postnumber);if ($page > 1) { $prev = ($page - 1); echo "<a href=\"?page=$prev\"><< Newer</a> ";}for($i = 1; $i <= $total_pages; $i++) { if ($page == $i) { echo "$i "; } else { echo "<a href=\"?page=$i\">$i</a> "; }}if ($page < $total_pages) { $next = ($page + 1); echo "<a href=\"?page=$next\">Older >></a>";}?></div></div></div><!-- Sidebar Start --><div class="sidebar"><!-- Item 1 --><div id="side-item"> <h2><a href="http://www.dailybooth.com/UltanCasey"><img src="images/db-icon.jpg">Dailybooth</a></h2><div id="side-item-content"><center><img src="http://dailybooth.com/UltanCasey/latest/medium.jpg" /></center></div></div><!-- Item 2 --><div id="side-item"><h2><img src="images/connect.jpg" />Connect</h2></div><div id="side-item-content"><div class="tweet-title"><p><a href="http://www.twitter.com/UltanKc">Latest Tweet:</a></p></div><div id="tweet"><?phpfunction getTwitterStatus($userid){$url = "http://twitter.com/statuses/user_timeline/$userid.xml?count=1";function auto_link_twitter ($text){ // properly formatted URLs $urls = "/(((http[s]?:\/\/)|(www\.))?(([a-z][-a-z0-9]+\.)?[a-z][-a-z0-9]+\.[a-z]+(\.[a-z]{2,2})?)\/?[a-z0-9._\/~#&=;%+?-]+[a-z0-9\/#=?]{1,1})/is"; $text = preg_replace($urls, " <a href='$1'>$1</a>", $text); // URLs without protocols $text = preg_replace("/href=\"www/", "href=\"http://www", $text); // Twitter usernames $twitter = "/@([A-Za-z0-9_]+)/is"; $text = preg_replace ($twitter, " <a href='http://twitter.com/$1'>@$1</a>", $text); // Twitter hashtags $hashtag = "/#([A-Aa-z0-9_-]+)/is"; $text = preg_replace ($hashtag, " <a href='http://hashtags.org/$1'>#$1</a>", $text); return $text;}$xml = simplexml_load_file($url) or die("could not connect"); foreach($xml->status as $status){ $text = $status->text; } echo auto_link_twitter ($text); }getTwitterStatus("UltanKC");?></div><br><ul><li id="social"><a href="#">YouTube</a></li><li id="social"><a href="#">Twitter</a></li><li id="social"><a href="#">LastFM</a></li><li id="social"><a href="#">Email</a></li></ul></div><!-- Item 2 End--><div id="side-item"><h2><img src="images/archive.jpg" />Archives</h2></div><div id="archive-side"><?phpmysql_connect ('localhost', 'root', 'root') ;mysql_select_db ('ultankc');$result = mysql_query("SELECT FROM_UNIXTIME(timestamp, '%Y') AS get_year, COUNT(*) AS entries FROM php_blog GROUP BY 
get_year");while ($row = mysql_fetch_array($result)) { $get_year = $row['get_year']; $entries = $row['entries']; echo "<li id='tag'><a href=\"archives.php?year=" . $get_year . "\">Entries from " . $get_year . " (" . $entries . ")<br /></a></li>";}$result1 = mysql_query("SELECT * FROM php_blog_categories ORDER BY category_name ASC");while($row = mysql_fetch_array($result1)) { $result2 = mysql_query("SELECT COUNT(`id`) AS entries FROM php_blog WHERE category = $row[category_id]"); $num_entries = mysql_fetch_array($result2); echo '<li id="tag"><a href="category.php?category=' . $row['category_id'] . '">' . $row['category_name'] . ' (' . $num_entries['entries'] . ')</a></li>';}?></div></div><div class="clr" /></div><!-- Sidebar End --><div id="footer"><p> © Ultan Casey 2010</p><p style="margin-top: -18px; float:right"><a href="index.php">Home</a> | <a href="about.php">About Me</a> | <a href="mailto:ultankc@thecompugeeks.com">Email Me</a></p></div></div></div></body></html>?>
Now provide the response and nothing else.
|
Here's the method I use (for similar things, not just title): <?ob_start (); // Buffer output?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"><head><title><!--TITLE--></title></head><body><?$pageTitle = 'Title of Page'; // Call this in your pages' files to define the page title?></body></html><?$pageContents = ob_get_contents (); // Get all the page's HTML into a stringob_end_clean (); // Wipe the buffer// Replace <!--TITLE--> with $pageTitle variable contents, and print the HTMLecho str_replace ('<!--TITLE-->', $pageTitle, $pageContents);?> PHP usually works by executing any bits of code and printing all output directly to the browser. If you say "echo 'Some text here.';" , that string will get sent to the browser and is emptied from memory. What output buffering does is say "Print all output to a buffer. Hold onto it. Don't send ANYTHING to the browser until I tell you to." So what this does is it buffers all your pages' HTML into the buffer, then at the very end, after the closing </html> tag, it uses ob_get_contents () to get the contents of the buffer (which is usually all your page's HTML source code which would have been sent to the browser already) and puts that into a string. ob_end_clean () empties the buffer and frees some memory. We don't need the source code anymore because we just stored it in $pageContents. Then, lastly, I do a simple find & replace on your page's source code ($pageContents) for any instances of '<!--TITLE-->' and replace them with whatever the $pageTitle variable was set to. Of course, it will then replace <title><!--TITLE--></title> with Your Page's Title. After that, I echo the $pageContents, just like the browser would have. It effectively holds onto output so you can manipulate it before sending it to the browser. Hopefully my comments are clear enough. Look up ob_start () in the php manual ( http://php.net/ob_start ) if you want to know exactly how that works (and you should) :)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3463716', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_374891
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24096467
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Im trying to use a few slider gems and I keep bumping into the same problem, https://github.com/soycamo/jquery-nouislider-rails I follow the instructions to put in app/assets/javascript manifest file: //=require jquery.nouislider In the app/assets/stylesheets manifest file: //=require jquery.nouislider I have also installed the gem. However I keep getting error messages that say: Showing /home/ubuntu/foodin/app/views/layouts/application.html.erb where line #8 raised: couldn't find file 'jquery.nouislider'(in /home/ubuntu/foodin/app/assets/stylesheets/application.css:16)<%= stylesheet_link_tag "application", media: "all", "data-turbolinks-track" => true %> I think this might be related to the asset pipeline but im not sure. Any ideas? Thanks for the help! EDIT: This is my development.rb config file incase its something to do with that: Foodin::Application.configure do # Settings specified here will take precedence over those in config/application.rb. # In the development environment your application's code is reloaded on # every request. This slows down response time but is perfect for development # since you don't have to restart the web server when you make code changes. config.cache_classes = false # Do not eager load code on boot. config.eager_load = false # Show full error reports and disable caching. config.consider_all_requests_local = true config.action_controller.perform_caching = false # Don't care if the mailer can't send. config.action_mailer.raise_delivery_errors = false # Print deprecation notices to the Rails logger. config.active_support.deprecation = :log # Raise an error on page load if there are pending migrations config.active_record.migration_error = :page_load # Debug mode disables concatenation and preprocessing of assets. # This option may cause significant delays in view rendering with a large # number of complex assets. config.assets.debug = true#NEW: Added for Deviseconfig.action_mailer.default_url_options = { :host => 'localhost:3000' }#New:Added for developmentconfig.serve_static_assets = falseend
Now provide the response and nothing else.
|
Using a custom allocator seems a way to reduce the amount of time spent for building and releasing a std::set<...> . Below is a complete demo of a simple allocator together with a program profiling the resulting times. #include <algorithm>#include <chrono>#include <cstdlib>#include <iostream>#include <iterator>#include <memory>#include <set>#include <vector>// ----------------------------------------------------------------------------template <typename T, std::size_t pool_size = 1024>class pool_allocator{private: std::vector<T*> d_pools; T* d_next; T* d_end;public: template <typename O> struct rebind { typedef pool_allocator<O, pool_size> other; }; pool_allocator(): d_next(), d_end() {} ~pool_allocator() { std::for_each(this->d_pools.rbegin(), this->d_pools.rend(), [](T* memory){ operator delete(memory); }); } typedef T value_type; T* allocate(std::size_t n) { if (std::size_t(this->d_end - this->d_next) < n) { if (pool_size < n) { // custom allocation for bigger number of objects this->d_pools.push_back(static_cast<T*>(operator new(sizeof(T) * n))); return this->d_pools.back(); } this->d_pools.push_back(static_cast<T*>(operator new(sizeof(T) * pool_size))); this->d_next = this->d_pools.back(); this->d_end = this->d_next + pool_size; } T* rc(this->d_next); this->d_next += n; return rc; } void deallocate(T*, std::size_t) { // this could try to recycle buffers }};// ----------------------------------------------------------------------------template <typename Allocator>void time(char const* name, std::vector<int> const& random) { std::cout << "running " << name << std::flush; using namespace std::chrono; high_resolution_clock::time_point start(high_resolution_clock::now()); std::size_t size(0); { std::set<int, std::less<int>, Allocator> values; for (int value: random) { values.insert(value); } size = values.size(); } high_resolution_clock::time_point end(high_resolution_clock::now()); std::cout << ": size=" << size << " time=" << duration_cast<milliseconds>(end - start).count() << "ms\n";}// ----------------------------------------------------------------------------int main(){ std::cout << "preparing..." 
<< std::flush; std::size_t count(10000000); std::vector<int> random; random.reserve(count); std::generate_n(std::back_inserter(random), count, [](){ return std::rand(); }); std::cout << "done\n"; time<std::allocator<int>>("default allocator ", random); time<pool_allocator<int, 32>>("custom allocator (32) ", random); time<pool_allocator<int, 256>>("custom allocator (256) ", random); time<pool_allocator<int, 1024>>("custom allocator (1024)", random); time<pool_allocator<int, 2048>>("custom allocator (2048)", random); time<pool_allocator<int, 4096>>("custom allocator (4096)", random); time<std::allocator<int>>("default allocator ", random);}// results from clang/libc++:// preparing...done// running default allocator : size=10000000 time=13927ms// running custom allocator (32) : size=10000000 time=9260ms// running custom allocator (256) : size=10000000 time=9511ms// running custom allocator (1024): size=10000000 time=9172ms// running custom allocator (2048): size=10000000 time=9153ms// running custom allocator (4096): size=10000000 time=9599ms// running default allocator : size=10000000 time=13730ms// results from gcc/libstdc++:// preparing...done// running default allocator : size=10000000 time=15814ms// running custom allocator (32) : size=10000000 time=10868ms// running custom allocator (256) : size=10000000 time=10229ms// running custom allocator (1024): size=10000000 time=10556ms// running custom allocator (2048): size=10000000 time=10392ms// running custom allocator (4096): size=10000000 time=10664ms// running default allocator : size=10000000 time=17941ms
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24096467', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2699381/']}
|
jdg_374892
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6104836
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to split/divide up a continuous variable into 3 equal sized groups. Example data frame: das <- data.frame(anim = 1:15, wt = c(181,179,180.5,201,201.5,245,246.4, 189.3,301,354,369,205,199,394,231.3)) After being cut up (according to the value of wt ), I would need to have the 3 classes under the new variable wt2 like this: > das anim wt wt21 1 181.0 12 2 179.0 13 3 180.5 14 4 201.0 25 5 201.5 26 6 245.0 27 7 246.4 38 8 189.3 19 9 301.0 310 10 354.0 311 11 369.0 312 12 205.0 213 13 199.0 114 14 394.0 315 15 231.3 2 This would be applied to a large data set.
Now provide the response and nothing else.
|
try this: split(das, cut(das$anim, 3)) if you want to split based on the value of wt , then library(Hmisc) # cut2split(das, cut2(das$wt, g=3)) anyway, you can do that by combining cut , cut2 and split . UPDATED if you want a group index as an additional column, then das$group <- cut(das$anim, 3) if the column should be index like 1, 2, ..., then das$group <- as.numeric(cut(das$anim, 3)) UPDATED AGAIN try this: > das$wt2 <- as.numeric(cut2(das$wt, g=3))> das anim wt wt21 1 181.0 12 2 179.0 13 3 180.5 14 4 201.0 25 5 201.5 26 6 245.0 27 7 246.4 38 8 189.3 19 9 301.0 310 10 354.0 311 11 369.0 312 12 205.0 213 13 199.0 114 14 394.0 315 15 231.3 2
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6104836', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/438498/']}
|
jdg_374893
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59663377
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to Flutter and working in a Flutter web application. My requirement is to create and download a text file, like below. void getData() { List<int> bytes = utf8.encode('this is the text file'); print(bytes); // Need to download this as a txt file.} Can anyone help me to achieve this?
Now provide the response and nothing else.
|
This method is based on manipulations with an HTML document. Some additional packages should be imported: import 'dart:convert';import 'dart:html' as html; // or package:universal_html/prefer_universal/html.dart Code snippet: final text = 'this is the text file';// preparefinal bytes = utf8.encode(text);final blob = html.Blob([bytes]);final url = html.Url.createObjectUrlFromBlob(blob);final anchor = html.document.createElement('a') as html.AnchorElement ..href = url ..style.display = 'none' ..download = 'some_name.txt';html.document.body.children.add(anchor);// downloadanchor.click();// cleanuphtml.document.body.children.remove(anchor);html.Url.revokeObjectUrl(url); Here is a DartPad demo.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59663377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7320128/']}
|
jdg_374894
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20794757
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an entity called BankInfo , and one of its parameters is name which is a string. I'm just wondering, is there a way in CoreData to check and see if a name already exists in BankInfo without having to retrieve every BankInfo object and cycle through them individually and check? What would be the most efficient way to accomplish this?
Now provide the response and nothing else.
|
You can use a fetch request with a predicate to find objects matching certain attributes. If you are only interested in the existence of an object with the given key, use countForFetchRequest instead of actually fetching the objects, and limit the result set to one object: NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"BankInfo"];[request setPredicate:[NSPredicate predicateWithFormat:@"name = %@", theName]];[request setFetchLimit:1];NSUInteger count = [context countForFetchRequest:request error:&error];if (count == NSNotFound) // some error occurredelse if (count == 0) // no matching objectelse // at least one matching object exists
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20794757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/875640/']}
|
jdg_374895
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16516176
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to figure out what the difference is between a PersistentHashMap, PersistentArrayMap, PersistentTreeMap, and PersistentStructMap. Also if I use {:a 1} it gives me a PersistentArrayMap but can this change to any of the other ones if I give it objects or things other than keys?
Now provide the response and nothing else.
|
The four implementations you list fall into three groups: "literal" : PersistentArrayMap and PersistentHashMap : basic map types used when dealing with map literals (though constructor functions are also available with different behaviour around handling duplicate keys -- in Clojure 1.5.x literals throw exceptions when they discover duplicate keys, constructor functions work like left-to-right repeated conj ing; this behaviour has been evolving from version to version). Array maps get promoted to hash maps when growing beyond a certain number of entries (9 IIRC). Array maps exist because they are faster for small maps; they also differ from hash maps in that they keep entries in insertion order prior to promotion to hash map (you can use clojure.core/array-map to get arbitrarily large array maps, which may be useful if you really know you'd benefit from insertion-order traversals and the map won't be too large, perhaps just a bit over the usual threshold; NB. a subsequent assoc to such an oversized array map will return a hash map). Array maps use arrays with keys and values interleaved; the PHM uses a persistent version of Phil Bagwell's hash array mapped trie with separate chaining for hash collisions and separate node types for mostly-empty and at-least-half-full nodes and is easily the most complex data structure in Clojure. sorted : PersistentTreeMap instances are created by special request only (a call to sorted-map or sorted-map-by ). They are implemented as red-black trees and maintain entries in a particular order, as specified by the default compare comparator if created with sorted-map or a user-supplied comparator if created with sorted-map-by . special-purpose, probably deprecated : PersistentStructMap is not used very often and mostly viewed as deprecated in favour of records, although I actually can't remember right now if there ever was an official deprecation notice. The original purpose was to provide maps with particularly fast access to certain often-used keys. This can now be accomplished with records when using keywords for field access (with the keyword in the operator position: (:foo instance-of-some-record-with-field-foo) ), though it's important to note that records are not = to regular maps with the same entries. All these four built-in map types fall into the same "equality partition", that is, any two maps of one of the four classes mentioned above will be equal if (and only if) they contain the same keys (as determined by Clojure's = ) with the same corresponding values. Records, as mentioned in 3. above, are map-like, but each record type forms its own equality partition.
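A quick REPL sketch of the points above (my own snippet; the exact promotion threshold and printed class names may vary slightly between Clojure releases):

(class {:a 1 :b 2})                        ;=> clojure.lang.PersistentArrayMap (small literal)
(class (zipmap (range 20) (range 20)))     ;=> clojure.lang.PersistentHashMap (grown past the array-map threshold)
(class (sorted-map :b 2 :a 1))             ;=> clojure.lang.PersistentTreeMap (created on request)
(keys (sorted-map :b 2 :a 1))              ;=> (:a :b), kept in comparator order
(= {:a 1 :b 2} (sorted-map :a 1 :b 2))     ;=> true, all four built-in map types share one equality partition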
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16516176', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1291986/']}
|
jdg_374896
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12584
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A doubly-stochastic matrix is an $n \times n$ matrix $P$ such that $$ \sum_{i=1}^n p_{ij} = \sum_{j=1}^n p_{ij} = 1 $$ where $p_{ij}\ge 0$ . Can someone please suggest an algorithm for generating these matrices uniform randomly?
Now provide the response and nothing else.
|
What we want is to generate a bistochastic matrix according to the Haar measure, which is the unique distribution which is invariant to multiplication by bistochastic matrices from both sides. The standard algorithm is to take some iid matrix (each entry is chosen iid from some distribution over non-negative numbers) and then repeatedly make it row-stochastic and column-stochastic - this is like projecting the matrix to the linear subspace of stochastic matrices. The process converges pretty fast, and we get some random bistochastic matrix, even though not according to the wanted distribution. A recent paper discusses these issues in the context of estimating the measure of all bistochastic matrices (an open problem).
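For concreteness, the alternating row/column normalization can be sketched in a few lines of Python/NumPy; the function name, iteration count and iid uniform starting entries are arbitrary choices of mine, and, as noted above, the result is a valid bistochastic matrix but not one drawn from the wanted uniform distribution:

import numpy as np

def random_bistochastic(n, iters=1000):
    m = np.random.rand(n, n)                 # iid nonnegative entries
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)    # normalize rows to sum to 1
        m /= m.sum(axis=0, keepdims=True)    # normalize columns to sum to 1
    return m

p = random_bistochastic(4)
print(p.sum(axis=0), p.sum(axis=1))          # both vectors are close to all ones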
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/12584', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1004/']}
|
jdg_374897
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15431
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I'm writing a function that takes a color as a parameter; for example: drawShape[color_] := Graphics[{Style[Disk[], color]}]; But if the caller inputs an invalid color, bad things happen: So I want to use a pattern to define drawShape only for values that are actually colors. Conceptually, drawShape[color_Color] := ... The problem is that unlike (say) List s, Integer s, Real s, Complex es, or Graphics es, color objects do not share a Color head. That is, In[1]:= Red // HeadOut[1]= RGBColorIn[2]:= Hue[0.5] // HeadOut[2]= HueIn[3]:= GrayLevel[0.5] // HeadOut[3]= GrayLevelIn[4]:= CMYKColor[0, 1, 1, 1/2] // HeadOut[4]= CMYKColorIn[4]:= Opacity[0.5, Purple] // HeadOut[4]= OpacityIn[5]:= Transparent // HeadOut[5]= GrayLevel So that won't work. I also don't see any ColorQ function, with which I could write drawShape[color_ ? ColorQ] := ... . How can I write a pattern that matches any valid color object? Is there a more robust way than just testing for each of these heads?
Now provide the response and nothing else.
|
Original method colorQ = Quiet @ Check[Blend @ {#, Red}; True, False] &;colorQ /@ {Red, Hue[0.5], GrayLevel[0.5], CMYKColor[0, 1, 1, 1/2], Opacity[0.5, Purple]} {True, True, True, True, True} colorQ /@ {17, 1.3, Pi, "not a color", {1, 2, 3}, Hue["bad arg"]} {False, False, False, False, False, False} You would use: drawShape[color_?colorQ] := . . . Inspired by kguler's comment this might also be formulated as: colorQ = Quiet[Head @ Darker @ # =!= Darker] &; Or: colorQ = FreeQ[Quiet @ Darker @ #, Darker] &; Edit: Darker works on entire Image and Graphics objects and therefore the two forms immediately above will incorrectly return True in these cases. Blend solution is still valid. Version 10 update and analysis In version 10 there is a built-in function for this: ColorQ ColorQ[color] yields True if color is a valid color directive and False otherwise. A bit of spelunking reveals that the inner definition of this function is (contexts stripped for clarity): iColorQ[args_?(ColorDirectiveQ[Head[#1[[1]]]] &), opts_] := NumberQ[Quiet[ToColor[args[[1]], XYZColor][[1]]]] This is very similar to my own method, however the inner definition of ColorDirectiveQ omits Opacity : iColorDirectiveQ[args_, opts_] := TrueQ[Quiet[ MatchQ[args[[1]], GrayLevel | RGBColor | CMYKColor | Hue | XYZColor | LUVColor | LABColor | LCHColor]]] This means that the function will return False for e.g. Opacity[0.5, Purple] where mine returns True .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathematica.stackexchange.com/questions/15431', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1285/']}
|
jdg_374898
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14514461
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a few checkboxes: <input type='checkbox' value="apple" checked><input type='checkbox' value="orange"><input type='checkbox' value="pear" checked><input type='checkbox' value="naartjie"> That I would like to bind to a list in my controller such that whenever a checkbox is changed the controller maintains a list of all the checked values, for example, ['apple', 'pear'] . ng-model seems to only be able to bind the value of one single checkbox to a variable in the controller. Is there another way to do it so that I can bind the four checkboxes to a list in the controller?
Now provide the response and nothing else.
|
There are two ways to approach this problem. Either use a simple array or an array of objects. Each solution has it pros and cons. Below you'll find one for each case. With a simple array as input data The HTML could look like: <label ng-repeat="fruitName in fruits"> <input type="checkbox" name="selectedFruits[]" value="{{fruitName}}" ng-checked="selection.indexOf(fruitName) > -1" ng-click="toggleSelection(fruitName)" > {{fruitName}}</label> And the appropriate controller code would be: app.controller('SimpleArrayCtrl', ['$scope', function SimpleArrayCtrl($scope) { // Fruits $scope.fruits = ['apple', 'orange', 'pear', 'naartjie']; // Selected fruits $scope.selection = ['apple', 'pear']; // Toggle selection for a given fruit by name $scope.toggleSelection = function toggleSelection(fruitName) { var idx = $scope.selection.indexOf(fruitName); // Is currently selected if (idx > -1) { $scope.selection.splice(idx, 1); } // Is newly selected else { $scope.selection.push(fruitName); } };}]); Pros : Simple data structure and toggling by name is easy to handle Cons : Add/remove is cumbersome as two lists (the input and selection) have to be managed With an object array as input data The HTML could look like: <label ng-repeat="fruit in fruits"> <!-- - Use `value="{{fruit.name}}"` to give the input a real value, in case the form gets submitted traditionally - Use `ng-checked="fruit.selected"` to have the checkbox checked based on some angular expression (no two-way-data-binding) - Use `ng-model="fruit.selected"` to utilize two-way-data-binding. Note that `.selected` is arbitrary. The property name could be anything and will be created on the object if not present. --> <input type="checkbox" name="selectedFruits[]" value="{{fruit.name}}" ng-model="fruit.selected" > {{fruit.name}}</label> And the appropriate controller code would be: app.controller('ObjectArrayCtrl', ['$scope', 'filterFilter', function ObjectArrayCtrl($scope, filterFilter) { // Fruits $scope.fruits = [ { name: 'apple', selected: true }, { name: 'orange', selected: false }, { name: 'pear', selected: true }, { name: 'naartjie', selected: false } ]; // Selected fruits $scope.selection = []; // Helper method to get selected fruits $scope.selectedFruits = function selectedFruits() { return filterFilter($scope.fruits, { selected: true }); }; // Watch fruits for changes $scope.$watch('fruits|filter:{selected:true}', function (nv) { $scope.selection = nv.map(function (fruit) { return fruit.name; }); }, true);}]); Pros : Add/remove is very easy Cons : Somewhat more complex data structure and toggling by name is cumbersome or requires a helper method Demo : http://jsbin.com/ImAqUC/1/
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/14514461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1185242/']}
|
jdg_374899
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
263381
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding is that elements heavier than iron and nickel are not formed in a star but, can heavy elements such as lead and others be present/found in a star's core ? I ask because the following document http://arxiv.org/abs/astro-ph/0410628 gives the impression that lead and other heavy elements may be found in a star's core and I thought that was not possible.
Now provide the response and nothing else.
|
The heavier-than-iron elements are not formed during stellar fusion, but they are formed during supernovae. Thus the oldest stars cannot have these heavier elements, but new generations, formed from 'recycled' material of other stars that went supernova, can. See Stellar populations . There are heavier-than-iron elements on Earth; the Earth was formed from the same stuff clumping together that the sun was, so some fraction of such elements should also be present in the sun. EDIT: Sorry, I forgot about the s-process that Wolphram jonny mentioned, thanks for that. The process in supernovae that I mentioned is the r-process.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/263381', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/121421/']}
|
jdg_374900
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
165396
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The manufacturer of this radio module I bought claims that it's FCC certified, but there's no FCC ID on it (though there is the FCC logo). If the final product is to be FCC tested, would it then be tested as an intentional radiator since there's no FCC ID on the radio module (rendering the certification invalid)? AFAIK, that costs a lot more than an unintentional radiator test. Their website has the FCC ID - could I just print it on a sticker and stick it on the module? Thanks
Now provide the response and nothing else.
|
Here's my understanding. Since it is a radio module, it is definitely an intentional radiator, and if they claim it has been FCC-certified it should have an FCC ID on each module. You should contact the manufacturer and ask why it's not there. If you incorporate the module into your own product, and the module already has an antenna (e.g. a "chip" antenna like the one in the picture) then you don't have to do anything more except provide the same FCC ID on the outside of your product, or ( as part of a recent rule change ) in an accessible menu if your product has a screen. In any case, the FCC ID must also be included in a label on the outside of the package. However , if the module has no antenna, and you connect it to one you made (e.g. PCB trace antenna) or bought, then guess what, you have to do the FCC testing all over again(!) which can cost many thousands of dollars. It's been a few years since I was directly involved in a product like that (cell phone modem + PCB antenna), so I can't remember if you are supposed to include both the FCC ID for the modem and your own FCC ID, or just the latter. I suspect it's both.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/165396', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/72586/']}
|
jdg_374901
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38041494
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to this answer , ref should be an int . But for some reason it evaluated to int& , both in gcc and MSVC2015, while decltype(b) is correctly evaluated to just int . Why so? int a = 1, b = 2;decltype(a, b) ref; // ref is int&decltype(b) var; // var is int
Now provide the response and nothing else.
|
a, b is an expression. According to the decltype rules for expressions, if the result of the expression is an lvalue, the type is deduced as T&: 7.1.6.2/4 Simple type specifiers [dcl.type.simple] For an expression e, the type denoted by decltype(e) is defined as follows: if e is an unparenthesized id-expression or an unparenthesized class member access (5.2.5), decltype(e) is the type of the entity named by e. If there is no such entity, or if e names a set of overloaded functions, the program is ill-formed; otherwise, if e is an xvalue, decltype(e) is T&&, where T is the type of e; otherwise, if e is an lvalue, decltype(e) is T&, where T is the type of e; otherwise, decltype(e) is the type of e. The confusing part about the difference between "type of the entity named by e " and "type of e " is easy to understand with an example: If some entity e is declared as int& e = x; , then later, in the expression e , the type of e is int , and the type of the entity named by e is int& . In short, the type of e drops reference qualifiers.
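These deductions can be checked directly with static_assert; a minimal sketch of my own:

#include <type_traits>

int main() {
    int a = 1, b = 2;
    static_assert(std::is_same<decltype(b), int>::value,     "unparenthesized id-expression: type of the entity");
    static_assert(std::is_same<decltype((b)), int&>::value,  "parenthesized: an lvalue expression, so T&");
    static_assert(std::is_same<decltype(a, b), int&>::value, "comma expression yields an lvalue, so T&");
    (void)a; (void)b;
}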
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38041494', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3951525/']}
|
jdg_374902
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32510183
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can see why the auto type in C++11 improves correctness and maintainability. I've read that it can also improve performance ( Almost Always Auto by Herb Sutter), but I miss a good explanation. How can auto improve performance? Can anyone give an example?
Now provide the response and nothing else.
|
auto can aid performance by avoiding silent implicit conversions . An example I find compelling is the following. std::map<Key, Val> m;// ...for (std::pair<Key, Val> const& item : m) { // do stuff} See the bug? Here we are, thinking we're elegantly taking every item in the map by const reference and using the new range-for expression to make our intent clear, but actually we're copying every element. This is because std::map<Key, Val>::value_type is std::pair<const Key, Val> , not std::pair<Key, Val> . Thus, when we (implicitly) have: std::pair<Key, Val> const& item = *iter; Instead of taking a reference to an existing object and leaving it at that, we have to do a type conversion. You are allowed to take a const reference to an object (or temporary) of a different type as long as there is an implicit conversion available, e.g.: int const& i = 2.0; // perfectly OK The type conversion is an allowed implicit conversion for the same reason you can convert a const Key to a Key , but we have to construct a temporary of the new type in order to allow for that. Thus, effectively our loop does: std::pair<Key, Val> __tmp = *iter; // construct a temporary of the correct typestd::pair<Key, Val> const& item = __tmp; // then, take a reference to it (Of course, there isn't actually a __tmp object, it's just there for illustration, in reality the unnamed temporary is just bound to item for its lifetime). Just changing to: for (auto const& item : m) { // do stuff} just saved us a ton of copies - now the referenced type matches the initializer type, so no temporary or conversion is necessary, we can just do a direct reference.
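The mismatch is easy to verify with a one-line check (my own snippet, not part of the original answer): the map's value_type has a const key, so a reference to std::pair<Key, Val> cannot bind to the stored element directly.

#include <map>
#include <type_traits>
#include <utility>

int main() {
    using M = std::map<int, double>;
    static_assert(std::is_same<M::value_type, std::pair<const int, double>>::value,
                  "the stored element type has a const key");
    // binding std::pair<int, double> const& to *it therefore materializes a temporary copy,
    // whereas auto const& binds directly to the stored std::pair<const int, double>.
}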
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/32510183', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1624629/']}
|
jdg_374903
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32428819
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to produce an html document using knitr/rmarkdown. Currently, the file is over 20MB and I'm trying to find a way to reduce it. The large file size is probably due to my plots which have a lot of points in them. If I change my output type to pdf , I can get it down to 1.7MB. I'm wondering if there is a way to reduce my file while keeping it as a html . EDIT: Here's a minimal working example which I did in RStduio. ---title: "Untitled"author: "My Name"date: "September 7, 2015"output: html_document---```{r}library(ggplot2)knitr::opts_chunk$set(dev='svg')``````{r}set.seed(1)mydf <- data.frame(x=rnorm(2e4),y=rnorm(2e4))ggplot(mydf, aes(x,y)) + geom_point(alpha=0.6)``` I also noticed that if I have too many observations, the plot doesn't get generated at all. I just get an empty box with a question mark in the output. ```{r}set.seed(2)mydf <- data.frame(x=rnorm(5e4),y=rnorm(5e4))ggplot(mydf, aes(x,y)) + geom_point(alpha=0.6)# ...plot doesn't appear in output ```
Now provide the response and nothing else.
|
Following the suggestion of @daroczig to use the "dpi" knitr chunk option, I modified your code as follows (see below). You had set the dev chunk option equal to "svg", which produces very large vector graphics files, especially for images made up of many elements (points, lines, etc.) I set the dev chunk option back equal to "png", which is the default raster graphics format for HTML output. So you don't need to touch it at all. Keeping the dev chunk option equal to "png" dramatically reduces the HTML output file size. I set the dpi chunk option equal to 36 (72 is the default), to lower the image resolution, and decrease the HTML output file size further. I set the out.width and out.height chunk options equal to "600px", to increase the image dimensions. You can change the dpi, out.width, and out.height options, until you get the HTML output file size and the image dimension to what you want. There's a trade-off between output file size and image resolution. After knitting the code, I got an HTML output file size equal to 653kB, even when plotting 5e4 data points. ---title: "Change size of output HTML file by reducing resolution of plot image"author: "My Name"date: "September 7, 2015"output: html_document---```{r}# load ggplot2 silentlysuppressWarnings(library(ggplot2))# chunk option dev="svg" produces very large vector graphics filesknitr::opts_chunk$set(dev="svg")# chunk option dev="png" is the default raster graphics format for HTML outputknitr::opts_chunk$set(dev="png")``````{r, dpi=36, out.width="600px", out.height="600px"}# chunk option dpi=72 is the default resolutionset.seed(1)mydf <- data.frame(x=rnorm(5e4),y=rnorm(5e4))ggplot(mydf, aes(x,y)) + geom_point(alpha=0.6)```
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32428819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3808364/']}
|
jdg_374904
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6315358
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been having lot of problems with users uploading images on my website. They can upload up to 6 images Originally I had to change values in php.ini to: upload_max_filesize = 2000Mpost_max_size = 2000Mmax_execution_time = 120max_file_uploads = 7memory_limit=128M I had to change to this as was getting all sorts of errors like out of memory, maximum post exceeded etc. Everything was going ok till I checked my error log which contained : [11-Jun-2011 04:33:06] PHP Warning: Unknown: POST Content-Length of 113 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:33:12] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:33:27] PHP Warning: Unknown: POST Content-Length of 74 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:33:34] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:33:43] PHP Warning: Unknown: POST Content-Length of 77 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:33:48] PHP Warning: Unknown: POST Content-Length of 74 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:33:53] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:34:20] PHP Warning: Unknown: POST Content-Length of 133 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:35:29] PHP Warning: Unknown: POST Content-Length of 131 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:36:00] PHP Warning: Unknown: POST Content-Length of 113 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:36:06] PHP Warning: Unknown: POST Content-Length of 75 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0[11-Jun-2011 04:36:34] PHP Warning: Unknown: POST Content-Length of 116 bytes exceeds the limit of -1988100096 bytes in Unknown on line 0 if I change the post max value back top 8M I get message like this: PHP Warning: POST Content-Length of 11933650 bytes exceeds the limit of 8388608 bytes in Unknown on line 0 Any ideas where I am going wrong?
Now provide the response and nothing else.
|
On some 32-bit systems, PHP converts size settings like 2000M or 2G to a number of bytes without performing a boundary check. A value of 2G or 2048M then overflows to -2147483648 bytes. Some PHP versions cap the value at the top instead, so it won't go negative (that cap is the 32-bit signed integer limit). If you want the maximum possible number of bytes on such a system, use 2147483647, which is two gigabytes minus one byte. Alternatively, if you need to deal with large data, consider a 64-bit system. Additionally, you should consider the following: according to the PHP manual, the memory_limit setting is the more important one. If it does not provide enough memory, the post-data size check will still pass, but PHP will not have enough memory to actually handle the post data, and you will get a different error saying the memory limit was exceeded. So when you configure PHP, make sure post_max_size is smaller than memory_limit. In your example the memory_limit is 128M, so PHP cannot process post data larger than roughly 128 megabytes. (This blog post shows what can happen and how large memory settings behave on 32-bit and 64-bit systems.)
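For illustration only, a php.ini sketch that follows this ordering (the exact numbers are placeholder assumptions, not recommendations for every server — pick values that fit your uploads and stay below the 32-bit cap on 32-bit builds):

; each limit should be smaller than the next one up
upload_max_filesize = 64M    ; largest single file you expect
post_max_size       = 256M   ; must hold all files in one request plus the form fields
memory_limit        = 512M   ; must be larger than post_max_size
max_file_uploads    = 7
max_execution_time  = 120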
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6315358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/535256/']}
|
jdg_374905
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34976
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I encountered this question which asks how many possible radicals are formed when $\ce{CH3CH2C(CH3)3}$ is monosubstituted by $\ce{Br2}$. The answer given is 3 while I think it should be 4, reasoning that there is a "chiral" carbon if the radical is formed on carbon 3. While arguing with my tutor two questions came to my mind: What is the geometry of the $\ce{CH3\dot{\ce{C}}HC(CH3)3}$ radical? If the radical is trigonal pyramidal, does it transition quickly between the two possible states? If so are they considered two radicals? My findings so far: This answer on SE hints that the exact shape depends on the substituents on the alkyl radical. Modern Physical Organic Chemistry states that all other (non-methyl) localized radicals are not planar. Organic Reaction Mechanisms states that the geometry of free radicals is still controversial.
Now provide the response and nothing else.
|
First off, it was very astute of you to recognize that if the radical formed at carbon 3 (the methylene carbon in the starting compound) is pyramidal, then the radical would be chiral ( the lone radical electron serving as the 4th different substituent on that carbon) and 4 different radicals would be possible. In a typical carbocation, the carbon bearing the positive charge is $\ce{sp^2}$ hybridized, meaning that the p-orbital at the cationic center is empty and the carbocation is planar. On the other hand, in a typical carbanion, the negatively charged carbon is pyramidal and approaches $\ce{sp^3}$ hybridization. This $\ce{sp^3}$-like orbital contains 2 electrons. Therefore, it would be reasonable to expect that a typical hydrocarbon radical, where the orbital holds 1 electron would be somewhere in between these two extremes and be slightly pyramidal. This is indeed the case and the degree of non-planarity is very sensitive to the substituents attached to the radical center. In the case of hydrocarbon radicals, such as those created from your starting material, the barrier to inversion from one pyramidal form to the other is extremely low, on the order of a few kcal/mole or so. Therefore, interconversion between the two enantiomeric forms of the radical would be extremely rapid at room temperature. So yes, the $\ce{CH_3\dot{C}HC(CH3)3}$ radical is slightly pyramidal and it does transition rapidly between the two pyramidal enantiomers. Your answer of 4 radicals is correct, as long as you understand that 2 of them rapidly interconvert at room temperature.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/34976', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/19088/']}
|
jdg_374906
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12676209
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a List<KeyValuePair<string, othertype>> . I need to do something along the lines of list.Find(x=>x.Key=="foobar") However, if that doesn't exist in the list, what will the behavior be? Usually it would return null, but structs can't be null.
Now provide the response and nothing else.
|
It will return default(T), which will be the same as new KeyValuePair<string, othertype>(), that is, a default-initialized struct. Basically, the default for reference types is always null, and for value types (including struct) it's the default value (0 for numerics, false for bool, a struct with every field defaulted for structures, etc.) So, for a default(KeyValuePair<string, othertype>) you'd get back a KVP where the Key was null (the default for string) and whatever the default(othertype) would be (as in the examples above)... From the MSDN: The first element that matches the conditions defined by the specified predicate, if found; otherwise, the default value for type T. Using this, if you wanted to check whether you got back the default, I'd recommend checking yourResult.Key != null to see if you got a result, or you could use a different Find method such as FindIndex, as Olivier suggests.
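For reference, a minimal sketch of both checks (the value type and the variable names here are placeholders, not taken from the question):

var list = new List<KeyValuePair<string, int>>
{
    new KeyValuePair<string, int>("foo", 1),
    new KeyValuePair<string, int>("bar", 2)
};

// Find returns default(KeyValuePair<string, int>) when nothing matches,
// i.e. a pair whose Key is null, so test the Key:
var match = list.Find(x => x.Key == "foobar");
if (match.Key != null)
{
    // found it, use match.Value
}

// FindIndex returns -1 when nothing matches, which avoids the ambiguity:
int index = list.FindIndex(x => x.Key == "foobar");
if (index >= 0)
{
    var found = list[index];
}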
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12676209', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69742/']}
|
jdg_374907
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34662713
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to write test cases that depend on parameters. My test case should be executed for each parameter and I want to see whether it succeeds or fails for each parameter. I'm used to writing things like that in Java: @RunWith(Parameterized.class)public class FibonacciTest { @Parameters public static Collection<Object[]> data() { return Arrays.asList(new Object[][] { { 0, 0 }, { 1, 1 }, { 2, 1 }, { 3, 2 }, { 4, 3 }, { 5, 5 }, { 6, 8 } }); } private int fInput; private int fExpected; public FibonacciTest(int input, int expected) { fInput= input; fExpected= expected; } @Test public void test() { assertEquals(fExpected, Fibonacci.compute(fInput)); }} How can I achieve something similar with Rust? Simple test cases are working fine, but there are cases where they are not enough. #[test]fn it_works() { assert!(true);} Note: I want the parameters as flexible as possible, for example: Read them from a file, or use all files from a certain directory as input, etc. So a hardcoded macro might not be enough.
Now provide the response and nothing else.
|
The built-in test framework does not support this; the most common approach used is to generate a test for each case using macros, like this: macro_rules! fib_tests { ($($name:ident: $value:expr,)*) => { $( #[test] fn $name() { let (input, expected) = $value; assert_eq!(expected, fib(input)); } )* }}fib_tests! { fib_0: (0, 0), fib_1: (1, 1), fib_2: (2, 1), fib_3: (3, 2), fib_4: (4, 3), fib_5: (5, 5), fib_6: (6, 8),} This produces individual tests with names fib_0 , fib_1 , &c.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34662713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/596532/']}
|
jdg_374908
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10971167
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hello, how can I remove an item from a generic list? Here is my code. I'm trying to do it right but I don't know where my mistake is:

Users us_end = new Users();
foreach (var VARIABLE in ((List<Users>)Application["Users_On"]))
{
    if (VARIABLE.Id == (int)Session["Current_Id"])
    {
        us_end.Name = VARIABLE.Name;
        us_end.Id = VARIABLE.Id;
        us_end.Data = VARIABLE.Data;
    }
}
List<Users> us = ((List<Users>)Application["Users_On"]);
us.Remove(us_end);
Application["Users_On"] = us;
Now provide the response and nothing else.
|
You have to get the same object to remove, not a copy.

Users us_end = null;
foreach (var VARIABLE in ((List<Users>)Application["Users_On"]))
{
    if (VARIABLE.Id == (int)Session["Current_Id"])
    {
        us_end = (Users)VARIABLE;
        break;
    }
}
if (us_end != null)
{
    List<Users> us = ((List<Users>)Application["Users_On"]);
    us.Remove(us_end);
    Application["Users_On"] = us;
}

Edit: just to clarify, as pst pointed out, you could also implement the IEquatable interface and the related overrides, as in Groo's answer, to make this work, but I think that is overkill for this specific case. The code above is the most common practice, but note that it's also possible to remove items from a list even if they are different instances, or even different objects, with a technique like that. Ref.: http://msdn.microsoft.com/en-us/library/ms131187.aspx
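If you do want the IEquatable route instead, a rough sketch could look like this (the Users members are assumptions based on the question's code, and equality by Id is just one possible choice):

using System;
using System.Collections.Generic;

public class Users : IEquatable<Users>
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Data { get; set; }

    public bool Equals(Users other)
    {
        return other != null && Id == other.Id;   // value equality by Id
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as Users);
    }

    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}

With that in place, us.Remove(new Users { Id = currentId }) would remove the matching element even though it is a different instance; us.RemoveAll(u => u.Id == currentId) is another option that needs no interface at all.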
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10971167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1288398/']}
|
jdg_374909
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52080991
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following are the pandas dataframe and the bar chart generated from it: colors_list = ['#5cb85c','#5bc0de','#d9534f']result.plot(kind='bar',figsize=(15,4),width = 0.8,color = colors_list,edgecolor=None)plt.legend(labels=result.columns,fontsize= 14)plt.title("Percentage of Respondents' Interest in Data Science Areas",fontsize= 16)plt.xticks(fontsize=14)for spine in plt.gca().spines.values(): spine.set_visible(False)plt.yticks([]) I need to display the percentages of each interest category for the respective subject above their corresponding bar. I can create a list with the percentages, but I don't understand how to add it on top of the corresponding bar.
Now provide the response and nothing else.
|
Try adding the following for loop to your code:

ax = result.plot(kind='bar', figsize=(15,4), width=0.8, color=colors_list, edgecolor=None)

for p in ax.patches:
    width = p.get_width()
    height = p.get_height()
    x, y = p.get_xy()
    ax.annotate(f'{height}', (x + width/2, y + height*1.02), ha='center')

Explanation

In general, you use Axes.annotate to add annotations to your plots. This method takes the text value of the annotation and the xy coords on which to place the annotation. In a barplot, each "bar" is represented by a patch.Rectangle and each of these rectangles has the attributes width, height and the xy coords of the lower left corner of the rectangle, all of which can be obtained with the methods patch.get_width, patch.get_height and patch.get_xy respectively. Putting this all together, the solution is to loop through each patch in your Axes, and set the annotation text to be the height of that patch, with an appropriate xy position that's just above the centre of the patch - calculated from its height, width and xy coords.

For your specific need to annotate with the percentages, I would first normalize your DataFrame and plot that instead.

colors_list = ['#5cb85c','#5bc0de','#d9534f']

# Normalize result
result_pct = result.div(result.sum(1), axis=0)

ax = result_pct.plot(kind='bar',figsize=(15,4),width = 0.8,color = colors_list,edgecolor=None)
plt.legend(labels=result.columns,fontsize= 14)
plt.title("Percentage of Respondents' Interest in Data Science Areas",fontsize= 16)
plt.xticks(fontsize=14)
for spine in plt.gca().spines.values():
    spine.set_visible(False)
plt.yticks([])

# Add this loop to add the annotations
for p in ax.patches:
    width = p.get_width()
    height = p.get_height()
    x, y = p.get_xy()
    ax.annotate(f'{height:.0%}', (x + width/2, y + height*1.02), ha='center')
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52080991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7194482/']}
|
jdg_374910
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20108520
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Basically I want the output of df -h , which includes both the free space and the total size of the volume. The solution needs to work on Windows, Linux, and Mac and be written in Go. I have looked through the os and syscall Go documentation and haven't found anything. On Windows, even command line utils are either awkward ( dir C:\ ) or need elevated privileges ( fsutil volume diskfree C:\ ). Surely there is a way to do this that I haven't found yet... UPDATE: Per nemo's answer and invitation, I have provided a cross-platform Go package that does this.
Now provide the response and nothing else.
|
On POSIX systems you can use sys.unix.Statfs . Example of printing free space in bytes of current working directory: import "golang.org/x/sys/unix"import "os"var stat unix.Statfs_twd, err := os.Getwd()unix.Statfs(wd, &stat)// Available blocks * size per block = available space in bytesfmt.Println(stat.Bavail * uint64(stat.Bsize)) For Windows you need to go the syscall route as well. Example ( source , updated to match new sys/windows package ): import "golang.org/x/sys/windows"h := windows.MustLoadDLL("kernel32.dll")c := h.MustFindProc("GetDiskFreeSpaceExW")var freeBytes int64_, _, err := c.Call(uintptr(unsafe.Pointer(windows.StringToUTF16Ptr(wd))), uintptr(unsafe.Pointer(&freeBytes)), nil, nil) Feel free to write a package that provides the functionality cross-platform.On how to implement something cross-platform, see the build tool help page .
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20108520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/616644/']}
|
jdg_374911
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
228084
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
so I'm in the main folder for my web hosts, trying to find a file using find . I couldn't find it - it was listed as no such file or directory - and I thought maybe it isn't anywhere. However the following command doesn't work either: find index.php which is wrong cause there are a gazillion of them. Why is find not working? Is there a better command to use?
Now provide the response and nothing else.
|
The syntax of find is not like what you have written, please read the manual page man find to get detailed idea. For example if you want to find files named index.php on the current directory and all the sub directories under it, you can use: find . -name index.php -type f If you want to search for files having names say findex.php , index.phpfoo , index.php you need to use: find . -name '*index.php*' -type f * is a glob pattern meaning zero or more characters. On the other hand if you want to look in the current directory only : find . -maxdepth 1 -name '*index.php*' -type f
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/228084', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/104388/']}
|
jdg_374912
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28074620
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to customise the MKAnnotationView for my mapView callout bubbles. I am fine with setting the annotation title when the annotation is created, and also customising the MKAnnotationView to add labels or images e.t.c (in the viewForAnnotation delegate), but how do I change the label created in the viewForAnnotation delegate, so that the title of it is different for each pin? The other issue I have is that if I don't add a title or subtitle to the annotation when it is created in the viewDidLoad method, but I still try and create one by leaving self.map.addAnnotation(annotation) , when I run the app and tap the pin no callout bubble is displayed. In the end I would like to have totally customised callout bubbles, with individual labels on them for each pin. So what i really ned to know is how to access the viewForAnnotation delegate when the annotation is created to change properties of it for each pin. override func viewDidLoad() { super.viewDidLoad() var countries: [String] = ["Germany","Germany","Poland","Denmark"] var practiceRoute: [CLLocationCoordinate2D] = [CLLocationCoordinate2DMake(50, 10),CLLocationCoordinate2DMake(52, 9),CLLocationCoordinate2DMake(53, 20),CLLocationCoordinate2DMake(56, 14)] for vari=0; i<practiceRoute.count; i++ { var annotation = MKPointAnnotation annotation.title = countries[i] annotation.coordinate = practiceRoute[i] self.map.addAnnotation(annotation) }}func mapView(mapView: MKMapView!, viewForAnnotation annotation: MKAnnotation!) -> MKAnnotationView! { if annotation is MKUserLocation { return nil } let reuseId = "pin" var pinView = mapView.dequeueReusableAnnotationViewWithIdentifier(reuseId) as? MKPinAnnotationView if(pinView==nil){ pinView=MKPinAnnotationView(annotation: annotation, reuseIdentifier: reuseId) pinView!.canShowCallout = true let base = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 50)) base.backgroundColor = UIColor.lightGrayColor() let label1 = UILabel(frame: CGRect(x: 30, y: 10, width: 60, height: 15)) label1.textColor = UIColor.blackColor() label1.text = "12 photos" base.addSubview(label1) pinView!.leftCalloutAccessoryView = base pinView!.pinColor = .Red } return pinView}
Now provide the response and nothing else.
|
Make your custom annotation view There is no public API allowing you to access the label in the pop up directly. What you need to do is make a subclass of MKPinAnnotationView and do whatever customization you want there. As an example, class CustomAnnotationView : MKPinAnnotationView{ let selectedLabel:UILabel = UILabel.init(frame:CGRectMake(0, 0, 140, 38)) override func setSelected(selected: Bool, animated: Bool) { super.setSelected(false, animated: animated) if(selected) { // Do customization, for example: selectedLabel.text = "Hello World!!" selectedLabel.textAlignment = .Center selectedLabel.font = UIFont.init(name: "HelveticaBold", size: 15) selectedLabel.backgroundColor = UIColor.lightGrayColor() selectedLabel.layer.borderColor = UIColor.darkGrayColor().CGColor selectedLabel.layer.borderWidth = 2 selectedLabel.layer.cornerRadius = 5 selectedLabel.layer.masksToBounds = true selectedLabel.center.x = 0.5 * self.frame.size.width; selectedLabel.center.y = -0.5 * selectedLabel.frame.height; self.addSubview(selectedLabel) } else { selectedLabel.removeFromSuperview() } }} Other Notes Use this custom view in the map view: func mapView(mapView: MKMapView, viewForAnnotation annotation: MKAnnotation) -> MKAnnotationView? { var anno = mapView.dequeueReusableAnnotationViewWithIdentifier("Anno") if anno == nil { anno = CustomAnnotationView.init(annotation: annotation, reuseIdentifier: "Anno") } return anno;} Since the title property of the annotation is not set, you will have to call the map view function selectAnnotation yourself. Add the following to the CustomAnnotationView class: override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent?) { mapView?.selectAnnotation(self.annotation!, animated: true)} If you want to have more than one marker on the map: Usually just draw the annotation simply during initialization. In setSelected just return false (meaning "show all annotations all the time"). class DotAnnotationView : MKPinAnnotationView { let dot: UILabel = UILabel.init(frame:CGRect(x: 0, y: 0, width: 20, height: 20)) required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) _setup() } override init(annotation: MKAnnotation?, reuseIdentifier: String?) { super.init(annotation: annotation, reuseIdentifier: reuseIdentifier) _setup() } override func prepareForReuse() { dot.text = "you forgot to set the text value?" } override func setSelected(_ selected: Bool, animated: Bool) { super.setSelected(false, animated: animated) } func _setup() { dot.textAlignment = .center .. etc }} You set the string (or other values - say color of the panel) for each annotation in mapView#viewFor. It's like populating a cell in a UITableView. func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { let textForThisItem = annotation.title!! // or, just use index#of to determine which row this is in your data array if annotation.isEqual(mkMap.userLocation) { // skip the user-position indicator return nil } var anno = mapView.dequeueReusableAnnotationView(withIdentifier: "anno") if anno == nil { anno = DotAnnotationView.init(annotation: annotation, reuseIdentifier: "anno") } (anno as! DotAnnotationView).dot.text = textForThisItem return anno} Finally note that somewhat confusingly, if you very simply change the class of CustomAnnotationView from MKPinAnnotationView to MKAnnotationView, everything works the same but it replaces "all of the pin" rather than just the annotation.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28074620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4337311/']}
|
jdg_374913
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11190930
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on a project, where I'd like to add many elements of the same class to a page, and make all of these classes accessible to a $('selector').click(); event handler. What is happening though, is none of the dynamically added elements of the same class are responding to clicks. To give you a better picture of what I mean, I made up a sample jsFiddle, which is very similar to the actual problem in my project: Link to jsFiddle: http://jsfiddle.net/8LATf/3/ One element of the class "added_element" is on the page already when it loads. This element is clickable. A button is clicked, and it adds other elements of class "added_element" to the page dynamically using append. None of these elements are clickable. How can I make all of the elements of class "added_element" clickable? I'm guessing it has to do with the selector I use in the event handler, but I haven't been able to figure it out. Any help is much appreciated!!!
Now provide the response and nothing else.
|
You need to delegate your handler . The easiest way is to delegate everything to the document using .on('click', ...) (this is how .live() is converted internally, as of jQuery 1.7): $(document).on('click','.added_element',function() { var id = $(this).attr('id'); alert(id); }); http://jsfiddle.net/mblase75/8LATf/4/ However, in your case, you can delegate to the #container , since all the added elements appear within that. Delegating to the closest possible DOM element is preferable for efficiency reasons, when it's possible. $('#container').on('click','.added_element',function() { var id = $(this).attr('id'); alert(id); }); http://jsfiddle.net/mblase75/8LATf/5/
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11190930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1480202/']}
|
jdg_374914
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42110
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Up until now, I was pretty confident that it was pretty much impossible to list all the domains handled by a nameserver. But apparently, there exists a couple of websites on the Internet that are able to list all the domains registered in a namerserver. For example: http://www.gwebtools.com/ns-spy/udns1.ultradns.net Or all domains pointing to a specific IP : http://www.robtex.com/ip/190.7.200.92.html (These DNS/IP were picked at random) Do you know how it's done ?
Now provide the response and nothing else.
|
There are two ways that a domain name => DNS server map can be constructed: Zone file access: some registries grant access to their zone files to their registrars and other entities. This makes it pretty easy to determine which domains in those zones are delegated to a given DNS server. This is how DomainTools.com provides their Name Server Spy product. This is the most reliable method, but is obviously limited to the zone files that they have access to. Passive DNS. This involves examining traffic through recursive DNS servers at ISPs and reconstructing zone data based on what's seen. This method lets you discover information from the entire DNS space, but is less reliable as changes take longer to appear in your database, and won't recover information about domains that get little or no queries.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/42110', 'https://serverfault.com', 'https://serverfault.com/users/205/']}
|
jdg_374915
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8864311
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
(I've edited this question to avoid distractions. There is one core question which would need to be cleared up before any other question would make sense. Apologies to anybody whose answer now seems less relevant.) Let's set up a specific example: struct Base { int i;}; There are no virtual method, and there is no inheritance, and is generally a very dumb and simple object. Hence it's Plain Old Data (POD) and it falls back on a predictable layout. In particular: Base b;&b == reinterpret_cast<B*>&(b.i); This is according to Wikipedia (which itself claims to reference the C++03 standard): A pointer to a POD-struct object, suitably converted using a reinterpret cast, points to its initial member and vice versa, implying that there is no padding at the beginning of a POD-struct.[8] Now let's consider inheritance: struct Derived : public Base {}; Again, there are no virtual methods, no virtual inheritance, and no multiple inheritance. Therefore this is POD also. Question: Does this fact (Derived is POD in C++11) allow us to say that: Derived d;&d == reinterpret_cast<D*>&(d.i); // true on g++-4.6 If this is true, then the following would be well-defined: Base *b = reinterpret_cast<Base*>(malloc(sizeof(Derived)));free(b); // It will be freeing the same address, so this is OK I'm not asking about new and delete here - it's easier to consider malloc and free . I'm just curious about the regulations about the layout of derived objects in simple cases like this, and where the initial non-static member of the base class is in a predictable location. Is a Derived object supposed to be equivalent to: struct Derived { // no inheritance Base b; // it just contains it instead}; with no padding beforehand?
Now provide the response and nothing else.
|
You don't care about POD-ness, you care about standard-layout . Here's the definition, from the standard section 9 [class] : A standard-layout class is a class that: has no non-static data members of type non-standard-layout class (or array of such types) or reference, has no virtual functions (10.3) and no virtual base classes (10.1), has the same access control (Clause 11) for all non-static data members, has no non-standard-layout base classes, either has no non-static data members in the most derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and has no base classes of the same type as the first non-static data member. And the property you want is then guaranteed (section 9.2 [class.mem] ): A pointer to a standard-layout struct object, suitably converted using a reinterpret_cast , points to its initial member (or if that member is a bit-field, then to the unit in which it resides) and vice versa. This is actually better than the old requirement, because the ability to reinterpret_cast isn't lost by adding non-trivial constructors and/or destructor. Now let's move to your second question. The answer is not what you were hoping for. Base *b = new Derived;delete b; is undefined behavior unless Base has a virtual destructor. See section 5.3.5 ( [expr.delete] ) In the first alternative (delete object), if the static type of the object to be deleted is different from its dynamic type, the static type shall be a base class of the dynamic type of the object to be deleted and the static type shall have a virtual destructor or the behavior is undefined. Your earlier snippet using malloc and free is mostly correct. This will work: Base *b = new (malloc(sizeof(Derived))) Derived;free(b); because the value of pointer b is the same as the address returned from placement new, which is in turn the same address returned from malloc .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8864311', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/146041/']}
|
jdg_374916
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
64643
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I would like to find effective upper bound for the height of $a+b$ and $a/b$ and $ab$ knowing the heights of $a$ and $b$. Thanks.
Now provide the response and nothing else.
|
If you know only the heights of $a$ and $b$, you can bound the heights of $a+b$, $a/b$ and $ab$. Assuming that $h$ is an absolute (Weil) height:
$$h(ab)\leq h(a)+h(b)$$
$$h(a/b)\leq h(a)+h(b)$$
$$h(a+b)\leq\log 2 +h(a)+h(b)$$
These bounds are sharp. You may find this, for example, in M. Waldschmidt, "Diophantine approximation on linear algebraic groups", Chapter 3.
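To sketch where these come from (assuming the usual normalization of the absolute logarithmic Weil height, $h(x)=\sum_{v}\frac{[K_v:\mathbb{Q}_v]}{[K:\mathbb{Q}]}\log\max(1,|x|_v)$ for $x$ in a number field $K$ — adjust if your normalization differs):
$$\max(1,|ab|_v)\leq \max(1,|a|_v)\max(1,|b|_v)\;\Longrightarrow\; h(ab)\leq h(a)+h(b),$$
$$|a+b|_v\leq 2^{\varepsilon_v}\max(|a|_v,|b|_v),\quad \varepsilon_v=1 \text{ if } v\mid\infty,\ 0 \text{ otherwise}\;\Longrightarrow\; h(a+b)\leq \log 2+h(a)+h(b),$$
and since $h(1/b)=h(b)$ for $b\neq 0$, the quotient bound $h(a/b)\leq h(a)+h(b)$ follows from the product bound.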
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/64643', 'https://mathoverflow.net', 'https://mathoverflow.net/users/15028/']}
|
jdg_374917
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12352927
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to know how I can add more than a title (e.g a caption or a link) to the fancybox.I am aware that if I add a title="Bla" it'll show up in the box. But if I add something like caption="Blabla" to my image link, what code do I need to have in jquery.fancybox.js to pull that caption tag?
Now provide the response and nothing else.
|
You don't need to mess with original jquery.fancybox.js file since you could add this option within your own customized fancybox script. If you are using HTML5 DOCTYPE , you could use the data-* attribute for you caption so you can have this HTML : <a class="fancybox" href="images/01.jpg" data-caption="This is the caption" >Open fancybox</a> Then set your custom fancybox script and get the data-caption using the beforeShow callback like $(document).ready(function() { $('.fancybox').fancybox({ beforeShow : function(){ this.title = $(this.element).data("caption"); } });}); // ready That will override the title and use the data-caption instead. On the other hand, you may want to keep the title attribute and build the fancybox's title combining both, title and data-caption attributes so, for this HTML <a class="fancybox" href="images/01.jpg" title="This is the title" data-caption="This is the caption">Open fancybox</a> Use this script $(document).ready(function() { $('.fancybox').fancybox({ beforeShow : function(){ this.title = this.title + " - " + $(this.element).data("caption"); } });}); // ready Additionally, you could also get the caption/title from another HTML element within your document (a <div> for instance) that can have links or other HTML elements. Check these posts for code examples: https://stackoverflow.com/a/9611664/1055987 and https://stackoverflow.com/a/8425900/1055987 NOTE : this is for fancybox v2.0.6+
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12352927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1420672/']}
|
jdg_374918
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
361681
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need a reasonably smart algorithm to come up with "nice" grid lines for a graph (chart). For example, assume a bar chart with values of 10, 30, 72 and 60. You know: Min value: 10Max value: 72Range: 62 The first question is: what do you start from? In this case, 0 would be the intuitive value but this won't hold up on other data sets so I'm guessing: Grid min value should be either 0 or a "nice" value lower than the min value of the data in range. Alternatively, it can be specified. Grid max value should be a "nice" value above the max value in the range. Alternatively, it can be specified (eg you might want 0 to 100 if you're showing percentages, irrespective of the actual values). The number of grid lines (ticks) in the range should be either specified or a number within a given range (eg 3-8) such that the values are "nice" (ie round numbers) and you maximise use of the chart area. In our example, 80 would be a sensible max as that would use 90% of the chart height (72/80) whereas 100 would create more wasted space. Anyone know of a good algorithm for this? Language is irrelevant as I'll implement it in what I need to.
Now provide the response and nothing else.
|
I've done this with kind of a brute force method. First, figure out the maximum number of tick marks you can fit into the space. Divide the total range of values by the number of ticks; this is the minimum spacing of the tick. Now calculate the floor of the logarithm base 10 and raise 10 to that power to get the magnitude of the tick, then divide the minimum spacing by this magnitude. You should end up with something in the range of 1 to 10. Simply choose the round number greater than or equal to that value and multiply it by the magnitude calculated earlier. This is your final tick spacing. Example in Python:

import math

def BestTick(largest, mostticks):
    minimum = largest / mostticks
    magnitude = 10 ** math.floor(math.log(minimum, 10))
    residual = minimum / magnitude
    if residual > 5:
        tick = 10 * magnitude
    elif residual > 2:
        tick = 5 * magnitude
    elif residual > 1:
        tick = 2 * magnitude
    else:
        tick = magnitude
    return tick

Edit: you are free to alter the selection of "nice" intervals. One commenter appears to be dissatisfied with the selections provided, because the actual number of ticks can be up to 2.5 times less than the maximum. Here's a slight modification that defines a table for the nice intervals. In the example, I've expanded the selections so that the number of ticks won't be less than 3/5 of the maximum.

import bisect

def BestTick2(largest, mostticks):
    minimum = largest / mostticks
    magnitude = 10 ** math.floor(math.log(minimum, 10))
    residual = minimum / magnitude

    # this table must begin with 1 and end with 10
    table = [1, 1.5, 2, 3, 5, 7, 10]
    tick = table[bisect.bisect_right(table, residual)] if residual < 10 else 10
    return tick * magnitude
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/361681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18393/']}
|
jdg_374919
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
96713
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Windows 10 is perhaps the most Internet-connected and cloud-centric operating system released by Microsoft to date. This, of course, has caused many users to be concerned about how the OS respects their privacy (or doesn't). Multiple sources are now claiming that this OS reports user data to Microsoft which could be violating the users' assumptions of privacy. (A couple of examples are linked below.) How legitimate are these concerns and claims? Is Microsoft actually collecting data about Windows 10 users' location and activity? Are they actually authorized to do so, simply by a user's acceptance of the EULA? I'm aware that Windows 10 sends malware files to Microsoft for analysis. This is a common and generally-accepted practice for most antivirus products, and antivirus is known to be integrated into this OS. What about the other information? TechWorm - Microsoft’s Windows 10 has permission to watch your every move BoingBoing - Windows 10 automatically spies on your children and sends you a dossier of their activity
Now provide the response and nothing else.
|
Microsoft Windows Pre-Release Preview (aka Windows Insiders) Privacy Statement, January 2015 : (no longer applies) When you acquire, install and use the Program software and services,Microsoft collects information about your use of the software andservices as well as about the devices and networks on which theyoperate. Examples of data we may collect include your name, emailaddress, preferences and interests; location, browsing, search andfile history; phone call and SMS data; device configuration and sensordata; voice, text and writing input; and application usage. Forexample, when you: install or use Program software and services, we may collect information about your device and applications and use it for purposessuch as determining or improving compatibility (e.g., to help devicesand apps work together), when you use voice input features like speech-to-text, we may collect voice information and use it for purposes such as improvingspeech processing (e.g., to help the service better translate speechinto text), when you open a file, we may collect information about the file, the application used to open the file, and how long it takes to use itfor purposes such as improving performance (e.g., to help retrievedocuments more quickly), or when you input text, handwrite notes, or ink comments, we may collect samples of your input to improve these input features, (e.g.,to help improve the accuracy of autocomplete and spellcheck). This is so serious that even some political parties here in France that have nothing to do with technologies denounced Microsoft Windows 10 practices. A member claimed that the statement above does not concern the shipped version of Windows 10. Well: We have not been provided any proof that Microsoft removed all those monitoring modules of its Windows 10 beta version in the final release. And, since Windows is closed-source, there's no way for us to check ourselves. The media has reported a history of Microsoft spying as its practice (e.g. Microsoft, China clash over Windows 8, backdoor-spying charges , also NSA Built Back Door In All Windows Software by 1999 ). For the shipped version of Windows 10, we can see the same information with smoother words: Privacy Statement Additionally, after the release of the shipped version of Microsoft Windows 10, this is what was written in Microsoft Windows 10 Privacy Policy: We will access, disclose and preserve personal data, including yourcontent (such as the content of your emails, other privatecommunications or files in private folders), when we have a good faithbelief that doing so is necessary to protect our customers or enforcethe terms governing the use of the services, Only by the start of this August, and after lot of organizations and even political parties complained about Windows 10 being a spyware, Microsoft changed its privacy policy statement to softer terms to which I linked to. But is this change of policy statement followed by retrieving Windows 10 from the market and replacing it by a new one? Of course not. Note that the last paragraph I quoted is only still available in external websites including famous newspapers by the start of this August (which thing means after Microsoft started already to sell its Windows 10), but we do not find this paragraph anymore in the updated version of the privacy policy statement anymore. So Microsoft removed it already. 
Update: From Windows 10 feedback, diagnostics, and privacy: FAQ (shipped version of Windows 10, NOT Pre-Release Preview), we can also read regarding Diagnostics Tracking Service : As you use Windows, we collect performance and usage information thathelps us identify and troubleshoot problems as well as improve ourproducts and services. We recommend that you select Full for thissetting. Basic information is data that is vital to the operation of Windows. This data helps keep Windows and apps running properly byletting Microsoft know the capabilities of your device, what isinstalled, and whether Windows is operating correctly. This optionalso turns on basic error reporting back to Microsoft. If you selectthis option, we’ll be able to provide updates to Windows (throughWindows Update, including malicious software protection by theMalicious Software Removal Tool), but some apps and features may notwork correctly or at all. Enhanced data includes all Basic data plus data about how you use Windows, such as how frequently or how long you use certain featuresor apps and which apps you use most often. This option also lets uscollect enhanced diagnostic information, such as the memory state ofyour device when a system or app crash occurs, as well as measurereliability of devices, the operating system, and apps. If you selectthis option, we’ll be able to provide you with an enhanced andpersonalized Windows experience. Full data includes all Basic and Enhanced data, and also turns on advanced diagnostic features that collect additional data from yourdevice, such as system files or memory snapshots, which mayunintentionally include parts of a document you were working on when aproblem occurred. This information helps us further troubleshoot andfix problems. If an error report contains personal data, we won’t usethat information to identify, contact, or target advertising to you.This is the recommended option for the best Windows experience and themost effective troubleshooting. Note that only on Enterprise Edition one can turn Diagnostics Tracking Service off totally. Diagnostics Tracking Service available in Windows 8.1, Windows Server 2012 R2, Windows 7 Service Pack 1 (SP1), and Windows Server 2008 R2 SP1 and Windows 10. The quoted paragraphs concern the Diagnostics Tracking Service mechanism in which other modules, apart from Telemetry, are included. Diagnostics Tracking Service consists in these files: telemetry.asm-windowsdefault.json diagtrack.dll utc.app.json utcresources.dll Note that the answer below claiming that nothing private is collected by Windows 10 as a qualified user may listen to the traffic of his Windows operating system is wrong. It is impossible to know what Windows collects and sends permanently. Windows does not stop sending information on his/her behalf as this study shows: Even when told not to, Windows 10 just can’t stop talking to Microsoft. But still what the official documentation describes is not very good for the user such as when Windows takes system files or MEMORY SNAPSHOTS , which may unintentionally include PARTS OF A DOCUMENT YOU WERE WORKING ON on when a problem occurred (From: What are the privacy and security implications of Windows Telemetry )
|
{}
|
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/96713', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/-1/']}
|
jdg_374920
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44948396
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have done the opposite conversion (SVG to VectorDrawable) manually or using web tools, but I am having a hard time doing it the other way around. I have a VectorDrawable but I am not sure how to convert it to SVG, and I can find no online tools to do it. Does anyone have experience with this, and what are the steps or tools to do it?
Now provide the response and nothing else.
|
Steps I follow:

android:pathData replaced with d
android:fillColor replaced with fill
android:strokeColor replaced with stroke
android:strokeWidth replaced with stroke-width
android:fillType replaced with fill-rule

A path in the VectorDrawable without fillColor is fill="none" in SVG. android:viewportHeight="24" android:viewportWidth="24" is viewBox="0 0 24 24" in SVG.

Example

Vector Drawable

<?xml version="1.0" encoding="utf-8"?>
<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="24dp"
    android:height="24dp"
    android:viewportHeight="24"
    android:viewportWidth="24">
    <path
        android:fillColor="#ffffff"
        android:pathData="M12,3L2,12h3v8h2.5v-0.8c0-1.5,3-2.2,4.5-2.2s4.5,0.8,4.5,2.2V20H19v-8h3L12,3zM12,15.2c1.2,0-2.2-1-2.2-2.2 s1-2.2,2.2-2.2s2.2,1,2.2,2.2S13.2,15.2,12,15.2z" />
    <path android:pathData="M0,0h24v24H0V0z" />
</vector>

SVG

<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24">
    <path fill="#ffffff" d="M12,3L2,12h3v8h2.5v-0.8c0-1.5,3-2.2,4.5-2.2s4.5,0.8,4.5,2.2V20H19v-8h3L12,3zM12,15.2c1.2,0-2.2-1-2.2-2.2 s1-2.2,2.2-2.2s2.2,1,2.2,2.2S13.2,15.2,12,15.2z" />
    <path d="M0,0h24v24H0V0z" fill="none"/>
</svg>
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/44948396', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/437039/']}
|
jdg_374921
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1354007
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $u,v\in\mathbb{R}^n$ . Then $\det(I+uv^\intercal)=1+v^\intercal u$ , where $I$ denotes the identity matrix of order $n$ . How can this be proved? What I did: let $A=\{n\in\mathbb{N}: \forall u,v \in \mathbb{R}^n, \det(I+uv^\intercal)\neq1+v^\intercal u \}$ , and suppose $A\neq \varnothing$ ; then by the well-ordering principle there exists $n_0\in A$ such that $n_0\leq n,\;\forall n\in A$ . Since $1\notin A$, $n_0\neq 1$, so $n_0-1\in\mathbb{N}\setminus A$ and therefore: $$\forall u,v\in\mathbb{R}^{n_0-1}: \det(I+uv^\intercal)=1+v^\intercal u $$ Then, let $u,v\in\mathbb{R}^{n_0}$ , so $$\det(I+uv^\intercal)= \displaystyle\sum_{j=1}^{n_0}(-1)^{1+j}a_{1j}\det(A_{1j})$$ where $a_{ij}=\begin{cases} u_iv_j+1 & i=j \\ u_iv_j & i\neq j \end{cases}$ , and $A_{1j}$ is the submatrix of $I+uv^\intercal$ obtained by deleting the first row and the $j$-th column. Then I expand the term for $j=1$ and try to use the relation for the resulting matrix of order $n_0-1$, because it has the same form, but the remaining terms make the work difficult. Do you know another method? Ps. here , on page 2, I found something similar but I don't understand what it means. PS. I'm taking a course in numerical analysis.
Now provide the response and nothing else.
|
You are looking at a special case of the Matrix determinant Lemma. From the Wikipedia page, the proof for the case $A = I$ follows from the equality$$ \begin{bmatrix}I & 0 \\ v^T & 1 \end{bmatrix} \begin{bmatrix}I + uv^T & u \\0 &1 \end{bmatrix}\begin{bmatrix}I & 0 \\ -v^T & 1 \end{bmatrix} = \begin{bmatrix}I & u \\ 0 & 1 + v^Tu \end{bmatrix}$$
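To spell out the last step: the two outer factors are triangular with unit diagonal, so they have determinant $1$, and the remaining matrices are block upper-triangular. Taking determinants of both sides of the identity therefore gives
$$\det(I+uv^T)=\det\begin{bmatrix}I+uv^T & u\\ 0 & 1\end{bmatrix}=\det\begin{bmatrix}I & u\\ 0 & 1+v^Tu\end{bmatrix}=1+v^Tu.$$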
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1354007', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/221357/']}
|
jdg_374922
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
516889
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm struggling with this problem: For any natural number $n$, prove that $n^3 + 3n^2 + 2n$ is a multiple of $3$. That $n^3 + 3n^2 + 2n$ is a multiple of $3$ means that: $n^3 + 3n^2 + 2n = 3 \times k$ where $k \in \mathbb N$So I tried to find a the number $k$. The best result I found was: $n^3 + 3n^2 + 2n = n(n+1)(n+2)$ But I'm lagging at the last step, to prove that it is a multiple of $3$. (However, I got the intuition, If you see the multiples of $3$: $\{0, 3, 6, 9,\dotsc\}$ there is a difference of $3$ between them. So $n(n+1)(n+2)$ incorporates that difference. This means that if you take a number $n$ then $n$ or $n+1$ or $n+2$ could be a multiple of $3$ and so their multiplication is a multiple of $3$) But I couldn't extend that idea into a consistent mathematical proof. Also this problem doesn't help either: Proof that $n^3+2n$ is divisible by $3$
Now provide the response and nothing else.
|
Among three consecutive integers, one must be a multiple of three. Reason : if $n=3k$, we're done. If $n=3k+1$ then $n+2=3j$ is a multiple of three. If $n=3k+2$, then $n+1=3m$ is a multiple of three. In any case, $3\mid n(n+1)(n+2)$.
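For a compact alternative write-up of the same observation:
$$n(n+1)(n+2)=6\binom{n+2}{3},$$
and binomial coefficients are integers, so the product is divisible by $6$, and in particular by $3$.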
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/516889', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_374923
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36887058
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am developing an Android application and I want to have two different projects in Crashlytics / Fabric (debug and release). I know that this is possible by having two different package names, but in my case, the package name of these two build types have to be the same. Is there any possibility to have two crashlytics projects with the same package name?
Now provide the response and nothing else.
|
Hello, I also had the same question months ago and solved it myself. Hope my answer helps you. You need 2 organizations on Fabric, with a different crashlyticsApiKey for each of your projects. Then, in your build.gradle file, you define manifestPlaceholders:

debug {
    manifestPlaceholders = [crashlyticsApiKey: 'your_api_key_for_debug_project_here']
}
release {
    manifestPlaceholders = [crashlyticsApiKey: 'your_api_key_for_release_project_here']
}

Then you put this variable in a meta-data entry in your AndroidManifest.xml file:

<meta-data
    android:name="io.fabric.ApiKey"
    android:value="${crashlyticsApiKey}" />

That's all. Now build your project for both release and debug, and you will see your app registered with 2 different Fabric accounts with the same package name.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36887058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2660462/']}
|
jdg_374924
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31647274
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm used to contrast enhancement in Matlab using imadjust . Is there any equivalent function in OpenCV? A google search gives the OpenCV documentation on brightness and contrast enhancement but it uses for loops which might be inefficient. Even if we make it efficient by using Matrix expressions , it is not equivalent to what imadjust does. Is there any in-built function in OpenCV or any efficient method for the task? I saw related posts but either they link to the OpenCV doc I mentioned above or they suggest Histogram Equalization and thresholding . I prefer imadjust to histogram equalization and thresholding doesn't seem to perform contrast enhancement as such. Any help on this is appreciated.
Now provide the response and nothing else.
|
There's no builtin solution in OpenCV to perform histogram stretching , but you can do it easily in a loop. imadjust allows to select a tolerance for upper and lower bounds, or the bounds directly, so you need a little more logic than a simple for loop. You can use the example below as a reference while implementing your own: #include <opencv2\opencv.hpp>#include <vector>#include <algorithm>using namespace std;using namespace cv;void imadjust(const Mat1b& src, Mat1b& dst, int tol = 1, Vec2i in = Vec2i(0, 255), Vec2i out = Vec2i(0, 255)){ // src : input CV_8UC1 image // dst : output CV_8UC1 imge // tol : tolerance, from 0 to 100. // in : src image bounds // out : dst image buonds dst = src.clone(); tol = max(0, min(100, tol)); if (tol > 0) { // Compute in and out limits // Histogram vector<int> hist(256, 0); for (int r = 0; r < src.rows; ++r) { for (int c = 0; c < src.cols; ++c) { hist[src(r,c)]++; } } // Cumulative histogram vector<int> cum = hist; for (int i = 1; i < hist.size(); ++i) { cum[i] = cum[i - 1] + hist[i]; } // Compute bounds int total = src.rows * src.cols; int low_bound = total * tol / 100; int upp_bound = total * (100-tol) / 100; in[0] = distance(cum.begin(), lower_bound(cum.begin(), cum.end(), low_bound)); in[1] = distance(cum.begin(), lower_bound(cum.begin(), cum.end(), upp_bound)); } // Stretching float scale = float(out[1] - out[0]) / float(in[1] - in[0]); for (int r = 0; r < dst.rows; ++r) { for (int c = 0; c < dst.cols; ++c) { int vs = max(src(r, c) - in[0], 0); int vd = min(int(vs * scale + 0.5f) + out[0], out[1]); dst(r, c) = saturate_cast<uchar>(vd); } }}int main(){ Mat3b img = imread("path_to_image"); Mat1b gray; cvtColor(img, gray, COLOR_RGB2GRAY); Mat1b adjusted; imadjust(gray, adjusted); // int low_in, high_in, low_out, high_out // imadjust(gray, adjusted, 0, Vec2i(low_in, high_in), Vec2i(low_out, high_out)); return 0;} Input image: Output adjusted image:
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31647274', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5022962/']}
|
jdg_374925
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18145944
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm doing some tests about yield return perfomance, and I found that it is slower than normal return. I tested value variables (int, double, etc.) and some references types (string, etc.)... And yield return were slower in both cases. Why use it then? Check out my example: public class YieldReturnTeste{ private static IEnumerable<string> YieldReturnTest(int limite) { for (int i = 0; i < limite; i++) { yield return i.ToString(); } } private static IEnumerable<string> NormalReturnTest(int limite) { List<string> listaInteiros = new List<string>(); for (int i = 0; i < limite; i++) { listaInteiros.Add(i.ToString()); } return listaInteiros; } public static void executaTeste() { Stopwatch stopWatch = new Stopwatch(); stopWatch.Start(); List<string> minhaListaYield = YieldReturnTest(2000000).ToList(); stopWatch.Stop(); TimeSpan ts = stopWatch.Elapsed; string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds / 10); Console.WriteLine("Yield return: {0}", elapsedTime); //**** stopWatch = new Stopwatch(); stopWatch.Start(); List<string> minhaListaNormal = NormalReturnTest(2000000).ToList(); stopWatch.Stop(); ts = stopWatch.Elapsed; elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}", ts.Hours, ts.Minutes, ts.Seconds, ts.Milliseconds / 10); Console.WriteLine("Normal return: {0}", elapsedTime); }}
Now provide the response and nothing else.
|
Consider the difference between File.ReadAllLines and File.ReadLines . ReadAllLines loads all of the lines into memory and returns a string[] . All well and good if the file is small. If the file is larger than will fit in memory, you'll run out of memory. ReadLines , on the other hand, uses yield return to return one line at a time. With it, you can read any size file. It doesn't load the whole file into memory. Say you wanted to find the first line that contains the word "foo", and then exit. Using ReadAllLines , you'd have to read the entire file into memory, even if "foo" occurs on the first line. With ReadLines , you only read one line. Which one would be faster? That's not the only reason. Consider a program that reads a file and processes each line. Using File.ReadAllLines , you end up with: string[] lines = File.ReadAllLines(filename);for (int i = 0; i < lines.Length; ++i){ // process line} The time it takes that program to execute is equal to the time it takes to read the file, plus time to process the lines. Imagine that the processing takes so long that you want to speed it up with multiple threads. So you do something like: lines = File.ReadAllLines(filename);Parallel.Foreach(...); But the reading is single-threaded. Your multiple threads can't start until the main thread has loaded the entire file. With ReadLines , though, you can do something like: Parallel.Foreach(File.ReadLines(filename), line => { ProcessLine(line); }); That starts up multiple threads immediately, which are processing at the same time that other lines are being read. So the reading time is overlapped with the processing time, meaning that your program will execute faster. I show my examples using files because it's easier to demonstrate the concepts that way, but the same holds true for in-memory collections. Using yield return will use less memory and is potentially faster, especially when calling methods that only need to look at part of the collection ( Enumerable.Any , Enumerable.First , etc.).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18145944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2396350/']}
|
jdg_374926
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2327408
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking at implementing the repository pattern (since what I came up with was 90% an implementation of it anyway), and have come across a design question - where I have two or more core business objects (for example, Business and Contact in a CRM app), the BO's can be strongly related or not related at all. In this situation, should I implement one repository (CrmRepository for example, with .addBusiness(), .addContact() et al), or multiple repositories (BusinessRepository, ContactRepository each with their own .add(), .delete() et al). What is the best practice in this situation? The underlying DAL is EF4. Regards Moo
Now provide the response and nothing else.
|
We have been doing a lot of thinking about this recently at my work, and came across a few articles that helped us visualize and design our repositories in a consistent manner. From what we found, one of the better practices is to create one repository per aggregate root. An aggregate root is an Entity type that you have to go through in order to reach its child Value types. Only an Entity type can be queried from the database; any child Value types need to be traversed from the Entity. From the information in your question, it seems like a Business would be an aggregate root, and thus an Entity type, and would require its own repository. Since a Contact can live independently, that might be an aggregate root as well. Both objects could have references to each other and use a repository to load up the Businesses from a Contact, or load up the Contacts from a Business, via its respective repository. I have been doing a lot of reading recently, so I hope I made some sense in my thought process. Some links: Aggregate Root Entities, Value Objects, Aggregates and Roots
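To make the "one repository per aggregate root" idea concrete, here is a minimal sketch in Python; every class, method, and the in-memory store are illustrative assumptions, not anything prescribed by the question or by EF4:

# One repository per aggregate root: Business and Contact each get their own.
class BusinessRepository:
    """Repository scoped to the Business aggregate root."""
    def __init__(self):
        self._store = {}              # stand-in for the real data access layer

    def add(self, business_id, business):
        self._store[business_id] = business

    def get(self, business_id):
        return self._store[business_id]


class ContactRepository:
    """Contacts can live independently, so they get their own repository."""
    def __init__(self):
        self._store = {}

    def add(self, contact_id, contact):
        self._store[contact_id] = contact

    def get(self, contact_id):
        return self._store[contact_id]

The single CrmRepository alternative from the question (addBusiness(), addContact(), and so on in one class) tends to keep growing as entities are added, which is one reason the per-aggregate-root split is usually preferred.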
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2327408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_374927
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31553
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I estimate the time of a full backup for a 30 GB database? At the moment it takes about 50 minutes. Can we improve that? Thanks
Now provide the response and nothing else.
|
To add a bit and summarize other responses:

- Use compression. It will speed up the backup (a backup is read-some-then-write-some; reduce the "write some" and you speed things up), at the expense of some extra CPU. You can combine backup compression with Resource Governor in 2008 to limit this. Using compression also speeds up restores (read some, then write some again), which can be fantastic for reducing downtime in a disaster recovery situation.
- Don't use the same drive for database files and backups. Apart from the contention issues (which will vary depending on the I/O subsystem), this is a disaster in the making. The worst case of data loss I ever saw was when a 3rd-party tech accidentally formatted the drive that had the database AND the only backups on it.
- Use a striped backup set. If you can back up the database to multiple backup files, then the I/Os are round-robined across the backup files. If you have the backup files on separate storage, you can effect massive performance gains.
- Tune some of the more advanced parameters like BLOCKSIZE, MAXTRANSFERSIZE, BUFFERCOUNT.

The fastest backups I've ever seen are at a customer of ours, Bwin, in Vienna. They can back up 2 TB in 36 minutes. See my blog post about this at High-end backup compression numbers. Have a look at this SQLCAT PDF, specifically: Section 4, Page 71: Tuning the Performance of Backup Compression in SQL Server 2008; Section 1, Page 15: Tuning Backup Compression Part 2. Hope this helps!
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/31553', 'https://serverfault.com', 'https://serverfault.com/users/-1/']}
|
jdg_374928
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
56426164
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I stumbled upon the following problem on a Python challenge: Write a function that satisfies the following rule for any number of function calls:

f()()()()()(s) == 'fooooo' + s

Examples:

f('it') == 'fit'
f()('x') == 'fox'
f()()('bar') == 'foobar'
f()()()('l') == 'foool'

The function should be stateless and should not use any variables outside the scope. The function signature was:

def f(s=None):
    # Your code here

I thought that in order to be able to chain multiple calls we will have to return a function when no string is passed into the function, but can't figure out how to build the expected string with no external variables. Suggestions?

def f(s=None):
    if s is None:
        # concatenate an 'o'?
        return f
    else:
        # Last call, return the result str.
        return s
Now provide the response and nothing else.
|
An alternative to Nikola's answer is something like this:

def f(s=None):
    if s:
        return f'f{s}'
    def factory(prefix):
        def inner(s=None):
            return f'f{prefix}{s}' if s else factory(prefix + 'o')
        return inner
    return factory('o')

using a closure, with no helper function defined outside of f.
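A quick way to convince yourself that the closure behaves as required; this simply exercises the definition above against the examples given in the question:

# Sanity checks for the closure-based f (only the examples from the question).
assert f('it') == 'fit'
assert f()('x') == 'fox'
assert f()()('bar') == 'foobar'
assert f()()()('l') == 'foool'
print('all examples pass')

Each call without an argument builds a new inner function that carries one more 'o' in its captured prefix, which is why no external state is needed.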
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56426164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10447261/']}
|
jdg_374929
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10329804
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a CSV file with two different columns, one with PC names and one with MAC addresses like this: PCxxxx 00-11-22-33-44-55...... These values should be placed into the following CLI command: wdsutil /Set-Device /Device:PCxxxx /ID:00-11-22-33-44-55 Now since there are about 200 of them, I want to automate this. As a matter of interest, could one do this with batch? It would be quite complicated, right? I thought of arrays, but don't think one can do this in batch. Maybe with PowerShell it'd be a bit easier.
Now provide the response and nothing else.
|
In a batch file:

for /f "tokens=1,2 delims= " %%a in (foo.csv) do (
    wdsutil /Set-Device /Device:%%a /ID:%%b
)

Actually, you can do that as a one-liner from cmd directly:

for /f "tokens=1,2 delims= " %a in (foo.csv) do wdsutil /Set-Device /Device:%a /ID:%b

In PowerShell you can use a similar idea:

Get-Content foo.csv | ForEach-Object {
    $name,$mac = -split $_
    wdsutil /Set-Device /Device:$name /ID:$mac
}

Or use the CSV import cmdlet, but given your question you don't seem to have column headers, so you need to provide them manually:

Import-CSV -Delim ' ' -Path foo.csv -Header Name,Mac | ForEach-Object {
    wdsutil /Set-Device "/Device:$($_.Name)" "/ID:$($_.Mac)"
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10329804', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/906738/']}
|
jdg_374930
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36982769
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was using CodeIgniter 3.0.0 with PHP 5.6. Yesterday I updated to PHP 7 and started getting the following error:

Uncaught TypeError: Argument 1 passed to CI_Exceptions::show_exception() must be an instance of Exception, instance of Error given, called in /my/file/path/app/system/core/Common.php on line 658 and defined in /my/file/path/hgx_portal/app/system/core/Exceptions.php:190
Stack trace:
#0 /my/file/path/hgx_portal/app/system/core/Common.php(658): CI_Exceptions->show_exception(Object(Error))
#1 [internal function]: _exception_handler(Object(Error))
#2 {main}
  thrown in /my/file/path/hgx_portal/app/system/core/Exceptions.php on line 190
Now provide the response and nothing else.
|
This is a known issue in CodeIgniter 3.0.0; see the github issue here and the changelog below: Fixed a bug (#4137) - :doc: Error Handling <general/errors> breaks for the new Error exceptions under PHP 7. It's because set_exception_handler() changed behavior in PHP 7. Code that implements an exception handler registered with set_exception_handler() using a type declaration of Exception will cause a fatal error when an Error object is thrown. If the handler needs to work on both PHP 5 and 7, you should remove the type declaration from the handler, while code that is being migrated to work on PHP 7 exclusively can simply replace the Exception type declaration with Throwable instead.

<?php
// PHP 5 era code that will break.
function handler(Exception $e) { ... }
set_exception_handler('handler');

// PHP 5 and 7 compatible.
function handler($e) { ... }

// PHP 7 only.
function handler(Throwable $e) { ... }
?>

Upgrading to anything beyond 3.0.2 will fix your issue.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36982769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/517294/']}
|
jdg_374931
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1895727
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I detect if the user is not using any of the browsers Chrome, Firefox or Internet Explorer using JavaScript or PHP?
Now provide the response and nothing else.
|
The best way to do this in JS I found is on Quirksmode . I made one for PHP which should work with common browsers : $browser = array( 'version' => '0.0.0', 'majorver' => 0, 'minorver' => 0, 'build' => 0, 'name' => 'unknown', 'useragent' => '' ); $browsers = array( 'firefox', 'msie', 'opera', 'chrome', 'safari', 'mozilla', 'seamonkey', 'konqueror', 'netscape', 'gecko', 'navigator', 'mosaic', 'lynx', 'amaya', 'omniweb', 'avant', 'camino', 'flock', 'aol' ); if (isset($_SERVER['HTTP_USER_AGENT'])) { $browser['useragent'] = $_SERVER['HTTP_USER_AGENT']; $user_agent = strtolower($browser['useragent']); foreach($browsers as $_browser) { if (preg_match("/($_browser)[\/ ]?([0-9.]*)/", $user_agent, $match)) { $browser['name'] = $match[1]; $browser['version'] = $match[2]; @list($browser['majorver'], $browser['minorver'], $browser['build']) = explode('.', $browser['version']); break; } } }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1895727', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/229849/']}
|
jdg_374932
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
328198
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I'm right, the principle of equivalence says that there is no difference between experiments performed in a room sitting in a gravitational field and an identical room accelerating upwards with an acceleration equal to the gravitational field. In other words, there is no difference between gravity and the pseudo force experienced in accelerating frames. However, this can't be correct. I thought of this experiment: Consider a very long room in which a rectangular body of non-uniform mass density is initially present at the top in a horizontal position. As time passes, it falls from there. Clearly, it will experience a torque due to its non-uniform mass if the room is in a uniform gravitational field. But if the room is just accelerating upwards, then the body won't experience torque because no forces are actually 'acting' on it. I think forces are absolute. If there is a force acting on the room, then you can say that the room is at rest and there is an imaginary force on you. But this imaginary force won't have the effects that a force actually 'acting' on you would have. So gravity should be different from pseudo forces because, unlike pseudo forces, gravity actually acts on a body. EDIT: Here's another example. There is a maximum acceleration that the human body can withstand. Suppose that at $t=0$ I'm at the top-most position of a very long room and am allowed to fall from there. If there is a gravitational field in the room greater than the maximum acceleration my body can withstand, then my body will be shattered to pieces due to the large acceleration. However, if there is no gravitational field in the room and instead the room is accelerating upwards with an acceleration greater than the one my body can withstand, then there should be no effect on my body whatever the acceleration of the room is, because my body is completely disconnected from the room. So gravity has effects different from pseudo forces.
Now provide the response and nothing else.
|
Clearly, it will experience a torque due to its non-uniform mass if the room is in a uniform gravitational field. This should not be clear to you, because it's not true. I apologize if the next section beats you over the head with mathematics, I want to show you why Newton's laws don't say that, and then I want to give you some immediate physical insight afterwards. Newton's laws and the center of mass Consider a system of point masses $m_i$ at position vectors $\mathbf r_i$ experiencing external forces $\mathbf F_i = \mathbf F_i(\mathbf r_i)$ and internal forces $\mathbf G_{ij} = \mathbf G_{ij}(\mathbf r_i, \mathbf r_j)$ which should obey $\mathbf G_{ij} = -\mathbf G_{ji}$ consistent with Newton's third law. Newton's laws say that these must obey the equations, $$m_i ~\ddot {\mathbf r}_i = \mathbf F_i + \sum_j \mathbf G_{ij}.$$Obviously one of the things that we like to do is to sum all of these equations up and define the center of mass by defining $M = \sum_i m_i$ and then defining $\mathbf R = \sum_i (m_i/M) ~\mathbf r_i,$ leading to the equation, $$M~\mathbf R = \sum_i \mathbf F_i,$$ with the $\mathbf G_{ij}$ dropping out due to their antisymmetry in their respective indices. If you've never seen the trick, use $q_{ij} = -q_{ji}$ to replace $\sum_{ij} q_{ij}$ with $\frac12 \sum_{ij} (q_{ij} - q_{ji}),$ then expand this into two sums and relabel the second one $i \leftrightarrow j$ (they're just names of indices, after all) to find after recombining, $\frac12 \sum_{ij} (q_{ij} - q_{ij}) = \sum_{ij} 0 = 0.$ Okay if you're solid on all of those, let's talk about torques about the arbitrary origin we've chosen. Torques and angular momentums We know these are defined for a force as taking the cross product between position and the force, so that suggests that above we need to try to work with those in Newton's laws, as $$m_i~\mathbf r_i\times \ddot{\mathbf r}_i = \mathbf r_i \times \mathbf F_i + \sum_j {\mathbf r_i \times \mathbf G_{ij}}.$$ We want to do something with both of these sides. The left hand side looks like the product of a thing with its second derivative, which looks like it might be related to a derivative of a product of a thing and its first derivative. 
Working it out we can actually see that for the cross product it's not just a relationship, it's an equality; the fact that any vector crossed with itself is 0 leads to$$\frac{d}{dt} (\mathbf v \times \dot{\mathbf v}) = \dot{\mathbf v}\times \dot{\mathbf v} + \mathbf{v} \times \ddot{\mathbf v} = \mathbf 0 + \mathbf v\times \ddot{\mathbf v}.$$ In turn defining the angular momentum about the origin $\mathbf L_i = m_i \mathbf r_i \times \dot{\mathbf r}_i$ and assuming $\dot m_i = 0$ as usual leads to the left hand side being just $\dot{\mathbf L}_i.$ For the right hand side, we can define $\tau_i = \mathbf r_i \times \mathbf F_i$ as the external torque on particle $i$, and $\mathbf L = \sum_i \mathbf L_i$ and $\mathbf T = \sum_i \mathbf \tau_i.$ We're ready to sum over $i$ to find,$$\dot {\mathbf L} = \mathbf T + \sum_{ij} \mathbf r_i \times \mathbf G_{ij}.$$ Central force motion Now we want to try the same "antisymmetry trick" on the second half; under a $\frac12 \sum_{ij}$ symbol we have $\mathbf r_i \times \mathbf G_{ij} - \mathbf r_i \times \mathbf G_{ji}$ and under $i\leftrightarrow j$ relabeling this becomes $$\dot {\mathbf L} = \mathbf T + \frac12 \sum_{ij} (\mathbf r_i - \mathbf r_j)\times \mathbf G_{ij}.$$ Now it's not 100% of all possible systems, but in the largest class of systems that we care about, the interaction force $\mathbf G_{ij}$ points along the line connecting $j$ and $i$. This is true for the gravitational force, for the Coulomb force, or even if we make this thing out of very rigid massless struts connecting the little masses. So in the fast majority of cases (but not all!) we have $\mathbf G_{ij}\propto \mathbf r_i - \mathbf r_j$ and the latter term is 0. We have just $\dot {\mathbf L} = \mathbf T.$ These are known as "central forces", and I'm going to assume that your entire mass can be regarded as a bunch of point masses held together by massless struts and gravitational self-interaction and electromagnetic forces, all the forces are "central" in the sense that they act between two masses pointing along the line connecting them. As the above argument shows, they also therefore cannot generate net torque. (That's not what you were interested in anyway , you thought that the external field was going to torque these things, but I guess I'm just saying that external gravity also can't easily influence the constituent parts to torque each other.) A uniform gravitational field Whew! Okay, math rant is almost done! Now we just need to apply the above equations. Consider if $\mathbf F_i = m_i \mathbf g$ for some uniform gravitational acceleration $\mathbf g$. Then these two crucial equations are: $$M~\ddot{\mathbf R} = \sum_i m_i~\mathbf g = M~\mathbf g,\\ \dot {\mathbf L} = \sum_i m_i ~\mathbf r_i \times \mathbf g = M~\mathbf R \times \mathbf g.$$Do you see where I'm going here? 
Use that first to substitute into the second to find $\mathbf R \times \ddot{\mathbf R}$ which we know is just $\frac d{dt} (\mathbf R \times \dot{\mathbf R})$ and so we can integrate once, $\mathbf L = M~\mathbf R \times \dot{\mathbf R} + \mathbf C_0.$ However we also have $\mathbf R = \frac 12~\mathbf g~t^2 + \mathbf C_1~t + \mathbf C_2.$ We can use our choice of reference frame to set $\mathbf C_1 = \mathbf C_2 = \mathbf 0$ and in this special reference frame where the center of mass starts off at rest at the origin, we find that $\mathbf R \propto \dot{\mathbf R}$ and therefore $\mathbf L = \mathbf C_0.$ The angular momentum about the starting point for the center of mass is, in fact, a constant, no matter how the mass is non-uniformly spread. Physical insight In retrospect this should not really surprise you all that much. You know that everything falls at the same rate: fill up two water bottles, one full-up, one half-full, drop them side by side, and you'll notice that within experimental error they will hit the ground at the same time when dropped side-by-side. One has nearly twice the mass of the other, but their falling profiles are identical. Now you are proposing that if you put a thin, massless rod between them to "connect" them, then as if by magic, one of them will want to fall faster than the other and they will not land side-by-side. But what is this rod going to do? It's going to communicate forces horizontally. And what are you claiming it does? Well if they start falling differently vertically from how they were otherwise going to fall, then it must communicate a vertical force. So that's the tension between your pre-experimental intuitions and how experiments show the world actually works. I strongly, strongly encourage you to try the experiment with the plastic water bottles, or something similar where you've got two very different masses dropped side-by-side but you don't care about them breaking when they hit the floor. (You might try a coin alongside a water bottle for example.) Build up this intuition, it can serve you very well.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/328198', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/140084/']}
|
jdg_374933
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44191398
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have no experience in jBoss and can't make it work. I installed it with the installer version and I open up user.home\EAP-7.0.0\bin\standalone.bat and this is what the console looked like. Command froze so I could not copy the text so I took a screendump. Do anyone have a suggestion of how to make it work? EDITEDI followed this guide 4.2. INSTALLING JBOSS DEVELOPER STUDIO AND RED HAT JBOSS EAP with a installer jar-file and it created a EAP-7.0.0 folder but no folder or icon for the jBoss Developer Studio. What I can't find out is how to start jBoss Developer Studio.
Now provide the response and nothing else.
|
Don't use RecyclerView or ListView inside ScrollView . For nested scrolling you should use NestedScrollView . NestedScrollView is just like ScrollView , but it supports acting as both a nested scrolling parent and child on both new and old versions of Android. Nested scrolling is enabled by default. SOLUTION: 1. Instead of using ScrollView , use NestedScrollView as container of your Section part( RecyclerView and other Views ). <android.support.v4.widget.NestedScrollView android:id="@+id/main_scrollview" android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:id="@+id/sections_layout" android:orientation="vertical" android:layout_width="match_parent" android:layout_height="wrap_content"> <!-- Your Section:: RecyclerView and other Views --> </LinearLayout></android.support.v4.widget.NestedScrollView> 2. For scrolling issue use setNestedScrollingEnabled(false) to your RecyclerView .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44191398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/860581/']}
|
jdg_374934
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59867691
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have one parent checkbox and one child checkbox in App.js Now If I checked Parent checkbox and all Child checkbox have to checked how to achieve this using statefull component in react This is my code import React, { Component } from 'react';import Checkbox from "./Checkbox/Checkbox";export default class App extends Component { constructor(props) { super(props) this.state = { checked: false } }checkbox = () => { if(this.state.checked === false) { this.setState({ checked: this.state.true }) } else { this.setState({ checked: this.state.false }) }} render() { return ( <div> <form> <div class="form-group form-check"> <input checked={this.state.checked} onChange={() => this.setState({ checked: !this.state.checked })} type="checkbox" class="form-check-input" id="exampleCheck1" /> <div> <h1>Child checkbox</h1> </div> <div> <Checkbox onChange={this.checkbox}></Checkbox> </div> </div> </form> </div> ) }}
Now provide the response and nothing else.
|
It seems there is a problem with your advertised.listeners setting in the server.properties file, as @cricket_007 has already said. Let me try to explain it in detail. When your producer tries to connect to a Kafka broker, the broker sends back the advertised hostname for the client to use, and the producer then connects to the broker using that address. So normally the communication looks like this (first diagram, not reproduced here). But in your case the communication between producer and broker looks like this (second diagram, not reproduced here). P.S.: You can read the public IP in the images as the private IP for your case. As a result you should set your advertised listeners in server.properties like this:

advertised.listeners=PLAINTEXT://192.168.199.137:9092

Reference for images: https://www.udemy.com/course/kafka-cluster-setup/
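To see the effect from the client side, here is a minimal sketch using the kafka-python package (the choice of kafka-python is an assumption, since the answer does not name a client library; the broker address is the one from the answer above and the topic name is made up):

# pip install kafka-python
from kafka import KafkaProducer

# The bootstrap address only starts the conversation; the broker then hands back
# whatever is in advertised.listeners, and that is the address the producer
# actually uses for produce requests. If advertised.listeners points at an
# unreachable host, sends fail even though the bootstrap connection succeeded.
producer = KafkaProducer(bootstrap_servers="192.168.199.137:9092")
producer.send("test-topic", b"hello")
producer.flush()

If the send hangs or times out right after the metadata request succeeds, that is usually the symptom of a wrong advertised.listeners value.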
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59867691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12746202/']}
|
jdg_374935
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24312010
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I enabled security on Jenkins and added myself as the only user. After clicking on "Save" it brought me back to the log-on screen without asking me to create a password. Now I can not log-in with the username I just added. Is there a default password that I can use or do I have to disable security again in the config.xml file? Thanks for any help :)
Now provide the response and nothing else.
|
I solved it. Here's what I did:

1. I disabled security in JENKINS_HOME/config.xml
2. Restarted Tomcat
3. I re-enabled security
4. Selected Jenkins' own database and also checked "Allow users to sign up" (also make sure that in Authorization, "Anyone can do anything" is ticked)
5. "Save"

After clicking Save, it will bring you to the initial page, and in the top-right corner there is a "Sign up" link. Click on "Sign up", enter your username and password, then log in. Right now you are the only registered user. But it actually did something else. Prior to the creation of your first account, there is no "Manage users" in the "Manage Jenkins" menu. I saw this guide on adding users to Jenkins and it says that there should be a "Manage users" link under the "Manage Jenkins" menu, but there is none. After you have created your first account (using Sign up), you can now find it there. Add users via "Manage users", then go back to "Manage security" and disable "Allow users to sign up". Also, you can now use matrix-based authentication without having to worry about getting locked out of Jenkins. I hope someone can find this useful. Cheers.
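One detail step 1 glosses over: "disabled security in JENKINS_HOME/config.xml" usually means changing the <useSecurity>true</useSecurity> element to false and restarting Jenkins. That element name is my recollection rather than something stated in this answer, so check your own config.xml (and back it up) before editing anything.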
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24312010', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2963300/']}
|
jdg_374936
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12307234
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting an error message on this function telling me that parseInt doesn't have a radix parameter, even though I'm passing it the variable moveCount. I assume this means that the value of the span with the id #moveCount is not being assigned to the variable moveCount. I've marked the line below where I think the problem is arising. I've tried the line moveCount = $('span #moveCount').html(); both with and without using the word 'span' in the selector and am getting the same result.

function incrementMoveCount() {
    // gets the html of the span with id moveCount
    // turns it into a number
    // increments it by one
    // sets the html of the span with id moveCount to the new move count
    var moveCount = 0;
    var newMoveCount = 0;
    moveCount = $('span #moveCount').html(); // problem here
    newMoveCount = parseInt(moveCount);
    newMoveCount = newMoveCount + 1;
    $('span #moveCount').html('newMoveCount');
}

html:

<div id="meta">
<h1>Checkers</h1>
<h4>Moves: <span id="moveCount">0</span></h4>
</div>
Now provide the response and nothing else.
|
You can remove the space if it's the span's id: a space in a selector specifies descendants, so you are actually looking for descendants of a span with an id of moveCount.

$('span#moveCount').html();

Or, since you are selecting by ID, you don't need to specify span at all. The ID selector is the fastest, and IDs are unique, so it will always find the first element with that id.

$('#moveCount').html();
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12307234', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/577455/']}
|
jdg_374937
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
240360
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This problem is the fifth question of 1996 Chinese High School Mathematics League or Chinese Mathematical Olympiad in Senior : Choose several colors from the given six different colors to dye six faces of a cube, and dye each two faces with common edges into different colors. How many different dyeing schemes are there? Note: if we dye two identical cubes, we can make the six corresponding faces of the two cubes dyed the same by proper flipping, then we say that the two cubes have the same dyeing scheme. Show[Graphics3D[ Rotate[Cuboid[{0, 0, 0}, {1, 1, 1}], 0 Degree, {0, 0, 1}], Axes -> True], i = 1; Graphics3D[ Table[Text[Style[ToString[i++], 15], {x, y, z}], {x, 0, 1}, {y, 0, 1}, {z, 0, 1}]]] g0 = Graph[(Sort /@ Flatten[Map[Thread[#[[1]] \[UndirectedEdge] #[[2]]] &, {{1, {2, 3, 5}}, {2, {1, 4, 6}}, {3, {1, 4, 7}}, {4, {2, 3, 8}}, {5, {1, 6, 7}}, {6, {2, 5, 8}}, {7, {3, 5, 8}}, {8, {4, 6, 7}}}]]) // DeleteDuplicates, VertexLabels -> "Name"];(*Adjacency between vertices*)ChromaticPolynomial[g0, 6]poly = CycleIndexPolynomial[DihedralGroup[8], Array[Subscript[a, ##] &, 6]] g1 = Graph[(Sort /@ Flatten[Map[Thread[#[[1]] \[UndirectedEdge] #[[2]]] &, {{1, {2, 3, 4, 5}}, {2, {1, 3, 5, 6}}, {3, {1, 4, 2, 6}}, {4, {1, 3, 5, 6}}, {5, {1, 2, 4, 6}}, {6, {2, 3, 4, 5}}}]]) // DeleteDuplicates, VertexLabels -> "Name"];(*Adjacency between faces*)ChromaticPolynomial[g1, 6] The above method may not consider the restriction that the color of adjacent faces can not be the same, and does not eliminate the same dyeing situation after rotation, so there are many unreasonable schemes. f = Table[{i, Delete[Range[6], {{i}, {7 - i}}]}, {i, 6}];(*A face and its adjacent 4 faces*) g = Table[{i, 7 - i}, {i, 3}]; DeleteDuplicatesBy[ Select[MapThread[Rule[#1, #2] &, {Range[6], #}] & /@ Tuples[{Black, White, Red, Green, Yellow, Cyan}, {6}], Cases[f /. #, {x_, {___, x_, ___}}] == {} &(*Detect whether a face has the same color as its four adjacent faces*)], Sort[Sort /@ (g /. #)] &(*Remove duplication*)] // Length The results of the above codes are 198030 , 4080 and 215 , but the reference answer is 230 (Maybe I didn't effectively exclude the same dyeing scheme after rotation). How to correctly list all the solutions to this problem? With reference answers: f = Table[{i, Delete[Range[6], {{i}, {7 - i}}]}, {i, 6}];g = Table[{i, 7 - i}, {i, 3}];sol = Values /@ DeleteDuplicatesBy[ Select[MapThread[Rule[#1, #2] &, {Range[6], #}] & /@ Tuples[{Black, White, Red, Green, Yellow, Cyan}, {6}], Cases[f /. #, {x_, {___, x_, ___}}] == {} &], Sort[Sort /@ (g /. #)] &] ;newsol = Map[#[[{1, 3, 2, 4, 5, 6}]] &, sol];(*Adjust the display order of faces*)newsol // Length(Graphics3D[{Specularity[0, 10], Rotate[Thread[{#, MeshPrimitives[Cuboid[{0, 0, 0}, {1, 1, 1}], 2]}], 0 Degree, {0, 0, 1}]}, Lighting -> ({"Directional", White, #} & /@ Tuples[{-1, 1}, 3])(*Diffuse light sources are arranged at four corners Or use a white scattering light source: Lighting -> {{"Ambient", White}}*)] & \/@ newsol[[1 ;; 9]]) // Multicolumn octahedralgroup=MatrixForm /@ FiniteGroupData["Octahedral", "MatrixRepresentation"]Det /@ FiniteGroupData["Octahedral", "MatrixRepresentation"] Acknowledgements: Thank you very much for the detailed answers provided by thorimur . I hope community members can provide more and more ingenious methods (additional reward).
Now provide the response and nothing else.
|
Be warned: this is a long answer, because I'm trying to be sufficiently general to treat basic graph colorings in Mathematica and maximally explanatory for anyone reading. tl;dr: Define graph colorings; create functions that identify generate colorings; then quotient the set of colorings by the graph automorphisms, by creating literal equivalence classes of colorings. Count the number of resulting equivalence classes. Get 215 instead of 230 ; find that the reference answer has double-counted the number of 6-colorings by accident—or that the question is actually slightly different than as translated, and recover 230 in that case! (Note: code presented in full near the bottom.) Intro Encoding it as a graph and looking at colorings is a good strategy! However, we need to take into account two things: ChromaticPolynomial[g, k] gives colorings using exactly k colors, whereas you need to choose up to k = 6 colors ChromaticPolynomial[g, k] considers graphs to be labeled, and so, for example, there are, according to ChromaticPolynomial , 2 colorings of the graph 1 •-• 2 . We could do this by "standard" combinatorial methods, like counting how many possibilities there are for the placement of successive colors, but I want to try to stick with your graph strategy. The second graph g1 , encoding faces as graph vertices and edges as connections, is the relevant one. Unfortunately, Mathematica doesn't have built-in graph coloring utilities beyond ChromaticPolynomial . So, we'll need to build our own. Building a solution Defining and checking graph colorings Let's choose a form to represent graph colorings with. A(n unrestricted) graph coloring is an assignment from each vertex in a graph to a color. So let's encode a coloring as an association on graph vertices, e.g.: <| v1 -> color1, v2 -> color2, ..., vn -> colorn |> This is not the most efficient way to do this. A more efficient way would be to simply use a list of colors, with the color in the n th position indicating the color of the n th vertex in VertexList[g] . But that's okay. So, let's write a function that tests if a given coloring is even a well-formed assignment of colors to a given graph's vertex set, not even requiring adjacent vertices are differently colored yet: UnrestrictedColoringQ[g_, coloring_Association] := ContainsExactly[VertexList[g], Keys[coloring]] Ok. Now let's test if it's an actual graph coloring, i.e. that no two adjacent vertices have the same color. We'll do this by mapping the association over the edges, which will replace each vertex with its color (here c is our function/association)—we do this by mapping over the edge list at the 2nd level. For example, written out stylistically instead of with \[UndirectedEdge] , just for showing the result: In[1]:= Map[c, {1 •-• 2, 2 •-• 3}, {2}]Out[1]:= {c[1] •-• c[2], c[2] •-• c[3]} The question is then whether we wind up with a color connected to a color n the output. If so, then two adjacent vertices have been assigned the same color by c . We want to check that this is avoided. That is, we want to check that that self-loops, loops of the kind a •-• a , do not appear. We'll do this with FreeQ[result, v_ \[UndirectedEdge] v_] . (Note: This assumes undirected edges; we could include directed edges by providing a couple alternatives to the pattern via | .) So, putting this all together, ColoringQ[g_, c_Association] := FreeQ[Map[c, EdgeList[g], {2}], v_ \[UndirectedEdge] v_, 1] /; UnrestrictedColoringQ[g, c] where the /; checks that c is at least an unrestricted coloring first. 
(If we were really building a package, we'd probably want to return an error message in that case instead.) Also note that the 1 in FreeQ just restricts us to testing the first level for safety. Generating colorings Okay, now let's build our colorings that select from a set of 6 colors. There are much better algorithms for doing this, but we're going to do it by brute force, since we only need to consider 6^6 == 46656 colorings. We can get all lists of 6 elements drawn from the 6 colors {1,2,3,4,5,6} via Tuples[{1,2,3,4,5,6}, 6] , or in general, Tuples[Table[i, {i, Length @ VertexList[g]}], Length @ VertexList[g]] . We then want to make these into unrestricted colorings, i.e. associations; we can do this with AssociationThread , e.g. AssociationThread[VertexList[g], {4,6,2,2,1,2}] produces the association we want it to. So, AllUnrestrictedColorings[g_] := With[{vs = VertexList[g]}, AssociationThread[vs, #] & /@ Tuples[Table[i, {i, Length[vs]}], Length[vs]]] We can then select the ones that are colorings. This considers isomorphic colorings inequivalent if the color labels and vertex labels are different, so we'll reflect that in the name: AllLabeledColorings[g_] := Select[AllUnrestrictedColorings[g], ColoringQ[g, #] & ] Modding out by vertex relabeling Now comes the interesting part. We want to consider the action under reflections and rotations of the cube. Mathematically, we're modding out by the action of that symmetry group. Usually this is done by creating equivalence classes, and while there are more efficient ways to do it computationally, let's reflect the typical mathematical procedure. Now, it happens that reflections and rotations of the cube correspond exactly to graph automorphisms of g1 . Mathematica has a function to produce the automorphism group of a graph, namely GraphAutomorphismGroup . We can get the list of group elements with GroupElements , and then apply these to a list of vertices by Permute[list, groupelement] or for a single element by PermutationReplace . We'll map over the keys in each association in this implementation; if we were taking colorings to be lists instead of associations, the first strategy might be relevant. Note that this does not account for isomorphic colorings up to relabeling of colors ; for example, on the graph 1 •-• 2 •-• 3 , if our colors are R, G, B , then this considers R-G-R to be inequivalent to R-B-R and B-R-B (etc.) This is what you want, though. So, if AutG is the list of group elements, a single equivalence class for a coloring c is Function[h, KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG Note: This assumes that our vertices are integers. In general, we'd need to use VertexIndex to turn it into an integer, permute, then extract the right vertex from VertexList . (Or permute the VertexList directly via Permute .) Now, for implementation reasons (namely that <| a -> x, b -> y |> is not equal to <| b-> y, a-> x |> ) we'll want to sort the resulting associations by the keys. So, instead, we want, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG We're going to package this into a function with parameter c then map over the list of colorings. Once we do, we want to delete equivalent, uh, equivalence classes (i.e. equivalence classes with the same elements) by DeleteDuplicates with function ContainsExactly . 
Putting this all together, for a list of colorings clist , we can write AutMod[g_, clist : {___Association}] := With[{AutG = GroupElements[GraphAutomorphismGroup[g]]}, DeleteDuplicates[ Function[c, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG ] /@ clist, ContainsExactly]] Now AutMod[g1, AllLabeledColorings[g1]] should give us all inequivalent (in the context of this problem) colorings. The length of this should be the number of dyeing schemes. The result Our result Now. This works. It takes a while to run. Your computation, which was posted after I began writing this, is much more efficient, but this reflects the underlying math more readably in my opinion, and is therefore easier to trust (for me, at least); and it's generalizable (at least to other small graphs!). However, after consideration, I believe your approach, which appears to use neighborhoods, might be generalizable too, and is certainly nicer computationally. If we wanted to make the above more efficient while using the same strategies, e.g. by encoding colorings differently, I think we could, and we might end up with something similar to what you have. The answer this produces, though, is 215 . The given answer is 230 . I'm pretty confident in the above determination of 215 because of the underlying mathematics, and from testing some smaller graphs. Why the competition is wrong Further, let's examine the reference answer. They count 30 configurations using all 6 colors, arguing roughly as follows: Fix a certain color on the top, leaving 5 options for the bottom, and $(4-1)! = 6$ colors for the remaining 4 sides, totaling 30 methods. However, they have double-counted the configurations for the remaining 4 sides, as they have forgotten to account for the reflection that identifies two of the 4 sides. The fact that we may fix one color on the top and have 5 choices for the bottom is correct. When considering how many options there are for the four remaining sides spoken of, we must imagine rotating the cube to fix one of the remaining 4 colors, on, say, the North face (so no choice has been made); then the choice of color for the South face is among all 3 remaining colors. The remaining two possible assignments of colors to the East and West faces are equivalent, by considering thee reflection that exchanges the East and West axis, so there is only actually 1 choice remaining. So the total number of possibilities is 5 times 3 times 1 (15), not 30. Hence, we conclude that the reference answer is in error, and 215 is the correct answer! Why the competition is right (and checking it) However, this whole computation might be predicated on a translation error. I've assuming that "proper flipping" means a flipping that is nontrivial, i.e., is actually a flipping operation (has determinant $-1$ ). But it strikes me that if "flipping" actually means something more like "orthogonal transformation" or "rotation", and "proper" means a member of the special orthogonal group, then this means the opposite—that we only allow things with determinant 1! Indeed, in that case, the competition's answer is correct. 
Let's verify that by generalizing our code for AutMod to allow arbitrary automorphism groups: AutMod[g_, clist : {___Association}, autg_List : Null] := With[{AutG = Replace[autg, Null :> GroupElements[GraphAutomorphismGroup[g]]]}, DeleteDuplicates[ Function[c, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG ] /@ clist, ContainsExactly] ] (If we were being more precise, we'd probably check if it were a subgroup of the graph automorphism group.) Then realize that the group of proper rotations may be generated by two 90 degree rotations, which here may be realized as the cycles Cycles[{{2, 3, 4, 5}}] and Cycles[{{1, 2, 6, 4}}] upon examining the specific form of g1 given. Then take H = GroupElements @ PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}]}] and we indeed find that AutMod[g1, AllLabeledColorings[g1], H] has Length equal to 230 . The code Here's all of the code presented in full: UnrestrictedColoringQ[g_, coloring_Association] := ContainsExactly[VertexList[g], Keys[coloring]];ColoringQ[g_, c_Association] := FreeQ[Map[c, EdgeList[g], {2}], v_ \[UndirectedEdge] v_, 1] /; UnrestrictedColoringQ[g, c];AllUnrestrictedColorings[g_] := With[{vs = VertexList[g]}, AssociationThread[vs, #] & /@ Tuples[Table[i, {i, Length[vs]}], Length[vs]]];AllLabeledColorings[g_] := Select[AllUnrestrictedColorings[g], ColoringQ[g, #] & ];AutMod[g_, clist : {___Association}, autg_List : Null] := With[{AutG = Replace[autg, Null :> GroupElements[GraphAutomorphismGroup[g]]]}, DeleteDuplicates[ Function[c, Function[h, KeySort @ KeyMap[Function[v, PermutationReplace[v, h]], c]] /@ AutG ] /@ clist, ContainsExactly] ](* With g1 as above: *)H = GroupElements @ PermutationGroup[{Cycles[{{2, 3, 4, 5}}], Cycles[{{1, 2, 6, 4}}]}];AutMod[g1, AllLabeledColorings[g1]] // LengthAutMod[g1, AllLabeledColorings[g1], H] // Length Another approach There's also another way we could do this: by procedural choices in a manner paralleling the competition. Order your colors 1 through 6. Up to rotation + flipping (i.e. isometry), we can demand that the least-ranked color appearing be on the bottom. Now, up to isometry, there are 2 choices for the second-least-ranked color (which might be the same color!): opposite the least or adjacent to it. If it's adjacent, it cannot be the same color. Now take the third-least ranked color—etc. It's a big tree of case analysis. We can get Mathematica to do that too! I think this is essentially what you achieve in your third code snippet. The key here is that after we choose some particular vertices to color, the symmetry group reduces to the stabilizer of those vertices (i.e. the elements of the automorphism group that preserve it). Given a current symmetry group, our choice lies only in what orbit to place the color in, as all choices within a given orbit are the same up to that symmetry (practically by definition). When I have the chance I'll update this answer with a description of how to do this in Mathematica.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/240360', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/42417/']}
|
jdg_374938
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16334630
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have written a simple Java program as shown here:

public class Test {
    public static void main(String[] args) {
        int i1 = 2;
        int i2 = 5;
        double d = 3 + i1/i2 + 2;
        System.out.println(d);
    }
}

Since the variable d is declared as double, I am expecting the result of this program to be 5.4, but I get 5.0 as output. Please help me understand this.
Now provide the response and nothing else.
|
i1/i2 will be 0, since i1 and i2 are both integers. With int1/int2, if the answer is not a whole number, the digits after the decimal point are dropped. In your case, 2/5 is 0.4, so you'll get 0. You can cast i1 or i2 to double (the other will be implicitly converted):

double d = 3 + (double)i1/i2 + 2;
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16334630', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2243897/']}
|
jdg_374939
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17788445
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm not really sure what I'm doing. Should I be using a library for this? Or do it manually? So I'm trying to do some work with the WiThings ( http://www.withings.com/api ) API in Python. In order to perform some of the requests, OAuth authentication is required. I have gone through using the requests library and obtained an oauth token and secret token, alongside my consumer and consumer secret tokens. Now I am at the point of having to make requests, and I am running into some problems. The format for the request I need to make is as follows (an example from their API): http://wbsapi.withings.net/notify?action=subscribe&callbackurl=http%3a%2f%2fwww.yourdomain.net%2fyourCustomApplication.php&comment=Your%20Own%20Application%20Description&oauth_consumer_key=c331c571585e7c518c78656f41582e96fc1c2b926cf77648223dd76424b52b&oauth_nonce=accbac1b7ee2b86b828e6dc4a5a539b2&oauth_signature=XfobZMboIg2cRyNKAvyzONHHnKM%3D&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1311842514&oauth_token=887557411788d5120537c6550fbf2df68921f8dd6f8c7e7f9b441941eb10&oauth_version=1.0&userid=831 As far as I can tell, this is pretty much a typical format with OAuth, except for the userid at the end. So, is it possible for me to make a request like this using the requests library? Or some other library? How do I get the URL right, with the comment and userid and callbackurl fields? Or do I need to generate this URL manually? If that's the case, what's the best way of going about doing this? Any assistance is greatly appreciated, as I've been stuck on this for a while. EDIT: For some clarification, I understand about 98% of the code I am being referred to. I am only having a little problem at the end. So here I am, with the following code:

from __future__ import unicode_literals
from urlparse import parse_qs
import requests
from requests_oauthlib import OAuth1Session

consumer_key = '**Valid consumer key**'
consumer_secret = '**Valid consumer secret**'
oauth_key = '**Valid oauth key obtained through requests library and OAuth workflow**'
oauth_secret = '**Valid oauth secret obtained through requests library and OAuth workflow**'
verifier = '**Valid consumer key obtained through requests library and OAuth workflow**'

base_url = 'http://wbsapi.withings.net/notify'
params = {
    'action': 'subscribe',
    'callbackurl': '**callback URL**',
    'comment': '**comment**',
    'oauth_consumer_key': '**consumer_key**',
    'oauth_nonce': 'etc etc',
    'oauth_signature': ''  # <-------------- Where do I get this
    # etc etc... I have everything else
}
r = requests.get("http://wbsapi.withings.net/notify", params=params)

This is all I need. I have everything I need but the signature. Is there a way I can get the signature from the oauth libraries? This is all that has been holding me up.
Now provide the response and nothing else.
|
To perform GET requests with a URL query string:

import requests

params = {
    'action': 'subscribe',
    'callbackurl': '',
    'comment': '',
    'oauth_consumer_key': '',
    'oauth_nonce': '',
    # more key=value pairs as appeared in your query string
}
r = requests.get("http://wbsapi.withings.net/notify", params=params)

With that cleared, you now just need to follow the workflow documented on http://www.withings.com/en/api/oauthguide and implement it. Upon receiving your OAuth Key and OAuth Secret, perform a GET request with the following endpoint and query string, which will give you back a token:

https://oauth.withings.com/account/request_token
  ?oauth_callback=http%3A%2F%2Fexample.com%2Fget_access_token
  &oauth_consumer_key=c331c571585e7c518c78656f41582e96fc1c2b926cf77648223dd76424b52b
  &oauth_nonce=f71972b1fa93b8935ccaf34ee02d7657
  &oauth_signature=J8xzgFtHTsSRw8Ejc8UDV2jls34%3D
  &oauth_signature_method=HMAC-SHA1
  &oauth_timestamp=1311778988
  &oauth_version=1.0

Then you need to authorize the token you received with the following request, which will give you the user_id:

https://oauth.withings.com/account/authorize
  ?oauth_callback=http%3A%2F%2Fexample.com%2Fget_access_token
  &oauth_consumer_key=c331c571585e7c518c78656f41582e96fc1c2b926cf77648223dd76424b52b
  &oauth_nonce=369f9ceb2f285ac637c9a7e9e98019bd
  &oauth_signature=OR9J9iEl%2F2yGOXP2wk5c2%2BWtYvU%3D
  &oauth_signature_method=HMAC-SHA1
  &oauth_timestamp=1311778988
  &oauth_token=5bb105d2292ff43ec9c0f633fee9033045ed4643e9871b80ce586dc1bf945
  &oauth_version=1.0

Then you need to request the access_token by hitting this endpoint with some more query string parameters:

https://oauth.withings.com/account/access_token
  ?oauth_consumer_key=c331c571585e7c518c78656f41582e96fc1c2b926cf77648223dd76424b52b
  &oauth_nonce=7acd22371fc56fd8a0aaf8416f79f84f
  &oauth_signature=jmj1g%2FB3rYR2DCpWp86jB5YVHIM%3D
  &oauth_signature_method=HMAC-SHA1
  &oauth_timestamp=1311778988
  &oauth_token=5bb105d2292ff43ec9c0f633fee9033045ed4643e9871b80ce586dc1bf945
  &oauth_version=1.0
  &userid=831

Now you have everything needed to perform the aforementioned request in your question, and others. An example directly from the documentation:

http://wbsapi.withings.com/measure
  ?action=getmeas
  &oauth_consumer_key=c331c571585e7c518c78656f41582e96fc1c2b926cf77648223dd76424b52b
  &oauth_nonce=accbac1b7ee2b86b828e6dc4a5a539b2
  &oauth_signature=XfobZMboIg2cRyNKAvyzONHHnKM%3D
  &oauth_signature_method=HMAC-SHA1
  &oauth_timestamp=1311842514
  &oauth_token=887557411788d5120537c6550fbf2df68921f8dd6f8c7e7f9b441941eb10
  &oauth_version=1.0
  &userid=831

Again, everything can be done without an explicit oauth library, as you can finish the workflow with requests.get and a query string built from a dict fed into the params argument of the method. I truly hope this helps you achieve your goal.
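Since the asker already has requests_oauthlib installed, it is worth noting that oauth_signature (along with the nonce and timestamp) can also be generated automatically by letting the library sign the request instead of building the query string by hand. A minimal sketch, assuming the consumer and token credentials obtained earlier in the workflow; the variable names are placeholders, not anything Withings-specific:

from requests_oauthlib import OAuth1
import requests

# OAuth1 computes oauth_nonce, oauth_timestamp and oauth_signature for you and
# attaches them to the request (by default in the Authorization header).
auth = OAuth1(consumer_key, consumer_secret, oauth_key, oauth_secret)

params = {
    'action': 'subscribe',
    'callbackurl': '**callback URL**',
    'comment': '**comment**',
    'userid': 831,
}
r = requests.get('http://wbsapi.withings.net/notify', params=params, auth=auth)

Whether Withings accepts header-based signing or insists on the signature appearing in the query string is something to verify against their documentation; the point here is only that the library can produce the signature for you.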
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17788445', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1596332/']}
|
jdg_374940
|