source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 1–8) | prompt (stringlengths 209–40.4k) | gold_standard_solution (stringlengths 0–56.7k) | verification_info (stringclasses 1) | metadata (stringlengths 138–225) | problem_id (stringlengths 9–10)
---|---|---|---|---|---|---|---
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36760
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a criterion to show that a level set of some map is not an (embedded) submanifold? In particular, an exercise in Lee's smooth manifolds book asks to show that the sets defined by $x^3 - y^2 = 0$ and $x^2 - y^2 = 0$ are not embedded submanifolds. In general, is it possible that a level set of a map which does not have constant rank on the set still defines an embedded submanifold?
Now provide the response and nothing else.
|
It is certainly possible for a level set of a map which does not have constant rank on the set to still be an embedded submanifold. For example, the set defined by $x^3 - y^3 = 0$ is an embedded curve (it is the same as the line $y=x$), despite the fact that $F(x,y) = x^3 - y^3$ has a critical point at $(0,0)$.

The set defined by $x^2 - y^2 = 0$ is not an embedded submanifold, because it is the union of the lines $y=x$ and $y=-x$, and is therefore not locally Euclidean at the origin. To prove that no neighborhood of the origin is homeomorphic to an open interval, observe that any open interval splits into exactly two connected components when a point is removed, but any neighborhood of the origin in the set $x^2 - y^2 = 0$ has at least four components after the point $(0,0)$ is removed.

The set $x^3-y^2 = 0$ is an embedded topological submanifold, but it is not a smooth submanifold, since the embedding is not an immersion. There are many ways to prove that this set is not a smooth embedded submanifold, but one possibility is to observe that any smooth embedded curve in $\mathbb{R}^2$ must locally be of the form $y = f(x)$ or $x = f(y)$, where $f$ is some differentiable function. (This follows from the local characterization of smooth embedded submanifolds as level sets of submersions, together with the Implicit Function Theorem.) The given curve does not have this form, so it cannot be a smooth embedded submanifold.
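As a small sanity check of the first example (an illustrative sketch added here, not part of the original answer; it assumes sympy is available):

```python
# Sketch: why {x^3 - y^3 = 0} is just the line y = x, even though dF vanishes at the origin.
# Assumes sympy is installed; the symbols are illustrative only.
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = x**3 - y**3

# The zero set factors as (x - y) * (x^2 + x*y + y^2).
print(sp.factor(F))                      # (x - y)*(x**2 + x*y + y**2)

# The quadratic factor is positive definite: x^2 + x*y + y^2 = (x + y/2)^2 + 3*y^2/4,
# so it vanishes only at (0, 0), which already lies on the line y = x.
quad = x**2 + x*y + y**2
print(sp.simplify(quad - ((x + y/2)**2 + sp.Rational(3, 4)*y**2)))   # 0

# Yet the gradient of F vanishes at the origin, so (0, 0) is a critical point
# and the regular level set theorem does not apply there.
grad = [sp.diff(F, v) for v in (x, y)]
print([g.subs({x: 0, y: 0}) for g in grad])                          # [0, 0]
```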
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/36760', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10434/']}
|
jdg_373041
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
182696
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
"Consider sequences of positive real numbers of the form x,2000,y,..., in which every term after the first is 1 less than the product of its two immediate neighbors. For how many different values of x does the term 2001 appear somewhere in the sequence? (A) 1 (B) 2 (C) 3 (D) 4 (E) More than 4" Can anyone suggest a systematic way to solve this problem? Thanks!
Now provide the response and nothing else.
|
Perhaps a bit of a tedious solution, but this is the most straightforward approach. The rule says $a_n = a_{n-1}a_{n+1} - 1$, i.e. $a_{n+1} = \frac{a_n + 1}{a_{n-1}}$, and by direct substitution it is not difficult to show that the sequence necessarily repeats after the $6$th term. We have $$a_1 = x,\quad a_2 = 2000,\quad a_3 = \frac{2001}{x},\quad a_4 = \frac{2001 + x}{2000x},\quad a_5 = \frac{1 + x}{2000},\quad a_6 = x,\quad a_7 = 2000,$$ and so on. Clearly we can take $x=2001$ for a trivial appearance. For $a_3$ to be $2001$ we require $x=1$. For $a_4$ to be $2001$ we require $x=\frac{2001}{4001999}$. For $a_5$ to be $2001$ we require $x = 4001999$. This gives a total of $4$ solutions for $x$. (A quick numerical check of the period and of these four values is sketched below.)
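A quick numerical verification of the claims above (an illustrative sketch added here; the recurrence $a_{n+1} = (a_n+1)/a_{n-1}$ is taken from the answer):

```python
# Check the period-5 pattern and the four values of x using exact rational arithmetic.
from fractions import Fraction

def sequence(x, n_terms=12):
    a = [Fraction(x), Fraction(2000)]
    while len(a) < n_terms:
        a.append((a[-1] + 1) / a[-2])    # a_{n+1} = (a_n + 1) / a_{n-1}
    return a

# Period 5: a_6 = a_1 and a_7 = a_2 for a generic starting value.
seq = sequence(Fraction(7, 3))
assert seq[5] == seq[0] and seq[6] == seq[1]

# Each candidate value of x makes the term 2001 appear somewhere in the sequence.
for x in (Fraction(2001), Fraction(1), Fraction(2001, 4001999), Fraction(4001999)):
    assert 2001 in sequence(x), x
print("all four candidates produce the term 2001")
```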
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/182696', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/37950/']}
|
jdg_373042
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24691385
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using sails 0.9.16 with Postgres and my question is: what is the best way to execute a transaction using the current API with promises? Maybe there is something better than:

```js
Model.query('BEGIN TRANSACTION', function (err) {
  if (err) {
    next(err);
  } else {
    Model
      .create(...)
      .then(function (value) {
        return [value, RelatedModel.create(...).then(...)];
      })
      .fail(function (err) {
        Model.query('ROLLBACK');
        next(err);
      })
      .spread(function (...) {
        Model.query('COMMIT')
        next(...);
      })
  }
})
```

Thanks for help!
Now provide the response and nothing else.
|
I'm currently using this exact workflow. For executing one query with promises do this:

```js
Model
  .query(params)
  .then(function(result){
    //act on result
  })
  .catch(function(error){
    //handle error
  })
  .done(function(){
    //clean up
  });
```

To execute multiple queries in parallel, do this:

```js
var Promise = require('q');

Promise.all([
  User.findOne(),
  AnotherModel.findOne(),
  AnotherModel2.find()
]).spread(function(user, anotherModel, anotherModel2){
  //use the results
}).catch(function(){
  //handle errors
}).done(function(){
  //clean up
});
```

If you're trying to avoid nesting in your code:

```js
Model.query(params).then(function(result){ //after query #1
  //since you're returning a promise here, you can use .then after this
  return Model.query();
}).then(function(results){ //after query #2
  if(!results){
    throw new Error("No results found in query #2");
  }else{
    return Model.differentQuery(results);
  }
}).then(function(results){
  //do something with the results
}).catch(function(err){
  console.log(err);
}).done(function(){
  //cleanup
});
```

Note: currently, waterline uses Q for promises. There is a pull request to switch waterline from Q to bluebird here: waterline/bluebird

When I answered this question, I'd yet to take the database class in college, so I didn't know what a transaction was. I did some digging, and bluebird allows you to do transactions with promises. The only problem is, this isn't exactly built into sails since it's somewhat of a special use case. Here's the code bluebird provides for this situation:

```js
var pg = require('pg');
var Promise = require('bluebird');
Promise.promisifyAll(pg);

function getTransaction(connectionString) {
  var close;
  return pg.connectAsync(connectionString).spread(function(client, done) {
    close = done;
    return client.queryAsync('BEGIN').then(function () {
      return client;
    });
  }).disposer(function(client, promise) {
    if (promise.isFulfilled()) {
      return client.queryAsync('COMMIT').then(closeClient);
    } else {
      return client.queryAsync('ROLLBACK').then(closeClient);
    }
    function closeClient() {
      if (close) close(client);
    }
  });
}

exports.getTransaction = getTransaction;
```
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24691385', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1087620/']}
|
jdg_373043
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4594588
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let us say I have some vector of probabilities $\vec{V}$ such that $\sum_i v_i = 1$ . I am trying to find a sharp upper bound on $\frac{\sum_i v_i^3}{(\sum_i v_i^2)^2}$ . I can currently prove via Cauchy-Schwarz a very bad upper bound - this ratio must be less than the number of entries in the vector. But I have numerical evidence that leads me to believe that something like 1.5 should be a sharp upper bound, not dependent on the vector length.
Now provide the response and nothing else.
|
Here is a upper bound : Assume that $n \ge 3$ .Denote $S = \{(v_1, v_2, \cdots, v_n) : ~ v_i \ge 0, \forall i; ~ v_1 + v_2 + \cdots + v_n = 1\}.$ We have $$\max_{(v_1, v_2, \cdots, v_n) \in S} \frac{\sum_i v_i^3}{(\sum_i v_i^2)^2} \le \frac{3\sqrt 3}{16}\sqrt n + \frac58 + \frac{5\sqrt 3}{96\sqrt n}. \tag{1}$$ The bound (1) satisfies that $$\lim_{n\to \infty} \left(\frac{3\sqrt 3}{16}\sqrt n + \frac58 + \frac{5\sqrt 3}{96\sqrt n} - \max_{(v_1, v_2, \cdots, v_n) \in S} \frac{\sum_i v_i^3}{(\sum_i v_i^2)^2}\right) = 0. \tag{2}$$ Proof of (1) and (2) : Consider the maximum of $$f(v_1, v_2, \cdots, v_n) = \frac{\sum_i v_i^3}{(\sum_i v_i^2)^2}$$ subject to $v_i \ge 0, \forall i; ~ \sum_{i=1}^n v_i = 1$ . Using Vasc's Equal Variable Theorem (Corollary 1.9, [1]), $f$ is maximal when $0 \le v_1 = v_2 = \cdots = v_{n - 1} \le v_n$ . Letting $v_n = x$ and $v_1 = v_2 = \cdots = v_{n-1} = \frac{1 - x}{n-1}$ , we have $$\max_{(v_1, v_2, \cdots, v_n) \in S} \frac{\sum_i v_i^3}{(\sum_i v_i^2)^2} = \max_{x\in [0, 1]} \frac{(n^2 - 2n)x^3 + 3x^2 - 3x + 1}{(nx^2 - 2x + 1)^2}.$$ It suffices to prove that, for all $x\in [0, 1]$ and $n \ge 3$ , $$\frac{(n^2 - 2n)x^3 + 3x^2 - 3x + 1}{(nx^2 - 2x + 1)^2} \le \frac{3\sqrt 3}{16}\sqrt n + \frac58 + \frac{5\sqrt 3}{96\sqrt n}.$$ Letting $m = \sqrt{3n} - 3\ge 0$ . After clearing the denominators, it suffices to prove that, for all $x \in \mathbb{R}$ , $$a_1 x^4 + b_1x^3 + c_1x^2 + d_1 + e_1 \ge 0 \tag{3}$$ where \begin{align*} a_1 &= 6\,{m}^{6}+128\,{m}^{5}+1115\,{m}^{4}+5100\,{m}^{3}+12960\,{m}^{2}+ 17388\,m+9639, \\ b_1 &= -32\,{m}^{5}-552\,{m}^{4}-3792\,{m}^{3}-13020\,{m}^{2}-22392\,m-15444, \\ c_1 &= 36\,{m}^{4}+552\,{m}^{3}+3270\,{m}^{2}+8460\,m+8118,\\ d_1 &= -216\,{m}^{2}-1152\,m-1692, \\ e_1 &= 54\,{m}^{2}+216\,m+207.\end{align*} We need the following auxiliary result. Fact 1 : Let $a_1 > 0, b_1, c_1, d_1, e_1$ be real numbers.Let $f(x) = a_1x^4 + b_1x^3 + c_1x^2 + d_1x + e_1$ .Let $\Delta_f$ denote its discriminant.Let $D_f = 64a_1^3e_1-16a_1^2c_1^2+16a_1b_1^2c_1-16a_1^2b_1d_1-3b_1^4$ .If $\Delta_f \ge 0$ and $D_f > 0$ , then $f(x)\ge 0$ for any real number $x$ . (See https://en.wikipedia.org/wiki/Quartic_function and the reference [13] therein,i.e., Rees, E. L. (1922). "Graphical Discussion of the Roots of a Quartic Equation". The American Mathematical Monthly. 29 (2): 51–55.) We have \begin{align*} \Delta_f &= 63700992\, \left( 6\,{m}^{2}+24\,m+23 \right)\\ &\qquad \times \left( 512\,{m}^{5}+ 5886\,{m}^{4}+26604\,{m}^{3}+57887\,{m}^{2}+58242\,m+19326 \right) \\ &\qquad \times \left( {m}^{2}+6\,m+6 \right) ^{4} \left( m+3 \right) ^{6}\\ &> 0 \end{align*} and \begin{align*} D_f &= 12288\, \left( 4\,{m}^{6}+96\,{m}^{5}+916\,{m}^{4}+4460\,{m}^{3}+11389 \,{m}^{2}+13734\,m+5262 \right) \\ &\qquad \times \left( 8\,{m}^{5}+138\,{m}^{4}+900\,{ m}^{3}+2769\,{m}^{2}+4086\,m+2358 \right) \left( m+3 \right) ^{9}\\ &> 0.\end{align*} By Fact 1, (3) is true. (2) is true since $$\lim_{n\to \infty} \left(\frac{3\sqrt 3}{16}\sqrt n + \frac58 + \frac{5\sqrt 3}{96\sqrt n} - f\Big(\sqrt{3/n}\Big)\right) = 0$$ where $$f(x) := \frac{(n^2 - 2n)x^3 + 3x^2 - 3x + 1}{(nx^2 - 2x + 1)^2}.$$ We are done. Reference : [1] Vasile Cirtoaje, “The Equal Variable Method”, J. Inequal. Pure and Appl. Math., 8(1), 2007.Online: https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf
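A quick numerical sanity check of (1), assuming the reduction to the one-variable form stated above (this sketch is added here and is not part of the proof; it just scans the reduced objective on a grid):

```python
# Compare max_x f(x) on [0, 1] against the claimed bound for several n.
import math

def reduced_objective(x, n):
    # f(x) = ((n^2 - 2n)x^3 + 3x^2 - 3x + 1) / (n x^2 - 2x + 1)^2, per the answer's reduction.
    num = (n**2 - 2*n) * x**3 + 3*x**2 - 3*x + 1
    den = (n * x**2 - 2*x + 1) ** 2
    return num / den

def bound(n):
    return 3 * math.sqrt(3) / 16 * math.sqrt(n) + 5 / 8 + 5 * math.sqrt(3) / (96 * math.sqrt(n))

for n in (3, 10, 100, 1000, 10000):
    best = max(reduced_objective(k / 200000, n) for k in range(200001))
    print(n, round(best, 6), "<=", round(bound(n), 6), best <= bound(n))
```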
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4594588', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1119986/']}
|
jdg_373044
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39026
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
There is an obvious algorithm that tests any property of most discrete distributions. This algorithm works by learning the distribution. This is done by just learning the histogram for enough number of samples. In particular, I have the following analysis,Let $p_x$ be the probability of item $x$ and $m_x$ be the number of occurrences of $x$ in $m$ samples. Then by Chernoff's bound, $Pr(|\frac{m_x}{m}-p_x|>\epsilon)<2.\exp(-2m\epsilon^2)$. Taking union bound over all $x$ in the sample space we can get $Pr(\sum_x|\frac{m_x}{m}-p_x|>\epsilon|\Omega|)<2|\Omega|\exp(-2m\epsilon^2)$. Scaling down $\epsilon$ by a factor of $|\Omega|$, we get $O(\ln |\Omega|\frac{|\Omega|^2}{\epsilon^2})$ samples are enough. It seems there are folklore algorithm to do this in $O(\ln |\Omega|\frac{|\Omega|}{\epsilon^2})$ as well as in $O(\frac{|\Omega|}{\epsilon^2})$ samples. But I am unable to prove it. A similar question was asked here: Are there distribution properties which are "maximally" hard to test?
Now provide the response and nothing else.
|
You might find this short note helpful ($\LaTeX$ code available [1] if the binary link breaks). I am reproducing the relevant part below: Theorem. (Folklore) Learning an unknown distribution over a known domain of size $n$, up to total variation $\varepsilon\in(0,1]$, and with error probability $\delta\in(0,1]$, has sample complexity $O\!\left(\frac{n+\log(1/\delta)}{\varepsilon^2}\right)$. (Moreover, this can be done efficiently.) Proof . Consider the empirical distribution $\tilde{p}$ obtained by drawing $m$ independent samples $s_1,\dots,s_m$ from the underlying distribution $p\in\Delta([n])$:\begin{equation}\label{def:empirical}\tilde{p}(i) = \frac{1}{m} \sum_{j=1}^m \mathbb{1}_{\{s_j=i\}}, \qquad i\in [n]\end{equation} First, we bound the expected total variation distance between $\tilde{p}$ and $p$, by using $\ell_2$ distance as a proxy:$$ \mathbb{E}{ d_{\rm TV}(p,\tilde{p}) } =\frac{1}{2}\mathbb{E}{ \lVert{p-\tilde{p}}\rVert_1} =\frac{1}{2}\sum_{i=1}^n\mathbb{E}{ \lvert{p(i)-\tilde{p}(i)}\rvert } \leq\frac{1}{2}\sum_{i=1}^n\sqrt{\mathbb{E}{ (p(i)-\tilde{p}(i))^2} }$$the last inequality by Jensen. But since, for every $i\in[n]$, $m\tilde{p}(i)$ follows a $\operatorname{Bin}({m},{p(i)})$ distribution, we have$\mathbb{E}{ (p(i)-\tilde{p}(i))^2} = \frac{1}{m^2}\operatorname{Var}[m\tilde{p}(i)] = \frac{1}{m}p(i)(1-p(i))$, from which$$ \mathbb{E}{ d_{\rm TV}(p,\tilde{p}) } \leq\frac{1}{2\sqrt{m}}\sum_{i=1}^n\sqrt{p(i)} \leq \frac{1}{2}\sqrt{\frac{n}{m}}$$the last inequality this time by Cauchy—Schwarz. Therefore, for $m\geq \frac{n}{\varepsilon^2}$ we have $\mathbb{E}{ d_{\rm TV}(p,\tilde{p}) }\leq \frac{\varepsilon}{2}$. Next, to convert this expected result to a high probability guarantee, we apply McDiarmid's inequality to the random variable $f(s_1,\dots,s_m) \stackrel{\rm def}{=} d_{\rm TV}(p,\tilde{p})$, noting that changing any single sample cannot change its value by more than $c\stackrel{\rm def}{=} 1/m$:$$ \mathbb{P}\left\{ \lvert{f(s_1,\dots,s_m) - \mathbb{E}{f(s_1,\dots,s_m)}\rvert} \geq \frac{\varepsilon}{2} \right\} \leq 2e^{-\frac{2\left(\frac{\varepsilon}{2}\right)^2}{mc^2}} = 2e^{-\frac{1}{2}m\varepsilon^2}$$and therefore as long as $m\geq \frac{2}{\varepsilon^2}\ln\frac{2}{\delta}$, we have $\lvert{f(s_1,\dots,s_m) - \mathbb{E}{f(s_1,\dots,s_m)}\rvert} \leq \frac{\varepsilon}{2}$ with probability at least $1-\delta$. $\square$ There is a second proof, somewhat more fun, given in that short note (credit to John Wright for pointing it out, and emphasizing it's the "fun" one). Here it is: Proof. Again, we will analyze the behavior of the empirical distribution $\tilde{p}$ over $m$ i.i.d. samples from the unknown $p$. Recalling the definition of total variation distance, note that $d_{\rm TV}({p,\tilde{p}}) > \varepsilon$ literally means there exists a subset $S\subseteq [n]$ such that $\tilde{p}(S) > p(S) + \varepsilon$. There are $2^n$ such subsets, so we can do a union bound. Fix any $S\subseteq[n]$. We have$$\tilde{p}(S) = \tilde{p}(i) = \frac{1}{m} \sum_{i\in S} \sum_{j=1}^m \mathbb{1}_{\{s_j=i\}}$$and so, letting $X_j \stackrel{\rm def}{=} \sum_{i\in S}\mathbb{1}_{\{s_j=i\}}$ for $j\in [m]$, we have$\tilde{p}(S) = \frac{1}{m}\sum_{j=1}^m X_j$ where the $X_j$'s are i.i.d. Bernoulli random variable with parameter $p(S)$. 
Then, by a Chernoff bound (actually, Hoeffding):$$ \mathbb{P}\left\{ \tilde{p}(S) > p(S) + \varepsilon \right\} = \mathbb{P}\left\{ \frac{1}{m}\sum_{j=1}^m X_j > \mathbb{E}\left[\frac{1}{m}\sum_{j=1}^m X_j\right] + \varepsilon \right\} \leq e^{-2\varepsilon^2 m}$$and therefore $\mathbb{P}\left\{ \tilde{p}(S) > p(S) + \varepsilon \right\} \leq \frac{\delta}{2^n}$ for any $m\geq \frac{n\ln 2+\log(1/\delta)}{2\varepsilon^2}$. A union bound over these $2^n$ possible sets $S$ concludes the proof:$$ \mathbb{P}\left\{ \exists S\subseteq [n] \text{ s.t. }\tilde{p}(S) > p(S) + \varepsilon \right\} \leq 2^n\cdot \frac{\delta}{2^n} = \delta$$and we are done. $\square$ Note: a lower bound of $\Omega(\frac{n}{\varepsilon^2})$ (also folklore) is easy to derive from Assouad's lemma , by considering the family of distributions over $[n]$ where each pair of consecutive elements $(2i,2i+1)$ has either probabilities $(\frac{1+c\varepsilon}{n},\frac{1-c\varepsilon}{n})$ or $(\frac{1-c\varepsilon}{n},\frac{1+c\varepsilon}{n})$ for some suitable constant $c>0$. (Intuitively and a bit misleadingly: any learning algorithm has to "figure out" at least $\Omega(n)$ of these independent choices, but each of them requires $\Omega(1/\varepsilon^2)$ samples.) [1] Public GitHub: https://github.com/ccanonne/probabilitydistributiontoolbox (includes the source for the note on Assouad's lemma as well).
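A small Monte Carlo illustration of the first proof's bound (a numpy sketch added here, not part of the argument): the expected total variation distance of the empirical distribution stays below $\frac12\sqrt{n/m}$.

```python
# Empirical distribution over a domain of size n from m samples:
# the average TV distance is bounded by 0.5 * sqrt(n / m).
import numpy as np

rng = np.random.default_rng(0)

def mean_tv(n, m, trials=200):
    dists = []
    for _ in range(trials):
        p = rng.dirichlet(np.ones(n))                 # a random distribution over [n]
        counts = rng.multinomial(m, p)
        p_hat = counts / m                            # empirical distribution
        dists.append(0.5 * np.abs(p - p_hat).sum())   # total variation distance
    return np.mean(dists)

n = 50
for m in (50, 200, 800, 3200):
    print(m, round(mean_tv(n, m), 4), "<=", round(0.5 * np.sqrt(n / m), 4))
```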
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/39026', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/43170/']}
|
jdg_373045
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
167414
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am reading Majda & Bertozzi (Vorticity and Incompressible Flow). In page 12 the following equation appears: $$\frac{D \Omega}{Dt} + \Omega \mathcal{D} + \mathcal{D} \Omega = \nu \Delta \Omega$$ where $\frac{D}{Dt}$ is the convective/lagrangian/material derivative. $\Omega$ and $\mathcal{D}$ are $3$ by $3$ matrices, the first antisymmetric and the second symmetric, and $\nu$ is a scalar. Using that $\Omega$ is defined by $\Omega h = \frac{1}{2} \omega \times h $ , where $\omega$ is a vector function representing vorticity, I should be able to get the following vorticity equation (which apparently plays a crucial role in the rest of the book): $$ \frac{D \omega}{Dt} = \mathcal{D} \omega + \nu \Delta \omega. $$ Any idea how? Here is a link to the book
Now provide the response and nothing else.
|
Rewrite the main given equation in index notation (following the Einstein summation convention): $$ D_t \Omega_{ij} + \Omega_{ik}\mathcal{D}_{kj} + \mathcal{D}_{ik}\Omega_{kj} = \nu\triangle \Omega_{ij} \tag{1}$$ Small $\omega$ is defined by $$ \Omega_{ik}h^k = \frac12 \epsilon_{ijk}\omega_j h^k \tag{2}$$ which is the cross product definition. Here $\epsilon_{ijk}$ is the Levi-Civita symbol (the fully antisymmetric tensor with $\epsilon_{123} = 1$). Plugging (2) (which implies that $\Omega_{ij} = \frac12 \epsilon_{ikj}\omega_k$) into (1) we have that $$ \epsilon_{ilj} D_t\omega_l + \epsilon_{ilk}\mathcal{D}_{kj}\omega_l + \mathcal{D}_{ik}\epsilon_{klj}\omega_l = \nu \epsilon_{ilj}\triangle \omega_l \tag{3}$$ Next we use the contraction property of the Levi-Civita tensor, $$ \epsilon_{jik}\epsilon_{jlk} = 2 \delta_{il}, \tag{4}$$ which means that multiplying (3) by $\epsilon_{imj}$ gives $$ 2D_t\omega_m + \left(\epsilon_{ilk}\epsilon_{imj}\mathcal{D}_{kj} + \epsilon_{klj}\epsilon_{imj}\mathcal{D}_{ik}\right) \omega_l = 2\nu \triangle \omega_m \tag{5}$$ The antisymmetry properties of the Levi-Civita tensor, as well as the symmetry of the tensor $\mathcal{D}$, can be used to show that $$ \epsilon_{ilk}\epsilon_{imj}\mathcal{D}_{kj} = \epsilon_{klj}\epsilon_{imj}\mathcal{D}_{ik}. $$ So by another property of the Levi-Civita tensor, $$ \epsilon_{ijk}\epsilon_{imn} = \delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}, \tag{6}$$ we conclude that (5) is equivalent (after dividing by $2$) to $$ D_t\omega_m + \omega_m\,\delta_{kj}\mathcal{D}_{kj} - \mathcal{D}_{mj}\omega_j = \nu \triangle \omega_m ~.$$ This shows that you in fact omitted one necessary condition for your equation to hold, which is that $\mathcal{D}$, in addition to being symmetric, is also trace-free (for the incompressible equations this is just $\operatorname{tr}\mathcal{D} = \nabla\cdot u = 0$). If you have learned about differential forms, one should treat $\Omega$ as a differential two form on $\mathbb{R}^3$ and $\omega$ as a differential one form on $\mathbb{R}^3$ related by the Hodge star operator $\Omega = *\omega$. From this point of view the equation you want (the one for $\omega$) is merely obtained by taking the Hodge dual of the equation you are given (the one for $\Omega$) plus a little bit of multilinear algebra.
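As a numerical sanity check of this reduction (an illustrative numpy sketch added here, not part of the original answer), one can verify that under the convention $\Omega h = \frac12\,\omega\times h$ the matrix term $\Omega\mathcal{D} + \mathcal{D}\Omega$ corresponds to the vector $(\operatorname{tr}\mathcal{D})\,\omega - \mathcal{D}\omega$, which is exactly the term appearing in the equation above:

```python
# Check: with Omega = 0.5 * [omega]_x (so Omega @ h == 0.5 * cross(omega, h)),
# Omega @ D + D @ Omega is the antisymmetric matrix associated with tr(D)*omega - D @ omega.
import numpy as np

rng = np.random.default_rng(1)

def cross_matrix(w):
    """Matrix [w]_x such that [w]_x @ h == np.cross(w, h)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

for _ in range(5):
    omega = rng.normal(size=3)
    A = rng.normal(size=(3, 3))
    D = 0.5 * (A + A.T)                      # symmetric (not necessarily trace-free)

    Omega = 0.5 * cross_matrix(omega)        # Omega @ h == 0.5 * omega x h
    lhs = Omega @ D + D @ Omega
    rhs = 0.5 * cross_matrix(np.trace(D) * omega - D @ omega)
    assert np.allclose(lhs, rhs)

print("Omega D + D Omega corresponds to tr(D)*omega - D omega; with tr(D) = 0 this gives D_t omega = D omega + nu Lap omega")
```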
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/167414', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/20376/']}
|
jdg_373046
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4470584
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Studding Character theory, and been bouncing back and forth between reading Dummit and Foote, and Character theory of finite groups by Martin Issacs. In section 18.1 of Dummit and Foote, we are given a Bijection: $$\{V: V \text{ is an } FG-\text{module}\} \leftrightarrow \{ (\phi, V): \text{is an } F-\text{ represention from } G \rightarrow GL(V)\}$$ at the start of chapter 2 of Character theory of finite groups, It states that it is possible to have fields $F < E$ , and an irreducible $F$ -representation $X$ , such that $X$ is reducible as an $E$ -representation. I don’t understand how $X$ can be irreducible as an $F$ -representation and reducible as an $E$ -representation. Since if $X$ was a representation with corresponding module $V$ , and if $\phi$ and $\psi$ were $E$ -representations with corresponding $E$ -modules $W$ and $U$ , such that $X = \phi + \psi$ . Then shouldn’t $W$ and $U$ also be $F$ -modules and $V = W + U$ , so there would have the be $F$ -representations that sum up to $X$ ? Somone gave me the example: " $X : C_3 \rightarrow GL_2(\mathbb{R})$ , where $C_3 = \langle g \rangle$ is cyclic of order $3$ and $X(g)$ is the matrix whose first row is $[ 0 \ \ 1 ]$ andwhose second row is $[ -1 \ \ -1 ]$ . The eigenvalues of this matrix are not real (they are the two primitive cube roots of $1$ ). Therefore, $X$ is irreducible as an $\mathbb{R}$ -representation of $C_3$ . Since $X(g)$ is diagonalizable over the complex numbers $\mathbb{C}$ , it follows that as a representation, $X : C_3 \rightarrow GL_2(\mathbb{C})$ reduces. So, over $\mathbb{C}$ , the representation is equivalent to a block diagonal sum of two linear representations of $C_3$ I understand the example, but I'm still not seeing where and why my logic fails?
Now provide the response and nothing else.
|
I think this is based in a misunderstanding of how we can convert between $E$ and $F$ vector spaces. For concreteness, let's work with $\mathbb{C}$ and $\mathbb{R}$ , but you'll see that the same idea works for any field extension. You're entirely right that, if we have a decomposition of $\mathbb{C}G$ modules $V = U \oplus W$ , then we can apply restriction of scalars to get a decomposition of $\mathbb{R}G$ modules $V = U \oplus W$ . But here's the key thing! When we pass from $\mathbb{C}$ vector spaces to $\mathbb{R}$ vector spaces, the dimension doubles. If $V$ used to be $n$ -dimensional as a $\mathbb{C}$ -space, it's now $2n$ -dimensional as an $\mathbb{R}$ -space. Why is this important? Let's look at your example: Given a representation of $C_3 = \langle g \rangle$ where $g \mapsto \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$ in $\mathsf{GL}_2(\mathbb{C})$ , then (as Dylan says in their answer) $V = \mathbb{C}^2$ decomposes as $$\mathbb{C}^2 = \left \langle \begin{pmatrix} \omega \\ 1 \end{pmatrix} \right \rangle\oplus\left \langle \begin{pmatrix} \omega^2 \\ 1 \end{pmatrix} \right \rangle$$ but now, if we apply restriction of scalars, what representations do we see? $$\mathbb{R}^4 = \left \langle \begin{pmatrix} - \frac{1}{2} \\ \frac{\sqrt{3}}{2} \\ 1 \\ 0 \end{pmatrix}, \ \begin{pmatrix} - \frac{\sqrt{3}}{2} \\ - \frac{1}{2} \\ 0 \\ 1 \end{pmatrix} \right \rangle\oplus\left \langle \begin{pmatrix} - \frac{1}{2} \\ -\frac{\sqrt{3}}{2} \\ 1 \\ 0 \end{pmatrix}, \ \begin{pmatrix} \frac{\sqrt{3}}{2} \\ - \frac{1}{2} \\ 0 \\ 1 \end{pmatrix} \right \rangle$$ here we're using the well known fact that $\omega = -\frac{1}{2} + \frac{\sqrt{3}}{2} i$ , and restriction of scalars turns us from a 2D $\mathbb{C}$ -vector space into a 4D $\mathbb{R}$ -vector space (where $i$ acts in the obvious way). Now, importantly, this is not the representation $g \mapsto \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$ in $\mathsf{GL}_2(\mathbb{R})$ ! Instead, it's a representation $C_3 \to \mathsf{GL}_4(\mathbb{R})$ . Notice your proof tells us that this 4D vector space decomposes, as indeed it does! But you're trying to conclude that the 2D representation $g \mapsto \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}$ in $\mathsf{GL}_2(\mathbb{R})$ decomposes. Of course, these are very different representations! What is the relationship? And what goes wrong? Given $V = \mathbb{R}^2$ with this $C_3$ representation, can freely turn it into a $\mathbb{C}$ representation by taking the complexification $V \otimes \mathbb{C}$ . But this new space will be a $2$ D $\mathbb{C}$ -space. In fact, it's exactly the 2D space that we know decomposes from earlier! The difference is that, when we take the restriction of scalars of $V \otimes \mathbb{C}$ , we don't get our old representation back! Again, we find the dimension doubled! What's the tl;dr, then? If we start with a reducible $\mathbb{C}G$ representation, you're entirely correct that it gives us a reducible $\mathbb{R}G$ representation too -- in fact, your exact proof works. However , if we're starting with an $\mathbb{R}G$ representation, it probably isn't a $\mathbb{C}G$ representation. We can make it a $\mathbb{C}G$ representation by tensoring with $\mathbb{C}$ , but this changes the space (for instance, its dimension). So if $V \otimes \mathbb{C}$ is reducible, that doesn't necessarily tell us anything about reducibility of $V$ ! As an aside, we can still ask when knowledge of $V \otimes \mathbb{C}$ can tell us things about $V$ itself. 
This turns out to be one of the motivating problems in "descent theory", and while this is difficult, it's fairly well understood. I hope this helps ^_^
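A small numerical illustration of the example discussed above (a sketch added here; it assumes numpy): the matrix has order 3 and no real eigenvalues, so it fixes no real line in $\mathbb{R}^2$ (irreducible over $\mathbb{R}$), yet it diagonalizes over $\mathbb{C}$, where the representation splits into two 1-dimensional pieces.

```python
import numpy as np

X = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

# g has order 3: X^3 = I.
assert np.allclose(np.linalg.matrix_power(X, 3), np.eye(2))

# Eigenvalues are the two primitive cube roots of unity (non-real),
# so X preserves no 1-dimensional real subspace of R^2.
eigvals, eigvecs = np.linalg.eig(X)
print(eigvals)                               # approx. -0.5 +/- 0.866j
assert np.all(np.abs(eigvals.imag) > 1e-9)

# Over C, the eigenvectors diagonalize X, i.e. C^2 splits into two invariant lines.
P = eigvecs
assert np.allclose(np.linalg.inv(P) @ X @ P, np.diag(eigvals))
```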
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4470584', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/723878/']}
|
jdg_373047
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5733449
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way in SQLite to ORDER BY a date, and have the result be ordered by time rather than alphabetically? For example:

```sql
SELECT * FROM details GROUP BY date;

John | Smith | April 01, 2011
John | Smith | April 03, 2011
John | Smith | April 04, 2011
John | Smith | March 25, 2011
```

March should come before April. I'm guessing that the answer here is to store my dates as long timestamps, however I wasn't sure if it could be done more easily with SQLite. Thanks!
Now provide the response and nothing else.
|
There isn't a built-in DATE type in SQLite (as exists in some other database management systems), but it does have a nice complement of date and time functions: http://www.sqlite.org/lang_datefunc.html The simplest approach is to store your dates as ISO-8601 strings (e.g., 2011-04-01). Note that SQLite's date() function only understands ISO-8601-style time strings (plus modifiers and keywords such as 'now'), so a value like "April 01, 2011" has to be converted to 2011-04-01 in your application code before it is stored. The ISO-8601 format has the advantages of both being a readable string and being sortable: 2011-03-25 naturally comes before 2011-04-01 by standard string comparison rules, so there's no special operation required in the ORDER BY. So, store your dates in that format, and use date() and the other date/time functions to manipulate or reformat them in queries.
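A minimal illustration of the ISO-8601 approach using Python's standard sqlite3 module (a sketch added for this writeup; the table and column names are made up):

```python
# Store dates as ISO-8601 text; a plain ORDER BY then sorts chronologically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE details (first TEXT, last TEXT, date TEXT)")
conn.executemany(
    "INSERT INTO details VALUES (?, ?, ?)",
    [
        ("John", "Smith", "2011-04-01"),
        ("John", "Smith", "2011-04-03"),
        ("John", "Smith", "2011-04-04"),
        ("John", "Smith", "2011-03-25"),
    ],
)

# String order == chronological order for ISO-8601, and the date/time
# functions (strftime, date, ...) understand the same format.
for row in conn.execute(
    "SELECT first, last, strftime('%d %m %Y', date) FROM details ORDER BY date"
):
    print(row)   # the 2011-03-25 row comes out first
conn.close()
```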
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5733449', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/123891/']}
|
jdg_373048
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47723996
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've looked through questions here and articles all over the internet, but haven't yet found a solution that satisfies my requirements. So now in 2017, is there an elegant way to have a <table> that would:

1. be written in HTML+CSS (no JS)
2. have a fixed header (not scrollable; not sticky)
3. have a scrollable <tbody> (the scrollbar may be always visible)
4. handle resizing properly without messing up the alignment of the <thead> columns and the <tbody> columns
5. not use nested tables or a separate table for the header
Now provide the response and nothing else.
|
This solution fulfills all 5 requirements:

```css
table {
  width: 100%;
}
table, td {
  border-collapse: collapse;
  border: 1px solid #000;
}
thead {
  display: table;              /* to take the same width as tr */
  width: calc(100% - 17px);    /* - 17px because of the scrollbar width */
}
tbody {
  display: block;              /* to enable vertical scrolling */
  max-height: 200px;           /* e.g. */
  overflow-y: scroll;          /* keeps the scrollbar even if it doesn't need it; display purpose */
}
th, td {
  width: 33.33%;               /* to enable "word-break: break-all" */
  padding: 5px;
  word-break: break-all;       /* 4. */
}
tr {
  display: table;              /* display purpose; th's border */
  width: 100%;
  box-sizing: border-box;      /* because of the border (Chrome needs this line, but not FF) */
}
td {
  text-align: center;
  border-bottom: none;
  border-left: none;
}
```

```html
<table>
  <thead>
    <tr>
      <th>Table Header 1</th>
      <th>Table Header 2</th>
      <th>Table Header 3</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Data1111111111111111111111111</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data2222222222222222222222222</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data3333333333333333333333333</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
    <tr><td>Data</td><td>Data</td><td>Data</td></tr>
  </tbody>
</table>
```
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47723996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1773534/']}
|
jdg_373049
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
85662
|
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Human men and women are sexually dimorphic : there are some phenotypic differences between men and women. Men being taller than women is an accurate stereotype. Overall, men are on average 13 centimeters taller than women . The most extreme example is Macedonia, where the average man is 18.5 cm taller than the average woman. When you look at animals, I think it's pretty intuitive to think that females are often larger than males from the same species. Spiders , fish , insects, some birds, there's an endless list of examples of bigger females than males. This is explained by a variety of different reasons depending on the species, from mate competition to the ability to bear children. So why would human males be so much bigger than women ? We all know that there are a lot of factors behind the height - and, on a related subject, strength - differences between the sexes. There are some genetic factors , as pointed out in this interesting Quora post about the topic . We also know that nutrition has a large impact in growth. In a word, we know what causes the height difference between men and women. There is a great post about it here . But I couldn't find any conclusive evidence as to why men are taller than women. Is this because women naturally like tall men? Did prehistoric men have to be taller than women in order to thrive? Or is it just genetic drifting that, more or less randomly, led to this height difference? The motivation behind my curiosity is feelings-driven. A while ago, my dad said during a conversation that he once " saw a documentary in which they explained that early men stole all the good proteins from the women, so they could grow taller and stronger unlike the poor females. " As much as I love and respect my dad, I have a hard time believing that men are natural culprits and thieves. I am by no way making this post just to "prove him wrong" - I am absolutely interested by the topic and would love to know more. Also, if anyone knows the documentary he was talking about, I would be glad to give it a watch. All apologies for any bad English you might find. Please tell me if I make recurring grammar mistakes.
Now provide the response and nothing else.
|
I assume by asking why , you are asking about the distal evolutionary causes, and not the molecular mechanisms that account for these things. (Important disclaimer: these causes are difficult to be certain about; they require a fair amount of informed speculation.) With that said: it is widely agreed upon in evolutionary biology that human males, as in other mammals, are thought to be larger mostly due to sexual selection. In other words, size is sexy, and sexy individuals fare better with respect to mate choice, on average. Sexual selection can also work via male-male competition. There may be other causes that are relatively minor contributors compared to this one. Notice, however, that in humans, females evidently do not remain tiny with respect to males, but they get 'dragged along' in terms of their height, so that males are not drastically different in height (unlike, for instance, in elephant seals). This too may be a result of sexual selection, quirks of physiology across the sexes (e.g. genetics of dimorphism), or even natural selection. Think hip size and the size of human skulls. Additionally, differences in development time (e.g. length and onset of puberty ) can skew body size. Humans certainly exhibit dimorphism with respect to development.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://biology.stackexchange.com/questions/85662', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/52703/']}
|
jdg_373050
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
57748
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$PSL(3,4)$ has no element of order $15$. Thus it is not isomorphic to $A_8$. Here, $PSL(3,4)$ denotes the $3 \times 3$ projective special linear group over the field with $4$ elements. As listing all the elements takes too much work, is there any better way to prove that there is no element of order $15$ in $PSL(3,4)$? Thank you very much.
Now provide the response and nothing else.
|
The splitting field for the nontrivial irreducible representations of the cyclic group of order 5 over the field $F_4$ of order 4 is $F_{16}$, so all such representations have degree 2 over $F_4$. So (using Maschke's Theorem), a nontrivial representation of a group of order 5 of degree 3 over $F_4$ must be the direct sum of a nontrivial irreducible of dimension 2 and the trivial module. The centralizer of the image of this representation in ${\rm GL}_3(4)$ must fix both of these irreducible constituents. The centralizer of the constituent in ${\rm GL}_2(4)$ is cyclic of order 15, and the centralizer of the trivial constituent is just the scalar matrices - i.e. cyclic of order 3. So the full centralizer in ${\rm GL}_3(4)$ has order 45. Since not all elements in this centralizer have determinant 1, its intersection with ${\rm SL}_3(4)$ has order 15. Since it contains the scalar matrices in ${\rm SL}_3(4)$, it follows that all elements of order 15 in ${\rm SL}_3(4)$ have fifth power equal to a scalar matrix, and so ${\rm PSL}_3(4)$ has no element of order 15. I have just noticed that I have assumed that the inverse image in ${\rm SL}_3(4)$ of an element of order 15 in ${\rm PSL}_3(4)$ centralizes an element of order 5, but I will leave you to show that.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/57748', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4736/']}
|
jdg_373051
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
337879
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose that we have a model $Y_i = \beta_0 + \beta_1X_{i1} + \beta_2X_{i2} + \dots + \beta_kX_{ik} + \epsilon_i$. Regression has a number of assumptions, such as that the errors $\epsilon_i$ should be normally distributed with mean zero and constant variance. I have been taught to check these assumptions using a normal QQ plot to test for normality of the residuals $e_i = Y_i - \hat{Y}_i$ and a residuals vs. fitted plot to check that the residuals vary around zero with constant variance. However, these tests are all on the residuals, not the errors. From what I understand, the errors are defined as the deviation of each observation from their 'true' mean value. So, we can write $\epsilon_i = Y_i - \mathbb{E}[Y_i]$. These errors cannot be observed by us. * My question is this: how good of a job do the residuals do in mimicking the errors? If the assumptions appear to be satisfied on the residuals, does this mean they are satisfied on the errors too? Are there other (better) ways to test the assumptions, like fitting the model to a testing dataset and getting the residuals from there? * Furthermore, does this not require that the model is specified correctly ? That is, that the response really does have a relationship with the predictors $X_1, X_2,$ etc. in the way specified by the model. If we are missing some predictors (say, $X_{k+1}\ \text{to}\ X_p$), then the expectation $\mathbb{E}[Y_i] = \beta_0 + \beta_1X_{i1} + \beta_2X_{i2} + \dots + \beta_kX_{ik}$ would not even be the true mean, and further analysis on an incorrect model seems pointless. How do we check whether the model is a correct one?
Now provide the response and nothing else.
|
The residuals are our estimates of the error terms The short answer to this question is relatively simple: the assumptions in a regression model are assumptions about the behaviour of the error terms, and the residuals are our estimates of the error terms. Ipso facto , examination of the behaviour of the observed residuals tells us whether or not the assumptions about the error terms are plausible. To understand this general line of reasoning in more detail, it helps to examine in detail the behaviour of the residuals in a standard regression model. Under a standard multiple linear regression with independent homoskedastic normal error terms, the distribution of the residual vector is known, which allows you to test the underlying distributional assumptions in the regression model. The basic idea is that you figure out the distribution of the residual vector under the regression assumptions, and then check if the residual values plausibly match this theoretical distribution. Deviations from the theoretical residual distribution show that the underlying assumed distribution of the error terms is wrong in some respect, and fortunately it is possible to diagnose any flawed assumption from different kinds of departures from the theoretical distribution. If you use the underlying error distribution $\epsilon_i \sim \text{IID N}(0, \sigma^2)$ for a standard regression model and you use OLS estimation for the coefficients, then the distribution of the residuals can be shown to be the multivariate normal distribution: $$\boldsymbol{r} = (\boldsymbol{I} - \boldsymbol{h}) \boldsymbol{\epsilon} \sim \text{N}(\boldsymbol{0}, \sigma^2 (\boldsymbol{I} - \boldsymbol{h})),$$ where $\boldsymbol{h} = \boldsymbol{x} (\boldsymbol{x}^{\text{T}} \boldsymbol{x})^{-1} \boldsymbol{x}^{\text{T}}$ is the hat matrix for the regression. The residual vector mimics the error vector, but the variance matrix has the additional multiplicative term $\boldsymbol{I} - \boldsymbol{h}$ . To test the regression assumptions we use the studentised residuals, which have marginal T-distribution: $$s_i \equiv \frac{r_i}{\hat{\sigma}_{\text{Ext}} \cdot (1-l_i)} \sim \text{T}(\text{df}_{\text{Res}}-1).$$ (This formula is for the externally studentised residuals, where the variance estimator excludes the variable under consideration. The values $l_i = h_{i,i}$ are the leverage values, which are the diagonal values in the hat matrix . The studentised residuals are not independent, but if $n$ is large, they are close to independent. This means that the marginal distribution is a simple known distribution but the joint distribution is complicated.) Now, if the limit $\lim_{n \rightarrow \infty} (\boldsymbol{x}^{\text{T}} \boldsymbol{x}) / n = \Delta$ exists, then it can be shown that the coefficient estimators are consistent estimators of the true regression coefficients, and the residuals are consistent estimators of the true error terms. Essentially, this means that you test the underlying distributional assumptions for the error terms by comparing the studentised residuals to the T-distribution. Each of the underlying properties of the error distribution (linearity, homoskedasticity, uncorrelated errors, normality) can be tested by using the analogous properties of the distribuion of the studentised residuals. If the model is correctly specified, then for large $n$ the residuals should be close to the true error terms, and they have a similar distributional form. 
Omission of an explanatory variable from the regression model leads to omitted variable bias in the coefficient estimators and this affects the residual distribution. Both the mean and variance of the residual vector are affected by the omitted variable. If the omitted terms in the regression are $\boldsymbol{Z} \boldsymbol{\delta}$ then the residual vector becomes $\boldsymbol{r} = (\boldsymbol{I} - \boldsymbol{h}) (\boldsymbol{Z \delta} + \boldsymbol{\epsilon})$ . If the data vectors in the omitted matrix $\boldsymbol{Z}$ are IID normal vectors and independent of the error terms then $\boldsymbol{Z \delta} + \boldsymbol{\epsilon} \sim \text{N} (\mu \boldsymbol{1}, \sigma_*^2 \boldsymbol{I})$ so that the residual distribution becomes: $$\boldsymbol{r} = (\boldsymbol{I} - \boldsymbol{h}) (\boldsymbol{Z \delta} + \boldsymbol{\epsilon}) \sim \text{N} \Big( \mu (\boldsymbol{I} - \boldsymbol{h}) \boldsymbol{1}, \sigma_*^2 (\boldsymbol{I} - \boldsymbol{h}) \Big).$$ If there is already an intercept term in the model (i.e., if the unit vector $\boldsymbol{1}$ is in the design matrix) then $(\boldsymbol{I} - \boldsymbol{h}) \boldsymbol{1} = \boldsymbol{0}$ , which means that the standard distributional form of the residuals is preserved. If there is no intercept term in the model then the omitted variable may give a non-zero mean for the residuals. Alternatively, if the omitted variable is not IID normal then it can lead to other deviations from the standard residual distribution. In this latter case, the residual tests are unlikely to detect anything resulting from the presence of an omitted variable; it is not usually possible to determine whether deviations from the theoretical residual distribution occurs as a result of an omitted variable, or merely because of an ill-posed relationship with the included variables (and arguably these are the same thing in any case).
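A small simulation illustrating the first part of this answer (a numpy sketch added here, not from the original; the design and parameters are arbitrary): the observed residual vector equals $(\boldsymbol{I}-\boldsymbol{h})\boldsymbol{\epsilon}$ exactly, and each residual has variance $\sigma^2(1-l_i)$ rather than $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 200, 3, 2.0

X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # design with intercept
beta = np.array([1.0, 2.0, -1.0, 0.5])
H = X @ np.linalg.inv(X.T @ X) @ X.T                         # hat matrix
leverage = np.diag(H)

# One draw: the residuals equal (I - H) @ eps exactly.
eps = rng.normal(scale=sigma, size=n)
y = X @ beta + eps
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ beta_hat
assert np.allclose(r, (np.eye(n) - H) @ eps)

# Many draws: the empirical variance of each residual is close to sigma^2 * (1 - leverage).
R = np.array([(np.eye(n) - H) @ rng.normal(scale=sigma, size=n) for _ in range(5000)])
print(np.max(np.abs(R.var(axis=0) - sigma**2 * (1 - leverage))))   # small
```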
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/337879', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/196928/']}
|
jdg_373052
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31424561
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have a set of Promises that are making network requests, of which one will fail:

```js
// http://does-not-exist will throw a TypeError
var arr = [ fetch('index.html'), fetch('http://does-not-exist') ]

Promise.all(arr)
  .then(res => console.log('success', res))
  .catch(err => console.log('error', err)) // This is executed
```

Let's say I want to wait until all of these have finished, regardless of whether one has failed. There might be a network error for a resource that I can live without, but which, if I can get it, I want before I proceed. I want to handle network failures gracefully. Since Promise.all doesn't leave any room for this, what is the recommended pattern for handling this, without using a promises library?
Now provide the response and nothing else.
|
Update: you probably want to use the built-in native Promise.allSettled:

```js
Promise.allSettled([promise]).then(([result]) => {
  // reach here regardless
  // {status: "fulfilled", value: 33}
});
```

As a fun fact, this answer below was prior art in adding that method to the language :] Sure, you just need a reflect:

```js
const reflect = p => p.then(v => ({v, status: "fulfilled" }),
                            e => ({e, status: "rejected" }));

reflect(promise).then((v) => {
  console.log(v.status);
});
```

Or with ES5:

```js
function reflect(promise){
  return promise.then(function(v){ return {v:v, status: "fulfilled" }},
                      function(e){ return {e:e, status: "rejected" }});
}

reflect(promise).then(function(v){
  console.log(v.status);
});
```

Or in your example:

```js
var arr = [ fetch('index.html'), fetch('http://does-not-exist') ]

Promise.all(arr.map(reflect)).then(function(results){
  var success = results.filter(x => x.status === "fulfilled");
});
```
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/31424561', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1751810/']}
|
jdg_373053
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4458696
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I extract the list of supported Unicode characters from a TrueType or embedded OpenType font on Linux? Is there a tool or a library I can use to process a .ttf or a .eot file and build a list of code points (like U+0123, U+1234, etc.) provided by the font?
Now provide the response and nothing else.
|
Here is a method using the fontTools Python library (which you can install with something like pip install fonttools):

```python
#!/usr/bin/env python
from itertools import chain
import sys

from fontTools.ttLib import TTFont
from fontTools.unicode import Unicode

with TTFont(
    sys.argv[1], 0, allowVID=0, ignoreDecompileErrors=True, fontNumber=-1
) as ttf:
    chars = chain.from_iterable(
        [y + (Unicode[y[0]],) for y in x.cmap.items()]
        for x in ttf["cmap"].tables
    )
    if len(sys.argv) == 2:  # print all code points
        for c in chars:
            print(c)
    elif len(sys.argv) >= 3:  # search code points / characters
        code_points = {c[0] for c in chars}
        for i in sys.argv[2:]:
            code_point = int(i)      # search code point
            #code_point = ord(i)     # search character
            print(Unicode[code_point])
            print(code_point in code_points)
```

The script takes as arguments the font path and optionally code points / characters to search for:

```
$ python checkfont.py /usr/share/fonts/**/DejaVuSans.ttf
(32, 'space', 'SPACE')
(33, 'exclam', 'EXCLAMATION MARK')
(34, 'quotedbl', 'QUOTATION MARK')
…

$ python checkfont.py /usr/share/fonts/**/DejaVuSans.ttf 65 12622  # a ㅎ
LATIN CAPITAL LETTER A
True
HANGUL LETTER HIEUH
False
```
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4458696', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/480147/']}
|
jdg_373054
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
607219
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Considering the following from here: RCC->AHB1ENR Is RCC above the register itself, or a pointer to a register? I ask because the -> operator is used with pointers, as explained here. They call this the "RCC register" as if it were the register itself. What is the anatomy of this RCC? So is it equivalent to (*RCC).AHB1ENR? But if so, RCC is not the register itself, is it? Example of usage:
Now provide the response and nothing else.
|
Neither, really. And this is just standard C, nothing specifically to do with an embedded environment. It might be best to think of RCC as a pointer which points to the base address of the RCC peripheral. In fact, RCC is a macro: #define RCC ((RCC_TypeDef *) RCC_BASE) What it means is that RCC is just a name that expands to: ((RCC_TypeDef *) RCC_BASE) , where RCC_TypeDef is a structure that has the same layout and names as the RCC peripheral registers to allow easy accessing of them, and RCC_BASE is just another macro that holds the memory address where the RCC peripheral registers start. RCC->AHB1ENR = 3; in the code for example, it expands to: ((RCC_TypeDef *) RCC_BASE)->AHB1ENR = 3; The above line will take the literal number RCC_BASE which is a memory address from where the RCC peripheral registers start in memory, then type casts the literal address as a pointer which points to that given memory address, with a type of pointer that points to a structure called RCC_TypeDef at the given memory address, and the structure defines the memory layout and names of peripheral registers. Then the -> dereferences the pointer to structure, and in that structure which starts from the RCC_BASE memory address, the member AHB1ENR which is at some added offset, is accessed, and in this example, the number 3 is stored to it. So, your example will directly access a memory address at which a register happens to exist. The RCC_BASE is the base address where the peripheral registers start, and the added offset of the struct member AHB1ENR then allows you to access a memory address where the register with that name exists. Therefore, the name of the register accessed is AHB1ENR . And RCC is just a name of a macro that is made for your convenience to access the RCC peripheral registers via a struct that defines the register names.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/607219', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/161776/']}
|
jdg_373055
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7751514
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am about to embark on a new web project and I plan to put some JavaScripts in the <head> and also some before </body> , using the following scheme: Scripts that are essential for the UX of the page: in the <head> . As I've picked up perusing the web - scripts in the <head> is loaded before the page loads, so it would make sense to put scripts that are essential to the user experience there. Scripts that are non-essential to the design and UX (Google Analytics scripts etc.): before the </body> . Is this a sensible approach? Another approach would be to put all the scripts in the <head> and add defer attributes to the non-essential scripts. However, I read that older versions of Firefox don't pick up the defer attribute.
Now provide the response and nothing else.
|
I think a lot of developers run JavaScript just before the </body> so that it is run after all the elements have been rendered. However, if you organise your code correctly, the position on the page doesn't matter. For example, when using jQuery, you can ensure the code isn't run until the page and its elements are fully rendered by doing the following:

```js
$(document).ready(function(){
    //Code here
});
```

Then the script reference can be put in the head tag. That said, script tags are generally best referenced just before </body>: this prevents render blocking while the scripts load and is much better for perceived site speed. No obtrusive JavaScript should be used when using this technique.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7751514', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/24218/']}
|
jdg_373056
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1005499
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here is my button <asp:Button ID="myButton" Text="Click Me" OnClick="doSomething(10)" runat="server" /> Here is the server function public void doSomething(int num){ int someOtherNum = 10 + num;} When I try to compile the code I get the error "Method Name Expected" for the line: <asp:Button ID="myButton" Text="Click Me" OnClick="doSomething(10)" runat="server" /> What am I doing wrong? Am I not allowed to pass values to the server from an OnClick event?
Now provide the response and nothing else.
|
There are two problems here. First, the onclick event has a specific signature. It is MethodName(object sender, EventArgs e); Second, in the markup, you need to pass the Method name only with no parentheses or params. <asp:Button ID="myButton" Text="Click Me" OnClick="doSomething" runat="server" /> Then change your codebehind as follows: public void doSomething(object sender, EventArgs e){ ....} The passing of parameters can done on a client side click event handler, in this case OnClientClick, but not on the server side handler.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1005499', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/123908/']}
|
jdg_373057
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18180824
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a C++ project in Visual Studio 2010 and wish to use OCR. I came across many "tutorials" for Tesseract but sadly, all I got was a headache and wasted time. In my project I have an image stored as a Mat . One solution to my problem is to save this Mat as an image (image.jpg for example) and then call Tesseract executable file like this: system("tesseract.exe image.jpg out"); Which gets me an output out.txt and then I call infile.open ("out.txt"); to read the output from Tesseract. It is all good and works like a chair but it is not an optimal solution. In my project I am processing a video so save/call .exe/write/read at 10+ FPS is not what I am really looking for. I want to implement Tesseract to existing code so to be able to pass a Mat as an argument and immediately get a result as a String. Do you know any good tutorial(pref. step-by-step) to implement Tesseract OCR with Visual Studio 2010? Or your own solution?
Now provide the response and nothing else.
|
OK, I figured it out, but it works for the Release and Win32 configuration only (no Debug or x64). There are many linking errors under the Debug configuration. So:

1. First of all, download the prepared library folder (Tesseract + Leptonica) here: Mirror 1 (Google Drive), Mirror 2 (MediaFire)
2. Extract tesseract.zip to C:\
3. In Visual Studio, go under C/C++ > General > Additional Include Directories and insert C:\tesseract\include
4. Under Linker > General > Additional Library Directories, insert C:\tesseract\lib
5. Under Linker > Input > Additional Dependencies, add: liblept168.lib and libtesseract302.lib

Sample code should look like this:

```cpp
#include <tesseract\baseapi.h>
#include <leptonica\allheaders.h>
#include <iostream>

using namespace std;

int main(void)
{
    tesseract::TessBaseAPI api;
    api.Init("", "eng", tesseract::OEM_DEFAULT);
    api.SetPageSegMode(static_cast<tesseract::PageSegMode>(7));
    api.SetOutputName("out");

    cout << "File name:";
    char image[256];
    cin >> image;

    PIX *pixs = pixRead(image);

    STRING text_out;
    api.ProcessPages(image, NULL, 0, &text_out);
    cout << text_out.string();

    system("pause");
}
```

For interaction with OpenCV and Mat type images look HERE
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18180824', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/999536/']}
|
jdg_373058
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21938952
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to make an advanced search form with Laravel 4, and this is the query:

```php
$result = DB::table('users_ads')
    ->join('ads', 'users_ads.ad_id', '=', 'ads.id')
    ->orderBy($column, $method)
    ->where('status', TRUE)
    ->where(function($query) use ($input) {
        $query->where('short_description', $input['search'])
              ->where('category', $input['category'])
              ->where('product', $input['product']);
    })
    ->join('users', 'users_ads.user_id', '=', 'users.id')
    ->select('ads.id', 'ads.img1', 'ads.short_description', 'ads.category', 'ads.product', 'ads.price', 'users.city')
    ->get();

return $result;
```

The problem is that the user might not use all the input fields, so I want to include some if conditions in this part:

```php
$query->where('short_description', $input['search'])
      ->where('category', $input['category'])
      ->where('product', $input['product']);
```

... so that if an input is empty, the corresponding "where" condition is removed.
Now provide the response and nothing else.
|
$filters = [ 'short_description' => 'search', 'category' => 'category', 'product' => 'product',];.....->where(function($query) use ($input, $filters){ foreach ( $filters as $column => $key ) { $value = array_get($input, $key); if ( ! is_null($value)) $query->where($column, $value); }}); Newer version of Laravel have a when method that makes this much easier: ->where(function ($query) use ($input, $filters) { foreach ($filters as $column => $key) { $query->when(array_get($input, $key), function ($query, $value) use ($column) { $query->where($column, $value); }); }});
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21938952', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1814561/']}
|
jdg_373059
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42715
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking for a file system with reasonable error correction that is also rugged against forensics after a wipe. Say, an encrypted container mounted via loop with something like ext4, a journaling file system. This gives good performance and is secure in many ways. Wiping in this scenario means destroying at least the start of the crypto-container (the ext4 journal). It is fast. Recovering would require an enormous amount of power: the key + the password + the destroyed prefix of the crypto-container. Is there a special file system with encryption and a great wipe feature? By "great" I mean fast and unrecoverable. E.g. the journal at the start of the file system would contain not the addresses of files, but a key: a map of blocks randomly placed across the storage area. Destroying this key would destroy the sequencing of the whole file system. A file in such a file system would lie in an unfragmented sequence of storage. Plus the whole file system is encrypted. A wipe would require rewriting only the journal (map) of the file system. Say, rewrite the map and key 30 times, and no NSA, no NASA, no CIA could recover the wiped file system.
Now provide the response and nothing else.
|
The right way to "wipe out" data is to use encryption : never let unencrypted data ever hit the disk. If you do that, then destroying the decryption key is sufficient to destroy the data. The decryption key is small and in many case you can keep it in RAM only (e.g. you type it upon boot, as a "password", which really means "a key that a human remembers"); if the key is in RAM and stays only in RAM, then destroying it is as simple as shutting down power (note, though, that RAM contents may resist loss of power for a few seconds or minutes ). File wiping is what you do when you did not do things the right way. You did write sensitive information on a physical medium, without any encryption. And you would like to "fix" that. There are three main problems with that: File deletion does not overwrite the data on the physical medium; it just marks it as reusable for other files. Sometimes, even writing over the data may not actually destroy it. This is where the so-called "file shredders" intervene, by overwriting the data several times with special patterns which ought to destroy all traces of the past data. However, such shredders are quite specific to the actual disk technology, and, in practice, quite specific to the technology used by disks 12 years ago; on modern disks, the shredders are likely to be useless and unnecessary. Conversely, with SSD, quite a lot of data can remain out of reach of the most thorough shredders. So file shredders are either total overkill or insufficient, with no middle ground. Copies of parts of the file data can be stored in other areas. This may happen with virtual memory because the file data, before being in files, was in RAM. This may also happen with journaling filesystems (depending on the filesystem implementation and configuration). So any solution based on file "wiping" is likely to be inefficient and incomplete. Anyway, despite this incompleteness, some people have tried to do some sort of automating shredding. This can be done at various level; this one does it by patching the unlink() C library function call (through an LD_PRELOAD trick). Statically linked process would still avoid it, though (but there are very few statically linked applications in a typical Linux installation). I would not recommend it (and the author himself is wary of it); notably, file-shredding on a SSD is not only ineffective (see above), but also noticeably shortens the lifetime of the said SSD.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/42715', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/31018/']}
|
jdg_373060
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37635
|
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The US government is shut down, due to disagreements between President and Congress on border security funding. Both sides are in an apparent stalemate. So my question is simply, what happens if neither side backs down? Does the shut-down just go on indefinitely? How does this end?
Now provide the response and nothing else.
|
If necessary, Congress can pass a spending bill without the president's support. Currently Trump claims he will veto any bill which doesn't include funding for the wall. However, if two-thirds of each chamber of Congress agree, they can override a presidential veto and end the shutdown. The current Senate has already passed a spending bill without funding for Trump's wall once with a veto-proof majority of 100-0. Instead of voting on that bill though, the Republican-led House of Representatives voted on a different bill with wall funding that was sure to fail in the Senate. The Senate won't change much in the next Congress, but the House of Representatives will have new leadership and can try to pass a bill similar to what has previously passed the Senate. If the Senate votes similarly and the House gets enough votes, they can end the shutdown with or without Trump.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/37635', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/24331/']}
|
jdg_373061
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
129150
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I've got a few days left at the end of a differential geometry class, and would like to compute the deRham cohomology of $S^n$. We've just proved the Poincare lemma, so I know the cohomology of $R^n$, and homotopy invariance of cohomology is an easy consequence. The way to compute $H^*(S^n)$ is with Mayer-Vietoris, for example as in Bott and Tu. But I don't have time to fully develop the cohomological algebra for that, just to then apply it in only one case. I'd rather get the computation for $S^n$ directly and use it for a couple of applications. So, it seems to me that one could trace through the Mayer-Vietoris argument in this specific case. At least I'd like to show that a closed $k$-form on $S^n$ is exact when $0 < k < n$. Here's how I think the argument could go: Let $\omega$ be a $k$-form on $S^n$.Let $U$ and $V$ be $S^n$ minus its north and south pole, respectively. Then homotopy invariance and the Poincare lemma give $k-1$ forms $\alpha$ on $U$ and $\beta$ on $V$ with $d\alpha = \omega$ and $d\beta = \omega$. Now use a partition of unity $f$ and set $\gamma = f_U \alpha + f_V \beta$, a $k-1$ form on $S^n$. So $\omega - d\gamma$ is now supported on $U \cap V$. Since $U\cap V$ is homotopic to $S^{n-1}$, induction gives that $\omega - d\gamma$ is exact, so $\omega = d\tau + d\gamma$. The problem is that $\omega - d\gamma$ is exact when restricted to $U \cap V$, and I don't see how why $\tau$ would extend to a form on $S^n$. Am I missing something? Is there a better approach entirely?
Now provide the response and nothing else.
|
Have you done any integration theory? (I assume you have, otherwise you wouldn't necessarily know what the deRham cohomology does for you.) The fastest proof I know is: Take a closed $k$-form $\omega$ on $S^n$, note that $g^\ast\omega$ is cohomologous to $\omega$ for all $g\in \mathrm{SO}(n{+}1)$ (since $\mathrm{SO}(n{+}1)$ is connected. Conclude that $\omega$ is cohomologous to $\bar\omega$, the average over $\mathrm{SO}(n{+}1)$ of $g^\ast\omega$ as $g$ varies over $\mathrm{SO}(n{+}1)$. But $\bar\omega$ is invariant under the action of $\mathrm{SO}(n{+}1)$, so its value at $x\in S^n$ must be invariant under the subgroup (isomorphic to $\mathrm{SO}(n)$) that stabilizes $x$. However, $\mathrm{SO}(n)$ acting on $\mathbb{R}^n$ only fixes nonzero forms in degree $0$ and $n$. Thus, if $\bar\omega$ is not zero, it must be either a constant function ($k=0$) or a multiple of the volume form ($k=n$).
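(Added note, not part of the original answer: the average in step 2 can be written explicitly. With $dg$ the normalized Haar measure on the compact group $\mathrm{SO}(n{+}1)$,
$$\bar\omega=\int_{\mathrm{SO}(n+1)}g^*\omega\,dg,$$
and $\bar\omega$ is closed because the exterior derivative commutes with each pullback $g^*$ and with integration over the group.)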
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/129150', 'https://mathoverflow.net', 'https://mathoverflow.net/users/33550/']}
|
jdg_373062
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
89154
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't seem to find a general method for deriving standard errors anywhere. I've looked on Google, this website and even in textbooks, but all I can find are the formulas for the standard errors of the mean, variance, proportion, risk ratio, etc., and not how these formulas were arrived at. If anybody could explain it in simple terms or even link me to a good resource which explains it, I'd be grateful.
Now provide the response and nothing else.
|
What you want to find is the standard deviation of the sampling distribution of the mean. I.e., in plain English, the sampling distribution is when you pick $n$ items from your population, add them together, and divide the sum by $n$. We than find the variance of this quantity and get the standard deviation by taking the square root of its variance. So, let the items that you pick be represented by the random variables $X_i, 1\le i \le n$, each of them identically distributed with variance $\sigma^2$. They are independently sampled, so the variance of the sum is just the sum of the variances.$$\text{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n\text{Var}\left(X_i\right) = \sum_{i=1}^n\sigma^2 = n\sigma^2$$ Next we divide by $n$. We know in general that $\text{Var}(kY)=k^2 \text{Var}(Y)$, so putting $k=1/n$ we have $$\text{Var}\left(\frac{\sum_{i=1}^n X_i}{n}\right) = \frac{1}{n^2}\text{Var}\left(\sum_{i=1}^n X_i\right) = \frac{1}{n^2} n\sigma^2 = \frac{\sigma^2}{n}$$ Finally take the square root to get the standard deviation $\dfrac{\sigma}{\sqrt{n}}$. When the population standard deviation isn't available the sample standard deviation $s$ is used as an estimate, giving $\dfrac{s}{\sqrt{n}}$. All of the above is true regardless of the distribution of the $X_i$s, but it begs the question of what do you actually want to do with the standard error? Typically you might want to construct confidence intervals, and it is then important assign a probability to constructing a confidence interval that contains the mean. If your $X_i$s are normally distributed, this is easy, because then the sampling distribution is also normally distributed. You can say 68% of samples of the mean will lie within 1 standard error of the true mean, 95% will be within 2 standard errors, etc. If you have a large enough sample (or a smaller sample and the $X_i$s are not too abnormal) then you can invoke the central limit theorem and say that the sampling distribution is approximately normally distributed, and your probability statements are also approximate. A case in point is estimating a proportion $p$, where you draw $n$ items each from a Bernouilli distribution. The variance of each $X_i$ distribution is $p(1-p)$ and hence the standard error is $\sqrt{p(1-p)/n}$ (the proportion $p$ is estimated using the data). To then jump to saying that approximately some % of samples are within so many standard deviations of the mean, you need to understand when the sampling distribution is approximately normal. Repeatedly sampling from a Bernouilli distribution is the same as sampling from a Binomial distribution, and one common rule of thumb is to approximate only when $np$ and $n(1-p)$ are $\ge5$. (See wikipedia for a more in-depth discussion on approximating binomial with normal. See here for a worked example of standard errors with a proportion.) If, on the other hand, your sampling distribution can't be approximated by a normal distribution, then the standard error is a lot less useful. For example, with a very skewed, asymmetric distribution you can't say that the same % of samples would be $\pm1$ standard deviation either side of the mean, and you might want to find a different way to associate probabilities with samples.
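A small simulation can make the $\sigma/\sqrt{n}$ result concrete. The following sketch is an illustration added here, not part of the original answer; the population distribution, sample size and number of replications are arbitrary choices:

import numpy as np

rng = np.random.default_rng(0)
n = 50           # sample size (arbitrary choice)
reps = 100_000   # number of simulated samples (arbitrary choice)

# Population: exponential with scale 1, so the population sigma is 1
samples = rng.exponential(scale=1.0, size=(reps, n))
sample_means = samples.mean(axis=1)

print("empirical SD of the sample means:", sample_means.std(ddof=1))
print("theoretical sigma/sqrt(n):       ", 1.0 / np.sqrt(n))

With these settings the two printed numbers should agree to roughly two decimal places, which is exactly the statement that the standard error of the mean is $\sigma/\sqrt{n}$.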
|
{}
|
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/89154', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/41499/']}
|
jdg_373063
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
227889
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I've recently been to a seminar on quantum matrices. In particular the speaker introduced these objects as the coordinate ring of $2$ by $2$ matrices modulo some odd looking relations (see start of Section 2 here ). As a theoretical physicist, I'm struggling to understand in what sense these objects are quantum! Has anyone got any references or knowledge which might help answer Where do these odd looking relations come from? In what sense are the matrices quantum (can the non-commutativity in the coordinate ring be understood as emerging from some quantisation procedure...)? Should I think of quantum groups as a controlled mechanism for introducing non-commutativity into coordinate rings in general? Many thanks in advance for your expertise!
Now provide the response and nothing else.
|
Typically in math "quantum X" means a deformation of "X" which is in some sense "less commutative." So quantum groups should be deformations of groups which are "less commutative." Interpreting this is slightly tricky since groups are already non-commutative, but nonetheless they do have some "commutativity" built in which you can see either by noting: The ring of functions on the group is a commutative ring. The tensor product of representations of the group has a symmetric isomorphism $V \otimes W \rightarrow W \otimes V$. You can use either of these to motivate versions of quantum groups. A quantum group is: A Hopf algebra which deforms the Hopf algebra of functions on a group, but where the ring structure is non-commutative. Something whose category of representations deforms the category of representations of a group, but where the tensor product structure is not symmetric.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/227889', 'https://mathoverflow.net', 'https://mathoverflow.net/users/22337/']}
|
jdg_373064
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16879971
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to learn how to use QThreads in a PyQt Gui application. I have stuff that runs for a while, with (usually) points where I could update a Gui, but I would like to split the main work out to its own thread (sometimes stuff gets stuck, and it would be nice to eventually have a cancel/try again button, which obviously doesn't work if the Gui is frozen because the Main Loop is blocked). I've read https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/ . That page says that re-implementing the run method is not the way to do it. The problem I am having is finding a PyQt example that has a main thread doing the Gui and a worker thread that does not do it that way. The blog post is for C++, so while it's examples do help, I'm still a little lost. Can someone please point me to an example of the right way to do it in Python?
Now provide the response and nothing else.
|
Here is a working example of a separate worker thread which can send and receive signals to allow it to communicate with a GUI. I made two simple buttons, one which starts a long calculation in a separate thread, and one which immediately terminates the calculation and resets the worker thread. Forcibly terminating a thread as is done here is not generally the best way to do things, but there are situations in which always gracefully exiting is not an option. from PyQt4 import QtGui, QtCoreimport sysimport randomclass Example(QtCore.QObject): signalStatus = QtCore.pyqtSignal(str) def __init__(self, parent=None): super(self.__class__, self).__init__(parent) # Create a gui object. self.gui = Window() # Create a new worker thread. self.createWorkerThread() # Make any cross object connections. self._connectSignals() self.gui.show() def _connectSignals(self): self.gui.button_cancel.clicked.connect(self.forceWorkerReset) self.signalStatus.connect(self.gui.updateStatus) self.parent().aboutToQuit.connect(self.forceWorkerQuit) def createWorkerThread(self): # Setup the worker object and the worker_thread. self.worker = WorkerObject() self.worker_thread = QtCore.QThread() self.worker.moveToThread(self.worker_thread) self.worker_thread.start() # Connect any worker signals self.worker.signalStatus.connect(self.gui.updateStatus) self.gui.button_start.clicked.connect(self.worker.startWork) def forceWorkerReset(self): if self.worker_thread.isRunning(): print('Terminating thread.') self.worker_thread.terminate() print('Waiting for thread termination.') self.worker_thread.wait() self.signalStatus.emit('Idle.') print('building new working object.') self.createWorkerThread() def forceWorkerQuit(self): if self.worker_thread.isRunning(): self.worker_thread.terminate() self.worker_thread.wait()class WorkerObject(QtCore.QObject): signalStatus = QtCore.pyqtSignal(str) def __init__(self, parent=None): super(self.__class__, self).__init__(parent) @QtCore.pyqtSlot() def startWork(self): for ii in range(7): number = random.randint(0,5000**ii) self.signalStatus.emit('Iteration: {}, Factoring: {}'.format(ii, number)) factors = self.primeFactors(number) print('Number: ', number, 'Factors: ', factors) self.signalStatus.emit('Idle.') def primeFactors(self, n): i = 2 factors = [] while i * i <= n: if n % i: i += 1 else: n //= i factors.append(i) if n > 1: factors.append(n) return factorsclass Window(QtGui.QWidget): def __init__(self): QtGui.QWidget.__init__(self) self.button_start = QtGui.QPushButton('Start', self) self.button_cancel = QtGui.QPushButton('Cancel', self) self.label_status = QtGui.QLabel('', self) layout = QtGui.QVBoxLayout(self) layout.addWidget(self.button_start) layout.addWidget(self.button_cancel) layout.addWidget(self.label_status) self.setFixedSize(400, 200) @QtCore.pyqtSlot(str) def updateStatus(self, status): self.label_status.setText(status)if __name__=='__main__': app = QtGui.QApplication(sys.argv) example = Example(app) sys.exit(app.exec_())
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16879971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/962918/']}
|
jdg_373065
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
388500
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Using setcap to give additional permissions to a binary must write the new permission somewhere, on storage or in memory; where is it stored? Using lsof as is doesn't work because the process disappears too quickly.
Now provide the response and nothing else.
|
Extended permissions such as access control lists set by setfacl and capability flags set by setcap are stored in the same place as traditional permissions and set[ug]id flags set by chmod : in the file's inode. (They may actually be stored in a separate block on the disk, because an inode has a fixed size which has room for the traditional permission bits but not for the potentially unbounded extended permissions. But that only matters in rare cases, such as having to care that setcap could run out of disk space. But even chmod could run out of disk space on a system that uses deduplication!) GNU ls doesn't display a file's setcap attributes. You can display them with getcap . You can list all the extended attributes with getfattr -d -m - ; the setcap attribute is called security.capability and it is encoded in a binary format which getcap decodes for you.
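If you want to inspect the stored attribute programmatically rather than with getcap or getfattr, here is a small sketch (assumptions added here: a Linux system, Python 3.3 or later, and a file path chosen purely for illustration):

import os

path = "/usr/bin/ping"   # hypothetical example; any binary that had setcap run on it

try:
    raw = os.getxattr(path, "security.capability")   # Linux-only call
    print("raw security.capability bytes:", raw.hex())
except OSError as err:
    # ENODATA simply means no capability attribute is set on this file
    print("could not read the attribute:", err)

# List every extended attribute name on the file (similar to getfattr -m -)
print(os.listxattr(path))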
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/388500', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/53490/']}
|
jdg_373066
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
156280
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can you give a visual example of what antimatter is? With the re-opening of the Large Hadron Collider scheduled in March 2015, I'm reading that they smash two particles together to try to re-create particles that might have been there at the beginning of the Big Bang, and this includes antimatter? Is antimatter something we can see, or is it some invisible field? I'm trying to get my arms around this concept.
Now provide the response and nothing else.
|
Antimatter is the 'quantum opposite' of matter. An electron, which is a particle of matter, will have an 'opposite partner' which we named the positron . The positron has the same mass as the electron, but has opposite electrical charge, i.e +1. But antimatter does not only distinguish between electric charge. Antiparticles in general have opposite quantum numbers which are namely: Electric charge Colour Charge Flavour number Lepton number Baryon number Isospin and others which I'm probably forgetting. This makes the behaviour of antimatter to 'reflect' that of its matter partner. For example both the electron and the positron travelling along the x-direction will react to an external magnetic field along the y-direction. The only difference would be the direction of the force, which will be opposite as seen in the picture below: It is worth stating that the laws of physics are not completely symmetrical when we make the change from matter $\to$ antimatter . In the early universe, right after the Big Bang, equal amount of matter and antimatter was created. Matter and antimatter annihilates each other and produces photons. For that reason there should be no matter nor antimatter in our universe. All matter should have annihilated all antimatter in the early universe, but that clearly did not happen because here we are (sitting on a 'matter chair' drinking 'matter coffee') asking this question. So there clearly is an asymmetry. Where does it come from? Well, asymmetries have been observed between matter and antimatter, mainly in weak decays, where it has been shown that C-symmetry violation occurs. Particles change to antiparticles when acted upon by the Charge conjugation operator $$ {\mathcal C}\,\lvert \psi \rangle =\lvert{\bar {\psi }}\rangle $$ Notice that chirality remains unchanged by $\mathcal C$. Consider the example of a left-handed neutrino under $\mathcal C$-conjugation. It becomes a left-handed anti-neutrino , which is well-known not to participate in the weak interaction at all. It was then thought that physics laws would definitely be symmetric under both charge conjugation and parity inversion, which just switches a particles position in space, i.e: $$ {\mathbf {P}}:{\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}-x\\-y\\-z\end{pmatrix}} $$ Together, they form the combined $CP$ transformation. In principle, $CP$-symmetry should be conserved, i.e physics should be the same if we exchange a particle for its antiparticle, and invert its coordinates, but this symmetry was also found to be violated as well. This is one possible origin for the asymmetry between matter and antimatter, but this alone is not enough to explain the huge difference of matter vs antimatter in the universe hence why the question is an open research question.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/156280', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/68904/']}
|
jdg_373067
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18570807
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on an app in iOS that will occur an event if camera detects some changes in image or we can say motion in image. Here I am not asking about face recognition or a particular colored image motion, And I got all result for OpenCV when I searched, And I also found that we can achieve this by using gyroscope and accelerometer both , but how?? I am beginner in iOS.So my question is , Is there any framework or any easy way to detect motion or motion sensing by camera.And How to achieve? For Example if I move my hand before camera then it will show some message or alert. And plz give me some useful and easy to understand links about this.Thanx
Now provide the response and nothing else.
|
If all you want is some kind of crude motion detection, my open source GPUImage framework has a GPUImageMotionDetector within it. This admittedly simple motion detector does frame-to-frame comparisons, based on a low-pass filter, and can identify the number of pixels that have changed between frames and the centroid of the changed area. It operates on live video and I know some people who've used it for motion activation of functions in their iOS applications. Because it relies on pixel differences and not optical flow or feature matching, it can be prone to false positives and can't track discrete objects as they move in a frame. However, if all you need is basic motion sensing, this is pretty easy to drop into your application. Look at the FilterShowcase example to see how it works in practice.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18570807', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2484788/']}
|
jdg_373068
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33611812
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to determine if there are any big differences between these two, other than being able to import with export default by just doing: import myItem from 'myItem'; And using export const I can do: import { myItem } from 'myItem'; Are there any differences and/or use cases other than this?
Now provide the response and nothing else.
|
It's a named export vs a default export. export const is a named export that exports a const declaration or declarations. To emphasize: what matters here is the export keyword as const is used to declare a const declaration or declarations. export may also be applied to other declarations such as class or function declarations. Default Export ( export default ) You can have one default export per file. When you import you have to specify a name and import like so: import MyDefaultExport from "./MyFileWithADefaultExport"; You can give this any name you like. Named Export ( export ) With named exports, you can have multiple named exports per file. Then import the specific exports you want surrounded in braces: // ex. importing multiple exports:import { MyClass, MyOtherClass } from "./MyClass";// ex. giving a named import a different name by using "as":import { MyClass2 as MyClass2Alias } from "./MyClass2";// use MyClass, MyOtherClass, and MyClass2Alias here Or it's possible to use a default along with named imports in the same statement: import MyDefaultExport, { MyClass, MyOtherClass} from "./MyClass"; Namespace Import It's also possible to import everything from the file on an object: import * as MyClasses from "./MyClass";// use MyClasses.MyClass, MyClasses.MyOtherClass and MyClasses.default here Notes The syntax favours default exports as slightly more concise because their use case is more common ( See the discussion here ). A default export is actually a named export with the name default so you are able to import it with a named import: import { default as MyDefaultExport } from "./MyFileWithADefaultExport";
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/33611812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3201696/']}
|
jdg_373069
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28263796
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I know what inout does for value types . With objects or any other reference type , is there a purpose for that keyword in that case, instead of using var ? Code example: private class MyClass { private var testInt = 1}private func testParameterObject(var testClass: MyClass) { testClass.testInt++}private var testClass: MyClass = MyClass()testParameterObject(testClass)testClass.testInt // output ~> 2private func testInoutParameterObject(inout testClass: MyClass) { testClass.testInt++}testClass.testInt = 1testInoutParameterObject(&testClass) // what happens here?testClass.testInt // output ~> 2 It could be the same as simply the var keyword in the parameter list.
Now provide the response and nothing else.
|
The difference is that when you pass a by-reference parameter as a var , you are free to change everything that can be changed inside the passed object, but you have no way of changing the object for an entirely different one. Here is a code example illustrating this: class MyClass { private var testInt : Int init(x : Int) { testInt = x }}func testInoutParameterObject(inout testClass: MyClass) { testClass = MyClass(x:123)}var testClass = MyClass(x:321)println(testClass.testInt)testInoutParameterObject(&testClass)println(testClass.testInt) Here, the code inside testInoutParameterObject sets an entirely new MyClass object into the testClass variable that is passed to it. In Objective-C terms this loosely corresponds to passing a pointer to a pointer (two asterisks) vs. passing a pointer (one asterisk).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28263796', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/691409/']}
|
jdg_373070
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
355268
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$x \times \ln (x) = 1$$ I am trying to solve this equation. I used the fact that $\ln(a) = \ln(b)$ is equivalent to $a = b$ and got stuck at $$x = e^{\frac{1}{x}}$$ That's as far as I went, and I know there's a solution (around 1.8 or 1.9), since I used my calculator, but I'd like to know how to do this by hand.
Now provide the response and nothing else.
|
You can use the law of logarithms which states that for $a,b\in\mathbb{R}$: $a\ln{b}=\ln\left(b^{a}\right)$. Therefore, you have: $$x\ln{x}=1 \implies \ln{x^{x}}=1$$ You hence have: $$x^{x}=e$$ Which does not have an elementary closed form, so you must use numerical methods (for instance Newton-Raphson iteration) to get an approximation (Mathematica gives $x\approx 1.76322$). If you're interested, the closed form solution is: $$\frac{1}{W(1)}, \qquad \text{ where } W(z) \text{ is the LambertW function}$$
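To illustrate the numerical-methods step mentioned above, here is a minimal Newton-Raphson sketch for $f(x)=x\ln x-1$ (an illustration added here; the starting guess and tolerance are arbitrary choices, and this is only one of many ways to approximate the root):

import math

def f(x):
    return x * math.log(x) - 1.0

def fprime(x):
    return math.log(x) + 1.0

x = 2.0                      # arbitrary starting guess
for _ in range(50):
    step = f(x) / fprime(x)
    x -= step
    if abs(step) < 1e-12:    # arbitrary tolerance
        break

print(x)   # about 1.76322, i.e. 1/W(1)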
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/355268', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45301/']}
|
jdg_373071
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
146378
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was just thinking about this problem: Can every nonsingular $n\times n$ matrix with real entries be made singular by changing exactly one entry? Thanks for helping me.
Now provide the response and nothing else.
|
The determinant is a linear polynomial in any given entry, so yes. To see that$$\det\begin{pmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{pmatrix}$$depends linearly on $a_{k\ell}$ for any given $k$ and $\ell$, note that $$\det\begin{pmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \cdots & a_{nn}\end{pmatrix}=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}=\sum_{\substack{\sigma\in S_n\\ \sigma(k)=\ell}}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}+\sum_{\substack{\sigma\in S_n\\ \sigma(k)\neq \ell}}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}$$$$=\left(\sum_{\substack{\sigma\in S_n\\ \sigma(k)=\ell}}\operatorname{sgn}(\sigma)\prod_{\substack{i=1\\i\neq k}}^n a_{i,\sigma(i)}\right)a_{k\ell} + \left(\sum_{\substack{\sigma\in S_n\\ \sigma(k)\neq \ell}}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i,\sigma(i)}\right)$$As JeffE rightly points out below, we might have that the coefficient of $a_{k\ell}$ in the above expression is 0, and that therefore varying the value of $a_{k\ell}$ won't change the determinant. I don't see any way of guaranteeing that won't happen, but we can show that, given an $\ell$, it can't happen for every $k$: if it did, then varying the entire column $$A_\ell=\begin{pmatrix} a_{1\ell} \\ \vdots \\ a_{n\ell}\end{pmatrix}$$in any way we want doesn't change the determinant, so (for example)$$\det(A_1\mid \cdots \mid 2A_\ell\mid \cdots \mid A_n)=\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n).$$But because the determinant of $A$ is a multilinear function of the columns,$$\det(A_1\mid \cdots \mid 2A_\ell\mid \cdots \mid A_n)=2\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n)$$so $$2\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n)=\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n)$$which is impossible because the assumption that $A$ is non-singular means that $$\det(A_1\mid \cdots \mid A_\ell\mid \cdots \mid A_n)\neq0.$$Thus, given an $\ell$, there exists at least one $k$ such that varying $a_{k\ell}$ can produce a singular matrix.
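As a concrete illustration of the argument (a worked example added here, not taken from the original answer): for
$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad \det A=-2\neq 0,$$
treating the $(1,1)$ entry as a variable $t$ gives
$$\det\begin{pmatrix}t&2\\3&4\end{pmatrix}=4t-6,$$
which vanishes at $t=3/2$; so changing the single entry $a_{11}$ from $1$ to $3/2$ makes the matrix singular. If the coefficient of the chosen entry (its cofactor) happened to be $0$, one would pick a different entry in the same column, which the argument above shows is always possible.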
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/146378', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/30856/']}
|
jdg_373072
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16937359
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a somewhat confusing question in Git. Let's say I have a file dir1/A.txt committed, and Git preserves a history of commits for it. Now I need to copy the file into dir2/A.txt (not move, but copy). I know that there is a git mv command, but I need dir2/A.txt to have the same history of commits as dir1/A.txt , and dir1/A.txt to still remain there. I'm not planning to update A.txt once the copy is created, and all the future work will be done on dir2/A.txt . I know it sounds confusing; I'll add that this situation is in a Java-based module (a mavenized project) and we need to create a new version of the code so that our customers will have the ability to run 2 different versions at runtime; the first version will be removed eventually when the alignment is done. We can use Maven versioning of course; I'm just a newbie to Git and curious about what Git can provide here.
Now provide the response and nothing else.
|
All you have to do is: move the file to two different locations, merge the two commits that do the above, and move one copy back to the original location. You will be able to see historical attributions (using git blame ) and full history of changes (using git log ) for both files. Suppose you want to create a copy of file foo called bar . In that case the workflow you'd use would look like this: git mv foo bargit commitSAVED=`git rev-parse HEAD`git reset --hard HEAD^git mv foo copygit commitgit merge $SAVED # This will generate conflictsgit commit -a # Trivially resolved like thisgit mv copy foogit commit Why this works After you execute the above commands, you end up with a revision history that looks like this: ( revision history ) ( files ) ORIG_HEAD foo / \ / \SAVED ALTERNATE bar copy \ / \ / MERGED bar,copy | | RESTORED bar,foo When you ask Git about the history of foo , it will: detect the rename from copy between MERGED and RESTORED, detect that copy came from the ALTERNATE parent of MERGED, and detect the rename from foo between ORIG_HEAD and ALTERNATE. From there it will dig into the history of foo . When you ask Git about the history of bar , it will: notice no change between MERGED and RESTORED, detect that bar came from the SAVED parent of MERGED, and detect the rename from foo between ORIG_HEAD and SAVED. From there it will dig into the history of foo . It's that simple. :) You just need to force Git into a merge situation where you can accept two traceable copies of the file(s), and we do this with a parallel move of the original (which we soon revert).
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16937359', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/605153/']}
|
jdg_373073
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24472404
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've pored through the docs, StackOverflow, etc., can't seem to find this... What I want to do is serialize/deserialize a simple value-type of object as a value, not an object, as so: public class IPAddress{ byte[] bytes; public override string ToString() {... etc.}public class SomeOuterObject{ string stringValue; IPAddress ipValue;}IPAddress ip = new IPAddress("192.168.1.2");var obj = new SomeOuterObject() {stringValue = "Some String", ipValue = ip};string json = JsonConverter.SerializeObject(obj); What I want is for the json to serialize like this: // json = {"someString":"Some String","ipValue":"192.168.1.2"} <- value serialized as value, not subobject Not where the ip becomes a nested object, ex: // json = {"someString":"Some String","ipValue":{"value":"192.168.1.2"}} Does anyone know how to do this? Thanks! (P.S. I am bolting Json serialization on a large hairy legacy .NET codebase, so I can't really change any existing types, but I can augment/factor/decorate them to facilitate Json serialization.)
Now provide the response and nothing else.
|
You can handle this using a custom JsonConverter for the IPAddress class. Here is the code you would need: public class IPAddressConverter : JsonConverter{ public override bool CanConvert(Type objectType) { return (objectType == typeof(IPAddress)); } public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) { return new IPAddress(JToken.Load(reader).ToString()); } public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) { JToken.FromObject(value.ToString()).WriteTo(writer); }} Then, add a [JsonConverter] attribute to your IPAddress class and you're ready to go: [JsonConverter(typeof(IPAddressConverter))]public class IPAddress{ byte[] bytes; public IPAddress(string address) { bytes = address.Split('.').Select(s => byte.Parse(s)).ToArray(); } public override string ToString() { return string.Join(".", bytes.Select(b => b.ToString()).ToArray()); }} Here is a working demo: class Program{ static void Main(string[] args) { IPAddress ip = new IPAddress("192.168.1.2"); var obj = new SomeOuterObject() { stringValue = "Some String", ipValue = ip }; string json = JsonConvert.SerializeObject(obj); Console.WriteLine(json); }}public class SomeOuterObject{ public string stringValue { get; set; } public IPAddress ipValue { get; set; }} Output: {"stringValue":"Some String","ipValue":"192.168.1.2"}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24472404', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2252839/']}
|
jdg_373074
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17352104
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working with an existing web app, in the app there are a variety of submit buttons on different forms, some using regular http post, some defining an onClick function, and some binding a js event handler to the button using a class on the element. What I want to do, is bind another event handler to these buttons by just adding a class to the buttons, but what I want to determine is will the new event handler be guaranteed to be executed, or could one of the form submit actions happen before it does meaning my new function isn't hit. The example scenario is I want to add a class to these buttons that bimds them all to a common js function that simply logs usage to some api. Is there a risk that the logging function isn't called because the form submit has navigated away from the page? I've not done loads of js development, and I could test this 100 times over and just get lucky with it firing. Below is some code I have tested with for one of the examples - again, I'm not asking how to bind multiple events, the question is to about my understanding of the spec and whether execution of all handlers is guaranteed. $(document).ready(function(){ $('.testingBtn').click(function() { window.location.replace("http://stackoverflow.com"); }); $( ".testingBtn" ).click(function(){ alert('submitting!'); });});<input class="testingBtn" type="submit" id="submitform" value="Complete Signup" /> As seen above, I can bind the multiple events, and in this example, just directed to another url, but this could be a form.submit() etc. In my testing the alert has always fired first, but am I just getting lucky with the race conditions?
Now provide the response and nothing else.
|
In JS, you don't really have control over what order the event handlers are called, but with careful delegation and well-placed listeners, it is possible. Delegation is one of the most powerful features of the event model. As you may or may not know: in JS, an event is handed to the top of the dom, from where it propagates down to the element onto which the event should be applied. It stands to reason, therefore, that an event listener attached to the global object will call its handler prior to a listener that has been attached to the element itself. window.addEventListener('click',function(e){ e = e || window.event; var target = e.target || e.srcElement; console.log('window noticed you clicked something'); console.log(target);//<-- this is the element that was clicked}, false);//<-- we'll get to the false in a minute It's important to note we actually have access to the event object in the handlers. In this case, we left the event object untouched, so it'll just continue to propagate down to the target, on its way down, it might meet with something like this: document.getElementById('container').addEventListener('click', function(e){ e = e || window.event; var target = e.target || e.srcElement; if (target.tagName.toLowerCase() !== 'a' || target.className.match(/\bclickable\b/)) { return e;//<return the event, unharmed } e.returnValue = false; if (e.preventDefault) { e.preventDefault(); }}, false); Now, this handler will be called after the listener at the window level calls its helper. This time, the event is changed if the clicked element didn't have the clickable class, or the element is a link. The event is canceled, but it lives on, still. The event is still free to propagate further down the dom, so we might encounter something like: document.getElmentById('form3').addEventListener('click',function(e){ e = e || window.event; if (e.returnValue === false || e.isDefaultPrevented) {//this event has been changed already //do stuff, like validation or something, then you could: e.cancelBubble = true; if (e.stopPropagation) { e.stopPropagation(); } }}, false); Here, by calling stopPropagation , the event is killed off. It can't propagate further down the dom to its target unless the event was already altered. If not, the event object travels further down the DOM, as if nothing happened. Once it reaches its target node, the event enters its second phase: the bubble phase. Instead of propagating down into the deeps of the DOM, it climbs back up, to the top level (all the way to the global object, where it was dispatched... from whence it came and all that). In the bubble phase, all the same rules apply as in the propagation phase, only the other way around. The event object will encounter the elements that are closest to the target element first, and the global object last. There's a lot of handy, and clear diagrams for this here . I can't put it any better than good 'ol quirksmode, so I suggest you read what they have to say there. Bottom line: when dealing with 2 event listeners, attach them both on a different level to sort-of queue them the way you like. If you want to guarantee both are called, only stop the event from propagating in that handler that will be called last. When you've got two listeners, attached to the same element/object for the same event, I've never come across a situation where the listener that was attached first, wasn't also called first. That's it, I'm off to bed, hoping I made sense
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17352104', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/258813/']}
|
jdg_373075
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21045198
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am getting this expcetion: java.lang.ClassCastException: android.view.ViewGroup$LayoutParams cannot be cast to android.widget.AbsListView$LayoutParamsat android.widget.ListView.setupChild(ListView.java:1806)at android.widget.ListView.makeAndAddView(ListView.java:1775)at android.widget.ListView.fillDown(ListView.java:672)at android.widget.ListView.fillFromTop(ListView.java:732)at android.widget.ListView.layoutChildren(ListView.java:1625)at android.widget.AbsListView.onLayout(AbsListView.java:1863)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1628)at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1486)at android.widget.LinearLayout.onLayout(LinearLayout.java:1399)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1628)at android.widget.LinearLayout.layoutHorizontal(LinearLayout.java:1617)at android.widget.LinearLayout.onLayout(LinearLayout.java:1401)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1628)at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1486)at android.widget.LinearLayout.onLayout(LinearLayout.java:1399)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.widget.FrameLayout.onLayout(FrameLayout.java:431)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1628)at android.widget.LinearLayout.layoutVertical(LinearLayout.java:1486)at android.widget.LinearLayout.onLayout(LinearLayout.java:1399)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.widget.FrameLayout.onLayout(FrameLayout.java:431)at android.view.View.layout(View.java:11278)at android.view.ViewGroup.layout(ViewGroup.java:4224)at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1504)at android.view.ViewRootImpl.handleMessage(ViewRootImpl.java:2458)at android.os.Handler.dispatchMessage(Handler.java:99)at android.os.Looper.loop(Looper.java:137)at android.app.ActivityThread.main(ActivityThread.java:4424)at java.lang.reflect.Method.invokeNative(Native Method)at java.lang.reflect.Method.invoke(Method.java:511)at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)at dalvik.system.NativeStart.main(Native Method) but the problem is that I can not find the root of the problem. Where exactly and what that means? And why widget, i don't have any widget! 
CODE: Not sure if here: public class MyListPreference extends ListPreference { public MyListPreference(Context context) { super(context); } public MyListPreference(Context context, AttributeSet attrs) { super(context, attrs); } @Override protected View onCreateDialogView() { TextView dialogTitle = (TextView)super.onCreateDialogView(); if (dialogTitle != null) { // Dialog dialogTitle.setBackgroundColor(getContext().getResources().getColor(R.color.category_background)); dialogTitle.setPadding(10, 4, 4, 4); // Text dialogTitle.setTextSize(14); dialogTitle.setTypeface(null, Typeface.BOLD); dialogTitle.setTextColor(getContext().getResources().getColor(R.color.actionbar_background)); dialogTitle.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)); } return dialogTitle; }}
Now provide the response and nothing else.
|
dialogTitle.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT)); Here you're replacing existing layout params of correct type AbsListView.LayoutParams with more generic ViewGroup.LayoutParams . Layout params type is that of the parent container of the view. If you need to modify existing layout params, access them with getLayoutParams() , modify it, and call requestLayout() to notify that the layout has changed. Example. Instead of fooView.setLayoutParams(new LayoutParams(123, 456)); do LayoutParams lp = fooView.getLayoutParams();lp.width = 123;lp.height = 456;fooView.requestLayout();
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21045198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/267679/']}
|
jdg_373076
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14472741
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I was making an upload script when I tested an image file with the extension .JPG. I don't know what the difference between jpg and jpeg is, but it seems that $_FILES doesn't recognize this file type. I've read several threads saying that $_FILES isn't that reliable when it comes to the mime type, so I decided to use PHP's mime type function mime_content_type() , PHP's getimagesize() and pathinfo() , though pathinfo returns a file name and type but I need the path of the file, which is NOT present; all of the functions are being passed $_FILES['file']['tmp_name'] as the parameter. The problem came up when I decided to upload an image file, e.g. sample.JPG; I think most of these files are raw from the camera <-- that's what I think, but nevertheless what is more important is being able to upload .JPG, .jpg, .jpeg and .png files; all of them work fine except for .JPG. The main problem is that the field ['tmp_name'] in $_FILES has no value when a .JPG is to be uploaded. Any of you guys who have encountered this problem, please do share your workaround or "how did you do it" kind of thing.
Now provide the response and nothing else.
|
If $_FILES[$field]['tmp_name'] is empty then the file hasn't been uploaded. You should look at $_FILES[$field]['error'] to see why. FWIW, and as far as I understand it, the mime-type in $_FILES[] is provided by the browser. Update : here is a bit of potted code to handle all file upload errors: $message = 'Error uploading file'; switch( $_FILES['newfile']['error'] ) { case UPLOAD_ERR_OK: $message = false;; break; case UPLOAD_ERR_INI_SIZE: case UPLOAD_ERR_FORM_SIZE: $message .= ' - file too large (limit of '.get_max_upload().' bytes).'; break; case UPLOAD_ERR_PARTIAL: $message .= ' - file upload was not completed.'; break; case UPLOAD_ERR_NO_FILE: $message .= ' - zero-length file uploaded.'; break; default: $message .= ' - internal error #'.$_FILES['newfile']['error']; break; } if( !$message ) { if( !is_uploaded_file($_FILES['newfile']['tmp_name']) ) { $message = 'Error uploading file - unknown error.'; } else { // Let's see if we can move the file... $dest .= '/'.$this_file; if( !move_uploaded_file($_FILES['newfile']['tmp_name'], $dest) ) { // No error supporession so we can see the underlying error. $message = 'Error uploading file - could not save upload (this will probably be a permissions problem in '.$dest.')'; } else { $message = 'File uploaded okay.'; } } }
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14472741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1545664/']}
|
jdg_373077
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3263009
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I got this from today's test. Let $z\in \mathbb{C}$. If $|z^2 + 2019| < 2019$, prove that $|z + \sqrt{2019}| > 31$. I tried the triangle inequality, but it doesn't work. I also tried using contradiction, that is, assuming $|z + \sqrt{2019}| \leq 31$ and then letting $z = x + iy$, but that turned into a mess. So far I've got $$(x^2 + y^2)^2 + 4038(x^2 - y^2) < 0$$ from the assumption. Any hint? Thanks!
Now provide the response and nothing else.
|
The solution follows from the following lemma. Lemma. For $z\in\mathbb C$ , we have that $$|z^2+1|<1\qquad\text{implies}\qquad |z+1|>\frac{1}{\sqrt 2}.$$ The solution follows easily once we have the lemma, since we may introduce a positive real $r>0$ and replace $z$ with $z/r$ to deduce that $$|z^2+r^2|<r^2\qquad\text{implies}\qquad |z+r|>\frac{r}{\sqrt 2}.$$ Taking $r=\sqrt{2019}$ and noting that $\sqrt{2019/2}>31$ yields the solution. Now for the proof of the lemma, observe that $$|z^2+1|^2=(|z|^2-1)^2+(z+\bar{z})^2\geq \frac{(|z|^2+z+\bar{z}-1)^2}{2}=\frac{(|z+1|^2-2)^2}{2},$$ and thus $|z^2+1|<1$ implies that $|z+1|^2>2-\sqrt{2}$ , which is a slightly stronger bound than in the lemma (since $2-\sqrt{2}>1/2$ ). Side note. Putting the tight bound through, we see that the best constant in place of $31$ is $$\sqrt{2019}\cdot\sqrt{2-\sqrt 2}\approx 34.39$$ To see that this truly the best constant, it is necessary to consider the equality case in the inequality we used during the proof of the lemma. In fact, we can tighten the inequality to an identity $$2|z^2+1|^2=\bigl(|z+1|^2-2\bigr)^2+\bigl(|z-1|^2-2\bigr)^2,$$ which illuminates what is going on.
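(Quick numerical check, added for completeness: $31^2=961<\tfrac{2019}{2}=1009.5$, so indeed $\sqrt{2019/2}>31$; and for the sharp constant, $2019\,(2-\sqrt2)\approx 2019\times0.5858\approx 1182.7$, whose square root is $\approx 34.39$, matching the value stated above.)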
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/3263009', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/513870/']}
|
jdg_373078
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7696862
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I noticed a strange behaviour working with netcat and UDP. I start an instance (instance 1) of netcat that listens on a UDP port: nc -lu -p 10000 Then I launch another instance of netcat (instance 2) and try to send datagrams to my process: nc -u 127.0.0.1 10000 I see the datagrams. But if I close instance 2 and relaunch netcat again (instance 3): nc -u 127.0.0.1 10000 I can't see the datagrams on instance 1's terminal. Obviously the operating system assigns instance 3 a different UDP source port than instance 2, and that is where the problem is: if I use the same source port as instance 2 (for example 50000): nc -u -p 50000 127.0.0.1 10000 then instance 1 of netcat receives the datagrams again. UDP is a connectionless protocol, so why? Is this standard netcat behaviour?
Now provide the response and nothing else.
|
When nc is listening to a UDP socket, it 'locks on' to the source port and source IP of the first packet it receives. Check out this trace: socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 3setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0bind(3, {sa_family=AF_INET, sin_port=htons(10000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0recvfrom(3, "f\n", 2048, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(52832), sin_addr=inet_addr("127.0.0.1")}, [16]) = 2connect(3, {sa_family=AF_INET, sin_port=htons(52832), sin_addr=inet_addr("127.0.0.1")}, 16) = 0 Here you can see that it created a UDP socket, set it for address reuse, and bound it to port 10,000. As soon as it received its first datagram (from port 52,832), it issued a connect system call 'connecting' it to the 127.0.0.1:52,832. For UDP, a connect rejects all packets that don't match the IP and port in the connect .
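You can reproduce this "lock on" behaviour outside of netcat with a few lines of socket code. The sketch below is an illustration added here (ports and addresses are arbitrary choices, and the exact fate of unmatched datagrams can vary slightly between operating systems):

import socket

# Listener, mimicking: nc -lu -p 10000
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 10000))

# First sender (plays the role of netcat instance 2)
s1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s1.bind(("127.0.0.1", 50000))
s1.sendto(b"hello from instance 2\n", ("127.0.0.1", 10000))

data, peer = srv.recvfrom(2048)
srv.connect(peer)            # lock onto the first peer, just as nc does
print("locked onto", peer)

# Second sender from a different source port (plays instance 3)
s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s2.bind(("127.0.0.1", 50001))
s2.sendto(b"hello from instance 3\n", ("127.0.0.1", 10000))

srv.settimeout(1.0)
try:
    print(srv.recv(2048))    # not delivered: source port no longer matches
except socket.timeout:
    print("datagram from the second sender was filtered out")

The key line is the connect() on the receiving socket: from that point on the kernel only delivers datagrams whose source address and port match the connected peer, which is exactly what nc does after its first received packet.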
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7696862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/707769/']}
|
jdg_373079
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6009506
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to install GDAL and Python on my Windows 7 workstation. Python version: 2.6.6, GDAL 1.8 (Visual C++ 2010 Express). I followed the instructions at http://ucsb.pbworks.com/w/page/1936549/Using-GDAL-with-Python and added PYTHONPATH and GDAL_DATA to my system's environment variables. Furthermore, I modified my PATH variable by adding the GDAL binaries folder. I downloaded the GDAL Python bindings from: http://vbkto.dyndns.org/sdk/PackageList.aspx?file=release-1600-gdal-1-8-mapserver-5-6.zip "from osgeo import gdal" causes the following error message:

Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    from osgeo import gdal
  File "C:\Python26\lib\site-packages\osgeo\__init__.py", line 21, in <module>
    _gdal = swig_import_helper()
  File "C:\Python26\lib\site-packages\osgeo\__init__.py", line 17, in swig_import_helper
    _mod = imp.load_module('_gdal', fp, pathname, description)
ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden. (The specified module could not be found.)

Thanks in advance, Martwig
Now provide the response and nothing else.
|
To successfully load GDAL from Python on Windows 7, I follow these steps, which should be quite generic:

1. Install the Python of your choosing; I used v2.7.2 x64 downloaded from the official website.
2. Run python from the command line to determine the compiler version used to build Python (mine shows this message: Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit (AMD64)] on win32).
3. Download the corresponding SDK from this link: http://www.gisinternals.com/release.php . Select the correct package for your Python (if using the above configuration, the MSVC2008 x64 release link should be selected). From the subpage, download the following: GDAL core components and Python bindings.
4. Install the GDAL core components, add the installation dir to PATH, and add a new environment variable 'GDAL_DATA' pointing to the data subdirectory of the GDAL installation.
5. Install the Python bindings.

You should be able to use them by now :)
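For a quick sanity check that the bindings and DLLs are being picked up, a two-line test from the Python prompt is enough (the exact version string printed will of course depend on the package you installed):

from osgeo import gdal
print(gdal.VersionInfo("RELEASE_NAME"))   # prints something like "1.8.0, ..." if the import succeeds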
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6009506', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/740650/']}
|
jdg_373080
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12229282
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Questions: 1) Difference between @Component and @Configuration ? I have read that both remove the necessity of wiring code to be put in XML, but did not get the difference between these. 2) What are the differences between @Autowired , @Inject and @Resource ? - Which one to use when? - What are the pros/cons of each?
Now provide the response and nothing else.
|
@Component and @Configuration are indeed very different types of annotations. @Component and similar annotations ( @Service , @Repository , etc.) and the JSR-330 counterpart @Named allow you to declare beans that are to be picked up by autoscanning with <context:component-scan/> or @ComponentScan . They register the bean definition for the annotated classes, so they are roughly equivalent to declaring the specified beans with the <bean ... /> tag in XML. These bean types adhere to the standard proxy creation policies. The @Configuration annotation was designed as a replacement for the XML configuration file. To create @Configuration annotated beans, Spring will always use CGLIB to subclass the @Configuration annotated class, overriding its @Bean annotated methods and replacing them with bean lookup methods so that singleton beans are created only once. (Spring does not use CGLIB to intercept internal method calls of normal Spring beans; it creates a separate proxy instance instead, the same way a JDK proxy does. Doing so allows proxies to be used to avoid cardinality mismatch - for example, a proxy singleton can fetch the current session bean, which is not possible with class inheritance alone.) Despite that, @Configuration annotated classes are still able to use annotated ( @Autowired , @Inject etc.) fields and properties to request beans (and even other @Configuration annotated beans) from the container. Example from section 4.12.5 of the documentation:

@Configuration
public class AppConfig {

    @Bean
    public ClientService clientService1() {
        ClientServiceImpl clientService = new ClientServiceImpl();
        clientService.setClientDao(clientDao());
        return clientService;
    }

    @Bean
    public ClientService clientService2() {
        ClientServiceImpl clientService = new ClientServiceImpl();
        clientService.setClientDao(clientDao());
        return clientService;
    }

    @Bean
    public ClientDao clientDao() {
        return new ClientDaoImpl();
    }
}

In the example above only one ClientDao instance will be created. @Autowired is a Spring annotation, while @Inject is a JSR-330 annotation. @Inject is equivalent to @Autowired or @Autowired(required=true) , but you can't get the @Autowired(required=false) behavior with the JSR-330 @Inject annotation. This annotation always uses by-type autowiring. Spring implements the JSR-250 @Resource annotation in a rather special way. @Resource was originally designed for locating JNDI resources in Java EE, but Spring widens its applicability, making it possible to wire to any bean in the container (JNDI resources are available as beans with the help of SimpleJndiBeanFactory ). The name of the corresponding bean can be specified as the name attribute of the @Resource annotation; if no name is specified, the name of the annotated field or property is used. Another odd feature is that if no bean with that name is found, Spring falls back to by-type wiring. Example: imagine that we have an AlphaClass bean named beanAlpha and a BetaClass bean named beanBeta in the container.

@Resource BetaClass something;  // Wires to beanBeta - by-type
@Resource BetaClass beanAlpha;  // Will throw an exception, because "beanAlpha" is not a BetaClass -> it's a bad idea to use @Resource as a replacement for @Autowired
@Resource Object beanAlpha;     // Wires to beanAlpha - by-name

So it is good practice to always specify the resource name explicitly when using the @Resource annotation. Documentation: Spring annotations, Bean standard annotations. Update: fixed the JSR references, as shevchik pointed out.
The DI-specific annotations are provided by JSR-330, which was developed by Google (Guice framework) and SpringSource (Spring Framework) engineers. @Resource is JNDI-based and provided by JSR-250.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/12229282', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1152500/']}
|
jdg_373081
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3657672
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a question in my assignment: if $f$ is a nonconstant entire function such that $|f(z)|\geq M|z|^n$ for $|z|\geq R$, for some $n\in\mathbb N$ and some $M$ and $R$ in $(0,\infty)$, show that $f$ is a polynomial of degree at least $n$. Now, I defined a function $g(z) = \frac {1}{f(z)}$ such that $|g(z)| \le \frac{1}{M{|z|}^n}$. Then, by the Cauchy inequality, $$|g^{(n)}(z)| \le \frac{n!}{R^n |z|^n M}.$$ Considering that $g(z)$ is an analytic function, it has a radius of convergence $\infty$, which would imply $g^{(n)}(z) = 0$. But if we go by this approach, then $g^{(n)}(z) = 0$ for any $n$. Also, how can we be so sure that $f(z) \neq 0$ for any $z$? Is my reasoning correct, or is there some other way to solve it?
Now provide the response and nothing else.
|
As @Brian points out, $f$ has only finitely many zeros. Of course, $f(z)\neq 0$ if $|z|\geq R$. Since the set $B_R=\{z\mid |z|\leq R\}$ is compact, $f$ can only have finitely many zeros in $B_R$ (use the identity theorem). Let $a_1,\ldots,a_k$ be the zeros of $f$ counted according to multiplicity. Let $$p(z)=(z-a_1)\cdots(z-a_k)=z^k+b_{k-1}z^{k-1}+\cdots+b_0.$$ For $|z|\geq R,$ we have $$|p(z)|\leq |z|^k\Bigl(1+\frac{|b_{k-1}|}{|z|}+\cdots+\frac{|b_{0}|}{|z|^k}\Bigr)\leq C|z|^k,$$ where $C=1+\frac{|b_{k-1}|}{R}+\cdots+\frac{|b_{0}|}{R^k}.$ Thus we have $$\frac{|z|^n|p(z)|}{|f(z)|}\leq \frac{|p(z)|}{M}\leq \frac{C|z|^k}{M},$$ for $|z|\geq R$. Suppose that $n=k$. Then, by Liouville, we see that $\frac{p(z)}{f(z)}$ is a constant function and hence $f$ is a polynomial of degree $k=n$. Suppose now that $n\lt k$. Then it is easy to see that $\frac{p(z)}{f(z)}$ is a polynomial of degree $\leq k-n$ (use Cauchy's integral formula for derivatives; click here for a proof). But $\frac{p(z)}{f(z)}$ is a nowhere vanishing entire function, so $\frac{p(z)}{f(z)}$ is a constant and hence $f$ is a polynomial of degree $k\gt n$. Finally, assume $n\gt k$. Then, by Liouville's theorem, $\frac{z^{n-k}p(z)}{f(z)}$ is a constant. So $f(z)=cz^{n-k}p(z)$ for some constant $c$, and the degree of $f$ is $n$. But $f$ and $p$ share the same zeros with the same multiplicities, so the degree of $f$ is equal to the degree of $p$, i.e., $n=k$, a contradiction. (One can also use Rouché's theorem to conclude. See @N. S.'s comment below.)
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3657672', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/641044/']}
|
jdg_373082
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11856077
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the Java/C world, people often use enums. If I'm using a Java library which uses enums, I can convert between them and keywords, for example, using (. java.lang.Enum valueOf e..., (aget ^"[Ljava.lang.Enum;" (. e (getEnumConstants)) i) , and some reflection. But in the Clojure world, do people ever need anything like an enum (a named integer)? If not, how is their code structured so that they don't need them? If yes, what's the equivalent? I sense I'm really asking about indices (for looping), which are rarely used in functional programming (I've used map-indexed only once so far).
Now provide the response and nothing else.
|
In almost all the Clojure code I have seen, keywords tend to be used instead of enums: they are namespaced and have all the other useful properties of keywords while being much easier to write. They are not an exact stand-in, because they are more dynamic (as in dynamic typing) than Java enums. As for indexing and looping, I find it more idiomatic to map over a sequence of keywords: (map do-stuff [:a :b :c :d] (range)) than to loop over the values in an enumeration, which I have yet to find an example of in Clojure code, though an example very likely exists ;-)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11856077', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/811381/']}
|
jdg_373083
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20766813
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have this number i = -6884376 . How do I interpret it as an unsigned value? Something like (unsigned long)i in C.
Now provide the response and nothing else.
|
Assuming: You have 2's-complement representations in mind; and, By (unsigned long) you mean unsigned 32-bit integer, then you just need to add 2**32 (or 1 << 32) to the negative value. For example, apply this to -1:

>>> -1
-1
>>> _ + 2**32
4294967295L
>>> bin(_)
'0b11111111111111111111111111111111'

Assumption #1 means you want -1 to be viewed as a solid string of 1 bits, and assumption #2 means you want 32 of them. Nobody but you can say what your hidden assumptions are, though. If, for example, you have 1's-complement representations in mind, then you need to apply the ~ prefix operator instead. Python integers work hard to give the illusion of using an infinitely wide 2's complement representation (like regular 2's complement, but with an infinite number of "sign bits"). And to duplicate what the platform C compiler does, you can use the ctypes module:

>>> import ctypes
>>> ctypes.c_ulong(-1)  # stuff Python's -1 into a C unsigned long
c_ulong(4294967295L)
>>> _.value
4294967295L

C's unsigned long happens to be 4 bytes on the box that ran this sample.
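If you need this in more than one place, it can be convenient to wrap the masking trick in a small helper. This is just a sketch under the same 2's-complement assumption; the function name and default width are arbitrary:

def to_unsigned(value, bits=32):
    # Reinterpret a (possibly negative) Python int as an unsigned integer of the given bit width.
    return value & ((1 << bits) - 1)

print(to_unsigned(-6884376))      # 4288082920
print(to_unsigned(-1, bits=64))   # 18446744073709551615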
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/20766813', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1718294/']}
|
jdg_373084
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
173212
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This might be a stupid question, but I'll go ahead and ask it anyway. Say I am on Windows and I create a file, secret_financial_plan.txt, where I store all my company's black market financial operations. Once my operations are all finished, I delete the file and empty the recycle bin to cover my tracks. A few days later I realize "permanently" deleted files in Windows are fairly easily recoverable. So, I decide to apply full disk encryption (FDE) on my device by switching over to Linux and using LUKS or by encrypting the drive in Windows with Veracrypt or by using some other piece of FDE-software. Would it be possible for a forensics expert, who does NOT have access to the decryption key, to recover the secret_financial_plan.txt file after having applied FDE, or not?
Now provide the response and nothing else.
|
It comes down to the following question: Are the plaintext bytes of secret_financial_plan.txt 's contents still stored on the disk? Let's go through the steps: I create a file, secret_financial_plan.txt, where I store all my company's black market financial operations Plaintext is written to the disk I delete the file and empty the recycle bin to cover my tracks. File is renamed and then the entry from the NTFS Master File Table is marked as deleted. The plaintext is still on the disk. I decide to apply full disk encryption (FDE) on my device by switching over to Linux and using LUKS or by encrypting the drive in Windows with Veracrypt or by using some other piece of FDE-software There are different configuration options there. If you specified to encrypt the entire disk, including unallocated space, the plaintext bytes will be overwritten with the encrypted data. If you only encrypt the allocated space, the deleted file will not be overwritten* and so the plaintext might still be present, depending on whether or not it was overwritten by later writes to the disk. * Note: These are the choices Bitlocker provides. I do not know how the other FDE tools operate.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/173212', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/163505/']}
|
jdg_373085
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
320512
|
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just noticed: if you delete a post with a score of 3 (or more) you get the 'Disciplined' badge, losing the post and the reputation earned on it; if you then undelete it, you get the post and points back while the 'Disciplined' badge stays with you. Is it supposed to be like this? Would an admin revoke the awarded badge if this "trick" became noticed? Some users confuse the 'Disciplined' badge with the 'Peer Pressure' badge, so to be clear: Disciplined badge: Delete own post with score of 3 or higher.
Now provide the response and nothing else.
|
Badges will not be taken away, even if at some point the criteria for achieving them is not met any more. Regular badges, once earned, are not taken away (at least not automatically). Tag badges (earned for a score + minimum number of posts in a tag) are taken away when you no longer meet the criteria, and if any badge was earned through heinous cheating, it is sometimes taken away manually at the discretion of the moderators (it takes a Stack Exchange developer to remove it). (Taken from here ) So yes, it is supposed to be like this, whether it seems good or not. IMHO, Pekka 웃 made a very good point about this in his (now deleted) comment : badges exist mainly for an educational purpose. Someone gaming the badge has arguably done what the system wants them to do - learn about how an aspect of it works To address your update: I do not think it would be feasible for mods to manually remove the badge once awarded, as there would probably be way too many users doing this (nothing I can prove). But further, this is currently not even possible for "standard" mods, as - like mentioned above: it takes a Stack Exchange developer to remove it meaning a lot of work for a small amount of people. There are actually quite a lot of badges that can be "cheated" in a similar manner: Scholar "Ask a question and accept an answer" Autobiographer "Complete "About Me" section of user profile" Commentator "Leave 10 comments " Critic "First down vote" Disciplined "Delete own post with score of 3 or higher" Peer Pressure "Delete own post with score of -3 or lower" Suffrage "Use 30 votes in a day" Supporter "First up vote " Vox Populi "Use the maximum 40 votes in a day" (Informed)* "Read the entire tour page" *Sort of. You can't undo the action but you do not necessarily have to read the page Notice, how every one of those badges is a bronze one. This supports the argument, that those badges are only for educational purposes, to invite you to try new features. It even says so on the right sidebar of the badges page : Bronze badges encourage users to try out new features on the site. They are easy to get if you try!
|
{}
|
{'log_upvote_score': 7, 'links': ['https://meta.stackoverflow.com/questions/320512', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/5875416/']}
|
jdg_373086
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9989135
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on building a RESTful web service. I have read up on the principles of using HTTP for every mechanism as far as it will take you, and most of the time, like when fetching resources, it works pretty well. But when I need to POST a new entry of some sort, in the interest of clarity and robustness, no matter what the client may do, I want to offer the particular validation errors that the new entry may have failed at. Additionally, there are specific errors where, say, the data for creating a new user is perfectly valid, but a nickname or an email address may be taken. Simply returning 409 Conflict doesn't finely enough detail which of the nickname or the email address was taken. So getting around this isn't rocket science: document a bunch of specific error codes and return an object with errors: { errors: [4, 8, 42] } This means that in the case of unsuccessful requests, I'm not returning the resource or its key as I may be expected to by the REST philosophy. Similarly, when I return many resources, I have to frame them in some way in an array. So my question is: would I still be providing a well-behaved RESTful web service if I standardized an envelope to use for every request, such that, for example, there's always an object like { errors, isSuccessful, content } ? I have previously built RPC-style web services that used this, but I don't want to make something that's "almost REST". If there's going to be any point to being REST, I'd want to be as well-behaved as possible. If the answer is "hell no", which I think it may be, I would like to hear if it's at least solving the validation problem correctly, and what a good reference for this sort of problem solving might be, because most guides I've found have only detailed the simple cases.
Now provide the response and nothing else.
|
HTTP is your envelope. You're doing the right thing by returning a 4** error code. Having said that, there is nothing wrong with having a descriptive body on a response – in fact in the HTTP RFC , most of the HTTP Error codes advocate that you do return a description of why the error occurred. See 403 for example: If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. So you're okay to continue to use the body of a response for a more detailed description of the error(s). If you're unsure of the specific HTTP error response to use (e.g. multiple errors), and you know that the user should not repeat the request as they just did it, I usually fall back to using 400.
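To make that concrete, here is a minimal sketch in Python/Flask of what "status code plus descriptive body" can look like; the route, field names, and error codes are purely illustrative assumptions, not a prescribed format:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/users", methods=["POST"])
def create_user():
    payload = request.get_json(silent=True) or {}
    errors = []
    if not payload.get("nickname"):
        errors.append({"field": "nickname", "code": "missing"})
    if "@" not in payload.get("email", ""):
        errors.append({"field": "email", "code": "invalid"})
    if errors:
        # HTTP is the envelope: the 400 status signals failure, the body just describes why.
        return jsonify({"errors": errors}), 400
    return jsonify({"id": 42}), 201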
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9989135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/573976/']}
|
jdg_373087
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27922898
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am planning to decide how many nodes should be present in a Kafka cluster. I am not sure about the parameters to take into consideration. I am sure it has to be >= 3 (with a replication factor of 2 and failure tolerance of 1 node). Can someone tell me what parameters should be kept in mind while deciding the cluster size, and how they affect the size? I know of the following factors, but I don't know how they quantitatively affect the cluster size (I do know how they qualitatively affect it). Is there any other parameter which affects cluster size?

1. Replication factor (cluster size >= replication factor)
2. Node failure tolerance (cluster size >= node-failures + 1)

What should the cluster size be for the following scenario, taking all the parameters into consideration?

1. There are 3 topics.
2. Each topic has messages of different sizes. The message size range is 10 to 500 KB, with an average message size of 50 KB.
3. Each topic has a different number of partitions: 10, 100, 500.
4. The retention period is 7 days.
5. There are 100 million messages posted every day for each topic.

Can someone please point me to relevant documentation or any other blog which may discuss this? I have searched Google but to no avail.
Now provide the response and nothing else.
|
As I understand, getting good throughput from Kafka doesn't depend only on the cluster size; there are other configurations which need to be considered as well. I will try to share as much as I can. Kafka's throughput is supposed to be linearly scalable with the number of disks you have. The new multiple data directories feature introduced in Kafka 0.8 allows Kafka's topics to have different partitions on different machines. As the partition number increases greatly, so do the chances that the leader election process will be slower, also affecting consumer rebalancing. This is something to consider, and could be a bottleneck. Another key thing could be the disk flush rate. As Kafka always immediately writes all data to the filesystem, the more often data is flushed to disk, the more "seek-bound" Kafka will be, and the lower the throughput. Then again, a very low flush rate might lead to different problems, as in that case the amount of data to be flushed will be large. So providing an exact figure is not very practical, and I think that is the reason you couldn't find such a direct answer in the Kafka documentation. There will be other factors too: for example the consumer's fetch size, compression, batch size for asynchronous producers, socket buffer sizes, etc. Hardware & OS also play a key role; using Kafka in a Linux-based environment is advisable due to its page cache mechanism for writing data to the disk. Read more on this here. You might also want to take the OS flush behavior into consideration before you actually tune it to fit your needs. I believe it is key to understand the design philosophy, which makes it so effective in terms of throughput and fault-tolerance. Some more resources I find useful to dig into:
https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines
http://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/
https://grey-boundary.io/load-testing-apache-kafka-on-aws/
https://cwiki.apache.org/confluence/display/KAFKA/Performance+testing
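As a rough back-of-envelope illustration of the storage and ingest side only (using the numbers from the question and an assumed replication factor of 2; all the factors above still apply on top of this):

topics = 3
messages_per_day_per_topic = 100_000_000
avg_message_kb = 50
retention_days = 7
replication_factor = 2

daily_ingest_tb = topics * messages_per_day_per_topic * avg_message_kb / 1024**3
stored_tb = daily_ingest_tb * retention_days * replication_factor
ingest_mb_per_sec = topics * messages_per_day_per_topic * avg_message_kb / 1024 / 86_400

print(f"~{daily_ingest_tb:.1f} TB/day ingested")   # ~14 TB/day before replication
print(f"~{stored_tb:.0f} TB retained")             # ~196 TB on disk with replication
print(f"~{ingest_mb_per_sec:.0f} MB/s sustained")  # ~170 MB/s sustained write rate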
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27922898', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3642260/']}
|
jdg_373088
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47468369
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I checked the Kubernetes docs and found that the pods/exec resource has no verbs listed, and I do not know how to control access to it specifically. I created a pod, and someone else needs to access it using 'exec' but must not be able to create anything in my cluster. How can I implement this?
Now provide the response and nothing else.
|
Since pods/exec is a subresource of pods, if you want to exec into a pod you first need to get the pod, so here is my role definition:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/47468369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5001352/']}
|
jdg_373089
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
66193796
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In this project, it uses NestJS along with TypeORM. For real API requests, CRUD operation is being operated on MySQL(which is using AWS RDS). Now I am trying to use SQLite(In-Memory) to test API results. I successfully implemented this in Unit Test, as the code below. First, below is create-memory-db.ts , which returns a connection to in-memory SQLite database. type Entity = Function | string | EntitySchema<any>;export async function createMemoryDB(entities: Entity[]) { return createConnection({ type: 'sqlite', database: ':memory:', entities, logging: false, synchronize: true, });} And by using the exported function above, I successfully ran Unit test, like below. describe('UserService Logic Test', () => { let userService: UserService; let connection: Connection; let userRepository: Repository<User>; beforeAll(async () => { connection = await createMemoryDB([User]); userRepository = await connection.getRepository(User); userService = new UserService(userRepository); }); afterAll(async () => { await connection.close(); }); afterEach(async () => { await userRepository.query('DELETE FROM users'); }); // testing codes.}); I am trying to do the same thing on e2e tests. I tried below code. // user.e2e-spec.tsdescribe('UserController (e2e)', () => { let userController: UserController; let userService: UserService; let userRepository: Repository<User>; let connection: Connection; let app: INestApplication; const NAME = 'NAME'; const EMAIL = 'test@test.com'; const PASSWORD = '12345asbcd'; beforeAll(async () => { connection = await createMemoryDB([User]); userRepository = await connection.getRepository(User); userService = new UserService(userRepository); userController = new UserController(userService); const moduleFixture: TestingModule = await Test.createTestingModule({ imports: [], controllers: [UserController], providers: [UserService], }).compile(); app = moduleFixture.createNestApplication(); await app.init(); }); afterAll(async () => { await connection.close(); }); afterEach(async () => { // await userRepository.query('DELETE FROM users'); }); it('[POST] /user : Response is OK if conditions are right', () => { const dto = new UserCreateDto(); dto.name = NAME; dto.email = EMAIL; dto.password = PASSWORD; return request(app.getHttpServer()) .post('/user') .send(JSON.stringify(dto)) .expect(HttpStatus.CREATED); });}); I cannot create UserModule since it doesn't have a constructor with Connection parameter.The code itself has no compile error, but gets results below when e2e test is executed. Nest can't resolve dependencies of the UserService (?). Please make sure that the argument UserRepository at index[0] is available in the RootTestModule context.Potential solutions:- If UserRepository is a provider, is it part of the current RootTestModule?- If UserRepository is exported from a seperate @Module, is that module imported within RootTestModule? @Module({ imports: [/* The module containing UserRepository */] })TypeError: Cannot read property 'getHttpServer' of undefined. Any help would be greatly appreciated. Thanks :) UPDATE : New error occured after trying below. 
describe('UserController (e2e)', () => { let userService: UserService; let userRepository: Repository<User>; let connection: Connection; let app: INestApplication; const NAME = 'NAME'; const EMAIL = 'test@test.com'; const PASSWORD = '12345asbcd'; beforeAll(async () => { connection = await createMemoryDB([User]); userRepository = await connection.getRepository(User); userService = new UserService(userRepository); const moduleFixture: TestingModule = await Test.createTestingModule({ imports: [UserModule], }) .overrideProvider(UserService) .useClass(userService) .compile(); app = moduleFixture.createNestApplication(); await app.init(); }); afterAll(async () => { await connection.close(); }); afterEach(async () => { await userRepository.query('DELETE FROM users'); }); it('[POST] /user : Response is OK if conditions are right', async () => { const dto = new UserCreateDto(); dto.name = NAME; dto.email = EMAIL; dto.password = PASSWORD; const result = await request(app.getHttpServer()) .post('/user') .send(JSON.stringify(dto)) .expect({ status: HttpStatus.CREATED }); });}); I checked if query is working, and was able to see that it is using SQLite database as I wanted. But new error appeared in console. TypeError: metatype is not a constructor.TypeError: Cannot read property 'getHttpServer' of undefined.
Now provide the response and nothing else.
|
Okay, I solved this issue by using TypeOrm.forRoot() inside the imports field of Test.createTestingModule . Below is how I did it. describe('UserController (e2e)', () => { let userService: UserService; let userRepository: Repository<User>; let app: INestApplication; const NAME = 'NAME'; const EMAIL = 'test@test.com'; const PASSWORD = '12345asbcd'; beforeAll(async () => { const moduleFixture: TestingModule = await Test.createTestingModule({ imports: [ UserModule, TypeOrmModule.forRoot({ type: 'sqlite', database: ':memory:', entities: [User], logging: true, synchronize: true, }), ], }).compile(); app = moduleFixture.createNestApplication(); await app.init(); userRepository = moduleFixture.get('UserRepository'); userService = new UserService(userRepository); }); afterAll(async () => { await app.close(); }); afterEach(async () => { await userRepository.query('DELETE FROM users'); });});
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66193796', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11523173/']}
|
jdg_373090
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19208725
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is this example usage of sync.WaitGroup correct? It gives the expected result, but I am unsure about the wg.Add(4) and the position of wg.Done() . Does it make sense to add the four goroutines at once with wg.Add() ? http://play.golang.org/p/ecvYHiie0P

package main

import (
    "fmt"
    "sync"
    "time"
)

func dosomething(millisecs time.Duration, wg *sync.WaitGroup) {
    duration := millisecs * time.Millisecond
    time.Sleep(duration)
    fmt.Println("Function in background, duration:", duration)
    wg.Done()
}

func main() {
    var wg sync.WaitGroup
    wg.Add(4)
    go dosomething(200, &wg)
    go dosomething(400, &wg)
    go dosomething(150, &wg)
    go dosomething(600, &wg)
    wg.Wait()
    fmt.Println("Done")
}

Result (as expected):

Function in background, duration: 150ms
Function in background, duration: 200ms
Function in background, duration: 400ms
Function in background, duration: 600ms
Done
Now provide the response and nothing else.
|
Yes, this example is correct. It is important that the wg.Add() happens before the go statement to prevent race conditions. The following would also be correct: func main() { var wg sync.WaitGroup wg.Add(1) go dosomething(200, &wg) wg.Add(1) go dosomething(400, &wg) wg.Add(1) go dosomething(150, &wg) wg.Add(1) go dosomething(600, &wg) wg.Wait() fmt.Println("Done")} However, it is rather pointless to call wg.Add over and over again when you already know how many times it will be called. Waitgroups panic if the counter falls below zero. The counter starts at zero, each Done() is a -1 and each Add() depends on the parameter. So, to ensure that the counter never drops below and avoid panics, you need the Add() to be guaranteed to come before the Done() . In Go, such guarantees are given by the memory model . The memory model states that all statements in a single goroutine appear to be executed in the same order as they are written. It is possible that they won't actually be in that order, but the outcome will be as if it was. It is also guaranteed that a goroutine doesn't run until after the go statement that calls it . Since the Add() occurs before the go statement and the go statement occurs before the Done() , we know the Add() occurs before the Done() . If you were to have the go statement come before the Add() , the program may operate correctly. However, it would be a race condition because it would not be guaranteed.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/19208725', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/317915/']}
|
jdg_373091
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4506206
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In Gorodentsev's "Algebra - 1" (page 44), given a formal power series $f \in K[\![ x]\!]$ , he defines its derivative as $$f(x+t) = f(x) + f'(x) \cdot t + \bigl( \text{terms divisible by }t^2 \bigr).$$ Then he wants to show the chain rule: $$\bigl( f(g(x)) \bigr)' = g'(x) \cdot f'(g(x)). $$ He does this as follows: let $$\tau(x, t) = g(x+t) - g(x) = t \cdot g'(x) + \bigl( \text{terms divisible by }t^2 \bigr)$$ and note that \begin{align}f(g(x+t)) &= f(g(x) + \tau(x, t)) \\&= f(g(x)) + \tau(x, t) \cdot f'(g(x)) + \bigl( \text{terms divisible by }\tau(x, t)^2 \bigr) \\&= f(g(x)) + t \cdot g'(x) \cdot f'(g(x)) + \bigl( \text{terms divisible by }t^2 \bigr). \end{align} Therefore, the result follows from the definition. Can anyone please explain why $$f(g(x) + \tau(x, t)) = f(g(x)) + \tau(x, t) \cdot f'(g(x)) + \bigl( \text{terms divisible by }\tau(x, t)^2 \bigr)$$ holds true? I'm really confused and I would appreciate any help. Thanks.
Now provide the response and nothing else.
|
As Qiaochu Yuan's answer observed, such a ring must have the property that every finitely generated ideal is generated by an idempotent and therefore every ideal is generated by idempotents. Conversely, I claim that if every principal ideal in a commutative ring $R$ is generated by an idempotent then every ideal in $R$ is radical. To prove this, suppose $I\subseteq R$ is an ideal and $r^n\in I$ . Let $e$ be an idempotent that generates the principal ideal $(r)$ . Then since $e$ is a multiple of $r$ , $e^n=e$ is a multiple of $r^n$ so $e\in I$ . But since $(e)=(r)$ , this means $r\in I$ . So, all ideals are radical in a commutative ring iff every principal ideal is generated by an idempotent. Such a ring is called a von Neumann regular ring and much is known about them. For instance, one alternative characterization is that a commutative ring is von Neumann regular iff it is $0$ -dimensional (i.e., every prime is maximal) and reduced.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4506206', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/565958/']}
|
jdg_373092
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
451941
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have this circuit built and tested. I think it should work and charge the battery at 0.4 A up to about 13.5 V. The problem: it doesn't charge, and there is only a current of about 0.03 A between U1 and U3 (the battery) - why? The LM2596 is a buck converter breakout board and it works flawlessly. R1 is a 4 W resistor, so that won't be the problem. Why doesn't it charge? simulate this circuit – Schematic created using CircuitLab
Now provide the response and nothing else.
|
Read the datasheet of the LM317; on page 9 it states the required input-to-output voltage differential (the excerpt was shown as an image). So when you feed the LM317 14 V, it can regulate to 11 V and lower, not 13.5 V. Also, there will be 1.25 V across R1, so for 13.5 V you will need to put at least 13.5 + 1.25 + 3 = 17.75 V into the LM317. The ~15 V you're feeding the LM2596 board isn't even enough, and there's no need to have that LM2596 converter in place, so remove it. You will need a power source with a voltage higher than ~15 V. As the LM317 will drop 3 V or more at a significant current, it will get hot, so use a heatsink! If the LM317 gets too hot it lowers the current to reduce its power dissipation (and allow itself to cool down). Note that your circuit does not have a well-defined "stop charging" voltage; current will keep flowing and your battery might overcharge! I have built an LM317-based battery charger for my 12 V car battery. I use a 19 V laptop power supply I had lying around to power it. In that design I do not use the LM317 as a current source; instead I use it as a voltage regulator set to 13.5 V. Then, when the battery has a lower voltage, the LM317 will hit its built-in current limit (< 2.2 A). For a car battery 2.2 A or less is fine. As the battery charges and the voltage reaches 13.5 V, the current gets smaller and smaller until only a leakage current is left. If that 2.2 A is too much for your battery, use this circuit instead: (schematic image omitted)
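To get a feel for the heatsink requirement, a quick back-of-envelope calculation (assuming the 19 V supply mentioned above and the 0.4 A charge current from the question; the numbers are illustrative only):

v_in = 19.0      # assumed supply voltage
v_batt = 13.5    # target battery voltage
i_charge = 0.4   # charge current in amps

p_drop = (v_in - v_batt) * i_charge   # dissipated in the LM317 plus its sense resistor
print(f"{p_drop:.1f} W")              # about 2.2 W, enough to warrant a heatsink on a TO-220 package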
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/451941', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/228266/']}
|
jdg_373093
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3270409
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an application with two activities and I'd like to be able to have two icons appear in the launcher, each launching the respective activity within the app. Specifically, I want one icon to launch my main app, and another icon to launch my settings activity. Is this possible? Here is what I've tried so far: <activity android:label="MyApp" android:name=".MyApp"> <intent-filter> <action android:name=".MyApp"/> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> <activity android:label="Settings" android:name=".Settings"> <intent-filter> <action android:name=".Settings"/> <action android:name="android.intent.action.MAIN"/> <category android:name="android.intent.category.LAUNCHER"/> </intent-filter> </activity> This creates two launcher icons, but they both run my main app instead of the second icon running my settings app. I've tried just having the launcher category but then I don't get an icon so it looks like I need the main action as well. Is this the right approach or should I be declaring two applications in the manifest instead?
Now provide the response and nothing else.
|
What you need to do is have your settings activity launch in another task. You can do this by specifying its task affinity. This is done with the attribute android:taskAffinity . By default all activities share the same task affinity that defaults to main package specified in the manifest. On your settings activity you can specify android:taskAffinity="your.own.package.SettingsTask" to have the settings activity launch in its own task. Extra documentation.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3270409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/309885/']}
|
jdg_373094
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36140252
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using ES6 with Babel in my project and I am getting an error when I declare one of my consts:

'use strict';

const APP = window.APP = window.APP || {};
const _ = window._;

APP.personalCard = (function () {
  ...
}());

The error: Uncaught TypeError: Identifier 'APP' has already been declared. And that is the whole file; I don't have that declared anywhere else in that file. But I have declared that variable at the top of the other files. What do you think the problem could be?
Now provide the response and nothing else.
|
But I have declared that var in the top of the other files. That's the problem. After all, this makes multiple declarations for the same name in the same (global) scope - which will throw an error with const . Instead, use var , use only one declaration in your main file, or only assign to window.APP exclusively. Or use ES6 modules right away, and let your module bundler/loader deal with exposing them as expected.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36140252', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4371033/']}
|
jdg_373095
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
69591
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create a regexp in Emacs that matches exactly 3 digits. For example, I want to match the following: 123345789 But not 12341212 23 If I use [0-9]+ I match any single string of digits. I thought [0-9]{3} would work, but when tested in re-builder it doesn't match anything.
Now provide the response and nothing else.
|
If you're entering the regex interactively, and want to use {3} , you need to use backslashes to escape the curly braces. If you don't want to match any part of the longer strings of numbers, use \b to match word boundaries around the numbers. This leaves: \b[0-9]\{3\}\b For those wanting more information about \b , see the docs : matches the empty string, but only at the beginning or end of a word. Thus, \bfoo\b matches any occurrence of foo as a separate word. \bballs?\b matches ball or balls as a separate word. \b matches at the beginning or end of the buffer regardless of what text appears next to it. If you do want to use this regex from elisp code, as always, you must escape the backslashes one more time. For example: (highlight-regexp "\\b[0-9]\\{3\\}\\b")
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/69591', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6180/']}
|
jdg_373096
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15252521
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've found myself needing to do a GetMulti operation with an array of keys for which some entities exist, but some do not. My current code, below, returns an error ( datastore: no such entity ). err := datastore.GetMulti(c, keys, infos) So how can I do this? I'd use a "get or insert" method, but there isn't one.
Now provide the response and nothing else.
|
GetMulti can return an appengine.MultiError in this case. Loop through that and look for datastore.ErrNoSuchEntity . For example:

if err := datastore.GetMulti(c, keys, dst); err != nil {
    if me, ok := err.(appengine.MultiError); ok {
        for i, merr := range me {
            if merr == datastore.ErrNoSuchEntity {
                // keys[i] is missing
            }
        }
    } else {
        return err
    }
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15252521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/284016/']}
|
jdg_373097
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23882032
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to know if the JMS API uses any particular protocol to transfer messages, or if it uses its own. If the former, which protocol? I have read many articles on the net but I couldn't find an answer to this.
Now provide the response and nothing else.
|
The standard JMS API is merely a set of interfaces; JMS providers (such as WebSphere MQ) provide their own implementations for these interfaces. The only thing that you can say for sure about all JMS implementations is that they all adhere to the JMS API; other than that, a JMS implementation may use any protocol whatsoever in order to fulfill the JMS API contracts. Now, when you're asking specifically about "protocols", you should also define which "layer" in the communication you are referring to (have a look at the OSI Model , for example). When your JMS client has to talk to a JMS server that is located on another machine on the network (a typical case), the protocol used between the client and the server will be based, in one way or another, on TCP/IP. Over the wire, you'll be able to see TCP/IP packets being exchanged back and forth. At the higher level, there are no guarantees; you are likely to find proprietary protocols varying between different implementors. Remember that, with JMS, performance is often crucial; JMS vendors put a lot of efforts into ensuring that their protocols ("above" TCP/IP) perform well. HTTP, for example, won't do.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23882032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3218553/']}
|
jdg_373098
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
235586
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Do any precedence rules exist between logical operators? I do not mean in any specific programming language, but in mathematics. For example, how can we interpret A<=>B<=>C ? Thanks
Now provide the response and nothing else.
|
It is obvious that $f:x\mapsto\frac1{1+x}$ is a monotonically decreasing continuous function $\mathbf R_{\geq0}\to\mathbf R_{\geq0}$, and it is easily computed that $\alpha=\frac{-1+\sqrt5}2\approx0.618$ is its only fixed point (solution of $f(x)=x$). So $f^2:x\mapsto f(f(x))$ is a monotonically increasing function that maps the interval $[0,\alpha)$ into itself. Since $x_3=f^2(x_1)=\frac12>0=x_1$ one now sees by induction that $(x_1,x_3,x_5,...)$ is an increasing sequence bounded by $\alpha$. It then has a limit, which must be a fixed point of $f^2$ (the function mapping each term of the sequence to the next term). One checks that on $ \mathbf R_{\geq0}$ the function $f^2$ has no other fixed point than the one of $f$, which is $\alpha$, so that must be value of the limit. The sequence $(x_2,x_4,x_6,...)$ is obtained by applying $f$ to $(x_1,x_3,x_5,...)$, so by continuity of $f$ it is also convergent, with limit $f(\alpha)=\alpha$. Then $\lim_{n\to\infty}x_n=\alpha$.
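A quick numerical illustration of this convergence (just a sketch; the thirty-iteration cutoff is arbitrary), starting from $x_1 = 0$ as in the argument above:

x = 0.0
for _ in range(30):
    x = 1 / (1 + x)           # x_{n+1} = 1 / (1 + x_n)
print(x)                      # 0.6180339887...
print((5 ** 0.5 - 1) / 2)     # alpha = (sqrt(5) - 1) / 2, the fixed point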
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/235586', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38933/']}
|
jdg_373099
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34492280
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a GridView implemented and activated the mGridView.setChoiceMode(GridView.CHOICE_MODE_MULTIPLE_MODAL); mode. Now I have the possibility to select multiple items from my grid when I perform a long click on one item. I want to achieve this behavior on a normal, short click. Is this possible?
Now provide the response and nothing else.
|
First, I'd suggest to think if this user scenario is what you have been looking for. By default, in Android UX to select something you do long press and it's a pattern users used to. So, maybe you should rethink the whole flow. Saying that, do you actually need GridView.CHOICE_MODE_MULTIPLE_MODAL ? You can handle it on the Adapter level, by just storing selected positions and update this list in onClick handler: static final String[] numbers = new String[] { "A", "B", "C", "D", "E",.... "U", "V", "W", "X", "Y", "Z"};.....gridView = (GridView) findViewById(R.id.gridView1);final CustomAdapter adapter = new CustomAdapter(numbers);gridView.setAdapter(adapter);gridView.setOnItemClickListener(new AdapterView.OnItemClickListener() { public void onItemClick(AdapterView<?> parent, View v, int position, long id) { int selectedIndex = adapter.selectedPositions.indexOf(position); if (selectedIndex > -1) { adapter.selectedPositions.remove(selectedIndex); ((CustomView)v).display(false); } else { adapter.selectedPositions.add(position); ((CustomView)v).display(true); } }}); Custom BaseAdapter to display Custom views: public class CustomAdapter extends BaseAdapter { private String[] strings; List<Integer> selectedPositions = new ArrayList<>(); CustomAdapter(String [] strings) { this.strings = strings; } @Override public int getCount() { return strings.length; } @Override public Object getItem(int position) { return strings[position]; } @Override public long getItemId(int position) { return position; } @Override public View getView(int position, View convertView, ViewGroup parent) { CustomView customView = (convertView == null) ? new CustomView(MainActivity.this) : (CustomView) convertView; customView.display(strings[position], selectedPositions.contains(position)); return customView; }} Custom View (in my case - cell with TextView). Xml: <merge xmlns:android="http://schemas.android.com/apk/res/android"> <TextView android:id="@+id/textView" android:textColor="#FFF" android:gravity="center" android:layout_width="match_parent" android:layout_height="60dp" /></merge> Code: class CustomView extends FrameLayout { TextView textView; public CustomView(Context context) { super(context); LayoutInflater.from(context).inflate(R.layout.custom_view, this); textView = (TextView)getRootView().findViewById(R.id.textView); } public void display(String text, boolean isSelected) { textView.setText(text); display(isSelected); } public void display(boolean isSelected) { textView.setBackgroundColor(isSelected? Color.RED : Color.LTGRAY); }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34492280', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1370245/']}
|
jdg_373100
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2823238
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is $7-\cfrac{12}{7-\cfrac{12}{7-\cfrac{12}{7-\cdots}}}$? Conventional methods of solving these types of problems tell me to do as follows: Let$$x =7-\cfrac{12}{7-\cfrac{12}{7-\cfrac{12}{7-\cdots}}}$$ Then $x = 7-\frac{12}{x}$. Then the solution is $x=3, 4$. How do I know which one to pick?
Now provide the response and nothing else.
|
(EDITED) If $f(x) = 7 - 12/x$, then $f'(3) = 4/3$ while $f'(4) = 3/4$. Since $|f'(3)| > 1$, $3$ is a repelling fixed point of the iteration $x_{n+1} = f(x_n)$, while since $|f'(4)|< 1$, $4$ is an attracting fixed point. Thus if you don't start exactly at $3$, the iterations won't converge to $3$, but they will converge to $4$ at least if you eventually get close enough to $4$. The boundary points of the immediate basin of attraction of an attracting fixed point can be $\pm \infty$, a singular point (here $0$), a repelling fixed point,a point mapped to one of those, or a $2$-cycle. Here there are no $2$-cycles, and the basin of attractionis $(3, \infty)$. Thus if you start, or ever get, anywhere $> 3$, you end up approaching the limit $4$.
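A small numerical sketch of the same point (the starting value 7 and the 40-step cutoff are arbitrary choices): evaluating deeper and deeper truncations of the continued fraction amounts to iterating $x \mapsto 7 - 12/x$, and the iterates settle on the attracting fixed point 4, not 3:

x = 7.0                # truncation with no nesting
for _ in range(40):
    x = 7 - 12 / x     # add one more level of the continued fraction
print(x)               # 3.99999... -> the expression converges to 4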
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2823238', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/277707/']}
|
jdg_373101
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8066
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(X, d)$ be a metric space. Is the function $x\mapsto d(x, z)$ continuous? Is it uniformly continuous?
Now provide the response and nothing else.
|
As Qiaochu points out $d(x,y)$ is continuous for fixed $x$. You may like to see this as well, as this is a familiar result in Topology: If $A$ is a non empty subset of a metric space $(X,d)$ then the function $f$ on $X$ given by $$f(x)=d(x,A):= \inf_{y\in A} d(x, y)$$ is continuous. Indeed, $$| f(x) - f(y) | = | d(x,A) - d(y,A) | \leq d(x,y),$$ and thus $f$ is uniformly continuous (use $\delta = \epsilon$ in any point). To show this, let $x$ and $y$ be points in $X$, and $p$ any point in $A$. Then $$d(x,p) \leq d(x,y) + d(y,p)\ \ \ \ \text{ (triangle inequality)}$$and so $$d(x,A) \leq d(x,y) + d(y,p)$$ as $d(x,A)$ is the infimum. But then $d(y,p) \geq d(x,A) - d(x,y)$ (for all $p$, obtained by subtracting from the previous inequality), so that $d(y,A) \geq d(x,A) - d(x,y)$ (as $d(y,A)$ is the infimum).So : $d(x,A) - d(y,A) \leq d(x,y)$. Now reverse the roles of $x$ and $y$ to get$d(y,A) - d(x,A) \leq d(x,y)$. This is taken from http://at.yorku.ca/cgi-bin/bbqa?forum=homework_help_2004;task=show_msg;msg=1323.0001
|
{}
|
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/8066', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_373102
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43110660
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying pointers and references and having a very hard time with them. I think I now understand some simple usage of references and pointers in functions, but there is something I cannot totally understand. Here are some variable declarations:

int a = 1;
float b = 2;
int* p = &a;
string s = "Hello";
vector<int*> values = {p};

What will be the types of the following expressions?

&a
&b
p
&p
&s
&(s.at(1))
values.at(0)
&(values.at(0))
&values

I have no idea what their types exactly are, but I tried it myself:

&a : pointer to int
&b : pointer to float
p : pointer to int
&p : pointer to pointer to int
&s : pointer to string
&(s.at(1)) : pointer to string
values.at(0) : pointer to int
&(values.at(0)) : pointer to pointer to int
&values : pointer to pointer to int

And one more problem: write the following variable declarations:

a) A pointer to a string
b) A reference to a float
c) An array of pointers-to-ints
d) A pointer to a pointer to bool
e) A reference to a pointer to an int

My answers are:

a: string* s = "Hello"
b: float& f = g;
c: int n =1; int*x =&n; int arr[] = {*x};
d: bool y = true; bool* x = &y; bool** qq = &x;
e: int a = 1; int* x = &a; int& z = *x;

I'm not sure about my answers. Please help with these confusing parts.
Now provide the response and nothing else.
|
Updated answer: You're right things don't work as expected when importing everything in a single file. Digging into things it looks like you're running into some magic that Babel/Jest does to support browser scripts that rely on globals (like AngularJS). What's happening is that your module's angular variable is not the same as the global angular variable that is visible to angular-mocks. You can check this by running this at the top of one of your tests: import * as angular from 'angular'import 'angular-mocks'console.log(angular === window.angular); // `false` in Jest!console.log(angular.mock); // undefinedconsole.log(window.angular.mock); // `{...}` defined To work around this you just need to use the global angular variable in your tests. src/__test__/all-in-one.test.js : import "angular";import "angular-mocks";/*Work around Jest's window/global mock magic.Use the global version of `angular` that has been augmented by angular-mocks.*/var angular = window.angular;export var app = angular.module('app', []);app.directive('myDirective', () => ({ link: (scope, element) => { console.log('This does log'); scope.content = 'Hi!'; }, template: 'content: {{content}}'}));describe('myDirective', function(){ var element; var scope; beforeEach(function(){ angular.mock.module(app.name); }); it('should do something', function(){ inject(function( $rootScope, $compile ){ scope = $rootScope.$new(); element = $compile('<my-directive></my-directive>')(scope); scope.$digest(); }); expect(element.html()).toEqual('content: Hi!'); });}); Original answer: (This worked because I was accidentally using the global version of angular inside my test.) The Angular module under test isn't being initialised correctly in your tests. Your call to beforeEach(app) isn't correct. Instead you need to use angular.mock.module("moduleName") to initialise your module. describe('myDirective', () => { var element, scope // You need to pass the module name to `angular.mock.module()` beforeEach(function(){ angular.mock.module(app.name); }); // Then you can set up and run your tests as normal: beforeEach(inject(($rootScope, $compile) => { scope = $rootScope.$new() element = $compile('<my-directive></my-directive>')(scope) scope.$digest() })) it('should actually do something', () => { expect(element.html()).toEqual('Hi!') })}); And then your test works as expected for me: PASS src\__test__\app.test.js myDirective √ should do something (46ms) For reference, here is the full app and test: src/app/app.module.js : import * as angular from 'angular'export var app = angular.module('app', []);app.directive('myDirective', () => ({ link: (scope, element) => { console.log('This does log'); scope.content = 'Hi!'; }, template: 'content: {{content}}'})) src/__test__/app.test.js : import {app} from "../app/app.module";import "angular-mocks";describe('myDirective', function(){ var element; var scope; beforeEach(function(){ angular.mock.module(app.name); }); beforeEach(inject(function( $rootScope, $compile ){ scope = $rootScope.$new(); element = $compile('<my-directive></my-directive>')(scope); scope.$digest(); })); it('should do something', function(){ expect(element.html()).toEqual('content: Hi!'); });});
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43110660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5991689/']}
|
jdg_373103
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29174
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using jQuery and SimpleModal in an ASP.Net project to make some nice dialogs for a web app. Unfortunately, any buttons in a modal dialog can no longer execute their postbacks, which is not really acceptable. There is one source I've found with a workaround , but for the life of me I can't get it to work, mostly because I am not fully understanding all of the necessary steps. I also have a workaround, which is to replace the postbacks, but it's ugly and probably not the most reliable. I would really like to make the postbacks work again. Any ideas? UPDATE: I should clarify, the postbacks are not working because the Javascript used to execute the post backs has broken in some way, so nothing happens at all when the button is clicked.
Now provide the response and nothing else.
|
Both of you were on the right track. What I realized is that SimpleModal appends the dialog to the body, which is outside ASP.Net's <form> , which breaks the functionality, since it can't find the elements. To fix it, I just modified the SimpleModal source to append eveything to 'form' instead of 'body' . When I create the dialog, I also use the persist: true option, to make sure the buttons stay through opening and closing. Thanks everyone for the suggestions! UPDATE: Version 1.3 adds an appendTo option in the configuration for specifying which element the modal dialog should be appended to. Here are the docs .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29174', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2363/']}
|
jdg_373104
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
73557662
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a type build that has a flag template, and according to the active flag bits, it inherits from those types. this allows me to "build" classes from many subclasses with a great number of configurations: #include <type_traits>#include <cstdint>struct A { void a() {} };struct B { void b() {} };struct C { void c() {} };struct D { void d() {} };constexpr std::uint8_t FLAG_BIT_A = 0b1 << 0;constexpr std::uint8_t FLAG_BIT_B = 0b1 << 1;constexpr std::uint8_t FLAG_BIT_C = 0b1 << 2;constexpr std::uint8_t FLAG_BIT_D = 0b1 << 3;struct empty {};template<std::uint8_t flags> using flag_a_type = std::conditional_t<(flags & FLAG_BIT_A), A, empty>;template<std::uint8_t flags> using flag_b_type = std::conditional_t<(flags & FLAG_BIT_B), B, empty>;template<std::uint8_t flags> using flag_c_type = std::conditional_t<(flags & FLAG_BIT_C), C, empty>;template<std::uint8_t flags> using flag_d_type = std::conditional_t<(flags & FLAG_BIT_D), D, empty>;template<std::uint8_t flags>struct build : flag_a_type<flags>, flag_b_type<flags>, flag_c_type<flags>, flag_d_type<flags> {};int main() { build<FLAG_BIT_A | FLAG_BIT_C> foo;} so build<FLAG_BIT_A | FLAG_BIT_C> should result in a class that inherits from A and from C . but it doesn't compile, saying empty is already a direct base class: error C2500: 'build<5>': 'empty' is already a direct base class how can I achieve this without having to make 4 different empty structs to avoid the clash?
Now provide the response and nothing else.
|
Here's another approach using c++20 that provides much more flexibility for bitmask-based inheritance: #include <concepts>#include <cstdint>template <std::integral auto, class...>struct inherit_mask {};template <auto flags, class Base, class... Bases> requires((flags & 1) == 1)struct inherit_mask<flags, Base, Bases...> : Base, inherit_mask<(flags >> 1), Bases...> {};template <auto flags, class Base, class... Bases>struct inherit_mask<flags, Base, Bases...> : inherit_mask<(flags >> 1), Bases...> {};struct A { void a() {} };struct B { void b() {} };struct C { void c() {} };struct D { void d() {} };template <std::uint8_t flags>using build = inherit_mask<flags, A, B, C, D>;using foo = build<0b0101>;static_assert(std::derived_from<foo, A>);static_assert(not std::derived_from<foo, B>);static_assert(std::derived_from<foo, C>);static_assert(not std::derived_from<foo, D>); Compiler Explorer Works on clang, gcc, and msvc, and doesn't lead to exponential instantiation explosion.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/73557662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6039995/']}
|
jdg_373105
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
282597
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The problem is: If $G$ is a finite group with order not divisible by 3, and $(ab)^3=a^3b^3$ for all $a,b\in G$, then show that $G$ is abelian. I have been trying this for a long time but not been able to make any progress. The only thing that I can think of is:$$ab\cdot ab\cdot ab=aaa\cdot bbb\implies(ba)^2=a^2b^2=aabb=(\text{TPT})abba.$$Now, how can I prove the last equality? If I write $aabb=abb^{-1}abb$, then in order for the hypothesis to be correct, $b^{-1}abb=ba\implies ab^2=b^2a$. Where am I going wrong? What should I do?
Now provide the response and nothing else.
|
Suppose that $x\in G$ satisfies $ x^3=e$. Then we cannot have $x\ne e$, otherwise $x$ would have order 3, which implies that 3 divides $|G|$ (recall that the order of a group element divides the order of the group). Hence,$$\forall x\in G:\quad x^3=e~\Longrightarrow~x=e,$$and as $(ab)^3=a^3b^3$ for all $ a,b \in G $, we see that the function $\phi:G\to G$ defined by$$\forall x\in G:\quad\phi(x) \stackrel{\text{def}}{=} x^3$$is an injective group homomorphism. Now,$$\forall a,b\in G:\quad ababab=(ab)^3=a^3b^3=aaabbb.$$Hence,$$\forall a,b\in G:\quad baba=aabb,\quad\text{or equivalently},\quad (ba)^2=a^2b^2.$$Using this fact, we obtain\begin{align}\forall a,b\in G:\quad(ab)^4&= [(ab)^2]^2\\ &=[b^2a^2]^2\\ &=(a^2)^2(b^2)^2\\ &=a^4b^4\\ &=aaaabbbb.\end{align} On the other hand,\begin{align}\forall a,b\in G:\quad(ab)^4&= abababab\\ &=a (ba)^3b\\ &=ab^3a^3b\\ &=abbbaaab.\end{align} Hence, for all $a,b\in G$, we have $aaaabbbb=abbbaaab$, which yields$$\phi(ab)=a^3b^3=b^3a^3=\phi(ba).$$As $\phi$ is injective, we conclude that $ab=ba$ for all $a,b\in G$. Conclusion: $G$ is an abelian group.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/282597', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_373106
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24510707
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any limit on how big the selection statement can be?for example suppose I have 100 failed student as selection, will my below code work? ArrayList<Long> ids_toupdate = getFailedStudents();// has size 100.String selection = String.format(Locale.US, STUDENT._ID + " IN (%s)",TextUtils.join(", ", ids_toupdate));ContentValues cv = new ContentValues();cv.put(RawContacts.FAILED, 1);getContentResolver().update(STUDENT.CONTENT_URI,cv, selection, null);
Now provide the response and nothing else.
|
Roughly speaking the default limit is 1,000,000 bytes or 1,000,000 characters, so unless your 100 'students' average around 10,000 characters each inside the IN list, your statement should be fine. The following is taken from http://www.sqlite.org/limits.html

Maximum Length Of An SQL Statement

The maximum number of bytes in the text of an SQL statement is limited to SQLITE_MAX_SQL_LENGTH which defaults to 1000000. You can redefine this limit to be as large as the smaller of SQLITE_MAX_LENGTH and 1073741824. If an SQL statement is limited to be a million bytes in length, then obviously you will not be able to insert multi-million byte strings by embedding them as literals inside of INSERT statements. But you should not do that anyway. Use host parameters for your data. Prepare short SQL statements like this: INSERT INTO tab1 VALUES(?,?,?); Then use the sqlite3_bind_XXXX() functions to bind your large string values to the SQL statement. The use of binding obviates the need to escape quote characters in the string, reducing the risk of SQL injection attacks. It also runs faster since the large string does not need to be parsed or copied as much. The maximum length of an SQL statement can be lowered at run-time using the sqlite3_limit(db,SQLITE_LIMIT_SQL_LENGTH,size) interface.

EDIT: 26/10/2022 As Tobias explains in his comment, the linked article now depicts the default max length as follows:

Maximum Length Of An SQL Statement

The maximum number of bytes in the text of an SQL statement is limited to SQLITE_MAX_SQL_LENGTH which defaults to 1,000,000,000.
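To make the "use host parameters" advice concrete, here is a minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration; the same idea carries over to Android's selectionArgs):

import sqlite3

ids = list(range(1, 101))                   # e.g. 100 failed-student ids
placeholders = ", ".join("?" for _ in ids)  # "?, ?, ?, ..." with one marker per id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (_id INTEGER PRIMARY KEY, failed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO student (_id) VALUES (?)", [(i,) for i in ids])

# bind the ids as host parameters instead of pasting them into the SQL text
conn.execute("UPDATE student SET failed = 1 WHERE _id IN (%s)" % placeholders, ids)
print(conn.execute("SELECT COUNT(*) FROM student WHERE failed = 1").fetchone()[0])  # 100

Only the placeholders count toward the statement-length limit, so the size of the bound values themselves is no longer a concern.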
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24510707', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1718060/']}
|
jdg_373107
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1805587
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X, Y$ be an exponentially distributed random variables with parameters $a, b$. Then $X$ has pdf: $$f_X(x) =\begin{cases} a e^{-a x},& x\geq 0\\0,& \text{otherwise}.\end{cases}$$ Suppose $X$ and $Y$ independent. Show that $$\mathbb P(X>Y) = \frac{b}{a+b}.$$ Now I thought the following:$$f(x,y) = f_X(x)\ f_Y(y) = abe^{-ax -by},\qquad\text{for } x,y > 0.$$And then $$\mathbb P(X>Y) = \int_0^\infty \int_0^x a b e^{-ax -by}\,dydx$$However, if I solve this (manually or using Wolframalpha), I can't seem to end up with $\frac{b}{a+b}$. Any ideas?
Now provide the response and nothing else.
|
\begin{align*}P(X>Y) &= \int_{0}^\infty\int_{0}^x abe^{-ax}e^{-by}\,dydx\\&=\int_0^\infty ae^{-ax}\left[\int_0^xbe^{-by}\,dy\right]\,dx\tag 1\\&=\int_0^\infty ae^{-ax}\left(1-e^{-bx}\right)\,dx\\&=\int_0^\infty ae^{-ax}-a e^{-(a+b)x}\,dx\tag 2\\&=1-\frac{a}{a+b}\int_0^\infty(a+b)e^{-(a+b)x}\,dx\tag 3\\&=\frac{a+b-a}{a+b}\\&=\frac{b}{a+b}.\end{align*}where in $(1)$ I recognize the inside integral as the cdf of $Y$ evaluated at $x$; in passing from $(2)$ to $(3)$, the left integral is the integral of a density so it is $1$, and I make the remaining integrand a density by multiplying and dividing by the factor $a+b$ so that it also integrates to $1$.
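As a quick numerical sanity check (not a proof), here is a small Monte Carlo sketch; the rate parameters are arbitrary values chosen for illustration:

import random

a, b, n = 2.0, 5.0, 200_000
hits = sum(random.expovariate(a) > random.expovariate(b) for _ in range(n))
print(hits / n, b / (a + b))  # the empirical frequency should sit close to b/(a+b) = 5/7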
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1805587', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/321702/']}
|
jdg_373108
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
416661
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Class of all metanilpotent groups is a saturated formation ?How do I prove
Now provide the response and nothing else.
|
The inequality$$\limsup_{n\to\infty}a_n\leq\sup_{n\in\mathbb{N}}a_n$$holds for any real numbers $a_n$, because the definition of $\limsup$ is$$\limsup_{n\to\infty}a_n:=\lim_{m\to\infty}\left(\sup_{n\geq m}a_n\right)$$and for any $m\in\mathbb{N}$, we have$$\left(\sup_{n\geq m}a_n\right)\leq\sup_{n\in\mathbb{N}}a_n$$(if the numbers $a_1,\ldots,a_{m-1}$ are less than or equal to the supremum of the others, both sides are equal, and if not, then the right side is larger). Therefore$$\limsup_{n\to\infty}f_n(x)\leq \sup_{n\in\mathbb{N}}f_n(x)$$holds for any real number $x$, which is precisely what is meant by the statement$$\limsup_{n\to\infty}f_n\leq \sup_{n\in\mathbb{N}}f_n.$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/416661', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/60414/']}
|
jdg_373109
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19937927
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a WPF application with a ListBox (called listMyItems) which is successfully bound to a class of MyItems that I created. I have a List of MyItems called currentMyItems which is then assigned as ItemSource to the ListBox. It all works fine, if I add an item to the currentMyItems it pops up on the list, etc.The problem occurs when I try to remove the selected item in the ListBox. This is the code that I use: currentMyItems.Remove((MyItem)listMyItems.SelectedItem); The item disappears from the ListBox but the next time I update it, it pops back up as it was never deleted. Any tips?
Now provide the response and nothing else.
|
I think you may be confused about how data binding works. When you bind a property, you are telling WPF to go look somewhere else for the value of that property. When you bind the ListBox.ItemsSource property to currentMyItems , you are telling WPF to go look at the currentMyItems list to find its list of items. If currentMyItems is an ObservableCollection instead of a List<T> , then the UI will automatically receive a notification to update the bound value when you add or remove an item from the collection. Based on what you say in the question, it sounds like you you have two collections, one of which is bound, and the other which is used to recreate the first collection anytime a change occurs. All that is not needed. Just create one ObservableCollection<MyItem> , bind it to the ListBox.ItemsSource property, and then add or remove items from that single collection. It should work as you would expect. <ListBox x:Name="listMyItems" ItemsSource="{Binding MyItems}" /> and MyItems.Add((MyItem)listMyItems.SelectedItem)MyItems.Remove((MyItem)listMyItems.SelectedItem) If you're interested, I also have some beginner articles on my blog for WPF users who are struggling to understand the DataContext. You may want to check out Understanding the change in mindset when switching from WinForms to WPF and What is this “DataContext” you speak of?
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19937927', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2500469/']}
|
jdg_373110
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
119527
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to prove the following: Let $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ be two sequences such that $(a_n)_{n\in\mathbb{N}}$ converges and $(b_n)_{n\in\mathbb{N}}$ is bounded. If $a=\lim_{n\to\infty} a_n$, prove that $$\limsup_{n\to\infty} (a_n+b_n) = a +\limsup_{n\to\infty} b_n$$ It's easy to show that $\limsup_{n\to\infty} (a_n+b_n) \leq a +\limsup_{n\to\infty} b_n$ since the limit superior is subadditive, but I'm at a loss on how to prove the other inequality.
Now provide the response and nothing else.
|
Let $\displaystyle b_{n_k}$ be a subsequence of $\displaystyle b_n$ which converges to $\displaystyle \limsup b_n$. Show that $\displaystyle a_{n_k} + b_{n_k}$ converges. How does this limit relate to $\displaystyle \limsup (a_n + b_n)$?
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/119527', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1523/']}
|
jdg_373111
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
284603
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $S$ be a numerical semigroup and $k[S]$ is the associated semigroup ring. I would like to compute canonical module $\omega$ of $k[S].$ I want to show that $\omega=k[t^{-n}:n\in\mathbb Z\setminus S]$. I have shown that $H^1_{m}(k[S])=k[t^{n}:n\in\mathbb Z\setminus S]$ where $m$ is the maximal homogeneous ideal of $k[S].$ Using duality I tried to compute canonical ideal but I am not able to do it.
Now provide the response and nothing else.
|
The problem asks for the least number $N$ such that the number of divisors of $N$ is at least $n+2$. Since all numbers below $N$ must have fewer divisors, clearly $d(N) > d(m)$ for all $1\le m < N$. Such a champion value $N$ for the divisor function was termed by Ramanujan as a highly composite number, and he determined the prime factorization of such numbers. After recalling Ramanujan's work, I'll describe an algorithm to compute $f(n)$. It executes in time $$ O((\log n)^{C\log \log \log n}), $$ for some constant $C$. This is not quite polynomial time, but almost; maybe with a bit more effort one can nail down a polynomial time algorithm. Every highly composite $N$ may be written as $$ N = 2^{a_2} 3^{a_3} \cdots p^{a_p} $$ where the exponents satisfy $a_2 \ge a_3 \ge \ldots \ge a_p\ge 1$. Apart from $4$ and $36$, the last exponent $a_p =1$. Ramanujan's main result concerns the exponents $a_\ell$ for primes $\ell \le p$. He works out detailed estimates for these exponents; roughly they satisfy $$a_\ell \approx \frac{1}{\ell^{\alpha}-1},$$ with $\alpha= \log 2/\log p$, in keeping with the example in Will Sawin's answer. The numbers produced in Will Sawin's answer are what Ramanujan calls "superior highly composite numbers." These numbers $N$ are characterized by the property that for some $\epsilon >0$ one has $$ \frac{d(N)}{N^{\epsilon}} > \frac{d(n)}{n^{\epsilon}}, $$for all $n >N$, and $$ \frac{d(N)}{N^{\epsilon}} \ge \frac{d(n)}{n^{\epsilon}}$$ for all $n\le N$. The "superior highly composite numbers" are strictly a subset of the highly composite numbers. The table on pages 110-112 of Ramanujan's paper lists all the highly composite numbers (with superior highly composite numbers marked with an asterisk) with number of divisors up to $10080$ (that is, Ramanujan computes your $f(n)$ for all $n\le 10078$). Ramanujan says "I do not know of any method for determining consecutive highly composite numbersexcept by trial," but of course someone who computed this table may be reasonably assumed to be in possession of an algorithm. Now for the algorithm and its complexity. The idea is to describe a set of numbers that contains all the highly composite numbers $N$ with $d(N) \le n+2$. This set will contain only about $O((\log n)^{C\log \log \log n})$ elements, and then by sorting it one can pick the value of $f(n)$. We are looking for numbers $N=p_1^{e_1} \cdots p_{k}^{e_k}$ where $p_i$ is the $i$-th prime, and the exponents are in descending order $e_1 \ge e_2 \ge \ldots \ge e_k\ge 1$. Now we can assume that $k\le [\log_2 (n+2)] +1=K$, else $d(N)$ is already larger than $n+2$. Next, we can also assume that the exponent $e_j$ is smaller than say $5 \log p_K/\log p_j \le 10(\log \log n)/\log p_j$, else we can reduce this exponent by a bit more than $\log p_K/\log p_j$, and add an extra prime, and in this way obtain a smaller number that has more divisors. Now the idea is simply to list all numbers (together with their prime factorizations) that satisfy the above conditions on the exponents. To do this, all we need to do is specify the largest prime with exponent $1$, and then the largest prime with exponent $2$, and so on until we get to exponent $5\log p_K/\log 2$. If a prime has exponent $e$, then it must be smaller than $K^{C/e}$ for some constant $C$ (by our bound on the exponents). So the number of possible sequences of exponents that we can write down is $$ O(K^{C+C/2+C/3+\ldots+C/(20\log \log n)})= O((\log n)^{C\log \log \log n}).$$That finishes our running time analysis. 
The beginning of Ramanujan's table.
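For small $n$, the value in question (the least $N$ with at least $n+2$ divisors) can also be checked by brute force against Ramanujan's table; a minimal sketch, far slower than the algorithm described above but handy for verification:

def num_divisors(n):
    count, d, m = 1, 2, n
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            count *= e + 1
        d += 1
    if m > 1:
        count *= 2
    return count

def f(n):
    # least N whose divisor count is at least n + 2 (only feasible for small n)
    N = 1
    while num_divisors(N) < n + 2:
        N += 1
    return N

print([f(n) for n in range(1, 11)])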
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/284603', 'https://mathoverflow.net', 'https://mathoverflow.net/users/9485/']}
|
jdg_373112
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3350644
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a student currently learning about Functional Reactive paradigm using F#. It's radically new viewpoint for me. Yesterday I learned about creating a simple ping-pong game using this paradigm. The idea I grasp so far is : we think values as functions of time. On its pure form, it's stateless. However, I need to remember the position of the ball (or state). So I always pass the current position of the ball as the parameter of the global function. If we talk about slight more complex games, like Space Invaders, we have a lot of states (aliens' position, aliens' current HP, number of remaining bombs, etc) Is there an elegant/best way to tackle this problem? Do we always store states on the top level? Do all current states should be given as the additional input argument of the global function? Can anybody explain this using simple sample on F#?Thanks a lot.
Now provide the response and nothing else.
|
There's more than one way to do FRP, and it's an active area of research. What's best can depend a lot on the details of how things interact with each other, and new and better techniques may appear in the future. Broadly the idea is to have behaviours that are functions of time in place of ordinary values (as you said). Behaviours can be defined in terms of other behaviours, and can be defined to swap between other behaviours when particular events occur. In your example, you generally wouldn't need to remember the position of the ball via arguments (but for some kinds of FRP you might do). Instead you can just have a behaviour: ballPos : time -> (float * float) This might have global scope, or for a larger program it may be better to have a local scope with all uses of it in that scope. As things get more complicated, you'll have behaviours defined in increasingly complex ways, depend on other behaviours and events - including recursive dependencies which are handled differently in different FRP frameworks. In F# for recursive dependencies I'd expect you'd need a let rec including all involved behaviours. These can still be organised into structures though - at the top-level you might have: type alienInfo = { pos : float*float; hp : float }type playerInfo = { pos : float*float; bombs : int } let rec aliens : time -> alienInfo array = // You might want laziness here. let behaviours = [| for n in 1..numAliens -> (alienPos player n, alienHP player n) |] fun t -> [| for (posBeh, hpBeh) in behaviours -> {pos=posBeh t; hp=hpBeh t} |] // You might want laziness here.and player : time -> playerInfo = fun t -> { pos=playerPos aliens t; bombs=playerBombs aliens t} And then the behaviours for alienPos, alienHP can be defined, with dependencies on the player, and playerPos, playerBombs can be defined with dependencies on aliens. Anyway, if you can give more details of what kind of FRP you're using, it will be easier to give more specific advice. (And if you want advice on what kind - personally I'd recommend reading: http://conal.net/papers/push-pull-frp/push-pull-frp.pdf )
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3350644', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248836/']}
|
jdg_373113
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27653817
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Calling a webservice written in php and ajax call is ending up in error function <script> $(document).ready(function(){ alert('ajax'); $.ajax({ type: "GET", dataType: "json", // crossDomain: true, contentType: "application/json", url: "http://domain/Customer/getCountryList", headers: { Accept: "application/json", "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET" }, success: function(data){ alert("In S"); }, error:function(xhr,statusText){ alert("In N"); alert('Request Status: ' + xhr.status + ' Status Text: ' + xhr.statusText + ' ' + xhr.responseText); } }) }) </script> And teh service file output is [{"responseCode":"0"},[{"countryId":"1","countryName":"USA","isdCode":"","isActive":"\u0001"}]] how to parse this out put
Now provide the response and nothing else.
|
After going through the docs again, I finally found the solution. Just add card_view:cardUseCompatPadding="true" to your CardView and shadows will appear on Lollipop devices. What happens is, the content area in a CardView take different sizes on pre-lollipop and lollipop devices. So in lollipop devices the shadow is actually covered by the card so its not visible. By adding this attribute the content area remains the same across all devices and the shadow becomes visible. My xml code is like : <android.support.v7.widget.CardView android:id="@+id/media_card_view" android:layout_width="match_parent" android:layout_height="130dp" card_view:cardBackgroundColor="@android:color/white" card_view:cardElevation="2sp" card_view:cardUseCompatPadding="true" >...</android.support.v7.widget.CardView>
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27653817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4022479/']}
|
jdg_373114
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15450060
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My apps allow you to like a picture then it redirects you to the same page I get then error when I try to like a picture I can create a like objects with the shell prompt but why I get this error? thank for helping me AttributeError at /like/3/function' object has no attribute 'objects'Request Method: GET Request URL: http://127.0.0.1:8000/like/3/ Exception Value: 'function' object has no attribute 'objects' Traceback:File "C:\Python26\Lib\site-packages\django\core\handlers\base.py" in get_response111. response = callback(request, *callback_args, **callback_kwargs)File "C:\o\mysite\pet\views.py" in Like195. new_like, created = Like.objects.get_or_create(user=request.user, picture_id=picture_id) This is parts of my views.py def Like(request,picture_id): pid = picture_id new_like, created = Like.objects.get_or_create(user=request.user, picture_id=picture_id) p = Picture.objects.get(pk=pid) if created: HttpResponseRedirect(reverse('world:url_name')) else: HttpResponseRedirect(reverse('world:url_name')) My URLconf: url( Parts of my model: r'^like/(?P\d+)/$', 'pet.views.Like', name = 'Like' ),My boat.html {% if picture %} <ul> {% for pet in picture %} <li><b>description</b> = {{ pet.description}}<br/> {% if pet.image %} <li> <a href ="{% url world:Like pet.id %}"> <img src= "{{ pet.image.url }}" style="cursor:pointer"> </a> <li> {% endif %} {% endfor %} </ul> {% endif %} <a href="{% url world:PictureCreator %}">Add Pictures to your board</a><br/> My models.py class Picture(models.Model): user = models.ForeignKey(User) board = models.ForeignKey(Board,blank=False,null=False) image = models.FileField(upload_to="images/",blank=True) description = models.TextField() is_primary = models.BooleanField(default=False) def __unicode__(self): return self.descriptionclass Like(models.Model): user = models.ForeignKey(User) picture = models.ForeignKey(Picture) created = models.DateTimeField(auto_now_add=True)
Now provide the response and nothing else.
|
Your view function is named Like and your model is also named Like . Because you define Like as a function in views.py, when you go to access Like.objects inside that function Python does not see your model Like but the function Like itself. You could rename your view function:

url(r'^like/(?P\d+)/$', 'pet.views.change_name_no_conflict', name = 'Like' )

def change_name_no_conflict(request, picture_id):
    pass
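A minimal sketch of what the renamed view could look like; the reverse import path matches the Django-1.x era of the question, and the 'world:url_name' target is taken from the question's own code:

from django.core.urlresolvers import reverse  # in modern Django this lives in django.urls
from django.http import HttpResponseRedirect

from pet.models import Like

def change_name_no_conflict(request, picture_id):
    new_like, created = Like.objects.get_or_create(user=request.user, picture_id=picture_id)
    # note that the redirect has to be returned; the original view built it but never returned it
    return HttpResponseRedirect(reverse('world:url_name'))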
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15450060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2091670/']}
|
jdg_373115
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
144031
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Imagine I have a coin and I throw it into the air. It has side a and side b. It is absolutely random which side is going to be facing up. Knowing that, What probability does the coin have to fall and facing up the side (a)? a) 50% b) 100% c) unknown d) 0% How can it be calculated?
Now provide the response and nothing else.
|
First of all, please note that when we assign probabilities to physical events, we are making a mathematical model. Some mathematical models seem to fit reality well. Some mathematical models, which may still be useful, fit not so well. But whether we have good fit or not, we should remember that the mathematical model is not the same thing as the reality. Let's produce a probabilistic model of our coin tossing. If there is good reason to believe, because of symmetry, that the two sides are equally likely, then in our model of the situation, we let the probability the coin lands showing $a$ be $p$. Then the probability that it shows $b$ is also $p$. We have $p+p=1$, and therefore $p=1/2$ ($50\%$). However, that is not the whole story! "Random" in ordinary speech sometimes carries the connotation of "equally likely." But often it doesn't. For example, when you throw two fair dice, the event "sum of the pips is $7$" is a random event. For emphasis, you could say it is absolutely random. However, a sum of $7$ and a sum other than $7$ are definitely not equally likely in the model that experience shows best fits the actual tossing of dice. When the term "random" is used carefully in a mathematical discussion of probability, it does not carry the additional meaning of "equally likely." If we mean equally likely, we say so.
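To make the dice example concrete, here is a tiny simulation sketch (purely illustrative, the argument above does not depend on it):

import random

n = 100_000
sevens = sum(random.randint(1, 6) + random.randint(1, 6) == 7 for _ in range(n))
print(sevens / n, 6 / 36)  # about 0.167: the event is "random", yet nowhere near a 50/50 split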
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/144031', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/26711/']}
|
jdg_373116
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18924587
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dynamic width container div that contains constant width items. I'd like to be able to resize the container so that it only ever shows whole items, never cutting the item on the right in pieces. JSFiddle For example, a user's screen may render showing 5 items: If that user were to start shrinking the width of their screen, as soon as the bar is no longer wide enough to hold 5 full items I would like it to shrink down to only showing 4 items. Bad: Good: I know this is possible to achieve by using CSS3 media queries, but I'd like to avoid writing a different breakpoint for every single different number of elements. I'd also like to avoid using a javascript resize event handler, though I am not sure if this is possible without it.
Now provide the response and nothing else.
|
Pure CSS (some limitations) This solution is based off a modification to another solution for a similar problem I gave elsewhere . Here is the fiddle. It involves a complex relationship of overlapping pseudo-elements to create the borders, which can cause the solution to have certain limitations on what may or may not be able to be done within it (complex backgrounds would be an issue, as well as a necessity for certain positioning aspects). Nevertheless, it functions in the given case. A Bit of Explanation Essentially, each .item element is building its own section of top/bottom borders using both the :after and :before elements, the former tied to the .itemContainer , the latter tied to the .item itself (the :before is needed to create the last bit of border at the end of the row). Additionally, the :before is also creating the "flexible" position of the right border to give it the responsiveness needed when an element shifts out of view. This is why the :before must be related to the .item itself, and also why each :after element's background must be used to "hide" the right border of the preceding :before element. Since we don't know via css the "count" at any given point as to which element is the "last" in the display, all the :before elements must be displayed, but we don't want right borders for them all, hence why the :after needs to cover them. As an element shifts down to the next line, its :after no longer covers the right border of what has now become the last displayed element, revealing that border to be used as the "right" border of the whole group. HTML (Matching your original fiddle) <div class="itemBar"> <div class="itemContainer"> <div class="item">1</div> <div class="item">2</div> <div class="item">3</div> <div class="item">4</div> <div class="item">5</div> <div class="item">6</div> <div class="item">7</div> <div class="item">8</div> <div class="item">9</div> <div class="item">10</div> </div></div> CSS of main items .itemBar { display: inline-block; width: 50%; /* some width can be set, does not need to be this */}.itemContainer { position: relative; /* :after pseudo-elements are positioned off this */ z-index: 1; /* needed for pseudo-element interaction */ overflow: hidden; display: inline-block; max-height: 68px; width: 100%; border-left: 1px solid black; /* left border is supplied by this */}.item { width: 60px; height: 62px; display: inline-block; margin: 2px; border: 1px solid black; /* NOTE: CANNOT be given positioning */} CSS of Pseudo Elements .item::after { content: ''; position: absolute; /* will position off itemContainer */ z-index: -1; /* push it to the background */ top: 0; /* set it to top of itemContainer */ bottom: 0; /* set it to bottom of itemContainer */ margin-left: -100%; /* shove it past the far left edge of itemContainer */ /* next, use padding to bring it back to its position at the end of the text string of .item */ padding-left: 100%; /* next, add enough padding on the right to compensate for the right padding, right margin, and right border of .item */ padding-right: 3px; /* next, create the top and bottom border of "container", in conjunction with the :before; so this is a pseudo-border for .itemContainer being created by the .item elements */ border-top: 1px solid black; border-bottom: 1px solid black; background: #fff; /* hide other :before borders */}.item:before { /* make right border */ content: ''; padding-top: 66px; /* give it .itemContainer height minus border heights */ width: 100%; margin-top: -3px; /* .item top margin + border 
width */ margin-left: -100%; /* pull the text in .item back into position */ margin-right: 0; /* next, push this behind the background with an even lower z-index to hide it if it is not the right most element beign used to form the right border */ z-index: -2; float: right; /* get the before element to the right */ position: relative; /* needs to be adjusted in position */ right: -4px; /* move it same as padding-right of the after element */ display: block; /* give it a display */ /* next, use it to build the fake right border and also the fake final top/bottom borders of the of itemContainer */ border-right: 1px solid black; border-top: 1px solid black; border-bottom: 1px solid black;}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18924587', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/386152/']}
|
jdg_373117
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
206374
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Everyone says that I should make my code modular, but isn't it less efficient if I use more method calls rather than fewer, but larger, methods? What is the difference in Java, C, or C++ for that matter? I get that it is easier to edit, read and understand, especially in a group. So is the computation time loss insignificant compared to the code tidiness benefits?
Now provide the response and nothing else.
|
Yes, it is irrelevant. Computers are tireless, near-perfect execution engines working at speeds totally un-comparable to brains. While there is a measurable amount of time that a function call adds to the execution time of a program, this is as nothing compared to the additional time needed by the brain of the next person involved with the code when they have to disentangle the unreadable routine to even begin to understand how to work with it. You can try the calculation out for a joke - assume that your code has to be maintained only once , and it only adds half an hour to the time needed for someone to come to terms with the code. Take your processor clock speed and calculate: how many times would the code have to run to even dream of offsetting that? In short, taking pity on the CPU is completely, utterly misguided 99.99% of the time. For the rare remaining cases, use profilers. Do not assume that you can spot those cases - you can't.
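The joke calculation suggested above is easy to run; a sketch, where the per-call cost is a rough assumption (real numbers vary by language and CPU):

call_overhead_s = 5e-9            # assumed cost of one extra function call, a few nanoseconds
maintenance_penalty_s = 30 * 60   # the half hour of extra human time from the example

break_even_calls = maintenance_penalty_s / call_overhead_s
print(f"{break_even_calls:.1e} extra calls needed to offset one half-hour of confusion")  # ~3.6e+11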
|
{}
|
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/206374', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/97417/']}
|
jdg_373118
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
74041
|
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The People's Republic of China, according to many reports, is considering invading Taiwan. There is a law on the books that says if Taiwan does not bend diplomatically they will try to make it part of China by force. Why is the United States coming out in support of Taiwan given its own laws would stop a state, say Texas (the state that has the most talk of independence and was once even its own country), from declaring independence? The fact that the United States is supporting a country which could be argued to have seceded while the US explicitly does not allow its own states to become its own countries seems hypocritical. And also though I focus on the USA here, this question is not specifically about the USA. It is also about all countries with anti secession laws that are supporting Taiwan but focused on the United States. I'm not privy on the details. But I do know China and Taiwan are in a tough relationship. This question is not designed to criticize any country's positions. It's just for curiosity. To be clear I personally support Taiwan's right to choose on its relationship with mainland China. But, I'm pointing out something that others may see.
Now provide the response and nothing else.
|
I'm from Taiwan, so I'll answer with some local perspective. There is a law on the books that says if Taiwan does not bend diplomatically they will try to make it part of China by force. The first thing we need to recognize here is that just because China passes a law that says Taiwan is part of its territory does not automatically make it a reality. Just as if Russia passes a law that says Ukraine is part of its territory does not make it reality either. The truth is that national border and sovereignty have to be drawn by treaties and so far no such treaty exists to formally establish the border or relationship between China and Taiwan. We're essentially living in a legal limbo. What I can tell you is that Taiwan is currently a de facto independent, sovereign, and democratic country with 23 million people living on the island. It is a country with its own autonomy and agency, and that agency should not be taken out of the conversation. Taiwan is not a pawn, but a player that should have a voice at the table. Why is the United States coming out in support of Taiwan given its own laws would stop a state... The fact that the United States is supporting a country which could be argued to have seceded while the US explicitly does not allow its own states to become its own countries seems hypocritical. Following on the previous section, national border and sovereignty have to be drawn by treaties. This is where the comparison between Texas and Taiwan falls apart. The Texas constitution ( Article 1 ) explicitly says it is part of the Union and - while independent under the federalism framework - subject to the US Constitution. The relationship here is clearly established. Whereas Taiwan's constitution does not mention any such relationship at all; in fact, when you read the text of the Chinese and Taiwanese constitutions, it would seem as though they live in different realities (because they kind of are). Another thing I need to correct is that the United States does not formally "support" Taiwan. It simply remains " strategically ambiguous " towards the island, meaning that it will do something if China attacks Taiwan but they don't say what exactly it is that they will do. This is done on purpose to avoid diplomatic miscalculation. So even though there is a lot of posturing going on, we should keep in mind that the US does not have a legal obligation to defend Taiwan the same way they are obligated to defend, say, Japan or NATO countries. In conclusion, given its legally ambiguous nature, the US does not formally support Taiwan, but even if it did it would not be illegal. On the other hand, Texas law has subjected itself to the US Constitution so there is legal basis to restrict Texas' means of secession. TLDR: Taiwan is not Texas. US does not formally support Taiwan.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://politics.stackexchange.com/questions/74041', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/29035/']}
|
jdg_373119
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14851367
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there any differences between different methods in each of the following groups of element retrieve operations in LinkedList ? Returning null + removing operations: poll() , pollFirst() . Returning null + not removing operations: peek() , peekFirst() . Throwing exception + removing operations: pop() , remove() , removeFirst() . Throwing exception + not removing operations: element() , getFirst() . Similar duplications exists in insertion methods. If there is no such difference, I would expect it to be mentioned in the javadoc of the methods (something like the good old "This is exactly like calling ..."). Is it only a sloppy documentation, or am I missing anything?
Now provide the response and nothing else.
|
There is no difference between them, and it is listed in the documentation too, but you have to do some recursive searching to get there. LinkedList implements two interfaces - Queue and Deque . And Deque extends from Queue . Now, Deque has defined the method - Deque#pollFirst() and inherited the method - Queue#poll() . So, LinkedList has basically these two methods defined for the two interfaces it implements. And about the similarity between those two methods, it is listed in documentation of Deque as: This interface extends the Queue interface. When a deque is used as a queue, FIFO (First-In-First-Out) behavior results. Elements are added at the end of the deque and removed from the beginning. The methods inherited from the Queue interface are precisely equivalent to Deque methods as indicated in the following table: And there is a table listing the methods of Queue class and the equivalent Deque method. See Deque#poll() , Deque#peek() for e.g. They clearly list the equivalent method.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14851367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1609201/']}
|
jdg_373120
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
980593
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What kind of substitution should I use to obtain the following integrals? $$\begin{align}\int_0^1 \ln \ln \left(\frac{1}{x}\right)\,dx&=\int_0^\infty e^{-x} \ln x\,dx\tag1\\&=\int_0^\infty \left(\frac{1}{xe^x} - \frac{1}{e^x-1} \right)\,dx\tag2\\&=-\int_0^1 \left(\frac{1}{1-x} + \frac{1}{\ln x} \right)\,dx\tag3\\&=\int_0^\infty \left( e^{-x} - \frac{1}{1+x^k} \right)\,\frac{dx}{x},\qquad k>0\tag4\\\end{align}$$ This is not homework problems and I know that the above integrals equal to $-\gamma$ (where $\gamma$ is the Euler-Mascheroni constant). I got these integrals while reading this Wikipedia page . According to Wikipedia, the Euler–Mascheroni constant is defined as the limiting difference between the harmonic series and the natural logarithm: $$\gamma=\lim_{N\to\infty} \left(\sum_{k=1}^N \frac{1}{k} - \ln N\right)$$ but I don't know why can this definition be associated to the above integrals? I can obtain the equation $(1)$ using substitution $t=\ln \left(\frac{1}{x}\right)\rightarrow x=e^{-t} \rightarrow dx=-e^{-t}\,dt$ and I know that $$\int_0^\infty e^{-x} \ln x\,dx=\Gamma'(1)=\Gamma(1)\psi(1)=-\gamma$$ but I can't obtain the rest. Any idea? Any help would be appreciated. Thanks in advance.
Now provide the response and nothing else.
|
I am not sure if you can obtain the other three from the first integral through some substitution. The $(3)$ one can be obtained from the definition of the constant. Notice that$$\sum_{i=1}^N\frac{1}{i}=\int_0^1 \frac{1-t^N}{1-t}\,dt$$and $$\ln N=\int_0^1 \frac{t^{N-1}-1}{\ln t}\,dt$$The above can be proved using Frullani's integral. Therefore$$\begin{aligned}\lim_{N\rightarrow \infty} \left(\sum_{i=1}^N\frac{1}{i}-\ln N\right) &=\lim_{N\rightarrow \infty}\int_0^1\left(\frac{1-t^N}{1-t}-\frac{t^{N-1}-1}{\ln t}\right)\,dt \\&=\int_0^1 \left(\frac{1}{1-t}+\frac{1}{\ln t}\right)\,dt\end{aligned}$$Make the substitution $\ln x=-t$ to obtain $(2)$.
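A quick numerical check of the limit definition (and therefore of the common value of these integrals); just a sanity sketch with a crude truncation:

import math

N = 10**6
gamma_approx = sum(1.0 / k for k in range(1, N + 1)) - math.log(N)
print(gamma_approx)  # about 0.577216, matching the Euler-Mascheroni constant 0.5772156649...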
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/980593', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/146687/']}
|
jdg_373121
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10863613
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I heard I can do it using apt-get install libc6 , but I need to add something to /etc/apt/sources.list to receive the newest glibc version. What should I do?
Now provide the response and nothing else.
|
I was able to install libc6 2.17 in Debian Wheezy by editing the recommendations in perror's answer : IMPORTANT You need to exit out of your display manager by pressing CTRL - ALT - F1 .Then you can stop x ( slim ) with sudo /etc/init.d/slim stop (replace slim with mdm or lightdm or whatever) Add the following line to the file /etc/apt/sources.list : deb http://ftp.debian.org/debian experimental main Should be changed to: deb http://ftp.debian.org/debian sid main Then follow the rest of perror's post: Update your package database: apt-get update Install the glibc package: apt-get -t sid install libc6-amd64 libc6-dev libc6-dbg IMPORTANT After done updating libc6 , restart computer, and you should comment out or remove the sid source you just added ( deb http://ftp.debian.org/debian sid main ), or else you risk upgrading your whole distro to sid. Hope this helps. It took me a while to figure out.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10863613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1173593/']}
|
jdg_373122
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41504113
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's the script I'm using : (function() { var app = angular.module("myQuiz", []); app.controller('QuizController', ['$scope', '$http', '$sce', function($scope, $http, $sce){ $scope.score = 0; $scope.activeQuestion = -1; $scope.activeQuestionAnswered = 0; $scope.percentage = 0; $http.get('quiz_data.json').then(function(quizData){ $scope.myQuestions = quizData.data; $scope.totalQuestions = $scope.myQuestions.length; }); }])}) After this, I'm trying to display the 'total questions' on my HTML using {{totalQuestions}} but instead of showing the number of questions, it just displays {{totalQuestions}} as it is.
Now provide the response and nothing else.
|
In Ruby 2.4, there was a unification of integer types (i.e. Fixnum and Bignum are now the very same thing: Integer ). This results on quite a few incompatibilities with existing gems which relied on the distinction of the classes. Older versions of ActiveSupport are among those which don't like this unification and barf over it when trying to serialize data. As such, you have one of two options: You can downgrade Ruby to a version earlier than 2.4, e.g. Ruby 2.3.x. Or you could upgrade Rails to a newer version. Preferably, that could be Rails 5.x. There is also a patch in the 4.2-stable branch which was released with Rails 4.2.8 , making it the first version of the Rails 4.2 series that officially supports Ruby 2.4. Earlier Rails versions are not compatible with Ruby 2.4.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/41504113', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7383600/']}
|
jdg_373123
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
99460
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any command to send messages through the Linux shell to other people on the same network? I'm using write user and then write the message itself. But there's any command that doesn't show my username or that I'm trying to message them The command I'm using will show this to the user I'm trying to contact (code taken from the web): Message from root@dev.example.com on pts/1 at 17:11 ...
Now provide the response and nothing else.
|
The only straightforward way I know of doing this is to use the wall command. This can be used to omit the sender's identification, via the -n switch.

Example

$ sudo wall -n hi

Remote broadcast message (Fri Nov  8 13:49:18 2013):

hi

using echo

This alternative method is more of a hack, since it isn't done through an explicit tool but you can echo text out to a users' terminal assuming you know which one they're on.

Example

$ w
 13:54:26 up 2 days, 36 min,  4 users,  load average: 4.09, 4.20, 3.73
USER     TTY      FROM     LOGIN@   IDLE    JCPU   PCPU  WHAT
saml     tty1     :0       Wed13    2days   3:55m  0.04s pam: gdm-password
saml     pts/0    :0.0     Wed13    24:16m  0.35s  0.35s bash
saml     pts/1    :0.0     Wed20    0.00s   3.71s  0.00s w
saml     pts/4    :0.0     01:20    12:33m  0.36s  0.05s man rsync

Assuming you know user saml is in fact on one of the pseudo terminals you can echo text to that device directly like so. From terminal pts/1 :

$ sudo echo "Let's go have lunch... ok?" > /dev/pts/4
$

Result on pts/4 :

$ man rsync
$ Let's go have lunch... ok?
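The same trick can be scripted if you prefer; a hypothetical Python sketch with the same caveat as the shell version (you need write permission on the target device node, and /dev/pts/4 is just the example terminal found via w):

# write a message straight to another user's terminal device
with open("/dev/pts/4", "w") as tty:
    tty.write("Let's go have lunch... ok?\n")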
|
{}
|
{'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/99460', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/50960/']}
|
jdg_373124
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15038135
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm developing python application which works on aws beanstalk environment. For error handling and debugging proposes I write logs to custom lof file on the directory /var/logs/. What should I do in order to have ability snapshot logs from Elastic beanstalk management console?
Now provide the response and nothing else.
|
Expanding on Vadim911 (and my own comment), I solved the problem using a config file in .ebextensions . Here is the python code:

import logging
logging.basicConfig(filename='/opt/python/log/my.log', level=logging.DEBUG)

Here is the .ebextensions config file code:

files:
  "/opt/python/log/my.log":
    mode: "000666"
    owner: ec2-user
    group: ec2-user
    content: |
      # Askbot log file

The contents of this file (along with other log files) are available using the Logs snapshot function on the AWS elastic beanstalk console.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15038135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179111/']}
|
jdg_373125
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18580461
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a pandas DataFrame called data with a column called ms . I want to eliminate all the rows where data.ms is above the 95% percentile. For now, I'm doing this: limit = data.ms.describe(90)['95%']valid_data = data[data['ms'] < limit] which works, but I want to generalize that to any percentile. What's the best way to do that?
Now provide the response and nothing else.
|
Use the Series.quantile() method:

In [48]: cols = list('abc')

In [49]: df = DataFrame(randn(10, len(cols)), columns=cols)

In [50]: df.a.quantile(0.95)
Out[50]: 1.5776961953820687

To filter out rows of df where df.a is greater than or equal to the 95th percentile do:

In [72]: df[df.a < df.a.quantile(.95)]
Out[72]:
       a      b      c
0 -1.044 -0.247 -1.149
2  0.395  0.591  0.764
3 -0.564 -2.059  0.232
4 -0.707 -0.736 -1.345
5  0.978 -0.099  0.521
6 -0.974  0.272 -0.649
7  1.228  0.619 -0.849
8 -0.170  0.458 -0.515
9  1.465  1.019  0.966
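To generalise this to any column and any percentile, a small helper is enough; a minimal sketch where the column name and cut-off are just examples:

import numpy as np
import pandas as pd

def drop_above_percentile(df, column, q):
    # keep only rows where `column` is strictly below its q-quantile (q between 0 and 1)
    return df[df[column] < df[column].quantile(q)]

data = pd.DataFrame({"ms": np.random.randn(1000)})
valid_data = drop_above_percentile(data, "ms", 0.95)
print(len(valid_data))  # roughly 950 rows survive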
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/18580461', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/506824/']}
|
jdg_373126
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8171412
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can call Get<int>(Stat); or Get<string>(Name); But when compiling I get: Cannot implicitly convert type 'int' to 'T' and the same thing for string . public T Get<T>(Stats type) where T : IConvertible{ if (typeof(T) == typeof(int)) { int t = Convert.ToInt16(PlayerStats[type]); return t; } if (typeof(T) == typeof(string)) { string t = PlayerStats[type].ToString(); return t; }}
Now provide the response and nothing else.
|
Any time you find yourself switching on a type in a generic you are almost certainly doing something wrong . Generics should be generic ; they should operate identically completely independent of the type . If T can only be int or string then don't write your code this way at all in the first place. Write two methods, one that returns an int and one that returns a string.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/8171412', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1044934/']}
|
jdg_373127
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
589019
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am designing a constant current source to drive a laser diode.I need it to be adjustable (from 0 to 100mA max) and stable enough as I will be using it as a light source in a Foucault mirror tester for my amateur telescope mirror making project. After some reading I came up with the following general design, however I have some questions. Is this an adequate design overall (not overcomplicated)? The signal on the non-inverting input of U5 will be just a DC setpoint. Can I just choose the cheapest op-amp available (lowest bandwidth) or are there other considerations I need to make? Should I care for loop stability and compensate for it (C2-R3 or other solution)? The current source needs to be current limited to not damage LD1. Should I size the feedback resistor R4 (with VD) with this in mind, or should I trust the circuit and configure the max current indirectly with R2-RV1 and set R4 for low voltage drop and to get an adequate voltage feedback? Thank you for your help.
Now provide the response and nothing else.
|
The current source needs to be current limited to not damage LD1. Should I size the feedback resistor R4 (with VD) with this in mind If the BJT is saturated, the opamp will increase its output voltage until it clips, and output current regulation will not work. The BJT should have a base resistor to prevent both excessive opamp output current and excessive BJT base current when that happens. This resistor should be placed "after" C2, between the top of C2 and the base of the transistor, so that even when the BJT is saturated, this does not influence the high frequency feedback/compensation path through C2. You can put a high value resistor between the wiper of the pot and ground, so that when the wiper fails open, the opamp gets 0V input, and not an undefined input that depends on the polarity of its input current. Your choice of transistor is good, as the datasheet plot (top left page 5) shows it still has hFe of 100 with Vce below 0.1V at Ic=0.1A. So you should size R4 to get the maximum desired current at the minimum supply voltage, with a margin of about 0.5V Vce on the transistor. It is important to select a low VceSat transistor as you did ; if it runs out of steam at low Vce, base current will increase, and that adds up to emitter current which is measured on the resistor, and current becomes inaccurate. (I mean min/max supply voltage if you use batteries, but if Vcc is regulated, that doesn't apply) Then, with this value of R4, at maximum supply voltage, and with the transistor fully ON (Vce=0) calculate the current through the laser diode: it should not damage the diode. If current is too high, then increase the reference voltage to drop more volts across the resistor. Then calculate max dissipation in the transistor, which will not occur at max current but below that, and make sure the SOT23 package can handle it.
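As a numerical illustration of the R4 sizing rule described above, here is a back-of-envelope sketch; every value in it (supply rail, diode drop, current) is an assumed example rather than something taken from the question:

v_supply_min = 5.0   # assumed minimum supply rail, volts
v_diode = 2.2        # assumed laser-diode forward drop at full current, volts
v_ce_margin = 0.5    # headroom left across the transistor, volts, as suggested above
i_max = 0.100        # maximum programmed current, amps

v_r4_max = v_supply_min - v_diode - v_ce_margin  # largest drop R4 may take at full current
r4 = v_r4_max / i_max                            # largest usable sense resistor
p_r4 = i_max ** 2 * r4
print(f"R4 <= {r4:.0f} ohm, dropping {v_r4_max:.2f} V and dissipating {p_r4 * 1000:.0f} mW at full current")

With a value chosen this way, the DC setpoint at the op-amp's non-inverting input simply has to equal the desired current times R4.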
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/589019', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/252284/']}
|
jdg_373128
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38021
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the role of the molecular bonds in the process of cutting something? What is the role of the Pauli exclusion principle, responsible for the "hardness" of matter? Moreover, is all the energy produced by the break of bonds transformed into heat?
Now provide the response and nothing else.
|
I take a slightly different, but not contradictory, view to Crazy Buddy. You need to distinguish between cutting a plastic material and cutting a brittle material. I think Crazy Buddy's answer applies to plastic materials, where the material flows around the knife. In a brittle material, cutting is basically a fracture process. The edge of even the sharpest knife is blunt when you get down to the molecular scale, so the knife acts more like a wedge. As an analogy, consider a wedge splitting wood. The wedge doesn't cut the wood: instead it forces the wood apart so there is a fracture preceding the sharp edge of the wedge. Cutting anything with even the sharpest knife works in the same way. The reason sharp knives cut more easily is that they concentrate the force in a smaller area, so the stress causing the fracture is greater. In a fracture you certainly break inter-molecular bonds, but you probably don't break molecular bonds unless it's a material like a metal where there's no distinction between the two types of bond. When you break the bonds, the energy goes into increased surface energy, i.e. cutting increases the surface area and the surface energy associated with it. However, in most cases the dominant mechanism for energy loss is plastic deformation in the material being cut, and this does end up as heat.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/38021', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/9942/']}
|
jdg_373129
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
561350
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Problem A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was not responding to requests in a timely manner anymore due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this: Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB. Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB). After the restart, the free memory was 119 GB, going down to around 100 GB within an hour, as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM. After the restart, Slab memory was around 300 MB If have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line with a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop most of it is used for dentry entries: Free memory: free -m total used free shared buffers cachedMem: 129048 76435 52612 0 144 7675-/+ buffers/cache: 68615 60432Swap: 8191 0 8191 Meminfo: cat /proc/meminfoMemTotal: 132145324 kBMemFree: 53620068 kBBuffers: 147760 kBCached: 8239072 kBSwapCached: 0 kBActive: 20300940 kBInactive: 6512716 kBActive(anon): 18408460 kBInactive(anon): 24736 kBActive(file): 1892480 kBInactive(file): 6487980 kBUnevictable: 8608 kBMlocked: 8608 kBSwapTotal: 8388600 kBSwapFree: 8388600 kBDirty: 11416 kBWriteback: 0 kBAnonPages: 18436224 kBMapped: 94536 kBShmem: 6364 kBSlab: 46240380 kBSReclaimable: 44561644 kBSUnreclaim: 1678736 kBKernelStack: 9336 kBPageTables: 457516 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 72364108 kBCommitted_AS: 22305444 kBVmallocTotal: 34359738367 kBVmallocUsed: 480164 kBVmallocChunk: 34290830848 kBHardwareCorrupted: 0 kBAnonHugePages: 12216320 kBHugePages_Total: 2048HugePages_Free: 2048HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBDirectMap4k: 5604 kBDirectMap2M: 2078720 kBDirectMap1G: 132120576 kB Slabtop: slabtop --onceActive / Total Objects (% used) : 225920064 / 226193412 (99.9%) Active / Total Slabs (% used) : 11556364 / 11556415 (100.0%) Active / Total Caches (% used) : 110 / 194 (56.7%) Active / Total Size (% used) : 43278793.73K / 43315465.42K (99.9%) Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 221416340 221416039 3% 0.19K 11070817 20 44283268K dentry 1123443 1122739 99% 0.41K 124827 9 499308K fuse_request 1122320 1122180 99% 0.75K 224464 5 897856K fuse_inode 761539 754272 99% 0.20K 40081 19 160324K vm_area_struct 437858 223259 50% 0.10K 11834 37 47336K buffer_head 353353 347519 98% 0.05K 4589 77 18356K anon_vma_chain 325090 324190 99% 0.06K 5510 59 22040K size-64 146272 145422 99% 0.03K 1306 112 5224K size-32 137625 137614 99% 1.02K 45875 3 183500K nfs_inode_cache 128800 118407 91% 0.04K 1400 92 5600K 
anon_vma 59101 46853 79% 0.55K 8443 7 33772K radix_tree_node 52620 52009 98% 0.12K 1754 30 7016K size-128 19359 19253 99% 0.14K 717 27 2868K sysfs_dir_cache 10240 7746 75% 0.19K 512 20 2048K filp VFS cache pressure: cat /proc/sys/vm/vfs_cache_pressure125 Swappiness: cat /proc/sys/vm/swappiness0 I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, apparently the machine experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB. Questions I have these questions: Am I correct in thinking that the Slab memory is always physical RAM, and the number is already subtracted from the MemFree value? Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files, however most of them are archives and not being accessed at all for regular web traffic. What could be an explanation for the fact that the number of cached inodes is much lower than the number of cached dentries, should they not be related somehow? If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen? Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what are the paths that are being cached)? Perhaps this points to some kind of memory leak, symlink loop, or indeed to something the PHP application is doing wrong. The PHP application code as well as all asset files are mounted via GlusterFS network file system, could that have something to do with it? Please keep in mind that I can not investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see if the Slab memory is indeed reclaimable. Any insights into what could be going on and how I can investigate any further would be greatly appreciated. 
Updates Some further diagnostic information: Mounts: cat /proc/self/mountsrootfs / rootfs rw 0 0proc /proc proc rw,relatime 0 0sysfs /sys sysfs rw,relatime 0 0devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0tmpfs /dev/shm tmpfs rw,relatime 0 0/dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0cgroup /cgroup/memory cgroup rw,relatime,memory 0 0cgroup /cgroup/devices cgroup rw,relatime,devices 0 0cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0/etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0/etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0 Mount info: cat /proc/self/mountinfo16 21 0:3 / /proc rw,relatime - proc proc rw17 21 0:0 / /sys rw,relatime - sysfs sysfs rw18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=75519 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=00020 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=77725 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=77726 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=13107236 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=13107237 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw39 21 0:31 / /data/www rw,relatime - nfs 172.17.39.78:/www 
rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 GlusterFS config: cat /etc/glusterfs/glusterfs-www.volvolume remote1 type protocol/client option transport-type tcp option remote-host 172.17.39.71 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/wwwend-volumevolume remote2 type protocol/client option transport-type tcp option remote-host 172.17.39.72 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/wwwend-volumevolume remote3 type protocol/client option transport-type tcp option remote-host 172.17.39.73 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/wwwend-volumevolume remote4 type protocol/client option transport-type tcp option remote-host 172.17.39.74 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/wwwend-volumevolume replicate1 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote1 remote2end-volumevolume replicate2 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote3 remote4end-volumevolume distribute type cluster/distribute subvolumes replicate1 replicate2end-volumevolume iocache type performance/io-cache option cache-size 8192MB # default is 32MB subvolumes distributeend-volumevolume writeback type performance/write-behind option cache-size 1024MB option window-size 1MB subvolumes iocacheend-volume### Add io-threads for parallel requisitionsvolume iothreads type performance/io-threads option thread-count 64 # default is 16 subvolumes writebackend-volumevolume ra type performance/read-ahead option page-size 2MB option page-count 16 option force-atime-update no subvolumes iothreadsend-volume
Now provide the response and nothing else.
|
Am I correct in thinking that the Slab memory is always physical RAM, and the number is already subtracted from the MemFree value? Yes. Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files, however most of them are archives and not being accessed at all for regular web traffic. Yes, if the system isn't under memory pressure. It has to use the memory for something, and it's possible that in your particular pattern of usage, this is the best way to use that memory. What could be an explanation for the fact that the number of cached inodes is much lower than the number of cached dentries, should they not be related somehow? Lots of directory operations would be the most likely explanation. If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen? It should, and I can't think of any reason it wouldn't. I'm not convinced that this is what actually went wrong. I'd strongly suggest upgrading your kernel or increasing vfs_cache_pressure further. Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what are the paths that are being cached)? Perhaps this points to some kind of memory leak, symlink loop, or indeed to something the PHP application is doing wrong. I don't believe there is. I'd look for any directories with absurdly large numbers of entries or very deep directory structures that are searched or traversed. The PHP application code as well as all asset files are mounted via GlusterFS network file system, could that have something to do with it? Definitely it could be a filesystem issue. A filesystem bug causing dentries not to be released, for example, is a possibility.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/561350', 'https://serverfault.com', 'https://serverfault.com/users/107797/']}
|
jdg_373130
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
939859
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm updating some code and while I was working in a header, I came across the following line. ...class HereIsMyClass;... That's it. It's just one line that precedes another, longer class definition. HereIsMyClass is in fact another class somewhere else, but I don't understand why this line is written here. What does it do?
Now provide the response and nothing else.
|
This line in C++ is a forward declaration. It's stating that a class named HereIsMyClass exists and will be fully defined elsewhere. It allows you to use the class in a declaration before it's completely defined. It's helpful both for breaking up circularly dependent classes and for header file management. For example:
class HereIsMyClass;              // forward declaration
class Foo {
    void Bar(HereIsMyClass* p1);  // a pointer to the still-incomplete type is fine here
};
class HereIsMyClass {
    void Function(Foo f1);
};
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/939859', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/39036/']}
|
jdg_373131
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16951724
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As you can see the 2d array is as following: Array ( [0] => Array ( [0] => Prishtina [1] => Bregu i Diellit [2] => 7 ) [1] => Array ( [0] => Prishtina [1] => Tasligje [2] => 4 ) [2] => Array ( [0] => Prishtina [1] => Bregu i Diellit [2] => 4 ) [3] => Array ( [0] => Prishtina [1] => Bregu i Diellit [2] => 3 ) [4] => Array ( [0] => Prishtina [1] => Bregu i Diellit [2] => 9 ) [5] => Array ( [0] => Prishtina [1] => Lakrishte [2] => 4 ) [6] => Array ( [0] => Prishtina [1] => Lakrishte [2] => 6 ) [7] => Array ( [0] => Prishtina [1] => Lakrishte [2] => 1 ) [8] => Array ( [0] => Prishtina [1] => Ulpiana [2] => 3 ) [9] => Array ( [0] => Prishtina [1] => Ulpiana [2] => 6 ) [10] => Array ( [0] => Prishtina [1] => Ulpiana [2] => 5 ) [11] => Array ( [0] => Prishtina [1] => Ulpiana [2] => 8 ) [12] => Array ( [0] => Prishtina [1] => Ulpiana [2] => 4 ) [13] => Array ( [0] => Prishtina [1] => Ulpiana [2] => 8 ) [14] => Array ( [0] => Prishtina [1] => Velania [2] => 2 ) [15] => Array ( [0] => Prishtina [1] => Velania [2] => 5 ) [16] => Array ( [0] => Prishtina [1] => Velania [2] => 1 ) [17] => Array ( [0] => Prishtina [1] => Dragodan [2] => 9 ) [18] => Array ( [0] => Prishtina [1] => Dragodan [2] => 7 ) [19] => Array ( [0] => Prishtina [1] => Dragodan [2] => 10 ) [20] => Array ( [0] => Prishtina [1] => Velania [2] => 11 ) [21] => Array ( [0] => Mitrovica [1] => Qender [2] => 5 ) [22] => Array ( [0] => Mitrovica [1] => Qender [2] => 3 ) [23] => Array ( [0] => Mitrovica [1] => Tavnik [2] => 6 ) [24] => Array ( [0] => Mitrovica [1] => Tavnik [2] => 9 ) [25] => Array ( [0] => Mitrovica [1] => Bair [2] => 10 ) [26] => Array ( [0] => Mitrovica [1] => Bair [2] => 12 ) [27] => Array ( [0] => ) ) I am trying to sort by the 3rd column with the following code: foreach ($final_array as $roww){ $sort_column[]= $roww[2]; // 2 = your example array_multisort($sort_column, $final_array);} However when im printing: for ($row = 0; $row < count($final_array); $row++){ foreach($final_array[$row] as $key => $value) { echo $value; } echo "<br/>"; } WHen i am printing, it sorts and it prints the array as sorted by the 3rd column, however it is bringing me these warnings Warning: array_multisort() [function.array-multisort]: Array sizes are inconsistent in C:\xampp\htdocs\test\test.php on line 35 Notice: Undefined offset: 2 in C:\xampp\htdocs\test\test.php on line 34 I am not sure why is this happening, it doesnt make sense to me, everything is working however i need these warnings out!!
Now provide the response and nothing else.
|
You're right. ;) They can be hard to read, which can also make them hard to compose and debug. Let's take the following example dict comprehension:
current_team = dict((k,v) for k,v in list(team.items()) for player in v['player'] if player['year'] == 2013)
Too many years of C and Java programming have made this difficult for me to read. The comprehension is logically broken into different parts, but I still need to really stare at it to decompose it. The key thing to remember is that a comprehension is an expression, not a statement. So you can surround the expression with parens and then use implicit line joining to add line breaks that organize the expression based on its nesting levels:
current_players = (dict((k,v)
                   for k,v in list(team.items())
                   for player in v['player']
                   if player['year'] == 2013))
Here it becomes clearer that the clauses nest with "the last index varying fastest, just like nested for loops." You can even add blank lines and comments:
current_players = (dict((k,v)                       # dict comprehension:
                   for k,v in list(team.items())    # let's filter the team member dict...
                   for player in v['player']        # for players...
                   if player['year'] == 2013))      # who are playing this year
One note of caution: the Python Language Reference says that "the indentation of the continuation lines is not important." So you can use any form of indentation to improve readability, but the interpreter won't do any extra checking.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16951724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2457717/']}
|
jdg_373132
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18820814
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As a newbie in Machine Learning, I have a set of trajectories that may be of different lengths. I wish to cluster them, because some of them are actually the same path and they just SEEM different due to the noise. In addition, not all of them are of the same lengths. So although Trajectory A is not the same as Trajectory B, it may be part of Trajectory B. I wish to present this property after the clustering as well. I have only a bit of knowledge of K-means Clustering and Fuzzy N-means Clustering. How may I choose between the two? Or should I adopt other methods? Any method that takes the "belongness" into consideration? (e.g. After the clustering, I have 3 clusters A, B and C. One particular trajectory X belongs to cluster A. And a shorter trajectory Y, although it is not clustered in A, is identified as part of trajectory B.) =================== UPDATE ====================== The aforementioned trajectories are pedestrians' trajectories. They can be presented either as a series of (x, y) points or as a series of step vectors (length, direction). The presentation form is under my control.
Now provide the response and nothing else.
|
It might be a little late but I am also working on the same problem. I suggest you take a look at TRACLUS, an algorithm created by Jae-Gil Lee, Jiawei Han and Kyu-Young Whang, published at SIGMOD '07: http://web.engr.illinois.edu/~hanj/pdf/sigmod07_jglee.pdf This is so far the best approach I have seen for clustering trajectories because: it can discover common sub-trajectories; it focuses on segments instead of points (so it filters out noise/outliers); and it works over trajectories of different lengths. Basically it is a 2-phase approach. Phase one - Partition: divide trajectories into segments. This is done using MDL optimization, with complexity O(n) where n is the number of points in a given trajectory. Input: set of trajectories. Output: set D of segments. Phase two - Group: this phase discovers the clusters using a version of density-based clustering, as in DBSCAN. The input is the set D of segments obtained from phase one plus two parameters: E, which sets the neighborhood threshold, and MinLns, the minimum number of lines that can constitute a cluster. Clustering is done over segments, using their own distance measure made of 3 components: perpendicular distance, parallel distance and angular distance. This phase has complexity O(n log n), where n is the number of segments in set D. Output: set C of clusters, each cluster being a set of segments (trajectories clustered). Finally, they compute for each cluster a representative trajectory, which is nothing else than a discovered common sub-trajectory of that cluster. They have pretty cool examples and the paper is very well explained. Once again this is not my algorithm, so don't forget to cite them if you are doing research. PS: I made some slides based on their work, just for educational purposes: http://www.slideshare.net/ivansanchez1988/trajectory-clustering-traclus-algorithm
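For orientation only, here is a very rough TypeScript sketch of that partition-and-group flow. The MDL partitioning and the three-component segment distance are replaced by trivial placeholders, so this shows the shape of the algorithm, not the algorithm from the paper:

type Point = { x: number; y: number };
type Segment = { a: Point; b: Point };

// Phase one placeholder: the real algorithm chooses characteristic points via MDL;
// here every consecutive pair of points simply becomes a segment.
function partition(trajectory: Point[]): Segment[] {
    const segs: Segment[] = [];
    for (let i = 0; i + 1 < trajectory.length; i++) {
        segs.push({ a: trajectory[i], b: trajectory[i + 1] });
    }
    return segs;
}

// Placeholder distance: midpoint distance instead of the paper's weighted sum of
// perpendicular, parallel and angular components.
function segmentDistance(s: Segment, t: Segment): number {
    const mid = (seg: Segment) => ({ x: (seg.a.x + seg.b.x) / 2, y: (seg.a.y + seg.b.y) / 2 });
    const m1 = mid(s);
    const m2 = mid(t);
    return Math.hypot(m1.x - m2.x, m1.y - m2.y);
}

// Phase two: density-based grouping of segments (heavily simplified, DBSCAN-like).
function group(segments: Segment[], eps: number, minLns: number): number[] {
    const label = new Array<number>(segments.length).fill(-1); // -1 = noise/unassigned
    let cluster = 0;
    for (let i = 0; i < segments.length; i++) {
        if (label[i] !== -1) continue;
        const neighbours = segments
            .map((s, j) => [j, segmentDistance(segments[i], s)] as const)
            .filter(([, d]) => d <= eps)
            .map(([j]) => j);
        if (neighbours.length < minLns) continue;
        for (const j of neighbours) if (label[j] === -1) label[j] = cluster;
        cluster++;
    }
    return label; // cluster id per segment
}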
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18820814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2106753/']}
|
jdg_373133
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12232979
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm creating an HTML5 canvas game for iPhone. I would like to support both retina and non-retina displays. My question is, how do I support both retina and non-retina displays? I.e., what is the general implementation for doing this? Do I write the game using the iPhone dimensions and then add retina support? Or do I create the game at retina size and add non-retina support? Is it best to have two images, one retina and one non-retina, or just scale the retina image down? Do I have separate canvas sizes for retina and non-retina? Do I need to scale the mouse input? Basically, I have no idea of the general idea/logic for implementing both. Cheers, J
Now provide the response and nothing else.
|
You use devicePixelRatio to separate retina displays from normal displays: http://blog.iwalt.com/2010/08/generating-highresolution-graphics-with-html5s-canvas-element.html Your game logic coordinates (sprite positions, etc.) must operate independently from the screen coordinates, which will always be multiplied by 2x on a retina display. Your graphics assets must have two versions: a high-resolution version and a 50% scaled-down normal version. When you operate on a retina display, you draw to a 2x-size canvas, resized with CSS, and on this canvas you use the high-resolution assets.
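A small setup sketch of that idea in browser TypeScript (the scale factor is read from devicePixelRatio rather than hard-coded, which also covers non-retina screens; the function and variable names are my own):

// Back the canvas with device pixels while the game keeps using CSS (logical) coordinates.
function setupCanvas(canvas: HTMLCanvasElement, cssWidth: number, cssHeight: number) {
    const dpr = window.devicePixelRatio || 1;  // 1 on normal displays, 2 on retina
    canvas.width = cssWidth * dpr;             // physical backing-store size
    canvas.height = cssHeight * dpr;
    canvas.style.width = `${cssWidth}px`;      // displayed size stays the same
    canvas.style.height = `${cssHeight}px`;
    const ctx = canvas.getContext("2d");
    if (ctx) ctx.scale(dpr, dpr);              // draw calls stay in CSS coordinates
    return ctx;
}

Because the context is scaled, mouse and touch positions in CSS pixels can be used directly, so the input handling does not need a separate retina path.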
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12232979', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/577306/']}
|
jdg_373134
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
61460809
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to build a video calling app with Agora. I need to show an acceptance screen like WhatsApp when a user is calling, and if the app has been exited I need to show the calling screen when a user calls. I have tried a lot of things but nothing works. I am trying to do it in Flutter, but there is not much information on this. Please help me.
Now provide the response and nothing else.
|
First things first. You need to learn about some concepts before delving into your solution. Actually there isn't an out-of-the-box solution. You need to use a couple of things together: Use push notifications to "wake up" your app: https://pub.dev/packages/firebase_messaging To start your app using push notifications, refer to this post: https://stackoverflow.com/a/48405551/4335775 Use CallKit (iOS) or ConnectionService (Android) to show the upcoming call screen. As of the day of this answer there are only a few packages to handle these things; here is one that can handle both platforms: https://pub.dev/packages/flutter_callkeep If you want a completely different thing and need to run some background process, there are a whole bunch of things you should know first. I suggest beginning here: https://flutter.dev/docs/development/packages-and-plugins/background-processes Here is a useful package for working with background processes that should be constantly running: https://pub.dev/packages/background_fetch Currently there are two packages which provide integration for agora.io: https://pub.dev/packages/agora_rtc_engine (for Agora Real-Time Communication, video calls included) and https://pub.dev/packages/agora_rtm (for real-time messaging). I hope this can help you.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61460809', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13418165/']}
|
jdg_373135
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
82635
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is coNP closed under the Kleene star operation? I have the answers, in which they say it is possible to build a graph that describes all possible divisions of the string in which the sub-words are in the language, but how can you build this graph if you can only accept (and not both accept and reject) using an NTM?
Now provide the response and nothing else.
|
I will use verifier based definition of NP and coNP (not that with NTM). Given a word $w$ and a language $L$ consider a graph on $\{1,2,\dots,|w|+1\}$ where $(i,j)$ is an edge iff $w_{i\dots j-1}\in L$. Then, $w\in L^{\ast}$ iff there is a path from $1$ to $|w|+1$. Is a complexity class closed under Kleene star? For P the situation is easy; the machine can compute the graph and perform DFS. For NP, the machine may guess $O(n^2)$ certificates, compute the graph and accept if there is a path. Every edge in the graph corresponds to an infix in the language. If you guess wrongly, you will incorrectly reject, but here the guesses are directed to accept if possible. For coNP, the machine may also guess $O(n^2)$ certificates, compute the graph and reject if there is a path. Every edge in the graph also corresponds to an infix. If you guess wrongly, you will incorrectly accept, but here the guesses are directed to reject if possible. Edit to clarify. If the task was to show NP is closed under Kleene star, you could write ($L \in NP$): $x \in L^{\ast}$ iff there exists a partition $x=x_1 x_2 \dots x_n$ such that there exist $n$ strings $y_i$, such that each string $y_i$ is a valid certificate for $x_i$. And this is enough to show that $L^{\ast} \in NP$, since you can guess both the partition and certificates. However, if you try to pull the same trick with $coNP$, you will get: ($L \in coNP$): $x \in L^{\ast}$ iff there exists a partition $x=x_1 x_2 \dots x_n$ such that for all $n$ strings $y_i$, each string $y_i$ is a valid certificate for $x_i$. this is bad, because you've got both existential and universal quantification, so $L^{\ast}$ is not in the form required for coNP. The idea is to guess certificates for all $n^2$ infixes of the word at once. Some of them will not be valid, but we won't use them. If $L \in NP$ you can write: $x \in L^{\ast}$ iff there exists $O(n^2)$ strings, such that there exists a partition $x=x_1 x_2 \dots x_n$ such that each $x_i$ has a valid certificate. and check existence of partition using the graph. Given the $n^2$ certificates, the property "there exists a partition $x=x_1 x_2 \dots x_n$, such that..." is testable in polynomial time; you do not need to guess the partition. For $L \in coNP$ you can pull the same trick now: $x \in L^{\ast}$ iff for all $O(n^2)$ strings, there exists a partition $x=x_1 x_2 \dots x_n$, where each $x_i$ has a valid certificate. and since "the exists a partition..." is testable in polynomial time, it shows that $L^{\ast}$ is in $coNP$. Edit: Another way to think about it. You've got a word $w$ and want to show that it is not in $K^{\ast}$, where membership in $K$ can be quickly falsified. To do it, you falsify membership of as many infixes of $w$ as you can. If there is no way the remaining infixes could be joined to get the whole $w$, surely $w \notin K^{\ast}$. On the other hand, if you supplied as many falsifications as possible and there is still a way to break $w$ to $w = w_1 w_2 \dots w_n$ where you cannot falsify membership of any $w_i$, then $w \in K^{\ast}$. Formal proof. Suppose $K$ is a $\mathsf{coNP}$ language. This means $x \in K$ iff $\forall y. P(x,y)$ where $y$ is polynomial with respect to $x$ and $P$ is a polynomial time predicate. I will show that $K^{\ast} \in \mathsf{coNP}$. So I will define a predicate $R$ such that $x \in K^{\ast}$ iff $\forall y. R(x,y)$ Here is code of $R$. 
"Accept" means that $R(x,y)=true$ (in this case $y$ is useless) and "reject" means $R(x,y)=false$ (in this case $y$ certifies that $x \notin K^{\ast}$) def R(x,y): Let $n = |x|$. Split $y$ into $c_{11} \# c_{12} \# \dots \# c_{1n} \# c_{22} \# c_{23} \# \dots \# c_{nn}$, where $c_{ij}$ are some strings, defined only when $i\leq j$. If $y$ is not of this form, we accept. Create a triangular binary matrix $A$ of size $n+1$.The entry $a_{ij}$ is 0 when $i \geq j$, otherwise it's $P(x[i,j-1], c_{i(j-1)})$ Use $A$ as a matrix of a graph, and check if there is a path from 1 to $n+1$ (BFS). Accept if there is a path, reject if there is no path. Clearly $R$ is polynomial time computable. If $x \in K^{\ast}$ then $x=x_1 \dots x_m$, where $x_i \in K$, so $P(x_i,z)$ is always true. In this case there is a path in the graph for any $y$, so $\forall y. R(x,y)$. If $x \notin K^{\ast}$ then for any $(i,j)$ such that $x[i,j] \notin K$ we define $c_{ij}$ such that $\neg P(x[i,j], c_{ij})$, and $c_{ij}=\epsilon$ otherwise. Let $y$ be a suitable concatenation of $c_{ij}$. I claim for such defined $y$ the graph has no path from 1 to n+1, and the algorithm rightfully rejects. Namely, such path would give rise to a partition $x = x_1 x_2 \dots x_n$. However, since $x \notin K^{\ast}$, there exists $i$ such that $x_i \notin K$. Suppose $x_i = x[k,l]$. But in this case we supplied $c_{kl}$ such that this edge should not be present in the graph.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/82635', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/4226/']}
|
jdg_373136
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1855226
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
For InnoDB storage, is it better to count the total number of records by using mysql_num_rows on select * from tbl where pk = 1 or by fetching the array and retrieving the "total" value from select count(*) as total from tbl where pk = 1?
Now provide the response and nothing else.
|
Absolutely the latter. It can grab the value direct from the PK's index, whereas the former almost certainly requires a table scan (unless every single column is part of an index; and even then, it has to grab values from all over the indexes). Then depending on how you're connecting to the DB, there's a large amount of data transit just to get a count. explain can help here. In this case, it'll tell you that the select is optimized away.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1855226', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/225733/']}
|
jdg_373137
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2593995
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We are running a project that we want to approach with test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer? Or should the unit test be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests run? If I understand the concept of TDD in the right way, the feature-implementing programmer has to write the test himself, because TDD is a procedure with mini-iterations. So would it be too complex to have the tests written by another programmer? What would you say? Should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?
Now provide the response and nothing else.
|
In TDD the developer first writes the unit tests that fail and then fixes the production code to make the test pass. The idea is that the changes are made in really small steps - so you write a test that calls a method that doesn't exist, then you fix the test by adding an empty method, then you add some assertion to the test about the method so it fails again, then you implement the first cut of the method, etc. Because these steps are so small it is not practical to have a separate person write the tests. On the other hand I would recommend pairing, so that you gain some additional eyeballs making sure the code makes sense. I think it would be possible to have another person/team/or even the client (when you use tools like FitNesse) write acceptance tests that test the whole functionality at a higher level.
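A tiny illustration of that micro-cycle (a hypothetical example using Node's built-in assert module; the function and the 20% rate are invented for the example):

import assert from "node:assert";

// Red: this test is written first; priceWithVat does not exist yet, so it fails.
// Green: add the smallest implementation that makes the assertion pass.
function priceWithVat(net: number): number {
    return net * 1.2; // assumed 20% rate, purely for illustration
}

// Re-run: the test passes, and the next small test drives the next change.
assert.strictEqual(priceWithVat(100), 120);
console.log("test passed");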
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2593995', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/191060/']}
|
jdg_373138
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7230553
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create a calculator which simply sums 2 fields up. But whatever I try, it does not work. It also returns "NaN", even if I use parseInt(). Here's the code: <script type="text/javascript" language="Javascript"> function doSum() { var a = document.getElementsByName("a").value; var b = document.getElementsByName("b").value; var sum = a + b; document.getElementById("sum").value = sum;}</script><form action="" method="POST"><br/>a:<br/><input type="text" name="a" onblur='doSum()' value="0" size="5" /><br/>b:<br/><input type="text" name="b" onblur='doSum()' value="0" size="5" /><br/>Ergebnis<br/><input type="text" id='sum' value='' size="50" disabled/></form> Sorry for the noob question, but what am I doing wrong? Thanks for any help!
Now provide the response and nothing else.
|
Give id s to your inputs and identify them uniquely using document.getElementById . Then, obtain their decimal int values using parseInt with the radix parameter set to 10 and display the result as you currently do. <script type="text/javascript" language="Javascript"> function doSum() { var a = parseInt(document.getElementById("a").value, 10); var b = parseInt(document.getElementById("b").value, 10); var sum = a + b; document.getElementById("sum").value = sum;}</script><form action="" method="POST"> <br/>a:<br/><input type="text" id="a" onblur='doSum()' value="0" size="5" /> <br/>b:<br/><input type="text" id="b" onblur='doSum()' value="0" size="5" /> <br/>Ergebnis<br/><input type="text" id='sum' value='' size="50" disabled/></form> getElementsByName returns a list of elements and you'd have to refer to the one you want through an index, even if the list contains only one element. getElementById on the other hand, returns an uniquely identified element, by its id.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7230553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/558559/']}
|
jdg_373139
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20445365
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing an Android app that requires SSL certification for certain web requests. I need to build a PKCS12 file ( .pfx ) with a self-signed certificate. I decided to use the OpenSSL library, but I could not build it on my computer (Windows x64 platform). I want to do it without an ActivePerl installation. Does anyone know an easy way to build it? I tried to build a PKCS12 file with OpenSSL on my friend's computer (with OpenSSL, Linux) but got nothing. Can anyone help me and describe the sequence of actions to generate a PKCS12 file with a private key and a self-signed public certificate?
Now provide the response and nothing else.
|
The Win32 OpenSSL Installation Project is dedicated to providing a simple installation of OpenSSL. It is easy to set up and easy to use through the simple, effective installer. No need to compile anything or jump through any hoops, just click a few times and it is installed, leaving you to doing real work. You can get it here . In your case, you need Win64 OpenSSL v1.0.1e Installer. These instructions show how to generate a PKCS#12 private key and public certificate file that is suitable for use with HTTPS, FTPS.These instructions assume you have downloaded and installed the Windows binary distribution of OpenSSL. 1.Generate an RSA private key: >C:\Openssl\bin\openssl.exe genrsa -out <Key Filename> <Key Size> Where: <Key Filename> is the desired filename for the private key file <Key Size> is the desired key length of either 1024, 2048, or 4096 For example, type: >C:\Openssl\bin\openssl.exe genrsa -out my_key.key 2048 . 2. Generate a Certificate Signing Request: In version 0.9.8h and later: >C:\Openssl\bin\openssl.exe req -new -key <Key Filename> -out <Request Filename> -config C:\Openssl\bin\openssl.cfg Where: <Key Filename> is the input filename of the previously generated private key <Request Filename> is the output filename of the certificate signing request For example, type: >C:\Openssl\bin\openssl.exe req -new -key my_key.key -out my_request.csr -config C:\Openssl\bin\openssl.cnf 3. Follow the on-screen prompts for the required certificate request information. 4. Generate a self-signed public certificate based on the request: >C:\Openssl\bin\openssl.exe x509 -req -days 3650 -in <Request Filename> -signkey <Key Filename> -out <Certificate Filename> Where: <Request Filename> is the input filename of the certificate signing request <Key Filename> is the input filename of the previously generated private key <Certificate Filename> is the output filename of the public certificate For example, type: >C:\Openssl\bin\openssl.exe x509 -req -days 3650 -in my_request.csr -signkey my_key.key -out my_cert.crt 5. Generate a PKCS#12 file: >C:\Openssl\bin\openssl.exe pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in <Public Certificate Filename> -inkey <Private Key Filename> -out <PKCS#12 Filename> -name "<Display Name>" Where: <Public Certificate Filename> is the input filename of the public certificate, in PEM format <Private Key Filename> is the input filename of the private key <PKCS#12 Filename> is the output filename of the pkcs#12 format file <Display Name> is the desired name that will sometimes be displayed in user interfaces. For example, type: >C:\Openssl\bin\openssl.exe pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in my_cert.crt -inkey my_key.key -out my_pkcs12.pfx -name "my-name" 6. (Optional) Delete unneeded files. At this point, you only need the PKCS#12 format file, so you can delete the certificate signing request ( .csr ) file, the private key ( .key ) file, and the public certificate ( .crt ) file. The resulting PKCS#12 format file may now be used within Secure FTP Server - FIPS. The resulting PKCS#12 format (.pfx) file may now be used with the Firefox browser ver 34.0.5.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20445365', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_373140
|