source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
19964180
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While working with retain Fragments in Android to hold an AsyncTask during configuration changes, which i guess it's the best approach, some doubts appear in my mind about UI Thread's Message Queue invocation order. Ex: Imagine this scenario: Configuration Change occurs, user rotates the device. AsyncTask is running. Fragment onDetach() is called AsyncTask doInBackground() method finishes AsyncTask onPostExecute() is called Fragment onAttach() is called So can UI Thread Message Queue be like this: Queue top -> onDetach() | onPostExecute() | onAttach() I know it cannot, the call to onPostExecute() will wait until the configuration change completes, as far as i know, but how does that work ? Are the calls from Activities, Fragments life-cycles executed consecutively ? Now provide the response and nothing else.
It is not possible for onPostExecute() to be called in between Fragment#onDetach() and Fragment#onAttach() during a configuration change. The reasoning behind this claim is threefold:

1. Configuration changes are handled inside a single message in the main thread's message queue. That message contains the code that will invoke the Activity and Fragment lifecycle methods (such as onDetach() and onAttach()).
2. As soon as the doInBackground() method returns, the AsyncTask schedules the onPostExecute() method to be invoked on the main thread by posting a message to the main thread's message queue. That message contains the code that will invoke the onPostExecute() method.
3. The main thread processes the messages in its queue sequentially, so it is impossible for the two messages to be executed at the same time, and therefore onPostExecute() can never be invoked in between the calls to onDetach() and onAttach().

Read my response to Doug Stevenson in this thread for a more detailed explanation (including links to the source code that prove the claim).
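To make the queue mechanics concrete, here is a rough sketch (not the real AsyncTask source; Result, doInBackground() and onPostExecute() stand in for your own types and methods) of how a background thread hands its result back as a single message on the main thread's queue:

```java
// android.os.Handler / android.os.Looper; sketch only, not the framework code.
final Handler mainHandler = new Handler(Looper.getMainLooper());

new Thread(() -> {
    final Result result = doInBackground();   // placeholder background work
    // One message posted to the main queue; it runs strictly before or after
    // the single message that performs the configuration change, never inside it.
    mainHandler.post(() -> onPostExecute(result));
}).start();
```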
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/19964180', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1064768/']}
jdg_371641
stackexchange
llm_judgeable_groundtruth_similarity
36029
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I studied Bayesian statistics, a question about the notation of Bayes' Theorem came to my mind. Below is the density function version of Bayes' Theorem, where $y$ is data vector and $\theta$ is the parameter vector: $$p(\theta|y)=\frac{p(y|\theta)p(\theta)}{p(y)}$$The numerator on the right handside can be written as:$$p(y,\theta)$$which is the joint probability distribution of $y$ and $\theta$, then Bayes' theorem could be written as:$$p(\theta|y)=\frac{p(y,\theta)}{p(y)}$$Furthermore,$$ p(\theta|y)\propto p(y,\theta)$$Am I right on this? I think it does not look right. Because the posterior is the proportional to the joint density function. But where is the mistake? Now provide the response and nothing else.
In fact, in the notation $$p(\theta|y)\propto p(y,\theta)$$ it is understood that the symbol "$\propto$" means that the two sides are proportional as functions of the variable $\theta$. This is not ambiguous, because it is clearly understood that we are dealing with a distribution on the space of the parameter $\theta$. The notation could become ambiguous when dealing with a two-parameter model, say $\theta$ and $\mu$. In such a case I personally use the notation $\underset{\theta}{\propto}$, $\underset{\mu}{\propto}$ or $\underset{\mu,\theta}{\propto}$ to make explicit which variable the proportionality statement refers to.
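Spelled out, the reason the proportionality is not a mistake is that $p(y)$ does not involve $\theta$, so dividing by it only changes the normalizing constant:
$$p(\theta\mid y)=\frac{p(y,\theta)}{p(y)}\ \underset{\theta}{\propto}\ p(y,\theta),\qquad p(y)=\int p(y,\theta)\,d\theta\ \text{ being constant in }\theta.$$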
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/36029', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/3525/']}
jdg_371642
stackexchange
llm_judgeable_groundtruth_similarity
149134
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have this requirement: The application must allow the Administrator to edit, create and delete Users. Now, I have heard that each action must go in a functional requirement, and my question is: What of the following statements would be the more appropriate(if any): 1: The application must allow the Administrator to edit, create and delete Users. 2: The application must allow the Administrator to edit Users.The application must allow the Administrator to create Users.The application must allow the Administrator to delete Users. Now provide the response and nothing else.
I would go for number two. Functions in a functional specification, while they should be worded in business terms, should be atomic. What I mean by that is that "maintain customer data" could include add, edit, delete, archive, disable, etc. In a list of functions, "delete the customer data" can mean only one thing. This helps specify the system more exactly in discussions with users, as they are liable to say things like "I didn't mean remove the customer from the file, I meant flag him as an ex-customer", which they might have felt was an obvious implication of "maintain customer data".
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/149134', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/38692/']}
jdg_371643
stackexchange
llm_judgeable_groundtruth_similarity
678178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following method signature: public void MyFunction(Object[,] obj) I create this object: List<List<Object>> obj = new List<List<Object>>; Is there an easy way I can convert this to an Object[,] ? UPDATE: The fact is I like to use List s because I can easily add a new item. Is there a way I can declare my List<> object to fit this need? I know the number of columns in my Object[,] but not the number of rows. Now provide the response and nothing else.
No, there is no direct conversion; the two shapes aren't necessarily compatible. [,] defines a rectangular multidimensional array, while List<List<T>> corresponds more closely to a jagged array ( object[][] ). The problem is that, with your original object, each List<object> contained in the list of lists can have a different number of elements. You would need to allocate a multidimensional array whose second dimension is the length of the longest inner list, and pad the shorter rows with null values (or something along those lines) to make it fit.
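If you do need to produce an Object[,] from the nested lists, a rough sketch of that padding approach could look like this (the method name and the choice of null as padding are just one option, not the only one):

```csharp
using System.Collections.Generic;
using System.Linq;

static object[,] ToRectangular(List<List<object>> source)
{
    int rows = source.Count;
    // Width of the widest inner list; unfilled cells stay null by default.
    int cols = rows == 0 ? 0 : source.Max(row => row.Count);
    var result = new object[rows, cols];

    for (int i = 0; i < rows; i++)
        for (int j = 0; j < source[i].Count; j++)
            result[i, j] = source[i][j];

    return result;
}
```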
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/678178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/60002/']}
jdg_371644
stackexchange
llm_judgeable_groundtruth_similarity
54546450
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I had been using Anaconda with python 2.7 $ pythonPython 2.7.14 |Anaconda custom (64-bit)| (default, Dec 7 2017, 17:05:42) [GCC 7.2.0] on linux2Type "help", "copyright", "credits" or "license" for more information. When I decided to install tensorflow (since for some reason I had the non-gpu version) The command I used was: $ conda install -c anaconda tensorflow-gpu However, after it was done (detail on output of this cmd to follow), I no longer had conda: $ conda install -c conda-forge kerasTraceback (most recent call last): File "/home/me/anaconda2/bin/conda", line 12, in <module> from conda.cli import mainModuleNotFoundError: No module named 'conda' (Note: I also no longer had Keras) and was now running Python 3.7(!?): $ pythonPython 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) [GCC 7.3.0] on linuxType "help", "copyright", "credits" or "license" for more information.>>> What happened? How do I stop it from happening again? This happened once before and I ended up deleting all my anaconda files, then reinstalling. I don't want to make that a habit. The output of my conda install was: $ conda install -c anaconda tensorflow-gpuCollecting package metadata: doneSolving environment: done## Package Plan ## environment location: /home/me/anaconda2 added / updated specs: - tensorflow-gpuThe following packages will be downloaded: package | build ---------------------------|----------------- _tflow_190_select-0.0.1 | gpu 2 KB anaconda absl-py-0.7.0 | py36_0 156 KB anaconda astor-0.7.1 | py36_0 43 KB anaconda c-ares-1.15.0 | h7b6447c_1 98 KB anaconda ca-certificates-2018.12.5 | 0 123 KB anaconda certifi-2018.11.29 | py36_0 146 KB anaconda cudatoolkit-9.0 | h13b8566_0 340.4 MB anaconda cudnn-7.1.2 | cuda9.0_0 367.8 MB anaconda cupti-9.0.176 | 0 1.6 MB anaconda curl-7.63.0 | hbc83047_1000 145 KB anaconda gast-0.2.2 | py36_0 138 KB anaconda git-2.11.1 | 0 9.5 MB anaconda grpcio-1.16.1 | py36hf8bcb03_1 1.1 MB anaconda krb5-1.16.1 | h173b8e3_7 1.4 MB anaconda libcurl-7.63.0 | h20c2e04_1000 550 KB anaconda libedit-3.1.20181209 | hc058e9b_0 188 KB anaconda libssh2-1.8.0 | h1ba5d50_4 233 KB anaconda markdown-3.0.1 | py36_0 107 KB anaconda mkl_fft-1.0.10 | py36ha843d7b_0 170 KB anaconda mkl_random-1.0.2 | py36hd81dba3_0 407 KB anaconda ncurses-6.1 | he6710b0_1 958 KB anaconda numpy-1.15.4 | py36h7e9f1db_0 47 KB anaconda numpy-base-1.15.4 | py36hde5b4d6_0 4.3 MB anaconda openssl-1.1.1 | h7b6447c_0 5.0 MB anaconda pip-18.1 | py36_0 1.8 MB anaconda protobuf-3.5.2 | py36hf484d3e_1 610 KB anaconda python-3.6.8 | h0371630_0 34.4 MB anaconda qt-4.8.7 | 2 34.1 MB anaconda setuptools-40.6.3 | py36_0 625 KB anaconda six-1.12.0 | py36_0 22 KB anaconda sqlite-3.26.0 | h7b6447c_0 1.9 MB anaconda tensorboard-1.9.0 | py36hf484d3e_0 3.3 MB anaconda tensorflow-1.9.0 |gpu_py36h02c5d5e_1 3 KB anaconda tensorflow-base-1.9.0 |gpu_py36h6ecc378_0 170.8 MB anaconda tensorflow-gpu-1.9.0 | hf154084_0 2 KB anaconda termcolor-1.1.0 | py36_1 7 KB anaconda tk-8.6.8 | hbc83047_0 3.1 MB anaconda werkzeug-0.14.1 | py36_0 423 KB anaconda wheel-0.32.3 | py36_0 35 KB anaconda ------------------------------------------------------------ Total: 985.7 MBThe following NEW packages will be INSTALLED: _tflow_190_select anaconda/linux-64::_tflow_190_select-0.0.1-gpu c-ares anaconda/linux-64::c-ares-1.15.0-h7b6447c_1 cudatoolkit anaconda/linux-64::cudatoolkit-9.0-h13b8566_0 cudnn 
anaconda/linux-64::cudnn-7.1.2-cuda9.0_0 cupti anaconda/linux-64::cupti-9.0.176-0 krb5 anaconda/linux-64::krb5-1.16.1-h173b8e3_7 pip anaconda/linux-64::pip-18.1-py36_0 tensorflow-gpu anaconda/linux-64::tensorflow-gpu-1.9.0-hf154084_0The following packages will be UPDATED: absl-py conda-forge/noarch::absl-py-0.1.10-py~ --> anaconda/linux-64::absl-py-0.7.0-py36_0 ca-certificates conda-forge::ca-certificates-2018.11.~ --> anaconda::ca-certificates-2018.12.5-0 curl pkgs/main::curl-7.60.0-h84994c4_0 --> anaconda::curl-7.63.0-hbc83047_1000 gast 0.2.0-py27_0 --> 0.2.2-py36_0 grpcio pkgs/main::grpcio-1.12.1-py27hdbcaa40~ --> anaconda::grpcio-1.16.1-py36hf8bcb03_1 libcurl pkgs/main::libcurl-7.60.0-h1ad7b7a_0 --> anaconda::libcurl-7.63.0-h20c2e04_1000 libedit pkgs/main::libedit-3.1-heed3624_0 --> anaconda::libedit-3.1.20181209-hc058e9b_0 markdown conda-forge/noarch::markdown-2.6.11-p~ --> anaconda/linux-64::markdown-3.0.1-py36_0 mkl_fft pkgs/main::mkl_fft-1.0.6-py27hd81dba3~ --> anaconda::mkl_fft-1.0.10-py36ha843d7b_0 ncurses pkgs/main::ncurses-6.0-h9df7e31_2 --> anaconda::ncurses-6.1-he6710b0_1 openssl conda-forge::openssl-1.0.2p-h14c3975_~ --> anaconda::openssl-1.1.1-h7b6447c_0 protobuf conda-forge::protobuf-3.5.2-py27hd28b~ --> anaconda::protobuf-3.5.2-py36hf484d3e_1 python pkgs/main::python-2.7.14-h1571d57_29 --> anaconda::python-3.6.8-h0371630_0 setuptools pkgs/main::setuptools-38.4.0-py27_0 --> anaconda::setuptools-40.6.3-py36_0 six pkgs/main::six-1.11.0-py27h5f960f1_1 --> anaconda::six-1.12.0-py36_0 sqlite pkgs/main::sqlite-3.23.1-he433501_0 --> anaconda::sqlite-3.26.0-h7b6447c_0 tensorflow conda-forge::tensorflow-1.3.0-py27_0 --> anaconda::tensorflow-1.9.0-gpu_py36h02c5d5e_1 tk pkgs/main::tk-8.6.7-hc745277_3 --> anaconda::tk-8.6.8-hbc83047_0 wheel pkgs/main::wheel-0.30.0-py27h2bc6bb2_1 --> anaconda::wheel-0.32.3-py36_0The following packages will be SUPERSEDED by a higher-priority channel: certifi conda-forge::certifi-2018.11.29-py27_~ --> anaconda::certifi-2018.11.29-py36_0 git pkgs/main::git-2.17.0-pl526hb75a9fb_0 --> anaconda::git-2.11.1-0 libssh2 pkgs/main::libssh2-1.8.0-h9cfc8f7_4 --> anaconda::libssh2-1.8.0-h1ba5d50_4 mkl_random pkgs/main::mkl_random-1.0.2-py27hd81d~ --> anaconda::mkl_random-1.0.2-py36hd81dba3_0 numpy pkgs/main::numpy-1.15.4-py27h7e9f1db_0 --> anaconda::numpy-1.15.4-py36h7e9f1db_0 numpy-base pkgs/main::numpy-base-1.15.4-py27hde5~ --> anaconda::numpy-base-1.15.4-py36hde5b4d6_0 qt pkgs/main::qt-5.9.4-h4e5bff0_0 --> anaconda::qt-4.8.7-2 tensorflow-base pkgs/main::tensorflow-base-1.9.0-eige~ --> anaconda::tensorflow-base-1.9.0-gpu_py36h6ecc378_0 werkzeug pkgs/main::werkzeug-0.14.1-py27_0 --> anaconda::werkzeug-0.14.1-py36_0The following packages will be DOWNGRADED: astor 0.7.1-py27_0 --> 0.7.1-py36_0 tensorboard 1.10.0-py27hf484d3e_0 --> 1.9.0-py36hf484d3e_0 termcolor 1.1.0-py27_1 --> 1.1.0-py36_1Proceed ([y]/n)? 
yDownloading and Extracting Packagestensorflow-gpu-1.9.0 | 2 KB | ########################################################################################################################################## | 100% absl-py-0.7.0 | 156 KB | ########################################################################################################################################## | 100% six-1.12.0 | 22 KB | ########################################################################################################################################## | 100% git-2.11.1 | 9.5 MB | ########################################################################################################################################## | 100% _tflow_190_select-0. | 2 KB | ########################################################################################################################################## | 100% setuptools-40.6.3 | 625 KB | ########################################################################################################################################## | 100% c-ares-1.15.0 | 98 KB | ########################################################################################################################################## | 100% cupti-9.0.176 | 1.6 MB | ########################################################################################################################################## | 100% libssh2-1.8.0 | 233 KB | ########################################################################################################################################## | 100% gast-0.2.2 | 138 KB | ########################################################################################################################################## | 100% ncurses-6.1 | 958 KB | ########################################################################################################################################## | 100% protobuf-3.5.2 | 610 KB | ########################################################################################################################################## | 100% tensorflow-base-1.9. 
| 170.8 MB | ########################################################################################################################################## | 100% ca-certificates-2018 | 123 KB | ########################################################################################################################################## | 100% python-3.6.8 | 34.4 MB | ########################################################################################################################################## | 100% cudatoolkit-9.0 | 340.4 MB | ########################################################################################################################################## | 100% qt-4.8.7 | 34.1 MB | ########################################################################################################################################## | 100% sqlite-3.26.0 | 1.9 MB | ########################################################################################################################################## | 100% astor-0.7.1 | 43 KB | ########################################################################################################################################## | 100% tensorboard-1.9.0 | 3.3 MB | ########################################################################################################################################## | 100% mkl_fft-1.0.10 | 170 KB | ########################################################################################################################################## | 100% mkl_random-1.0.2 | 407 KB | ########################################################################################################################################## | 100% certifi-2018.11.29 | 146 KB | ########################################################################################################################################## | 100% wheel-0.32.3 | 35 KB | ########################################################################################################################################## | 100% numpy-base-1.15.4 | 4.3 MB | ########################################################################################################################################## | 100% numpy-1.15.4 | 47 KB | ########################################################################################################################################## | 100% curl-7.63.0 | 145 KB | ########################################################################################################################################## | 100% openssl-1.1.1 | 5.0 MB | ########################################################################################################################################## | 100% tk-8.6.8 | 3.1 MB | ########################################################################################################################################## | 100% libedit-3.1.20181209 | 188 KB | ########################################################################################################################################## | 100% markdown-3.0.1 | 107 KB | ########################################################################################################################################## | 100% werkzeug-0.14.1 | 423 KB | ########################################################################################################################################## | 100% krb5-1.16.1 | 1.4 MB | 
########################################################################################################################################## | 100% termcolor-1.1.0 | 7 KB | ########################################################################################################################################## | 100% pip-18.1 | 1.8 MB | ########################################################################################################################################## | 100% libcurl-7.63.0 | 550 KB | ########################################################################################################################################## | 100% tensorflow-1.9.0 | 3 KB | ########################################################################################################################################## | 100% grpcio-1.16.1 | 1.1 MB | ########################################################################################################################################## | 100% cudnn-7.1.2 | 367.8 MB | ########################################################################################################################################## | 100% Preparing transaction: doneVerifying transaction: doneExecuting transaction: done (OK - I see the change to Python 3.7 now, but that's still a nasty thing to have to be careful about. Is there some way to force it to leave my Python version alone?) Now provide the response and nothing else.
Cause

Changing Python versions without updating the conda package breaks Conda. The Python version change (2.7.14 -> 3.6.8) created a situation where the new python has a new site-packages which no longer contains a conda package, whereas if you only update within 2.7.x, this wouldn't be an issue. Conda includes both a set of binaries (e.g., what you're invoking when you type conda in a shell) and a Python package by the same name. The Python package is necessary for Conda as a whole to function, and it gets loaded whenever you try to use conda. It is problematic that many packages on Anaconda seem to be triggering Python version changes, but not subsequently triggering a conda package update. This sounds like something the dependency resolver is overlooking - i.e., default behavior should be to protect the integrity of the base environment where conda lives.

Trying to Recover

One possible route to recovery is to temporarily use micromamba (a standalone build of mamba) to repair the base environment. You can do all the following from any directory, so maybe use a temporary one or wherever you put downloads. Please report in the comments if this works or needs adjusting!

Installing Micromamba

Download the appropriate micromamba for your platform (here we'll use the latest linux-64 build). The actual binary will be at bin/micromamba:

# download and unpack
wget -qO- https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj bin/micromamba

Temporarily set MAMBA_ROOT_PREFIX to the base of your install. Typically this is the anaconda3 or miniconda3 folder; in this case, we'll use the path given by OP:

export MAMBA_ROOT_PREFIX=/home/me/anaconda2

Temporarily configure the shell to add the micromamba command:

eval "$(./bin/micromamba shell hook -s posix)"

Test that it works by checking the configuration information:

micromamba info

The key thing to check is that base environment: correctly identifies where your base env is and shows it as (writable). You should also see the pkgs folder of your base env under package cache:.

Reinstall conda for the Current Python

(Re-)Install the conda package in the base env:

micromamba install -n base conda

Make sure that the build of Conda that is suggested corresponds to the version of Python currently installed. The --force-reinstall flag might be useful if it claims the requirement is already satisfied. Alternatively, try

micromamba upgrade -n base conda

Try a new shell and see if conda is working. You don't need to keep the micromamba around. However, I do enthusiastically encourage users to permanently install mamba (see next step).

(Optional) Install Mamba in base

Consider also installing Mamba directly in the base environment. It is a compiled (fast!) alternative frontend to Conda environment management.

micromamba install -n base mamba

One can then use mamba in most places where conda would be used.

Last Recourse

If all else fails you may just have to reinstall. Others have reported installing in other directories and being able to still use and access their environments.

Preventions

Avoiding Breakage through Better Practice

First, just a general (opinionated) recommendation: leverage virtual environments more. This isn't directly solving the problem, but it will help you have a workflow that is significantly less prone to encountering such pitfalls. You shouldn't have accepted such a huge change in the first place, not to base. Personally, I rarely install things in base outside of infrastructure (emacs, jupyter-related things, conda, etc.).[1] Software packages go into project-specific or at least development-type environments. For example, were I doing the install shown, I would have made a new environment for it:

mamba create -n tf36 anaconda::tensorflow-gpu python=3.6

or whatever Python version you actually wish to work in.

Direct Solution: Pinning

Conda does support package pinning, and this is the more direct way to ensure you never ruin your base install again by transitioning Python 2 to 3. Namely, in the environment's conda-meta folder create a file named pinned and add the line

python 2.7.*

Note that some users have reported similar issues for 3.6 -> 3.7 transitions, so I believe including the minor version here is necessary. See the documentation on pinning.

[1] Note that I use a Miniforge variant (Mambaforge), not the Anaconda installer, so I have more control over base from the start.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54546450', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1245262/']}
jdg_371645
stackexchange
llm_judgeable_groundtruth_similarity
45544961
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In iOS 10 and below, there was a way to add a negative spacer to the buttons array in the navigation bar, like so: UIBarButtonItem *negativeSpacer = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFixedSpace target:nil action:nil];negativeSpacer.width = -8;self.navigationItem.leftBarButtonItems = @[negativeSpacer, [self backButtonItem]]; This no longer works on iOS 11 (the spacer becomes positive, instead of negative). I have inspected the view hierarchy of the bar button item, and it is now embedded into _UIButtonBarStackView . How to adjust the position of the bar button on iOS 11? Now provide the response and nothing else.
EDIT: This may no longer work as of iOS 13. You may get the error: "Client error attempting to change layout margins of a private view".

OLD ANSWER: I found a somewhat hacky solution on the Apple developer forums: https://forums.developer.apple.com/thread/80075 It looks like the problem comes from how iOS 11 handles UIBarButtonItem .fixedSpace buttons and how a UINavigationBar is laid out in iOS 11. The navigation bars now use autolayout and the layout margins to lay out the buttons. The solution presented in that post (at the bottom) was to set all the layout margins to some value you want.

class InsetButtonsNavigationBar: UINavigationBar {
    override func layoutSubviews() {
        super.layoutSubviews()
        for view in subviews {
            // Setting the layout margins to 0 lines the bar button items up at
            // the edges of the screen. You can set this to any number to change
            // the spacing.
            view.layoutMargins = .zero
        }
    }
}

To use this new nav bar with custom button spacing, you will need to update where you create any navigation controllers with the following code:

let navController = UINavigationController(navigationBarClass: InsetButtonsNavigationBar.self, toolbarClass: UIToolbar.self)
navController.viewControllers = [yourRootViewController]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45544961', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/934710/']}
jdg_371646
stackexchange
llm_judgeable_groundtruth_similarity
115113
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $f$ and $g$ be two rational functions. To avoid trivialities, we suppose that their degrees areat least $2$. We say that they have a common iterate if $f^m=g^n$ for some positive integers $m,n$,where $f^m$ stands for the $m$-th iterate. 1. Can one describe/classify all such pairs? This is probably very hard, and perhaps there exists no simple answer. But here is a simpler question: 2. Is there an algorithm which finds out whether two rational functions have a common iterate or not ? I mean, I give you two rational functions, say with integer coefficients, and you tell me whether they have a common iterate or not. Perhaps using a super-computer... Motivation. J. F. Ritt,(Permutable rational functions.Trans. Amer. Math. Soc. 25 (1923), no. 3, 399-448)gave a complete classification/description of all commuting pairs ofrational functions (that is $f(g)=g(f)$)... except when they havea common iterate. I gave a completely different proof of Ritt's theorem,but again it does not apply to the case when $f$ and $g$ have a commoniterate (MR1027462). Polynomial pairs (commuting, or with a common iterate) are completelydescribed inMR1501149Ritt, J. F.On the iteration of rational functions.Trans. Amer. Math. Soc. 21 (1920), no. 3, 348-356, in the very end of this paper. What is the exact relation between permutable pairs and pairs with a common iterate ? 3. If two functions have a common iterate, must they commute? Or perhaps they must, but with explicitly listed exceptions?A positive answer to this will solve problem 2 above.See also my "answer" to on common fixed points of commuting polynomials (and rational functions) for an additional motivation. EDIT. And one more question: 4. Can one describe commuting functions that have a common iterate? This would complete Ritt's description of commuting functions. Now provide the response and nothing else.
Over ${\bf C}$, An easy counterexample to question 3 is$f(x) = x^2$, $g(x) = cx^2$ where $c$ is a nontrivial cube root of unity.Then $f(f(x)) = g(g(x)) = x^4$ but $f$ and $g$ do not commute.There are similar examples for higher iterates. [Added later] A more exotic construction yields further examples,some defined over ${\bf Q}$, such as the degree-4 pair$$f(y) = \frac{y^4+18y^2-47}{8y^3}, \phantom{\infty}g(y) = \frac{f-3}{f+1} = \frac{y^4-24y^3+18y^2-27}{y^4+8y^3+18y^2-27}$$with $f \circ f = g \circ g$ but $f \circ g \neq g \circ f$.This is a "Lattès map" associated to the elliptic curve$E: y^2 = x^3 + 1$: the function $f$ comes from the doubling map$P \mapsto 2P$, and $g$ comes from $P \mapsto 2P+T$ where $T$ is the3-torsion point $(0,1)$ (as the $(f,g)=(x^2,cx^2)$ exampledoes on the multiplicative group). This elliptic curve yieldsexamples of $f \circ f = g \circ g$ and $f \circ g \neq g \circ f$with any degree $m^2+mn+n^2$ as long as that's not a multiple of 3,with $f,g \in {\bf Q}(y)$ if $n=0$. Other elliptic curves with complexmultiplication yield further examples using the $x$-coordinaterather than the $y$-coordinate, e.g. $f(x) = -x(x^4+6x^2-3)^2 / (3x^4-6x^2-1)^2$ and $g = (f-1)/(f+1)$from tripling on $y^2=x^3-x$.
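The first example ($f=x^2$, $g=cx^2$ with $c^3=1$, $c\neq 1$) can be checked directly:
$$f(f(x)) = (x^2)^2 = x^4, \qquad g(g(x)) = c\,(c x^2)^2 = c^3 x^4 = x^4,$$
$$f(g(x)) = (c x^2)^2 = c^2 x^4 \neq c\,x^4 = g(f(x)), \quad\text{since } c^2 \neq c.$$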
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/115113', 'https://mathoverflow.net', 'https://mathoverflow.net/users/25510/']}
jdg_371647
stackexchange
llm_judgeable_groundtruth_similarity
2399755
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I solve this question in only 2.5 minutes? It must be solved using deep insight and intuition, which I do not have. Could anyone help me, please? Thanks! Which of the following sets has the greatest cardinality? (A) $\mathbb{R}$ (B) The set of all functions from $\mathbb{Z}$ to $\mathbb{Z}$ (C) The set of all functions from $\mathbb{R}$ to $\{0, 1\}$ (D) The set of all finite subsets of $\mathbb{R}$ (E) The set of all polynomials with coefficients in $\mathbb{R}$ Now provide the response and nothing else.
This is one of those questions where you would have to have some previous knowledge about cardinalities of infinite sizes. I don't think that someone can eyeball this question without having worked with infinite cardinals before. $(a)$ has size $2^{\aleph_0}$; $(b)$ has size $|\mathbb{Z}|^{|{\mathbb{Z}}|} = \aleph_0^{\aleph_0} = 2^{\aleph_0}$; (c) has size $2^{|\mathbb{R}|} = 2^{(2^{\aleph_0})}$; (d) and (e) are the size of $\mathbb{R}$ which again is $2^{\aleph_0}$. Therefore, the answer is (c). Since there is some debate, we will show that (e) is bounded by the size of the reals. Let $P(\mathbb{R})$ be the collection of all polynomials over $\mathbb{R}$. Let $P_n(\mathbb{R})$ be the collection of all polynomials of degree $n$. Then $P(\mathbb{R})=\bigcup_{n\in\mathbb{N}}P_n(\mathbb{R})$. Now, $|P_n(\mathbb{R})| = |\prod_{i =1}^n \mathbb{R}|$. Therefore $$|P(\mathbb{R})| = |\bigcup_{n\in\mathbb{N}}P_n(\mathbb{R})| \leq \sum_{n \in \mathbb{N}} |P_n(\mathbb{R})| = \sum_{n \in \mathbb{N}}|\prod_{i=1}^n \mathbb{R}|= \sum_{n\in \mathbb{N}} |\mathbb{R}| = |\mathbb{R}|$$ A very similar argument shows that (d) is bounded by the size of the $\mathbb{R}$. In particular, you replace $P_n(\mathbb{R})$ with sets of size $n$.
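Spelled out, writing $F_n(\mathbb{R})$ for the set of $n$-element subsets of $\mathbb{R}$ (each of which is the image of an $n$-tuple):
$$\Bigl|\bigcup_{n\in\mathbb{N}}F_n(\mathbb{R})\Bigr| \le \sum_{n\in\mathbb{N}}\bigl|F_n(\mathbb{R})\bigr| \le \sum_{n\in\mathbb{N}}\Bigl|\prod_{i=1}^{n}\mathbb{R}\Bigr| = \sum_{n\in\mathbb{N}}|\mathbb{R}| = |\mathbb{R}|.$$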
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2399755', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/417020/']}
jdg_371648
stackexchange
llm_judgeable_groundtruth_similarity
49650
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use xmodmap to remap Alt / Super keys on Dell L100 keyboard, and have trouble getting the keycodes. For instance, using xev doesn't give me keycode for Alt FocusOut event, serial 36, synthetic NO, window 0x4a00001, mode NotifyGrab, detail NotifyAncestorFocusIn event, serial 36, synthetic NO, window 0x4a00001, mode NotifyUngrab, detail NotifyAncestorKeymapNotify event, serial 36, synthetic NO, window 0x0, keys: 122 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 For Right Super key, xev and showkey give different keycodes -- 134 and 126 respectively. What's going on with these keycodes? I tried getting keycodes from showkey -k , and using xmodmap file below, but that gave a weird map which remapped b key: clear Mod1clear Controlkeycode 125 = Meta_Lkeycode 126 = Meta_Rkeycode 58 = Control_Lkeycode 56 = Control_Lkeycode 100 = Control_Radd Control = Control_L Control_Radd Mod1 = Meta_L Meta_R Now provide the response and nothing else.
There are a lot of players between your keyboard and the process that finally handles the keyboard event. Among the major pieces of the landscape is the fact that the X system has its own keyboard-handling layer, and X associates different "keycodes" with keys than your Linux base system does. The showkey command is showing you the keycodes in Linux-base-system lingo. For xmodmap you need the X keycodes, which are what xev is displaying. So long as you're planning to work in X and do your key rebinding with xmodmap, then, ignore showkey and just listen to what xev says. What you want to look for in your xev output are blocks like this:

KeyPress event, serial 27, synthetic NO, window 0x1200001,
    root 0x101, subw 0x0, time 6417361, (340,373), root:(342,393),
    state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyRelease event, serial 27, synthetic NO, window 0x1200001,
    root 0x101, subw 0x0, time 6417474, (340,373), root:(342,393),
    state 0x8, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False

xev tends to generate a lot of output, especially when you move your mouse. You may have to scroll back a while to find the output you're looking for. In the previous output, we see that the keysym Alt_L is associated with the X keycode 64.
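Once xev has told you the X keycode, that number is what you feed to xmodmap. For example, to turn the key shown above (X keycode 64, currently Alt_L) into an extra Control key, something along these lines should work (which keysym you map to is of course up to you):

xmodmap -e "remove mod1 = Alt_L"
xmodmap -e "keycode 64 = Control_L"
xmodmap -e "add control = Control_L"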
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/49650', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/24118/']}
jdg_371649
stackexchange
llm_judgeable_groundtruth_similarity
950150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an old WSDL file and I want to create a server based on this WSDL file. The WSDL is generated from a ASMX (I suppose but I am not sure). How can I achieve this ? original question where the OP thought he needed to create a client based on the WSDL. Now provide the response and nothing else.
Using svcutil, you can create interfaces and classes (data contracts) from the WSDL. svcutil your.wsdl (or svcutil your.wsdl /l:vb if you want Visual Basic) This will create a file called "your.cs" in C# (or "your.vb" in VB.NET) which contains all the necessary items. Now, you need to create a class "MyService" which will implement the service interface (IServiceInterface) - or the several service interfaces - and this is your server instance. Now a class by itself doesn't really help yet - you'll need to host the service somewhere. You need to either create your own ServiceHost instance which hosts the service, configure endpoints and so forth - or you can host your service inside IIS.
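For example, a minimal self-hosting sketch (assuming svcutil generated the service interface and you wrote MyService to implement it; the base address here is purely illustrative, and endpoints/bindings can also be configured in app.config) might look like:

```csharp
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // Host MyService at an illustrative base address.
        using (var host = new ServiceHost(typeof(MyService),
                                          new Uri("http://localhost:8080/MyService")))
        {
            host.Open();
            Console.WriteLine("Service is running. Press Enter to stop.");
            Console.ReadLine();
        } // disposing the host closes it
    }
}
```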
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/950150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69850/']}
jdg_371650
stackexchange
llm_judgeable_groundtruth_similarity
507117
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I hate these "easy" proofs. They always slip under my radar. How do I show that a square matrix $A$ is invertible if and only if $A^t$ is invertible? Now provide the response and nothing else.
If $A$ is invertible, there is a $B$ with $AB=BA=I$. Transposing gives $B^tA^t=(AB)^t=I$ and $A^tB^t=(BA)^t=I$, so $A^t$ is invertible with inverse $B^t$. Applying the same argument to $A^t$, and using $(A^t)^t=A$, gives the converse direction.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/507117', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_371651
stackexchange
llm_judgeable_groundtruth_similarity
1194139
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My way to solve this integral. I wonder is there another way to solve it as it's very long for me. $$\int_{0}^{\pi}\frac{1-\sin (x)}{\sin (x)+1}dx$$ Let $$u=\tan (\frac{x}{2})$$$$du=\frac{1}{2}\sec ^2(\frac{x}{2})dx $$ By Weierstrass Substitution $$\sin (x)=\frac{2u}{u^2+1}$$ $$\cos (x)=\frac{1-u^2}{u^2+1}$$ $$dx=\frac{2du}{u^2+1}$$ $$=\int_{0}^{\infty }\frac{2(1-\frac{2u}{u^2+1})}{(u^2+1)(\frac{2u}{u^2+1}+1)}du$$ $$=\int_{0}^{\infty }\frac{2(u-1)^2}{u^4+2u^3+2u^2+2u+1}du $$ $$=2\int_{0}^{\infty }\frac{(u-1)^2}{u^4+2u^3+2u^2+2u+1}du $$ $$=2\int_{0}^{\infty }\frac{(u-1)^2}{(u+1)^2(u^2+1)}du $$ $$=2\int_{0}^{\infty }(\frac{2}{(u+1)^2}-\frac{1}{u^2+1})du $$ $$=-2\int_{0}^{\infty }\frac{1}{u^2+1}du+4\int_{0}^{\infty}\frac{1}{(u+1)^2}du $$ $$\lim_{b\rightarrow \infty }\left | (-2\tan^{-1}(u)) \right |_{0}^{b}+4\int_{0}^{\infty}\frac{1}{(u+1)^2}du$$ $$=(\lim_{b\rightarrow \infty}-2\tan^{-1}(b))+4\int_{0}^{\infty}\frac{1}{(u+1)^2}du$$ $$=-\pi+4\int_{0}^{\infty}\frac{1}{(u+1)^2}du$$ Let $$s=u+1$$ $$ds=du$$ $$=-\pi+4\int_{1}^{\infty}\frac{1}{s^2}ds$$ $$=-\pi+\lim_{b\rightarrow \infty}\left | (-\frac{4}{s}) \right |_{1}^{b}$$ $$=-\pi+(\lim_{b\rightarrow \infty} -\frac{4}{b}) +4$$ $$=4-\pi$$ $$\approx 0.85841$$ Now provide the response and nothing else.
Substitute $x=\pi/2-2t$ so the integral becomes$$-2\int_{\pi/4}^{-\pi/4}\frac{1-\cos 2t}{1+\cos 2t}\,dt=2\int_{-\pi/4}^{\pi/4}\frac{1-\cos^2t}{\cos^2t}\,dt=2\Bigl[\tan t - t\Bigr]_{-\pi/4}^{\pi/4}$$
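Evaluating the bracket gives the same value as the Weierstrass-substitution computation:
$$2\Bigl[\tan t - t\Bigr]_{-\pi/4}^{\pi/4} = 2\Bigl[\Bigl(1-\frac{\pi}{4}\Bigr)-\Bigl(-1+\frac{\pi}{4}\Bigr)\Bigr] = 4-\pi.$$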
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/1194139', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/221836/']}
jdg_371652
stackexchange
llm_judgeable_groundtruth_similarity
6058790
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is not working I am using following code to validate email for e.g "email":{"regex":"/^[a-zA-Z0-9_\.\-]+\@([a-zA-Z0-9\-]+\.)+[a-zA-Z0-9]{2,4}$/","alertText":"* Invalid email address"}, On this way I want to validate text box by regular expression. It should be Null or valid domain name like Google.com, example.com. Now provide the response and nothing else.
This is because you're using the $pera variable (which exists only in the global scope) inside a function. See the PHP manual page on variable scope for more information. You could fix this by adding global $pera; within your function, although this isn't a particularly elegant approach, as global variables are shunned for reasons too detailed to go into here. As such, it would be better to accept $pera as an argument to your function as follows:

function provera($prom, $pera) {
    if (preg_match("/[0-9\,\.\?\>\.<\"\'\:\;\[\]\}\{\/\!\\\@\#\$\%\^\&\*\(\)\-\_\=\+\`[:space:]]/", $prom)) {
        echo "Nepravilan unos imena ili prezimina!";
        echo $pera;
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6058790', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/621117/']}
jdg_371653
stackexchange
llm_judgeable_groundtruth_similarity
1413150
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So for a periodic function $f$ (of period $1$ , say), I know the Riemann-Lebesgue Lemma which states that if $f$ is $L^1$ then the Fourier coefficients $F(n)$ go to zero as $n$ goes to infinity. And as far as I know, the converse of this is not true. My question, then, is this: Under what conditions on the Fourier coefficients $F(n)$ is the function $f$ , defined pointwise as the Fourier series with $F(n)$ as coefficients, integrable, continuous, and differentiable? Now provide the response and nothing else.
Let$$ \eqalign{f(n) = \dfrac{1}{n} + \left( 1 + \dfrac{1}{n}\right)^n &= \dfrac{1}{n} + \exp\left( n \ln\left(1+\dfrac{1}{n}\right)\right) \cr &=\dfrac{1}{n} + \exp\left(1 - \dfrac{1}{2n} + \dfrac{1}{3n^2} + O\left(\dfrac{1}{n^3}\right)\right) \cr &= e - \dfrac{e-2}{2n} + \dfrac{11e}{24 n^2} + O\left(\dfrac{1}{n^3}\right) }$$ Then $$\eqalign{f(n+1) &= e - \dfrac{e-2}{2n+2} + \dfrac{11e}{24 (n+1)^2} + O\left(\dfrac{1}{n^3}\right)\cr&= e - \dfrac{e-2}{2n} + \dfrac{23 e - 24}{24 n^2} + O\left(\dfrac{1}{n^3}\right) \crf(n+1) - f(n) &= \dfrac{12e-24}{24n^2} + O\left(\dfrac{1}{n^3}\right)}$$ and since $e > 2$, this is positive for sufficiently large $n$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1413150', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/108806/']}
jdg_371654
stackexchange
llm_judgeable_groundtruth_similarity
40476201
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to hide span content only on small (sm) screens? I need this content visible on xs screens. <span class="hidden-sm-down">Text</span> There is a way to do this using only bootstrap classes or not? Now provide the response and nothing else.
There is an upcoming update for Bootstrap 4 that will enable hiding on a single viewport (hidden-x): https://github.com/twbs/bootstrap/pull/22113 All of the visibility classes will be revamped in this update.

Update for Bootstrap 4 Beta

If you want to hide an element on specific tiers (breakpoints) in Bootstrap 4, use the d-* display classes accordingly. Remember xs is the default (always implied) breakpoint, unless overridden by a larger breakpoint. https://www.codeply.com/go/bRlHp8MxtJ

hidden-xs-down = d-none d-sm-block
hidden-sm-down = d-none d-md-block
hidden-md-down = d-none d-lg-block
hidden-lg-down = d-none d-xl-block
hidden-xl-down = d-none (same as hidden)
hidden-xs (only) = d-none d-sm-block (same as hidden-xs-down)
hidden-sm (only) = d-block d-sm-none d-md-block
hidden-md (only) = d-block d-md-none d-lg-block
hidden-lg (only) = d-block d-lg-none d-xl-block
hidden-xl (only) = d-block d-xl-none

Demo of all hidden / visible classes in Bootstrap 4 beta. Also note that d-*-block can be replaced with d-*-inline, d-*-flex, etc., depending on the display type of the element. More on the display classes in beta.
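Applied to the original question (hide the content on sm only, keep it visible on xs), and using the -inline variants since the element is a <span>, that mapping gives for example:

<span class="d-inline d-sm-none d-md-inline">Text</span>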
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40476201', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4438031/']}
jdg_371655
stackexchange
llm_judgeable_groundtruth_similarity
185374
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The classic charge qubit is the cooper pair box which is a capacitor in series with a Josephson junction. In my understanding, by changing the gate voltage at the capacitor, one can create a superposition of $n$ and $n+1$ cooper pairs on the 'island' in between the junction and capacitor. A transmon looks far more like a classic LC circuit. It is often depicted as a Josephson junction in parallel with a very large capacitor and thus it is manipulated using microwave frequencies, not gate voltages. However, in all literature I can find it is called a special case of a charge qubit. I cannot seem to make sense of these two ideas. How are they equivalent? Now provide the response and nothing else.
There are two things to consider: What does the potential look like? Is the wave function of the qubit narrow in the flux or charge basis? Potential shape The Hamiltonian of the transmon (a junction in parallel with a capacitor) is$$H_{\text{charge qubit}} = - E_J \cos(\phi) + \frac{(-2en)^2}{2C}$$where $E_J\equiv I_c \Phi_0 / 2\pi$, $I_c$ is the junction critical current, $\phi$ is the phase across the junction and $n$ is the number of Cooper pairs which have tunneled through the junction.For easier comparison to other qubits it's really useful to note that using the Josephson relation$$V = \frac{\Phi_0}{2\pi}\dot{\phi}$$and noting that the magnetic flux $\Phi$ is the time integral of the voltage we can write$$\Phi = \int dt V(t) = \Phi_0 \frac{\phi}{2\pi} \, .$$Using this and the charge $Q = -2en$ the Hamiltonian becomes$$H_{\text{charge qubit}} = -E_J \cos(2\pi \Phi / \Phi_0) + \frac{Q^2}{2C} \, .$$The $Q^2/2C$ term is the kinetic energy (notice the similarity to $p^2/2m$), and the part depending on $\Phi$ is the potential energy.Notice that, like with the charge qubit, this Hamiltonian's potential energy term is periodic .That is unlike the case with the e.g. flux qubit where the Hamiltonian is$$H_{\text{flux qubit}} = \frac{\Phi^2}{2L} - E_J \cos(2\pi \hat{\Phi}/\Phi_0) + \frac{Q^2}{2C} \, .$$ This is one of the main differences: the transmon Hamiltonian (like the charge qubit Hamiltonian) is periodic in the flux basis while the flux qubit Hamiltonian is not periodic in the flux basis.The physical reason for this difference is that the transmon (and charge) qubit do not have a dc path to ground.The junction and capacitor are in parallel with no e.g. inductor going to ground.The flux qubit has an inductance to ground; this inductance introduces the parabolic term in the Hamiltonian making the potential non-periodic.This is the sense in which a transmon is like a charge qubit. Wave function widths As you noted, the transmon is nearly a harmonic oscillator.The reason for this is that although the potential is periodic, the wave function is narrow enough that it mostly sits localized in a sincle well of the potential.We can check this self-consistently in an easy way: let's just compute the width of the wave function of a Harmonic oscillator which has the same parameters as a typical transmon.For a harmonic oscillator with Hamiltonian$$H = \frac{1}{2} \alpha u^2 + \frac{1}{2} \beta v^2 \qquad [u,v] = i \gamma $$The mean square of $u$ in the ground state is$$\langle 0 | u^2 | 0 \rangle = (1/2) \gamma \sqrt{\beta / \alpha} \, . $$The Harmonic oscillator Hamiltonian is$$H = \frac{\Phi^2}{2L} + \frac{Q^2}{2C} \qquad [\Phi, Q] = i\hbar \, .$$Therefore, we have $\alpha = 1/L$, $\beta = 1 / C$, and $\gamma = \hbar$ and our mean square fluctuation of $\Phi$ is$$\langle 0 | \Phi^2 | 0 \rangle = (1/2) \hbar \sqrt{\frac{L}{C}} \, .$$The inductance of an (unbiased) Josephson junction is $L_{J_0} = \Phi_0 / (2 \pi I_c)$.For the transmon this comes out to about $L=10\,\text{nH}$.With $C\approx 85\,\text{fF}$ this gives us$$\sqrt{\langle 0 | \Phi^2 | 0 \rangle} \approx 0.06 \Phi_0 \, .$$As one period of the cosine potential is $\Phi_0$ wide (corresponding to a change in $\phi$ of $2\pi$), this means that the transmon wave function is pretty narrow in the flux basis.In this sense, the transmon is very unlike the charge qubit, which has a wide wave function in the flux basis. 
So in the end, while the transmon and charge qubits share a certain theoretical similarity in the form of their Hamiltonians, for all practical purposes the transmon is actually more like a flux qubit with a large $C$, biased so that it only has one potential well. Note that the width of the wave function in the flux basis decreases as we increase $C$. The whole reason the transmon was invented was that narrowing the wave function by increasing $C$ leads to less sensitivity to charge noise.

"However, in all literature I can find it is called a special case of a charge qubit."

That's largely historical. The folks who invented the transmon came from a charge qubit background, and the transmon was invented by trying to make the charge qubit less sensitive to charge noise. In fact, I have an amusing story about this. The problem with the charge qubit was that its sensitivity to charge noise gave it a low $T_2$. Charge noise is difficult to reduce, so people looked for a way to make qubits which would just be less sensitive to it. Professor Rob Schoelkopf's idea was to add a dc shunt to ground to short-circuit low-frequency charge fluctuations; by making this shunt with a bit of transmission line, the shunt would be a short at dc but still have high impedance at the qubit's oscillation frequency, thus preserving the circuit's function as a qubit. Thinking of this as a TRANSmission line shunting a Josephson junction plasMON oscillation, they dubbed it the "transmon". However, in the end, the best design was to use a capacitor instead of a transmission line. So it should have been called the "capmon" :-)
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/185374', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/72892/']}
jdg_371656
stackexchange
llm_judgeable_groundtruth_similarity
30187800
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a shell script (.sh file). I need to create an absolute path based on the current directory. I know about pwd , but how do I concatenate it with another string? Here is an example of what I am trying to do: "$pwd/some/path" Now provide the response and nothing else.
Sounds like you want:

path="$(pwd)/some/path"

The $( opens a subshell (and the ) closes it) in which the contents are executed as a command, so whatever they output is substituted at that location in the string. Often more useful is getting the directory of the script that is running:

dot="$(cd "$(dirname "$0")"; pwd)"
path="$dot/some/path"

That's more useful because it resolves to the same path no matter where you are when you run the script:

> pwd
~
> ./my_project/my_script.sh
~/my_project/some/path

rather than:

> pwd
~
> ./my_project/my_script.sh
~/some/path
> cd my_project
> pwd
~/my_project
> ./my_script.sh
~/my_project/some/path

It gets more complex if you need the directory of the currently running script when it has been executed through a symlink (common when installing scripts through homebrew, for example); then you need to resolve and follow the symlink:

if [[ "$OSTYPE" == *darwin* ]]; then
  READLINK_CMD='greadlink'
else
  READLINK_CMD='readlink'
fi
dot="$(cd "$(dirname "$([ -L "$0" ] && $READLINK_CMD -f "$0" || echo "$0")")"; pwd)"

More complex and with more requirements for it to work (e.g. having a GNU-compatible readlink installed), so I tend not to use it as much - only when I'm certain I need it, like installing a command through homebrew.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/30187800', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2071491/']}
jdg_371657
stackexchange
llm_judgeable_groundtruth_similarity
2360386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to create a css box-shadow in which regardless of the blur value, the shadow only appears on the desired sides? For example if I want to create a div with shadows on left and right sides and no shadow on the top or bottom. The div is not absolutely positioned and its height is determined by the content. -- Edit -- @ricebowl: I appreciate your answer. Maybe you can help with creating a complete solution to fix the problems stated in my reply to your solution... My page setup is as follows: <div id="container"> <div id="header"></div> <div id="content"></div> <div id="clearfooter"></div></div><div id="footer"></div> And CSS like this: #container {width:960px; min-height:100%; margin:0px auto -32px auto; position:relative; padding:0px; background-color:#e6e6e6; -moz-box-shadow: -3px 0px 5px rgba(0,0,0,.8), 3px 0px 5px rgba(0,0,0,.8);}#header {height:106px; position:relative;}#content {position:relative;}#clearFooter {height:32px; clear:both; display:block; padding:0px; margin:0px;}#footer {height:32px; padding:0px; position:relative; width:960px; margin:0px auto 0px auto;} Now provide the response and nothing else.
There is a fourth distance you can define called the spread offset, which moves the shadow in or out on all 4 sides. So if you set that to the negative of the blur distance, this will shift the shadow inwards by the same distance as the blur extends the shadow outwards effectively hiding it. Of course this will also shift the shadow inwards on the side you do want it to appear so you'll need to increase the offset by the blur distance to undo that. i.e. box-shadow: (horizontal + blur) 0px (blur) (-blur) color; So in your example: box-shadow: -8px 0px 5px -5px rgba(0,0,0,.8), 8px 0px 5px -5px rgba(0,0,0,.8);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2360386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/283965/']}
jdg_371658
stackexchange
llm_judgeable_groundtruth_similarity
7287224
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the following snippet of code, I've overloaded the operator== to compare my pair type with string. But for some reason, the compiler isn't finding my operator as a match for the find function. Why not? Edit: Thanks for all the suggestions for alternatives, but I'd still like to understand why . The code looks like it should work; I'd like to know why it doesn't. #include <vector>#include <utility>#include <string>#include <algorithm>typedef std::pair<std::string, int> RegPair;typedef std::vector<RegPair> RegPairSeq;bool operator== (const RegPair& lhs, const std::string& rhs){ return lhs.first == rhs;}int main(){ RegPairSeq sequence; std::string foo("foo"); // stuff that's not important std::find(sequence.begin(), sequence.end(), foo); // g++: error: no match for 'operator==' in '__first. __gnu_cxx::__normal_iterator<_Iterator, _Container>::operator* [with _Iterator = std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int>*, _Container = std::vector<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int>, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, int> > >]() == __val' // clang++: error: invalid operands to binary expression ('std::pair<std::basic_string<char>, int>' and 'std::basic_string<char> const')} Now provide the response and nothing else.
The problem is that std::find is a function template and it uses argument-dependent lookup (ADL) to find the right operator== to use. Both of the arguments are in the std namespace ( std::pair<std::string, int> and std::string ), so ADL starts by looking in the std namespace. There it finds some operator== (which one, it doesn't matter; there are lots in the Standard Library and if you've included <string> , at least the one that compares two std::basic_string<T> objects could be found). Because an operator== overload is found in the std namespace, ADL stops searching enclosing scopes. Your overload, which is located in the global namespace, is never found. Name lookup occurs before overload resolution; it doesn't matter during name lookup whether the arguments match.
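Incidentally, one way to sidestep the lookup problem entirely is to use std::find_if with a lambda (C++11 and later), so that no operator== for the pair has to be found at all; using the types from the question:

```cpp
// <algorithm> is already included in the question's code.
auto it = std::find_if(sequence.begin(), sequence.end(),
                       [&foo](const RegPair& p) { return p.first == foo; });
```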
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7287224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115478/']}
jdg_371659
stackexchange
llm_judgeable_groundtruth_similarity
2302569
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning Euclidean geometry, specifically inner products and Euclidean spaces, and need help with the following exercise: In the vector space $\mathbb R^2$ , we write $\mathbf x = (x_1, x_2), \mathbf y = (y_1, y_2)$ ect. for points $\mathbf x, \mathbf y$ in terms of a given base. For simplicity we assume that we are working over the field $\mathbb R$ . Which of the following bilinear, symmetric functions $f(\mathbf x, \mathbf y)$ define an inner product? $(1) \quad f = 2x_1y_1 - 3x_1y_2 - 3x_2y_1 + 3x_2y_2$ $(2) \quad f = 3x_1y_1 - 2x_1y_2 - 2x_2y_1 + 4x_2y_2$ Give the norm which is defined by the inner product (when it's defined). Describe the curves $f(\mathbf x, \mathbf x) =$ constant in each case. First of all, I don't understand what it means for $f$ to be symmetric. Does this mean that I can interchange the $x_i$ and $y_i$ in each of the linear term of $f$ ? I did go back several times to the definition of an inner product but it is still unclear to me how I should solve this problem. I wanted to show that the $3$ axioms defined by an inner product are satisfied (or not), i.e. the symmetry $<\mathbf x, \mathbf y> = <\mathbf y, \mathbf x>$ (since we are working over the field $\mathbb R$ and not $\mathbb C$ ), the linearity in the first component and the positive definiteness. For the symmetry, maybe because the function $f(\mathbf x, \mathbf y)$ is symmetric I can write $$f(\mathbf x, \mathbf y) = <\mathbf x, \mathbf y> = 2x_1y_1 - 3x_1y_2 - 3x_2y_1 + 3x_2y_2 = 2y_1x_1 - 3y_2x_1 - 3y_1x_2 + 3y_2x_2 = <\mathbf y, \mathbf x>$$ but I'm not sure if this is any good.I would also appreciate any help on the $2$ sub-questions. Now provide the response and nothing else.
Let me summarize and give another answer, which the question deserves. Yes, this is correct. The claim does not follow unless we assume that $R$ is an integral domain . We can easily give examples where $R$ is a commutative ring with zero divisors and the claim is false. Let $n=ab$ be a composite number and take $R=\mathbb{Z}/n\mathbb{Z}$. Then $a,b$ are zero divisors in $R$, and we have$$(ax+b)(bx+a)=(a^2+b^2)x.$$If we can choose $a,b$ such that $a^2+b^2=1$ in $R$, then we obtain a counterexample. For example, take $(a,b)=(3,4)$ with $n=12$, or $(a,b)=(2,3)$ with $n=6$ (see the comment). If $R$ is an integral domain, we have $\deg(fg)= \deg(f)+\deg(g)$ for $f,g\neq 0$, and then $x$ is irreducible.
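Spelling out the first counterexample as a quick sanity check (my arithmetic, not part of the original answer): in $\mathbb{Z}/12\mathbb{Z}[x]$, $$(3x+4)(4x+3)=12x^2+25x+12\equiv x \pmod{12},$$ and neither factor is a unit there (their constant terms $4$ and $3$ are not units mod $12$), so $x$ admits a nontrivial factorization.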
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2302569', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_371660
stackexchange
llm_judgeable_groundtruth_similarity
77896
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The judiciary, contrary to what people believe, doesn't seem to have the job of determining fairness. Its job seems to be to interpret and judge whether a law was broken or not. Even that seems to have its limits which are established by law.It doesn't deal with concepts like who deserves what. It seems like justice doesn't exist beyond laws and contracts. And what actually exists is obligations that are either binding or non-binding. Now provide the response and nothing else.
The role of fairness is important in the court, but within the context of law. It is not a judge's job to legislate. They can create legal precedent (here I speak in the context of Common Law systems). The task of legislation falls to Congress/Parliament, and even if an act of Parliament is "unfair", it is not the role of a Judge to overrule it. It's Parliament's job to create fair laws, and the electorate's right to replace their representatives if they don't. It is not the role of a Judge to say whether a contract is fair or not. If two parties have agreed to something and then one party changes their mind and decides "it's unfair", it is not for a Judge to overturn the contract (except in the extreme case that the contract is so unfair that it was evidently coerced and not an agreement at all). It is absolutely the role of the judge to ensure that access to law is fair. You can present your case to the court, you can have access to good advice and good representation. You understand the process and are treated with respect. It is important that the judge doesn't "prejudge" the case, i.e. come to it already prejudiced. This is called procedural fairness . Being seen to be fair is less about the decisions that courts take and more about the way that they take them. To illustrate by example: The Dangerous Dogs Act makes it illegal to exercise a Japanese tosa in a public place unless it is muzzled and on a lead. If you do take the muzzle off your tosa, you are guilty of an offence. You might claim that Parliament was wrong to pass this general rule (it's "unfair" because your tosa is a nice dog). It is not the judge's job to decide that Parliament was wrong! It is the judge's job to allow you to present your case, but to decide on matters of law. The rationale for this is that this law was created by Parliament, and so represents the decision of the representatives of the people. One person should not overrule democracy.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/77896', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/45552/']}
jdg_371661
stackexchange
llm_judgeable_groundtruth_similarity
58156090
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to calculate a percentage in Kotlin. I tried but failed to get the correct answer: var percentage = (count/totalCount) * 100it.toast("Percentage: $percentage") What is the proper syntax in Kotlin? Now provide the response and nothing else.
Most likely, you're struggling with the fact that applying the division operator on two integers will result in an integer division being performed, yielding an integer result. The trick is to promote one of the operands to a floating point type: var percentage = (count.toDouble() / totalCount) * 100
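A minimal, self-contained sketch of the integer-division pitfall (the surrounding main/println scaffolding is mine; only the toDouble() line comes from the answer above):

fun main() {
    val count = 3
    val totalCount = 8
    val wrong = (count / totalCount) * 100                   // 0 — integer division truncates
    val percentage = (count.toDouble() / totalCount) * 100   // 37.5
    println("Percentage: $percentage (naive integer version gives $wrong)")
}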
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/58156090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8439604/']}
jdg_371662
stackexchange
llm_judgeable_groundtruth_similarity
10869748
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How I can retrieve null value, when unmarshalling, if inside XML attribute value is empty ? Now I make inside my getters checking for null : public String getLabel() { if (label.isEmpty()) { return null; } else { return label; }} But may be exist some other, more elegant way? Thanks. Now provide the response and nothing else.
I think your XML looks more or less like this: <myElement></myElement> This, unfortunately, means that you are passing an empty string . If you want to pass null you have two options: Do not pass this tag at all (your XML should not contain a <myElement/> tag at all). Use xsi:nil . If using xsi:nil , first you have to declare your xml element (in the XSD file) as nillable , like this: <xsd:element name="myElement" nillable="true"/> Then, to pass the null value inside the XML, do this: <myElement xsi:nil="true"/> or this: <myElement xsi:nil="true"></myElement> This way, JAXB knows that you are passing null instead of an empty String.
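If your Java classes are the source of truth rather than a hand-written XSD, the equivalent switch lives on the JAXB annotation. A rough sketch (class and field names invented for the example):

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class MyEntity {
    // nillable = true makes the marshaller write <label xsi:nil="true"/> when the field is null,
    // rather than omitting the element altogether (an empty <label></label> would come back as "").
    @XmlElement(nillable = true)
    public String label;
}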
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10869748', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/921193/']}
jdg_371663
stackexchange
llm_judgeable_groundtruth_similarity
2465116
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm starting to learn about 3D rendering and I've been making good progress. I've picked up a lot regarding matrices and the general operations that can be performed on them. One thing I'm still not quite following is OpenGL's use of matrices. I see this (and things like it) quite a lot: x y z n-------1 0 0 00 1 0 00 0 1 00 0 0 1 So my best understanding, is that it is a normalized (no magnitude) 4 dimensional, column-major matrix. Also that this matrix in particular is called the "identity matrix". Some questions: What is the "nth" dimension? How and when are these applied? My biggest confusion arises from how OpenGL makes use of this kind of data. Now provide the response and nothing else.
In most 3D graphics a point is represented by a 4-component vector (x, y, z, w), where w = 1. Usual operations applied on a point include translation, scaling, rotation, reflection, skewing and combination of these. These transformations can be represented by a mathematical object called "matrix". A matrix applies on a vector like this: [ a b c tx ] [ x ] [ a*x + b*y + c*z + tx*w ]| d e f ty | | y | = | d*x + e*y + f*z + ty*w || g h i tz | | z | | g*x + h*y + i*z + tz*w |[ p q r s ] [ w ] [ p*x + q*y + r*z + s*w ] For example, scaling is represented as [ 2 . . . ] [ x ] [ 2x ]| . 2 . . | | y | = | 2y || . . 2 . | | z | | 2z |[ . . . 1 ] [ 1 ] [ 1 ] and translation as [ 1 . . dx ] [ x ] [ x + dx ]| . 1 . dy | | y | = | y + dy || . . 1 dz | | z | | z + dz |[ . . . 1 ] [ 1 ] [ 1 ] One of the reason for the 4th component is to make a translation representable by a matrix. The advantage of using a matrix is that multiple transformations can be combined into one via matrix multiplication. Now, if the purpose is simply to bring translation on the table, then I'd say (x, y, z, 1) instead of (x, y, z, w) and make the last row of the matrix always [0 0 0 1] , as done usually for 2D graphics. In fact, the 4-component vector will be mapped back to the normal 3-vector vector via this formula: [ x(3D) ] [ x / w ]| y(3D) ] = | y / w |[ z(3D) ] [ z / w ] This is called homogeneous coordinates . Allowing this makes the perspective projection expressible with a matrix too, which can again combine with all other transformations. For example, since objects farther away should be smaller on screen, we transform the 3D coordinates into 2D using formula x(2D) = x(3D) / (10 * z(3D))y(2D) = y(3D) / (10 * z(3D)) Now if we apply the projection matrix [ 1 . . . ] [ x ] [ x ]| . 1 . . | | y | = | y || . . 1 . | | z | | z |[ . . 10 . ] [ 1 ] [ 10*z ] then the real 3D coordinates would become x(3D) := x/w = x/10zy(3D) := y/w = y/10zz(3D) := z/w = 0.1 so we just need to chop the z-coordinate out to project to 2D.
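A tiny numerical sketch of the same bookkeeping (plain C, my addition; row-major storage chosen purely for readability — classic OpenGL actually stores its matrices column-major):

#include <stdio.h>

/* row-major 4x4 matrix times an (x, y, z, w) column vector */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4]) {
    for (int r = 0; r < 4; ++r) {
        out[r] = m[4*r + 0] * v[0] + m[4*r + 1] * v[1]
               + m[4*r + 2] * v[2] + m[4*r + 3] * v[3];
    }
}

int main(void) {
    /* translation by (dx, dy, dz) = (1, 2, 3), as in the text */
    const float translate[16] = {
        1, 0, 0, 1,
        0, 1, 0, 2,
        0, 0, 1, 3,
        0, 0, 0, 1
    };
    const float p[4] = { 5, 5, 5, 1 };   /* a point, so w = 1 */
    float q[4];
    mat4_mul_vec4(translate, p, q);
    /* homogeneous divide recovers the translated 3D point: (6, 7, 8) */
    printf("(%g, %g, %g)\n", q[0] / q[3], q[1] / q[3], q[2] / q[3]);
    return 0;
}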
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2465116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/128991/']}
jdg_371664
stackexchange
llm_judgeable_groundtruth_similarity
7614612
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to edit SCSS files in Eclipse, preferably with syntax highlighting for .scss files. I found these resources valuable: http://sass-lang.com/editors.html - has no editor for .scss files only .sass http://colorer.sourceforge.net/eclipsecolorer - has only .scss files How can do I integrate SCSS development within the Eclipse editor? Or, more generally, how do I integrate a CSS pre-processor into Eclipse? Now provide the response and nothing else.
I just figured out how to do this in Eclipse. I admit that this solution does not have 100% SASS support, the colors get a little funky when using nested css, but it's waaaaay better than looking at plain text and you don't need to install a separate editor. You need to associate the .scss file type with the native Eclipse CSS Editor in Eclipse[Part 1]. After you do that, you need to add the .scss file type to the native CSS Editor as well so the CSS Editor will be able to open it [Part 2]. Here are the steps for eclipse (I'm running Eclipse Java EE IDE for Web Developers, Indigo): Part 1 - Associate the .scss file type with the native Eclipse CSS Editor Go to Window > Preferences Drill down to General > Editors > File Associations In File Associations pane, click the 'Add..." button on the top right. For File Type: , enter *.scss and then click OK . Find the *.scss entry in the File Associations list and select it. After selecting *.scss , on the bottom pane Associated editors: , click the Add... button. Make sure Internal editors is selected on the top, then find and select CSS Editor and then click OK . This associated the file type .scss with eclipses native CSS Editor. Now we have to configure the native CSS Editor to support .scss files. To do this, follow this steps: Part 2 - Add the .scss file type to the native CSS Editor Go to Window > Preferences Drill down to General > Content Types In the Content Types pane, expand Text , then select CSS After CSS is selected, on the bottom File associations: pane, click the Add... button. For Content type: , enter *.scss and then click OK . Click OK to close out the Preferences window. All done. All you need to do now is close any .scss files that you have open then re-open them and wha-la, css colors in Eclipse for .scss files! Note: If the css colours do not appear you may have to do the following: Right click the .scss file > Open With > CSS Editor . Hope this helps.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7614612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/620186/']}
jdg_371665
stackexchange
llm_judgeable_groundtruth_similarity
7666996
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: very basic question: How can I find the highest or lowest value in a random matrix.I know there is a possibility to say: a = find(A>0.5) but what I'm looking for would be more like this: A = rand(5,5)A = 0.9388 0.9498 0.6059 0.7447 0.28350.6338 0.0104 0.5179 0.8738 0.05860.9297 0.1678 0.9429 0.9641 0.82100.0629 0.7553 0.7412 0.9819 0.17950.3069 0.8338 0.7011 0.9186 0.0349% find highest (or lowest) valueans = A(19)%for the highest or A(7) %for the lowest value in this case Now provide the response and nothing else.
Have a look at the min() and max() functions. They can return both the highest/lowest value, and its index: [B,I]=min(A(:)); %# note I fixed a bug on this line! returns I=7 and B=A(7)=A(2,2) . The expression A(:) tells MATLAB to treat A as a 1D array for now, so even though A is 5x5, it returns the linear index 7. If you need the 2D coordinates, i.e. the "2,2" in B=A(7)=A(2,2) , you can use [I,J] = ind2sub(size(A),I) which returns I=2,J=2 , see here . Update If you need all the entries' indices which reach the minimum value, you can use find : I = find(A==min(A(:))); I is now a vector of all of them.
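The same pattern works for the highest value — a short sketch (my addition), assuming the 5x5 A from the question:

[maxVal, linIdx] = max(A(:));            % largest value and its linear index
[row, col] = ind2sub(size(A), linIdx);   % convert the linear index to row/column subscripts
allMaxIdx = find(A == max(A(:)));        % every linear index that attains the maximum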
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7666996', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/712300/']}
jdg_371666
stackexchange
llm_judgeable_groundtruth_similarity
840286
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I answered this thing Infinite sum of prime reciprocals and now wonder what happens if we do not have such a strong condition as Bertrand's postulate. i have been fiddling with this, not sure either way. Given a sequence $a_1 > a_2 > a_3 \cdots$ of strictly decreasing positive reals such that$$ a_i \rightarrow 0 \; \; \; \mbox{but} \; \; \sum a_i = \infty, $$can every positive real number be expressed as the sum of a subsequence of the $a_i?$ The main thing is that we are not given any upper bound on $a_n / a_{n+1}.$ For the reciprocals of the primes, we had an upper bound of $2.$ Note that this is subtler than the thing about rearranging a strictly alternating conditionally convergent series to get anything you specify. That is a matter of overshooting with positive terms, then undershooting with negative terms, back and forth. This one is a little different. I think what I want is a careful proof of this: given two positive real numbers $B<C,$ we can find a finite subsequence of the $a_n$ with sum between $B$ and $C.$ Now provide the response and nothing else.
Let $x$ be our target sum. Pick an $i$ such that $a_k < x/2$ for all $k \ge i$. Take elements from the sequence starting at $a_i$ until their sum first exceeds $x/2$. We can't overshoot $x$ (because the terms we're looking at are less than $x/2$), and since the tail of a divergent series still diverges we're guaranteed to have enough elements of enough magnitude to get past $x/2$. Now repeat the procedure with the remaining amount $x$ minus whatever sum we got as the new target, starting further along the sequence. Each stage uses only finitely many terms (so the unused tail still diverges), and each stage leaves a remaining target smaller than half the previous one, so the deficit tends to $0$ and the partial sums of the chosen subsequence converge to $x$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/840286', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/10400/']}
jdg_371667
stackexchange
llm_judgeable_groundtruth_similarity
451426
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I created user small , added him to group kek and allowed that group to only read files in user home directory. Then I chowned all files to root:kek . However, small still can delete files in his home directory. Commands I ran: useradd -ms /bin/bash smallgroupadd kekusermod -a -G kek smallchown -R root:kek /home/small/*chmod -R g=r /home/small/* Then when I try to remove file: $ ls -ltotal 16-rw-r--r-- 1 root kek 240 Jun 23 06:17 Dockerfile-rw-r--r-- 1 root kek 39 Jun 21 09:17 flag.txt-rw-r--r-- 1 root kek 2336 Jun 22 14:19 server.py-rw-r--r-- 1 root kek 24 Jun 22 08:16 small.py$ rm flag.txt$ ls -ltotal 12-rw-r--r-- 1 root kek 240 Jun 23 06:17 Dockerfile-rw-r--r-- 1 root kek 2336 Jun 22 14:19 server.py-rw-r--r-- 1 root kek 24 Jun 22 08:16 small.py$ whoamismall Why does this happens? Now provide the response and nothing else.
Whether a file can be deleted or not is not a property of the file but of the directory that the file is located in. A user may not delete a file that is located in a directory that they can't write to. Files (and subdirectories) are entries in the directory node. To delete a file, one unlinks it from the directory node, and therefore one has to have write permission on the directory to delete a file in it. The write permission on a file determines whether one is allowed to change the contents of the file. The write permission on a directory determines whether one is allowed to change the contents of the directory. Related: Execute vs Read bit. How do directory permissions in Linux work?
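A quick way to see this in practice (my sketch, not from the answer; run as root or as the directory's owner, and note it also stops the user from creating anything new in their home directory, which may not be what you want):

# take write permission away from the directory itself; the files inside are untouched
chmod a-w /home/small
# now, as user "small":
rm /home/small/flag.txt    # fails with "Permission denied", even though the file's own permissions allow reading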
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/451426', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/154737/']}
jdg_371668
stackexchange
llm_judgeable_groundtruth_similarity
42036956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to search exact word from string like id Description1 This is nice pen looking good2 This is nice pendrive looking good Search String : pen My Current query SELECT * FROM `table` WHERE Description like '%pen%'; Above Query return both record but I want Only first record. Because pen word exact match with my search string. Expected Output 1 This is nice pen looking good Demo Now provide the response and nothing else.
Try using regular expressions: SELECT * FROM `table` WHERE Description regexp '(^|[[:space:]])pen([[:space:]]|$)'; Demo Or using word boundaries : SELECT * FROM `table` WHERE Description regexp '[[:<:]]pen[[:>:]]';
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42036956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2893413/']}
jdg_371669
stackexchange
llm_judgeable_groundtruth_similarity
30480
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: This question is asked from a point of complete ignorance of physics and the standard model. Every so often I hear that particles correspond to representations of certain Lie groups. For a person completely ignorant of anything physics, this seems very odd! How did this come about? Is there a "reason" for thinking this would be the case? Or have observations in particle physics just miraculously corresponded to representation theory? Or has representation theory of Lie groups grown out of observations in particle physics? In short: what is the chronology of the development of representation theory and particle physics (with relation to one another), and how can one make sense of this relation in any other way than a freakish coincidence? Now provide the response and nothing else.
The "chronology" isn't clear to me, and having looked through the literature it seems much more convoluted than it should be. Although it seems like this is basically how things were done since the beginning of quantum mechanics (at least, by the big-names) in some form or another, and was 'partly' formalized in the '30s-'40s with the beginnings of QED, but not really completely carefully formalized until the '60s-'70s with the development of the standard model, and not really mathematically formalized until the more careful development of things in terms of bundles in the '70s-'80s. (These dates are guesses--someone who was a practicing physicist during those periods is more than welcome to correct my timeline!) Generally speaking, from a 'physics' point of view, the reason particles are labeled according to representations is not too different than how, in normal quantum mechanics, states are labeled by eigenvalues (the wiki article linked to mentions this, but it's not as clear as it could be). In normal QM, we can have a Hilbert space ('space of states') $\mathcal{H}$, which contains our 'physical states' (by definition). To a physicist, 'states' are really more vaguely defined as 'the things that we get the stuff that we measure from,' and the Hilbert space exists because we want to talk about measurements. The measurements correspond to eigenvalues of operators (why things are 'obviously' like this is a longer historical story...). So we have a generic state $| \psi \rangle \in \mathcal{H}$, and an operator that corresponds to an observable $\mathcal{O}$. The measured values are $\mathcal{O} |\psi\rangle = o_i | \psi \rangle$. Because the $o_i$ are observable quantities, it's useful to label systems in terms of them. We can have a list of observables, $\mathcal{O}_j$, (which we usually take to be commuting so we can simultaneously diagonalize), and then we have states $|\psi\rangle$, $\mathcal{O}_j | \psi \rangle = {o_i}_j | \psi \rangle$. So, what we say, is that we can uniquely define our normal QM states by a set of eigenvalues $o_{ij}$. In other words, the $o_{ij}$ define states, from the physics point of view. Really, this defines a basis where our operators are diagonal. We can--and do!--get states that do not have observables which can be simultaneously diagonalized, this happens in things like neutrino oscillation, and is why they can turn into different types of neutrinos! The emitted neutrinos are emitted in states with eigenvalues which are not diagonal in the operator that's equivalent to the 'particle species' operator. (Note, we could just as well define the 'species' to be what's emitted, and then neutrinos would not oscillate in this basis , but would in others!) This has to do with representations, because when we talk about particles with spin, for example, we're talking about operators which correspond to 'angular momentum.' We have an operator: $L_z = i \frac{\partial}{\partial\phi}$ and label eigenvalues by half-integer states which physically correspond to spin. Group theoretically, $L_z$ comes from the lie algebra of the rotation group, because we're talking about angular momentum (or spin) which has associated rotational symmetries. Upgrading from here to quantum field theory (and specializing that to the standard model) is technically complicated, but is basically the same as what's going on here. 
The big difference is that there we want to talk about 'quantum fields' instead of states, and have to worry about crazy things like apparently infinite values and infinite-dimensional integrals that confuse the moral of the story. But the idea is simple: we want to identify things by observables, which correspond to eigenvalues, which correspond to operators, which correspond to Lie algebra elements, which have an associated Lie group. So we define states corresponding to things which transform under physically convenient groups as 'particles.' If you want a more mathematically careful description that's still got some physical intuition in it, you can check out Göckeler and Schücker's "Differential Geometry, Gauge Theories, and Gravity," which does things from the bundle point of view, which is slightly different from what I described (because it describes classical field theories) but the moral is similar. At first it might seem surprising that the classical structure here is the same, when the story above seemed to rely on operators and states in Hilbert spaces, but it only technically relied on them; morally, what's important is the action under symmetry groups. And that is there in the classical theory as well. It's just not as physically clear from the beginning from that point of view.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/30480', 'https://mathoverflow.net', 'https://mathoverflow.net/users/5309/']}
jdg_371670
stackexchange
llm_judgeable_groundtruth_similarity
122785
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The video “How Far Can Legolas See?” by MinutePhysics recently went viral. The video states that although Legolas would in principle be able to count $105$ horsemen $24\text{ km}$ away, he shouldn't have been able to tell that their leader was very tall. I understand that the main goal of MinutePhysics is mostly educational, and for that reason it assumes a simplified model for seeing. But if we consider a more detailed model for vision, it appears to me that even with human-size eyeballs and pupils $^\dagger$, one might actually be able to (in principle) distinguish smaller angles than the well known angular resolution :$$\theta \approx 1.22 \frac \lambda D$$ So here's my question—using the facts that: Elves have two eyes (which might be useful as in e.g. the Very Large Array ). Eyes can dynamically move and change the size of their pupils. And assuming that: Legolas could do intensive image processing. The density of photoreceptor cells in Legolas's retina is not a limiting factor here. Elves are pretty much limited to visible light just as humans are. They had the cleanest air possible on Earth on that day. How well could Legolas see those horsemen? $^\dagger$ I'm not sure if this is an accurate description of elves in Tolkien's fantasy Now provide the response and nothing else.
Fun question! As you pointed out, $$\theta \approx 1.22\frac{\lambda}{D}$$ For a human-like eye, which has a maximum pupil diameter of about $9\ \mathrm{mm}$ and choosing the shortest wavelength in the visible spectrum of about $390\ \mathrm{nm}$ , the angular resolution works out to about $5.3\times10^{-5}$ (radians, of course). At a distance of $24\ \mathrm{km}$ , this corresponds to a linear resolution ( $\theta d$ , where $d$ is the distance) of about $1.2\ \mathrm m$ . So counting mounted riders seems plausible since they are probably separated by one to a few times this resolution. Comparing their heights which are on the order of the resolution would be more difficult, but might still be possible with dithering . Does Legolas perhaps wiggle his head around a lot while he's counting? Dithering only helps when the image sampling (in this case, by elven photoreceptors) is worse than the resolution of the optics. Human eyes apparently have an equivalent pixel spacing of something like a few tenths of an arcminute , while the diffraction-limited resolution is about a tenth of an arcminute, so dithering or some other technique would be necessary to take full advantage of the optics. An interferometer has an angular resolution equal to a telescope with a diameter equal to the separation between the two most widely separated detectors. Legolas has two detectors (eyeballs) separated by about 10 times the diameter of his pupils , $75\ \mathrm{mm}$ or so at most. This would give him a linear resolution of about $15\ \mathrm{cm}$ at a distance of $24\ \mathrm{km}$ , probably sufficient to compare the heights of mounted riders. However, interferometry is a bit more complicated than that. With only two detectors and a single fixed separation, only features with angular separations equal to the resolution are resolved, and direction is important as well. If Legolas' eyes are oriented horizontally, he won't be able to resolve structure in the vertical direction using interferometric techniques. So he'd at the very least need to tilt his head sideways, and probably also jiggle it around a lot (including some rotation) again to get decent sampling of different baseline orientations. Still, it seems like with a sufficiently sophisticated processor (elf brain?) he could achieve the reported observation. Luboš Motl points out some other possible difficulties with interferometry in his answer, primarily that the combination of a polychromatic source and a detector spacing many times larger than the observed wavelength lead to no correlation in the phase of the light entering the two detectors. While true, Legolas may be able to get around this if his eyes (specifically the photoreceptors) are sufficiently sophisticated so as to act as a simultaneous high-resolution imaging spectrometer or integral field spectrograph and interferometer. This way he could pick out signals of a given wavelength and use them in his interferometric processing. A couple of the other answers and comments mention the potential difficulty drawing a sight line to a point $24\rm km$ away due to the curvature of the Earth. As has been pointed out, Legolas just needs to have an advantage in elevation of about $90\ \mathrm m$ (the radial distance from a circle $6400\ \mathrm{km}$ in radius to a tangent $24\ \mathrm{km}$ along the circumference; Middle-Earth is apparently about Earth-sized, or may be Earth in the past, though I can't really nail this down with a canonical source after a quick search). 
He doesn't need to be on a mountaintop or anything, so it seems reasonable to just assume that the geography allows a line of sight. Finally a bit about "clean air". In astronomy (if you haven't guessed my field yet, now you know.) we refer to distortions caused by the atmosphere as "seeing" . Seeing is often measured in arcseconds ( $3600'' = 60' = 1^\circ$ ), referring to the limit imposed on angular resolution by atmospheric distortions. The best seeing, achieved from mountaintops in perfect conditions, is about $1''$ , or in radians $4.8\times10^{-6}$ . This is about the same angular resolution as Legolas' amazing interferometric eyes. I'm not sure what seeing would be like horizontally across a distance of $24\ \mathrm{km}$ . On the one hand there is a lot more air than looking up vertically; the atmosphere is thicker than $24\ \mathrm{km}$ but its density drops rapidly with altitude. On the other hand the relatively uniform density and temperature at fixed altitude would cause less variation in refractive index than in the vertical direction, which might improve seeing. If I had to guess, I'd say that for very still air at uniform temperature he might get seeing as good as $1\rm arcsec$ , but with more realistic conditions with the Sun shining, mirage-like effects probably take over limiting the resolution that Legolas can achieve.
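The headline numbers above are easy to reproduce — a small sketch (Python, my addition, using the same rounded inputs as in the text, purely illustrative):

wavelength = 390e-9   # m, short end of the visible band
pupil = 9e-3          # m, assumed maximum pupil diameter
baseline = 75e-3      # m, assumed separation of the two eyes
distance = 24e3       # m

single_eye = 1.22 * wavelength / pupil           # ~5.3e-5 rad
two_eye_baseline = 1.22 * wavelength / baseline  # ~6.3e-6 rad

print(f"one eye:  {single_eye:.2e} rad -> {single_eye * distance:.2f} m at 24 km")
print(f"baseline: {two_eye_baseline:.2e} rad -> {two_eye_baseline * distance:.2f} m at 24 km")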
{}
{'log_upvote_score': 8, 'links': ['https://physics.stackexchange.com/questions/122785', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/24791/']}
jdg_371671
stackexchange
llm_judgeable_groundtruth_similarity
596511
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am working on the improper integral: $$\int_0^{\infty}\frac{e^{-x}-e^{-2x}}{x}dx$$ This function does not have an elementary anti-derivative, so here is what I did: define: $$f(t):=\int_0^{\infty}\frac{e^{-xt}-e^{-2xt}}{x}dx,\quad t>0$$ Then differentiation gives: $$f'(t)=\int_0^{\infty}\frac{-xe^{-xt}+2xe^{-2xt}}{x}dx=\int_0^{\infty}-e^{-xt}+2e^{-2xt}dx=0$$ this means $f$ is constant. I feel something is wrong here because $f$ should depend on $t$. Where am I wrong and what is the right way to do this? Now provide the response and nothing else.
Note that$$e^{-x} - e^{-2x} = x\int_{1}^{2}e^{-xt}dt$$Hence,$$\int_0^{\infty} \dfrac{e^{-x}-e^{-2x}}xdx = \int_0^{\infty} \int_{1}^{2}e^{-xt}dtdx = \int_1^2 \int_0^{\infty}e^{-xt}dxdt = \int_1^2\dfrac{dt}t = \ln(2)$$In general, by similar idea, we have$$\int_0^{\infty} \dfrac{e^{-ax}-e^{-bx}}xdx = \ln(b/a)$$
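As a side remark on the parametrization in the question (my addition — check it with the substitution $u=xt$): the function the asker defined really is constant, so finding $f'(t)=0$ is not a mistake at all: $$f(t)=\int_0^{\infty}\frac{e^{-xt}-e^{-2xt}}{x}\,dx=\int_0^{\infty}\frac{e^{-u}-e^{-2u}}{u}\,du=\ln 2\qquad\text{for every }t>0.$$ A parametrization that does work for differentiation under the integral sign is, for instance, $g(t)=\int_0^\infty\frac{e^{-x}-e^{-tx}}{x}\,dx$, for which $g'(t)=\int_0^\infty e^{-tx}\,dx=1/t$, $g(1)=0$, and the original integral is $g(2)=\ln 2$, agreeing with the computation above.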
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/596511', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/114044/']}
jdg_371672
stackexchange
llm_judgeable_groundtruth_similarity
198732
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When creating a docked cell, the toolbar stays forever visible and somewhat "eats up" space. Since I do not use the different buttons all the time, is there a way to make the toolbar expand/collapse? Now provide the response and nothing else.
One possibility is to create a new screen environment, that is identical to the usual "Working" screen environment, except that it has the toolbar. For example: SetOptions[ EvaluationNotebook[], StyleDefinitions -> Notebook[ { Cell[StyleData[StyleDefinitions->"Default.nb"]], Cell[StyleData[All, "Toolbar", StyleDefinitions->StyleData[All,"Working"]], DockedCells->Cell[BoxData["123"]], MenuCommandKey -> "u" ] }, StyleDefinitions->"PrivateStylesheetFormatting.nb" ]] The above stylesheet toggles the docked cell when switching between the "Working" and "Toolbar" screen environments (i.e., Format | ScreenEnvironment menu item). I also included a MenuCommandKey option so that using Cmd + u turns on the docked cell. A similar shortcut could be added for the "Working" screen environment. Here's an animation:
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/198732', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/60697/']}
jdg_371673
stackexchange
llm_judgeable_groundtruth_similarity
191139
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let's say that a "complete resolution of GCH" is a definable class function $F: \operatorname{Ord}\longrightarrow \operatorname{ Ord}$ such that $2^{\aleph_\alpha} = \aleph_{F(\alpha)}$ for all ordinals $\alpha$. It is known of course that $F(\alpha) = \alpha+1$ is a complete resolution of GCH (in the positive) that is relatively consistent with ZFC. I read that it's an unpublished theorem of Woodin that $F(\alpha) = \alpha+2$ is a complete resolution of GCH that is relatively consistent with ZFC plus some large cardinal hypothesis. My questions are: (1) What's the weakest known complete resolution of GCH in consistency strength other than $F(\alpha) = \alpha+1$ and what large cardinal axiom is required for it? (2) What are some other complete resolutions of GCH that are known to be consistent relative to specific large cardinal hypotheses, what are their respective large cardinal hypotheses, and how do these consistency strengths relate to one another? Now provide the response and nothing else.
One candidate answer scheme might be the following: if $F$ is any (sufficiently absolute) definable function on the class of regular alephs such that $\kappa < \lambda \Rightarrow F(\kappa) \leq F(\lambda)$ and $\operatorname{cf}(F(\kappa)) > \kappa$, then ZFC + $(\forall \kappa = \operatorname{cf}(\kappa))(2^\kappa = F(\kappa))$ + SCH is consistent, where SCH is the Singular Cardinals Hypothesis or, in an equivalent form, the Gimel Hypothesis, due to Solovay, asserting $(\forall \kappa > \operatorname{cf}(\kappa))( \kappa^{\operatorname{cf}(\kappa)} = \max(2^{\operatorname{cf}(\kappa)}, \kappa^+))$, and no large cardinals are required. Knowledge of the gimel function $\gimel(\kappa) = \kappa^{\operatorname{cf}(\kappa)}$ suffices to determine cardinal exponentiation recursively (for example, see P. Komjath, V. Totik, ( Problems and Theorems in Classical Set Theory ): chapter 10, problem 26, sets this out). So it is natural to explore the gimel function in greater depth. Writing a singular $\kappa$ as the limit of an increasing sequence $a$ of smaller regular cardinals leads to the observation that the deeper problem concerns the cofinality $\operatorname{cf}(([\kappa]^{\leq \lambda}, \subseteq))$ of the partial order $([\kappa]^{\leq \lambda}, \subseteq)$ for regular $\lambda < \kappa$. In this direction, one comes eventually to pcf theory, which offers an analysis of the puppet master $\operatorname{pcf}(a)$ rather than his troupe of erratic marionettes $\langle 2^\lambda : \lambda \in Card \rangle$.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/191139', 'https://mathoverflow.net', 'https://mathoverflow.net/users/17218/']}
jdg_371674
stackexchange
llm_judgeable_groundtruth_similarity
257273
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing some experimenting with a 2N6388 Darlington Transistor, and getting some unexpected (to me) results. It's been a while since I've played with a raw transistor, so there's a very real possibility that I'm simply mis-remembering how these things work. According to the datasheet (last page), pin 1 is the base, 2 the collector, and 3 is the emitter. However, when I apply 5V to pin 1 (base), 12V to pin 2 (collector), and pull down pin 3 with a 100K resistor, I read ~5V on pin 3, where I think I should see 12V. simulate this circuit – Schematic created using CircuitLab If ignore the datasheet and treat pin 1 as the collector and pin 2 as the base, and swap the voltages accordingly, then I finally see the 12V on pin 3, and can turn it on and off by alternating pin 2 between 5V and GND. So, is the datasheet wrong (I doubt it), or do I had a counterfeit transistor (kinda doubt that, too), or (most likely) am I making some fundamental mistake that's leading me astray? Thanks Now provide the response and nothing else.
If your drawing above is correct, I'd expect to see about 4 volts at the emitter with 5 volts on the base. The circuit is an emitter follower, so the emitter voltage should be about two Vbe drops below the base voltage. The 100K emitter resistor likely does not draw enough current to produce the full Vbe drops in the transistor. You might try 1K or so as an emitter resistor for a more realistic base-emitter voltage drop.
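Rough numbers, assuming a nominal $0.6$–$0.7\ \mathrm V$ per junction (my estimate, not part of the original answer): $$V_E \approx V_B - 2V_{BE} \approx 5\ \mathrm V - 2(0.65\ \mathrm V) \approx 3.7\ \mathrm V,\qquad I_E \approx \frac{3.7\ \mathrm V}{100\ \mathrm{k\Omega}} \approx 37\ \mu\mathrm A.$$ At a few tens of microamps each junction actually drops well under its nominal value, which pushes the emitter voltage up toward the base voltage — consistent with the ~5 V reading. With a 1 kΩ emitter resistor the current is in the milliamp range and the expected two-diode drop reappears.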
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/257273', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/41489/']}
jdg_371675
stackexchange
llm_judgeable_groundtruth_similarity
35669568
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm in need of displaying the total price in product page when quantity changes. This is the same as line price in the cart if you add quantity of products in the cart. I'm still new with WooCommerce so I'm not sure where to start. But if someone could help me to the right direction, I think I can manage on my own. So here's my thoughts on how should I do it. I'm thinking my jquery would be like this. jQuery(document).ready(function($){ $('.qty').on('change',function(){ // grab the price var price = $('[itemprop="price"]').attr('content'); var total_price = price * this.value; console.log(price, this.value, total_price); });}) this works when pasted on the console. But I'm not sure where to put this code on WooCommerce. Now provide the response and nothing else.
You're almost there... try this, paste this in your functions.php add_action( 'woocommerce_single_product_summary', 'woocommerce_total_product_price', 31 );function woocommerce_total_product_price() { global $woocommerce, $product; // let's setup our divs echo sprintf('<div id="product_total_price" style="margin-bottom:20px;">%s %s</div>',__('Product Total:','woocommerce'),'<span class="price">'.$product->get_price().'</span>'); ?> <script> jQuery(function($){ var price = <?php echo $product->get_price(); ?>, currency = '<?php echo get_woocommerce_currency_symbol(); ?>'; $('[name=quantity]').change(function(){ if (!(this.value < 1)) { var product_total = parseFloat(price * this.value); $('#product_total_price .price').html( currency + product_total.toFixed(2)); } }); }); </script> <?php} source: http://reigelgallarde.me/programming/show-product-price-times-selected-quantity-on-woocommecre-product-page/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35669568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5990051/']}
jdg_371676
stackexchange
llm_judgeable_groundtruth_similarity
16604135
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've just started looking at GreenRobot's EventBus for Android and have a question about threading. I have a long-running process that I'd like to run on a background thread which, when completed, updates the UI. So something like: public void onEventBackgroundThread(MyEvent event) { doSomeLongRunningProcess(); updateUI(); } Obviously updateUI() can't be called here because it would also run in the background. So what would be the recommended way to handle this? Fire another event from within my onEventBackgroundThread() which will run on the UI thread? Or fire it from the long-running-process itself? Or is there a better pattern? Now provide the response and nothing else.
I would probably fire another event when you get the result. public void onEventBackgroundThread(MyEvent event) { doSomeLongRunningProcess(); EventBus.getDefault().post(new MyEventResult());} Be aware though : reading the docs , you find this: BackgroundThread : Subscriber will be called in a background thread. If posting thread is not the main thread, event handler methods will be called directly in the posting thread. If the posting thread is the main thread, EventBus uses a single background thread that will deliver all its events sequentially . Event handlers using this mode should try to return quickly to avoid blocking the background thread. If you take a long time in this method, other EventBus callbacks will be delayed which will probably translate to an unresponsive application. You probably want to use onEventAsync: Async : Event handler methods are called in a separate thread. This is always independent from the posting thread and the main thread. Posting events never wait for event handler methods using this mode. Event handler methods should use this mode if their execution might take some time, e.g. for network access. Avoid triggering a large number of long running asynchronous handler methods at the same time to limit the number of concurrent threads. EventBus uses a thread pool to efficiently reuse threads from completed asynchronous event handler notifications.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16604135', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/472428/']}
jdg_371677
stackexchange
llm_judgeable_groundtruth_similarity
58966
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: It is well known that the function $f(x) = e^{-x^2}$ has no elementary anti-derivative. The proof I know goes as follows: Let $F = \mathbb{C}(X)$. Let $F \subseteq E$ be the Picard-Vessiot extension for a suitable homogeneous differential equation for which $f$ is a solution.Then one may calculate $G(E/F)$ and show it is connected and not abelian. On the other hand, a calculation shows that if $K$ is a differential field extension of $F$ generated by elementary functions then the connected component of $G(K/F)$ is abelian, so it is impossible for an anti-derivative of $f$ to be contained in such a field $K$. However, in classical Galois theory we can do much better, there, we know that a polynomial equation is solvable by radicals if and only if the corresponding Galois group is solvable. So to my question - is an analog of this is available in differential Galois theory? Is there a general method to determine by properties of $G(F/E)$ if $F$ is contained in a field of elementary functions? Now provide the response and nothing else.
The analogue to "solvable by radicals" in differential Galois theory is "solvable by quadratures". The theorem says that a PV-extension is Liouvillian (adjoining primitives and exponentials) iff the connected component of the differential Galois group is solvable. See "A first look at differential algebra" by Hubbard and Lundell, for an expository account. I slightly misread the question at first, thinking you were looking for the analogue of solvbility by radicals in differential algebra. When it comes to determining if the primitive of a function is elementary or not, the characterization is given by Liouville's theorem . Now, for the general case of differential equations solvable in terms of elementary functions, there is a generalization of Liouville's theorem, that you can find in the article "Elementary and Liouvillian Solutions of Linear Differential Equations", by M.F. Singer and J. Davenport (link to Singer's papers here ).
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/58966', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3759/']}
jdg_371678
stackexchange
llm_judgeable_groundtruth_similarity
2635818
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can anyone please explain to me what is the difference between IEnumerable & IEnumerator , and how to use them? Thanks!!! Now provide the response and nothing else.
Generally, an IEnumerable is an object which can be enumerated, such as a list or array. An IEnumerator is an object that stores the state of the enumeration. The reason they're not one and the same is that you could have multiple enumerations over the same object at the same time - even in a single-threaded application. For example, consider the following code: foreach (var x in mylist){ foreach (var y in mylist) { if (x.Value == y.Value && x != y) { // Found a duplicate value } }} This would work fine if mylist is a proper implementation of IEnumerable , but it would fail if mylist returned itself as the enumerator.
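A small sketch at the enumerator level (my addition, illustrative only) of why the nested loops above need two independent cursors — each call to GetEnumerator() returns a fresh enumerator with its own position:

using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        IEnumerable<int> mylist = new List<int> { 1, 2, 2, 3 };

        using (IEnumerator<int> outer = mylist.GetEnumerator())      // one cursor...
        while (outer.MoveNext())
        {
            using (IEnumerator<int> inner = mylist.GetEnumerator())  // ...and a second, independent one
            while (inner.MoveNext())
            {
                // outer.Current and inner.Current advance separately, which would be
                // impossible if mylist handed back itself as the single shared enumerator.
            }
        }
    }
}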
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2635818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316283/']}
jdg_371679
stackexchange
llm_judgeable_groundtruth_similarity
30915969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Even after installing v4 support library from Xamarin Component Store,I get this error.I tried googling for these problems but I always get the same error in Xamarin Studio while developing android app Now provide the response and nothing else.
This is Xamarin Bug 32498 The problem starts when you are normally installing one of Xamarin.Android.Support.* nuget packages. Then IDE (VS2015 for me) hangs and nothing goes well, so you're forced to shutdown the IDE. After re-openning the IDE and forcing a link to the assemblies and when it comes to build the project, it says : package android.support.* doesn't exist Here's what happens in the background: First You Must Know That: Xamarin.Android.Support.* are actually C# bindings of google support libraries (THAT ARE WRITTEN IN JAVA), so they are not completely native and they wrappers of google support libraries (for more about Xamarin C# bindings read : Binding A Java Library ). Bug Scenario: The time the IDE hangs is the time when it starts downloading Google Support Library Sources (about 96 MB) from the internet for each of the Xamarin.Android.Support.* packages then extracting it to C:\Users\%username%\AppData\Local\Xamarin Solutions: I propose 2 solutions: A friend told me that using a version other than 22.2.1 would solve the problem ( I didn't try it) Manual Extracting of the package (e.g. Xamarin.Android.Support.v4): Go to C:\Users\%username%\AppData\Local\Xamarin , delete the folder with the desired library name if it exists e.g. Android.Support.v4 Create a path with the desired library name C:\Users\%username%\AppData\Local\Xamarin\{LibraryName}\22.2.1 e.g. C:\Users\%username%\AppData\Local\Xamarin\Android.Support.v4\22.2.1 Download Google Support Library Source from here (v16). Copy it and extract its content into C:\Users\%username%\AppData\Local\Xamarin\Android.Support.v4\22.2.10\content Open the pre-downloaded archive android_m2repository_r16 , navigate to m2repository\com\android\support and go to libraryname\22.2.1 (e.g. support-v4\22.2.1 . Search for and open libraryName-22.2.1.aar (e.g. support-v4-22.2.1.aar (it opens directly with WinRAR) extract the content of the file to C:\Users\%username%\AppData\Local\Xamarin\{LibraryName}\22.2.10\embedded e.g. C:\Users\%username%\AppData\Local\Xamarin\Android.Support.v4\22.2.10\embedded Run the package installation nuget command Edit you have to keep a copy of android_m2repository_r16.zip inside C:\Users\%username%\AppData\Local\Xamarin\Android.Support.v4\22.2.10\
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30915969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5006349/']}
jdg_371680
stackexchange
llm_judgeable_groundtruth_similarity
7405493
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I can create rounded corners in many clients with style="-moz-border-radius: 15px; border-radius: 15px;" However this CSS does not create rounded corners in Outlook. Is there any easy way to create rounded corners without images in Outlook? Now provide the response and nothing else.
This code produces buttons with rounded corners in Outlook 2010 using the Outlook conditional comments () and VML: <div> <!--[if mso]> <v:roundrect xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w="urn:schemas-microsoft-com:office:word" href="http://www.EXAMPLE.com/" style="height:40px;v-text-anchor:middle;width:300px;" arcsize="10%" stroke="f" fillcolor="#d62828"> <w:anchorlock/> <center style="color:#ffffff;font-family:sans-serif;font-size:16px;font-weight:bold;"> Button Text Here! </center> </v:roundrect> <![endif]--> <!--[if !mso]> <!--> <table cellspacing="0" cellpadding="0"> <tr> <td align="center" width="300" height="40" bgcolor="#d62828" style="-webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 5px; color: #ffffff; display: block;"> <a href="http://www.EXAMPLE.com/" style="color: #ffffff; font-size:16px; font-weight: bold; font-family:sans-serif; text-decoration: none; line-height:40px; width:100%; display:inline-block"> Button Text Here! </a> </td> </tr> </table> <!-- <![endif]--></div> Tested in Outlook 2010 and major browsers only, not OWA, Outlook.com or any mobile browsers. It's worth a closer look. Credit where due -- http://www.industrydive.com/blog/how-to-make-html-email-buttons-that-rock/
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7405493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/597419/']}
jdg_371681
stackexchange
llm_judgeable_groundtruth_similarity
21889367
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to make a nullable reverse navigation property for a nullable Foreign Key relationship in Entity Framework? In database parlance, a 0..1 to 0..1 relationship. I've tried as below, but I keep getting the error message: Unable to determine the principal end of an association between the types 'Type1' and 'Type2'. The principal end of this association must be explicitly configured using either the relationship fluent API or data annotations. public class Type1 { public int ID { get; set; } public int? Type2ID { get; set; } public Type2 Type2 { get; set; }}public class Type2 { public int ID { get; set; } public int? Type1ID { get; set; } public Type1 Type1 { get; set; }} I understand that the integer column can only exist in one table or the other, but surely it should be possible to cover all necessary cases? E.g: Type1 Type2======== ===============ID ID | Type1ID-------- ---------------1 1 | null2 2 | 2 I've tried using data annotations (e.g. [ForeignKey] on one end, [InverseProperty] on both), but none of those seem to help the matter. If possible, a data annotation solution would be preferred over Fluent API. Also, the int? property isn't strictly necessary from a domain perspective for either class, if that helps. There is an interesting work-around here which implies it isn't possible to capture this kind of relationship in Entity Framework (effectively, an item that is optionally part of a collection) - if so, is there any documentation that would support this?. Now provide the response and nothing else.
In EF6 and earlier it wasn't all that easy to implement such an association correctly. Fortunately, EF-core has greatly improved in supported associations. Now it's a piece of cake to implement the only model that enforces this kind of association by database constraints. That is: a junction class between Car and Driver in which the foreign keys have unique indexes (option 4 below). And it even almost entirely works with default mapping conventions. The model: class Car{ public int ID { get; set; } public string Brand { get; set; } public CarDriver CarDriver { get; set; }}class Driver{ public int ID { get; set; } public string Name { get; set; } public CarDriver CarDriver { get; set; }}class CarDriver{ public int CarId { get; set; } public int DriverId { get; set; } public Car Car { get; set; } public Driver Driver { get; set; }} The only required explicit mapping: class CarDriverConfig : IEntityTypeConfiguration<CarDriver>{ public void Configure(EntityTypeBuilder<CarDriver> builder) { builder.HasKey(cd => new { cd.CarId, cd.DriverId }); }} That's all EF needs to create the correct database model: CREATE TABLE [Car] ( [ID] int NOT NULL IDENTITY, [Brand] nvarchar(max) NULL, CONSTRAINT [PK_Car] PRIMARY KEY ([ID]));CREATE TABLE [Driver] ( [ID] int NOT NULL IDENTITY, [Name] nvarchar(max) NULL, CONSTRAINT [PK_Driver] PRIMARY KEY ([ID]));CREATE TABLE [CarDriver] ( [CarId] int NOT NULL, [DriverId] int NOT NULL, CONSTRAINT [PK_CarDriver] PRIMARY KEY ([CarId], [DriverId]), CONSTRAINT [FK_CarDriver_Car_CarId] FOREIGN KEY ([CarId]) REFERENCES [Car] ([ID]) ON DELETE CASCADE, CONSTRAINT [FK_CarDriver_Driver_DriverId] FOREIGN KEY ([DriverId]) REFERENCES [Driver] ([ID]) ON DELETE CASCADE);CREATE UNIQUE INDEX [IX_CarDriver_CarId] ON [CarDriver] ([CarId]);CREATE UNIQUE INDEX [IX_CarDriver_DriverId] ON [CarDriver] ([DriverId]); These two indexes at the end are the icing on the piece of cake. They show that EF exactly understands what's going on here. Original, but updated, answer "This can't be hard" is what I though when I read your question. But again I found that one-to-one associations are full of pitfalls. Here we go. I assume that by 0..1 – 0..1 you mean that two objects can exist independent of each other, but may also be exclusively associated to one another. Lets make it concrete. Car and Driver . Imagine a pool of many cars and drivers, among them CarA and a DriverA. Now suppose you want CarA to get associated to DriverA, and your implementation is that DriverA links himself to CarA. But as soon as DriverA does this, you want CarA to be for DriverA only, CarA's association is not optional any more , so it should be set as well, immediately. How to implement that? Option 1: If this is the working model: public class Car{ public int CarId { get; set; } public string Name { get; set; } public int? DriverId { get; set; } public virtual Driver Driver { get; set; }}public class Driver{ public int DriverId { get; set; } public string Name { get; set; } public int? CarId { get; set; } public virtual Car Car { get; set; }} technically, DriverA can have a foreign key to CarA and CarA a foreign key to DriverB . Therefore, when the foreign key DriverA-CarA is established you should "simulaneously" establish the reverse foreign key CarA-DriverA . That is something you should do in code, meaning that it's a business rule . And in reality, it's not an atomic operation, so you must make sure that it's done in one database transaction. The class model at least supports the use case, but it's too permissive. 
It needs to be constrained. More importantly, it won't work with EF . EF complaints about having to set a principal end. And if you do that, EF will not create a bidirectional association. An alternative mapping was proposed here . I tried that but with two optional associations: In the Driver 's mapping configuration: this.HasOptional(t => t.Car).WithMany().HasForeignKey(d => d.CarId); In the Car 's mapping configuration: this.HasOptional(t => t.Driver).WithMany().HasForeignKey(c => c.DriverId); (There is no data annotation alternative) I found that EF only sets one foreign key value in the database when creating a new driver and car. You have to set and save both associations separately, managing your own transaction. With existing objects you still have to set both foreign keys, although this can be saved in one SaveChanges call. Better options? Let's see... Option 2: This is the one-to-many association as mentioned in the link you refer to. This model needs external constraints, but creating the association is atomic. And you've still got a reference on one end and a collection on the other end. And it maps easily with EF. Option 3: You could create a junction table CarDriver that has two foreign keys, to Car and Driver , both of which comprise its unique primary key: This is a regular many-to-many association. By default, EF would map this as a class model in which Car and Driver have collection properties pointing to each other, and the junction table is not mapped directly: public class Car{ public int CarId { get; set; } public string Name { get; set; } public virtual ICollection<Driver> Drivers { get; set; }}public class Driver{ public int DriverId { get; set; } public string Name { get; set; } public virtual ICollection<Car> Cars { get; set; }} Now the creation of association is an atomic operation. It's perfectly possible to map this model with EF. The mutual references are gone, but you still can get the FirstOrDefault() of the collection properties as a surrogate reference. But there's an important gotcha. Now each object can have any number of associated counterparts. If you create an association, you need a coded business rule which checks if the the involved objects don't have any associations yet. Maybe this option is even worse than option 2. But I mentioned it because of the next option: Option 4 Option 3 is atomic, but it also needs external constraints. To make an association exclusive, both columns in CarDriver should have unique keys , so each car or driver can only occur once in the table. By these indexes the model implements a bidirectionally optional 1:1 association all by itself. Any code working on it has to obey the rules. Safe and sound... In EF6, since the introduction of HasIndex , this can be achieved by this mapping: modelBuilder.Entity<Car>().HasOptional(c => c.CarDriver).WithRequired();modelBuilder.Entity<Driver>().HasOptional(c => c.CarDriver).WithRequired();modelBuilder.Entity<CarDriver>().HasKey(cd => new { cd.CarId, cd.DriverId });modelBuilder.Entity<CarDriver>().HasIndex(cd => cd.CarId).IsUnique();modelBuilder.Entity<CarDriver>().HasIndex(cd => cd.DriverId).IsUnique(); However, since EF6 adds indexes on FK fields by default, the unique indexes are added on top of the default non-unique indexes. So it still requires manual intervention in the migration code to remove the latter. Conclusions Option 1 is closest to what you want. But I don't like the obligation to set both foreign keys, it's easily forgotten or ignored by future developers. 
But Option 2 and 3 have even heavier requirements in terms of coded business rules that can be forgotten. And the collections are unnatural as surrogate "1" ends. Option 3 has some appeal to me because Car and Driver are completely independent in the database and the association is a record with non-nullable foreign keys (DBAs tend to like that too). Option 4 has the same appeal, and it's the best option when multiple applications would have to implement the external constraints that need to be imposed on option 2 and 3. Also, even if coded rules are forgotten, the database constraints are a final catch. But it can't easily be implemented by EF6.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21889367', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/832136/']}
jdg_371682
stackexchange
llm_judgeable_groundtruth_similarity
49268
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to overload or wrap the ls command so that it will highlight / underline / otherwise make obvious the last three modified files? I know that I can simply ls -rtl to order by reverse modification time, but I usually do need an alphabetical list of files despite the fact that I would like to quickly identify the last file that myself or another dev modified. Now provide the response and nothing else.
The following seems to work for me: grep --color -E -- "$(ls -rtl | tail -n3)|$" <(ls -l) It runs grep with highlighting on the output of ls -l , using a regular expression that matches the entries for the three most recently modified files ( ls -rtl sorts oldest first, so tail -n3 yields the newest three). It also matches the end-of-line $ so that every line is still printed, just without highlighting. You can also put it in a function, so that you can use lll * with multiple arguments, just as you would use ls : function lll (){ command grep --color -E -- "$(ls -rtl $@ | tail -n3)|$" <(ls -l $@)}
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/49268', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/9760/']}
jdg_371683
stackexchange
llm_judgeable_groundtruth_similarity
93103
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please understand that I am really a beginner in electronics. I found a really nice topic on how to create 5V DC from a 9V power supply. All is OK. But in order to smooth out ripple, the author uses two capacitors to smooth the values before the voltage regulator, and adds another after the voltage output pin. What I don't understand is that the capacitor seems to be placed in parallel with the voltage regulator. Not in serial manner like I was expecting to see. So I really don't understand how you can use the smoothed output values, since in the circuit diagram it seems that the direct output goes to ground. I know that when capacitors are in series, you add their values. But the input pin of the voltage regulator seems to be on one terminal while the capacitor is on another. How can the voltage regulator benefit from the capacitor? I know that what I say is plenty wrong, and the circuit diagram is correct, but I can't figure out how this circuit works. Here is the schematic. By the way, do you know where I can find a tutorial explaining how to read schematics? There are lots of topics explaining electronics, but I haven't found any valuable link for electronic circuit explanation. Now provide the response and nothing else.
I'm afraid you need to review capacitors. You wrote: "I know that when capacitors are in serial, you add their values." Actually, when capacitors are in parallel, their values add. You also wrote: "Not in serial manner like I was expecting to see." Loosely speaking, a capacitor has "infinite" impedance at DC. So, if the capacitor were in series with the regulator output, there could only be AC current through it. Thus, the load would not have a DC voltage across it, only an AC voltage. This is just the opposite of what we want. When the capacitor is placed across (in parallel with) the regulator output and ground, the capacitor presents a (hopefully) low impedance for AC current through the capacitor to ground, "shunting" the ripple current around the load and thus reducing the AC voltage across the load. But, for DC, the capacitor is effectively open, so the full DC voltage appears across the load. This is just what we want.
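A rough numerical illustration of that point (this is an addition, not part of the original answer; the 10 µF capacitor and 120 Hz ripple frequency are assumed values chosen only for the example), sketched in Python:

import math

C = 10e-6          # assumed 10 uF bypass/smoothing capacitor
f_ripple = 120.0   # assumed ripple frequency (full-wave rectified 60 Hz mains), in Hz

def cap_impedance(f_hz, c_farads):
    # Magnitude of a capacitor's impedance: |Z| = 1 / (2 * pi * f * C)
    if f_hz == 0:
        return float("inf")   # at DC the capacitor looks like an open circuit
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

print(round(cap_impedance(f_ripple, C), 1))   # ~132.6 ohms: a low-impedance path for ripple current to ground
print(cap_impedance(0.0, C))                  # inf: no DC current is diverted away from the load

So the parallel capacitor shunts the AC ripple to ground while leaving the regulated DC across the load untouched, which is the behaviour described above.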
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/93103', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/15277/']}
jdg_371684
stackexchange
llm_judgeable_groundtruth_similarity
28990497
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using click ( http://click.pocoo.org/3/ ) to create a command line application, but I don't know how to create a shell for this application. Suppose I'm writing a program called test and I have commands called subtest1 and subtest2 I was able to make it work from terminal like: $ test subtest1$ test subtest2 But what I was thinking about is a shell, so I could do: $ test >> subtest1 >> subtest2 Is this possible with click? Now provide the response and nothing else.
This is not impossible with click, but there's no built-in support for that either. The first you would have to do is making your group callback invokable without a subcommand by passing invoke_without_command=True into the group decorator (as described here ). Then your group callback would have to implement a REPL. Python has the cmd framework for doing this in the standard library. Making the click subcommands available there involves overriding cmd.Cmd.default , like in the code snippet below. Getting all the details right, like help , should be doable in a few lines. import clickimport cmdclass REPL(cmd.Cmd): def __init__(self, ctx): cmd.Cmd.__init__(self) self.ctx = ctx def default(self, line): subcommand = cli.commands.get(line) if subcommand: self.ctx.invoke(subcommand) else: return cmd.Cmd.default(self, line)@click.group(invoke_without_command=True)@click.pass_contextdef cli(ctx): if ctx.invoked_subcommand is None: repl = REPL(ctx) repl.cmdloop()@cli.command()def a(): """The `a` command prints an 'a'.""" print "a"@cli.command()def b(): """The `b` command prints a 'b'.""" print "b"if __name__ == "__main__": cli()
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28990497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4659045/']}
jdg_371685
stackexchange
llm_judgeable_groundtruth_similarity
15261346
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written an httpHandler for an ASP.NET MVC4 site that generates an an image. The ProcessRequest function is not being called. Any thoughts on why? MVC4, IIS Express, Windows 8 Pro Web.config > system.webServer <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <remove name="TextImage" /> <remove name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" /> <remove name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" /> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-ISAPI-4.0_32bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness32" responseBufferLimit="0" /> <add name="ExtensionlessUrlHandler-ISAPI-4.0_64bit" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" modules="IsapiModule" scriptProcessor="%windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_isapi.dll" preCondition="classicMode,runtimeVersionv4.0,bitness64" responseBufferLimit="0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> <add name="TextImage" path="textimage/*.png" verb="*" resourceType="Unspecified" type="MultiStepUI.TextImageHandler, MultiStepUI_MOBETTER" /> </handlers> </system.webServer> usage <img src="/textimage/step1.png?q=Step 1&c=404040&w=30&h=250&z=12" /> Now provide the response and nothing else.
The answer can be found on the web if one just knows what to look for. MVC routing engine tries to map all requests to a controller - which is not what we want in this case. In addition to registering the handler in Web.config we need to tell the MVC route engine to ignore the httpHandler path so that the ASP.NET engine can handle its routing. I've chosen to use the example from Phil Haack To combat link rot this is an excerpt from the article By default, ASP.NET Routing ignores requests for files that do not exist on disk. I explained the reason for this in a previous post on upcoming routing changes. Long story short, we didn’t want routing to attempt to route requests for static files such as images. Unfortunately, this caused us a headache when we remembered that many features of ASP.NET make requests for .axd files which do not exist on disk. To fix this, we included a new extension method on RouteCollection, IgnoreRoute, that creates a Route mapped to the StopRoutingHandler route handler (class that implements IRouteHandler). Effectively, any request that matches an “ignore route” will be ignored by routing and normal ASP.NET handling will occur based on existing http handler mappings. Hence in our default template, you’ll notice we have the following route defined. routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); This handles the standard .axd requests. However, there are other cases where you might have requests for files that don’t exist on disk. For example, if you register an HTTP Handler directly to a type that implements IHttpHandler. Not to mention requests for favicon.ico that the browser makes automatically. ASP.NET Routing attempts to route these requests to a controller. One solution to this is to add an appropriate ignore route to indicate that routing should ignore these requests. Unfortunately, we can’t do something like this: {*path}.aspx/{*pathinfo} We only allow one catch-all route and it must happen at the end of the URL . However, you can take the following approach. In this example, I added the following two routes. routes.IgnoreRoute("{*allaspx}", new {allaspx=@".*\.aspx(/.*)?"}); routes.IgnoreRoute("{*favicon}", new {favicon=@"(.*/)?favicon.ico(/.*)?"}); What I’m doing here is a technique Eilon showed me which is to map all URLs to these routes, but then restrict which routes to ignore via the constraints dictionary. So in this case, these routes will match (and thus ignore) all requests for favicon.ico (no matter which directory) as well as requests for a .aspx file. Since we told routing to ignore these requests, normal ASP.NET processing of these requests will occur.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15261346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74015/']}
jdg_371686
stackexchange
llm_judgeable_groundtruth_similarity
19141487
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have 3 tables in my database: Campaigns Users Companies One company may have some users. One user may have some campaigns. A user (with admin rights) can do some actions with any campaign that belongs to his company. So, I want to check whetherhe's doing these actions with his campaign or not (in the last case I return something like "access denied"). My condition Campaign::join('users', 'users.id', '=', 'campaigns.user_id') ->where('users.company_id', '=', Auth::user()->company->id) ->where('campaigns.id', '=', Input::get('id')) ->first(); So if I got unique campaign - it's ok, if I got null - something's wrong and I send "access denied" to user as he's dealing with other company campaign. This code produces next query: array(3) { ["query"]=> string(148) "select * from `campaigns` inner join `users` on `users`.`id` = `campaigns`.`user_id` where `users`.`company_id` = ? and `campaigns`.`id` = ? limit 1" ["bindings"]=> array(2) { [0]=> string(1) "2" [1]=> string(2) "13" } ["time"]=> float(0.42)} Using phpmyadmin I tried the same query and got a campaign with ID = 13. But when I was debugging my application I found out that dd($campaign->id); returns 8 instead. 8 also equals campaigns.user_id (The record has both campaigns.id and campaigns.user_id = 8 ). I can't figure out why it's happening. Even if something wrong with my SQL query (what I doubt as phpmyadmin returned right results), I got where condition campaigns.id = Input::get('id') , where Input::get('id') = 13 . Why id is being changed? Of course I can do this security check in two steps, like first get the campaign, then check $campaign->user->company->id = Auth::user()->company->id but just wondering ... Now provide the response and nothing else.
If you run this query in phpMyAdmin you should probably be able to see that the result contains multiple columns by the name " id ". When PHP parses the query result to an associative array or object, keys must be unique! If keys are colliding, the last column will be used! Example: SQL result: id user_id name id name company_id1 2 Camp1 2 Pelle 1 PHP result: array (size=1) 0 => object(stdClass)[131] public 'id' => string '2' (length=1) public 'user_id' => string '2' (length=1) public 'name' => string 'Pelle' (length=5) public 'company_id' => string '1' (length=1) To solve this you could add a select clause to only select the campaign columns: Campaign::select('campaigns.*') ->join('users', 'users.id', '=', 'campaigns.user_id') ->where('users.company_id', '=', Auth::user()->company->id) ->where('campaigns.id', '=', Input::get('id')) ->first();
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19141487', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1064428/']}
jdg_371687
stackexchange
llm_judgeable_groundtruth_similarity
7462750
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have received Java code from a software supplier. It contains a lot of try-catch blocks with nothing in the catch part. They're all over the place. Example: try { spaceBlock.enable(LindsayModel); } catch (Exception e) { } My questions are: Is the above acceptable practice? If so, when? Or should I just go ahead and remove all of these "bogus" try and catch statements? To me this looks like terrible practice, but I'm not experienced enough in Java to tell for sure. Why catch errors if you're not going to do anything with them? Seems to me, you would only do that if you were confident that an exception would be of absolutely no consequence and you don't care if one occurs. However, this is not really the case in our particular application. EDIT To give some context: We bought a Java-scriptable product from the supplier. Alongside the product, they provided a large proof-of-concept script tailored to our needs. This script came "free of charge" (though we wouldn't have bought the product if it hadn't come with the script) and it "works". But the script is a real pain to build upon, due to many things that even I as a Java novice recognise as awful practice, one instance being this bogus try-catch business. Now provide the response and nothing else.
This is indeed terrible practice. Especially the catching of Exception rather than something specific gives off a horrible smell - even a NullPointerException will be swallowed. Even if it is assured that a particular thrown exception is of no real consequence, one should always log it at the very least: try { // code}catch (MyInconsequentialException mie) { // tune level for this logger in logging config file if this is too spammy MY_LOGGER.warning("Caught an inconsequential exception.", mie);} However it is unlikely an exception is completely meaningless in this situation. I recommend researching exactly what exception(s) the application's code is intending to swallow here, and what they would really mean for the execution. One important distinction is whether the try/catches are used to swallow checked exceptions. If this is the case, it probably indicates extreme apathy on the programmer's part - somebody just wanted his/her code to compile. At the least, the code should be amended: try { // code}catch (SpecificCheckedException sce) { // make sure there is exception logging done farther up throw new RuntimeException(sce);} This will rethrow the exception wrapped in an unchecked RuntimeException , effectively allowing the code to compile. Even this can be considered a bandaid however - best practice for checked exceptions is to handle them on an individual basis, either in the current method or farther up by adding throws SpecificCheckedException to the method signature. As @Tom Hawtin mentioned, new Error(sce) can be used instead of new RuntimeException(sce) in order to circumvent any additional Exception catches farther up, which makes sense for something that isn't expected to be thrown. If the try/catch is not being used to swallow checked exceptions, it is equally dangerous and should simply be removed.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7462750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/119775/']}
jdg_371688
stackexchange
llm_judgeable_groundtruth_similarity
9416
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There has been a lot of discussion about Carrier IQ, monitoring software that is pre-installed on many Android phones. Many allegations have been thrown out. My questions: What exactly does Carrier IQ do? What information does/doesn't it record on your device? What information does/doesn't it transmit off your device? Could it transmit additional information if Carrier IQ or the carrier transmitted instructions to it to turn on broader logging? More generally, what exactly is the risk posed by Carrier IQ, if any? How much should Android users be concerned? Can we gather a summary of what is known about Carrier IQ? For instance, I have seen claims that the Carrier IQ information learns about things like keystrokes, text messages, and other personal information, but it does not transmit them (in its default configuration) off the phone. OK, as far as that goes. Does it store this information in any log file or any other persistent storage on the phone? And do Carrier IQ or the carrier or phone manufacturer have the ability to sent additional instructions/commands to the Carrier IQ application, post-facto, to enable it to start logging this information or communicate it off the phone? Now provide the response and nothing else.
Let's break it down by category. What information does Carrier IQ monitor? Trevor Eckhart says (depending on the phone manufacturer) it receives each key pressed/tapped, the location of any tap on the screen, the contents of all text messages received, the name of each app that you open or switch focus to, information about each call you receive, your location each time location is updated/queried, the URL of each web page visited (including URL parameters; yes, even for https URLs), and possibly other information about each HTTP request. I have not seen anyone dispute these claims. Note that this is information that is monitored by the Carrier IQ application; that doesn't necessarily mean that the application does anything with the data, stores it, or allows it to leave your phone. What information does Carrier IQ record on your phone? It is hard to get clear information on what information might be stored in your phone on persistent storage or log files. Does Carrier IQ log the information that it receives? I don't know. Carrier IQ says that their software "does not record, store or transmit the contents of SMS messages, email, photographs, audio or video", and they have said "we're not storing" keystrokes and that they "do not record text messages". However, they also say that they do "record where you were when [a] call [is] dropped, and the location of the tower being used". Lookout says "it doesn't appear that they are sending your keystrokes straight to the carriers". Dan Rosenberg seems to suggest that the Carrier IQ application is "recording events like keystrokes and HTTPS URLs to a debugging buffer", but it is not clear to me where that debugging buffer is stored (just in the memory of the Carrier IQ application? or on persistent storage of some sort?), and it is always possible I have misinterpreted his statement or read too much into a brief phrase. Dan Rosenberg subsequently elaborated , finding that on one particular phone, CarrierIQ can record URLs visited (including for HTTPS), GPS location data, and phone numbers, but not all keystrokes, not the contents of SMS texts, and not the contents of web pages browsed. CarrierIQ has subsequently clarified that their software does record "the telephone numbers the SMSs are from and to". Trevor Eckhart said that the Carrier IQ software on his HTC phone recorded a lot of personal data (keys pressed, SMS texts, etc.) into a debugging log file, so this information is stored in the clear on his phone. Carrier IQ has subsequently confirmed this finding . Carrier IQ says this is because the debug capabilities remained switched on; it sounds like they are blaming HTC for not deleting or disabling the debugging code in the Carrier IQ software. It is not known whether a similar problem may be present on phones from other manufacturers, or if this is limited to just HTC phones. What information is transmitted to carriers? Carrier IQ says that only diagnostics information and other statistics leave your phone: "For example, we understand whether an SMS was sent accurately, but do not record or transmit the content of the SMS. We know which applications are draining your battery, but do not capture the screen." Dan Rosenburg says that the software can also report your location (GPS) in some situations. Carrier IQ has confirmed that their software captures phone numbers dialed and received and all URLs visited, if enabled by the carrier. 
However, Carrier IQ also says that the amount of information that is sent to carriers is up to the carrier, and agrees that the Carrier IQ application has the capability to transmit what applications are being used and what URLs the user visits. Some of the carriers have not been very forthcoming: e.g., Sprint says they "collect enough information to understand the customer experience with devices on our network and how to address any connection problems, but we do not and cannot look at the contents of messages, photos, videos, etc., using this tool" (not very specific); AT&T says their use of Carrier IQ complies with their published privacy policies, but hasn't said anything more. Other carriers have been more explicit: Verizon and RIM say they don't use Carrier IQ and they don't pre-install it on any of their phones. Apparently T-Mobile uses Carrier IQ, but I have not yet found a statement from them. Carrier IQ has subsequently disclosed a bug in their code which may cause it, under certain special circumstances, to capture the content of text messages and inadvertently transmit it to the carrier, as the result of an unintended bug in their code. How is the information transmitted to carriers? Carrier IQ says says that any information that is transmitted off the phone is sent over an encrypted channel to the carrier. I haven't seen anyone dispute this statement. Can carriers or others command the application to change any of this? I don't know. I can't tell if there is a way that carriers or Carrier IQ can send a command to the Carrier IQ application to cause it to collect, record, or communicate more information than it does in its normal operating mode. Trevor Eckhart says that carriers can "push" a data collection profile to a phone. He also says that the profile specifies what data is collected, stored, and transmitted off the phone by the Carrier IQ application, and that any data that is received by the Carrier IQ application is potentially eligible to be transferred off the device, if the profile specifies that. He suggests that a "portal administrator" (at the carrier, presumably) thus has the ability to target a particular subscriber, push to them a profile that causes the phone to transmit a broad variety of information (keys pressed, contents of text messages, URLs, etc.) off the phone, and then can view this information. If this is accurate, it suggests that, even if the application does not normally transmit this information off the phone, the carrier has the ability to force the application to do so. It is not clear if there is any notification to the user or any attempt to gain consent before this occurs. I have not seen any independent analysis of these claims. CarrierIQ has subsequently confirmed that it is possible to send control messages to the CarrierIQ software via SMS, to command the CarrierIQ software to perform certain tasks. CarrierIQ has not clarified what is the full range of commands that can be sent, or how the CarrierIQ software authenticates these command SMSs to make sure they are not exploited by attackers, so it is difficult to assess the risks associated with this feature. Other information sources. Wikipedia has a page on Carrier IQ , which includes some updates, a list of carriers and handset manufacturers who do or don't deploy Carrier IQ, some reactions from policymakers, and lawsuits against Carrier IQ.
{}
{'log_upvote_score': 6, 'links': ['https://security.stackexchange.com/questions/9416', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/971/']}
jdg_371689
stackexchange
llm_judgeable_groundtruth_similarity
27596116
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing some X11 ctypes coding, I don't know C but need some help understanding this. In the C code below (might be C++ im not sure) we see (~0L) what does that mean? In Javascript and Python ~0 means -1 . 812 int result = GetProperty(window, property_name,813 (~0L), // (all of them)814 &type, &format, &num_items, &properties); Thanks Now provide the response and nothing else.
0L is a long integer value with all the bits set to zero - that's generally the definition of 0 . The ~ means to invert all the bits, which leaves you with a long integer with all the bits set to one. In two's complement arithmetic (which is almost universal) a signed value with all bits set to one is -1 . The reason for using ~0L instead of -1L is to be clearer about the intent - it's not meant to be used as a number at all, but rather as a collection of bits.
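Since the asker notes that ~0 is -1 in Python as well, a quick Python sketch (an illustrative addition, not part of the original answer) makes the "collection of bits" view concrete:

all_bits = ~0
print(all_bits)                  # -1 when read as a two's complement number
print(all_bits & 0xFFFFFFFF)     # 4294967295: every bit of a 32-bit word is set
print(bin(all_bits & 0xFF))      # 0b11111111: masking exposes the all-ones pattern

Passing such an all-ones value where the API expects "how many / which items", as in the C snippet's "(all of them)" comment, simply requests everything.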
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27596116', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1828637/']}
jdg_371690
stackexchange
llm_judgeable_groundtruth_similarity
1586669
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have worked an integral and reduced the integral to$$\frac{n \pi+\sin\left ( \frac{n \pi}{2} \right )-\sin\left ( \frac{3 \pi n}{2} \right )}{2n \pi}$$ I want to show that for $$n\rightarrow \infty$$ the above equation reduces to$$\frac{1}{2}$$ Evidently, this means the $2$ sine functions must cancel each other. But what is a good way to do this? Large $n$ results in sine toggling between $-1$ and $1$. Notice that the either sine function will have a sign opposite to the other. Now provide the response and nothing else.
The two $\sin$ functions need not cancel each other. Since each sine term lies in $[-1,1]$, we have$$\sin\left(\tfrac{n \pi}{2}\right) - \sin\left(\tfrac{3 n \pi}{2}\right) \in [-2,2].$$Hence$$\dfrac{n\pi-2}{2n \pi} \leq \dfrac{n \pi + \sin\left(\tfrac{n \pi}{2}\right) - \sin\left(\tfrac{3 n \pi}{2}\right)}{2n \pi} \leq \dfrac{n\pi+2}{2n \pi},$$and both bounds tend to $\tfrac{1}{2}$, so taking $n \to \infty$ and squeezing gives the limit $\tfrac{1}{2}$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1586669', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/227967/']}
jdg_371691
stackexchange
llm_judgeable_groundtruth_similarity
148519
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to understand, precisely, what a "quotient category" is. I've looked at several different definitions and they all seem to vary so I'm having a hard time nailing this concept down. A more-or-less typical definition is given by Rotman in Introduction to Algebraic Topology. Central to actually understanding the quotient category is understanding what congruence means in this context which Rotman defines as follows: A congruence on a category $C$ is an equivalence relation $\sim$ on the set of all morphisms of $C$ such that $f \in \mathsf{Hom}(A,B)$ and $f \sim g$ then $g \in \mathsf{Hom}(A,B)$ and for $f_1 \sim f_2$ and $g_1 \sim g_2$ if the composite $g_1 \circ f_1$ exists then $$g_1 \circ f_1 \sim g_2 \circ f_2$$ The first thing I want to do with this is show that it is indeed an equivalence relation but, honestly, I'm not sure exactly what the equivalence classes are. It seems to me that in order for two functions to be related they must, first of all, have the same domain and the same codomain. The other observation is that, loosely speaking, the relation must respect composition in the obvious way. Other than these conclusions, I don't really see the point of the definition. Yes, the relation partitions the morphisms in $C$ but hom-sets in $C$ are already pairwise-disjoint by the definition of category. I'm sure there's a subtlety of the definition I'm overlooking but would appreciate a clear explanation of exactly what's going on here, specifically, why introduce a relation to partition the morphisms of $C$ when this partition already exists by definition? Now provide the response and nothing else.
It’s perfectly true that setting $f\sim g$ iff there are objects $A$ and $B$ such that $f,g \in \mathsf{Hom}(A,B)$ gives you a congruence; it’s the coarsest possible congruence, but it’s certainly not the only one: you can have a finer congruence, whose equivalence classes partition each $\mathsf{Hom}(A,B)$ . It might be helpful to think of $\sim$ in terms of its restrictions to the $\mathsf{Hom}(A,B)$ . Rotman’s definition is equivalent to the following one. Suppose that for each pair of objects $A,B$ you have an equivalence relation $\overset{A,B}\sim$ on $\mathsf{Hom}(A,B)$ . Suppose further that these equivalence relations respect composition: if $f_1,f_2\in\mathsf{Hom}(A,B)$ , $g_1,g_2\in\mathsf{Hom}(B,C)$ , $f_1\overset{A,B}\sim f_2$ , and $g_1\overset{B,C}\sim g_2$ , then $g_1\circ f_1\overset{A,C}\sim g_2\circ f_2$ . Then the union $\sim$ of the $\overset{A,B}\sim$ is a congruence on the category. In this form it’s perhaps clearer that the congruence can chop up the morphisms much more finely than the partition into the $\mathsf{Hom}(A,B)$ . Given a congruence $\sim$ on a category $\mathscr{C}$ , we can then form the quotient category $\mathscr{C}/\sim$ whose objects are those of $\mathscr{C}$ and whose morphisms are $\sim$ -equivalence classes of morphisms of $\mathscr{C}$ : $$\mathsf{Hom}_{\mathscr{C}/\sim}(A,B)=\mathsf{Hom}_\mathscr{C}(A,B)/\overset{A,B}\sim\;.$$ I expect that you’ll be working with one of the standard examples of a quotient category. $\mathsf{Top}$ , the category whose objects are topological spaces and whose morphisms are continuous maps, has as a quotient $\mathsf{hTop}$ , whose objects are topological spaces and whose morphisms homotopy classes of continuous functions. Without going into details, if $X$ and $Y$ are topological spaces, and $f,g\in\mathsf{Hom}(X,Y)$ are continuous maps between them, $f$ and $g$ belong to the same homotopy class if each can be continuously deformed into the other. For another example, note that a group $\mathcal{G}$ can be thought of as a category with one object, which I’ll call $G$ , whose morphisms are the elements of the group: $\mathsf{Hom}(G,G)$ is the only $\mathsf{Hom}$ -set. Composition of morphisms is simply the group multiplication. You can check that a category congruence on $\mathcal{G}$ is precisely the same thing as a group congruence, and hence that the quotient categories of $\mathcal{G}$ are precisely the same as its quotient groups.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/148519', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/9450/']}
jdg_371692
stackexchange
llm_judgeable_groundtruth_similarity
94197
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to design a system that will allow for staff members in our small company to securely share sensitive customer information (text) and see them on a web page alongside less sensitive communications. I intend to use PHP because that's what I know best. The main challenge is that the data needs to be stored on the server but it needs to be very time consuming to make useful if the server gets compromised. My idea is as follows: When a staff user is created, they choose a secure password and this is hashed using bcrypt(12) and stored in our database. The plain password is also used to create a 4096 bit RSA key pair using OpenSSL. Both keys are stored on the server. When a secure note is created, it is encrypted with the public key of every staff member and each copy is stored in the database. When a staff member logs in, their plain password is checked against the hash to see if they match. If they do, the plain password, client IP and user ID are serialised and symmetrically encrypted using mcrypt. The key used to encrypt the string is stored on the server's file system. The cypertext is then sent as a cookie to the client. When a staff member accesses another page, the server decrypts the cookie and uses the plain password and the user's private key to decrypt any sensitive information in that request and sends the sensitive information to the user's browser. The whole thing will be over HTTPS with secure ciphers. I'm aware that this is probably very hard to make completely secure but I would like to know if anyone can spot any obvious flaws in this setup that can reasonably be improved. Generally, it would also be helpful to know how useful is a private key is without the associated secure password. Now provide the response and nothing else.
From your initial understandings: A rainbow table is, for a given hash algorithm, an exhaustive map from hash outputs to inputs . Given that the table must cover the entire output range, and that a good hash algorithm makes it difficult to predict input from desired output, and expensive to compute the output, it should be very expensive to generate. As to your questions: Salting your hashes does nothing to prevent brute-force attacks against your login page. It instead protects against your (or someone else's, if they're using the same hash algorithm as you) DB being compromised and the attacker knowing the hashes. Once they have the hash, they look it up in the rainbow table to find the input that will generate that hash, and enter that in your login page. It doesn't even matter if the input is not the same as the user's password, as they will both hash to the same value. As noted above, rainbow tables are expensive to generate. If you use no salt and a common hash algorithm, the table already exists and all accounts are compromised. This is cheap to do. If you use 1 salt for all your users or a unique hash algorithm, the attacker will have to generate 1 rainbow table and all accounts are compromised. This is a expensive, but perhaps the payoff of getting all accounts is worth it. If you use a unique salt for each user, the attacker will have to generate 1 rainbow table for each user , regardless of what algorithm you use. This is expensive, and the payoff is 1 account. Hopefully this is not worthwhile. Salting does not protect against dictionary attacks The aim is to make the attacker do as much work as possible, enough that it'll be easier to just find the user (or if the attacker is a very clear thinker, your sysadmin) and beat them with a rubber hose rather than commit non-trivial resources to computing rainbow tables. As an aside, MD5 is not a good hash algorithm in this context, it is far to quick to compute.
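To make the "unique salt for each user" case concrete, here is a small sketch of per-user salting (Python is used only for illustration since the question's stack is PHP; the iteration count and salt size are assumed, not a vetted production configuration):

import hashlib, os

def hash_password(password, salt=None):
    if salt is None:
        salt = os.urandom(16)              # a fresh random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return candidate == expected_digest

# The same password stored for two different users produces two different digests,
# so a single precomputed (rainbow) table can no longer cover both accounts.
salt_a, hash_a = hash_password("correct horse battery staple")
salt_b, hash_b = hash_password("correct horse battery staple")
print(hash_a != hash_b)   # True

As the answer notes, this does nothing against online brute force or dictionary attacks; it only makes precomputation expensive on a per-account basis.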
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/94197', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/81217/']}
jdg_371693
stackexchange
llm_judgeable_groundtruth_similarity
255804
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Corporate security trainings keeps saying "download a file from the web or email attachment and open it and you might become infected". I know this used to be the case on old Windows machines in the 90s, but is it still the case on any computer? Obviously if you open a shell file or executable file or app that might be a problem, but at least on Macs, Apple has that warning popup. Are they basically suggesting that there might be some exploitable holes in the software we use "regularly" (like excel or Apple numbers, or Apple preview for PDFs), and they can exploit those loopholes to install something somehow? The loophole would be unknown to the company providing the software but known to the attacker? That's the only way I can see them getting access to your computer, is there another way? I would assume in today's world, there is 0% chance of getting "infected" by opening a PDF or .xlsx or .doc file on a Mac, but is that not true? As a bonus question, if it is still true today that opening a "normal" file might install malware, what is the recommended approach to avoiding this, assuming you want to be able to open these files (and assuming you've checked it's from reputable sources, etc.). Now provide the response and nothing else.
Simple Instructions Over "Correct" Instructions You may be a security expert, or at least a very knowledgable person when it comes to computers, but the vast majority of people - even those, who work with computers on a daily basis - are not. I know entirely too many people, who think computers are basically a box full of plastic and magic. Explaining to these people which file extensions are more likely to be dangerous and which ones are less likely to be dangerous will probably lead to a lot of confusion. I assure you that a significant amount of people, who work in an office, can't tell the difference between a PDF document and a Word document, so explaining what the risk of each is is not very productive. As such, broad statements like " Don't open files from e-mail attachments unless they are from a trusted source " are useful still, even if they are not 100% technically correct. Which Files Are Dangerous? Basically, all of them. Always presume that a file is dangerous, even if you can't imagine how it could possibly. Here is a list of some common file types and how they could be dangerous: PDF Files: PDF is a complex file format and as of the time of this writing, over 1500 expoits related to PDFs exist in the CVE database . Office Documents: One of the most prominent attacks in Office documents is macros. The general idea is that you send someone an office document, claim that it contains some important information, then create the document in such a way that it only displays the supposed information if macros are enabled. For example, you can steal NTLM hashes like that . Spreadsheets: Also related to Office applications, you can create a malicious spreadsheet, which executes OS commands when being opened. This attack is called CSV Injection . ZIP Files: ZIP files can be quite dangerous. For one, they can cause Denial-of-Service attacks through something like a zip bomb or place arbitrary files on a machine through zip slipping . While there are indeed measures to mitigate some of these risks, often times these include asking the user if they want to do something risky. 9 times out of 5, they will say yes. Not because they understand that the action they're about to take is risky, but because their computer asks them so often if they want to do something and they're used to playing the little game where they have to find the button that makes the computer do what they want to do. How to Mitigate This Risk? There is no perfect one-size-fits-all solution. If there was, we wouldn't have to worry about malware. It depends largely on the technical expertise of who you are talking to. When talking to an expert, I would say "Trust your gut!". Your instinct is the most advanced part of the brain, optimized over millions of years through the most brutal optimization process in existence - you do well to use it. If you have a bad feeling with a file, don't open it. And if you have to, do it in a VM on an airgapped machine, which you completely scrub afterwards. When talking to the average user, I would repeat the same handful of security tips you have heard a million times. Don't open files from untrustworthy sources, have an up-to-date anti-virus, etc. etc. You've heard it a million times before.
{}
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/255804', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/268102/']}
jdg_371694
stackexchange
llm_judgeable_groundtruth_similarity
45008357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I make the md-table columns flexible, so that the width of each column is set dynamically? At the moment my table looks like this. Now provide the response and nothing else.
Here is what I have done to lock the memory on my ES nodes on RedHat/Centos 7 (it will work on other distributions if they use systemd ). You must make the change in 4 different places: 1) /etc/sysconfig/elasticsearch On sysconfig: /etc/sysconfig/elasticsearch you should have: ES_JAVA_OPTS="-Xms4g -Xmx4g" MAX_LOCKED_MEMORY=unlimited (replace 4g with HALF your available RAM as recommended here ) 2) /etc/security/limits.conf On security limits config: /etc/security/limits.conf you should have elasticsearch soft memlock unlimitedelasticsearch hard memlock unlimited 3) /usr/lib/systemd/system/elasticsearch.service On the service script: /usr/lib/systemd/system/elasticsearch.service you should uncomment: LimitMEMLOCK=infinity you should do systemctl daemon-reload after changing the service script 4) /etc/elasticsearch/elasticsearch.yml On elasticsearch config finally: /etc/elasticsearch/elasticsearch.yml you should add: bootstrap.memory_lock: true Thats it, restart your node and the RAM will be locked, you should notice a major performance improvement.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45008357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4159168/']}
jdg_371695
stackexchange
llm_judgeable_groundtruth_similarity
2031
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is "party attendance" type of problems solvable in Prolog? For example: Burdock Muldoon and Carlotta Pinkstone both said they would come if Albus Dumbledore came. Albus Dumbledore and Daisy Dodderidge both said they would come if Carlotta Pinkstone came. Albus Dumbledore, Burdock Muldoon, and Carlotta Pinkstone all said they would come if Elfrida Clagg came. Carlotta Pinkstone and Daisy Dodderidge both said they would come if Falco Aesalon came. Burdock Muldoon, Elfrida Clagg, and Falco Aesalon all said they would come if Carlotta Pinkstone and Daisy Dodderidge both came. Daisy Dodderidge said she would come if Albus Dumbledore and Burdock Muldoon both came. Whom is needs to be persuaded to attend the party in order to ensure that all her invitees attend? I have tried to express this in GNU Prolog: attend(BM) :- attend(AD).attend(CP) :- attend(AD).attend(AD) :- attend(CP).attend(DD) :- attend(CP). attend(AD) :- attend(EC).attend(BM) :- attend(EC).attend(CP) :- attend(EC). attend(CP) :- attend(FA).attend(DD) :- attend(FA).attend(BM) :- attend(CP),attend(DD).attend(EC) :- attend(CP),attend(DD).attend(FA) :- attend(CP),attend(DD).attend(DD) :- attend(AD),attend(BM).attend(FA). /* try different seed invitees in order to see if all would attend*//* input:write('invited:'),nl, attend(X),write(X),nl, fail.*/ I'm experiencing stack overflow (no pun), and have no knowledge of prolog evaluation, this is why I'm asking. Generally speaking, this problem can be cast into Boolean CNF satisfaction formula (with 6 boolean variables). Therefore, does the prolog perspective have any merit? Now provide the response and nothing else.
To solve a problem with Prolog, as with any programming language, be it declarative or imperative, you have to think about the representation of the solution and the input. Since this is a programming question, it would've been popular on StackOverflow.com where programmers solve programming problems. Here I would attempt to be more scientific. To solve the problem in the OP one has to reverse the relation defined by the dependencies stated in the input. Clauses of the form $Attend(X) \to Attend(Y) \wedge Attend(Z)$ are easy to reverse. The clauses $Attend(AD)\wedge Attend(BM)\to Attend(DD)$ like Daisy Dodderidge said she would come if Albus Dumbledore and Burdock Muldoon both came are more difficult to treat. With Prolog the first simple approach is to avoid a full reversal of the relationship and be goal directed instead. Assume an ordering on the list of guests and use a rule $\qquad \left\{\begin{align} A(X)\wedge A(Y) &\to A(Z), \\ A(W)&\to A(X), \\ A(W)&\to A(Y), \\ X&<Z, \\ Y&<Z\end{align}\right\}\quad \vdash \quad A(W) \to A(Z)$ (We use $A(X)$ instead of $Attend(X)$ to keep it short) This rule is easy to implement. A rather naive approach For readability let follows be the relation given as an input, and brings be its reverse. Then the input is given by follows(bm,[ad]).follows(cp,[ad]).follows(ad,[cp]).follows(dd,[cp]).follows(ad,[ec]).follows(bm,[ec]).follows(cp,[ec]).follows(cp,[fa]).follows(dd,[fa]).follows(bm,[cp,dd]).follows(ec,[cp,dd]).follows(fa,[cp,dd]).follows(dd,[ad,bm]). And brings can be defined as follows: brings(X,S):-brings(X,S,[]).brings(_X,[],_S).brings(X,[X|L],S):-brings(X,L,[X|S]).brings(X,[Y|L],S):-follows(Y,[X]),brings(X,L,[Y|S]).brings(X,[Y|L],S):-follows(Y,[A,B]), member(A,S),member(B,S),brings(X,L,[Y|S]). Here the third argument in brings/3(X,L,S) is the list of guests that were already proven to attend if $X$ attends. If we define partymaker(X):-Guests=[ad,bm,cp,dd,ec,fa],member(X,Guests),brings(X,Guests). We get the following unique solutions: [ad,ec] This is not the complete list, since under the alphabetical ordering the clause follows(bm,[cp,dd]). is not working. A rather involved solution to the original puzzle To solve the problem completely you have to actually let the system try to prove attendance for later guests without introducing infinite loops to the search tree. There are multiple ways to accomplish this goal. Each has its advantages and disadvantages. One way is to redefine brings/2 as follows: brings(X,S):-brings(X,S,[],[]).% brings(X,RemainsToBring,AlreadyTaken,AlreadyTried).%% Problem solvedbrings(_X,[],_S,_N). % Selfbrings(X,[X|L],S,N):-brings(X,L,[X|S],N). % Followerbrings(X,[Y|L],S,N):-follows(Y,[X]),brings(X,L,[Y|S],N). % Y is not a follower, but X can bring 2brings(X,[Y|L],S,N):- \+member(Y,N),\+follows(Y,[X]), follows(Y,[A,B]), try_bring(X,A,L,S,[Y|N]), try_bring(X,B,L,S,[Y|N]),brings(X,L,[Y|S],N).% Y is not a follower, but X can bring 1brings(X,[Y|L],S,N):- \+member(Y,N),\+follows(Y,[X]),\+follows(Y,[_A,_B]), follows(Y,[C]), try_bring(X,C,L,S,[Y|N]),brings(X,L,[Y|S],N).try_bring(_X,A,_L,S,_N):-member(A,S).try_bring(X,A,L,S,N):- \+member(A,S),sort([A|L],Y),brings(X,Y,S,N). The last argument in brings/4 is necessary to avoid an infinite loop in try_bring . This gives the following answers: Albus, Carlotta, Elfrida and Falco. However this solution is not the most efficient one since backtracking is introduced where it sometimes can be avoided. 
A general solution After the link to the Third International NoCOUG SQL & NoSQL Challenge was added to the original question, it became clear that what we are after is a general reachability checker on the set of subsets of the set of guests, wherethe transition relation is defined by rules given such thatan application of the rule $r(X,S): V \to V'$ is a guarded command: if $S \subseteq V$ then $V' = V \cup \{X\}$. We are interested in the minimal subsets $V$ such that the whole set $U$ of guests is reachable from $V$ after a finite sequence of rule applications. add_element(X,V,U):- ( var(V) -> % set difference that works in both modes member(X,U),subtract(U,[X],V); \+member(X,V),sort([X|V],U) ).support(V,U):- guests(G), % rule application member(X,G), add_element(X,V,U), follows(X,S), subset(S,V).set_support(U,V):- support(V1,U), % sort of a minimal set ( support(_V2,V1) -> set_support(V1,V) ; V = V1). is_duplicate(X,[Y|L]):- ( subset(Y,X) ; is_duplicate(X,L) ).% purging solutions that are not truly minimalminimal_support(U,L):-minimal_support(U,[],L).minimal_support([],L,L).minimal_support([X|L],L1,L2):-( append(L,L1,U),is_duplicate(X,U) -> minimal_support(L,L1,L2); minimal_support(L,[X|L1],L2) ).solution(L):- guests(G),setof(X,set_support(G,X),S), minimal_support(S,L). Now if for instance dataset #2 is given as follows(fa,[dd,ec]).follows(cp,[ad,bm]).guests([ad,bm,cp,dd,ec,fa]). We get the answer L = [[ad, bm, dd, ec]]. Which means that all guests but Carlotte and Falco must be invited. The answers this solution gave me matched the solutions given in the Wicked Witch article with the exception of dataset #6, where more solutions were produced. This seems to be the correct solution. Finally, I must mention the CLP(FD) library of Prolog that is particularly suitable for this sort of problems.
{}
{'log_upvote_score': 5, 'links': ['https://cs.stackexchange.com/questions/2031', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/1621/']}
jdg_371696
stackexchange
llm_judgeable_groundtruth_similarity
50928891
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning asp.net core razor pages with ef. I want to implement pagination with my table, I have check this tutorial https://learn.microsoft.com/en-us/aspnet/core/data/ef-rp/sort-filter-page?view=aspnetcore-2.1 but it only support pre and next, I have researched for a long time, all of the solution are related to asp.net core mvc, but I'm using razor pages, there's no controller in my project, any ideas to implement? This is the effect what I want to implement <form method="get" asp-page="./Index"> <nav aria-label="Page navigation"> <ul class="pagination"> <li> <a href="#" aria-label="Previous"> <span aria-hidden="true">&laquo;</span> </a> </li> @{ var totalPages = Model.Products.Count % 2 == 0 ? Model.Products.Count / 2 : Model.Products.Count / 2 + 1; } @for (int i = 1; i <= totalPages; i++) { <li><a asp-page="./Index" asp-route-id="@i">@i</a></li> } <li> <a href="#" aria-label="Next"> <span aria-hidden="true">&raquo;</span> </a> </li> </ul> </nav> </form> cshtml.cs public async Task OnGetAsync(string sortOrder, string searchString, string shopString, string statusString, int page) {} Now provide the response and nothing else.
Pagination is relatively simple. There are libraries available to do it for you, but I've started to find them more trouble than they're worth. You need three pieces of information from the request (or set to default values): Page number (default to 1) Page size (typically defaults to 10, but whatever you want) Sort (not strictly necessary, but you should at least order by something to keep the results consistent across pages) The page number and size give you your "skip" and "take" values: var skip = (page - 1) * size;var take = size; You can then fetch the results via: var pageOfResults = await query.Skip(skip).Take(take).ToListAsync(); Where query is an IQueryable - either your DbSet directly or the DbSet with a Where clause, OrderBy , etc. applied. Then, you just need the total number of items to figure the pages: var count = await query.CountAsync(); Pro tip: you can parallelize the two queries (results and total count) by doing: var resultsTask = query.Skip(skip).Take(take).ToListAsync();var countTask = query.CountAsync();var results = await resultsTask;var count = await countTask; Tasks return hot, or already started. The await keyword simply holds the continuation of the rest of the code until the task completes. As a result, if you await each line, they'll complete in serial, but if you start both first, and then await each, they'll process in parallel. Anyways, once you have the count: var totalPages = (int)Math.Ceiling(Decimal.Divide(count, size));var firstPage = 1;var lastPage = totalPages;var prevPage = Math.Max(page - 1, firstPage);var nextPage = Math.Min(page + 1, lastPage); Note: you can determine whether to show first/previous and last/next buttons based on whether they equal firstPage or lastPage , respectively. Then, just build yourself a model with this information, and you can send that to the view to render the results and generate the paging HTML.
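The arithmetic in the answer, condensed into one small language-agnostic sketch (written in Python only for brevity; the names and defaults are illustrative, not taken from the answer's C# code):

import math

def page_info(total_count, page=1, size=10):
    # Mirrors the answer's skip/take and first/prev/next/last calculations.
    total_pages = max(1, math.ceil(total_count / size))
    page = min(max(page, 1), total_pages)      # clamp the requested page into range
    return {
        "skip": (page - 1) * size,
        "take": size,
        "total_pages": total_pages,
        "prev_page": max(page - 1, 1),
        "next_page": min(page + 1, total_pages),
        "show_prev": page > 1,
        "show_next": page < total_pages,
    }

print(page_info(total_count=53, page=3))
# {'skip': 20, 'take': 10, 'total_pages': 6, 'prev_page': 2, 'next_page': 4, 'show_prev': True, 'show_next': True}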
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50928891', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1677071/']}
jdg_371697
stackexchange
llm_judgeable_groundtruth_similarity
3466724
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $ \Phi = \frac{1 + \sqrt{5}}{2} $ is the golden ratio I'm having hard time using proving that $$ \lim_{n\to\infty} \dfrac{ \Phi^{n+1} - (1 - \Phi)^{n+1}}{\Phi^{n} - (1 - \Phi)^n} = \Phi $$ dividing both the numerator and denominator by $ \Phi^{n} $ doesn't help, neither does $ \Phi^{n} - (1 - \Phi^{n}) = (2 \Phi +1 ) \sum\limits_{i=0}^{n-1} \Phi^i (1 - \Phi)^{n-1-i} $ Where is the trick? Now provide the response and nothing else.
Hint: $\Phi-1=\dfrac{\sqrt5-1}2=\dfrac{5-1}{2(\sqrt5+1)}=\dfrac2{\sqrt5+1}<1$ and $>0$ $$\implies|1-\Phi|<1\text{ and }\left|\dfrac{1-\Phi}\Phi\right|<1$$ Divide the numerator and the denominator by $\Phi^n$
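For completeness, carrying out the suggested division (the step the asker was missing): $$\frac{\Phi^{n+1}-(1-\Phi)^{n+1}}{\Phi^{n}-(1-\Phi)^{n}}=\frac{\Phi-(1-\Phi)\left(\frac{1-\Phi}{\Phi}\right)^{n}}{1-\left(\frac{1-\Phi}{\Phi}\right)^{n}}\xrightarrow[n\to\infty]{}\frac{\Phi-0}{1-0}=\Phi,$$ since $\left|\dfrac{1-\Phi}\Phi\right|<1$ makes both powers vanish.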
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3466724', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/732465/']}
jdg_371698
stackexchange
llm_judgeable_groundtruth_similarity
10367020
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For two lists a and b, how can I get the indices of values that appear in both? For example, a = [1, 2, 3, 4, 5]b = [9, 7, 6, 5, 1, 0]return_indices_of_a(a, b) would return [0,4] , with (a[0],a[4]) = (1,5) . Now provide the response and nothing else.
The best way to do this would be to make b a set since you are only checking for membership inside it. >>> a = [1, 2, 3, 4, 5]>>> b = set([9, 7, 6, 5, 1, 0])>>> [i for i, item in enumerate(a) if item in b][0, 4]
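If the matching positions in b are wanted as well (the question asks about values that appear in both lists), a dictionary lookup keeps the whole thing linear; the variable names below just mirror the question's example:

a = [1, 2, 3, 4, 5]
b = [9, 7, 6, 5, 1, 0]

pos_in_b = {v: i for i, v in enumerate(b)}   # value -> index in b
pairs = [(i, pos_in_b[v]) for i, v in enumerate(a) if v in pos_in_b]
print(pairs)   # [(0, 4), (4, 3)], i.e. a[0] == b[4] == 1 and a[4] == b[3] == 5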
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10367020', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1342516/']}
jdg_371699
stackexchange
llm_judgeable_groundtruth_similarity
45318
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there any fundamental physical reason (thermodynamics/entropy?) behind the fact that there doesn't exist home appliance for fast cool down of food/drinks? I know that there are some methods (liquid nitrogen, etc.) used in the kitchen, but I mean something much more common, like microwave oven but with opposite function. Update Thanks for the responses so far. I understand that the temperature difference is the key. However, I do not understand why it is so much harder/slower to generate large temperature difference on the cool side? Now provide the response and nothing else.
The problem is that heat flow (in or out of an object) is related to the temperature difference between the object and its environment. For the sort of cooling usually found in the kitchen (convective cooling) the heat flow, and therefore the rate of temperature change, is proportional to the temperature difference. So let's take some example like a bottle of milk. If you want to heat it quickly that's pretty easy because it's easy to generate a large temperature difference on the hot side. Just burn some gas. However, to cool the milk quickly we need to generate a large temperature difference on the cool side, and that's hard. You mention liquid nitrogen, and indeed that's a good way to cool things quickly. However you're forgetting all the hours the liquid nitrogen supplier had to put in to cool nitrogen enough to make it liquify. In general it's hard to cool things quickly unless you cheat and start with something (like liquid nitrogen) that's already been cooled. Response to comment: This started as a comment, but it got a bit involved so I thought I'd put it in here. The temperature of anything (e.g. milk) depends on how much heat it contains. Let's not get into exactly what heat is, but basically if you add heat it increases the temperature and if you remove heat it reduces the temperature. The problem is that the milk is surrounded by the rest of the world, and this is around room temperature. Heat won't flow from a cold place to a hot place, so heat won't flow out of the milk into its surroundings unless we do some work. Typically what we do is use energy to pump heat around. The area we've pumped the heat from becomes cooler and the area we've pumped it to becomes hotter. This is what you do to liquify nitrogen. You have to pump the heat out of it so the nitrogen gets cold and liquifies while the rest of the world gets hotter. Once we have the liquid nitrogen we can use it to cool the milk, but it took a lot of work to make the liquid nitrogen. If you're interested in pursuing this, the mechanism for pumping heat around is called (unsurprisingly :-) a heat pump . It's basically a heat engine that runs backwards. Heating things is easy because there are lots of systems that have stored energy that can be easily converted to heat. For example a gas/air mixture has chemical energy that is converted to heat by burning it. You mentioned a microwave: this uses electricity that came from chemical energy, i.e. from a power station burning gas or coal, so the heat from a microwave originally came from chemical energy. You might wonder why we can't easily convert heat to chemical energy, e.g. mix carbon dioxide and water and have it convert to gas and oxygen and cool down in the process. If we could do this it would be an easy way to cool things. The reason why we can't do this is the second law of thermodynamics . Explaining this would be a long essay in its own right, but in brief it's highly probable that a gas/air mixture will convert to carbon dioxide and water (i.e. burn) but it's very improbable that a carbon dioxide/water mixture would convert back to gas and air.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/45318', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/16352/']}
jdg_371700
stackexchange
llm_judgeable_groundtruth_similarity
60165254
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the script below: val kotlinVersion by extra ("1.3.61")println("Version "+kotlinVersion)plugins { kotlin("jvm") version kotlinVersion} The variable kotlinVersion is correctly printed. Nevertheless it is not recognized in the plugins block and the following error is risen: e: /home/achadde/sources/kotlin/minichain/build.gradle.kts:6:27: Unresolved reference: kotlinVersion How can I fix this? Now provide the response and nothing else.
Short answer: There is currently no way of accessing anything from the outer scope from inside the lambda passed to plugins . Long answer: In case you use IntelliJ it will show you a bit more information: 'val kotlinVersion: String' can't be called in this context by implicit receiver. Use the explicit one if necessary. The outer scope ( this@Build_gradle ) where you define kotlinVersion is not available in the this@plugins scope, so you have to define kotlinVersion inside the plugins lambda. Since the extra delegate isn't available there either, you can't use it:

plugins {
    val kotlinVersion = "1.3.61"
    // ...
}

Unfortunately using a label does not work:

val kotlinVersion by extra ("1.3.61")
plugins {
    // ... Unresolved reference: kotlinVersion
    kotlin("jvm") version this@Build_gradle.kotlinVersion
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60165254', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10835951/']}
jdg_371701
stackexchange
llm_judgeable_groundtruth_similarity
149161
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In one type of box-whisker plot, the fences at the ends of the whiskers are meant to indicate cutoff values beyond which any point would be considered an outlier. The standard definitions I've found for these cutoff values are $$q_1 - k \times \mathrm{IQR}$$for the lower fence, and$$q_3 + k \times \mathrm{IQR}$$for the upper one, where $q_1$ and $q_3$ are the first and third quartile, respectively, $\mathrm{IQR} := q_3 - q_1$ is the interquartile range, and $k$ is some constant $ > 0$. (The value of $k$ I've seen most often is 1.5, with 3 being a distant second.) So far so good. The problem is that, with these definitions, the distance between the lower fence and $q_1$ would always be the same as the distance between the upper fence and $q_3$, namely $k\times \mathrm{IQR}$. IOW, the length of the upper whisker would always equal the length of the lower one 1 . This does not agree with the vast majority of BW plots I see out there. Of course, for some of these plots the ends of the whiskers are supposed to represent the min and max values, so the comments above do not apply to them. But there are many other cases in which the fences are meant to denote the criterion for classifying points as outliers, and are supposedly based on formulae like the ones shown above, but nonetheless the resulting whiskers have different lengths. ( For example .) What am I missing? 1 By "length of the upper/lower whisker" I mean, of course, the distance between the point where the whisker meets the box and the whisker's "free" end-point. Now provide the response and nothing else.
The whisker only goes as far as the maximum (minimum) point less (greater) than the upper (lower) fence value. For example, if $q_3+k \times IQR=10$ and the data set had values $\lbrace\dots,5,6,7,8,12\rbrace$, then the whisker would only go as far as 8, and 12 would be the "outlier". So, in short, the definitions for the whiskers, $q_3 +k \times IQR$ and $q_1-k\times IQR$, only represent the maximum extent to which the whiskers could go, if there were data points at those values. Thus they don't have to be (and rarely are) the same length.
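A small numeric illustration of this (made-up data; quartiles computed with NumPy's default linear interpolation, so exact values depend on the convention used):

import numpy as np

x = np.array([1, 3, 5, 6, 7, 8, 20])
q1, q3 = np.percentile(x, [25, 75])                      # 4.0 and 7.5 here
iqr = q3 - q1
k = 1.5
lower_fence, upper_fence = q1 - k * iqr, q3 + k * iqr    # -1.25 and 12.75

lower_whisker = x[x >= lower_fence].min()    # 1  -> lower whisker length 3.0
upper_whisker = x[x <= upper_fence].max()    # 8  -> upper whisker length 0.5
outliers = x[(x < lower_fence) | (x > upper_fence)]      # [20]

The two whiskers end up with different lengths even though both fences sit exactly $k\times IQR$ beyond the box.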
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/149161', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/4769/']}
jdg_371702
stackexchange
llm_judgeable_groundtruth_similarity
308059
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to set up apache instead of IIS because IIS needlessly crashes all the time, and it would be nice to be able to have my own checkout of the source instead of all of us editing a common checkout. In IIS we must do something like this at the beginning of each file: use CGI;my $input = new CGI();print "HTTP/1.0 200 OK";print $input->header(); whereas with apache we must leave off the 200 OK line. The following works with both: use CGI;my $input = new CGI();print $input->header('text/html','200 OK'); Can anyone explain why? And I was under the impression that the CGI module was supposed to figure out these kind of details automatically... Thanks! Update : brian is right, nph fixes the problem for IIS, but it is still broken for Apache. I don't think it's worth it to have conditionals all over the code so I will just stick with the last method, which works with and without nph. Now provide the response and nothing else.
HTTP and CGI are different things. The Perl CGI module calls what it does an "HTTP header", but it's really just a CGI header for the server to fix up before it goes back to the client. They look a lot alike, which is why people get confused and why the CGI.pm docs don't help by calling them the wrong thing. Apache fixes up the CGI headers to make them into HTTP headers, including adding the HTTP status line and anything else it might need. If your webserver isn't fixing up the header for you, it's probably expecting a "no-parsed header" where you take responsibility for the entire header. To do that in CGI.pm, you have to add the -nph option to your call to header, and you have to make the complete header yourself, including headers such as Expires and Last-Modified . See the docs under Creating a Standard HTTP Header . You can turn on NPH in three ways:

use CGI qw(-nph)
CGI::nph(1)
print header( -nph => 1, ...)

Are you using an older version of IIS? CGI.pm used to turn on the NPH feature for you automatically for IIS, but now that line is commented out in the source in CGI.pm:

# This no longer seems to be necessary
# Turn on NPH scripts by default when running under IIS server!
# $NPH++ if defined($ENV{'SERVER_SOFTWARE'}) && $ENV{'SERVER_SOFTWARE'}=~/IIS/;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/308059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12448/']}
jdg_371703
stackexchange
llm_judgeable_groundtruth_similarity
26655858
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understood that I can have localized NSLocationAlwaysUsageDescription text using standard iOS localization mechanism.Since we are using our custom dynamic localisation mechanism I wonder if there is another way to translate that string.I thought of editing application info.plist file on app start, but it seems to be read only. Can I change this value from withing the app? Now provide the response and nothing else.
There's no way to use a custom localisation system with the info.plist strings. That part of your app will have to use iOS's default localisation mechanism . This is how to localise the location request description with iOS's built in strings file localisation system. // English.strings file"NSLocationAlwaysUsageDescription" = "English description";// AnotherLanguage.strings"NSLocationAlwaysUsageDescription" = "ajbdknfuied wibnrf"; EDIT: At everyone down voting me. The question asked about using a custom "custom localisation system". They explicitly said they did not want to use the built in localisation system, but instead their own custom one. That is why I said it was impossible. Localising NSLocationAlwaysUsageDescription is completely possible. Using your own custom localisation system to do it is not.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26655858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/611206/']}
jdg_371704
stackexchange
llm_judgeable_groundtruth_similarity
10049
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've seen the Monte Carlo return $G_{t}$ being used in REINFORCE and the TD( $0$ ) target $r_t + \gamma Q(s', a')$ in vanilla actor-critic. However, I've never seen someone use the lambda return $G^{\lambda}_{t}$ in these situations, nor in any other algorithms. Is there a specific reason for this? Could there be performance improvements if we used $G^{\lambda}_{t}$ ? Now provide the response and nothing else.
That can be done. For example, Chapter 13 of the 2nd edition of Sutton and Barto's Reinforcement Learning book (page 332) has pseudocode for "Actor Critic with Eligibility Traces". It's using $G_t^{\lambda}$ returns for the critic (value function estimator), but also for the actor's policy gradients. Note that you do not explicitly see the $G_t^{\lambda}$ returns mentioned in the pseudocode. They are being used implicitly through eligibility traces, which allow for an efficient online implementation (the "backward view"). I do indeed have the impression that such uses are fairly rare in recent research though. I haven't personally played around with policy gradient methods to tell from personal experience why that would be. My guess would be that it is because policy gradient methods are almost always combined with Deep Neural Networks, and variance is already a big enough problem in training these things without starting to involve long-trajectory returns. If you use large $\lambda$ with $\lambda$ -returns, you get low bias, but high variance. For $\lambda = 1$ , you basically get REINFORCE again, which isn't really used much in practice, and has very high variance. For $\lambda = 0$ , you just get one-step returns again. Higher values for $\lambda$ (such as $\lambda = 0.8$ ) tend to work very well in my experience with tabular methods or linear function approximation, but I suspect the variance may simply be too much when using DNNs. Note that it is quite popular to use $n$ -step returns with a fixed, generally fairly small, $n$ in Deep RL approaches. For instance, I believe the original A3C paper used $5$ -step returns, and Rainbow uses $3$ -step returns. These often work better in practice than $1$ -step returns, but still have reasonably low variance due to using small $n$ .
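For reference, a minimal (untested) sketch of how $G^{\lambda}_{t}$ targets could be computed for an actor-critic update, using the backward recursion $G_t^{\lambda} = r_{t+1} + \gamma[(1-\lambda)V(s_{t+1}) + \lambda G_{t+1}^{\lambda}]$; the function and variable names are illustrative, not taken from any particular implementation:

def lambda_returns(rewards, values, gamma=0.99, lam=0.8):
    # rewards[t] = r_{t+1}; values has length len(rewards) + 1 and holds the
    # critic's estimates V(s_0) ... V(s_T), with V(s_T) = 0 if s_T is terminal.
    G = values[-1]
    out = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * G)
        out[t] = G   # lam = 0 recovers 1-step targets, lam = 1 the Monte Carlo return
    return out

# out[t] can serve as the critic's regression target, and out[t] - values[t]
# as the advantage weighting grad log pi(a_t | s_t) in the actor update.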
{}
{'log_upvote_score': 5, 'links': ['https://ai.stackexchange.com/questions/10049', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/21518/']}
jdg_371705
stackexchange
llm_judgeable_groundtruth_similarity
182830
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose the terms of the sequence of real numbers $\{a_n\}$ satisfy $|a_{n+1} - a_n| \le 2^{-n}$ for all $n$. Prove that $\{a_n\}$ is Cauchy. My Work So by the definition of a Cauchy sequence, for all $\varepsilon > 0$ $\exists N$ so that for $n,m \ge N$ we have $|a_m - a_n| \le \varepsilon$. However, questions like this one make me understand that the $2^{-n}$ condition is necessary for this to be a true statement. So I am wondering how to appeal to the Cauchy definition for this proof. Do I prove that every convergent sequence is therefore Cauchy, and then try to prove convergence? Now provide the response and nothing else.
As you said, you want to show that for any $\epsilon>0$ there is some $n_0\in\Bbb N$ such that $|a_m - a_n|<\epsilon$ whenever $m, n \ge n_0$ . The trick is to figure out how big an $n_0$ you’re going to need to make sure that $|a_m-a_n|<\epsilon$ no matter how far apart $m$ and $n$ are, as long as they’re both at least $n_0$ . Okay, suppose that we look at $|a_m-a_n|$ when $m$ and $n$ are not necessarily consecutive. There’s no harm in assuming that $m\le n$ ; then $k=n-m\ge 0$ , and we’re looking at $|a_m-a_{m+k}|$ . We only have a handle on the size of this number when $k=1$ : if $k=1$ , $|a_m-a_{m+k}|\le 2^{-m}$ . But we also have the triangle inequality: $$\begin{align*}|a_m-a_{m+k}|&=|(a_m-a_{m+1})+(a_{m+1}-a_{m+2})+\ldots+(a_{m+k-1}-a_{m+k})|\\&\le|a_m-a_{m+1}|+|a_{m+1}-a_{m+2}|+\ldots+|a_{m+k-1}-a_{m+k}|\\&\le 2^{-m}+2^{-(m+1)}+\ldots+2^{-(m+k-1)}\\&<\sum_{k\ge m}\frac1{2^k}\\&=\frac{\frac1{2^m}}{1-\frac12}\\&=\frac1{2^{m-1}}\;.\end{align*}$$ Thus, if $m,n\ge n_0$ , we automatically have $|a_m-a_n|<\dfrac1{2^{m-1}}\le\dfrac1{2^{n_0-1}}$ . If we choose $n_0$ big enough so that $\dfrac1{2^{n_0-1}}\le\epsilon$ , we’ll be in business. Is this always possible? Sure: just make sure that $2^{n_0-1}\ge\dfrac1\epsilon$ , i.e., that $n_0\ge\log_2\dfrac2\epsilon$ ; this is certainly always possible.
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/182830', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34736/']}
jdg_371706
stackexchange
llm_judgeable_groundtruth_similarity
48611525
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to print the common letters from two different user inputs using a for loop . (I need to do it using a for loop.) I am running into two problems: 1. My statement "If char not in output..." is not pulling unique values. 2. The output is giving me a list of individual letters rather than a single string. I tried the split the output but split ran into a type error. wrd = 'one'sec_wrd = 'toe'def unique_letters(x): output =[] for char in x: if char not in output and char != " ": output.append(char) return outputfinal_output = (unique_letters(wrd) + unique_letters(sec_wrd))print(sorted(final_output)) Now provide the response and nothing else.
You are trying to perform the Set Intersection . Python has set.intersection method for the same. You can use it for your use-case as: >>> word_1 = 'one'>>> word_2 = 'toe'# v join the intersection of `set`s to get back the string# v v No need to type-cast it to `set`.# v v Python takes care of it>>> ''.join(set(word_1).intersection(word_2))'oe' set will return the unique characters in your string. set.intersection method will return the characters which are common in both the sets. If for loop is must for you, then you may use a list comprehension as: >>> unique_1 = [w for w in set(word_1) if w in word_2]# OR# >>> unique_2 = [w for w in set(word_2) if w in word_1]>>> ''.join(unique_1) # Or, ''.join(unique_2)'oe' Above result could also be achieved with explicit for loop as: my_str = ''for w in set(word_1): if w in word_2: my_str += w# where `my_str` will hold `'oe'`
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48611525', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9101692/']}
jdg_371707
stackexchange
llm_judgeable_groundtruth_similarity
187991
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have decided to plunge deep into Angular-like technologies, where all pages are practically just one page that gets never reloaded. And at this point I had the idea, instead of using cookies (which I don't really need), why not authenticate with the server, get a key, and keep this key in ram as a Javascript variable, throwing away the cookie technology and the deadful "this site uses cookes to store etc." message. In my mind this is more secure since nothing is written, everything is in RAM and is at least as safe as with the traditional cookie method. But before I use it, I want to ask you if do you believe the same or if it is a bigger security issue than what I think. Now provide the response and nothing else.
The primary security reason to avoid cookies would be to prevent CSRF attacks, which is a valid goal. Cookies were not well-thought-out from a security standpoint. On the other hand, there are well-established approaches to avoiding CSRF, so it's not a hugely valuable technique. From a privacy standpoint, cookies (especially third-party cookies) are dangerous. For this reason, some users and/or browsers block them, which can provide developers with a functional reason to use something else. However, even for a single-page application (SPA), you probably want to persist your session token. The standard way to do this from JS is to use local storage (either persistent or session storage), which is basically a way to store JS variables for a particular site across user visits, or at least page loads within a given session. All modern browsers support local storage. The main security downside of local storage (or any other way of doing programmatic session management on the client, instead of relying on the automatic behavior of cookies), is that an attacker that gets XSS (Cross-Site Scripting) within your web app can steal the token and hijack the session to impersonate the victim, even after the victim closes the browser (by contrast, the typical session cookie is flagged as httponly , which prevents JS from reading it and restricts session hijacking to only as long as the victim leaves the web app open).
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/187991', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/180483/']}
jdg_371708
stackexchange
llm_judgeable_groundtruth_similarity
28979622
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently attempting to use ggplot to create a bar chart with a single bar that is partially transparent. I have the following code: dt1 <- data.table(yr=c(2010,2010,2011,2011), val=c(1500,3000,2000,1100), x=c("a","b","a","b"))ggplot() + geom_bar(data=dt1, aes(x=yr, y=val,fill=x),stat="identity") + scale_x_continuous(breaks=dt1$yr) This will create a simple chart with 2 columns with stacked data. I have tried the following code to adjust the 2011 value to have transparency, however I am not having much luck. Any pointers? dt1[,alphayr:=ifelse(yr==2011,.5,1)]ggplot() + geom_bar(data=dt1, aes(x=yr, y=val,fill=x),stat="identity", alpha=dt1$alphayr) +scale_x_continuous(breaks=dt1$yr) Now provide the response and nothing else.
First you put the alpha inside the aes as suggested by @jazzurro. However, you should use factor for this to get a discrete scale. Then you can manually adjust the alpha scale. ggplot() + geom_bar(data=dt1, aes(x=yr, y=val, fill=x, alpha=factor(alphayr)), stat="identity") + scale_x_continuous(breaks=dt1$yr) + scale_alpha_manual(values = c("0.5"=0.5, "1"=1), guide='none')
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28979622', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3167708/']}
jdg_371709
stackexchange
llm_judgeable_groundtruth_similarity
13728
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is it correct to say that the number of circles in an Apollonian gasket is countable becuase we can form a correspondence with a Cantor set, as their methods of construction are similar? What about if we apply the Apollonian gasket construction inside of a fractal like the Koch snowflake? (I think that will still be countable.) What if you did the Apollonian gasket construction between f(x) = sin(1/x) and g(x) = 2 - sin(1/x) between -1 and 1?(I think that will still be countable too, but it's not matching my intuition... which says "no way is that countable!") Is there any closed curve that would result in the number of circles being uncountable? What if we consider the Apollonian gasket made of spheres in $\mathbb{R}^3$? (Please keep in mind I have only had two courses in Analysis. My apologies if any of this is too naive.) Now provide the response and nothing else.
The reason a collection of disjoint open sets in $\mathbb{R}^n$ is at most countable is that $\mathbb{R}^n$ is separable : it has a countable dense subset given by the points with rational coordinates, and any open set must contain such a point that the others don't contain, so there can be at most countably many of them. There is also a measure-theoretic argument relying on the fact that $\mathbb{R}^n$ with the Lebesgue measure is $\sigma$-finite , which might come closer to agreeing with geometric intuition: if you could fit uncountably many open balls into $\mathbb{R}^n$, some closed $n$-cube (WLOG $[0, 1]^n$) would contain uncountably many of them, each of which would have positive measure. But a standard argument shows that this is impossible: if $S_k$ is the set of balls in $[0, 1]^n$ of radius at least $\frac{1}{k}$, then since $\bigcup S_k$ is uncountable it follows that some $S_k$ is uncountable, hence countable unions of its elements have unbounded measure; contradiction. The argument in your first sentence works to show that the number of circles in the standard construction of an Apollonian gasket (start with some circles and add circles) is countable, since at each step only finitely many circles are constructed. But maybe you are thinking of a more general construction.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/13728', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/2563/']}
jdg_371710
stackexchange
llm_judgeable_groundtruth_similarity
2848086
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a easy way to transform 1000000 in 1.000.000? A regex or string format in asp.net, c# Now provide the response and nothing else.
You can use ToString together with a formatting string and a format provider that uses '.' as a group separator and defines that the number should be grouped in 3-digit groups (which is not the case for all cultures): int number = 1000000;Console.WriteLine(number.ToString("N0", new NumberFormatInfo() { NumberGroupSizes = new[] { 3 }, NumberGroupSeparator = "." }));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2848086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259881/']}
jdg_371711
stackexchange
llm_judgeable_groundtruth_similarity
929021
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's a "static factory" method? Now provide the response and nothing else.
The static factory method pattern is a way to encapsulate object creation. Without a factory method, you would simply call the class's constructor directly: Foo x = new Foo() . With this pattern, you would instead call the factory method: Foo x = Foo.create() . The constructors are marked private, so they cannot be called except from inside the class, and the factory method is marked as static so that it can be called without first having an object. There are a few advantages to this pattern. One is that the factory can choose from many subclasses (or implementers of an interface) and return that. This way the caller can specify the behavior desired via parameters, without having to know or understand a potentially complex class hierarchy. Another advantage is, as Matthew and James have pointed out, controlling access to a limited resource such as connections. This a way to implement pools of reusable objects - instead of building, using, and tearing down an object, if the construction and destruction are expensive processes it might make more sense to build them once and recycle them. The factory method can return an existing, unused instantiated object if it has one, or construct one if the object count is below some lower threshold, or throw an exception or return null if it's above the upper threshold. As per the article on Wikipedia, multiple factory methods also allow different interpretations of similar argument types. Normally the constructor has the same name as the class, which means that you can only have one constructor with a given signature . Factories are not so constrained, which means you can have two different methods that accept the same argument types: Coordinate c = Coordinate.createFromCartesian(double x, double y) and Coordinate c = Coordinate.createFromPolar(double distance, double angle) This can also be used to improve readability, as Rasmus notes.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/929021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76509/']}
jdg_371712
stackexchange
llm_judgeable_groundtruth_similarity
124918
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using curl to upload a file to a server via an HTTP post. curl -X POST -d@myfile.txt server-URL When I manually execute this command on the command line, I get a response from the server like "Upload successful" . However, how if I want to execute this curl command via a script, how can I find out if my POST request was successful? Now provide the response and nothing else.
The simplest way is to store the response and compare it: $ response=$(curl -X POST -d@myfile.txt server-URL);$ if [ "Upload successful" == "${response}" ]; then … fi; I haven't tested that. The syntax might be off, but that's the idea. I'm sure there are more sophisticated ways of doing it such as checking curl's exit code or something. update curl returns quite a few exit codes. I'm guessing a failed post might result in 55 Failed sending network data. So you could probably just make sure the exit code was zero by comparing to $? ( Expands to the exit status of the most recently executed foreground pipeline. ): $ curl -X POST -d@myfile.txt server-URL;$ if [ 0 -eq $? ]; then … fi; Or if your command is relatively short and you want to do something when it fails, you could rely on the exit code as the condition in a conditional statement: $ if curl --fail -X POST -d@myfile.txt server-URL; then # …(success)else # …(failure)fi; I think this format is often preferred , but personally I find it less readable.
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/124918', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/59877/']}
jdg_371713
stackexchange
llm_judgeable_groundtruth_similarity
37451189
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Note: I can't use JavaScript, because this is for a CSS Zen Garden sort of challenge. Please do not suggest a JS library. I have 2 ideas that I'm not making headway on: Use a SVG filter to just pixelate the dang image; I've been playing with <feMorphology operator="erode"/> and punching the contrast up after, but it looks bad. Filter the image to be smaller, then scale it up using CSS and image-rendering to be all blocky. The hard part is Step A; I can't find any filter operations that scale the input. Am I missing something? How can I get a "pixelated" effect using an SVG filter? Now provide the response and nothing else.
You can pixelate images if you have the right "magic" displacementMap. Feel free to use the one referenced below (courtesy of Zoltan Fegyver). Update: Changed the sample code to inline the displacementmap image as a data: URI (thanks for the code IllidanS4.) The original answer had the displacementMap image hosted on a different domain. This used to work - but browsers implemented the new Filters security measures that disallow this. For production code today, you need the displacement map image served from the same domain as the source graphic's file or you need to inline the displacementMap. Update 2:You may have to tweak the size of feImage and feGaussianBlur to avoid bugs in feTile that adds artifacts. For example - this seems to work better: <feGaussianBlur stdDeviation="8" in="SourceGraphic" result="smoothed" /> <feImage width="15.4" height="15.4" <svg x="0px" y="0px" width="810px" height="600px" viewBox="0 0 810 600" color-interpolation-filters="sRGB"> <defs><filter id="pixelate" x="0%" y="0%" width="100%" height="100%"> <!--Thanks to Zoltan Fegyver for figuring out pixelation and producing the awesome pixelation map. --> <feGaussianBlur stdDeviation="2" in="SourceGraphic" result="smoothed" /> <feImage width="15" height="15" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAIAAAACDbGyAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAWSURBVAgdY1ywgOEDAwKxgJhIgFQ+AP/vCNK2s+8LAAAAAElFTkSuQmCC" result="displacement-map" /> <feTile in="displacement-map" result="pixelate-map" /> <feDisplacementMap in="smoothed" in2="pixelate-map" xChannelSelector="R" yChannelSelector="G" scale="50" result="pre-final"/> <feComposite operator="in" in2="SourceGraphic"/></filter> </defs> <image filter="url(#pixelate)" width="810" height="600" preserveAspectRatio="xMidYMid meet" xlink:href="http://uploads2.wikiart.org/images/vincent-van-gogh/the-starry-night-1889(1).jpg"/></svg>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37451189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2522637/']}
jdg_371714
stackexchange
llm_judgeable_groundtruth_similarity
51875
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A survey was made where people chose what they use a certain smiley to represent and entered their country of origin. I have recoded the text responses to numeric. What form of analysis should be used (preferably in SPSS) to check the level of correlation between where the people come from and the representations they chose? Now provide the response and nothing else.
There are a bunch of measures of nominal-nominal association. There's the phi coefficient, the contingency coefficient (which I think applies to square tables, so perhaps not suitable for you), Cramer's V coefficient, the lambda coefficient, and the uncertainty coefficient. There are no doubt still more. Many of them turn out to be a function of the chi-square statistic. (If you have one or more ordinal variables, there are many other coefficients that are suitable for that situation.) This wikipedia page lists the ones I mention. I believe SPSS can compute the ones that I think match your rectangular nominal-vs-nominal situation - at least I am certain in the case of phi and Cramer's V and the lambda coefficient: (Tables from here and here )
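For what it's worth, outside SPSS the same kind of measure is easy to compute by hand from the chi-square statistic; a small sketch (the contingency table below is invented purely for illustration):

import numpy as np
from scipy.stats import chi2_contingency

# rows: smiley interpretation, columns: country of origin (made-up counts)
table = np.array([[12,  5,  8],
                  [ 3, 14,  6],
                  [ 7,  9, 11]])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))   # Cramer's V
print(cramers_v, p)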
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/51875', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/21829/']}
jdg_371715
stackexchange
llm_judgeable_groundtruth_similarity
18736594
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I created a project with following structure: HibernateUtil: public class HibernateUtil { private static final SessionFactory sessionFactory = buildSessionFactory(); private static SessionFactory buildSessionFactory() { try { // Create the SessionFactory from hibernate.cfg.xml Configuration configuration = new Configuration().configure( "C:\\Users\\Nikolay_Tkachev\\workspace\\hiberTest\\src\\logic\\hibernate.cfg.xml"); return new Configuration().configure().buildSessionFactory(); } catch (Throwable ex) { // Make sure you log the exception, as it might be swallowed System.err.println("Initial SessionFactory creation failed." + ex); throw new ExceptionInInitializerError(ex); } } public static SessionFactory getSessionFactory() { return sessionFactory; } public static void shutdown() { // Close caches and connection pools getSessionFactory().close(); }} at line Configuration configuration = new Configuration().configure( "C:\\Users\\Nikolay_Tkachev\\workspace\\hiberTest\\src\\logic\\hibernate.cfg.xml"); I have error Initial SessionFactory creation failed.org.hibernate.HibernateException: C:\Users\Nikolay_Tkachev\workspace\hiberTest\src\logic\hibernate.cfg.xml not found Exception in thread "main" java.lang.ExceptionInInitializerError at logic.HibernateUtil.buildSessionFactory(HibernateUtil.java:19) at logic.HibernateUtil.(HibernateUtil.java:9) at logic.Main.main(Main.java:12) Caused by: org.hibernate.HibernateException: C:\Users\Nikolay_Tkachev\workspace\hiberTest\src\logic\hibernate.cfg.xml not found at org.hibernate.internal.util.ConfigHelper.getResourceAsStream(ConfigHelper.java:173) at org.hibernate.cfg.Configuration.getConfigurationInputStream(Configuration.java:1947) at org.hibernate.cfg.Configuration.configure(Configuration.java:1928) at logic.HibernateUtil.buildSessionFactory(HibernateUtil.java:14) ... 2 more What is the reason for the error and how do I fix it? Now provide the response and nothing else.
Give the path relative to your project. Create a folder called resources in your src and put your config file there. configuration.configure("/resources/hibernate.cfg.xml"); And if you check your code Configuration configuration = new Configuration().configure( "C:\\Users\\Nikolay_Tkachev\\workspace\\hiberTest\\src\\logic\\hibernate.cfg.xml");return new Configuration().configure().buildSessionFactory(); In two lines you are creating two configuration objects. That should work (haven't tested) if you write: Configuration configuration = new Configuration().configure( "C:\\Users\\Nikolay_Tkachev\\workspace\\hiberTest\\src\\logic\\hibernate.cfg.xml");return configuration.buildSessionFactory(); But it will fail after you deploy on the server, since you are using a system path rather than a project-relative path.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18736594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2674303/']}
jdg_371716
stackexchange
llm_judgeable_groundtruth_similarity
11247746
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on a wordpress-driven site with the template having a jcycle slider in the frontpage. I wish to use a static banner in all the pages except for the homepage. I found a function in my header.php file as below: <?php else : // NOT front page ?>` <div id="page-content-title"> <div id="page-content-header" class="container_24"> <div id="page-title"> How can I display an image here from PAGE FEATURED IMAGE? Now provide the response and nothing else.
You could use the new TextRange module of my Rangy library for this, although it's enormous overkill just for that one feature. Here's the code you'd need: var sel = rangy.getSelection();sel.expand("word");var word = sel.text();alert(word); Otherwise, if you can live with no support for pre-Blink Opera (up to and including version 12) and Firefox < 4, you could use Selection.modify() (WebKit, Firefox) and the expand() method of TextRange (IE). Here's an example. Demo: http://jsfiddle.net/timdown/dBgHn/1/ Code: function getWord() { var sel, word = ""; if (window.getSelection && (sel = window.getSelection()).modify) { var selectedRange = sel.getRangeAt(0); sel.collapseToStart(); sel.modify("move", "backward", "word"); sel.modify("extend", "forward", "word"); word = sel.toString(); // Restore selection sel.removeAllRanges(); sel.addRange(selectedRange); } else if ( (sel = document.selection) && sel.type != "Control") { var range = sel.createRange(); range.collapse(true); range.expand("word"); word = range.text; } alert(word);}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11247746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1488981/']}
jdg_371717
stackexchange
llm_judgeable_groundtruth_similarity
28236390
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm migrating my ListViews to RecyclerViews. With listviews I used the common technique described here to store and restore scroll position between activities. How to do the same with RecyclerViews? the RecyclerView.onSaveInstanceState() seem to have protected access, so can't be used directly. Now provide the response and nothing else.
Ok, so to answer my own question. As I understand it, since they've decoupled the layout code and the view recycling code (thus the name), the component responsible for holding layout state (and restoring it) is now the LayoutManager used in your recyclerview. Thus, to store state you use the same pattern, but on the layout manager and not the recyclerview:

protected void onSaveInstanceState(Bundle state) {
    super.onSaveInstanceState(state);
    // Save list state
    mListState = mLayoutManager.onSaveInstanceState();
    state.putParcelable(LIST_STATE_KEY, mListState);
}

Restore state in the onRestoreInstanceState() :

protected void onRestoreInstanceState(Bundle state) {
    super.onRestoreInstanceState(state);
    // Retrieve list state and list/item positions
    if(state != null)
        mListState = state.getParcelable(LIST_STATE_KEY);
}

Then update the LayoutManager (I do it in onResume() ):

@Override
protected void onResume() {
    super.onResume();
    if (mListState != null) {
        mLayoutManager.onRestoreInstanceState(mListState);
    }
}
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/28236390', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/722475/']}
jdg_371718
stackexchange
llm_judgeable_groundtruth_similarity
2468366
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: my models class Auction belongs_to :item belongs_to :user, :foreign_key => :current_winner_id has_many :auction_bidsendclass User has_many :auction_bidsendclass AuctionBid belongs_to :auction belongs_to :userend current usage An auction is displayed on the page, the user enters an amount and clicks bid. Controller code might look something like this: class MyController def bid @ab = AuctionBid.new(params[:auction_bid]) @ab.user = current_user if @ab.save render :json => {:response => 'YAY!'} else render :json => {:response => 'FAIL!'} end end end desired functionality This works great so far! However, I need to ensure a couple other things happen. @ab.auction.bid_count needs to be incremented by one. @ab.user.bid_count needs to be incremented by one @ab.auction.current_winner_id needs to be set to @ab.user_id That is, the User and the Auction associated with the AuctionBid need values updated as well in order for the AuctionBid#save to return true. Now provide the response and nothing else.
Save and destroy are automatically wrapped in a transaction ActiveRecord::Transactions::ClassMethods Both Base#save and Base#destroy come wrapped in a transaction that ensures that whatever you do in validations or callbacks will happen under the protected cover of a transaction. So you can use validations to check for values that the transaction depends on or you can raise exceptions in the callbacks to rollback, including after_* callbacks. The real convention! class AuctionBid < ActiveRecord::Base belongs_to :auction, :counter_cache => true belongs_to :user validate :auction_bidable? validate :user_can_bid? validates_presence_of :auction_id validates_presence_of :user_id # the real magic! after_save :update_auction, :update_user def auction_bidable? errors.add_to_base("You cannot bid on this auction!") unless auction.bidable? end def user_can_bid? errors.add_to_base("You cannot bid on this auction!") unless user.can_bid? end protected def update_auction auction.place_bid(user) auction.save! end def update_user user.place_bid user.save! end end honorable mention François Beausoleil +1. Thanks for the :foreign_key recommendation, but the current_winner_* columns need to be cached in the db in order to optimize the query. Alex +1. Thanks for getting me started with Model.transaction { ... } . While this didn't end up being a complete solution for me, it definitely help point me in the right direction.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2468366', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/184600/']}
jdg_371719
stackexchange
llm_judgeable_groundtruth_similarity
23254346
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want know that "How can I change background image opacity without changing on div content?" I searched too much & I don't find a good answer to solve this issue! HTML <div class="div-1"> <h2>title</h2> <p>text</p></div> CSS .div{position:relative;width:200px;height:200px;float:left;color:white;background:#7a8586 url('url') no-repeat local right;overflow:hidden;text-align: justify;font-family:arial;font-size:14px;} Now provide the response and nothing else.
Because all children of an element are affected by its CSS, you cannot simply set the opacity of a background-image, however, there are several workarounds to this: 1. Use transparent background images (easiest imo) Rather than setting the background image's opacity after the fact, just make the background image transparent in your favorite image editor (try gimp, it's free!) and save it as an image with transparency (like PNG). 2. Use positioning. If you make the parent element have relative positioning and absolutely position child elements inside, you take them out of the flow and they will not be affected by the opacity of the parent. [Source] 3. Use sibling elements in the same position If you separate the content from the parent and make the two elements siblings, you can position the elements that were children over the parent with z-indexing and set the opacity of the parent without affecting the child. There are more, but one of those should get you what you want.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23254346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2676352/']}
jdg_371720
stackexchange
llm_judgeable_groundtruth_similarity
885448
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question: Prove or disprove $$I=\lim_{n\to\infty} \frac{(2^{2^n}+1)(2^{2^n}+3)(2^{2^n}+5)\cdots (2^{2^n+1}+1)}{(2^{2^n})(2^{2^n}+2)(2^{2^n}+4)\cdots (2^{2^n+1})}=\sqrt{2}$$ I know \begin{align}\frac{(2^{2^n}+1)(2^{2^n}+3)(2^{2^n}+5)\cdots (2^{2^n+1}+1)}{(2^{2^n})(2^{2^n}+2)(2^{2^n}+4)\cdots (2^{2^n+1})}=&\left(1+\dfrac{1}{2^{2^n}}\right)\left(1+\dfrac{1}{2^{2^n}+2}\right)\\&\cdots\left(1+\dfrac{1}{2^{2^n+1}}\right)\end{align} so$$\lim_{n\to\infty}\left(1+\dfrac{1}{2^{2^n}}\right)\left(1+\dfrac{1}{2^{2^n}+2}\right)\cdots\left(1+\dfrac{1}{2^{2^n+1}}\right)=\sqrt{2}\ ?$$ I feel this result is very surprising. This problem comes from Chris's sis. and I use wolfram, limit wofl can't find it I often use this theta function and is this true? Thank you. Now provide the response and nothing else.
The logarithm of the expression under the limit can be rewritten as$$\sum_{k=0}^{2^{2^n-1}}\ln\left(1+\frac{1}{2^{2^n}+2k}\right)=\sum_{k=0}^{2^{2^n-1}}\frac{1}{2^{2^n}+2k}+O\left(2^{-2^n}\right).$$Denoting $N=2^{2^n-1}$, it is easy to see that the limit of the logarithm can be computed as the limit of a Riemann sum:$$\frac{1}{N}\sum_{k=0}^N\frac{1}{2\left(1+\frac{k}{N}\right)}\stackrel{N\rightarrow\infty} \longrightarrow \frac12\int_0^1\frac{dx}{1+x}=\ln\sqrt2.$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/885448', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58742/']}
jdg_371721
stackexchange
llm_judgeable_groundtruth_similarity
40395932
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following dataset and its contain some null values, need to replace the null value using fillna in spark. DataFrame: df = spark.read.format("com.databricks.spark.csv").option("header","true").load("/sample.csv")>>> df.printSchema();root |-- Age: string (nullable = true) |-- Height: string (nullable = true) |-- Name: string (nullable = true)>>> df.show()+---+------+-----+|Age|Height| Name|+---+------+-----+| 10| 80|Alice|| 5| null| Bob|| 50| null| Tom|| 50| null| null|+---+------+-----+>>> df.na.fill(10).show() when i'll give the na values it dosen't changed the same dataframe appeared again. +---+------+-----+|Age|Height| Name|+---+------+-----+| 10| 80|Alice|| 5| null| Bob|| 50| null| Tom|| 50| null| null|+---+------+-----+ tried create a new dataframe and store the fill values in dataframe but the result showing like unchanged. >>> df2 = df.na.fill(10) how to replace the null values? please give me the possible ways by using fill na.Thanks in Advance. Now provide the response and nothing else.
It seems that your Height column is not numeric. When you call df.na.fill(10) Spark replaces nulls only in columns that match the type of 10 , i.e. the numeric columns. If the Height column needs to stay a string, you can try df.na.fill('10').show() ; otherwise casting it to IntegerType() is necessary.
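A short sketch of that cast-then-fill approach (untested here; column names follow the question's sample data):

from pyspark.sql.functions import col
from pyspark.sql.types import IntegerType

df_num = (df
          .withColumn("Age", col("Age").cast(IntegerType()))
          .withColumn("Height", col("Height").cast(IntegerType())))
df_num.na.fill(10).show()       # numeric nulls are now replaced with 10

df.na.fill('10').show()         # alternative: keep the strings and fill with '10'
df.na.fill({'Height': '10', 'Name': 'unknown'}).show()   # per-column fill values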
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40395932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5928554/']}
jdg_371722
stackexchange
llm_judgeable_groundtruth_similarity
32398427
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have character strings with two underscores. Like these c54254_g4545_i5454c434_g4_i455c5454_g544_i3..etc I need to split these strings by the second underscore and I am afraid I have no clue how to do that in R (or any other tool for that sake). I'd be very happy if anyone can sort me out here.Thank youSM Now provide the response and nothing else.
One way would be to replace the second underscore by another delimiter (e.g. a space) using sub and then split using that. Using sub , we match one or more characters that are not a _ from the beginning ( ^ ) of the string ( ^[^_]+ ) followed by the first underscore ( _ ) followed by one or more characters that are not a _ ( [^_]+ ). We capture that as a group by placing it inside the parentheses ( (....) ), then we match the _ followed by one or more characters till the end of the string in the second capture group ( (.*)$ ). In the replacement, we separate the first ( \\1 ) and second ( \\2 ) with a space.

strsplit(sub('(^[^_]+_[^_]+)_(.*)$', '\\1 \\2', v1), ' ')
#[[1]]
#[1] "c54254_g4545" "i5454"
#[[2]]
#[1] "c434_g4" "i455"
#[[3]]
#[1] "c5454_g544" "i3"

data

v1 <- c('c54254_g4545_i5454', 'c434_g4_i455', 'c5454_g544_i3')
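Since the question allows "any other tool", the same split-at-the-second-underscore idea in Python, for comparison (the variable name v1 mirrors the R data above):

import re

v1 = ["c54254_g4545_i5454", "c434_g4_i455", "c5454_g544_i3"]

# regex version, mirroring the R pattern
pairs = [re.match(r"^([^_]+_[^_]+)_(.*)$", s).groups() for s in v1]

# or, without a regex: split on every "_" and re-join the first two pieces
pairs2 = [("_".join(s.split("_")[:2]), "_".join(s.split("_")[2:])) for s in v1]
print(pairs)   # [('c54254_g4545', 'i5454'), ('c434_g4', 'i455'), ('c5454_g544', 'i3')]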
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32398427', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3312889/']}
jdg_371723
stackexchange
llm_judgeable_groundtruth_similarity
4192554
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I noticed that in most of example of successive had done by following few steps. At first they made a function in denominator something just like this $$\frac{x}{(x+1)(x-1)}$$ . In every examples they had followed the above step. Then, they had differentiate $n$ times. I think there's some symmetry. But, I can't understand cause, they didn't complete whole sum. When I differentiate $1$ times. I get an answer which is bigger than as I expect. Is there actually any trick to solve successive differentiation easy way? Now provide the response and nothing else.
I am not sure if I understand you correctly. Usually the terms will be more complicated after each differentiation, especially if a product or a composition of functions is involved. In some cases these products and compositions can be transformed into simply structured expressions that can easily be differentiated multiple times. This is one of these terms, because $$\frac{x}{(x+1)(x-1)}=\frac 1 2 \left(\frac 1 {x+1} + \frac 1 {x-1}\right)=\frac 1 2(x+1)^{-1}+\frac 1 2(x-1)^{-1}$$ The right hand side can be easily differentiated: $$\left(\frac 1 2(x+1)^{-1}+\frac 1 2(x-1)^{-1}\right)^{(n)}=\frac {(-1)^n n!}2(x+1)^{-(n+1)}+\frac {(-1)^n n!}2(x-1)^{-(n+1)}$$
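For reference, the general pattern behind that last step (with $a$ any constant): $$\frac{d}{dx}(x+a)^{-1}=-(x+a)^{-2},\qquad \frac{d^{2}}{dx^{2}}(x+a)^{-1}=2(x+a)^{-3},\qquad \frac{d^{n}}{dx^{n}}(x+a)^{-1}=(-1)^{n}\,n!\,(x+a)^{-(n+1)},$$ which is what gives the $(x\pm 1)^{-(n+1)}$ factors above.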
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4192554', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_371724
stackexchange
llm_judgeable_groundtruth_similarity
35949554
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was told today that it's possible to invoke a function without parentheses. The only ways I could think of was using functions like apply or call . f.apply(this);f.call(this); But these require parentheses on apply and call leaving us at square one. I also considered the idea of passing the function to some sort of event handler such as setTimeout : setTimeout(f, 500); But then the question becomes "how do you invoke setTimeout without parentheses?" So what's the solution to this riddle? How can you invoke a function in Javascript without using parentheses? Now provide the response and nothing else.
There are several different ways to call a function without parentheses. Let's assume you have this function defined: function greet() { console.log('hello');} Then here follow some ways to call greet without parentheses: 1. As Constructor With new you can invoke a function without parentheses: new greet; // parentheses are optional in this construct. From MDN on the new oprator : Syntax new constructor[([arguments])] 2. As toString or valueOf Implementation toString and valueOf are special methods: they get called implicitly when a conversion is necessary: var obj = { toString: function() { return 'hello'; }}'' + obj; // concatenation forces cast to string and call to toString. You could (ab)use this pattern to call greet without parentheses: '' + { toString: greet }; Or with valueOf : +{ valueOf: greet }; valueOf and toString are in fact called from the @@toPrimitive method (since ES6), and so you can also implement that method: +{ [Symbol.toPrimitive]: greet }"" + { [Symbol.toPrimitive]: greet } 2.b Overriding valueOf in Function Prototype You could take the previous idea to override the valueOf method on the Function prototype : Function.prototype.valueOf = function() { this.call(this); // Optional improvement: avoid `NaN` issues when used in expressions. return 0; }; Once you have done that, you can write: +greet; And although there are parentheses involved down the line, the actual triggering invocation has no parentheses. See more about this in the blog "Calling methods in JavaScript, without really calling them" 3. As Generator You could define a generator function (with * ), which returns an iterator . You can call it using the spread syntax or with the for...of syntax. First we need a generator variant of the original greet function: function* greet_gen() { console.log('hello');} And then we call it without parentheses by defining the @@iterator method: [...{ [Symbol.iterator]: greet_gen }]; Normally generators would have a yield keyword somewhere, but it is not needed for the function to get called. The last statement invokes the function, but that could also be done with destructuring : [,] = { [Symbol.iterator]: greet_gen }; or a for ... of construct, but it has parentheses of its own: for ({} of { [Symbol.iterator]: greet_gen }); Note that you can do the above with the original greet function as well, but it will trigger an exception in the process, after greet has been executed (tested on FF and Chrome). You could manage the exception with a try...catch block. 4. As Getter @jehna1 has a full answer on this, so give him credit. Here is a way to call a function parentheses-less on the global scope, avoiding the deprecated __defineGetter__ method. It uses Object.defineProperty instead. We need to create a variant of the original greet function for this: Object.defineProperty(window, 'greet_get', { get: greet }); And then: greet_get; Replace window with whatever your global object is. You could call the original greet function without leaving a trace on the global object like this: Object.defineProperty({}, 'greet', { get: greet }).greet; But one could argue we do have parentheses here (although they are not involved in the actual invocation). 5. As Tag Function Since ES6 you can call a function passing it a template literal with this syntax: greet``; See "Tagged Template Literals" . 6. 
As Proxy Handler Since ES6, you can define a proxy : var proxy = new Proxy({}, { get: greet } ); And then reading any property value will invoke greet : proxy._; // even if property not defined, it still triggers greet There are many variations of this. One more example: var proxy = new Proxy({}, { has: greet } );1 in proxy; // triggers greet 7. As instance checker The instanceof operator executes the @@hasInstance method on the second operand, when defined: 1 instanceof { [Symbol.hasInstance]: greet } // triggers greet
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/35949554', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/371184/']}
jdg_371725
stackexchange
llm_judgeable_groundtruth_similarity
399573
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have seen a lot of tag edits recently that added the database tag to questions already tagged with a database product (e.g. postgresql , oracle ) - obviously most of those questions are also tagged with sql . I think the database tag adds nothing to those questions as it's obvious that a "database" is used by mentioning a database product name. Does it make sense to remove the database tag in that situation? Now provide the response and nothing else.
The database tag is almost as useless as sql and is only warranted for those questions that are truly database-agnostic, and I don't expect there to be many of those, especially not over 170K. Adding such generic tags to a question that is already perfectly scoped with the product-specific tag is not useful; I would even say it might harm those users that find their questions with tag searches and/or filters. There are only 5 slots available for tags, and adding a more generic tag isn't a great way to categorize these questions. Making tag edits to introduce the tag for the cases you describe should not be done. If it is found in the suggested-edits review queue, the edit should be rejected (or maybe "improved" by removing that tag again if the rest of the edit is worth keeping), and if those questions pop up in your active tab with those tags added, roll back. Try to inform the editor that their re-tags are frowned upon, and flag for a moderator if that doesn't work out as you hope.
{}
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/399573', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/330315/']}
jdg_371726
stackexchange
llm_judgeable_groundtruth_similarity
22988
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want a command to strip an XML-Header and Footer from a file: <?xml version="1.0" encoding="UTF-8"?><conxml><MsgPain001> <HashValue>A9C72997C702A2F841B0EEEC3BD274DE1CB7BEA4B813E030D068CB853BCFECA6</HashValue> <HashAlgorithm>SHA256</HashAlgorithm> <Document> ... </Document> <Document> ... </Document></MsgPain001></conxml> ... Should become just <Document> ... </Document> <Document> ... </Document> (note the indenting, the indent of the first document-tag should be stripped of. This sounds like a (greedy) regex <Document>.*</Document> But I don't get it due to the linefeeds. I need it in a pipe to compute a hash over the contained documents. Now provide the response and nothing else.
Using sed : sed -n '/<Document>/,/<\/Document>/ p' yourfile.xml Explanation: -n makes sed silent, meaning it does not output the whole file contents, /pattern/ searches for lines including specified pattern, a , b (the comma) tells sed to perform an action on the lines from a to b (where a and b get defined by matching the above patterns), p stands for print and is the action performed on the lines that matched the above. Edit: If you'd like to additionally strip the whitespace before <Document> , it can be done this way: sed -ne '/ <Document>/s/^ *//' -e '/<Document>/,/<\/Document>/ p' yourfile.xml
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/22988', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/11710/']}
jdg_371727
stackexchange
llm_judgeable_groundtruth_similarity
18536147
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to understand better a mechanism of locking in postgres. Let's say that tree can have apples (via foreign key on apple table). It seems that when selecting a tree for update lock is obtained on an apple. However, the operation is not blocked even if someone else already holds a lock on this apple. Why is it so? p.s. Please don't suggest to remove "select for update". Scenario Transaction 1 Transaction 2BEGIN .update apple; .. BEGIN. select tree for update;. update apple;. --halts because of the other transaction locking an appleupdate apple; .-- deadlock . COMMIT --transaction succeeds Code If you want to try it in your postgres - here is a code you can copy/paste. I have a following db schema CREATE TABLE trees ( id integer primary key);create table apples ( id integer primary key, tree_id integer references trees(id)); and very simple data insert into trees values(1);insert into apples values(1,1); There are two simple transactions. One is updating apples, the second is locking a tree and updating an apple. BEGIN; UPDATE apples SET id = id WHERE id = 1; -- run second transaction in paralell UPDATE apples SET id = id WHERE id = 1;COMMIT;BEGIN; SELECT id FROM trees WHERE id = 1 FOR UPDATE; UPDATE apples SET id = id WHERE id = 1;COMMIT; When I run them - deadlock occurs on the second update of first transaction. ERROR: deadlock detectedDETAIL: Process 81122 waits for ShareLock on transaction 227154; blocked by process 81100.Process 81100 waits for ShareLock on transaction 227153; blocked by process 81122.CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."trees" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR SHARE OF x" Now provide the response and nothing else.
Just a wild guess: you're running into an issue related to an implementation detail... Specifically, your select tree for update statement acquires an exclusive lock on the trees. And the update apples statements obtain an exclusive lock on the relevant apples. When you run the update on apples, Postgres' foreign-key related per row triggers fire, to ensure that the tree_id exists. I don't recall their precise names off the top of my head, but they're in the catalog and there are bits and pieces in the documentation that reference them explicitly or implicitly, e.g.: create constraint trigger ... on ... from ... http://www.postgresql.org/docs/current/static/sql-createtrigger.html At any rate, these triggers will run something that amounts to the following: select exists (select 1 from trees where id = 1); And therein lies your problem: the exclusive access due to the select for update makes it wait for transaction 2 to release the lock on trees in order to finalize its update statement on apples, but transaction 2 is waiting for transaction 1 to complete in order to obtain a lock on apples so as to begin its update statement on apples. As a result, Postgres bails with a deadlock.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18536147', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/766551/']}
jdg_371728
stackexchange
llm_judgeable_groundtruth_similarity
10333971
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's a noodle scratcher. Bearing in mind we have HTML5 local storage and xhr v2 and what not. I was wondering if anyone could find a working example or even just give me a yes or no for this question: Is it possible to Pre-size an image using the new local storage (or whatever), so that a user who does not have a clue about resizing an image can drag their 10mb image into my website, it resize it using the new localstorage and THEN upload it at the smaller size. I know full well you can do it with Flash, Java applets, active X... The question is if you can do with Javascript + Html5. Looking forward to the response on this one. Ta for now. Now provide the response and nothing else.
Yes, use the File API , then you can process the images with the canvas element . This Mozilla Hacks blog post walks you through most of the process. For reference here's the assembled source code from the blog post: // from an input elementvar filesToUpload = input.files;var file = filesToUpload[0];var img = document.createElement("img");var reader = new FileReader(); reader.onload = function(e) {img.src = e.target.result}reader.readAsDataURL(file);var ctx = canvas.getContext("2d");ctx.drawImage(img, 0, 0);var MAX_WIDTH = 800;var MAX_HEIGHT = 600;var width = img.width;var height = img.height;if (width > height) { if (width > MAX_WIDTH) { height *= MAX_WIDTH / width; width = MAX_WIDTH; }} else { if (height > MAX_HEIGHT) { width *= MAX_HEIGHT / height; height = MAX_HEIGHT; }}canvas.width = width;canvas.height = height;var ctx = canvas.getContext("2d");ctx.drawImage(img, 0, 0, width, height);var dataurl = canvas.toDataURL("image/png");//Post dataurl to the server with AJAX
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/10333971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/360822/']}
jdg_371729
stackexchange
llm_judgeable_groundtruth_similarity
12905840
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I have within a program an ordinary for loop through a vector of objects (objects that are of a type I defined, if that is relevant): for(int k = 0; k < objects.size(); k++){ ... } ...and when I compile, I get this warning: warning: comparison between signed and unsigned integer expressions This makes sense, since I think size() for a vector returns a size_t . But why would it matter? Isn't a certain number of elements (or even memory chunks) an integer that you can count? More importantly, since my program has multiple such loops and happens to segfault a lot, could this be part of it? Now provide the response and nothing else.
The problem arises when object.size() returns a value that is greater than the maximum representable value of k . Since k is signed, it has only half the maximum value compared to a size_t 1 . Now, this may not happen in your particular application (on a typical 32-bit system, that would be upwards of two billion objects in your collection), but it's always a good idea to use the correct types. 1. Pre-emptive rebuttal: Yes, this is only true for machines using typical two's-complement arithmetic, and for machines where int and size_t are represented using the same number of bits.
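For concreteness, here is a minimal sketch (my addition, not from the original answer) of the two usual ways to keep that loop correct and warning-free: index with the container's own unsigned size type, or drop the index entirely.

```cpp
#include <iostream>
#include <vector>

int main() {
    std::vector<int> objects{10, 20, 30};

    // Option 1: use the container's size type, so the index can represent
    // every value objects.size() can return and the comparison is
    // unsigned-vs-unsigned (no conversion warning).
    for (std::vector<int>::size_type k = 0; k < objects.size(); ++k) {
        std::cout << objects[k] << '\n';
    }

    // Option 2: avoid the index altogether with a range-based for loop.
    for (const int& obj : objects) {
        std::cout << obj << '\n';
    }
    return 0;
}
```

Either form sidesteps the signed/unsigned comparison entirely.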
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12905840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1343118/']}
jdg_371730
stackexchange
llm_judgeable_groundtruth_similarity
25047088
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've got an angular app and I integrated it with UI Bootstrap project. I'm using regular The modal dialog with dropdown containing 750 records, when one of the items is selected and clicked "Ok" or "Cancel", the modal and the overlay fades out without any delay. Here's the plunker: Modal dialog with 750 records If the modal dialog with dropdown containing around 10k+ records, and one of the items is selected from the list. Clicking "Ok" or "Cancel" is not hiding the modal dialog right away, instead I'm having a 8-10 second delay on Chrome, I've not tested on IE yet. Here's the plunker: Modal dialog with 10k+ records Question: Why I'm having performance hit with more data? Now provide the response and nothing else.
You are slowing the whole entire browser down by grabbing the DOM by the neck and pouring 10,000 <option> nodes down its throat. You need to lazy load your data somehow. Ever noticed on sites like Twitter, Facebook, and others that when you scroll to the bottom of the page it will begin loading more records from the server? Good apps will start to garbage collect old records that have been scrolled up as well. When you scroll through your Facebook news feed it's not loading all your friends post since 2007 into the browser all at the same time. Once a maximum number of posts exists in the DOM Facebook will start removing the oldest ones you scrolled up to make room for more and grab fresh posts from the server so you can continue scrolling. You can even see your browser scroll bar jump up as you scroll down because more posts are being added to the DOM. No browser is going to be able to handle that much data. The browser is not a database. I'm amazed your plunker with 10k records is as performant as it is! Haha. A dropdown is not what you want to display that data. You're going to have to sit down and think of a better way to show that data to the user. My first thought is to provide a filterable list that initially contains the top 25 most selected options or something, then typing in a search field causes it to load a new list from the server that matches the search criteria. Only you will know what your users will actually want, but I assure you it's not a dropdown list with 10k+ options. Example: Notice how the browser scroll bar jumps up a bit when it gets to the bottom. Twitter gets to the bottom and then loads more data to scroll through. It will eventually start cleaning up data at the top of the page as well if I scroll far enough. Modern browsers can handle a lot, but 10,000+ <option> nodes is pushing it overboard.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25047088', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1062085/']}
jdg_371731
stackexchange
llm_judgeable_groundtruth_similarity
40748518
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have ASP.NET core web application. I have configured the web application on our web server and set the ASPNETCORE_ENVIRONMENT variable to Development . I set this variable at machine level like shown in the picture below.Now on the same machine i want to configured one more instance of same web application as Staging environment. What are my options here to set ASPNETCORE_ENVIRONMENT at application level instead of machine level? so i can host multiple instances of the same application on the same machine? Now provide the response and nothing else.
You have a couple of options. Run each app as a different user, and set the environment variable within that user profile. This gives you a nice added security bonus. You'll have to set the app pool to load the user profile. Use IIS configuration: Start IIS manager. Choose configuration editor. Pull down the section combobox and choose system.webServer/aspNetCore. Pull down the from combobox and choose ApplicationHost.config. Click on the environmentVariables element and click on the ... button hiding in the second column, at the right. Set your environment variables. Exit out of the environment variables screen and then click Apply. Restart the app pool/app.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40748518', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3862378/']}
jdg_371732
stackexchange
llm_judgeable_groundtruth_similarity
1595026
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to calculate $$\int\frac{x-1}{\sqrt{ x^2-2x}}dx $$ I have no idea how to calculate it. Please help. Now provide the response and nothing else.
$$t=x^2-2x$$ $$dt=(2x-2)\,dx$$ $$dx=\frac{dt}{2(x-1)}$$ $$\int\frac{x-1}{\sqrt{x^2-2x}}\,dx=\int\frac{1}{2}\cdot\frac{1}{\sqrt{t}}\,dt=\frac{1}{2}\int t^{-\frac{1}{2}}\,dt=\frac{1}{2}\cdot 2t^{\frac{1}{2}}+C=\sqrt{x^2-2x}+C$$
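As a quick check (added here for clarity, not part of the original answer), differentiating the result recovers the integrand: $$\frac{d}{dx}\left(\sqrt{x^2-2x}+C\right)=\frac{2x-2}{2\sqrt{x^2-2x}}=\frac{x-1}{\sqrt{x^2-2x}}$$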
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1595026', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/301908/']}
jdg_371733
stackexchange
llm_judgeable_groundtruth_similarity
3316305
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is difference between these two examples: if(firstchek){ if(second) { return here(); } else { return here(); }} and this: if(firstcheck){ if(second) { return here(); } return here(); // else code without else}// code without else// else code is here return here(); Now provide the response and nothing else.
This code: if (someCondition){ foo(); return;}bar();return; is the same as this: if (someCondition){ foo();}else{ bar();}return; The only difference is in readability. Sometimes one way is more readable, sometimes the other. See Refactoring: Replace Nested Conditional with Guard Clauses . Nested conditionals : double getPayAmount() { double result; if (_isDead) result = deadAmount(); else { if (_isSeparated) result = separatedAmount(); else { if (_isRetired) result = retiredAmount(); else result = normalPayAmount(); }; } return result;}; Guard clauses : double getPayAmount() { if (_isDead) return deadAmount(); if (_isSeparated) return separatedAmount(); if (_isRetired) return retiredAmount(); return normalPayAmount();};
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3316305', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_371734
stackexchange
llm_judgeable_groundtruth_similarity
1184329
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I suppose in some ways either (or both) Delegate or MethodInfo qualify for this title. However, neither provide the syntactic niceness that I'm looking for. So, in short, Is there some way that I can write the following: FunctionPointer foo = // whatever, create the function pointer using mechanismsfoo(); I can't use a solid delegate (ie, using the delegate keyword to declare a delegate type) because there is no way of knowing till runtime the exact parameter list. For reference, here's what I've been toying with in LINQPad currently, where B will be (mostly) user generated code, and so will Main , and hence for nicety to my users, I'm trying to remove the .Call : void Main(){ A foo = new B(); foo["SomeFuntion"].Call();}// Define other methods and classes hereinterface IFunction { void Call(); void Call(params object[] parameters);}class A { private class Function : IFunction { private MethodInfo _mi; private A _this; public Function(A @this, MethodInfo mi) { _mi = mi; _this = @this; } public void Call() { Call(null); } public void Call(params object[] parameters) { _mi.Invoke(_this, parameters); } } Dictionary<string, MethodInfo> functions = new Dictionary<string, MethodInfo>(); public A() { List<MethodInfo> ml = new List<MethodInfo>(this.GetType().GetMethods()); foreach (MethodInfo mi in typeof(Object).GetMethods()) { for (int i = 0; i < ml.Count; i++) { if (ml[i].Name == mi.Name) ml.RemoveAt(i); } } foreach (MethodInfo mi in ml) { functions[mi.Name] = mi; } } public IFunction this[string function] { get { if (!functions.ContainsKey(function)) throw new ArgumentException(); return new Function(this, functions[function]); } }}sealed class B : A { public void SomeFuntion() { Console.WriteLine("SomeFunction called."); }} Now provide the response and nothing else.
You say you want to keep the number and type of parameters open, but you can do that with a delegate: public delegate object DynamicFunc(params object[] parameters); This is exactly the same thing you currently have. Try this: class Program{ static void Main(string[] args) { DynamicFunc f = par => { foreach (var p in par) Console.WriteLine(p); return null; }; f(1, 4, "Hi"); }} You can think of an instance-method delegate as very similar to your Function class: an object and a MethodInfo . So there's no need to rewrite it. Also function pointers in C and C++ are not any closer to what you need: they cannot be bound to an object instance and function, and also they are statically typed, not dynamically typed. If you want to "wrap" any other method in a DynamicFunc delegate, try this: public static DynamicFunc MakeDynamicFunc(object target, MethodInfo method){ return par => method.Invoke(target, par);}public static void Foo(string s, int n) { Console.WriteLine(s); Console.WriteLine(n);} and then: DynamicFunc f2 = MakeDynamicFunc(null, typeof(Program).GetMethod("Foo"));f2("test", 100); Note that I'm using a static method Foo so I pass null for the instance, but if it was an instance method, I'd be passing the object to bind to. Program happens to be the class my static methods are defined in. Of course, if you pass the wrong argument types then you get errors at runtime. I'd probably look for a way to design your program so that as much type information is captured at compile time as possible.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1184329', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15537/']}
jdg_371735
stackexchange
llm_judgeable_groundtruth_similarity
176481
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: One of the slickest things to happen to topology was the proof of the Hopf invariant one using Adams operations in $K$-theory. The general idea is that the ring $K(X)$ admits operations $\psi^k$ that endows it more relations from which one can deduce restrictions on when a Hopf invariant one element can occur. Similarly, the representation ring $R(G)$ also admits Adams operations. One may describe its effects on characters by $\Psi^k(\chi)(g) = \chi(g^k)$. Alternatively one can parallel the construction of the Adams operations in topology by defining the Adams operations as taking the logarithmic derivative of $$\lambda_t(V) := \Sigma_k [\Lambda^k V]t^k$$ (of course we extend this formula to virtual characters as well) and defining $\Psi^n(V)$ as $(-1)^n$ times the $n$-th coefficient of the logarithmic derivative. Now, considering that my background in representation theory is minimal, my questions are: What is an application of the Adams operations in representation theory? Is there one that has the same flavor as the proof of the Hopf invariant one problem? We have the Atiyah-Segal completion theorem that gives isomorphism of rings $K(BG) \simeq \hat{R(G)}_I$ and there are Adams operations on both sides (I'd imagine that the Adams operations in group representations extends naturally to completion) - what is their relationship (if any)? Now provide the response and nothing else.
Representation rings and complex $K$-theory are both naturally "Lambda rings", and there is a great deal of literature about this context. The lambda ring structure is determined by the formal series $\lambda_t(V)$, and lets you define Adams operations as you describe. Question 2: the obvious map $R(G)\to K(BG)$ is induced by the operation which sends a representation $V$ to the Borel vector bundles $V\times_G EG\to BG$. This clearly takes exterior powers of representations to exterior powers of bundles, so it defines a map of Lambda rings. The augmentation $R(G)\to R(e)=\mathbb{Z}$ is also a map of Lambda rings, so its kernel $I$ is preserved by the $\lambda$ operations (and thus by the Adams operations), so $\widehat{R(G)}_I$ is a Lambda ring too, and it formally must coincide with that on $K(BG)$. I don't know anything about representation theory, so I have nothing to say for question (1), though I would be interested in an answer.
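To make the first paragraph concrete (a standard λ-ring computation, added here for illustration rather than taken from the answer above): the Newton identities express the first few Adams operations through the λ-operations, $$\psi^1=\lambda^1,\qquad \psi^2=(\lambda^1)^2-2\lambda^2,\qquad \psi^3=(\lambda^1)^3-3\lambda^1\lambda^2+3\lambda^3,$$ and in general $\psi^k-\lambda^1\psi^{k-1}+\lambda^2\psi^{k-2}-\cdots+(-1)^{k-1}\lambda^{k-1}\psi^1+(-1)^k k\,\lambda^k=0$.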
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/176481', 'https://mathoverflow.net', 'https://mathoverflow.net/users/24706/']}
jdg_371736
stackexchange
llm_judgeable_groundtruth_similarity
7076
Below is a question asked on the forum biology.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In explaining energy and work to an 8 year-old I said that all conversion of energy generates heat as a by-product. For example, cars generate heat in their engines and running generates heat in our bodies. Then the 8 year-old said, except for cold-blooded animals. So my question is, do cold-blooded animals generate any heat in their conversion of stored energy (food, fat, etc) into motion? If they generate heat, why are they cold-blooded? Now provide the response and nothing else.
They do generate heat. They just do not SPEND energy specifically on heating their bodies by raising their metabolisms. This is a form of energy conservation. The metabolic rate they need to live is not nearly enough to heat their bodies. An example of spending energy to heat the body is seen in humans shivering. Here muscle is activated not for its usual purpose, but to function as a furnace. "Warm-blooded" and "cold-blooded" are somewhat misnomers. The correct way to think of it is... Endotherm or ectotherm. Does the heat primarily come from within (endo) or from the surroundings (ecto)? Endothermic animals include mammals. Most of their body heat is generated by their own metabolisms. Ectothermic animals include reptiles and insects. They absorb most of their body heat from the surroundings. This is not the same as saying they let their body temperature fluctuate with their surroundings; some avoid this by moving around to accommodate themselves. Homeotherm or poikilotherm. Homeotherms want to maintain homeostasis for their body temperatures. They don't want it to change. Poikilotherms do not exhibit this behaviour; instead their body temperatures vary greatly with the environment. We can have endotherm poikilotherms, such as squirrels, who let their body temperature drop while hibernating. Endotherm homeotherms, such as humans, where temperature is constant by means of complex thermoregulation. Ectotherm homeotherms, such as snakes (moving into shadow or into the sun to regulate temperature), and ectotherm poikilotherms, such as maggots.
{}
{'log_upvote_score': 5, 'links': ['https://biology.stackexchange.com/questions/7076', 'https://biology.stackexchange.com', 'https://biology.stackexchange.com/users/3031/']}
jdg_371737
stackexchange
llm_judgeable_groundtruth_similarity
4210042
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I exclude a specific directory when searching for *.js files using find ? find . -name '*.js' Now provide the response and nothing else.
Use the -prune primary. For example, if you want to exclude ./misc : find . -path ./misc -prune -o -name '*.txt' -print To exclude multiple directories, OR them between parentheses. find . -type d \( -path ./dir1 -o -path ./dir2 -o -path ./dir3 \) -prune -o -name '*.txt' -print And, to exclude directories with a specific name at any level, use the -name primary instead of -path . find . -type d -name node_modules -prune -o -name '*.json' -print
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/4210042', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/143269/']}
jdg_371738
stackexchange
llm_judgeable_groundtruth_similarity
398436
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Context Consider the following scenario for a company selling goods online. A user can purchase several items (i.e. basket of items), some of which are of particular importance and are tracked specifically (let's call them star items). We wish to test a change in an algorithm (e.g. recommendation, risk assessment, add targeting, whatever...) which may influence both the number of star items sold , and the total sales . This a standard A/B test setup - the randomization unit is at the user level . The purpose of the A/B test is to compare the impact of the algorithm change: the control group has the original algorithm and the variant has the new algorithm One key metric of interest is defined as the ratio of star items sales over total sales . This is a sum of across all transactions of all users in the scope of each A or B group. This means that the analysis unit is at the transaction level, which is different than the randomization unit Metrics are calculated over the entire duration of the test (e.g. 2 weeks) Detail of the metric used Given group A and a set of users $U_A = \{u_1,u_2,...,u_{N_A} \}$ , each user is involved in a number $t_{u_n}$ of transactions. The set $T_A$ of all transactions of all users in group A for the duration of the test is $T_A = \{ t_{u_{11}}, t_{u_{12}}, ... t_{u_{nm} } \}$ . The metric of interest for group A is defined over all transactions in scope of group A. The sums are at the transaction level, not the user level. $$\text{Metric}_A = \frac{\sum_{t_{i} \in T_A} \text{sales star items }$}{\sum_{t_{i} \in T_A} \text{sales }$ }$$ Of course, we could modify the definition to calculate the user level mean and that would simplify everything, but that is not the metric that is being used. Questions What statistical test could be used for such a metric ? An additional consideration is that although we can safely assume that users are i.i.d., it is most likely wrong to assume that individual purchase baskets are i.i.d. for the same user. Here are a few ideas I came across, have any A/B testing practitioners come across similar metrics in the past ? z-test of proportions Bootstrapping and Jacknife Delta method Change the metric (last resort) https://en.wikipedia.org/wiki/Ratio_estimator Edit - Some clarifications The reason behind this question is that I have often seen the z-test of proportion being used in this situation. Popular tools used for A/B testing often default on the proportion test and business users rarely check the underlying assumptions required for the test to be valid. @dnqxt's answer below is a good example: "Just use the z test of proportion !" - but I would like to see a rigorous statistical justification as to why (or why not) this test can be used in such a case. I personally don't think that using a z-test of proportions would work here as sales from a purchase event are not Bernoulli trials. I argue that we cannot say that each dollar sold in the denominator can be viewed as a Bernoulli trial resulting in 0 or 1 star item dollar sold in the numerator. Moreover, since the randomization unit is at the user level, purchase events of the same user are not independent (but I would say that is a secondary issue). I could be wrong here, so please feel free to prove this otherwise ! 
We could also change the metric to make it a Bernoulli/Binomial which converges to Normal by using counts, but that would be a last resort solution $$ \frac{\# \text{sales with star items} }{\# \text{sales}} $$ Now provide the response and nothing else.
z-test of proportions This applies to a different case when you have binary outcomes. The z-test of proportions compares the proportions of those binary outcomes. (Below some argument is made that you will be able to do a t-test, which for large numbers is approximately the same as the z-test. With proportions you can do a z-test because the binomial distribution has one parameter determining the variance and mean, unlike a normal distribution) Bootstrapping This will be possible but not really necessary because of the Delta method, which provides the error of your observed statistic more directly. Delta method You are interested in the ratio of two, possibly correlated, variables: 1. the total sales and 2. the sales in star items. These variables are likely asymptotically normally distributed since they are the sums of the sales from many individuals (the testing procedure could be considered to be a process like picking a sample of sales from individual users from a distribution of sales from individual users). Thus you can use the Delta method. The use of the Delta method for the estimation of ratios is described here . The result of this application of the Delta method actually coincides with an approximation of Hinkley's result , an exact expression for the ratio of two correlated normally distributed variables (Hinkley D.V., 1969, On the Ratio of Two Correlated Normal Random Variables, Biometrika vol. 56 no. 3). (Sidenote: As noted by Xi'an in the comments, an earlier description of the exact expression was given by George Marsaglia 1965 in the JASA Vol. 60, No. 309 . A simple modern description is given in 2006 in Jstatsoft Volume 16 Issue 4 ) For $Z = \frac{X}{Y}$ with $$\begin{bmatrix}X\\Y\end{bmatrix} \sim N\left(\begin{bmatrix} \mu_X \\ \mu_Y \end{bmatrix} , \begin{bmatrix} \sigma_X^2 & \rho \sigma_X \sigma_Y \\ \rho \sigma_X \sigma_Y & \sigma_Y^2 \end{bmatrix} \right)$$ the exact result is: $$f(z) = \frac{b(z)d(z)}{a(z)^3} \frac{1}{\sqrt{2\pi} \sigma_X\sigma_Y} \left[ \Phi \left( \frac{b(z)}{\sqrt{1-\rho^2}a(z)} \right) - \Phi \left( - \frac{b(z)}{\sqrt{1-\rho^2}a(z)} \right) \right] + \frac{\sqrt{1-\rho^2}}{\pi \sigma_X \sigma_Y a(z)^2} \exp \left( -\frac{c}{2(1-\rho^2)}\right)$$ with $$ \begin{array}{rcl} a(z) &=& \left( \frac{z^2}{\sigma_X^2} - \frac{2 \rho z}{\sigma_X \sigma_Y} + \frac{1}{\sigma_Y^2} \right) ^{\frac{1}{2}} \\ b(z) &=& \frac{\mu_X z}{ \sigma_X^2} - \frac{\rho (\mu_X+ \mu_Y z)}{ \sigma_X \sigma_Y} + \frac{\mu_Y}{\sigma_Y^2} \\ c &=& \frac{\mu_X^2}{\sigma_X^2} - \frac{2 \rho \mu_X \mu_Y}{\sigma_X \sigma_Y} + \frac{\mu_Y^2}{\sigma_Y^2} \\ d(z) &=& \exp \left( \frac {b(z)^2 - c\, a(z)^2}{2(1-\rho^2)a(z)^2}\right) \end{array}$$ An approximation based on asymptotic behaviour (for $\mu_Y/\sigma_Y \to \infty$) is: $$ F(z) \to \Phi\left( \frac{z - \mu_X/\mu_Y}{\sigma_X \sigma_Y a(z)/\mu_Y} \right) $$ You end up with the Delta method result when you insert the approximation $a(z) = a(\mu_X/\mu_Y)$: $$a(z) \sigma_X \sigma_Y /\mu_Y \approx a(\mu_X/\mu_Y) \sigma_X \sigma_Y /\mu_Y = \left( \frac{\mu_X^2\sigma_Y^2}{\mu_Y^4} - \frac{2 \mu_X \rho \sigma_X \sigma_Y}{\mu_Y^3} + \frac{\sigma_X^2}{\mu_Y^2} \right) ^{\frac{1}{2}}$$ The values for $\mu_X,\mu_Y,\sigma_X,\sigma_Y,\rho$ can be estimated from your observations, which allow you to estimate the variance and mean of the distribution for single users and, related to this, the variance and mean for the sample distribution of the sum of several users.
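For reference (my addition, not part of the original answer): written as a first-order variance approximation for the observed ratio, the Delta method gives $$\operatorname{Var}\left(\frac{X}{Y}\right)\approx\frac{\mu_X^2}{\mu_Y^2}\left(\frac{\sigma_X^2}{\mu_X^2}-\frac{2\rho\,\sigma_X\sigma_Y}{\mu_X\mu_Y}+\frac{\sigma_Y^2}{\mu_Y^2}\right),\qquad E\left[\frac{X}{Y}\right]\approx\frac{\mu_X}{\mu_Y},$$ which is exactly the square of the scale term $a(\mu_X/\mu_Y)\,\sigma_X\sigma_Y/\mu_Y$ given above.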
Change the metric I believe that it is interresting to do at least an intitial plot of the distribution of the sales (not the ratios) from the single users. Eventually you might end up with a situation that there is a difference between the users in group A and B, but it just happens to be not significant when you regard the single variable of the ratio (this is a bit similar to MANOVA being more powerfull than single ANOVA tests). While the knowledge of a difference between groups, without a significant difference in the metric that you are interrested in, may not help you much in making decisions, it does help you in understanding the underlying theory and possibly design better changes/experiments next time. Illustration Below is a simple illustration: Let the hypothetical distribution of sales from users be distributed as fractions $a,b,c,d$ which indicate how many user are of a particular case (in reality this distribution will be more complex): star item sales 0 $ 40$ other item sales 0 $ a b 10$ c d Then the sample distribution for totals from a groups with 10000 users, with for one algorithm $$a=0.190,b=0.001,c=0.800,d=0.009$$ and the other algorithm $$a=0.170,b=0.001,c=0.820,d=0.009$$ will look like: Which shows 10000 runs drawing new users and computing the sales and ratios. The histogram is for the distribution of the ratios. The lines are computations using the function from Hinkley. You can see that the distribution of the two total sales numbers is approximately a multivariate normal. The isolines for the ratio show that you can estimate the ratio very well as a linear sum (as in the previous mentioned/linked linearized Delta method) and that an approximation by a Gaussian distribution should work well (and then you can use a t-test which for large numbers will be just like a z-test). You can also see that a scatterplot like this might provide you with more information and insight in comparison to using only the histogram. 
R-Code for computing the graph: set.seed(1)## # function to sampling hypothetic n users # which will buy star items and/or regular items## star item sales# 0 $ 40$ # # regular item sales 0 $ a b# 10$ c d##sample_users <- function(n,a,b,c,d) { # sampling q <- sample(1:4, n, replace=TRUE, prob=c(a,b,c,d)) # total dolar value of items dri = (sum(q==3)+sum(q==4))*10 dsi = (sum(q==2)+sum(q==4))*40 # output list(dri=dri,dsi=dsi,dti=dri+dsi, q=q)}# # function for drawing those blocks for the tilted histogram#block <- function(phi=0.045+0.001/2, r=100, col=1) { if (col == 1) { bgs <- rgb(0,0,1,1/4) cols <- rgb(0,0,1,1/4) } else { bgs <- rgb(1,0,0,1/4) cols <- rgb(1,0,0,1/4) } angle <- c(atan(phi+0.001/2),atan(phi+0.001/2),atan(phi-0.001/2),atan(phi-0.001/2)) rr <- c(90000,90000+r,90000+r,90000) x <- cos(angle)*rr y <- sin(angle)*rr polygon(x,y,col=cols,bg=bgs)}block <- Vectorize(block)## function to compute Hinkley's density formula#fw <- function(w,mu1,mu2,sig1,sig2,rho) { #several parameters aw <- sqrt(w^2/sig1^2 - 2*rho*w/(sig1*sig2) + 1/sig2^2) bw <- w*mu1/sig1^2 - rho*(mu1+mu2*w)/(sig1*sig2)+ mu2/sig2^2 c <- mu1^2/sig1^2 - 2 * rho * mu1 * mu2 / (sig1*sig2) + mu2^2/sig2^2 dw <- exp((bw^2 - c*aw^2)/(2*(1-rho^2)*aw^2)) # output from Hinkley's density formula out <- (bw*dw / ( sqrt(2*pi) * sig1 * sig2 * aw^3)) * (pnorm(bw/aw/sqrt(1-rho^2),0,1) - pnorm(-bw/aw/sqrt(1-rho^2),0,1)) + sqrt(1-rho^2)/(pi*sig1*sig2*aw^2) * exp(-c/(2*(1-rho^2))) out}fw <- Vectorize(fw)## function to compute# theoretic distribution for sample with parameters (a,b,c,d)# lazy way to compute the mean and variance of the theoretic distributionfwusers <- function(na,nb,nc,nd,n=10000) { users <- c(rep(1,na),rep(2,nb),rep(3,nc),rep(4,nd)) dsi <- c(0,40,0,40)[users] dri <- c(0,0,10,10)[users] dti <- dsi+dri sig1 <- sqrt(var(dsi))*sqrt(n) sig2 <- sqrt(var(dti))*sqrt(n) cor <- cor(dti,dsi) mu1 <- mean(dsi)*n mu2 <- mean(dti)*n w <- seq(0,1,0.001) f <- fw(w,mu1,mu2,sig1,sig2,cor) list(w=w,f=f,sig1 = sig1, sig2=sig2, cor = cor, mu1= mu1, mu2 = mu2)}# sample many ntr time to display sample distribution of experiment outcomentr <- 10^4# sample Adsi1 <- rep(0,ntr)dti1 <- rep(0,ntr)for (i in 1:ntr) { users <- sample_users(10000,0.19,0.001,0.8,0.009) dsi1[i] <- users $dsi dti1[i] <- users$ dti}# sample Bdsi2 <- rep(0,ntr)dti2 <- rep(0,ntr)for (i in 1:ntr) { users <- sample_users(10000,0.19-0.02,0.001,0.8+0.02,0.009) dsi2[i] <- users $dsi dti2[i] <- users$ dti}# hiostograms for ratioratio1 <- dsi1/dti1ratio2 <- dsi2/dti2h1<-hist(ratio1, breaks = seq(0, round(max(ratio2+0.04),2), 0.001))h2<-hist(ratio2, breaks = seq(0, round(max(ratio2+0.04),2), 0.001))# plottingplot(0, 0, xlab = "sum of total sales", ylab = "sum of star item sales", xlim = c(82000,92000), ylim = c(2500,6000), pch=21, col = rgb(0,0,1,1/10), bg = rgb(0,0,1,1/10))title("sample distribution for sum of 10 000 users")# isolinesbrks <- seq(0, round(max(ratio2+0.02),2), 0.001)for (ls in 1:length(brks)) { col=rgb(0,0,0,0.25+0.25*(ls%%5==1)) lines(c(0,10000000),c(0,10000000)*brks[ls],lty=2,col=col)}# scatter pointspoints(dti1, dsi1, pch=21, col = rgb(0,0,1,1/10), bg = rgb(0,0,1,1/10))points(dti2, dsi2, pch=21, col = rgb(1,0,0,1/10), bg = rgb(1,0,0,1/10))# diagonal axisphi <- atan(h1$breaks)r <- 90000lines(cos(phi)*r,sin(phi)*r,col=1)# histogramsphi <- h1 $midsr <- h1$ density*10block(phi,r,col=1)phi <- h2 $midsr <- h2$ density*10block(phi,r,col=2)# labels for histogram axisphi <- atan(h1 $breaks)[1+10*c(1:7)]r <- 90000text(cos(phi)*r-130,sin(phi)*r,h1$ 
breaks[1+10*c(1:7)],srt=-87.5,cex=0.9)text(cos(atan(0.045))*r-400,sin(atan(0.045))*r,"ratio of sum of star items and sum of total items", srt=-87.5,cex=0.9)# plotting functions for Hinkley densities using variance and means estimated from theoretic samples distributionwf1 <- fwusers(190,1,800,9,10000)wf2 <- fwusers(170,1,820,9,10000)rf1 <- 90000+10*wf1 $fphi1 <- atan(wf1$ w)lines(cos(phi1)*rf1,sin(phi1)*rf1,col=4)rf2 <- 90000+10*wf2 $fphi2 <- atan(wf2$ w)lines(cos(phi2)*rf2,sin(phi2)*rf2,col=2)
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/398436', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/192854/']}
jdg_371739
stackexchange
llm_judgeable_groundtruth_similarity
649721
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a set of authorized SSH keys for a set of users in Ansible. I have a users variable set up like so: users: - { username: root, name: 'root' } - { username: user, name: 'User' } In the same role, I also have a set of authorized key files in a files/public_keys directory, one file per authorized key: roles/common/files/public_keys/homeroles/common/files/public_keys/work I want to copy each public key to each user. I have tried using the following task: - name: copy authorized keys authorized_key: user={{ item.0.username }} key={{ item.1 }} with_nested: - users - lookup('fileglob', 'public_keys/*') However, item.1 contains the literal string "lookup('fileglob', 'public_keys/*')" , not each file path under files/public_keys . Is there a way I can get a listing of the files/public_keys directory and copy each public key to each user? Now provide the response and nothing else.
The trick is to transform the fileglob return value into a list via the split function, so you can iterate over the values: - name: copy authorized keys authorized_key: user: "{{ item.0.username }}" key: "{{ lookup('file', item.1) }}" with_nested: - "{{ users }}" - "{{ lookup('fileglob', 'public_keys/*').split(',') }}" Note that using bare variables, without {{ and }} , for with_items was deprecated in Ansible v2.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/649721', 'https://serverfault.com', 'https://serverfault.com/users/23484/']}
jdg_371740