Dataset schema:

source: string (1 distinct value)
task_type: string (1 distinct value)
in_source_id: string, length 1 to 8
prompt: string, length 209 to 40.4k
gold_standard_solution: string, length 0 to 56.7k
verification_info: string (1 distinct value)
metadata: string, length 138 to 225
problem_id: string, length 9 to 10
stackexchange
llm_judgeable_groundtruth_similarity
22856198
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

import imaplib

def read():
    userName = "xxx@gmail.com"
    password = "xxxx"
    name = 'xxx@gmail.com'
    email_ids = [userName]
    data = []
    imap_server = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap_server.login(userName, password)
    imap_server.select('INBOX')
    da = []
    status, response = imap_server.status('INBOX', "(UNSEEN)")
    unreadcount = int(response[0].split()[2].strip(').,]'))
    print unreadcount
    status, response = imap_server.search(None, '(FROM "xxx@gmail.com")')
    email_ids = [e_id for e_id in response[0].split()]
    for e_id in email_ids:
        _, response = imap_server.fetch(e_id, '(UID BODY[TEXT])')
        da.append(response[0][1])
    print da

read()

How do I organise the code above to read only unread mails? Also, once we read them, how do we mark the messages as read using Python?

Now provide the response and nothing else.
import imaplib

def read(username, password, sender_of_interest):
    # Login to INBOX
    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.login(username, password)
    imap.select('INBOX')

    # Use search(), not status(). The mailbox was already chosen by select(),
    # so pass only search criteria here.
    status, response = imap.search(None, '(UNSEEN)')
    unread_msg_nums = response[0].split()

    # Print the count of all unread messages
    print len(unread_msg_nums)

    # Print all unread messages from a certain sender of interest
    status, response = imap.search(None, '(UNSEEN)', '(FROM "%s")' % (sender_of_interest))
    unread_msg_nums = response[0].split()
    da = []
    for e_id in unread_msg_nums:
        _, response = imap.fetch(e_id, '(UID BODY[TEXT])')
        da.append(response[0][1])
    print da

    # Mark them as seen (double backslash so IMAP receives the flag \Seen)
    for e_id in unread_msg_nums:
        imap.store(e_id, '+FLAGS', '\\Seen')
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22856198', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1828870/']}
jdg_378041
stackexchange
llm_judgeable_groundtruth_similarity
26907
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm having a bit of trouble figuring this one out. Lake, meaning "A large, landlocked stretch of water." seems to have some confusion in the Wiktionary pages. I've looked in the American Heritage Dictionary, as well as the Oxford English Dictionary and they both state something along the lines of: "Early Middle English lac , < Old French lac, < Latin lacus basin, tub, tank, lake, pond" and: "[Middle English, from Old French lac and from Old English lacu, both from Latin lacus.]" However, the Wiktionary on page seems to make exception to this and states: "From Middle English lake (“lake, watercourse, body of water”), from Old English lacu (“lake, pond, pool, stream, watercourse”), from Proto-Germanic lakō, lōkiz (“stream, pool, water aggregation", originally "ditch, drainage, seep”), from Proto-Germanic lekaną (“to leak, drain”), from Proto-Indo-European leg-, leǵ- (“to leak”)." It even goes on to address this and states: "Despite their similarity in form and meaning, English lake is not related to Latin lacus (“hollow, lake, pond”)" I know Wikipedia sources can sometimes be questionable, but I would like to not be confused about this. Now provide the response and nothing else.
The word is without doubt Indo-European; the question is whether it is strictly Germanic or whether it came via Latin. Pokorny says *laku is the source of Gr. λάκκος, Lat. lacus, OIr. loch, and lagu etc. in Germanic: see the Texas collection for more attestations, which includes English "lake". For *leg, the "leak" root, Pokorny gives Armenian, Celtic and Germanic only; the Texas elaboration relates that root to "lack; leak", and nothing like "lake" (*leĝ would be a different root meaning "collect"). Texas lists lagu-lad, lago for OE and not lacu. I would be inclined to believe Texas and Pokorny (and AHD) over Wiki (also bearing in mind that the Texas list for Modern English gives all related words, no matter how they got there).

But there's more. There is at least one other vote, the Oxford English Dictionary. It gives two entries for modern English "lake". One, which they list in its earliest form as lac, has the sense "A large body of water entirely surrounded by land", attested c1275 (?a1200) in Laȝamon Brut (Calig.) (1963) l. 642 "Ouer þen lac [c1300 Otho lake] of Siluius & ouer þen lac [c1300 Otho lake] of Philisteus", and a1225 St. Marher. 14 "Ich leade ham..iþe ladliche lake of the suti sunne". This is the ordinary use of "lake". They say of the contemporary word:

The present English form lake (recorded from the 14th cent.) may be due to confusion with lake n.3, or perhaps rather to independent adoption of Latin lacus.

Another sense, lake n.3 "A small stream of running water", is attested as OE lacu, citing the attestation from 955 Charter of Edred in Earle Charters 382 "Ðæt to Mægðe forda andlang lace ut on Temese". This they say is not from Latin lacus because of the meaning, and they derive the word from the "leak" root *leg.

A half-vote also goes to Etymologyonline, which lists both roots, says that the modern word is borrowed, and gives both Old English lacu "stream, pool, pond" and lagu "sea flood, water, extent of the sea" (lagu being the OE form cited in Pokorny).

In other words, the weight of evidence indicates that the word is borrowed (in some fashion, including adaptation of semantics), that the source would be French (hence Latin), and that it is the result of two distinct roots.
{}
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/26907', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/20986/']}
jdg_378042
stackexchange
llm_judgeable_groundtruth_similarity
14644742
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I have a problem testing my mongoose models. I have a structure like

app
  models
    Address
    User
    Organization
  test

Both models User and Organization need to know the model Address. My models are structured like:

module.exports = function (mongoose, config) {
    var organizationSchema = new mongoose.Schema({
        name : { type : String },
        addresses : { type : [mongoose.model('Address')] }
    });
    var Organization = mongoose.model('Organization', organizationSchema);
    return Organization;
};

In my normal app I require Address before requiring User and Organization and everything is fine. I now wrote tests for User and Organization. In order to have the Address model registered I call

require('../models/Address.js')

This works fine if I run one test. But if I run all tests in a batch I get an error because I tried to register Address twice:

OverwriteModelError: Cannot overwrite Address model once compiled.

How do I solve this problem?

Now provide the response and nothing else.
The problem is that you can't register a mongoose model twice. The easiest way to solve your problem is to take advantage of node.js's require function. Node.js caches all calls to require to prevent your model from initializing twice. But you are wrapping your models with functions. Unwrapping them will solve your problem:

var mongoose = require('mongoose');
var config = require('./config');

var organizationSchema = new mongoose.Schema({
    name : { type : String },
    addresses : { type : [mongoose.model('Address')] }
});

module.exports = mongoose.model('Organization', organizationSchema);

An alternative solution is to make sure that each model is initialized only once. For example, you can initialize all your models before running your tests:

Address = require('../models/Address.js');
User = require('../models/User.js');
Organization = require('../models/Organization.js');

// run your tests using Address, User and Organization

Or you can add a try/catch statement to your models to handle this special case:

module.exports = function (mongoose, config) {
    var organizationSchema = new mongoose.Schema({
        name : { type : String },
        addresses : { type : [mongoose.model('Address')] }
    });
    try {
        mongoose.model('Organization', organizationSchema);
    } catch (error) {}
    return mongoose.model('Organization');
};

Update: In our project we have a /models/index.js file to handle everything. First, it calls mongoose.connect to establish the connection. Then it requires every model in the models directory and creates a dictionary of them. So, when we need some model (e.g. user), we require it by calling require('/models').user.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14644742', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1761195/']}
jdg_378043
stackexchange
llm_judgeable_groundtruth_similarity
193364
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question:

I tried searches and the question has been posed in other fora, but no experiment came up. Track chambers (cloud chambers, bubble chambers, time projection chambers, solid state detectors like the vertex detectors at LHC) give the track of the particle as it ionizes the medium, and this could be carried out in a geometry after the particle has passed the double slit. The straight track should point back to the slit it came from, and its record could be used as the points on the screen in the classical double slit experiment.

The setup as I see it would be the classical setup for single electrons through the double slits, but instead of a "screen" one has a detector and detects the track. It should be a long enough detector to get an accuracy less than the slit separation so it could point back to the slit, as the inter-slit distance is of the order of 100 microns and detectors give accuracies of the order of microns.

This experiment, if possible, would resolve the controversy over whether the detection of the slit destroys the interference pattern, or whether the detection elements at the slits change the boundary conditions and destroy the interference pattern.

An expert's opinion is necessary on whether the experiment is possible, and on whether the energies of the electrons needed to show interference with a specific slit separation d are enough to create an accurate track in a solid state detector. If not, a cloud chamber would do, but again the energy of the electron would be important because it would have to pass the air/chamber barrier. It could succeed if the double slits were within a cloud/bubble chamber; the beam count was low (10 to twelve per picture) but it was spread in the vertical direction. If the beam could be focused on the slits, it should be doable.

Now provide the response and nothing else.
I'd like to expand my earlier comment into a little essay on the severe practical difficulties in performing the suggested experiment. I'm going to start by asserting that we don't care if the experiment is a "two-slit" per se. It is sufficient that it is a diffractive scattering experiment of some kind. However, we do care about having

- spatial resolution good enough to distinguish which scattering site (or slit) was the one on the path of the alleged particle, and
- the ability to run the experiment at low rate so that we can exclude multi-projectile or beam/beam interaction as the source of any interference that we observe (though it's going to turn out that we never even get far enough for this to matter...).

Now let's get down to designing the beast. To start with we should note for any casual readers that the diagrams you see in pop-sci treatments are not even remotely to scale: typical classroom demonstration kit for use with lasers has the slits set less than $1\,\mathrm{mm}$ apart and uses projection distances of several meters or more to get fringes that are separated by a few centimeters, or else uses much closer-set slits to get large angles. The angular separation between maxima is on the order of
$$ \Delta \theta = \frac{\lambda}{d} \,,$$
where $\lambda$ is the relevant wavelength and $d$ is the scattering site (or slit) separation. Allowing that the distance from the scattering surface to the projection surface is $\ell$, the spatial separation is (in the small angle approximation)
$$ \Delta x = \ell \, \Delta \theta = \frac{\ell}{d} \lambda \,.$$

Anna has suggested doing the experiment with electrons, which means that we're interested in the de Broglie wavelength, here taken as $\lambda = \hbar/p$, and measuring their position en route with a tracking detector of some kind. The tracking detector's spatial resolution is going to be the big barrier here.

Let's start by considering a liquid argon TPC because it is a hot technology just now. Spatial resolution down to about $1\,\mathrm{mm}$ should be achievable without any breakthrough in technology (typical devices have $3$-$5\,\mathrm{mm}$ resolution). That sets our value for $d$. Now, to observe an interference pattern, we need the fringe spacing to be at least four times the resolution of the projection detector. Assume for the sake of argument that I use a detector with a $20\,\mathrm{\mu{}m}$ spatial resolution - maybe a MCP or a silicon tracker. That sets $\Delta x = 4(20\,\mathrm{\mu{}m})$. I also assume that I need $\ell$ to be at least $2d$ to be able to track the particle between the scattering and projection planes. Probably an under-estimate; so be it. Now I can compute the properties of the necessary electron source
$$\begin{align*}
p &= \frac{\hbar}{\lambda} \\
&= \frac{\hbar\ell}{d \, \Delta x} \tag{1}\\
&= 2\frac{\hbar}{\Delta x}\\
&= \frac{7 \times 10^{-22} \,\mathrm{MeV \, s}}{40 \times 10^{-6} \,\mathrm{m}}\\
&= \frac{7 \times 10^{-22} \,\mathrm{MeV}}{7 \times 10^{-12} c} \\
&= 10^{-10} \,\mathrm{MeV/c}\\
&= 10^{-4} \,\mathrm{eV/c} \,,
\end{align*}$$
which is safely non-relativistic, so we have a beam energy of $5 \times 10^{-9}\,\mathrm{eV^2}/(m_e c^2)$, and the tracking medium will completely mess up the experiment.

By choosing a $20\,\mathrm{m}$ flight path between scattering and detection and getting down to, say, the $10\,\mathrm{\mu{}m}$ scale for $d$, we can get beam momenta up to $10^3\,\mathrm{eV}/c$, which at least gives us beam energies of about $1\,\mathrm{eV}$. But how are you going to track a $1\,\mathrm{eV}$ electron without scattering it?

I'm sure you can get better spatial resolution in silicon, but I don't think you can get the beam energy up high enough to pass a great enough distance through the tracking medium to actually make the measurement.

The fundamental problem here is the tension between the desire to track the electron on its route, which forces you to use nearly human scales for parts of the detector, and the presence of that pesky $\hbar$ in the numerator of equation (1), which is driving the necessary beam momentum down. The usual method of getting diffractive effects is just to make $d$ small and $\ell$ large enough to compensate for the $\hbar$, but our desire to track the particles works against us there by putting a floor on our attempts to shrink $d$ and because longer flight paths mean more sensitivity to scattering by the tracking medium.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/193364', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1492/']}
jdg_378044
stackexchange
llm_judgeable_groundtruth_similarity
42790060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I'm a bit stuck with this code (this is a sample):

public async Task Fail()
{
    await Task.Run(() => { throw new Exception(); });
}

[Test]
public async Task TestFail()
{
    Action a = async () => { await Fail(); };
    a.ShouldThrow<Exception>();
}

The code doesn't catch the exception, and fails with

Expected a System.Exception to be thrown, but no exception was thrown.

I'm sure I'm missing something, but docs seem to suggest this is the way to go. Some help would be appreciated.

Now provide the response and nothing else.
You should use Func<Task> instead of Action:

[Test]
public void TestFail()
{
    Func<Task> f = async () => { await Fail(); };
    f.ShouldThrow<Exception>();
}

That will call the following extension, which is used to verify asynchronous methods:

public static ExceptionAssertions<TException> ShouldThrow<TException>(
    this Func<Task> asyncAction, string because = "", params object[] becauseArgs)
    where TException : Exception

Internally this method will run the task returned by the Func and wait for it. Something like:

try
{
    Task.Run(asyncAction).Wait();
}
catch (Exception exception)
{
    // get actual exception if it is wrapped in AggregateException
}

Note that the test itself is synchronous.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/42790060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5261112/']}
jdg_378045
stackexchange
llm_judgeable_groundtruth_similarity
31696485
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

When using the Java 8 Optional class, there are two ways in which a value can be wrapped in an optional.

String foobar = <value or null>;
Optional.of(foobar);         // May throw NullPointerException
Optional.ofNullable(foobar); // Safe from NullPointerException

I understand Optional.ofNullable is the only safe way of using Optional, but why does Optional.of exist at all? Why not just use Optional.ofNullable and be on the safe side at all times?

Now provide the response and nothing else.
Your question is based on the assumption that code which may throw a NullPointerException is worse than code which may not. This assumption is wrong. If you expect that your foobar is never null due to the program logic, it's much better to use Optional.of(foobar), as you will see a NullPointerException which will indicate that your program has a bug. If you use Optional.ofNullable(foobar) and the foobar happens to be null due to a bug, then your program will silently continue working incorrectly, which may be a bigger disaster. This way an error may occur much later, and it would be much harder to understand at which point it went wrong.
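To make the distinction concrete, here is a small self-contained sketch; the variable names and the environment-variable lookup are just illustrative:

import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        String mustExist = "config-value"; // invariant: never null by program logic

        // Optional.of fails fast: a null here would throw NullPointerException
        // at the exact line where the invariant was broken.
        Optional<String> strict = Optional.of(mustExist);

        // May legitimately be null (unset variable), so absence is a valid state:
        String mayBeMissing = System.getenv("POSSIBLY_UNSET_VAR");
        Optional<String> lenient = Optional.ofNullable(mayBeMissing);

        System.out.println(strict.get());
        System.out.println(lenient.orElse("default"));
    }
}

With of, a broken invariant surfaces at its source; with ofNullable, the null silently becomes an empty Optional and flows on.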
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/31696485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/490315/']}
jdg_378046
stackexchange
llm_judgeable_groundtruth_similarity
10743914
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I'm using this oversimplified code block:

<% if (MyCondition()){ %>
<myUsedControl/>
<% }else{ %>
<myUnusedControl/>
<% } %>

in my ascx file. I assumed that when ASP.Net read this page, if MyCondition() returned true, it would completely ignore whatever was in the else clause. This is not the case: myUnusedControl's PageLoad and OnPreRendered events are still being fired when I load the page, even though myUnusedControl is properly hidden when the browser displays the page. Why is this? How can I make sure a chunk of ascx or aspx is completely ignored when a page is rendered? Thanks for your time.

Now provide the response and nothing else.
Here's the problem:

[HttpPost]
public ActionResult Create(LocationViewModel location)

Do you see it? It's the name of your action argument: location. Look at your view model now; it has a property named Location:

public Location Location { get; set; }

This confuses the model binder. It no longer knows whether you need to bind the LocationViewModel or its property. So simply rename to avoid the conflict:

[HttpPost]
public ActionResult Create(LocationViewModel model)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10743914', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/922202/']}
jdg_378047
stackexchange
llm_judgeable_groundtruth_similarity
62690819
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I clearly don't understand the meaning of the definition syntax for SwiftUI, because I can't figure out how one would use ToolbarItemGroup. I can define a toolbar with toolbar items like this:

.toolbar {
    ToolbarItem {
        Button("200%", action: zoom200)
            .foregroundColor(controller.scale == 2.0 ? selectedButtonColor : defaultButtonColor)
    }
    ToolbarItem {
        Button("100%", action: zoom100)
            .foregroundColor(controller.scale == 1.0 ? selectedButtonColor : defaultButtonColor)
    }
}

But I have been unable to get ToolbarItemGroup to work. Logically I would have expected something like this:

.toolbar {
    ToolbarItemGroup {
        ToolbarItem {
            Button("200%", action: zoom200)
                .foregroundColor(controller.scale == 2.0 ? selectedButtonColor : defaultButtonColor)
        }
        ToolbarItem {
            Button("100%", action: zoom100)
                .foregroundColor(controller.scale == 1.0 ? selectedButtonColor : defaultButtonColor)
        }
    }
    ToolbarItemGroup {
        ToolbarItem { Button("Open", action: open) }
        ToolbarItem { Button("Close", action: close) }
    }
}

Now provide the response and nothing else.
ToolbarItemGroup is designed to group views in the same toolbar. It removes the need for explicit usage of ToolbarItem, as both conform to ToolbarContent. e.g.

.toolbar {
    ToolbarItemGroup {
        Button("200%", action: zoom200)
            .foregroundColor(controller.scale == 2.0 ? selectedButtonColor : defaultButtonColor)
        Button("100%", action: zoom100)
            .foregroundColor(controller.scale == 1.0 ? selectedButtonColor : defaultButtonColor)
    }
    ToolbarItemGroup(placement: .bottomBar) {
        Spacer()
        Button("Open", action: open)
        Spacer()
        Button("Close", action: close)
        Spacer()
    }
}

It's also the only way I know of to get Spacers to work between toolbar items.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62690819', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2567126/']}
jdg_378048
stackexchange
llm_judgeable_groundtruth_similarity
251499
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question:

I am trying to install PHP on my Windows Server 2003 machine that is using IIS6. I followed this guide for installing it, including using the binaries of FastCGI. I think I did everything correctly; however, when I went to create my phpinfo(); page I get the following:

Server Error, unable to connect to fastcgi server.

I've tried looking in all the logs I can think of. Nothing in the IIS log that is helpful, PHP never even logged an error, and nothing in my events log. Help please, and thanks :)

Now provide the response and nothing else.
That's a pretty out of date blog entry and life has moved on substantially since then. So forget everything you read on that page :)

1. Download PHP:

Download the latest stable version of PHP for Windows. In this case download the PHP 5.3.6 x86 Non Thread Safe zip file (note: the version number may change as the PHP team release new stable binaries):

http://windows.php.net/download/
http://windows.php.net/downloads/releases/php-5.3.6-nts-Win32-VC9-x86.zip

Extract the contents of the zip file to a folder called C:\PHP so that in this folder you have the following subfolder structure: C:\PHP\dev, C:\PHP\ext, C:\PHP\extras, etc.

Rename one of the php-*.ini files to php.ini and edit according to any requirements you may have.

2. Install/Configure Microsoft's FastCGI Extension for IIS 5.1/6.0:

Install Microsoft's IIS6 FastCGI extension, which can be downloaded from here:

http://www.iis.net/download/fastcgi

Once installed, edit C:\Windows\System32\InetSrv\fcgiext.ini so that it contains only the following:

[Types]
php=PHP

[PHP]
ExePath=C:\PHP\php-cgi.exe

3. Configure IIS PHP Scriptmap:

Edit or add the .php scriptmap to your chosen website (or you can add it globally) by right clicking on the site and choosing Properties. Choose the Home Directory tab and click Configuration. Click Add (or, if you still have the .php scriptmap listed, remove it and re-add it). You need to populate the Executable and Extension fields with these values:

Executable: C:\WINDOWS\system32\inetsrv\fcgiext.dll
Extension: .php

Click OK and save these changes.

4. Configure PHP FastCGI Settings in PHP.INI:

Open C:\PHP\php.ini in an editor and locate and configure these settings:

fastcgi.impersonate = 1
cgi.fix_pathinfo = 1
cgi.force_redirect = 0
date.timezone = Europe/London ; (http://uk3.php.net/manual/en/timezones.php)

It seems you need to set the timezone now for the phpinfo(); page to render properly, but it's a good idea to set this anyway.

5. Test PHP:

Create a file called test.php with the following:

<?php
    phpinfo();
?>

Browse to this page and if all is good you should see the standard phpinfo() output.

After that it's all up to you to fine tune your configuration, such as deciding which extensions to use and so on.

After Thoughts: Depending on how your system is set up you may need to adjust the permissions on the C:\PHP folder. If it's a pretty stock installation, your sites are running in application pools using the Network Service account, and you're using the generated IUSR_<MACHINE_NAME> anonymous account, then you shouldn't need to change anything.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/251499', 'https://serverfault.com', 'https://serverfault.com/users/21538/']}
jdg_378049
stackexchange
llm_judgeable_groundtruth_similarity
403575
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question:

We all know that logistic regression is used to calculate probabilities through the logistic function. For a dependent categorical random variable $y$ and a set of $n$ predictors $\textbf{X} = [X_1 \quad X_2 \quad \dots \quad X_n]$ the probability $p$ is $$p = P(y=1|\textbf{X}) = \frac{1}{1 + e^{-(\alpha + \boldsymbol{\beta}\textbf{X})}}$$ The cdf of the logistic distribution is parameterized by its scale $s$ and location $\mu$: $$F(x) = \frac{1}{1 + e^{-\frac{x - \mu}{s}}}$$ So, for $\textbf{X} = X_1$ it is easy to see that $$s = \frac{1}{\beta}, \quad \mu = -\alpha s$$ and in this way we map between the two forms of the sigmoid curve. However, how does this mapping work when $\textbf{X}$ has more than one predictor? Say $\textbf{X} = [X_1 \quad X_2]$; what I see from a three-dimensional perspective is depicted in the figure below. So, $\textbf{s} = [s_1 \quad s_2]$ and $\boldsymbol{\mu} = [\mu_1 \quad \mu_2]$ would become $$\textbf{s} = \boldsymbol{\beta}^{-1}, \quad \boldsymbol{\mu} = -\alpha\textbf{s}$$ and $p$ would derive from the linear combination of the parameters and the predictors in $\textbf{X}$. The way the unknown parameters of the logistic regression function relate to the cdf of the logistic distribution is what I am trying to understand here. I would be glad if someone could provide insights on this matter.

Now provide the response and nothing else.
One way of defining logistic regression is just to introduce it as $$ \DeclareMathOperator{\P}{\mathbb{P}} \P(Y=1 \mid X=x) = \frac{1}{1+e^{-\eta(x)}}$$ where $\eta(x)=\beta^T x$ is a linear predictor. This is just stating the model without saying where it comes from.

Alternatively, we can try to develop the model from some underlying principle. Say there is a certain underlying, latent (not directly measurable) stress or antistress; we denote it by $\theta$. It determines the probability of a certain outcome - maybe death (as in dose-response studies) or default, as in credit risk modeling. $\theta$ has some distribution that depends on $x$, say given by a cdf (cumulative distribution function) $F(\theta;x)$. Say the outcome of interest ($Y=1$) occurs when $\theta \le C$ for some threshold $C$. Then $$ \P(Y=1 \mid X=x)=\P(\theta \le C\mid X=x) =F(C;x)$$ Now, the logistic distribution (wiki) has cdf $\frac1{1+e^{-\frac{x-\mu}{\sigma}}}$, and so if we assume the latent variable $\theta$ has a logistic distribution we finally arrive at (assuming the linear predictor $\eta(x)$ represents the mean $\mu$ via $\mu=\beta^T x$): $$ \P(Y=1\mid x)= \frac1{1+e^{-\frac{C-\beta^T x}{\sigma}}}$$ so in the case of a simple regression we get the intercept $C/\sigma$ and slope $\beta/\sigma$. If the latent variable has some other distribution we get an alternative to the logit model. A normal distribution for the latent variable results in probit, for instance. A post related to this is Logistic Regression - Error Term and its Distribution.
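To spell out that last identification for a single predictor, a short worked step (note that with the parameterization $\mu=\beta x$ the slope enters with a minus sign, which is simply absorbed into the estimated coefficient):

$$\mathbb{P}(Y=1\mid x)=\frac{1}{1+e^{-\frac{C-\beta x}{\sigma}}}
=\frac{1}{1+e^{-\left(\frac{C}{\sigma}+\left(-\frac{\beta}{\sigma}\right)x\right)}},$$

so matching the standard form $1/(1+e^{-(\alpha + b x)})$ gives intercept $\alpha = C/\sigma$ and slope $b = -\beta/\sigma$; that is, the logistic regression coefficients identify $C$ and $\beta$ only up to the scale $\sigma$ and a sign convention on $\beta$.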
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/403575', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/238414/']}
jdg_378050
stackexchange
llm_judgeable_groundtruth_similarity
11468221
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

Okay, I implemented this SO question in my code: Return True or False Randomly

But I have strange behavior: I need to run ten instances simultaneously, where every instance returns true or false just once per run. And surprisingly, no matter what I do, every time I get just false.

Is there something to improve in the method so I can have at least roughly a 50% chance to get true?

To make it more understandable: I have my application built to a JAR file which is then run via batch command:

java -jar my-program.jar
pause

Content of the program - to make it as simple as possible:

public class myProgram {

    public static boolean getRandomBoolean() {
        return Math.random() < 0.5;
        // I tried other approaches here, still the same result
    }

    public static void main(String[] args) {
        System.out.println(getRandomBoolean());
    }
}

If I open 10 command lines and run it, I get false as the result every time...

Now provide the response and nothing else.
I recommend using Random.nextBoolean(). That being said, Math.random() < 0.5 as you have used works too. Here's the behavior on my machine:

$ cat myProgram.java
public class myProgram {

    public static boolean getRandomBoolean() {
        return Math.random() < 0.5;
        // I tried other approaches here, still the same result
    }

    public static void main(String[] args) {
        System.out.println(getRandomBoolean());
    }
}

$ javac myProgram.java
$ java myProgram ; java myProgram; java myProgram; java myProgram
true
false
false
true

Needless to say, there are no guarantees for getting different values each time. In your case however, I suspect that A) you're not working with the code you think you are (like editing the wrong file), B) you haven't compiled your different attempts when testing, or C) you're working with some non-standard broken implementation.
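For completeness, a minimal sketch of the java.util.Random approach recommended above (standard JDK API only):

import java.util.Random;

public class RandomBooleanDemo {
    public static void main(String[] args) {
        Random random = new Random(); // seeded from system entropy by default
        for (int i = 0; i < 10; i++) {
            // nextBoolean() draws a fresh pseudo-random bit on each call
            System.out.println(random.nextBoolean());
        }
    }
}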
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/11468221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/855636/']}
jdg_378051
stackexchange
llm_judgeable_groundtruth_similarity
741984
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a "start page" on my application that is just a QWebView widget that downloads a page from my server. When the user clicks on a link I'd like it to open in their default browser, not in the widget. Is there a way to do this with QWebView ? Or should I take a different approach? Now provide the response and nothing else.
Just call QWebPage::setLinkDelegationPolicy(QWebPage::DelegateAllLinks), then connect the signal QWebView::linkClicked(const QUrl & url) to a slot, and in the slot execute the browser. Finding out what the default browser is is slightly more complicated and depends on which desktop the user is running. The environment variable BROWSER often contains the default. Or use QDesktopServices::openUrl [as suggested by gnud in a comment].
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/741984', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/45799/']}
jdg_378052
stackexchange
llm_judgeable_groundtruth_similarity
394547
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question:

I have a proof of the following fact related to ordinary generating functions, and I was curious whether it was known, as it seems plausible it is classically known:

"Let $\lambda_1,\ldots, \lambda_k$ be algebraic numbers. Let $f(z)= \sum^\infty_{n=0} c_nz^n$ where each $c_n\in \mathbb{Z}$. Suppose $f$ analytically continues to $\mathbb{C}\setminus \{\lambda_1,\ldots,\lambda_k\}$. Then $f$ is a rational function."

Now provide the response and nothing else.
It is not necessary to assume that the $\lambda_i$ are algebraic. This is a special case of the result in G. Pólya, Mathematische Annalen (1928), Volume 99, pages 687-706, page 704 in particular:

Der Spezialfall, in dem $\mathfrak A$ abzählbar, also $\tau=0$ ist, liefert folgende Aussage: Wenn die Koeffizienten $a_0, a_1, a_2, \dots$ der Potenzreihe $$a_0+a_1z+a_2z^2+\dots=F(z)$$ ganze Zahlen sind, so muß für die Funktion $F(z)$ einer der folgenden drei Fälle zutreffen: Entweder ist $F(z)$ eine rationale Funktion, oder ist $F(z)$ eine mehrdeutige Funktion, oder besitzt $F(z)$ unabzählbar viele singuläre Punkte.

(In translation: the special case in which $\mathfrak A$ is countable, hence $\tau=0$, yields the following statement: if the coefficients $a_0, a_1, a_2, \dots$ of the power series $$a_0+a_1z+a_2z^2+\dots=F(z)$$ are integers, then one of the following three cases must hold for the function $F(z)$: either $F(z)$ is a rational function, or $F(z)$ is a multivalued function, or $F(z)$ has uncountably many singular points.)

The paper actually works with the function $f(z)=F(1/z)$. This is defined near $\infty$, and it is either rational, does not have a unique analytic continuation, or its analytic continuation is singular on an uncountable set $\mathfrak A$. The quantity $\tau$ is the capacity of $\mathfrak A$ (Chebyshev constant, transfinite diameter), which is zero for compact countable sets.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/394547', 'https://mathoverflow.net', 'https://mathoverflow.net/users/32470/']}
jdg_378053
stackexchange
llm_judgeable_groundtruth_similarity
14663971
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

Using LINQ, how do I select from a List within a List?

public class Model
{
    public string application { get; set; }
    public List<Users> users { get; set; }
}

public class Users
{
    public string name { get; set; }
    public string surname { get; set; }
}

List<Model> list = new List<Model>();

I need to select the list where application = "applicationname" and users where surname = "surname" into one list.

Now provide the response and nothing else.
If you want to filter the models by application name and, within the remaining models, filter the users by surname:

List<Model> newList = list.Where(m => m.application == "applicationname")
    .Select(m => new Model
    {
        application = m.application,
        users = m.users.Where(u => u.surname == "surname").ToList()
    }).ToList();

As you can see, it needs to create new models and user-lists, hence it is not the most efficient way. If you instead don't want to filter the list of users but want to filter the models by those having at least one user with a given surname, use Any:

List<Model> newList = list
    .Where(m => m.application == "applicationname"
             && m.users.Any(u => u.surname == "surname"))
    .ToList();
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14663971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/418319/']}
jdg_378054
stackexchange
llm_judgeable_groundtruth_similarity
35179656
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I have a slice with ~2.1 million log strings in it, and I would like to create a slice of slices with the strings being as evenly distributed as possible. Here is what I have so far:

// logs is a slice with ~2.1 million strings in it.
var divided = make([][]string, 0)
NumCPU := runtime.NumCPU()
ChunkSize := len(logs) / NumCPU
for i := 0; i < NumCPU; i++ {
    temp := make([]string, 0)
    idx := i * ChunkSize
    end := i * ChunkSize + ChunkSize
    for x := range logs[idx:end] {
        temp = append(temp, logs[x])
    }
    if i == NumCPU {
        for x := range logs[idx:] {
            temp = append(temp, logs[x])
        }
    }
    divided = append(divided, temp)
}

The idx := i * ChunkSize will give me the current "chunk start" for the logs index, and end := i * ChunkSize + ChunkSize will give me the "chunk end", or the end of the range of that chunk. I couldn't find any documentation or examples on how to chunk/split a slice or iterate over a limited range in Go, so this is what I came up with. However, it only copies the first chunk multiple times, so it doesn't work. How do I (as evenly as possible) chunk a slice in Go?

Now provide the response and nothing else.
You don't need to make new slices, just append slices of logs to the divided slice.

http://play.golang.org/p/vyihJZlDVy

var divided [][]string

chunkSize := (len(logs) + numCPU - 1) / numCPU

for i := 0; i < len(logs); i += chunkSize {
    end := i + chunkSize

    if end > len(logs) {
        end = len(logs)
    }

    divided = append(divided, logs[i:end])
}

fmt.Printf("%#v\n", divided)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/35179656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4949938/']}
jdg_378055
stackexchange
llm_judgeable_groundtruth_similarity
27817577
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I am attempting to take a screenshot of my game through code and share it through an Intent. I am able to do both of those things, however the screenshot always appears black. Here is the code related to sharing the screenshot:

View view = MainActivity.getView();
view.setDrawingCacheEnabled(true);
Bitmap screen = Bitmap.createBitmap(view.getDrawingCache(true));
// .. save Bitmap

This is in the MainActivity:

view = new GameView(this);
view.setLayoutParams(new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.FILL_PARENT,
        RelativeLayout.LayoutParams.FILL_PARENT));

public static SurfaceView getView() {
    return view;
}

And the View itself:

public class GameView extends SurfaceView implements SurfaceHolder.Callback {
    private static SurfaceHolder surfaceHolder;
    // ...etc

And this is how I am drawing everything:

Canvas canvas = surfaceHolder.lockCanvas(null);
if (canvas != null) {
    Game.draw(canvas);
    // ...

Ok, based on some answers, I have constructed this:

public static void share() {
    Bitmap screen = GameView.SavePixels(0, 0, Screen.width, Screen.height);
    Calendar c = Calendar.getInstance();
    Date d = c.getTime();
    String path = Images.Media.insertImage(
            Game.context.getContentResolver(), screen,
            "screenShotBJ" + d + ".png", null);
    System.out.println(path + " PATH");
    Uri screenshotUri = Uri.parse(path);
    final Intent emailIntent = new Intent(
            android.content.Intent.ACTION_SEND);
    emailIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    emailIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
    emailIntent.setType("image/png");
    Game.context.startActivity(Intent.createChooser(emailIntent,
            "Share High Score:"));
}

The GameView contains the following method:

public static Bitmap SavePixels(int x, int y, int w, int h) {
    EGL10 egl = (EGL10) EGLContext.getEGL();
    GL10 gl = (GL10) egl.eglGetCurrentContext().getGL();
    int b[] = new int[w * (y + h)];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
    for (int i = 0, k = 0; i < h; i++, k++) {
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - k - 1) * w + j] = pix1;
        }
    }
    Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    return sb;
}

The screenshot is still black. Is there something wrong with the way I am saving it, perhaps? I have attempted several different methods to take the screenshot, but none of them worked: the one shown in the code above was the most commonly suggested one, but it does not seem to work. Is this an issue with using SurfaceView? And if so, why does view.getDrawingCache(true) even exist if I can't use it, and how do I fix this?

My code:

public static void share() {
    // GIVES BLACK SCREENSHOT:
    Calendar c = Calendar.getInstance();
    Date d = c.getTime();
    Game.update();
    Bitmap.Config conf = Bitmap.Config.RGB_565;
    Bitmap image = Bitmap.createBitmap(Screen.width, Screen.height, conf);
    Canvas canvas = GameThread.surfaceHolder.lockCanvas(null);
    canvas.setBitmap(image);
    Paint backgroundPaint = new Paint();
    backgroundPaint.setARGB(255, 40, 40, 40);
    canvas.drawRect(0, 0, canvas.getWidth(), canvas.getHeight(),
            backgroundPaint);
    Game.draw(canvas);
    Bitmap screen = Bitmap.createBitmap(image, 0, 0, Screen.width,
            Screen.height);
    canvas.setBitmap(null);
    GameThread.surfaceHolder.unlockCanvasAndPost(canvas);
    String path = Images.Media.insertImage(
            Game.context.getContentResolver(), screen,
            "screenShotBJ" + d + ".png", null);
    System.out.println(path + " PATH");
    Uri screenshotUri = Uri.parse(path);
    final Intent emailIntent = new Intent(
            android.content.Intent.ACTION_SEND);
    emailIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    emailIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
    emailIntent.setType("image/png");
    Game.context.startActivity(Intent.createChooser(emailIntent,
            "Share High Score:"));
}

Thank you.

Now provide the response and nothing else.
There is a great deal of confusion about this, and a few correct answers. Here's the deal:

1. A SurfaceView has two parts, the Surface and the View. The Surface is on a completely separate layer from all of the View UI elements. The getDrawingCache() approach works on the View layer only, so it doesn't capture anything on the Surface.

2. The buffer queue has a producer-consumer API, and it can have only one producer. Canvas is one producer, GLES is another. You can't draw with Canvas and read pixels with GLES. (Technically, you could if the Canvas were using GLES and the correct EGL context was current when you went to read the pixels, but that's not guaranteed. Canvas rendering to a Surface is not accelerated in any released version of Android, so right now there's no hope of it working.)

3. (Not relevant for your case, but I'll mention it for completeness:) A Surface is not a frame buffer, it is a queue of buffers. When you submit a buffer with GLES, it is gone, and you can no longer read from it. So if you were rendering with GLES and capturing with GLES, you would need to read the pixels back before calling eglSwapBuffers().

With Canvas rendering, the easiest way to "capture" the Surface contents is to simply draw it twice. Create a screen-sized Bitmap, create a Canvas from the Bitmap, and pass it to your draw() function.

With GLES rendering, you can use glReadPixels() before the buffer swap to grab the pixels. There's a (less expensive than the code in the question) implementation of the grab code in Grafika; see saveFrame() in EglSurfaceBase.

If you were sending video directly to a Surface (via MediaPlayer) there would be no way to capture the frames, because your app never has access to them - they go directly from mediaserver to the compositor (SurfaceFlinger). You can, however, route the incoming frames through a SurfaceTexture, and render them twice from your app, once for display and once for capture. See this question for more info.

One alternative is to replace the SurfaceView with a TextureView, which can be drawn on like any other Surface. You can then use one of the getBitmap() calls to capture a frame. TextureView is less efficient than SurfaceView, so this is not recommended for all situations, but it's straightforward to do.

If you were hoping to get a composite screen shot containing both the Surface contents and the View UI contents, you will need to capture the Canvas as above, capture the View with the usual drawing cache trick, and then composite the two manually. Note this won't pick up the system parts (status bar, nav bar).

Update: on Lollipop and later (API 21+) you can use the MediaProjection class to capture the entire screen with a virtual display. There are some trade-offs with this approach, e.g. you're capturing the rendered screen, not the frame that was sent to the Surface, so what you get may have been up- or down-scaled to fit the window. In addition, this approach involves an Activity switch since you have to create an intent (by calling createScreenCaptureIntent on the ProjectionManager object) and wait for its result.

If you want to learn more about how all this stuff works, see the Android System-Level Graphics Architecture doc.
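To make the "draw it twice" approach concrete, here is a minimal sketch (assuming, as in the question's code, a static Game.draw(Canvas) method that renders a full frame; the width/height would come from the Surface dimensions):

// Render the same frame into an offscreen Bitmap instead of the Surface.
public static Bitmap captureFrame(int width, int height) {
    Bitmap capture = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas captureCanvas = new Canvas(capture); // backed by the Bitmap, not the Surface
    Game.draw(captureCanvas);                   // the same draw call used for the visible frame
    return capture;                             // safe to save or hand to a share Intent
}

The key point is that the capture Canvas is constructed from the Bitmap directly; the Surface's own Canvas (from lockCanvas()) is never involved in the capture.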
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27817577', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_378056
stackexchange
llm_judgeable_groundtruth_similarity
14487110
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I am using the YAML heading of a markdown file to add an excerpt variable to blog posts that I can use elsewhere. In one of these excerpts I refer to an earlier blog post via markdown link markup, and I use the liquid template data variable {{ site.url }} in place of the base URL of the site. So I have something like (trimmed somewhat):

---
title: "Decluttering ordination plots in vegan part 2: orditorp()"
status: publish
layout: post
published: true
tags:
- tag1
- tag2
excerpt: In the [earlier post in this series]({{ site.url }}/2013/01/12/decluttering-ordination-plots-in-vegan-part-1-ordilabel/ "Decluttering ordination plots in vegan part 1: ordilabel()") I looked at the `ordilabel()` function
---

However, jekyll and the Maruku md parser don't like this, which makes me suspect that you can't use liquid markup in the YAML header. Is it possible to use liquid markup in the YAML header of pages handled by jekyll? If it is, what am I doing wrong in the example shown? If it is not allowed, how else can I achieve what I intended? I am currently developing my site on my laptop and don't want to hard code the base URL as it'll have to change when I am ready to deploy. The errors I am getting from Maruku are:

| Maruku tells you:
+---------------------------------------------------------------------------
| Must quote title
| ---------------------------------------------------------------------------
| the [earlier post in this series]({{ site.url }}/2013/01/12/decluttering-o
| --------------------------------------|-------------------------------------
|                                       +--- Byte 40

and

| Maruku tells you:
+---------------------------------------------------------------------------
| Unclosed link
| ---------------------------------------------------------------------------
| the [earlier post in this series]({{ site.url }}/2013/01/12/decluttering-or
| --------------------------------------|-------------------------------------
|                                       +--- Byte 41

and

| Maruku tells you:
+---------------------------------------------------------------------------
| No closing ): I will not create the link for ["earlier post in this series"]
| ---------------------------------------------------------------------------
| the [earlier post in this series]({{ site.url }}/2013/01/12/decluttering-or
| --------------------------------------|-------------------------------------
|                                       +--- Byte 41

Now provide the response and nothing else.
I don't believe it's possible to nest liquid variables inside YAML. At least, I haven't figured out how to do it.

One approach that will work is to use Liquid's replace filter. Specifically, define a string that you want to use for the variable replacement (e.g. !SITE_URL!). Then, use the replace filter to switch that to your desired Jekyll variable (e.g. site.url) during the output. Here's a cut down .md file that behaves as expected on my jekyll 0.11 install:

---
layout: post
excerpt: In the [earlier post in this series](!SITE_URL!/2013/01/12/)
---

{{ page.excerpt | replace: '!SITE_URL!', site.url }}

Testing that on my machine, the URL is inserted properly and then translated from markdown into an HTML link as expected.

If you have more than one item to replace, you can string multiple replace calls together:

---
layout: post
my_name: Alan W. Smith
multi_replace_test: 'Name: !PAGE_MY_NAME! - Site: [!SITE_URL!](!SITE_URL!)'
---

{{ page.multi_replace_test | replace: '!SITE_URL!', site.url | replace: '!PAGE_MY_NAME!', page.my_name }}

An important note is that you must explicitly set the site.url value. You don't get that for free with Jekyll. You can either set it in your _config.yml file with:

url: http://alanwsmith.com

Or define it when you call jekyll:

jekyll --url http://alanwsmith.com
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14487110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/429846/']}
jdg_378057
stackexchange
llm_judgeable_groundtruth_similarity
10860419
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

As you know, without using !getValueIsAdjusting, when you select a row in a JTable (by clicking) the selection change event fires twice. This doesn't happen if you select a row using the keyboard arrows. To resolve it, you check that getValueIsAdjusting returns false. My question is: why does the event fire twice if I select a row by clicking it, but not when using the keyboard arrows? And what does getValueIsAdjusting do to resolve it?

Now provide the response and nothing else.
As the javadoc which JB Nizet linked to states, getValueIsAdjusting() checks whether a specific event (a change) is part of a chain; if so, it will return true. It will only return false when the specified event is the final one in the chain. In your case, selecting a row by clicking actually fires two events: a mouseDown and a mouseUp event, and both are sent to your event listener. If you correctly implement getValueIsAdjusting() to return whenever the value is true, you will only act on the final event in the chain, which is the mouseUp event that fires when you let go of the left mouse button. The Java Tutorials include an example that captures events; you can use that to log the selection events and experiment with it yourself. Remove the return on the event.getValueIsAdjusting() check to log every event that's fired.
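As a minimal sketch of the pattern in practice (plain Swing API; the table itself is assumed to exist already):

import javax.swing.JTable;
import javax.swing.event.ListSelectionEvent;
import javax.swing.event.ListSelectionListener;

public class SelectionLogger implements ListSelectionListener {
    private final JTable table;

    public SelectionLogger(JTable table) {
        this.table = table;
        // The selection model fires valueChanged for every step of the gesture.
        table.getSelectionModel().addListSelectionListener(this);
    }

    @Override
    public void valueChanged(ListSelectionEvent event) {
        if (event.getValueIsAdjusting()) {
            return; // skip intermediate events (e.g. mouseDown); act once per gesture
        }
        System.out.println("Selected row: " + table.getSelectedRow());
    }
}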
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10860419', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1061370/']}
jdg_378058
stackexchange
llm_judgeable_groundtruth_similarity
8808902
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I am stuck with a NAME field, which typically is in the format:

FirstName LastName

However, I also have the occasional names that are in any of these formats (with prefix or suffix):

Mr. First Last
First Last Jr.

What do people think is a safe way to split these into FIRST/LAST name variables in PHP? I can't really come up with anything that tends to work all of the time...

Now provide the response and nothing else.
A regex is the best way to handle something like this. Try this piece - it pulls out the prefix, first name, last name and suffix:

$array = array(
    'FirstName LastName',
    'Mr. First Last',
    'First Last Jr.',
    'Shaqueal O’neal',
    'D’angelo Hall',
);

foreach ($array as $name) {
    $results = array();
    echo $name;
    preg_match('#^(\w+\.)?\s*([\'\’\w]+)\s+([\'\’\w]+)\s*(\w+\.?)?$#', $name, $results);
    print_r($results);
}

The result comes out like this:

FirstName LastName
Array
(
    [0] => FirstName LastName
    [1] =>
    [2] => FirstName
    [3] => LastName
)
Mr. First Last
Array
(
    [0] => Mr. First Last
    [1] => Mr.
    [2] => First
    [3] => Last
)
First Last Jr.
Array
(
    [0] => First Last Jr.
    [1] =>
    [2] => First
    [3] => Last
    [4] => Jr.
)
Shaqueal O’neal
Array
(
    [0] => Shaqueal O’neal
    [1] =>
    [2] => Shaqueal
    [3] => O’neal
)
D’angelo Hall
Array
(
    [0] => D’angelo Hall
    [1] =>
    [2] => D’angelo
    [3] => Hall
)

etc. So in the $results array, $results[0] contains the entire string, $results[2] is always the first name and $results[3] is always the last name. $results[1] is the prefix and $results[4] (not always set) is the suffix. I also added code to handle both ' and ’ for names like Shaqueal O’neal and D’angelo Hall.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8808902', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/646456/']}
jdg_378059
stackexchange
llm_judgeable_groundtruth_similarity
3776035
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

I have a problem for a university lab:

Write a short program that outputs all possible strings formed by using the characters ‘c’, ‘a’, ‘r’, ‘b’, ‘o’, and ‘n’ exactly once.

It seems to be a common interview question and well documented. So I've coded it with Java using a recursive method, which wasn't too hard. When or why would you choose not to use recursion, and what would be the easiest way of doing it? I started to code a counter that would count down in base 6; the output would then reference chars and print the string. Thanks,

Now provide the response and nothing else.
Yes, there are plenty of times I would not use recursion. Recursion is not free; it has a cost in stack space, and that can often be a much more limited resource than some others. There's also a time cost, however small, in setting up and tearing down stack frames.

By way of example, the much vaunted factorial function is one where I would probably opt for an iterative approach where the numbers were large. Calculating 10000! with the Python:

def factorial (n):
    if n == 1:
        return 1
    return n * factorial (n-1)

will attempt to use a whopping 10,000 stack frames (though Python will protect you against this). The equivalent iterative solution:

def factorial (n):
    r = 1
    while n > 1:
        r = r * n
        n = n - 1
    return r

will use just the one stack frame and precious little else.

It's true that recursive solutions are often more elegant code, but you have to temper that with the limitations of your environment.

Your carbon example is one where I would actually use recursion since:

- it uses at most six stack frames (one per character in the string); and
- it's relatively elegant, at least much more so than six nested loops and huge equality checks.

For example, the following Python code does the trick:

def recur (str, pref = ""):
    # Terminating condition.
    if str == "":
        print pref
        return

    # Rotate string so all letters get a chance to be first.
    for i in range (len (str)):
        recur (str[1:], pref + str[:1])
        str = str[1:] + str[:1]

recur ("abc")

producing:

abc
acb
bca
bac
cab
cba

Of course, if your string can be 10K long, I'd rethink it, since that would involve a lot more stack levels, but provided you keep it low enough, recursion is a viable solution.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3776035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/455487/']}
jdg_378060
stackexchange
llm_judgeable_groundtruth_similarity
22354094
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

For the given data, I want to set the outlier values (defined by the 95% confidence level or 95% quantile function or anything that is required) as nan values. Following is my data and the code that I am using right now. I would be glad if someone could explain it to me further.

import numpy as np, matplotlib.pyplot as plt

data = np.random.rand(1000) + 5.0

plt.plot(data)
plt.xlabel('observation number')
plt.ylabel('recorded value')
plt.show()

Now provide the response and nothing else.
The problem with using percentile is that the points identified as outliers are a function of your sample size.

There are a huge number of ways to test for outliers, and you should give some thought to how you classify them. Ideally, you should use a-priori information (e.g. "anything above/below this value is unrealistic because...")

However, a common, not-too-unreasonable outlier test is to remove points based on their "median absolute deviation".

Here's an implementation for the N-dimensional case (from some code for a paper here: https://github.com/joferkington/oost_paper_code/blob/master/utilities.py ):

def is_outlier(points, thresh=3.5):
    """
    Returns a boolean array with True if points are outliers and False
    otherwise.

    Parameters:
    -----------
        points : An numobservations by numdimensions array of observations
        thresh : The modified z-score to use as a threshold. Observations with
            a modified z-score (based on the median absolute deviation) greater
            than this value will be classified as outliers.

    Returns:
    --------
        mask : A numobservations-length boolean array.

    References:
    ----------
        Boris Iglewicz and David Hoaglin (1993), "Volume 16: How to Detect and
        Handle Outliers", The ASQC Basic References in Quality Control:
        Statistical Techniques, Edward F. Mykytka, Ph.D., Editor.
    """
    if len(points.shape) == 1:
        points = points[:,None]
    median = np.median(points, axis=0)
    diff = np.sum((points - median)**2, axis=-1)
    diff = np.sqrt(diff)
    med_abs_deviation = np.median(diff)

    modified_z_score = 0.6745 * diff / med_abs_deviation

    return modified_z_score > thresh

This is very similar to one of my previous answers, but I wanted to illustrate the sample size effect in detail.

Let's compare a percentile-based outlier test (similar to @CTZhu's answer) with a median-absolute-deviation (MAD) test for a variety of different sample sizes:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

def main():
    for num in [10, 50, 100, 1000]:
        # Generate some data
        x = np.random.normal(0, 0.5, num-3)

        # Add three outliers...
        x = np.r_[x, -3, -10, 12]
        plot(x)

    plt.show()

def mad_based_outlier(points, thresh=3.5):
    if len(points.shape) == 1:
        points = points[:,None]
    median = np.median(points, axis=0)
    diff = np.sum((points - median)**2, axis=-1)
    diff = np.sqrt(diff)
    med_abs_deviation = np.median(diff)

    modified_z_score = 0.6745 * diff / med_abs_deviation
    return modified_z_score > thresh

def percentile_based_outlier(data, threshold=95):
    diff = (100 - threshold) / 2.0
    minval, maxval = np.percentile(data, [diff, 100 - diff])
    return (data < minval) | (data > maxval)

def plot(x):
    fig, axes = plt.subplots(nrows=2)
    for ax, func in zip(axes, [percentile_based_outlier, mad_based_outlier]):
        sns.distplot(x, ax=ax, rug=True, hist=False)
        outliers = x[func(x)]
        ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)

    kwargs = dict(y=0.95, x=0.05, ha='left', va='top')
    axes[0].set_title('Percentile-based Outliers', **kwargs)
    axes[1].set_title('MAD-based Outliers', **kwargs)
    fig.suptitle('Comparing Outlier Tests with n={}'.format(len(x)), size=14)

main()

Notice that the MAD-based classifier works correctly regardless of sample size, while the percentile-based classifier classifies more points the larger the sample size is, regardless of whether or not they are actually outliers.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/22354094', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_378061
stackexchange
llm_judgeable_groundtruth_similarity
45054266
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am aware that declarative programming just passes the input and expects the output without stating the procedure of how it is done. Functional programming is a programming paradigm which takes an input and returns an output. When I checked higher-order functional programming, we pass a function to map/reduce, which does not reveal the procedure of how it is done. So are higher-order functional programming and declarative programming the same thing?? Now provide the response and nothing else.
Short answer : No . Wikipedia defines declarative programming as: In computer science, declarative programming is a programming paradigm - a style of building the structure and elements of computer programs - that expresses the logic of a computation without describing its control flow . Or to state it a bit boldly: "Say what you want, not how you want it.". This is thus in contrast with imperative programming languages where a program is seen as a set of instructions that are done one after another. The fact that map , etc. do not reveal the procedure does not make it declarative: one can use a lot of C libraries that are proprietary and do not allow you to inspect the source code. That, however, does not mean that these are declarative. The definition of functional programming on the other hand is: In computer science, functional programming is a programming paradigm - a style of building the structure and elements of computer programs - that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm , which means programming is done with expressions or declarations instead of statements. Based on these definitions one could say that functional programming is a subset of declarative programming . In a practical sense, however, if we follow the strict definitions, no programming language nowadays is purely and unambiguously declarative or functional . One can however say that Haskell is more declarative than Java . Declarative programming is usually considered to be "safer" since people tend to have trouble managing side-effects. A lot of programming errors are the result of not taking all side effects into account. On the other hand it is hard to design a language that allows a programmer to describe what he wants without going into details on how to do it; to implement a compiler that will generate - based on such programs - an efficient implementation; and some problems have inherent side effects. For instance if you work with a database, a network connection or a file system, then reading/writing to a file for instance is supposed to have side effects. One can of course decide not to make this part of the programming language (for instance many constraint programming languages do not allow these types of actions, and are a "sub language" in a larger system). There have been several attempts to design such languages. The most popular are - in my opinion - logic programming , functional programming , and constraint programming . Each has its merits and problems. We can also observe this declarative approach in for instance databases (like SQL) and text/XML processing (with XSLT , XPath , regular expressions ,...) where one does not specify how a query is resolved, but simply specifies, through for instance a regular expression, what one is looking for. Whether a programming language is declarative, however, is a bit of a fuzzy discussion. Although programming languages, modeling languages and libraries like Haskell, Prolog, Gecode,... have definitely made programming more declarative, these are probably not declarative in the most strict sense. In the most strict sense, one should think that regardless of how you write the logic, the compiler will always come up with the same result (although it might take a bit longer). Say for instance we want to check whether a list is empty in Haskell.
We can write this like:

is_empty1 :: [a] -> Bool
is_empty1 [] = True
is_empty1 (_:_) = False

We can however write it like this as well:

is_empty2 :: [a] -> Bool
is_empty2 l = length l == 0

Both should give the same result for the same queries. If we however give it an infinite list, is_empty1 (repeat 0) will return False whereas is_empty2 (repeat 0) will loop forever. So that means that we somehow still wrote some "control flow" into the program: we have defined - to some extent - how Haskell should evaluate this. Although lazy programming means that a programmer does not really specify what should be evaluated first, there are still specifications of how Haskell will evaluate this. According to some people, this is the difference between programming and specifying . One of my professors once stated that according to him, the difference is that when you program something, you have some control over how it is evaluated, whereas when you specify something, you have no control. But again, this is only one of the many definitions.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45054266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5829906/']}
jdg_378062
stackexchange
llm_judgeable_groundtruth_similarity
12494714
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the difference between homeUrl and baseUrl in Yii framework? Now provide the response and nothing else.
From the docs: baseUrl: Returns the relative URL for the application. This is similar to scriptUrl except that it does not have the script file name, and the ending slashes are stripped off. While homeUrl is the homepage URL. Try echoing both somewhere in your application to examine each yourself:

echo Yii::app()->getBaseUrl(true); // true returns an absolute URL, false (the default) a relative one
echo Yii::app()->getHomeUrl();

How to use each? baseUrl, as @bcmcfc said, can be useful as a base for all links in your application. Now imagine you wanted to link to an image in web_root/myapp/img/. If, for example, you did that using an absolute path, e.g.

<img src="C:/www/myapp/img/somepic.jpg">

then, let's say you finished all your development and now you want to deploy to some Linux server!! You can see that all your links will be broken :( But if instead you did:

<img src="<?php echo Yii::app()->baseUrl(); ?>/img/somepic.jpg">

everything should work fine :) homeUrl is simply the landing page for your app. Well, I didn't use this before, but I guess you can set different homeUrls according to user role after login, for example!!
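As a small, hedged sketch of that last idea (the homeUrl property does exist in Yii 1.x, but the routes and role check here are hypothetical, not from the original answer):

// protected/config/main.php
return array(
    // ...
    'homeUrl' => '/site/dashboard', // hypothetical landing route
);

// or set it at runtime, e.g. after login, based on the user's role:
Yii::app()->homeUrl = Yii::app()->user->checkAccess('admin')
    ? '/admin/index'
    : '/site/index';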
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12494714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1576366/']}
jdg_378063
stackexchange
llm_judgeable_groundtruth_similarity
202778
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to estimate a dynamic panel model with firm level time invariant fixed effects and time-varying regional fixed effects. I'm trying to implement this with the R package plm , but I run into trouble when I try to include the time-varying regional fixed effects. Here's what I've done:

df <- plm.data(df, index = c("firm", "year"))
fit <- pgmm(formula = y ~ lag(y, 1) + lag(X, 0:1) + region:year | lag(y, 2:99) + lag(X, 2:99),
            data = df, effect = "individual", model = "twosteps", transformation = "ld")

The system becomes singular and does not solve. I can fix that by replacing region:year with region , but that's not what I want to do. So it seems that regional fixed effects can't be time varying for some reason. Is this intended to be so or is there some workaround available? If R cannot handle this, is there some other program that can? Now provide the response and nothing else.
How do you know that your model is overfitted? If an "overfitted" model (let us call it model A ) is giving you truly better predictive power (no cheating, honest out of sample assessment) than some benchmark model that you think is non-overfitted (call it model B ), I would suspect that model B is actually underfitted while model A is less-underfitted than model B or non-overfitted or perhaps slightly overfitted (but not as severely as model B is underfitted). So I would say it is fine to use model A in place of model B if you have to choose one of the two. Regarding whether to keep both predictors or drop one, I would suggest making the choice based on out-of-sample performance assessment. If a model containing both of them gives better forecasts, choose it.
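To illustrate the suggested out-of-sample comparison, here is a minimal R sketch; the data frame, split point, and model formulas are hypothetical stand-ins, not taken from the question:

# hold out the most recent years as a test set (hypothetical split)
train <- df[df$year <= 2010, ]
test  <- df[df$year >  2010, ]

fit_a <- lm(y ~ x1 + x2, data = train)  # "model A": both predictors
fit_b <- lm(y ~ x1,      data = train)  # "model B": one predictor

rmse <- function(fit, newdata) sqrt(mean((newdata$y - predict(fit, newdata))^2))
rmse(fit_a, test)
rmse(fit_b, test)
# keep whichever model forecasts better on the held-out data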
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/202778', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/99286/']}
jdg_378064
stackexchange
llm_judgeable_groundtruth_similarity
60902
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I'm trying to draw on the computer a curve that always keeps the same distance (given as a parameter) from a given curve. I know the formula for the given curve. I tried moving perpendicular to the first derivative, but in some cases when the curve is sharp there are a lot of points creating some problems. This problem happens when moving the curve by a distance greater than the radius of the curve. Is there a simple/standard way of drawing this kind of drawing? To be more precise, I'm using these ("Parallel Curves") formulas when obtaining the above result. It works fine as long as the curve has no "sharp turns". This is the case that bothers me. Thank you, Iulian Now provide the response and nothing else.
If you just want a good easy picture, the simplest thing to do, as suggested by Charles Matthews' comment, is to draw lots of equal radius circles centered about points on the curve. Your eye and brain will see the envelope. You could also draw lots of disks, or just use a computer drawing program such as Adobe Illustrator, make two copies of the curve (for instance on different layers) and give the lower copy a large stroke-width and a light color. (You can start from a PDF file that contains the mathematical curve, open it in Illustrator or a similar program, and edit as above). If you want a more mathematical description or construction, the equidistant curves for a smooth curve $\gamma$ depend on the cut locus for $\gamma$. The cut locus is the set of points where there is more than one closest point on $\gamma$, and it is closely related to the whole theory of Voronoi diagrams. You can compute a good approximation of it from a Voronoi diagram program or a convex hull program, if you lift the curve to the paraboloid $z = x^2 + y^2 \subset \mathbb R^3$. In the complement of the cut locus, there is a smooth map $(x, y) \to C(x, y)$ where $C(x,y)$ is the closest point on $\gamma$; it can be traced out implicitly; it's the inverse function to what you're already doing. For a generic smooth curve, the cut locus is a piecewise smooth tree, whose endpoints are centers of osculating circles where the curvature of $\gamma$ has a local maximum. (However, in general, the cut locus can be quite complicated and have infinitely many branches, even for a $C^\infty$ curve). The edges of the cut locus can be traced from these endpoints, using the implicit function theorem; the main difficulty is keeping track of enough information to get the correct combinatorics for the graph. It's equivalent to the problem of constructing the convex hull of a simple curve on the paraboloid above.
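To make the circles idea concrete, here is a minimal Python/matplotlib sketch; the example curve and offset distance are made up purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2*np.pi, 400)
x, y = np.cos(t), np.sin(2*t)/2           # some parametric example curve
theta = np.linspace(0, 2*np.pi, 60)
r = 0.15                                  # the desired offset distance

fig, ax = plt.subplots()
ax.plot(x, y, 'k')
for cx, cy in zip(x[::5], y[::5]):        # circles centered on curve points
    ax.plot(cx + r*np.cos(theta), cy + r*np.sin(theta),
            color='0.8', lw=0.5)          # the eye sees the envelope
ax.set_aspect('equal')
plt.show()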
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/60902', 'https://mathoverflow.net', 'https://mathoverflow.net/users/13211/']}
jdg_378065
stackexchange
llm_judgeable_groundtruth_similarity
1821467
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $R$ be an associative ring with $1$. Prove that $R$ is a division ring if and only if $R$ has no proper left ideals. Clearly, if $R$ is a division ring and $I\neq\{0\}$ is a left ideal, then every nonzero $z\in I$ satisfies $1=z^{-1}z\in I$, hence $I=R$. Now if $z\in R$ is nonzero and $R$ has no proper left ideals, then $I:=\{rz:r\in R\}$ is equal to $R$ and there exists $r\in R$ such that $rz=1$. However, we still need to prove $zr=1$. How can we do this? Thank you. Now provide the response and nothing else.
$(rz)r=r(zr)=r$, thus $r(1-zr)=0$; since $r\neq 0$ (otherwise $1=rz=0$) and $R$ has no divisors of zero (shown below), it follows that $zr=1$. Remark that the fact that $R$ does not have divisors of zero follows from the hypothesis: if $ab=0$ with $a,b\neq 0$, then since $Ra=R$ there exists $r$ with $ra=1$, so $b=(ra)b=r(ab)=0$. Contradiction.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1821467', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/85575/']}
jdg_378066
stackexchange
llm_judgeable_groundtruth_similarity
17751056
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to convert my StringReader back to a regular String , as shown: String string = reader.toString(); But when I try to read this string out, like this: System.out.println("string: "+string); All I get is a pointer value, like this: java.io.StringReader@2c552c55 Am I doing something wrong in reading the string back? Now provide the response and nothing else.
The StringReader 's toString method does not return the StringReader internal buffers. You'll need to read from the StringReader to get this. I recommend using the overload of read which accepts a character array. Bulk reads are faster than single character reads. I.e.:

// use StringBuilder to avoid unnecessary string creation
StringBuilder builder = new StringBuilder();
int charsRead = -1;
char[] chars = new char[100];
do {
    charsRead = reader.read(chars, 0, chars.length);
    // if we have valid chars, append them to end of string
    if (charsRead > 0)
        builder.append(chars, 0, charsRead);
} while (charsRead > 0);

String stringReadFromReader = builder.toString();
System.out.println("String read = " + stringReadFromReader);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17751056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2188082/']}
jdg_378067
stackexchange
llm_judgeable_groundtruth_similarity
3840793
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am curious about the existence of any "rounding" standards" when it comes to the calculation of financial data. My initial thoughts are to perform rounding only when the data is being presented to the user (presentation layer). If "rounded" data is then used for further calculations, should be use the "rounded" figure or the "raw" figure? Does anyone have any advice? Please note that I am aware of different rounding methods, i.e. Bankers Rounding etc. Now provide the response and nothing else.
The first and most important rule: use a decimal data type , never ever binary floating-point types. When exactly rounding should be performed can be mandated by regulations, such as the conversion between the Euro and national currencies it replaced. If there are no such rules, I'd do all calculations with high precision, and round only for presentation, i.e. not use rounded values for further calculations. This should yield the best overall precision.
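To make the first rule concrete, here is a small Python sketch; Python's decimal module is used purely for illustration - the same point applies to decimal types elsewhere (e.g. BigDecimal in Java or decimal in C#):

from decimal import Decimal, ROUND_HALF_EVEN

# binary floating point cannot represent 0.1 exactly:
print(0.1 + 0.1 + 0.1 == 0.3)                 # False

# a decimal type keeps exact cents and makes rounding explicit:
total = Decimal('0.10') * 3
print(total == Decimal('0.30'))               # True

# round only at the presentation layer, e.g. with bankers' rounding:
print(total.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))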
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3840793', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/464029/']}
jdg_378068
stackexchange
llm_judgeable_groundtruth_similarity
35249881
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I installed Octave via Homebrew using the instructions given here . When I try to generate a plot, I get the following message: Fontconfig error: Cannot load default config filewarning: could not match any font: *-normal-normal-10warning: called from axes at line 66 column 10 gca at line 58 column 9 newplot at line 148 column 8 surf at line 70 column 9 sombrero at line 65 column 5 I then get a long series of the following messages: warning: ft_render: unable to load appropriate fontwarning: could not match any font: *-normal-normal-10 Usually the plot appears once, but if I call it again then Octave quits with the following message. panic: Segmentation fault: 11 -- stopping myself...attempting to save variables to 'octave-workspace'...warning: unable to open 'octave-workspace' for writing...warning: called from __gnuplot_drawnow__>gnuplot_trim_term at line 368 column 10 __gnuplot_drawnow__>gnuplot_set_term at line 119 column 20 __gnuplot_drawnow__ at line 84 column 16Segmentation fault: 11 In discussions of similar errors I have often seen references to /etc/fonts/fonts.conf. My /etc directory does not contain a fonts subdirectory. This may be the source of the problem, but I do not know how to fix it. When I run, brew install fontconfig , I get Warning: fontconfig-2.11.1_1 already installed . When I run sudo find / -name fonts.conf , I get /opt/X11/lib/X11/fontconfig/fonts.conf . brew doctor produces the following warnings, which may be relevant and which I do not know how to fix (short of uninstalling Anaconda): Warning: Anaconda is known to frequently break Homebrew builds, including Vim and MacVim, due to bundling many duplicates of system and Homebrew-availabletools.If you encounter a build failure please temporarily remove Anacondafrom your $PATH and attempt the build again prior to reporting thefailure to us. Thanks!Warning: "config" scripts exist outside your system or Homebrew directories.`./configure` scripts often look for *-config scripts to determine ifsoftware packages are installed, and what additional flags to use whencompiling and linking.Having additional scripts in your path can confuse software installed viaHomebrew if the config script overrides a system or Homebrew providedscript of the same name. We found the following "config" scripts: /Users/greg/anaconda/bin/curl-config /Users/greg/anaconda/bin/freetype-config /Users/greg/anaconda/bin/libdynd-config /Users/greg/anaconda/bin/libpng-config /Users/greg/anaconda/bin/libpng16-config /Users/greg/anaconda/bin/python-config /Users/greg/anaconda/bin/python2-config /Users/greg/anaconda/bin/python2.7-config /Users/greg/anaconda/bin/xml2-config /Users/greg/anaconda/bin/xslt-config I am running OS X 10.11.3 (El Capitan). Now provide the response and nothing else.
Actually, it's not an issue of Octave, but of its complicated setup and the misconfiguration of other utilities (especially fontconfig). For it to work properly you need to set the path for fontconfig manually, so it won't look for fonts in an inappropriate directory ( /etc/fonts works for Linux installations, but OS X's XQuartz puts it in a different place). You should run the command

export FONTCONFIG_PATH=/opt/X11/lib/X11/fontconfig

to set up fontconfig the proper way. You can also add the line export FONTCONFIG_PATH=/opt/X11/lib/X11/fontconfig to your ~/.bash_profile and restart your terminal, so you don't need to run this command anymore.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35249881', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5597137/']}
jdg_378069
stackexchange
llm_judgeable_groundtruth_similarity
503864
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've successfully established an IPsec connection, but it works only partially. One side does not send out packets through the tunnel. It seems as if the network topology is unclear to this side. Any help is highly appreciated! Thanks!! This is the network scheme:

"office"(192.168.73.0/24) == "vpn"(192.168.73.1) == "router"(6.6.6.6) <====> "server"(7.7.7.7) == "VM_LAN"(192.168.133.0/24)

6.6.6.6 and 7.7.7.7 are symbolic public IPs, i.e. "router" and "server" both are directly connected to the internet. "vpn" and "server" both run CentOS 6. "router" is a cable modem, doing the NAT and port forwarding. The connection is established. On "vpn" I can ping "server"'s internal IP:

[root@vpn]# ping 192.168.133.1
PING 192.168.133.1 (192.168.133.1) 56(84) bytes of data.
64 bytes from 192.168.133.1: icmp_seq=1 ttl=64 time=74.8 ms

On "server" I can't ping "vpn"; there's not even a packet sent out. The following is a dump from "server" showing the above ping packet coming in. I use the same command to test if packets are sent from "server" to "vpn" when pinging from "server", but no packets show up.

[root@server]# tcpdump port 500 or port 4500
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
14:40:21.793577 IP 6.6.6.6.ipsec-nat-t > 7.7.7.7.ipsec-nat-t: UDP-encap: ESP(spi=0x712a1d37,seq=0x2), length 132
14:40:21.793650 IP 7.7.7.7.ipsec-nat-t > 6.6.6.6.ipsec-nat-t: UDP-encap: ESP(spi=0x840e6b76,seq=0x2), length 132

ipsec verify seems ok:

[root@server]# ipsec verify
Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path                          [OK]
Linux Openswan U2.6.32/K2.6.32-358.2.1.el6.x86_64 (netkey)
Checking for IPsec support in kernel                     [OK]
 SAref kernel support                                    [N/A]
 NETKEY: Testing for disabled ICMP send_redirects        [OK]
NETKEY detected, testing for disabled ICMP accept_redirects [OK]
Checking that pluto is running                           [OK]
 Pluto listening for IKE on udp 500                      [OK]
 Pluto listening for NAT-T on udp 4500                   [OK]
Checking for 'ip' command                                [OK]
Checking /bin/sh is not /bin/dash                        [OK]
Checking for 'iptables' command                          [OK]
Opportunistic Encryption Support                         [DISABLED]

iptables is disabled:

[root@server]# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

[root@server]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
7.7.7.7         0.0.0.0         255.255.255.255 UH    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         7.7.7.1         0.0.0.0         UG    0      0        0 eth0

My ipsec.conf:

config setup
    # Debug-logging controls: "none" for (almost) none, "all" for lots.
    # klipsdebug=none
    # plutodebug="control parsing"
    plutodebug="all"
    # For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
    protostack=netkey
    nat_traversal=yes
    virtual_private="%v4:192.168.73.0/24"
    oe=off
    # Enable this if you see "failed to find any available worker"
    # nhelpers=0

conn aaa-office
    authby=secret
    left=7.7.7.7
    leftsubnet=192.168.133.0/24
    right=6.6.6.6
    rightsubnet=192.168.73.0/24
    rightid=192.168.73.8
    auto=add

Now provide the response and nothing else.
I'll answer myself and hope that this info will be useful for others with the same issue. The root cause was that packets from "server" weren't routed through the tunnel. Using ip xfrm policy I could see that the policy for routing through the tunnel requires packets to originate from 192.168.133.0/24. A ping from "server" to "vpn" resulted in these packets though:

17:29:16.549349 IP 7.7.7.7 > 192.168.73.8: ICMP echo request, id 43864, seq 1, length 64

So when doing the ping, the source IP naturally used was the public IP of the server. This wasn't the case for the "vpn" machine, since that machine was already in the subnet. The issue was resolved when I added the following statement to the configuration file of "server":

leftsourceip=192.168.133.1

Now things work as expected and I can reach the subnet behind "vpn" from "server".
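For completeness, here is the conn block from the question with that one statement added - a sketch of the resulting configuration rather than a verbatim copy of my file:

conn aaa-office
    authby=secret
    left=7.7.7.7
    leftsubnet=192.168.133.0/24
    leftsourceip=192.168.133.1
    right=6.6.6.6
    rightsubnet=192.168.73.0/24
    rightid=192.168.73.8
    auto=add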
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/503864', 'https://serverfault.com', 'https://serverfault.com/users/171823/']}
jdg_378070
stackexchange
llm_judgeable_groundtruth_similarity
5065362
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is driving me crazy. Just testing a site on IE9 and discovered that the 'live' version is rendering a web font I am using smaller than on the dev version. Here is a selection of screen grabs: I am using the Font Squirrel @font-face kit. As you can see, it is fine on Firefox, Chrome and even IE9 when viewing a local version of the site. The only difference between the local and live versions is that the font is being loaded from a different domain on the live site (I have set up the cross-domain policy correctly, as illustrated by the fact it works on Firefox and Chrome). I can't remember what it looked like in IE8 (Microsoft, yet again, haven't thought of developers and have installed IE9 over the top of IE8 with no option to run them simultaneously) The site is at http://enplanner.com so you can view the source. Any help on this would be most appreciated - thank you in advance. Edit: I have removed IE9 and discovered that is looks exactly the same on both local and live in IE8. It appears IE8 has a superior rendering engine that is closer to FF/Chrome than IE9. This is quite a depressing discovery. Now provide the response and nothing else.
IE9 supports .WOFF; IE8 does not, and supports only .EOT fonts. Open the IE9 F12 Developer Tools and you see the following messages:

CSS3117: @font-face failed cross-origin request. Resource access is restricted. Neuton-webfont.woff
CSS3117: @font-face failed cross-origin request. Resource access is restricted. YanoneKaffeesatz-Regular-webfont.woff
CSS3114: @font-face failed OpenType embedding permission check. Permission must be Installable. Neuton-webfont.ttf
CSS3114: @font-face failed OpenType embedding permission check. Permission must be Installable. YanoneKaffeesatz-Regular-webfont.ttf

Examining your HTTP headers, it's clear that your cross-domain access is not configured properly, as there is no Access-Control-Allow-Origin response header on your WOFF files. They're also delivered with the wrong MIME type ( text/plain ) although that's not causing your problem. However, failure to map woff to the correct MIME type can cause problems as some servers will not serve files with "undefined" extensions and will instead return a HTTP/404 error.
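One way to fix both server-side issues, assuming an Apache server with mod_headers enabled (the question does not say which server is in use, so treat this as a sketch; the MIME names used for fonts have also varied over time):

# .htaccess in the directory serving the fonts
<FilesMatch "\.(ttf|otf|eot|woff)$">
    Header set Access-Control-Allow-Origin "*"
</FilesMatch>

AddType application/vnd.ms-fontobject .eot
AddType application/x-font-ttf        .ttf
AddType application/font-woff         .woff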
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5065362', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_378071
stackexchange
llm_judgeable_groundtruth_similarity
36636255
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to download a captcha image with Selenium; however, I'm getting a different image downloaded than the one shown in the browser. If I try to download the image again, without changing the browser, I get a different one. Any thoughts?

from selenium import webdriver
import urllib

driver = webdriver.Firefox()
driver.get("http://sistemas.cvm.gov.br/?fundosreg")

# Change frame.
driver.switch_to.frame("Main")

# Download image/captcha.
img = driver.find_element_by_xpath(".//*[@id='trRandom3']/td[2]/img")
src = img.get_attribute('src')
urllib.request.urlretrieve(src, "captcha.jpeg")

Now provide the response and nothing else.
Because the image's src link gives you a new random captcha image each time you open it! Instead of downloading the file from the image's src , you can take a screenshot to get the one in the browser. However, you need to install Pillow ( pip install Pillow ) and use it the way mentioned in this answer :

from PIL import Image
from selenium import webdriver

def get_captcha(driver, element, path):
    # now that we have the preliminary stuff out of the way time to get that image :D
    location = element.location
    size = element.size
    # saves screenshot of entire page
    driver.save_screenshot(path)

    # uses PIL library to open image in memory
    image = Image.open(path)

    left = location['x']
    top = location['y'] + 140
    right = location['x'] + size['width']
    bottom = location['y'] + size['height'] + 140

    image = image.crop((left, top, right, bottom))  # defines crop points
    image.save(path, 'jpeg')  # saves new cropped image

driver = webdriver.Firefox()
driver.get("http://sistemas.cvm.gov.br/?fundosreg")

# change frame
driver.switch_to.frame("Main")

# download image/captcha
img = driver.find_element_by_xpath(".//*[@id='trRandom3']/td[2]/img")
get_captcha(driver, img, "captcha.jpeg")

(Note that I've changed the code a little bit so it works in your case.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36636255', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4781562/']}
jdg_378072
stackexchange
llm_judgeable_groundtruth_similarity
27738457
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wanted to find out the number of 0's at the end of an integer. Suppose anyone enters 2020, it should count 1; if the number is 2000, it should show 3, etc. I tried the following, but it doesn't accomplish what I want :(

Console.WriteLine("Enter Number :");
int num = int.Parse(Console.ReadLine());
int count = 0;
for (int i = 1; i < num.ToString().Count(); i++)
{
    //some logic
}
Console.WriteLine("Zero in the tail is :");
Console.WriteLine(count);

Now provide the response and nothing else.
You're not changing anything within your loop - so basically, on each iteration it will either increase Count or it won't, and it'll do the same thing each time - so Count will either be the length of the string, or it will be 0. The simplest option I can think of in terms of text manipulation would be:

string text = num.ToString();
int count = text.Length - text.TrimEnd('0').Length;

Without using text manipulation, however, you could just use division and remainder operations:

int count = 0;
// Keep going while the last digit is 0
while (num > 0 && num % 10 == 0)
{
    num = num / 10;
    count++;
}

Note that this will yield a count of 0 for the number 0... whereas the first approach will give a count of 1 (because 0.ToString() is "0"). Adjust either piece of code to suit your requirements :)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27738457', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3449215/']}
jdg_378073
stackexchange
llm_judgeable_groundtruth_similarity
225323
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose there is a function get-data which returns a map of information about the id of the user passed in. Now this function uses 3 functions source-a , source-b and source-c to get three different kinds of maps. Now we combine all these maps into one map and return it from get-data . When I test get-data , should I test for the existence of data for keys? Does it make sense for this function to fail unit tests if one of source-a , source-b and source-c fails? If that function's job is to combine data, and it's doing it, that should be enough, right? Now provide the response and nothing else.
Suppose there is a function get-data which returns a map of information about the id of the user passed in. Great. You should test that then. For a given ID, do you get the right data back? now this function uses 3 functions source-a, source-b and source-c to get three different kinds of maps. Which is an implementation detail you should ignore in the test. All you're testing is that your unit of work (this method) does what it's supposed to (take an ID and return XYZ data for that ID). How the method does that isn't particularly relevant - after all, a key benefit of that unit test is that you can refactor the implementation of the method and the test will verify you did that correctly. That said, you're likely going to need to mock the data sources, so at some point the test will likely need to know how the code works to do that. You'll need to balance three competing goals here: making the test isolated (by mocking the data), making the test focused on requirements, and pragmatism. In the end, it's the code that matters. Tests exist to support the actual code, spending a lot of time and trouble mucking about with polishing tests isn't nearly as useful as making tests.
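As an illustration, here is a minimal such test in Python - the names mirror the question (with hyphens turned into underscores), but the module layout, signatures, and mocking style are assumptions made for the example:

from unittest import mock

import mydata  # hypothetical module containing get_data and source_a/b/c

def test_get_data_combines_sources():
    with mock.patch.object(mydata, "source_a", return_value={"a": 1}), \
         mock.patch.object(mydata, "source_b", return_value={"b": 2}), \
         mock.patch.object(mydata, "source_c", return_value={"c": 3}):
        # assert only on the unit's contract: combining the three maps
        assert mydata.get_data(user_id=42) == {"a": 1, "b": 2, "c": 3}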
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/225323', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/43804/']}
jdg_378074
stackexchange
llm_judgeable_groundtruth_similarity
50504844
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this kind of list: a = ['orange', 'apple', 'banana'] and from this I build another list of tuples using a simple list comprehension : b = [(key, key.upper()) for key in a] The result is something like this: b = [('orange', 'ORANGE'), ('apple', 'APPLE'), ('banana', 'BANANA')] Now I need to insert another tuple at the beginning (or at the end, for example) of the newly generated b list: ('---', None) . The two easy solutions would be:

# Solution 1
b = [('---', None)] + b

# Solution 2
b.insert(0, ('---', None))

Question: Is there a way to accomplish the same thing directly from the list comprehension construct? Could something like this (pseudocode) be possible in python?

b = [('---', None), (key, key.upper()) for key in a]

Now provide the response and nothing else.
Yes, by using generator comprehensions: b = [('---', None), *((key, key.upper()) for key in a)] Instead of using a generator comprehension, you could also use a list comprehension. The benefit of using generator comprehensions is that they don't create a new list, but hand out items one by one. If your list ( a ) is big, that can make an impact on memory use and/or performance. Using the asterisk (the "splat operator") to unpack such a sequence is PEP 448 — Additional Unpacking Generalizations . It allows you to use the * not just in function calls, but in list, generator, and set comprehensions (the same applying to ** and dictionary comprehensions).
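For instance, a quick check of the result with the list from the question (assuming Python 3.5+ for the unpacking syntax):

a = ['orange', 'apple', 'banana']
b = [('---', None), *((key, key.upper()) for key in a)]
print(b)
# [('---', None), ('orange', 'ORANGE'), ('apple', 'APPLE'), ('banana', 'BANANA')]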
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50504844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1191416/']}
jdg_378075
stackexchange
llm_judgeable_groundtruth_similarity
47587352
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to open a local HTML file with headless Chrome using Puppeteer (without a web server)? I could only get it to work against a local server. I found setContent() and goto() in the Puppeteer API documentation , but: page.goto : did not work with a local file or file:// . page.setContent : is for an HTML string Now provide the response and nothing else.
I just did a test locally (you can see I did this on Windows) and puppeteer happily opened my local html file using page.goto and a full file url, and saved it as a pdf:

'use strict';

const puppeteer = require('puppeteer');

(async() => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('file://C:/Users/compoundeye/test.html');
    await page.pdf({
        path: 'test.pdf',
        format: 'A4',
        margin: {
            top: "20px",
            left: "20px",
            right: "20px",
            bottom: "20px"
        }
    });
    await browser.close();
})();

If you need to use a relative path you might want to look at this question about the use of relative file paths: File Uri Scheme and Relative Files
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/47587352', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/237743/']}
jdg_378076
stackexchange
llm_judgeable_groundtruth_similarity
23712581
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to find out the differences between /dev/random and /dev/urandom files What are the differences between /dev/random and /dev/urandom ? When should I use them? when should I not use them? Now provide the response and nothing else.
Using /dev/random may require waiting for the result, as it uses the so-called entropy pool , where random data may not be available at the moment. /dev/urandom returns as many bytes as the user requested and thus it is less random than /dev/random . As can be read from the man page:

random: When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.

urandom: A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.

For cryptographic purposes you should really use /dev/random because of the nature of the data it returns. Possible waiting should be considered an acceptable tradeoff for the sake of security, IMO. When you need random data fast , you should use /dev/urandom of course. Source: Wikipedia page, man page
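As a small illustration (Python here, purely for demonstration): os.urandom reads from the system's non-blocking CSPRNG - on Linux this is backed by /dev/urandom - and the devices can also be read directly:

import os

key = os.urandom(16)              # 16 random bytes, never blocks
print(key.hex())

with open("/dev/urandom", "rb") as f:
    print(f.read(16).hex())       # a read from /dev/random may block instead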
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23712581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/718333/']}
jdg_378077
stackexchange
llm_judgeable_groundtruth_similarity
9110536
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use EF 4.3 migrations with multiple code-first DbContexts. My application is separated into several plugins, which possibly have their own DbContext regarding their domain. The application should use one single sql-database. When I try to auto migrate the contexts in an empty database, this is only successful for the first context. Every other context needs the AutomaticMigrationDataLossAllowed-Property set to true but then tries to drop the tables of the previous one. So my question is: How can I tell the migration-configuration just to look after the tables defined in their corresponding context and leave all others alone? What is the right workflow to deal with multiple DbContexts with auto-migration in a single database? Thank you! Now provide the response and nothing else.
Here is what you can do. Very simple. You can create a configuration class for each of your contexts, e.g.:

internal sealed class Configuration1 : DbMigrationsConfiguration<Context1>
{
    public Configuration1()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsNamespace = "YourProject.Models.ContextNamespace1";
    }
}

internal sealed class Configuration2 : DbMigrationsConfiguration<Context2>
{
    public Configuration2()
    {
        AutomaticMigrationsEnabled = false;
        MigrationsNamespace = "YourProject.Models.ContextNamespace2";
    }
}

Now you add a migration. You don't need to enable migrations, since you already did that with the two classes above.

Add-Migration -configuration Configuration1 Context1Init

This will create a migration script for Context1. You can repeat this for the other contexts:

Add-Migration -configuration Configuration2 Context2Init

To update your database:

Update-Database -configuration Configuration1
Update-Database -configuration Configuration2

This can be done in any order, except that you need to make sure each configuration is called in sequence.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9110536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144440/']}
jdg_378078
stackexchange
llm_judgeable_groundtruth_similarity
384759
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I convert a PIL Image back and forth to a NumPy array so that I can do faster pixel-wise transformations than PIL's PixelAccess allows? I can convert it to a NumPy array via:

pic = Image.open("foo.jpg")
pix = numpy.array(pic.getdata()).reshape(pic.size[0], pic.size[1], 3)

But how do I load it back into the PIL Image after I've modified the array? pic.putdata() isn't working well. Now provide the response and nothing else.
You're not saying how exactly putdata() is not behaving. I'm assuming you're doing

>>> pic.putdata(a)
Traceback (most recent call last):
  File "...blablabla.../PIL/Image.py", line 1185, in putdata
    self.im.putdata(data, scale, offset)
SystemError: new style getargs format but argument is not a tuple

This is because putdata expects a sequence of tuples and you're giving it a numpy array. This

>>> data = list(tuple(pixel) for pixel in pix)
>>> pic.putdata(data)

will work but it is very slow. As of PIL 1.1.6, the "proper" way to convert between images and numpy arrays is simply

>>> pix = numpy.array(pic)

although the resulting array is in a different format than yours (3-d array or rows/columns/rgb in this case). Then, after you make your changes to the array, you should be able to do either pic.putdata(pix) or create a new image with Image.fromarray(pix) .
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/384759', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/145/']}
jdg_378079
stackexchange
llm_judgeable_groundtruth_similarity
2772511
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Many Java framework classes implement Iterable , however String does not. It makes sense to iterate over characters in a String , just as one can iterate over items in a regular array. Is there a reason why String does not implement Iterable ? Now provide the response and nothing else.
There really isn't a good answer. An iterator in Java specifically applies to a collection of discrete items (objects). You would think that a String , which implements CharSequence , should be a "collection" of discrete characters. Instead, it is treated as a single entity that happens to consist of characters. In Java, it seems that iterators are only really applied to collections and not to a string. There is no reason why it is this way (near as I can tell - you would probably have to talk to Gosling or the API writers); it appears to be convention or a design decision. Indeed, there is nothing preventing CharSequence from implementing Iterable . That said, you can iterate over the characters in a string like so:

for (int i = 0; i < str.length(); i++) {
    System.out.println(str.charAt(i));
}

Or:

for (char c : str.toCharArray()) {
    System.out.println(c);
}

Or:

"Java 8".chars().forEach(System.out::println);

Also note that you cannot modify a character of a String in place because Strings are immutable. The mutable companion to a String is StringBuilder (or the older StringBuffer). EDIT To clarify based on the comments on this answer. I'm trying to explain a possible rationale as to why there is no Iterator on a String . I'm not trying to say that it's not possible; indeed I think it would make sense for CharSequence to implement Iterable . String provides CharSequence , which, if only conceptually, is different from a String . A String is usually thought of as a single entity, whereas CharSequence is exactly that: a sequence of characters. It would make sense to have an iterator on a sequence of characters (i.e., on CharSequence ), but not simply on a String itself. As Foxfire has rightly pointed out in the comments, String implements the CharSequence interface, so type-wise, a String is a CharSequence . Semantically, it seems to me that they are two separate things - I'm probably being pedantic here, but when I think of a String I usually think of it as a single entity that happens to consist of characters. Consider the difference between the sequence of digits 1, 2, 3, 4 and the number 1234 . Now consider the difference between the string abcd and the sequence of characters a, b, c, d . I'm trying to point out this difference. In my opinion, asking why String doesn't have an iterator is like asking why Integer doesn't have an iterator so that you can iterate over the individual digits.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2772511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/333335/']}
jdg_378080
stackexchange
llm_judgeable_groundtruth_similarity
44518
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: 1. I start with a simple differential inequality: find $u\in C^1[0,1]$ such that $u(0) = 0$ and $$u'(t)\leq -u(t)$$ for all $t\in [0,1]$. Using Gronwall's lemma one can see that $u\leq 0$. On the other hand it seems to be the only solution, since this inequality keeps $u$ non-decreasing for $u<0$ and non-increasing for $u>0$. Is it right that only $u=0$ satisfies this inequality? 2. With Gronwall's lemma one can see that any solution of $$u'(t)\leq\beta(t)u(t)$$ is bounded from above by the solution of $$u'(t) = \beta(t)u(t).$$ So there are two main results: (i) there is a solution of $u'\leq \beta u$ which dominates any other solution. (ii) this solution is attained on the corresponding equation. Are there any similar results on PD inequalities of the type $$u_t(t,x)\leq L_x u(t,x)$$ where $L_x$ is a differential operator in the $x$ variable (of first or second order). The main question for me is if (i) is valid for such inequalities. Now provide the response and nothing else.
Related to your second question: I assume your functions are of the form $u: (a,b) \to L^2(\Omega)$. If you can get a bound on $(Lw, w)_2$ in terms of $\|w\|_2^2$ (as you can typically do by linearity, Holder's Inequality, Green's Identity, etc.) then assuming positivity you can do the following: $$u_t(t, x) \le Lu(t, x)$$ $$\frac{1}{2} u(t,x) u_t(t,x) \le \frac{1}{2} Lu(t,x) u(t,x)$$ $$\int_{\Omega} \frac{1}{2} u(t,x) u_t(t,x)\, dx \le \frac{1}{2}(Lu(t), u(t))_2$$ $$\frac{1}{2}\frac{d}{dt} \|u(t)\|_2^2 \le \frac{1}{2}(Lu, u)_2 \le C\|u(t)\|_2^2$$ and now use Gronwall and the initial condition to get bounds on the $L^2$ norm of the solution. I don't know if that helps.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/44518', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5887/']}
jdg_378081
stackexchange
llm_judgeable_groundtruth_similarity
5437
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I can't find any examples, has it just never happened before or is it impossible because of environmental factors? Now provide the response and nothing else.
Apart from the fact that only a few tropical cyclones follow a more southerly route (a list below) after going around the North Atlantic, I think the main problem is that tropical cyclones would weaken significantly after landfall in the Iberian Peninsula. An example of this is Hurricane Raphael in 2012. Source Weather.unisys.com. There is no "easy" path from the Atlantic into the Mediterranean for a system the size of a tropical cyclone. Other examples that followed a similar path but never made it to the Mediterranean: Gordon (2006) , Jeanne (1998) , Ivan (1998) , Frances (1992) , Bob (1991) , Arlene (1987) , Chloe (1967) , Carol (1965) , Dolly (1953) . While the cyclones at the point of approaching the Iberian Peninsula are still tracked as tropical depressions, their characteristics are by then often similar to other extra-tropical storms.
{}
{'log_upvote_score': 4, 'links': ['https://earthscience.stackexchange.com/questions/5437', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/3384/']}
jdg_378082
stackexchange
llm_judgeable_groundtruth_similarity
2452184
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I think I have found all the positive integer solutions to $xy=4x+7y$. What I did first was to make $y$ the subject of the expression: $$xy-7y=4x$$$$y=\frac{4x}{x-7}$$$$y=\frac{4}{x-7} * x$$ I started finding the divisors of $4$: $1,2,4$, and setting $x-7$ to be equal to these divisors. I then got the solution pairs: $(8, 32), (9, 18), (11, 11)$. I also realised that if $x-7$ and $x$ were divisible by $7$, then $7$ could be 'cancelled out' from the expression. I substituted $x=14, 21, 35$, and got $3$ more solution pairs: $(14, 8)$, $(21, 6)$, $(35, 5)$. After this, I concluded that there might not be any more solutions, because $$\lim_{x\to \infty} (4*\frac{x}{x-7}) = 4*1=4,$$ and that means that when $x$ is another multiple of $7$, $y$ will never equal $4$. However, how can I prove that there are no other solutions rigorously, without checking each number case by case? Is there a way to find the total number of integer solutions without knowing what they are? Now provide the response and nothing else.
If $xy = 4x+7y$, then $xy-4x-7y=0$. Complete the factorization: $$ (x-7)(y-4) = xy-4x-7y+(\text{something}) = (\text{the \textbf{same} something})$$ So $x-7$ and $y-4$ must be a factorization of the (something). Note: Just because $x$ and $y$ are positive integers doesn't mean that $x-7$ and $y-4$ are necessarily positive.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2452184', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/285313/']}
jdg_378083
stackexchange
llm_judgeable_groundtruth_similarity
2147736
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to prove that the following are equivalent: CH There exists a Sierpinski set and $\operatorname{non}(\mathcal{N})=2^{\aleph_0}$ There exists a Luzin set and $\operatorname{non}(\mathcal{M})=2^{\aleph_0}$ There exists a Sierpinski- and a Luzin set and one of them has size continuum. I can show that 1 implies 2-4. In Case 2-4 we either have a Luzin or Sierpinski set of size continuum, how can I deduce that it has size $\aleph_1$? Here $\mathcal{N}$ is the ideal of Lebesgue null sets, $\mathcal{M}$ is the ideal of meager sets. A Sierpinksi set is an uncountable set $S\subseteq \mathbb{R}$ s.t. $S\cap N$ is countable for all $N\in\mathcal{N}$. A Luzin set is an uncountable set $L\subseteq \mathbb{R}$ s.t. $L\cap M$ is countable for all $M\in\mathcal{M}$. Now provide the response and nothing else.
Here is a partial answer: I will show that 2 implies 1. Let $\{r_\alpha : \alpha<\mathfrak{c}\}$ be an enumeration of $\mathbb{R}$. Since $\mathrm{non}(\mathcal{N})=\mathfrak{c}$, every subset of $\mathbb{R}$ whose cardinality is strictly less than $\mathfrak{c}$ is measurable, and in fact of measure 0. Let $S$ be a Sierpinski set; then $S_\gamma := S\cap \{r_\alpha : \alpha<\gamma\}$ is countable for each $\gamma < \mathfrak{c}$. Moreover, if $\beta<\gamma<\mathfrak{c}$ then $S_\beta \subseteq S_\gamma$. Therefore $\langle S_\gamma : \gamma<\mathfrak{c}\rangle$ forms an increasing sequence of countable sets. If $\mathfrak{c}\ge \omega_2$, then $S_\gamma$ is eventually constant (since it is always countable) and $S=S_\gamma$ for large $\gamma$, which would make $S$ itself an uncountable measure-zero set - but then $S\cap S = S$ would have to be countable, a contradiction. Therefore $\mathfrak{c} = \omega_1$. You can also prove the second implication $(3)\implies (1)$ with a little modification, by changing some words in the above proof.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2147736', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/202791/']}
jdg_378084
stackexchange
llm_judgeable_groundtruth_similarity
44520803
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to display currentpages tags by using a TagQuery. I wish it was as easy as var tagList = Umbraco.TagQuery.GetAllContentTags(CurrentPage) What's an easy approach for this? Now provide the response and nothing else.
You can implement Finally without type erasure and the overhead of std::function :

template <typename F>
class Finally
{
    F f;

public:
    template <typename Func>
    Finally(Func&& func) : f(std::forward<Func>(func)) {}
    ~Finally() { f(); }

    Finally(const Finally&) = delete;
    Finally(Finally&&) = delete;
    Finally& operator =(const Finally&) = delete;
    Finally& operator =(Finally&&) = delete;
};

template <typename F>
Finally<F> make_finally(F&& f)
{
    return { std::forward<F>(f) };
}

And use it like:

auto&& doFinally = make_finally([&] { var++; });

Demo
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44520803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8086058/']}
jdg_378085
stackexchange
llm_judgeable_groundtruth_similarity
1317152
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to make sure that everything I know about UTF-8 is correct. I have been trying to use UTF-8 for a while now but I keep stumbling across more and more bugs and other weird things that make it seem almost impossible to have a 100% UTF-8 site. There is always a gotcha somewhere that I seem to miss. Perhaps someone here can correct my list or OK it so I don't miss anything important.

Database: Every site has to store their data somewhere. No matter what your PHP settings are, you must also configure the DB. If you can't access the config files, then make sure to "SET NAMES 'utf8'" as soon as you connect. Also, make sure to use utf8_unicode_ci on all of your tables. This assumes MySQL for a database; you will have to change it for others.

Regex: I do a LOT of regex that is more complex than your average search-replace. I have to remember to use the "/u" modifier so that PCRE doesn't corrupt my strings. Yet, even then there are still problems apparently.

String Functions: All of the default string functions (strlen(), strpos(), etc.) should be replaced with Multibyte String Functions that look at the character instead of the byte.

Headers: You should make sure that your server is returning the correct header for the browser to know what charset you are trying to use (just like you must tell MySQL). header('Content-Type: text/html; charset=utf-8'); It is also a good idea to put the correct <meta> tag in the page head. Though the actual header will override this should they differ. <meta http-equiv="Content-Type" content="text/html;charset=utf-8">

Questions: Do I need to convert everything that I receive from the user agent (HTML forms & URI) to UTF-8 when the page loads, or can I just leave the strings/values as they are and still run them through these functions without a problem? If I do need to convert everything to UTF-8 - then what steps should I take? mb_detect_encoding seems to be built for this but I keep seeing people complain that it doesn't always work. mb_check_encoding also seems to have a problem telling a good UTF-8 string from a malformed one. Does PHP store strings in memory differently depending on what encoding it is using (like file types) or is it still stored like a regular string with some of the chars being interpreted differently (like &amp; vs & in HTML). chazomaticus answers this question: In PHP (up to PHP5, anyway), strings are just sequences of bytes. There is no implied or explicit character set associated with them; that's something the programmer must keep track of. If I give a non-UTF-8 string to a mb_* function will it ever cause a problem? If a UTF string is improperly encoded will something go wrong (like a parsing error in regex?) or will it just mark an entity as bad (html)? Is there ever a chance that improperly encoded strings will result in a function returning FALSE because the string is bad? I have heard that you should mark your forms as UTF-8 also (accept-charset="UTF-8") but I am not sure what the benefit is..? Was UTF-16 written to address a limit in UTF-8? Like did UTF-8 run out of space for characters? (Y2(UTF)k?)

Functions: Here are a couple of the custom PHP functions I have found, but I haven't had any way to verify that they actually work. Perhaps someone has an example which I can use. First is convertToUTF8() and then seems_utf8 from wordpress.
function seems_utf8($str) {
    $length = strlen($str);
    for ($i=0; $i < $length; $i++) {
        $c = ord($str[$i]);
        if ($c < 0x80) $n = 0;                 # 0bbbbbbb
        elseif (($c & 0xE0) == 0xC0) $n=1;     # 110bbbbb
        elseif (($c & 0xF0) == 0xE0) $n=2;     # 1110bbbb
        elseif (($c & 0xF8) == 0xF0) $n=3;     # 11110bbb
        elseif (($c & 0xFC) == 0xF8) $n=4;     # 111110bb
        elseif (($c & 0xFE) == 0xFC) $n=5;     # 1111110b
        else return false;                     # Does not match any model
        for ($j=0; $j<$n; $j++) {              # n bytes matching 10bbbbbb follow?
            if ((++$i == $length) || ((ord($str[$i]) & 0xC0) != 0x80))
                return false;
        }
    }
    return true;
}

function is_utf8($str) {
    $c=0; $b=0;
    $bits=0;
    $len=strlen($str);
    for($i=0; $i<$len; $i++){
        $c=ord($str[$i]);
        if($c > 128){
            if(($c >= 254)) return false;
            elseif($c >= 252) $bits=6;
            elseif($c >= 248) $bits=5;
            elseif($c >= 240) $bits=4;
            elseif($c >= 224) $bits=3;
            elseif($c >= 192) $bits=2;
            else return false;
            if(($i+$bits) > $len) return false;
            while($bits > 1){
                $i++;
                $b=ord($str[$i]);
                if($b < 128 || $b > 191) return false;
                $bits--;
            }
        }
    }
    return true;
}

If anyone is interested, I found a great example page to use when testing UTF-8. Now provide the response and nothing else.
> Do I need to convert everything that I receive from the user agent (HTML forms & URI) to UTF-8 when the page loads?

No. The user agent should be submitting data in UTF-8 format; if not, you are losing the benefit of Unicode. The way to ensure a user agent submits in UTF-8 format is to serve the page containing the form it's submitting in UTF-8 encoding. Use the Content-Type header (and meta http-equiv too, if you intend the form to be saved and work standalone).

> I have heard that you should mark your forms as UTF-8 also (accept-charset="UTF-8")

Don't. It was a nice idea in the HTML standard, but IE never got it right. It was supposed to state an exclusive list of allowable charsets, but IE treats it as a list of additional charsets to try, on a per-field basis. So if you have an ISO-8859-1 page and an “accept-charset="UTF-8"” form, IE will first try to encode a field as ISO-8859-1, and if there's a non-8859-1 character in there, then it'll resort to UTF-8. But since IE does not tell you whether it has used ISO-8859-1 or UTF-8, that's of absolutely no use to you. You would have to guess, for each field separately, which encoding was in use! Not useful. Omit the attribute and serve your pages as UTF-8; that's the best you can do at the moment.

> If a UTF string is improperly encoded will something go wrong?

If you let such a sequence get through to the browser you could be in trouble. There are ‘overlong sequences’, which encode a low-numbered codepoint in a longer sequence of bytes than is necessary. This means if you are filtering ‘<’ by looking for that ASCII character in a sequence of bytes, you could miss one, and let a script element into what you thought was safe text. Overlong sequences were banned back in the early days of Unicode, but it took Microsoft a very long time to get their shit together: IE would interpret the byte sequence ‘\xC0\xBC’ as a ‘<’ up until IE6 Service Pack 1. Opera also got it wrong up to (about, I think) version 7. Luckily these older browsers are dying out, but it's still worth filtering overlong sequences in case those browsers are still about now (or new idiot browsers make the same mistake in future). You can do this, and fix other bad sequences, with a regex that allows only proper UTF-8 through, such as this one from W3. If you are using mb_ functions in PHP, you might be insulated from these issues. I can't say for sure, as mb_* was unusably fragile when I was still writing PHP.

In any case, this is also a good time to remove control characters, which are a large and generally unappreciated source of bugs. I would remove chars 9 and 13 from submitted strings in addition to the others the W3 regex takes out; it is also worth removing plain newlines for strings you know aren't supposed to be multiline textboxes.

> Was UTF-16 written to address a limit in UTF-8?

No, UTF-16 is a two-byte-per-codepoint encoding that's used to make indexing Unicode strings easier in-memory (from the days when all of Unicode would fit in two bytes; systems like Windows and Java still do it that way). Unlike UTF-8 it is not compatible with ASCII, and is of little-to-no use on the Web. But you occasionally meet it in saved files, usually ones saved by Windows users who have been misled by Windows's description of UTF-16LE as “Unicode” in Save-As menus.

> seems_utf8

This is very inefficient compared to the regex!

> Also, make sure to use utf8_unicode_ci on all of your tables.

You can actually sort of get away without this, treating MySQL as a store for nothing but bytes and only interpreting them as UTF-8 in your script. The advantage of using utf8_unicode_ci is that it will collate (sort and do case-insensitive compares) with knowledge about non-ASCII characters, so e.g. ‘ŕ’ and ‘Ŕ’ are the same character. If you use a non-UTF8 collation you should stick to binary (case-sensitive) matching. Whichever you choose, do it consistently: use the same character set for your tables as you do for your connection. What you want to avoid is a lossy character set conversion between your scripts and the database.
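A minimal sketch of the validate-then-clean step described above. mb_check_encoding, mb_convert_encoding and preg_replace are real PHP functions, but the helper name and the exact filtering policy are my own illustrative assumptions, not part of the original answer:

<?php
// Sanitize a submitted string before it reaches the database.
// Assumes the mbstring extension is available.
function clean_utf8_input(string $raw): string
{
    // 1. Verify the byte sequence is well-formed UTF-8 (this catches
    //    truncated multi-byte characters and other malformed input).
    if (!mb_check_encoding($raw, 'UTF-8')) {
        // Policy choice: you could also throw instead of repairing.
        // Converting UTF-8 to UTF-8 drops/substitutes the invalid bytes.
        $raw = mb_convert_encoding($raw, 'UTF-8', 'UTF-8');
    }

    // 2. Strip control characters (keeping tab, LF and CR here; drop
    //    chars 9 and 13 too for single-line fields, as suggested above).
    return preg_replace('/[\x00-\x08\x0B\x0C\x0E-\x1F\x7F]/u', '', $raw);
}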
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1317152', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/99923/']}
jdg_378086
stackexchange
llm_judgeable_groundtruth_similarity
21222851
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a String which came from a text area (with the variable name string ):

This is the first line
And this is the second

If I were to split that into separate words using string.split(" ") , then check what words contain "\n" :

for(String s : string.split(" ")) {
    if(s.contains("\n"))
        System.out.println(s);
}

Both line and And in my sentence contain \n . But, if I were to check if the word either started with \n or ended with it, it gives me no results.

if(s.contains("\n")) {
    System.out.println("Contains");
    if(s.startsWith("\n"))
        System.out.println("Starts with");
    else if(s.endsWith("\n"))
        System.out.println("Ends with");
    else
        System.out.println("Does not contain");
}

My result from that:

Contains
Does not contain

So, if the word contains a \n , but it doesn't start or end with it, where exactly is it and how can I manage it without using replaceAll(String, String) ? Now provide the response and nothing else.
What happens is that the string looks like:

"This is the first line\nAnd this is the second"

So when you split it by " " you get: "line\nAnd" . When you print it, it looks like two separate strings. To demonstrate this, try adding an extra print in the for loop:

for (final String s : string.split(" ")) {
    if (s.contains("\n")) {
        System.out.print(s);
        System.out.println(" END");
    }
}

Output:

line
And END

And when you try to check whether a string starts or ends with "\n" you won't get any result, because in fact the string "line\nAnd" doesn't start or end with "\n" .
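If the goal is to treat the newline as a separator as well, one simple fix — my own illustration, not part of the original answer — is to split on any run of whitespace instead of a single space:

// "\\s+" matches spaces, tabs and newlines alike,
// so no resulting token can carry a hidden "\n".
for (String s : string.split("\\s+")) {
    System.out.println(s);  // This, is, the, first, line, And, ...
}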
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21222851', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2398375/']}
jdg_378087
stackexchange
llm_judgeable_groundtruth_similarity
50323522
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any possible optimization for random access on a very big array (I currently use uint8_t , and I'm asking about what's better)

uint8_t MyArray[10000000];

when the value at any position in the array is

- 0 or 1 for 95% of all cases,
- 2 in 4% of cases,
- between 3 and 255 in the other 1% of cases?

So, is there anything better than a uint8_t array to use for this? It should be as quick as possible to loop over the whole array in a random order, and this is very heavy on RAM bandwidth, so when having more than a few threads doing that at the same time for different arrays, currently the whole RAM bandwidth is quickly saturated. I'm asking since it feels very inefficient to have such a big array (10 MB) when it's actually known that almost all values, apart from 5%, will be either 0 or 1. So when 95% of all values in the array would only actually need 1 bit instead of 8 bits, this would reduce memory usage by almost an order of magnitude. It feels like there has to be a more memory efficient solution that would greatly reduce RAM bandwidth required for this, and as a result also be significantly quicker for random access. Now provide the response and nothing else.
A simple possibility that comes to mind is to keep a compressed array of 2 bits per value for the common cases, and a separate 4-byte-per-value (24 bits for the original element index, 8 bits for the actual value, so (idx << 8) | value ) sorted array for the other ones. When you look up a value, you first do a lookup in the 2bpp array (O(1)); if you find 0, 1 or 2 it's the value you want; if you find 3 it means that you have to look it up in the secondary array. Here you'll perform a binary search to look for the index of your interest left-shifted by 8 (O(log(n)) with a small n, as this should be the 1%), and extract the value from the 4-byte thingie.

std::vector<uint8_t> main_arr;
std::vector<uint32_t> sec_arr;

uint8_t lookup(unsigned idx) {
    // extract the 2 bits of our interest from the main array
    uint8_t v = (main_arr[idx>>2]>>(2*(idx&3)))&3;

    // usual (likely) case: value between 0 and 2
    if(v != 3) return v;

    // bad case: lookup the index<<8 in the secondary array
    // lower_bound finds the first >=, so we don't need to mask out the value
    auto ptr = std::lower_bound(sec_arr.begin(), sec_arr.end(), idx<<8);

#ifdef _DEBUG
    // some coherency checks
    if(ptr == sec_arr.end()) std::abort();
    if((*ptr >> 8) != idx) std::abort();
#endif

    // extract our 8-bit value from the 32 bit (index, value) thingie
    return (*ptr) & 0xff;
}

void populate(uint8_t *source, size_t size) {
    main_arr.clear(); sec_arr.clear();
    // size the main storage (round up)
    main_arr.resize((size+3)/4);

    for(size_t idx = 0; idx < size; ++idx) {
        uint8_t in = source[idx];
        uint8_t &target = main_arr[idx>>2];
        // if the input doesn't fit, cap to 3 and put in secondary storage
        if(in >= 3) {
            // top 24 bits: index; low 8 bit: value
            sec_arr.push_back((idx << 8) | in);
            in = 3;
        }
        // store in the target according to the position
        target |= in << ((idx & 3)*2);
    }
}

For an array such as the one you proposed, this should take 10000000 / 4 = 2500000 bytes for the first array, plus 10000000 * 1% * 4 B = 400000 bytes for the second array; hence 2900000 bytes, i.e. less than one third of the original array, and the most used portion is all kept together in memory, which should be good for caching (it may even fit L3). If you need more than 24-bit addressing, you'll have to tweak the "secondary storage"; a trivial way to extend it is to have a 256 element pointer array to switch over the top 8 bits of the index and forward to a 24-bit indexed sorted array as above.

Quick benchmark

#include <algorithm>
#include <vector>
#include <stdint.h>
#include <chrono>
#include <stdio.h>
#include <math.h>

using namespace std::chrono;

/// XorShift32 generator; extremely fast, 2^32-1 period, way better quality
/// than LCG but fails some test suites
struct XorShift32 {
    /// This stuff allows to use this class wherever a library function
    /// requires a UniformRandomBitGenerator (e.g. std::shuffle)
    typedef uint32_t result_type;
    static uint32_t min() { return 1; }
    static uint32_t max() { return uint32_t(-1); }

    /// PRNG state
    uint32_t y;

    /// Initializes with seed
    XorShift32(uint32_t seed = 0) : y(seed) {
        if(y == 0) y = 2463534242UL;
    }

    /// Returns a value in the range [1, 1<<32)
    uint32_t operator()() {
        y ^= (y<<13);
        y ^= (y>>17);
        y ^= (y<<15);
        return y;
    }

    /// Returns a value in the range [0, limit); this conforms to the RandomFunc
    /// requirements for std::random_shuffle
    uint32_t operator()(uint32_t limit) {
        return (*this)()%limit;
    }
};

struct mean_variance {
    double rmean = 0.;
    double rvariance = 0.;
    int count = 0;

    void operator()(double x) {
        ++count;
        double ormean = rmean;
        rmean += (x-rmean)/count;
        rvariance += (x-ormean)*(x-rmean);
    }

    double mean() const     { return rmean; }
    double variance() const { return rvariance/(count-1); }
    double stddev() const   { return std::sqrt(variance()); }
};

std::vector<uint8_t> main_arr;
std::vector<uint32_t> sec_arr;

uint8_t lookup(unsigned idx) {
    // extract the 2 bits of our interest from the main array
    uint8_t v = (main_arr[idx>>2]>>(2*(idx&3)))&3;

    // usual (likely) case: value between 0 and 2
    if(v != 3) return v;

    // bad case: lookup the index<<8 in the secondary array
    // lower_bound finds the first >=, so we don't need to mask out the value
    auto ptr = std::lower_bound(sec_arr.begin(), sec_arr.end(), idx<<8);

#ifdef _DEBUG
    // some coherency checks
    if(ptr == sec_arr.end()) std::abort();
    if((*ptr >> 8) != idx) std::abort();
#endif

    // extract our 8-bit value from the 32 bit (index, value) thingie
    return (*ptr) & 0xff;
}

void populate(uint8_t *source, size_t size) {
    main_arr.clear(); sec_arr.clear();
    // size the main storage (round up)
    main_arr.resize((size+3)/4);

    for(size_t idx = 0; idx < size; ++idx) {
        uint8_t in = source[idx];
        uint8_t &target = main_arr[idx>>2];
        // if the input doesn't fit, cap to 3 and put in secondary storage
        if(in >= 3) {
            // top 24 bits: index; low 8 bit: value
            sec_arr.push_back((idx << 8) | in);
            in = 3;
        }
        // store in the target according to the position
        target |= in << ((idx & 3)*2);
    }
}

volatile unsigned out;

int main() {
    XorShift32 xs;
    std::vector<uint8_t> vec;
    int size = 10000000;
    for(int i = 0; i<size; ++i) {
        uint32_t v = xs();
        if(v < 1825361101)      v = 0; // 42.5%
        else if(v < 4080218931) v = 1; // 95.0%
        else if(v < 4252017623) v = 2; // 99.0%
        else {
            while((v & 0xff) < 3) v = xs();
        }
        vec.push_back(v);
    }
    populate(vec.data(), vec.size());
    mean_variance lk_t, arr_t;
    for(int i = 0; i<50; ++i) {
        {
            unsigned o = 0;
            auto beg = high_resolution_clock::now();
            for(int i = 0; i < size; ++i) {
                o += lookup(xs() % size);
            }
            out += o;
            int dur = (high_resolution_clock::now()-beg)/microseconds(1);
            fprintf(stderr, "lookup: %10d µs\n", dur);
            lk_t(dur);
        }
        {
            unsigned o = 0;
            auto beg = high_resolution_clock::now();
            for(int i = 0; i < size; ++i) {
                o += vec[xs() % size];
            }
            out += o;
            int dur = (high_resolution_clock::now()-beg)/microseconds(1);
            fprintf(stderr, "array:  %10d µs\n", dur);
            arr_t(dur);
        }
    }
    fprintf(stderr, " lookup |   ±  |  array |   ±  | speedup\n");
    printf("%7.0f | %4.0f | %7.0f | %4.0f | %0.2f\n",
           lk_t.mean(), lk_t.stddev(),
           arr_t.mean(), arr_t.stddev(),
           arr_t.mean()/lk_t.mean());
    return 0;
}

(code and data always updated in my Bitbucket)

The code above populates a 10M element array with random data distributed as OP specified in their post, initializes my data structure and then:

- performs a random lookup of 10M elements with my data structure
- does the same through the original array.

(Notice that in case of sequential lookup the array always wins by a huge measure, as it's the most cache-friendly lookup you can do.) These last two blocks are repeated 50 times and timed; at the end, the mean and standard deviation for each type of lookup are calculated and printed, along with the speedup (lookup_mean/array_mean). I compiled the code above with g++ 5.4.0 ( -O3 -static , plus some warnings) on Ubuntu 16.04, and ran it on several machines; most of them are running Ubuntu 16.04, some older Linux, some newer Linux. I don't think the OS should be relevant at all in this case.

CPU                       | cache    | lookup (µs)    | array (µs)     | speedup (x)
Xeon E5-1650 v3 @ 3.50GHz | 15360 KB |  60011 ±  3667 |  29313 ±  2137 | 0.49
Xeon E5-2697 v3 @ 2.60GHz | 35840 KB |  66571 ±  7477 |  33197 ±  3619 | 0.50
Celeron G1610T  @ 2.30GHz |  2048 KB | 172090 ±   629 | 162328 ±   326 | 0.94
Core i3-3220T   @ 2.80GHz |  3072 KB | 111025 ±  5507 | 114415 ±  2528 | 1.03
Core i5-7200U   @ 2.50GHz |  3072 KB |  92447 ±  1494 |  95249 ±  1134 | 1.03
Xeon X3430      @ 2.40GHz |  8192 KB | 111303 ±   936 | 127647 ±  1503 | 1.15
Core i7 920     @ 2.67GHz |  8192 KB | 123161 ± 35113 | 156068 ± 45355 | 1.27
Xeon X5650      @ 2.67GHz | 12288 KB | 106015 ±  5364 | 140335 ±  6739 | 1.32
Core i7 870     @ 2.93GHz |  8192 KB |  77986 ±   429 | 106040 ±  1043 | 1.36
Core i7-6700    @ 3.40GHz |  8192 KB |  47854 ±   573 |  66893 ±  1367 | 1.40
Core i3-4150    @ 3.50GHz |  3072 KB |  76162 ±   983 | 113265 ±   239 | 1.49
Xeon X5650      @ 2.67GHz | 12288 KB | 101384 ±   796 | 152720 ±  2440 | 1.51
Core i7-3770T   @ 2.50GHz |  8192 KB |  69551 ±  1961 | 128929 ±  2631 | 1.85

The results are... mixed! In general, on most of these machines there is some kind of speedup, or at least they are on a par. The two cases where the array truly trumps the "smart structure" lookup are on machines with lots of cache and not particularly busy: the Xeon E5-1650 above (15 MB cache) is a night build machine, at the moment quite idle; the Xeon E5-2697 (35 MB cache) is a machine for high performance calculations, in an idle moment as well. It does make sense: the original array fits completely in their huge cache, so the compact data structure only adds complexity. At the opposite side of the "performance spectrum" - but where again the array is slightly faster - there's the humble Celeron that powers my NAS; it has so little cache that neither the array nor the "smart structure" fits in it at all. Other machines with cache small enough perform similarly. The Xeon X5650 must be taken with some caution - they are virtual machines on a quite busy dual-socket virtual machine server; it may well be that, although nominally it has a decent amount of cache, during the time of the test it gets preempted by completely unrelated virtual machines several times.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/50323522', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9591796/']}
jdg_378088
stackexchange
llm_judgeable_groundtruth_similarity
40165286
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an SQL query, when I run it in the SQL Server, it works fine as expected. Now I want to use this query with Entity Framework like so: ViewBag.TimeSlots = dbTimeSlots.Data.SqlQuery("SELECT a.id, concat(a.dateSlot, ' - ', a.timeSlot) as dateTimeSlot, sum(IIF(b.dateSlot is null,0,1)) as counter FROM VIP_Preview_TimeSlots as a LEFT OUTER JOIN [CP-VIP-Preview] as b ON a.dateSlot = b.dateSlot AND a.timeSlot = b.timeSlot GROUP BY a.timeSlot, a.dateSlot, a.[order], a.id Having sum(IIF(b.dateSlot is null,0,1)) < 30 ORDER BY a.[order]").ToList(); However when I run it, I get this error: The data reader is incompatible with the specified ‘CP.Models.VIP_Preview_TimeSlots'. A member of the type, 'timeSlot', does not have a corresponding column in the data reader with the same name. Here is my class: public class VIP_Preview_TimeSlots { public int id { get; set; } [DisplayName("Time Slots")] public string timeSlot { get; set; } [DisplayName("Date Slots")] public string dateSlot { get; set; } public int order { get; set; } } public class VIPPreviewTimeSlots : DbContext { public DbSet<VIP_Preview_TimeSlots> Data { get; set; } } I really dont know why this is not working, the query works, I dont know why Entity Framework has an issue with it, how can I fix this? Even when I try a simple Query: ViewBag.TimeSlots = dbTimeSlots.Data.SqlQuery("SELECT id, concat(dateSlot, ' - ', timeSlot) as dateTimeSlot FROM VIP_Preview_TimeSlots").ToList(); I get the same error. Now provide the response and nothing else.
You can use ViewChild to access the input in your component. First, you need to add #someValue to your input so you can read it in the component:

<input #myInput type="file" placeholder="File Name" name="filename" (change)="onChange($event)">

Then in your component you need to import ViewChild from @angular/core :

import { ViewChild } from '@angular/core';

Then you use ViewChild to access the input from the template:

@ViewChild('myInput')
myInputVariable: ElementRef;

Now you can use myInputVariable to reset the selected file, because it's a reference to the input with #myInput . For example, create a method reset() that will be called on the click event of your button:

reset() {
    console.log(this.myInputVariable.nativeElement.files);
    this.myInputVariable.nativeElement.value = "";
    console.log(this.myInputVariable.nativeElement.files);
}

The first console.log will print the file you selected; the second console.log will print an empty array, because this.myInputVariable.nativeElement.value = ""; deletes the selected file(s) from the input. We have to use this.myInputVariable.nativeElement.value = ""; to reset the value of the input because the input's FileList attribute is readonly , so it is impossible to just remove an item from the array. Here's a working Plunker .
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/40165286', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/979331/']}
jdg_378089
stackexchange
llm_judgeable_groundtruth_similarity
7410989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am having difficulty figuring out how to split a list of Ints into a tuple containing two new lists, such that every element (starting with first) goes into the first list and every other element in the second. Like so:

split []        = ([],[])
split [1]       = ([1],[])
split [1,2]     = ([1],[2])
split [1,2,3]   = ([1,3],[2])
split [1,2,3,4] = ([1,3],[2,4])

I'm trying to accomplish this recursively (with guards) and only using the single argument xs . This is my approach that keeps getting error messages:

split :: [Int] -> ([Int],[Int])
split xs
    | length(xs) == 0 = ([],[])
    | length(xs) == 1 = (xs !! 0 : [],[])
    | length(xs) == 2 = (xs !! 0 : [], xs !! 1 : [])
    | otherwise = (fst ++ xs !! 0, snd ++ xs !! 1) ++ split(drop 2 xs))

Now provide the response and nothing else.
Your split function returns a pair, but in the last case you are using ++ on the result of split . That will be a type error, since ++ works on lists, not pairs. There is also a type error because fst and snd are functions to pick out the elements of a pair, but you are using them in a strange way. Furthermore, use pattern matching instead of using length. Also, the case where you test if the length is 2 is not needed, since the general case removes 2 elements, which takes you down to the base case of the empty list. You can also make your function more general by using a type variable a instead of Int in the type. [Edit]: Added code

split :: [a] -> ([a], [a])
split []        = ([], [])
split [x]       = ([x], [])
split (x:y:xys) = (x:xs, y:ys)
  where (xs, ys) = split xys
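A quick sanity check in GHCi (my own illustrative session; the expected outputs match the examples in the question):

ghci> split [1,2,3,4]
([1,3],[2,4])
ghci> split [1,2,3]
([1,3],[2])
ghci> split ([] :: [Int])
([],[])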
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7410989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/943789/']}
jdg_378090
stackexchange
llm_judgeable_groundtruth_similarity
3488
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm struggling to understand the GAN loss function as provided in Understanding Generative Adversarial Networks (a blog post written by Daniel Seita). In the standard cross-entropy loss, we have an output that has been run through a sigmoid function and a resulting binary classification. Seita states: Thus, for [each] data point $x_1$ and its label, we get the following loss function ... $$H((x_1, y_1), D) = -y_1 \log D(x_1) - (1 - y_1) \log (1 - D(x_1))$$ This is just the log of the expectation, which makes sense. However, according to this formulation of the GAN loss, how can we process the data from both the true distribution and the generator in the same iteration? Now provide the response and nothing else.
The Focus of This Question

"How can ... we process the data from the true distribution and the data from the generative model in the same iteration?"

Analyzing the Foundational Publication

In the referenced page, Understanding Generative Adversarial Networks (2017), doctoral candidate Daniel Seita correctly references Generative Adversarial Networks, Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio, June 2014. Its abstract states, "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models ..." This original paper defines two models as MLPs (multilayer perceptrons):

- Generative model, G
- Discriminative model, D

These two models are controlled in a way where one provides a form of negative feedback toward the other, hence the term adversarial.

- G is trained to capture the data distribution of a set of examples well enough to fool D.
- D is trained to discover whether its inputs are G's mocks or the set of examples for the GAN system.

(The set of examples for the GAN system are sometimes referred to as the real samples, but they may be no more real than the generated ones. Both are numerical arrays in a computer, one set with an internal origin and the other with an external origin. Whether the external ones are from a camera pointed at some physical scene is not relevant to GAN operation.)

Probabilistically, fooling D is synonymous with maximizing the probability that D will generate as many false positives and false negatives as it does correct categorizations, 50% each. In information science, this is to say that the limit of information D has of G approaches 0 as t approaches infinity. It is a process of maximizing the entropy of G from D's perspective, thus the term cross-entropy.

How Convergence is Accomplished

The loss function reproduced from Seita's 2017 writing in the question is that of D, designed to minimize the cross entropy (or correlation) between the two distributions when applied to the full set of points for a given training state:

$$H((x_1, y_1), D) = -y_1 \log D(x_1) - (1 - y_1) \log (1 - D(x_1))$$

There is a separate loss function for G, designed to maximize the cross entropy. Notice that there are TWO levels of training granularity in the system:

- That of game moves in a two-player game
- That of the training samples

These produce nested iteration, with the outer iteration as follows:

1. Training of G proceeds using the loss function of G.
2. Mock input patterns are generated from G at its current state of training.
3. Training of D proceeds using the loss function of D.
4. Repeat if the cross entropy is not yet sufficiently maximized; D can still discriminate.

When D finally loses the game, we have achieved our goal:

- G recovered the training data distribution
- D has been reduced to ineffectiveness ("1/2 probability everywhere")

Why Concurrent Training is Necessary

If the two models were not trained in a back-and-forth manner to simulate concurrency, convergence in the adversarial plane (the outer iteration) would not occur on the unique solution claimed in the 2014 paper.

More Information

Beyond the question, the next item of interest in Seita's paper is that "poor design of the generator's loss function" can lead to insufficient gradient values to guide descent and produce what is sometimes called saturation. Saturation is simply the reduction of the feedback signal that guides descent in back-propagation to chaotic noise arising from floating point rounding. The term comes from signal theory.
I suggest studying the 2014 paper by Goodfellow et alia (the seasoned researchers) to learn about GAN technology rather than the 2017 page.
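For reference, the overall two-player objective from that 2014 paper, which the nested iteration described above optimizes (a standard result, reproduced here for clarity):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

The first expectation is evaluated on samples from the true distribution and the second on samples from the generator, which is precisely how both kinds of data enter the same training iteration.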
{}
{'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/3488', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/7858/']}
jdg_378091
stackexchange
llm_judgeable_groundtruth_similarity
63288
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to create an ISO image of my executable by hiding a few files off the ISO image, as I want the user only to see an EXE and not the other directories when they open it. A PC (Windows) is the targeted ENV in which my users will launch the application. Now provide the response and nothing else.
In zsh, with the extended_glob option enabled, you can use ~ to exclude patterns from globs, so you could use:

setopt extended_glob
mv -- *~my_folder my_folder

Or use the negation operator (still with extended_glob ):

mv -- ^my_folder my_folder

Use braces to avoid typing the directory name twice:

mv -- {^,}my_folder

In bash (for other answer-seekers using it), you can use Ksh-style extended globs:

# If it's not already enabled
shopt -s extglob
mv -- !(my_folder) my_folder

You can also use that syntax in zsh if you enable the ksh_glob option.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/63288', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/31543/']}
jdg_378092
stackexchange
llm_judgeable_groundtruth_similarity
18843017
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My compiler is the latest VC++ 2013 RC.

int f(bool b)
{
    return {};            // OK
    return b ? 1 : { };   // C2059: syntax error : '{'
    return b ? 1 : {0};   // C2059: syntax error : '{'
    return b ? {1} : {0}; // C2059: syntax error : '{'
}

Why can braced-init-list not be used in ternary operator? Is this behavior defined as ill-formed by the C++ standard, or just a bug of the VC++ compiler? Now provide the response and nothing else.
Well, here's what the standard says about the braced-init-list (8.5.3.1). List-initialization can be used:

- as the initializer in a variable definition (8.5)
- as the initializer in a new expression (5.3.4)
- in a return statement (6.6.3)
- as a function argument (5.2.2)
- as a subscript (5.2.1)
- as an argument to a constructor invocation (8.5, 5.2.3)
- as an initializer for a non-static data member (9.2)
- in a mem-initializer (12.6.2)
- on the right-hand side of an assignment (5.17)

Since this doesn't mention the conditional operator, I guess your compiler is right. Also note that the conditional operator expects expressions on both sides of : (5.16), and as far as I understand, a brace-initializer is not an expression.
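An illustration of the practical consequence — my own example, not from the original answer: wrapping the braced list in a functional-style cast turns it into an expression, which the conditional operator does accept:

// {0} alone is not an expression, but int{0} is a prvalue expression,
// so this compiles where `return b ? {1} : {0};` does not.
int f(bool b)
{
    return b ? int{1} : int{0};
}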
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18843017', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/508343/']}
jdg_378093
stackexchange
llm_judgeable_groundtruth_similarity
2389560
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the cover image of the book "Gödel, Escher, Bach", depicted below. The interesting feature is that it shows the existence of a subset of $\mathbb{R}^3$ which projects onto $\mathbb{R}^2$ in three different ways to form the letters of the book's title. It is natural to ask for generalizations: for which subsets $A_1, A_2, A_3$ of $\mathbb{R}^2$ is there set $X \subset \mathbb{R}^3$ such that, with $\pi_1, \pi_2, \pi_3$ the projection maps $\mathbb{R}^3 \rightarrow \mathbb{R}^2$, $\pi_1(X) = A_1, \pi_2(X) = A_2$, and $\pi_3(X) = A_3$? More generally, let $\{ \pi_i \}_{i \in I_{n,m}}$ be the canonical projection maps $\mathbb{R}^n \rightarrow \mathbb{R}^m$, where $m \leq n$. For which sets $\{ A_i \}_{i \in I_{n, m}} \subset \mathbb{R}^m$ is there a set $X$ such that $\pi_i (X) = A_i \forall i \in I_{n, m}$? Other interesting considerations: 1) I do not require the set to be connected. Nevertheless this presents an interesting question as to when the set in question is connected. 2) Let $X$ be the largest possible set satisfying the question, supposing it exists. Is there a simple way to calculate its boundary, $\partial X$? 3) What is the volume of the largest possible set in question in terms of $A_i$? It's worth noting that, if $A_1, A_2, A_3$ are measurable subsets of $I^2$, then there is an interesting formula for the volume of $X$ plus the volume of $Y = I^3 - \pi_1^{-1}(A_1) \cap I^3 - \pi_2^{-1}(A_2) \cap I^3 - \pi_3^{-1}(A_3) \cap I^3$. Now provide the response and nothing else.
The obvious (and maximal) candidate for the 3d object is

$$\{\,(x,y,z)\in\Bbb R^3\mid (y,z)\in A_1, (x,z)\in A_2, (x,y)\in A_3\,\}$$

obtained by intersecting the maximal sets that give one of the three projections each. The question is if the projections of this maximal set are as desired. This is the case for the first projection if and only if for each $(y,z)\in A_1$ there exists $x\in\Bbb R$ such that $(x,y)\in A_3$ and $(x,z)\in A_2$. Similarly for the other two projections. Hofstadter's examples work because already in the vertical bar of the E, there is so much material in the B (its lower line with final arc) that the G is guaranteed to work; and similarly, in the lower bar of the E, there is so much material in the G (its almost straight lower line) that the B is guaranteed to work; and finally the vertical bar of the B and the left end of the G are material enough to guarantee the E to work. So in a way, the trick is that the B and the G are less round than you might normally write them.
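Stated symbolically — my own restatement of the criterion given in prose above, for the maximal candidate $X$:

$$\pi_1(X) = A_1 \iff \forall (y,z) \in A_1 \;\; \exists x \in \Bbb R : (x,y) \in A_3 \text{ and } (x,z) \in A_2,$$

and analogously for $\pi_2$ and $\pi_3$ with the roles of the coordinates permuted.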
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/2389560', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/900250/']}
jdg_378094
stackexchange
llm_judgeable_groundtruth_similarity
42113388
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In application using a mvvm-approach it's pretty common that the ViewModel provides one or more Command -properties. These properties often have the type ICommand or DelegateCommand or something like that. I don't understand why we need such an approach in an mvvm application. Wouldn't it be enough to provide public methods and bind the action from the view directly to this public method? Why does ICommand exist? Now provide the response and nothing else.
> Wouldn't it be enough to provide public methods and bind the action from the view directly to this public method? Why does ICommand exist?

1. You can't bind to a method in XAML; you need an object. Therefore you need to wrap the method in an object.

2. It is a common pattern in UI that some actions are not available all the time. In a login form, the Login action becomes available only when you enter a username. In MS Word, the Copy or Cut actions become available only when you select something; otherwise the buttons are disabled and keyboard shortcuts inactive.

3. It is a common pattern that a command can be invoked with different parameters.

Plain event handlers do not meet those requirements, but ICommand serves exactly those purposes:

public interface ICommand
{
    void Execute(object parameter);
    bool CanExecute(object parameter);
    event EventHandler CanExecuteChanged;
}

- It wraps a method in an object.
- It says whether the command is available or not, so the UI component (typically a button or menu item) can reflect it.
- Additionally, it notifies the UI components that the availability of the command has changed, so the UI can reflect it.

Now, let's consider the Copy&Paste scenario. Using ICommand, the markup can look like this:

<Button Content="Paste" Command="{Binding PasteCommand}" />
<MenuItem Header="Paste" Command="{Binding PasteCommand}" />

public ICommand PasteCommand { get; } = new DelegateCommand(Paste, () => Clipboard != null);

What would it look like without ICommand ? To make it easier, let's imagine that XAML allowed binding to methods:

<Button Content="Paste" Click="{Binding Paste}" IsEnabled="{Binding CanPaste}" />
<MenuItem Header="Paste" Click="{Binding Paste}" IsEnabled="{Binding CanPaste}"/>

public void Paste() {....}

private bool _canPaste;
public bool CanPaste
{
    get { return _canPaste; }
    set
    {
        if (_canPaste != value)
        {
            _canPaste = value;
            OnNotifyPropertyChanged(nameof(CanPaste));
        }
    }
}

As you can see, not only is it more verbose, it's also a violation of the DRY principle. You need to specify both the Paste and CanPaste bindings every time you want to use the command. What if you started without CanPaste and later wanted to add it? Then you would have to add the CanPaste binding to every occurrence of the Paste call. I guarantee you that you would forget it somewhere. Now, if you did this in WPF:

<Button Content="Paste" Click="Call_ViewModel_Paste" />

// in codebehind:
void Call_ViewModel_Paste(object sender, RoutedEventArgs e)
{
    ViewModel.Paste();
}

or eventually:

<Button Content="Paste">
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="Click">
            <ei:CallMethodAction MethodName="Paste" TargetObject="{Binding}"/>
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>

Both approaches are correct, they follow MVVM principles and work without ICommand, but as you can see, neither is as elegant as ICommand.
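The answer uses DelegateCommand without showing it. A minimal sketch of such a class — frameworks like Prism ship richer versions; this simplified form is my own illustration:

using System;
using System.Windows.Input;

// Minimal ICommand wrapper around delegates.
public sealed class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute, Func<bool> canExecute = null)
    {
        _execute = execute ?? throw new ArgumentNullException(nameof(execute));
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter) => _canExecute == null || _canExecute();

    public void Execute(object parameter) => _execute();

    public event EventHandler CanExecuteChanged;

    // Call this when the condition behind CanExecute may have changed,
    // so bound buttons/menu items re-evaluate their enabled state.
    public void RaiseCanExecuteChanged() =>
        CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}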
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42113388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6735292/']}
jdg_378095
stackexchange
llm_judgeable_groundtruth_similarity
11731651
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I don't need to use instances of those ResultSet and Connection anymore in my program, why should I call the .close() method on both of them ? What are the dangers (if any) of not doing so ? Now provide the response and nothing else.
There are two questions here:

Database connections

Holding a database connection open consumes resources on the database; it uses memory, and databases are configured to have a maximum number of connections, so you increase the likelihood of running out of connections. Also the state of the session is maintained, so you can run into trouble with locks being accidentally held beyond their intended scope. On the positive side, prepared statements stay compiled and ready for use, so if you code and use your SQL correctly, you can gain significant performance benefits from reusing prepared statements. However, doing so may complicate your code, and care is advised. Also, obtaining a connection is quite expensive, which is why connection pools exist. These leave the connections open, but the client gets connections, uses them, then releases them back to the pool when done.

Result set

Holding result sets open will also hold certain locks open if you don't commit (which closes the result set); thus, depending on your application, you can quickly hit deadlocks or severe liveness issues. Regardless of whether you hold connections open, always close your result sets as soon as possible to release as much resource back to the database as you can.
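A standard way to guarantee the cleanup described above is Java 7+ try-with-resources (the dataSource, table and column names here are my own illustrative assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Connection, PreparedStatement and ResultSet all implement AutoCloseable,
// so each is closed automatically in reverse order, even on exceptions.
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement(
             "SELECT name FROM users WHERE id = ?")) {
    ps.setLong(1, userId);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString("name"));
        }
    }
}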
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11731651', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1542954/']}
jdg_378096
stackexchange
llm_judgeable_groundtruth_similarity
34900159
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have written a benchmark for get and remove of HashMap as below:

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class HashMapBenchmark {
    @State(Scope.Benchmark)
    public static class Mystate {
        HashMap<String,String> hashmapVar = new HashMap<String,String>();
        String key0 = "bye";

        @Setup(Level.Iteration)
        public void setup(){
            hashmapVar.put(key0,"bubye");
        }
    }

    @Benchmark
    public void hashmapGet(Mystate state, Blackhole bh) {
        bh.consume(state.hashmapVar.get(state.key0));
    }

    @Benchmark
    public void hashmapRemove(Mystate state, Blackhole bh) {
        bh.consume(state.hashmapVar.remove(state.key0));
    }
}

It produces this result:

Benchmark                           Mode  Samples  Score  Score error  Units
c.b.HashMapBenchmark.hashmapGet     avgt       60  6.348        0.320  ns/op
c.b.HashMapBenchmark.hashmapRemove  avgt       60  5.180        0.074  ns/op

As per the result, remove() is slightly faster than get() . Even to remove an element, first it has to retrieve the element, doesn't it? How can remove() be faster? Or am I missing something? Update: After using the latest JMH (1.11.3), here is the result:

Benchmark                       Mode  Cnt  Score   Error  Units
HashMapBenchmark.hashmapGet     avgt   60  9.713 ± 0.277  ns/op
HashMapBenchmark.hashmapRemove  avgt   60  7.677 ± 0.166  ns/op

Now provide the response and nothing else.
So the trouble is, these benchmarks measure different things: get() from a populated map, and remove() from an (eventually) empty map. The comparison is meaningless, and you may throw the benchmark away. You have to guarantee the operation is done against the same HashMap . Unfortunately, that requires either using @Setup(Invocation) , which is bad on its own (read the Javadoc!), or sucking up the HashMap construction costs into the benchmark itself:

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class HashMapBenchmark {

    @Benchmark
    public String get() {
        HashMap<String, String> hm = createMap();
        return hm.get("bye");
    }

    @Benchmark
    public String remove() {
        HashMap<String, String> hm = createMap();
        return hm.remove("bye");
    }

    // extra protection from optimization
    @CompilerControl(CompilerControl.Mode.DONT_INLINE)
    private HashMap<String, String> createMap() {
        HashMap<String, String> hm = new HashMap<>();
        hm.put("bye", "bye");
        return hm;
    }
}

You can be extra-careful and peel the map creation into a separate non-inlineable method: today's compilers do not optimize across calls. On my i7-4790K, 4.0 GHz, Linux x86_64, JDK 8u66:

Benchmark                Mode  Cnt   Score   Error  Units
HashMapBenchmark.get     avgt   15  24.343 ± 0.351  ns/op
HashMapBenchmark.remove  avgt   15  24.611 ± 0.369  ns/op

No drastic difference. In fact, if you look into the generated code with -prof perfasm , it would yield a few quantifiable differences in there. Or, you can quickly characterize both workloads with -prof perfnorm . Note that this case does not answer whether one method or the other is better on real maps. The argument could be made for both: get does not modify the map, and therefore does not cause memory stores; remove may help load factors so that the next remove would get faster, etc. A single benchmark and a paragraph of text is far, far away from any fruitful discussion.
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34900159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2270563/']}
jdg_378097
stackexchange
llm_judgeable_groundtruth_similarity
37377
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please help me understand this. Some brexiteer colleagues say it won't be a border, because of the Good Friday Agreement but WTO rules mean there must be a border. Will the UK then decide to break WTO rules consistently and prefer to upset the WTO instead of the Irish? So, is this a matter of illegality versus practicality? Now provide the response and nothing else.
It is not WTO rules which require a hard border; it is regulatory divergence. There will be a border in any scenario short of Irish unification. The Good Friday Agreement says that it will be a soft border, without intrusive limits and controls on cross-border traffic and commerce. Such a soft border is only practical if the rules on both sides of the border are compatible. There must be no taxes or tariffs if one shops at the grocer on the other side of the border. An electric appliance that is deemed safe on one side of the border must be deemed safe on the other side. If significantly different visa rules apply on both sides of the border, there must be controls to enforce them.

Part of the promise of the Brexiteers was that the UK would be free from EU regulations and requirements. Once they use that freedom to diverge, both the EU and the UK will have to police the border to make that happen.

The EU plans to regulate some disposable plastic gadgets (like drinking straws) to help the environment. Imagine the UK does not agree. There would have to be EU inspectors at the border to make sure that consumers and businesses don't bring banned plastics over the border.

The UK plans to limit the entry of EU citizens. EU citizens can travel to Ireland without limit. If the UK wants to stop them from crossing the border without a paper trail, it has to send UK immigration officials to all border crossings so that EU citizens can get their documents stamped.
{}
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/37377', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/24174/']}
jdg_378098
stackexchange
llm_judgeable_groundtruth_similarity
94086
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is the best practice when it comes to writing classes that might have to know about the user interface. Wouldn't a class knowing how to draw itself break some best practices since it depends on what the user interface is (console, GUI, etc)? In many programming books I've come across the "Shape" example that shows inheritance. The base class shape has a draw() method that each shape such as a circle and square override. This allows for polymorphism. But isn't the draw() method very much dependent on what the user interface is? If we write this class for say, Win Forms, then we cannot re-use it for a console app or web app. Is this correct? The reason for the question is that I find myself always getting stuck and hung up on how to generalize classes so they are most useful. This is actually working against me and I'm wondering if I'm "trying too hard". Now provide the response and nothing else.
> What is the best practice when it comes to writing classes that might have to know about the user interface. Wouldn't a class knowing how to draw itself break some best practices since it depends on what the user interface is (console, GUI, etc)?

That depends on the class and the use case. A visual element knowing how to draw itself is not necessarily a violation of the single responsibility principle.

> In many programming books I've come across the "Shape" example that shows inheritance. The base class shape has a draw() method that each shape such as a circle and square override. This allows for polymorphism. But isn't the draw() method very much dependent on what the user interface is?

Again, not necessarily. If you can create an interface (drawPoint, drawLine, setColor, etc.), you can pretty much pass any context for drawing things to the shape, for example within the shape's constructor. This would enable shapes to draw themselves on a console or any canvas given.

> If we write this class for say, Win Forms, then we cannot re-use it for a console app or web app. Is this correct?

Well, that's true. If you write a UserControl (not a class in general) for Windows Forms, then you won't be able to use it with a console. But that's not a problem. Why would you expect a UserControl for Windows Forms to work with any kind of presentation? The UserControl should do one thing and do it well. It's bound to a certain form of presentation by definition. In the end, the user needs something concrete and not an abstraction. This might only be partly true for frameworks, but for end-user applications, it is.

However, the logic behind it should be decoupled, so you can use it again with other presentation technologies. Introduce interfaces where necessary to maintain orthogonality for your application. The general rule is: the concrete things should be exchangeable with other concrete things.

> The reason for the question is that I find myself always getting stuck and hung up on how to generalize classes so they are most useful. This is actually working against me and I'm wondering if I'm "trying too hard".

You know, extreme programmers are fond of their YAGNI attitude. Don't try to write everything generically, and don't try too hard to make everything general purpose. This is called overengineering and will eventually lead to totally convoluted code. Give each component exactly one task and make sure it does it well. Put in abstractions where necessary, where you expect things to change (e.g. an interface for the drawing context, as stated above).

In general, when writing business applications, you should always try to decouple things. MVC and MVVM are great to decouple the logic from the presentation, so you can reuse it for a web presentation or a console application. Keep in mind that in the end, some things have to be concrete. Your users can't work with an abstraction; they need something concrete. Abstractions are only helpers for you, the programmer, to keep the code extensible and maintainable. You need to think about where you need your code to be flexible. Eventually all abstractions have to give birth to something concrete.

Edit: If you want to read more about architecture and design techniques which can provide best practices, I suggest you read @Catchops' answer and read about SOLID practices on Wikipedia. Also, for starters, I always recommend the following book: Head First Design Patterns .
It'll help you understand abstraction techniques/OOP design practices, more so than the GoF book (which is excellent, it just doesn't suit beginners).
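A minimal sketch of the "pass the drawing context in" idea described above (the Canvas name and its drawLine signature are my own illustrative choices, not a specific framework API):

// The abstraction every presentation technology implements.
interface Canvas {
    void drawLine(int x1, int y1, int x2, int y2);
}

abstract class Shape {
    // The shape knows its geometry; the Canvas knows the medium.
    abstract void draw(Canvas canvas);
}

class Square extends Shape {
    private final int x, y, side;

    Square(int x, int y, int side) { this.x = x; this.y = y; this.side = side; }

    @Override
    void draw(Canvas c) {
        c.drawLine(x, y, x + side, y);               // top
        c.drawLine(x + side, y, x + side, y + side); // right
        c.drawLine(x + side, y + side, x, y + side); // bottom
        c.drawLine(x, y + side, x, y);               // left
    }
}

// A console-backed Canvas and a GUI-backed Canvas can now both render
// the same Square, without the Shape knowing which medium is in use.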
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/94086', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/7957/']}
jdg_378099
stackexchange
llm_judgeable_groundtruth_similarity
41572974
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently making use of the WordPress REST API, and vue-router to transition between pages on a small single page site. However, when I make an AJAX call to the server using the REST API, the data loads, but only after the page has already rendered. The vue-router documentation provides insight in regards to how to load data before and after navigating to each route, but I'd like to know how to load all route and page data on the initial page load, circumventing the need to load data each time a route is activated. Note, I'm loading my data into the acf property, and then accessing it within a .vue file component using this.$parent.acfs .

main.js Router Code:

const router = new VueRouter({
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About },
    { path: '/tickets', component: Tickets },
    { path: '/sponsors', component: Sponsors },
  ],
  hashbang: false
});

exports.router = router;

const app = new Vue({
  router,
  data: {
    acfs: ''
  },
  created() {
    $.ajax({
      url: 'http://localhost/placeholder/wp-json/acf/v2/page/2',
      type: 'GET',
      success: function(response) {
        console.log(response);
        this.acfs = response.acf;
        // this.backgroundImage = response.acf.background_image.url
      }.bind(this)
    })
  }
}).$mount('#app')

Home.vue Component Code:

export default {
  name: 'about',
  data () {
    return {
      acf: this.$parent.acfs,
    }
  },
}

Any ideas? Now provide the response and nothing else.
My approach is to delay construction of the store and main Vue until my AJAX call has returned.

store.js

import Vue from 'vue';
import Vuex from 'vuex';
import actions from './actions';
import getters from './getters';
import mutations from './mutations';

Vue.use(Vuex);

function builder(data) {
  return new Vuex.Store({
    state: {
      exams: data,
    },
    actions,
    getters,
    mutations,
  });
}

export default builder;

main.js

import Vue from 'vue';
import VueResource from 'vue-resource';
import App from './App';
import router from './router';
import store from './store';

Vue.config.productionTip = false;
Vue.use(VueResource);
Vue.http.options.root = 'https://miguelmartinez.com/api/';

Vue.http.get('data')
  .then(response => response.json())
  .then((data) => {
    /* eslint-disable no-new */
    new Vue({
      el: '#app',
      router,
      store: store(data),
      template: '<App/>',
      components: { App },
    });
  });

I have used this approach with other frameworks such as Angular and ExtJS.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41572974', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4783229/']}
jdg_378100
stackexchange
llm_judgeable_groundtruth_similarity
126238
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a log file, and when I open it via vim, it looks not readable, and it has [converted] at the bottom. What does [converted] mean? Is there a way to fix the format issue so that it is human readable? Now provide the response and nothing else.
It means that vim detected that the file did not match the charset given by your locale and made a conversion. If you run the command :set from within vim:

:set
--- Options ---
  autoindent          fileformat=dos      scroll=7            textwidth=70
  background=dark     filetype=asciidoc   shiftwidth=2        ttyfast
  cscopetag           helplang=en         softtabstop=2       ttymouse=sgr
  cscopeverbose       hlsearch            syntax=asciidoc
noendofline           list                tabpagemax=3        expandtab
  ruler               textmode
  backspace=indent,eol,start
  comments=s1:/*,ex:*/,://,b:#,:%,:XCOMM,fb:-,fb:*,fb:+,fb:.,fb:>
  cscopeprg=/usr/bin/cscope
  fileencoding=utf-8
  fileencodings=ucs-bom,utf-8,latin1

Notice the last 2 options, fileencoding & fileencodings . The first is the encoding used for the current file, the second is a comma-separated list of recognized encodings. So when you see that message, vim is telling you that it's completed converting the file from fileencoding to encoding . Check out :help fileencoding or :help encoding for additional details.

Reference

I found the thread below, which I used as a source when this was answered. The original site is now gone (accessible in this answer's history), so I'm moving the contents of that thread here for posterity's sake. The link was still in the Wayback Machine.

#1 Eli the Bearded, January 21st, 2004 - 06:51 pm ET

In comp.os.linux.misc, Leon. wrote:

> "Gaétan Martineau" wrote in message news:E9jLb.2903$
>> [ system_notes]$ vi installation_chouette.txt
>> What means the [converted] at the bottom of the screen, as in:
>> "installation_chouette.txt" [converted] 2576L, 113642C

It means that vim detected that the file did not match the charset given by your locale and made a conversion. What does

:set

tell you about "fileencoding" and "fileencodings"? The first is the encoding used for the current file, the second is a comma separated list of recognized encodings.

>> This file has accented characters. How can I save the file so that if I reload it again, I do not see "converted"?

Figure out what charset you want, and then:

:set fileencoding=[charset]
:w

> It means deleting the Microsoft Dos/Windows CR LF end of lines, to just LF - unix standard end of lines.

It does not. If you open a file with DOS line ends, vim reports [dos] after the filename, not [converted]. If you do have a dos file that you wish to convert to unix line ends, you can:

:set fileformat=unix
:w

Elijah
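To make the file readable again, one commonly used approach (standard Vim commands; the utf-8 value is an assumption — substitute the file's real charset):

" Reopen the current file, forcing a specific encoding:
:e ++enc=utf-8

" If it then displays correctly, re-save it in the encoding you want:
:set fileencoding=utf-8
:w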
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/126238', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/39989/']}
jdg_378101
stackexchange
llm_judgeable_groundtruth_similarity
36497723
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am developing a Ruby On Rails app using Rails 4.2.6. I am using Turbolinks alongside jquery.turbolinks (sorry I couldn't post the links to those elements as I am a newbie on the site). My problem is very simple but I just can't solve it. Here it is: I have a form fetched through AJAX

<div class="card-footer">
  <a class="btn btn-sm btn-primary-outline" data-remote="true" href="/profiles/Mke5kA/positions/new"><i class="fa fa-plus"></i> Nouvelle expérience professionnelle</a>
  <div id="new_position_form"></div>
</div>

The form contains Select2 elements that get their data through AJAX

= simple_form_for [profile, position], remote: true, html: {id: 'positionForm', class: 'm-b-1'} do |f|
  = f.input :company_id, as: :select, input_html: {:'data-behaviour' => 'company-select2', :'data-kind' => 'company'}
  = f.input :title
  = f.input :summary
  - location = f.object.build_location
  = f.simple_fields_for :location do |l|
    = render 'locations/fields', l: l, city: position.city
  = render "profiles/shared/date_fields", f: f, model: position
  = f.input :skill_list, as: :select, input_html: {multiple: true, :data => {:behaviour => 'acts-as-taggable', :'taggable-context' => 'skills'}}
  %button.btn.btn-primary{:type => "submit"}= icon('check-square-o', 'Enregistrer')
  = link_to icon('remove', 'Annuler'), 'javascript:void(0)', data: {:'lgnk-behaviour' => "remove-form", :'lgnk-target' => "#positionForm" }, class: 'btn btn-secondary'

The Select2 elements are "activated" currently upon Rails Turbolinks events "page:load page:update", but I have also tried "page:change". When the form is fetched, the select2 elements are fine (activated correctly). My problem appears when I try typing in the Select2 that are using AJAX to get the data: all the select2s are duplicated. Here is how I get the Select2 initialized:

var loc_tag = function() {
  $('[data-behaviour="acts-as-taggable"]').not('.select2-hidden-accessible').each(function (index, element) {
    if ($(element).data('value')) {
      var options = $(element).data('value').split(', ');
      $.each(options, function(key, tag){
        $(element).append($('<option selected></option>').val(tag).text(tag));
      });
    }
    $(element).select2({
      ajax: {
        url: "/tags?context="+$(element).data('taggable-context'),
        dataType: 'json',
        headers: { "Accept": "application/json" },
        delay: 250,
        data: function (params) {
          return {
            q: params.term, // search term
            page: params.page
          };
        },
        processResults: function (data, page) {
          return { results: data };
        },
        cache: true
      },
      escapeMarkup: function (markup) { return markup; }, // let our custom formatter work
      minimumInputLength: 2,
      tags: true,
      language: "fr",
      theme: "bootstrap",
      width: "100%",
      placeholder: 'Mots clés...'
    });
  });
};

$(document).on('page:load page:update', loc_tag);

I want the Select2 elements to get initialized only once (when the form is fetched) and not upon AJAX responses on them getting their data. I have tried jQuery.not(".select2-hidden-accessible") on the elements using Select2 (select2-hidden-accessible being the class Select2 adds to an initialized Select2 element) but it does not work. Many thanks for your kind help! Now provide the response and nothing else.
When using Turbolinks 5 and select2, the select2 object is no longer attached (see below for a test) to the <select> when using the back button to return to a page. A new select2 object is created and attached after going back, but it was unusable. jack's answer didn't work for me because when the new select2 object is added, the <select> still has class='select2-hidden-accessible' which, among other things, sets width: 1px !important. When the new select2 object is created, it's basically invisible. The key for me was to destroy all select2 objects before Turbolinks caches the page. Here is the solution that worked for me:

$(document).on("turbolinks:before-cache", function() {
  $('.select2-input').select2('destroy');
});
$(document).on('turbolinks:load', function() {
  $('.select2-input').select2();
});

More Detail

I believe this is the correct approach given the Turbolinks documentation (emphasis mine): Preparing the Page to be Cached Listen for the turbolinks:before-cache event if you need to prepare the document before Turbolinks caches it. You can use this event to reset forms, collapse expanded UI elements, or tear down any third-party widgets so the page is ready to be displayed again.

Testing select2 Existence

To test if the select2 object is attached to the <select>, you can execute the following in the console:

$('.select2-input').first().data('select2')
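One caveat worth noting: select2 v4 throws if destroy is called on an element that was never initialized (for example, a page where the load handler never ran for some .select2-input element). A defensive variant of the cache handler - a sketch, assuming the same .select2-input selector as above:

$(document).on("turbolinks:before-cache", function() {
  $('.select2-input').each(function() {
    // only tear down widgets that were actually initialized on this element
    if ($(this).data('select2')) {
      $(this).select2('destroy');
    }
  });
});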
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36497723', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3018921/']}
jdg_378102
stackexchange
llm_judgeable_groundtruth_similarity
200162
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Can someone please explain to me what is the advantage of using base biasing over emitter biasing. Why is a 'feedback resistor' used in base biasing? Now provide the response and nothing else.
One major parameter that decides the choice of biasing in BJT circuits is bias stability. Since β (hFE) varies widely from transistor to transistor, a stable bias produces minimal shift of the Q-point over wide changes in β. Mathematically, the stability factor is defined as S = ΔI_C / ΔI_CBO. S depends on the circuit configuration and the bias resistors, and it should be as small as possible. Consider the three bias configurations (the original answer illustrated them with circuit figures): choose the one with a low value of S for better stability - a high value of S is never desirable. If one of the transistors in your design stops working, its replacement may not have the same β, and you want the Q-point to be affected as little as possible by that error in the β value - hence the need for a low S. Consider these calculations:

VCEQ changes by 41% when β changes by 50%.
VCEQ changes by 25% when β changes by 50%.
VCEQ changes by 6% when β changes by 50%.
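For reference, the textbook expressions behind this comparison (a sketch using the standard resistor names, since the original circuit figures are not available here; see e.g. Millman's treatment of bias stability):

$$S=\frac{\Delta I_C}{\Delta I_{CBO}}$$

- Fixed (base) bias: $S = \beta + 1$ - the worst case.
- Collector-to-base feedback bias: $S = \dfrac{\beta+1}{1+\beta\,R_C/(R_C+R_B)}$.
- Voltage-divider (emitter) bias, with $R_B = R_1 \parallel R_2$: $S = \dfrac{(\beta+1)(1+R_B/R_E)}{\beta+1+R_B/R_E}$, which approaches $1+R_B/R_E$ for large $\beta$ and is the smallest of the three when $R_B/R_E$ is kept small.

This ordering is consistent with the 41% / 25% / 6% figures quoted above.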
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/200162', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/91384/']}
jdg_378103
stackexchange
llm_judgeable_groundtruth_similarity
2069036
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let the function $f(x)=x^2$ be defined on the set of positive integers $\{1,2,\ldots\}$. Let the set $S$ be defined as follows: $$S:=\bigl\{x: f(x)\geqslant |S|\bigr\}.$$ Is the set $S$ well defined? Why not? NB. I have a more complicated function than $f(x)=x^2$. Thanks a lot. Now provide the response and nothing else.
The statement $$S=\{x: f(x)\ge\vert S\vert\}$$ is a meaningful statement about a set $S$. However, it doesn't constitute a definition: you can't use the thing you're trying to define, to define itself.

Now, that said, there are contexts in which self-referential "definitions" work! (A keyword here is "impredicative.") For example, defining a function by recursion can be viewed as such a definition: if we want to build a function $f$ satisfying "$f(n)$ is the number of $k<n$ such that $f(k)$ is even," we can define $f$ as $$f(n)=I(f\upharpoonright n)$$ (where $f\upharpoonright n$ is the function $f$ restricted to numbers $<n$, and $I$ applied to a function with finite domain outputs the number of times that function spits out an even number). This may seem bonkers at first, but it turns out to make perfect sense:

- There are no $k<0$, so $f(0)=0$.
- There is one $k<1$ - namely $0$ - and $f$ spits out an even number on that input (since $f(0)=0$, as we concluded above) - so $f(1)=1$.
- There's still only one $k<2$ with $f(k)$ even - so $f(2)=1$.

And it's not hard to see that indeed $f(n)=1$ for all $n>0$. See e.g. this paper for a serious treatment of definition by recursion as self-reference.

But we have to be careful. Even if we allow $S$ to appear in its own definition, we still don't let anything go: in order not to be nonsense, a self-referential definition has to have two properties:

- It's true of at most one set.
- It's true of at least one set.

That is, something satisfying the definition has to exist, and has to be unique. Neither of these is obviously true of the statement you've written down. Indeed, the second one is false in the case you outline: in the case $f(x)=x^2$, there is no $S$ with this property! Such an $S$ would have to be finite (since we never have $f(x)$ infinite), but then taking $n>\vert S\vert$ we'd have $n\in S$ (since $n^2>n>\vert S\vert$). But there are infinitely many $n>\vert S\vert$ if $S$ is finite, so then $S$ would be infinite; contradiction.

(Note that all we used was that $x^2$ is not bounded. However, boundedness isn't enough to guarantee the existence of such an $S$: consider the function $f(0)=2$, $f(n)=1$ for $n>0$. Can there be any set $S$ such that $S=\{x: f(x)\ge\vert S\vert\}$? It's actually a good exercise to show that there is some function $f$ for which such an $S$ exists! HINT: see what happens if you make $f$ take on its maximum value infinitely many times . . .)
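To see that the recursive "definition" of $f$ really does pin down a unique function - unlike the definition of $S$, where the whole set appears on both sides at once - here is a small sketch (Python, written just for this answer) that builds $f$ value by value, exactly as described above:

def build_f(n_max):
    f = []  # f[k] is already determined for every k < len(f)
    for n in range(n_max):
        # I(f restricted to n) = the number of k < n with f(k) even
        f.append(sum(1 for k in range(n) if f[k] % 2 == 0))
    return f

print(build_f(6))  # [0, 1, 1, 1, 1, 1]

Each value depends only on strictly earlier values, which is what makes this particular self-reference legitimate.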
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2069036', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/326647/']}
jdg_378104
stackexchange
llm_judgeable_groundtruth_similarity
2041587
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How could we show that the sequence $a_n=\frac{2n+1}{(n-1)^2}$ decreases monotonically? When we take the quotient $\frac{a_n}{a_{n+1}}$ we get $\frac{n^2(2n+1)}{(n-1)^2(2n+3)}$. How can we conclude that this quotient is $\geq 1$? Now provide the response and nothing else.
Alternatively, one may write it as the sum of two decreasing sequences:$$\frac{2n+1}{(n-1)^2}=\frac{2(n-1)+3}{(n-1)^2}=\frac{2}{n-1}+\frac{3}{(n-1)^2},\qquad n\ge 2.$$
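To finish the quotient approach from the question as well: showing $\frac{a_n}{a_{n+1}}=\frac{n^2(2n+1)}{(n-1)^2(2n+3)}\ge 1$ amounts to comparing numerator and denominator directly,
$$n^2(2n+1)-(n-1)^2(2n+3)=(2n^3+n^2)-(2n^3-n^2-4n+3)=2n^2+4n-3>0\qquad\text{for } n\ge 1,$$
so the numerator is the larger of the two, the quotient exceeds $1$, and the sequence decreases for $n\ge 2$.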
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2041587', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/80708/']}
jdg_378105
stackexchange
llm_judgeable_groundtruth_similarity
714174
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Follow-up to this question: Why proper time is a measure of space? . The selected answer to me tells us why proper time is an invariant quantity, but I'm still wondering why we equate it to $ds$ . Can there not be two independent invariant quantities? Now provide the response and nothing else.
Generally speaking, the line element can be written as $$ds^2=g_{\mu\nu}(x)dx^{\mu}dx^{\nu},$$ where $g_{\mu\nu}(x)$ are the metric components at the point $(x^0,x^1,x^2,x^3)$ . For example, working in cartesian coordinates $(t,x,y,z)$ , for the Minkowski metric $g_{\mu\nu}(x)=\text{diag}(-1,1,1,1)$ , this simply becomes $$ds^2=-dt^2+dx^2+dy^2+dz^2.$$ Proper time is the time experienced by an observer as it reads on their own clock. This means that an observer would describe their position in spacetime, in their coordinates , say $(t',x',y',z')$ , to be $(\tau,x_0,y_0,z_0)$ , where $\tau$ is the proper time measured by the observer and $x_0,y_0,z_0$ are constants (because in their coordinates the observer is always at rest and everything else moves around them). In particular, this implies that $dx'=dy'=dz'=0$ in the observer's coordinates. So, in this set of coordinates, the line element reads $$ds^2=g'_{\mu\nu}(x')dx'^{\mu}dx'^{\nu}=g'_{00}(\tau,x_0,y_0,z_0) d\tau^2.$$ Now, for an inertial observer (i.e. an observer moving freely on a geodesic), their coordinates are given by the inertial coordinates at the point where the observer is. These coordinates have a couple of important properties: the vanishing of the Christoffel symbols and the fact that the metric is given by the Minkowski metric at the point where the observer is (this goes for both statements). In other words, $g'_{\mu\nu}(\tau,x_0,y_0,z_0)=\text{diag}(-1,1,1,1)$ and the above becomes $ds^2=-d\tau^2$ . For a non-inertial observer, I must admit I have never seen a good explanation of what is meant by "their coordinates" in this case. I believe even in this case there exists a set of coordinates such that the metric at the point where the observer is is still given by the Minkowski metric, so the above argument for the line element is unchanged (the difference in this case being that the Christoffel symbols do not vanish anymore). Maybe someone more knowledgeable may correct me on this one if I am wrong.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/714174', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/337317/']}
jdg_378106
stackexchange
llm_judgeable_groundtruth_similarity
244193
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If an operator commutes with the Hamiltonian of a problem, must it always admit degeneracy? It appears that not necessarily. For example, the parity operator commutes with the Hamiltonian of a free particle as well as that of the 1-D linear harmonic oscillator. But in the former case, we have two-fold degeneracy for a given energy and no degeneracy for the latter. Does it mean that $[P, H]=0$ is not the sufficient condition to have degeneracy in the energy eigenstates? Now provide the response and nothing else.
If an operator commutes with the Hamiltonian of a problem, must it always admit degeneracy? No. The identity always commutes with the Hamiltonian, yet we don't generally find all energy levels to be degenerate. Does it mean that $[P,H]=0$ is not the sufficient condition to have degeneracy in the energy eigenstates? Yes. It is not a sufficient condition. Let $A,B$ be any two self-adjoint operators that commute, $[A,B]=0$. Define the eigenvectors of $A$ through $$A|a\rangle=a\ |a\rangle$$ Now we prove that $B|a\rangle$ is also an eigenvector of $A$:$$A(B|a\rangle)=BA|a\rangle=a(B|a\rangle)$$ Welp, that was easy. We could naively say that $a$ is degenerate, because both $|a\rangle$ and $B|a\rangle$ have the same eigenvalue, but this breaks down if $|a\rangle$ is an eigenvector of $B$, because in that case $B|a\rangle\propto |a\rangle$ and there is no degeneracy! Therefore, if $|a\rangle$ is not an eigenvector of $B$ we find that $a$ is degenerate, with at least two eigenvectors, $|a\rangle$ and $B|a\rangle$. If $|a\rangle$ is an eigenvector of $B$ we can't conclude anything about the degeneracy of $a$. In the case $B=\mathbb I$, we obviously have $[H,B]=0$, but as any vector is an eigenvector of $\mathbb I$, the fact that these operators commute gives no information about energy-degeneracies. In the case of a free particle, the parity operators acts on kets by changing the sign, $P|\boldsymbol p\rangle=|-\boldsymbol p\rangle$, which is not proportional to $|\boldsymbol p\rangle$ (a.e.), and therefore we do have the two-fold degeneracy you said.
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/244193', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/36793/']}
jdg_378107
stackexchange
llm_judgeable_groundtruth_similarity
6348143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a problem similar to: HQL - row identifier for pagination I'm trying to implement pagination using HQL. I have a PostgreSQL database.

int elementsPerBlock = 10;
int page = 2; //offset = 2*10
String sqlQuery = "FROM Messages AS msg " +
    " LEFT JOIN FETCH msg.commands AS cmd " +
    "ORDER BY msg.identifier ASC";
Query query = session.createQuery( sqlQuery )
    .setFirstResult( elementsPerBlock * ( (page-1) +1 ) )
    .setMaxResults( elementsPerBlock );

What happens is that Hibernate fetches ALL the Messages, and returns the needed ones after they were all loaded. Thus, Hibernate fetches 210000 entities instead of the 30 which are returned (each Messages has exactly 2 commands). Is there a way to reduce the overhead by a factor of 7000? edit: I've tried adding .setFetchSize( elementsPerBlock ) . It didn't help. edit 2: the SQL query that is generated is:

select ... from schemaName.messages messages0_
left outer join schemaName.send_commands commands1_
on messages0_.unique_key=commands1_.message_key
order by messages0_.unique_identifier ASC

Absolutely no LIMIT or OFFSET Now provide the response and nothing else.
Per the JPA 2.0 specification , section 3.8.6 Query Execution, The effect of applying setMaxResults or setFirstResult to a query involving fetch joins over collections is undefined. It varies from database to database, and in my experience, the result is Hibernate usually does the paging in memory instead of at the database query level. What I've usually done is used a separate query to get the ids of the desired objects, and pass that into the query with the fetch join.
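A sketch of that two-query pattern (the id property name here is an assumption - adjust it to whatever maps to unique_key in your Messages entity):

// Query 1: page over the ids only - LIMIT/OFFSET happens at the SQL level here
List<Long> ids = session.createQuery(
        "select msg.id from Messages msg order by msg.identifier asc")
    .setFirstResult(elementsPerBlock * (page - 1))
    .setMaxResults(elementsPerBlock)
    .list();

// Query 2: fetch join only the rows of the current page
List<Messages> messages = session.createQuery(
        "from Messages msg left join fetch msg.commands "
      + "where msg.id in (:ids) order by msg.identifier asc")
    .setParameterList("ids", ids)
    .list();

This keeps the paging in the database while still eagerly loading the commands collection for just the rows you display.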
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6348143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/752462/']}
jdg_378108
stackexchange
llm_judgeable_groundtruth_similarity
56739270
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to solve this challenge on codewars . According to the challenge, the parts of array: ls = [0, 1, 3, 6, 10] Are ls = [0, 1, 3, 6, 10]ls = [1, 3, 6, 10]ls = [3, 6, 10]ls = [6, 10]ls = [10]ls = [] And we need to return an array with the sums of those parts. So my code is as follows: function partsSums(ls) { let arrayOfSums = []; while(ls.length > 0) { let sum = ls.reduce((a, b) => a + b); arrayOfSums.push(sum); ls.shift(); }return arrayOfSums;}console.log(partsSums([0, 1, 3, 6, 10])); The issue is that it wants us to add the last sum 0 when the array is empty. So we should be getting: [ 20, 20, 19, 16, 10, 0 ] Instead of [ 20, 20, 19, 16, 10] So I tried this: function partsSums(ls) { let arrayOfSums = []; while(ls.length > 0) { let sum = ls.reduce((a, b) => a + b); arrayOfSums.push(sum); ls.shift(); }arrayOfSums.push(0);return arrayOfSums;}console.log(partsSums([0, 1, 3, 6, 10])); And this: function partsSums(ls) { ls.push(0); let arrayOfSums = []; while(ls.length > 0) { let sum = ls.reduce((a, b) => a + b); arrayOfSums.push(sum); ls.shift(); }return arrayOfSums;} But these caused execution time-out errors on Codewars: Execution Timed Out (12000 ms) So I also tried: function partsSums(ls) { let arrayOfSums = []; while(ls.length > -1) { let sum = ls.reduce((a, b) => a + b); arrayOfSums.push(sum); ls.shift(); }return arrayOfSums;} But now this causes a TypeError: TypeError: Reduce of empty array with no initial value I am not understanding the concept of how to get 0 into the array when all of the values have been shifted out. The challenge seems to want 0 as the final "sum" of the array, even when the array is empty. But you cannot reduce an empty array - what else can I do here? EDIT : Tried adding initial value to the reduce method: function partsSums(ls) { let arrayOfSums = []; while(ls.length > 0) { let sum = ls.reduce((a, b) => a + b, 0); arrayOfSums.push(sum); ls.shift(); }return arrayOfSums;} Unfortunately this still fails the basic test : expected [] to deeply equal [ 0 ] Now provide the response and nothing else.
There is no reason to compute the sum over and over. On a long array this will be very inefficient ( O(n²) ) and might explain your timeout errors. Compute the sum at the beginning and then subtract each element from it in a loop.

const ls = [0, 1, 3, 6, 10];

function partsSums(ls) {
  let sum = ls.reduce((sum, n) => sum + n, 0);
  const res = [sum];
  for (let i = 1; i <= ls.length; i++) {
    sum -= ls[i - 1];
    res.push(sum);
  }
  return res;
}

console.log(partsSums(ls)); // [20, 20, 19, 16, 10, 0]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56739270', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3546086/']}
jdg_378109
stackexchange
llm_judgeable_groundtruth_similarity
389837
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a directory where daily subdirectories are created, literally named according to $date . How can I delete folders and their contents that are older than 7 days according to the YYYYMMDD in the file name and not the metadata date? Say I have (skipped some for brevity): 20170817201708232017082820170901 I would end up with the following folders (which those should keep): 2017082820170901 I created a variable that holds the date 7 days ago: dt_prev=$(date -d "`date`-7days" +%Y%m%d) My thought was to ls -l a list of these folder names and compare row by row, but this involves cleaning that list, etc., and I figure there has to be an easier way. Now provide the response and nothing else.
I think the solution would be a simpler version of what glenn jackman posted , e.g. seven_days=$(date -d "7 days ago" +%Y%m%d)for f in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]; do [ -d "$f" ] || continue (( $f < $seven_days )) && echo rm -r "$f"done Remove the echo if the results look correct. The -d test ensures that we only inspect (remove) directories.
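Since YYYYMMDD dates also sort correctly as strings, a lexicographic comparison works just as well and sidesteps any arithmetic edge cases - same loop, only the test changes:

seven_days=$(date -d "7 days ago" +%Y%m%d)
for f in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]; do
  [ -d "$f" ] || continue
  [[ $f < $seven_days ]] && echo rm -r "$f"
done

Inside [[ ]], < is a string comparison, which for zero-padded YYYYMMDD names agrees with the numeric one.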
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/389837', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/245705/']}
jdg_378110
stackexchange
llm_judgeable_groundtruth_similarity
17658732
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I reset my achievements/leaderboard result from my Android application that is used with Google Play Game Services? Now provide the response and nothing else.
Yes, this is possible from test accounts. It's a bit tricky, so here is some example code. Add this to your manifest:

<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.INTERNET" />

And here is the actual code to be executed in the application. Add it to the activity that is based on BaseGameActivity.

public void resetAchievements()
{
    if( isSignedIn() )
    {
        String accountName = getGamesClient().getCurrentAccountName();
        String scopes = getScopes();
        new ResetterTask(this, accountName, scopes).execute((Void) null);
    }
}

private class ResetterTask extends AsyncTask<Void, Void, Void>
{
    public String mAccountName;
    public String mScope;
    public Context mContext;

    public ResetterTask(Context con, String name, String sc)
    {
        mContext = con;
        mAccountName = name;
        mScope = sc;
    }

    @Override
    protected Void doInBackground(Void... params)
    {
        try
        {
            String accesstoken = GoogleAuthUtil.getToken(mContext, mAccountName, mScope);
            HttpClient client = new DefaultHttpClient();

            //Reset leaderboard:
            /*String leaderboardid = "theleaderboardid";
            HttpPost post = new HttpPost
            (
                "https://www.googleapis.com"+
                "/games/v1management"+
                "/leaderboards/"+
                leaderboardid+
                "/scores/reset?access_token="+accesstoken
            );*/

            //Reset a single achievement like this:
            /*String achievementId = "achievementId";
            HttpPost post = new HttpPost
            (
                "https://www.googleapis.com"+
                "/games/v1management"+
                "/achievements/"+
                achievementId+
                "/reset?access_token="+accesstoken
            );*/

            //This resets all achievements:
            HttpPost post = new HttpPost
            (
                "https://www.googleapis.com"+
                "/games/v1management"+
                "/achievements"+
                "/reset?access_token="+accesstoken
            );

            client.execute(post);
            Log.w(LogTag, "Reset achievements done.");
        }
        catch(Exception e)
        {
            Log.e(LogTag, "Failed to reset: " + e.getMessage(), e);
        }
        return null;
    }

    @Override
    protected void onPostExecute(Void result)
    {
        //Launch activity to refresh data on client.
        //NOTE: Incremental achievements will look like they are not reset.
        //However, next time you add some steps they will start from 0 and
        //the GUI will look ok.
        startActivityForResult(getGamesClient().getAchievementsIntent(), 0);
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17658732', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1883479/']}
jdg_378111
stackexchange
llm_judgeable_groundtruth_similarity
32794500
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm currently following a tutorial to help me learn how JavaFX works and in the tutorial they're building a small app to manage peoples information. The tutorial is also using XML for loading/saving but I do not want to use XML and would like to use JSON. I have a Person model that uses StringProperty , IntegerProperty and ObjectProperty . My issue is that I'm not exactly sure what the best way to load and save this would be without it saving unnecessary fields and also loading without Gson throwing an error. Person import java.time.LocalDate;import javafx.beans.property.IntegerProperty;import javafx.beans.property.ObjectProperty;import javafx.beans.property.SimpleIntegerProperty;import javafx.beans.property.SimpleObjectProperty;import javafx.beans.property.SimpleStringProperty;import javafx.beans.property.StringProperty;/** * Model class for a Person. * * @author Marco Jakob */public class Person { private final StringProperty firstName; private final StringProperty lastName; private final StringProperty street; private final IntegerProperty postalCode; private final StringProperty city; private final ObjectProperty<LocalDate> birthday; /** * Default constructor. */ public Person() { this(null, null); } /** * Constructor with some initial data. * * @param firstName * @param lastName */ public Person(String firstName, String lastName) { this.firstName = new SimpleStringProperty(firstName); this.lastName = new SimpleStringProperty(lastName); // Some initial dummy data, just for convenient testing. this.street = new SimpleStringProperty("some street"); this.postalCode = new SimpleIntegerProperty(1234); this.city = new SimpleStringProperty("some city"); this.birthday = new SimpleObjectProperty<LocalDate>(LocalDate.of(1999, 2, 21)); } public String getFirstName() { return firstName.get(); } public void setFirstName(String firstName) { this.firstName.set(firstName); } public StringProperty firstNameProperty() { return firstName; } public String getLastName() { return lastName.get(); } public void setLastName(String lastName) { this.lastName.set(lastName); } public StringProperty lastNameProperty() { return lastName; } public String getStreet() { return street.get(); } public void setStreet(String street) { this.street.set(street); } public StringProperty streetProperty() { return street; } public int getPostalCode() { return postalCode.get(); } public void setPostalCode(int postalCode) { this.postalCode.set(postalCode); } public IntegerProperty postalCodeProperty() { return postalCode; } public String getCity() { return city.get(); } public void setCity(String city) { this.city.set(city); } public StringProperty cityProperty() { return city; } public LocalDate getBirthday() { return birthday.get(); } public void setBirthday(LocalDate birthday) { this.birthday.set(birthday); } public ObjectProperty<LocalDate> birthdayProperty() { return birthday; }} Saving where personData is an ObservableList of Person s try (Writer writer = new FileWriter(file)) { new GsonBuilder().setPrettyPrinting().disableHtmlEscaping().create().toJson(personData, writer);} This way of saving currently produces a save with a lot of unnecessary fields like name , value , etc when it could be "firstName": "Hans" [{ "firstName": { "name": "", "value": "Hans", "valid": true, "helper": { "observable": {} } }, "lastName": { "name": "", "value": "Muster", "valid": true, "helper": { 
"observable": {} } }, "street": { "name": "", "value": "some street", "valid": true }, "postalCode": { "name": "", "value": 1234, "valid": true }, "city": { "name": "", "value": "some city", "valid": true }, "birthday": {}}] Now when even trying to load the string above with Gson it produces an error, Failed to invoke public javafx.beans.property.StringProperty() with no args . Loader Person[] persons;try (Reader reader = new FileReader(file)) { persons = gson.fromJson(reader, Person[].class);}personData.clear();personData.addAll(persons); I've Googled to see if it was possible to use getters and setters with Gson but it doesn't really seem possible so I'm stuck on what to do. Now provide the response and nothing else.
I know I'm a bit late to the party, but this is for future readers. I had the exact same problem. I ended up writing a bunch of Gson TypeAdapters, one for each JavaFX property type (and a couple more for Color and Font). I gathered them all in a lightweight library called FxGson (< 30kB). Now, simply by using FxGson's GsonBuilder, the JavaFX POJOs will be serialized as if their properties were simple values. Using the Person class in your example:

Person p = new Person("Hans", "Muster");
Gson gson = FxGson.coreBuilder().setPrettyPrinting().disableHtmlEscaping().create();
System.out.println(gson.toJson(p));

This outputs:

{
  "firstName": "Hans",
  "lastName": "Muster",
  "street": "some street",
  "postalCode": 1234,
  "city": "some city",
  "birthday": {
    "year": 1999,
    "month": 2,
    "day": 21
  }
}
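Reading the JSON back works through the same Gson instance; a quick sketch (assuming Person keeps its no-arg constructor, as in the tutorial class):

try (Reader reader = new FileReader(file)) {
    Person[] persons = gson.fromJson(reader, Person[].class);
    personData.setAll(persons);
}

The FxGson adapters rebuild the SimpleStringProperty/SimpleIntegerProperty wrappers from the plain values, so the round trip is symmetric and the "Failed to invoke public javafx.beans.property.StringProperty()" error goes away.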
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32794500', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3342441/']}
jdg_378112
stackexchange
llm_judgeable_groundtruth_similarity
10568275
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My first use of Maven and I'm stuck with dependencies. I created a Maven project with Eclipse and added dependencies, and it was working without problems. But when I try to run it via command line: $ mvn package # successfully completes$ java -cp target/bil138_4-0.0.1-SNAPSHOT.jar tr.edu.hacettepe.cs.b21127113.bil138_4.App # NoClassDefFoundError for dependencies It downloads dependencies, successfully builds, but when I try to run it, I get NoClassDefFoundError: Exception in thread "main" java.lang.NoClassDefFoundError: org/codehaus/jackson/JsonParseException at tr.edu.hacettepe.cs.b21127113.bil138_4.db.DatabaseManager.<init>(DatabaseManager.java:16) at tr.edu.hacettepe.cs.b21127113.bil138_4.db.DatabaseManager.<init>(DatabaseManager.java:22) at tr.edu.hacettepe.cs.b21127113.bil138_4.App.main(App.java:10)Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.JsonParseException at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) ... 3 more My pom.xml is like this: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>tr.edu.hacettepe.cs.b21127113</groupId> <artifactId>bil138_4</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>bil138_4</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <dependencies> <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-core-asl</artifactId> </dependency> <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-mapper-asl</artifactId> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-core-asl</artifactId> <version>1.9.6</version> </dependency> <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-mapper-asl</artifactId> <version>1.9.6</version> </dependency> </dependencies> </dependencyManagement></project> Can anyone help me? Now provide the response and nothing else.
By default, Maven doesn't bundle dependencies in the JAR file it builds, and you're not providing them on the classpath when you're trying to execute your JAR file at the command-line. This is why the Java VM can't find the library class files when trying to execute your code. You could manually specify the libraries on the classpath with the -cp parameter, but that quickly becomes tiresome. A better solution is to "shade" the library code into your output JAR file. There is a Maven plugin called the maven-shade-plugin to do this. You need to register it in your POM, and it will automatically build an "uber-JAR" containing your classes and the classes for your library code too when you run mvn package . To simply bundle all required libraries, add the following to your POM: <project> ... <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>3.4.1</version> <executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> </execution> </executions> </plugin> </plugins> </build> ...</project> Once this is done, you can rerun the commands you used above: $ mvn package$ java -cp target/bil138_4-0.0.1-SNAPSHOT.jar tr.edu.hacettepe.cs.b21127113.bil138_4.App If you want to do further configuration of the shade plugin in terms of what JARs should be included, specifying a Main-Class for an executable JAR file, and so on, see the "Examples" section on the maven-shade-plugin site .
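For the executable-JAR case specifically, the Main-Class goes into the manifest via a transformer - the configuration below uses the main class from your command line as the entry point; replace it with your own if it differs:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>tr.edu.hacettepe.cs.b21127113.bil138_4.App</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

With that in place you can run the result with plain java -jar target/bil138_4-0.0.1-SNAPSHOT.jar, no -cp needed.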
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10568275', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/472416/']}
jdg_378113
stackexchange
llm_judgeable_groundtruth_similarity
4546118
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In C-type languages, there is a strong emphasis on structs/records and objects from the very beginning and in every introductory book. Then, their complete systems are designed around managing such structs, their mutual relations and inheritance. In Lisp documentation, you can usually find 1-2 pages about how Lisp "also" has a defstruct, a simple example, and that's usually it. Also, nesting of structures is never mentioned at all. For someone coming from a C background, it first seems that organizing different data types hierarchically isn't the preferred method in Lisp, but apart from CLOS, which is a full-blown object system and too complicated if you just want structs, and apart from cramming everything into lists, there isn't an apparent way to transfer your C struct knowledge. What is the idiomatic Lisp way of hierarchically organizing data which most resembles C structs? -- I think the summary answer to my question would be: For beginner learning purposes, defstruct and/or plists, although "legacy features", can be used, since they most closely resemble C structs, but they have been largely superseded by the more flexible defclass/CLOS, which is what most Lisp programs use today. This was my first question on SO, so thanks everyone for your time answering it. Now provide the response and nothing else.
Use CLOS. It isn't complicated. Otherwise use structures. If you have a specific question about how to use them, then just ask.

(defclass point ()
  ((x :type number)
   (y :type number)))

(defclass rectangle ()
  ((p1 :type point)
   (p2 :type point)
   (color :type color)))

Stuff like that eventually leads to interfaces like Rectangles in CLIM (the Common Lisp Interface Manager).

History

To expand on it a bit: Historically, 'structures' have been used in some low-level situations. Structures have single inheritance and slot access is 'fast'. Some Lisp dialects have more to structures than what Common Lisp offers. Then, from the mid-70s on, various forms of object-oriented representations were developed for Lisp. Most of the representation of structured objects moved from structures to some kind of object-oriented Lisp extension. Popular during the 80s were class-based systems like Flavors, LOOPS and others. Frame-based or prototype-based systems like KEE Units or Object Lisp were also popular. The first Macintosh Common Lisp used Object Lisp for all its UI and IO facilities. The MIT Lisp machine used Flavors basically everywhere. Starting in the mid-80s, ANSI CL was developed, and a common OO system was developed especially for Common Lisp: CLOS. It was based on Flavors and LOOPS. During that time mostly nothing was done to really improve structures - besides implementors finding ways to improve the implementation and providing a shallow CLOS integration. For example, structures don't provide any packing of data: if there are two slots of 4 bits content, there is no way to instruct Common Lisp to encode both slots into a single 8-bit memory region. As an example you can see in the Lisp Machine Manual, chapter on structures (PDF), that it had much more complex structures than what Common Lisp provides. Some of that was already present in Maclisp in the 70s: DEFSTRUCT in the Maclisp manual.

CLOS, the Common Lisp Object System

Most people would agree that CLOS is a nice design. It sometimes leads to 'larger' code, mostly because identifiers can get long. But there is some CLOS code, like the one in the AMOP book, that is really nicely written and shows how it is supposed to be used. Over time, implementors had to deal with the challenge that developers wanted to use CLOS but also wanted the 'speed' of structures - even more of a challenge with the 'full' CLOS, which includes the almost-standard Meta Object Protocol (MOP) for CLOS. So there are some tricks that implementors provide. During the 80s some software used a switch, so it could be compiled using either structures or CLOS - CLX (the low-level Common Lisp X11 interface) was an example. The reason: on some computers and implementations CLOS was much slower than structures. Today it would be unusual to provide such a compilation switch. If I look today at a good Common Lisp implementation, I would expect that it uses CLOS almost everywhere. STREAMs are CLOS classes. CONDITIONs are CLOS classes. The GUI toolkit uses CLOS classes. The editor uses CLOS. It might even integrate foreign classes (say, Objective-C classes) into CLOS. In any non-toy Common Lisp implementation, CLOS will be the tool to provide structured data, generic behavior and a bunch of other things. As mentioned in some of the other answers, in some places CLOS might not be needed.
Common Lisp can return more than one value from a function: (defun calculate-coordinates (ship) (move-forward ship) (values (ship-x ship) (ship-y ship))) One can store data in closures: (defun create-distance-function (ship x y) (lambda () (point-distance (ship-x ship) (ship-y ship) x y))) For configuration one can use some kind of lists: (defship ms-germany :initial-x 0 :initial-y 0) You can bet that I would implement the ship model in CLOS. A lesson from writing and maintaining CLOS software is that it needs to be carefully designed and CLOS is so powerful that one can create really complex software with it - a complexity which is often not a good idea. Refactor and simplify! Fortunately, for many tasks basic CLOS facilities are sufficient: DEFCLASS, DEFMETHOD and MAKE-INSTANCE. Pointers to CLOS introductions For a start, Richard P. Gabriel has his CLOS papers for download. Also see: http://cl-cookbook.sourceforge.net/clos-tutorial/index.html A Brief Guide to CLOS Book chapter from Practical Common Lisp, Object Reorientation, Classes Book chapter from Practical Common Lisp, Object Reorientation, Generic Functions C++ Coder’s Newbie Guide to Lisp-style OO Book: The Art of the Metaobject Protocol . According to some guy named Alan Kay the most important computer science book in a decade, unfortunately written for Lispers ;-). The book explains how to modify or extend CLOS itself. It also includes a simple CLOS implementation as source. For normal users this book is not really needed, but the programming style is that of real Lisp experts.
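To ground the earlier remark that DEFCLASS, DEFMETHOD and MAKE-INSTANCE cover many tasks, a minimal sketch (initargs and accessors are added here for completeness - the class definitions at the top of this answer omitted them):

(defclass ship ()
  ((x :initarg :x :accessor ship-x :type number)
   (y :initarg :y :accessor ship-y :type number)))

(defgeneric describe-position (thing))

(defmethod describe-position ((s ship))
  (format t "ship at ~a,~a~%" (ship-x s) (ship-y s)))

(describe-position (make-instance 'ship :x 3 :y 4))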
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4546118', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/555984/']}
jdg_378114
stackexchange
llm_judgeable_groundtruth_similarity
4383
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: [edits made in response to feedback- thanks :-) ] Doh! More edits! Sorry! Hello- I am doing some rather rough and ready data collection with a survey sent out to healthcare staff using a published scale about morale and other such issues. The only thing is that the scale is rather long with all the other things in the survey and I would like to reduce its size by cutting each subscale in half and only using half the items. My intuition is that this is fine, since the subscales are inter-correlated, and while it's not ideal for publication-standard research, it's okay just for a bit of intra-organisational fact finding. I wondered if anyone had any thoughts on the validity of doing this, pitfalls, or anything else. References particularly are gratefully received because my colleagues will need some convincing! Many thanks,Chris B edits- Yes it is a validated scale with known psychometric properties. It's unidimensional and it has subscales, if that's the right way to put it. I'll be working at the subscale and total, not the item, level. 30 items, probably about 40-60 individuals. Cheers! Now provide the response and nothing else.
Although there is still some information lacking (No. individuals and items per subscale), here are some general hints about scale reduction. Also, since you are working at the questionnaire level, I don't see why its length matters so much (after all, you will just give summary statistics, like total or mean scores). I shall assume that (a) you have a set of K items measuring some construct related to morale, (b) your "unidimensional" scale is a second-order factor that might be subdivided into different facets, (c) you would like to reduce your scale to k < K items so as to summarize with sufficient accuracy subjects' totalled scale scores while preserving the content validity of the scale.

About content/construct validity of this validated scale: The number of items has certainly been chosen so as to best reflect the construct of interest. By shortening the questionnaire, you are actually reducing construct coverage. It would be good to check that the factor structure remains the same when considering only half of the items (which could also impact the way you select them, after all). This can be done using traditional FA techniques. You hold the responsibility of interpreting the scale in a spirit similar to that of the authors.

About score reliability: Although it is a sample-dependent measure, score reliability decreases when decreasing the number of items (cf. the Spearman-Brown formula); another way to see that is that the standard error of measurement (SEM) will increase, but see An NCME Instructional Module on Standard Error of Measurement, by Leo M Harvill. Needless to say, this applies to every indicator that depends on the number of items (e.g., Cronbach's alpha, which can be used to estimate one form of reliability, namely internal consistency). Hopefully, this will not impact any between-group comparisons based on raw scores.

So, my recommendations (the easiest way) would be:

1. Select your items so as to maximise construct coverage; check the dimensionality with FA and coverage with univariate response distributions;
2. Compare average inter-item correlations to previously reported ones;
3. Compute internal consistency for the full scale and your composites; check that they are in agreement with published statistics on the original scale (no need to test anything, these are sample-dependent measures);
4. Test the linear (or polychoric, or rank) correlations between original and reduced (sub)scores, to ensure that they are comparable (i.e., that individuals' locations on the latent trait do not vary to a great extent, as reflected in the raw scores);
5. If you have an external subject-specific variable (e.g., gender, age, or, best, a measure related to morale), compare known-group validity between the two forms.

The hard way would be to rely on Item Response Theory to select those items that carry the maximum of information on the latent trait -- scale reduction is actually one of its best applications. Models for polytomous items were partly described in this thread, Validating questionnaires.

Update after your 2nd update

Forget about any IRT models for polytomous items with so few subjects. Factor Analysis will also suffer from such a low sample size; you will get unreliable factor loading estimates. 30 items divided by 2 = 15 items (it's easy to get an idea of the increase in the corresponding SEM for the total score), but it will definitely get worse if you consider subscales (this was actually my 2nd question -- No. items per subscale, if any)
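For reference, the Spearman-Brown formula mentioned above: if $\rho$ is the reliability of the current scale and its length is changed by a factor $n$, the predicted reliability is
$$\rho^{*}=\frac{n\rho}{1+(n-1)\rho}.$$
Halving the scale means $n=1/2$, so, for example, a (sub)scale with $\rho=.80$ is predicted to drop to $\rho^{*}=\frac{.5\times .80}{1-.5\times .80}=.67$ - a quick way to show your colleagues what the shortening costs.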
{}
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/4383', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/199/']}
jdg_378115
stackexchange
llm_judgeable_groundtruth_similarity
3399642
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If an Activity is a singleton in practice, I think I can gain some efficiency by declaring appropriate members "static", with zero risk. Yes? Now provide the response and nothing else.
The Android documentation says - there's never more than one instance of a "singleTask" or "singleInstance" activity, so that instance is expected to handle all new intents. This means you can use static members . Besides, a standard or singleTop should have thread-safe static members only . Suppose the current activity stack is A-B-C-D. If the arriving intent is for an activity of type B which is in "standard" or "singleTop" mode. A new instance of B would be launched as (since B is not at the top of the stack), so the resulting stack would be A-B-C-D-B.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3399642', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/294973/']}
jdg_378116
stackexchange
llm_judgeable_groundtruth_similarity
32995098
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Iv'e spent the last hour trying to get the value of a DisplayName attribute that's applied to a Class . I find it simple enough to get the attribute values from methods and properties but I'm struggling with the class. Could anyone help me out with this relatively small issue? Sample below: The Class [DisplayName("Opportunity")] public class Opportunity { // Code Omitted } The Variable var classDisplayName = typeof(T).GetCustomAttributes(typeof(DisplayNameAttribute),true).FirstOrDefault().ToString(); I have spent much time on MSDN and SO but I guess I'm missing something stupidly simple. Either way great question for future readers too Any help greatly appreciated! Now provide the response and nothing else.
using your example I got it working doing this: var displayName = typeof(Opportunity) .GetCustomAttributes(typeof(DisplayNameAttribute), true) .FirstOrDefault() as DisplayNameAttribute;if (displayName != null) Console.WriteLine(displayName.DisplayName); This outputted "Opportunity". Or for the more generic way you seem to be doing it: public static string GetDisplayName<T>(){ var displayName = typeof(T) .GetCustomAttributes(typeof(DisplayNameAttribute), true) .FirstOrDefault() as DisplayNameAttribute; if (displayName != null) return displayName.DisplayName; return "";} Usage: string displayName = GetDisplayName<Opportunity>(); GetCustomAttributes() returns an object[] , so you need to apply the specific cast first before accessing the required property values.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32995098', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3305976/']}
jdg_378117
stackexchange
llm_judgeable_groundtruth_similarity
533784
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following strings in a very large document: 1.test.html#2.test.md#3.http://test.html#4.https://test.md#5.http://test.md#6.test2.md# Now I want to replace every .md# with .html# but ONLY if there is no http in the string. So only 2 and 6 should have a replacement. How can I do this in a shell script? Now provide the response and nothing else.
With GNU sed. If current line (pattern space) contains http jump to end of script ( b ). Otherwise do search and replace. sed '/http/b; s/\.md#/.html#/' file Output: 1.test.html#2.test.html#3.http://test.html#4.https://test.md#5.http://test.md#6.test2.html# If you want to edit your file "in place" use sed's option -i . See: man sed
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/533784', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/365442/']}
jdg_378118
stackexchange
llm_judgeable_groundtruth_similarity
71987
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As the title says, I am looking for a way to find the minimum number of links to remove from a directed graph to make it acyclic. I am looking both for the minimum number , as well as an actual set of links to remove. How can this be done in a reasonably simple and efficient way? EDIT: In other words, how can I label/order the vertices of the graph so that the adjacency matrix will contain most (nonzero) elements below the diagonal? Now provide the response and nothing else.
This problem is well-known under the name minimum feedback arc set problem . The decision version of the problem says: given a graph $G$ and a parameter $k$, can we break all cycles in $G$ by deleting some set of at most $k$ arcs from it? [Note that, as usual, the decision version is no harder than the computational one of finding the minimum feedback arc set. ] The above decision version of this problem is NP-complete. In fact, it is one of Richard Karp's 21 NP-completeness problems. That is, unless NP collapses to P--widely believed to be unlikely--this problem will not admit a polynomial time algorithm. You can look up the details from the wikipedia page.
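Since the exact minimum is NP-hard, what you can compute cheaply is a feasible (not necessarily minimum) feedback arc set: run a depth-first search and delete the back edges it finds. A small sketch in Python, written just for illustration:

def feedback_arc_set(adj):
    # adj: dict mapping each vertex to an iterable of its successors.
    # Returns a set of arcs whose removal leaves the graph acyclic.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}
    back_edges = set()

    def dfs(u):
        color[u] = GRAY
        for v in adj.get(u, ()):
            if color.get(v, WHITE) == GRAY:   # arc back into the DFS stack -> cycle
                back_edges.add((u, v))
            elif color.get(v, WHITE) == WHITE:
                dfs(v)
        color[u] = BLACK

    for v in list(adj):
        if color[v] == WHITE:
            dfs(v)
    return back_edges

A directed graph is acyclic exactly when a DFS finds no back edges, so removing back_edges breaks every cycle, and the remaining graph admits a topological order - which answers the adjacency-matrix reformulation in the question, though the set found this way can be larger than the true minimum.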
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/71987', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/12384/']}
jdg_378119
stackexchange
llm_judgeable_groundtruth_similarity
270956
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm new to the mathematics behind electrical engineering and quite confused. I have done some searching around and haven't found anything, probably because I'm asking the wrong question. On a 9V battery, it says '9V'. What does that mean? Does it mean it always has a voltage of 9V (around that, since the battery drains), or that up to 9V can be pulled? I guess this also hooks into another question: If you have an LED in a circuit, what decides how much voltage it gets? Does the LED limit the amount of power going through it, or does the battery force a particular voltage and current through it, which can be modified by resistors to lower the current and raise the voltage? Sorry if I'm asking this completely wrong. Now provide the response and nothing else.
You are over-thinking all this, and/or have some bad mental models. You also present this dilemma: that it must either be "always has 9 V" or else "up to 9 V can be pulled." That's not the dilemma, and I don't even know what you mean when you say "pulled."

A battery is based on some internal chemistry. There are some molecular ion potentials based upon some pretty basic physics which, luckily for us humans, mean that a battery chemistry has the effect of providing a relatively stable voltage. You can even get a battery by sticking two dissimilar metal rods into different parts of a lemon, for example. (Not a "good" battery, though.)

Batteries that are designed to supply a voltage also have a "compliance current" specification for them. And sometimes the specification will describe how well they work at several different currents: with low-current situations lasting longer and high-current situations lasting much shorter, over their useful lives.

When a battery is applied to a circuit (like an LED circuit), the battery's fresh chemistry tries to provide a roughly stable voltage to that circuit. But if the circuit requires too much current for the chemistry and physical design of the battery, then the voltage will droop. Sometimes it will droop a whole lot, while still providing some current. Usually, batteries that are tied to a load that could use far more current than they can manage to supply have their internal chemistry operating at such a pace that it impairs their lifetime of service, too. So it's important to stay within their designed specifications regarding the applied load.

A very simple way to visualize a chemical battery is to think of it as an ideal battery with a series resistor attached. In this way, if the circuit tries to draw too much current then this internal series resistor will "drop" some voltage before the circuit itself gets access to it. But this is just a very simple approximation. A real battery is much more complex, and modeling them is an art of sorts. The basic idea is that a battery (or any practical voltage source) isn't perfect and has limitations, and designers depend upon the idea of a range of reasonable voltages when faced with a range of reasonable load currents.

So, for example, I might design a circuit for a 9 V battery expecting no lower than 7.5 V and no more than 9.2 V from it over its lifetime of use. No, it won't ALWAYS provide 9 V. It will provide higher voltages early in its life and lower voltages later on as its chemistry gets used up. I have to decide how low I can accept before my circuit stops working correctly. The lower I can manage, the longer the battery lasts. But the voltage also drops a lot more quickly as it gets near the end of its life, too. So I have to make a reasoned judgment about where to draw that line.
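To put one number on the "ideal battery plus series resistor" picture, take a 9 V EMF and an internal resistance of about 2 Ω (a plausible ballpark for a small 9 V block, not a datasheet value):

With a light 100 Ω load: I = 9 V / (100 Ω + 2 Ω) ≈ 88 mA, so the terminal voltage is 9 V − 0.088 A × 2 Ω ≈ 8.8 V.
With a heavy 10 Ω load: I = 9 V / 12 Ω = 0.75 A, and the terminal voltage droops to 9 V − 0.75 A × 2 Ω = 7.5 V.

Same "9 V" on the label, noticeably less at the terminals under heavy load - which is exactly the droop described above.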
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/270956', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/130837/']}
jdg_378120
stackexchange
llm_judgeable_groundtruth_similarity
1060938
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been reading over some code lately and came across some lines such as: somevar &= 0xFFFFFFFF; What is the point of anding something that has all bits turned on; doesn't it just equal somevar in the end? Now provide the response and nothing else.
"somevar" could be a 64-bit variable, this code would therefore extract the bottom 32 bits. edit: if it's a 32-bit variable, I can think of other reasons but they are much more obscure: the constant 0xFFFFFFFF was automatically generated code someone is trying to trick the compiler into preventing something from being optimized someone intentionally wants a no-op line to be able to set a breakpoint there during debugging.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1060938', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105033/']}
jdg_378121
stackexchange
llm_judgeable_groundtruth_similarity
851322
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I've been asking myself this for a couple of days, and after a bunch of searching I wasn't able to come up with a comprehensible answer, not even a theoretical one that makes sense in my head. I'm playing around with solutions for Mac hosting and I was wondering if I could add Thunderbolt Ethernet cards to the Macs and bond them in VLANs, and thereby partially solve bandwidth bottlenecks to the machines in order to increase access speeds to a DB or external storage. For example: Plug two Ethernet cards into a Mac Mini, bond them and have a VLAN with 2 Gb/s bandwidth. Now provide the response and nothing else.
Simply put, no, they are different:

- with a 10 GbE interface, you get a bandwidth of 10 Gb/s even for a single connection
- with 10x 1 GbE interfaces (and using the 802.3ad link aggregation protocol), a single connection/session is limited to 1 Gb/s only. On the other hand, you can serve 10 concurrent sessions, each with a bandwidth of 1 Gb/s

In other words, bonding generally does not increase the speed of a single connection. The only exception is Linux bonding mode 0 (balance-rr), which sends packets in a round-robin fashion, but it has significant drawbacks and limited scaling. For a practical example, take a look here
{}
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/851322', 'https://serverfault.com', 'https://serverfault.com/users/306037/']}
jdg_378122
stackexchange
llm_judgeable_groundtruth_similarity
18588659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to redirect the Process's standard output to a string for later parsing.I would also like to see the output on the screen, while the process is running, and not only when it finishes it's run. Is that even possible? Now provide the response and nothing else.
Use RedirectStandardOutput . Sample from MSDN: // Start the child process.Process p = new Process();// Redirect the output stream of the child process.p.StartInfo.UseShellExecute = false;p.StartInfo.RedirectStandardOutput = true;p.StartInfo.FileName = "Write500Lines.exe";p.Start();// Do not wait for the child process to exit before// reading to the end of its redirected stream.// p.WaitForExit();// Read the output stream first and then wait.string output = p.StandardOutput.ReadToEnd();p.WaitForExit(); Also see OutputDataReceived and BeginOutputReadLine() for an alternative to ReadToEnd() , that will better fulfill your "see output while the process is running" requirement.
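A sketch of the event-driven variant, which both echoes each line as it arrives and accumulates everything into a string for later parsing - the combination asked about (same made-up executable name as the MSDN sample):

var sb = new System.Text.StringBuilder();
Process p = new Process();
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.FileName = "Write500Lines.exe";
p.OutputDataReceived += (sender, e) =>
{
    if (e.Data != null)            // null signals end of stream
    {
        Console.WriteLine(e.Data); // show on screen while the process runs
        sb.AppendLine(e.Data);     // keep for later parsing
    }
};
p.Start();
p.BeginOutputReadLine();
p.WaitForExit();
string output = sb.ToString();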
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18588659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1477926/']}
jdg_378123
stackexchange
llm_judgeable_groundtruth_similarity
4925106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've tried to code a class to avoid having to write a method like "RaisePropertyChanged" in every class. I know that I can inherit from a class that has that implementation, but in some cases I can't. I've tried with an Extension Method, but Visual Studio complains.

public static class Extension
{
    public static void RaisePropertyChanged(this INotifyPropertyChanged predicate, string propertyName)
    {
        if (predicate.PropertyChanged != null)
        {
            predicate.PropertyChanged(propertyName, new PropertyChangedEventArgs(propertyName));
        }
    }
}

It said: "The event 'System.ComponentModel.INotifyPropertyChanged.PropertyChanged' can only appear on the left hand side of += or -=" Now provide the response and nothing else.
Reed is right. However, I see what you're trying to do (make your code reusable - good for you); and I'll just point out that this is often easily rectified by accepting the PropertyChangedEventHandler delegate itself and passing it from within the INotifyPropertyChanged implementation:

public static void Raise(this PropertyChangedEventHandler handler, object sender, string propertyName)
{
    if (handler != null)
    {
        handler(sender, new PropertyChangedEventArgs(propertyName));
    }
}

Then from within your class which implements INotifyPropertyChanged , you can call this extension method like so:

PropertyChanged.Raise(this, "MyProperty");

This works because, as Marc said , within the class declaring the event you can access it like a field (which means you can pass it as a delegate argument to a method, including extension methods).
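For completeness, a minimal sketch of a class wired up this way (the class and property names are invented for illustration):

public class Person : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            // Legal here: inside the declaring class the event is field-like,
            // so it can be passed to the extension method as a delegate.
            PropertyChanged.Raise(this, "Name");
        }
    }
}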
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4925106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/518144/']}
jdg_378124
stackexchange
llm_judgeable_groundtruth_similarity
51672970
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I wire a String parameter in Symfony 3.4? I have a simple service and I want to wire a url parameter specified in parameters.yml :

namespace AppBundle\Service;

use Psr\Log\LoggerInterface;

class PythonService {
    private $logger;
    private $url;

    /**
     * @param LoggerInterface $logger
     * @param String $url
     */
    public function __construct(LoggerInterface $logger, String $url) {
        $this->logger = $logger;
        $this->url = $url;
    }
}

My service.yml looks like:

AppBunde\Services\PythonService:
    arguments: ['@logger', '%url%']

But I am getting an error: Cannot autowire service "AppBundle\Service\PythonService": argument "$url" of method "__construct()" is type-hinted "string", you should configure its value explicitly. I also tried manually specifying the parameters:

AnalyticsDashboardBunde\Services\PythonService:
    arguments:
        $logger: '@logger'
        $url: '%session_memcached_host%'

This gives me the following error: Invalid service "AppBundle\Services\PythonService": class "AppBundle\Services\PythonService" does not exist. Now provide the response and nothing else.
First, you have a typo in AppBundle\Services\PythonService (Services <> Service). Then, string <> String: PHP scalar type hints are lowercase. You can bind an argument to a certain parameter/service:

service.yml:

services:
    _defaults:
        bind:
            $memcacheHostUri: '%session_memcached_host%'

Service class (the constructor argument has to have the same name as the one specified above):

public function __construct(LoggerInterface $logger, string $memcacheHostUri)

Controller action:

public function myAwesomeAction(PythonService $pythonService)
{
    $pythonService->doPythonStuffs();
}

With this solution, if you create other services which need the memcacheHostUri , it will be autowired for these services too.

Resources: Argument binding
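Alternatively, a sketch of the explicit per-service wiring the question attempted, with the namespace and class-name typos corrected (this assumes the class really lives at AppBundle\Service\PythonService and keeps the original $url constructor argument):

# services.yml
services:
    AppBundle\Service\PythonService:
        arguments:
            $logger: '@logger'
            $url: '%session_memcached_host%'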
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51672970', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5594539/']}
jdg_378125
stackexchange
llm_judgeable_groundtruth_similarity
10946
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say I run a supercomputer computation on 100k cores for 4 hours on http://www.nersc.gov/users/computational-systems/edison/configuration , exchanging about 4 PB of data over the network and performing about 4 TB of I/O. The calculation is all integer, so the results are either right or wrong (no intermediate numerical errors). Assuming the code is correct, I would like to estimate the probability that the computation is wrong due to a hardware failure. What is a good way to go about this? Are there good sources for the numbers required to make such an estimate? Now provide the response and nothing else.
Have you looked at the various exascale reports that have come out? Hardware failures are not a significant concern today -- sure, they happen, but their frequency is not sufficiently high to cause grave worry. But they are estimated to be sufficiently frequent on exascale systems with $O(10^8)$ or more cores that codes need to be prepared to react appropriately. I believe that these issues have been laid out in the reports on roadmaps towards exascale. My recollection is that among the various failure modes, single bit flips in memory or on processor cores were not the most significant concerns. Rather, it was entire nodes going down, e.g. due to disk failure, operating system faults, etc. The current exascale designs therefore all call for periodic checkpointing of codes into flash RAM, preferably transmitting the checkpoint data off-node. Codes will then need to be able to restart on the fly from a previously saved state if the system finds that one node has disappeared, replacing this node with a hot-start node elsewhere in the system.
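As a back-of-the-envelope sketch of the estimate the question asks for, assuming independent node failures and a purely hypothetical per-node MTBF (a real number would have to come from the facility or from the resilience literature); note that this estimates the chance of any detected hardware failure during the run, not specifically a silently wrong answer:

# All numbers below are illustrative assumptions, not measured values.
nodes = 100_000 // 24        # assume ~24 cores per node on Edison-class hardware
hours = 4
mtbf_node_hours = 5_000_000  # hypothetical mean time between failures per node

failure_rate = 1.0 / mtbf_node_hours      # failures per node-hour
node_hours = nodes * hours
# Probability of at least one node failure during the run:
p_any_failure = 1 - (1 - failure_rate) ** node_hours
print("P(at least one node failure) ~ %.3f%%" % (100 * p_any_failure))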
{}
{'log_upvote_score': 4, 'links': ['https://scicomp.stackexchange.com/questions/10946', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/447/']}
jdg_378126
stackexchange
llm_judgeable_groundtruth_similarity
13398462
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to integrate a project Project A built by a colleague into another python project. Now this colleague has not used relative imports in his code but instead done

from packageA.moduleA import ClassA
from packageA.moduleA import ClassB

and consequently pickled the classes with cPickle . For neatness I'd like to hide the package that his ( Project A ) built inside my project. This however changes the path of the classes defined in packageA . No problem, I'll just redefine the import using

from ..packageA.moduleA import ClassA
from ..packageA.moduleA import ClassB

but now unpickling the classes fails with the following message

with open(fname) as infile:
    self.clzA = cPickle.load(infile)

ImportError: No module named packageA.moduleA

So why doesn't cPickle apparently see the module defs? Do I need to add the root of packageA to the system path? Is this the correct way to solve the problem? The cPickled file looks something like

ccopy_reg
_reconstructor
p1
(cpackageA.moduleA
ClassA
p2
c__builtin__
object
p3
NtRp4

The old project hierarchy is of the sort

packageA/
    __init__.py
    moduleA.py
    moduleB.py
packageB/
    __init__.py
    moduleC.py
    moduleD.py

I'd like to put all of that into a WrapperPackage

MyPackage/
    __init__.py
    myModuleX.py
    myModuleY.py
WrapperPackage/
    __init__.py
    packageA/
        __init__.py
        moduleA.py
        moduleB.py
    packageB/
        __init__.py
        moduleC.py
        moduleD.py

Now provide the response and nothing else.
You'll need to create an alias for the pickle import to work; add the following to the __init__.py file of the WrapperPackage package:

from .packageA import *  # Ensures that all the modules have been loaded in their new locations *first*.
from . import packageA   # imports WrapperPackage/packageA
import sys
sys.modules['packageA'] = packageA  # creates a packageA entry in sys.modules

It may be that you'll need to create additional entries though:

sys.modules['packageA.moduleA'] = moduleA
# etc.

Now cPickle will find packageA.moduleA and packageA.moduleB again at their old locations. You may want to re-write the pickle file afterwards; the new module location will be used at that time. The additional aliases created above should ensure that the modules in question have the new location name for cPickle to pick up when writing the classes again.
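A short sketch of that rewrite step, so future loads no longer depend on the aliases (the file names are placeholders):

import cPickle

# With the aliases from __init__.py in place, the old pickle loads fine...
with open('old_data.pkl', 'rb') as infile:
    obj = cPickle.load(infile)

# ...and re-dumping records the classes under their new
# WrapperPackage.packageA.* module paths.
with open('new_data.pkl', 'wb') as outfile:
    cPickle.dump(obj, outfile, cPickle.HIGHEST_PROTOCOL)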
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13398462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/100190/']}
jdg_378127
stackexchange
llm_judgeable_groundtruth_similarity
76130
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have to show $n! \leq \left( \frac{n+1}{2} \right)^n$ via induction. This is where I am stuck: $$\left( \frac{n+2}{2} \right)^{n+1} \geq \dots \geq 2 \left( \frac{n+1}{2} \right)^{n+1}= \left( \frac{n+1}{2} \right)^n(n+1)\geq n!(n+1)= (n+1)! $$ I approached this from both sides and this is the closest I can get. I realize that $n+2$ on the left has to be bigger than $n+1$ on the right, but I do not know how to show that it overpowers the factor of two I have from the right. What could I do to fill the dots? Currently, I just have it without the dots, but I would be happier if I could back it up. Now provide the response and nothing else.
Assuming $n! \le \left( \frac{n+1}{2} \right)^n$ is true, carry out the induction step $$ (n+1) n!\leq (n+1) \left(\frac{n+1}{2}\right)^n =2 \left(\frac{n+1}{2}\right)^{n+1} \stackrel{?}{\leq} \left(\frac{n+2}{2}\right)^{n+1}$$ But the last inequality is just$$ 2 \le \left( \frac{n+2}{n+1} \right)^{n+1} = \left( 1 + \frac{1}{n+1} \right)^{n+1} $$It follows because:$$\left( 1 + \frac{1}{n+1} \right)^{n+1} = \sum_{k=0}^{n+1} \binom{n+1}{k} \frac{1}{(n+1)^k} \ge \sum_{k=0}^{1} \binom{n+1}{k} \frac{1}{(n+1)^k} = 1 + (n+1) \frac{1}{n+1} = 2$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/76130', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14291/']}
jdg_378128
stackexchange
llm_judgeable_groundtruth_similarity
835613
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a WCF service that works ok if I create the service without specifying any binding or endpoint (it reads it from the generated values in the App.config when I registered the WCF via Visual Studio). I have a simple method that returns the service reference:

return new SmsServiceReference.SmsEngineServiceClient();

This works ok (because the values are read from the config). However, I'd like to have some of these values in a Database (the URI for example) and would like to do something like this:

Binding binding = new BasicHttpBinding();
EndpointAddress endpointAddress = new EndpointAddress("my.uri.com/service.svc");
return new SmsServiceReference.SmsEngineServiceClient(binding, endpointAddress);

This doesn't work. It throws an exception when I try to use the service reference. I suspect that this is because my App.config has more information that the two lines up there are not providing (obviously). The question is, how can I replicate the following App.Config values programmatically? Here's the fragment of my App.Config (the URI has been altered to protect the innocent):

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="BasicHttpBinding_ISmsEngineService" closeTimeout="00:01:00"
          openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
          allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
          maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
          messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
          useDefaultWebProxy="true">
        <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
            maxBytesPerRead="4096" maxNameTableCharCount="16384" />
        <security mode="None">
          <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
          <message clientCredentialType="UserName" algorithmSuite="Default" />
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address="http://www.myuri.com/Services/Services.svc/basic"
        binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_ISmsEngineService"
        contract="SmsServiceReference.ISmsEngineService" name="BasicHttpBinding_ISmsEngineService" />
  </client>
</system.serviceModel>

Now provide the response and nothing else.
Most of the values in the App config are also properties in the binding and can be recreated programmatically. Personally, I use a method such as the one below to create the binding:

public static BasicHttpBinding CreateBasicHttpBinding()
{
    BasicHttpBinding binding = new BasicHttpBinding();
    binding.AllowCookies = false;
    binding.ReceiveTimeout = new TimeSpan(0, 10, 0);
    binding.OpenTimeout = new TimeSpan(0, 1, 0);
    binding.SendTimeout = new TimeSpan(0, 1, 0);
    // add more based on config file ...

    // buffer size
    binding.MaxBufferSize = 65536;
    binding.MaxBufferPoolSize = 524288;
    binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard;

    // quotas
    binding.ReaderQuotas.MaxDepth = 32;
    binding.ReaderQuotas.MaxStringContentLength = 8192;
    // add more based on config file ...

    return binding;
}

And I use something like this for creating my endpoint address:

public static EndpointAddress CreateEndPoint()
{
    return new EndpointAddress(Configuration.GetServiceUri());
}

The serviceUri will be the service URL such as http://www.myuri.com/Services/Services.svc/basic

Finally, to create the service client:

Binding httpBinding = CreateBasicHttpBinding();
EndpointAddress address = CreateEndPoint();
var serviceClient = new MyServiceClient(httpBinding, address);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/835613', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2684/']}
jdg_378129
stackexchange
llm_judgeable_groundtruth_similarity
2042821
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A$ be a $3\times 3$ matrix with integer entries such that $\det(A)=1$. At most how many entries of $A$ can be even? I get a possible answer of $6$ by considering the $3 \times 3$ identity matrix, but I am not sure whether it is possible to have more than $6$ even entries. Please help me settle this question to prove my answer. Now provide the response and nothing else.
Using Laplace expansion or Sarrus's rule, we have$$ \begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}=aei-afh-bdi+bfg+cdh-ceg$$ In order for this expression to be equal to $1$, it must be odd, meaning that at least one of the $6$ products must be odd. And if one of the products is odd, then all three of the terms in the product must be odd. Therefore there can be at most $6$ even entries, and the identity matrix shows that there can be exactly six.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2042821', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/292174/']}
jdg_378130
stackexchange
llm_judgeable_groundtruth_similarity
17308661
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: here's my query

SELECT cont.FILTER_VALUE as filter,
    o.[OBJECT_ID] as Id,
    o.[OBJECT_NAME] as Name,
    o.DESCRIPTION as Description,
    o.CREATED as Created,
    o.MODIFIED as Modified,
    u.[LOGIN] as LastModifiedByLogin,
    o.[OBJECT_NAME] as ObjectName,
    t.[TEMPLATE_NAME] as TemplateName
    --,p.[PAGE_NAME] as PageName
FROM [OBJECT] AS o
    LEFT OUTER JOIN [CONTAINER] as cont on cont.[OBJECT_ID] = o.[OBJECT_ID]
    LEFT JOIN [OBJECT_VALUES] AS ov ON ov.[OBJECT_ID] = o.[OBJECT_ID]
    LEFT JOIN [PAGE] AS p ON o.[PAGE_ID] = p.[PAGE_ID]
    INNER JOIN [USERS] as u on u.[USER_ID] = o.LAST_MODIFIED_BY
    INNER JOIN [PAGE_TEMPLATE] as t on o.[PAGE_TEMPLATE_ID] = t.[PAGE_TEMPLATE_ID]
    INNER JOIN [site] as s on t.SITE_ID = s.SITE_ID
WHERE s.SITE_ID = '34'
    --AND сont.[FILTER_VALUE] is null--like '%fff%'

And it works nicely, until I remove the comment. Here's a mess of joins, but it still makes sense. I inner join the main table with a couple of others, and left join with the optional ones, so that I have a column that contains cont.FILTER_VALUE as filter ; it's null in some records, and I can get it, but I can't filter by this field. I get The multi-part identifier "сont.FILTER_VALUE" could not be bound. I've looked through similar topics, but found no useful information. I don't use any old SQL dialects: everywhere I use INNER/LEFT joins, tried group by and order by , tried to re-order joins - nothing helped. I guess I just don't understand something important about joins, could you tell me, please. Thanx. Now provide the response and nothing else.
There is no need to have a separate Vagrantfile; you can just define multiple VMs in the same file. See the documentation here: http://docs.vagrantup.com/v2/multi-machine/index.html

If you are just using one VM in your 'normal' environment and one VM for your 'confluence' environment, then it is simply a case of defining each VM and vagrant up -ing the specific VM. If you have multiple machines that make up each of your environments, then you have two options: you can use regular expressions and make sure you name and type the commands correctly, or you can put a bit of logic into your Vagrantfile to make it easier for people. For example, with a little bit of a hack in your Vagrantfile you can do the following:

Vagrant.configure('2') do |config|
  if ARGV[1] == 'confluence'
    ARGV.delete_at(1)
    confluence = true
  else
    confluence = false
  end

  config.vm.provider :virtualbox do |virtualbox, override|
    #virtualbox.gui = true
    virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    virtualbox.customize ["modifyvm", :id, "--memory", 512]
    override.vm.box = 'Ubuntu 12.10 x64 Server'
    override.vm.box_url = 'http://goo.gl/wxdwM'
  end

  if confluence == false
    config.vm.define :normal1 do |normal1|
      normal1.vm.hostname = 'normal1'
      normal1.vm.network :private_network, ip: "192.168.1.1"
    end
    config.vm.define :normal2 do |normal2|
      normal2.vm.hostname = 'normal2'
      normal2.vm.network :private_network, ip: "192.168.1.2"
    end
  end

  if confluence == true
    config.vm.define :confluence1 do |confluence1|
      confluence1.vm.hostname = 'confluence1'
      confluence1.vm.network :private_network, ip: "192.168.1.3"
    end
    config.vm.define :confluence2 do |confluence2|
      confluence2.vm.hostname = 'confluence2'
      confluence2.vm.network :private_network, ip: "192.168.1.4"
    end
  end
end

Now vagrant up brings up your normal VMs, and vagrant up confluence brings up your confluence VMs!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17308661', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1468482/']}
jdg_378131
stackexchange
llm_judgeable_groundtruth_similarity
258050
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: CString is quite handy, while std::string is more compatible with STL containers. I am using hash_map ; however, hash_map does not support CString s as keys, so I want to convert the CString into a std::string . Writing a CString hash function seems to take a lot of time.

CString -----> std::string

How can I do this?

std::string -----> CString:

inline CString toCString(std::string const& str)
{
    return CString(str.c_str());
}

Am I right?

EDIT: Here are more questions: How can I convert from wstring to CString and vice versa?

// wstring -> CString
std::wstring src;
CString result(src.c_str());

// CString -> wstring
CString src;
std::wstring des(src.GetString());

Is there any problem with this? Additionally, how can I convert from std::wstring to std::string and vice versa? Now provide the response and nothing else.
According to CodeGuru :

CString to std::string :

CString cs("Hello");
std::string s((LPCTSTR)cs);

BUT: std::string cannot always construct from a LPCTSTR , i.e. the code will fail for UNICODE builds. As std::string can construct only from LPSTR / LPCSTR , a programmer who uses VC++ 7.x or better can utilize conversion classes such as CT2CA as an intermediary.

CString cs("Hello");
// Convert a TCHAR string to a LPCSTR
CT2CA pszConvertedAnsiString(cs);
// construct a std::string using the LPCSTR input
std::string strStd(pszConvertedAnsiString);

std::string to CString : (From Visual Studio's CString FAQs... )

std::string s("Hello");
CString cs(s.c_str());

CStringT can construct from both character or wide-character strings, i.e. it can convert from char* (i.e. LPSTR ) or from wchar_t* ( LPWSTR ). In other words, the char-specialization (of CStringT ), i.e. CStringA , the wchar_t-specialization CStringW , and the TCHAR-specialization CString can all be constructed from either char or wide-character, null-terminated (null-termination is very important here) string sources.

Although IInspectable amends the "null-termination" part in the comments : NUL-termination is not required . CStringT has conversion constructors that take an explicit length argument. This also means that you can construct CStringT objects from std::string objects with embedded NUL characters.
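Since the question also asks about std::wstring <-> std::string: as a hedged sketch, the same family of ATL conversion classes can bridge those too (this assumes ATL is available and that narrowing to the ANSI code page is acceptable, since characters outside it will be lost):

#include <atlbase.h>  // CW2A / CA2W
#include <string>

// std::wstring -> std::string (narrowing conversion)
std::wstring ws(L"Hello");
std::string s(CW2A(ws.c_str()));

// std::string -> std::wstring
std::string s2("Hello");
std::wstring ws2(CA2W(s2.c_str()));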
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/258050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/25749/']}
jdg_378132
stackexchange
llm_judgeable_groundtruth_similarity
17590528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: String has both PadLeft and PadRight . I am in need of padding both left and right (center justification). Is there a standardized way of doing this, or better yet, a built-in way of achieving the same goal? Now provide the response and nothing else.
Not that I know of. You can create an extension method if you see yourself using it a lot. Assuming you want your string to end up in the center, use something like the following:

public string PadBoth(string source, int length)
{
    int spaces = length - source.Length;
    int padLeft = spaces / 2 + source.Length;
    return source.PadLeft(padLeft).PadRight(length);
}

To make this an extension method, do it like so:

namespace System
{
    public static class StringExtensions
    {
        public static string PadBoth(this string str, int length)
        {
            int spaces = length - str.Length;
            int padLeft = spaces / 2 + str.Length;
            return str.PadLeft(padLeft).PadRight(length);
        }
    }
}

As an aside, I just include my extensions in the System namespace - it's up to you what you do.
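A quick usage sketch (with this implementation, any odd leftover space ends up on the right):

string centered = "Hi".PadBoth(7);
// spaces = 5, padLeft = 5 / 2 + 2 = 4 -> "  Hi", then PadRight(7) -> "  Hi   "
Console.WriteLine("[" + centered + "]");  // prints [  Hi   ]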
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17590528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2083622/']}
jdg_378133
stackexchange
llm_judgeable_groundtruth_similarity
11774048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know C++11 has some standard facilities which would allow one to get integral values from unaligned memory. How could something like this be written in a more standard way?

template <class R>
inline R get_unaligned_le(const unsigned char p[], const std::size_t s)
{
    R r = 0;
    for (std::size_t i = 0; i < s; i++)
        r |= (*p++ & 0xff) << (i * 8);  // take the first 8 bits of the char
    return r;
}

To take the values stored in little-endian order, you can then write:

uint_least16_t value1 = get_unaligned_le<uint_least16_t>(&buffer[0], 2);
uint_least32_t value2 = get_unaligned_le<uint_least32_t>(&buffer[2], 4);

Now provide the response and nothing else.
The answer you link to actually implements exactly the idea you propose at the end of your question. The decisive formula that you need from that answer is this one: th = arccos(2 * (1 - e / r)^2 - 1) This tells you the angle between two vertices, where r is the radius of the circle and e is the maximum error you're willing to tolerate, i.e. the maximum deviation of your polygon from the circle -- this is the error marked in your diagram. For example, you might choose to set e to 0.5 of a pixel. Because th is measured in radians, and 360 degrees (a full circle) is equal to 2*pi in radians, the number of vertices you need is num_vertices = ceil(2*pi/th)
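As a small sketch of putting that formula to work (C++ is chosen arbitrarily here, since the answer is language-agnostic; the radius and tolerance values are examples):

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = std::acos(-1.0);
    double r = 100.0;  // circle radius, e.g. in pixels
    double e = 0.5;    // max tolerated deviation from the true circle
    double th = std::acos(2.0 * (1.0 - e / r) * (1.0 - e / r) - 1.0);
    int num_vertices = (int)std::ceil(2.0 * pi / th);
    std::printf("%d vertices\n", num_vertices);  // ~32 for r = 100, e = 0.5
    return 0;
}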
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11774048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/989331/']}
jdg_378134
stackexchange
llm_judgeable_groundtruth_similarity
897847
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: This is a command that works:

$ echo 'hi there' | docker run -i ubuntu cat
hi there

This is a command that responds with an error message:

$ echo 'hi there' | docker run -it ubuntu cat
the input device is not a TTY

I would like to figure out exactly what happens here. Not just "remove -t and it'll be fixed". I know that docker run 's -t option stands for "Allocate a pseudo-TTY", and I have read historical overviews of what TTY stands for , but it didn't help me understand what kind of a contract is violated here. Now provide the response and nothing else.
This answer helped me wrap my head around it:

- by default (with neither the -i nor the -t option) a Docker container only sends its output to STDOUT,
- with the -i option comes a connection to STDIN,
- the -t option pulls in a terminal interface driver , which works on top of STDIN/STDOUT.

And when a terminal driver is pulled in, the communication with a container must conform to the terminal interface protocol . Piping a string does not.
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/897847', 'https://serverfault.com', 'https://serverfault.com/users/322326/']}
jdg_378135
stackexchange
llm_judgeable_groundtruth_similarity
2057111
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose there exists such a group. Then Lagrange's theorem assures that the group is of even order. But I conclude from this and this that such a group has an odd number of elements of order $2$, giving us a contradiction. Hence there does not exist a finite abelian group $G$ containing exactly $60$ elements of order $2$. More strongly, there does not exist a finite group $G$ containing a (nonzero) even number of elements of order $2$. Is my understanding correct? Now provide the response and nothing else.
Yes, your understanding is correct. Consider the relation $\sim$ on $G$ (having even order, otherwise it has no element of order $2$) defined by$$a\sim b\quad\text{if and only if}\quad(b=a\text{ or }b=a^{-1})$$This is easily seen to be an equivalence relation. The equivalence classes have either one or two elements. If you remove the two-element equivalence classes, you are dropping an even number of elements from $G$, so what remains is an even number. Drop also the class consisting of $1$ and you remain with an odd number of one-element equivalence classes: these elements are precisely those having order $2$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2057111', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/206726/']}
jdg_378136
stackexchange
llm_judgeable_groundtruth_similarity
6922312
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What I have: currently my app is only telling me the coordinates of my current location. What I want: to get the location name from the coordinates fetched by GPS, so that I know exactly where I am (the name of the location). Now provide the response and nothing else.
Here is complete code, from fetching longitude/latitude to getting the address:

LocationManager locationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
String provider = locationManager.getBestProvider(new Criteria(), true);
Location locations = locationManager.getLastKnownLocation(provider);
List<String> providerList = locationManager.getAllProviders();
if (null != locations && null != providerList && providerList.size() > 0) {
    double longitude = locations.getLongitude();
    double latitude = locations.getLatitude();
    Geocoder geocoder = new Geocoder(getApplicationContext(), Locale.getDefault());
    try {
        // Reverse-geocode the coordinates into (at most) one address.
        List<Address> listAddresses = geocoder.getFromLocation(latitude, longitude, 1);
        if (null != listAddresses && listAddresses.size() > 0) {
            String _Location = listAddresses.get(0).getAddressLine(0);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6922312', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/829066/']}
jdg_378137
stackexchange
llm_judgeable_groundtruth_similarity
16214
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: What is known about the computational complexity of factoring integers in general number fields? More specifically: Over the integers we represent integers via their binary expansions. What is the analogous representations of integers in general number fields? Is it known that primality over number fields is in P or BPP? What are the best known algorithms for factoring over number fields? (Do the $\exp \sqrt n$ and the (apparently) $\exp n^{1/3}$ algorithms extend from $\mathbb{Z}$ ?) Here, factoring refers to finding some representation of a number (represented by $n$ bits) as a product of primes. What is the complexity of finding all factorizations of an integer in a number field? Of counting how many distinct factorizations it has? Over $\mathbb{Z}$ it is known that deciding if a given number has a factor in an interval $[a,b]$ is NP-hard. Over the ring of integers in number fields, can it be the case that finding if there is a prime factor whose norm is in a certain interval is already NP-hard? Is factoring in number fields in BQP? Remarks, motivations and updates. Of course the fact that factorization is not unique over number fields is crucial here. The question (especially part 5) was motivated by this blog post over GLL (see this remark ), and also by this earlier TCSexchange question. I presented it also over my blog where Lior Silverman presented a thorough answer . Now provide the response and nothing else.
The following answer was originally posted as a comment on Gil's blog.

(1) Let $K=\mathbb{Q}(\alpha)$ be a number field, where we assume $\alpha$ has a monic minimal polynomial $f\in\mathbb{Z}[x]$. One can then represent elements of the ring of integers $\mathcal{O}_K$ as polynomials in $\alpha$ or in terms of an integral basis -- the two are equivalent.

Now fixing $K$ as in (1), there's a polynomial-time reduction from the problem over $K$ to the problem in $\mathbb{Q}$. To verify that the computations (e.g. intersecting an ideal with $\mathbb{Z}$ or factoring a polynomial mod $p$) can be done in polynomial time, see Cohen's book referred to in the previous answer. As a precomputation, for each rational prime $p$ dividing the discriminant of $\alpha$ (that is, the discriminant of $f$) find all primes of $\mathcal{O}_K$ lying above $p$.

(2) For primality testing, given an ideal $\mathfrak{a}\triangleleft\mathcal{O}_K$ let $p\in\mathbb{Z}$ be such that $\mathfrak{a}\cap\mathbb{Z} = p\mathbb{Z}$ (this can be computed in polynomial time, and the number of bits of $p$ is polynomial in the input). Check in polynomial time whether $p$ is prime. If not, then $\mathfrak{a}$ is not prime. If yes, then find the primes of $\mathcal{O}_K$ lying above $p$, either from the precomputation or by factoring $f$ mod $p$. In any case, if $\mathfrak{a}$ is prime it must be one of those primes.

(3a),(6a) For factoring into primes, given an ideal $\mathfrak{a}\triangleleft\mathcal{O}_K$ find its norm $y = N^K_\mathbb{Q}(\mathfrak{a}) = [\mathcal{O}_K:\mathfrak{a}]$. Again this can be found in polynomial time and consequently is not too large. Factor $y$ in $\mathbb{Z}$ (either classically or using Shor's algorithm, depending on the reduction you want). This gives a list of rational primes dividing $y$, and hence as in (2) we can find the list of primes of $\mathcal{O}_K$ dividing $y$. Since $\mathfrak{a} | y\mathcal{O}_K$ this gives the list of primes dividing $\mathfrak{a}$. Finally, it is easy to determine the exponent to which a prime divides a given ideal.

(3b),(6b) But Gil wants factorization into irreducibles, not into primes. It turns out that given the prime factorization of $x\mathcal{O}_K$ it is possible to efficiently construct one factorization of $x$ into irreducible elements of $\mathcal{O}_K$. For this let $h_K$ be the class number, and note that it is possible to efficiently compute the ideal class of a given ideal. Now to find an irreducible divisor of $x$, select $h_K$ prime ideals (possibly with repetition) from the factorization of $x$. By the pigeon-hole principle some subset of those multiplies to the identity in the class group; find a minimal such subset. Its product is then a principal ideal generated by an irreducible element. Divide $x$ by this element, remove the relevant ideals from the factorization, and repeat. If the factorization has fewer than $h_K$ elements then just take a minimal subset of all the factors.

(4) I think it's possible to count the factorizations into irreducibles, but this is a bit of extra combinatorics -- please give me time to work it out. On the other hand, determining all of them is not interesting in the context of sub-exponential factorization algorithms, since there are in general exponentially many such factorizations.

(5) I have no idea.
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/16214', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/712/']}
jdg_378138
stackexchange
llm_judgeable_groundtruth_similarity
29954109
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While searching for answers relating to "Big O" notation, I have seen many SO answers such as this , this , or this , but still I have not clearly understood some points. Why do we ignore the coefficients? For example this answer says that the final complexity of 2N + 2 is O(N) ; we remove the leading coefficient 2 and the final constant 2 as well. Removing the final constant of 2 is perhaps understandable. After all, N may be very large and so "forgetting" the final 2 may only change the grand total by a small percentage. However I cannot clearly understand how removing the leading coefficient does not make a difference. If the leading 2 above became a 1 or a 3 , the percentage change to the grand total would be large. Similarly, apparently 2N^3 + 99N^2 + 500 is O(N^3) . How do we ignore the 99N^2 along with the 500 ? Now provide the response and nothing else.
The purpose of the Big-O notation is to find what is the dominant factor in the asymptotic behavior of a function as the value tends towards infinity. As we walk through the function domain, some factors become more important than others.

Imagine f(n) = n^3 + n^2 . As n goes to infinity, n^2 becomes less and less relevant when compared with n^3 .

But that's just the intuition behind the definition. In practice we ignore some portions of the function because of the formal definition:

f(x) = O(g(x)) as x -> infinity if and only if there is a positive real M and a real x_0 such that |f(x)| <= M|g(x)| for all x > x_0 .

That's from Wikipedia . What that actually means is that there is a point (after x_0 ) after which some multiple of g(x) dominates f(x) . That definition acts like a loose upper bound on the value of f(x) . From that we can derive many other properties, like f(x) + K = O(f(x)) , f(x^n + x^(n-1)) = O(x^n) , etc. It's just a matter of using the definition to prove those.

In particular, the intuition behind removing the coefficient ( K*f(x) = O(f(x)) ) lies in what we try to measure with computational complexity. Ultimately it's all about time (or any resource, actually). But it's hard to know how much time each operation takes. One algorithm may perform 2n operations and the other n , but the latter may have a large constant time associated with it. So, for this purpose, it isn't easy to reason about the difference between n and 2n .
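A tiny numeric sketch of that intuition, showing how the lower-order terms fade while the leading coefficient merely settles to a fixed ratio (which is exactly why 2N^3 + 99N^2 + 500 is O(N^3)):

for n in (10, 100, 1000, 10000):
    f = 2 * n**3 + 99 * n**2 + 500
    g = n**3
    # The ratio f/g tends to the constant 2 as n grows, so some
    # multiple of n^3 (e.g. M = 3) eventually dominates f.
    print(n, f / g)
# 10 12.4
# 100 2.9905
# 1000 2.0990005
# 10000 2.0099000005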
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29954109', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/534984/']}
jdg_378139
stackexchange
llm_judgeable_groundtruth_similarity
7749937
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I will be creating a structure more or less of the form:

type FileState struct {
    LastModified int64
    Hash         string
    Path         string
}

I want to write these values to a file and read them in on subsequent calls. My initial plan is to read them into a map and look up values (Hash and LastModified) using the key (Path). Is there a slick way of doing this in Go? If not, what file format can you recommend? I have read about and experimented with some key/value file stores in previous projects, but not using Go. Right now, my requirements are probably fairly simple, so a big database server system would be overkill. I just want something I can write to and read from quickly, easily, and portably (Windows, Mac, Linux). Because I have to deploy on multiple platforms, I am trying to keep my non-Go dependencies to a minimum. I've considered XML, CSV, JSON. I've briefly looked at the gob package in Go and noticed a BSON package on the Go package dashboard, but I'm not sure if those apply. My primary goal here is to get up and running quickly, which means the least amount of code I need to write along with ease of deployment. Now provide the response and nothing else.
As long as your entire data set fits in memory, you shouldn't have a problem. Using an in-memory map and writing snapshots to disk regularly (e.g. by using the gob package) is a good idea. The Practical Go Programming talk by Andrew Gerrand uses this technique.

If you need to access those files with different programs, using a popular encoding like json or csv is probably a good idea. If you just have to access those files from within Go, I would use the excellent gob package, which has a lot of nice features .

As soon as your data becomes bigger, it's not a good idea to always write the whole database to disk on every change. Also, your data might not fit into the RAM anymore. In that case, you might want to take a look at the leveldb key-value database package by Nigel Tao, another Go developer. It's currently under active development (but not yet usable), but it will also offer some advanced features like transactions and automatic compression. Also, the read/write throughput should be quite good because of the leveldb design.
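A minimal sketch of the snapshot approach for the FileState map from the question, using gob (the file name is arbitrary and error handling is kept terse):

package main

import (
	"encoding/gob"
	"os"
)

type FileState struct {
	LastModified int64
	Hash         string
	Path         string
}

// save writes the whole map to disk as one gob snapshot.
func save(path string, states map[string]FileState) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewEncoder(f).Encode(states)
}

// load reads a snapshot back; the map is keyed by Path as planned.
func load(path string) (map[string]FileState, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	states := make(map[string]FileState)
	err = gob.NewDecoder(f).Decode(&states)
	return states, err
}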
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7749937', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/36693/']}
jdg_378140