source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1-8)
prompt: stringlengths (209-40.4k)
gold_standard_solution: stringlengths (0-56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138-225)
problem_id: stringlengths (9-10)
stackexchange
llm_judgeable_groundtruth_similarity
133758
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm an undergraduate student in Chemistry currently studying quantum mechanics and I have a problem with unitary transformations. Here in my book, it is stated that Every unitary operator $\hat{\mathcal{U}}$ can be written in an exponential form as follows: $$\mathrm{e}^{-i\alpha\hat{\mathcal{T}}}=\sum_{k=0}^{\infty}\dfrac{1}{k!}\left(-i\alpha\right)^{k}\hat{\mathcal{T}}^{k} $$ Provided that I have no knowledge of Lie Group/Algebra, my questions are: Why a unitary operator can be always represented by an exponential form? What is the intuitive mathematical meaning of the exponential form/matrix? What is the relation between the operator $\hat{\mathcal{U}}$ and the operator $\hat{\mathcal{T}}$? What is its physical meaning? Now provide the response and nothing else.
There's no escaping Lie theory if you want to understand what is going on mathematically. I'll try to provide some intuitive pictures for what is going on in the footnotes, though I'm not sure if it will be what you are looking for.

On any (finite-dimensional, for simplicity) vector space, the group of unitary operators is the Lie group $\mathrm{U}(N)$, which is connected. Lie groups are manifolds, i.e. things that locally look like $\mathbb{R}^n$, and as such possess tangent spaces at every point spanned by the derivatives of their coordinates — or, equivalently, by all possible directions of paths at that point. These directions form, at $g \in \mathrm{U}(N)$, the $N^2$-dimensional vector space $T_g \mathrm{U}(N)$. 1 Canonically, we take the tangent space at the identity $\mathbf{1} \in \mathrm{U}(N)$ and call it the Lie algebra $\mathfrak{g} \cong T_\mathbf{1}\mathrm{U}(N)$.

Now, from tangent spaces, there is something called the exponential map to the manifold itself. It is a fact that, for compact groups, such as the unitary group, said map is surjective onto the part containing the identity. 2 It is a further fact that the unitary group is connected, meaning that it has no parts not connected to the identity, so the exponential map $\mathfrak{u}(N) \to \mathrm{U}(N)$ is surjective, and hence every unitary operator is the exponential of some Lie algebra element. 3 (The exponential map is always surjective locally, so we are in principle able to find exponential forms for other operators, too.)

So, the above (and the notes) answers your first three questions: we can always represent a unitary operator like that since $\mathrm{U}(N)$ is compact and connected; the exponential of an operator means "walking in the direction specified by that operator"; and while $\mathcal{U}$ lies in the Lie group, $\mathcal{T}$ lies, as its generator, in the Lie algebra. One also says that $\mathcal{T}$ is the infinitesimal generator of $\mathcal{U}$, since, in $\mathrm{e}^{\alpha \mathcal{T}}$, we can see it as giving only the direction of the operation, while $\alpha$ tells us how far from the identity the generated exponential will lie.

The physical meaning is a difficult thing to state in general - often, $\mathcal{T}$ will be the generator of a symmetry, and the unitary operator $\mathcal{U}$ is the finite version of that symmetry. For example, the Hamiltonian $H$ generates the time translation $U$, the angular momenta $L_i$ generate the rotations $\mathrm{SO}(3)$, and so on, and so forth — the generator is always the infinitesimal version of the exponentiated operator in the sense that $$ \mathrm{e}^{\epsilon T} = 1 + \epsilon T + \mathcal{O}(\epsilon^2)$$ so the generated operator will, for small $\epsilon$, be displaced from the identity by almost exactly $\epsilon T$.

1 Think of the circle (which is $\mathrm{U}(1)$): at every point on the circle, you can draw the tangent to it - which is $\mathbb{R}$, a 1D vector space. The length of the tangent vector specifies "how fast" the path in that direction will be traversed.

2 Think of the two-dimensional sphere (which is, sadly, not a Lie group, but illustrative for the exponential map). Take the tangent space at one point and imagine you are actually holding a sheet of paper next to a sphere. Now "crumple" the paper around the sphere. You will end up covering the whole sphere, and if the paper is large enough (it would have to be infinite to represent the tangent space), you can even wind it around the sphere multiple times, thus showing that the exponential map cannot be injective, but is easily seen to be surjective. A more precise notion of this crumpling would be to fix some measure of length on the sphere and map every vector in the algebra to a point on the sphere by walking in the direction indicated by the vector exactly as far as its length tells you.

3 This is quite easy to understand - if there were some part of the group wholly disconnected from our group, or if our group had infinite volume (if it were non-compact), we could not hope to cover it wholly with only one sheet of paper, no matter how large.
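As a concrete check to complement the Lie-theoretic picture above (a standard textbook computation, not part of the quoted answer): assume, as the $-i$ in the physics convention suggests, that $\hat{\mathcal{T}}$ is Hermitian ($\hat{\mathcal{T}}^{\dagger}=\hat{\mathcal{T}}$) and that $\alpha$ is real. Then the exponential series itself shows why $\hat{\mathcal{U}}=\mathrm{e}^{-i\alpha\hat{\mathcal{T}}}$ is unitary:

$$\hat{\mathcal{U}}^{\dagger}=\left(\sum_{k=0}^{\infty}\frac{(-i\alpha)^{k}}{k!}\hat{\mathcal{T}}^{k}\right)^{\!\dagger}=\sum_{k=0}^{\infty}\frac{(+i\alpha)^{k}}{k!}\left(\hat{\mathcal{T}}^{\dagger}\right)^{k}=\mathrm{e}^{+i\alpha\hat{\mathcal{T}}},$$

and since $\hat{\mathcal{T}}$ commutes with itself,

$$\hat{\mathcal{U}}^{\dagger}\hat{\mathcal{U}}=\mathrm{e}^{+i\alpha\hat{\mathcal{T}}}\,\mathrm{e}^{-i\alpha\hat{\mathcal{T}}}=\mathbf{1}.$$

So every Hermitian operator (i.e. every observable) generates a one-parameter family of unitaries $\hat{\mathcal{U}}(\alpha)$; the content of the answer above is the converse statement, that every unitary can be written this way for some Hermitian $\hat{\mathcal{T}}$.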
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/133758', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/58442/']}
jdg_370741
stackexchange
llm_judgeable_groundtruth_similarity
259694
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Through allows multiple functions to be easily applied to a single argument. For instance, suppose I wanted to count the number of characters in {"t", "o", "d", "a", "y"} , and also join them into a single string. Then I could do: s1 = {"t", "o", "d", "a", "y"}Through[{StringJoin, Length}[#]]& @ s1 {today, 5} But suppose I wanted to do this for a list of lists. Then I would need to use Map to apply the functions to each sublist. I could accomplish that with this: s2 = {{"h", "e", "l", "l", "o"}, {"d", "a", "y"}}Thread@ MapThread[# /@ s2 &, {{StringJoin, Length}}] {{hello, 5}, {day, 3}} But that syntax is a bit ugly. It would be simpler and more elegant if there were a function that could do for Map what Through does for Apply , i.e. that allowed one to directly Map multiple functions to a single argument, like so (here I've called the hypothetical function MultiMap ): MultiMap[{StringJoin, Length}[#]]& @ s2 {{hello, 5}, {day, 3}} Does such a function exist? Now provide the response and nothing else.
ClearAll[mapThrough]
mapThrough[a_List] := Map[Through @ a @ ## &]

or, as suggested by J.M. in comments,

mapThrough[a_] := Map @ Through @* a

or

mapThrough = a |-> Map @ Through @* a

Examples:

s2 = {{"h", "e", "l", "l", "o"}, {"d", "a", "y"}};
mapThrough[{StringJoin, Length}] @ s2

{{"hello", 5}, {"day", 3}}

Use with an arbitrary list of functions:

mapThrough[{StringReverse @* StringJoin, Reverse, First, Last, #[[{1, -1}]] &}] @ s2

{{"olleh", {"o", "l", "l", "e", "h"}, "h", "o", {"h", "o"}}, {"yad", {"y", "a", "d"}, "d", "y", {"d", "y"}}}
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/259694', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/45215/']}
jdg_370742
stackexchange
llm_judgeable_groundtruth_similarity
5
Below is a question asked on the forum mathematica.meta.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Graphics are tightly integrated into the Mathematica interface. The Front End is programmable, and Mathematica has functions to interface with the web, so the question naturally comes up: Could we make it possible to upload images to StackExchange directly from Mathematica, using a palette button, and without the need to first save them to disk? See the answer below for an implementation! Now provide the response and nothing else.
Update 30th March 2015: As of today, the old repository will no longer be maintained. All the former functionality can now be found in the new SE Tools package. The online repository contains a very detailed description of how it works and how it can be installed.

Original Post

This is my implementation (with contributions from @halirutan and help from a number of people) of an image uploader palette, which I would like to share with the community to make it more convenient to use this site.

USAGE: When correctly installed, you should see a palette like this: To upload, just select a graphics (or any other part of the notebook), and press the "Upload to SE" button to get a preview before uploading: On Windows there are two buttons: "Upload to SE" and "Upload to SE (pp)". The "pp" (pixel perfect) one will rasterize the selected notebook element exactly the same way you see it in the notebook. Unfortunately I haven't (yet) been able to make this work on platforms other than Windows. The "Upload to SE" button will reformat everything to a width of 650 pixels and will discard any style/magnification information. You can also see the history using the "History..." button. This will show you your recent uploads, and you can click on an existing image to copy its URL or clear the history. To update, simply use the Update... button. If there is an update available, this button will turn pink. The palette automatically checks for updates every few days (if it is open). You can also watch a screencast showing how to use the palette.

BUG REPORTS, SUGGESTIONS, DEVELOPMENT: Ideas, suggestions, code improvements, and problem reports are most welcome! Please use the GitHub bug tracker for bug reports or feature requests. Just comment on this post for anything else. The source code is available through GitHub. Feel free to fix problems yourself and send pull requests. This can even be done through the GitHub web interface.

KNOWN ISSUES

There are a few problems that have happened to people, but I am not able to reproduce them. If you can come up with a way to reproduce any of these, please contact me!

Sometimes none of the palette buttons do anything. Pressing Update or History will not bring up a new window either. If this happens to you, close the palette and re-open it from the Palettes menu. If that doesn't fix the problem, then open the palette, evaluate SEUploader`checkOnlineVersion[], then close and re-open the palette.

Sometimes no palette buttons show at all, just the Mathematica.SE logo on the left. If you can reproduce this, please contact me.

The thumbnails of old uploads (upload history) may get corrupted for reasons unknown to me. The symptom is an error or hang when you press the History button, or an error every time you try to upload. To fix this, first try clearing the history in the History... dialog. If the front end hangs when you try to open the History window, clear the history as follows:

1. Close the palette and restart the Front End.
2. Identify the file name of the palette. It's found here: SystemOpen@FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "Palettes"}]. Let's call it SE Uploader.nb.
3. Evaluate CurrentValue[$FrontEnd, {"PalettesMenuSettings", "SE Uploader.nb", TaggingRules}] = {} and restart the front end. Make sure you use the correct file name for your system in place of SE Uploader.nb. Only use the file name, not the full path. Alternatively, this front end option can be cleared using Format -> Option Inspector after selecting Global Preferences in the top left dropdown.
4. Restart the Front End again, open the palette and check that the History... button brings up an empty window. If the problem was due to corrupted history entries, it should be fixed now.

When running in HiDPI mode on OS X, there may be a thin line on the right edge of uploaded images.

UNINSTALLING

If you used the suggested method to install the palette, the following will remove it completely and clear all settings. This is useful if you are having problems with the palette and want to try reinstalling it. Close the palette, then evaluate the following:

DeleteFile[FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "Palettes", "SE Uploader.nb"}]]
CurrentValue[$FrontEnd, {"PalettesMenuSettings", "SE Uploader.nb"}] = {}
CurrentValue[$FrontEnd, TaggingRules] = DeleteCases[ CurrentValue[$FrontEnd, TaggingRules], "SEUploaderLastUpdateCheck" | "SEUploaderVersion" -> _]

Now restart the front end (quit Mathematica completely).
{}
{'log_upvote_score': 8, 'links': ['https://mathematica.meta.stackexchange.com/questions/5', 'https://mathematica.meta.stackexchange.com', 'https://mathematica.meta.stackexchange.com/users/12/']}
jdg_370743
stackexchange
llm_judgeable_groundtruth_similarity
5949194
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following HTML: <div id="Kees_1_test">test 1</div><div id="Kees_12_test">test 2</div><div id="Kees_335_test">test 3</div> I would like a selector that selects the divs that look like $('div[id=Kees_{any number}_test]') . How can I achieve this? Note: the ID's are generated by Asp.Net. Now provide the response and nothing else.
Try this:

$('div[id^=Kees_][id$=_test]')

That selector selects all elements that have ids that start with Kees_ and end with _test. As lonesomeday suggested, you can use .filter() to ensure that the middle part contains only numbers. You can combine .filter() with the example above:

$('div[id^=Kees_][id$=_test]').filter(function() {
    return /^Kees_\d+_test$/.test(this.id);
});

That should be about as good as it gets. Note that I added ^ and $ to the regex; this makes it return false on ids such as Kees_123_test_foo, while Kees_123_test still passes.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5949194', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/201482/']}
jdg_370744
stackexchange
llm_judgeable_groundtruth_similarity
21538772
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a method that calls an angular service and consequently makes an ajax request via the service. I need to make sure that if this is called several times, the previous request in aborted (if it hasn't already been resolved that is). This method can get called multiple times. This method is actually from ngTable on ngTableParams : getData = function($defer, params) { myService.getRecord(params).then(function(res){ ... $defer.resolve(res.Records); }); } Here's the method on the service: this.getRecords = function(params) { ... return Restangular .all('/api/records') .post(filters);}; If ngTable makes 3 calls I want the first 2 to be aborted (unless of course they returned so fast that it got resolved) Now provide the response and nothing else.
You can abort $http calls via the timeout config property, which can be a promise that aborts the request when resolved. So in Restangular, you can do it like this:

var abort = $q.defer();
Restangular.one('foos', 12345).withHttpConfig({timeout: abort.promise}).get();
abort.resolve();

To integrate it with your use case, for example, you could have this in your service:

var abortGet;
this.getRecords = function(params) {
    ...
    if (abortGet) abortGet.resolve();
    abortGet = $q.defer();
    return Restangular
        .all('/api/records')
        .withHttpConfig({timeout: abortGet.promise})
        .post(filters);
}

This way calling getRecords always aborts the previous call if it has not been resolved yet!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21538772', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1267778/']}
jdg_370745
stackexchange
llm_judgeable_groundtruth_similarity
27850989
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If name is declared final , why i can still call name.append and the output is: shreya ? I thought final variables cannot be changed once a value is assigned? public class Test1 { final static StringBuilder name = new StringBuilder("sh"); public static void main(String[] args) { name.append("reya"); System.out.println(name); }} Now provide the response and nothing else.
final refers to not being able to change the reference, e.g. you cannot say name = new StringBuilder() . It does not make the referenced object immutable. Immutability is a property of a class . An object of a mutable type is always mutable.
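A minimal, self-contained Java sketch of that distinction (the class name FinalDemo and the commented-out line are only for illustration):

public class FinalDemo {
    public static void main(String[] args) {
        final StringBuilder name = new StringBuilder("sh");

        name.append("reya");          // allowed: this mutates the object, not the reference
        System.out.println(name);     // prints "shreya"

        // name = new StringBuilder("x");   // would not compile: cannot assign a value to final variable 'name'

        final String s = "sh";                 // String is an immutable class, so s can never change
        System.out.println(s.concat("reya"));  // concat returns a brand-new String; s is untouched
    }
}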
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27850989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3680045/']}
jdg_370746
stackexchange
llm_judgeable_groundtruth_similarity
160080
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Unsure if this is more chemistry or physics, but here goes... Consider a Bunsen burner and the flame it produces when the air inlets are closed vs open.According to the description on the linked Wikipedia page, combustion is incomplete when the air inlets are closed, but essentially complete when the air inlets are suitably adjusted. This raises a number of questions about what is going on in each scenario. Why is combustion incomplete when the inlets are closed? Wouldn't gases mix as they rise up allowing most or all of the fuel to burn? Where exactly is combustion occurring relative to the incandescence? Obviously no lower than the tip of the burner where the fuel gas meets air, but how far up the visible flame? According to the answer to this question , combustion is only occurring at the outer shell, but how does fuel remain unexposed to air until it is too cool to react? It would seem that even when the inlets are open, combustion only starts at the tip of the burner, not inside the barrel... why? What prevents the flame from migrating down inside the barrel where fuel and air are already mixing? Motivating my question is a desire to understand how my gas fireplace can make a pretty yellow flame while the flames of a gas cooktop or gas barbecue are blue. Now provide the response and nothing else.
As Ed V has told you, there are plenty of mixed questions, but very interesting ones. I will try to address your key question. Consider a Bunsen burner and the flame it produces when the air inlets are closed vs open. According to the description on the linked Wikipedia page, combustion is incomplete when the air inlets are closed, but essentially complete when the air inlets are suitably adjusted. The key point is Bernoulli's principle in action in a Bunsen burner. If you look at the base of the burner, fuel enters the vertical tube via a very narrow orifice (see part "a" in the picture). Therefore, the velocity of the fuel is pretty high at the base of the burner. This also creates a vacuum at the base of the burner. Depending on the size of the air opening at the base (adjustable in modern burners), air is drawn in. Now the diameter of the burner tube is not very narrow, so there is no turbulence (my own speculation); rather, the gases (fuel and oxidant) have a laminar flow. In perfect laminar flow, fluids should not mix at all. The blue cone in the flame reflects the shape of the laminar flow of the gases: fastest in the center of the tube and slower near the tube periphery. The amount of oxygen entering the orifice is still not sufficient to burn all the fuel. You have to rely on the diffusion of oxygen in the air to burn the remaining fuel. Important point: in a pure blue flame, there is no incandescence. This blue color is molecular emission. When the inlet is closed, the air supply is definitely limited. The flame is relatively yellowish. The cone vanishes and now you have incandescence from soot particles forming from incomplete combustion. What is rather interesting (I saw a video on this a long time ago on YouTube) was the question: what if you supply oxygen and burn the flame in an atmosphere of the fuel gas? All these flame structures vanish in a so-called reverse flame. Picture taken from J. Chem. Educ. 2000, 77, 5, 558. EDIT: A reader asks in the comments for evidence of laminar flow. Think of the diameter of the Bunsen burner tube, and think of the gas velocities needed to create a Reynolds number greater than 2000. With turbulence, you will not get a beautiful blue cone (indicative of laminar flow). See the example here: Turbulence in flames
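To put a rough number behind that last remark (the figures here are illustrative assumptions, not measurements from the answer): taking a burner tube diameter $D \approx 1\ \mathrm{cm}$, a mixture velocity $v \approx 1\ \mathrm{m\,s^{-1}}$, and a kinematic viscosity $\nu \approx 1.6\times10^{-5}\ \mathrm{m^2\,s^{-1}}$ for the gas mixture,

$$\mathrm{Re}=\frac{vD}{\nu}\approx\frac{(1\ \mathrm{m\,s^{-1}})(0.01\ \mathrm{m})}{1.6\times10^{-5}\ \mathrm{m^2\,s^{-1}}}\approx 6\times10^{2},$$

which is well below the usual pipe-flow transition value of about 2000 and consistent with the laminar, blue-cone picture described above.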
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/160080', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/30402/']}
jdg_370747
stackexchange
llm_judgeable_groundtruth_similarity
412169
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Question: Prove that $(x^n\ln x)^{(n)} = n!(\ln x+1+\frac 12 + ... + \frac 1n)$ What I tried: Using Leibnitz's theorem, with $f=x^n$ and $g=\ln x$.So $$f^{(j)}=n\cdots(n-j+1)x^{n-j} , g^{(j)}=(-1)^{j+1} \dfrac 1{x^{n-j}}$$But somehow I get stuck on the way... Now provide the response and nothing else.
Note that any automorphism $\sigma$ must send $\alpha\in \mathbb Q(\sqrt2,\sqrt3,\sqrt5)$ to another root of the minimal polynomial of $\alpha$. In particular, $\sigma\sqrt2=\pm \sqrt2,\sigma\sqrt3=\pm\sqrt3,$ and $\sigma\sqrt5=\pm\sqrt5$. Since $\{\sqrt2,\sqrt3,\sqrt5\}$ is a generating set for $\mathbb Q(\sqrt2,\sqrt3,\sqrt5)$ as a field over $\mathbb Q$, we see that $\sigma$ is uniquely determined by $\sigma\sqrt2,\sigma\sqrt3,$ and $\sigma\sqrt5$. Thus the only possible automorphisms are$$\begin{align}a+b\sqrt2+c\sqrt3+d\sqrt5&\mapsto a+b\sqrt2+c\sqrt3+d\sqrt5\\a+b\sqrt2+c\sqrt3+d\sqrt5&\mapsto a-b\sqrt2+c\sqrt3+d\sqrt5\\&\vdots\\a+b\sqrt2+c\sqrt3+d\sqrt5&\mapsto a-b\sqrt2-c\sqrt3-d\sqrt5\\\end{align}$$and in order to show that these are all automorphisms, it suffices to show that there are exactly $8$ automorphisms of $\mathbb Q(\sqrt2,\sqrt3,\sqrt5)$. Since $\mathbb Q(\sqrt2,\sqrt3,\sqrt5)$ is the splitting field of a polynomial, we know that it is Galois, so the number of automorphisms it has is equal to its degree. Recall that$$[\mathbb Q(\sqrt2,\sqrt3,\sqrt5):\mathbb Q]=[\mathbb Q(\sqrt2,\sqrt3,\sqrt5):\mathbb Q(\sqrt2,\sqrt3)][\mathbb Q(\sqrt2,\sqrt3):\mathbb Q(\sqrt2)][\mathbb Q(\sqrt2):\mathbb Q]$$and so it suffices to show that each degree on the RHS is $2$. Clearly they are at most $2$, as each extension is obtained by adjoining a root of a quadratic polynomial. Thus it suffices to show that each extension on the RHS is nontrivial, i.e. that $\sqrt2\notin \mathbb Q,\sqrt3\notin \mathbb Q(\sqrt2)$ and $\sqrt5\notin \mathbb Q(\sqrt2,\sqrt3)$. The first is a famous theorem. Since $\mathbb Q(\sqrt2)$ has basis $\{1,\sqrt2\}$, if $\sqrt3\in \mathbb Q(\sqrt2)$ we would have $\sqrt3=a+b\sqrt2$ with $a,b\in\mathbb Q$, so $3=a^2+2b^2+2ab\sqrt2$. Since $\sqrt2$ is irrational we must have $2ab=0$, so $a=0$ or $b=0$, thus we need only observe that $3$ and $3/2$ are not squares in $\mathbb Q$. The same technique (with some additional effort) works to show that $\sqrt5\notin \mathbb Q (\sqrt2,\sqrt3)$, observing that $\{1,\sqrt2,\sqrt3,\sqrt6\}$ is a basis for $\mathbb Q(\sqrt2,\sqrt3)$. Once you see that these are the automorphisms, it should be relatively easy to see what their fixed fields are. For example, the map$$a+b\sqrt2+c\sqrt3+d\sqrt5\mapsto a-b\sqrt2+c\sqrt3+d\sqrt5$$has fixed field $\mathbb Q(\sqrt3,\sqrt5)$ while the map$$a+b\sqrt2+c\sqrt3+d\sqrt5\mapsto a-b\sqrt2-c\sqrt3+d\sqrt5$$has fixed field $\mathbb Q(\sqrt6,\sqrt5)$ (why $\sqrt6$?). These fixed fields are all the maximal subfields of $\mathbb Q(\sqrt2,\sqrt3,\sqrt5)$, and the remaining intersections are pairwise intersections of these subfields (since the only other nontrivial subgroups of the Galois group are generated by pairs of automorphisms), which are easy to determine.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/412169', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43266/']}
jdg_370748
stackexchange
llm_judgeable_groundtruth_similarity
55169659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I import in my .ts component file a .json it says me Cannot find module ../../util/provinces.json . Consider using --resolveJsonModule to import module with .json extensionts (2732).How can I import a .json file with angular2? Ts file Now provide the response and nothing else.
Sharpen square edges. Load the image, convert to grayscale, median blur to smooth, and sharpen to enhance edges.

Obtain binary image and remove noise. We threshold to obtain a black/white binary image. Depending on the image, Otsu's thresholding or adaptive thresholding would work. From here we create a rectangular kernel and perform morphological transformations to remove noise and enhance the square contours.

Detect and extract squares. Next we find contours and filter using minimum/maximum threshold area. Any contours that pass our filter will be our squares, so to extract each ROI, we obtain the bounding rectangle coordinates, crop using Numpy slicing, and save each square image.

Sharpen the image with cv2.filter2D() using a generic sharpening kernel; other kernels can be found here. Now threshold to get a binary image. There are little particles of noise, so to remove them we perform morphological operations. Next, find contours and filter using cv2.contourArea() with minimum/maximum threshold values. We can crop each desired square region using Numpy slicing and save each ROI like this:

x,y,w,h = cv2.boundingRect(c)
ROI = image[y:y+h, x:x+w]
cv2.imwrite('ROI_{}.png'.format(image_number), ROI)

import cv2
import numpy as np

# Load image, grayscale, median blur, sharpen image
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.medianBlur(gray, 5)
sharpen_kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
sharpen = cv2.filter2D(blur, -1, sharpen_kernel)

# Threshold and morph close
thresh = cv2.threshold(sharpen, 160, 255, cv2.THRESH_BINARY_INV)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
close = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=2)

# Find contours and filter using threshold area
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

min_area = 100
max_area = 1500
image_number = 0
for c in cnts:
    area = cv2.contourArea(c)
    if area > min_area and area < max_area:
        x,y,w,h = cv2.boundingRect(c)
        ROI = image[y:y+h, x:x+w]
        cv2.imwrite('ROI_{}.png'.format(image_number), ROI)
        cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2)
        image_number += 1

cv2.imshow('sharpen', sharpen)
cv2.imshow('close', close)
cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/55169659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11204891/']}
jdg_370749
stackexchange
llm_judgeable_groundtruth_similarity
12992482
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have application in which check-box tree is present. I want to pre-populate the check-box's if user before checked some of those check box. For that am getting XML format from my back-end perl script as shown below.like, in below XML only 0, 43,44,45,46 and 50 are coming so only those respective checkbox need to checked on page load.I want to display those checked check-box on page load by calling perl script from blackened parsing XMl .How can I do this? <perldata> <hashref memory_address="0x86f4880"> <item key="0">1</item> </hashref> </perldata> <perldata> <hashref memory_address="0x86f4880"> <item key="43">1</item> </hashref> </perldata> <perldata> <hashref memory_address="0x86f4880"> <item key="44">1</item> </hashref> </perldata> <perldata> <hashref memory_address="0x86f4880"> <item key="45">1</item> </hashref> </perldata> <perldata> <hashref memory_address="0x86f4880"> <item key="46">1</item> </hashref> </perldata> <perldata> <hashref memory_address="0x86f4880"> <item key="50">1</item> </hashref> </perldata> Now provide the response and nothing else.
Using Swift 2.x (for Swift 3.x , Swift 4.x and Swift 5.x read here below..) , i've found this fantastic article about C Callbacks in Swift . Trying to obtain "all the CGPoint (s)", as explained by Lily Ballard , can be a bad idea as she said. So, I think maybe the best way is to get the path elements points used to create a particular CGPath : //MARK: - CGPath extensionsextension CGPath { func forEach(@noescape body: @convention(block) (CGPathElement) -> Void) { typealias Body = @convention(block) (CGPathElement) -> Void func callback(info: UnsafeMutablePointer<Void>, element: UnsafePointer<CGPathElement>) { let body = unsafeBitCast(info, Body.self) body(element.memory) } print(sizeofValue(body)) let unsafeBody = unsafeBitCast(body, UnsafeMutablePointer<Void>.self) CGPathApply(self, unsafeBody, callback) } func getPathElementsPoints() -> [CGPoint] { var arrayPoints : [CGPoint]! = [CGPoint]() self.forEach { element in switch (element.type) { case CGPathElementType.MoveToPoint: arrayPoints.append(element.points[0]) case .AddLineToPoint: arrayPoints.append(element.points[0]) case .AddQuadCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) case .AddCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayPoints.append(element.points[2]) default: break } } return arrayPoints }} With this extension you can do for example: var bezier = UIBezierPath(ovalInRect: CGRectMake(0, 0, 400, 300))let myOval = bezier.CGPathlet junctionPoints = myOval.getPathElementsPoints()print("junction points are: \(junctionPoints)") Swift 3.x and Swift 4.1 (look below for Swift 4.2 or major..) (there are some corrections due to syntax re-introduction of @convention(c) ): extension CGPath { func forEach( body: @convention(block) (CGPathElement) -> Void) { typealias Body = @convention(block) (CGPathElement) -> Void let callback: @convention(c) (UnsafeMutableRawPointer, UnsafePointer<CGPathElement>) -> Void = { (info, element) in let body = unsafeBitCast(info, to: Body.self) body(element.pointee) } print(MemoryLayout.size(ofValue: body)) let unsafeBody = unsafeBitCast(body, to: UnsafeMutableRawPointer.self) self.apply(info: unsafeBody, function: unsafeBitCast(callback, to: CGPathApplierFunction.self)) } func getPathElementsPoints() -> [CGPoint] { var arrayPoints : [CGPoint]! = [CGPoint]() self.forEach { element in switch (element.type) { case CGPathElementType.moveToPoint: arrayPoints.append(element.points[0]) case .addLineToPoint: arrayPoints.append(element.points[0]) case .addQuadCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) case .addCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayPoints.append(element.points[2]) default: break } } return arrayPoints } func getPathElementsPointsAndTypes() -> ([CGPoint],[CGPathElementType]) { var arrayPoints : [CGPoint]! = [CGPoint]() var arrayTypes : [CGPathElementType]! 
= [CGPathElementType]() self.forEach { element in switch (element.type) { case CGPathElementType.moveToPoint: arrayPoints.append(element.points[0]) arrayTypes.append(element.type) case .addLineToPoint: arrayPoints.append(element.points[0]) arrayTypes.append(element.type) case .addQuadCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayTypes.append(element.type) arrayTypes.append(element.type) case .addCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayPoints.append(element.points[2]) arrayTypes.append(element.type) arrayTypes.append(element.type) arrayTypes.append(element.type) default: break } } return (arrayPoints,arrayTypes) }} Swift > 4.1 (also Swift 5.x) and iOS 9.x and > compatible extension CGPath { func forEach( body: @escaping @convention(block) (CGPathElement) -> Void) { typealias Body = @convention(block) (CGPathElement) -> Void let callback: @convention(c) (UnsafeMutableRawPointer, UnsafePointer<CGPathElement>) -> Void = { (info, element) in let body = unsafeBitCast(info, to: Body.self) body(element.pointee) } //print(MemoryLayout.size(ofValue: body)) let unsafeBody = unsafeBitCast(body, to: UnsafeMutableRawPointer.self) self.apply(info: unsafeBody, function: unsafeBitCast(callback, to: CGPathApplierFunction.self)) } func getPathElementsPoints() -> [CGPoint] { var arrayPoints : [CGPoint]! = [CGPoint]() self.forEach { element in switch (element.type) { case CGPathElementType.moveToPoint: arrayPoints.append(element.points[0]) case .addLineToPoint: arrayPoints.append(element.points[0]) case .addQuadCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) case .addCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayPoints.append(element.points[2]) default: break } } return arrayPoints } func getPathElementsPointsAndTypes() -> ([CGPoint],[CGPathElementType]) { var arrayPoints : [CGPoint]! = [CGPoint]() var arrayTypes : [CGPathElementType]! = [CGPathElementType]() self.forEach { element in switch (element.type) { case CGPathElementType.moveToPoint: arrayPoints.append(element.points[0]) arrayTypes.append(element.type) case .addLineToPoint: arrayPoints.append(element.points[0]) arrayTypes.append(element.type) case .addQuadCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayTypes.append(element.type) arrayTypes.append(element.type) case .addCurveToPoint: arrayPoints.append(element.points[0]) arrayPoints.append(element.points[1]) arrayPoints.append(element.points[2]) arrayTypes.append(element.type) arrayTypes.append(element.type) arrayTypes.append(element.type) default: break } } return (arrayPoints,arrayTypes) }}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12992482', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1745424/']}
jdg_370750
stackexchange
llm_judgeable_groundtruth_similarity
18667633
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible to use Async when using ForEach? Below is the code I am trying: using (DataContext db = new DataLayer.DataContext()){ db.Groups.ToList().ForEach(i => async { await GetAdminsFromGroup(i.Gid); });} I am getting the error: The name 'Async' does not exist in the current context The method the using statement is enclosed in is set to async. Now provide the response and nothing else.
List<T>.ForEach doesn't play particularly well with async (neither does LINQ-to-objects, for the same reasons). In this case, I recommend projecting each element into an asynchronous operation, and you can then (asynchronously) wait for them all to complete. using (DataContext db = new DataLayer.DataContext()){ var tasks = db.Groups.ToList().Select(i => GetAdminsFromGroupAsync(i.Gid)); var results = await Task.WhenAll(tasks);} The benefits of this approach over giving an async delegate to ForEach are: Error handling is more proper. Exceptions from async void cannot be caught with catch ; this approach will propagate exceptions at the await Task.WhenAll line, allowing natural exception handling. You know that the tasks are complete at the end of this method, since it does an await Task.WhenAll . If you use async void , you cannot easily tell when the operations have completed. This approach has a natural syntax for retrieving the results. GetAdminsFromGroupAsync sounds like it's an operation that produces a result (the admins), and such code is more natural if such operations can return their results rather than setting a value as a side effect.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/18667633', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/256409/']}
jdg_370751
stackexchange
llm_judgeable_groundtruth_similarity
603981
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Until the last years, my knowledge was that the rechargeable analogue of the 1.5 V batteries have only 1.2 V. This greatly decreased their usability (many devices did not work well with 1.2 V). The cause of this lower voltage was chemical and it was considered a hard limit until some years ago. However, today there are a lot of 1.5 V rechargeable batteries (lithium or nickel-based). I experienced it as a new development, in the last few years. They seem to do what they state. Sometimes they are going so far that they even have an USB connector to charge it, like this: These batteries can be charged by USB (much more than 1.5 V), but they still give 1.5 V. Other batteries, without an USB connector, can be charged with 1.5 V. In all aspects they look like 1.5 V batteries, except that they look chargeable. How do they work? Maybe they have some integrated DC-to-DC transformer circuitry? Now provide the response and nothing else.
The batteries you show use Li-ion cells internally and include a step-down and a USB 5V to Li-ion charge controller. They’re essentially power banks that output 1.5V with a step-down instead of 5V or 12V with a step-up. (Why must the battery be Li-ion? For this specific cell form factor [AA], a comparable NiMH battery is only 600-1000mAh (720 ~ 1200mWh.) So NiMH chemistry could not achieve 3500mWh at 1.5V (~2300mAh) in that form factor: it's just not dense enough. Not so with Li-ion, as we see.) Such externally-charged '1.5V' Li-ion batteries have been around for a while, since 2014, initially made by the Chinese company Kentli. The Kentli battery and its competitors use a single 3.7V Li-ion cell, with the electronics housed typically in a 'hat' at the (+) end. Kentli's battery uses a recessed 'ring' that connects directly to the battery for charging as opposed to USB, and thus requires a proprietary charger. Nonetheless, the most common '1.5V' rechargeable seems to be the micro-USB ones, like you've shown. Your example appears to be from Poover , an unfortunate company name transliteration if there ever was one. (Maybe it's "The official rechargeable of the Hundred Acre Wood." Yeah, that's it.) On the other hand, here's a video teardown of a '1.5V' rechargeable Li-ion based battery that doesn't use USB or a hokey proprietary connector at all. It's more clever in my opinion (charges with 5V applied via a passive 5V-powered charger.) See https://www.youtube.com/watch?v=L62tCyOP06w and https://budgetlightforum.com/node/69034
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/603981', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/35118/']}
jdg_370752
stackexchange
llm_judgeable_groundtruth_similarity
40710628
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a very weird issue in my lodash codes I have something like data = { 'id':'123', 'employee_name': 'John', 'employee_type': 'new' }var newObj = _.mapValues(data, function (value, key) { var t = _.camelCase(key); console.log(t) -> shows employeeName and employeeType return _.camelCase(key);}); I was expecting my newObj will become data = { 'id':'123', 'employeeName': 'John', 'employeeType': 'new' } after I ran the codes above, it still stays the same as it was like data = { 'id':'123', 'employee_name': 'John', 'employee_type': 'new' } This is super weird and I'm not sure what went wrong. Can someone help me about this? Thanks a lot! Now provide the response and nothing else.
Use _.mapKeys() instead of _.mapValues():

var data = { 'id': '123', 'employee_name': 'John', 'employee_type': 'new' };
var newObj = _.mapKeys(data, (value, key) => _.camelCase(key));
console.log('newObj: ', newObj);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.2/lodash.min.js"></script>

If you need to ignore the redundant value param, you can use _.rearg() on _.camelCase() to generate a function that takes the 2nd param (the key) instead of the 1st param (the value).

var data = { 'id': '123', 'employee_name': 'John', 'employee_type': 'new' };
var newObj = _.mapKeys(data, _.rearg(_.camelCase, 1));
console.log('newObj: ', newObj);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.2/lodash.min.js"></script>
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40710628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6480044/']}
jdg_370753
stackexchange
llm_judgeable_groundtruth_similarity
55723434
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have built several angular (2+ typescript) applications over the last few years and am quite used to it. However in a project I am getting involved with we are using a micro service architecture which encourages that everything be split into as small usable (and sensible) parts as possible which are later loaded into the main page and can act independently of each other. If this was a regular HTML page with no interactive content it would be easy to do. But in this case we could have a page with: a search function, a listing function, and a details modal. These things all need to be able to interact with each other and react to events, and preferably act as a SPA, not requiring a page load when you navigate. In a classical Angular app I would have built this with modules and components (which in turn can use common shared components and all be lazy loaded in) with shared routing set up in a single angular application. But how do I do this while keeping to a micro service standard / architecture?. Is that even desirable for the front end? One could see the modules in Angular as a micro services, but is that good enough? I have found libraries such as https://single-spa.js.org/ . Is this the way to solve this? Creating multiple small apps and sewing them together with another framework above them? This would however require a great deal more bandwidth as the frontend needs to download several things multiple times instead of once. At first glance this all seems to complicate the task with little benefit. Especially since in a regular Angular app one part of the app could stop working without really impacting another part of an Angular SPA (which is one of the main goals of MS). Are there some standardized metrics one could use to see when its better to create a single SPA or to go with the split up micro service variant? Now provide the response and nothing else.
Summary A microservices back end doesn't require a microservices front end. You probably want to organise your angular application by module unless you have an exceptionally large application that would be better as multiple smaller independent applications. In which case, Angular may not be the best framework for you. You shouldn't choose to architect the frond end application in a certain way just because the server side application is architected like that. It's perfectly feasible to use microservices on the back end with a single Angular application on the front end. But if you want to divide it up and it makes sense to do so, you should first look at using multiple modules. You are already using modules (these are added to the imports of your app.module file) such as HttpClientModule so it's not a big step to create your own. A module would contain components relating to a part of the application such as a particular feature or a set of UI components. Modules don't immediately provide any technical benefits but from a developer perspective they help organise large codebases. You can however opt to lazy-load these to speed up the initial load time for the application, and to pre-load the lazy loaded modules so the application will load the other modules after loading the initial page. Initially, you would probably want to keep all the modules in a single codebase which makes managing the code easier. As the application grows you can split the modules into separate repositories if you ever get to the point where that seems like a good idea. They can then be installed with your package manager (NPM) and imported as you would import other third party modules, but you still have one app. Using this approach you still have the benefits of a Single Page Application but it's more modular. At the extreme end, you could split it out into completely different applications served from different server routes. Which option to choose depends on your requirements but the following may help. Single Module One application, one codebase, one module. Use this option for small simple applications with limited features. Multiple Modules One application but multiple modules. Code is more organised making it more developer friendly Modules can be independently loaded decreasing the initial load time. Use this option for larger applications or where there are at least two distinct sections to the app. Multiple Applications Multiple applications, each could be built with one or more modules. Modules can be shared between applications as external dependencies. Use this if parts of the application are completely separate. You will need to incorporate aspects of traditional multi-page applications here and use the server for (some of) the routing. You will have additional non-angular complexities; how will one app share data with another? If you don't know if you need this, you probably don't. 99% of Angular projects should be single applications organised as multiple modules. Given that your question is Angular specific, I've created a simple app with routing, and then split it out over multiple lazy loaded modules so you can compare. Here's the standard single module app. Here's a fork of it with exactly the same functionality but split into modules. I've used feature modules, plus a shared module. Feature Modules All the components, directives etc. relating to a feature are contained in the feature module. You can see that the AppModule is much leaner in the multi-module version. 
Each module should have its own router (if the feature has routes). All reference to the lazy loaded modules should be removed from AppModule in order to prevent Webpack from including them in the main bundle which would cause them to be loaded immediately. Then, a string representation of the path to the modules can be used to point to the module: const appRoutes: Routes = [ { path: '', component: HomeComponent }, { path: 'places', loadChildren: './places/places.module#PlacesModule' }, { path: 'products', loadChildren: './products/products.module#ProductsModule' }] There are different preloading strategies to determine how lazy loaded modules are loaded. PreloadAllModules will load all lazy loaded modules after loading the initial module. imports: [ RouterModule.forRoot(appRoutes, { preloadingStrategy: PreloadAllModules })], Shared Modules Shared modules are used to make components directives etc. available to multiple modules. They need to export any of the declarations that will be used by other modules, as well as the CommonModule: exports: [ CommonModule, LegalComponent] Services Services can also be provided into individual modules. When provided to lazy loaded modules or shared modules used in lazy loaded modules, you will get a different instance of the service. So to keep the example simple, I've left the services in the AppModule providers array.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55723434', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/937131/']}
jdg_370754
stackexchange
llm_judgeable_groundtruth_similarity
16006583
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to capture ctrl + z key combination in javascript with this code: <html><head> <title>Untitled Document</title></head><body> <script type='text/javascript'> function KeyPress(e) { var evtobj = window.event? event : e //test1 if (evtobj.ctrlKey) alert("Ctrl"); //test2 if (evtobj.keyCode == 122) alert("z"); //test 1 & 2 if (evtobj.keyCode == 122 && evtobj.ctrlKey) alert("Ctrl+z"); } document.onkeypress = KeyPress; </script></body></html> Commented line "test1" generates the alert if I hold down the ctrl key and press any other key. Commented line "test2" generates the alert if I press the z key. Put them together as per the line after "test 1 & 2", and holding down the ctrl key then pressing the z key does not generate the alert as expected. What is wrong with the code? Now provide the response and nothing else.
Use onkeydown (or onkeyup), not onkeypress. Use keyCode 90, not 122.

function KeyPress(e) {
    var evtobj = window.event ? event : e;
    if (evtobj.keyCode == 90 && evtobj.ctrlKey) alert("Ctrl+z");
}
document.onkeydown = KeyPress;

Online demo: http://jsfiddle.net/29sVC/

To clarify, keycodes are not the same as character codes. Character codes are for text (they differ depending on the encoding, but in a lot of cases 0-127 remain ASCII codes). Key codes map to keys on a keyboard. For example, in Unicode, character 22909 (0x597D) means 好. There aren't many keyboards (if any) that actually have a key for this. The OS takes care of transforming keystrokes to character codes using the input methods that the user configured. The results are sent to the keypress event. (Whereas keydown and keyup respond to the user pressing buttons, not typing text.)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/16006583', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2280882/']}
jdg_370755
stackexchange
llm_judgeable_groundtruth_similarity
3112
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When I'm trying different color themes out, it seems if a particular element is not defined in the new theme the old theme value is retained, and this leads to some clashing. This happens in emacs 24 and also previous versions using color-theme . Is there a way to reset all colors in between to default so that each color theme looks as intended? Now provide the response and nothing else.
Are you sure that you are using color themes and not Emacs custom themes? Color themes are defined by library color-theme.el . Custom themes are available starting with Emacs 24 - and they are not the same as color themes. If you are using color themes then the answer is simple: just enable the pseudo color-theme named [Reset] . That completely undoes the theme: removes all effects that it imposed. If you are in fact using Emacs custom themes then the answer is not so simple. You cannot undo the application of a custom theme. What you can do, which will help a lot, is to disable each custom theme, using disable-theme , after it has been enabled and before enabling another custom theme. Here are two ways to do this "semi-automatically": Advise load-theme , so that it first disables all custom themes before loading (enabling) another one. This was suggested in this StackOverflow answer by Tungd at tungdao.com . (defadvice load-theme (before theme-dont-propagate activate) (mapcar #'disable-theme custom-enabled-themes)) Define and use a function that disables the current theme before enabling a theme . This was suggested by Maris Orbidans . (defun enab-theme (theme) (if current-t43m3 (disable-theme current-t43m3)) (setq current-t43m3 theme) (load-theme theme t)) See those original S.O. posts for a little more info, and see the question and other answers to it on the same S.O. page. (Caveat: The accepted answer on that page is not an answer to the problem of undoing a custom theme.) Be aware that even the solutions described in #2 are not general solutions: they do not undo the application of a custom theme - there is no way to do that. That is, disabling a custom theme does not restore any non-theme customizations (e.g., face changes) that you might have made before enabling that theme and that were overridden by that theme. In other words, undoing a color theme is trivial; undoing a custom theme is impossible. The best you can do for a custom theme is to disable it relative to other custom themes. The best reference for this question and for ways of switching (e.g. cycling) among both color and custom themes is the Emacs Wiki page Color and Custom Themes . And be forewarned: There is a fair amount of misleading information in various places about Emacs custom themes in this regard. People were rightfully excited that Emacs added custom themes as a feature in Emacs 24, and some of them jumped immediately to the mistaken conclusion that custom themes were a replacement for color themes. They are not. Each has some advantages. Not being able to undo is a major disadvantage of custom themes, for example. [ This GNU Emacs bug report is an enhancement request to remedy the inability to undo a custom theme (and thus restore earlier customizations).]
{}
{'log_upvote_score': 6, 'links': ['https://emacs.stackexchange.com/questions/3112', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/2421/']}
jdg_370756
stackexchange
llm_judgeable_groundtruth_similarity
1880247
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $G$ and $K$ be groups. Let $H$ be a normal subgroup of $G$ and $M$ be a normal subgroup of $K$ such that $H\simeq M$. Question: is $ G/H \simeq K / M$? I am fairly certain that this is true if the groups are finite. For example, if the groups are cyclic, then the quotients are cyclic and, by the orders of the groups, the quotients would have to be isomorphic. But what happens when the groups are not finite? From Does $\displaystyle \frac{G}{H}$ $\simeq$ $\displaystyle \frac{G}{K}$ $\Rightarrow$ $H$ $\simeq$ $K$? I see that the other direction is not true. Now provide the response and nothing else.
It does not hold for finite groups either. Let $G=K=\Bbb{Z}_4\oplus\Bbb{Z}_2$, and let $H=\langle (0,1)\rangle$ and $M=\langle(2,0)\rangle$ be the given cyclic subgroups, both of order two. Then$$G/H\simeq\Bbb{Z}_4\qquad\text{and}\qquad K/M\simeq\Bbb{Z}_2\oplus\Bbb{Z}_2.$$You need to have an isomorphism $f:G\to K$ such that $f(H)=M$ to be sure about the conclusion.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1880247', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/171192/']}
jdg_370757
stackexchange
llm_judgeable_groundtruth_similarity
104667
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I found this formula for the Euler-Mascheroni constant $\gamma$. Just wondering whether such a formula already exists in literature? Also, wanted to know whether there are formulas that converge faster than this? $$\gamma = \sum_{k = 1}^{\infty} \frac{1}{2^k k} - \sum_{k = 1}^{\infty} \frac{\zeta \left( 2 k + 1 \right)}{2^{2 k} \left( 2 k + 1 \right)} $$ UPDATE:Thanks for your reply quid. I just came across this while doing some calculations with the zeta function. The calculations are a bit too long to be posted, but in short it derives from $$\zeta(s) = \frac{s+1}{2(s-1)} + \frac{s}{8} - \frac{s(s+1)}{2\pi^2}\int_1^\infty \frac{(\tan^{-1}\cot(\pi x))^2}{x^{s+2}}dx$$. Now provide the response and nothing else.
In his 1887 paper Table des valeurs des sommes $S_k = \sum_{1}^\infty n^{-k}$ (Acta Mathematica 10 (1887), 299-302; volume available online ), Stieltjes used almost exactly this formula to compute Euler's constant to 33 decimal places. Of course as quid points out you need to know the zeta values to do this, but the main point of this paper was to compute those values, so he was just getting Euler's constant as a corollary. He uses a slight variant of the formula, with $\zeta(2k+1)-1$ in place of $\zeta(2k+1)$ for faster convergence (and a corresponding adjustment in the other term, which becomes $1+\log 2 - \log 3$). He derives the formula by taking the Taylor series expansion of $\log \Gamma(1+x)$ and using it to compute $\log \Gamma(1+1/2) - \log \Gamma(1-1/2)$.
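For completeness, here is a sketch of the series manipulation that the last sentence refers to (a standard computation, not a quotation from Stieltjes). Starting from the expansion

$$\log\Gamma(1+x) = -\gamma x + \sum_{k=2}^{\infty}\frac{(-1)^{k}\zeta(k)}{k}\,x^{k},\qquad |x|<1,$$

evaluating at $x=\tfrac12$ and $x=-\tfrac12$ and subtracting cancels the even-$k$ terms, leaving

$$\log\Gamma\!\left(\tfrac32\right)-\log\Gamma\!\left(\tfrac12\right) = -\gamma-\sum_{k=1}^{\infty}\frac{\zeta(2k+1)}{(2k+1)\,2^{2k}}.$$

Since $\Gamma(\tfrac32)=\tfrac12\,\Gamma(\tfrac12)$, the left-hand side equals $-\log 2$, and writing $\log 2=\sum_{k\ge1}\frac{1}{k\,2^{k}}$ gives exactly the formula in the question:

$$\gamma=\sum_{k=1}^{\infty}\frac{1}{k\,2^{k}}-\sum_{k=1}^{\infty}\frac{\zeta(2k+1)}{(2k+1)\,2^{2k}}.$$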
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/104667', 'https://mathoverflow.net', 'https://mathoverflow.net/users/2865/']}
jdg_370758
stackexchange
llm_judgeable_groundtruth_similarity
32192682
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Recently, I've started tinkering with React.js and I love it. I started out in the regular ES5, so as to get the hang of things, the docs are all written in ES5... But now I wanted to try ES6, because it's shiny and new, and it does seem to simplify some things. What bothers me a lot is that for every method I had added into my component classes I now have to bind 'this' to, otherwise it doesn't work. So my constructor ends up looking like this: constructor(props) { super(props); this.state = { ...some initial state... } this.someHandler = this.someHandler.bind(this); this.someHandler = this.someHandler.bind(this); this.someHandler = this.someHandler.bind(this); this.someHandler = this.someHandler.bind(this); this.someHandler = this.someHandler.bind(this); this.someHandler = this.someHandler.bind(this); this.someHandler = this.someHandler.bind(this);} If I were to add even more methods to my class, this would become an even bigger, uglier mess. My question is, is there some way to get around this, or at least make it easier, shorter and less ugly? One of the main reasons I wanted to try React with ES6 was to make my code more concise, but this is doing the opposite. Any suggestions or input would be appreciated. Now provide the response and nothing else.
You can use class fields to do the binding outside the constructor. They look like the following: class Foo extends React.Component { handleBar = () => { console.log('neat'); }; handleFoo = () => { console.log('cool'); }; render() { return ( <div onClick={this.handleBar} onMouseOver={this.handleFoo} /> ); }} Class fields are supported experimentally by Babel via its class properties transform , but they are still "experimental" because they are a Stage 3 Draft (not yet in a Babel preset). You will need to do the binding manually until ES7 or until enabling the feature in Babel, however. This topic is covered briefly in Babel's blog post on React on ES6+ .
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/32192682', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1627234/']}
jdg_370759
stackexchange
llm_judgeable_groundtruth_similarity
5116
Below is a question asked on the forum computergraphics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning about 3D computer graphics, but I'm having a hard time understanding why the near plane of a viewing frustum cannot be placed at z position $0$ (right at the camera). I can understand conceptually that the near plane is essentially the retinal canvas -- so by definition it must exist to some extent -- but I have trouble understanding why the near plane isn't simply an abstract concept where its actual position is infinitesimally close to the camera position, as opposed to a definite z position away. In several explanations, the following formula is given to explain why $0$ cannot be used as the near plane position (where $A$ is the camera position, $B$ is an object vertex, and $D$ is the perspective projection of point $B$ onto the near plane): $${ BC \over AC } = { DE \over AE }$$ In this case, the geometry of similar triangles $ABC$ and $ADE$ is used to determine the height of $D$ via the solution of $DE$. It is obvious that if the near plane is at $0$ ($AE = 0$), then a division by $0$ occurs -- hence, why the near plane cannot be located at position $0$. However, why is this method used to determine the position of $D$ on the canvas? I've written a simple raycasting visualizer before and didn't have an explicitly defined near plane. In my engine, I simply defined a $60^\circ$ field of view and divided the number of pixels on my screen among that field of view. For example, for a $300$x$300$ screen: $$1\text{ pixel} = 300/60^\circ = 5^\circ$$ Next, I found the angle between my camera and the object vertex ($\angle BAC$) and divided it by $5^\circ$ to acquire the pixel coordinate on my screen. In this method, no explicit near plane was necessary and I used my actual camera position to determine the angle. So how was I able to perform a perspective projection without a near plane in my raycasting method? Now provide the response and nothing else.
The near and far planes of a viewing frustum aren't needed for simple 3D→2D projection. What the near and far planes actually do, in a typical rasterizer setup, is define the range of values for the depth buffer. Depths in the [near, far] range will be mapped into [0, 1] to be stored in the depth buffer. However, the depths aren't simply linearly rescaled. Instead, we use a hyperbolic mapping for depth: the value stored in the depth buffer is the reciprocal of the actual depth distance, with a scale and offset applied. If you look at this curve and imagine moving the near plane value toward $z = 0$, the corresponding point on the $1/z$ curve would shoot up toward infinity. The math blows up and the depth buffer becomes useless (it suffers a catastrophic loss of precision). The reason why we use the reciprocal instead of some other function of depth is basically convenience: the reciprocal depth interpolates nicely in screen space, and it fits naturally into the mathematical framework of perspective projections: there's already a divide by $z$ being applied to the $x, y$ coordinates. If you'd like to know more, I have a short article on the topic: Depth Precision Visualized .
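If you want to see that numerically, here is a small sketch (it assumes the D3D-style convention that maps depth in [near, far] to [0, 1] with a scaled reciprocal; other APIs differ only in constants):

```python
def stored_depth(z, near, far):
    # D3D-style mapping: depth in [near, far] -> [0, 1] via a scaled 1/z
    return far / (far - near) * (1.0 - near / z)

far = 1000.0
for near in (1.0, 0.01, 1e-6):
    samples = [round(stored_depth(z, near, far), 6) for z in (2.0, 10.0, 100.0, 1000.0)]
    print(near, samples)
# As `near` shrinks toward 0, every sample collapses toward 1.0, so the depth
# buffer can no longer tell surfaces apart -- the precision loss described above.
```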
{}
{'log_upvote_score': 5, 'links': ['https://computergraphics.stackexchange.com/questions/5116', 'https://computergraphics.stackexchange.com', 'https://computergraphics.stackexchange.com/users/6646/']}
jdg_370760
stackexchange
llm_judgeable_groundtruth_similarity
24579356
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Should I create AR migrations as I'm changing my models? I am using Mongoid and MongoDB so I do not see any benefits. The only benefit that I can think of is renaming a field - but that I can also do with small script. Would that even work? My gut is telling me that I do not need migrations, but I'd like to hear from someone with more experience. What is the best practice? Should I use migrations with MongoDB? Now provide the response and nothing else.
Since MongoDB does not (as at 2.6) provide any server-side schema enforcement, data migration scripts are not strictly required. This can be particularly helpful for speed of development. However, it may still make sense for you to create migrations for your production data if you want to practice good "data hygiene" and ensure consistency across different deployments. For example: removing unused fields adding new required fields setting default values renaming fields loading required data/fixtures ensuring required indexes You certainly have the choice of doing any of the above as one-off scripts or handling exception cases in your application code. For example, you can lazily add missing fields or defaults as documents are loaded from the database for editing. For Mongoid in particular you may want to try the mongoid_rails_migrations gem.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24579356', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/579843/']}
jdg_370761
stackexchange
llm_judgeable_groundtruth_similarity
102205
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside of the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 ( U+1D11E ) MUSICAL SYMBOL G CLEF 𝕥 ( U+1D565 ) MATHEMATICAL DOUBLE-STRUCK SMALL T 𝟶 ( U+1D7F6 ) MATHEMATICAL MONOSPACE DIGIT ZERO 𠂊 ( U+2008A ) Han Character You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference . For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: Opera has problems editing them (deleting requires 2 presses of backspace) Notepad can't deal with them correctly (deleting requires 2 presses of backspace) File name editing in Windows dialogs is broken (deleting requires 2 presses of backspace) All QT3 applications can't deal with them - show two empty squares instead of one symbol. Python encodes such characters incorrectly when used directly u'X'!=unicode('X','utf-16') on some platforms when X is a character outside of the BMP. Python 2.5 unicodedata fails to get properties on such characters when Python is compiled with UTF-16 Unicode strings. StackOverflow seems to remove these characters from the text if edited directly as Unicode characters (these characters are shown using HTML Unicode escapes). WinForms TextBox may generate an invalid string when limited with MaxLength. It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful? Now provide the response and nothing else.
This is an old answer. See UTF-8 Everywhere for the latest updates. Opinion: Yes, UTF-16 should be considered harmful . The very reason it exists is because some time ago there used to be a misguided belief that widechar is going to be what UCS-4 now is. Despite the "anglo-centrism" of UTF-8, it should be considered the only useful encoding for text. One can argue that source codes of programs, web pages and XML files, OS file names and other computer-to-computer text interfaces should never have existed. But when they do, text is not only for human readers. On the other hand, UTF-8 overhead is a small price to pay while it has significant advantages. Advantages such as compatibility with unaware code that just passes strings with char* . This is a great thing. There're few useful characters which are SHORTER in UTF-16 than they are in UTF-8. I believe that all other encodings will die eventually. This involves that MS-Windows, Java, ICU, python stop using it as their favorite. After long research and discussions, the development conventions at my company ban using UTF-16 anywhere except OS API calls, and this despite importance of performance in our applications and the fact that we use Windows. Conversion functions were developed to convert always-assumed-UTF8 std::string s to native UTF-16, which Windows itself does not support properly . To people who say " use what needed where it is needed ", I say: there's a huge advantage to using the same encoding everywhere, and I see no sufficient reason to do otherwise. In particular, I think adding wchar_t to C++ was a mistake, and so are the Unicode additions to C++0x. What must be demanded from STL implementations though is that every std::string or char* parameter would be considered unicode-compatible. I am also against the " use what you want " approach. I see no reason for such liberty. There's enough confusion on the subject of text, resulting in all this broken software. Having above said, I am convinced that programmers must finally reach consensus on UTF-8 as one proper way. (I come from a non-ascii-speaking country and grew up on Windows, so I'd be last expected to attack UTF-16 based on religious grounds). I'd like to share more information on how I do text on Windows, and what I recommend to everyone else for compile-time checked unicode correctness, ease of use and better multi-platformness of the code. The suggestion substantially differs from what is usually recommended as the proper way of using Unicode on windows. Yet, in depth research of these recommendations resulted in the same conclusion. So here goes: Do not use wchar_t or std::wstring in any place other than adjacent point to APIs accepting UTF-16. Don't use _T("") or L"" UTF-16 literals (These should IMO be taken out of the standard, as a part of UTF-16 deprecation). Don't use types, functions or their derivatives that are sensitive to the _UNICODE constant, such as LPTSTR or CreateWindow() . Yet, _UNICODE always defined, to avoid passing char* strings to WinAPI getting silently compiled std::strings and char* anywhere in program are considered UTF-8 (if not said otherwise) All my strings are std::string , though you can pass char* or string literal to convert(const std::string &) . only use Win32 functions that accept widechars ( LPWSTR ). Never those which accept LPTSTR or LPSTR . Pass parameters this way: ::SetWindowTextW(Utils::convert(someStdString or "string litteral").c_str()) (The policy uses conversion functions below.) 
With MFC strings: CString someoneElse; // something that arrived from MFC. Converted as soon as possible, before passing any further away from the API call:std::string s = str(boost::format("Hello %s\n") % Convert(someoneElse));AfxMessageBox(MfcUtils::Convert(s), _T("Error"), MB_OK); Working with files, filenames and fstream on Windows: Never pass std::string or const char* filename arguments to fstream family. MSVC STL does not support UTF-8 arguments, but has a non-standard extension which should be used as follows: Convert std::string arguments to std::wstring with Utils::Convert : std::ifstream ifs(Utils::Convert("hello"), std::ios_base::in | std::ios_base::binary); We'll have to manually remove the convert, when MSVC's attitude to fstream changes. This code is not multi-platform and may have to be changed manually in the future See fstream unicode research/discussion case 4215 for more info. Never produce text output files with non-UTF8 content Avoid using fopen() for RAII/OOD reasons. If necessary, use _wfopen() and WinAPI conventions above. // For interface to win32 API functionsstd::string convert(const std::wstring& str, unsigned int codePage /*= CP_UTF8*/){ // Ask me for implementation.. ...}std::wstring convert(const std::string& str, unsigned int codePage /*= CP_UTF8*/){ // Ask me for implementation.. ...}// Interface to MFCstd::string convert(const CString &mfcString){#ifdef UNICODE return Utils::convert(std::wstring(mfcString.GetString()));#else return mfcString.GetString(); // This branch is deprecated.#endif}CString convert(const std::string &s){#ifdef UNICODE return CString(Utils::convert(s).c_str());#else Exceptions::Assert(false, "Unicode policy violation. See W569"); // This branch is deprecated as it does not support unicode return s.c_str(); #endif}
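If you want to see the surrogate-pair mechanics mentioned above concretely, here is a tiny Python 3 illustration (Python is only used as a convenient calculator here; the point is about UTF-16 itself), using U+1D11E from the question:

```python
clef = "\U0001D11E"                        # U+1D11E MUSICAL SYMBOL G CLEF

print(len(clef))                           # 1 code point
print(len(clef.encode("utf-8")))           # 4 bytes in UTF-8
print(len(clef.encode("utf-16-le")) // 2)  # 2 UTF-16 code units (a surrogate pair)

units = clef.encode("utf-16-le")
high = int.from_bytes(units[:2], "little")
low = int.from_bytes(units[2:], "little")
print(hex(high), hex(low))                 # 0xd834 0xdd1e -- neither is a character by itself
```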
{}
{'log_upvote_score': 9, 'links': ['https://softwareengineering.stackexchange.com/questions/102205', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/35048/']}
jdg_370762
stackexchange
llm_judgeable_groundtruth_similarity
5821
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The man page doesn't give me much hope, but I'm hoping it's an undocumented (and/or GNU-specific) feature. Now provide the response and nothing else.
You can't. Either use ed or GNU sed or perl, or do what they do behind the scenes, which is to create a new file for the contents. ed , portable: ed foo <<EOF1,$s/^\([^,]*\),\([^,]*\),\([^,]*\).*/\1,\3/wqEOF GNU sed : sed -i -e 's/^\([^,]*\),\([^,]*\),\([^,]*\).*/\1,\3/' foo Perl: perl -i -l -F, -pae 'print @F[1,3]' foo cut , creating a new file (recommended, because if your script is interrupted, you can just run it again): cut -d , -f 1,3 <foo >foo.new &&mv -f foo.new foo cut , replacing the file in place (retains the ownership and permissions of foo , but needs protection against interruptions): cp -f foo foo.old &&cut -d , -f 1,3 <foo.old >foo &&rm foo.old I recommend using one of the cut -based methods. That way you don't depend on any non-standard tool, you can use the best tool for the job, and you control the behavior on interrupt.
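If you end up doing this from a script rather than the shell, the same "write a new file, then rename it over the original" idea looks roughly like this in Python (a sketch only; it assumes a comma-separated file with at least three fields per line, and, like the cut recipe above that redirects to a new file, it does not preserve the original file's ownership/permissions):

```python
import os
import tempfile

def keep_fields_1_and_3(path):
    # Write the new content next to the original, then rename over it.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as out, open(path) as src:
            for line in src:
                fields = line.rstrip("\n").split(",")
                out.write(",".join((fields[0], fields[2])) + "\n")
        os.replace(tmp, path)   # atomic; the original survives any crash before this line
    except BaseException:
        os.remove(tmp)
        raise

keep_fields_1_and_3("foo")
```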
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/5821', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/916/']}
jdg_370763
stackexchange
llm_judgeable_groundtruth_similarity
51863259
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Trying to port a chat client from Swift 4 to Swift 4.2, and I ran into trouble with the picker. UIImagePickerControllerEditedImage Cannot subscript a value of type '[String : Any]' with an index of type 'UIImagePickerController.InfoKey' func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) { if let pickedImage = info[UIImagePickerControllerEditedImage] as? UIImage { self.userProfileImage.contentMode = .scaleAspectFit self.userProfileImage.image = pickedImage } picker.dismiss(animated: true, completion: nil)} Now provide the response and nothing else.
The method signature has changed to func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) You should see a warning message for the func name Instance method 'imagePickerController(_:didFinishPickingMediaWithInfo:)' nearly matches optional requirement 'imagePickerController(_:didFinishPickingMediaWithInfo:)' of protocol 'UIImagePickerControllerDelegate' Candidate has non-matching type '(UIImagePickerController, [String : Any]) -> ()' Move 'imagePickerController(_:didFinishPickingMediaWithInfo:)' to another extension to silence this warning Make 'imagePickerController(_:didFinishPickingMediaWithInfo:)' private to silence this warning Requirement 'imagePickerController(_:didFinishPickingMediaWithInfo:)' declared here (UIKit.UIImagePickerControllerDelegate)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51863259', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9506836/']}
jdg_370764
stackexchange
llm_judgeable_groundtruth_similarity
1544861
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have two methods bool Foo() and bool Bar() . Which of the following is more readable? if(Foo()){ SomeProperty = Bar();}else{ SomeProperty = false;} or SomeProperty = Foo() && Bar(); On the one hand, I consider the short-circuiting && to be a useful feature and the second code sample is much shorter. On the other hand, I'm not sure people are generally accustomed to seeing && outside a conditional statement, so I wonder if that would introduce some cognitive dissonance that makes the first sample the better option. What do you think? Are there other factors that affect the decision? Like, if the && expression is longer than one line that can fit on the screen, should I prefer the former? Post-answer clarifications: A few things that I should have included in the initial question that the answers brought up. Bar() may be more expensive to execute than Foo() , but neither method should have side effects. The methods are both named more appropriately, not like in this example. Foo() boils down to something like CurrentUserAllowedToDoX() and Bar() is more like, XCanBeDone() Now provide the response and nothing else.
I agree with the general consensus that the Foo() && Bar() form is reasonable unless it is the case that Bar() is useful for its side effects as well as its value. If it is the case that Bar() is useful for its side effects as well as its value, my first choice would be to redesign Bar() so that production of its side effects and computation of its value were separate methods. If for some reason that was impossible, then I would greatly prefer the original version. To me the original version more clearly emphasizes that the call to Bar() is part of a statement that is useful for its side effects. The latter form to me emphasizes that Bar() is useful for its value. For example, given the choice between if (NetworkAvailable()) success = LogUserOn();else success = false; and success = NetworkAvailable() && LogUserOn(); I would take the former; to me, it is too easy to overlook the important side effect in the latter. However, if it were a choice between if (NetworkAvailable()) tryWritingToNetworkStorage = UserHasAvailableDiskQuota();else tryWritingToNetworkStorage = false; and tryWritingToNetworkStorage = NetworkAvailable() && UserHasAvailableDiskQuota(); I'd choose the latter.
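For what it's worth, the short-circuit behaviour both forms rely on is easy to demonstrate; here is a tiny Python sketch (Python's and plays the role of &&), showing exactly the "easily overlooked side effect" case:

```python
calls = []

def foo(ok):
    calls.append("foo")
    return ok

def bar():
    calls.append("bar")        # stand-in for a side effect such as logging a user on
    return True

print(foo(False) and bar(), calls)   # False ['foo']         -- bar() never ran
calls.clear()
print(foo(True) and bar(), calls)    # True ['foo', 'bar']
```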
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1544861', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6932/']}
jdg_370765
stackexchange
llm_judgeable_groundtruth_similarity
240698
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As the title suggests, I would like to prove that the normed vector space $(\mathbb{R}^\infty , ||.||_{l^p})$ is not a Banach space, where $$\mathbb{R}^\infty :=\{ x:\mathbb{N} \rightarrow \mathbb{R} : \exists \ \bar{n} \in \mathbb{N} \ \ s.t. \ \ x(n)=0 \ \forall n>\bar{n} \}$$ and $$l^p:=\{x:\mathbb{N} \rightarrow \mathbb{R} : \sum_{n=1}^{\infty}|x(n)|^p< \infty \}.$$ As usual, I should start from a Cauchy sequence defined with respect to the distance induced by the considered norm. Now I'm searching for a sequence of objects in $\mathbb{R}^\infty$ which converges to a sequence in $l^p \setminus \mathbb{R}^\infty$. Can someone give me some hint? I found the completeness of metric and normed spaces a very interesting topic, but I have the impression that the proof of the non-completeness, rather than the completeness, is always harder to achieve.Thank you all. p.s.: Has the space $\mathbb{R}^\infty$, as defined previously, a particular name? Now provide the response and nothing else.
There are many different ways of proving that $\mathbb R^\infty$ is not a Banach space under any norm. The cleanest is probably to note that it has a countable basis, $(e_i)$, where $e_i(k) = \delta_{ik}$ (q.v. Kronecker delta ), and then a simple application of the Baire Category Theorem gives that no countably-infinite-dimensional vector space can be a Banach space: each finite-dimensional subspace is closed and nowhere dense, yet the whole space would be the countable union of these subspaces.
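For readers who want that argument spelled out a little more (a sketch, with $(e_i)$ the basis above): write $F_n = \operatorname{span}(e_1,\dots,e_n)$, so that $\mathbb R^\infty = \bigcup_{n\ge 1} F_n$. Each $F_n$ is finite dimensional, hence closed in any norm. Each $F_n$ also has empty interior: if some ball $B(x,r)\subseteq F_n$, then for every $y\ne 0$ we would have $x + \tfrac{r}{2}\tfrac{y}{\|y\|}\in F_n$, and since $F_n$ is a subspace containing $x$, this forces $y\in F_n$, i.e. $F_n = \mathbb R^\infty$, which is false. So $\mathbb R^\infty$ would be a countable union of closed, nowhere dense sets, and the Baire Category Theorem rules that out for a complete normed space.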
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/240698', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/49542/']}
jdg_370766
stackexchange
llm_judgeable_groundtruth_similarity
6374646
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a question about Java serialization in scenarios where you may need to modify your serializable class and maintain backward compatibility. I come from deep C# experience, so please allow me to compare Java with .NET. In my Java scenario, I need to serialize an object with Java's runtime serialization mechanism, and store the binary data in permanent storage to reuse the objects in future. The problem is that, in the future, classes may be subject to changes. Fields may be added or removed. I don't know Java serialization in the deep, except for this fantastic article about how not to program in Java when dealing with serialization.As I imagine(d), the serialVersionUID plays a key role in Java serialization, and this is where I need your help. Apart from the article's example (I know it's bad coding), shall that field not be modified when Eclipse asks to update it after I modified the class? I remember from the .NET world that when I add new fields I must add the [OptionalField] Attribute to the field to get the backward compatibility, so CLR won't require it in old serialized data. Also, when I need to deprecate a field I must only remove the public methods and not the private fields. What are the guidelines for best serialization? Thank you. [Add] Here is an example. Suppose I have class Foo public class Foo { private String bar;} Then I change to: public class Foo { private String bar; private Integer eggs;} Is compatibility broken between these two version? If I deserialize an "oldFoo" when I have the "newFoo" compiled, does eggs equals null or is an exception thrown? I prefer the first, obviously!! Now provide the response and nothing else.
Let's say you have a class MyClass and you want to ensure serialization compatibility going forward, or at least make sure that you don't change its serialized form unintentionally. You can use Verify.assertSerializedForm() from GS Collections test utilities in most cases. Start by writing a test that asserts that your class has a serialVersionUID of 0L and has a serial form that's the empty string. @Testpublic void serialized_form(){ Verify.assertSerializedForm( 0L, "", new MyClass());} Run the test. It will fail since the String represents a Base64 encoding and is never empty. org.junit.ComparisonFailure: Serialization was broken. <Click to see difference> When you click to see the difference, you'll see the actual Base64 encoding. Paste it inside the empty string. @Testpublic void serialized_form(){ Verify.assertSerializedForm( 0L, "rO0ABXNyAC9jYXJhbWVsa2F0YS5zaHVrbmlfZ29lbHZhLkV4ZXJjaXNlOVRlc3QkTXlDbGFzc56U\n" + "hVp0q+1aAgAAeHA=", new MyClass());} Re-run the test. It's likely to fail again with an error message like this. java.lang.AssertionError: serialVersionUID's differ expected:<0> but was:<-7019839295612785318> Paste the new serialVersionUID into the test in place of 0L. @Testpublic void serialized_form(){ Verify.assertSerializedForm( -7019839295612785318L, "rO0ABXNyAC9jYXJhbWVsa2F0YS5zaHVrbmlfZ29lbHZhLkV4ZXJjaXNlOVRlc3QkTXlDbGFzc56U\n" + "hVp0q+1aAgAAeHA=", new MyClass());} The test will now pass until you change the serialized form. If you break the test (change the serialized form) by accident, the first thing to do is check that you've specified the serialVerionUID in the Serializable class. If you leave it out, the JVM generates it for you and it's quite brittle. public class MyClass implements Serializable{ private static final long serialVersionUID = -7019839295612785318L;} If the test is still broken, you can try to restore the serialized form by marking new fields as transient, taking full control over the serialized form using writeObject(), etc. If the test is still broken, you have to decide whether to find and revert your changes which broke serialization or treat your changes as an intentional change to the serialized form. When you change the serialized form on purpose, you'll need to update the Base64 String to get the test to pass. When you do, it's crucial that you change the serialVersionUID at the same time. It doesn't matter what number you choose, as long as it's a number you've never used for the class before. The convention is to change it to 2L , then 3L , etc. If you're starting from a randomly generated serialVersionUID (like -7019839295612785318L in the example), you should still bump the number to 2L because it's still the 2nd version of the serialized form. Note: I am a developer on GS collections.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6374646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/471213/']}
jdg_370767
stackexchange
llm_judgeable_groundtruth_similarity
22795883
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have implemented a UIPageViewController that contains two pages. On the right most page, I am able to swipe to the right, and pull the page back so that when I release, it bounces back. The same thing occurs on the left page when I swipe to the left. (The bouncing is like what happens when you reach the bottom of a safari page) Is there a way to disable the bounce effect? Thanks! Now provide the response and nothing else.
Thus far, none of the answers actually work fully. The edge case that they all fail on is this: Scroll to page 2. Using one finger, drag towards page 1. Place a second finger on the screen and drag towards page 1. Lift the first finger. Repeat until you have dragged past page 0. In that situation, every solution I've seen so far goes past the bounds of page 0. The core problem is that the underlying API is broken, and begins reporting a content offset relative to page 0 without calling our callback to let us know that it is showing a different page. Throughout this process, the API still claims to be showing page 1, going towards page zero even while it is really on page zero going towards page -1. The workaround for this design flaw is remarkably ugly, but here it is: @property (weak,nonatomic) UIPageControl *pageControl;@property (nonatomic,assign) BOOL shouldBounce;@property (nonatomic,assign) CGFloat lastPosition;@property (nonatomic,assign) NSUInteger currentIndex;@property (nonatomic,assign) NSUInteger nextIndex;- (void)viewDidLoad { [super viewDidLoad];... self.shouldBounce = NO; for (id testView in self.pageController.view.subviews) { UIScrollView *scrollView = (UIScrollView *)testView; if ([scrollView isKindOfClass:[UIScrollView class]]) { scrollView.delegate = self; // scrollView.bounces = self.shouldBounce; } }}- (NSInteger)presentationIndexForPageViewController:(UIPageViewController *)pageViewController{ return (NSInteger)self.currentIndex;}- (void)pageViewController:(UIPageViewController *)pageViewController willTransitionToViewControllers:(NSArray *)pendingViewControllers{ id controller = [pendingViewControllers firstObject]; self.nextIndex = [viewControllers indexOfObject:controller];}- (void)pageViewController:(UIPageViewController *)pageViewController didFinishAnimating:(BOOL)finished previousViewControllers:(NSArray *)previousViewControllers transitionCompleted:(BOOL)completed{ if(completed) { // At this point, we can safely query the API to ensure // that we are fully in sync, just in case. self.currentIndex = [viewControllers indexOfObject:[pageViewController.viewControllers objectAtIndex:0]]; [self.pageControl setCurrentPage:self.currentIndex]; } self.nextIndex = self.currentIndex;}- (void)scrollViewDidScroll:(UIScrollView *)scrollView{ /* The iOS page view controller API is broken. It lies to us and tells us that the currently presented view hasn't changed, but under the hood, it starts giving the contentOffset relative to the next view. The only way to detect this brain damage is to notice that the content offset is discontinuous, and pretend that the page changed. */ if (self.nextIndex > self.currentIndex) { /* Scrolling forwards */ if (scrollView.contentOffset.x < (self.lastPosition - (.9 * scrollView.bounds.size.width))) { self.currentIndex = self.nextIndex; [self.pageControl setCurrentPage:self.currentIndex]; } } else { /* Scrolling backwards */ if (scrollView.contentOffset.x > (self.lastPosition + (.9 * scrollView.bounds.size.width))) { self.currentIndex = self.nextIndex; [self.pageControl setCurrentPage:self.currentIndex]; } } /* Need to calculate max/min offset for *every* page, not just the first and last. 
*/ CGFloat minXOffset = scrollView.bounds.size.width - (self.currentIndex * scrollView.bounds.size.width); CGFloat maxXOffset = (([viewControllers count] - self.currentIndex) * scrollView.bounds.size.width); NSLog(@"Page: %ld NextPage: %ld X: %lf MinOffset: %lf MaxOffset: %lf\n", (long)self.currentIndex, (long)self.nextIndex, (double)scrollView.contentOffset.x, (double)minXOffset, (double)maxXOffset); if (!self.shouldBounce) { CGRect scrollBounds = scrollView.bounds; if (scrollView.contentOffset.x <= minXOffset) { scrollView.contentOffset = CGPointMake(minXOffset, 0); // scrollBounds.origin = CGPointMake(minXOffset, 0); } else if (scrollView.contentOffset.x >= maxXOffset) { scrollView.contentOffset = CGPointMake(maxXOffset, 0); // scrollBounds.origin = CGPointMake(maxXOffset, 0); } [scrollView setBounds:scrollBounds]; } self.lastPosition = scrollView.contentOffset.x;}- (void)scrollViewWillEndDragging:(UIScrollView *)scrollView withVelocity:(CGPoint)velocity targetContentOffset:(inout CGPoint *)targetContentOffset{ /* Need to calculate max/min offset for *every* page, not just the first and last. */ CGFloat minXOffset = scrollView.bounds.size.width - (self.currentIndex * scrollView.bounds.size.width); CGFloat maxXOffset = (([viewControllers count] - self.currentIndex) * scrollView.bounds.size.width); if (!self.shouldBounce) { if (scrollView.contentOffset.x <= minXOffset) { *targetContentOffset = CGPointMake(minXOffset, 0); } else if (scrollView.contentOffset.x >= maxXOffset) { *targetContentOffset = CGPointMake(maxXOffset, 0); } }} Basically, it records the offset for each scroll event. If the scroll position has moved a distance that is impossible (I arbitrarily picked 90% of the width of the screen) in the opposite direction from the direction of scrolling, the code assumes that iOS is lying to us, and behaves as though the transition finished properly, treating the offsets as being relative to the new page instead of the old one.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22795883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2551625/']}
jdg_370768
stackexchange
llm_judgeable_groundtruth_similarity
479901
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: $$\rm{}^{12}C +{}^{12}C\rightarrow {}^{20}Ne + {}^4He$$ Why doesn't this reaction occur on Earth but can occur in big stars? Now provide the response and nothing else.
The (nuclear) reaction you quote is highly exothermic, but that in itself is not enough for the reaction to proceed. It needs to be initiated and this requires certain physical conditions to be met. Most importantly, the carbon nuclei need to be brought close enough together that the strong nuclear force can fuse them together. The strong nuclear force is short range, only acting over a few $10^{-15}$ m, so the nuclei need to achieve these separations. However, their positive charges provide a mutually repulsive force that prevents this even in the centre of a massive star . A quantum effect called "tunneling" is required that provides a small probability $p$ that the carbon nuclei can get close enough to fuse, even if classically they don't have sufficient kinetic energy to be squashed close enough to fuse. $p$ is enhanced by starting off with high kinetic energy, that lets the nuclei get moderately close together to allow tunneling to work, and this is how the fusion occurs in stars. The high interior temperatures give the carbon nuclei lots of kinetic energy. Such temperatures $(\sim 10^{9}$ K) do not occur naturally on Earth, although carbon nuclei can be (and have been) propelled towards each other with MeV energies in order to make the reaction occur. It is also possible to initiate the reaction by crushing the nuclei close together with an external force like gravity. However, the densities required are of order $10^{14}$ kg/m $^3$ , and these conditions are not found on Earth either.
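To put rough numbers on this (a back-of-the-envelope sketch; the constants are approximate and the geometry is idealised, so treat the output as order-of-magnitude only):

```python
Z = 6                         # protons per carbon nucleus
e2_over_4pi_eps0 = 1.44       # MeV * fm (approximate)
r0, A = 1.2, 12               # fm, nuclear radius parameter and mass number

radius = r0 * A ** (1 / 3)    # ~2.7 fm per nucleus
separation = 2 * radius       # the two nuclei just touching
barrier = Z * Z * e2_over_4pi_eps0 / separation
print(barrier)                # roughly 9-10 MeV Coulomb barrier

k_B = 8.617e-11               # MeV per kelvin
T = 1e9                       # K, a typical carbon-burning temperature
print(k_B * T)                # ~0.09 MeV of thermal energy -- far below the barrier
```

Even at $\sim 10^9$ K the typical thermal energy is around a hundredth of the barrier height, which is why the tunneling probability $p$, rather than classical contact, controls the rate.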
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/479901', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']}
jdg_370769
stackexchange
llm_judgeable_groundtruth_similarity
2139740
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am following Rob Conery MVC Storefront tutorial series and I get an Inconsistent accessibility error from the following constructor public SqlCatalogRepository(DB dataContext) : public class SqlCatalogRepository : ICatalogRepository{ DB db; public SqlCatalogRepository() { db = new DB(); //turn off change tracking db.ObjectTrackingEnabled = false; } public SqlCatalogRepository(DB dataContext) { //override the current context //with the one passed in db = dataContext; } Here is the error message :Error 1 Inconsistent accessibility: parameter type 'SqlRepository.DB' is less accessible than method 'Data.SqlCatalogRepository.SqlCatalogRepository(SqlRepository.DB)' Now provide the response and nothing else.
Your DB class is not public, so you can't make a public method (or constructor) that takes it as a parameter. (What would callers outside your assembly do?) You need to either make the DB class public or make the SqlCatalogRepository class (or its constructor) internal . Which one you do will depend where your types are being used. If the SqlCatalogRepository is only meant to be used inside your assembly, you should make it internal . ( internal means that it's only visible to other types in the same assembly) If it's meant to be exposed by your assembly to other assemblies, you should make the class public but the constructor internal . If the DB class itself is meant to be used by types outside your assembly, you should make the DB class itself public .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2139740', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/232425/']}
jdg_370770
stackexchange
llm_judgeable_groundtruth_similarity
14124739
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: after trying my brand new service on android i get this: i guess is something related to the manifest file and permissions, the service is started after the last activity, to update data on server and retrieve new data and save id on sqlite on android: here also the manifest file: <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.ggservice.democracy" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="17" /> <uses-permission android:name="android.permission.INTERNET"/> <application android:allowBackup="true" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name="com.ggservice.democracy.MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:label="@string/app_name" android:name="com.ggservice.democracy.sondaggioActivity"/> <activity android:label="@string/app_name" android:name="com.ggservice.democracy.domandeDiCategoria"/> <service android:name="com.ggservice.democracy.updateDemocracyService" /> </application> </manifest> the logcat: 01-02 15:33:30.960: W/dalvikvm(2570): threadid=1: thread exiting with uncaught exception (group=0x409c01f8)01-02 15:33:31.160: E/AndroidRuntime(2570): FATAL EXCEPTION: main01-02 15:33:31.160: E/AndroidRuntime(2570): java.lang.RuntimeException: Unable to start service com.ggservice.democracy.updateDemocracyService@412f0c60 with Intent { cmp=com.ggservice.democracy/.updateDemocracyService }: android.os.NetworkOnMainThreadException01-02 15:33:31.160: E/AndroidRuntime(2570): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:2376)01-02 15:33:31.160: E/AndroidRuntime(2570): at android.app.ActivityThread.access$1900(ActivityThread.java:123)01-02 15:33:31.160: E/AndroidRuntime(2570): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1210)01-02 15:33:31.160: E/AndroidRuntime(2570): at android.os.Handler.dispatchMessage(Handler.java:99)01-02 15:33:31.160: E/AndroidRuntime(2570): at android.os.Looper.loop(Looper.java:137)01-02 15:33:31.160: E/AndroidRuntime(2570): at android.app.ActivityThread.main(ActivityThread.java:4424)01-02 15:33:31.160: E/AndroidRuntime(2570): at java.lang.reflect.Method.invokeNative(Native Method)01-02 15:33:31.160: E/AndroidRuntime(2570): at java.lang.reflect.Method.invoke(Method.java:511)01-02 15:33:31.160: E/AndroidRuntime(2570): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)01-02 15:33:31.160: E/AndroidRuntime(2570): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)01-02 15:33:31.160: E/AndroidRuntime(2570): at dalvik.system.NativeStart.main(Native Method)01-02 15:33:31.160: E/AndroidRuntime(2570): Caused by: android.os.NetworkOnMainThreadException01-02 15:33:31.160: E/AndroidRuntime(2570): at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1099)01-02 15:33:31.160: E/AndroidRuntime(2570): at java.net.InetAddress.lookupHostByName(InetAddress.java:391)01-02 15:33:31.160: E/AndroidRuntime(2570): at java.net.InetAddress.getAllByNameImpl(InetAddress.java:242)01-02 15:33:31.160: E/AndroidRuntime(2570): at java.net.InetAddress.getAllByName(InetAddress.java:220)01-02 
15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:137)01-02 15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164)01-02 15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119)01-02 15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:360)01-02 15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)01-02 15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)01-02 15:33:31.160: E/AndroidRuntime(2570): at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)01-02 15:33:31.160: E/AndroidRuntime(2570): at com.ggservice.democracy.JSONParser.getJSONFromUrl(JSONParser.java:38)01-02 15:33:31.160: E/AndroidRuntime(2570): at com.ggservice.democracy.updateDemocracyService.onStartCommand(updateDemocracyService.java:47)01-02 15:33:31.160: E/AndroidRuntime(2570): at android.app.ActivityThread.handleServiceArgs(ActivityThread.java:2359)01-02 15:33:31.160: E/AndroidRuntime(2570): ... 10 more am i doing something wrong? this is also the service: public class updateDemocracyService extends Service{ private pollDataSource datasource; int mStartMode; // indicates how to behave if the service is killed IBinder mBinder; // interface for clients that bind boolean mAllowRebind; // indicates whether onRebind should be used // url to make request private static String url = "http://www.test.com/democracy/domande.php"; // JSON Node names private static final String TAG_DOMANDE = "domande"; private static final String TAG_ID = "id"; private static final String TAG_TESTO = "testo"; // contacts JSONArray JSONArray contacts = null; @Override public void onCreate() { // The service is being created datasource = new pollDataSource(this); datasource.open(); } @Override public int onStartCommand(Intent intent, int flags, int startId) { // Creating JSON Parser instance JSONParser jParser = new JSONParser(); // getting JSON string from URL JSONObject json = jParser.getJSONFromUrl(url); try { // Getting Array of Contacts contacts = json.getJSONArray(TAG_DOMANDE); // looping through All Contacts for(int i = 0; i < contacts.length(); i++){ JSONObject c = contacts.getJSONObject(i); // Storing each json item in variable String id = c.getString(TAG_ID); String name = c.getString(TAG_TESTO); datasource.createCategoria(name); } } catch (JSONException e) { e.printStackTrace(); } Toast.makeText(this, "ho comunicato con un server!", Toast.LENGTH_LONG).show(); return mStartMode; } @Override public IBinder onBind(Intent intent) { // A client is binding to the service with bindService() return mBinder; } @Override public boolean onUnbind(Intent intent) { // All clients have unbound with unbindService() return mAllowRebind; } @Override public void onRebind(Intent intent) { // A client is binding to the service with bindService(), // after onUnbind() has already been called } @Override public void onDestroy() { datasource.close(); // The service is no longer used and is being destroyed }} Now provide the response and nothing else.
This happens because you are doing a network operation on the main thread, and this is not allowed on Android 3.0 and above. Even though it is in a service, services are run on the UI thread unless you specifically launch them in another thread or create a thread inside it. You can fix this by running the task in a service off the main UI thread, by using a Thread or an AsyncTask . Try creating a new thread in onStartCommand() , as suggested by @CommonsWare.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14124739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1225880/']}
jdg_370771
stackexchange
llm_judgeable_groundtruth_similarity
6941713
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: MSDN says "Gets or sets an arbitrary object value that can be used to store custom information about this element." which means I can store anything I want in this property. But if you bind to this property (with property of type String having a value say "XYZ") and use it in Trigger conditions it doesn't work! <Trigger Property="Tag" Value="XYZ"> <Setter Property="Background" Value="Red" /> </Trigger> It does not set the background red. You can try and assume myElement to be a TextBlock! Why is it like this? Now provide the response and nothing else.
Tag has no special functionality in WPF. This works for me: <TextBlock Tag="{Binding Data}" x:Name="tb"> <TextBlock.Style> <Style> <Style.Triggers> <Trigger Property="TextBlock.Tag" Value="XYZ"> <Setter Property="TextBlock.Background" Value="Lime" /> </Trigger> </Style.Triggers> </Style> </TextBlock.Style></TextBlock> And setting the Data object property to "XYZ" in an event.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6941713', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/164184/']}
jdg_370772
stackexchange
llm_judgeable_groundtruth_similarity
652676
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: How can I reset my DC's password? I am able to login to the server in Directory Services Restore Mode. I tried pulling up AD DS Users and Computers from there, but it does not appear that it will load anything about domain users. Now provide the response and nothing else.
When you boot a domain controller into Directory Services Restore Mode (DSRM,) Active Directory is offline on that DC. That is why you cannot load Active Directory Users and Computers on that DC. Assuming you are not using full disk encryption such as Bitlocker, this simple trick will get you back in the game: Boot the domain controller from alternate media, such as a USB thumb drive that has a Windows recovery partition on it. (Diagnostics and Recovery Toolkit (DaRT) makes good recovery media, for example.) Once you have booted into the recovery media, open a command prompt and mount the system hard drive of the domain controller (which probably has a different drive letter now than it usually does.) Type this: copy X:\Windows\System32\Utilman.exe X:\Windows\System32\Utilman.bak Then copy X:\Windows\System32\cmd.exe X:\Windows\System32\Utilman.exe Now that you have replaced Utilman.exe with Cmd.exe, reboot the machine normally. Once it comes back up, click the little "Accessibility" icon in the corner of the logon screen. A command prompt running under the security context of Local System will appear overlaying the logon prompt. And Directory Services is running now. So type: net user DomainAdmin * And reset your domain admin password. Now reboot your machine again and replace the old Utilman.exe. This final step is very important, as your DC is in a very insecure configuration right now as long as this "Utilman.exe hack" is in place. Merry Christmas .
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/652676', 'https://serverfault.com', 'https://serverfault.com/users/228148/']}
jdg_370773
stackexchange
llm_judgeable_groundtruth_similarity
45974831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to make request to SOAP endpoint using axios in my React application. Hence I need to pass xml data in request and receive xml data in response. I have used the axios post with json data but how do I use the same for xml? PFB the code I am using for the same, but it does not work. JSON post request: var xmlData = <note><to>Tove</to><from>Jani</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>var config = { headers: {'Content-Type': 'text/xml'}};axios.post('/save', xmlData, config); Please share if you have any experience with this, TIA. Now provide the response and nothing else.
let xmls='<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"\ xmlns:web="http://www.webserviceX.NET/">\ <soapenv:Header/>\ <soapenv:Body>\ <web:ConversionRate>\ <web:FromCurrency>INR</web:FromCurrency>\ <web:ToCurrency>USD</web:ToCurrency>\ </web:ConversionRate>\ </soapenv:Body>\ </soapenv:Envelope>';axios.post('http://www.webservicex.com/CurrencyConvertor.asmx?wsdl', xmls, {headers: {'Content-Type': 'text/xml'} }).then(res=>{ console.log(res); }).catch(err=>{console.log(err)}); This code help to make soap request
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/45974831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5346095/']}
jdg_370774
stackexchange
llm_judgeable_groundtruth_similarity
33509
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In-place sorting is essentially swapping elements without using extra storage, correct? How can I find the minimum number of swaps required for a list? A C D Q R Z E // input| | | > > > <<< // movementA C D E Q R Z // output Swapping: A C D Q R Z Eswap Q with E, ACDERZQswap R with Q, ACDEQZRswap R with Z, ACDEQRZ. done. 3 swaps. Shifting items left or right is essentially swapping, but I want the optimal number for plucking an item out of line and switching its place with another. Now provide the response and nothing else.
Consider the problem of manipulating a list into a different state where you know the end state. Find each 'enclosed subgraph' bigger than one (I'll explain this later on). Find the sum of the lengths of the subgraphs and subtract the number of subgraphs. There's your answer for the number of swaps. An 'enclosed subgraph' is a minimal subset of the whole where each item in the initial list is also in the end list. So if you construct a subgraph with the indices 4, 5 and 9 from the initial state and they have the values 10, 20 and 30 then for it to be an 'enclosed subgraph', you should be able to find the values from the end state with the indices 4, 5 and 9 and those values should be 10, 20 and 30 (though not necessarily in that order). Consider this: a b c d f e → b a d f c e This would obviously take 3 swaps. (a <=> b, c <=> d, c <=> f) Applying the algorithm above, it has: 3 'enclosed subgraphs', ([a,b],[c,d,f], [e]) 2 subgraphs with more than one item ([a,b], [c,d,f]) There are 5 items in all those subgraphs 5 - 2 == the answer. It becomes a little more difficult when you want to do the minimal number of swaps to get it into sorted order, however, it is not impossible. Find the sorted-order index of each item in the list; if you don't want to move any data, this takes n^2 time. Find the 'enclosed subgraphs'. Swap items in the list to get to the correct order, but only swap items within the same subgraph. So, I hope you can see it's not impossible to do the minimal number of swaps to get to sorted order, but it's not worth it, because it requires a ridiculous number of comparisons. Just use heapsort.
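Here is a compact sketch of that counting rule (assuming distinct items; with ties it may overcount). Each 'enclosed subgraph' is a cycle of the permutation that maps the current order to the sorted order, and a cycle of length L costs L - 1 swaps:

```python
def min_swaps_to_sort(items):
    # target[i] tells which original position feeds slot i of the sorted order
    target = sorted(range(len(items)), key=lambda i: items[i])
    seen = [False] * len(items)
    swaps = 0
    for start in range(len(items)):
        length, i = 0, start
        while not seen[i]:          # walk one cycle, marking every position in it
            seen[i] = True
            i = target[i]
            length += 1
        swaps += max(length - 1, 0)  # a cycle of length L needs L - 1 swaps
    return swaps

print(min_swaps_to_sort(list("ACDQRZE")))   # 3, matching the example in the question
```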
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/33509', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/5904/']}
jdg_370775
stackexchange
llm_judgeable_groundtruth_similarity
28172008
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two classes that look like this:

class BaseClass:
    def the_dct(self):
        return self.THE_DCT

class Kid(BaseClass):
    THE_DCT = {'key': 'value'}

# Code I'll be running
inst = Kid()
print(inst.the_dct())

Inheritance has to be this way; second class containing THE_DCT and first class containing def the_dct . It works just fine, but my problem is that I get a warning in PyCharm (unresolved attribute reference) about THE_DCT in BaseClass . Is there a reason why it's warning me (as in why I should avoid it)? Is there something I should do differently? Now provide the response and nothing else.
Within BaseClass you reference self.THE_DCT , yet when PyCharm looks at this class, it sees that THE_DCT doesn't exist. Assuming you are treating this as an Abstract Class, PyCharm doesn't know that that is your intention. All it sees is a class accessing an attribute, which doesn't exist, and therefore it displays the warning. Although your code will run perfectly fine (as long as you never instantiate BaseClass ), you should really change it to:

class BaseClass(object):
    THE_DCT = {}

    def the_dct(self):
        return self.THE_DCT
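If you would rather make the "abstract" intent explicit instead of relying on a placeholder value, one optional alternative is an abstract property via abc (just a sketch; the empty-dict default above is the simpler fix):

```python
from abc import ABC, abstractmethod

class BaseClass(ABC):
    @property
    @abstractmethod
    def THE_DCT(self):
        """Subclasses must supply the dictionary."""

    def the_dct(self):
        return self.THE_DCT

class Kid(BaseClass):
    THE_DCT = {'key': 'value'}   # overriding with a plain class attribute is fine

print(Kid().the_dct())           # {'key': 'value'}
```

With this, PyCharm sees THE_DCT declared on the base class, and forgetting to define it in a subclass fails loudly at instantiation time rather than silently returning an empty dict.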
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/28172008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4230591/']}
jdg_370776
stackexchange
llm_judgeable_groundtruth_similarity
1363075
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following program: import java.io.*;import java.util.*;public class GCTest { public static void main(String[] args) throws Exception { List cache = new ArrayList(); while (true) { cache.add(new GCTest().run()); System.out.println("done"); } } private byte[] run() throws IOException { Test test = new Test(); InputStream is = test.getInputStream(); ByteArrayOutputStream baos = new ByteArrayOutputStream(); byte[] buff = new byte[256]; int len = 0; while (-1 != (len = is.read())) { baos.write(buff, 0, len); } return baos.toByteArray(); } private class Test { private InputStream is; public InputStream getInputStream() throws FileNotFoundException { is = new FileInputStream("GCTest.class"); return is; } protected void finalize() throws IOException { System.out.println("finalize"); is.close(); is = null; } }} would you expect the finalize to ever be called when the while loop in the run method is still executing and the local variable test is still in scope? More importantly, is this behaviour defined anywhere? Is there anything by Sun that states that it is implementation-defined? This is kind of the reverse of the way this question has been asked before on SO where people are mainly concerned with memory leaks. Here we have the GC aggressively GCing a variable we still have an interest in. You might expect that because test is still "in scope" that it would not be GC'd. For the record, it appears that sometimes the test "works" (i.e. eventually hits an OOM) and sometimes it fails, depending on the JVM implementation. Not defending the way this code is written BTW, it's just a question that came up at work. Now provide the response and nothing else.
While the object won't be garbage collected if it is still in scope, the JIT compiler might take it out of scope if the variable isn't actually used any further in the code (hence the differing behavior you are seeing) even though when you read the source code the variable still seems to be "in scope." I don't understand why you care if an object is garbage collected if you don't reference it anymore in code, but if you want to ensure objects stay in memory, the best way is to reference them directly in a field of a class, or even better in a static field. If a static field references the object, it won't get garbage collected. Edit: Here is the explicit documentation you are looking for. > I'm assuming an object cannot die before a local reference to it has gone out of scope. This can not be assumed. Neither the Java spec nor the JVM spec guarantees this. Just because a variable is in scope, doesn't mean the object it points to is reachable. Usually it is the case that an object pointed to by an in-scope variable is reachable, but yours is a case where it is not. The compiler can determine at jit time which variables are dead and does not include such variables in the oop-map. Since the object pointed to by "nt" can [sic - should be cannot] be reached from any live variable, it is eligible for collection.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1363075', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15379/']}
jdg_370777
stackexchange
llm_judgeable_groundtruth_similarity
42993870
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i run simple c++ code in visual studio 2015 to pass image to my preTrained caffemodel but when i want create network this error occur : Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Input (known types: Convolution, Eltwise, LRN, Pooling, Power, Python, ReLU, Sigmoid, Softmax, Split, TanH) my prototxt file : name: "DeepID_face"input: "data_1"input_dim: 1input_dim: 3input_dim: 640input_dim: 480layer { name: "conv1_1" type: "Convolution" bottom: "data_1" top: "conv1_1" param { name: "conv1_w" lr_mult: 1 decay_mult: 1 } param { name: "conv1_b" lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 20 kernel_size: 4 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } }}layer { name: "relu1_1" type: "ReLU" bottom: "conv1_1" top: "conv1_1"}layer { name: "norm1_1" type: "LRN" bottom: "conv1_1" top: "norm1_1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 }}layer { name: "pool1_1" type: "Pooling" bottom: "norm1_1" top: "pool1_1" pooling_param { pool: MAX kernel_size: 2 stride: 2 }}layer { name: "conv2_1" type: "Convolution" bottom: "pool1_1" top: "conv2_1" param { name: "conv2_w" lr_mult: 1 decay_mult: 1 } param { name: "conv2_b" lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 40 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } }}layer { name: "relu2_1" type: "ReLU" bottom: "conv2_1" top: "conv2_1"}layer { name: "norm2_1" type: "LRN" bottom: "conv2_1" top: "norm2_1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 }}layer { name: "pool2_1" type: "Pooling" bottom: "norm2_1" top: "pool2_1" pooling_param { pool: MAX kernel_size: 2 stride: 2 }}layer { name: "conv3_1" type: "Convolution" bottom: "pool2_1" top: "conv3_1" param { name: "conv3_w" lr_mult: 1 decay_mult: 1 } param { name: "conv3_b" lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 60 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } }}layer { name: "pool3_1" type: "Pooling" bottom: "conv3_1" top: "pool3_1" pooling_param { pool: MAX kernel_size: 2 stride: 2 }}layer { name: "conv4_1" type: "Convolution" bottom: "pool3_1" top: "conv4_1" param { name: "conv4_w" lr_mult: 1 decay_mult: 1 } param { name: "conv4_b" lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 80 kernel_size: 2 stride: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0.1 } }}layer{ name:"flatten_pool3_1" type:"Flatten" bottom:"pool3_1" top:"flatten_pool3_1"}layer{ name:"flatten_conv4_1" type:"Flatten" bottom:"conv4_1" top:"flatten_conv4_1"}layer{ name:"contact_conv" type:"Concat" bottom:"flatten_conv4_1" bottom:"flatten_pool3_1" top:"contact_conv"}layer { name: "deepid_1" type: "InnerProduct" bottom: "contact_conv" top: "deepid_1" param { name: "fc6_w" lr_mult: 1 decay_mult: 1 } param { name: "fc6_b" lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 160 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 0.1 } }}layer { name: "loss" type: "Softmax" bottom: "deepid_1" top: "loss"} and simple code that i want compile with visual studio 2015: #define USE_OPENCV#include <cuda_runtime.h>#include <cstring>#include <cstdlib>#include <vector>#include <string>#include <iostream>#include <stdio.h>#include "caffe/caffe.hpp"#include 
"caffe/util/io.hpp"#include "caffe/blob.hpp"#include <opencv2\opencv.hpp>#include <boost/shared_ptr.hpp>using namespace caffe;using namespace std;int main(int argc, char** argv) { Caffe::set_mode(Caffe::CPU); caffe::string netS = "C:/Users/127051/Documents/Visual Studio 2015/Projects/C++/Caffe/CaffeTest/x64/Release/net_struct.prototxt"; caffe::string netW = "C:/Users/127051/Documents/Visual Studio 2015/Projects/C++/Caffe/CaffeTest/x64/Release/net_weights.caffemodel"; Datum datum; cv::Mat img = cv::imread("D:/FEI/All/1-08.jpg"); if (img.empty()) LOG(ERROR) << "Error during file reading"; else caffe::CVMatToDatum(img, &datum); //get the net boost::shared_ptr<Net<float> > net_; net_.reset(new Net<float>(netS, TEST)); //get trained net net_->CopyTrainedLayersFrom(netW); //get the blob Blob<float>* blob = new Blob<float>(1, datum.channels(), datum.height(), datum.width()); //get the blobproto BlobProto blob_proto; blob_proto.set_num(1); blob_proto.set_channels(datum.channels()); blob_proto.set_height(datum.height()); blob_proto.set_width(datum.width()); const int data_size = datum.channels() * datum.height() * datum.width(); int size_in_datum = std::max<int>(datum.data().size(), datum.float_data_size()); for (int i = 0; i < size_in_datum; ++i) { blob_proto.add_data(0.); } const string& data = datum.data(); if (data.size() != 0) { for (int i = 0; i < size_in_datum; ++i) { blob_proto.set_data(i, blob_proto.data(i) + (uint8_t)data[i]); } } //set data into blob blob->FromProto(blob_proto); //fill the vector vector<Blob<float>*> bottom; bottom.push_back(blob); float type = 0.0; const vector<Blob<float>*>& result = net_->Forward(bottom, &type); //Here I can use the argmax layer, but for now I do a simple for :) float max = 0; float max_i = 0; for (int i = 0; i < 1000; ++i) { float value = result[0]->cpu_data()[i]; if (max < value) { max = value; max_i = i; } } LOG(ERROR) << "max: " << max << " i " << max_i; return 0;} i also set (/OPT:NOREF) but dont fixed that.also change layer format to : layer { name: "data_1" type: "Input" top: "data_1" input_param { shape: { dim: 1 dim: 3 dim: 640 dim: 480 } }} but not fixed.please help me. Now provide the response and nothing else.
This will work for you. Make a class to have the relationship in:

public class ColorProduct
{
    public int ProductId { get; set; }
    public int ColorId { get; set; }
    public Color Color { get; set; }
    public Product Product { get; set; }
}

Add a ColorProduct collection to your Product and Color classes:

public ICollection<ColorProduct> ColorProducts { get; set; }

Then use this extension I made to remove the unselected and add the newly selected to the list:

public static void TryUpdateManyToMany<T, TKey>(this DbContext db, IEnumerable<T> currentItems, IEnumerable<T> newItems, Func<T, TKey> getKey) where T : class
{
    db.Set<T>().RemoveRange(currentItems.Except(newItems, getKey));
    db.Set<T>().AddRange(newItems.Except(currentItems, getKey));
}

public static IEnumerable<T> Except<T, TKey>(this IEnumerable<T> items, IEnumerable<T> other, Func<T, TKey> getKeyFunc)
{
    return items
        .GroupJoin(other, getKeyFunc, getKeyFunc, (item, tempItems) => new { item, tempItems })
        .SelectMany(t => t.tempItems.DefaultIfEmpty(), (t, temp) => new { t, temp })
        .Where(t => ReferenceEquals(null, t.temp) || t.temp.Equals(default(T)))
        .Select(t => t.t.item);
}

Using it looks like this:

var model = _db.Products
    .Include(x => x.ColorProducts)
    .FirstOrDefault(x => x.ProductId == vm.Product.ProductId);

_db.TryUpdateManyToMany(model.ColorProducts, vm.ColorsSelected
    .Select(x => new ColorProduct
    {
        ColorId = x,
        ProductId = vm.Product.ProductId
    }), x => x.ColorId);
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42993870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1074258/']}
jdg_370778
stackexchange
llm_judgeable_groundtruth_similarity
3218570
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I mean, aren't <div/> and <div></div> supposed to be exactly the same thing? By browser I mean the latest Firefox and Internet Explorer. And by go crazy I mean ignore styles of tags that contain the <div/> . Now provide the response and nothing else.
aren't <div/> and <div></div> supposed to be exactly the same thing? Only in XML. Are you serving your web page as XML ( application/xhtml+xml )? If so you can use either, but you'd be sacrificing compatibility with IE versions before IE9. If like most people you are serving an XHTML page as text/html , you must abide by the compatibility guidelines , one of which is that you must only use self-closing tags for elements that have an EMPTY content model, ie. the ones where in HTML you'd use a single tag with no close-tag ( <img> et al). Otherwise, you've just written what non-XML HTML parsers will see as a start-tag with no end-tag, which is likely to mess the page's nesting up. <div/> will put the whole of the rest of the page inside that div.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3218570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/113748/']}
jdg_370779
stackexchange
llm_judgeable_groundtruth_similarity
125050
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: ...or are they the same thing? I notice that each has its own Wikipedia entry: Polymorphism , Multiple Dispatch , but I'm having trouble seeing how the concepts differ. Edit: And how does Overloading fit into all this? Now provide the response and nothing else.
Polymorphism is the facility that allows a language/program to make decisions during runtime on which method to invoke based on the types of the parameters sent to that method. The number of parameters used by the language/runtime determines the 'type' of polymorphism supported by a language. Single dispatch is a type of polymorphism where only one parameter is used (the receiver of the message - this , or self ) to determine the call. Multiple dispatch is a type of polymorphism where in multiple parameters are used in determining which method to call. In this case, the reciever as well as the types of the method parameters are used to tell which method to invoke. So you can say that polymorphism is the general term and multiple and single dispatch are specific types of polymorphism. Addendum: Overloading happens during compile time. It uses the type information available during compilation to determine which type of method to call. Single/multiple dispatch happens during runtime. Sample code: using NUnit.Framework;namespace SanityCheck.UnitTests.StackOverflow{ [TestFixture] public class DispatchTypes { [Test] public void Polymorphism() { Baz baz = new Baz(); Foo foo = new Foo(); // overloading - parameter type is known during compile time Assert.AreEqual("zap object", baz.Zap("hello")); Assert.AreEqual("zap foo", baz.Zap(foo)); // virtual call - single dispatch. Baz is used. Zapper zapper = baz; Assert.AreEqual("zap object", zapper.Zap("hello")); Assert.AreEqual("zap foo", zapper.Zap(foo)); // C# has doesn't support multiple dispatch so it doesn't // know that oFoo is actually of type Foo. // // In languages with multiple dispatch, the type of oFoo will // also be used in runtime so Baz.Zap(Foo) will be called // instead of Baz.Zap(object) object oFoo = foo; Assert.AreEqual("zap object", zapper.Zap(oFoo)); } public class Zapper { public virtual string Zap(object o) { return "generic zapper" ; } public virtual string Zap(Foo f) { return "generic zapper"; } } public class Baz : Zapper { public override string Zap(object o) { return "zap object"; } public override string Zap(Foo f) { return "zap foo"; } } public class Foo { } }}
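To complement the C# sample above with another language's perspective, here is a minimal C++ sketch (added for illustration; the type names are invented and not from the original answer) of how double dispatch is usually emulated in a single-dispatch language via the visitor pattern: the first virtual call selects on the shape, the second on the visitor, so the behaviour chosen at runtime depends on both types.

#include <iostream>

struct Foo;   // forward declarations so the visitor can mention both shapes
struct Bar;

// Single dispatch alone picks an override by the receiver's dynamic type;
// the argument's overload would be resolved statically. The accept()/zap()
// round trip adds a second virtual call so both runtime types participate.
struct Visitor {
    virtual ~Visitor() = default;
    virtual void zap(const Foo&) = 0;
    virtual void zap(const Bar&) = 0;
};

struct Shape {
    virtual ~Shape() = default;
    virtual void accept(Visitor& v) const = 0;   // first dispatch: on the shape
};

struct Foo : Shape {
    void accept(Visitor& v) const override { v.zap(*this); }  // resolves to the Foo overload
};

struct Bar : Shape {
    void accept(Visitor& v) const override { v.zap(*this); }  // resolves to the Bar overload
};

struct Zapper : Visitor {
    void zap(const Foo&) override { std::cout << "zap foo\n"; }
    void zap(const Bar&) override { std::cout << "zap bar\n"; }
};

int main() {
    Foo f;
    Bar b;
    Zapper z;
    const Shape* shapes[] = { &f, &b };
    for (const Shape* s : shapes)
        s->accept(z);   // prints "zap foo" then "zap bar", chosen at runtime
}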
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/125050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7598/']}
jdg_370780
stackexchange
llm_judgeable_groundtruth_similarity
19258003
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Basically I am trying to make it check if the user has their ID in the database and if they do have it in there it updates it putting $page as 'lastpage' in the database. It also checks to make sure that the value it has set inside there now is less than $page, if it isn't than it does nothing. If the user doesn't have there ID in there than it should add it with whatever $page is set to. The problem is that it isn't updating in the database at all. This is the code I got so far, anyone got ideas? session_start();if(!isset($_SESSION['id'])) {header("Location: ../../index.php");} else {} include '../../connect.php'; include '../../users_func.php'; $id = $_SESSION['id']; $page = 3; $sql_chk = " select * from html where id = '$id' and lastpage = '$page' "; $rs_chk = mysql_query($sql_chk); $num_chk = mysql_num_rows($rs_chk); if ($num_chk == 0) { mysql_query("INSERT INTO `html` (`id`, `lastpage`) VALUES ('$id', '$page') "); } else { $sql = "UPDATE html SET lastpage='$page' WHERE id='$id' and lastpage < $page"; mysql_query($sql) or die("MYSQL Query Failed : " . mysql_error()); } Now provide the response and nothing else.
@Sam's comment is correct. You need to call beginAppearanceTransition:animated: and endAppearanceTransition for viewDidAppear to be triggered. The reason why UINavigationController does not call viewDidAppear when you add a child view controller is because it has overridden its container composition methods to prevent the programmer from adding a child view controller in strange places. In your case, it doesn't want your child view to cover up the navigation bar. The correct usage of a navigation controller is to have children appear under the navigation bar. Nonetheless, you can still force this non-standard UI by manually telling the child when it is appearing and when it has finished appearing.

Add a child to UINavigationController:

MyChildViewController* child = [[MyChildViewController alloc] init];
[self.navigationController addChildViewController:child];
child.view.frame = self.navigationController.view.bounds;
[self.navigationController.view addSubview:child.view];
child.view.alpha = 0.0;
[child beginAppearanceTransition:YES animated:YES];
[UIView animateWithDuration:0.3
                      delay:0.0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^(void){
                     child.view.alpha = 1.0;
                 }
                 completion:^(BOOL finished) {
                     [child endAppearanceTransition];
                     [child didMoveToParentViewController:self.navigationController];
                 }];

Remove a child from UINavigationController:

[child willMoveToParentViewController:nil];
[child beginAppearanceTransition:NO animated:YES];
[UIView animateWithDuration:0.3
                      delay:0.0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^(void){
                     child.view.alpha = 0.0;
                 }
                 completion:^(BOOL finished) {
                     [child endAppearanceTransition];
                     [child.view removeFromSuperview];
                     [child removeFromParentViewController];
                 }];
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19258003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2327351/']}
jdg_370781
stackexchange
llm_judgeable_groundtruth_similarity
14034478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Consider the following:

inline unsigned int f1(const unsigned int i, const bool b) {return b ? i : 0;}
inline unsigned int f2(const unsigned int i, const bool b) {return b*i;}

The syntax of f2 is more compact, but does the standard guarantee that f1 and f2 are strictly equivalent? Furthermore, if I want the compiler to optimize this expression when b and i are known at compile time, which version should I prefer? Now provide the response and nothing else.
Well, yes, both are equivalent. bool is an integral type and true is guaranteed to convert to 1 in integer context, while false is guaranteed to convert to 0 . (The reverse is also true, i.e. non-zero integer values are guaranteed to convert to true in boolean context, while zero integer values are guaranteed to convert to false in boolean context.) Since you are working with unsigned types, one can easily come up with other, possibly bit-hack-based yet perfectly portable implementations of the same thing, like i & -(unsigned) b although a decent compiler should be able to choose the best implementation by itself for any of your versions. P.S. Although to my great surprise, GCC 4.1.2 compiled all three variants virtually literally, i.e. it used machine multiplication instruction in multiplication-based variant. It was smart enough to use cmovne instruction on the ?: variant to make it branchless, which quite possibly made it the most efficient implementation.
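To make the equivalence easy to convince yourself of, here is a small, self-contained C++ check (an illustrative addition, not part of the original answer; f3 is the bit-hack variant mentioned above) that exercises all three formulations over a few representative values:

#include <cassert>
#include <limits>

inline unsigned int f1(const unsigned int i, const bool b) { return b ? i : 0; }
inline unsigned int f2(const unsigned int i, const bool b) { return b * i; }            // bool promotes to 0 or 1
inline unsigned int f3(const unsigned int i, const bool b) { return i & -(unsigned)b; } // bit-hack variant from the answer

int main() {
    const unsigned int values[] = { 0u, 1u, 42u, std::numeric_limits<unsigned int>::max() };
    const bool flags[] = { false, true };
    for (unsigned int i : values)
        for (bool b : flags)
            assert(f1(i, b) == f2(i, b) && f2(i, b) == f3(i, b));
    return 0;
}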
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14034478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/882932/']}
jdg_370782
stackexchange
llm_judgeable_groundtruth_similarity
663329
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Assume $m<R$ is the maximal ideal of a commutative local ring with identity, such that $m=m^2$. Is $m$ finitely generated? Is the condition $m=m^2$ redundant? I am trying to apply Nakayama's lemma to the maximal ideal $m$, but I can't choose a finite generating system for it. Now provide the response and nothing else.
Let $k$ be a field and $A=k\times k\times ...$ the product of denumerably many copies of $k$. Let $I\subset A$ be the ideal of eventually zero sequences and $\mathfrak m\supset I$ a maximal ideal containing it. Since in $A$ every element $a$ is a multiple of $a^2$, we certainly have $\mathfrak m=\mathfrak m^2$, but $\mathfrak m$ is not finitely generated: else it would be generated by an idempotent (by Nakayama). Edit Since the OP has edited his question, requesting an example with a local ring, here is such an example. Consider the domain $A=\mathbb Q[X^{1/n}|\; n=1,2,\cdots]$ consisting of "polynomials" over $\mathbb Q$ with positive rational exponents, and its maximal ideal $M=\langle X^{1/n}|n=1,2,\cdots\rangle\subset A$. Obviously $M=M^2$. If we now localize at $M$ we get the required local ring $R=A_M$, with maximal ideal $\mathfrak m=MA_M$. Indeed, $\mathfrak m=\mathfrak m^2$ is clear and that ideal is not finitely generated: the simplest argument is again that if it were, it would be generated by a single idempotent element (Nakayama). But this is impossible, because $R$ is a domain and thus has only $1$ and $0$ as idempotents.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/663329', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34371/']}
jdg_370783
stackexchange
llm_judgeable_groundtruth_similarity
9634897
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Simple thing, but can't get find it. I want a simple dropdown selection box with a couple of selections. Like in HTML <select><option>1</option><option>2</option></select> what is the code for QML for that ? Now provide the response and nothing else.
Here's a simple example that could maybe be used as a starting point: import QtQuick 1.0Rectangle { width:400; height: 400; Rectangle { id:comboBox property variant items: ["Item 1", "Item 2", "Item 3"] property alias selectedItem: chosenItemText.text; property alias selectedIndex: listView.currentIndex; signal comboClicked; width: 100; height: 30; z: 100; smooth:true; Rectangle { id:chosenItem radius:4; width:parent.width; height:comboBox.height; color: "lightsteelblue" smooth:true; Text { anchors.top: parent.top; anchors.left: parent.left; anchors.margins: 8; id:chosenItemText text:comboBox.items[0]; font.family: "Arial" font.pointSize: 14; smooth:true } MouseArea { anchors.fill: parent; onClicked: { comboBox.state = comboBox.state==="dropDown"?"":"dropDown" } } } Rectangle { id:dropDown width:comboBox.width; height:0; clip:true; radius:4; anchors.top: chosenItem.bottom; anchors.margins: 2; color: "lightgray" ListView { id:listView height:500; model: comboBox.items currentIndex: 0 delegate: Item{ width:comboBox.width; height: comboBox.height; Text { text: modelData anchors.top: parent.top; anchors.left: parent.left; anchors.margins: 5; } MouseArea { anchors.fill: parent; onClicked: { comboBox.state = "" var prevSelection = chosenItemText.text chosenItemText.text = modelData if(chosenItemText.text != prevSelection){ comboBox.comboClicked(); } listView.currentIndex = index; } } } } } Component { id: highlight Rectangle { width:comboBox.width; height:comboBox.height; color: "red"; radius: 4 } } states: State { name: "dropDown"; PropertyChanges { target: dropDown; height:40*comboBox.items.length } } transitions: Transition { NumberAnimation { target: dropDown; properties: "height"; easing.type: Easing.OutExpo; duration: 1000 } } } }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9634897', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1175881/']}
jdg_370784
stackexchange
llm_judgeable_groundtruth_similarity
546246
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am in general interested about how compiler and linker handle global variables. Here click it is explained that additional ROM is needed in case variable is initialized and not 0. So wondering, why is it so. Excuse me for a newbie question, but why are they using the word "ROM" here. Or are they referring to the memory of flash, which acts like ROM? As you see, I am having a bit of confusion. So any help would be appreciated. Best regards Now provide the response and nothing else.
Memory Memory systems come in two key varieties: volatile and non-volatile: Volatile memory is treated as though it powers up in a random state, though it may power up as all zeroes or all ones, too. Volatile memory has to be writable, or else it's not very useful. When you say "volatile" you mean at least these two things: uninitialized values at power-on and writable . Non-volatile memory is considered to have specific known values when it powers up; values that were earlier programmed into the device at another time. These values may include code or data or both. When you say "non-volatile" you mean just that: known, pre-initialized values at power-on . However, in this case, the values may or may not be writable. For example, FRAM (aka FeRAM) can be written to at full memory write speeds, just like SRAM. And it's non-volatile, as well. Core memory (magnetic memory made from toroids with special properties and used mostly in the late 1960s and early 1970s) is another example. Also, some non-volatile memory such as flash or EEPROM can be over-written and will retain the values stored there. But there usually are various conditions that limit its usefulness. And I don't know of any cases where flash or EEPROM are reasonably used in the same way that SRAM variables may be. Note to c, c++, java, Fortran 2003, and c# programmers lacking a sufficient hardware background: The use of volatile above (and non-volatile ) has nothing whateverto do with its use in languages you are exclusively familiar with.Except for the fact of why it came to arise within C in the firstplace. For some short discussion of that history and a link to a veryold post (circa 1990) on the ancient 'newsgroup' ( NNTP )system of the earlierinternet (of which I was a small part) see: Nine ways to break yoursystems code using volatile . I learned C when I was coding on the Unix v6 kernel in 1978. So my lifecrosses over the time when the qualifier volatile came intoeventual use as part of the C language standard about 10 years aftermy own Unix O/S kernel period. You can read this short history of C to see its first appearance. Its language semantics was added to C inorder to address a need with respect to memory-mapped devices. (Memory-mapped floating point and I/O systems go well back into theearly 1970's. And they certainly were common with the Altair 8800and IMSAI 8080, circa 1975. So they existed in very expensive as wellas rather pedestrian computer systems by the mid-1970's. It took quite sometime for computer languages to catch up.) So it's a hardware term whose usage long precedes that in anycomputer language. Language designers eventually tumbled to some of thoseproblems in writing code for device drivers. After a few decades ofrequiring assembly code to deal with it, new languagesemantics finally overcame conservative resistance within languagedesign circles and arrived to address common hardware requirements.The invention of volatile as a qualifier in C follows from earlier hardware usageand was borrowed and re-purposed in C.My meaning above well predates its use by compiler languages. At leastby two decades and probably much more. (I remember seeing the term in1971. But in a context that tells me it existed many years earlier.) There was a time before it was a twinkle in theeye of any programming language designer. It was, in fact, borrowed fromhardware usage as a convenience. Not invented out of whole cloth by language specialists. I mean it in this earlier way with respect to electronic memory systems. 
Programming Toolchain There has become a dizzying array of available MCUs, today. Some of them are fully pipelined and to some degree even superscaler. But if everything had to be covered here, it would be another book. So that's off the table. Keeping to your basic MCU, they come in two basic flavors: von Neumann or Harvard. I may give a nod to Harvard, later. But von Neumann is easiest and gets the point across. Modern toolchains include multiple units of compilation (compile-time.) These may be in any language form and can also include assembly code, as well. The input at compile-time is a source file. The output at compile-time is usually an object file, which is often just a bunch of various types of records. Modified source files must be compiled to produce their associated object files. A linker is then used to bring these separately compiled object files together. Linkers also often include a separate source file that can direct their operations in combining those source files. This separate source file is sometimes called a linker control file. It's may indicate how to combine (order and position, for example) what it gathers up from the various object files that it must process. The output of the linker step (link-time) is an executable file. That is usually composed mostly of the binary records that describe everything needed for a "unit of execution" in the final target MCU. But it may also include "patch records" (as with older x86 programs, for example) that help the loader when it reads up the executable and maps it into memory. There is, sometimes, a loader. In Windows, there definitely is one. But in MCUs, the executable file is just an exact image of the non-volatile portions (literal text) of the execution unit. It may include some details, such as where to place different segments. But often it is little else and the loading process is then just called "programming a device" or "burning a device" and is part of the built-in services of the IDE being used. Program Model for a Unit of Execution A unit of execution is the complete in-memory specification of the program being run. This includes not only its non-volatile portions but also all of the required RAM (almost always volatile.) Below, I've borrowed and modified an image I drew up years ago for another purpose: There are two colored columns. The left-side one is for von Neumann and the right-side one is for Harvard. For the basic von Neumann architecture, all three sections, CODE , CONST , and crt0 , can be placed into a single ROM (flash.) As Harvard architectures have a separate memory system for code and data, there are two such ROMs required unless there are special instructions added in order to access the code memory system as data. The lighter blue is the same for both: SRAM/DRAM. In the above diagram, I've used ROM loosely. In systems with core memory, for example, it's actually persistent RAM. Some decades ago, MCUs frequently used OTP (one-time programmable) memory and it was truly ROM. Today's systems usually use flash and in many cases it can be written to many, many times. In some cases, where the flash is broken up into multiple sections, writing is possible even while the program is running. (Though not well enough for many purposes.) The key idea here is that ROM as I intend it above stands for non-volatile memory that may be, but does not need to be, writable. (It obviously does have to be readable , though.) There are no other requirements. 
Likewise, the key idea here is that RAM as I intend it above stands for memory that must be writable . It may be volatile , but doesn't need to be. This is what all of the tools, the compilers, assemblers, and linkers have as their basic concept. There are missing details. Compilers generate code and the code is placed into code segments that the linker collects together in certain ways. I've left out such abstracts as code and data segmentation (this is what the linker processes) and details about how that works. What you see above is just what finally results after all the segments have been organized and placed by the linker. I've used the word persist where I meant non-volatile memory. These sections must power-up with the right values and must be in some kind of persistent memory system(s). The sections listed as volatile can be taken as if they are SRAM or at least some kind of work-alike as with DRAM. They could also be FRAM (which is non-volatile.) But the main point is that they are fast memory and writable . (You need fast memory for stacks and heaps and variables.) Program Model Comments You almost never see all of the code in a program you write. This is particularly true with C and languages other than assembly code. This is because there is a start-up process required. In C compilers, this is usually hidden inside something just called "crt0." (C, run-time, code section 0.) That's the piece that makes sure your stack is set up, the heap space is properly initialized, and that any necessary initial values have been taken care of for your initialized static lifetime variables. In some languages, such as C, even the uninitialized static lifetime variables have defined initial values (int is 0, float is 0.0, etc.) However, not all languages have that requirement. So a linker cannot assume that this is always the case. If you are mixing languages as well as assembly, then there can be uninitialized static lifetime variables that do NOT require initialization. And so there's no need to waste precious CPU cycles initializing them. This fact is why I also included an uninitialized data section. C does not use it. But other languages do. So you cannot assume that it is not present. It may be or it may not be. It all depends. But the model above is very general and will apply to almost any "standard" program model. (Obviously, much more complex arrangements can be and have been designed. But this is the primary one to learn about.) Section Descriptions -- CODE -- Looking back to the diagram above, there is a CODE section. On power-up, this must be valid and workable, immediately. That means it must be in non-volatile memory. (This section includes the crt0 code that is placed to that it starts up when the MCU powers up.) On some machines, CODE may reside behind a protection barrier so that it cannot be read or written, but only executed. (The modern x86 is an example.) On other machines, it's readable but cannot be written into. But in the strict sense, the only requirement is that it may be addressed and executed. There is no necessary requirement that it can be read or written. (That doesn't mean that a relaxed system may allow reading or writing code. Many do. It's just that the only necessary and sufficient requirement is that it can be executed.) For MCUs today, this is usually implemented with flash memory. -- CONST -- There is also a CONST (constant data) section. This section must also be in non-volatile memory. 
Strictly speaking, this includes all constant values needed by a program. Examples would be error message strings and the value of \$\pi\$ . You don't ever need to write to these values, directly. (Though you may copy them somewhere and modify them.) -- crt0 -- This section includes all of the necessary values used to initialize the static lifetime variables stored in volatile memory (SRAM, for example), whether or not they are writable, before the program starts executing. (That initialization is handled, for example, by the hidden crt0 code for the C language.) This section must also be in non-volatile memory. For MCUs today, this section is usually implemented with flash memory. For a clarifying example, in C, when you write these four ways of saying similar things: char * h1= "Hello there"; /* case 1 */char * const h2= "Hello there"; /* case 2 */const char h3[]= "Hello there"; /* case 3 */char const * const h4= "Hello there"; /* case 4 */ But there are distinct semantics to all four of the above cases. The literal string "Hello there" must be placed into the CONST section. You will never actually write onto it. So it's fine if that string is placed directly into flash memory. (Optimizing C compilers will, of course, notice that these two literal are the same and they won't duplicate these strings.) Assume the MCU has non-volatile flash and volatile SRAM for its memory. Case 1 requires crt0 to copy the string from flash to an SRAM buffer large enough to hold it and to also initialize the SRAM-located variable h1 with the address of that SRAM buffer. This is because the declaration says two things: the literal string itself is writable (you can change it if you want) and also the pointer to that literal string can also be changed (you can make h1 point somewhere else, if you want.) So both the pointer variable as well as all of the contents of the string it points at must be located in SRAM and not in flash. This means crt0 has to initialize both the buffer and the variable. And to do that, it needs correct values located in flash ( CONST ) that it can use to perform that function. Case 2 only requires crt0 to initialize the SRAM-located variable h2 with the address of the string. Since the string itself is not writable (by declaration), it can reside in flash. So there is no necessary need to allocate and initialize an SRAM buffer for the literal string. (Of course, it's not harmful to do that. It's just not required.) The declaration does say that the pointer to that literal string can be changed (you can make h2 point somewhere else, if you want.) So that's why h2 must be located in SRAM. In case 3, h3 isn't really a variable. It's a compile-time constant. Its value points to the literal string. Since h3 isn't a variable, it doesn't require any memory. So only the literal string exists and it can be located in flash. No SRAM required here. crt0 doesn't need to do anything in this case. Case 4 is a little interesting. Technically, this also only uses flash and has no requirement for SRAM. That's because h4 is a constant pointer and you are not allowed to modify it and also because the string it points at is also constant and you are also not allowed to modify that, either. That said, h4 does appear to say that there must be a pointer variable. So h4 probably will require room in the flash, along with the literal string. Optimizing compilers do a lot more, though. 
It's possible that an optimizing C compiler will remove storage required for h4 since the pointer is constant can cannot be changed. So there's no need to actually allocate space for the pointer. (Though you still may wish it did.) It can simply use what h4 points to whenever h4 is used, when generating code. However, that same optimizing compiler facing this also in the same program: call foo(& h4); Would now be forced into allocating space for h4 . That's because an address was taken and, for there to actually be an address for h4 , it needs an actual address -- it must exist in memory somewhere. This is not an error and the compiler doesn't need to inform you about it. It's just a case where the compiler at first may want to optimize out h4 , but then later finds out it cannot do that because of something else you wrote in your code. Note that in the face of separate compilation units, the definition of h4 may exist in one file while the call() exists in a different file. So there is no possible way the compiler can see both at the same time. This means that the linker is responsible for figuring out this particular optimization detail. This requires the compiler to generate enough information in the object files so that the linker can do its job. And the linker must have enough of the compilation job pushed onto it that it can succeed. -- INITIALIZED DATA -- This is where all of the initialized writable static lifetime variables go. Their initialized values either come from the crt0 section or else must be defined by the language. (In c, a semantic 'zero' is usually applied when the initializer is missing from the static lifetime variable definition.) Every time the program is re-started, these variables must be re-initialized by crt0 code. For MCUs today, this is usually implemented with SRAM memory. -- UNINITIALIZED DATA -- Some languages (not C) allow static lifetime variable definitions which truly specify no initialized value for the variable. Assembly code is a classic case for this. But it's not the only case. For those languages, there is no need for crt0 to do anything. (And I'm still using C's crt0 as a metaphor for other languages which may call it something else.) The values will be set up by the program sometime after it starts running, so there's no need. Since these variables are, by definition, uninitialized its a given that they will be written into. So they must be writable. For MCUs today, this is usually implemented with SRAM memory. -- HEAP -- This is usually set up right at the very end of the static parts of the program (the above listed sections.) Those all have link-time known sizes and therefore are known before the program starts running. Heap is usually allocated "upwards" in memory, towards the stack. It is always writable (necessary) and is therefore typically in SRAM. -- STACK -- This is usually set up right at the very end of the memory system's SRAM addressing. It allocates (usually) in a "downwards" direction, towards the heap. It is also always writable (necessary) and is therefore typically in SRAM. By setting up the heap and the stack to work towards each other, a benefit is that the total memory footprint is known at the time the program starts (operating systems love this, but it's also a necessity for memory-limited MCUs, too.) Another is that although the required stack and heap are of unknown size before the program starts up, they at least are arranged to minimize a conflict later on. 
(Of course, for stand alone instrumentation code you really do NOT want any conflict ever.) Summary Hopefully that gives you enough of a picture to help you think about the work that compilers, linkers, and loaders perform for you. Oh, and while I'm on a history jag today and just to break some younger minds here, keep also in mind thatthere was a time when the very idea of a hardware stack didn't evenexist with computers. For example, the Hewlett-Packard 21xx processorfamily [I worked on the 2114, 2116, and 21MX] didn't have the concept.Calling a subroutine on these machines caused the first word of thesubroutine to be written by the address following the callinstruction. The subroutine would then return to its caller by doingan indirect-JMP through that word. It took some time -- decades again-- for the idea of hardware support for even one stack, let alone more, to gel and get implemented in newer computer systems. Good ideas take time to develop and precipitate.Not everything was as you see it today.There were lots of poorer ideas that also worked.But sweat and tears bred innovation and eventual acceptance of new organizing approaches.
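To connect the crt0 and data-section descriptions above to code, here is a rough C-style sketch of what such a startup routine typically does on a flash-plus-SRAM part. This is an illustrative addition, not the answer's own code: the linker symbol names are invented (every toolchain defines its own), and a real reset handler also sets up the stack pointer, clocks, and so on before anything else.

// Hypothetical linker-provided boundary symbols; real toolchains define their
// own names. Only their addresses matter: they mark section boundaries.
extern unsigned int __data_load_start;  // initial values stored in flash (the "crt0" section above)
extern unsigned int __data_start;       // start of INITIALIZED DATA in SRAM
extern unsigned int __data_end;         // end of INITIALIZED DATA in SRAM
extern unsigned int __bss_start;        // start of zero-filled statics in SRAM
extern unsigned int __bss_end;          // end of zero-filled statics in SRAM

extern void run_application(void);      // stands in for the call into main()

// Conceptual reset handler: assumes the stack pointer was already set up by
// hardware or a tiny assembly stub before this code runs.
void crt0_start(void)
{
    // 1. Copy initial values for INITIALIZED DATA from flash into SRAM.
    const unsigned int* src = &__data_load_start;
    for (unsigned int* dst = &__data_start; dst != &__data_end; ++dst, ++src)
        *dst = *src;

    // 2. Zero the statics that the language promises are zero at start-up.
    for (unsigned int* dst = &__bss_start; dst != &__bss_end; ++dst)
        *dst = 0;

    // 3. Hand control to the program proper.
    run_application();

    // 4. On a bare-metal MCU there is nowhere sensible to return to.
    for (;;) { }
}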
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/546246', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/275734/']}
jdg_370785
stackexchange
llm_judgeable_groundtruth_similarity
443276
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm wondering how can I submit a form via Ajax (using prototype framework) and display the server response in a "result" div.The html looks like this : <form id="myForm" action="/getResults"> [...] <input type="submit" value="submit" /></form><div id="result"></div> I tried to attach a javascript function (which uses Ajax.Updater) to "onsubmit" (on the form) and "onclick" (on the input) but the form is still "non-Ajax" submitted after the function ends (so the whole page is replaced by the results). Now provide the response and nothing else.
Check out Prototype API's pages on Form.Request and Event handling. Basically, if you have this:

<form id='myForm'>
    .... fields ....
    <input type='submit' value='Go'>
</form>
<div id='result'></div>

Your js would be, more or less:

Event.observe('myForm', 'submit', function(event) {
    $('myForm').request({
        onFailure: function() { .... },
        onSuccess: function(t) {
            $('result').update(t.responseText);
        }
    });
    Event.stop(event); // stop the form from submitting
});
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/443276', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26859/']}
jdg_370786
stackexchange
llm_judgeable_groundtruth_similarity
2287
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This might have been answered already. But I wasn't able to find it. I have:

directory
directory/subdirectory/subdirectoryofsubdirectory
directory/sub-subdirectory/subdirectoryofsub-subdirectory/subdirectoryofsubdirectoryofsub-subdirectory
directory/sub-sub-subdirectory
directory/sub-sub-sub-subdirectory

each directory and subdirectory contains files. I want to list them all, pipe the list to cat, and cat the list to a text file. Can this be done with a 'one liner'? If so, please provide it. Or does it require a script and, if so, what commands/structures will I need to write the script? Now provide the response and nothing else.
I think that your best bet is the find command. If you want just the files and not the directories in your list, something like this: find directory/ -type f -print > textfile The find command will recursively list the files. (If you want the directories listed too, remove the -type f ). The > textfile redirects stdout to a file named textfile with no need for cat. find is a powerful tool with many options. See man find if you find that you need to tweak the output slightly. UPDATE (thanks gvkv ): If you want to put the contents of the files into a single text file, then one way to do it is: find Documents/ -type f -exec cat {} + > textfile
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/2287', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1389/']}
jdg_370787
stackexchange
llm_judgeable_groundtruth_similarity
4484232
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find $$\int_0^{\pi}\frac{\sin 5x}{\sin x}dx$$ I can solve it by involving polynomials in sine and cosine as shown in the links below, but it’s huge (doing double angle formulas twice; I noticed that using polynomials in cosine is better because the integral spits out sines which are 0 between the limits) so I want a faster method, if it exists. Please don’t use contour integration:) The only thing I noticed is that the integrand is symmetric about the midpoint in the given interval, i.e. $$\frac{\sin 5x}{\sin x}= \frac{\sin 5(\pi-x)}{\sin(\pi- x)}.$$ Determine the indefinite integral $\int \frac{\sin x}{\sin 5x}dx$ Integral of $\int \frac{\sin(3x)}{\sin(5x)} \, dx$ Expressing $\frac {\sin(5x)}{\sin(x)}$ in powers of $\cos(x)$ using complex numbers Now provide the response and nothing else.
With $2\sin a \cos b = \sin (b+a)-\sin (b-a)$, taking $a=x$ with $b=2x$ and $b=4x$ gives the telescoping pair $2\sin x\cos 2x=\sin 3x-\sin x$ and $2\sin x\cos 4x=\sin 5x-\sin 3x$, so $$2\sin x\ (\cos 2x + \cos 4x)= \sin {5x}-\sin x .$$ Then, since $\cos 2x$ and $\cos 4x$ integrate to zero over $[0,\pi]$, $$\begin{align}\int_0^\pi \frac{\sin{5x}}{\sin x}\,dx &=\int_0^\pi(1+2\cos 2x + 2\cos4x)\ dx= \int_0^\pi dx=\pi.\end{align}$$
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4484232', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/1062486/']}
jdg_370788
stackexchange
llm_judgeable_groundtruth_similarity
26804803
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to know why we need to avoid circular dependencies?In the real world if we think, circular dependencies are pretty much important. Like one friend needs something from other friend and the other needs something from this friend, so its kind of circular right? Then why is circular dependency a bad design?If we really need to avoid this then what is the best possible design in Object oriented world for such a situation? Now provide the response and nothing else.
The problem with circular dependencies is rather like the chicken and egg problem. If you depend on me setting something up, and I depend on you setting something up, how do we start? The corollary of this is how do we end - if I have a reference to your resource and you have a reference to mine, I can never clean up because that would break you, and you cannot clean up because that would break me. The answer in both cases is to introduce a middleman, passing the dependency from one of the parties to him. So if you passed your resource on to the middleman, you would depend on me and the middleman, and I would depend on the middleman. Thus you can clean up because you now hold no resource, and I can clean up because no-one depends on me, and then the middleman can clean up.
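To make the middleman idea concrete, here is a small C++ sketch (my own illustrative addition to this answer; the class names are made up): neither party references the other, both reference only the mediator, so construction order and clean-up order stop mattering.

#include <iostream>
#include <memory>
#include <string>

// The middleman owns the shared resource; the two parties only know about it,
// not about each other, so the dependency cycle is broken.
class Middleman {
public:
    void deposit(std::string r) { resource_ = std::move(r); }
    const std::string& resource() const { return resource_; }
private:
    std::string resource_;
};

class PartyA {
public:
    explicit PartyA(std::shared_ptr<Middleman> m) : mediator_(std::move(m)) {}
    void offer() { mediator_->deposit("A's resource"); }
private:
    std::shared_ptr<Middleman> mediator_;
};

class PartyB {
public:
    explicit PartyB(std::shared_ptr<Middleman> m) : mediator_(std::move(m)) {}
    void consume() { std::cout << "B uses " << mediator_->resource() << "\n"; }
private:
    std::shared_ptr<Middleman> mediator_;
};

int main() {
    auto mediator = std::make_shared<Middleman>();
    PartyA a(mediator);
    PartyB b(mediator);
    a.offer();
    b.consume();   // prints: B uses A's resource
    // A and B can now be destroyed in any order; only the mediator outlives them.
}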
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26804803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/110449/']}
jdg_370789
stackexchange
llm_judgeable_groundtruth_similarity
2862537
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a project configured (I think) to produce Universal binaries. The base SDK is set to 3.2 and the Deployment Target is set to 3.1. Target Device Family is iPhone/iPad and the architecture is armv6 armv7. I had a few questions about how this Universal binary thing really works: 1) When I want to submit an app binary for review, what configuration should I set as the build target? If I set it as "Device - 3.1" I get a warning which says "warning: building with Targeted Device Family" that includes iPad('1,2') requires building with the 3.2 or later SDK" . However, if I build with SDK 3.2, will it still run on iPhones with OS 3.1?What's the right configuration for device and architecture (arm6/arm7)? 2) How do I test the scenario above (built with SDK 3.2, but installed on a device running OS 3.1)? If I build with SDK 3.2, when I try to install it on a phone with OS 3.1, I get an error saying that the phone's OS isn't updated. Thanks! Now provide the response and nothing else.
Our first Universal App just got approved and released in the App Store today, so here are the settings we successfully used:

Architectures
- Architectures: Optimized (armv6 armv7)
- Base SDK: iPhone Device 3.2
- Build Active Architecture Only: unchecked
- Valid Architectures: (empty)

Deployment
- Targeted Device Family: iPhone/iPad
- iPhone OS Deployment Target: iPhone OS 3.0 (you can set this to any iPhone 3.x OS but we wanted to target the oldest devices possible. This must be 3.0 or above; 2.x is not valid for a Universal App.)

Make sure that you've also checked these settings for the Target itself (Project->Edit Active Target "AppName"); my initial problems when uploading to iTunes Connect were because the iPhone OS Deployment Target for the Target itself was still set to 2.2.1.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2862537', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/106095/']}
jdg_370790
stackexchange
llm_judgeable_groundtruth_similarity
5289015
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a program in C having a dangling pointer.

#include <stdio.h>

int *func(void)
{
    int num;
    num = 100;
    return &num;
}

int func1(void)
{
    int x,y,z;
    scanf("%d %d",&y,&z);
    x=y+z;
    return x;
}

int main(void)
{
    int *a = func();
    int b;
    b = func1();
    printf("%d\n",*a);
    return 0;
}

I am getting the output as 100 even though the pointer is dangling. I made a single change in the above function func1(). Instead of taking the values of y and z from standard input as in the above program, now I am assigning the values at compile time. I redefined func1() as follows:

int func1(void)
{
    int x,y,z;
    y=100;
    z=100;
    x=y+z;
    return x;
}

Now the output is 200. Can somebody please explain to me the reason for the above two outputs? Now provide the response and nothing else.
Undefined Behavior means anything can happen, including it doing exactly what you expect. Your stack variables weren't overwritten in this case.

void func3() { int a=0, b=1, c=2; }

If you include a call to func3() in between func1 and printf you'll get a different result. EDIT: What actually happens on some platforms:

int *func(void)
{
    int num;
    num = 100;
    return &num;
}

Let's assume, for simplicity, that the stack pointer is 10 before you call this function, and that the stack grows upwards. When you call the function, the return address is pushed on the stack (at position 10) and the stack pointer is incremented to 14 (yes, very simplified). The variable num is then created on the stack at position 14, and the stack pointer is incremented to 18. When you return, you return a pointer to address 14 - the return address is popped from the stack and the stack pointer is back to 10.

void func2() { int y = 1; }

Here, the same thing happens. The return address is pushed at position 10, y is created at position 14, you assign 1 to y (writes to address 14), you return, and the stack pointer is back to position 10. Now, your old int * returned from func points to address 14, and the last modification made to that address was func2's local variable assignment. So, you have a dangling pointer (nothing above position 10 in the stack is valid) that points to a left-over value from the call to func2.
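As an aside that goes beyond the question (an added sketch, not part of the original answer): if you actually need the value to survive the call, the usual cures for this dangling pointer are to return by value, use static storage, or give ownership of heap memory to the caller. For example:

#include <cstdio>
#include <cstdlib>

// Option 1: return by value; the 100 is copied out before func's frame dies.
int func_by_value(void) {
    int num = 100;
    return num;
}

// Option 2: static storage; the object outlives the call (but is shared
// between all callers, so it is not re-entrant).
int* func_static(void) {
    static int num = 100;
    return &num;
}

// Option 3: caller-owned heap memory; valid until the caller frees it.
int* func_heap(void) {
    int* num = (int*)std::malloc(sizeof *num);
    if (num) *num = 100;
    return num;
}

int main(void) {
    std::printf("%d\n", func_by_value());
    std::printf("%d\n", *func_static());
    int* p = func_heap();
    if (p) {
        std::printf("%d\n", *p);
        std::free(p);
    }
    return 0;
}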
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5289015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/585919/']}
jdg_370791
stackexchange
llm_judgeable_groundtruth_similarity
2878368
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why are there only four registers in the most common CPU (x86)? Wouldn't there be a huge increase in speed if more registers were added? When will more registers be added? Now provide the response and nothing else.
The x86 has always had more than four registers. Originally, it has CS, DS, ES, SS, AX, BX, CX, DX, SI, DI, BP, SP, IP and Flags. Of those, seven (AX, BX, CX, DX, SI, DI, and BP) supported most general operations (addition, subtraction, etc.) BP and BX also supported use as "Base" register (i.e., to hold addresses for indirection). SI and DI can also be used as index registers, which are about the same as base registers, except that an instruction can generate an address from one base register and one index register, but NOT from two index registers or two base registers. At least in typical use, SP is devoted to acting as the stack pointer. Since then, the registers have gotten larger, more have been added, and some of them have become more versatile, so (for example) you can now use any 2 general-purpose registers in 2-register addressing modes. Somewhat strangely, two segment registers (FS and GS) were added in the 386, which also allowed 32-bit segments, which mostly rendered all the segment registers nearly irrelevant. They are sometimes used for thread-local storage. I should also add that when you do multi-tasking, multi-threading, etc., lots of registers can have a pretty serious penalty -- since you don't know which registers are in use, when you do a context switch you have to save all the registers in one task, and load all the saved registers for the next task. In a CPU like the Itanium or the SPARC with 200+ registers, this can be rather slow. Recent SPARCs devote a fair amount of chip area to optimizing this, but their task switches are still relatively slow. It's even worse on the Itanium -- one reason it's less than impressive on typical server tasks, even though it blazes on scientific computing with (very) few task switches. Finally, of course, all this is really quite different from how a reasonably modern implementation of x86 works. Starting with the Pentium Pro, Intel decoupled the architectural registers (i.e., the ones that can be addressed in an instruction) from the implementation. To support concurrent, out of order execution, the Pentium Pro had (if memory serves) a set of 40 internal registers, and used "register renaming" so two (or more) of those might correspond to a given architectural register at a given time. For example, if you manipulate a register, then store it, load a different value, and manipulate that, the processor can detect that the load breaks the dependency chain between those two sets of instructions, so it can execute both of those manipulations simultaneously. The Pentium Pro is now quite old, of course--and of course, AMD has also been around for a while (though their designs are reasonably similar in this respect). While the details change with new processors, having renaming capability that decouples architectural registers from physical registers is now more or less a fact of life.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2878368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/248981/']}
jdg_370792
stackexchange
llm_judgeable_groundtruth_similarity
24897644
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I change default open() ( io.open() in 2.7 ) text encoding in a cross-platform way? So that I didn't need to specify each time open(...,encoding='utf-8') . In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. Though documentation doesn't specify how to set preferred encoding. The function is in locale module, so I need to change locale? Is there any reliable cross-platform way to set UTF-8 locale? Will it affect anything else other than the default text file encoding? Or locale changes are dangerous (can break something), and I should stick to custom wrapper such as: def uopen(*args, **kwargs): return open(*args, encoding='UTF-8', **kwargs) Now provide the response and nothing else.
Don't change the locale or preferred encoding, because it may affect other parts of your code (or the libraries you're using), and it won't be clear that your code depends on open using a specific encoding. Instead, use a simple wrapper:

from functools import partial
open_utf8 = partial(open, encoding='UTF-8')

You can also specify defaults for all keyword arguments (should you need to).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24897644', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3075942/']}
jdg_370793
stackexchange
llm_judgeable_groundtruth_similarity
59100931
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to build a component that shows a list of blog posts. When a user clicks on a post, it renders the component that shows the details of the post. But when the user hits the back button in the browser, the previous component with posts list re-renders and it looses the previous state and scroll position. Is there a way that I Can save the previous state and the scroll position so that when a user hits the back button they are at the same position and the post list is not re-rendered and doesn't loose the scroll position too? Here's my blog list component code: import axios from "axios";import { Link } from "react-router-dom";class About extends React.Component { state = { posts: [] }; componentDidMount() { axios.get("https://jsonplaceholder.typicode.com/posts").then(res => { this.setState({ posts: res.data.slice(0, 10) }); }); } render() { const { posts } = this.state; const postsList = posts.length ? ( posts.map(post => { return ( <div className="card" key={post.id}> <div className="card-body"> <Link to={"/view/" + post.id}> <h5 className="card-title">{post.title}</h5> </Link> <p className="card-text">{post.body}</p> </div> </div> ); }) ) : ( <div className="text-danger text-center">No Posts yet...</div> ); return <div>{postsList}</div>; }}export default About; Here's is my blog details component: import React from "react";import { withRouter } from "react-router-dom";import axios from "axios";class PostDetail extends React.Component { state = { post: null }; componentDidMount() { let id = this.props.match.params.post_id; axios.get("https://jsonplaceholder.typicode.com/posts/" + id).then(res => { this.setState({ post: res.data }); }); } render() { const post = this.state.post ? ( <div className="card border-primary"> <div className="card-header">{this.state.post.title}</div> <div className="card-body text-primary"> <p className="card-text">{this.state.post.body}</p> </div> </div> ) : ( <div className="text-center text-danger">Loading Post...</div> ); return <div>{post}</div>; }}export default withRouter(PostDetail); Now provide the response and nothing else.
You have to store the scroll position in state when a post is clicked, using window.pageYOffset:

this.setState({ scrollPosition: window.pageYOffset });

Then, when the user comes back via the back button, restore the window position in componentDidMount:

window.scrollTo(0, this.state.scrollPosition);

You can default scrollPosition to 0.

Updated: here I have used sessionStorage to maintain the scroll position for demo purposes. You could also use the context API or a Redux store to manage it. Here is a working demo: https://stackblitz.com/edit/react-fystht

import React from "react";
import axios from "axios";
import { Link } from "react-router-dom";

class Posts extends React.Component {
  state = {
    posts: []
  };

  componentDidMount() {
    axios.get("https://jsonplaceholder.typicode.com/posts").then(res => {
      this.setState({ posts: res.data.slice(0, 20) }, () => {
        this.handleScrollPosition();
      });
    });
  }

  // handle scroll position after content load
  handleScrollPosition = () => {
    const scrollPosition = sessionStorage.getItem("scrollPosition");
    if (scrollPosition) {
      window.scrollTo(0, parseInt(scrollPosition));
      sessionStorage.removeItem("scrollPosition");
    }
  };

  // store position in sessionStorage
  handleClick = e => {
    sessionStorage.setItem("scrollPosition", window.pageYOffset);
  };

  render() {
    const { posts } = this.state;
    const postsList = posts.length ? (
      posts.map(post => {
        return (
          <div className="card" key={post.id}>
            <div className="card-body">
              <Link to={"/view/" + post.id} onClick={this.handleClick}>
                <h5 className="card-title">{post.title}</h5>
              </Link>
              <p className="card-text">{post.body}</p>
            </div>
          </div>
        );
      })
    ) : (
      <div className="text-danger text-center">No Posts yet...</div>
    );
    return <div>{postsList}</div>;
  }
}

export default Posts;

Hope this helps!
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59100931', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4352820/']}
jdg_370794
stackexchange
llm_judgeable_groundtruth_similarity
2341660
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a few places where I need to compare 2 (nullable) values, to see if they're the same. I think there should be something in the framework to support this, but can't find anything, so instead have the following: public static bool IsDifferentTo(this bool? x, bool? y){ return (x.HasValue != y.HasValue) ? true : x.HasValue && x.Value != y.Value;} Then, within code I have if (x.IsDifferentTo(y)) ... I then have similar methods for nullable ints, nullable doubles etc. Is there not an easier way to see if two nullable types are the same? Update: Turns out that the reason this method existed was because the code has been converted from VB.Net, where Nothing = Nothing returns false (compare to C# where null == null returns true). The VB.Net code should have used .Equals... instead. Now provide the response and nothing else.
C# supports "lifted" operators, so if the type ( bool? in this case) is known at compile you should just be able to use: return x != y; If you need generics, then EqualityComparer<T>.Default is your friend: return !EqualityComparer<T>.Default.Equals(x,y); Note, however, that both of these approaches use the " null == null " approach (contrast to ANSI SQL). If you need " null != null " then you'll have to test that separately: return x == null || x != y;
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2341660', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/209578/']}
jdg_370795
stackexchange
llm_judgeable_groundtruth_similarity
29148542
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two activities A and B. Activity A has a mapFragment and I am adding it to backstack. When I come back from to Actvity A from B, I want the fragment to show up in same state as I left it. But getFragmentManager().getBackStackEntryCount() is returning me 0. Here is my code: MapFragment mMapFragment = MapFragment.newInstance(); FragmentTransaction fragmentTransaction = getFragmentManager().beginTransaction(); mMapFragment.getMapAsync(this); fragmentTransaction.replace(R.id.container, mMapFragment); fragmentTransaction.addToBackStack(null); fragmentTransaction.commit(); getFragmentManager().executePendingTransactions(); When coming back from activity B, I have this to know number of getBackStackEntryCount(): System.out.println("Number of entries in backstack "+ getFragmentManager().getBackStackEntryCount()); which is showing me 0. Thanks in advance. Now provide the response and nothing else.
I had a similar problem; in my case getFragmentManager().getBackStackEntryCount() was always returning zero. My problem was that I was using support fragments:

Fragment fragment = new MyFragment();
// note getSupportFragmentManager() instead of getFragmentManager()
FragmentManager fragmentManager = getSupportFragmentManager();
fragmentManager.beginTransaction().replace(R.id.frame, fragment).addToBackStack(null).commit();
fragmentManager.executePendingTransactions();

but I was checking the backStackEntryCount of getFragmentManager(), which always returns zero (it's a different fragment manager):

@Override
public void onBackPressed() {
    if (getFragmentManager().getBackStackEntryCount() > 0) {
        getFragmentManager().popBackStack();
    }
}

instead of getSupportFragmentManager(), which returns the correct number:

@Override
public void onBackPressed() {
    if (getSupportFragmentManager().getBackStackEntryCount() > 0) {
        getSupportFragmentManager().popBackStack();
    }
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29148542', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3555350/']}
jdg_370796
stackexchange
llm_judgeable_groundtruth_similarity
40251082
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to detect when an element's getBoundingClientRect() rectangle has changed without actually calculating getBoundingClientRect() ? Something like a "dirty flag"? Naively, I assume that there must be such a mechanism somewhere in the internal workings of browsers, but I haven't been able to find this thing exposed in the DOM API. Maybe there is a way to do this with MutationObservers? My application is a web component that turns DOM elements into nodes of a graph, and draws the edges onto a full screen canvas. See here . Right now, I'm calling getBoundingClientRect() for every element, one time per animation frame frame, even when nothing is changing. It's feeling expensive. I'm usually getting %15-%50 CPU usage on a decently powerful computer at 60 fps. Does anyone know of such a thing? Do you think it's reasonable to expect something like this? Is this kind of thing feasible? Has it ever been proposed before? Now provide the response and nothing else.
As mentioned in the comments above, the APIs you're looking for are ResizeObserver and IntersectionObserver. However, there are a few things to note:

- ResizeObserver will only fire when the observed element changes size, and it will essentially only give you correct values for width and height.
- Both ResizeObserver and IntersectionObserver are supposed not to block paint.
- ResizeObserver will trigger after layout but before paint, which essentially makes it feel synchronous. IntersectionObserver fires asynchronously.

What if you need position change tracking?

This is what IntersectionObserver is made for; it can often be used for visibility detection. The problem here is that IntersectionObserver only fires when the intersection ratio changes. This means that if a small child moves around within a larger container div, and you track intersection between the parent and the child, you won't get any events except when the child is entering or exiting the parent.

You can still track when an element moves at all. This is how:

1. Start by measuring the position of the element you want to track using getBoundingClientRect.
2. Insert a div as an absolutely positioned direct child of body, positioned exactly where the tracked element is.
3. Start tracking the intersection between this div and the original element. The intersection should start at 1.
4. Whenever it changes to something else:
   - Remeasure the element using getBoundingClientRect.
   - Fire the position/size changed event.
   - Update the styles of the custom div to the new position of the element.
   - The observer should fire again with the intersection ratio back at 1; this value can be ignored.

NOTE: this technique can also be used as a more efficient polyfill for ResizeObserver, which is a newer feature than IntersectionObserver. The commonly available polyfills rely on MutationObserver, which is considerably less efficient.
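For reference, here is a minimal sketch of the basic observer wiring discussed above -- not the full position-tracking trick, just the two APIs it builds on -- assuming a plain browser page (the "#node" selector is a placeholder, not anything from the question):

// Hypothetical element to watch; replace the selector with your own.
const el = document.querySelector<HTMLElement>("#node");

if (el) {
  // Fires only when the element's own size changes (after layout, before paint).
  const ro = new ResizeObserver(entries => {
    for (const entry of entries) {
      console.log("new size:", entry.contentRect.width, entry.contentRect.height);
    }
  });
  ro.observe(el);

  // Fires asynchronously whenever the visibility ratio crosses one of the thresholds.
  const io = new IntersectionObserver(
    entries => {
      for (const entry of entries) {
        console.log("intersection ratio changed:", entry.intersectionRatio);
      }
    },
    { threshold: [0, 0.5, 1] }
  );
  io.observe(el);
}

The position-tracking technique layers on top of this: the probe div and the remeasure-on-ratio-change step are what turn these callbacks into "the bounding rect changed" notifications.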
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40251082', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1189799/']}
jdg_370797
stackexchange
llm_judgeable_groundtruth_similarity
17474851
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building a web app that currently uses traditional .asmx web services and I'm looking to upgrade these to WebAPI. I've looked around on the web but I'm looking for the easiest/fastest way to do this upgrade. The web services currently look somewhat like this: using System.Web.Services;[WebService(Namespace = "http://tempuri.org/")][WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)][System.Web.Script.Services.ScriptService]public class SomeWebServiceName : System.Web.Services.WebService{ SomeObject TheObject = new SomeObject; [WebMethod(EnableSession = true)] public string GetSomeData(string Param1, string Param2) { return TheObject.HandleRequest(Param1, Param2); } [WebMethod(EnableSession = true)] public string GetSomeMoreData(string ParamA) { return TheObject.HandleAnotherRequest(ParamA); }} At their simplest level, they instantiate an object and then the web methods of the web service call some method on that object to handle the requests. On the client, I use jquery with .ajax() like this: $.ajax({ url: "../WebServices/SomeWebServiceName.asmx/GetSomeData", data: AjaxData, ....}); I want to remove any reference to .asmx and upgrade the entire application to WebAPI. What's the simplest way to do that with the code above? Now provide the response and nothing else.
As @Venkat said, it's not easy to answer directly -- I mean, without a considerable amount of manual coding -- but making some assumptions, I can recommend implementing a controller like:

public class SomeWebServiceNameController : ApiController
{
    SomeObject TheObject = new SomeObject();

    public string GetSomeData(string Param1, string Param2)
    {
        return TheObject.HandleRequest(Param1, Param2);
    }

    public string GetSomeMoreData(string ParamA)
    {
        return TheObject.HandleAnotherRequest(ParamA);
    }

    [HttpPost]
    public string PostSomeMoreData([FromBody]string ParamA)
    {
        return TheObject.HandleAnotherRequest(ParamA);
    }
}

You should also register routes (usually in "WebApiConfig.cs"):

public static void Register(HttpConfiguration config)
{
    config.Routes.MapHttpRoute(
        name: "NumberedParametersAPI",
        routeTemplate: "WebServices/{controller}/{action}/{Param1}/{Param2}"
    );

    config.Routes.MapHttpRoute(
        name: "CharacterizedParametersAPI",
        routeTemplate: "WebServices/{controller}/{action}/{ParamA}",
        defaults: new { ParamA = RouteParameter.Optional }
    );
}

I included the last method, "PostSomeMoreData", to be consistent with the client call you specified in your question (the jQuery ajax call). But keep in mind that primitive parameters in Web API POST methods are a little bit confusing. Please read these links:

http://www.intstrings.com/ramivemula/articles/testing-asp-net-web-apiget-post-put-delete-using-fiddler/
http://yassershaikh.com/how-to-call-web-api-method-using-jquery-ajax-in-asp-net-mvc-4/
http://encosia.com/using-jquery-to-post-frombody-parameters-to-web-api/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17474851', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/565968/']}
jdg_370798
stackexchange
llm_judgeable_groundtruth_similarity
1038540
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why does the following code throw ArrayStoreException ? double[] a = {2.0,3.4,3.6,2.7,5.6};int[] b = {2,3,4,5};System.arraycopy(b,0,a,1,4); Now provide the response and nothing else.
From the docs for System.arraycopy : Otherwise, if any of the following is true, an ArrayStoreException is thrown and the destination is not modified: [...] The src argument and dest argument refer to arrays whose component types are different primitive types. That's exactly the case here - int and double are different primitive types, so the exception is thrown as documented. The point of arraycopy is that it can work blindingly fast by copying the raw data blindly, without having to apply any conversions. In your case it would have to apply conversions, so it fails.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1038540', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/124339/']}
jdg_370799
stackexchange
llm_judgeable_groundtruth_similarity
3302450
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I Write Programs for PLC Controlling in C#?How? Now provide the response and nothing else.
Not likely. C# requires an implementation of the Common Language Runtime (CLR) to run applications developed in that language or any .NET language. I would say it is very unlikely that any PLC will ever be manufactured which supports this for various reasons. Now what you can do is have your C# application read and write data from the PLC using the OPC protocol. There are a few different libraries for using OPC that are available for the .NET Framework. Here is a pretty good website for laying out the options.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3302450', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242079/']}
jdg_370800
stackexchange
llm_judgeable_groundtruth_similarity
3254924
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove that $(x + y + z)^3 + 9xyz \ge 4(x + y + z)(xy + yz + zx)$ where $x, y, z \ge 0$ . This has become the norm now... This problem is adapted from a recent competition. We have that $6(x^2y + xy^2 + y^2z + yz^2 + z^2x + zx^2) \ge 4(x + y + z)(xy + yz + zx)$ Furthermore, $$(x + y + z)^3 - 6(x^2y + xy^2 + y^2z + yz^2 + z^2x + zx^2) + 9xyz$$ $$ = x^3 + y^3 + z^3 - 3(x^2y + xy^2 + y^2z + yz^2 + z^2x + zx^2) + 15xyz$$ In addition, $x^3 + y^3 + z^3 + 3xyz \ge x^2y + xy^2 + y^2z + yz^2 + z^2x + zx^2$ . So now we need to prove that $6xyz \ge x^2y + xy^2 + y^2z + yz^2 + z^2x + zx^2$ , which I don't even know if it is true or not. Now provide the response and nothing else.
We need to prove that $$\sum_{cyc}(x^3+3x^2y+3x^2z+2xyz)+\sum_{cyc}3xyz\geq 4\sum_{cyc}(x^2y+x^2z+xyz)$$ or $$\sum_{cyc}(x^3-x^2y-x^2z+xyz)\geq0,$$ which is Schur's inequality. Your approach fails because you used an estimate that is too strong, which led to a false inequality. It is like the following: suppose we need to prove that $2>1$. We know that $1<3$, but that cannot help, because it would require $2>3$, which is simply wrong. We can prove Schur's inequality as follows. Let $x\geq y\geq z.$ Then $z(z-x)(z-y)\geq0$ and $$\sum_{cyc}(x^3-x^2y-x^2z+xyz)=\sum_{cyc}x(x-y)(x-z)\geq$$ $$\geq x(x-y)(x-z)+y(y-x)(y-z)=(x-y)^2(x+y-z)\geq0.$$
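For completeness, the first step uses nothing beyond the standard expansions (a quick check): $$(x+y+z)^3=\sum_{cyc}x^3+3\sum_{cyc}(x^2y+x^2z)+6xyz,$$ $$(x+y+z)(xy+yz+zx)=\sum_{cyc}(x^2y+x^2z)+3xyz.$$ Substituting these into the original inequality and collecting terms gives exactly $\sum_{cyc}(x^3-x^2y-x^2z+xyz)\geq0$ above.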
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3254924', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/654749/']}
jdg_370801
stackexchange
llm_judgeable_groundtruth_similarity
2239845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently exploring the option of porting some older VB6 application to WPF with C#. The plan, in phase one, is to port several key forms and not all the application. The theoretical goal is to open the VB6 form in a container of some sort within WPF via an ActiveX dll. Is this even possible?I've tried looking at the Interop and can't seem to find a solid example of how get it to work with anything but Win32 controls, not a full form. I have full access to the old VB6 code and can modify it in anyway needed. The following screenshot of the main WPF app would serve as the wrapper/container: http://www.evocommand.com/junk_delete_me/main_menu_mockup.png The current VB6 maintenance screen that would be loaded in the “white space” section on the right side of the previous screen. Now provide the response and nothing else.
I was able to accomplish the task with the following steps:

1. Created a new VB6 ActiveX Control project.

2. Copied and pasted the entire contents of the VB6 form (controls and code-behind) into the new control. There are several elements that have to be handled in switching to a control:
- You lose the ability to display the caption of the form in the previous manner. You can work around it with alternate controls (label/borderless textbox, etc.) that accomplish the same functionality if needed. This wasn't a priority since each screen was being hosted in a browser-like tab system in our new .Net project.
- All mousepointer references have to be changed from Me.Mousepointer to Screen.Mousepointer.
- You cannot use Me.Hide and have to use alternate events to hide the .Net container.
- Any and all references to Me.[anything] have to be removed or replaced with UserControl.[anything] where applicable.
- If you use any functions that reference a [yourcontrol].Container.Property on a form, they will need to be altered to loop through the UserControl.Controls collection instead; "Container" is invalid for VB6 ActiveX controls.
- All non-modal forms/dialog boxes must be removed from the project, as there is now no Hwnd to handle in WPF. You get the error 'Non-modal forms cannot be displayed in this host application from an ActiveX DLL, ActiveX Control, or Property page'. In our case we had a simple splash screen that would display when certain long processes/reports ran, to let the user know what was running.

3. I was unable to directly add the VB6 control via the interop to a WPF project. As such, a new .Net "Windows Forms Control Library" project was created, and a reference to the VB6 OCX was added to that project.

4. The VB6 controls were then added to the .Net toolbox by "right click" -> "Add Item" and pointing a COM reference to the VB6 control OCX. The .Net control was then used to host/serve the VB6 control.

5. To display/host a form in VB6 and get it to fire the necessary initialization functionality, the VB6 OCX controls were defaulted to Visible = False, so they were initially added to the .Net OCX as invisible controls. When needed, the VB6 control is set to Visible = True, which fires the UserControl_Show() event. All code formerly in Form_Load() was moved to this event. The Show event was the easiest way of accessing the Form_Load logic as needed. MSDN: "The control does not receive Show events if the form is hidden and then shown again, or if the form is minimized and then restored. The control's window remains on the form during these operations, and its Visible property doesn't change."

6. Wrapping the VB6 controls within a .Net WinForms control resolved the issue with Radio/Option buttons being rendered as black (as outlined elsewhere in one of my responses to this question) without having to convert the frames to Picture boxes as suggested.

7. In the WPF app, as a menu choice is selected, XAML is dynamically created and displayed via a wrapper with a WindowsFormsHost tag. A dynamically created control object from the .Net WinForms project is then pushed into the WindowsFormsHost tag in the XAML, and the control is made visible in the .Net project, which fires the VB6 UserControl_Show and then the load and display of the VB6 form.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2239845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/270547/']}
jdg_370802
stackexchange
llm_judgeable_groundtruth_similarity
31740340
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Today, I pull the code from my client, and I get an error in this line. throw new Exception($"One or more errors occurred during removal of the company:{Environment.NewLine}{Environment.NewLine}{exc.Message}"); This line also moreCompanies = $"{moreCompanies},{databaseName}"; The $ symbols is so weird with me. This is C# code. Now provide the response and nothing else.
The $ part tells the compiler that you want an interpolated string. Interpolated strings are one of the new features of C# 6.0. They allow you to substitute placeholders in a string literal with their corresponding values. You can put almost any expression between a pair of braces ({}) inside an interpolated string, and that expression will be substituted with the ToString representation of its result.

When the compiler encounters an interpolated string, it converts it into a call to the String.Format function. It is because of this that your first listing is essentially the same as writing:

throw new Exception(string.Format(
    "One or more errors occurred during removal of the company:{0}{1}{2}",
    Environment.NewLine,
    Environment.NewLine,
    exc.Message));

As you can see, interpolated strings allow you to express the same thing in a much more succinct manner and in a way that is easier to get correct.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31740340', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3682707/']}
jdg_370803
stackexchange
llm_judgeable_groundtruth_similarity
11973225
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing an app that keeps track of school classes. I need to store the schedule. For example: Monday-Friday from 8:am-11am . I was thinking about using a simple string column but I'm going to need to make time calculations later. For example, I need to store a representation of 8am, such as start_at:8am end_at:11am So how should I store the time ? What datatype should I use? Should I store start time and number of seconds or minutes and then calculate from there? or is there an easier way? I use MySQL for production and SQLite for development. Now provide the response and nothing else.
I made an app recently that had to tackle this problem. I decided to store open_at and close_at as seconds from midnight in a simple business-hour model. ActiveSupport includes this handy helper for finding the time in seconds since midnight:

Time.now.seconds_since_midnight

This way I can do a simple query to find out if a venue is currently open (open_at must be before now and close_at after it):

BusinessHour.where("open_at <= ? and close_at >= ?", Time.now.seconds_since_midnight, Time.now.seconds_since_midnight)

Any tips for making this better would be appreciated =)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11973225', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/912895/']}
jdg_370804
stackexchange
llm_judgeable_groundtruth_similarity
14142357
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the following JavaScript code sample: var myButton = $('#myButton');myButton.click(function (event) { /* stuff... */ event.preventDefault();}); What are the advantages and disadvantages of default-preventing the action at the beginning or the end of the function? - supposing the case of unconditionally wanting to prevent it in the end. Is there any technical reason to choose one way? Surfing the internet I've found only one reference -dead blog, sorry for the Google Cache link-, and points that preventing the default action at the beginning will avoid the action happening in case the js function crashes. NOTE : I've used jQuery in my example just for familiarity, the question is not about jQuery, the answer for the classical event handling mode will be the same. Now provide the response and nothing else.
I put my preventing code at the beginning for the reason you stated: if there is an error earlier in the function, the default action will already have been prevented. This can be desired behavior in production, but it can also help in debugging your development code. Consider:

$('a').on('click', function(e) {
    console.log('some debugging information here');
    // Other stuff
    e.preventDefault();
});

If an error were to happen, the page could refresh or follow the href of the anchor before you could read the debugging information. Moving the call to the top makes sure you can read the output in the console.

Edit: As Axel points out in the comments, another advantage is that you immediately grok that the code is replacing the default action and not supplementing it.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14142357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1670956/']}
jdg_370805
stackexchange
llm_judgeable_groundtruth_similarity
34796
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose we need/prefer 128 bit hash output, for example to generate a 128 bit encryption key or, in other applications (e.g. file integrity verification), to consume less storage space. I don't know of any new/standard/unbroken 128 bit hash function, so it seems we have to use SHA256. Is truncating SHA256 output to 128 bits acceptable? Does such a truncated hash have a security equal to a 128 bit hash? I mean a 128 bit hash that has no known vulnerabilities; definitely not MD5! I have an idea of how to accomplish this: MD5(Truncate128(SHA256_hash)) I don't know if this will have any benefit/detriment to security. Now provide the response and nothing else.
Though SHA-256 nominally offers a 256-bit output, no weakness about it is known when the output is truncated to 128 bits, except, of course, weaknesses inherent to the shorter output length; e.g. collision resistance drops from the infeasible 2 128 to the possible (but hard) 2 64 . This is not a generic property of hash functions(*), but it is somewhat "obvious" from how SHA-256 is defined. In particular, when NIST defined SHA-224, a hash function with a 224-bit output, they merely took SHA-256, with a distinct initialization vector, and a truncated output. (*) It can be shown that a given secure hash function, when truncated, cannot be awfully bad, but it can still be slightly worse than expected. For SHA-256, truncation seems safe.
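If it helps, here is a quick sketch of the truncation itself using Node.js's built-in crypto module (the input string is arbitrary; this only illustrates keeping the first 128 bits of the digest, not any particular protocol):

import { createHash } from "crypto";

// Full SHA-256 digest as 64 hex characters (256 bits)...
const full = createHash("sha256").update("some input").digest("hex");

// ...truncated to the first 32 hex characters (128 bits).
const truncated128 = full.slice(0, 32);

console.log(truncated128);

Conceptually this mirrors what truncated-output hashes do: keep a prefix of the full digest and accept the reduced (but still 128-bit) security level.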
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/34796', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/8729/']}
jdg_370806
stackexchange
llm_judgeable_groundtruth_similarity
6074009
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way on GIT to just do a "git push" and it automatically send to "origin master" without specify that?Just curious... Now provide the response and nothing else.
Your master branch should be automatically set up so this works. If you are on some other branch, then you can use the git branch command with the --set-upstream option:

git branch --set-upstream someBranch origin/master

It might also be the case that you don't have a remote set, e.g. when you have a bare, clean repository waiting for you to push to it for the first time, such as when you are setting up a repo on GitHub. Assuming you have set up your remote, you can push to the server with the -u option, which will take care of the --set-upstream for you:

git push -u origin master

which is the same as:

git push origin master
git branch --set-upstream master origin/master
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6074009', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/151174/']}
jdg_370807
stackexchange
llm_judgeable_groundtruth_similarity
866284
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is my code: $('#right').load('textes.html #nicolas');$('#right').load('textes.html #antoine'); The problem is that the content of the div antoine overwrites the content loaded by the div nicolas in the right div div #right : load div nicolas from file textes.html = okdiv #right : load div antoine from file textes.html = overwrite content = No! I'd like to append antoine to nicolas. This is to add nicolas and then add antoine so #right will be nicolas + antoine I have tried to get the content into a var but it didn't work. Any idea ? On top of that... I would like to add a rule <hr> BETWEEN each load Maybe something like this, but this doesn't work. $('#right').load('textes.shtml #nicolas').append('<hr>').load('textes.shtml #antoine'); return false; Now provide the response and nothing else.
Maybe I'm missing something, but it seems like you all have been missing the fact that this is an ajax call, and the functions are being called procedurally rather than as callbacks based on a successful ajax response. Besides, if you are doing anything more complex than loading some (X)HTML into an element, you should probably use one of the more powerful jQuery ajax methods (i.e., get(), post() or ajax()). Assuming you'll get (X)HTML in the response:

// Only ONE ajax call and very simple parsing...
$.get('textes.html', {}, function(data) {
    var $response = $('<div />').html(data);
    var $nicolas = $response.find('#nicolas');
    var $antoine = $response.find('#antoine');
    $('#right').append($nicolas).append($antoine);
}, 'html');

It's really as simple as that.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/866284', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/71830/']}
jdg_370808
stackexchange
llm_judgeable_groundtruth_similarity
33500998
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have to write a large string 10MB to file, and I am using this line to achieve that: fwrite($file, $content); the problem is: not the whole string is written to the file, and limited to a specific limit. and fwrite always return 7933594 . Now provide the response and nothing else.
Yes, a single fwrite call can be limited in how much it actually writes, so for a large string you may split the content into smaller pieces like the following:

$file = fopen("file.json", "w");
$pieces = str_split($content, 1024 * 4);
foreach ($pieces as $piece) {
    fwrite($file, $piece, strlen($piece));
}
fclose($file);
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33500998', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_370809
stackexchange
llm_judgeable_groundtruth_similarity
28563632
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am building 4 different flavors of my Android app. I have a class Customization.java that is the same for 3 of them and different for 1. Since I cannot put the same class both in the main folder and in the flavor folder, I now have to maintain 3 copies of the exact same class for those 3 flavors. Is there any way that I could do with keeping just two versions of this class? Things I have considered so far: I looked at flavor dimensions, but turns out they are not applicable in this case. Keeping just one file in one of the flavors and copying it through my build script. I am wondering if there is something cleaner out of the box. Now provide the response and nothing else.
I would like to convert CommonsWare's comment to an answer. I'll then explain how the final directory setup should look. I hope this helps out the people stumbling upon this question through search.

Well, you can override resources in flavors. So, have the common one in main/res/layout/ and the flavor-specific one in yourFlavorHere/res/layout/. So, if the Customization activity's layout file is called activity_customization.xml, you'll leave its common copy shared among the three flavors under the src/main/res/layout directory and place the modified layout xml to be used by, say, flavorFour under its corresponding source set directory src/flavorFour/res/layout. The way this works is that since flavors one to three (unlike flavor four) haven't provided their own versions of activity_customization.xml, they'll inherit the one coming from the main source set.

It's the activity Java class that gets tricky. Another possibility for that is to configure the flavors with the same activity implementation to pull from two source directories: a flavor-specific one and a common one with the common class implementation.

Unlike resources, Java code files are not merged or overridden. So, you can't have Java files with the same fully qualified class name under main as well as in any of your flavor source sets. If you do, you'll receive a duplicate class error. To resolve this issue, the simplest solution is to move the Customization activity out of main and into each flavor source set. This works because the flavor directories are mutually exclusive (with each other, not with main), hence avoiding the conflict. But this means three out of the four flavors have a duplicate copy of the activity - a maintenance nightmare - just because one of the flavors required some changes to it.

To resolve this issue we can introduce another source directory that keeps just the common code files shared between the three flavors. So, the build.gradle script would look something like

android {
    ...
    productFlavors {
        flavorOne { ... }
        flavorTwo { ... }
        flavorThree { ... }
        flavorFour { ... }
    }
    sourceSets {
        flavorOne.java.srcDir 'src/common/java'
        flavorTwo.java.srcDir 'src/common/java'
        flavorThree.java.srcDir 'src/common/java'
    }
}

Notice the use of java.srcDir (and not srcDirs), which adds another Java source directory to the already existing default src/flavorX/java. Now all we need to do is to drop the common Customization activity file in src/common/java to make it available to flavors one to three. The modified version required by flavorFour would go under its own source set at src/flavorFour/java. So, the final project structure would look something like

+ App                                        // module
|- src
   |- common                                 // shared srcDir
   |  |- java
   |     |- path/to/pkg
   |        |- CustomizationActivity.java    // inherited by flavors 1, 2, 3
   + flavorOne
   + flavorTwo
   + flavorThree
   + flavorFour
   |  |- java
   |  |  |- path/to/pkg
   |  |     |- CustomizationActivity.java    // per-flavor activity class
   |  |- res
   |     |- layout
   |        |- activity_customization.xml    // overrides src/main/res/layout
   |- main
      + java
      |- res
         |- layout
            |- activity_customization.xml    // inherited by flavors 1, 2, 3
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28563632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2998121/']}
jdg_370810
stackexchange
llm_judgeable_groundtruth_similarity
27324949
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I first duplicated my project before I renamed it. After the rename i'm getting Lexical or Preprocessor Issue <XCTest/XCTest.h> not found. The error directs me to the myapptests.m file which calls to #import <XCTest/XCTest.h> Also the next line @interface SpyShotTests : XCTestCase The "XCTestCase" is not hi-lighted in purple. Any tutorial i've seen has directed me to the Framework Search Paths in Build Settings - but the framework search path is exactly the same as the initial file and that file builds with no issues. Now provide the response and nothing else.
I came across something similar and I'm wondering if my solution might be helpful. You may want to check that the file which is throwing the error is included in the appropriate Target Source's "Compile Sources". It's possible your file might not have been included. In my case it had been included with the project target and not with the test target. Since XCTest is only accessible with the Test Target, it wouldn't compile for me generating the error "XCTest/XCTest.h file not found". Project Targets >> Build Phases >> Compile Sources: You can fix this by either manually adding it to the "Compile Sources" as per the above image or you can remove the file from the project and re-add it, making sure to select the Test Target when you re-add: In case it helps anyone else I came across this after moving a test case file (in the file system) and then re-adding it to my test project (right click, 'Add Files to "Project"...'). It turns out I wasn't adding the file to the Test target - I was just adding it to the Project Target.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27324949', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3294722/']}
jdg_370811
stackexchange
llm_judgeable_groundtruth_similarity
246127
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why is volatile needed in C? What is it used for? What will it do? Now provide the response and nothing else.
volatile tells the compiler not to optimize anything that has to do with the volatile variable. There are at least three common reasons to use it, all involving situations where the value of the variable can change without action from the visible code:

- when you interface with hardware that changes the value itself;
- when there's another thread running that also uses the variable;
- when there's a signal handler that might change the value of the variable.

Let's say you have a little piece of hardware that is mapped into RAM somewhere and that has two addresses: a command port and a data port:

typedef struct
{
  int command;
  int data;
  int isBusy;
} MyHardwareGadget;

Now you want to send some command:

void SendCommand (MyHardwareGadget * gadget, int command, int data)
{
  // wait while the gadget is busy:
  while (gadget->isBusy)
  {
    // do nothing here.
  }
  // set data first:
  gadget->data = data;
  // writing the command starts the action:
  gadget->command = command;
}

Looks easy, but it can fail because the compiler is free to change the order in which data and commands are written. This would cause our little gadget to issue commands with the previous data value. Also take a look at the wait-while-busy loop. That one will be optimized out: the compiler will try to be clever, read the value of isBusy just once, and then go into an infinite loop. That's not what you want.

The way to get around this is to declare the pointer gadget as volatile. This way the compiler is forced to do what you wrote. It can't remove the memory assignments, it can't cache variables in registers, and it can't change the order of assignments either. This is the correct version:

void SendCommand (volatile MyHardwareGadget * gadget, int command, int data)
{
  // wait while the gadget is busy:
  while (gadget->isBusy)
  {
    // do nothing here.
  }
  // set data first:
  gadget->data = data;
  // writing the command starts the action:
  gadget->command = command;
}
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/246127', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_370812
stackexchange
llm_judgeable_groundtruth_similarity
3849569
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I use the following for(int i = 255; i > 0; i--){ Color transparentBlack = new Color(0, 0, 0, i);} I have the effect of the object using this color to draw with going from black to a light grey and then invisible when the alpha value goes to zero. However if I start with a white value: new Color(255, 255, 255, i); The objects never becomes invisible and only stays white. I've also noticed that if I use a value that is a bit lighter than black (say 50, 50, 50) the drawing goes from Dark, to invisible, to White. I assume that I just don't understand how alpha blending mixing works but is there a way of making a white color fade to translucency? Edit: The background I am drawing on is Color.CornflowerBlue (100,149,237,255) Edit: Sample XNA program reproducing the explanation. To use; create a new XNA Game Studio 4.0 project - Windows Game (4.0), call it AlphaBlendTest - & In the content project add a new SpriteFont and call it testfont using System;using System.Collections.Generic;using System.Linq;using Microsoft.Xna.Framework;using Microsoft.Xna.Framework.Audio;using Microsoft.Xna.Framework.Content;using Microsoft.Xna.Framework.GamerServices;using Microsoft.Xna.Framework.Graphics;using Microsoft.Xna.Framework.Input;using Microsoft.Xna.Framework.Media;namespace AlphaBlendTest{ /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; SpriteFont font; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; } /// <summary> /// Allows the game to perform any initialization it needs to before starting to run. /// This is where it can query for any required services and load any non-graphic /// related content. Calling base.Initialize will enumerate through any components /// and initialize them as well. /// </summary> protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); } /// <summary> /// LoadContent will be called once per game and is the place to load /// all of your content. /// </summary> protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); font = Content.Load<SpriteFont>("testfont"); } /// <summary> /// UnloadContent will be called once per game and is the place to unload /// all content. /// </summary> protected override void UnloadContent() { // TODO: Unload any non ContentManager content here } /// <summary> /// Allows the game to run logic such as updating the world, /// checking for collisions, gathering input, and playing audio. /// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Update(GameTime gameTime) { // Allows the game to exit if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed) this.Exit(); // TODO: Add your update logic here base.Update(gameTime); } /// <summary> /// This is called when the game should draw itself. 
/// </summary> /// <param name="gameTime">Provides a snapshot of timing values.</param> protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); //spriteBatch.Draw(tR, new Rectangle(100, 100, 100, 100), Color.Red); Vector2 v2 = new Vector2(0, 0); spriteBatch.DrawString(font, "Test - White", v2, Color.White); v2.Y = v2.Y + 50; spriteBatch.DrawString(font, "Test - Black", v2, Color.Black); v2.Y = v2.Y + 50; Color BlackTransparent = new Color(0, 0, 0, 0); spriteBatch.DrawString(font, "Test - Black Transparent", v2, BlackTransparent); v2.Y = v2.Y + 50; Color WhiteTransparent = new Color(255, 255, 255, 0); spriteBatch.DrawString(font, "Test - White Transparent", v2, WhiteTransparent); spriteBatch.End(); base.Draw(gameTime); }}} Edit: This is the image this code draws: Edit: One of the comments is concerned that this is a Text only related 'feature' of windows. I used text as an example to keep the demo program small; testing with an image gives the same result. To add the square box in to the demo program; create a square white PNG image and add it to the content directory (use the default values for the content pipeline). Then add this to the class: Texture2D tWhiteBox; Add this in to the load method: tWhiteBox = Content.Load<Texture2D>("whitebox"); Then in the draw add the following below the other draw statements: v2.Y = v2.Y + 50;spriteBatch.Draw(tWhiteBox, v2, WhiteTransparent); Now provide the response and nothing else.
Grey is what you'll get in XNA 4.0 (or full white if you try to just adjust the alpha channel without adjusting the RGB channels as well). Transparency is done (by default) using pre-multiplied alpha. If you're looking for the old non-pre-mul XNA 3.1 behavior, see this post from Shawn Hargreaves: http://blogs.msdn.com/b/shawnhar/archive/2010/04/08/premultiplied-alpha-in-xna-game-studio-4-0.aspx . At the bottom of the post it tells you how to do it. I'd recommend reading all of his posts on pre-multiplied alpha before doing so though (you can use the "Blog Index" link on the left side near the top of his page) - pre-mul really is a lot better and a lot more logically consistent.
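For background, these are the standard compositing equations at play (generic blending math, not anything XNA-specific beyond what the linked post covers). With straight (non-premultiplied) alpha the blend is
$$c_{out} = c_{src}\,\alpha_{src} + c_{dst}\,(1-\alpha_{src}),$$
while with pre-multiplied alpha the source color has already been multiplied by its alpha, so the blend becomes
$$c_{out} = c'_{src} + c_{dst}\,(1-\alpha_{src}).$$
That is why only lowering the alpha channel of a pre-multiplied white leaves the full white contribution in place: to fade it out you have to scale the RGB channels by the alpha as well.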
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3849569', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/69214/']}
jdg_370813
stackexchange
llm_judgeable_groundtruth_similarity
8653025
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm looking for a solution to the popular issue of stopping a fixed object at the footer of the page. I basically have a fixed "share" box in the bottom left corner of the screen and I don't want it to scroll over the footer, so I need it to stop about 10px above the footer. I've looked at other questions here as well as others. The closest/most simple demo I could find is http://jsfiddle.net/bryanjamesross/VtPcm/ but I couldn't get it to work with my situation. Here's the html for the share box: <div id="social-float"> <div class="sf-twitter"> ... </div> <div class="sf-facebook"> ... </div> <div class="sf-plusone"> ... </div> </div> ...and the CSS: #social-float{position: fixed;bottom: 10px;left: 10px;width: 55px;padding: 10px 5px;text-align: center;background-color: #fff;border: 5px solid #ccd0d5;-webkit-border-radius: 2px;-moz-border-radius: 2px;border-radius: 2px;display: none;} The footer is #footer and it doesn't have a fixed height, if that makes any difference. If someone could assist me in creating a simple jQuery solution for this, I'd much appreciate it! Now provide the response and nothing else.
Live demo first. Check its offset every time you scroll the page:

$(document).scroll(function() {
    checkOffset();
});

and make its position absolute once it gets within 10px of the footer:

function checkOffset() {
    if ($('#social-float').offset().top + $('#social-float').height() >= $('#footer').offset().top - 10) {
        $('#social-float').css('position', 'absolute');
    }
    if ($(document).scrollTop() + window.innerHeight < $('#footer').offset().top) {
        $('#social-float').css('position', 'fixed'); // restore when you scroll up
    }
}

Notice that #social-float's parent should be a sibling of the footer:

<div class="social-float-parent">
    <div id="social-float">
        something...
    </div>
</div>
<div id="footer"></div>

Good luck :)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8653025', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/984008/']}
jdg_370814
stackexchange
llm_judgeable_groundtruth_similarity
1216791
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a windows forms applicationon which I need to use a for loop having a large number of Remote Calls around 2000 - 3000 calls, and while executing the for loop, I loose my control on form and form controls, as it becomes a large process and some time it shows "Not Responding" but if I wait for a long it comes back again, I think I need to use some threading model for that, is there any idea, how can I proceed to solve the issue? Now provide the response and nothing else.
You need to perform the long-running operation on a background thread. There are several ways of doing this.

You can queue the method call for execution on a thread pool thread (see here):

ThreadPool.QueueUserWorkItem(new WaitCallback(YourMethod));

In .NET 4.0 you can use the TaskFactory:

Task.Factory.StartNew(() => YourMethod());

And in .NET 4.5 and later, you can (and should, rather than TaskFactory.StartNew()) use Task.Run():

Task.Run(() => YourMethod());

You could use a BackgroundWorker for more control over the method if you need things like progress updates or notification when it is finished. Drag a BackgroundWorker control onto your form and attach your method to the DoWork event. Then just start the worker when you want to run your method. You can of course create the BackgroundWorker manually from code; just remember that it needs disposing of when you are finished.

You can also create a totally new thread for your work to happen on. This is the most complex option and isn't necessary unless you need really fine-grained control over the thread. See the MSDN page on the Thread class if you want to learn about this.

Remember that with anything threaded, you cannot update the GUI, or change any GUI controls, from a background thread. If you want to do anything on the GUI you have to use Invoke (and InvokeRequired) to trigger the method back on the GUI thread. See here.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1216791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/144408/']}
jdg_370815
stackexchange
llm_judgeable_groundtruth_similarity
31049830
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to put the limit on date so that user can not pick date more then that, for example if today is 1 January then User should not be able to select more then 7 dates , I mean he can not select 9 January. I also want him not to select the month and year. So I am putting a limit to set his task in one week. what I have done so far is showing the date picker fragment and setting current date in it. the code in my main activity goes like this: etSelectDate.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { DialogFragment datePickerFragment = new DatePickerFragment() { @Override public void onDateSet(DatePicker view, int year, int month, int day) { Log.d("Date Change", "onDateSet"); Calendar c = Calendar.getInstance(); c.set(year, month, day); DateFormat df = DateFormat.getDateInstance(); etSelectDate.setText(df.format(c.getTime())); //nextField.requestFocus(); //moves the focus to something else after dialog is closed } }; datePickerFragment.show(MainActivity.this.getSupportFragmentManager(), "datePicker"); } }); and date picker fragment class goes like this : public static class DatePickerFragment extends DialogFragment implements DatePickerDialog.OnDateSetListener{ @Override public Dialog onCreateDialog(Bundle savedInstanceState) { // Use the current date as the default date in the picker final Calendar c = Calendar.getInstance(); int year = c.get(Calendar.YEAR); int month = c.get(Calendar.MONTH); int day = c.get(Calendar.DAY_OF_MONTH); // Create a new instance of DatePickerDialog and return it return new DatePickerDialog(getActivity(), this, year, month, day); } @Override public void onDateSet(DatePicker view, int year, int month, int day) { //blah } } till then its is working fine , but I do not know how to put the limit on date and rest of the months and year should be non Select able . I have seen many link such as like this , but I do not understand How can I do that and also there is nothing helpful on android site. So please help me , How can I put limit of seven days only Update Through your replies I know how to set the max date in calender , so As I want to set the max date 7 days ahead of current date , I am still not getting it. the method I read so far is : pickerDialog.getDatePicker().setMaxDate(new Date().getTime()); It is setting the current date as maximum, but How can I add 7 days ahead in it since it is Date object ? please help Now provide the response and nothing else.
You have the setMinDate(long) and setMaxDate(long) methods at your disposal. Both of these work on API level 11 and above. Since you are using a DatePickerDialog, you first need to get the underlying DatePicker by calling the getDatePicker() method:

dpdialog.getDatePicker().setMinDate(minDate);
dpdialog.getDatePicker().setMaxDate(maxDate);

Source: Set Limit on the DatePickerDialog in Android?

You can calculate the minDate like this:

Date today = new Date();
Calendar c = Calendar.getInstance();
c.setTime(today);
c.add(Calendar.MONTH, -6); // Subtract 6 months
long minDate = c.getTime().getTime(); // Twice!

Updated: Replace the line below

return new DatePickerDialog(getActivity(), this, year, month, day);

with

// Create a new instance of DatePickerDialog and return it
DatePickerDialog pickerDialog = new DatePickerDialog(getActivity(), this, year, month, day);
pickerDialog.getDatePicker().setMaxDate(maxDate);
pickerDialog.getDatePicker().setMinDate(minDate);
return pickerDialog;
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31049830', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4818513/']}
jdg_370816
stackexchange
llm_judgeable_groundtruth_similarity
25061687
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two tables in MySQL, where the first one is called users and the second one is called games. The table structure is as follows. users id (primary) email password real_name games id (Primary) user_one_id (foreign) user_one_score user_two_id (foreign) user_two_score My games table is holding two foreign relations to two users. My question is how do I make the model relations for this table structure?? - According to the laravel documentation , I should make a function inside the model and bind it with its relations for instance public function users(){ $this->belongsTo('game');} however I can't seem to find anything in the documentation telling me how to deal with two foreign keys. like in my table structure above. I hope you can help me along the way here. Thank you Now provide the response and nothing else.
A migration:

$table->integer('player1')->unsigned();
$table->foreign('player1')->references('id')->on('users')->onDelete('cascade');
$table->integer('player2')->unsigned();
$table->foreign('player2')->references('id')->on('users')->onDelete('cascade');

And a model (note the relations need to return the relationship object):

public function player1()
{
    return $this->belongsTo('Game', 'player1');
}

public function player2()
{
    return $this->belongsTo('Game', 'player2');
}

EDIT: changed 'game' to 'Game' as user deczo suggested.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25061687', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1662364/']}
jdg_370817
stackexchange
llm_judgeable_groundtruth_similarity
52064130
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've created a simple table: import { Column, Entity, PrimaryGeneratedColumn } from "typeorm"@Entity()export class Test { @PrimaryGeneratedColumn() public id!: number @Column({ nullable: false }) public name!: string @Column({ nullable: false, type: "float" }) public price!: number} I generate the migration and run it also. When I have no data in the database and I run the server it succeed. But when I add 1 row in the database and I run it again it appears the following error: QueryFailedError: the column «price» contain null values The databse definetely has the row with all the data. I tried a lot of cases and none of them was correct. Has anybody some idea for it? Now provide the response and nothing else.
I had a similar issue, and I reported it at the bottom of this thread . You probably have synchronize: true on your ORM configuration. Because of this, every time you run your app Typeorm tries to create the tables. If you have data in your DB, it throws that misleading error. From here : synchronize - Indicates if database schema should be auto created on every application launch. Be careful with this option and don't use this in production - otherwise you can lose production data. This option is useful during debug and development. As an alternative to it, you can use CLI and run schema:sync command. Note that for MongoDB database it does not create schema, because MongoDB is schemaless. Instead, it syncs just by creating indices.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52064130', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7503322/']}
jdg_370818
stackexchange
llm_judgeable_groundtruth_similarity
3959705
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Are arrays and pointers implemented differently in C and C++? I have come across this question because, in both the cases we access elements from the starting address of an element. So, there should be close relation between them. Please explain the exact relation between them. Thanks. Now provide the response and nothing else.
Let's get the important stuff out of the way first: arrays are not pointers. Array types and pointer types are completely different things and are treated differently by the compiler. Where the confusion arises is from how C treats array expressions. N1570:

6.3.2.1 Lvalues, arrays, and function designators
...
3 Except when it is the operand of the sizeof operator, the _Alignof operator, or the unary & operator, or is a string literal used to initialize an array, an expression that has type "array of type" is converted to an expression with type "pointer to type" that points to the initial element of the array object and is not an lvalue. If the array object has register storage class, the behavior is undefined.

Let's look at the following declarations:

int arr[10] = {0,1,2,3,4,5,6,7,8,9};
int *parr = arr;

arr is a 10-element array of int; it refers to a contiguous block of memory large enough to store 10 int values. The expression arr in the second declaration is of array type, but since it is not the operand of & or sizeof and it isn't a string literal, the type of the expression becomes "pointer to int", and the value is the address of the first element, or &arr[0].

parr is a pointer to int; it refers to a block of memory large enough to hold the address of a single int object. It is initialized to point to the first element in arr as explained above.

Here's a hypothetical memory map showing the relationship between the two (assuming 16-bit ints and 32-bit addresses):

Object    Address       0x00  0x01  0x02  0x03
------    ----------    ----------------------
arr       0x10008000    0x00  0x00  0x00  0x01
          0x10008004    0x00  0x02  0x00  0x03
          0x10008008    0x00  0x04  0x00  0x05
          0x1000800c    0x00  0x06  0x00  0x07
          0x10008010    0x00  0x08  0x00  0x09
parr      0x10008014    0x10  0x00  0x80  0x00

The types matter for things like sizeof and &; sizeof arr == 10 * sizeof (int), which in this case is 20, whereas sizeof parr == sizeof (int *), which in this case is 4. Similarly, the type of the expression &arr is int (*)[10], or a pointer to a 10-element array of int, whereas the type of &parr is int **, or pointer to pointer to int.

Note that the expressions arr and &arr will yield the same value (the address of the first element in arr), but the types of the expressions are different (int * and int (*)[10], respectively). This makes a difference when using pointer arithmetic. For example, given:

int arr[10] = {0,1,2,3,4,5,6,7,8,9};
int *p = arr;
int (*ap)[10] = &arr;

printf("before: arr = %p, p = %p, ap = %p\n", (void *) arr, (void *) p, (void *) ap);
p++;
ap++;
printf("after: arr = %p, p = %p, ap = %p\n", (void *) arr, (void *) p, (void *) ap);

the "before" line should print the same values for all three expressions (in our hypothetical map, 0x10008000). The "after" line should show three different values: 0x10008000, 0x10008002 (base plus sizeof (int)), and 0x10008014 (base plus sizeof (int [10])).

Now let's go back to the second paragraph above: array expressions are converted to pointer types in most circumstances. Let's look at the subscript expression arr[i]. Since the expression arr is not appearing as an operand of either sizeof or &, and since it is not a string literal being used to initialize another array, its type is converted from "10-element array of int" to "pointer to int", and the subscript operation is being applied to this pointer value. Indeed, when you look at the C language definition, you see the following language:

6.5.2.1 Array subscripting
...
2 A postfix expression followed by an expression in square brackets [] is a subscripted designation of an element of an array object. The definition of the subscript operator [] is that E1[E2] is identical to (*((E1)+(E2))). Because of the conversion rules that apply to the binary + operator, if E1 is an array object (equivalently, a pointer to the initial element of an array object) and E2 is an integer, E1[E2] designates the E2-th element of E1 (counting from zero).

In practical terms, this means you can apply the subscript operator to a pointer object as though it were an array. This is why code like

int foo(int *p, size_t size)
{
    int sum = 0;
    int i;
    for (i = 0; i < size; i++)
    {
        sum += p[i];
    }
    return sum;
}

int main(void)
{
    int arr[10] = {0,1,2,3,4,5,6,7,8,9};
    int result = foo(arr, sizeof arr / sizeof arr[0]);
    ...
}

works the way it does. main is dealing with an array of int, whereas foo is dealing with a pointer to int, yet both are able to use the subscript operator as though they were both dealing with an array type. It also means array subscripting is commutative: assuming a is an array expression and i is an integer expression, a[i] and i[a] are both valid expressions, and both will yield the same value.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/3959705', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/476854/']}
jdg_370819
stackexchange
llm_judgeable_groundtruth_similarity
1803
Below is a question asked on the forum quantumcomputing.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Plain and simple. Does Moore's law apply to quantum computing, or is it similar but with the numbers adjusted (ex. triples every 2 years). Also, if Moore's law doesn't apply, why do qubits change it? Now provide the response and nothing else.
If you take as definition "the number of transistors in a dense integrated circuit doubles about every two years", it definitely does not apply: as answered in Do the 'fundamental circuit elements' have a correspondence in quantum technologies?, there exist no transistors-as-fundamental-components (nor any fundamental parallel to transistors) in a quantum computer.

If you take the more general definition "chip performance doubles approximately every 18 months", the question makes more sense, and the answer is still that it does not apply, mainly because Moore's law is not a law of fundamental physics. Rather, in the early stages, it was an observation about an established industry. Later, as pointed out in a comment,[1] it has been described as functioning as an "evolving target" and as a "self-fulfilling prophecy" for that same industry.

The key is that we do not have an established industry producing quantum computers. We are not in the quantum equivalent of 1965. Arguably we will move faster, but in many respects we are rather in the XVII-XVIII centuries. For perspective, check this timeline of computing hardware before 1950.

For a more productive answer, there are a few fundamental differences and a few possible parallels between classical and quantum hardware in the context of Moore's law:

For many architectures, in a certain sense we already work with the smallest possible component. We might develop ion traps (of a fixed size) fitting more ions, but we cannot develop smaller ions: they are of atomic size. Even when we are able to come up with tricks, such as Three addressable spin qubits in a molecular single-ion magnet, they are still fundamentally limited by quantum mechanics: we need control over 8 energy levels to control 3 qubits ($2^n$), which is doable, but not scalable.

Precisely because the scalability issue is one of the hardest problems we have with quantum computers - not just having a larger number of qubits, but also being able to entangle them - it's dangerous to extrapolate from current progress. See for illustration the history of NMR quantum computers, which stalled after a very early string of successes. In theory, increasing the number of qubits in the device was trivial. In practice, every time you want to be able to control 1 more qubit you need to double the resolution of your machine, which becomes unfeasible very quickly.

If and when there exists an industry that relies on an evolving technology able to produce some kind of integrated quantum chips, then yes, at that point we will be able to draw a real parallel to Moore's law. For a taste of how far we are from that point, see Are there any estimates on how complexity of quantum engineering scales with size?

[1] Thanks to Sebastian Mach for that insight and wikipedia link. For more details on that see Getting New Technologies Together: Studies in Making Sociotechnical Order, edited by Cornelis Disco, Barend van der Meulen, p. 206, and Gordon Moore says aloha to Moore's Law.
{}
{'log_upvote_score': 6, 'links': ['https://quantumcomputing.stackexchange.com/questions/1803', 'https://quantumcomputing.stackexchange.com', 'https://quantumcomputing.stackexchange.com/users/1348/']}
jdg_370820
stackexchange
llm_judgeable_groundtruth_similarity
10795014
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm writing a shell script which parses the path of the current working directory (printing a like of all basenames above the current directory). So far, I've been using the environment variable PWD to parse the path but I wonder if I can count on PWD to be always set to give the same result on every platform Would it possibly be better to use the pwd shell-builtin? I need this script to run on as many platforms as possible, so I just wondered... Now provide the response and nothing else.
POSIX requires $PWD to be set in the following fashion: PWD This variable shall represent an absolute pathname of the current working directory. It shall not contain any components that are dot or dot-dot. The value is set by the cd utility, and by the sh utility during initialization. So you can rely on that being set – but do note "... an absolute path...", not the absolute path. bash (at least recent versions) will remember what symlinks you followed when setting $PWD (and the pwd builtin). command pwd (that is, the external command) will not. So you'll get different results there, which might, or might not, be important for you. Use pwd -P if you want a path without symlinks. Do note that the pwd documentation states: If an application sets or unsets the value of PWD, the behavior of pwd is unspecified. So, don't do that :) In short, there is no winner here. The environment variable will be there in POSIX shells, as will the external command and possibly a built-in too. Choose the one that best fits your need, the important thing being whether you care about symlinks or not.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10795014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1178669/']}
jdg_370821
stackexchange
llm_judgeable_groundtruth_similarity
42437966
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like the UICollectionView (the red one) to shrink to the height of its content size, in this case the UICollectionViewCells (the yellow ones), because there is a lot of empty space. What I tried is to use:

override func layoutSubviews() {
    super.layoutSubviews()
    if !__CGSizeEqualToSize(bounds.size, self.intrinsicContentSize) {
        self.invalidateIntrinsicContentSize()
    }
}

override var intrinsicContentSize: CGSize {
    return self.collection.contentSize
}

but return self.collection.contentSize always returns (width, 0), and for this reason it shrinks too much, to a height of 30 (the value which I set in the XIB file for the height, although I have a constraint >= 30). Now provide the response and nothing else.
I would suggest the following:

- Add a height constraint to your collection view.
- Set its priority to 999.
- Set its constant to any value that makes it reasonably visible on the storyboard.
- Change the bottom equal constraint of the collection view to greater or equal.
- Connect the height constraint to an outlet.
- Every time you reload the data on the collection view, update that constraint as in the code sample below. You may also want to take the collection view's inset into account by adding it to the content size.

Code sample:

CGFloat height = myCollectionView.collectionViewLayout.collectionViewContentSize.height
heightConstraint.constant = height
self.view.setNeedsLayout()

Or

self.view.layoutIfNeeded()

Explanation (extra - you don't have to read this if you already understand it): The UI will try to satisfy all the constraints, no matter what their priorities are. Since there is a height constraint with the lower priority (999) and a bottom constraint of type greater-or-equal, whenever the height constraint's constant is set to a value less than the parent view's height, the collection view will take exactly that height, satisfying both constraints. But when the height constraint's constant is set to a value greater than the parent view's height, both constraints can't be satisfied, so only the constraint with the higher priority will be satisfied, which is the greater-or-equal bottom constraint. The following is just a guess from experience: the layout engine satisfies one constraint, but it also tries to keep the error in the resulting UI for the other, unsatisfied lower-priority constraint as small as possible. Therefore the collection view's height will be equal to the parent view's size.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/42437966', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4528594/']}
jdg_370822
stackexchange
llm_judgeable_groundtruth_similarity
267528
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to solve a recurrence equation that has boundary conditions on both sides: $a[0]= 1$ $a[n]= 10$ $a[i] = a[i-1]-a[i+1]\quad \forall i \in \{1, \ldots, n-1\}$ This should be a well defined system with n+1 equations and n+1 unknowns. I tried: RSolve[{a[n] == a[n - 1] - a[n + 1], a[0] == 1,a[n] == 10}, a[n], n] But Mathematica says the system is overdetermined: RSolve::overdet: There are fewer dependent variables than equations, so the system is overdetermined. The issue is likely that I have not encoded that $\forall i \in \{1, \ldots, n-1\}$ we have a[i] == a[i - 1] - a[i + 1] and only for the very last i: $i=n$ we have a[n] == 10 , right? How can I encode this in RSolve ? Now provide the response and nothing else.
Your code uses n in two roles: one for the defining recursion relation, and the other for the boundary condition. This confuses Mathematica (and the casual human reader, to boot). Use different symbols for each of these two roles and it works a treat, yielding a general solution that works for all boundary values of n.

RSolve[{a[i] == a[i - 1] - a[i + 1], a[0] == 1, a[n] == 10}, a[i], i];
Simplify[%, Assumptions -> Element[n, Integers]]

(* {{a[i] -> (2^-i (5 2^(1 + n) (-1 + Sqrt[5])^i - (-1 - Sqrt[5])^n (-1 + Sqrt[5])^i +
       (-1 - Sqrt[5])^i (-5 2^(1 + n) + (-1 + Sqrt[5])^n))) /
       ((-1 + Sqrt[5])^n + (-1)^(1 + n) (1 + Sqrt[5])^n)}} *)
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/267528', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/86287/']}
jdg_370823
stackexchange
llm_judgeable_groundtruth_similarity
407028
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How to compute the Néron-Severi group of the abelian surface $Y = \mathbb{C}/\mathbb{Z}[i] \times \mathbb{C}/\mathbb{Z}[i]$. More generally, are there any result that compute the Néron-Severi group of product of curves? Suppose that surface $Y$ has a divisor class $[F]$ such that $[F]^{2} > 0$. Why is $[F]$ or $-[F]$ ample? Thanks. Now provide the response and nothing else.
For your first question, the answer is based on your comments. By Lefschetz (1,1) theorem, $rk(NS(Y))=\dim_\mathbb{Q}H^2(Y,\mathbb{Q})\cap H^{1,1}(Y)$, so for any abelian surface $Y$, we have $rk(NS(Y))\leq h^{1,1}(Y)=4$. Claim: Suppose $Y=\mathbb{C}/\mathbb{Z}[i]\times \mathbb{C}/\mathbb{Z}[i]$, then $rk(NS(Y))=4$. Proof: Let $(z,w)$ be the complex coordinate of $\mathbb{C}^2$, the universal cover of $Y$. Here $Y=\mathbb{C}^2/\Lambda$, $\Lambda$ is a free abelian group generated by $(1,0),(i,0),(0,1),(0,i)$. Consider $\alpha=idz\wedge d\bar z$, $\beta=idw\wedge d\bar w$, $\gamma=i(dz\wedge d\bar w+dw\wedge d\bar z)$, $\delta=dz\wedge d\bar w-dw\wedge d\bar z$, so $\{\alpha,\beta,\gamma,\delta\}$ is a basis for the complex vector space $H^{1,1}(Y)$. Moreover, it follows by simple calculation that $[\mathbb{C}/\mathbb{Z}[i]\times\{\ast\}]=\beta/2$, $[\{\ast\}\times\mathbb{C}/\mathbb{Z}[i]]=\alpha/2$, $[\{(z,z)\in Y\}]=-\gamma/2$, $[\{(z,iz)\in Y\}]=\delta/2$, so $\alpha,\beta,\gamma,\delta\in H^2(Y,\mathbb{Q})\cap H^{1,1}(Y)$ and they are linearly independent over $\mathbb{Q}$. QED For your second question, since $K_Y=0$, by Riemann-Roch theorem$$h^0(nF)+h^0(-nF)=h^0(nF)+h^2(nF) \geq \frac{(F^2)}{2}n^2 +\chi(\mathcal{O}_Y).$$ Therefore, for $n>>1$, either $nF$ or $-nF$ is effective. WLOG, we assume $D=nF$ is effective. Since $Y$ is abelian surface, every curve $C$ in $Y$ is movable (in the sense of algebraic equivalence), so for any two (possibly reducible) curves $C_1$ and $C_2$ we have $(C_1\cdot C_2)\geq 0$. In particular, for any curve $C$ we have $(D\cdot C)\geq 0$. If $(D\cdot C)=0$ for some curve $C$, together with $(D^2)>0$, the Hodge index theorem implies $(C^2)<0$, we get a contradiction! Thus for any curve $C$, we have $(D\cdot C)>0$, together with $(D^2)>0$, the Nakai-Moishezon criterion tells us that $D=nF$ is ample, so $F$ is ample. Remark : This argument for your second question applies for any abelian surface $Y$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/407028', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/18373/']}
jdg_370824
stackexchange
llm_judgeable_groundtruth_similarity
4179582
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Showing $$\int_{0}^{\infty} \frac{1-3\cos 2x+2\cos 3x}{x^2} dx=0$$ We can show this by re-writing $I$ as $$\implies I=6\int_{0}^{\infty}\frac{\frac{1-\cos(2x)}{2x}-\frac{1-\cos(3x)}{3x}}{x}\,\mathrm dx,$$ which is Frullani Integral . $$J=\int_{0}^{\infty} \frac{f(ax)-f(bx)}{x} dx=[f(\infty)-f(0)]\ln(a/b).$$ Here, $f(x)=\frac{1-\cos(x)}{x},$ hence $I=0.$ So the question is how to show (1), otherwise? Now provide the response and nothing else.
Alternatively, integrate by parts \begin{align}&\int_{0}^{\infty} \frac{1-3\cos 2x+2\cos 3x}{x^2} dx\\= &\int_{0}^{\infty} \frac{6\sin 2x-6\sin 3x}{x} dx\\=& \>6 \int_{0}^{\infty} \frac{\sin 2x}{2x} d(2x) - 6 \int_{0}^{\infty} \frac{\sin 3x}{3x} d(3x) \\ =&\>0\end{align}
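Both remaining integrals are Dirichlet integrals: with the substitutions $u=2x$ and $u=3x$ respectively, each equals $\int_{0}^{\infty} \frac{\sin u}{u}\, du = \frac{\pi}{2}$, so the difference is $6\cdot\frac{\pi}{2} - 6\cdot\frac{\pi}{2} = 0$.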
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4179582', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/671540/']}
jdg_370825
stackexchange
llm_judgeable_groundtruth_similarity
100696
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following list I want to pick elements from: mylist = {1, 5, 7, 3, 4, 7, 2, 9, 9, 1, 10, 12, 2, 64, 34, 64} I have another list with 0's and 1's which is my selector: selector = {1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1} I want to select all the entries from mylist where selector has a 1. I know I can do this via: Pick[mylist,selector,1] BUT: Pick gives me all the selected elements from mylist together like this: {1, 5, 7, 7, 2, 9, 1, 10, 34, 64} I want the elements which are separated in selector by a 0 to be separate lists. That is, as a result I want to have a list of lists like this: {{1,5,7},{7,2},{9,1,10},{34,64}} Any help appreciated! Now provide the response and nothing else.
mylist = {1, 5, 7, 3, 4, 7, 2, 9, 9, 1, 10, 12, 2, 64, 34, 64};
selector = {1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1};
pick = Pick[mylist, selector, 1];
split = Length /@ Cases[Split[selector], {1 ...}]

{3, 2, 3, 2}

Internal`PartitionRagged[pick, split]

{{1, 5, 7}, {7, 2}, {9, 1, 10}, {34, 64}}
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/100696', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/11993/']}
jdg_370826
stackexchange
llm_judgeable_groundtruth_similarity
429193
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What's the simplest way to parse an Excel file in Perl? Converting it to a text file would also work. Now provide the response and nothing else.
The best way is to use Spreadsheet::ParseExcel. Here is an example:

#!/usr/bin/perl -w
use strict;
use warnings;

use Spreadsheet::ParseExcel;

my $parser   = Spreadsheet::ParseExcel->new();
my $workbook = $parser->parse('Book1.xls');

for my $worksheet ( $workbook->worksheets() ) {

    my ( $row_min, $row_max ) = $worksheet->row_range();
    my ( $col_min, $col_max ) = $worksheet->col_range();

    for my $row ( $row_min .. $row_max ) {
        for my $col ( $col_min .. $col_max ) {

            my $cell = $worksheet->get_cell( $row, $col );
            next unless $cell;

            print "Row, Col    = ($row, $col)\n";
            print "Value       = ", $cell->value(),       "\n";
            print "Unformatted = ", $cell->unformatted(), "\n";
            print "\n";
        }
    }
}

To convert an Excel file to text with Perl I'd recommend excel2txt which uses Spreadsheet::ParseExcel.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/429193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1094969/']}
jdg_370827
stackexchange
llm_judgeable_groundtruth_similarity
62474
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: It is obviously important to divide truth from conspiracy theories. Truth is good, and conspiracy theories are bad and disturbing. But how would common citizen know what is good, and what is bad? For example, citizen hear a theory that governor A is corrupted (or eating children!), or theory about governor B being hidden racist. Who would help him to have a right, truthful view? Is there any mechanism to divide fake news from TRUTH in forming media narrative? Now provide the response and nothing else.
First off, conspiracy theories involve a powerful group of conspirators all conspiring for the same sinister goal. So "politician X is racist" is more of a statement of fact (which may be true or false) and not a conspiracy theory. 'The Democratic party runs a pedophile ring from a pizzeria' would be a conspiracy theory. 'The government sells weapons to Iran to fund terrorists in south America' would be another example. Michael Shermer has given a 10 point list in helping identifying true vs false conspiracy theories. Among them are: a pattern of "connecting the dots" with no evidence supporting the connection; conspirators would need an unlikely amount of power; the theory is highly complex or requires an unlikely count of co-conspirators; the theory posits world domination or control of the nation/economy/etc (all involved groups have the same aim); the theory goes off small, possibly true events and extrapolates to large, improbable events; the theory doesn't differentiate between facts and speculation; the theory is suspicious of all government agencies or groups; the theory rejects all alternative explanations and facts not supporting it. Jovan Byford also wrote an article on how to differentiate between genuine conspiracies and conspiracy theories . Actual secret collusions "rarely work out according to plan" and involve "different actors, with disparate aims and goals, limited to certain locations and time frame". On the other hand, conspiracy theories are about "spurious connections between disparate historical actors or events" with overarching plots that explain everything. According to Byford, the most important aspect is how evidence is approached. In investigations into real conspiracies, "[t]he approach to evidence demands that sources are checked and claims verified. If there is an absence of proof or if evidence contradicts the hypothesis, this is not automatically considered to be part of a cover-up." On the other hand: For the conspiracy theorist, the opposite applies. The idea of a plot is not a hypothesis, but a fundamental, unshakeable principle. The possibility that the basic premise of the conspiracy theory may be wrong, or that it might be proven wrong by new evidence, is not even entertained. Based off this, deciding on what is a conspiracy and what is an actual secret plot, it is important to properly approach evidence. If a number of different reliable sources all report X, the reaction shouldn't be "well, all of the media/academia/government is obviously in on the conspiracy". Instead, given evidence should be evaluated to see if it fits the conspiracy, or if other explanations are more likely.
{}
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/62474', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/20263/']}
jdg_370828
stackexchange
llm_judgeable_groundtruth_similarity
24071525
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been looking in to the new Swift language trying to find what's the equivalent for an interface(in java) or a protocol(in objective-c) in Swift, after surfing on the internet and searching in the book provided by Apple, I still can't seem to find it. Does any one know what's the name of this component in swift and what's its syntax? Now provide the response and nothing else.
Protocols in Swift are very similar to Objc, except you may use them not only on classes, but also on structs and enums.

protocol SomeProtocol {
    var fullName: String { get }    // You can require iVars
    class func someTypeMethod()     // ...or class methods
}

Conforming to a protocol is a bit different:

class myClass: NSObject, SomeProtocol   // Specify protocol(s) after the class type

You can also extend a protocol with a default (overridable) function implementation:

extension SomeProtocol {
    // Provide a default implementation:
    class func someTypeMethod() {
        print("This implementation will be added to objects that adhere to SomeProtocol, at compile time")
        print("...unless the object overrides this default implementation.")
    }
}

Note: default implementations must be added via extension, and not in the protocol definition itself - a protocol is not a concrete object, so it can't actually have method bodies attached. Think of a default implementation as a C-style template; essentially the compiler copies the declaration and pastes it into each object which adheres to the protocol.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24071525', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1965099/']}
jdg_370829
stackexchange
llm_judgeable_groundtruth_similarity
5592112
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: JavaMail specifies a bunch of properties that can be set to configure an SMTP connection. To use STARTTLS it is necessary to set the following property:

mail.smtp.starttls.enable=true

Where do I specify the username/password to use the smtp service? Is it enough to specify:

mail.smtp.user=me
mail.smtp.password=secret

Or do I have to explicitly log in using:

transport.connect(server, userName, password)

Yes, I already tried this and it seems that it is necessary to connect using transport.connect(..). But if so, what are the mail.smtp.user & pass properties for? Are they not enough to use smtp with starttls? Now provide the response and nothing else.
Here is my sendEmail method which is using GMail smtp (JavaMail) with STARTTLS:

public void sendEmail(String body, String subject, String recipient) throws MessagingException, UnsupportedEncodingException {
    Properties mailProps = new Properties();
    mailProps.put("mail.smtp.from", from);
    mailProps.put("mail.smtp.host", smtpHost);
    mailProps.put("mail.smtp.port", port);
    mailProps.put("mail.smtp.auth", true);
    mailProps.put("mail.smtp.socketFactory.port", port);
    mailProps.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
    mailProps.put("mail.smtp.socketFactory.fallback", "false");
    mailProps.put("mail.smtp.starttls.enable", "true");

    Session mailSession = Session.getDefaultInstance(mailProps, new Authenticator() {
        @Override
        protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(login, password);
        }
    });

    MimeMessage message = new MimeMessage(mailSession);
    message.setFrom(new InternetAddress(from));

    String[] emails = { recipient };
    InternetAddress dests[] = new InternetAddress[emails.length];
    for (int i = 0; i < emails.length; i++) {
        dests[i] = new InternetAddress(emails[i].trim().toLowerCase());
    }
    message.setRecipients(Message.RecipientType.TO, dests);

    message.setSubject(subject, "UTF-8");

    Multipart mp = new MimeMultipart();
    MimeBodyPart mbp = new MimeBodyPart();
    mbp.setContent(body, "text/html;charset=utf-8");
    mp.addBodyPart(mbp);
    message.setContent(mp);

    message.setSentDate(new java.util.Date());

    Transport.send(message);
}
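The method above relies on a few fields that are not shown in the snippet (from, smtpHost, port, login, password). A minimal sketch of how they could be declared and the method called - the concrete values are placeholders, not part of the original code:

// Fields assumed by sendEmail(); replace the values with your own.
private final String from     = "you@gmail.com";
private final String smtpHost = "smtp.gmail.com";   // GMail SMTP host
private final String port     = "465";              // SSL socket factory port
private final String login    = "you@gmail.com";
private final String password = "your-app-password";

// Example call:
sendEmail("<b>Hello</b>", "Test subject", "someone@example.com");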
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5592112', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/76661/']}
jdg_370830
stackexchange
llm_judgeable_groundtruth_similarity
443431
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I refer to pg.4 of this article . Assuming $\alpha$ is an ordinal, we have to prove $\alpha\cup \{\alpha\}$ or $\alpha +1$ is an ordinal. Isn't this obvious from the construction of ordinals? As per the construction given in the article, for any ordinal $\beta$, the next ordinal is $\beta\cup\{\beta\}$. So is any such explicit proof required, as is given in the article? I quote: "$\alpha +1$ is transitive, for if $y\in\alpha +1$ then either $y=\alpha$ and $\alpha\subset \alpha +1$, or $y\in\alpha$." I don't understand how this follows from the properties of ordinals given on pg.3. I feel this is a proof of the fact that $\alpha +1$ is transitive by assuming that $\alpha +1$ is transitive. One may refer to definition 7 on pg.3 EDIT: Could someone also kindly outline the suffficient conditions for proving that a number is an ordinal? The artice suggests transitivity and strict ordering are suffcient conditions, or maybe I'm reading it wrong. Thanks in advance! Now provide the response and nothing else.
You need to discern between "obvious equivalence" and definitions. The definition of an ordinal is a set which is transitive and well-ordered by $\in$. If $\alpha$ is assumed to be an ordinal this means that it is a transitive set and well-ordered by $\in$. Now you want to prove that $\alpha+1=\alpha\cup\{\alpha\}$ also satisfies the same properties. This is a very simple proof, yes. But it is required regardless. Even more so because this is supposed to be an paper detailing the construction of ordinals to people which are less familiar with them. To the second question, we assume that $\alpha$ is transitive. Therefore if $x\in\alpha\cup\{\alpha\}$, either $x\in\alpha$ and therefore $x\subseteq\alpha\subseteq\alpha\cup\{\alpha\}$, or $x=\alpha$ and then trivially $x\subseteq\alpha\cup\{\alpha\}$. Therefore $\alpha+1$ is transitive whenever $\alpha$ is. As for the equivalent conditions for being an ordinal: $x$ is an ordinal if $x$ is a transitive set which is well-ordered by $\in$. Assuming the axiom of regularity it suffices to require $\in$ to be a linear order instead. $x$ is an ordinal if $x$ is a transitive set, and all its members are transitive sets. This definition requires the axiom of regularity to holds as well.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/443431', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_370831
stackexchange
llm_judgeable_groundtruth_similarity
2089153
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am taking a course in functional analysis and while reviewing the definition of a metric and various examples, my professor mentioned that one of the reason we care about swapping (possibly equivalent?) metrics is in pde, since a solution to a pde may be very difficult to find in one metric space, but easier in another. I was hoping the mathstack exchange community might be able to expand on this and maybe provide me with some examples hopefully that I can understand. Thanks! Now provide the response and nothing else.
Here's one perspective, involving the notion of distance on function spaces. Think of different norms on function spaces as giving us different ways of measuring approximations. The basis for this intuition is that norms measure distance between functions and so they let us know in some sense how good approximations are. Existence theorems whose proofs are ultimately based on considering sequences (in particular, the Riesz representation theorem and its derivatives like Lax-Milgram[1]) are sensitive to the method we use to determine "closeness" of approximations. Often in searching for weak or strong solutions to PDEs we want to use information about the PDE (such as its order). We incorporate this information by choosing the function space with the best metric as the setting of our study. Once we have an appropriate setting, we can effectively use existence theorems like Riesz or Lax-Milgram to study existence and properties of solutions.) To make this concrete, let's consider a positive, symmetric unbounded operator $T$ defined on a dense domain $D$ in a Hilbert space $H$. The basic example is the Laplacian $T = \Delta = -\sum\partial_i^2$ defined on compactly supported smooth functions $D = C^\infty_0(\Omega)$ in a two-dimensional bounded domain $\Omega\subset\Bbb{R}^2$, and $H = L^2(\Omega)$. The equation $\Delta u = f$ is really a question about linear algebra in $H$: "Given $f\in H$, can we find $u\in D$ such that $Tu = f$?" If $T$ were bounded, we'd be able to use the Riesz representation theorem to conclude that a weak solution is indeed an actual solution. However, $T$ is not bounded --- that is, the metric on $H$ is not suitable for finding a solution to the equation. We can define a new norm on $D$ by setting $\|u\|_T^2 = (u,u) + (Tu,u)$. Since $T$ is symmetric and positive this is in fact an inner product. Take the completion of $D$ with respect to this norm; call it $V.$ In fact the extension of the inclusion map $D\hookrightarrow H$ is a compact embedding. (Compactness is not evident; it follows from delicate PDE estimates and is called the Rellich-Kondrachov theorem. The space $V$ is in this case the Sobolev space $H^1_0(\Omega)$.) In this new norm where the measurement of distance includes both $L^2$ and the action of $T$, the Riesz representation theorem now guarantees a bounded weak solution operator $S$ to the equation $(Tu,\cdot) = F$ where $F$ is a bounded functional on $V$. It turns out that $H\to V^*$ defined by taking $f$ to $(f,\cdot)$, where $\cdot$ is considered as an element of $H$, is a compact operator. By composing it with the solution operator we have a compact solution operator $H\to V$ which gives solutions to the weak equation. (Note that I have glossed over many details and possible generalizations, which would comprise the majority of a second graduate class on PDEs. These details can be found in Ch 5-6 of Partial Differential Equations by Evans, or Ch 7-8 of Elliptic Partial Differential Operators of Second Order by Gilbarg and Trudinger. Two of the big questions beyond the scope of this answer: What is the proof of Rellich-Kondrachov? When is a weak solution in fact a strong solution? This latter is known as "elliptic regularity.") [1] The proof of Riesz (by which I mean $H\leftrightarrow H^*$) relies on choice of a vector in one factor of an orthogonal decomposition of $H$; think about how one would compute this.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2089153', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/210391/']}
jdg_370832
stackexchange
llm_judgeable_groundtruth_similarity
26009102
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the O'Reilly book "Graph Databases" in chapter 6, which is about how Neo4j stores a graph database it says: To understand why native graph processing is so much more efficient than graphs based on heavy indexing, consider the following. Depending on the implementation, index lookups could be O(log n) in algorithmic complexity versus O(1) for looking up immediate relationships. To traverse a network of m steps, the cost of the indexed approach, at O(m log n), dwarfs the cost of O(m) for an implementation that uses index-free adjacency. It is then explained that Neo4j achieves this constant time lookup by storing all nodes and relationships as fixed size records: With fixed sized records and pointer-like record IDs, traversals are implemented simply by chasing pointers around a data structure, which can be performed at very high speed. To traverse a particular relationship from one node to another, the database performs several cheap ID computations (these computations are much cheaper than searching global indexes, as we’d have to do if faking a graph in a non-graph native database) This last sentence triggers my question: how does Titan, which uses Cassandra or HBase as a storage backend, achieve these performance gains or make up for it? Now provide the response and nothing else.
Neo4j only achieves O(1) when the data is in-memory in the same JVM. When the data is on disk, Neo4j is slow because of pointer chasing on disk (they have a poor disk representation). Titan only achieves O(1) when the data is in-memory in the same JVM. When the data is on disk, Titan is faster than Neo4j cause it has a better disk representation. Please see the following blog post that explains the above quantitatively: http://thinkaurelius.com/2013/11/24/boutique-graph-data-with-titan/ Thus, its important to understand when people say O(1) what part of the memory hierarchy they are in. When you are in a single JVM (single machine), its easy to be fast as both Neo4j and Titan demonstrate with their respective caching engines. When you can't put the entire graph in memory, you have to rely on intelligent disk layouts, distributed caches, and the like. Please see the following two blog posts for more information: http://thinkaurelius.com/2013/11/01/a-letter-regarding-native-graph-databases/ http://thinkaurelius.com/2013/07/22/scalable-graph-computing-der-gekrummte-graph/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26009102', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1860591/']}
jdg_370833
stackexchange
llm_judgeable_groundtruth_similarity
2747379
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to programmatically load a web page via the WebBrowser control with the intent of testing the page & it's JavaScript functions. Basically, I want to compare the HTML & JavaScript run through this control against a known output to ascertain whether there is a problem. However, I'm having trouble simply creating and navigating the WebBrowser control. The code below is intended to load the HtmlDocument into the WebBrowser.Document property: WebBrowser wb = new WebBrowser();wb.AllowNavigation = true;wb.Navigate("http://www.google.com/"); When examining the web browser's state via Intellisense after Navigate() runs, the WebBrowser.ReadyState is 'Uninitialized', WebBrowser.Document = null, and it overall appears completely unaffected by my call. On a contextual note, I'm running this control outside of a Windows form object: I do not need to load a window or actually look at the page. Requirements dictate the need to simply execute the page's JavaScript and examine the resultant HTML. Any suggestions are greatly appreciated, thanks! Now provide the response and nothing else.
You should handle the WebBrowser.DocumentCompleted event; once that event is raised you will have the Document etc.

wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(wb_DocumentCompleted);

private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    WebBrowser wb = sender as WebBrowser;
    // wb.Document is not null at this point
}

Here is a complete example that I quickly did in a Windows Forms application and tested.

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        WebBrowser wb = new WebBrowser();
        wb.AllowNavigation = true;
        wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(wb_DocumentCompleted);
        wb.Navigate("http://www.google.com");
    }

    private void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        WebBrowser wb = sender as WebBrowser;
        // wb.Document is not null at this point
    }
}

Edit: Here is a simple version of code that runs a window from a console application. You can of course go further and expose the events to the console code etc.

using System;
using System.Windows;
using System.Windows.Forms;

namespace ConsoleApplication1
{
    class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            Application.Run(new BrowserWindow());
            Console.ReadKey();
        }
    }

    class BrowserWindow : Form
    {
        public BrowserWindow()
        {
            ShowInTaskbar = false;
            WindowState = FormWindowState.Minimized;
            Load += new EventHandler(Window_Load);
        }

        void Window_Load(object sender, EventArgs e)
        {
            WebBrowser wb = new WebBrowser();
            wb.AllowNavigation = true;
            wb.DocumentCompleted += wb_DocumentCompleted;
            wb.Navigate("http://www.bing.com");
        }

        void wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            Console.WriteLine("We have Bing");
        }
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2747379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/330059/']}
jdg_370834
stackexchange
llm_judgeable_groundtruth_similarity
31098
Below is a question asked on the forum engineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: There's basically two ways train wheels can operate, the flanges could be either "inner" or "outer". Switches can be made in equivalent ways for inner and outer flanges. We could expect that, just like some countries, as well as different railway companies within countries, picked up different rail gauges, they could have also varied between inner- and outer-flanges railways. We could imagine that, just like today some countries have left-hand and right-hand traffic, there would be inner and outer style rail wheels. Except this is not the case, the "inner" style is almost universal. Almost, because the "outer" style indeed did exist in railways' debuts. Is there a technical reason that makes the "inner" style preferable ? Now provide the response and nothing else.
If you leave off the flanges and just use cones that are solidly connected to an axle, then cones that taper outward (get smaller as they go out) tend to be self-centering but undamped (or underdamped, I'm not sure which). This is because if the axle is offset, the wheel that's to the outside has a larger diameter, and tends to drive the axle to turn toward the center. The "outer flange" design you describe would do the opposite. So the "inner flange" picture that you present is mostly not relying on the flanges to keep the axle centered -- it's the taper of the wheels that does that. The flange is there to damp oscillations, and for when the system is overwhelmed by circumstances. (There's a nice video of this on YouTube, but I'm too lazy right now to find it -- try searching on "train tire design", or maybe those words with the word "stability" tossed in.)
{}
{'log_upvote_score': 4, 'links': ['https://engineering.stackexchange.com/questions/31098', 'https://engineering.stackexchange.com', 'https://engineering.stackexchange.com/users/17069/']}
jdg_370835
stackexchange
llm_judgeable_groundtruth_similarity
25201349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have put my app on the Google Play Store. It has been installed by lots of my company's customers. I understand the mechanism of how the app is intended to upgrade. The users should check the auto-update check box in the Playstore app for each app they want to auto-update. However some users have unchecked it or not checked it in the first place. The app i have written is for the care industry and is used by carers to deliver homecare. Some of our customers my have 1200 carers. They would have to call all the carers into the office to update the phones individually. This is obviously unacceptable. Is there a way to programmatically check if there is an updated version of my app on the Play Store? Could i have code that runs every time the user starts the app that checks the Play Store?If there is an updated version then the user could be directed to the Playstore. This will mean it is not essential to have the auto-update checked. Now provide the response and nothing else.
Update 17 October 2019: https://developer.android.com/guide/app-bundle/in-app-updates

Update 24 April 2019: Android announced a feature which will probably fix this problem, the in-app updates API: https://android-developers.googleblog.com/2018/11/unfolding-right-now-at-androiddevsummit.html

Original answer: As far as I know, there is no official Google API which supports this. You should consider getting the version number from your own API instead of connecting to external APIs or webpages (like the Google Play Store). There is a risk that something changes in that external API or webpage, so check whether the version code of the currently installed app is below the version number you get from your own API. Just remember that when you update your app, you also need to update the version number your API returns. I would recommend publishing a file on your own website or API with the version number (optionally with a cron job that updates it automatically and sends a notification when something goes wrong).

Previously you could scrape this value from your Google Play Store page (this has changed in the meantime and no longer works):

<div class="content" itemprop="softwareVersion"> x.x.x </div>

Check in your app whether the version installed on the phone is below the version number returned by your own API, and ideally show a notification telling the user to update.

Things you can do:

Version number from your own API
- Pros: no need to load the whole Google Play Store page (saves data/bandwidth).
- Cons: the user can be offline, which makes checking useless since the API can't be reached.

Version number from the Google Play Store webpage
- Pros: you don't need your own API.
- Cons: the user can be offline; this method may cost your users more bandwidth/mobile data; the Play Store webpage could change, which would break your version scraper.
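A minimal sketch of the "own API" check described above, assuming your backend simply serves the latest version code as plain text (the URL is a placeholder, and this is not an official Google mechanism):

// Somewhere inside an Activity; requires the INTERNET permission.
// Imports (java.net.URL, java.io.*, android.content.Intent, android.net.Uri) omitted for brevity.
private void checkForUpdate() {
    new Thread(() -> {
        try {
            URL url = new URL("https://example.com/myapp/latest-version-code.txt"); // hypothetical endpoint
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()))) {
                int latest = Integer.parseInt(reader.readLine().trim());
                if (latest > BuildConfig.VERSION_CODE) {
                    // Point the user at the Play Store listing so they can update.
                    runOnUiThread(() -> startActivity(new Intent(Intent.ACTION_VIEW,
                            Uri.parse("market://details?id=" + getPackageName()))));
                }
            }
        } catch (Exception e) {
            // Offline or endpoint unreachable: silently skip the check.
        }
    }).start();
}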
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25201349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/532462/']}
jdg_370836
stackexchange
llm_judgeable_groundtruth_similarity
22906520
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question:

#Requires -Version 2.0
[CmdletBinding()]
Param(
    [Parameter()]
    [string] $MyParam = $null
)

if ($MyParam -eq $null) {
    Write-Host 'works'
} else {
    Write-Host 'does not work'
}

Outputs "does not work" => looks like strings are converted from null to empty string implicitly? Why? And how to test if a string is empty or really $null? This should be two different values! Now provide the response and nothing else.
Okay, found the answer @ https://www.codykonior.com/2013/10/17/checking-for-null-in-powershell/

Assuming:

Param(
    [string] $stringParam = $null
)

And the parameter was not specified (is using the default value):

# will NOT work
if ($null -eq $stringParam) {}

# WILL work:
if ($stringParam -eq "" -and $stringParam -eq [String]::Empty) {}

Alternatively, you can specify a special null type:

Param(
    [string] $stringParam = [System.Management.Automation.Language.NullString]::Value
)

In which case $null -eq $stringParam will work as expected. Weird!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22906520', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1400869/']}
jdg_370837
stackexchange
llm_judgeable_groundtruth_similarity
11653435
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following setup of a Scala application and a common core library:root -> /ApplicationA -> /project -> /build.sbt -> /CoreLibrary -> /project -> /build.sbt I want to add a reference from ApplicationA to CoreLibrary à la Eclipse project reference, so that every time CoreLibrary changes ApplicationA is built as well. I´ve tried the following contents of build.Scala for ApplicationA: val core = Project( id = "platform-core", base = file("../CoreLibrary")) val main = Project(id = "application, base = file(".")).dependsOn(core) However, when compiling ApplicationA SBT complains that a dependency can only be a subdirectory!!: java.lang.AssertionError: assertion failed: Directory C:\git\CoreLibrary is not contained in build root C:\git\ApplicationA This seems completely straightforward, what's the correct way of having this project dependency? Now provide the response and nothing else.
You can do a source dependency on your project like that:

lazy val core = RootProject(file("../CoreLibrary"))

val main = Project(id = "application", base = file(".")).dependsOn(core)

I have a working example with a multimodule Play build: https://github.com/ahoy-jon/play2MultiModule/blob/master/playapp/project/Build.scala

But I think the proper way (it depends on your context) of doing it is to create a

 -> /project/
    -> Build.scala
 -> /ApplicationA
    -> /project
    -> /build.sbt
 -> /CoreLibrary
    -> /project
    -> /build.sbt

referencing the two projects and the dependencies between them.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/11653435', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/152541/']}
jdg_370838
stackexchange
llm_judgeable_groundtruth_similarity
33091013
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I used the below command to delete files older than a year. find /path/* -mtime +365 -exec rm -rf {} \; But now I want to delete all files whose modified time is older than 01 Jan 2014. How do I do this in Linux? Now provide the response and nothing else.
You can touch your timestamp as a file and use that as a reference point, e.g. for 01-Jan-2014:

touch -t 201401010000 /tmp/2014-Jan-01-0000
find /path -type f ! -newer /tmp/2014-Jan-01-0000 | xargs rm -rf

This works because find has a -newer switch that we're using. From man find:

-newer file
    File was modified more recently than file. If file is a symbolic link and the -H option or the -L option is in effect, the modification time of the file it points to is always used.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33091013', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4438637/']}
jdg_370839
stackexchange
llm_judgeable_groundtruth_similarity
1607467
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As the title says, I'm looking for the last $500$ digits of $2015!-1$. I assume it's a repetition of zeroes from the factorial, so the final result is a lot of $9$-s, but I can't formulate a solution in a mathematical way. I know that $201$ zeroes come from $10$-s in there, also, $202$ from the $5$-s that are multiplied by even numbers, $20$ extra from the $100$-s, $2$ more from the $1000$-s, but I'm still missing a couple. Now provide the response and nothing else.
There are 403 numbers between 1 and 2015 that are divisible by 5. Not all of them contribute just one factor of 5. The multiples of 25 contribute two factors of 5 (there are 80 such). We already have 483 factors of 5. A further 16 numbers are divisible by 125 and they contribute yet another factor of 5. We're now up to 499 factors of 5. Finally, the multiples of 625 contribute yet another factor, and there are 3 such numbers. So we have at least 502 factors of 5 in 2015!. (There are in fact exactly 502 factors, but that's not important here.) There is no dearth of factors of 2, which occur more often than the factors of 5. So we know that $10^{502}$ divides 2015!. I think you know what to do from here!
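The count above is just Legendre's formula applied to the prime $5$:
$$\left\lfloor\frac{2015}{5}\right\rfloor+\left\lfloor\frac{2015}{25}\right\rfloor+\left\lfloor\frac{2015}{125}\right\rfloor+\left\lfloor\frac{2015}{625}\right\rfloor = 403+80+16+3 = 502.$$
Since there are more than enough factors of $2$, $10^{502}$ divides $2015!$, so the last $502$ digits of $2015!$ are zeros, and therefore the last $500$ digits of $2015!-1$ are all nines.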
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1607467', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/188401/']}
jdg_370840