source
stringclasses
1 value
task_type
stringclasses
1 value
in_source_id
stringlengths
1
8
prompt
stringlengths
209
40.4k
gold_standard_solution
stringlengths
0
56.7k
verification_info
stringclasses
1 value
metadata
stringlengths
138
225
problem_id
stringlengths
9
10
stackexchange
llm_judgeable_groundtruth_similarity
320773
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Per this post , you can force seasonality in auto.arima by selecting D=1. I have a weekly time series which looks like it might (or might not) have a seasonal component (I have a priori reasons for thinking it might have a seasonal component). Data <- as.ts(Data$Sales,order.by=Data$Date, frequency=52) Train <- window(Data,start=3,end=107)Test <- window(Data,start=108,end=116) I tried manually fitting a seasonal model: fit <- arima(Train, order=c(2,0,1) , seasonal = list (order= c(0,1,0) , period = 52))forec <- predict(fit, n.ahead =8) gave an "OK" forecast (see first graph). So I tried improving on it by using auto.arima to find the best model. AutoFit <- auto.arima(Train) This returned an ARIMA(1,1,1) model, which I then fit using: #fit <- arima(Train, order=c(1,1,1)) But this gave worse results than the seasonal model I selected manually (see second graph). So I tried to force seasonality by running: AutoFit <- auto.arima(Train, D=1) But I still get the same ARIMA(1,1,1) model. Why is auto.arima not trying to fit a seasonal model, even why I try to force it? I've also tried: AutoFit <- auto.arima(Train, seasonal=TRUE, D=1) and AutoFit <- auto.arima(Train, seasonal=TRUE, start.P=0, start.Q=0 , D=1) Now provide the response and nothing else.
Rule number 1: when your code does not do what you want, start inspecting your objects.

    library(forecast)
    set.seed(1)
    (Data <- as.ts(rnorm(116), frequency=52))

yields

    Time Series:
    Start = 1
    End = 116
    Frequency = 1
    ...snip...

Note that Frequency is 1, not 52, as we explicitly set above! The problem is that as.ts() silently ignores the frequency parameter and sets the frequency to 1. Here is the help page to stats::ts() :

    'as.ts' is generic. Its default method will use the 'tsp' attribute of the object if it has one to set the start and end times and frequency.

So, if you want something seasonal, either supply an x with a tsp attribute to as.ts() ... or use ts() straight from the beginning:

    set.seed(1)
    (Data <- ts(rnorm(116), frequency=52))

which yields

    Time Series:
    Start = c(1, 1)
    End = c(3, 12)
    Frequency = 52
    ...snip...

This looks much better. Note that we now need to supply 2-vectors to window(), corresponding to the Start and End attributes of Data:

    Train <- window(Data, start=c(1,1), end=c(2,52))
    auto.arima(Train, D=1)

which yields a nice seasonal ARIMA model, as required:

    Series: Train
    ARIMA(0,0,0)(0,1,0)[52]
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/320773', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/89649/']}
jdg_370441
stackexchange
llm_judgeable_groundtruth_similarity
808060
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The concept of complete bipartite graphs can be generalized to define the complete multipartite graph $K(r_1,r_2,...,r_k)$. It consists of $k$ sets of vertices each with cardinality $r_i$ for $i$ in $\{1,2,\ldots,k\}$ where all possible "interest" edges are present but no "intraset" edges are present. For bipartite graphs I have Mathematica code:Table[Floor[n/2] Ceiling[n/2], {n, 0, 10}] {0, 0, 1, 2, 4, 6, 9, 12, 16, 20, 25} for tripartite graphs I have: f[n_] := Which[Mod[n, 3] == 0, 3 (n/3)^2, Mod[n, 3] == 1, Floor[n/3]^2 + 2 Ceiling[n/3] Floor[n/3], Mod[n, 3] == 2, Ceiling[n/3]^2 + 2 Ceiling[n/3] Floor[n/3]]; Table[f[n], {n, 0, 10}]{0, 0, 1, 3, 5, 8, 12, 16, 21, 27, 33} In neither case am I convinced that I am correct. It just seems intuitive that the sets must be (as nearly as possible) the same size.How can I generalize for larger k? This question is an exercise in "Combinatorics and Graph Theory" Harris,Herst,Mossinghoff. page 16. I read and understood the solution given by Kaya in another post: $n^2\frac{k-1}{2k}$ but this is only true when $n$ is a multiple of $k$. I want to be able to write a code in Mathematica for any $k$ and any $n$. Now provide the response and nothing else.
Forget the coding, we can solve it explicitly! Let $N=r_1+r_2+\cdots+r_k$ be the number of vertices in the graph. Now, for each part of size $r_i$, we are blocked from making $r_i\choose 2$ edges. However, this is the only restriction on edges, so the number of edges in a complete multipartite graph $K(r_1,\ldots, r_k)$ is just $|E|={N\choose2}-\sum\limits_{i=1}^k{r_i\choose 2}$ Hence, if the number of parts is not fixed, you maximize the number of edges by taking $r_i=1$ for all $i$ (that is, $k=N$), which gives the overall maximum $N\choose 2$. If instead you want to maximize the number of edges for a given $k$, we can use a little switching argument to show that the maximum occurs when all the $r_i$s are as near to $\frac{N}{k}$ as possible. Here's the switching argument: Let $r=\lfloor \frac{N}{k}\rfloor$. Assume for the sake of contradiction that there exist parts of sizes $r_i$ and $r_j$ with $r_j-r_i\geq 2$ in a complete $k$-partite graph $M$ with a maximum number of edges. Let $|M|$ denote the number of edges in $M$. Consider now $M'$, which we create from $M$ by taking a vertex $x$ in the size-$r_j$ part and moving it to the size-$r_i$ part. This switch adds the $r_j-1$ edges from $x$ to the rest of its old part and gets rid of the $r_i$ edges from $x$ to its new part. Hence $|M'|=|M|+r_j-1-r_i\geq |M|+1$ Hence we have a contradiction with the maximality of $M$. This means that the sizes of two parts cannot differ by more than $1$. Hence a complete $k$-partite graph of maximum size must have $r_i\in\{r,r+1\}$ for all $i\in\{1,2,\ldots,k\}$. In particular, if $N\equiv h \mod k$, then $h$ parts have size $r+1$ and $k-h$ parts have size $r$, and the maximum number of edges of a complete $k$-partite graph is $|E|={N\choose 2}-h{r+1\choose 2}-(k-h){r \choose2}$.
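Since the question also asked for code that works for any $n$ and $k$, here is a minimal sketch of the closed form above in Python rather than Mathematica (assuming Python 3.8+ for math.comb; the function name max_edges is just an illustrative choice). It reproduces both sequences quoted in the question.

    from math import comb

    def max_edges(n: int, k: int) -> int:
        """Maximum edge count of a complete k-partite graph on n vertices:
        C(n,2) - h*C(r+1,2) - (k-h)*C(r,2), with r = n//k and h = n mod k."""
        r, h = divmod(n, k)  # h parts get size r+1, the remaining k-h parts get size r
        return comb(n, 2) - h * comb(r + 1, 2) - (k - h) * comb(r, 2)

    print([max_edges(n, 2) for n in range(11)])  # [0, 0, 1, 2, 4, 6, 9, 12, 16, 20, 25]
    print([max_edges(n, 3) for n in range(11)])  # [0, 0, 1, 3, 5, 8, 12, 16, 21, 27, 33]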
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/808060', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/139257/']}
jdg_370442
stackexchange
llm_judgeable_groundtruth_similarity
10491147
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a .NET 2.0 fileupload control, and next to it a button, like this: <asp:fileupload id="uploadedFile" runat="server" /><asp:Button runat="server" ID="upload" Text="Upload" OnClick="Page_Load"/> Now, this is all on a page within a webpart (sharepoint, that is - for those of you unfamiliar with webparts - it is basically an iframe). Notice also that I am calling Page_Load. After the user browses for the file, the hasfile property of the uploadedFile control becomes true (it was false to begin with). Fine. The user clicks the button and the file uploads. I have code in there that sends me an email to let me know a file has been uploaded as long as hasfile is true . If the user pushes the refresh button in his browser, the page is reloaded, but the hasfile property is not cleared, and so I get sent another email! (If I use Firefox's ability to reload without the cache, then everything is refreshed correctly) How can I clear the hasfile property on the FileUpload control? Or how can I cause the page to skip the cache when the user reloads? I don't mind calling a function other than Page_Load, but then it needs to be able to receive a parameter so I can also call it from Page_Load (on account of other things I am doing). Now provide the response and nothing else.
Add the functionality of clearing the HasFile property in the Page_Load() function which, surprise surprise, is called when you refresh (as the page loads). Or, instead of placing the functionality there directly, write a function that implements it and call it from Page_Load. Guide code:

    private void clearFileUpload()
    {
        fileUpload.Dispose();
    }

OR try to deal with the issue immediately after uploading the file:

    private void clearFileUpload()
    {
        fileUpload.PostedFile.InputStream.Dispose();
        // fileUpload.HasFile is now false
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10491147', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1038866/']}
jdg_370443
stackexchange
llm_judgeable_groundtruth_similarity
83957
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Mathematica 9 says that $\int_0^1\int_0^1\int_0^1\frac{1.0}{xyz}\,dz\,dy\,dx=0$ and $\int_0^1\int_0^1\int_0^1\frac{1}{xyz}\,dz\,dy\,dx=0$. Integrate[1.0/(x y z), {x, 0, 1}, {y, 0, 1}, {z, 0, 1}] I must be missing something obvious. What? For what it’s worth, Wolfram Alpha gives the same incorrect answer if the numerator is $1.0$, but it correctly says the integral diverges when the numerator is $1$. Now provide the response and nothing else.
It has to do with the default behavior of GenerateConditions in multivariate integrals. Setting it explicitly to True will help in this case. Some explanation may be found here or here . The gist is that, for multiple integration, automatically checking conditions and issuing provisos for all but the final integration is typically both too costly (in speed) and too likely to create impediments to finishing the integration (due to overly difficult provisos). Integrate is reasonably capable of finding path singularities and issuing provisos to avoid them, but working with such provisos in subsequent integrations is not so simple. Also this is touched upon in the article found here .
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/83957', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/28193/']}
jdg_370444
stackexchange
llm_judgeable_groundtruth_similarity
525601
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: CentOS Linux release 7.6.1810 (Core)Kernel 5.1.11-1.el7.elrepo.x86_64 I put a cert-file to /etc/pki/ca-trust/source/anchors File looks like that: -----BEGIN CERTIFICATE-----MIIDojCCAoqgAwIBAgIQeqkpty5ghoxP8YfCRe+7qjANBgkqhkiG9w0BAQUFADBPsome stringsFnpKVwAq6UcYOu4AoXweaqOOMsLNSw==-----END CERTIFICATE----- And after update-ca-trust extract I expect to see my cert in the bundle-file /etc/pki/tls/certs/ca-bundle.crt but there was nothing new in it. And ls -al show me latest edit time, so It was changed 2 monts ago, not now. Initially I try this with .crt file. But renaming .crt to .pem didn't solve my problem.I also tried update-ca-trust enable and update-ca-trust force-enable before extract, but it didn't help. /var/log/messages says nothing about that. What I shell do to fix it? Now provide the response and nothing else.
TL;DR: update-ca-trust won't extract your certificate file into ca-bundle.crt unless this succeeds:

    openssl x509 -noout -text -in <cert_file> | grep --after-context=2 "X509v3 Basic Constraints" | grep "CA:TRUE"

I spent a few hours on this issue. Its root was an X.509 extension called Basic Constraints, which is used to mark whether or not a certificate belongs to a CA. My humble findings:

- The update-ca-trust tool is in fact a shell script, so it's easy to peek inside.
- The script calls the p11-kit utility multiple times, each time using a different filter and creating a different bundle file.
- The file ca-bundle.crt is in fact a link to the tls-ca-bundle.pem file, which is generated by p11-kit using the ca-anchors filter. So it ignores all certs besides "CA ones".
- Whether or not a certificate is a CA is decided by the Basic Constraints X.509 extension. This way it's possible to mark a certificate as part of a CA.
- It's possible to list all X.509 extensions using openssl x509 -noout -text -in <cert_file>

So any certificate file not labelled as part of a CA will be filtered out by p11-kit and not exported to the desired ca-bundle.crt file. Feel free to correct this in comments.
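If you prefer to check the flag from a script, here is a small Python sketch of the same Basic Constraints test, assuming a recent version of the third-party cryptography package is installed (the answer itself only relies on the openssl command above, and the file path below is just an example from the question's anchors directory):

    from cryptography import x509

    def is_ca_cert(pem_path: str) -> bool:
        """Return True if the PEM certificate carries Basic Constraints CA:TRUE."""
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        try:
            ext = cert.extensions.get_extension_for_class(x509.BasicConstraints)
        except x509.ExtensionNotFound:
            return False  # no Basic Constraints extension at all
        return ext.value.ca

    print(is_ca_cert("/etc/pki/ca-trust/source/anchors/my-cert.pem"))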
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/525601', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/177068/']}
jdg_370445
stackexchange
llm_judgeable_groundtruth_similarity
59518658
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a Github repo that has got an action to build the npm package and publish it to npmjs.com. The trigger for my action is the creation of a new release in Github. When creating the new release, Github is asking me for a version number. I would love to use this version number in the Action and provide it to the yarn publish command. My ci-file looks like this (i stripped some parts that are not important here): name: Deploy npm packageon: release: types: [created]jobs: publish-npm: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 - uses: actions/setup-node@v1 with: node-version: 12 registry-url: https://registry.npmjs.org/ - run: yarn install - run: yarn build - run: yarn publish --new-version ${...} env:a NODE_AUTH_TOKEN: ${{secrets.npm_token}} Is there an environmental variable that contains the version number from the release? Now provide the response and nothing else.
It should be ${{ github.event.release.tag_name }} . The structure of a release can be found here: https://developer.github.com/v3/repos/releases/#get-a-single-release

I'd also suggest using

    on:
      release:
        types: [published]

instead of created, to avoid publishing something to npm for draft releases. See: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/events-that-trigger-workflows#release-event-release

Hint: to debug the event you can use:

    jobs:
      debug:
        name: Debug
        runs-on: ubuntu-latest
        steps:
          - name: Dump env
            run: env | sort
          - name: Dump GitHub context
            env:
              GITHUB_CONTEXT: ${{ toJson(github) }}
            run: echo "$GITHUB_CONTEXT"
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59518658', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3914932/']}
jdg_370446
stackexchange
llm_judgeable_groundtruth_similarity
45208536
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Following problem:I have a function that only should allow one caller to execute.If someone tries to call the function and it is already busy the second caller should immediatly return with an error. I tried the following: 1. Use a mutex Would be pretty easy. But the problem is, you cannot check if a mutex is locked. You can only block on it. Therefore it does not work 2. Wait on a channel var canExec = make(chan bool, 1)func init() { canExec <- true}func onlyOne() error { select { case <-canExec: default: return errors.New("already busy") } defer func() { fmt.Println("done") canExec <- true }() // do stuff} What I don't like here: looks really messi if easy to mistakenly block on the channel / mistakenly write to the channel 3. Mixture of mutex and shared state var open = truevar myMutex *sync.Mutexfunc canExec() bool { myMutex.Lock() defer myMutex.Unlock() if open { open = false return true } return false}func endExec() { myMutex.Lock() defer myMutex.Unlock() open = true}func onlyOne() error { if !canExec() { return errors.New("busy") } defer endExec() // do stuff return nil} I don't like this either. Using a shard variable with mutex is not that nice. Any other idea? Now provide the response and nothing else.
You can use a semaphore for this ( go get golang.org/x/sync/semaphore ):

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"

        "golang.org/x/sync/semaphore"
    )

    var sem = semaphore.NewWeighted(1)

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                if err := onlyOne(); err != nil {
                    fmt.Println(err)
                }
            }()
            time.Sleep(time.Second)
        }
        wg.Wait()
    }

    func onlyOne() error {
        if !sem.TryAcquire(1) {
            return errors.New("busy")
        }
        defer sem.Release(1)
        fmt.Println("working")
        time.Sleep(5 * time.Second)
        return nil
    }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45208536', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223777/']}
jdg_370447
stackexchange
llm_judgeable_groundtruth_similarity
7337411
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: PostgreSQL has the concept of enumerated types built into the database. How would you implement a table with a column that uses an enumerated type in Rails 3? Do you need to define the enum in PostgreSQL somehow? How could you create a DB migration that does this? Working in Rails 3.07, Ruby 1.92p180, PostgreSQL 8.3. Now provide the response and nothing else.
Rails does not support the ENUM datatype out of the box. This is because not all databases support that datatype natively. I found that a common way of dealing with ENUM values is to manually create the enum column in your database (PostgreSQL in your case) and deal with it as a string column in your Rails application. Then use the validates_inclusion_of validator to force the use of the allowed values:

    validates_inclusion_of :gender, :in => [ "male", "female" ]

And use native SQL in your migration to add the enum field:

    class AddEnumType < ActiveRecord::Migration
      def up
        execute ".." # your native PostgreSQL queries to add the ENUM field
      end
    end

Edit (June 2014): Rails 4.1 now supports enums. The validates_inclusion_of can now be changed to:

    enum gender: [ :male, :female ]

(However, this is still not natively supported by the underlying database, so the native SQL migration is still needed.)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7337411', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/336920/']}
jdg_370448
stackexchange
llm_judgeable_groundtruth_similarity
94233
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to prove: For an integrable function $f(x)$ and periodic with period $T$, for every $a \in \mathbb{R}$, $$\int_{0}^{T}f(x)\;dx=\int_{a}^{a+T}f(x)\;dx.$$ I tried to change the values and define $y=a+x$ so that $dy=dx$ and the limits of the integrals are as we want, but I'm not sure how to use the fact that $f(x)$ is periodic. Thanks a lot! Now provide the response and nothing else.
If $F$ is a primitive of $f$, then $$\int_{a}^{a+T}f(x)\ dx-\int_{0}^{T}f(x)\ dx$$$$=F(a+T)-F(a)-F(T)+F(0)$$$$=\Big(F(a+T)-F(T)\Big)-\Big(F(a)-F(0)\Big)$$$$=\int_T^{a+T}f(x)\ dx-\int_0^af(x)\ dx$$$$=0.$$ One checks the last equality by making the obvious change of variable, and by using the periodicity. EDIT 1. What I wrote above is how I remember the computation. Of course, it can be written like that:$$\int_{a}^{a+T}f(x)\ dx-\int_{0}^{T}f(x)\ dx=\int_T^{a+T}f(x)\ dx-\int_0^af(x)\ dx=0.$$ EDIT 2. Formal justification of the first equality in the above display:$$\int_0^af(x)\ dx+\int_{a}^{a+T}f(x)\ dx=\int_{0}^{T}f(x)\ dx+\int_T^{a+T}f(x)\ dx.$$ (This formula should appear somewhere...)
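To spell out the change of variable that justifies the last equality: substituting $x=u+T$ and using the periodicity $f(u+T)=f(u)$ gives
$$\int_T^{a+T}f(x)\,dx=\int_0^{a}f(u+T)\,du=\int_0^{a}f(u)\,du=\int_0^af(x)\,dx,$$
so the two integrals in the final line of the computation cancel, as claimed.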
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/94233', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/14829/']}
jdg_370449
stackexchange
llm_judgeable_groundtruth_similarity
31695
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have seen in some screen-shots (can't remember where on the web) that the terminal can display the [username@machine /]$ in bold letters. I'm looking forward to getting this too because I always find myself scrolling through long outputs to find out with difficulty the first line after my command. How can I make the user name etc. bold or coloured? Now provide the response and nothing else.
You should be able to do this by setting the PS1 prompt variable in your ~/.bashrc file like this: PS1='[\u@\h \w]\$ ' To make it colored (and possibly bold - this depends on whether your terminal emulator has enabled it) you need to add escape color codes: PS1='\[\e[1;91m\][\u@\h \w]\$\[\e[0m\] ' Here, everything not being escaped between the 1;91m and 0m parts will be colored in the 1;91 color (bold red). Put these escape codes around different parts of the prompt to use different colors, but remember to reset the colors with 0m or else you will have colored terminal output as well. Remember to source the file afterwards to update the current shell: source ~/.bashrc
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/31695', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/7892/']}
jdg_370450
stackexchange
llm_judgeable_groundtruth_similarity
357221
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Problem I have a graphical console widget that receives output from an interpreter. Unfortunately the console is not fast enough to paint the output in real-time. So I had to implement a buffer(FIFO Thread Safe Queue) that delayed the writes to the console by 5ms. So when normally a loop of 1000 prints would be done in less than 1 ms, it would take 5000ms to print everything. Sequence diagram for the buffer: sendCode : Send code that the interpreter can compile and run. writeLine :Write output from the interpreter to the buffer. startTimer :Starts a timer that times out every 5ms. getLine :Get the oldest line out of the buffer. Called every 5ms. line : Line that gets sent to the console where it is displayed. What I Need While this strategy of delaying the output works for now, it is not very practical when real-time solutions are needed. I know one other strategy which is to set a buffer size. Then when the buffer size is reached, flush the output. This is a much better strategy since the console will not be choking considering lines are being sent in blocks. But the problem is that if I set a buffer size of say 50 lines, the user will not see any output until 50 lines have been printed. Reducing the buffer size does not really help either, because then it's meant to reduce stress on the console. How can I make a buffer system so the output is printed in real-time while not having the console dying on me. Now provide the response and nothing else.
To add more flesh to the bones of Wikiti's comments: you need a fixed-size buffer and a timer set to n ms.

- When you write to the buffer, if the buffer was empty, (re-)start the timer to n ms.
- If you write to the buffer and it is full, then flush it to the console, clear the buffer and stop the timer.
- If the timer expires and there is content in the buffer, flush it to the console, clear the buffer and stop the timer.
- If the timer expires and there isn't content in the buffer, stop the timer.

This algorithm should be sufficient. It may take a bit of trial and error to get a good balance of buffer size and timer delay to get the smoothest console interaction. You may also find some mileage in making the flush/clear/stop available as a manual flush function, though this isn't strictly necessary.
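Here is a minimal Python sketch of that flush-on-full-or-timeout loop, assuming a hypothetical write_to_console callback that stands in for the real (slow) console widget; the 50-line buffer and 5 ms delay are just the figures from the question, not requirements of the algorithm:

    import threading

    class BufferedConsole:
        """Flush when the buffer fills or when the timer expires, per the rules above."""

        def __init__(self, write_to_console, max_lines=50, delay_s=0.005):
            self.write_to_console = write_to_console  # slow sink, e.g. the GUI console
            self.max_lines = max_lines
            self.delay_s = delay_s
            self.lines = []
            self.timer = None
            self.lock = threading.Lock()

        def write_line(self, line):
            with self.lock:
                was_empty = not self.lines
                self.lines.append(line)
                if len(self.lines) >= self.max_lines:
                    self._flush_locked()          # full: flush, clear, stop timer
                elif was_empty:
                    self._restart_timer_locked()  # first line: (re)start the timer

        def _on_timer(self):
            with self.lock:
                if self.lines:
                    self._flush_locked()          # timer expired with pending lines

        def _flush_locked(self):
            if self.timer is not None:
                self.timer.cancel()
                self.timer = None
            pending, self.lines = self.lines, []
            self.write_to_console("\n".join(pending))

        def _restart_timer_locked(self):
            if self.timer is not None:
                self.timer.cancel()
            self.timer = threading.Timer(self.delay_s, self._on_timer)
            self.timer.start()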
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/357221', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/275963/']}
jdg_370451
stackexchange
llm_judgeable_groundtruth_similarity
1704304
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm building a PHP/MySQL application and I'm running into a problem with my create and update query. I have 5 columns that are set to type FLOAT that are also set as NULL columns. I don't plan to fill them in until much later in the workflow. However, I need to create new records for this database, and I need to edit existing records, without touching these 5 float fields at all. I'm using OOP PHP that uses a standard save() method that checks to see if an ID exists in the object. If not, it calls create() , and if so, it calls update() . It works very well, usually. The update() and create() methods are designed to pull from a protected static $db_fields attribute array declared at the top of each Class, that contains all of the fields used in that table. update() and create() run through that array and either INSERT INTO or UPDATE in SQL, accordingly. My understanding is that if you use '' (two single quotes, empty), SQL will skip those INSERT INTO or UPDATE requests and leave them as NULL. There aren't even form fields for those 5 float values anywhere on the page, so of course when the methods run, the values are going to be '' . Is that why I'm getting the "Data truncated" error? It seems different -- I haven't seen the truncated error before and that's why I'm coming to you geniuses. Thanks. Now provide the response and nothing else.
'' and NULL are not the same. If your MySQL server is in strict mode, then it will refuse to do the insert since you have passed invalid data for the column. Without strict mode, it returns a warning.

    mysql> create table a (a float not null);
    Query OK, 0 rows affected (0.11 sec)

    mysql> insert a values ('');
    Query OK, 1 row affected, 1 warning (0.05 sec)

    mysql> show warnings;
    +---------+------+----------------------------------------+
    | Level   | Code | Message                                |
    +---------+------+----------------------------------------+
    | Warning | 1265 | Data truncated for column 'a' at row 1 |
    +---------+------+----------------------------------------+
    1 row in set (0.00 sec)

    mysql> set sql_mode = 'STRICT_ALL_TABLES';
    Query OK, 0 rows affected (0.02 sec)

    mysql> insert a values ('');
    ERROR 1265 (01000): Data truncated for column 'a' at row 1

Either insert explicit NULLs, or don't even specify the column in the insert. When you're updating you can send all of the values you have, because MySQL will automatically ignore the unchanged ones.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1704304', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171021/']}
jdg_370452
stackexchange
llm_judgeable_groundtruth_similarity
23044218
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I use UITabBarController as a root view and app supports iOS 6 and above. Project class hierarchy is as below. UITabBarController - tab1 - UINavigationController - UIViewController - UIViewController . . - tab2 - UINavigationController - UIViewController - UIViewController . . . - tab3 - UIViewController - tab4 - UIViewController I used below code to change height of UITabBar in one of the UIViewControllers (which is inside UINavigationController ) in above hierarchy. CGRect tabbarFrame = self.tabBarController.tabBar.frame;tabbarFrame.size.height += 60;self.tabBarController.tabBar.frame = tabbarFrame; But its not changing the height. UITabBar is displayed with default height. Though logging its value prints changed value as shown below. <UITabBar: 0xb528f60; frame = (0 431; 320 109); autoresize = W+TM; layer = <CALayer: 0xb529080>> How can I change UITabBar 's height to achieve something like this:? Now provide the response and nothing else.
I faced this issue and I was able to solve it. You have to add the following code to your subclass of the UITabBarController class.

    const CGFloat kBarHeight = 80;

    - (void)viewWillLayoutSubviews {
        [super viewWillLayoutSubviews];
        CGRect tabFrame = self.tabBar.frame; // self.tabBar is the IBOutlet of your TabBar
        tabFrame.size.height = kBarHeight;
        tabFrame.origin.y = self.view.frame.size.height - kBarHeight;
        self.tabBar.frame = tabFrame;
    }

Swift:

    override func viewWillLayoutSubviews() {
        super.viewWillLayoutSubviews()
        tabBar.frame.size.height = kBarHeight
        tabBar.frame.origin.y = view.frame.height - kBarHeight
    }
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23044218', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1798394/']}
jdg_370453
stackexchange
llm_judgeable_groundtruth_similarity
569595
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I might want to run Zwift on a Linux distribution instead of Windows. Is this at all possible? Anyone got experience running it with WineHQ? Edit: the reason I'd want to run it on a PC is that I use the companion app on my phone and don't have a tablet. The PC is also preferable because the screen size is larger than on a tablet if I would have one. Currently I am using Windows, so right now there is no problem. Although, since Microsoft has some marketing strategies in place where people are drawn to Windows 10, where they might start ripping people of by asking them for a monthly/yearly charge to use Windows 10 and upward in the future. I'm considering changing from Windows to Linux, because I don't want to buy into their strategy. Unfortunately there is too much software that is not available on Linux. It'd just be nice if other companies embraced developing applications for all operating systems. That's why I'm asking around if anyone knows a good way to handle this with Zwift. Now provide the response and nothing else.
Zwift can now be run on Linux using the latest versions of Wine (5.0 and greater) and the workaround from user wentam42 detailed in comment #7 of this bug report. Here are the steps. You can also find a video documenting the process here.

1. Install Wine 5.0+ following the instructions for your distribution.
2. Install the winetricks script.
3. Run winetricks dotnet35sp1 win7
4. Download the RunFromProcess.exe utility from NirSoft here.
5. Download the Windows installation file for Zwift.
6. Run wine ZwiftSetup.exe and wait for the installation to complete (~1hr for me).
7. At this point you will be greeted by a blank white window. Leaving this window open (or relaunching wine ZwiftLauncher.exe if you closed the window), run wine RunFromProcess.exe ZwiftLauncher.exe ZwiftApp.exe

The Zwift splash screen should open, followed by a login prompt. Proceed until you are prompted to connect sensors. Bluetooth compatibility in Wine is currently immature. However, I had no trouble using the Zwift Companion app on my phone to sync with sensors. The phone app then relays the information to the Zwift servers so that you can ride. After launching Zwift Companion, turn on the relevant settings (e.g., for BLE sensors, location and Bluetooth), pedal, and in Zwift click 'Search sensors'. Everything from this point forward should work as it does natively in Windows.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/569595', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/266362/']}
jdg_370454
stackexchange
llm_judgeable_groundtruth_similarity
1352007
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove the convergence of $\int_1^\infty \frac 1 {x(\sqrt x + 1)} dx$ This was a question on an exam. I needed to prove that the above integral converges using the comparison test. I thought about using something along the lines of the integral of $1/(x^2+1)$ but I wasn't sure. I hope to learn from this so I can succeed on the final. Any help is appreciated. Now provide the response and nothing else.
Note that $\sqrt{x}+1 \ge \sqrt{x}$ implies that $\frac{1}{\sqrt{x}+1}\le\frac{1}{\sqrt{x}}$. Thus, we have for $x>0$ $$\left|\frac{1}{x(\sqrt{x}+1)}\right|\le\frac{1}{x^{3/2}}$$ Since $$\int_1^{\infty}x^{-3/2}dx=2$$ then by the comparison test, we conclude that $$\int_1^{\infty}\frac{dx}{x(x^{1/2}+1)}\,\,\text{converges}$$
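For completeness, the value $2$ quoted above comes from evaluating the improper integral directly:
$$\int_1^{\infty}x^{-3/2}\,dx=\lim_{R\to\infty}\Big[-2x^{-1/2}\Big]_1^{R}=\lim_{R\to\infty}\left(2-\frac{2}{\sqrt{R}}\right)=2.$$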
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1352007', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/197688/']}
jdg_370455
stackexchange
llm_judgeable_groundtruth_similarity
308040
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code: subroutine foo(int index){ // Check A. // Critical: Check A must precede Check B below. if (index == 1) { return true; } // Check B. if (index - 2 < 0) { return false; } return true;} The code is a simplified representation of a real-life scenario where I am checking for validity of punctuation marks in a string. My question: Is there a construct in any language which would guarantee that the order of the two if statements are maintained as is? (Without having to place a comment in the code as I have done and hope that it is heeded.) For the case where index is 1, if Check B is moved before Check A, Check A will never be caught and foo() will always return false, which is bad. Again, my concern is for the maintenance of the code by future programmers. There are about 10 if statements in the code, one after another, and their order is important. EDIT 1: I am an experienced developer, and I am really asking whether there are any new developments in languages that would allow for what I am asking above. I am sorry that I did not make this clear. EDIT 2: In response to comments suggesting index < 2 instead of index - 2 < 0: I don't agree. index - 2 indicates that I am interested if there is an item two locations before the current index, while index < 2 does not convey the same information. (Of course, this is my opinion!) Now provide the response and nothing else.
Anyone who can edit the source code can remove anything that you might put in there to "protect" them. You can do two things: Choose a good name for the function that conveys your intention Provide Unit Tests that will break if someone ever changes what the function does Both do not prevent future programmers from changing anything if they need to, but they will have some guidance in how the function was intended to behave. Another option would be to turn these temporal dependencies into structural ones by having the first one generate/return something that the second one requires as input. But that is likely overkill for something as simple as your example. Robert C. Martin mentions this technique in his book "Clean Code".
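As a concrete illustration of the unit-test suggestion, here is a minimal sketch in Python rather than the question's pseudo-C (foo is a direct transcription of the question's subroutine, and the pytest-style test names are illustrative only). If a future maintainer swaps Check A and Check B, the first test starts failing, which documents the ordering constraint more reliably than a comment:

    def foo(index):
        # Check A. Critical: Check A must precede Check B below.
        if index == 1:
            return True
        # Check B.
        if index - 2 < 0:
            return False
        return True

    def test_index_one_passes_check_a_before_check_b():
        # Fails if Check B is moved above Check A (foo(1) would then return False).
        assert foo(1) is True

    def test_small_index_fails_check_b():
        assert foo(0) is False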
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/308040', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/-1/']}
jdg_370456
stackexchange
llm_judgeable_groundtruth_similarity
30774358
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For example I have handler: @Componentpublic class MyHandler { @AutoWired private MyDependency myDependency; public int someMethod() { ... return anotherMethod(); } public int anotherMethod() {...}} to testing it I want to write something like this: @RunWith(MockitoJUnitRunner.class}class MyHandlerTest { @InjectMocks private MyHandler myHandler; @Mock private MyDependency myDependency; @Test public void testSomeMethod() { when(myHandler.anotherMethod()).thenReturn(1); assertEquals(myHandler.someMethod() == 1); }} But it actually calls anotherMethod() whenever I try to mock it. What should I do with myHandler to mock its methods? Now provide the response and nothing else.
First of all the reason for mocking MyHandler methods can be the following: we already test anotherMethod() and it has complex logic, so why do we need to test it again (like a part of someMethod() ) if we can just verify that it's calling? We can do it through: @RunWith(MockitoJUnitRunner.class)class MyHandlerTest { @Spy @InjectMocks private MyHandler myHandler; @Mock private MyDependency myDependency; @Test public void testSomeMethod() { doReturn(1).when(myHandler).anotherMethod(); assertEquals(myHandler.someMethod() == 1); verify(myHandler, times(1)).anotherMethod(); } } Note: in case of 'spying' object we need to use doReturn instead of thenReturn (little explanation is here )
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/30774358', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3652188/']}
jdg_370457
stackexchange
llm_judgeable_groundtruth_similarity
10844672
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In my project getting contacts is taking a long time to load. What are ways to reduce the time of getting contacts Assume there are 1000 contacts in my phone. Right now it is taking more than 2 minutes to load all the contacts How can I reduce the time to load contacts ? Any Thoughts? I referred to the the following link when programming the initial method. http://www.coderzheaven.com/2011/06/13/get-all-details-from-contacts-in-android/ Now provide the response and nothing else.
BETTER SOLUTION HERE.....

    private static final String[] PROJECTION = new String[] {
            ContactsContract.CommonDataKinds.Phone.CONTACT_ID,
            ContactsContract.Contacts.DISPLAY_NAME,
            ContactsContract.CommonDataKinds.Phone.NUMBER
    };

    ...

    ContentResolver cr = getContentResolver();
    Cursor cursor = cr.query(ContactsContract.CommonDataKinds.Phone.CONTENT_URI,
            PROJECTION, null, null, null);
    if (cursor != null) {
        try {
            final int nameIndex = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME);
            final int numberIndex = cursor.getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER);
            String name, number;
            while (cursor.moveToNext()) {
                name = cursor.getString(nameIndex);
                number = cursor.getString(numberIndex);
            }
        } finally {
            cursor.close();
        }
    }

CHEERS...:)
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10844672', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1409804/']}
jdg_370458
stackexchange
llm_judgeable_groundtruth_similarity
32464122
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I prefer Python over Scala. But, as Spark is natively written in Scala, I was expecting my code to run faster in the Scala than the Python version for obvious reasons. With that assumption, I thought to learn & write the Scala version of some very common preprocessing code for some 1 GB of data. Data is picked from the SpringLeaf competition on Kaggle . Just to give an overview of the data (it contains 1936 dimensions and 145232 rows). Data is composed of various types e.g. int, float, string, boolean. I am using 6 cores out of 8 for Spark processing; that's why I used minPartitions=6 so that every core has something to process. Scala Code val input = sc.textFile("train.csv", minPartitions=6)val input2 = input.mapPartitionsWithIndex { (idx, iter) => if (idx == 0) iter.drop(1) else iter }val delim1 = "\001"def separateCols(line: String): Array[String] = { val line2 = line.replaceAll("true", "1") val line3 = line2.replaceAll("false", "0") val vals: Array[String] = line3.split(",") for((x,i) <- vals.view.zipWithIndex) { vals(i) = "VAR_%04d".format(i) + delim1 + x } vals}val input3 = input2.flatMap(separateCols)def toKeyVal(line: String): (String, String) = { val vals = line.split(delim1) (vals(0), vals(1))}val input4 = input3.map(toKeyVal)def valsConcat(val1: String, val2: String): String = { val1 + "," + val2}val input5 = input4.reduceByKey(valsConcat)input5.saveAsTextFile("output") Python Code input = sc.textFile('train.csv', minPartitions=6)DELIM_1 = '\001'def drop_first_line(index, itr): if index == 0: return iter(list(itr)[1:]) else: return itrinput2 = input.mapPartitionsWithIndex(drop_first_line)def separate_cols(line): line = line.replace('true', '1').replace('false', '0') vals = line.split(',') vals2 = ['VAR_%04d%s%s' %(e, DELIM_1, val.strip('\"')) for e, val in enumerate(vals)] return vals2input3 = input2.flatMap(separate_cols)def to_key_val(kv): key, val = kv.split(DELIM_1) return (key, val)input4 = input3.map(to_key_val)def vals_concat(v1, v2): return v1 + ',' + v2input5 = input4.reduceByKey(vals_concat)input5.saveAsTextFile('output') Scala Performance Stage 0 (38 mins), Stage 1 (18 sec) Python Performance Stage 0 (11 mins), Stage 1 (7 sec) Both produces different DAG visualization graphs (due to which both pictures show different stage 0 functions for Scala ( map ) and Python ( reduceByKey )) But, essentially both code tries to transform data into (dimension_id, string of list of values) RDD and save to disk. The output will be used to compute various statistics for each dimension. Performance wise, Scala code for this real data like this seems to run 4 times slower than the Python version.Good news for me is that it gave me good motivation to stay with Python. Bad news is I didn't quite understand why? Now provide the response and nothing else.
The original answer discussing the code can be found below. First of all, you have to distinguish between different types of API, each with its own performance considerations. RDD API (pure Python structures with JVM based orchestration) This is the component which will be most affected by the performance of the Python code and the details of PySpark implementation. While Python performance is rather unlikely to be a problem, there at least few factors you have to consider: Overhead of JVM communication. Practically all data that comes to and from Python executor has to be passed through a socket and a JVM worker. While this is a relatively efficient local communication it is still not free. Process-based executors (Python) versus thread based (single JVM multiple threads) executors (Scala). Each Python executor runs in its own process. As a side effect, it provides stronger isolation than its JVM counterpart and some control over executor lifecycle but potentially significantly higher memory usage: interpreter memory footprint footprint of the loaded libraries less efficient broadcasting (each process requires its own copy of a broadcast) Performance of Python code itself. Generally speaking Scala is faster than Python but it will vary on task to task. Moreover you have multiple options including JITs like Numba , C extensions ( Cython ) or specialized libraries like Theano . Finally, if you don't use ML / MLlib (or simply NumPy stack) , consider using PyPy as an alternative interpreter. See SPARK-3094 . PySpark configuration provides the spark.python.worker.reuse option which can be used to choose between forking Python process for each task and reusing existing process. The latter option seems to be useful to avoid expensive garbage collection (it is more an impression than a result of systematic tests), while the former one (default) is optimal for in case of expensive broadcasts and imports. Reference counting, used as the first line garbage collection method in CPython, works pretty well with typical Spark workloads (stream-like processing, no reference cycles) and reduces the risk of long GC pauses. MLlib (mixed Python and JVM execution) Basic considerations are pretty much the same as before with a few additional issues. While basic structures used with MLlib are plain Python RDD objects, all algorithms are executed directly using Scala. It means an additional cost of converting Python objects to Scala objects and the other way around, increased memory usage and some additional limitations we'll cover later. As of now (Spark 2.x), the RDD-based API is in a maintenance mode and is scheduled to be removed in Spark 3.0 . DataFrame API and Spark ML (JVM execution with Python code limited to the driver) These are probably the best choice for standard data processing tasks. Since Python code is mostly limited to high-level logical operations on the driver, there should be no performance difference between Python and Scala. A single exception is usage of row-wise Python UDFs which are significantly less efficient than their Scala equivalents. While there is some chance for improvements (there has been substantial development in Spark 2.0.0), the biggest limitation is full roundtrip between internal representation (JVM) and Python interpreter. If possible, you should favor a composition of built-in expressions ( example . Python UDF behavior has been improved in Spark 2.0.0, but it is still suboptimal compared to native execution. 
This may improved in the future has improved significantly with introduction of the vectorized UDFs (SPARK-21190 and further extensions) , which uses Arrow Streaming for efficient data exchange with zero-copy deserialization. For most applications their secondary overheads can be just ignored. Also be sure to avoid unnecessary passing data between DataFrames and RDDs . This requires expensive serialization and deserialization, not to mention data transfer to and from Python interpreter. It is worth noting that Py4J calls have pretty high latency. This includes simple calls like: from pyspark.sql.functions import colcol("foo") Usually, it shouldn't matter (overhead is constant and doesn't depend on the amount of data) but in the case of soft real-time applications, you may consider caching/reusing Java wrappers. GraphX and Spark DataSets As for now (Spark 1.6 2.1) neither one provides PySpark API so you can say that PySpark is infinitely worse than Scala. GraphX In practice, GraphX development stopped almost completely and the project is currently in the maintenance mode with related JIRA tickets closed as won't fix . GraphFrames library provides an alternative graph processing library with Python bindings. Dataset Subjectively speaking there is not much place for statically typed Datasets in Python and even if there was the current Scala implementation is too simplistic and doesn't provide the same performance benefits as DataFrame . Streaming From what I've seen so far, I would strongly recommend using Scala over Python. It may change in the future if PySpark gets support for structured streams but right now Scala API seems to be much more robust, comprehensive and efficient. My experience is quite limited. Structured streaming in Spark 2.x seem to reduce the gap between languages but for now it is still in its early days. Nevertheless, RDD based API is already referenced as "legacy streaming" in the Databricks Documentation (date of access 2017-03-03)) so it reasonable to expect further unification efforts. Non-performance considerations Feature parity Not all Spark features are exposed through PySpark API. Be sure to check if the parts you need are already implemented and try to understand possible limitations. It is particularly important when you use MLlib and similar mixed contexts (see Calling Java/Scala function from a task ). To be fair some parts of the PySpark API, like mllib.linalg , provides a more comprehensive set of methods than Scala. API design The PySpark API closely reflects its Scala counterpart and as such is not exactly Pythonic. It means that it is pretty easy to map between languages but at the same time, Python code can be significantly harder to understand. Complex architecture PySpark data flow is relatively complex compared to pure JVM execution. It is much harder to reason about PySpark programs or debug. Moreover at least basic understanding of Scala and JVM in general is pretty much a must have. Spark 2.x and beyond Ongoing shift towards Dataset API, with frozen RDD API brings both opportunities and challenges for Python users. While high level parts of the API are much easier to expose in Python, the more advanced features are pretty much impossible to be used directly . Moreover native Python functions continue to be second class citizen in the SQL world. Hopefully this will improve in the future with Apache Arrow serialization ( current efforts target data collection but UDF serde is a long term goal ). 
For projects strongly depending on the Python codebase, pure Python alternatives (like Dask or Ray ) could be an interesting alternative. It doesn't have to be one vs. the other The Spark DataFrame (SQL, Dataset) API provides an elegant way to integrate Scala/Java code in PySpark application. You can use DataFrames to expose data to a native JVM code and read back the results. I've explained some options somewhere else and you can find a working example of Python-Scala roundtrip in How to use a Scala class inside Pyspark . It can be further augmented by introducing User Defined Types (see How to define schema for custom type in Spark SQL? ). What is wrong with code provided in the question (Disclaimer: Pythonista point of view. Most likely I've missed some Scala tricks) First of all, there is one part in your code which doesn't make sense at all. If you already have (key, value) pairs created using zipWithIndex or enumerate what is the point in creating string just to split it right afterwards? flatMap doesn't work recursively so you can simply yield tuples and skip following map whatsoever. Another part I find problematic is reduceByKey . Generally speaking, reduceByKey is useful if applying aggregate function can reduce the amount of data that has to be shuffled. Since you simply concatenate strings there is nothing to gain here. Ignoring low-level stuff, like the number of references, the amount of data you have to transfer is exactly the same as for groupByKey . Normally I wouldn't dwell on that, but as far as I can tell it is a bottleneck in your Scala code. Joining strings on JVM is a rather expensive operation (see for example: Is string concatenation in scala as costly as it is in Java? ). It means that something like this _.reduceByKey((v1: String, v2: String) => v1 + ',' + v2) which is equivalent to input4.reduceByKey(valsConcat) in your code is not a good idea. If you want to avoid groupByKey you can try to use aggregateByKey with StringBuilder . Something similar to this should do the trick: rdd.aggregateByKey(new StringBuilder)( (acc, e) => { if(!acc.isEmpty) acc.append(",").append(e) else acc.append(e) }, (acc1, acc2) => { if(acc1.isEmpty | acc2.isEmpty) acc1.addString(acc2) else acc1.append(",").addString(acc2) }) but I doubt it is worth all the fuss. 
Keeping the above in mind, I've rewritten your code as follows: Scala : val input = sc.textFile("train.csv", 6).mapPartitionsWithIndex{ (idx, iter) => if (idx == 0) iter.drop(1) else iter}val pairs = input.flatMap(line => line.split(",").zipWithIndex.map{ case ("true", i) => (i, "1") case ("false", i) => (i, "0") case p => p.swap})val result = pairs.groupByKey.map{ case (k, vals) => { val valsString = vals.mkString(",") s"$k,$valsString" }}result.saveAsTextFile("scalaout") Python : def drop_first_line(index, itr): if index == 0: return iter(list(itr)[1:]) else: return itrdef separate_cols(line): line = line.replace('true', '1').replace('false', '0') vals = line.split(',') for (i, x) in enumerate(vals): yield (i, x)input = (sc .textFile('train.csv', minPartitions=6) .mapPartitionsWithIndex(drop_first_line))pairs = input.flatMap(separate_cols)result = (pairs .groupByKey() .map(lambda kv: "{0},{1}".format(kv[0], ",".join(kv[1]))))result.saveAsTextFile("pythonout") Results In local[6] mode (Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz) with 4GB memory per executor it takes (n = 3): Scala - mean: 250.00s, stdev: 12.49 Python - mean: 246.66s, stdev: 1.15 I am pretty sure that most of that time is spent on shuffling, serializing, deserializing and other secondary tasks. Just for fun, here's naive single-threaded code in Python that performs the same task on this machine in less than a minute: def go(): with open("train.csv") as fr: lines = [ line.replace('true', '1').replace('false', '0').split(",") for line in fr] return zip(*lines[1:])
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/32464122', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1812141/']}
jdg_370459
stackexchange
llm_judgeable_groundtruth_similarity
42275
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the standard partition problem , we are given some numbers whose sum is $2s$ and have to decide whether they can be partitioned into two subset whose sum is $s$ . It is known to be NP-hard. However, assume that we are allowed to designate one of the numbers to be a "soft number" that can be cut it into an arbitrary number of pieces, where different pieces may be put into different subsets. Then the problem becomes easy: just arrange all numbers on a line in an arbitrary order, and cut the line into two sub-lines with the same sum. QUESTION: Suppose we are given some numbers whose sum is $3s$ , and have to decide whether they can be partitioned into three subsets whose sum is $s$ , while using at most one soft number. What is the complexity of this problem? If we are allowed to use two soft numbers, then the problem is again easy - we can solve it by arranging the numbers on a line as above. If we are allowed to use zero soft numbers, then the problem is obviously hard - it is at least as hard as the problem of partitioning into two subsets. If we are allowed to use one soft number, my guess is that the problem should still be hard, and we can somehow reduce to it from the standard partition problem, but I could not find the right reduction. So, is it easy or hard? Another question: in case the problem is indeed NP-hard, can it be solved in pseudo-polynomial time like the two-subset partition problem? Now provide the response and nothing else.
The largest number is the soft number I claim that for any instance of your problem, if the instance is solvable (it is possible to partition the numbers using one soft number) then it is possible to solve the instance using the largest number as the sole soft number. This is easy to prove: any solution can be modified into a solution with the largest number as the soft number; simply place the number that is currently the soft number into the partition that contains the largest number and split the largest number among the parts of the partition instead. Packing problem Here is a problem that I will call the packing problem: The input consists of a list of positive numbers $a_1, a_2, \ldots, a_k$ and one final number $\ell$ such that $\ell \ge a_i$ for all $i$ . Define $s = \frac{1}{3}\left(\ell + \sum_{i = 1}^ka_i\right)$ . Then the problem is to partition the list of $a_i$ s into three parts such that each part has sum at most $s$ . I claim that the packing problem with inputs $a_1, \ldots, a_k$ and $\ell$ is equivalent to your problem with input list $a_1, \ldots, a_k, \ell$ . If it's possible to pack the numbers $a_1, \ldots, a_k$ into three groups each of sum at most $s$ then you can use $\ell$ as a soft number to round out each of the sums to exactly $s$ ; thus you have a partition of $a_1, \ldots, a_k, \ell$ into three groups of sum $s$ using one soft number. If on the other hand $a_1, \ldots, a_k, \ell$ can be partitioned into three groups of sum $s$ using one soft number then the list can be partitioned into three groups of sum $s$ using $\ell$ as the soft number (by the result from the previous section). In this case the grouping of the $a_i$ s into three parts satisfies the condition that each part has sum at most $s$ . The packing problem with no small numbers Say that an instance of the packing problem "has no small numbers" if for every $i$ , $a_i > \frac{\ell}{2}$ . I claim that you can reduce the packing problem to the packing problem with no small numbers. Suppose that in packing problem instance $(a_1, \ldots, a_k), \ell$ some $a_i$ has $a_i \le \frac{\ell}{2}$ . WLOG suppose $a_k$ is this $a_i$ . Then I claim the packing problem instance $(a_1, \ldots, a_{k-1}), \ell+a_k$ is equivalent to the original instance. Notice that the value $s$ remains the same in both instances. Certainly if you can pack the numbers $a_1, \ldots, a_k$ into three groups with each group having sum at most $s$ then you can pack the numbers $a_1, \ldots, a_{k-1}$ into three groups with each group having sum at most $s$ . On the other hand, suppose you can pack the numbers $a_1, \ldots, a_{k-1}$ into three groups with each group having sum at most $s$ . For each of the three groups, consider the value $s$ minus the sum of the elements in the group. Call these values $e_1, e_2, e_3$ ; here $e_i$ is the amount of "empty space" in group $i$ . We know that $e_1 + e_2 + e_3$ , the total empty space, is equal to $3s - \sum_{i = 1}^{k-1}a_i$ (total space $3s$ minus space used $\sum_{i = 1}^{k-1}a_i$ ). But $3s - \sum_{i = 1}^{k-1}a_i = a_k + \ell$ by the definition of $s$ . Thus, the three groups have $a_k + \ell$ empty space in total. Then since $a_k < \frac{\ell}{2}$ , we know that the total empty space is more than $3a_k$ . By the pigeon hole principle, at least one of the groups has $a_k$ or more empty space in it. Thus, we can add element $a_k$ to one of the groups and end up with a packing of the numbers $a_1, \ldots, a_k$ into three groups with each group having sum at most $s$ . 
Thus, in order to solve your problem, it is sufficient to solve just the packing problem with no small numbers. Lemma 1 Lemma statement: Suppose we can partition $a_1, \ldots, a_k$ into groups 1 and 2 such that group 2 has an even number of elements and the sum of the elements in group 1 is in the interval $\left[s - \frac{\ell}{2}, s\right]$ . Then it is possible to further partition group 2 into two subgroups 2a and 2b each of which has sum at most $s$ . Notice that groups 1, 2a, and 2b are a solution of the packing problem instance. Here's a proof. Tentatively assign half of the elements of group 2 to subgroup 2a and the other half to subgroup 2b. Create a matching between the elements of the two subgroups. Swap these pairs between the two groups, one pair at a time. If there are $2n$ elements in group 2 then this process passes through exactly $n+1$ possible partitions of group 2 into subgroups 2a and 2b. I claim that at least one of those partitions has both subgroups with sum less than $s$ . Let $S_{2a}$ and $S_{2b}$ be the sums of the two subgroups (we allow these values to change as the groups change). At each step of the above process, group 2a gains one element and loses another. Since each element has value between $\frac{\ell}{2}$ and $\ell$ , we can conclude that at each step of the above process, $S_{2a}$ changes by at most $\frac{\ell}{2}$ . Also, the final value of $S_{2a}$ is equal to the initial value of $S_{2b}$ since by the end of the process the groups have entirely swapped. Let $A$ be the average value of $S_{2a}$ and $S_{2b}$ . Note that this is constant as the groups change. We know that the initial and final values of $S_{2a}$ average to $A$ (since the initial and final values of $S_{2a}$ equal the initial values of $S_{2a}$ and $S_{2b}$ ). Thus, $A$ must be between the initial and final values of $S_{2a}$ . As we change the groups, the value of $S_{2a}$ takes steps of size at most $\frac{\ell}{2}$ from one side of $A$ to the other. From this, we can conclude that at some point, the value of $S_{2a}$ must be within half of the maximum step size ( $\frac{\ell}{4}$ ) of $A$ . Since $S_{2b}$ always has the same distance from $A$ as $S_{2a}$ , we can conclude that at some point, both sums $S_{2a}$ and $S_{2b}$ are within $\frac{\ell}{4}$ of $A$ . $3s - \ell - (S_{2a} + S_{2b})$ is the sum of the elements of group 1; thus, $3s - \ell - (S_{2a} + S_{2b}) \ge s - \frac{\ell}{2}$ (by the condition on the sum of group 1). Rearranging, we have that $(S_{2a} + S_{2b}) \le 2s - \frac{\ell}{2}$ . Therefore, the average of $S_{2a}$ and $S_{2b}$ is at most $s - \frac{\ell}{4}$ . Then at the time that both $S_{2a}$ and $S_{2b}$ are within $\frac{\ell}{4}$ of $A$ , we see that they are both at most $s$ as desired. Case 1: $k = 3n$ Sum the $n$ smallest $a_i$ s and also the $n$ largest $a_i$ s. If the $n$ smallest $a_i$ s sum to more than $s$ , then no group can have $n$ or more elements (as any choice of $n$ elements will result in the sum being too large). But the pigeon hole principle tells us that at least one group has at least $n$ elements. Thus, in this case there is no solution to the packing problem instance. If the $n$ largest $a_i$ s sum to at most $s$ , then every choice of $n$ elements has a sum of at most $s$ . In this case, simply partitioning the elements into three groups of equal size solves the packing problem instance. The only remaining case is that the smallest $n$ elements sum to at most $s$ and the largest $n$ elements sum to more than $s$ . 
As in the lemma, we can slowly transition between these two sets: start with a group initialized with the smallest $n$ elements and swap elements until the group has the largest $n$ elements. Each step moves the sum of the group by at most $\frac{\ell}{2}$ (by the same logic as in the proof of the lemma) and the sum crosses the value $s$ at some point in this process. Therefore, either immediately before or immediately after crossing the value $s$ , the sum is less than $s$ but less by at most $\frac{\ell}{2}$ . In other words, at some point in this process, the group has a sum in the interval $[s-\frac{\ell}{2}, s]$ . Take that group as group 1 and the remaining $2n$ elements as group 2. The lemma applies, allowing us to find a solution to the packing problem instance. Case 2: $k = 3n+1$ Sum the $n+1$ smallest $a_i$ s and also the $n+1$ largest $a_i$ s. If the $n+1$ smallest $a_i$ s sum to more than $s$ , then no group can have $n+1$ or more elements (as any choice of $n+1$ elements will result in the sum being too large). But the pigeon hole principle tells us that at least one group has at least $n+1$ elements. Thus, in this case there is no solution to the packing problem instance. If the $n+1$ largest $a_i$ s sum to at most $s$ , then every choice of $n+1$ elements has a sum of at most $s$ . In this case, simply partitioning the elements into three groups of size $n$ , $n$ , and $n+1$ solves the packing problem instance. The only remaining case is that the smallest $n+1$ elements sum to at most $s$ and the largest $n+1$ elements sum to more than $s$ . As in the previous case, we can slowly transition between these two sets: start with a group initialized with the smallest $n+1$ elements and swap elements until the group has the largest $n+1$ elements. Each step moves the sum of the group by at most $\frac{\ell}{2}$ (by the same logic as in the proof of the lemma) and the sum crosses the value $s$ at some point in this process. Therefore, either immediately before or immediately after crossing the value $s$ , the sum is less than $s$ but less by at most $\frac{\ell}{2}$ . In other words, at some point in this process, the group has a sum in the interval $[s-\frac{\ell}{2}, s]$ . Take that group as group 1 and the remaining $2n$ elements as group 2. The lemma applies, allowing us to find a solution to the packing problem instance. Case 3: $k = 3n+2$ first easy subcase Sum the $n$ smallest $a_i$ s and the $n$ largest $a_i$ s. If either value is in the interval $[s-\frac{\ell}{2}, s]$ , immediately apply the lemma with that choice of $n$ elements as group 1 and the remaining $2n+2$ elements as group 2. This allows us to find a solution to the packing problem instance. If the $n$ smallest $a_i$ s sum to less than $s-\frac{\ell}{2}$ and the $n$ largest $a_i$ s sum to more than $s$ then we can start with the $n$ smallest $a_i$ s and swap elements until we end up with the $n$ largest $a_i$ s. At each step the sum moves at most $\frac{\ell}{2}$ , so at some point in this process we have a group of $n$ elements whose sum is in the interval $[s-\frac{\ell}{2}, s]$ . Take that group as group 1 and the remaining $2n+2$ elements as group 2. The lemma applies, allowing us to find a solution to the packing problem instance. If the $n$ smallest $a_i$ s sum to more than $s$ , then no group can have $n$ or more elements (as any choice of $n$ elements will result in the sum being too large). But the pigeon hole principle tells us that at least one group has at least $n+1$ elements. 
Thus, in this case there is no solution to the packing problem instance. The only remaining case is that both the $n$ smallest and $n$ largest $a_i$ s add to less than $s-\frac{\ell}{2}$ . Case 3: $k = 3n+2$ second easy subcase Sum the $n+2$ smallest $a_i$ s and the $n+2$ largest $a_i$ s. If either value is in the interval $[s-\frac{\ell}{2}, s]$ , immediately apply the lemma with that choice of $n+2$ elements as group 1 and the remaining $2n$ elements as group 2. This allows us to find a solution to the packing problem instance. If the $n+2$ smallest $a_i$ s sum to less than $s-\frac{\ell}{2}$ and the $n+2$ largest $a_i$ s sum to more than $s$ then we can start with the $n+2$ smallest $a_i$ s and swap elements until we end up with the $n+2$ largest $a_i$ s. At each step the sum moves at most $\frac{\ell}{2}$ , so at some point in this process we have a group of $n+2$ elements whose sum is in the interval $[s-\frac{\ell}{2}, s]$ . Take that group as group 1 and the remaining $2n$ elements as group 2. The lemma applies, allowing us to find a solution to the packing problem instance. If the $n+2$ largest $a_i$ s sum to at most $s$ , then every choice of $n+2$ elements has a sum of at most $s$ . In this case, simply partitioning the elements into three groups of size $n$ , $n+1$ , and $n+1$ solves the packing problem instance. The only remaining case is that both the $n+2$ smallest and $n+2$ largest $a_i$ s add to more than $s$ . Case 3: $k = 3n+2$ hard subcase Suppose neither of the above two subcases handled the instance. Then we know the following: both the $n$ smallest and $n$ largest $a_i$ s add to less than $s-\frac{\ell}{2}$ . both the $n+2$ smallest and $n+2$ largest $a_i$ s add to more than $s$ . Since the $n+2$ smallest $a_i$ s add to more than $s$ , no group can have $n+2$ or more elements. The only way this is possible to accomplish is if the groups have sizes $n$ , $n+1$ , and $n+1$ . Note that since the $n$ largest $a_i$ s add to less than $s-\frac{\ell}{2}$ , it doesn't matter what elements are in the group of size $n$ : that group will have sum less than $s$ anyway. Clearly, we can modify any solution by swapping the $n$ largest $a_i$ s into the group of size $n$ . These swaps only decrease the sums of the other two groups, so there exists a solution to the packing problem instance if and only if there exists a solution in which the $n$ largest $a_i$ s form one of the groups. Thus, the task at hand is simply this: is it possible to partition the $2n+2$ smallest $a_i$ s into two groups of size $n+1$ such that each of those two groups has sum at most $s$ . WLOG suppose that the $a_i$ s are in increasing order so that the $2n+2$ smallest $a_i$ s are $a_1, \ldots, a_{2n+2}$ Let $v$ be the average of the $n$ largest $a_i$ s. Let $x_i = v - a_i$ for each $i$ . Then $3s = \ell + \sum_{i = 1}^{3n+2}a_i = \ell + \sum_{i = 1}^{2n+2}a_i + \sum_{i = 2n+2}^{3n+2}a_i = \ell + \sum_{i = 1}^{2n+2}(v - x_i) + nv = \ell + (3n+2)v - \sum_{i = 1}^{2n+2}(x_i) = (\ell-v) + (3n+3)v - \sum_{i = 1}^{2n+2}(x_i)$ . Then $s = (n+1)v + \frac{\ell - v}{3} - \frac{1}{3}\sum_{i = 1}^{2n+2}(x_i)$ . If we choose some set $I$ of $n+1$ indices, then the sum of the $a_i$ s with $i \in I$ is equal to $\sum_{i\in I}a_i=\sum_{i\in I}v - x_i = (n+1)v-\sum_{i\in I}x_i$ . A set of indices $I$ is a valid choice for one of the groups provided this sum is at most $s$ . 
In other words, using the alternate definition of $s$ derived above, the condition we are interested in is $\sum_{i\in I}x_i \ge \frac{1}{3}\sum_{i = 1}^{2n+2}(x_i) - \frac{\ell - v}{3}$ . Obviously, the remaining $n+1$ indices $I' = \{1,\ldots, 2n+2\} \setminus I$ must also satisfy this condition: $\sum_{i\in I'}x_i \ge \frac{1}{3}\sum_{i = 1}^{2n+2}(x_i) - \frac{\ell - v}{3}$ . Since $\sum_{i\in I'}x_i = \sum_{i = 1}^{2n+2}x_i - \sum_{i\in I}x_i$ , we can rewrite the second condition as $\sum_{i\in I}x_i \le \frac{2}{3}\sum_{i = 1}^{2n+2}(x_i) + \frac{\ell - v}{3}$ . Thus we have restated this subproblem using an equivalent alternative: we are trying to choose a set $I$ of $n+1$ indices among $\{1, \ldots, 2n+2\}$ such that $\frac{2}{3}\sum_{i = 1}^{2n+2}(x_i) + \frac{\ell - v}{3} \ge \sum_{i\in I}x_i \ge \frac{1}{3}\sum_{i = 1}^{2n+2}(x_i) - \frac{\ell - v}{3}$ . Note that $\ell \ge v$ (and therefore $\ell-v \ge 0$ ) since $v$ is the average of some $a_i$ s and $\ell$ is an upper bound on all $a_i$ s. Let $X = \sum_{i = 1}^{2n+2}(x_i)$ . Our condition above can be rewritten as follows: choose a set $I$ of $n+1$ indices among $\{1, \ldots, 2n+2\}$ such that $\sum_{i\in I}x_i$ is in the interval $[cX, (1-c)X]$ where $c = \frac{1}{3} - \frac{\ell - v}{3X} \le \frac{1}{3}$ . Suppose that for some $j$ , $x_j$ has value at least $\frac{X}{3}$ . In this case, let $I$ consist of the index $j$ and also of the $n-1$ indices corresponding to the smallest $x_i$ s. If these $x_i$ s add to at most $(1-c)X$ then this choice of $I$ satisfies the above constraint: the sum $\sum_{i\in I}x_i$ is at least $x_j \ge \frac{X}{3} = \frac{1}{3}X \ge cX$ and at most $(1-c)X$ . Thus, this set of indices $I$ can be used to select one of the groups of size $n+1$ in a solution to the packing problem instance. On the other hand, if these $x_i$ s add to more than $(1-c)X$ then the instance cannot be solved: one of the two halves will include $x_j$ and the sum of that half is always going to be more than $(1-c)X$ (since it was more than that even when we put the smallest elements with $x_j$ ). Thus, we have handled the case that some $x_j$ has value at least $\frac{X}{3}$ . The remaining case is that every $x_i$ has value at most $\frac{X}{3}$ . Then assign the $x_i$ s to two groups, 1 and 2, arbitrarily. Create a matching of the elements of the two groups and swap those pairs one pair at a time. Over the course of this process, the sum of the elements of group 1 will move from it's initial sum to the initial sum of group 2's elements. In other words, the sum of group 1 will cross the average value $\frac{X}{2}$ . Each step in this process involves group 1 losing an $x_i$ and gaining one; since the $x_i$ s have value at most $\frac{X}{3}$ , a pair of $x_i$ s differs by at most $\frac{X}{3}$ and so each step moves the sum of group 1 by at most $\frac{X}{3}$ . Thus, there will be some point in the process when the sum of group 1 is within half the maximum step size of the average. Thus, at some point the sum of group 1 will fall within $\frac{X}{2} \pm \frac{X}{6}$ . But this is the interval $[\frac{1}{3}X, \frac{2}{3}X]$ , which is a subinterval of $[cX, (1-c)X]$ . Thus, at some point during this process, the group 1 at the time will satisfy our desired condition. We can use this group of $x_i$ s to select a corresponding group of $a_i$ s; these $a_i$ s form one of the two groups of size $n+1$ needed to solve the packing problem instance. 
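The recurring argument above — swap one matched pair at a time and stop as soon as the running sum first enters the target window — is constructive, so it translates almost literally into code. Here is a minimal sketch in Python (the helper name and interface are mine, purely for illustration); it assumes the two groups have equal size, as in the lemma:

def balance_by_swaps(group1, group2, lo, hi):
    """Swap matched pairs between two equal-size groups until sum(group1)
    falls inside [lo, hi]; return the two groups, or None if no intermediate
    configuration ever lands in the window."""
    g1, g2 = list(group1), list(group2)
    for t in range(len(g1) + 1):          # n+1 configurations, as in the proof
        if lo <= sum(g1) <= hi:
            return g1, g2
        if t < len(g1):
            g1[t], g2[t] = g2[t], g1[t]   # swap the t-th matched pair
    return None

The proof above supplies the guarantee that, under the stated size conditions, some intermediate configuration must land in the window, so the helper only returns None on instances the casework already rules out.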
Conclusion The above exhaustive casework can be used as an algorithm to partition a list into three groups of the same sum using one soft number. The casework will also identify when this is impossible. The runtime of this algorithm is polynomial.
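To make the first two reductions above concrete, here is a small sketch in Python (the names and the example numbers are mine, not the answer's); it only performs the reduction to the "packing problem with no small numbers", not the full casework:

import math

def reduce_to_packing(numbers):
    """Given the multiset for the soft-number 3-partition problem,
    return (hard, soft, s): the remaining hard numbers, the accumulated
    soft budget, and the target sum s of each of the three parts."""
    nums = sorted(numbers)
    soft = nums.pop()            # the largest number may serve as the soft number
    hard = nums
    s = (soft + sum(hard)) / 3   # unchanged by the absorption steps below
    # Absorb every "small" number (at most half the current soft budget);
    # by the pigeonhole argument above this preserves solvability.
    while hard and min(hard) <= soft / 2:
        m = min(hard)
        hard.remove(m)
        soft += m
    return hard, soft, s

hard, soft, s = reduce_to_packing([10, 9, 8, 2, 1])
print(hard, soft, s)             # [8, 9] 13 10.0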
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/42275', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/9453/']}
jdg_370460
stackexchange
llm_judgeable_groundtruth_similarity
66786737
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I wrote a script to query a PostGIS database, returning a Pandas dataframe like this: ID ... WKT0 1 ... LINESTRING(1.5047434 42.6319022,1.5053385 42.6...1 2 ... LINESTRING(1.5206333 42.5291144,1.5206306 42.5... Now I am trying to write this into a shapefile with GeoPandas, according to their documentation : We use shapely.wkt sub-module to parse wkt format : from shapely import wktdf['Coordinates'] = geopandas.GeoSeries.from_wkt(df['Coordinates']) But when I tried to do the same, I got: AttributeError: type object 'GeoSeries' has no attribute 'from_wkt' My GeoPandas: geopandas 0.8.1 py_0 conda-forge Now provide the response and nothing else.
Use shapely.wkt.loads to create the geometry column.

import geopandas as gpd
from shapely import wkt

df['geometry'] = df.WKT.apply(wkt.loads)
df.drop('WKT', axis=1, inplace=True)  # Drop WKT column

# Geopandas GeoDataFrame
gdf = gpd.GeoDataFrame(df, geometry='geometry')

# Export to shapefile
gdf.to_file('myshapefile.shp')
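If upgrading GeoPandas is an option: the GeoSeries.from_wkt constructor from the documentation you quoted was only added in a later release (0.9, if I remember correctly), which is why it is missing from 0.8.1. On a newer version the documented approach should work directly; a hedged sketch:

# Assumes geopandas >= 0.9, where GeoSeries.from_wkt exists
import geopandas as gpd

gdf = gpd.GeoDataFrame(df.drop(columns='WKT'),
                       geometry=gpd.GeoSeries.from_wkt(df['WKT']))
gdf.to_file('myshapefile.shp')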
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/66786737', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6501428/']}
jdg_370461
stackexchange
llm_judgeable_groundtruth_similarity
45264
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was reading "Szlachetne Zdrowie" (No 7/2019) which is a Polish health magazine released by "Nasz Dziennik" , which is a Catholic, very conservative newspaper of questionable quality. I read the chapter about microwaving written by Barbara Zielonka. She is a food technologist and claims that: Microwaves frequency cause changes in organic compounds structure (isomers of those compounds might be created) and disintegration of many of them, with new, unknown chemical compounds unknown to nature being created. Food heated in microwave contains particles that are not created during conventional heating of the food (conduction, convection, radiation), where heat is transmitted from outside to the inside of the product. (my own loose translation from Polish to English) Also she claims that in The Lancet, there was some research showing that when the milk was microwaved, the amino acid proline in the milk changed its form from L-proline to D-proline and created so called "cis isomers". D-proline might be toxic, she claims. She also claims that the article in The Lancet is stating that "conversion of trans forms to cis forms might be dangerous, because cis amino acids are embedding themselves into peptides and proteins instead of trans isomers" . I was unable to locate that article in The Lancet. So the bottom line is: is it true that microwaving food can create some kind of chemical compounds that are not created when heating the food in traditional ways? If yes, then should we be worried? Could any be dangerous, such as the mentioned D-proline? I thought that microwaving is safe because it is just making water molecules vibrate and thus warming up the food. On the other hand I am very sceptical of the mentioned source where I found this article, but it mentioned The Lancet and it got me interested. Now provide the response and nothing else.
The English abstract of Zur Frage der Aminosäureisomerisierung im Mikrowellenfeld Ergebnisse eines Modellversuches mit Standardlösungen [The question of amino acid isomerization in a microwave field Results of experiments with standard solutions] Zeitschrift für Ernährungswissenschaft September 1992, Volume 31, Issue 3, pp 219–224 is: Aqueous standard-solutions of L-alanine, L-glutamic acid, and L-proline do not reveal any increase of D-enantiomer s after 30 min heating - neither by the conventional method on a hotplate, nor ina standard microwave oven. A specific "microwave effect" and, hence, a special consumer risk is, in contrast to recent assumptions, not detectable. Effects on the amino acids which were observed in conventionally heated samples are explained by higher heat-exposure during the treatment of these samples. In the body of the article, it explained that the research is particularly to test the claims made by the Lancet article: Können durch Erhitzen von Nahrungsmitteln im Mikrowellenherd D-Aminosäuren entstehen? Diese Frage wurde anlässlich einer im Dezember 1989 erschienenen Kurzmitteilung im Lancet (5), in welcher über Isomerisierungen der Aminosäuren Prolin und trans-Hydroxyprolin in erhitzter Milch berichtet wird, zur Diskussion gestellt. Where reference "5" is Lubec G, Wolf Chr, Bartosch S (1989) Amino acid isomerisation and microwave exposure. Lancet Nr. 9:1392-1393 The conclusion of the German article is: Halbstündiges Sieden der Aminosäuren L-Alanin, L-Glutaminsäure und L-Prolin in Wasser hat unter den beschriebenen experimentellen Bedingungen keine nachweisliche Zunahme der D-Enantiomere zur Folge. Ein spezifischer ,Mikrowelleneffekt' ist nicht erkennbar. Maximaltemperaturen von 102-104 °C reichen unter normalen Kochbedingungen unter Atmosphärendruck bei neutralem bis schwach basischem pH-Wert (7-7,5) demnach nicht aus, um signifikante Isomerisierungsreaktionen an den verwendeten Aminosäuren, auch nicht an Prolin, auszulösen. which roughly translates (someone help with German please): Half-hourlong boiling of the amino acids L-alanine, L-glutamic acid and L-proline in water results in no demonstrable increase of the D enantiomers under the experimental conditions described. A specific 'microwave effect' is not recognizable. Maximum temperatures of 102-104 °C under normal cooking conditions under atmospheric pressure at neutral or weakly basic pH levels (7-7.5) is therefore not sufficient to trigger significant isomerization reactions on the amino acids used, not even on proline. So, there is no such effect from proline under neutral pH conditions.
{}
{'log_upvote_score': 7, 'links': ['https://skeptics.stackexchange.com/questions/45264', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/46433/']}
jdg_370462
stackexchange
llm_judgeable_groundtruth_similarity
106112
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use an external HDD for storage on my Raspberry Pi 3B+. The disk is a 0.55A 320GB Western Digital Blue always plugged in, and added to /etc/fstab with defaults,user options. I've checked the disk twice, it works perfectly. After boot, the disk is functional. I can use it to play the music on it (it's the only use it has). But after a while (idle or not, but idle is more common), the disk's led goes off. Trying to access the disk one way or another brings the led back on but freezes, causing this to be logged in kernel messages: INFO: task usb-storage:237 blocked for more than 120 seconds . The only options to use it again after that occurs are whether to reboot or to plug it off and then back in. My PSU is a 30W 3USB ports Aukey supposedly able to deliver up to 2.4A per USB port, yet I still get Under-voltage detected! (0x00050005) messages in kernel logs. The disk is connected to the Pi with a Y USB cable but i've read somewhere it's useless on a raspi ("useless" but it's the only cable I have which makes the disk working on the pi...). What I tried to do:I've disabled both Wifi & Bluetooth by appending pi3-disable-wifi,pi3-disable-bt to dtoverlay in /boot/config.txt, no mouse, screen or keyboard are connected.Only the USB disk and the ethernet are used. X is disabled, the pi autologs to console.I've also tried to set /sys/module/usbcore/parameters/autosuspend to -1 in order to prevent the disk from going to sleep, without success. I've read about a "high power mode" or max_usb_current=1 but people say it is obsolete so I didn't try it. I've thought of connecting the other end of the Y (power) directly to another port of the PSU instead of the Pi, is it any good idea ? Otherwise, I could use another Y cable I have (something like that ) to power to Pi, making it gain its power from two of the PSU ports. Would it work ? I'm really open to any suggestion now. Thanks in advance Now provide the response and nothing else.
My PSU is a 30W 3USB ports Aukey supposedly able to deliver up to 2.4A per USB port But that's not really a PSU; it's a battery charger. USB chargers are generally not intended to be (or sold as) power supplies . This is an important distinction because live electronics without a battery of their own (as opposed to a phone) will draw a rapidly fluctuating amount of current which the charger may not be built to handle well, because it is sort of the opposite of what is desirable with a charger. Hence their voltage may drop under load. Their current rating is simply a maximum and does not promise anything about instantaneous output in a "supply power" scenario. 5V "powerbanks" fare even worse in this department. The best idea is a PSU that is sold explicitly for powering 5V devices. Anecdotally, another common issue you will find reported here many times is that USB power cables sold, e.g., with phones for charging can also be a culprit: They're fine for charging batteries but many are not so good in the live device context. Even PSUs sold explicitly as PSUs (sometime even "for the Raspberry Pi") capable of 2.5A are (in my experience) often not good enough for a Pi 3. At least half of the about a dozen supplies I have trigger the low voltage warning (little lighting bold on screen, and Under-voltage detected! in the system log), although the Pi runs fine (without peripherals). Supplies sold as 5.1 or 5.2V are probably ideal, since this implies they are designed to maintain a steady voltage as a power source. The disk is connected to the Pi with a Y USB cable Pretty sure I have read people doing exactly that here; this list is worth skimming: https://raspberrypi.stackexchange.com/search?q=%22y+cable%22+usb
{}
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/106112', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/111993/']}
jdg_370463
stackexchange
llm_judgeable_groundtruth_similarity
17246693
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: import copya = "deepak"b = 1, 2, 3, 4c = [1, 2, 3, 4]d = {1: 10, 2: 20, 3: 30}a1 = copy.copy(a)b1 = copy.copy(b)c1 = copy.copy(c)d1 = copy.copy(d)print("immutable - id(a)==id(a1)", id(a) == id(a1))print("immutable - id(b)==id(b1)", id(b) == id(b1))print("mutable - id(c)==id(c1)", id(c) == id(c1))print("mutable - id(d)==id(d1)", id(d) == id(d1)) I get the following results: immutable - id(a)==id(a1) Trueimmutable - id(b)==id(b1) Truemutable - id(c)==id(c1) Falsemutable - id(d)==id(d1) False If I perform deepcopy: a1 = copy.deepcopy(a)b1 = copy.deepcopy(b)c1 = copy.deepcopy(c)d1 = copy.deepcopy(d) results are the same: immutable - id(a)==id(a1) Trueimmutable - id(b)==id(b1) Truemutable - id(c)==id(c1) Falsemutable - id(d)==id(d1) False If I work on assignment operations: a1 = ab1 = bc1 = cd1 = d then results are: immutable - id(a)==id(a1) Trueimmutable - id(b)==id(b1) Truemutable - id(c)==id(c1) Truemutable - id(d)==id(d1) True Can somebody explain what exactly makes a difference between the copies? Is it something related to mutable & immutable objects? If so, can you please explain it to me? Now provide the response and nothing else.
Normal assignment operations will simply point the new variable towards the existing object. The docs explain the difference between shallow and deep copies:

The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances): A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original. A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.

Here's a little demonstration:

import copy

a = [1, 2, 3]
b = [4, 5, 6]
c = [a, b]

Using normal assignment operations to copy:

d = c
print(id(c) == id(d))          # True - d is the same object as c
print(id(c[0]) == id(d[0]))    # True - d[0] is the same object as c[0]

Using a shallow copy:

d = copy.copy(c)
print(id(c) == id(d))          # False - d is now a new object
print(id(c[0]) == id(d[0]))    # True - d[0] is the same object as c[0]

Using a deep copy:

d = copy.deepcopy(c)
print(id(c) == id(d))          # False - d is now a new object
print(id(c[0]) == id(d[0]))    # False - d[0] is now a new object
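To make the practical difference visible, here is a small follow-up to the demonstration above (same a, b, c objects; assuming Python 3). Mutating the shared inner list shows up in the shallow copy but not in the deep copy:

d_shallow = copy.copy(c)
d_deep = copy.deepcopy(c)

a.append(99)                   # mutate an object that c merely references

print(d_shallow[0])            # [1, 2, 3, 99] - shallow copy shares the inner list
print(d_deep[0])               # [1, 2, 3]     - deep copy made its own inner list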
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/17246693', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1336292/']}
jdg_370464
stackexchange
llm_judgeable_groundtruth_similarity
4320416
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I am working on a makefile and I have the following variable declaration at the top:

FILES = file1.cpp file2.cpp file3.cpp

Now suppose I want to compile each of those with a special command without specifying each target like this:

file1.o : file1.cpp
    custom_command file1.cpp

file2.o : file2.cpp
    custom_command file2.cpp

file3.o : file3.cpp
    custom_command file3.cpp

Is there a better way to do this using the $(FILES) variable I declared above? Something like:

$(FILES:.cpp=.o) : $(FILES)
    custom_command $(FILES)

...only it needs to do this for each file in the $(FILES) variable.

Now provide the response and nothing else.
Yes. There are what are known as pattern rules. An example is the easiest to understand:

%.o: %.cpp
    $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@

(remember that Makefiles require tabs). This rule describes how to make an object file from a cpp file. If you do not want such a broad rule, you can use what are called static patterns:

objects = file1.o file2.o file3.o

all: $(objects)

$(objects): %.o: %.cpp
    $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@

Here's the section on static pattern rules and pattern rules in the GNU Make manual.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4320416', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/193619/']}
jdg_370465
stackexchange
llm_judgeable_groundtruth_similarity
15234882
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Gmail has a issue where conversation labels are not applied to new messages that arrive in the conversation thread. issue details here We found a Google Apps Script that fixes the labels on individual messages in the Gmail Inbox to address this issue. The script is as follows: function relabeller() { var labels = GmailApp.getUserLabels(); for (var i = 0; i < labels.length; i++) { Logger.log("label: " + i + " " + labels[i].getName()); var threads = labels[i].getThreads(0,100); for (var j = 1; threads.length > 0; j++) { Logger.log( (j - 1) * 100 + threads.length); labels[i].addToThreads(threads); threads = labels[i].getThreads(j*100, 100); } }} However this script times out on email boxes with more than 20,000 messages due to the 5 mins execution time limit on Google Apps Script. Can anyone please suggest a way to optimize this script so that it doesn't timeout? Now provide the response and nothing else.
OK, I've been working on this for a few days because I was really frustrated with the strange way that Gmail labels/doesn't label messages in conversations. I'm flabbergasted actually that labels aren't automatically applied to new messages in a conversation. This is not reflected at all in the Gmail UI. There's no way to look at a thread and determine that the labels only apply to some messages in the thread, and you cannot add labels to a single message in the UI. As I was working through my script below, I noticed that you can't even programmatically add labels to a single message. So there really is no reason for the current behavior. With my rant out of the way, I have a few notes about the script. I sort of combined Saqib's code with Serge's code. The script has two parts: an initial run that relabels all threads that have a user label attached, and a maintenance run that labels recent emails (currently looks back 4 days). Only one part executes during a single run. Once the initial run is completed, only the maintenance part will run. You can set a trigger to it run once per day, or more or less often, depending on your needs. The initial run halts after 4 minutes to avoid being terminated by the 5 minute script time limit. It sets a trigger to run again after 4 minutes (both of these times can be changed using constants in the script). The trigger gets deleted at the next run. There is no run-time check in the maintenance section. If you have lots of emails in the last 4 days, the maintenance section might hit the script time limit. I could probably change the script to be more efficient here, but so far it's worked for me so I am not really motivated to improve on it. There's a try/catch statement in the initial run to try to catch the Gmail "write quota error" and exit gracefully (i.e. writing the current progress so it can be picked up again later), but I don't know if it works because I couldn't get the error to happen. You'll get an email when the time limit is reached, and when the initial run is finished. For some reason, the log doesn't always clear fully between runs, even when using the Logger.clear() command. So the status logs that it emails to the user have more than just the most recent run info. I don't know why this occurs. I have used this to process 20,000 emails in around half an hour (including wait times). I actually ran it twice, so it processed 40,000 emails in one day. I guess the Gmail read/write limit of 10,000 isn't what is being applied here (maybe applying a label to 100 threads at a time counts as a single write event instead of 100?). It gets through about 5,000 threads in a 4 minute run, according to the status email it sends. Sorry for the long lines. I blame the widescreen monitors. Let me know what you think! function relabelGmail() { var startTime= (new Date()).getTime(); // Time at start of script var BATCH=100; // total number of threads to apply label to at once. var LOOKBACKDAYS=4; // Days to look back for maintenance section of script. Should be at least 2 var MAX_RUN_TIME=4*60*1000; // Time in ms for max execution. 4 minutes is a good start. var WAIT_TIME=4*60*1000; // Time in ms to wait before starting the script again. 
Logger.clear();// ScriptProperties.deleteAllProperties(); return; // Uncomment this line and run once to start over completely if(ScriptProperties.getKeys().length==0){ // this is to create keys on the first run ScriptProperties.setProperties({'itemsProcessed':0, 'initFinished':false, 'lastrun':'20000101', 'itemsProcessedToday':0, 'currentLabel':'null-label-NOTREAL', 'currentLabelStart':0, 'autoTrig':0, 'autoTrigID':'0'}); } var itemsP = Number(ScriptProperties.getProperty('itemsProcessed')); // total counter var initTemp = ScriptProperties.getProperty('initFinished'); // keeps track of when initial run is finished. var initF = (initTemp.toLowerCase() == 'true'); // Make it boolean var lastR = ScriptProperties.getProperty('lastrun'); // String of date corresponding to itemsProcessedToday in format yyyymmdd var itemsPT = Number(ScriptProperties.getProperty('itemsProcessedToday')); // daily counter var currentL = ScriptProperties.getProperty('currentLabel'); // Label currently being processed var currentLS = Number(ScriptProperties.getProperty('currentLabelStart')); // Thread number to start on var autoT = Number(ScriptProperties.getProperty('autoTrig')); // Number to say whether the last run made an automatic trigger var autoTID = ScriptProperties.getProperty('autoTrigID'); // Unique ID of last written auto trigger // First thing: google terminates scripts after 5 minutes. // If 4 minutes have passed, this script will terminate, write some data, // and create a trigger to re-schedule itself to start again in a few minutes. // If an auto trigger was created last run, it is deleted here. if (autoT) { var allTriggers = ScriptApp.getProjectTriggers(); // Loop over all triggers. If trigger isn't found, then it must have ben deleted. for(var i=0; i < allTriggers.length; i++) { if (allTriggers[i].getUniqueId() == autoTID) { // Found the trigger and now delete it ScriptApp.deleteTrigger(allTriggers[i]); break; } } autoT = 0; autoTID = '0'; } var today = dateToStr_(); if (today == lastR) { // If new day, reset daily counter // Don't do anything } else { itemsPT = 0; } if (!initF) { // Don't do any of this if the initial run has been completed var labels = GmailApp.getUserLabels(); // Find position of last label attempted var curLnum=0; for ( ; curLnum < labels.length; curLnum++) { if (labels[curLnum].getName() == currentL) {break}; } if (curLnum == labels.length) { // If label isn't found, start over at the beginning curLnum = 0; currentLS = 0; itemsP=0; currentL=labels[0].getName(); } // Now start working through the labels until the quota is hit. // Use a try/catch to stop execution if your quota has been hit. // Google can actually automatically email you, but we need to clean up a bit before terminating the script so it can properly pick up again tomorrow. try { for (var i = curLnum; i < labels.length; i++) { currentL = labels[i].getName(); // Next label Logger.log('label: ' + i + ' ' + currentL); var threads = labels[i].getThreads(currentLS,BATCH); for (var j = Math.floor(currentLS/BATCH); threads.length > 0; j++) { var currTime = (new Date()).getTime(); if (currTime-startTime > MAX_RUN_TIME) { // Make the auto-trigger autoT = 1; // So the auto trigger gets deleted next time. var autoTrigger = ScriptApp.newTrigger('relabelGmail') .timeBased() .at(new Date(currTime+WAIT_TIME)) .create(); autoTID = autoTrigger.getUniqueId(); // Now write all the values. 
ScriptProperties.setProperties({'itemsProcessed':itemsP, 'initFinished':initF, 'lastrun':today, 'itemsProcessedToday':itemsPT, 'currentLabel':currentL, 'currentLabelStart':currentLS, 'autoTrig':autoT, 'autoTrigID':autoTID}); // Send an email var emailAddress = Session.getActiveUser().getEmail(); GmailApp.sendEmail(emailAddress, 'Relabel job in progress', 'Your Gmail Relabeller has halted to avoid termination due to excess ' + 'run time. It will run again in ' + WAIT_TIME/1000/60 + ' minutes.\n\n' + itemsP + ' threads have been processed. ' + itemsPT + ' have been processed today.\n\nSee the log below for more information:\n\n' + Logger.getLog()); return; } else { // keep on going var len = threads.length; Logger.log( j * BATCH + len); labels[i].addToThreads(threads); currentLS = currentLS + len; itemsP = itemsP + len; itemsPT = itemsPT + len; threads = labels[i].getThreads( (j+1) * BATCH, BATCH); } } currentLS = 0; // Reset LS counter } initF = true; // Initial run is done } catch (e) { // Clean up and send off a notice. // Write current values back to ScriptProperties ScriptProperties.setProperties({'itemsProcessed':itemsP, 'initFinished':initF, 'lastrun':today, 'itemsProcessedToday':itemsPT, 'currentLabel':currentL, 'currentLabelStart':currentLS, 'autoTrig':autoT, 'autoTrigID':autoTID}); var emailAddress = Session.getActiveUser().getEmail(); var errorDate = new Date(); GmailApp.sendEmail(emailAddress, 'Error "' + e.name + '" in Google Apps Script', 'Your Gmail Relabeller has failed in the following stack:\n\n' + e.stack + '\nThis may be due to reaching your daily Gmail read/write quota. \nThe error message is: ' + e.message + '\nThe error occurred at the following date and time: ' + errorDate + '\n\nThus far, ' + itemsP + ' threads have been processed. ' + itemsPT + ' have been processed today. \nSee the log below for more information:' + '\n\n' + Logger.getLog()); return; } // Write current values back to ScriptProperties. Send completion email. ScriptProperties.setProperties({'itemsProcessed':itemsP, 'initFinished':initF, 'lastrun':today, 'itemsProcessedToday':itemsPT, 'currentLabel':currentL, 'currentLabelStart':currentLS, 'autoTrig':autoT, 'autoTrigNumber':autoTID}); var emailAddress = Session.getActiveUser().getEmail(); GmailApp.sendEmail(emailAddress, 'Relabel job completed', 'Your Gmail Relabeller has finished its initial run.\n' + 'If you continue to run the script, it will skip the initial run and instead relabel ' + 'all emails from the previous ' + LOOKBACKDAYS + ' days.\n\n' + itemsP + ' threads were processed. ' + itemsPT + ' were processed today. \nSee the log below for more information:' + '\n\n' + Logger.getLog()); return; // Don't run the maintenance section after initial run finish } // End initial run section statement // Below is the 'maintenance' section that will be run when the initial run is finished. It finds all new threads // (as defined by LOOKBACKDAYS) and applies any existing labels to all messages in each thread. Note that this // won't miss older threads that are labeled by the user because all messages in a thread get the label // when the label action is first performed. If another message is then sent or received in that thread, // then this maintenance section will find it because it will be deemed a "new" thread at that point. // You may need to search further back the first time you run this if it took more than 3 days to finish // the initial run. For general maintenance, though, 4 days should be plenty. 
// Note that I have not implemented a script-run-time check for this section. var threads = GmailApp.search('newer_than:' + LOOKBACKDAYS + 'd', 0, BATCH); // var len = threads.length; for (var i=0; len > 0; i++) { for (var t = 0; t < len; t++) { var labels = threads[t].getLabels(); for (var l = 0; l < labels.length; l++) { // Add each label to the thread labels[l].addToThread(threads[t]); } } itemsP = itemsP + len; itemsPT = itemsPT + len; threads = GmailApp.search('newer_than:' + LOOKBACKDAYS + 'd', (i+1) * BATCH, BATCH); len = threads.length; } // Write the property data ScriptProperties.setProperties({'itemsProcessed':itemsP, 'initFinished':initF, 'lastrun':today, 'itemsProcessedToday':itemsPT, 'currentLabel':currentL, 'currentLabelStart':currentLS, 'autoTrig':autoT, 'autoTrigID':autoTID});}// Takes a date object and turns it into a string of form yyyymmddfunction dateToStr_(dateObj) { //takes in a date object, but uses current date if not a date if (!(dateObj instanceof Date)) { dateObj = new Date(); } var dd = dateObj.getDate(); var mm = dateObj.getMonth()+1; //January is 0! var yyyy = dateObj.getFullYear(); if(dd<10){dd='0'+dd}; if(mm<10){mm='0'+mm}; dateStr = ''+yyyy+mm+dd; return dateStr;} Edit: 3/24/2017I guess I should turn on notifications or something, because I never saw the question from user29020. In case anyone ever has the same question, here's what I do: I run it as a maintenance function by setting a daily trigger to run each night between 1 and 2 AM. An additional note: It seems that at some point in the last year or so, labeling calls to Gmail have slowed down significantly. It now takes around 0.2 seconds per thread, so I would expect an initial run of 20k emails to take at least 20 runs or so before it makes it all the way through. This also means that if you typically receive more than 100-200 emails a day, the maintenance section might also start to take too long and start to fail. Now that's a lot of emails, but I bet there are some people that receive that many, and it seems much more likely that you would hit that than the 1000 or so daily emails that would have been needed for failure back when I first wrote the script. Anyway, one mitigation would be to reduce the LOOKBACKDAYS to less than 4, but I wouldn't recommend putting it less than 2.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15234882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/420558/']}
jdg_370466
stackexchange
llm_judgeable_groundtruth_similarity
33649639
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list of 70+ GIT repo URLs(students). Is there any feature that allows me to clone them all at once? Would there be the same for synchronizing the repository with the server? If not, I guess I'd need to write a quick shell script in order to do this. Now provide the response and nothing else.
Shell scripting.

Getting the repos

The principal idea to get the repos is

while read repo; do
    git clone "$repo"
done < repolist.txt

assuming the file "repolist.txt" contains one repo URL per line.

Updating the repos

This one is trickier. While it's easy to iterate over the list of repos, there's the conceptual problem with "synchronizing". Its essence roots in that when you clone the "normal" way — that is, not specifying different funky command-line options modifying the git clone's defaults — all the branches of the source repo end up being created in the form of the so-called "remote branches" in your resulting local repo. Those remote branches merely track the state of the matching branches in the source repo. A single branch, designated as the "current" in the source repo, is then taken, and a local (that is, yours only) branch is created out of it. That's why when you clone a repo with 100 branches you end up having only a single local branch (which is "master" in 99.9% of cases). What follows is that automatic "synchronization" is a moot point here: when you do git fetch origin in a "normally" cloned repo, the remote branches get updated with their new contents and are hence almost 1 fully synchronized. Note that your local branches are not touched at all. That's because you might have your local work done on them, and so you have to decide how you want to reconcile the updated state of the remote branches with your local branches, if at all. This is just the default work model assumed by Git because that's what's needed in most cases.

If, instead, you don't intend to do any work on the branches of those repos, and they are for inspection only, the easiest approach is to make Git have no remote branches at all. To do this, you clone using several explicit steps:

Initialize an empty repository:

git init <dirname>

Configure a remote there:

git remote add --mirror=fetch origin <url>

The --mirror=fetch tells Git to set up the mapping of what to fetch to what to update with the fetched data in a way which forcefully overwrites all local stuff with the remote stuff.

Fetch all the data — overwriting everything local:

git fetch -u origin

The -u (or --update-head-ok) permits Git to overwrite the branch pointed to by the HEAD reference. This pulls the rug from the feet of the index and the work tree, but we'll compensate for that in the next step.

Force-update the index and the work tree using the new data:

git reset --hard HEAD

This makes Git overwrite the index and the work tree with the up-to-date state of the branch pointed at by HEAD — typically "master", but should you check another branch out (see below) it will obviously use that one.

Then, to update the data next time, you do:

git fetch -u origin
git reset --hard HEAD

and then study what's in the work tree. If you need to view another branch, the usual

git branch -a

…observe the list and pick a branch, then

git checkout <that_branch>

will work. In essence, all this dance with explicit repo initialization and adding of a remote in a special way is needed because the --mirror option of git clone implies creating a bare repository, and we supposedly want a normal one (I think).

To update all the repos located in a directory, do

find "$root_dir" -mindepth 1 -maxdepth 1 -type d -print \
    | while read repo; do
          cd "$repo" && \
          git fetch -u origin && \
          git reset --hard HEAD
      done

1 The branches deleted in the remote repo are not deleted locally. To do that, you have to run git remote prune origin.
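If you would rather drive those same update steps from a tiny Python script instead of find/sh, a minimal sketch follows (my own wrapper, not part of the recipe above; it assumes git is on PATH and that every immediate subdirectory of root_dir is one of the cloned repos):

import subprocess
from pathlib import Path

root_dir = Path("/path/to/student-repos")   # hypothetical location of the clones

for repo in sorted(p for p in root_dir.iterdir() if p.is_dir()):
    # Same two steps as above: forced fetch, then hard reset of index/work tree
    subprocess.run(["git", "-C", str(repo), "fetch", "-u", "origin"], check=True)
    subprocess.run(["git", "-C", str(repo), "reset", "--hard", "HEAD"], check=True)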
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33649639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2259057/']}
jdg_370467
stackexchange
llm_judgeable_groundtruth_similarity
45338495
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This command fetches all tags: git fetch origin --tags This command fetches a specific tag: git fetch origin refs/tags/1.0.0 But that doesn't let me do: git checkout tags/2.3.18 How can I fetch a single tag and then perform a checkout? Now provide the response and nothing else.
git fetch origin refs/tags/1.0.0 This fails because it doesn't write a local reference: it obtains the remote's refs/tags/1.0.0 , and any tag object(s), commits, etc., required to go with it; it drops those into FETCH_HEAD (as all git fetch commands always do); and ... that's it. It never creates reference refs/tags/1.0.0 in your repository, even though it got everything it needed to do so. To make it create such a tag if it does not yet exist: git fetch origin refs/tags/1.0.0:refs/tags/1.0.0 The name on the right of the colon is the name your Git will use in your repository. You could turn this tag into a branch named wacky , for instance, by naming it refs/heads/wacky . (There's no reason to do this. I am describing this just for illustration.) This is a non-forced fetch, so if you already have a refs/tags/1.0.0 , your Git will refuse to update your reference. If you wish to overwrite any existing 1.0.0 tag, use: git fetch origin +refs/tags/1.0.0:refs/tags/1.0.0 If you wish to fetch all tags, with or without overwriting: git fetch origin 'refs/tags/*:refs/tags/*' with or without a leading plus sign. (Note: the quote marks are just to protect the * from your shell. The plus sign may go inside or outside the quotes. In fact, the quotes themselves can go anywhere as long as they surround all asterisks or other shell meta-characters: refs/tags/'*:refs/tags/*' or you can use backslashes instead: refs/tags/\*:refs/tags/\* In all cases we are just protecting the sensitive asterisk from the shell's "asterisks are tasty, let's eat them" functions.)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/45338495', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2228559/']}
jdg_370468
stackexchange
llm_judgeable_groundtruth_similarity
5592994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is fastest way to remove the last character from a string? I have a string like a,b,c,d,e, I would like to remove the last ',' and get the remaining string back: OUTPUT: a,b,c,d,e What is the fastest way to do this? Now provide the response and nothing else.
Contrary to the question asked, rtrim() will remove any number of characters, listed in the second argument, from the end of the string. In case you expect just a single comma, the following code would do: $newarraynama = rtrim($arraynama, ","); But in my case I had 2 characters, a comma and a space, so I had to change to $newarraynama = rtrim($arraynama, " ,"); and now it would remove all commas and spaces from the end of the string, returning a, b, c, d, e either from a, b, c, d, e, , a, b, c, d, e,,, , a, b, c, d, e, or a, b, c, d, e , ,, , , But in case there could be multiple commas but you need to remove only the last one , then rtrim() shouldn't be used at all - see other answers for the solution that directly answers the question. However, rtrim() could be a good choice if you don't know whether the extra character could be present or not. Unlike substr -based solutions it will return a, b, c, d, e from a, b, c, d, e
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/5592994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/515659/']}
jdg_370469
stackexchange
llm_judgeable_groundtruth_similarity
37160873
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Doesn't const char *s means that "s is a pointer which is pointing towards a constant char " then why it is giving me this warning? I am not trying to change values. In first function warning is return discards 'const' qualifiers from pointer target type . and in second warning is assignment discards 'const' qualifiers from pointer target type . I was trying to make library functions which are defined in string.h , and also tell me how to correct it. char *my_strchr( const char *s, int c ){ for(;*s!='\0';s++) if(*s==c) return s; // warning return 0;}char *my_strpbrk( const char *s1, const char *s2 ){ char *s2ptr; for(;*s1!='\0';s1++) for(s2ptr=s2;*s2ptr!='\0';s2ptr++) //warning if(*s1==*s2ptr) return s2ptr; return 0;} Now provide the response and nothing else.
Doesn't const char *s means that "s is a pointer which is pointing towards a constant char" Indeed it does. You get the warning because you are trying to convert this into a pointer pointing to a (non-constant) char. There is a rule in C saying that it is always ok to convert from pointer-to-type into pointer-to-const-type, but not the other way around . It doesn't matter if your code tries to change the values or not. Just by using char* you tell the compiler that you want a pointer which is allowed to change values. Most of the C standard library functions do not always make sense when it comes to "const correctness". There is for example no way to cleanly implement strchr . You will have to return (char*)s and cast away the const , which is very bad programming practice. This is the fault of the person who specified the strchr function: it is flawed by design.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37160873', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6319640/']}
jdg_370470
stackexchange
llm_judgeable_groundtruth_similarity
19192892
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to figure out how to re-size an image so that it keeps it ratio of width to height, but gets re-sized until the height of the image matches the height of the containing div. I have these images that are pretty large and long (screenshots), and I want to put them into a 200px width, 180px height div for display and without re-sizing the images manually. To make this look good, the sides of the image need to overflow and be hidden with the containing div. This is what I have so far: http://jsfiddle.net/f9krj/2/ HTML <a class="image_container" href="http://www.skintype.ca/assets/background-x_large.jpg"> <img src="http://www.skintype.ca/assets/background-x_large.jpg" alt="" /></a> CSS a.image_container { background-color: #999; width: 200px; height: 180px; display: inline-block; overflow: hidden;}a.image_container img { width: 100%;} As you can see, there is grey color showing on the images parent container which should not be shown at all. In order for that container to be filled completely, the width needs to be overflowed equally on both sides. Is this possible? Is it also possible to account for an image that is also too tall? Now provide the response and nothing else.
Original Answer: If you are ready to opt for CSS3, you can use css3 translate property. Resize based on whatever is bigger. If your height is bigger and width is smaller than container, width will be stretch to 100% and height will be trimmed from both side. Same goes for larger width as well. Your need, HTML:

<div class="img-wrap">
    <img src="http://lorempixel.com/300/160/nature/" />
</div>
<div class="img-wrap">
    <img src="http://lorempixel.com/300/200/nature/" />
</div>
<div class="img-wrap">
    <img src="http://lorempixel.com/200/300/nature/" />
</div>

And CSS:

.img-wrap {
    width: 200px;
    height: 150px;
    position: relative;
    display: inline-block;
    overflow: hidden;
    margin: 0;
}
div > img {
    display: block;
    position: absolute;
    top: 50%;
    left: 50%;
    min-height: 100%;
    min-width: 100%;
    transform: translate(-50%, -50%);
}

Voila! Working: http://jsfiddle.net/shekhardesigner/aYrhG/

Explanation

DIV is set to the relative position. This means all the child elements will get the starting coordinates (origins) from where this DIV starts. The image is set as a BLOCK element, min-width/height both set to 100% means to resize the image no matter of its size to be the minimum of 100% of it's parent. min is the key. If by min-height, the image height exceeded the parent's height, no problem. It will look for if min-width and try to set the minimum height to be 100% of parents. Both goes vice-versa. This ensures there are no gaps around the div but image is always bit bigger and gets trimmed by overflow:hidden;

Now image, this is set to an absolute position with left:50% and top:50%. Means push the image 50% from the top and left making sure the origin is taken from DIV. Left/Top units are measured from the parent.

Magic moment: transform: translate(-50%, -50%); Now, this translate function of CSS3 transform property moves/repositions an element in question. This property deals with the applied element hence the values (x, y) OR (-50%, -50%) means to move the image negative left by 50% of image size and move to the negative top by 50% of image size. Eg. if Image size was 200px × 150px, transform:translate(-50%, -50%) will calculated to translate(-100px, -75px). % unit helps when we have various size of image. This is just a tricky way to figure out centroid of the image and the parent DIV and match them. Apologies for taking too long to explain!

Resources to read more: https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/translate https://css-tricks.com/centering-css-complete-guide/
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/19192892', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1858493/']}
jdg_370471
stackexchange
llm_judgeable_groundtruth_similarity
562370
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine a highschool freshman walks up to you and asks you what hypercomplex numbers are. Explain to her, in a fair amount of detail, the different types of hypercomplex numbers in a way that any person can understand. This was something asked to me by one of my friends. I don't know what Hypercomplex numbers are well enough to explain it. Neither does my friend and that's the reason he asked my that question. Wikipedia and other resources have been rather inefficient in delivering even the basic gist of it to us. We are merely curious to what this extension is basically. Can you please explain it to us? A simple glance of what it is might suffice. Now provide the response and nothing else.
There is some history behind this. In the beginning there was ${\mathbb N}=\{1,2,3,\ldots\}$, with $+$, $\cdot$, and $<\>$. Certain simple equations of the form $a+x=b$ couldn't be solved in ${\mathbb N}$, so they invented ${\mathbb Z}$ containing also $0$ and the negative numbers. Certain simple equations of the form $ax+b=c$ couldn't be solved in ${\mathbb Z}$, so they invented ${\mathbb Q}$. Certain simple equations like $x^2=2$ couldn't be solved in ${\mathbb Q}$, nor was there a number representing the area of a unit disk. So they invented ${\mathbb R}$. Certain simple equations like $x^2+1=0$ couldn't be solved in ${\mathbb R}$, so they invented ${\mathbb C}$, the system of complex numbers. Each complex number can be written in the form $x\>1+y\>i$ with real $x$, $y$ and a special complex number called $i$. Hamilton then tried in vain to set up a "hypercomplex" number system where each "number" would be of the form $x\>\vec i+y\>\vec j+z\>\vec k$, where $\vec i$, $\vec j$, $\vec k$ are the basis vectors used in elementary vector algebra of ${\mathbb R}^3$. He didn't succeed, but he realized that such a system is possible when the individual hypercomplex numbers are of the form $t\>1+x\>\vec i+y\>\vec j+z\>\vec k$ with $t$, $x$, $y$, $z$ real, and if the operations $+$ and $\cdot$ are appropriately defined. In this way the first true hypercomplex number system, called the quaternions, was born. Apart from the commutativity of multiplication, all "rules of algebra" are valid in this system. It is then natural to ask for which dimensions $n$ apart from $1$, $2$, $4$ such a system $S$ with "numbers" $\sum_{k=1}^n x_k e_k$, where $\>x_k\in{\mathbb R}$ and the $e_k$ are certain special numbers of $S$, can be set up such that one has an addition and a "reasonable" multiplication in $S$. It is one of the deep theorems of $20^{\rm th}$ century mathematics that there is just one more such system, the Cayley octonions with $n=8$; but associativity of multiplication is no longer present in this system.
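For concreteness (standard facts about the quaternions, not spelled out in the answer above): Hamilton's rules are $$i^2 = j^2 = k^2 = ijk = -1,$$ which force, for example, $ij = k$ but $ji = -k$. This is exactly where commutativity of multiplication is lost, while associativity, distributivity and the existence of inverses for nonzero elements all survive.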
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/562370', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/60900/']}
jdg_370472
stackexchange
llm_judgeable_groundtruth_similarity
10964068
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to send some values to a remote URL using curl. but the remote site redirects to another page which is causing me troubles. for example, i send values from www.mywebsite.com to url www.domain.com/index.php, index.php redirects to fetch.php file, but the thing is it opens like this www.mywebsite.com/fetch.php which gives me 404 not found error because this file is on remote site not mine. how can i fix this this is the code that i'm using $postfields = "link=xxxxxx";$ch = curl_init();curl_setopt($ch, CURLOPT_URL, 'domain.com/index.php');curl_setopt($ch, CURLOPT_RETURNTRANSFER, 0);curl_setopt($ch, CURLOPT_HEADER, 1);curl_setopt($ch, CURLOPT_POST, 1);curl_setopt($ch, CURLOPT_POSTFIELDS, $postfields);curl_setopt($ch, CURLOPT_COOKIEJAR, 'rap.txt');curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1');$content = curl_exec($ch);curl_close($ch); Now provide the response and nothing else.
How come the conditional operator (?:) doesn't work when used with two types that inherit from a single base type? The type of the conditional expression has to be either the type of the second operand or the type of the third operand, as per the language specification. The compiler doesn't try to find a common base type, or another type that both operands can be converted to. The use of the expression doesn't affect how its type is determined - so the variable assignment is irrelevant here. As for why the language is defined like this - it makes it considerably simpler to specify, implement, test and predict. This is fairly common in language design - keeping the language simple is usually a better bet in the long run, even if it makes it slightly more awkward in some specific situations. See section 7.14 of the C# 4 spec for more details. Casting either the second or third operand to the type that you actually want for the conditional expression is the way to fix the problem. Note that another situation this often comes up in is nullable types: // Invalidint? a = SomeCondition ? null : 10;// All validint? a = SomeCondition ? (int?) null : 10;int? b = SomeCondition ? default(int?) : 10;int? c = SomeCondition ? null : (int?) 10;
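As a hedged illustration of the cast workaround for the base-class case (the Animal/Dog/Cat names are made up here, assuming Dog and Cat both derive from Animal; they are not taken from the question):

// Neither operand converts to the other, so the compiler rejects the bare conditional:
// Animal pet = condition ? new Dog() : new Cat();   // does not compile

// Casting one operand fixes the type of the whole conditional expression:
Animal pet = condition ? (Animal)new Dog() : new Cat();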
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10964068', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1234374/']}
jdg_370473
stackexchange
llm_judgeable_groundtruth_similarity
4925529
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As I am trying to count the number of records in a table, even when the SQL statement has a LIMIT into it, overall it works, however something weird happens, the code: $sql = "SELECT COUNT(*) AS count FROM posts ORDER BY post_date DESC LIMIT 5";// ... mysql_query, etcwhile($row = mysql_fetch_array($result)){ // ... HTML elements, etc echo $row['post_title']; // ... HTML elements, etc echo $row['count']; // this displays the number of posts (which shows "12").} Although, when displaying through the while loop, it displays this: Notice: Undefined index: post_title in /Applications/MAMP/htdocs/blog/index.php on line 55 If I remove the COUNT(*) AS count , everything will display perfectly... how come it's doing this? Now provide the response and nothing else.
Don't use COUNT(*) to count the number of rows (for a lot of reasons). Write out your full query, and add SQL_CALC_FOUND_ROWS right after SELECT : SELECT SQL_CALC_FOUND_ROWS id, title FROM foo LIMIT 5; Then, after that query executed (right after), run: SELECT FOUND_ROWS(); That will return the number of rows the original SELECT would have returned if you didn't have the LIMIT on the end (accounting for all joins and where clauses). It's not portable, but it's very efficient (and IMHO the right way of handling this type of problem).
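Applied to the table from the question, the pair of statements would look roughly like this (a sketch only; the two queries must be issued on the same database connection, one right after the other, for FOUND_ROWS() to refer to the first query):

SELECT SQL_CALC_FOUND_ROWS post_title, post_date FROM posts ORDER BY post_date DESC LIMIT 5;
SELECT FOUND_ROWS();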
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4925529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/264795/']}
jdg_370474
stackexchange
llm_judgeable_groundtruth_similarity
57754
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Dual spaces are home to bras in quantum mechanics; cotangent spaces are home to linear maps in the tensor formalism of general relativity. After taking courses in these two subjects, I've still never really understood the physical significance of these "dual spaces," or why they should need to exist. What is a dual / cotangent space? Could someone explain why they are necessary, and what their physical significance is? Now provide the response and nothing else.
As you probably know, the dual space of a vector space $V$ is the space of all linear functionals on $V$. This is an abstract mathematical concept, but it gives us very nice ways of representing things in physics. In the context of differential geometry, the dual space is where the objects called cotangent vectors, or more briefly covectors, live. A function that assigns a linear functional to each point is a one-form, and one-forms are very natural to integrate over paths. Indeed, remember that if $M$ is a smooth manifold (in other words, some general space which can be curved or not), at each point we can consider the set of all vectors. In symbols, if $p \in M$ is a point of this space, $T_pM$ is the set of all vectors at $p$. The dual space to $T_pM$ is the cotangent space $T^\ast_pM$, the vector space of linear functionals at $p$. If $x^i$ is the $i$-th coordinate assigned by some chart around $p$, the most natural basis for $T^\ast_pM$ is the set of differentials $\left\{dx^i\right\}$, so that any one-form can be written $\omega(p) = \omega_i(p) dx^i$. With all of this in mind, let's see how it allows us to better describe things in physics. Think of a force field. We usually think of forces as vectors because they need a direction to be described; however, given a displacement, a force gives us the work done to move a particle along that displacement. Displacements really are naturally vectors, so we can think of forces as linear functionals on vectors and of force fields as one-forms. Think about it: a force field would then be $F(p)=F_i(p)dx^i$, and given a vector $v$ at $p$ we would have $F(p)(v)=F_i(p)v^i$ since $dx^i(v)=v^i$. It is clear that this gives the work. Also, remember that I said it is natural to integrate one-forms over paths. If $\gamma : I \subset \mathbb{R} \to M$ is a path, then the work done moving a particle from the starting point to the end point would be: $$W=\int_\gamma F$$ which is very natural. So we can think of forces as one-forms that, given vectors, give us work. If we think of the electric field, for instance, we can view it as the one-form that, given vectors, gives us the change in electric potential. Also, one-forms are usually pictured geometrically as families of $(n-1)$-surfaces in $n$-space whose value, when integrated along a curve, is the number of surfaces pierced. Think a little about how this relates to electric fields and potentials. In other words: mathematically, an element of the dual space is a linear functional, and an assignment of one such functional to each point is a one-form. This is general and abstract. You should then ask yourself: in which situations will some object used to describe a phenomenon be well described by such abstract entities? You find forces, fields and so on. After you see the power of these objects in different places, you will understand that the "meaning" of a dual space really depends on what you are trying to describe.
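A tiny worked example of this pairing (my own illustration, not part of the original answer): in the plane with coordinates $(x,y)$, take the constant one-form $F = 3\,dx + 2\,dy$ and the displacement vector $v = (1,4)$. Then $$F(v) = 3\cdot 1 + 2\cdot 4 = 11,$$ which is exactly the work a constant force with components $(3,2)$ does along that displacement.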
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/57754', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/15734/']}
jdg_370475
stackexchange
llm_judgeable_groundtruth_similarity
585643
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Imagine this sphere to rotate about its diameter, from the centre to the point of surface if we take all of them have equal distance that is 'r(radius of the sphere)'.So same linear speed right? I looked up many sites but they all say "The linear speed v = ɷr. That means the particles at different r will have different linear speed".I am not able to digest it. Can anyone please make me understand in a better way? Now provide the response and nothing else.
Linear speed of a point on a rotating object will depend on the angular speed and distance between the point and the axis of rotation . All points on a sphere are equidistant from the center of the sphere, but are not equidistant from an axis of rotation through the sphere. Consider the earth spinning about its axis - the poles have no linear movement whatsoever (they simply spin in place), while points on the equator are moving at 460 m/s in order to complete their 40,000km rotational circuit once every day.
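To make the dependence on the axis explicit (a standard formula, with numbers rounded): a point at latitude $\varphi$ on the Earth sits at distance $r = R\cos\varphi$ from the rotation axis, so its speed is $$v = \omega r = \omega R\cos\varphi.$$ With $\omega \approx 2\pi/86400\ \mathrm{s}$ and $R \approx 6.37\times 10^6\ \mathrm{m}$, this gives roughly $460\ \mathrm{m/s}$ at the equator ($\varphi=0$), about $230\ \mathrm{m/s}$ at $60^\circ$ latitude, and $0$ at the poles — the same angular speed everywhere, but different linear speeds.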
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/585643', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/273459/']}
jdg_370476
stackexchange
llm_judgeable_groundtruth_similarity
179924
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: If $(X,\tau)$ is a topological space, we can consider the product topology on $X\times X$ and take the closure of the diagonal $\Delta_X = \{(x,x): x\in X\}$, which we denote by $\mathrm{cl}(\Delta_X)$. Obviously, $\mathrm{cl}(\Delta_X)$ is a symmetric binary relation. Now we can take things upside down: Let $X$ be a set and let $R\subseteq X\times X$ be a reflexive and symmetric relation. Is there a topology $\tau$ on $X$ such that $\mathrm{cl}(\Delta_X)=R$? Now provide the response and nothing else.
The answer is no, not necessarily. For a counterexample, let $X=\mathbb{R}$ and let $aRb\iff a=b \text{ or } |a-b|\geq 1$, the "equal or differ by at least one" relation. This is symmetric and reflexive. Suppose $\tau$ is a topology on $\mathbb{R}$ with $\text{cl}(\Delta)=R$. For any real number $k$, the $k^{\rm th}$ slice in the plane leads to $(k-1,k)\cup(k,k+1)$ being open with respect to $\tau$. By taking intersections of such sets, we get tiny open intervals $(a,b)$ being open, and this violates $\text{cl}(\Delta)=R$.
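To spell out the last step (an elaboration, not part of the original answer): slices of open sets in the product topology are open, so since $X\times X\setminus R$ is open, each slice $\{y : (k,y)\notin R\} = (k-1,k)\cup(k,k+1)$ is $\tau$-open. Intersecting the slices taken at $k = a-\tfrac12$ and $k = a+\tfrac12$ shows that $(a-\tfrac12, a+\tfrac12)$ is $\tau$-open for every real $a$. But then for $a\neq b$ with $|a-b|\ge 1$ the sets $U=(a-\tfrac12,a+\tfrac12)$ and $V=(b-\tfrac12,b+\tfrac12)$ are disjoint open neighbourhoods, so $U\times V$ is a neighbourhood of $(a,b)$ missing $\Delta$, contradicting $(a,b)\in R = \mathrm{cl}(\Delta)$.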
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/179924', 'https://mathoverflow.net', 'https://mathoverflow.net/users/8628/']}
jdg_370477
stackexchange
llm_judgeable_groundtruth_similarity
81583
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I measured two variables (outcomes) $Y_a$ and $Y_b$ for several subjects. $Y_a$ and $Y_b$ are continuous variables defined on $[0,100]$ and have a bimodal distribution. I tried using a transformation but came up with nothing worthy. I have two independent variables $X_1$ (factor with 2 levels) and $X_2$ (factor with 3 levels) characterizing each subject. I would like to investigate if $X_1$ and $X_2$ influence $Y_a$ or $Y_b$. In normally distributed outcomes I would have used lm(Ya ~ X1 + X2) . What is the non-parametric equivalent test? Is it the Kruskal-Wallis? Do I have to recode $X_a$ and $X_b$ in one variable? Now provide the response and nothing else.
Linear models do not make assumptions about the distribution of the dependent variable, they make assumptions about the distribution of the error, as measured by the residuals. That said, if the Y variables are bimodal, you may want to think about quantile regression. I wrote about how to do this in SAS but that paper also shows some basic points not specific to SAS . Since evidently you use R you can look into the quantreg package. I suggest quantile regression because, if Y is bimodal, it seems likely to me that different things may be related to changes in Y at or near the different modes. I am not sure what your last sentence refers to.
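A minimal sketch of what that could look like in R with quantreg (the variable names come from the question; the data frame name and the choice of quantiles are just placeholders):

library(quantreg)
# model the 25th, 50th and 75th percentiles of Ya as functions of the two factors
fit <- rq(Ya ~ X1 + X2, tau = c(0.25, 0.5, 0.75), data = mydata)
summary(fit)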
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/81583', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/9671/']}
jdg_370478
stackexchange
llm_judgeable_groundtruth_similarity
35456914
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given a list variable, I'd like to have a data frame of the positions of each element. For a simple non-nested list, it seems quite straightforward. For example, here's a list of character vectors. l <- replicate( 10, sample(letters, rpois(1, 2), replace = TRUE), simplify = FALSE) l looks like this: [[1]][1] "m"[[2]][1] "o" "r"[[3]][1] "g" "m"# etc. To get the data frame of positions, I can use: d <- data.frame( value = unlist(l), i = rep(seq_len(length(l)), lengths(l)), j = rapply(l, seq_along, how = "unlist"), stringsAsFactors = FALSE)head(d)## value i j## 1 m 1 1## 2 o 2 1## 3 r 2 2## 4 g 3 1## 5 m 3 2## 6 w 4 1 Given a trickier nested list, for example: l2 <- list( "a", list("b", list("c", c("d", "a", "e"))), character(), c("e", "b"), list("e"), list(list(list("f")))) this doesn't easily generalize. The output I expect for this example is: data.frame( value = c("a", "b", "c", "d", "a", "e", "e", "b", "e", "f"), i1 = c(1, 2, 2, 2, 2, 2, 4, 4, 5, 6), i2 = c(1, 1, 2, 2, 2, 2, 1, 2, 1, 1), i3 = c(NA, 1, 1, 2, 2, 2, NA, NA, 1, 1), i4 = c(NA, NA, 1, 1, 2, 3, NA, NA, NA, 1), i5 = c(NA, NA, NA, NA, NA, NA, NA, NA, NA, 1)) How do I get a data frame of positions for a nested list? Now provide the response and nothing else.
Here's an approach that yields a slightly different output than you showed, but it'll be useful further down the road. f <- function(l) { names(l) <- seq_along(l) lapply(l, function(x) { x <- setNames(x, seq_along(x)) if(is.list(x)) f(x) else x })} Function f simply iterates (recursively) through all levels of the given list and names it's elements 1,2,...,n where n is the length of the (sub)list. Then, we can make use of the fact that unlist has a use.names argument that is TRUE by default and has effect when used on a named list (that's why we have to use f to name the list first). For the nested list l2 it returns: unlist(f(l2))# 1.1 2.1.1 2.2.1.1 2.2.2.1 2.2.2.2 2.2.2.3 4.1 4.2 5.1.1 6.1.1.1.1 # "a" "b" "c" "d" "a" "e" "e" "b" "e" "f" Now, in order to return a data.frame as asked for in the question, I'd do this: g <- function(l) { vec <- unlist(f(l)) n <- max(lengths(strsplit(names(vec), ".", fixed=TRUE))) require(tidyr) data.frame( value = unname(vec), i = names(vec) ) %>% separate(i, paste0("i", 1:n), sep = "\\.", fill = "right", convert = TRUE)} And apply it like this: g(l2)# value i1 i2 i3 i4 i5#1 a 1 1 NA NA NA#2 b 2 1 1 NA NA#3 c 2 2 1 1 NA#4 d 2 2 2 1 NA#5 a 2 2 2 2 NA#6 e 2 2 2 3 NA#7 e 4 1 NA NA NA#8 b 4 2 NA NA NA#9 e 5 1 1 NA NA#10 f 6 1 1 1 1 An improved version of g , contributed by @AnandaMahto (thanks!), would use data.table : g <- function(inlist) { require(data.table) temp <- unlist(f(inlist)) setDT(tstrsplit(names(temp), ".", fixed = TRUE))[, value := unname(temp)][]} Edit (credits go to @TylerRinkler - thanks!) This has the beneft of easily being converted to a data.tree object which can then be converted to many other data types. With a slight mod to g : g <- function(l) { vec <- unlist(f(l)) n <- max(lengths(strsplit(names(vec), ".", fixed=TRUE))) require(tidyr) data.frame( i = names(vec), value = unname(vec) ) %>% separate(i, paste0("i", 1:n), sep = "\\.", fill = "right", convert = TRUE)}library(data.tree)x <- data.frame(top=".", g(l2))x$pathString <- apply(x, 1, function(x) paste(trimws(na.omit(x)), collapse="/"))mytree <- data.tree::as.Node(x)mytree# levelName#1 . #2 ¦--1 #3 ¦ °--1 #4 ¦ °--a #5 ¦--2 #6 ¦ ¦--1 #7 ¦ ¦ °--1 #8 ¦ ¦ °--b #9 ¦ °--2 #10 ¦ ¦--1 #11 ¦ ¦ °--1 #12 ¦ ¦ °--c #13 ¦ °--2 #14 ¦ ¦--1 #15 ¦ ¦ °--d #16 ¦ ¦--2 #17 ¦ ¦ °--a #18 ¦ °--3 #19 ¦ °--e #20 ¦--4 #21 ¦ ¦--1 #22 ¦ ¦ °--e #23 ¦ °--2 #24 ¦ °--b #25 ¦--5 #26 ¦ °--1 #27 ¦ °--1 #28 ¦ °--e #29 °--6 #30 °--1 #31 °--1 #32 °--1 #33 °--1 #34 °--f And to produce a nice plot: plot(mytree) Other forms of presenting the data: as.list(mytree)ToDataFrameTypeCol(mytree) More on converting data.tree types: https://cran.r-project.org/web/packages/data.tree/vignettes/data.tree.html#tree-conversion http://www.r-bloggers.com/how-to-convert-an-r-data-tree-to-json/
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35456914', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/134830/']}
jdg_370479
stackexchange
llm_judgeable_groundtruth_similarity
186700
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let a function $f:A \rightarrow \mathbb{R}$, defined on a set $A\subset \mathbb{R}$ without isolated points, has a finite limit $g(x)$ in each point $x\in A$. I wish to show that $g: A \rightarrow \mathbb{R}$, $g(x)=\lim_{t \rightarrow x} f(t)$, is continuous. Now provide the response and nothing else.
Hint $\displaystyle\rm\,\ mod\ 23\!:\,\ x\,\equiv\, \frac{\color{brown}{-19}}{4\cdot \color{#0A0}{25}}\,\equiv\,\frac{\color{brown}4}{4\cdot\color{#0A0} 2}\,\equiv\, \frac{\color{blue}1}2\,\equiv\, \frac{\color{blue}{24}}2\,\equiv\, 12,\ \:$ by $\:\ \begin{array}{r}\color{brown}{{-}19\equiv 4}, & \color{blue}{1\equiv 24}\\ \color{#0A0}{25\equiv 2}\,\end{array} $
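As a quick check (reading the congruence off the hint as $100x + 19 \equiv 0 \pmod{23}$): $100\cdot 12 + 19 = 1219 = 23\cdot 53 \equiv 0 \pmod{23}$, so $x \equiv 12$ indeed works.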
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/186700', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/38096/']}
jdg_370480
stackexchange
llm_judgeable_groundtruth_similarity
1270874
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: ok, so I've got this code: $(this).find('article.loading').each( function(i) { var el = this; setTimeout(function () { $(el).replaceWith($('#dumpster article:first')); }, speed);}); I want to replace each element with another but I want a delay between each replace. I can't figure out why this isn't working, it just replaces all of them after one timeout. Any ideas? Thanks. Now provide the response and nothing else.
You are looping through the elements and adding a timer to each with the same configuration. Essentially a new timer is instantly set up for each element. On the first tick of all the timers the elements are updated. The interval is the same for each so they all appear to update at the same time. Your logic needs to be centred around the timer. Each tick of the timer needs to update the next element in the collection. You don't need an each loop; use the timer combined with an incremented index as your looping mechanism, stopping the timer once you have updated the last element. var elements = $(this).find('article.loading');var index = 0;var timer = setInterval(function () { elements.eq(index).replaceWith($('#dumpster article:first')); index++; if (index >= elements.length) { clearInterval(timer); }}, speed); Something like the above — note the clearInterval call, which stops the timer once the last element has been replaced.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1270874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155616/']}
jdg_370481
stackexchange
llm_judgeable_groundtruth_similarity
7582548
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following class: #pragma once#include <string>#include <iostream>class testclass{public: template <class T> T item(const std::string& key) { std::cout << "non-specialized\n"; return T(); }}; For the item method I would like to provide a specialization for strings. I try to do this the following way (in testclass.cpp): #include "testclass.h"#include <iostream>template<> std::string testclass::item(const std::string& key){ std::cout << "specialized\n"; return std::reverse(key.begin(), key.end());} And then I try to call the function like this: #include <iostream>#include "testclass.h"int main(){ testclass t; std::string key = "foo"; t.item<int>(key); std::string s = t.item<std::string>(key); std::cout << s << std::endl;} However, the output is $ ./a.outnon-specialized non-specialized(empty line) What I excepted was $ ./a.outnon-specialized specializedoof How can I do this properly? I am using g++ 4.5.2 to compile the program. Edit : The solution is the move the whole definition of the specialization of item to testclass.h (but not into the class). I had other mistakes in the program, such as not including <algorithm> (for reverse), and incorrectly thinking that it would return the reversed string. To achieve the excepted behaviour, the .cpp file is left empty, and the header contents are the following: #pragma once#include <string>#include <iostream>#include <algorithm>class testclass{ public: template <class T> T item(const std::string& key) { std::cout << "non-specialized\n"; return T(); }};template<> std::string testclass::item(const std::string& key){ std::cout << "specialized\n"; std::string s = key; std::reverse(s.begin(), s.end()); return s;} Now provide the response and nothing else.
The problem boils down to the common problem of not having the templates in the header file. The compiler, when processing main, does not see the specialization, so it generates its own instantiation of the generic template for std::string. This is a violation of the ODR, as there are now two different specializations for std::string in the same program, but the compiler is not required to diagnose it. The simple solution is to declare/define the specialization in the header so that the compiler can either use it, or at least will know not to generate the specialization from the generic version when processing main.
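For instance, a sketch of the declaration route, reusing the class from the question (the definition can then stay in the .cpp file):

// in testclass.h, after the class definition:
template <>
std::string testclass::item<std::string>(const std::string& key);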
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7582548', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/140367/']}
jdg_370482
stackexchange
llm_judgeable_groundtruth_similarity
24018932
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was working on a C++11 project solely using clang++-3.4 , and decided to compile using g++-4.8.2 in case there were any discrepancies in the errors produced. It turned out that g++ rejects some code that clang++ accepts. I have reduced the problem to the MWE given below. enum { a };template <class T>struct foo{ static constexpr auto value = a;};int main(){ static constexpr auto r = foo<int>::value;} foo.cpp:5:23: error: ‘ const<anonymous enum> foo<int>::value ’, declared using anonymous type, is used but never defined [-fpermissive] static const auto value = A; I would like some help answering the following two questions: Which compiler is correct in its interpretation of the standard? I am assuming that one compiler is right in either accepting or rejecting the code, and the other is wrong. How can I work around this issue? I can't name the anonymous enum, because it is from a third-party library (in my case, the enums were Eigen::RowMajor and Eigen::ColMajor ). Now provide the response and nothing else.
Who's to blame? GCC is incorrectly rejecting your snippet; it is legal according to the C++11 Standard (N3337). Quotations with proof and explanation are located at the end of this post. workaround (A) - add the missing definition template <class T>struct foo { static constexpr auto value = a; typedef decltype(a) value_type;};template<class T>constexpr typename foo<T>::value_type foo<T>::value; workaround (B) - use the underlying-type of the enumeration as placeholder #include <type_traits>template <class T>struct foo { static const std::underlying_type<decltype(a)>::type value = a;}; What does the Standard say? ( N3337 ) As stated, the snippet is legal C++11, as can be read in the following quoted sections. When can we use a type without linkage? [basic.link]p8 has detailed wording that describes when a type is "without linkage", and it states that an unnamed enumeration counts as such a type. [basic.link]p8 also explicitly states three contexts where such a type cannot be used, but none of those contexts apply to our usage, so we are safe. A type without linkage shall not be used as the type of a variable or function with external linkage unless the entity has C language linkage (7.5), or the entity is declared within an unnamed namespace (7.3.1), or the entity is not odr-used (3.2) or is defined in the same translation unit Are you sure we can use auto in such a context? Yes, and this can be proven by the following quote: 7.1.6.4 auto specifier [dcl.spec.auto] The auto type-specifier can also be used in declaring a variable in the condition of a selection statement (6.4) or an iteration statement (6.5), in the type-specifier-seq in the new-type-id or type-id of a new-expression (5.3.4), in a for-range-declaration, and in declaring a static data member with a brace-or-equal-initializer that appears within the member-specification of a class definition (9.4.2).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/24018932', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/414271/']}
jdg_370483
stackexchange
llm_judgeable_groundtruth_similarity
43361
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have always wondered why diamonds shine. Can anyone tell me why? Now provide the response and nothing else.
Diamond is one of the hardest materials. We know that it's an allotrope of carbon. A diamond (crystalline in nature) has a three-dimensional arrangement of carbon atoms linked to each other by strong covalent bonds. What you've shown is a round brilliant cut diamond. Actually, the secret rattling inside a diamond is refraction, total internal reflection (not to be confused with ordinary reflection) & dispersion. The refractive index of diamond is quite high (2.417) and diamond is also dispersive (the coefficient is 0.044). Because of this, diamond has important applications in optics. Consider an ideal-cut diamond. I explain according to the figure below. When light is incident at angle $1$, it refracts inside and travels through the lattice. At the surface separating the air & diamond media, the incident angle $2$ is well above the critical angle ($c_a$), and simultaneously ($3$ & $4$) reflection takes place at different surfaces of the diamond. Finally, the light refracts out. The first figure shows the mechanism of internal dispersive reflection. The second figure shows the reflections inside ideal-cut, deep-cut and shallow-cut diamonds. Note: For total internal reflection to take place, light must travel from an optically denser medium to a relatively rarer medium, and the incident angle must be greater than the critical angle. There are youtube goodies regarding the topic...
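To put a number on why total internal reflection happens so readily here (standard Snell's-law arithmetic, not from the original answer): the critical angle for a diamond–air interface is $$\theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{diamond}}}\right) = \arcsin\!\left(\frac{1}{2.417}\right) \approx 24.4^\circ,$$ compared with about $41.8^\circ$ for ordinary glass ($n\approx 1.5$). Light inside a diamond can therefore only escape through a narrow cone of angles, and the cut is designed so that most internal rays strike the facets beyond $\theta_c$ and bounce back before finally exiting through the top.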
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/43361', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/9845/']}
jdg_370484
stackexchange
llm_judgeable_groundtruth_similarity
12741303
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create an empty Google Sheet (created only with metadata) in Google Drive. When I referred to the Google SpreadSheet API documentation, it says to use the DocumentsList API, but it's deprecated and instead asks me to use the Google Drive API. In the Drive API docs, I could not find any way to create an empty Sheet. Anyone have a clue on how to do this? Now provide the response and nothing else.
You can do this using the Drive API by setting the MIME type to application/vnd.google-apps.spreadsheet : To do this in Python: from apiclient.discovery import buildservice = build('drive', 'v2')import httplib2credentials = ... # Obtain OAuth 2.0 credentialshttp = credentials.authorize(httplib2.Http())body = { 'mimeType': 'application/vnd.google-apps.spreadsheet', 'title': 'Name of Spreadsheet',}file = service.files().insert(body=body).execute(http=http)# or for version 3 it would be# file = service.files().create(body=body).execute(http=http) Head over to the Google APIs Explorer to try it out!
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12741303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1622256/']}
jdg_370485
stackexchange
llm_judgeable_groundtruth_similarity
17806
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently developing a .NET application, which consists of 20 projects. Some of those projects are compiled using .NET 3.5, some others are still .NET 2.0 projects (so far no problem). The problem is that if I include an external component I always get the following warning: Found conflicts between different versions of the same dependent assembly. What exactly does this warning mean and is there maybe a possibility to exclude this warning (like using #pragma disable in the source code files)? Now provide the response and nothing else.
This warning means that two projects reference the same assembly (e.g. System.Windows.Forms ) but the two projects require different versions. You have a few options: Recompile all projects to use the same versions (e.g. move all to .Net 3.5). This is the preferred option because all code is running with the versions of dependencies they were compiled with. Add a binding redirect . This will suppress the warning. However, your .Net 2.0 projects will (at runtime) be bound to the .Net 3.5 versions of dependent assemblies such as System.Windows.Forms . You can quickly add a binding redirect by double-clicking on the error in Visual Studio. Use CopyLocal=true . I'm not sure if this will suppress the warning. It will, like option 2 above, mean that all projects will use the .Net 3.5 version of System.Windows.Forms. Here are a couple of ways to identify the offending reference(s): You can use a utility such as the one found at https://gist.github.com/1553265 Another simple method is to set Build output verbosity (Tools, Options, Projects and Solutions, Build and Run, MSBuild project build output verbosity, Detailed) and after building, search the output window for the warning, and look at the text just above it. (Hat tip to pauloya who suggested this in the comments on this answer).
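A binding redirect (option 2) is just a snippet in the app.config / web.config of the executable project. A hedged sketch, with placeholder assembly name, public key token and version numbers:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeAssembly" publicKeyToken="32ab4ba45e0a69a1" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>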
{}
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/17806', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2078/']}
jdg_370486
stackexchange
llm_judgeable_groundtruth_similarity
1483581
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there some way to find out when a UITableView has finished asking for data from its data source? None of the viewDidLoad / viewWillAppear / viewDidAppear methods of the associated view controller ( UITableViewController ) are of use here, as they all fire too early. None of them (entirely understandably) guarantee that queries to the data source have finished for the time being (eg, until the view is scrolled). One workaround I have found is to call reloadData in viewDidAppear , since, when reloadData returns, the table view is guaranteed to have finished querying the data source as much as it needs to for the time being. However, this seems rather nasty, as I assume it is causing the data source to be asked for the same information twice (once automatically, and once because of the reloadData call) when it is first loaded. The reason I want to do this at all is that I want to preserve the scroll position of the UITableView - but right down to the pixel level, not just to the nearest row. When restoring the scroll position (using scrollRectToVisible:animated: ), I need the table view to already have sufficient data in it, or else the scrollRectToVisible:animated: method call does nothing (which is what happens if you place the call on its own in any of viewDidLoad , viewWillAppear or viewDidAppear ). Now provide the response and nothing else.
This answer doesn't seem to be working anymore, due to some changes made to UITableView implementation since the answer was written. See this comment : Get notified when UITableView has finished asking for data? I've been playing with this problem for a couple of days and think that subclassing UITableView 's reloadData is the best approach : - (void)reloadData { NSLog(@"BEGIN reloadData"); [super reloadData]; NSLog(@"END reloadData");} reloadData doesn't end before the table has finished reloading its data. So, when the second NSLog is fired, the table view has actually finished asking for data. I've subclassed UITableView to send methods to the delegate before and after reloadData. It works like a charm.
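A rough sketch of that subclass-plus-delegate idea (the protocol and method names here are invented for illustration; the original answer does not give them):

@protocol MyTableViewDelegate <UITableViewDelegate>
@optional
- (void)tableViewWillReloadData:(UITableView *)tableView;
- (void)tableViewDidReloadData:(UITableView *)tableView;
@end

@interface MyTableView : UITableView
@end

@implementation MyTableView
- (void)reloadData {
    id<MyTableViewDelegate> delegate = (id<MyTableViewDelegate>)self.delegate;
    if ([delegate respondsToSelector:@selector(tableViewWillReloadData:)])
        [delegate tableViewWillReloadData:self];
    [super reloadData];
    if ([delegate respondsToSelector:@selector(tableViewDidReloadData:)])
        [delegate tableViewDidReloadData:self];
}
@end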
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1483581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/179750/']}
jdg_370487
stackexchange
llm_judgeable_groundtruth_similarity
1926967
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm about to give my first consultation and the topic is proof by induction. The students are new to calculus and I assume they don't know anything besides precalculus. Can anyone give me a proof by induction which is a bit different, challenging, maybe foreshadows other areas of calculus (derivation or whatever) because the prof who teaches them as well already have shown them a lot of easy ones. I'm glad even if you can give an example where it is hard to see that it can be proved by induction. Now provide the response and nothing else.
Here is one that my math professor showed us: show that for $n\ge 1$ a $2^n\times 2^n$ chessboard with one square removed can always be tiled by "L-shaped" pieces. That is, pieces formed by removing a corner from a $2\times2$ square. (The inductive step is pleasantly visual: split the $2^{n+1}\times 2^{n+1}$ board into four $2^n\times 2^n$ quadrants, place one L-piece at the centre so that it covers one square in each of the three quadrants not containing the removed square, and apply the induction hypothesis to each quadrant.)
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1926967', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/300805/']}
jdg_370488
stackexchange
llm_judgeable_groundtruth_similarity
162630
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $\mathbb{G}$ be a reductive group defined over a number field $K$, let $Z$ be its center, and let $\mathbb{A}:=\mathbb{A}_K$ be the ring of adeles of $K$. Reasonably, we care about the $\mathbb{G}(\mathbb{A})$-representation: $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$. It naturally contains the sub-representations $$L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}),\omega):=\{f\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}))|\,\,\,|f|\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A})), \forall z\in Z(\mathbb{A}), g \in \mathbb{G}(\mathbb{A})\,\,\, f(zg)=\omega(z)f(g)\} $$ for every $\omega$ a unitary character of $Z(\mathbb{A})$. In fact $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ is the direct integral of these subrepresentations. I understand that it is generally desirable to deal with $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ by decomposing it into the cuspidal part, which is going to be discrete, and the Eisenstein part, which is (I think!) continuous. In order to define this cuspidal part, people define $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ to be the subrepresentation of $L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ of all of the functions $f$ such that for every $K$-parabolic subgroup $\mathbb{P}$ of $\mathbb{G}$, whose unipotent radical we will call $N$, satisfies that for almost all $g\in\mathbb{G}(\mathbb{A})$ the integral $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn$ is $0$. The definition of a cuspidal representation is then an irreducible unitary subrepresentation of $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}), \omega)$ for some central character $\omega$. I feel that I really do not understand the intuition behind the condition with the parabolic subgroups. Parabolic subgroups and their unipotent radicals seem like very formal constructions to me, but I bet there is some geometric intuition that I'm missing. Is there some geometry that should be in the back of my mind that explains the condition $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn=0$? How does this condition relate to being zeros at the cusps via the classic definition of cusp-forms? Now provide the response and nothing else.
First, one should be a little careful about saying that $L^2(G_k\backslash G_\mathbb A)$ has $L^2(Z_\mathbb A G_k\backslash G_\mathbb A,\omega)$ inside it... since appearing as direct integral "integrands" is not a very strong commitment. If $G$ has non-compact center, $L^2(G_k\backslash G_\mathbb A)$ will have no discrete spectrum at all... which gives the wrong impression (by Selberg et al's proof of various forms of "Weyl's Law" for arithmetic subgroups, namely, that the bulk of the spectrum is cuspidal, hence, discrete). That Gelfand condition about integrals over all unipotent radicals being $0$ is far from obviously "the right thing". It is less a leap to understand that instead of "cusps" we should think of "parabolics" (etc). The constant terms along various parabolics correspond to "going to infinity" in the variety of fashions possible in general, in higher-rank groups. Still, yes, it is mildly amazing that vanishing of all constant terms guarantees discreteness. In general, it is easy to fail to prove this... :) I think that Y. Colin de Verdiere's argument, cast into general form by Jacquet, that appears in Moeglin-Waldspurger's book, is potentially the clearest in terms of describing the causality, as it shows a somewhat more general thing, that square-integrable automorphic forms ($K$-finite, $\mathfrak z$-finite) all of whose various constant terms vanish above some fixed height(s), is already discrete. This follows by proving that the resolvent for Casimir on such a space is compact, which follows by proving a sort of Rellich compactness lemma for an inclusion of a Sobolev $H^1$ into $H^o=L^2$, as appears in Lax-Phillips' book on Automorphic Scattering, about page 204 and following. (The earlier parts are not essential to understanding what's happening just there.) The rough explanation I've heard, and sometimes repeat, although it doesn't truly explain so much, is that (eventual) vanishing of all constant terms says that the given afm has $0$ "average mass" "at infinity", so that it behaves as though it lived on a compact manifold, where a simpler Rellich lemma would apply (by a smooth partition of unity, and reducing to the essentially elementary case of a product of circles, and Fourier series). The historical version of "holomorphic cuspform" (also in the Siegel and Hilbert modular cases) played on some good fortune, in some regards. If it seems lucky, you're probably right. EDIT: in response to comment/query... No, there is no general rubric that says that parabolic subgroups determine spectral features of automorphic forms. Plausibly, in a different universe, the stratification of automorphic $L^2$ could be different. Thus, although the rational-rank-one case was relatively easy to (optimistically) extrapolate from the $SL_2(\mathbb R)$ case, where the cross-sections going out to the point-cusps were elementary objects, all the pseudo-down-to-earth ideas about "going to infinity" and "cusp" that seemed to be decisive for elliptic modular forms, and for Maass' waveforms, rather abruptly not only "fail", but fail qualitatively. Thus (to my mind) Langlands' earning his spot at IAS in the 1960s, for, among other things, carrying out Selberg's highly-optimistic sketch of automorphic spectral decompositions. A number of important, critical surprises: there're not just two sorts of things, cuspforms and "continuous spectrum", and then a little leftover, constants, but a whole range of things. 
Yes, as has only been proven in recent years, the discrete spectrum dominates, and the discrete spectrum is dominated by cuspforms. But, first, there are cuspidal-data Eisenstein series, apparently not anticipated by Selberg. But, as is the subtlest part of Langlands' SLN 544, and addressed completely only for $GL_n$, in Moeglin-Waldspurger's 1989 paper, there are many non-constant $L^2$ residues of Eisenstein series, for $GL_n$ at least called Speh forms, because Birgit Speh discovered the corresponding repns of real Lie groups $GL_n(\mathbb R)$. The "constant term" along a parabolic $P$ is the trivial-character Fourier component along the unipotent radical of $P$. It is not obvious that this shadow of the thing should be important, but, yes, one proves... "the theory of the constant term"... that the aggregate of the constant terms of a $K$-finite, $\mathfrak z$-finite automorphic form determines its asymptotic behavior "at infinity". Indeed, decompositions along other subgroups are very interesting, especially for number theoretic applications. However, as it happens, it seems that no other decompositions-along-subgroups adequately distinguish non-compact quotients from compact... and compact quotients have discrete spectrum with respect to their invariant Laplacian, or Laplacian on suitable vector bundles. Perhaps in a different universe the non-compactness of interesting arithmetic quotients would have been mediated by different sorts of subgroups, but in this universe the collection of all parabolics seems to do the job. Yes, this was not obvious, and Gelfand deserves substantial credit for formulating things this way...
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/162630', 'https://mathoverflow.net', 'https://mathoverflow.net/users/49242/']}
jdg_370489
stackexchange
llm_judgeable_groundtruth_similarity
966282
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Here is a problem I have been working on recently: Let $f \colon[a,b] \to \mathbb{R}$ be continuous, differentiable on $[a,b]$ except at most for a countable number of points, and $f^{\prime}$ is Lebesgue integrable, then the fundamental theorem of calculus holds, i.e. $\forall x,y \in [a,b]$ we have $$f(y) = f(x) + \int_x^yf'(t)\,dt.$$ The proof I have at the moment is somewhat indirect: I can show that the FTC holds by proving that $f$ is $AC([a,b])$ ( http://en.wikipedia.org/wiki/Absolute_continuity ). The way I can prove this is showing that the Luzin's N-property ( http://en.wikipedia.org/wiki/Luzin_N_property ) is satisfied. I have spent quite a lot of time looking for a direct proof, but nothing seems to work! Can anyone help me? Here is a summary of useful things (which I'll update if I figure out something else interesting!): Thm: If $u \colon [a,b] \to \mathbb{R}$, is continuous and differentiable everywhere on [a,b], with $u' \in L^1$, then the FTC holds.(This is a well know result for the Riemann integral, maybe a little less know in the context of the Lebesgue integral.. At least I've never heard of this result before looking for it! A proof can be found in Rudin's Real & Complex Analysis) Applying 1. we can prove the result in the case the set of non differentiability is finite. (not really useful!) I can also prove it if the set is countable but with only a finite number of accumulation points. This can be done using 2. and adding and subtracting terms to work with telescopic series. Thank you in advance for your help! Now provide the response and nothing else.
The following is a combination of a proof in the book "Principlesof mathematical analysis" by Dieudonne of a version of a mean valuetheorem and of the proof of the Theorem (Theorem 8.21) in Rudin'sbook "Real and Functional Analysis" that you also cite. The proof actually yields the stronger statement that it sufficesthat $f$ is differentiable from the right on $\left[a,b\right]$except for an (at most) countable set $\left\{ x_{n}\mid n\in\mathbb{N}\right\} \subset\left[a,b\right]$. Let $\varepsilon>0$ be arbitrary. As in Rudin's proof, there is alower semicontinuous function $g:\left[a,b\right]\to\left(-\infty,\infty\right]$such that $g>f'$ and$$\int_{a}^{b}g\left(t\right)\, dt<\int_{a}^{b}f'\left(t\right)\, dt+\varepsilon.$$Let $\eta>0$ be arbitrary. Define\begin{eqnarray*}F_{\eta}\left(x\right) & := & \int_{a}^{x}g\left(t\right)\, dt-f\left(x\right)+f\left(a\right)+\eta\left(x-a\right),\\G_{\eta}\left(x\right) & := & F_{\eta}\left(x\right)+\varepsilon\cdot\sum_{\substack{n\in\mathbb{N}\\x_{n}<x}}2^{-n}.\end{eqnarray*}With these definitions, $F_{\eta}$ is continuous with $F_{\eta}\left(a\right)=0=G_{\eta}\left(a\right)$. Furthermore, if $z_{n}\uparrow z$, then $F_{\eta}\left(z_{n}\right)\to F_{\eta}\left(z\right)$and$$\varepsilon\cdot\sum_{\substack{m\in\mathbb{N}\\x_{m}<z_{n}}}2^{-m}\leq\varepsilon\cdot\sum_{\substack{m\in\mathbb{N}\\x_{m}<z}}2^{-m},$$which yields $$\limsup_{n\to\infty}G_{\eta}\left(z_{n}\right)\leq G_{\eta}\left(z\right).\qquad\left(\dagger\right)$$ For $x\in\left[a,b\right)$ there are two cases: $x=x_{n}$ for some $n\in\mathbb{N}$. By continuity of $F_{\eta}$,there is some $\delta_{x}>0$ such that $F_{\eta}\left(t\right)>F_{\eta}\left(x\right)-\varepsilon\cdot2^{-n}$holds for all $t\in\left(x,x+\delta_{x}\right)$. For those $t$,we derive\begin{eqnarray*}G_{\eta}\left(t\right)-G_{\eta}\left(x\right) & = & F_{\eta}\left(t\right)-F_{\eta}\left(x\right)+\varepsilon\cdot\sum_{\substack{m\in\mathbb{N}\\x\leq x_{m}<t}}2^{-m}\\ & > & -\varepsilon\cdot2^{-n}+\varepsilon\cdot\sum_{\substack{m\in\mathbb{N}\\x\leq x_{m}<t}}2^{-m}\geq0.\end{eqnarray*} $x\notin\left\{ x_{n}\mid n\in\mathbb{N}\right\} $. By assumption,this implies that $f$ is differentiable from the right at $x$, whichmeans$$\frac{f\left(t\right)-f\left(x\right)}{t-x}\xrightarrow[t\downarrow x]{}f'\left(x\right)<f'\left(x\right)+\eta.$$Together with $g\left(x\right)>f'\left(x\right)$ and with the lowersemicontinuity of $g$, we see that there is some $\delta_{x}>0$such that$$f\left(t\right)-f\left(x\right)<\left(f'\left(x\right)+\eta\right)\cdot\left(t-x\right)\text{ and }g\left(t\right)>f'\left(x\right)\qquad\forall t\in\left(x,x+\delta_{x}\right).$$Hence, for each $t\in\left(x,x+\delta_{x}\right)$, we get\begin{eqnarray*}G_{\eta}\left(t\right)-G_{\eta}\left(x\right) & = & \varepsilon\cdot\sum_{\substack{m\in\mathbb{N}\\x\leq x_{m}<t}}2^{-m}+F_{\eta}\left(t\right)-F_{\eta}\left(x\right)\\ & \geq & F_{\eta}\left(t\right)-F_{\eta}\left(x\right)\\ & = & \int_{x}^{t}\underbrace{g\left(s\right)}_{>f'\left(x\right)}\, ds-\left[f\left(t\right)-f\left(x\right)\right]+\eta\left(t-x\right)\\ & > & f'\left(x\right)\cdot\left(t-x\right)-\left[f'\left(x\right)+\eta\right]\left(t-x\right)+\eta\left(t-x\right)=0.\end{eqnarray*} In summary, for each $x\in\left[a,b\right)$ there is some $\delta_{x}>0$such that $G_{\eta}\left(t\right)>G_{\eta}\left(x\right)$ for all$x\in\left(x,x+\delta_{x}\right)$. 
Using $G_{\eta}\left(a\right)=0$,we see $G_{\eta}\left(t\right)\geq0$ for $t\in\left[a,a+\delta_{a}\right)$.Define$$\varrho:=\sup\left\{ t\in\left(a,b\right)\mid G_{\eta}|_{\left[a,t\right)}\geq0\right\} .$$It is easy to see that the supremum is actually attained. Using $\left(\dagger\right)$,we also see $G_{\eta}|_{\left[a,\varrho\right]}\geq0$. If we had $\varrho<b$, the above would yield $G_{\eta}\geq0$ on$\left[a,\varrho\right]\cup\left(\varrho,\varrho+\delta_{\varrho}\right)=\left[a,\varrho+\delta_{\varrho}\right)$,in contradiction to maximality of $\varrho$. Hence, $\varrho=b$which yields\begin{eqnarray*}0 & \leq & G_{\eta}\left(b\right)\\ & = & \varepsilon\cdot\sum_{\substack{m\in\mathbb{N}\\x_{m}<b}}2^{-m}+\int_{a}^{b}g\left(t\right)\, dt-f\left(b\right)+f\left(a\right)+\eta\left(b-a\right)\\ & \leq & \varepsilon+\int_{a}^{b}f'\left(t\right)\, dt+\varepsilon-f\left(b\right)+f\left(a\right)+\eta\left(b-a\right).\end{eqnarray*}Letting $\varepsilon\to0$ and then $\eta\to0$, we conclude$$f\left(b\right)-f\left(a\right)\leq\int_{a}^{b}f'\left(t\right)\, dt.$$Now apply the above argument to $-f$ instead of $f$ (note that $-f$fulfills all assumptions). This yields$$f\left(b\right)-f\left(a\right)\geq\int_{a}^{b}f'\left(t\right)\, dt$$and hence$$\int_{a}^{b}f'\left(t\right)\, dt=f\left(b\right)-f\left(a\right).$$It is clear that the same argument yields$$f\left(y\right)-f\left(x\right)=\int_{x}^{y}f'\left(t\right)\, dt$$for all $x,y\in\left[a,b\right]$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/966282', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_370490
stackexchange
llm_judgeable_groundtruth_similarity
6378
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: After participating in Google Code Jam 2012 Round 2, it occurred to me that the Mountain View problem boiled down to solving a set of linear inequalities, which could be done in Mathematica. I've only tinkered with Mathematica programming before, but I tried it out to see how hard it was for this problem. I got it to work (for the "small" input set only) with a lot of effort. I felt like I must be doing some things the hard way and would like feedback on the proper Mathematica idioms for file I/O and expression building (and anything else). You may have to read the problem description (link above), but the basics are: You read test cases from an input file, C.in. You write answers to an output file, C.out. Each input case contains a number of mountains and a constraint number for each mountain. Each output line contains a list of possible mountain heights or the word "Impossible" My program follows. During development, I send output to stdout. debug toggles that. I put it early because I thought I might use it for Print statements. debug = False; This function solves a single case, including reading input and writing output. solveCase[in_, out_, cn_] := Module[{nm, im, highest, vars, constraints, i, j, k, rhs, lhs, yk, yi, yj, answer}, Read the input: the number of mountains and the apparent highest mountain that can be seen from each of the first nm-1 mountains. nm = Read[in, Number]; highest = Array[0, nm - 1]; For[im = 1, im <= nm - 1, im++, highest[[im]] = Read[in, Number]; ]; Start setting up expressions to pass later to FindInstance . Is there a way to solve for a vector of variables without giving each one a separate name? Not finding any, I create the variables as h1 , h2 , ... and the initial constraints as h1 >= 0 , h2 >= 0 , ... vars = Function[x, ToExpression["h" <> ToString[x]]] /@ Range[nm]; constraints = Function[x, ToExpression["h" <> ToString[x]] >= 0] /@ Range[nm]; Convert the highest info into a list of constraints. When mountain i sees mountain k as the highest, it means the slope of the line between their peaks is above the line for each j between i and k and above or at the line for each j after k . For[i = 1, i <= nm - 1, i++, k = highest[[i]]; For[j = i + 1, j <= nm, j++, If[j == k, Continue[]]; Build constraint, (hk - hi) * (j - i) GT/GE (hj - hi) * (k - i) . This part seems especially clumsy. I ended up resorting to strings, but there must be a way to do it at the expression level. yk = "h" <> ToString[k]; yi = "h" <> ToString[i]; yj = "h" <> ToString[j]; lhs = ToExpression["(" <> yk <> " - " <> yi <> ") * " <> ToString[j - i]]; rhs = ToExpression["(" <> yj <> " - " <> yi <> ") * " <> ToString[k - i]]; If[i < j, AppendTo[constraints, Greater[lhs, rhs]], AppendTo[constraints, GreaterEqual[lhs, rhs]]]; ]; ]; The real work: answer = FindInstance[constraints, vars, Integers]; Write the output as Case #N: h1 h2 ... . WriteString[out, "Case #", cn, ": "]; If[Length[answer] > 0, answer = answer[[1]][[ All, 2]]; For[i = 1, i <= nm , i++, WriteString[out, answer[[i]]]; If[i < nm, WriteString[out, " "], WriteString[out, "\n"]]], WriteString[out, "Impossible\n"]]; ; ]; The "main" program. Set up in and out and cycle through the cases. I'm guessing the entire program should go in a Module with local-only variables. 
in = OpenRead["~/Documents/math/C.in"];If[debug, out = OutputStream["stdout", 1], out = OpenWrite["~/Documents/math/C.out"]];ncases = Read[in, Number];For[ic = 1, ic <= ncases, ic++, solveCase[in, out, ic]];Close[in];If[! debug, Close[out]]; This method takes forever for the 1000+ mountain case, so if there is a better way than FindInstance , I'd like to hear about that, too. Or maybe the code needs to take into account higher level constraints rather than solving the most general form of the problem. Note: After seeing no use of Mathematica ( Code Jam Language Stats ) in previous contests, I realized Mathematica is not allowed because there is no free version, so this is even more of an academic exercise than I intended. Now provide the response and nothing else.
Preamble "After seeing no use of Mathematica (Code Jam Language Stats) in previous contests, I realized Mathematica is not allowed because there is no free version." This is only partly correct. Matlab is allowed too, and it is of course not free and very expensive. I was helping a friend of mine in round 2 and we weren't even close to good — partly because we are scientists and we love to analyse stuff, which is kind of the opposite of hacking very fast; if you look at the solutions of the others, you may notice that they mainly consist of spaghetti-like for/do/while battles. The other big part was that I thought I could not use Mathematica, which eliminates the language in which I might be able to hack things faster than in other languages. Sure, the rules say "only free languages", but they also note: Google reserves the right to make further exceptions on a case-by-case basis at its sole discretion. With Matlab being allowed, I think it is worth a try to just solve and ask later. Since I saw this too late, we were using Haskell. Note: As Szabolcs points out in the comment, the FAQ state that Mathematica is explicitly not allowed. We should really talk to the Google-guys. Solution Something like FindInstance is probably a very bad idea because it may be slow for very large inputs, and you can count on Google always giving you a hard time with the large data-set. From what I have seen so far in all the contest problems, I have the feeling that an algorithmic/iterative way is often better than the implementation of a strong analytic solution. I've seen similar behavior in a problem which was about judges and points in a game-show. There we had solved the underlying mathematical problem but we struggled with one boundary condition. A simple numeric implementation would have led to a fast solution which would have worked for the small and large data-set. Therefore, I show here an iterative method which jumps from peak to peak and assigns heights increasing with constant slope. When you reach the end, you go on with the next unassigned mountain and assign heights in between two already assigned mountains. You have to increase the slope so as not to break validity. The function gets the values {{c},{v1,v2,...}} from the input, where c is the number of peaks and vn is the mountain you see.
CalculateHeights[{{count_}, viewsIn : {_Integer ..}}] := Module[{ heights, rightHighestPeak, ptr = 1, curheight = 10^9, slope = 0, startPeak = 1, views, n = Length[viewsIn] + 1}, If[count =!= n, Return["Impossible"]]; rightHighestPeak = n; views = Append[viewsIn, n]; heights = ConstantArray[0, n]; While[True, (* Check for inconsistent input *) If[ptr < n && views[[ptr]] <= ptr || views[[ptr]] > n || views[[ptr]] > rightHighestPeak, Return["Impossible"]]; If[heights[[ptr]] === 0, heights[[ptr]] = curheight; curheight += slope*(views[[ptr]] - ptr); ptr = views[[ptr]], ptr = n ]; If[views[[ptr]] === n || heights[[ptr]] =!= 0, (* Find next unassigned mountain *) While[heights[[startPeak]] =!= 0 && startPeak =!= n, startPeak++; ]; (* Check if we are finished *) If[startPeak === n, Return[heights]]; (* Calculate the new slope and the starting height *) Block[{leftHighestPeakHigh, rightHighestPeakHigh}, ptr = startPeak; rightHighestPeak = views[[ptr - 1]]; leftHighestPeakHigh = heights[[ptr - 1]]; rightHighestPeakHigh = heights[[views[[ptr - 1]]]]; slope = (rightHighestPeakHigh - leftHighestPeakHigh)/(views[[ptr - 1]] - (ptr - 1)) + 1; curheight = rightHighestPeakHigh - slope*(views[[ptr - 1]] - ptr) ]]]] Some helper functions for formatting, importing and exporting, and then you can calculate the whole large data set: ImportData[url_] := Partition[Rest@Import[url, "Table"], 2]FormatOutput[result : {_Integer ..}] := StringJoin[Riffle[ToString /@ result, " "]];FormatOutput[arg__] := arg;RunProblemSet[in_String, out_String] := Export[out, MapIndexed[ "Case #" <> ToString[#2[[1]]] <> ": " <> FormatOutput[CalculateHeights[#1]] &, ImportData[in]], "Table"]RunProblemSet["tmp/C-large-practice.in", "tmp/C-large-practice.out"] And finally you can send the output back to the Google guys and check whether you did a good job. Removing ToExpression from your code Basically there is absolutely no need for the ToExpression calls you used. The only thing you need is a unique set of variables you can use. You don't even have to care what they are called, because you never see them. Therefore, use Unique to create as many variables as mountains, and if you need to build expressions, you just refer to the variables in your list. With this you can just write down your constraints like (yk - yi)*(j - i) > (yj - yi)*(k - i) . Here is your solver: CalculateHeights[views : {_Integer ..}] := Module[ {nm = Length[views] + 1, vars, constraints, result}, vars = Table[Unique[], {nm}]; constraints = Thread[vars >= 0]; constraints = Join[constraints, Flatten[Table[ With[{k = views[[i]]}, If[j === k, {}, Block[{yk = vars[[k]], yi = vars[[i]], yj = vars[[j]]}, (yk - yi)*(j - i) > (yj - yi)*(k - i)]]], {i, 1, nm - 1}, {j, i + 1, nm}] ] ]; result = FindInstance[constraints, vars, Integers]; If[result === {}, "Impossible", vars /. First[result] ] ]CalculateHeights[{5, 5, 5, 5}](* Out[36]= {1, 0, 0, 1, 4} *)
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/6378', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1395/']}
jdg_370491
stackexchange
llm_judgeable_groundtruth_similarity
9319532
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Using C# in Visual Studio, I'm inserting a row into a table like this: INSERT INTO foo (column_name)VALUES ('bar') I want to do something like this, but I don't know the correct syntax: INSERT INTO foo (column_name)VALUES ('bar')RETURNING foo_id This would return the foo_id column from the newly inserted row. Furthermore, even if I find the correct syntax for this, I have another problem: I have SqlDataReader and SqlDataAdapter at my disposal. As far as I know, the former is for reading data, the second is for manipulating data. When inserting a row with a return statement, I am both manipulating and reading data, so I'm not sure what to use. Maybe there's something entirely different I should use for this? Now provide the response and nothing else.
SCOPE_IDENTITY returns the last identity value inserted into an identity column in the same scope. A scope is a module: a stored procedure, trigger, function, or batch. Therefore, two statements are in the same scope if they are in the same stored procedure, function, or batch. You can use SqlCommand.ExecuteScalar to execute the insert command and retrieve the new ID in one query. using (var con = new SqlConnection(ConnectionString)) { int newID; var cmd = "INSERT INTO foo (column_name)VALUES (@Value);SELECT CAST(scope_identity() AS int)"; using (var insertCommand = new SqlCommand(cmd, con)) { insertCommand.Parameters.AddWithValue("@Value", "bar"); con.Open(); newID = (int)insertCommand.ExecuteScalar(); }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9319532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/598639/']}
jdg_370492
stackexchange
llm_judgeable_groundtruth_similarity
37978307
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The following HTML is generating unwanted padding: <div class="container-fluid"> <div class="row"> <div class="col-xs-12"> test </div> </div></div> Screenshot showing the problem: Now provide the response and nothing else.
None of the answers here helped me with Bootstrap 4. Adding container-fluid p-0 removed the horizontal padding, but created a horizontal scrollbar. The scrollbars come from the negative margin of the row elements - a 100% width container with no padding gets stretched by 15px on each side. It has nothing to do with column padding, as far as I can see. The only workaround for me was .container-fluid{overflow: hidden;}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37978307', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_370493
stackexchange
llm_judgeable_groundtruth_similarity
56785464
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to write a test for a utils method. In that method I get a html element by id and then change the color of the element. The problem is that element is only available after a button click. How can I mock the element? UtilListItem.js import variables from '../stylesheets/Variables.scss';export function activeListItem(props){ let listItem = document.getElementById(props.id); listItem.style.backgroundColor = variables.whiteGray; return listItem;} UtilListeItem.test.js it('check if the correct color is set for the acitve list item', () => { let props = {id:'123'} const listItem = activeListItem(props); expect(listItem.style.backgroundColor).toBe('#ededed');}); error TypeError: Cannot read property 'style' of null Now provide the response and nothing else.
I'd suggest you use jest.spyOn . It's a really handy way to spy on a function and/or attach some mock behaviour. You can use it like this: import { activeListItem } from './utils';let spy;beforeAll(() => { spy = jest.spyOn(document, 'getElementById');});describe('activeListItem', () => { describe('with found element', () => { let mockElement; beforeAll(() => { // here you create the element that the mocked document.getElementById will return // it might be even without an id mockElement = document.createElement(....); spy.mockReturnValue(mockElement); }); // and then you could expect it to have the background it('should have the background applied', () => { expect(mockElement.style.backgroundColor).toBe('#ededed'); }); }); describe('without found element', () => { // and here you can create a scenario // when document.getElementById returns null beforeAll(() => { spy.mockReturnValue(null); }); // and expect your function not to throw an error it('should not throw an error', () => { expect(() => activeListItem({id:'123'})).not.toThrow(); }); });}); It's also a good idea to mock the .scss file, since it's a dependency of your utility file, so that when it changes it won't affect your unit test.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56785464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4996174/']}
jdg_370494
stackexchange
llm_judgeable_groundtruth_similarity
18640607
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been using arrays to store related fields for a long time. If I wanted to have related user fields, I used: $user = array( 'id' => 27, 'name' => 'Pepe'); But lately, I've been working a lot with objects, and I like using $user->id more than $user['id']. My question: To achieve an object oriented style, you may use stdClass: $user = new stdClass();$user->id = 27;$user->name = 'Pepe'; or casting from an array $user = (object) array( 'id' => 27, 'name' => 'Pepe'); Is one of them better than the other, in terms of performance and style, or can you use whichever you want interchangeably? Thanks! Update: I agree with all the comments, this is not OOP at all, it is just about having related data grouped into a structure. My $user example is not the best, because it's a typical example of using classes with methods, properties, blablabla... I asked because I have a lot of config structures, such as "initTable", and I want something like: $table => page => init => 1 => end => 25 sort => field => name => order => asc and so on, and I want to know which is better for getting the init page: $table->page->init **OR** $table['page']['init'] Now provide the response and nothing else.
Based on a small test ( http://phpfiddle.org/lite/code/cz0-hyf ) I can say that using "new stdClass()" is about 3 times slower than the other options. It is strange, but casting an array is done very efficiently compared to stdClass. Note that this test measures only execution time; it does not measure memory. P.S. I used phpFiddle only to share the code. The tests were done on my local PC.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18640607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1168804/']}
jdg_370495
stackexchange
llm_judgeable_groundtruth_similarity
53367195
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am receiving this message when I tried starting my react native app. Usually this kind of format works on other multi screen navigation yet somehow does not work in this case. Here is the error: Invariant Violation: The navigation prop is missing for this navigator. In react-navigation 3 you must set up your app container directly. More info: https://reactnavigation.org/docs/en/app-containers.html Here is my app format: import React, {Component} from 'react';import { StyleSheet, Text, View } from 'react-native';import { createStackNavigator } from 'react-navigation';import Login from './view/login.js'import SignUp from './view/signup.js'const RootStack = createStackNavigator( { Home: { screen: Login }, Signup: { screen: SignUp } }, { initialRouteName: 'Home' });export default class App extends React.Component { render() { return <RootStack />; }} Now provide the response and nothing else.
React Navigation 3.0 has a number of breaking changes including an explicit app container required for the root navigator. In the past, any navigator could act as the navigation container at the top-level of your app because they were all wrapped in “navigation containers”. The navigation container, now known as an app container, is a higher-order-component that maintains the navigation state of your app and handles interacting with the outside world to turn linking events into navigation actions and so on. In v2 and earlier, the containers in React Navigation are automatically provided by the create*Navigator functions. As of v3, you are required to use the container directly. In v3 we also renamed createNavigationContainer to createAppContainer. Also please note that if you are now using v4, navigators have been moved to a separate repo. You'll now need to install and import from 'react-navigation-stack' . For example import { createStackNavigator } from 'react-navigation-stack' The solution below is for v3. import { createStackNavigator, createAppContainer} from 'react-navigation';const MainNavigator = createStackNavigator({...});const App = createAppContainer(MainNavigator); A more comprehensive code example: import { createStackNavigator, createAppContainer } from 'react-navigation';import Login from './view/login.js'import SignUp from './view/signup.js'const RootStack = createStackNavigator({ Home: { screen: Login }, Signup: { screen: SignUp } });const App = createAppContainer(RootStack);export default App;
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/53367195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4967349/']}
jdg_370496
stackexchange
llm_judgeable_groundtruth_similarity
78361
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Mathematica 10 has introduced Associations, elsewhere known as hash tables or dictionaries. Is there Ordered equivalent like in Python, Java, .NET? The data structure should remember the order in which key-value is inserted. https://docs.python.org/2/library/collections.html http://docs.oracle.com/javase/6/docs/api/java/util/LinkedHashMap.html https://msdn.microsoft.com/en-us/library/system.collections.specialized.ordereddictionary(v=vs.110).aspx Now provide the response and nothing else.
That associations are ordered can be easily demonstrated. assoc = <|a -> x, b -> y, c -> z|>;assoc[d] = 42; assoc <|a -> x, b -> y, c -> z, d -> 42|> assoc[a] = w; assoc <|a -> w, b -> y, c -> z, d -> 42|> Since the key d did not exist, the key-value pair d -> 42 was added to the end of the association. On the other hand, since the key a did exist, its position was preserved.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/78361', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/9551/']}
jdg_370497
stackexchange
llm_judgeable_groundtruth_similarity
9254789
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How would one load a product model in Magento, if the product id is not available and only the product's url is? For example, I want to retrieve the product model from it's friendly url, such as electronics/cameras/olympus-stylus-750-7-1mp-digital-camera.html I found the following code in another post : $oRewrite = Mage::getModel('core/url_rewrite')->loadByRequestPath( $path); but it doesn't seem to work correctly. The Magento documentation is very lacking in this area; does anyone know how to accomplish this? Now provide the response and nothing else.
Here's an alternative solution. First use the URL rewrite model to find the route which matches your product: $vPath = 'electronics/cameras/olympus-stylus-750-7-1mp-digital-camera.html'; $oRewrite = Mage::getModel('core/url_rewrite') ->setStoreId(Mage::app()->getStore()->getId()) ->loadByRequestPath($vPath); Then you can call getProductId() on the route to locate the product's id: $iProductId = $oRewrite->getProductId(); Finally, if you require the product model object itself, it's then a simple matter to call: $oProduct = Mage::getModel('catalog/product')->load($iProductId); The main difference between the above and the code example you've posted is the call to setStoreId. The same product may have different URLs depending on which store it's in, so the routing component needs to have the appropriate store context before it can locate the product to display. The advantage of this over Zachary Schuessler's solution is that using the URL rewriter will locate the correct product every time if the trailing portions of the URL are the same for different products (e.g. folder1/my-product-name and folder2/my-product-name are different products). Using the URL rewriter also works in situations where "folder1/my-product" refers to different products on different stores. This may or may not apply to your environment.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9254789', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/623854/']}
jdg_370498
stackexchange
llm_judgeable_groundtruth_similarity
39609791
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a simple calculator app and currently struggling at deleting the last character when a my button is tapped. I'm using the dropLast() method but I keep getting the error Missing Argument for parameter #1 in call @IBAction func onDelPressed (button: UIButton!) { runningNumber = runningNumber.characters.dropLast() currentLbl.text = runningNumber} Now provide the response and nothing else.
Swift 4 (Addendum) In Swift, you can apply dropLast() directly on the String instance, no longer invoking .characters to access a CharacterView of the String : var runningNumber = "12345"runningNumber = String(runningNumber.dropLast())print(runningNumber) // 1234 Swift 3 (Original answer) I'll assume runningNumber is a String instance. In this case, runningNumber.characters.dropLast() is not of type String , but a CharacterView : let foo = runningNumber.characters.dropLast()print(type(of: foo)) // CharacterView You need to use the CharacterView to instantiate a String instance prior to assigning it back to a property of type String , e.g. var runningNumber = "12345"runningNumber = String(runningNumber.characters.dropLast())print(runningNumber) // 1234 I.e., for your case @IBAction func onDelPressed (button: UIButton!) { runningNumber = String(runningNumber.characters.dropLast()) currentLbl.text = runningNumber}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39609791', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6842677/']}
jdg_370499
stackexchange
llm_judgeable_groundtruth_similarity
58724695
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This feels like it should be an easy one. How do I get the latest dates that are in different columns DROP TABLE #indebtednessCREATE TABLE #indebtedness (call_case CHAR(10), date1 DATETIME, date2 DATETIME, date3 DATETIME)INSERT #indebtedness VALUES ('Key1', '2019-10-30', '2019-11-30', '2019-10-25')INSERT #indebtedness VALUES ('Key2', '2019-10-20', '2019-10-30', '2019-10-15')INSERT #indebtedness VALUES ('Key3', '2019-11-11', '2019-10-29', '2019-10-30')INSERT #indebtedness VALUES ('Key4', null , '2019-10-29', '2019-10-13')select call_case, ?? AS 'Latest Date' from #indebtedness I would like the result to be: call_case Latest DateKey1 2019-11-30 Key2 2019-10-30 Key3 2019-11-11 Key4 2019-10-29 Now provide the response and nothing else.
Use a CASE expression: SELECT call_case, CASE WHEN date1 > date2 AND date1 > date3 THEN date1 WHEN date2 > date3 THEN date2 ELSE date3 END AS [Latest Date]FROM #indebtedness; Demo Note that some databases, such as MySQL, Oracle, and PostgreSQL, support a scalar GREATEST function. SQL Server does not, so we can use a CASE expression as a workaround. Edit: It appears that in your actual table, one or more of the three date columns could have NULL values. We can adapt the above query as follows: SELECT call_case, CASE WHEN (date1 > date2 OR date2 IS NULL) AND (date1 > date3 OR date3 IS NULL) THEN date1 WHEN date2 > date3 OR date3 IS NULL THEN date2 ELSE date3 END AS [Latest Date]FROM #indebtedness; Demo
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58724695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11867031/']}
jdg_370500
stackexchange
llm_judgeable_groundtruth_similarity
4340992
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to compose a function of type (Floating a) => a -> a -> a with a function of type (Floating a) => a -> a to obtain a function of type (Floating a) => a -> a -> a . I have the following code: test1 :: (Floating a) => a -> a -> atest1 x y = xtest2 :: (Floating a) => a -> atest2 x = xtestBoth :: (Floating a) => a -> a -> atestBoth = test2 . test1--testBoth x y = test2 (test1 x y) However, when I compile it in GHCI, I get the following error: /path/test.hs:8:11: Could not deduce (Floating (a -> a)) from the context (Floating a) arising from a use of `test2' at /path/test.hs:8:11-15 Possible fix: add (Floating (a -> a)) to the context of the type signature for `testBoth' or add an instance declaration for (Floating (a -> a)) In the first argument of `(.)', namely `test2' In the expression: test2 . test1 In the definition of `testBoth': testBoth = test2 . test1Failed, modules loaded: none. Note that the commented-out version of testBoth compiles. The strange thing is that if I remove the (Floating a) constraints from all type signatures or if I change test1 to just take x instead of x and y , testBoth compiles. I've searched StackOverflow, Haskell wikis, Google, etc. and not found anything about a restriction on function composition relevant to this particular situation. Does anyone know why this is happening? Now provide the response and nothing else.
\x y -> test2 (test1 x y)== \x y -> test2 ((test1 x) y)== \x y -> (test2 . (test1 x)) y== \x -> test2 . (test1 x)== \x -> (test2 .) (test1 x)== \x -> ((test2 .) . test1) x== (test2 .) . test1 These two things are not like each other. test2 . test1== \x -> (test2 . test1) x== \x -> test2 (test1 x)== \x y -> (test2 (test1 x)) y== \x y -> test2 (test1 x) y So the composition you actually want is testBoth = (test2 .) . test1 , not test2 . test1 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4340992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/528678/']}
jdg_370501
stackexchange
llm_judgeable_groundtruth_similarity
1772695
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I tried this code which I found here: Intent intent = new Intent(Intent.ACTION_SENDTO, Uri.fromParts("mailto", "testemail@gmail.com", null)); startActivity(intent); But I get a message on the screen which reads "Unsupported Action". Any ideas of how to get this working? Now provide the response and nothing else.
Try this snippet by dylan : /* Create the Intent */final Intent emailIntent = new Intent(android.content.Intent.ACTION_SEND);/* Fill it with Data */emailIntent.setType("plain/text");emailIntent.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{"to@email.com"});emailIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, "Subject");emailIntent.putExtra(android.content.Intent.EXTRA_TEXT, "Text");/* Send it off to the Activity-Chooser */context.startActivity(Intent.createChooser(emailIntent, "Send mail...")); Key pieces: using EXTRA_EMAIL for your addresses and using createChooser() in case the user has more than one email client configured.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1772695', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/215723/']}
jdg_370502
stackexchange
llm_judgeable_groundtruth_similarity
22803096
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am writing a script to alter all functions of postgres(changing owner of each function). I am able to list down all the function names using postgres query but not able to list parameters for each of those functions. My problem will be resolved if I get solution for any of the below mentioned problems: Is there any way to list down parameters' data type in each of the function. Do we have any approach to alter functions where instead of passing parameter type can I send some wild card.For Example can I write ALTER FUNCTION schemaname.func(text) OWNER TO 'newowner'as :ALTER FUNCTION schemaname.func(*) OWNER TO 'newowner'. Now provide the response and nothing else.
Is there any way to list down parameters' data type in each of the function. Yes, use the pg_get_function_identity_arguments() function: The following will create a SQL script to alter all functions from the someschema schema: select 'alter function '||nsp.nspname||'.'||p.proname||'('||pg_get_function_identity_arguments(p.oid)||') owner to newowner;'from pg_proc p join pg_namespace nsp ON p.pronamespace = nsp.oidwhere nsp.nspname = 'someschema'; You can spool the output of that into a file and then run that generated script. If you have function names that would require quoting, you probably need to use quote_ident to concatenate the function names. You can wrap all that into a function and use dynamic SQL to make life easier if you need this on a regular basis.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22803096', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1728310/']}
jdg_370503
stackexchange
llm_judgeable_groundtruth_similarity
881388
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to create a daemon in python. I've found the following question , which has some good resources in it which I am currently following, but I'm curious as to why a double fork is necessary. I've scratched around google and found plenty of resources declaring that one is necessary, but not why. Some mention that it is to prevent the daemon from acquiring a controlling terminal. How would it do this without the second fork? What are the repercussions? Now provide the response and nothing else.
Looking at the code referenced in the question, the justification is: Fork a second child and exit immediately to prevent zombies. This causes the second child process to be orphaned, making the init process responsible for its cleanup. And, since the first child is a session leader without a controlling terminal, it's possible for it to acquire one by opening a terminal in the future (System V- based systems). This second fork guarantees that the child is no longer a session leader, preventing the daemon from ever acquiring a controlling terminal. So it is to ensure that the daemon is re-parented onto init (just in case the process kicking off the daemon is long lived), and removes any chance of the daemon reacquiring a controlling tty. So if neither of these cases apply, then one fork should be sufficient. " Unix Network Programming - Stevens " has a good section on this.
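For reference, here is a minimal Python sketch of the double-fork sequence described above (error handling and the usual stdin/stdout/stderr redirection are omitted):
import os
import sys

def daemonize():
    # First fork: the original parent exits, so the caller gets control back
    # and the child is guaranteed not to be a process group leader.
    if os.fork() > 0:
        sys.exit(0)

    os.setsid()  # start a new session with no controlling terminal

    # Second fork: the session leader exits, so the remaining process can
    # never reacquire a controlling terminal and is adopted by init.
    if os.fork() > 0:
        sys.exit(0)

    os.chdir("/")
    os.umask(0)
    # a real daemon would also close/redirect standard file descriptors here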
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/881388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15004/']}
jdg_370504
stackexchange
llm_judgeable_groundtruth_similarity
16889447
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Under Mobile Safari, is it possible to allow one absolutely positioned div to scroll without allowing the entire page to bob up and down when it the scroll reaches the edges (elastically scrolling)? Here is a minimal working example of the issue I'm facing: <!doctype html><html><head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no"> <meta name="apple-mobile-web-app-capable" content="yes" /> <style> * { margin: 0; padding: 0; box-sizing: border-box; } #a, #b { position: absolute; top: 0; left: 0; height: 100%; padding: 10px; overflow: auto; } #a { width: 80px; background: #f00; } #b { background: #00f; left: 80px; width: 100%; } </style> <script src="http://code.jquery.com/jquery-1.10.1.min.js"></script> <script> function pdcb(e) { e.preventDefault(); } function npcb(e) { e.stopPropagation(); } $(document).on('touchstart touchmove', pdcb). on('touchstart touchmove', '.scrollable', npcb); </script></head><body> <div id="a" class="scrollable"> This<br> should<br> be<br> scrollable<br> but<br> not<br> scroll<br> the<br> whole<br> page<br> This<br> should<br> be<br> scrollable<br> but<br> not<br> scroll<br> the<br> whole<br> page<br> This<br> should<br> be<br> scrollable<br> but<br> not<br> scroll<br> the<br> whole<br> page<br> This<br> should<br> be<br> scrollable<br> but<br> not<br> scroll<br> the<br> whole<br> page<br> This<br> should<br> be<br> scrollable<br> but<br> not<br> scroll<br> the<br> whole<br> page<br> </div> <div id="b"> this should never scroll </div></body></html> Solution: $(document).on('touchmove', function(e) { e.preventDefault();}).ready(function() { $(".scrollable").on('touchstart', function(e) { this.allowUp = (this.scrollTop > 0); this.allowDown = (this.scrollTop < this.scrollHeight - this.clientHeight); this.prevTop = null; this.prevBot = null; this.lastY = e.originalEvent.pageY; }).on('touchmove', function(e) { var event = e.originalEvent; var up = (event.pageY > this.lastY), down = !up; this.lastY = event.pageY; if ((up && this.allowUp) || (down && this.allowDown)) event.stopPropagation(); else event.preventDefault(); });}); Now provide the response and nothing else.
While you're not hitting the edges of your div's content, you need to allow the native touchmove event to work on that element (so it can scroll), but you're going to want to stop the event from bubbling up the DOM so that it doesn't trigger scrolling on the page body . When you hit the boundary of your element, you need to prevent the native momentum scrolling entirely . The code I use for this is as follows ( apologies to the original author, this is adapted from a tutorial on this topic I found somewhere on the internet in the past... Can't seem to find the URL now though ): where elem is your DOM node elem.addEventListener('touchstart', function(event){ this.allowUp = (this.scrollTop > 0); this.allowDown = (this.scrollTop < this.scrollHeight - this.clientHeight); this.prevTop = null; this.prevBot = null; this.lastY = event.pageY;});elem.addEventListener('touchmove', function(event){ var up = (event.pageY > this.lastY), down = !up; this.lastY = event.pageY; if ((up && this.allowUp) || (down && this.allowDown)) event.stopPropagation(); else event.preventDefault();}); I usually define an array of elements and loop through them - applying this code to each one iteratively. Best of luck, hope this helps.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16889447', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/251162/']}
jdg_370505
stackexchange
llm_judgeable_groundtruth_similarity
20095
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This question is fairly basic to those in this community... Is 192.168.X.X/24 (255.255.255.0) a class C private address? If so, can someone explain how is it possible to have a subnet mask greater than 255.255.0.0 ? I thought 255.255.0.0 was the default/natural mask of a class C private? I have some understanding that CIDR is different from classful. In this example, however, my professor went on to subnet the networks, separating them by 192.168.10.0 and 192.168.20.0 which made no sense to me considering the original question was written with /24 . If it is of the private class C range, why would the 3rd octet be usable? While I know this question is outdated, I'm studying for Net+/CCNA so I need to know, unfortunately. Now provide the response and nothing else.
Is 192.168.X.X/24 (255.255.255.0) a class C private address? You got it, and you usually see it like this because it is the default on so many home routers. If so, can someone explain how is it possible to have a subnet mask greater than 255.255.0.0? I thought 255.255.0.0 was the default/natural mask of a class C private? Greater as in 255.0.0.0 or 255.255.255.0? It's possible because you have 4 octets in an IPv4 address, which gives you 4 octets to mask. In this example, however, my professor went on to subnet the networks, separating them by 192.168.10.0 and 192.168.20.0 which made no sense to me considering the original question was written with /24. Ahh, this is where you got confused. A subnet mask tells you what part of an IP address is the network part. So if you have 192.168.XXX.XXX /16 or 255.255.0.0, we can see that only the first 2 octets are part of the network address and the last 2 octets are part of the host address. So any IPs that start with 192.168 are now part of the same network, and anything in the third octet is only used to identify a host. So, if you take 192.168.10.xxx and 192.168.20.xxx with /16, they are in the same network (192.168.xxx.xxx). If you change the mask to /24 (255.255.255.0), the first 3 octets are now network bits and have to match. So for 192.168.10.xxx and 192.168.20.xxx, 10 and 20 don't match, they are different networks, and only the last octet is used to identify hosts on the network (in 255.255.255.0, the 255s are for the network, the 0 is for hosts). This gets much more advanced, so get a good understanding of it now while you can. For instance, you can use an ACL to route traffic to one host only, say 192.168.10.123: all you have to do is set the netmask to 255.255.255.255 and it will make sure it only matches the host that matches every number in every octet. If it is of the private class C range, why would the 3rd octet be usable? Now you know it's because of the netmask /24 =)
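If it helps to see this concretely, here is a small Python illustration using the standard ipaddress module (not part of the original answer, just a way to check the arithmetic):
import ipaddress

a = ipaddress.ip_interface("192.168.10.5/24")
b = ipaddress.ip_interface("192.168.20.5/24")
print(a.network, b.network)        # 192.168.10.0/24 192.168.20.0/24
print(a.network == b.network)      # False -> different networks under /24

a16 = ipaddress.ip_interface("192.168.10.5/16")
b16 = ipaddress.ip_interface("192.168.20.5/16")
print(a16.network, b16.network)    # 192.168.0.0/16 192.168.0.0/16
print(a16.network == b16.network)  # True -> same network under /16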
{}
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/20095', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/16726/']}
jdg_370506
stackexchange
llm_judgeable_groundtruth_similarity
57120
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A,B \subset \mathbb{R}$ and $m^*(A)=m^*(B)=1$ and $m^*(A\cup B)=2$. Prove that $m^*(A\cap B)=0$. I tried every way I can think of but I do not know how to figure this out. Only properties that I am aware of are monotonicity, countable subadditivity, the outer measure of empty set is zero, translation invariant. Thank you in advance. Now provide the response and nothing else.
Lebesgue outer measure satisfies the inequality $m^* (A \cup B) + m^* (A \cap B) \le m^* (A) +m^* (B)$. Substitute to get the desired result. The above inequality should not be difficult to prove no matter how you define Lebesgue outer measure. Basically it's an approximation argument. For instance, if you already know that $m^* (A)$ is the infimum of $m(G)$ over all open sets $G$ containing $A$, then you just pick an $\varepsilon >0$, take open sets $G_A$ and $G_B$ which $\varepsilon$-approximate $A$ and $B$ (in the above sense) and then $G_A \cup G_B$ and $G_A \cap G_B$ will approximate well $A \cup B$ and $A \cap B$ (respectively). Since Lebesgue measure is additive on open sets (and more generally, on measurable sets), the inequality will then follow in a straightforward manner. Similarly, you can prove that Lebesgue inner measure satisfies the reverse inequality.
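To spell the substitution out (this is just the arithmetic the answer alludes to): since $m^*(A) = m^*(B) = 1$ and $m^*(A \cup B) = 2$, the inequality gives $$2 + m^*(A \cap B) = m^*(A \cup B) + m^*(A \cap B) \le m^*(A) + m^*(B) = 2,$$ so $m^*(A \cap B) \le 0$; combined with the nonnegativity of outer measure, this forces $m^*(A \cap B) = 0$.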
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/57120', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13755/']}
jdg_370507
stackexchange
llm_judgeable_groundtruth_similarity
174933
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently needed an updated version of Python3 for a project, so I built it from source; and I believe I made a bit of a mess. All apt based operations now end in an error here: (Reading database ... 320897 files and directories currently installed.)Removing nvidia-prime (0.6.2) ...Traceback (most recent call last): File "/usr/bin/lsb_release", line 28, in <module> import lsb_releaseImportError: No module named 'lsb_release'dpkg: error processing package nvidia-prime (--remove): subprocess installed post-removal script returned error exit status 1Errors were encountered while processing: nvidia-primeE: Sub-process /usr/bin/dpkg returned an error code (1) I believe the nvidia error is just more of a symptom than a problem. This was discovered when I was trying to add a source and was met with: sudo: add-apt-repository: command not found I'd gladly upgrade this box to 14.10, but all upgrade based commands return the same lsb_release message. Any advice on restoring my package management abilities? Edit:Updating with python path info lars@whorus:~/Downloads/Python-3.4.2$ ls -l /usr/bin/python*lrwxrwxrwx 1 root root 9 Dec 18 10:36 /usr/bin/python -> python2.7lrwxrwxrwx 1 root root 9 Apr 18 2014 /usr/bin/python2 -> python2.7-rwxr-xr-x 1 root root 3349512 Mar 22 2014 /usr/bin/python2.7lrwxrwxrwx 1 root root 9 Mar 23 2014 /usr/bin/python3 -> python3.4-rwxr-xr-x 2 root root 4061272 Apr 11 2014 /usr/bin/python3.4-rwxr-xr-x 2 root root 4061272 Apr 11 2014 /usr/bin/python3.4mlrwxrwxrwx 1 root root 10 Mar 23 2014 /usr/bin/python3m -> python3.4m Now provide the response and nothing else.
Ubuntu 14.04 has the lsb_release.py file installed for Python 2.7 as well, and lsb_release seems to work under python2.7 too. You can try this by doing: python2.7 /usr/bin/lsb_release If that works, make a backup of the file /usr/bin/lsb_release and change the first line to read: #! /usr/bin/python2.7 (you can experiment with the -Es options; I would leave them out initially). Once you can run apt-get again, reinstall python3 and its dependencies. You can determine the direct dependencies by using apt-cache depends python3 and use apt-rdepends or reverse-depends (both have to be installed) to get dependencies recursively.
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/174933', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/95414/']}
jdg_370508
stackexchange
llm_judgeable_groundtruth_similarity
10921058
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need a regex expression that will match the following: .50.51.51234 but NOT 0.5.5absnd (any letter character or space) I have this that satisfies all but 0.5.5 ^[.?\d]+$ Now provide the response and nothing else.
This is a fairly common task. The simplest way I know of to deal with it is this: ^[+-]?(\d*\.)?\d+$ There are also other complications, such as whether you want to allow leading zeroes or commas or things like that. This can be as complicated as you want it to be. For example, if you want to allow the 1,234,567.89 format, you can go with this: ^[+-]?(\d*|\d{1,3}(,\d{3})*)(\.\d+)?\b$ That \b there is a word break, but I'm using it as a sneaky way to require at least one numeral at the end of the string. This way, an empty string or a single + won't match. However, be advised that regexes are not the ideal way to parse numeric strings. All modern programming languages I know of have fast, simple, built-in methods for doing that.
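A quick illustration (a Python sketch using the patterns above verbatim; adapt the test strings to your own cases):
import re

simple = re.compile(r'^[+-]?(\d*\.)?\d+$')
grouped = re.compile(r'^[+-]?(\d*|\d{1,3}(,\d{3})*)(\.\d+)?\b$')

for s in ['.5', '0.5', '-3.14', '42', '0.5.5', 'absnd', '+']:
    print(s, bool(simple.match(s)))
# '.5', '0.5', '-3.14', '42' match; '0.5.5', 'absnd', '+' do not

print(bool(grouped.match('1,234,567.89')))  # True, comma-grouped format accepted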
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10921058', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1043314/']}
jdg_370509
stackexchange
llm_judgeable_groundtruth_similarity
6460
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Are there positively charged ligands which can bind to a central metal atom to form coordination compounds? My thoughts: I know that ligands are Lewis bases which donate a pair of electrons, and the central metal atom is usually a Lewis acid. However, a positive charge on a species usually means it is electron-deficient. In that case, there would be no suitable electrons for it to bond to the central metal atom. So, even if positively charged ligands do exist, I think there would be very few. In such cases, it might bond with central atom which is electron rich forming the coordinate bond in the opposite way. Is that possible? Now provide the response and nothing else.
I would answer as a tentative yes. Your reasoning is correct that a ligand, which behaves as a Lewis base, is unlikely to have a positive charge (and therefore be electron deficient). The earliest reference to a positively-charged ion behaving as a ligand (that I've found) is from the journal Inorganic Chemistry (1969, volume 8 page 2331) which describes the synthesis and characterization of Cu(II) and Mn(II) complexes containing a N-methyldabconium cation: In this case the ligand is positively charged, but there is also a lone pair on the non-methylated nitrogen that is most likely forming the coordinative covalent bond with the metal center. It is this property (a sufficiently large ligand that has a localized positive charge that is remote from a region with Lewis base character) that would result in a positively charged ligand, and we don't need to invoke a switching of the metal roles (where the metal becomes the Lewis base) as you suggest. That said, one of the exciting aspects of Inorganic Chemistry is the constant effort to "break the rules".
{}
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/6460', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/2196/']}
jdg_370510
stackexchange
llm_judgeable_groundtruth_similarity
36313317
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Added facebook pixel like they showed here here Now getting an error: fbevents.js:9 Facebook Pixel Error: Duplicate Pixel ID: some-pixel-id Cannot understand what causes this error. Even when I run the code from their example with removed pixel code from html in the console I get the same result. Anyone knows what that means and how to solve this error ? Now provide the response and nothing else.
I found the solution on this page, actually. It's because every time you send an event, the code is initialized with the same pixel id, while initialization should happen only once. This fix works. You can verify that by using the Facebook Pixel Helper extension in Chrome. if(typeof fbq === 'undefined') { !function(f,b,e,v,n,t,s){if(f.fbq)return;n=f.fbq=function(){n.callMethod? n.callMethod.apply(n,arguments):n.queue.push(arguments)};if(!f._fbq)f._fbq=n; n.push=n;n.loaded=!0;n.version='2.0';n.queue=[];t=b.createElement(e);t.async=!0; t.src=v;s=b.getElementsByTagName(e)[0];s.parentNode.insertBefore(t,s)}(window, document,'script','https://connect.facebook.net/en_US/fbevents.js'); fbq('init', '123123123213121'); fbq('track', 'PageView'); fbq('track', '{{fBPixelType}}'); } else { fbq('track', '{{fBPixelType}}'); }
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36313317', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3030350/']}
jdg_370511
stackexchange
llm_judgeable_groundtruth_similarity
38811140
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to set up a private Docker Registry and save images to AWS S3 instance. The Registry seems to be working fine -- it starts up ok and I can authenticate to it over https. The problem I'm having is that I'm getting an error saving to S3, so I assume there is some permission problem with the S3 IAM policy. The docker run command looks like this: docker run -p 443:5000 \ --link redis:redis \ -e REGISTRY_STORAGE=s3 \ -e REGISTRY_STORAGE_S3_BUCKET=my-docker-registry \ -e REGISTRY_STORAGE_S3_ACCESSKEY=**** \ -e REGISTRY_STORAGE_S3_SECRETKEY=**** \ -e REGISTRY_STORAGE_S3_REGION=us-east-1 \ -v `pwd`/auth:/auth \ -e REGISTRY_AUTH=htpasswd \ -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \ -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ -v `pwd`/certs:/certs \ -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/my.com_chain.crt \ -e REGISTRY_HTTP_TLS_KEY=/certs/my.com.key \ -e REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR=redis \ -e REGISTRY_REDIS_ADDR=redis:6379 \ registry:2.5 And the S3 IAM policy looks like this: { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:ListAllMyBuckets" ], "Resource":"arn:aws:s3:::*" }, { "Effect":"Allow", "Action":[ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource":"arn:aws:s3:::my-docker-registry" }, { "Effect":"Allow", "Action":[ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource":"arn:aws:s3:::my-docker-registry/*" } ]} The error log entry is: level=error msg="error resolving upload: s3aws: AccessDenied: Access Denied\n\tstatus code: 403, request id: 2B224..." auth.user.name=my-user go.version=go1.6.3 http.request.host=my.domain.com http.request.id=13b79c07-... http.request.method=PATCH http.request.remoteaddr="xx.xx.xx.xx:41392" http.request.uri="/v2/my-test/blobs/uploads/467d94ea-2a77...?_state=zQd-..." http.request.useragent="docker/1.12.0 go/go1.6.3 git-commit/8eab123 kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))" instance.id=8a8db6f1-8fe4 vars.name=my-test vars.uuid=467d94ea-2a77 version=v2.5.0 I've used a similar policy for file uploads in other apps, so I'm not sure where the problem is. What do I need to change in the IAM policy to allow the registry to save to the S3 bucket? Now provide the response and nothing else.
I figured it out - not sure if something changed with how Docker saves image files but it seems you now need to add s3:ListBucketMultipartUploads to the bucket-level permission (middle block below, IAM shown in full for completeness): { "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:ListAllMyBuckets" ], "Resource":"arn:aws:s3:::*" }, { "Effect":"Allow", "Action":[ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource":"arn:aws:s3:::my-docker-registry" }, { "Effect":"Allow", "Action":[ "s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload" ], "Resource":"arn:aws:s3:::my-docker-registry/*" } ]} Seems to work well now. Next steps are to create a docker-compose file with the above docker run args, add a redis container to that, and it's a full private registry solution.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38811140', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/786662/']}
jdg_370512
stackexchange
llm_judgeable_groundtruth_similarity
18618333
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently working with Boost.Python and would like some help to solve a tricky problem. Context When a C++ method/function is exposed to Python, it needs to release the GIL (Global Interpreter Lock) to let other threads use the interpreter. This way, when the python code calls a C++ function, the interpreter can be used by other threads.For now, each C++ function looks like this: // module.cppint myfunction(std::string question){ ReleaseGIL unlockGIL; return 42;} To pass it to boost python, I do: // python_exposure.cppBOOST_PYTHON_MODULE(PythonModule){ def("myfunction", &myfunction);} Problem This scheme works fine, however it implies that module.cpp depends on Boost.Python for no good reason. Ideally, only python_exposure.cpp should depend on Boost.Python . Solution? My idea was to play with Boost.Function to wrap the function calls like this: // python_exposure.cppBOOST_PYTHON_MODULE(PythonModule){ def("myfunction", wrap(&myfunction));} Here wrap would be in charge of unlocking the GIL during the call to myfunction . The problem with this method is that wrap needs to have the same signature as myfunction which would pretty much mean re-implementing Boost.Function ... I would be very thankful if someone had any suggestion to this problem. Now provide the response and nothing else.
Exposing functors as methods is not officially supported . The supported approach would be to expose a non-member function that delegates to the member-function. However, this can result in a large amount of boilerplate code. As best as I can tell, Boost.Python's implementation does not explicitly preclude functors, as it allows for instances of python::object to be exposed as a method. However, Boost.Python does place some requirements on the type of object being exposed as a method: The functor is CopyConstructible. The functor is callable. I.e. instance o can be called o(a1, a2, a3) . The call signature must be available as meta-data during runtime. Boost.Python calls the boost::python::detail::get_signature() function to obtain this meta-data. The meta-data is used internally to setup proper invocation, as well as for dispatching from Python to C++. The latter requirement is where it gets complex. For some reason that is not immediately clear to me, Boost.Python invokes get_signature() through a qualified-id, preventing argument dependent lookup. Therefore, all candidates for get_signature() must be declared before the calling template's definition context. For example, the only overloads for get_signature() that are considered are those declared before the definition of templates that invoke it, such as class_ , def() , and make_function() . To account for this behavior, when enabling a functor in Boost.Python, one must provide a get_signature() overload prior to including Boost.Python or explicitly provide a meta-sequence representing the signature to make_function() . Lets work through some examples of enabling functor support, as well as providing functors that support guards. I have opted to not use C++11 features. As such, there will be some boilerplate code that could be reduced with variadic templates. Additionally, all of the examples will use the same model that provides two non-member functions and a spam class that has two member-functions: /// @brief Mockup class with member functions.class spam{public: void action() { std::cout << "spam::action()" << std::endl; } int times_two(int x) { std::cout << "spam::times_two()" << std::endl; return 2 * x; }};// Mockup non-member functions.void action(){ std::cout << "action()" << std::endl;}int times_two(int x){ std::cout << "times_two()" << std::endl; return 2 * x;} Enabling boost::function When using the preferred syntax for Boost.Function, decomposing the signature into meta-data that meets Boost.Python requirements can be done with Boost.FunctionTypes . Here is a complete example enabling boost::function functors to be exposed as a Boost.Python method: #include <iostream>#include <boost/function.hpp>#include <boost/function_types/components.hpp>namespace boost {namespace python {namespace detail {// get_signature overloads must be declared before including// boost/python.hpp. The declaration must be visible at the// point of definition of various Boost.Python templates during// the first phase of two phase lookup. 
Boost.Python invokes the// get_signature function via qualified-id, thus ADL is disabled./// @brief Get the signature of a boost::function.template <typename Signature>inline typename boost::function_types::components<Signature>::typeget_signature(boost::function<Signature>&, void* = 0){ return typename boost::function_types::components<Signature>::type();}} // namespace detail} // namespace python} // namespace boost#include <boost/python.hpp>/// @brief Mockup class with member functions.class spam{public: void action() { std::cout << "spam::action()" << std::endl; } int times_two(int x) { std::cout << "spam::times_two()" << std::endl; return 2 * x; }};// Mockup non-member functions.void action(){ std::cout << "action()" << std::endl;}int times_two(int x){ std::cout << "times_two()" << std::endl; return 2 * x;}BOOST_PYTHON_MODULE(example){ namespace python = boost::python; // Expose class and member-function. python::class_<spam>("Spam") .def("action", &spam::action) .def("times_two", boost::function<int(spam&, int)>( &spam::times_two)) ; // Expose non-member function. python::def("action", &action); python::def("times_two", boost::function<int()>( boost::bind(&times_two, 21)));} And its usage: >>> import example>>> spam = example.Spam()>>> spam.action()spam::action()>>> spam.times_two(5)spam::times_two()10>>> example.action()action()>>> example.times_two()times_two()42 When providing a functor that will invoke a member-function, the provided signature needs to be the non-member function equivalent. In this case, int(spam::*)(int) becomes int(spam&, int) . // ... .def("times_two", boost::function<int(spam&, int)>( &spam::times_two)) ; Also, arguments can be bound to the functors with boost::bind . For example, calling example.times_two() does not have to provide an argument, as 21 is already bound to the functor. python::def("times_two", boost::function<int()>( boost::bind(&times_two, 21))); Custom functor with guards Expanding upon the above example, one can enable custom functor types to be used with Boost.Python. Lets create a functor, called guarded_function , that will use RAII , only invoking the wrapped function during the RAII object's lifetime. /// @brief Functor that will invoke a function while holding a guard./// Upon returning from the function, the guard is released.template <typename Signature, typename Guard>class guarded_function{public: typedef typename boost::function_types::result_type<Signature>::type result_type; template <typename Fn> guarded_function(Fn fn) : fn_(fn) {} result_type operator()() { Guard g; return fn_(); } // ... overloads for operator()private: boost::function<Signature> fn_;}; The guarded_function provides similar semantics to the Python with statement. Thus, to keep with the Boost.Python API name choices, a with() C++ function will provide a way to create functors. /// @brief Create a callable object with guards.template <typename Guard, typename Fn>boost::python::objectwith(Fn fn){ return boost::python::make_function( guarded_function<Guard, Fn>(fn), ...);} This allows for functions to be exposed which will run with a guard in a non-intrusive manner: class no_gil; // Guard// ... .def("times_two", with<no_gil>(&spam::times_two)) ; Additionally, the with() function provides the ability to deduce the function signatures, allowing the meta-data signature to be explicitly provided to Boost.Python rather than having to overload boost::python::detail::get_signature() . 
Here is the complete example, using two RAII types: no_gil : Releases GIL in constructor, and reacquires GIL in destructor. echo_guard : Prints in constructor and destructor. #include <iostream>#include <boost/function.hpp>#include <boost/function_types/components.hpp>#include <boost/function_types/function_type.hpp>#include <boost/function_types/result_type.hpp>#include <boost/python.hpp>#include <boost/tuple/tuple.hpp>namespace detail {/// @brief Functor that will invoke a function while holding a guard./// Upon returning from the function, the guard is released.template <typename Signature, typename Guard>class guarded_function{public: typedef typename boost::function_types::result_type<Signature>::type result_type; template <typename Fn> guarded_function(Fn fn) : fn_(fn) {} result_type operator()() { Guard g; return fn_(); } template <typename A1> result_type operator()(A1 a1) { Guard g; return fn_(a1); } template <typename A1, typename A2> result_type operator()(A1 a1, A2 a2) { Guard g; return fn_(a1, a2); }private: boost::function<Signature> fn_;};/// @brief Provides signature type.template <typename Signature>struct mpl_signature{ typedef typename boost::function_types::components<Signature>::type type;};// Support boost::function.template <typename Signature>struct mpl_signature<boost::function<Signature> >: public mpl_signature<Signature>{};/// @brief Create a callable object with guards.template <typename Guard, typename Fn, typename Policy>boost::python::object with_aux(Fn fn, const Policy& policy){ // Obtain the components of the Fn. This will decompose non-member // and member functions into an mpl sequence. // R (*)(A1) => R, A1 // R (C::*)(A1) => R, C*, A1 typedef typename mpl_signature<Fn>::type mpl_signature_type; // Synthesize the components into a function type. This process // causes member functions to require the instance argument. // This is necessary because member functions will be explicitly // provided the 'self' argument. // R, A1 => R (*)(A1) // R, C*, A1 => R (*)(C*, A1) typedef typename boost::function_types::function_type< mpl_signature_type>::type signature_type; // Create a callable boost::python::object that delegates to the // guarded_function. 
return boost::python::make_function( guarded_function<signature_type, Guard>(fn), policy, mpl_signature_type());}} // namespace detail/// @brief Create a callable object with guards.template <typename Guard, typename Fn, typename Policy>boost::python::object with(const Fn& fn, const Policy& policy){ return detail::with_aux<Guard>(fn, policy);}/// @brief Create a callable object with guards.template <typename Guard, typename Fn>boost::python::object with(const Fn& fn){ return with<Guard>(fn, boost::python::default_call_policies());}/// @brief Mockup class with member functions.class spam{public: void action() { std::cout << "spam::action()" << std::endl; } int times_two(int x) { std::cout << "spam::times_two()" << std::endl; return 2 * x; }};// Mockup non-member functions.void action(){ std::cout << "action()" << std::endl;}int times_two(int x){ std::cout << "times_two()" << std::endl; return 2 * x;}/// @brief Guard that will unlock the GIL upon construction, and/// reacquire it upon destruction.struct no_gil{public: no_gil() { state_ = PyEval_SaveThread(); std::cout << "no_gil()" << std::endl; } ~no_gil() { std::cout << "~no_gil()" << std::endl; PyEval_RestoreThread(state_); }private: PyThreadState* state_;};/// @brief Guard that prints to std::cout.struct echo_guard { echo_guard() { std::cout << "echo_guard()" << std::endl; } ~echo_guard() { std::cout << "~echo_guard()" << std::endl; }};BOOST_PYTHON_MODULE(example){ namespace python = boost::python; // Expose class and member-function. python::class_<spam>("Spam") .def("action", &spam::action) .def("times_two", with<no_gil>(&spam::times_two)) ; // Expose non-member function. python::def("action", &action); python::def("times_two", with<boost::tuple<no_gil, echo_guard> >( &times_two));} And its usage: >>> import example>>> spam = example.Spam()>>> spam.action()spam::action()>>> spam.times_two(5)no_gil()spam::times_two()~no_gil()10>>> example.action()action()>>> example.times_two(21)no_gil()echo_guard()times_two()~echo_guard()~no_gil()42 Notice how multiple guards can be provided by using a container type, such as boost::tuple : python::def("times_two", with<boost::tuple<no_gil, echo_guard> >( &times_two)); When invoked in Python, example.times_two(21) produces the following output: no_gil()echo_guard()times_two()~echo_guard()~no_gil()42
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18618333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/970787/']}
jdg_370513
stackexchange
llm_judgeable_groundtruth_similarity
3365326
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What would be the easiest way to compare multiple arrays, and remove duplicates? So (arrays inside arrays in this case)... a = [[2, 1], [3, 3], [7, 2], [5, 6]] b = [[2, 1], [6, 7], [9, 9], [4, 3]] c = [[2, 1], [1, 1], [2, 2], [9, 9]] d = [[2, 1], [9, 9], [2, 2], [3, 1]] ...would come out (with priority given to array a, then b, then c, then d) a = [[2, 1], [3, 3], [7, 2], [5, 6]] b = [[6, 7], [9, 9], [4, 3]] c = [[1, 1], [2, 2]] d = [[3, 1]] Now provide the response and nothing else.
It's just set difference or subtraction and you can write it as such. Operator overloading can be bliss :) a is what it is. a[[2, 1], [3, 3], [7, 2], [5, 6]]b = b - a[[6, 7], [9, 9], [4, 3]]c = c - b - a # or c - (a + b)[[1, 1], [2, 2]]d = d - c - b - a # or d - (a + b + c)[[3, 1]]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3365326', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/405983/']}
jdg_370514
stackexchange
llm_judgeable_groundtruth_similarity
907498
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The plausible looking "identity" $$\sin(\frac{\pi}{51})+\cos(\frac{\pi}{74})=\frac{3}{2\sqrt 2}$$ is not true, but it is close indeed: $$LHS=1.0606\color{blue}{598...}$$$$RHS=1.0606\color{red}{601...}$$ In fact the difference, on the order of $10^{-7}$, is small enough to fool some cheap calculators. Is there any way to disprove this identity without relying on a (good) calculator? Now provide the response and nothing else.
First note that $[\mathbb{Q}(\sin\tfrac{\pi}{51}):\mathbb{Q}]=32$ so $[\mathbb{Q}(\sin\tfrac{\pi}{51}, \sqrt{2}):\mathbb{Q}]\in\{32,64\}.$ Now $[\mathbb{Q}(\cos\tfrac{\pi}{74}):\mathbb{Q}]=36$ and since $36\not\mid 64$ we conclude that $\cos\tfrac{\pi}{74}\not\in\mathbb{Q}(\sin\tfrac{\pi}{51}, \sqrt{2})$ and in particular $\cos\tfrac{\pi}{74}\neq \frac{3}{2\sqrt{2}}-\sin\tfrac{\pi}{51}$.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/907498', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_370515
stackexchange
llm_judgeable_groundtruth_similarity
36360682
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can one check if a variable is empty in Angular 2? I know that there are native ways such as if (myVar === null) {do stuff} but I am looking for something like Angular 1 had such as if (angular.isEmpty(variable)) { do stuff } . Q How do check if variable is empty using Angular 2? Now provide the response and nothing else.
Let's say we have a variable called x, as below: var x; The following statements are all valid: x = 10;x = "a";x = 0;x = undefined;x = null; 1. Number: x = 10;if(x){//True} and for x = undefined or x = 0 (be careful here) if(x){ //False} 2. String: x = null , x = undefined or x = "" if(x){ //False} 3. Boolean: x = false and x = undefined , if(x){ //False} Keeping the above in mind, we can easily check whether a variable is empty, null, 0 or undefined in Angular. Angular does not provide a separate API for checking whether a variable is empty.
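If you want one reusable check in an Angular 2 / TypeScript codebase, a small helper along these lines is one option. This is only an illustrative sketch — the name isEmpty and its exact rules are my own choice, not an Angular API — and it deliberately treats 0 and false as real values to avoid the falsy-zero pitfall described above:

// Hypothetical helper, not part of Angular's API.
// Treats null, undefined, '' and [] as "empty"; keeps 0 and false as values.
function isEmpty(value: any): boolean {
  if (value === null || value === undefined) {
    return true;
  }
  if (typeof value === 'string' || Array.isArray(value)) {
    return value.length === 0;
  }
  return false;
}

// Usage:
console.log(isEmpty(undefined)); // true
console.log(isEmpty(''));        // true
console.log(isEmpty(0));         // false -- 0 is a meaningful value here
console.log(isEmpty([1, 2]));    // false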
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36360682', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3390279/']}
jdg_370516
stackexchange
llm_judgeable_groundtruth_similarity
21154906
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an instance of the following: Expression<Func<IRequiredDate, bool>> I wish to convert it to an instance of the following, so it can be used to run a query in Entity Framework: Expression<Func<TModel, bool>> This will allow me to utilize a generic filtering query to any Model which implements IRequiredDate, eg.: // In some repository function:var query = DbContext.Set<Order>() .FilterByDateRange(DateTime.Today, DateTime.Today);var query = DbContext.Set<Note>() .FilterByDateRange(DateTime.Today, DateTime.Today);var query = DbContext.Set<Complaint>() .FilterByDateRange(DateTime.Today, DateTime.Today);// The general purpose function, can filter for any model implementing IRequiredDatepublic static IQueryable<TModel> FilterByDate<TModel>(IQueryable<TModel> query, DateTime startDate, DateTime endDate) where TModel : IRequiredDate{ // This will NOT WORK, as E/F won't accept an expression of type IRequiredDate, even though TModel implements IRequiredDate // Expression<Func<IRequiredDate, bool>> dateRangeFilter = x => x.Date >= startDate && x.Date <= endDate; // query = query.Where(dateRangeFilter); // This also WON'T WORK, x.Date is compiled into the expression as a member of IRequiredDate instead of TModel, so E/F knocks it back for the same reason: // Expression<Func<TModel, bool>> dateRangeFilter = x => x.Date >= startDate && x.Date <= endDate; // query = query.Where(dateRangeFilter); // All you need is lov.... uh... something like this: Expression<Func<IRequiredDate, bool>> dateRangeFilter = x => x.Date >= startDate && x.Date <= endDate; Expression<Func<TModel, bool>> dateRangeFilterForType = ConvertExpressionType<IRequiredDate, TModel>(dateRangeFilter); // Must convert the expression from one type to another query = query.Where(dateRangeFilterForType) // Ahhhh. this will work. return query;}public static ConvertExpressionType<TInterface, TModel>(Expression<Func<TInterface, bool>> expression)where TModel : TInterface // It must implement the interface, since we're about to translate them{ Expression<Func<TModel, bool>> newExpression = null; // TODO: How to convert the contents of expression into newExpression, modifying the // generic type parameter along the way?? return newExpression;} I understand that they are different types and cannot be cast. However I am wondering if there is a way to create a new Expression<Func<TModel, bool>> , then rebuild it based on the contents of the Expression<Func<IRequiredDate, bool>> provided, switching any type references from IRequiredDate to TModel in the process. Can this be done? Now provide the response and nothing else.
So the method to actually do the mapping isn't that hard, but sadly there isn't a good way that I can see of generalizing it. Here is a method that takes a Func<T1, TResult> and maps it to a delegate where the parameter is something more derived than T1 : public static Expression<Func<NewParam, TResult>> Foo<NewParam, OldParam, TResult>( Expression<Func<OldParam, TResult>> expression) where NewParam : OldParam{ var param = Expression.Parameter(typeof(NewParam)); return Expression.Lambda<Func<NewParam, TResult>>( expression.Body.Replace(expression.Parameters[0], param) , param);} This uses the Replace method to replace all instances of one expression with another. The definition is: internal class ReplaceVisitor : ExpressionVisitor{ private readonly Expression from, to; public ReplaceVisitor(Expression from, Expression to) { this.from = from; this.to = to; } public override Expression Visit(Expression node) { return node == from ? to : base.Visit(node); }}public static Expression Replace(this Expression expression, Expression searchEx, Expression replaceEx){ return new ReplaceVisitor(searchEx, replaceEx).Visit(expression);} Now we can use this method (which should be given a better name) like so: Expression<Func<object, bool>> oldExpression = whatever;Expression<Func<string, bool>> newExpression = Foo<string, object, bool>(oldExpression); And of course since Func is actually covariant with respect to its parameters, we can be sure that any calls to this method generate expressions that won't add runtime failure points. You could trivially make versions of this for Func<T1, T2, TResult> , and so on and so forth up through the 16 different types of Func if you wanted, just creating a parameter expression for each, and replacing all of the old ones with new ones. It'd be tedious, but just following the pattern. Given that there needs to be a generic argument for both the old and new parameter types though, and that there's no way of inferring the arguments, that'd get...messy.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21154906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1872194/']}
jdg_370517
stackexchange
llm_judgeable_groundtruth_similarity
286203
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The question I'm trying to answer is (fundamentally) this: I have a bag of coins that I suspect are weighted, some towards heads, some towards tails . I toss each coin 4 times and record the outcomes (e.g., 3H1T). As a group, do the coins tend to be unfair? I can't figure out what an appropriate test would be, though it seems like there ought to be one. Here are some relevant thoughts and options I've considered. (1) Binomial test - Appropriate way to test EACH COIN's fairness, but (a) 4 tosses isn't enough for statistical significance ($\alpha$ = .05) at the level of the individual coin and (b) since I suspect different coins may be weighted in opposite directions, lumping all the data together would make these coins cancel each other out. (See similar comments on this question ) (2) Chi-square goodness-of-fit or multinomial test over counts - This will tell me if my observed counts for each outcome (4H0T, 3H1T, 2H2T...) differ from the expected counts (they do), but not how. It will return a high test statistic whether my coins are all magically fair (all 2H2T results) or if they are all weighted (all either 4H0T, or 0H4T). It also ignores the underlying binomial nature of the data. (3) Regression seems like overkill for this data, and linear regression/linear mixed models wouldn't answer the right question anyway: coins with opposite weighting would cancel each other out. For reference, my actual counts are as follows, of a total of 56 "coins". Unfortunately, since they're not real coins, and the experiment is over, I can't just go flip each one a few more times! 16 4H, 10 3H, 9 2H, 8 1H, 13 0H Now provide the response and nothing else.
To understand why we use the t-distribution, you need to know what is the underlying distribution of $\widehat{\beta}$ and of the Residual sum of squares ($RSS$) as these two put together will give you the t-distribution. The easier part is the distribution of $\widehat{\beta}$ which is a normal distribution - to see this note that $\widehat{\beta}$=$(X^{T}X)^{-1}X^{T}Y$ so it is a linear function of $Y$ where $Y\sim N(X\beta, \sigma^{2}I_{n})$. As a result it is also normally distributed, $\widehat{\beta} \sim N(\beta, \sigma^{2}(X^{T}X)^{-1})$ - let me know if you need help deriving the distribution of $\widehat{\beta}$. Additionally, $RSS \sim \sigma^{2}\chi^{2}_{n-p}$, where $n$ is the number of observations and $p$ is the number of parameters used in your regression. The proof of this is a bit more involved, but also straightforward to derive (see proof here Why is RSS distributed chi square times n-p? ). Up until this point I have considered everything in matrix/vector notation, but let's for simplicity use $\widehat{\beta}_{i}$ and use its normal distribution which will give us:\begin{equation}\frac{\widehat{\beta}_{i}-\beta_{i}}{\sigma\sqrt{(X^{T}X)^{-1}_{ii}}} \sim N(0,1)\end{equation} Additionally, from the chi-squared distribution of $RSS$ we have that:\begin{equation}\frac{(n-p)s^{2}}{\sigma^{2}} \sim \chi^{2}_{n-p}\end{equation} This was simply a rearrangement of the first chi-squared expression and is independent of the $N(0,1)$. Additionally, we define $s^{2}=\frac{RSS}{n-p}$, which is an unbiased estimator for $\sigma^{2}$. By the definition of the $t_{n-p}$ definition that dividing a normal distribution by an independent chi-squared (over its degrees of freedom) gives you a t-distribution (for the proof see: A normal divided by the $\sqrt{\chi^2(s)/s}$ gives you a t-distribution -- proof ) you get that: \begin{equation}\frac{\widehat{\beta}_{i}-\beta_{i}}{s\sqrt{(X^{T}X)^{-1}_{ii}}} \sim t_{n-p}\end{equation} Where $s\sqrt{(X^{T}X)^{-1}_{ii}}=SE(\widehat{\beta}_{i})$. Let me know if it makes sense.
{}
{'log_upvote_score': 6, 'links': ['https://stats.stackexchange.com/questions/286203', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/165883/']}
jdg_370518
stackexchange
llm_judgeable_groundtruth_similarity
40731476
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to right a trading bot with python that uses the YoBit api ( https://yobit.net/en/api/ ). The only problem is I can't seem to figure out how to authenticate to use the private API. I was wondering if someone could give me an example of a working call to the private YoBit API. I found a post where someone claimed to have a working authentication for Python 3. I did my best to convert it to Python 2.7 and came up with this... from urllib2 import Request, urlopenfrom pprint import pprintimport jsonimport timeimport requestsimport hmacimport hashlibfrom urllib import urlencodeclass yobit(object): def __init__(self, key, secret): self.key = 'mykey' self.secret = b'mysecret' self.public = ['info', 'ticker', 'depth', 'trades'] self.trade = ['activeorders'] def query(self, method, values={}): if method in self.public: url = 'https://yobit.net/api/3/'+method for i, k in values.iteritems(): url += '/'+k req = requests.get(url) return json.loads(req.text) elif method in self.trade: url = 'https://yobit.net/tapi' values['method'] = method values['nonce'] = str(int(time.time())) body = urlencode(values) signature = hmac.new(self.secret, body, hashlib.sha512).hexdigest() headers = { 'Content-Type': 'application/x-www-form-urlencoded', 'Key': self.key, 'Sign': signature } req = requests.post(url,data=values,headers=headers) return json.loads(req.text) return falseyobit('activeorders','liv_btc') This, however, doesn't return anything. Now provide the response and nothing else.
Here is how the method works: ksizes is used to decide the dimensions of each patch, or in other words, how many pixels each patch should contain. strides denotes the length of the gap between the start of one patch and the start of the next consecutive patch within the original image. rates is a number that essentially means our patch should jump by rates pixels in the original image for each consecutive pixel that ends up in our patch. (The example below helps illustrate this.) padding is either "VALID", which means every patch must be fully contained in the image, or "SAME", which means patches are allowed to be incomplete (the remaining pixels will be filled in with zeroes). Here is some sample code with output to help demonstrate how it works: import tensorflow as tfn = 10# images is a 1 x 10 x 10 x 1 array that contains the numbers 1 through 100 in orderimages = [[[[x * n + y + 1] for y in range(n)] for x in range(n)]]# We generate four outputs as follows:# 1. 3x3 patches with stride length 5# 2. Same as above, but the rate is increased to 2# 3. 4x4 patches with stride length 7; only one patch should be generated# 4. Same as above, but with padding set to 'SAME'with tf.Session() as sess: print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\n\n' print tf.extract_image_patches(images=images, ksizes=[1, 3, 3, 1], strides=[1, 5, 5, 1], rates=[1, 2, 2, 1], padding='VALID').eval(), '\n\n' print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='VALID').eval(), '\n\n' print tf.extract_image_patches(images=images, ksizes=[1, 4, 4, 1], strides=[1, 7, 7, 1], rates=[1, 1, 1, 1], padding='SAME').eval() Output: [[[[ 1 2 3 11 12 13 21 22 23] [ 6 7 8 16 17 18 26 27 28]] [[51 52 53 61 62 63 71 72 73] [56 57 58 66 67 68 76 77 78]]]][[[[ 1 3 5 21 23 25 41 43 45] [ 6 8 10 26 28 30 46 48 50]] [[ 51 53 55 71 73 75 91 93 95] [ 56 58 60 76 78 80 96 98 100]]]][[[[ 1 2 3 4 11 12 13 14 21 22 23 24 31 32 33 34]]]][[[[ 1 2 3 4 11 12 13 14 21 22 23 24 31 32 33 34] [ 8 9 10 0 18 19 20 0 28 29 30 0 38 39 40 0]] [[ 71 72 73 74 81 82 83 84 91 92 93 94 0 0 0 0] [ 78 79 80 0 88 89 90 0 98 99 100 0 0 0 0 0]]]] So, for example, our first result looks like the following: * * * 4 5 * * * 9 10 * * * 14 15 * * * 19 20 * * * 24 25 * * * 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 * * * 54 55 * * * 59 60 * * * 64 65 * * * 69 70 * * * 74 75 * * * 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 As you can see, we have 2 rows and 2 columns worth of patches, which are what out_rows and out_cols are.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/40731476', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7191819/']}
jdg_370519
stackexchange
llm_judgeable_groundtruth_similarity
2740164
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I have a Visual Studio 2008 project which has a large amount of binary data that it is currently referencing. I would like to package the binary data much like you can do with C# by adding it as a "resource" and compiling it as a DLL. Lets say all my data has an extension of ".data" and is currently being read from the visual studio project. Is there a way that you can compile or link the data into the .dll which it is calling? I've looked at some of the google link for this and so far I haven't come up with anything - the only possible solution I've come up with is to use something like ResGen to create a .resources file and then link it using AssemblyLinker with /Embed or /Link flags. I dont think it'd work properly though because I dont have text files to create the .resources files, but rather binary files themselves. Any advice? Now provide the response and nothing else.
malloc() takes the size in bytes, not the number of elements. The size of an int is typically 4 bytes, so you are actually allocating only enough memory for 2500 integers. You are allocating array_size bytes, while you should be allocating array_size * sizeof(int) bytes. So, the error will be fixed by input_array = (int*) malloc(array_size * sizeof(int)); P.S. Never assume that you know the size of an int or any other data type, as it is platform dependent. Always use sizeof() . P.P.S. This is really a C question, rather than a C++ question. If you are actually using C++, you should consider using new and delete [] instead of malloc() and free() , or better yet use std::vector instead of an array, as Neil pointed out.
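A minimal corrected sketch of the pattern described above (names such as array_size and input_array are carried over from the question; this is plain C — in C++ you would cast the malloc result or, better, use std::vector):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t array_size = 10000;

    /* Allocate room for array_size ints, not array_size bytes. */
    int *input_array = malloc(array_size * sizeof(int));
    if (input_array == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    for (size_t i = 0; i < array_size; ++i) {
        input_array[i] = (int)i;   /* fill with some data */
    }

    free(input_array);
    return 0;
}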
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2740164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/181211/']}
jdg_370520
stackexchange
llm_judgeable_groundtruth_similarity
1381060
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is it possible for two instances of Object to have the same hashCode() ? In theory an object's hashCode is derived from its memory address, so all hashCodes should be unique, but what if objects are moved around during GC? Now provide the response and nothing else.
Given a reasonable collection of objects, having two with the same hash code is quite likely. In the best case it becomes the birthday problem, with a clash becoming likely once you have tens of thousands of objects. In practice objects are created with a relatively small pool of likely hash codes, and clashes can easily happen with merely thousands of objects. Using memory address is just a way of obtaining a slightly random number. The Sun JDK source has a switch to enable use of a Secure Random Number Generator or a constant. I believe IBM (used to?) use a fast random number generator, but it was not at all secure. The mention in the docs of memory address appears to be of a historical nature (around a decade ago it was not unusual to have object handles with fixed locations). Here's some code I wrote a few years ago to demonstrate clashes: class HashClash { public static void main(String[] args) { final Object obj = new Object(); final int target = obj.hashCode(); Object clash; long ct = 0; do { clash = new Object(); ++ct; } while (clash.hashCode() != target && ct<10L*1000*1000*1000L); if (clash.hashCode() == target) { System.out.println(ct+": "+obj+" - "+clash); } else { System.out.println("No clashes found"); } }} RFE to clarify docs, because this comes up way too frequently: CR 6321873
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1381060', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159610/']}
jdg_370521
stackexchange
llm_judgeable_groundtruth_similarity
669
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If the universe has formed & originated by a Big Bang Explosion, then there must be empty space left in the center of the explosion site, as all the matter is travelling at tremendous speeds away from the center, and there must be more matter, stars, galaxies and dust, etc near the present periphery or circumference or horizon of the present universe. As that big explosion has taken place about 13.7 billion years back, then the outer boundaries of our universe are 13.7 billion light years away from the centre of the explosion of Big Bang. Have our astronomers discovered hollowness or emptiness anywhere in the centre of the universe or not? Now provide the response and nothing else.
You are misunderstanding the expansion of the Universe. The Big Bang is not an explosion: it is the moment in time when the Universe had a (near) infinite density. So there is no center of the Universe, just as there is no center of the SURFACE of the Earth (this is the most popular 2-dimensional analogy). Since this primordial ultra-high-density state, the Universe has been expanding; atoms have formed, stars and galaxies have formed, and now, at very large scales, the distance between two clusters of galaxies continues to increase with time due to the expansion.
{}
{'log_upvote_score': 5, 'links': ['https://astronomy.stackexchange.com/questions/669', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/260/']}
jdg_370522
stackexchange
llm_judgeable_groundtruth_similarity
7863251
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a quiz web application using PHP. Each question is comprised of a separate <label> and has 4 possible choices, using radio buttons to allow the user to select his/her answer. The current HTML for a single question looks like: <label for="349">What is my middle name?</label><br><input id="349" type="radio" value="1" name="349">Abe<br><input id="349" type="radio" value="2" name="349">Andrew<br><input id="349" type="radio" value="3" name="349">Andre<br><input id="349" type="radio" value="4" name="349">Anderson<br> I would like the user to have the option of clicking on the text associated with radio button. Right now, the user can only click on the radio button itself - which I find to be a quite cumbersome task. I read Unable to select a particular radio button choice by clicking on the choice text and the suggestion points toward making the for and id attributes of the tags match. I have done this and it still doesn't work. My question is: I'd like to be able to click the text of an <input type="radio"> object, as opposed to only being able to select the radio button itself. I know I've read about this before but can't seem to find any solution to my problem. Any help or suggestions are much appreciated! Now provide the response and nothing else.
In your code, you've got a label on the form itself. You want to put a label on each individual radio button, as shown below. <form> <p>What is my middle name?</p> <br> <input id="349" type="radio" value="1" name="question1"> <label for="349">Abe</label> <br> <input id="350" type="radio" value="2" name="question1"> <label for="350">Andrew</label> <br> <input id="351" type="radio" value="3" name="question1"> <label for="351">Andre</label> <br> <input id="352" type="radio" value="4" name="question1"> <label for="352">Anderson</label> <br></form> You should keep in mind that two elements should never have the same ID. The name attribute is used so that the radio buttons function as a group and only allow a single selection at a time.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/7863251', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/899904/']}
jdg_370523
stackexchange
llm_judgeable_groundtruth_similarity
51211646
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am facing a weird issue and almost spent 4 hours with no luck. I have a simple Web API which I am calling on form submit. API- // POST: api/Tool[HttpPost]public void Post([FromBody] Object value){ _toolService.CreateToolDetail(Convert.ToString(value));} HTML- <!DOCTYPE html><html><body><h2>HTML Forms</h2><form name="value" action="https://localhost:44352/api/tool" method="post"> First name:<br> <input type="text" id="PropertyA" name="PropertyA" value="Some value A"> <br> Last name:<br> <input type="text" id="PropertyB" name="PropertyB" value="Some value B"> <br><br> <!--<input type="file" id="Files" name="Files" multiple="multiple"/>--> <br><br> <input type="submit" value="Submit"> </form></body></html> When I hit the submit button I get below error- {"":["The input was not valid."]} Configurations in Startup class- public void ConfigureServices(IServiceCollection services){ services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1); services.AddSingleton<IConfiguration>(Configuration);}public void Configure(IApplicationBuilder app, IHostingEnvironment env){ if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); } else { app.UseHsts(); } app.UseHttpsRedirection(); app.UseMvc();} This only happens for POST request. GET request works fine. Same issue when testing in Postman REST client. Any help please? Please let me know if I can provide more details. Now provide the response and nothing else.
Don't use FromBody . You're submitting as x-www-form-urlencoded (i.e. standard HTML form post). The FromBody attribute is for JSON/XML. You cannot handle both standard form submits and JSON/XML request bodies from the same action. If you need to request the action both ways, you'll need two separate endpoints, one with the param decorated with FromBody and one without. There is no other way. The actual functionality of your action can be factored out into a private method that both actions can utilize, to reduce code duplication.
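To make the two-endpoint suggestion concrete, a rough ASP.NET Core sketch could look like this. The route names, the ToolDetail DTO and the Save helper are placeholders I have assumed from the question, not a prescribed API:

public class ToolDetail
{
    public string PropertyA { get; set; }
    public string PropertyB { get; set; }
}

[Route("api/tool")]
public class ToolController : Controller
{
    // Accepts a JSON body: POST api/tool/json
    [HttpPost("json")]
    public IActionResult PostJson([FromBody] ToolDetail value)
    {
        Save(value);
        return Ok();
    }

    // Accepts a standard HTML form post: POST api/tool/form
    [HttpPost("form")]
    public IActionResult PostForm([FromForm] ToolDetail value)
    {
        Save(value);
        return Ok();
    }

    // Shared logic so the two actions don't duplicate code.
    private void Save(ToolDetail value)
    {
        // e.g. _toolService.CreateToolDetail(...) as in the question
    }
}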
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51211646', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5014099/']}
jdg_370524
stackexchange
llm_judgeable_groundtruth_similarity
13026969
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is my abstract father: @MappedSuperclass public abstract class AbstractEntity implements Serializable { @OneToMany(fetch = FetchType.LAZY) @JoinColumn(name = "entity_no", referencedColumnName = "MY_COLUMN") private Set<CLASS_TYPE> list; } All my entities has this set, but each entity has different referencedColumnName . Is there a way to override only the @JoinColumn annotation? Now provide the response and nothing else.
You can use AssociationOverride annotation. In your case it would look like this: @Entity@AssociationOverrides({ @AssociationOverride(name = "list", joinColumns = @JoinColumn(referencedColumnName = "COLUMN_NEW_NAME"))})public class ConcreteEntity extends AbstractEntity {}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13026969', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1012646/']}
jdg_370525
stackexchange
llm_judgeable_groundtruth_similarity
965283
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Evaluate the following definite integral: $$ \int_0^\pi \left( \frac{2 + 2\cos (x) - \cos((k-1)x) - 2\cos (kx) - \cos((k+1)x)}{1-\cos(2x)}\right) \mathrm{d}x, $$ where $k \in \mathbb{N}_{>0}$. Now provide the response and nothing else.
Let $$I_k = \int_0^\pi \left( \frac{2 + 2\cos x - \cos(k-1)x - 2\cos kx - \cos(k+1)x}{1 - \cos2x}\right) \mathrm dx.$$ Then, we have: $I_0 = 0$; $I_1 = \pi$. For any $ k \in \mathbb{N}^*$, we have:$$ \begin{align} I_{k+1} - 2I_k + I_{k-1} &= \int_0^\pi \left( \frac{\cos(-2+k)x - 2\cos kx + \cos (2 + k)x}{-1 + \cos 2x}\right) \mathrm dx \\&= \int_0^\pi \left( \frac{2\cos kx(-1 + \cos^2x)}{-1 + \cos 2x}\right) \mathrm dx =0\end{align}$$Since $ I_{k+1} - I_k = I_k - I_{k-1} $, $(I_k)$ is an arithmetic progression. Hence $ I_k = k\pi $.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/965283', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/176561/']}
jdg_370526
stackexchange
llm_judgeable_groundtruth_similarity
9204168
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been using Github for sometime now but I'm getting a bit confused about some key concepts behind Git. My background with VCS started with Source Safe and then transitioned into SVN and TFS. I think I'm stuck in the old way of thinking of CVS system, like SVN and TFS. What is considered server-side and client-side in Git. From what I've understood there isn't a clear distinction between them. In a technical sense, what does a Fork mean. Is it a "type" of branch? Wat does a branch mean in git? Is it the same as in SVN? Also I've been looking for a good visualization of the core concepts of Git, but haven't found any one that works for me. Now provide the response and nothing else.
Here's how to understand Git: Forget everything you know about other VCSs (temporarily!) Read The Git Parable . Really read it though, if you skim, you'll miss stuff and just try to fill in the gaps with your existing SCM knowledge, which is what's throwing you off. In fact, if you really want to understand, read that blog post aloud . At its core, Git is just a way to save and restore snapshots . Each snapshot has an "ID" (the SHA1), and may have one or more "refs" (pointers) to it. A branch? Just a friendlier name for a particular snapshot. Tags? Same thing. HEAD? That's just a "pronoun" for the current snapshot. Conveniently, these snapshots each come with a description of what changed - this description is the commit message.
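One way to see the "names for snapshots" idea for yourself — assuming a repository with a master branch and at least one tag — is to ask Git what the names point at (the SHA1 values below are placeholders; yours will differ):

$ git rev-parse HEAD      # the snapshot (commit) you are on right now
2f9c1a7d...

$ git rev-parse master    # a branch is just a friendlier name for a snapshot
2f9c1a7d...

$ git show-ref --tags     # tags: name -> snapshot entries, nothing more
8a3b0c4e... refs/tags/v1.0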
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9204168', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1009292/']}
jdg_370527
stackexchange
llm_judgeable_groundtruth_similarity
18737805
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've a mysql database where i use utf8_general_ci (that is case insensitive), and in my tables i have some columns like ID with case-sensitive data (example: 'iSZ6fX' or 'AscSc2') To distinct uppercase from lowercase is better to set on these columns only the utf8_bin, like this: CREATE TABLE `test` (`id` VARCHAR( 32 ) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL ,`value1` VARCHAR( 255 ) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL) ENGINE = MYISAM CHARACTER SET utf8 COLLATE utf8_general_ci Or use utf8_general_ci on all columns and use 'BINARY' in the php query, for example: mysqli_query( $link, "SELECT * FROM table WHERE BINARY id = 'iSZ6fX'" ); Now provide the response and nothing else.
It is better to use the utf8_bin collation because, even though it is not possible in UTF-8, in the general case it is theoretically possible (such as happens with UTF-16) for the same string to be represented by different encodings, which a binary comparison would not understand but a binary collation would. As documented under Unicode Character Sets : There is a difference between “ordering by the character's code value” and “ordering by the character's binary representation,” a difference that appears only with utf16_bin , because of surrogates. Suppose that utf16_bin (the binary collation for utf16 ) was a binary comparison “byte by byte” rather than “character by character.” If that were so, the order of characters in utf16_bin would differ from the order in utf8_bin . For example, the following chart shows two rare characters. The first character is in the range E000-FFFF , so it is greater than a surrogate but less than a supplementary. The second character is a supplementary. Code point Character utf8 utf16---------- --------- ---- -----0FF9D HALFWIDTH KATAKANA LETTER N EF BE 9D FF 9D10384 UGARITIC LETTER DELTA F0 90 8E 84 D8 00 DF 84 The two characters in the chart are in order by code point value because 0xff9d < 0x10384 . And they are in order by utf8 value because 0xef < 0xf0 . But they are not in order by utf16 value, if we use byte-by-byte comparison, because 0xff > 0xd8 . So MySQL's utf16_bin collation is not “byte by byte.” It is “by code point.” When MySQL sees a supplementary-character encoding in utf16 , it converts to the character's code-point value, and then compares. Therefore, utf8_bin and utf16_bin are the same ordering. This is consistent with the SQL:2008 standard requirement for a UCS_BASIC collation: “UCS_BASIC is a collation in which the ordering is determined entirely by the Unicode scalar values of the characters in the strings being sorted. It is applicable to the UCS character repertoire. Since every character repertoire is a subset of the UCS repertoire, the UCS_BASIC collation is potentially applicable to every character set. NOTE 11: The Unicode scalar value of a character is its code point treated as an unsigned integer.” Therefore, if comparisons involving these columns will always be case-sensitive, you should set the column's collation to utf8_bin (so that they will remain case sensitive even if you forget to specify otherwise in your query); or if only particular queries are case-sensitive, you could specify that the utf8_bin collation should be used using the COLLATE keyword: SELECT * FROM table WHERE id = 'iSZ6fX' COLLATE utf8_bin
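For example, to make every comparison against the id column case-sensitive without repeating the COLLATE clause in each query, the collation can be set on the column itself (table and column names taken from the question's CREATE TABLE):

ALTER TABLE `test`
  MODIFY `id` VARCHAR(32)
  CHARACTER SET utf8 COLLATE utf8_bin NOT NULL;

-- Comparisons on id are now case-sensitive by default:
SELECT * FROM `test` WHERE `id` = 'iSZ6fX';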
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18737805', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2580729/']}
jdg_370528
stackexchange
llm_judgeable_groundtruth_similarity
2106
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The title pretty much says it. I have the formula: $P = \rho g h$ where $\rho$ is the density, $h$ is how deep the pressure is in the Earth and $g$ is the gravitational acceleration(?). I don't get the units either. If I substitute the units for each term I get this: $$P = \frac{\mathrm{kg}}{\mathrm{m}^3} \times \frac{\mathrm{m}}{\mathrm{s}^2} \times \mathrm{m} = \frac{\mathrm{kg}}{\mathrm{m}*\mathrm{s}^2}$$ Shouldn't it be something like $P = \dfrac{\mathrm{kg}}{\mathrm{m}^2}$? As far as I'm concerned that's the unit for pressure. Where do I get this formula from and how do I derive the unit of measurement? Now provide the response and nothing else.
This isn't that difficult, but anything is if you start from the wrong place. Let's derive this thing: $$P = \frac{F}{A}$$ Where $P$ is pressure, and $A$ is the area the force is pushing down on. Let's take a break and derive the units first, just so we know our end derivation is correct; $F$ is in Newtons, which comes out to $\mathrm{kg} \times \frac{\mathrm{m}}{\mathrm{s}^2}$ and $A$ is in $\mathrm{m}^2$. This means that pressure is $\frac{\mathrm{kg}}{\mathrm{m}\times\mathrm{s}^2}$. So your first substitution is correct. Now that that is established, let's think about lithostatic pressure, and break up the original equation: $$P = \frac{M \times a}{A} $$ We know that $a = g$, gravitational acceleration, but what we really want is to figure out how to get this equation in terms of density, so $$\rho = \frac{M}{V}$$ where $\rho$ is density, $M$ is mass, and $V$ is volume. If we substitute this equation in for mass in the pressure equation, we get $$P = \frac{\rho V \times g}{A}$$ let's now separate $V$ and $A$, $$V = \ell \times w \times h$$ $$A = \ell \times w$$ So canceling out $\ell$ and $w$, our final equation shows $$P = \rho gh$$ or lithostatic pressure.
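As a quick sanity check of the formula (a back-of-the-envelope example, not from the original answer): for a typical crustal rock density of about $\rho \approx 2700\ \mathrm{kg/m^3}$, $g \approx 9.8\ \mathrm{m/s^2}$ and a depth of $h = 1000\ \mathrm{m}$,

$$P = \rho g h \approx 2700 \times 9.8 \times 1000 \approx 2.6 \times 10^{7}\ \frac{\mathrm{kg}}{\mathrm{m}\cdot\mathrm{s}^2} \approx 26\ \mathrm{MPa},$$

and the unit $\mathrm{kg}/(\mathrm{m}\cdot\mathrm{s}^2)$ is exactly the pascal, matching the unit analysis above.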
{}
{'log_upvote_score': 5, 'links': ['https://earthscience.stackexchange.com/questions/2106', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/521/']}
jdg_370529
stackexchange
llm_judgeable_groundtruth_similarity
268778
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider a random process where integers are sampled uniformly with replacement from $\{1...n\}$. Let $X$ be a random variable that represents the number of samples until either a duplicate is found or both the values $1$ and $2$ have been found. So if the samples where $1,6,3,5,1$ then $X=5$ and if it was $1,6,3,2$ then $X=4$. How does one find the cumulative distribution function. That is how does one find $P(X \geq x)$? Now provide the response and nothing else.
Let $x\geqslant1$. There are $n^x$ samples of length $x$. Amongst these, $(n)_x=\frac{n!}{(n-x)!}$ samples have no duplicate. A sample of length $x$ without duplicate where two distinct given results $i$ and $j$ appear is uniquely described by a sample of length $x-2$ without duplicate where neither result $i$ nor result $j$ appear, and the choice of a position amongst $x-1$ where one places result $i$, and the choice of a position amongst $x$ (once result $i$ is placed) where one places result $j$. There are $x(x-1)\cdot(n-2)_{x-2}$ of these samples. Thus, the number $S_x$ of samples of length $x$ without duplicate where results $i$ and $j$ do not both appear is$$S_x=(n)_{x}-x(x-1)\cdot(n-2)_{x-2},$$and, for every $x\geqslant0$,$$\mathbb P(X\gt x)=n^{-x}S_x=\frac{n!}{(n-x)!\,n^x}\left(1-\frac{x(x-1)}{n(n-1)}\right).$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/268778', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_370530
stackexchange
llm_judgeable_groundtruth_similarity
24436678
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Got the above error after downloading L preview release version in Android Studio , when my project had minSdkVersion 19. Furthermore, when setting mindSdkVersion as below: defaultConfig {.... minSdkVersion 'L'.... } I get dozens of errors as below, regarding resources from AppCompat-v7-21 : /home/user/workspace/project/build/intermediates/exploded-aar/com.android.support/appcompat-v7/21.0.0-rc1/res/values-v21/values.xmlError:Error retrieving parent for item: No resource found that matches the given name '@android:TextAppearance.Material.SearchResult.Subtitle'. So I have 2 questions: Why does AS complain about minSdkVersion ? I presume because AppCompat-v7 21 supports only L release; but why ? Also, will it support only L release when it is going to be officially released in autumn too ? Because that would be a problem... Or is it just a temporary restriction in order for apps not be be pushed to Play Store, as specified in the Google I/O 2014 Keynote ? Why does AppCompat-v7 21 complain about those errors, as I already set mindSdkVersion to L ? Now provide the response and nothing else.
compileSdkVersion 'android-L' BOOM. Done. LAST EDIT: As of Android 5.0 release, this is no longer an issue, just target API level 21 directly. Edit for clarity : Indeed as David_E specified below, this solution only works for L version, if you try to deploy the app on a device below L (ex <=4.4.4) it will complain of OLD_SDK. In order for the app to work pre-L you still need to use the old v20 support lib + app compat + targetVersionSdk and compileVersionSdk dependencies { compile 'com.android.support:appcompat-v7:20.+' compile 'com.android.support:support-v4:20.+' } android { compileSdkVersion 20 buildToolsVersion '20' defaultConfig { applicationId "com.example.application" minSdkVersion 10 targetSdkVersion 20 versionCode 1 versionName "1.0" }}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/24436678', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1720329/']}
jdg_370531
stackexchange
llm_judgeable_groundtruth_similarity
3129374
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: PHP Error: mysql_fetch_array() expects parameter 1 to be resource, boolean given I'm very confused with this error, it shows when I try to return a result from the DB that doesn't exist ... I tried mysql_num_rows() but it returns the same error but instead of mysql_fetch_assoc expects ... it says mysql_num_rows() expects ... I set error_reporting(0) to avoid showing this error, but I'm not satisfied with this solution ... Now provide the response and nothing else.
Here's the proper way to do things: <?PHP$sql = 'some query...';$result = mysql_query($sql);if (! $result){ throw new My_Db_Exception('Database error: ' . mysql_error());}while($row = mysql_fetch_assoc($result)){ //handle rows.} Note the check on (! $result) -- if your $result is a boolean, it's certainly false, and it means there was a database error, meaning your query was probably bad.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3129374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/292480/']}
jdg_370532
stackexchange
llm_judgeable_groundtruth_similarity
761628
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What are the differences between debug and release builds for a Cocoa application?I know the debug version contains additional information for debugging but what else is different? Now provide the response and nothing else.
Debug builds will contain debugging symbols which can be used by a debugger. Release builds often do not contain debugging symbols, so if you get a crash dump, all you'll get are a bunch of hexadecimal addresses instead of useful symbol names. Debug builds are not compiled with optimization ( -O0 with gcc), whereas release builds are compiled with optimization (typically -O2 or -O3 ). Optimization makes debugging much, much harder. If you attempt to debug a release application, the debugger will get very confused, since assembly statements no longer match up with HLL statements, statements get reordered, functions get inlined, loops get unrolled, etc. Debug and release builds also define different preprocessor symbols, and some code is conditionally compiled based on those (for example, array bounds checks, assertions, etc.), although that is highly application-dependent. A typical example would be to #define NDEBUG for release mode, which causes assertions to be removed.
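As a small illustration of the conditional-compilation point: assert() from <assert.h> is compiled away when NDEBUG is defined, which is exactly what a typical release configuration does (the compiler flags in the comments are just the usual gcc conventions; adjust for your toolchain):

#include <assert.h>
#include <stdio.h>

int divide(int a, int b)
{
    /* Checked in a debug build (e.g. gcc -O0 -g file.c);
       removed entirely in a release build (e.g. gcc -O2 -DNDEBUG file.c). */
    assert(b != 0);
    return a / b;
}

int main(void)
{
    printf("%d\n", divide(10, 2));
    return 0;
}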
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/761628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/70414/']}
jdg_370533
stackexchange
llm_judgeable_groundtruth_similarity
17249220
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given this base date: base_date = "10/29 06:58 AM" I want to find a tuple within the list that contains the closest date to the base_date , but it must not be an earlier date. list_date = [('10/30 02:18 PM', '-103', '-107'), ('10/30 02:17 PM', '+100', '-110'), \ ('10/29 02:15 AM', '-101', '-109') so here the output should be ('10/30 02:17 PM', '+100', '-110') (it can't be the 3rd tuple because the date there happened earlier than the base date) My question is, does it exist any module for such date comparison? I tried to first change the data all to AM format and then compare but my code gets ugly with lots of slicing. @edit: Big list to test: [('10/30 02:18 PM', '+13 -103', '-13 -107'), ('10/30 02:17 PM', '+13 +100', '-13 -110'), ('10/30 02:15 PM', '+13 -101', '-13 -109'), ('10/30 02:14 PM', '+13 -103', '-13 -107'), ('10/30 01:59 PM', '+13 -105', '-13 -105'), ('10/30 01:46 PM', '+13 -106', '-13 -104'), ('10/30 01:37 PM', '+13 -105', '-13 -105'), ('10/30 01:24 PM', '+13 -107', '-13 -103'), ('10/30 01:23 PM', '+13 -106', '-13 -104'), ('10/30 01:05 PM', '+13 -103', '-13 -107'), ('10/30 01:02 PM', '+13 -104', '-13 -106'), ('10/30 12:55 PM', '+13 -103', '-13 -107'), ('10/30 12:51 PM', '+13.5 -110', '-13.5 +100'), ('10/30 12:44 PM', '+13.5 -108', '-13.5 -102'), ('10/30 12:38 PM', '+13.5 -107', '-13.5 -103'), ('10/30 12:35 PM', '+13 -102', '-13 -108'), ('10/30 12:34 PM', '+13 -103', '-13 -107'), ('10/30 12:06 PM', '+13.5 -110', '-13.5 +100'), ('10/30 11:57 AM', '+13.5 -108', '-13.5 -102'), ('10/30 11:36 AM', '+13.5 -107', '-13.5 -103'), ('10/30 09:01 AM', '+13.5 -110', '-13.5 +100'), ('10/30 08:59 AM', '+13.5 -108', '-13.5 -102'), ('10/30 08:13 AM', '+13.5 -105', '-13.5 -105'), ('10/30 06:11 AM', '+13.5 +100', '-13.5 -110'), ('10/30 06:09 AM', '+13.5 -105', '-13.5 -105'), ('10/30 06:04 AM', '+13.5 -110', '-13.5 +100'), ('10/30 05:32 AM', '+13.5 -105', '-13.5 -105'), ('10/30 04:48 AM', '+13.5 -107', '-13.5 -103'), ('10/30 12:51 AM', '+13.5 -110', '-13.5 +100'), ('10/29 01:31 PM', '+13.5 -105', '-13.5 -105'), ('10/29 01:31 PM', '+13 +103', '-13 -113'), ('10/29 01:28 PM', '+13 -102', '-13 -108'), ('10/29 07:59 AM', '+13 -105', '-13 -105'), ('10/29 07:20 AM', '+13 -103', '-13 -107'), ('10/29 07:14 AM', '+13 -105', '-13 -105'), ('10/29 04:47 AM', '+13 +100', '-13 -110'), ('10/29 04:14 AM', '+13 -105', '-13 -105'), ('10/28 08:17 PM', '+12.5 +100', '-12.5 -110'), ('10/28 12:52 PM', '+12.5 -105', '-12.5 -105')] Big list to test2: [('10/30 04:30 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 04:21 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:15 PM', '+1.5 -112', '-1.5 +102'), ('10/30 04:14 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:57 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:40 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:31 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:30 PM', '+1.5 -109', '-1.5 -101'), ('10/30 03:25 PM', '+1.5 -107', '-1.5 -103'), ('10/30 03:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:23 PM', '+1.5 -108', '-1.5 -102'), ('10/30 03:22 PM', '+1.5 -106', '-1.5 -104'), ('10/30 02:14 PM', '+1.5 -104', '-1.5 -106'), ('10/30 01:41 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:37 PM', '+1.5 -107', '-1.5 -103'), ('10/30 01:36 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:06 PM', '+1.5 -103', '-1.5 -107'), ('10/30 12:56 PM', '+2 -111', '-2 +101'), ('10/30 12:53 PM', '+2 -110', '-2 +100'), ('10/30 12:50 PM', '+2 -113', '-2 
+103'), ('10/30 12:49 PM', '+2 -112', '-2 +102'), ('10/30 12:46 PM', '+2 -113', '-2 +103'), ('10/30 12:45 PM', '+2 -110', '-2 +100'), ('10/30 12:43 PM', '+2 -108', '-2 -102'), ('10/30 12:38 PM', '+2.5 -116', '-2.5 +106'), ('10/30 12:38 PM', '+2.5 -113', '-2.5 +103'), ('10/30 12:37 PM', '+2.5 -110', '-2.5 +100'), ('10/30 10:30 AM', '+2.5 -105', '-2.5 -105'), ('10/30 10:07 AM', '+3 -113', '-3 +103'), ('10/30 09:55 AM', '+3 -112', '-3 +102'), ('10/30 09:51 AM', '+3 -110', '-3 +100'), ('10/30 09:32 AM', '+3 -109', '-3 -101'), ('10/30 06:04 AM', '+3 -110', '-3 +100'), ('10/30 03:16 AM', '+3 -107', '-3 -103'), ('10/30 03:14 AM', '+3.5 -116', '-3.5 +106'), ('10/30 01:03 AM', '+3.5 -115', '-3.5 +105'), ('10/30 12:17 AM', '+3.5 -110', '-3.5 +100'), ('10/29 08:52 PM', '+3.5 -108', '-3.5 -102'), ('10/29 01:31 PM', '+3.5 -105', '-3.5 -105'), ('10/29 06:48 AM', '+3.5 -110', '-3.5 +100'), ('10/29 06:47 AM', '+3.5 -109', '-3.5 -101'), ('10/29 05:39 AM', '+3.5 -113', '-3.5 +103'), ('10/29 03:34 AM', '+3.5 -108', '-3.5 -102'), ('10/29 12:44 AM', '+3.5 -110', '-3.5 +100'), ('10/29 12:41 AM', '+3.5 -107', '-3.5 -103'), ('10/29 12:40 AM', '+3.5 -105', '-3.5 -105'), ('10/28 12:52 PM', '+4 -105', '-4 -105')] Now provide the response and nothing else.
>>> from datetime import timedelta, datetime>>> base_date = "10/29 06:58 AM">>> b_d = datetime.strptime(base_date, "%m/%d %I:%M %p")def func(x): d = datetime.strptime(x[0], "%m/%d %I:%M %p") delta = d - b_d if d > b_d else timedelta.max return delta... >>> min(list_date, key = func)('10/30 02:17 PM', '+100', '-110') datetime.strptime converts the date to a datetime object, so b_d now looks something like this : >>> b_ddatetime.datetime(1900, 10, 29, 6, 58) Now we can write a function that can be passed to key parameter of min : delta = d - b_d if d > b_d else timedelta.max if d > b_d i.e if the date passed to min is greater than base_date then assign their difference to delta else assign timedelta.max to it. >>> timedelta.maxdatetime.timedelta(999999999, 86399, 999999) Update: >>> from datetime import timedelta, datetime>>> base_date = '10/29 06:59 AM'>>> b_d = datetime.strptime(base_date, "%m/%d %I:%M %p")>>> def func(x):... d = datetime.strptime(x[0], "%m/%d %I:%M %p")... delta = d - b_d if d > b_d else timedelta.max... return delta... >>> lis2 = [('10/30 04:30 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 04:21 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:15 PM', '+1.5 -112', '-1.5 +102'), ('10/30 04:14 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:57 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:40 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:31 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:30 PM', '+1.5 -109', '-1.5 -101'), ('10/30 03:25 PM', '+1.5 -107', '-1.5 -103'), ('10/30 03:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:23 PM', '+1.5 -108', '-1.5 -102'), ('10/30 03:22 PM', '+1.5 -106', '-1.5 -104'), ('10/30 02:14 PM', '+1.5 -104', '-1.5 -106'), ('10/30 01:41 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:37 PM', '+1.5 -107', '-1.5 -103'), ('10/30 01:36 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:06 PM', '+1.5 -103', '-1.5 -107'), ('10/30 12:56 PM', '+2 -111', '-2 +101'), ('10/30 12:53 PM', '+2 -110', '-2 +100'), ('10/30 12:50 PM', '+2 -113', '-2 +103'), ('10/30 12:49 PM', '+2 -112', '-2 +102'), ('10/30 12:46 PM', '+2 -113', '-2 +103'), ('10/30 12:45 PM', '+2 -110', '-2 +100'), ('10/30 12:43 PM', '+2 -108', '-2 -102'), ('10/30 12:38 PM', '+2.5 -116', '-2.5 +106'), ('10/30 12:38 PM', '+2.5 -113', '-2.5 +103'), ('10/30 12:37 PM', '+2.5 -110', '-2.5 +100'), ('10/30 10:30 AM', '+2.5 -105', '-2.5 -105'), ('10/30 10:07 AM', '+3 -113', '-3 +103'), ('10/30 09:55 AM', '+3 -112', '-3 +102'), ('10/30 09:51 AM', '+3 -110', '-3 +100'), ('10/30 09:32 AM', '+3 -109', '-3 -101'), ('10/30 06:04 AM', '+3 -110', '-3 +100'), ('10/30 03:16 AM', '+3 -107', '-3 -103'), ('10/30 03:14 AM', '+3.5 -116', '-3.5 +106'), ('10/30 01:03 AM', '+3.5 -115', '-3.5 +105'), ('10/30 12:17 AM', '+3.5 -110', '-3.5 +100'), ('10/29 08:52 PM', '+3.5 -108', '-3.5 -102'), ('10/29 01:31 PM', '+3.5 -105', '-3.5 -105'), ('10/29 06:48 AM', '+3.5 -110', '-3.5 +100'), ('10/29 06:47 AM', '+3.5 -109', '-3.5 -101'), ('10/29 05:39 AM', '+3.5 -113', '-3.5 +103'), ('10/29 03:34 AM', '+3.5 -108', '-3.5 -102'), ('10/29 12:44 AM', '+3.5 -110', '-3.5 +100'), ('10/29 12:41 AM', '+3.5 -107', '-3.5 -103'), ('10/29 12:40 AM', '+3.5 -105', '-3.5 -105'), ('10/28 12:52 PM', '+4 -105', '-4 -105')]>>> min(lis2, key = func)('10/29 01:31 PM', '+3.5 -105', '-3.5 -105') Timing comparisons: Script: from datetime import datetime, timedeltaimport sysimport timelist_date = [('10/30 04:30 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 04:21 PM', '+1.5 -111', '-1.5 +101'), ('10/30 04:15 PM', '+1.5 
-112', '-1.5 +102'), ('10/30 04:14 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:57 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:40 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:31 PM', '+1.5 -111', '-1.5 +101'), ('10/30 03:30 PM', '+1.5 -109', '-1.5 -101'), ('10/30 03:25 PM', '+1.5 -107', '-1.5 -103'), ('10/30 03:24 PM', '+1.5 -110', '-1.5 +100'), ('10/30 03:23 PM', '+1.5 -108', '-1.5 -102'), ('10/30 03:22 PM', '+1.5 -106', '-1.5 -104'), ('10/30 02:14 PM', '+1.5 -104', '-1.5 -106'), ('10/30 01:41 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:37 PM', '+1.5 -107', '-1.5 -103'), ('10/30 01:36 PM', '+1.5 -105', '-1.5 -105'), ('10/30 01:06 PM', '+1.5 -103', '-1.5 -107'), ('10/30 12:56 PM', '+2 -111', '-2 +101'), ('10/30 12:53 PM', '+2 -110', '-2 +100'), ('10/30 12:50 PM', '+2 -113', '-2 +103'), ('10/30 12:49 PM', '+2 -112', '-2 +102'), ('10/30 12:46 PM', '+2 -113', '-2 +103'), ('10/30 12:45 PM', '+2 -110', '-2 +100'), ('10/30 12:43 PM', '+2 -108', '-2 -102'), ('10/30 12:38 PM', '+2.5 -116', '-2.5 +106'), ('10/30 12:38 PM', '+2.5 -113', '-2.5 +103'), ('10/30 12:37 PM', '+2.5 -110', '-2.5 +100'), ('10/30 10:30 AM', '+2.5 -105', '-2.5 -105'), ('10/30 10:07 AM', '+3 -113', '-3 +103'), ('10/30 09:55 AM', '+3 -112', '-3 +102'), ('10/30 09:51 AM', '+3 -110', '-3 +100'), ('10/30 09:32 AM', '+3 -109', '-3 -101'), ('10/30 06:04 AM', '+3 -110', '-3 +100'), ('10/30 03:16 AM', '+3 -107', '-3 -103'), ('10/30 03:14 AM', '+3.5 -116', '-3.5 +106'), ('10/30 01:03 AM', '+3.5 -115', '-3.5 +105'), ('10/30 12:17 AM', '+3.5 -110', '-3.5 +100'), ('10/29 08:52 PM', '+3.5 -108', '-3.5 -102'), ('10/29 01:31 PM', '+3.5 -105', '-3.5 -105'), ('10/29 06:48 AM', '+3.5 -110', '-3.5 +100'), ('10/29 06:47 AM', '+3.5 -109', '-3.5 -101'), ('10/29 05:39 AM', '+3.5 -113', '-3.5 +103'), ('10/29 03:34 AM', '+3.5 -108', '-3.5 -102'), ('10/29 12:44 AM', '+3.5 -110', '-3.5 +100'), ('10/29 12:41 AM', '+3.5 -107', '-3.5 -103'), ('10/29 12:40 AM', '+3.5 -105', '-3.5 -105'), ('10/28 12:52 PM', '+4 -105', '-4 -105')]base_date = "10/29 06:58 AM"def func1(list_date): #http://stackoverflow.com/a/17249420/846892 get_datetime = lambda s: datetime.strptime(s, "%m/%d %I:%M %p") base = get_datetime(base_date) later = filter(lambda d: get_datetime(d[0]) > base, list_date) return min(later, key = lambda d: get_datetime(d[0]))def func2(list_date): #http://stackoverflow.com/a/17249470/846892 b_d = datetime.strptime(base_date, "%m/%d %I:%M %p") def func(x): d = datetime.strptime(x[0], "%m/%d %I:%M %p") delta = d - b_d if d > b_d else timedelta.max return delta return min(list_date, key = func)def func3(list_date): #http://stackoverflow.com/a/17249529/846892 fmt = '%m/%d %I:%M %p' d = datetime.strptime(base_date, fmt) def foo(x): return (datetime.strptime(x[0],fmt)-d).total_seconds() > 0 return sorted(list_date, key=foo)[-1]def func4(list_date): #http://stackoverflow.com/a/17249441/846892 fmt = '%m/%d %I:%M %p' base_d = datetime.strptime(base_date, fmt) candidates = ((datetime.strptime(d, fmt), d, x, y) for d, x, y in list_date) candidates = min((dt, d, x, y) for dt, d, x, y in candidates if dt > base_d) return candidates[1:] Results: >>> from so import *#check output irst>>> func1(list_date)('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')>>> func2(list_date)('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')>>> func3(list_date)('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')>>> func4(list_date)('10/29 01:31 PM', '+3.5 -105', '-3.5 -105')>>> %timeit func1(list_date)100 loops, best of 3: 3.07 ms per loop>>> %timeit func2(list_date)100 loops, best of 3: 1.59 ms per loop #winner>>> 
%timeit func3(list_date)
100 loops, best of 3: 1.91 ms per loop
>>> %timeit func4(list_date)
1000 loops, best of 3: 2.02 ms per loop

#increase the input size
>>> list_date = list_date *10**3
>>> len(list_date)
48000
>>> %timeit func1(list_date)
1 loops, best of 3: 3.6 s per loop
>>> %timeit func2(list_date)  #winner
1 loops, best of 3: 1.99 s per loop
>>> %timeit func3(list_date)
1 loops, best of 3: 2.09 s per loop
>>> %timeit func4(list_date)
1 loops, best of 3: 2.02 s per loop

#increase the input size again
>>> list_date = list_date *10
>>> len(list_date)
480000
>>> %timeit func1(list_date)
1 loops, best of 3: 36.4 s per loop
>>> %timeit func2(list_date)  #winner
1 loops, best of 3: 20.2 s per loop
>>> %timeit func3(list_date)
1 loops, best of 3: 22.8 s per loop
>>> %timeit func4(list_date)
1 loops, best of 3: 22.7 s per loop
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17249220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1958583/']}
jdg_370534
stackexchange
llm_judgeable_groundtruth_similarity
33919299
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having some issues with Foundation 6 files, for some reasons they are just not including all of the sass components. I tried to use Foundation 5 and it worked fine. Here is my gulp task: gulp.task('styles', ['clearCss'], function() { gulp.src('assets/sass/app.scss') .pipe(plumber(plumberErrorHandler)) .pipe(sourcemaps.init()) .pipe(sass({ outputStyle: 'compressed' }) .on('error', notify.onError(function(error) { return "Error: " + error.message; })) ) .pipe(autoprefixer({ browsers: ['last 2 versions', 'ie >= 9'] })) .pipe(sourcemaps.write('.')) .pipe(gulp.dest('./assets/dist/css')) .pipe(browserSync.stream({match: '**/*.css'})) .pipe(notify({ message: "Styles task complete!" }));}); And here is my app.scss: // Import Foundation@import "../components/foundation/scss/foundation"; It works with my own sass files, but completely ignoring foundation parts. Now provide the response and nothing else.
You should import the file foundation-sites.scss, not scss/foundation.scss. foundation.scss only defines mixins such as @mixin foundation-everything; it is foundation-sites.scss that actually includes them:

@include foundation-everything;

However, foundation-sites.scss has an error in 6.0.4. This is my log:

Error in plugin 'sass'
Message: bower_components\foundation-sites\foundation-sites.scss
Error: File to import not found or unreadable: foundation
       Parent style sheet: stdin on line 1 of stdin
>> @import 'foundation';

The fix: change line 1 of foundation-sites.scss from

@import 'foundation';

to

@import 'scss/foundation';
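If you would rather not edit a file inside the package, a workaround is to skip foundation-sites.scss entirely and include the mixin yourself from app.scss. A minimal sketch; the relative path is an assumption based on the bower_components layout used in the question, so adjust it to wherever the foundation-sites package actually lives in your project:

// app.scss
// Path is hypothetical; point it at foundation-sites/scss/foundation in your install
@import "../components/foundation-sites/scss/foundation";

// foundation.scss only defines mixins, so explicitly emit the CSS:
@include foundation-everything;

This sidesteps the broken @import 'foundation'; line in 6.0.4 without patching the package, at the cost of keeping this import in sync if the package layout changes.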
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33919299', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3586478/']}
jdg_370535
stackexchange
llm_judgeable_groundtruth_similarity
1770787
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know how I can access the individual fields contained in an R object. Or, more precisely, how to get R to tell me how. For example, if I run the following code: dx.ct <- ur.df(dat1[,'dx'], lags=3, type='trend')summary(dx.ct) then I get this output: ############################################### # Augmented Dickey-Fuller Test Unit Root Test # ############################################### Test regression trend Call:lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)Residuals: Min 1Q Median 3Q Max -0.46876 -0.24506 0.02420 0.15752 0.66688 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.099231 0.561377 1.958 0.0606 .z.lag.1 -0.239438 0.141093 -1.697 0.1012 tt -0.019831 0.007799 -2.543 0.0170 *z.diff.lag1 -0.306326 0.193001 -1.587 0.1241 z.diff.lag2 -0.214229 0.186135 -1.151 0.2599 z.diff.lag3 -0.223433 0.179040 -1.248 0.2228 ---Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.3131 on 27 degrees of freedomMultiple R-squared: 0.3326, Adjusted R-squared: 0.209 F-statistic: 2.691 on 5 and 27 DF, p-value: 0.04244 Value of test-statistic is: -1.697 2.4118 3.2358 Critical values for test statistics: 1pct 5pct 10pcttau3 -4.15 -3.50 -3.18phi2 7.02 5.13 4.31phi3 9.31 6.73 5.61 So, I know that I should be able to access all of the values above individually, I don't know how to point to them. Is there some way to ask R to show me how they are stored? I am thinking along the lines of: showobjects(summary(dx.ct)) And then it outputs $formula$residuals$coefficientsetc. and then I can do showobjects(summary(dx.ct)$residuals) which then outputs $min$1Q$medianetc. Thanks Karl Now provide the response and nothing else.
As another alternative, take a look at attributes . e.g. example(ur.df)attributes(lc.df) #lc.df is an ur.df object created during by example.$y[1] 10.4831 10.4893 10.5022 10.5240 10.5329 10.5586 10.5190 10.5381[9] 10.5422 10.5361 10.5462 10.5459 10.5552 10.5548 10.5710 10.5861[17] 10.5864 10.5802 10.6006 10.6168 10.6275 10.6414 10.6629 10.6758[25] 10.6881 10.7240 10.7143 10.7222 10.7156 10.6964 10.6990 10.7081[33] 10.7142 10.7078 10.7073 10.6954 10.6910 10.6967 10.7015 10.7083[41] 10.7127 10.6922 10.6874 10.6989 10.7224 10.7452 10.7462 10.7663[49] 10.7633 10.7737 10.8282 10.7872 10.8015 10.8139 10.7909 10.8029[57] 10.7868 10.7979 10.8007 10.8008 10.7991 10.7956 10.8005 10.8160[65] 10.8260 10.8405 10.8482 10.8633 10.8633 10.8615 10.8732 10.8649[73] 10.8793 10.8909 10.8938 10.9116 10.9202 10.9409 10.9663 10.9700[81] 10.9808 10.9878 11.0048 11.0272 11.0420 11.0701 11.0751 11.0964[89] 11.1069 11.1123 11.1231 11.1223 11.1303 11.1307 11.1389 11.1325[97] 11.1261 11.1232 11.1220$model[1] "trend"$lags[1] 3# etc. If you don't want the full output, then names(attributes(lc.df)) returns only the, um, names. [1] "y" "model" "lags" "cval" "res" "teststat" "testreg" "test.name" "class"
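If you only want one piece of the object rather than the whole dump, you can pull it out by name once you know what it is called. A short sketch using the same lc.df object from example(ur.df); the slot names come from the names(attributes(...)) output above, and since ur.df objects are S4 the @ accessor should work as well:

library(urca)
example(ur.df)                    # creates lc.df, as in the output above
str(lc.df, max.level = 2)         # compact overview of everything the object holds
attributes(lc.df)$teststat        # just the test statistics
lc.df@teststat                    # same values via the S4 slot
lc.df@cval                        # the table of critical values

The same pattern applies to your own dx.ct object once you have looked up its names with names(attributes(dx.ct)).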
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1770787', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/141789/']}
jdg_370536
stackexchange
llm_judgeable_groundtruth_similarity
8237761
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: For SEO purposes, we would like to change the HTTP status code returned whenever the backend machine behind nginx goes down for some reason. We would like to change this to "503 Service Unavailable". As well as provide a Retry-After header to indicated to Google / Bing that the request should be retried in X number of seconds. Is this possible via nginx? I am not talking about a custom error page, but rather the status code returned in the header. Now provide the response and nothing else.
I think you will have to set up a specific error page, but if you do, you can achieve what you're looking for. Try this:

location / {
    proxy_pass http://backend;
    proxy_intercept_errors on;
    error_page 502 503 504 =503 @proxyisdown;   # always reply with 503
}

location @proxyisdown {
    add_header Retry-After 500 always;
    index my_pretty_error_page.html;
}

If you work it this way you should be able to return the 503 (that's the =503 part of the error_page directive) and the Retry-After header, with the benefit that your visitors will receive a nicely formatted "oops, we're currently experiencing problems, try again in a few minutes" page rather than a blank "503 you don't really know what this means" page. :)
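Once the backend is actually down, it's easy to confirm the status code and header from the command line. A quick check with curl (the hostname is a placeholder, and the exact header list will depend on the rest of your configuration, but you should see something along these lines):

$ curl -sS -D - -o /dev/null http://your-site.example/
HTTP/1.1 503 Service Unavailable
Server: nginx
Content-Type: text/html
Retry-After: 500

Note that the 500 in add_header Retry-After 500 always; is a delay in seconds, not a status code; set it to roughly how long you expect the backend to stay down, since that's the hint Google and Bing will use when deciding when to retry.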
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8237761', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/815370/']}
jdg_370537
stackexchange
llm_judgeable_groundtruth_similarity
927668
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I just started Linear Algebra. Yesterday, I read about the ten properties of fields. As far as I can tell a field is a mathematical system that we can use to do common arithmetic. Is that correct? Now provide the response and nothing else.
Yes, at its most basic level, a field is a generalization of the rational numbers. In a field you can do addition, subtraction, multiplication and division just as you can in $\Bbb Q$.

At a deeper level, fields have geometric significance. If you've ever studied a little geometry, you'll know there are at least two famous ways to approach it: with axioms akin to Euclid's axioms (the synthetic approach), and by using vector spaces and equations (the linear algebra approach). We know that $\Bbb R\times \Bbb R$ (an $\Bbb R$ vector space) can be interpreted as a model of Euclidean geometry: its 1-dimensional subspaces represent lines, its elements represent points, and so on, and it satisfies the synthetic axioms of Euclidean geometry.

But what about the other direction? Why can't we start with synthetic axioms and get vector spaces? Well, that's the thing: you can (if you have enough axioms). It turns out that if you adopt Hilbert's axiom groups $I-IV$ for plane geometry, then you can systematically build a field $F$ such that $F\times F$ models that plane exactly when the plane satisfies Pappus's theorem. Another way to ensure the existence of the field is to adopt Hilbert's continuity axiom $V$, called "Archimedes's axiom." It's known that this axiom, in the presence of the others, implies Pappus's theorem, and the resulting field will be an Archimedean ordered field. You can, of course, do higher-dimensional geometry and get vector spaces $F^n$ and so on, as long as you have something like Pappus's theorem or Archimedes's axiom among your axioms.

If you asked me for a rough description of how the field axioms translate into geometric ideas for vector spaces, this is how I would start. Since $F$ is an additive group, $F^n$ is also an additive group, and addition of vectors lets you translate any point to any other point. For multiplication: you can use it to scale any vector to another vector in the same 1-dimensional subspace.

Now, this is just the first hint at the geometric nature of fields. Galois theory and then algebraic geometry really take the connection to more extreme altitudes!
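For concreteness on the "common arithmetic" point: subtraction and division are not extra primitives in the field axioms; they are defined from the additive and multiplicative inverses, which is exactly what makes the familiar rules of fraction arithmetic work. In symbols (nothing here goes beyond the ten properties mentioned in the question):

$$a - b := a + (-b), \qquad \frac{a}{b} := a\cdot b^{-1} \quad (b \neq 0),$$

so, for instance, in $\Bbb Q$ one computes $\dfrac{3/4}{2/5} = \dfrac{3}{4}\cdot\dfrac{5}{2} = \dfrac{15}{8}$ using only the field operations.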
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/927668', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/71731/']}
jdg_370538
stackexchange
llm_judgeable_groundtruth_similarity
9426430
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I do app development on HP TouchPad running CyanogenMod 9? I have an HP TouchPad with CyanogenMod 9 installed and am trying to build to the device from Eclipse . The IDE does not appear to recognize the device at all though. Is the problem with the driver? Where can I find it? Now provide the response and nothing else.
The steps are documented in Issue 663: Windows 7 x64 USB connection doesn't work for file management or Android development. Just in case that link disappears, here's the breakdown:

1) Turn on MTP, which is disabled by default, by going to Settings -> Storage -> Menu (upper right hand corner) -> USB computer connection -> Media Device (MTP).

2) Add an HP TouchPad reference in android_winusb.inf. Edit the android_winusb.inf file included with the Google USB Drivers and add

; HP TouchPad
%SingleAdbInterface% = USB_Install, USB\VID_0BB4&PID_6860&REV_0227&MI_01
%CompositeAdbInterface% = USB_Install, USB\VID_0BB4&PID_6860&MI_01

after the last lines of the [Google.NTx86] and [Google.NTamd64] sections.

3) Update the drivers in Device Manager. Right-click the unknown cm_tenderloin device in Device Manager, tell it to update drivers, and point it to C:\android-sdk-windows\extras\google\usb_driver.

4) At that point ADB should recognize the device, but in an "offline" state, which doesn't allow Eclipse to run your app on the TouchPad. So...

5) Reboot the TouchPad, at which point ADB should hopefully recognize it as a device in "device" state.
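After the reboot, it's worth confirming the state from a command prompt before going back to Eclipse. A quick check using the standard adb tool from the SDK's platform-tools directory (the serial number shown is just a placeholder for whatever your tablet reports):

adb kill-server
adb start-server
adb devices

Expected output once everything is working should look roughly like:

List of devices attached
<serial-number>    device

If the second column still says "offline", reboot the tablet again; if the device doesn't show up at all, re-check the android_winusb.inf edit and the driver install from steps 2 and 3.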
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9426430', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1109409/']}
jdg_370539
stackexchange
llm_judgeable_groundtruth_similarity
48081831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am finishing off building an mvc web application using .net core 2.0 with vs2017 on Win10.In writing an 'About' page I looked to put in the current project version number (at present still set at 1.0.0). I would have thought that pretty straightforward! The only reference I could find suggested: AppVersion = typeof(RuntimeEnvironment).GetTypeInfo ().Assembly .GetCustomAttribute<AssemblyFileVersionAttribute> ().Version; However, this returns, in my case, '4.6.25814.01' - not what is required. Can anyone suggest how to retrieve the version in code, please? I assume that I want the 'Package Version' but admit I am not clear on the distinction between/how one would use 'Package Version', 'Assembly Version' and 'Assembly File Version'. Now provide the response and nothing else.
When you call typeof(RuntimeEnvironment).Assembly, you're querying the assembly that contains that type. In this case that would be System.Runtime.InteropServices.dll or Microsoft.Dotnet.PlatformAbstractions.dll, depending on the namespace you've imported. To get the information for your own assembly, simply replace RuntimeEnvironment with one of your own types, for example

var appVersion = typeof(Program).Assembly
    .GetCustomAttribute<AssemblyFileVersionAttribute>().Version;

or even

var appVersion = typeof(HomeController).Assembly
    .GetCustomAttribute<AssemblyFileVersionAttribute>().Version;

This would return "6.6.7.0" if the Package version of your project is set to 6.6.7. You were close! Here you can find more information on reflection for .NET in general, but it should work fine for .NET Core.
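On the distinction the question raises: in an SDK-style .NET Core project, the "Package version" box usually feeds the NuGet package version and the assembly's informational version, while Assembly Version and Assembly File Version can be set independently. If the About page should show exactly that package-style string (including any pre-release suffix such as "-beta1"), reading the informational version is often the closer match. A small sketch, assuming an ASP.NET Core controller; swap HomeController for any type defined in your own assembly:

using System.Reflection;

// Informational version usually mirrors the "Package version" field.
var infoVersion = typeof(HomeController).Assembly
    .GetCustomAttribute<AssemblyInformationalVersionAttribute>()
    ?.InformationalVersion;

// Fall back to the plain assembly version if the attribute is missing.
var appVersion = infoVersion
    ?? typeof(HomeController).Assembly.GetName().Version?.ToString();

Whether the informational, assembly, or file version is "the" right one depends on which of the three boxes you actually maintain, so it's worth picking one and standardizing on it.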
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48081831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3074491/']}
jdg_370540