source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 1-8) | prompt (stringlengths 209-40.4k) | gold_standard_solution (stringlengths 0-56.7k) | verification_info (stringclasses, 1 value) | metadata (stringlengths 138-225) | problem_id (stringlengths 9-10)
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32647890
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In iOS 8 and lower, showing a UIActionSheet while the keyboard is presented will present the action sheet over the keyboard. With iOS 9 this is no longer the case. In my app we have chat functionality and want to show an action sheet over the keyboard. We used to use UIActionSheet, which worked fine up to iOS 8. In iOS 9 the action sheet is presented behind the keyboard. I've tried both UIActionSheet and UIAlertController. What we want is an action sheet like in Messages.app. I've tried placing the action sheet in its own window and overriding canBecomeFirstResponder, which just made the keyboard disappear.
Now provide the response and nothing else.
|
I have implemented exactly this in our app. The trick is to have the alert controller appear on a different window. This is how the UIActionSheet implementation does it, and it works great on iOS 8, but on iOS 9 Apple has moved the keyboard implementation to a window which has a very high window level (10000000). The fix is to give your alert window an even higher window level (as a custom double value, not using the provided constants). When using a custom window which will have transparency, make sure to read my answer here, regarding background color, to prevent the window becoming black during rotation transitions.

_alertWindow = [[UIWindow alloc] initWithFrame:[UIScreen mainScreen].bounds];
_alertWindow.rootViewController = [UIViewController new];
_alertWindow.windowLevel = 10000001;
_alertWindow.hidden = NO;
_alertWindow.tintColor = [[UIWindow valueForKey:@"keyWindow"] tintColor];

__weak __typeof(self) weakSelf = self;
UIAlertController* alert = [UIAlertController alertControllerWithTitle:@"Test" message:nil preferredStyle:UIAlertControllerStyleActionSheet];
[alert addAction:[UIAlertAction actionWithTitle:@"Cancel" style:UIAlertActionStyleCancel handler:^(UIAlertAction * _Nonnull action) {
    weakSelf.alertWindow.hidden = YES;
    weakSelf.alertWindow = nil;
}]];
[alert addAction:[UIAlertAction actionWithTitle:@"Test" style:UIAlertActionStyleDefault handler:^(UIAlertAction * _Nonnull action) {
    weakSelf.alertWindow.hidden = YES;
    weakSelf.alertWindow = nil;
}]];
[_alertWindow.rootViewController presentViewController:alert animated:YES completion:nil];
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32647890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/309925/']}
|
jdg_378841
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
281343
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I do not intuitively understand this. A short circuit simply means there's no impedance. If there's much less impedance than the transmission line, the waves should simply move through the short-circuit, uninhibited. The voltage and current waves being reflected back do not make sense to me.
Now provide the response and nothing else.
|
In a transmission line, you have an electromagnetic wave traveling along: a time-varying electric and magnetic field. When the wave reaches a short circuit, the short circuit enforces the rule that V=0 at that location. This destroys the conditions necessary for the wave to continue traveling, because the electric field can no longer vary with time there. Without this time variation, the wave cannot keep moving forward. And, as it happens, the boundary condition also creates exactly the conditions needed for the wave to reflect. You could also consider this from a conservation of energy perspective. An electromagnetic wave carries energy; it is really a form of traveling energy. The short circuit cannot dissipate energy (when V=0, power=0). BUT, the wave cannot continue to travel either, as previously mentioned. So, really, there is nothing that can happen other than reflection. You could say that when a wave in a transmission line encounters a load, any energy which is not delivered to the load MUST be reflected in order to satisfy conservation of energy. Of course, if the load is an antenna, some of the energy will be radiated into space, but that does not really change anything: the antenna is modeled as some kind of load, and the energy that is radiated into space is accounted for by a resistor in the model.
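For reference, the standard reflection-coefficient calculation makes the same point quantitatively: for a line of characteristic impedance $Z_0$ terminated in a load $Z_L$, the voltage reflection coefficient is $$\Gamma=\frac{Z_L-Z_0}{Z_L+Z_0},$$ and a short circuit ($Z_L=0$) gives $\Gamma=-1$. The whole voltage wave comes back inverted, so $V^+ + V^- = 0$ at the short while the current there doubles, and no power is delivered to the load.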
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/281343', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/123850/']}
|
jdg_378842
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1920529
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Part of our core product is a website CMS which makes use of various page widgets. These widgets are responsible for displaying content, listing products, handling event registration, etc. Each widget is represented by a class which derives from the base widget class. When rendering a page the server grabs the page's widget from the database and then creates an instance of the correct class. The factory method pattern, right?

Private Function WidgetFactory(typeId)
    Dim oWidget
    Select Case typeId
        Case widgetType.ContentBlock
            Set oWidget = New ContentWidget
        Case widgetType.Registration
            Set oWidget = New RegistrationWidget
        Case widgetType.DocumentList
            Set oWidget = New DocumentListWidget
        Case widgetType.DocumentDisplay
    End Select
    Set WidgetFactory = oWidget
End Function

Anyway, this is all fine, but as time has gone on the number of widget types has increased to around 50, meaning the factory method is rather long. Every time I create a new type of widget I go and add another couple of lines to the method, and a little alarm rings in my head that maybe this isn't the best way to do things. I tend to just ignore that alarm, but it's getting louder. So, am I doing it wrong? Is there a better way to handle this scenario?
Now provide the response and nothing else.
|
I think the question you should ask yourself is: Why am I using a Factory method here? If the answer is " because of A ", and A is a good reason, then continue doing it, even if it means some extra code. If the answer is " I don't know; because I've heard that you are supposed to do it this way? " then you should reconsider. Let's go over the standard reasons for using factories. Here's what Wikipedia says about the Factory method pattern: [...], it deals with the problem of creating objects (products) without specifying the exact class of object that will be created. The factory method design pattern handles this problem by defining a separate method for creating the objects, whose subclasses can then override to specify the derived type of product that will be created. Since your WidgetFactory is Private , this is obviously not the reason why you use this pattern. What about the "Factory pattern" itself (independent of whether you implement it using a Factory method or an abstract class)? Again, Wikipedia says : Use the factory pattern when: The creation of the object precludes reuse without significantly duplicating code. The creation of the object requires access to information or resources not appropriate to contain within the composing object. The lifetime management of created objects needs to be centralised to ensure consistent behavior. From your sample code, it does not look like any of this matches your need. So, the question (which only you can answer) is: (1) How likely is it that you will need the features of a centralized Factory for your widgets in the future and (2) how costly is it to change everything back to a Factory approach if you need it in the future? If both are low, you can safely drop the Factory method for the time being. EDIT: Let me get back to your special case after this generic elaboration: Usually, it's a = new XyzWidget() vs. a = WidgetFactory.Create(WidgetType.Xyz) . In your case, however, you have some (numeric?) typeId from a database. As Mark correctly wrote, you need to have this typeId -> className map somewhere . So, in that case, the good reason for using a factory method could be: "I need some kind of huge ConvertWidgetTypeIdToClassName select-case-statement anyway, so using a factory method takes no additional code plus it provides the factory method advantages for free, if I should ever need them." As an alternative, you could store the class name of the widget in the database (you probably already have some WidgetType table with primary key typeId anyway, right?) and create the class using reflection (if your language allows for this type of thing). This has a lot of advantages (e.g. you could drop in DLLs with new widgets and don't have to change your core CMS code) but also disadvantages (e.g. "magic string" in your database which is not checked at compile time; possible code injection, depending on who has access to that table).
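To make those two alternatives concrete, here is a rough sketch in Python (the class names and registry are invented for illustration; the original code is classic ASP/VBScript, so this is only the shape of the idea, not a drop-in replacement):

import sys

# A registry-based factory: the typeId -> class map is a single dictionary
# instead of a long Select Case block.
class ContentWidget: ...
class RegistrationWidget: ...
class DocumentListWidget: ...

WIDGET_REGISTRY = {
    1: ContentWidget,        # widgetType.ContentBlock
    2: RegistrationWidget,   # widgetType.Registration
    3: DocumentListWidget,   # widgetType.DocumentList
}

def widget_factory(type_id):
    # Raises KeyError for an unknown type id.
    return WIDGET_REGISTRY[type_id]()

# The reflection variant: the database stores the class *name* as a string
# (the "magic string" trade-off discussed above) and it is resolved at runtime.
def widget_factory_by_name(class_name):
    cls = getattr(sys.modules[__name__], class_name)   # string -> class object
    return cls()

widget = widget_factory(1)                        # ContentWidget instance
widget2 = widget_factory_by_name("ContentWidget") # same, via the "class name in DB" route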
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1920529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/984/']}
|
jdg_378843
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20469877
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to add a TabHost inside a Fragment. The code is given below. Here inside the Fragment. I am trying to add TabHost to show two Tabs: package com.nordicsoft.dilosysNewVersion;import android.app.Fragment;import android.content.Intent;import android.os.Bundle;import android.view.LayoutInflater;import android.view.View;import android.view.ViewGroup;import android.widget.TabHost;public class Fragment_Coupons extends Fragment { TabHost tabHost; TabHost.TabSpec spec; public Fragment_Coupons() { } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.fragment_coupon, container, false); tabHost = (TabHost) rootView.findViewById(android.R.id.tabhost); tabHost.setup(); //Add_Text_To_Tab("Android", "IOS"); Intent intentAndroid = new Intent().setClass(getActivity(), Aclass.class); spec = tabHost.newTabSpec("Android").setContent(intentAndroid) .setIndicator("Android"); tabHost.addTab(spec); Intent intentBus = new Intent().setClass(getActivity(), Bclass.class); spec = tabHost.newTabSpec("Welcome").setIndicator("Welcome") .setContent(intentBus).setIndicator("Welcome"); tabHost.addTab(spec); return rootView; }} The XML code: <?xml version="1.0" encoding="utf-8"?><TabHost xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/tabhost" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <TabWidget android:id="@android:id/tabs" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <FrameLayout android:id="@android:id/tabcontent" android:layout_width="fill_parent" android:layout_height="fill_parent"/> </LinearLayout></TabHost> This Is giving me the error: 12-09 17:40:53.509: E/AndroidRuntime(18103): FATAL EXCEPTION: main12-09 17:40:53.509: E/AndroidRuntime(18103): java.lang.IllegalStateException: Did you forget to call 'public void setup(LocalActivityManager activityGroup)'?12-09 17:40:53.509: E/AndroidRuntime(18103): at android.widget.TabHost$IntentContentStrategy.getContentView(TabHost.java:690)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.widget.TabHost.setCurrentTab(TabHost.java:356)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.widget.TabHost.addTab(TabHost.java:246)12-09 17:40:53.509: E/AndroidRuntime(18103): at com.nordicsoft.dilosysNewVersion.Fragment_Coupons.onCreateView(Fragment_Coupons.java:32)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:828)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1032)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.app.BackStackRecord.run(BackStackRecord.java:622)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.app.FragmentManagerImpl.execPendingActions(FragmentManager.java:1382)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.app.FragmentManagerImpl$1.run(FragmentManager.java:426)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.os.Handler.handleCallback(Handler.java:605)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.os.Handler.dispatchMessage(Handler.java:92)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.os.Looper.loop(Looper.java:137)12-09 17:40:53.509: E/AndroidRuntime(18103): at android.app.ActivityThread.main(ActivityThread.java:4517)12-09 17:40:53.509: E/AndroidRuntime(18103): at 
java.lang.reflect.Method.invokeNative(Native Method)12-09 17:40:53.509: E/AndroidRuntime(18103): at java.lang.reflect.Method.invoke(Method.java:511)12-09 17:40:53.509: E/AndroidRuntime(18103): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:993)12-09 17:40:53.509: E/AndroidRuntime(18103): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:760)12-09 17:40:53.509: E/AndroidRuntime(18103): at dalvik.system.NativeStart.main(Native Method)12-09 17:46:02.770: I/Process(18103): Sending signal. PID: 18103 SIG: 9
Now provide the response and nothing else.
|
Try to do this to handle the Tabs: public class MainFragment extends Fragment { private FragmentTabHost mTabHost; //Mandatory Constructor public MainFragment() { } public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); } public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.fragment_tabs,container, false); mTabHost = (FragmentTabHost)rootView.findViewById(android.R.id.tabhost); mTabHost.setup(getActivity(), getChildFragmentManager(), R.id.realtabcontent); mTabHost.addTab(mTabHost.newTabSpec("fragmentb").setIndicator("Fragment B"), FragmentB.class, null); mTabHost.addTab(mTabHost.newTabSpec("fragmentc").setIndicator("Fragment C"), FragmentC.class, null); mTabHost.addTab(mTabHost.newTabSpec("fragmentd").setIndicator("Fragment D"), FragmentD.class, null); return rootView; }} With the layout: <android.support.v4.app.FragmentTabHost xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/tabhost" android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:orientation="vertical" android:layout_width="match_parent" android:layout_height="match_parent"> <FrameLayout android:id="@+id/realtabcontent" android:layout_width="match_parent" android:layout_height="0dp" android:layout_weight="1"/> </LinearLayout></android.support.v4.app.FragmentTabHost> The MotherActivity to host the MainFragment: public class MotherActivity extends FragmentActivity {@Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); MainFragment fragmenttab = new MainFragment(); getSupportFragmentManager().beginTransaction() .add(R.id.item_detail_container, fragmenttab).commit(); } And MotherActivity layout: <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/item_detail_container" android:layout_width="match_parent" android:layout_height="match_parent" /> After this just create the normal fragment B and C, etc Class. The Result will be:
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20469877', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2380316/']}
|
jdg_378844
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6844785
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some images named with a generated uuid1 string, for example 81397018-b84a-11e0-9d2a-001b77dc0bed.jpg. I want to find all these images using the "find" command: find . -regex "[a-f0-9\-]\{36\}\.jpg". But it doesn't work. Is something wrong with the regex? Could someone help me with this?
Now provide the response and nothing else.
|
find . -regextype sed -regex ".*/[a-f0-9\-]\{36\}\.jpg"

Note that you need to specify .*/ in the beginning because find matches the whole path. Example:

susam@nifty:~/so$ find . -name "*.jpg"
./foo-111.jpg
./test/81397018-b84a-11e0-9d2a-001b77dc0bed.jpg
./81397018-b84a-11e0-9d2a-001b77dc0bed.jpg
susam@nifty:~/so$ find . -regextype sed -regex ".*/[a-f0-9\-]\{36\}\.jpg"
./test/81397018-b84a-11e0-9d2a-001b77dc0bed.jpg
./81397018-b84a-11e0-9d2a-001b77dc0bed.jpg

My version of find:

$ find --version
find (GNU findutils) 4.4.2
Copyright (C) 2007 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Eric B. Decker, James Youngman, and Kevin Dalley.
Built using GNU gnulib version e5573b1bad88bfabcda181b9e0125fb0c52b7d3b
Features enabled: D_TYPE O_NOFOLLOW(enabled) LEAF_OPTIMISATION FTS() CBO(level=0)

susam@nifty:~/so$ find . -regextype foo -regex ".*/[a-f0-9\-]\{36\}\.jpg"
find: Unknown regular expression type `foo'; valid types are `findutils-default', `awk', `egrep', `ed', `emacs', `gnu-awk', `grep', `posix-awk', `posix-basic', `posix-egrep', `posix-extended', `posix-minimal-basic', `sed'.
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/6844785', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/523517/']}
|
jdg_378845
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2786168
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Example of a series such that $\sum a_n$ is divergent but $\sum \frac{a_n}{1+ n a_n}$ is convergent. I found one example online: $a_n = \frac{1}{n^2}$ when $n$ is not a perfect square and $a_n = \frac{1}{\sqrt n}$ when $n$ is a perfect square. I know that $\sum a_n$ is divergent, but I am not convinced of the convergence of $\sum \frac{a_n}{1+ n a_n}$. Any help will be appreciated.
Now provide the response and nothing else.
|
The counterexample is correct. If $a_n =\frac {1}{n^2}$ when $n$ is not a perfect square and $a_n =\frac {1}{\sqrt n}$ when $n$ is a perfect square, we have that $$\sum_{n=1}^{\infty} a_n=\sum_{\text{$n$ is not a square}} \frac{1}{n^2}+\sum_{\text{$n$ is a square}} \frac{1}{\sqrt{n}}\geq \sum_{k=1}^{\infty} \frac{1}{\sqrt{k^2}}=+\infty.$$ On the other hand, $$\sum_{n=1}^{\infty} \frac{ a_n}{1+ n a_n}=\sum_{\text{$n$ is not a square}} \frac{1}{n^2+ n}+\sum_{\text{$n$ is a square}} \frac{1}{\sqrt{n}+ n}\\\leq \sum_{n=1}^{\infty} \frac{1}{n^2}+\sum_{k=1}^{\infty} \frac{1}{k+ k^2}\leq 2\sum_{n=1}^{\infty} \frac{1}{n^2}<+\infty.$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2786168', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/415928/']}
|
jdg_378846
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
491921
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that: The integer next greater than $(\sqrt{7}+\sqrt{3})^{2n}$ is divisible by $4^n$
Now provide the response and nothing else.
|
Hint: Show that$$(\sqrt7+\sqrt3)^{2n}+(\sqrt7-\sqrt3)^{2n}$$is an integer. Also $\sqrt7-\sqrt3<1$.
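Sketch of how the hint finishes the problem: the binomial expansions of $(\sqrt7+\sqrt3)^{2n}$ and $(\sqrt7-\sqrt3)^{2n}$ have equal rational parts and opposite irrational parts, so $$N=(\sqrt7+\sqrt3)^{2n}+(\sqrt7-\sqrt3)^{2n}$$ is an integer, and since $0<(\sqrt7-\sqrt3)^{2n}<1$ it is exactly the next integer greater than $(\sqrt7+\sqrt3)^{2n}$. Moreover $$N=(10+2\sqrt{21})^{n}+(10-2\sqrt{21})^{n}=2^n\left[(5+\sqrt{21})^{n}+(5-\sqrt{21})^{n}\right],$$ and $t_n=(5+\sqrt{21})^{n}+(5-\sqrt{21})^{n}$ satisfies $t_n=10t_{n-1}-4t_{n-2}$ with $t_0=2$, $t_1=10$, so induction gives $2^n\mid t_n$ and hence $4^n\mid N$.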
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/491921', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/91482/']}
|
jdg_378847
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1112724
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone explain why operator[] isn't implemented for std::list? I've searched around a bit but haven't found an answer. It wouldn't be too hard to implement, or am I missing something?
Now provide the response and nothing else.
|
Retrieving an element by index is an O(n) operation for a linked list, which is what std::list is. So it was decided that providing operator[] would be deceptive, since people would be tempted to actively use it, and then you'd see code like:

std::list<int> xs;
for (int i = 0; i < xs.size(); ++i) {
    int x = xs[i];
    ...
}

which is O(n^2) - very nasty. So the ISO C++ standard specifically mentions that all STL sequences that support operator[] should do it in amortized constant time (23.1.1 [lib.sequence.reqmts]/12), which is achievable for vector and deque, but not list. For cases where you actually need that sort of thing, you can use the std::advance algorithm:

std::list<int>::iterator iter = xs.begin();
std::advance(iter, i);
int x = *iter;
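The same pitfall is easy to reproduce in any language whose sequence type lacks O(1) random access. As a rough illustration (Python's collections.deque stands in for the linked list here, since indexing it away from the ends is O(n); the sizes are arbitrary):

from collections import deque
import timeit

xs = deque(range(20000))

def by_index():
    # Quadratic overall: each xs[i] has to walk toward position i from an end.
    total = 0
    for i in range(len(xs)):
        total += xs[i]
    return total

def by_iteration():
    # Linear: the iterator just follows the links.
    total = 0
    for x in xs:
        total += x
    return total

print("indexing:  ", timeit.timeit(by_index, number=10))
print("iterating: ", timeit.timeit(by_iteration, number=10))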
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1112724', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126912/']}
|
jdg_378848
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11036579
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following layout in my page (simplified):

<h:form>
    <h:commandButton action="#{bean.save}" value="Save"/>
    <rich:tabPanel switchType="ajax">
        <rich:tab name="one" switchType="client">
        </rich:tab>
        <rich:tab name="two" switchType="ajax">
        </rich:tab>
        <rich:tab name="three" switchType="client">
        </rich:tab>
    </rich:tabPanel>
</h:form>

Tabs "one" and "three" have switchType="client"; there are some inputs on them, bound to bean properties. When "Save" is pressed, I expect that the bean properties will be updated from the inputs in both tabs. However, the properties are updated only from the inputs on the current tab. Values from the other tab are lost, even though the whole form is submitted (I have checked it with an HTTP watcher). So the user has to input data in one tab, press "Save", then update values on the other tab and press "Save" again in order to save all changes. How is it possible to propagate values from all tabs at once?
Now provide the response and nothing else.
|
Updated Ok, this is what you are looking for: User.find(params[:id]).attributes_before_type_cast["preferences"][:value] This will return the string in its serialized form. That is the closest you can get that I can find, it won't work if you have already gotten the object pulled from the database. Sorry for misreading your question. You can use this from the User model too. Leaving the old answer up just in case the other way of doing it is helpful to someone. Old Answer Just to be sure I understand the question, you want the raw data from the table. The data that rails serializes and puts in the database. EX. You put in ['site_id','last_update','last_restart'] and you get "---\n- site_id\n- last_update\n- last_restart\n" and it is put in the database and saved. You want to retrieve this: "---\n- site_id\n- last_update\n- last_restart\n" from the database. Ok, it took some fanagaling from the database but you can do it like so. In a project I have a serialized array call devise_table_preferences that lists the preferences to display in a table in a particular order, like so: user.devise_table_preferences = ['site_id','last_update','last_restart'] The serialized view of it is like so: "---\n- site_id\n- last_update\n- last_restart\n" Using your method above, I made a query like so: preference = ActiveRecord::Base.connection.execute("SELECT devise_table_preferences FROM users WHERE id = #{@user.id}") It returns an object in the console like so: preference = #<Mysql2::Result:0x007fe4cdf34850> Running: preference.first[0] Gave me this: "---\n- site_id\n- last_restart\n" I know its a big work around but it will definitely give you your data in its serialized way. Hope that it helps you out.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11036579', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1022201/']}
|
jdg_378849
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13339392
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
http://en.wikipedia.org/wiki/Pragma_once Should I still use include guards when all of these compilers support #pragma once? A lot of responses on Stack Overflow say to use both for compatibility, but I'm not sure if that still rings true. What compilers today don't support #pragma once? I am not sure if using both was just a recommendation before it became widely adopted, or if there are still very good reasons to use both methods. Any examples of when only using #pragma once will cause problems?
Now provide the response and nothing else.
|
It depends on how portable your program is expected to be. As long as you are writing a program which is supposed to work only with compilers which you know definitely support #pragma once, just using #pragma once should suffice. But doing so restricts your program to the set of compilers which support this implementation-defined feature. If you need your program to work on all compilers, then you should use both #pragma once and include guards. In case a compiler does not support #pragma once it will simply ignore it [Ref#1]; in such a case the header guards will serve the purpose, so there is nothing wrong in using both when you are not aware of the features supported by your target compilers. So if you want your program to be 100% portable across different compilers, the ideal way is still to use only the include guards. As @CharlesBailey rightly points out, since the behavior for #pragma once is implementation defined, the behavior on an unknown compiler might have a detrimental effect on your program. [Ref#1] Standard C++03: 16.6 Pragma directive: A preprocessing directive of the form # pragma pp-tokens(opt) new-line causes the implementation to behave in an implementation-defined manner. Any pragma that is not recognized by the implementation is ignored.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13339392', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/908939/']}
|
jdg_378850
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7915156
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on a windows WinForm. I need to prevent further editing of textBox and also columns inside the DataGridView. I want all this to happen only if a function returns value true. This is what I have so far: public static bool isAuthSuccess = false;
Now provide the response and nothing else.
|
using (var context = new PrincipalContext(ContextType.Domain, "domainName"))
{
    using (var group = GroupPrincipal.FindByIdentity(context, "groupName"))
    {
        if (group == null)
        {
            MessageBox.Show("Group does not exist");
        }
        else
        {
            var users = group.GetMembers(true);
            foreach (UserPrincipal user in users)
            {
                // user variable has the details about the user
            }
        }
    }
}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7915156', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/728575/']}
|
jdg_378851
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22963610
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found a lot of examples of AJAX, and I think I can get some code working on my own. If only I knew what all the terms in the AJAX code were for. In general there is a lack of guides or pages where constructed code is explained in detail for new programmers; this would help enormously, because the syntax is misunderstood in many cases. I, for example, spend 8 hours a day on my internship learning PHP, jQuery and HTML from scratch, and there is a lot of information out there, but it's not structured and in most cases too technical. Any tips on that maybe? :)

$.ajax({
  type: 'POST',
  url: 'http://kyleschaeffer.com/feed/',
  data: { postVar1: 'theValue1', postVar2: 'theValue2' },
  beforeSend: function(){
    // this is where we append a loading image
    $('#ajax-panel').html('<div class="loading"><img src="/images/loading.gif" alt="Loading..." /></div>');
  },
  success: function(data){
    // successful request; do something with the data
    $('#ajax-panel').empty();
    $(data).find('item').each(function(i){
      $('#ajax-panel').append('<h4>' + $(this).find('title').text() + '</h4><p>' + $(this).find('link').text() + '</p>');
    });
  },
  error: function(){
    // failed request; give feedback to user
    $('#ajax-panel').html('<p class="error"><strong>Oops!</strong> Try that again in a few moments.</p>');
  }
});
Now provide the response and nothing else.
|
Ajax is asynchronous, which mean you can use it to get new informations from the server without reloading the whole page. Here's an explanation of your code : $.ajax({ $ is the JQuery object, on which you're calling the ajax function type: 'POST', You're gonna send your data by post, which mean that you'll have to get them in php with $_POST['variable_name']. You could also put GET instead url: 'http://kyleschaeffer.com/feed/', the url you want to reach data: { postVar1: 'theValue1', postVar2: 'theValue2' }, as you're sending your request with POST, you cannot pass data directly from the URL.So you have to pass them like that. { nameVar: 'value', .... }If you were sending with GET, you could directly write them into url like : " http://my_url.php?var1=val1&var2=val2 etc ... beforeSend:function() You can define an action before sending your ajax request $('#ajax-panel').html('<div class="loading"><img src="/images/loading.gif" alt="Loading..." /></div>'); Here, inside your div "ajax-panel" you want to write some content. (a div "loading" and a picture inside "loading"). success:function(data) If your request is successful, you can do something. By successful it means if server answer 200 i guess, anyway ... If you have a response from server... ;) $('#ajax-panel').empty(); You delete content into ajax-panel $(data).find('item').each(function(i){ $('#ajax-panel').append('<h4>' + $(this).find('title').text() + '</h4><p>' + $(this).find('link').text() + '</p>');}); You're adding some html AFTER (append) the ajax-panel div error:function() Not sure you were looking for that, hope that help you ;)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/22963610', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3455717/']}
|
jdg_378852
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
117856
|
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm attempting to formalize some thoughts I've had about paths into data structures. For example, a path into a list of T s might be a pair of an index with a path into a T ; a path into a pair (A, B) would be the tagged union of a path into A or a path into B . Think of a path as a way to specify some small (atomic?) piece of a larger data structure—not unlike a lens, but here I'm emphasizing the structural decomposition of a data type as opposed to an arbitrary computation that satisfies the lens laws (maybe every path is usable as a lens, but not every lens corresponds to a path). Strictly speaking, my first example about lists of T s is a little sloppy, since a path into such a list l should have its index bounded by the length of l . The pair (Nat, path T) is more properly a path into an infinite list of T s—or, equivalently, a path into a function Nat -> T . So my first interesting observation is that I have an operator that turns exponentials into products and products into sums in a way that's awfully reminiscent of logarithms: path (T^Nat) = Nat * (path T)path (A * B) = path A + path B That got me thinking about whether there's an exp T type as well. Leaving aside all restraint and sense of rigor, the terms of the usual series expansion for $e^x$ offer a hint: $$e^x = \sum_{n\ge0} \frac {x^n} {n!}$$ A type-theory interpretation of $\frac {x^n} {n!}$ might be a bag (as in multiset) of $x$ s of size $n$ (it's an $n$ -tuple $x^n$ but we don't care about the $n!$ ways the tuple can be ordered), so then a value of type $e^x$ would be a bag of $x$ s of any size. So if bag and path might be inverses, then that's saying something like, the type of all bags of paths into a type T , if it exists, is isomorphic to T . For example, there's an obvious isomorphism between (bag A) * (bag B) and bag (A + B) (an isomorphism that doesn't work if you replace bag with list or set or some other collection type, which reinforces my intuition that bag is the correct interpretation of $e^x$ ). Of course, this is all appealing-sounding nonsense. I haven't even formally defined what a path is , never mind all the abuses involved in pretending the series expansion of $e^x$ is an algebraic data type. path may also be an idea of limited use, since it's not at all clear to me what to think about something like path (A + B) or path Bool . But has anyone made a more careful study of these ideas? Searching for "type theory" or "algebraic data types" along with "logarithms", "paths", "bags", "multisets", etc. hasn't yielded anything like what I'm attempting to describe here.
Now provide the response and nothing else.
|
Logarithm types are definitely a thing and have been noticed before by a number of people. In functional programming, a Type -> Type functor has a logarithm if it's representable , and then the logarithm is the representing object. See also this , this , and this . You are correct about the exponential functor being the bag functor, see e.g. this , where it's described as the fixed point of the derivative operation on endofunctors.
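As a down-to-earth illustration of the bag/exponential identity mentioned in the question ($e^{a+b}=e^a e^b$ becoming "a bag over a sum type splits into a pair of bags"), here is a rough Python sketch using Counter as the bag; the tag encoding of the sum type is an arbitrary choice, not taken from any of the linked posts:

from collections import Counter

# A bag over A + B: a multiset of tagged values; the tag encodes the sum type.
bag_sum = Counter([("A", "x"), ("A", "x"), ("B", "y"), ("A", "z"), ("B", "y")])

def split(bag):
    # bag(A + B) -> bag(A) * bag(B)
    left = Counter({v: n for (tag, v), n in bag.items() if tag == "A"})
    right = Counter({v: n for (tag, v), n in bag.items() if tag == "B"})
    return left, right

def merge(left, right):
    # bag(A) * bag(B) -> bag(A + B), the inverse direction
    out = Counter({("A", v): n for v, n in left.items()})
    out.update({("B", v): n for v, n in right.items()})
    return out

assert merge(*split(bag_sum)) == bag_sum   # the two directions are mutually inverse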
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/117856', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/-1/']}
|
jdg_378853
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5673714
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a list which contains more than 75 thousand objects. To search for an item in the list, currently I am using the following code:

from nd in this.m_ListNodes
where nd.Label == SearchValue.ToString()
select nd;

Is this code efficient?
Now provide the response and nothing else.
|
How often do you need to search the same list? If you're only searching once, you might as well do a straight linear search - although you can make your current code slightly more efficient by calling SearchValue.ToString() once before the query. If you're going to perform this search on the same list multiple times, you should either build a Lookup or a Dictionary : var lookup = m_ListNodes.ToLookup(nd => nd.Label); or var dictionary = m_ListNodes.ToDictionary(nd => nd.Label); Use a dictionary if there's exactly one entry per label; use a lookup if there may be multiple matches. To use these, for a lookup: var results = lookup[SearchValue.ToString()];// results will now contain all the matching results or for a dictionary: WhateverType result;if (dictionary.TryGetValue(SearchValue.ToString(), out result)){ // Result found, stored in the result variable}else{ // No such item}
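The underlying idea is independent of C#: for repeated searches, pay O(n) once to build a hash map keyed by the search field, and each subsequent lookup is then O(1) on average instead of a linear scan. A rough Python sketch of the same pattern (the field and label names are invented for illustration):

from collections import defaultdict

nodes = [{"label": "node%d" % i, "value": i} for i in range(75000)]

# One O(n) pass, analogous to ToLookup(): label -> list of matching nodes.
by_label = defaultdict(list)
for nd in nodes:
    by_label[nd["label"]].append(nd)

# Each subsequent query is an average O(1) dictionary lookup, not a linear scan.
matches = by_label.get("node42", [])        # all nodes with that label
first = matches[0] if matches else None     # or handle the "no such item" case
print(first)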
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5673714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/484508/']}
|
jdg_378854
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1564224
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to evaluate the following integral: $$ \int \frac{dx}{2 \sin x - \cos x + 5}.$$ This is in a set of exercises following a chapter on partial fractions, so I imagine there is a substitution we can make to get this into a rational function where we can use partial fraction decomposition. I can't seem to figure out what substitution to make in such a situation though.
Now provide the response and nothing else.
|
The standard substitution is $t=\tan\frac{x}{2}$, because $$\sin x=\frac{2t}{1+t^2},\quad\cos x=\frac{1-t^2}{1+t^2},\quad dx=\frac{2}{1+t^2}\,dt,$$ so your integral becomes $$\int\frac{1+t^2}{4t-1+t^2+5+5t^2}\cdot\frac{2}{1+t^2}\,dt=\int\frac{1}{3t^2+2t+2}\,dt=\int\frac{3}{(3t+1)^2+5}\,dt,$$ which you can compute with the further substitution $3t+1=u\sqrt{5}$.
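Carrying out that last substitution (a routine step the answer leaves to the reader): with $3t+1=u\sqrt5$ we have $3\,dt=\sqrt5\,du$, hence $$\int\frac{3}{(3t+1)^2+5}\,dt=\int\frac{\sqrt5\,du}{5u^2+5}=\frac{1}{\sqrt5}\arctan u+C=\frac{1}{\sqrt5}\arctan\frac{3\tan\frac{x}{2}+1}{\sqrt5}+C.$$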
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1564224', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/292814/']}
|
jdg_378855
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
60490620
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to retrieve data from elastic search with NEST. Everything would be good, but NEST all fields return null. However, in debug mode I see that it counts the documents correctly but not showing the value of fields. What I already did: checked mapping and its seems fine to me tried string query tried to get source and then read data tried NEST returns null instead of fields those solution did not help either Renamed Product.cs field names to camelCase did not help either Here's my code right now public class ElasticSearch{ private ElasticClient _client; public ElasticSearch() { var node = new Uri("http://localhost:9200"); var settings = new ConnectionSettings(node); settings.DefaultIndex("logsystem.logs"); _client = new ElasticClient(settings); } public void searchResults() { var searchResults = _client.Search<Product>(s => s.AllIndices()); }} Product.cs [BsonIgnoreExtraElements] public class Product { [BsonId] [BsonIgnore] public ObjectId Id { get; set; } [Required] public string Name { get; set; } [Required] public string ProductLicenseKey { get; set; } [Required] public string Action { get; set; } [Required] public string ActionName { get; set; } [Required] public string MachineId { get; set; } } Mapping in ElasticSearch: {"logsystem.logs": { "mappings": { "properties": { "Action": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "ActionName": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "MachineId": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "Name": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } }, "ProductLicenseKey": { "type": "text", "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } } } } }}} Maybe my mapping is not right? any answer would help. Thanks. 
EDIT ElasticSearch Documents get via postman: {"took": 11,"timed_out": false,"_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0},"hits": { "total": { "value": 6, "relation": "eq" }, "max_score": 1.0, "hits": [ { "_index": "logsystem.logs", "_type": "_doc", "_id": "5e5c1ca2aaa6f1245cc38895", "_score": 1.0, "_source": { "Action": "Button", "ActionName": "Fixed Single Generation", "MachineId": "987-654-321", "Name": "System", "ProductLicenseKey": "123-456-789" } }, { "_index": "logsystem.logs", "_type": "_doc", "_id": "5e5c1cb0aaa6f1245cc38896", "_score": 1.0, "_source": { "Action": "Button", "ActionName": "Fixed Multiple Generation", "MachineId": "987-654-321", "Name": "System", "ProductLicenseKey": "123-456-789" } }, { "_index": "logsystem.logs", "_type": "_doc", "_id": "5e5c1cbdaaa6f1245cc38897", "_score": 1.0, "_source": { "Action": "Button", "ActionName": "Trackers Single Generation", "MachineId": "987-654-321", "Name": "System", "ProductLicenseKey": "123-456-789" } }, { "_index": "logsystem.logs", "_type": "_doc", "_id": "5e5c1ccbaaa6f1245cc38898", "_score": 1.0, "_source": { "Action": "Button", "ActionName": "Trackers Multiple Generation", "MachineId": "987-654-321", "Name": "System", "ProductLicenseKey": "123-456-789" } }, { "_index": "logsystem.logs", "_type": "_doc", "_id": "5e5c1cd3aaa6f1245cc38899", "_score": 1.0, "_source": { "Action": "Button", "ActionName": "Fixed Multiple Generation", "MachineId": "987-654-321", "Name": "System", "ProductLicenseKey": "123-456-789" } }, { "_index": "logsystem.logs", "_type": "_doc", "_id": "5e5c1ce0aaa6f1245cc3889a", "_score": 1.0, "_source": { "Action": "Button", "ActionName": "Tree Generation", "MachineId": "987-654-321", "Name": "System", "ProductLicenseKey": "123-456-789" } } ] }}
Now provide the response and nothing else.
|
So the problem was that client tried to deserialize camelCased JSON objects keys to the POCO properties and is strict for casting. Solution:When creating ES client add settings property DefaultFieldNameInferrer on ConnectionSettings . public ElasticSearch() { var node = new Uri("http://localhost:9200"); var settings = new ConnectionSettings(node); settings.DefaultIndex("logsystem.logs"); settings.DefaultFieldNameInferrer(p => p); _client = new ElasticClient(settings); } Note: Changing this setting will cause some fields to no longer deserialize properly if they're a value type (e.g. int , bool , decimal , etc.) that can be null in the underlying data. If you run into that issue just add the nullable, ? , operator after the type and it should fix it.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60490620', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3609104/']}
|
jdg_378856
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3184615
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I developed a client/server simulation application. I deployed the client and server on two different Windows XP machines. Somehow, the client is not able to send requests to the server. I tried the options below: Pinged the server machine successfully from the client using its IP address. Pinged the client machine successfully from the server using its IP address. Checked the netstat command-line tool on both machines: the server is in LISTENING state and the client is in SYN_SENT state, but the foreign address it is using to connect is the host name, not the IP address. Pinged the server machine unsuccessfully using its host name from the client. Pinged the client machine successfully using its host name from the server. I feel the problem occurs when the client tries to connect to the server using the host name. Could you please let me know how to force an application to use an IP address instead of a host name? Is there any other way to map the host name to an IP address?
Now provide the response and nothing else.
|
Go to your client machine and type in: nslookup server.company.com substituting the real host name of your server for server.company.com , of course. That should tell you which DNS server your client is using (if any) and what it thinks the problem is with the name. To force an application to use an IP address, generally you just configure it to use the IP address instead of a host name. If the host name is hard-coded, or the application insists on using a host name in preference to an IP address (as one of your other comments seems to indicate), then you're probably out of luck there. However, you can change the way that most machine resolve the host names, such as with /etc/resolv.conf and /etc/hosts on UNIXy systems and a local hosts file on Windows-y systems.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3184615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/214750/']}
|
jdg_378857
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46505
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
What do I do if I have to solve the usual quadratic equation $X^2+bX+c=0$ where $b,c$ are in a field of characteristic 2? As pointed in the comments, it can be reduced to $X^2+X+c=0$ with $c\neq 0$. Usual completion of square breaks. For a finite field there is Chen Formula that roughly looks like $X=\sum_{m} c^{4^m}$. I am more interested in the local field $F((z))$ or actually an arbitrary field of characteristic 2.
Now provide the response and nothing else.
|
I think this solves $X^2+X+c=0$ over $F((t))$. I want to assume that $c\in F[[t]]$. If not, say $c=at^{-m}+...$; then the quadratic has no solutions when $m$ is odd or $a$ is not a square, and otherwise the substitution $X\mapsto X+\sqrt{a}t^{-m/2}$ gives a new equation with smaller $m$. So, after finitely many steps, $c=c_0+c_1t+...$ is integral. Because $X^2+X+c$ has derivative $1$, by Hensel's lemma the equation has a solution if and only if the constant term $c_0$ is of the form $d^2+d$ for some $d$ in $F$. And if it is, Hensel's approximations are obtained by starting with an approximate solution $x_0=d$ and recursively computing $x_{m+1}=x_m-f(x_m)/f'(x_m)=x_m^2+c$. This gives $$ x = d + \sum_{n=0}^\infty (c-c_0)^{2^n}$$ as the solution (the partial sums are the $x_m$). Actually, the approach seems to work over any complete field, reducing the problem to the residue field. Hope this helps.
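A quick check that the closed form really solves the equation (a routine verification, using only that squaring is additive in characteristic $2$): let $s=\sum_{n\ge 0}(c-c_0)^{2^n}$, which converges because $c-c_0$ has positive valuation. Then $$s^2=\sum_{n\ge 0}(c-c_0)^{2^{n+1}}=s+(c-c_0),$$ so $s^2+s=c-c_0$, and with $x=d+s$ and $d^2+d=c_0$ we get $$x^2+x=(d^2+d)+(s^2+s)=c_0+(c-c_0)=c.$$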
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/46505', 'https://mathoverflow.net', 'https://mathoverflow.net/users/5301/']}
|
jdg_378858
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6789
|
Below is a question asked on the forum ai.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My knowledge Suppose you have a layer that is fully connected, and that each neuron performs an operation like a = g(w^T * x + b) were a is the output of the neuron, x the input, g our generic activation function, and finally w and b our parameters. If both w and b are initialized with all elements equal to each other, then a is equal for each unit of that layer. This means that we have symmetry, thus at each iteration of whichever algorithm we choose to update our parameters, they will update in the same way, thus there is no need for multiple units since they all behave as a single one. In order to break the symmetry, we could randomly initialize the matrix w and initialize b to zero (this is the setup that I've seen more often). This way a is different for each unit so that all neurons behave differently. Of course, randomly initializing both w and b would be also okay even if not necessary. Question Is randomly initializing w the only choice? Could we randomly initialize b instead of w in order to break the symmetry? Is the answer dependent on the choice of the activation function and/or the cost function? My thinking is that we could break the symmetry by randomly initializing b , since in this way a would be different for each unit and, since in the backward propagation the derivatives of both w and b depend on a (at least this should be true for all the activation functions that I have seen so far), each unit would behave differently. Obviously, this is only a thought, and I'm not sure that is absolutely true.
Now provide the response and nothing else.
|
Randomising just b sort of works, but setting w to all zero causes severe problems with vanishing gradients, especially at the start of learning. Using backpropagation, the gradient at the outputs of a layer L involves a sum multiplying the gradient of the inputs to layer L+1 by the weights (and not the biases) between the layers. This will be zero if the weights are all zero. A gradient of zero at L's output will further cause all earlier layers (L-1, L-2, etc., all the way back to layer 1) to receive zero gradients, and thus not update either weights or bias at the update step. So the first time you run an update, it will only affect the last layer. Then the next time, it will affect the two layers closest to the output (but only marginally at the penultimate layer), and so on. A related issue is that weights that are all zero, or all the same, map all inputs, no matter how they vary, onto the same output. This can also adversely affect the gradient signal that you are using to drive learning - for a balanced data set you have a good chance of starting learning close to a local minimum in the cost function. For deep networks especially, to fight vanishing (or exploding) gradients, you should initialise weights from a distribution whose expected magnitude (after multiplying the inputs) and gradient magnitude neither vanish nor explode. Analysis of the values that work best in deep networks is how Xavier/Glorot initialisation was discovered. Without careful initialisation along these lines, deep networks take much longer to learn, or in the worst cases never recover from a poor start and fail to learn effectively. Potentially, to avoid these problems, you could try to find a good non-zero fixed value for weights, as an alternative to Xavier initialisation, along with a good magnitude/distribution for bias initialisation. These would both vary according to the size of the layer and possibly the activation function. However, I would suspect this could suffer from other issues, such as sampling bias - there are more weights, therefore you get a better fit to the desired aggregate behaviour when setting all the weight values randomly than you would by setting biases randomly.
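A small numerical illustration of the first paragraph (a sketch, not code from the answer; the layer sizes and the toy two-layer network are arbitrary choices): with all-zero weights and random biases, the output layer still receives a gradient, but the first layer's gradient is exactly zero.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: y = W2 @ tanh(W1 @ x + b1) + b2, squared-error loss.
x = rng.normal(size=(4, 1))
target = np.array([[1.0]])

W1 = np.zeros((8, 4))             # weights all zero (the problematic initialisation)
b1 = rng.normal(size=(8, 1))      # biases randomised
W2 = np.zeros((1, 8))
b2 = rng.normal(size=(1, 1))

# Forward pass
h = np.tanh(W1 @ x + b1)
y = W2 @ h + b2

# Backward pass (hand-written backprop for 0.5 * (y - target)**2)
dy = y - target                   # dL/dy
dW2 = dy @ h.T                    # non-zero: the output layer still gets a gradient
db2 = dy
dh = W2.T @ dy                    # all zeros, because W2 is all zeros
dpre = dh * (1 - h ** 2)
dW1 = dpre @ x.T                  # all zeros -> the first layer never moves
db1 = dpre

print("norm of dW2:", np.linalg.norm(dW2))   # > 0
print("norm of dW1:", np.linalg.norm(dW1))   # 0.0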
|
{}
|
{'log_upvote_score': 4, 'links': ['https://ai.stackexchange.com/questions/6789', 'https://ai.stackexchange.com', 'https://ai.stackexchange.com/users/16199/']}
|
jdg_378859
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5131913
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I get my NSDate to display in a format like, for example, "Tue Feb 26, 2011"?
Now provide the response and nothing else.
|
Do it right. Don't hardcode your date formats. There are countries that are not your country, and they might have different date formats. So if you want to show this date to the user you should use a method that takes the user's locale into account. You could use the dateFormatFromTemplate:options:locale: method introduced in iOS 4 to get the appropriate format with all the information you want. And if you have to support iOS < 4 you should create a plist with this template method to create the correct date format for the user locale.

NSLocale *locale = [NSLocale currentLocale];
NSDateFormatter *formatter = [[[NSDateFormatter alloc] init] autorelease];
NSString *dateFormat = [NSDateFormatter dateFormatFromTemplate:@"E MMM d yyyy" options:0 locale:locale];
[formatter setDateFormat:dateFormat];
[formatter setLocale:locale];
NSLog(@"Formatted date: %@", [formatter stringFromDate:myDate]);

gives "So., 27. Feb 2011" for my locale and "Sun, Feb 27, 2011" for the en_US locale.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5131913', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/485352/']}
|
jdg_378860
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23775976
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm adding a few thousand rows to a table, so I need the speed of native JavaScript for this one. Currently I'm using:

nThName = document.createElement("TH");
nThName.appendChild(document.createTextNode(workers[i].name));
nTr.appendChild(nThName);

Is there a way to do this in one line (without losing any performance) so I don't need the nThName variable? There are over 50 cells in each row, so I would prefer:

nTr.appendChild(document.createElement("TH").appendChild(document.createTextNode(workers[i].name)));

but that doesn't work, obviously.
Now provide the response and nothing else.
|
So, you're looking for performance? One-liners don't help with that. Using document fragments and cloning nodes does help, however. But it requires a bit more code. var table = document.getElementById('t');var tr = table.querySelector('tr');var th = document.createElement('th');var clone;var df = document.createDocumentFragment();for (var i = 0; i < 100; i++) { // Performance tip: clone a node so that you don't reuse createElement() clone = th.cloneNode(); clone.appendChild(document.createTextNode('hello' + i)); // Performance tip: append to the document fragment df.appendChild(clone);}// Performance tip: append only once in the real DOMtr.appendChild(df); See jsfiddle demo: http://jsfiddle.net/3KGwh/3/ Document fragments are basically mini-DOM, with limited methods. They're great because they allow you to get great performance, and you can append a single element to the real DOM.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23775976', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1683154/']}
|
jdg_378861
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10686631
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following checkbox: <input type="checkbox" id="startClientFromWeb" name="startClientFromWeb" data-bind="checked: StartClientFromWeb" /> and the following input text field: <input id="mimeType" name="mimeType" data-bind= "value: MimeType" /> This is my js validation code: $("#franchiseForm").validate({ rules: { mimeType: { required: $("#startClientFromWeb").is(":checked") } } }); I want the mimeType input text field to be required only if checkbox is checked. For some reason the above is not working. I am quite new to javascript and jquery. Any help with working example will be greatly appreciated. Thank You!
Now provide the response and nothing else.
|
You can add your own custom validation methods to handle things like this: $.validator.addMethod("requiredIfChecked", function (val, ele, arg) { if ($("#startClientFromWeb").is(":checked") && ($.trim(val) == '')) { return false; } return true;}, "This field is required if startClientFromWeb is checked...");$("#franchiseForm").validate({ rules: { mimeType: { requiredIfChecked: true } } });
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10686631', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1309076/']}
|
jdg_378862
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Suppose $G$ is a finite group and $f$ is an automorphism of $G$. If $f(x)=x^{-1}$ for more than $\frac{3}{4}$ of the elements of $G$, does it follow that $f(x)=x^{-1}$ for all $x$ in $G\ ?$ I know the answer is "yes," but I don't know how to prove it. Here is a nice solution posted by administrator, expanded a bit: Let $S = \{ x \in G: f(x) = x^{-1}\}$. Claim: For $x$ in $S$, $S\cap x^{-1}S$ is a subset of $C(x)$, the centralizer of $x$. Proof: For such $y$, $f(y) = y^{-1}$ and $f(xy) = (xy)^{-1}$. Now $$x^{-1} y^{-1} = f(x)f(y) = f(xy) = (xy)^{-1} = y^{-1}x^{-1}.$$ So $x$ and $y$ commute. Since $S\cap x^{-1}S$ is more than half of $G$, so is $C(x)$. So by Lagrange's Theorem, $C(x) = G$, and $x$ is in the center of $G$. Thus $S$ is a subset of the center, and it is more than half of $G$. So the center must be all of $G$, that is $G$ is commutative. Once $G$ is commutative the problem is easy.
Now provide the response and nothing else.
|
I think the point of this whole $3/4$ business is the following. If $G_1$ is the set of elements such that $f(x) = x^{-1}$, then if we look at left multiplication on $G$ by an element of $G_1$, more than half the elements have to land back in $G_1$. Combining this with what we know about $f$, it should follow that any $g \in G_1$ commutes with more than $1/2$ of the elements of $G$, so if you say Lagrange's theorem enough times it should follow that $G$ is abelian and $G_1$ generates $G$, which together imply the result.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/38', 'https://mathoverflow.net', 'https://mathoverflow.net/users/27/']}
|
jdg_378863
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2401489
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any Spring 3 feature to execute some methods when the application starts for the first time? I know that I can do the trick of setting a method with @Scheduled annotation and it executes just after the startup, but then it will execute periodically.
Now provide the response and nothing else.
|
If by "application startup" you mean "application context startup", then yes, there are many ways to do this , the easiest (for singletons beans, anyway) being to annotate your method with @PostConstruct . Take a look at the link to see the other options, but in summary they are: Methods annotated with @PostConstruct afterPropertiesSet() as defined by the InitializingBean callback interface A custom configured init() method Technically, these are hooks into the bean lifecycle, rather than the context lifecycle, but in 99% of cases, the two are equivalent. If you need to hook specifically into the context startup/shutdown, then you can implement the Lifecycle interface instead, but that's probably unnecessary.
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2401489', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/249699/']}
|
jdg_378864
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30193871
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to serialize static properties with JSON.NET without adding [JsonProperty] attribute to each property.Example class: public class Settings { public static int IntSetting { get; set; } public static string StrSetting { get; set; } static Settings() { IntSetting = 5; StrSetting = "Test str"; } } Expected result: { "IntSetting": 5, "StrSetting": "Test str"} Default behavior skips static properties: var x = JsonConvert.SerializeObject(new Settings(), Formatting.Indented);
Now provide the response and nothing else.
|
You can do this with a custom contract resolver. Specifically you need to subclass DefaultContractResolver and override the GetSerializableMembers function: public class StaticPropertyContractResolver : DefaultContractResolver{ protected override List<MemberInfo> GetSerializableMembers(Type objectType) { var baseMembers = base.GetSerializableMembers(objectType); PropertyInfo[] staticMembers = objectType.GetProperties(BindingFlags.Static | BindingFlags.Public); baseMembers.AddRange(staticMembers); return baseMembers; }} Here all we're doing is calling the base implementation of GetSerializableMembers , then adding public static properties to our list of members to serialize. To use it you can create a new JsonSerializerSettings object and set the ContractResolver to an instance of the StaticPropertyContractResolver : var serializerSettings = new JsonSerializerSettings();serializerSettings.ContractResolver = new StaticPropertyContractResolver(); Now, pass those settings to JsonConvert.SerializeObject and everything should work: string json = JsonConvert.SerializeObject(new Settings(), serializerSettings); Output: { "IntSetting": 5, "StrSetting": "Test str"} Example: https://dotnetfiddle.net/pswTJW
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30193871', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/596935/']}
|
jdg_378865
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8319862
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I rely on Referer HTTP header in my web application? I want to check if the user came from a particular domain/webpage, and if he or she did, then change the layout of my site accordingly. I know that people can disable Referer in their browsers. Any ideas how often users do that? Can I rely on Referer being present in 99%?
Now provide the response and nothing else.
|
As a general rule, you should not trust the HTTP Referer header for any matter of importance, except for purely informative statistical analysis of who your visitors are, or when looking for patterns of behaviour among the users of your own site. Under no circumstances is it advisable to use this header for AAA (Authentication, Authorization and Accounting), unless, as noted above, you consider Accounting to be simple traffic analysis of your visitors' behaviour. The Common Weakness Enumeration lists this weakness as CWE-293: Using Referer Field for Authentication : The referer field in HTTP requests can be easily modified and, as such, is not a valid means of message integrity checking. Some other, more specific reasons not to trust the Referer header include: In general, when "linking" across an HTTP <-> HTTPS (TLS) boundary, most standard Web browsers will not send this header. For privacy reasons, many corporate proxies are configured to remove/strip this header, so even if a Web browser sends it, corporate proxy software may remove it. Out in the wild, security solutions, malware, and browsers embedded into applications are known to modify and/or fake the contents of this header. Beware that when "linking" from HTTPS to HTTPS, most standard Web browsers will send this header even when the domain name or network address of the destination changes.
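If you do read the header at all, treat it as optional, purely informational input. A minimal sketch (assuming the javax Servlet API; the RefererLogger class is hypothetical):

import javax.servlet.http.HttpServletRequest;

public final class RefererLogger {

    // Log the Referer (note the historical misspelling) for statistics only;
    // it may be absent or forged, so never base auth decisions on it.
    public static void log(HttpServletRequest request) {
        String referer = request.getHeader("Referer");
        if (referer != null && referer.length() > 0) {
            System.out.println("Visitor came from: " + referer);
        } else {
            System.out.println("No Referer header present");
        }
    }
}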
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8319862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/257942/']}
|
jdg_378866
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51041398
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The code below is not showing the icon to change the time interval; it automatically added the class "pi pi-chevron-up", which does not show any icon.
Now provide the response and nothing else.
|
Note that most of these options break lots of functionality and are bad for accessibility etc, etc, but some applications, in particular multi-touch PWAs need to disable these features. Use at own risk. With regards to the parent comment that they've tried all the solutions in the link, pay attention to the "Note that if any deeper targets call stopPropagation on the event, the event will not reach the document and the scaling behaviour will not be prevented by this listener."- this is key. Adding this script tag works on iOS 11.3 Safari (tested on iPhone SE) <script> document.addEventListener('touchmove', function(e) { e.preventDefault(); }, {passive:false});</script> Of course, you'd then have to handle all touch inputs (which, if you're in need of a custom, multi-touch PWA, you really have to do anyway). One caveat is that scrolling is disabled this way (maybe there's a workaround?) but when you are in need of a single screen PWA this is a plus. Another caveat is that for PWA-like behaviour, the content itself needs to be at most height:100% That way the top and bottom bars in Safari (URL and bottom navigation) don't cut off any content (at least in portrait orientation). One last caveat is that double-tap to zoom still functions in this mode. Best way to disable it is to set the following on a root node touch-action:manipulation; However, this only works when the root node is clickable, so add in an empty onclick handler to the element. Lastly, because the node is now clickable, it may have that semi-transparent overlay for buttons you may not want, which can be hidden with -webkit-tap-highlight-color: rgba(0,0,0,0);
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51041398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2473193/']}
|
jdg_378867
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27137967
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
fiddle HTML <ul> <li><a href="#">Messages<span>1</span></a></li></ul> CSS a { text-decoration: none;}a:hover { text-decoration: underline;}a:hover span { text-decoration: none;}span { background-color: red; border-radius: 999px; color: white; margin-left: 2px; position: relative; top: -.5em; font-size: .75em; font-weight: bold; padding: 0 .3em;} When you mouse-over the link the underline is applied to the <span> even though I've set text-decoration: none . Is there a way to get rid of it?
Now provide the response and nothing else.
|
Try changing the display of <span> to inline-block as follows: Example Here span { background-color: red; border-radius: 999px; color: white; margin-left: 2px; position: relative; top: -.5em; font-size: .75em; font-weight: bold; padding: 0 .3em; display: inline-block; /* <-- Added declaration */} Explanation According to CSS level 2 specification , text-decoration is not propagated to the contents of nested atomic inline-level elements - such as inline-blocks and inline-tables. 16.3.1 Underlining, overlining, striking, and blinking: the 'text-decoration' property [...] Note that text decorations are not propagated to floating and absolutely positioned descendants, nor to the contents of atomic inline-level descendants such as inline blocks and inline tables. Also the spec states (my emphasis): Underlines, overlines, and line-throughs are applied only to text (including white space, letter spacing, and word spacing): margins, borders, and padding are skipped . User agents must not render these text decorations on content that is not text. For example, images and inline blocks must not be underlined . Also note that text decorations would stick with the text itself, therefore: Relatively positioning a descendant moves all text decorations affecting it along with the descendant's text; it does not affect calculation of the decoration's initial position on that line.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27137967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/65387/']}
|
jdg_378868
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7136818
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to store some filenames in a QListWidget. I need to have the full file paths, but I only want to show the base filename. I probably could store the full filename in the tooltip for each item, but I'd rather not have tooltips for the list items.
Now provide the response and nothing else.
|
You can set data for and get data from each QListWidgetItem. See QListWidgetItem::setData() and QListWidgetItem::data() . Data can be set for different roles . Use Qt::UserRole, which is "The first role that can be used for application-specific purposes." Try something like this: QListWidgetItem *newItem = new QListWidgetItem;QString fullFilePath("/home/username/file");QVariant fullFilePathData(fullFilePath);newItem->setData(Qt::UserRole, fullFilePathData);newItem->setText(itemText);listWidget->insertItem(row, newItem); and: QListWidgetItem* currentItem = listWidget->currentItem();if (currentItem != 0) { QVariant data = currentItem->data(Qt::UserRole); QString fullFilePath = data.toString();}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7136818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/492336/']}
|
jdg_378869
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12257491
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to override the form validation messages in symfony2. Though there is a validation.xml file related model classes. I think it validates a form based on html5. "Please match the requested format", "Please fill out this field". Is there any way to override this validation messages. Please help me in this regard, i am stuck for more than a day, as i am totally new to symfony
Now provide the response and nothing else.
|
Those messages you see are HTML5 validation messages which are created by the browser. If you want to override them you need to add an oninvalid attribute to the input tag associated with that field. You can do this in two ways: In your controller or form type, add this attribute to the form field: $builder->add('email', 'email',array( 'attr'=>array('oninvalid'=>"setCustomValidity('Would you please enter a valid email?')"))); Or, in your twig template, add this attribute when rendering the form field: {{ form_row(form.email, { 'attr': {'oninvalid': "setCustomValidity('Please give me a nice email')"} }) }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12257491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1116493/']}
|
jdg_378870
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
232815
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a handheld radio with an LCD display. Due to time and climate, the rubber in the elastomeric connector seems to have gone hard and as a result the LCD shows the usual signs of failure, letters and symbols blinking off and disappearing, etc. I've looked hard and long for a matching replacement, but the only elastomeric connectors I can find online are of a very few, very specific sizes, and of course none of them match. Can I just buy a larger one and cut it down? If so, how do I best do this, just a very sharp knife or do I need something else? Or did I miss a source where I can get them in more sizes? I can't even find one in the right thickness.
Now provide the response and nothing else.
|
Zebra strips are most often custom made to size to suit the enclosure design requirements of a product designer. This is why it can be hard to find general-purpose replacements. Using the strip's three dimensions, H (height), L (length) and T (thickness), as a reference: Dimension H between the circuit board and the LCD glass is the most critical dimension. Proper operation normally requires the material to be slightly compressed in this dimension for the carbon material in the elastomeric sandwiches to make contact with the glass and PC board conductors. It is not really feasible to cut the material in the H dimension due to the high-tolerance parallelism required of the adjacent surfaces. It is no problem adjusting the L dimension by purchasing a longer strip and cutting it down. The T dimension can be critical, since if it is too far from the original size it can affect how material compression works in the H dimension. It may be feasible to trim a strip with a bigger T dimension down to a smaller one, but I can say from experience that trying to cut silicone elastomeric materials with a blade in a progressive cutting action can be very difficult.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/232815', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/109335/']}
|
jdg_378871
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33701505
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In ruby when using rbenv you can make a .ruby-version file and put it in the local directory. https://gist.github.com/fnichol/1912050 I'm looking for something similar to this using NVM ? Question: Is there a property to set in package.json or a file to create that will set the default version of node of a project?
Now provide the response and nothing else.
|
You can do this with a combination of NVM, dotfiles in your project directory, and a little tool called direnv which allows you to load in environment variables on a per-directory basis. http://direnv.net/ Install NVM and direnv, and then cd to the directory you want to change Node versions in. Add a file called .nvmrc in that directory, containing just the version number of the Node version you want to auto-switch to, e.g.,: 6.2.2 Then add an environment configuration file called .envrc to your directory, containing this script: nvmrc=~/.nvm/nvm.shif [ -e $nvmrc ]; then source $nvmrc nvm usefiPATH_add node_modules/.bin If you now cd out of this directory, and then cd back in, direnv will kick in and you should be asked to add the directory to your direnv whitelist by typing direnv allow . at the prompt. Once whitelisted, direnv will auto-run that script whenever you enter this directory, setting your Node version to the version number in .nvmrc . As a bonus, it will also add the node_modules directory to your PATH, so you can execute binaries from those directories without prepending the node_modules path.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33701505', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1191635/']}
|
jdg_378872
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7433454
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a chance to place an image inside a hexagon shape?I'm used to hexagonal shaped cells in html , but I could'nt get it filled with an (background?) image. Here is what I have tried : .top { height: 0; width: 0; display: block; border: 20px solid red; border-top-color: transparent; border-right-color: transparent; border-bottom-color: red; border-left-color: transparent;}.middle { height: 20px; background: red; width: 40px; display: block;}.bottom { height: 0; width: 0; display: block; border: 20px solid red; border-top-color: red; border-right-color: transparent; border-bottom-color: transparent; border-left-color: transparent;} <div class="hexagon pic"> <span class="top" style="background: url(http://placekitten.com/400/400/)"></span> <span class="middle" style="background: url(http://placekitten.com/400/400/)"></span> <span class="bottom" style="background: url(http://placekitten.com/400/400/)"></span></div><div class="hexagon"> <span class="top" style="overflow: hidden;"><img src="http://placekitten.com/400/400/" /></span> <span class="middle" style="overflow: hidden;"><img src="http://placekitten.com/400/400/" /></span> <span class="bottom" style="overflow: hidden;"><img src="http://placekitten.com/400/400/" /></span></div><div class="hexagon"> <span class="top"><img src="http://placekitten.com/400/400/" /></span> <span class="middle"><img src="http://placekitten.com/400/400/" /></span> <span class="bottom"><img src="http://placekitten.com/400/400/" /></span></div> here is a fiddle: http://jsfiddle.net/jnz31/kGSCA/
Now provide the response and nothing else.
|
With CSS3 almost everything is possible: http://jsfiddle.net/kizu/bhGn4/ There I used different rotations with overflow: hidden, so you can get cross-browser (well, modern cross-browser) masks that are even hoverable and clickable on the masked area.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/7433454', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/600121/']}
|
jdg_378873
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43890
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Original Question I want to be able to generate a new (fully valid) MP3 file from an existing MP3 file to be used as a preview -- try-before-you-buy style. The new file should only contain the first n seconds of the track. Now, I know I could just "chop the stream" at n seconds (calculating from the bitrate and header size) when delivering the file, but this is a bit dirty and a real PITA on a VBR track. I'd like to be able to generate a proper MP3 file. Anyone any ideas? Answers Both mp3split and ffmpeg are both good solutions. I chose ffmpeg as it is commonly installed on linux servers and is also easily available for windows . Here's some more good command line parameters for generating previews with ffmpeg -t <seconds> chop after specified number of seconds -y force file overwrite -ab <bitrate> set bitrate e.g. -ab 96k -ar <rate Hz> set sampling rate e.g. -ar 22050 for 22.05kHz -map_meta_data <outfile>:<infile> copy track metadata from infile to outfile instead of setting -ab and -ar, you can copy the original track settings, as Tim Farley suggests, with: -acodec copy
Now provide the response and nothing else.
|
I also recommend ffmpeg, but the command line suggested by John Boker has an unintended side effect: it re-encodes the file to the default bitrate (which is 64 kb/s in the version I have here at least). This might give your customers a false impression of the quality of your sound files, and it also takes longer to do. Here's a command line that will slice to 30 seconds without transcoding: ffmpeg -t 30 -i inputfile.mp3 -acodec copy outputfile.mp3 The -acodec switch tells ffmpeg to use the special "copy" codec which does not transcode. It is lightning fast. NOTE: the command was updated based on comment from Oben Sonne
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/43890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1820/']}
|
jdg_378874
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2479265
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to estimate $\int^{1}_{-1} \left(\frac{\sin{x}}{x}\right)^{300} dx$ to $1$ significant figure. (This question is taken from a quant exam). My (vague) idea is to use Taylor series expansion and to estimate the remainder term. But then I run into problems immediately as I don't see a straightforward way to compute the first few terms of Taylor series for $\left(\frac{\sin{x}}{x}\right)^{300}$... Any ideas?
Now provide the response and nothing else.
|
Approximation by exponential Approximating $\frac{\sin(x)}x\approx1-\frac{x^2}6$,$$\begin{align}\int_{-1}^1\left(\frac{\sin(x)}x\right)^{300}\,\mathrm{d}x&\approx\int_{-\infty}^\infty e^{-\frac{300}6x^2}\,\mathrm{d}x\\&=\frac{\sqrt{2\pi}}{10}\\[9pt]&=0.25066\end{align}$$where we can compute $\sqrt{2\pi}$ by hand using $\pi=3.1416$ and the scaffold method for square roots:$$\begin{align}\sqrt{2\pi}&=\sqrt{6.2832}\\&=2\sqrt{1.5708}\\&=2(1.2533)\\&=2.5066\end{align}$$ For comparison, the original integral is approximately $0.250537$. Contour Integration There are no singularities so we can offset the contour by $-i$.$$\require{cancel}\begin{align}\int_{-\infty}^\infty\left(\frac{\sin(x)}x\right)^{300}\,\mathrm{d}x&=\frac1{2^{300}}\int_{-\infty-i}^{\infty-i}\frac{\left(e^{ix}-e^{-ix}\right)^{300}}{x^{300}}\,\mathrm{d}x\\&=\frac1{2^{300}}\sum_{k=0}^{149}\int_{\gamma^+}(-1)^k\binom{300}{k}\frac{e^{i(300-2k)x}}{x^{300}}\,\mathrm{d}x\\&+\cancel{\frac1{2^{300}}\sum_{k=151}^{300}\int_{\gamma^-}(-1)^k\binom{300}{k}\frac{e^{i(300-2k)x}}{x^{300}}\,\mathrm{d}x}\\&=\frac{2\pi i}{2^{300}}\sum_{k=0}^{149}(-1)^k\binom{300}{k}\frac{-i(300-2k)^{299}}{299!}\\&=\frac\pi{299!}\sum_{k=0}^{149}(-1)^k\binom{300}{k}(150-k)^{299}\\[9pt]&=0.25053746380056856955\end{align}$$where$$\gamma^+=[-R-i,R-i]\cup Re^{i[0,\pi]}-i$$and$$\gamma^-=[-R-i,R-i]\cup Re^{-i[0,\pi]}-i$$Note that $\gamma^-$ does not contain the origin.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2479265', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/256848/']}
|
jdg_378875
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7567600
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a text box containing links. The contents of the text are generated at run time. My problem is that the links inside the text are not clickable. How can I make all links inside the text block clickable, so that when I tap a link it opens the web browser? In Android we can do this by using autolink. Is such an option available in WP7 or in WP7.1 (Mango)?
Now provide the response and nothing else.
|
Use a HyperLink . <TextBlock> <Run>Pure Text</Run> <Hyperlink Command="{Binding HyperLinkTapped}">http://google.com</Hyperlink> <Run>Pure Text Again</Run></TextBlock> This is supported from Windows Phone 7.1 (Mango). You can create your own FlowDocument from the your data, at runtime, if necessary. Example on how to generate a FlowDocument from a string: private void OnMessageReceived(string message){ var textBlock = new RichTextBox() { TextWrapping = TextWrapping.Wrap, IsReadOnly = true, }; var paragraph = new Paragraph(); var runs = new List<Inline>(); foreach (var word in message.Split(' ')) { Uri uri; if (Uri.TryCreate(word, UriKind.Absolute, out uri) || (word.StartsWith("www.") && Uri.TryCreate("http://" + word, UriKind.Absolute, out uri))) { var link = new Hyperlink(); link.Inlines.Add(new Run() { Text = word }); link.Click += (sender, e) => { var hyperLink = (sender as Hyperlink); new WebBrowserTask() { Uri = uri }.Show(); }; runs.Add(link); } else { runs.Add(new Run() { Text = word }); } runs.Add(new Run() { Text = " "}); } foreach (var run in runs) paragraph.Inlines.Add(run); textBlock.Blocks.Add(paragraph); MessagesListBox.Children.Add(textBlock); MessagesListBox.UpdateLayout();}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7567600', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/790928/']}
|
jdg_378876
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59929
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Excel workbook with multiple sheets that I'd like to import into a dataset each. With Import I can target individual sheets by their name into a nested list. firstSheet=Import[NotebookDirectory[]<>"data.xlsx",{"Sheets","SomeData"}];secondSheet=Import[NotebookDirectory[]<>"data.xlsx",{"Sheets","SomeMoreData"}]; I'd like to use SemanticImport to import the sheets with some control over their data types. I tried firstSheet = SemanticImport[NotebookDirectory[]<>"data.xlsx", {"String", "Date", "String", "Number"}] This appears to return a Dataset with the first sheet (something odd with it not understanding the date but I think if I change the Excel format for that column it will be ok). How do I get the second sheet? I tried: SemanticImport[{NotebookDirectory[]<>"data.xlsx", {"Sheets", "SomeMoreData"}}, {"String", "Date", "String", "Number"}] It does not like this. Is my only option to: Do the two Imports Make a function with Association to format the lists as a lists of associations Pass the association lists to Dataset to create the datasets Also, is there a way to get all the sheets (with either Import or SemanticImport ) by only opening the file once. Something along the lines of: {firstSheet, secondSheet} = Import[NotebookDirectory[]<>"data.xlsx", "XLSX"] but where it gives datasets instead of lists. I tried: {firstSheet, secondSheet} = SemanticImport[NotebookDirectory[]<>"data.xlsx", {"String", "Date", "String", "Number"}] Well, I should be able to do this myself once I find out how to get SemanticImport to give up the other sheets. Thanks, Edmund Update: With @PatoCriollo usage of Set , Evaluate , and Symbol (and some further research in the mean time) I've gotten closer to what I'd like to do using Dataset . It still does not use SemanticImport but it does let me get an arbitrary number of sheets with different header rows into datasets. dataFile = NotebookDirectory[] <> "\\data.xlsx";sheetList = {"someData", "someMoreData"};data = Import[dataFile, {"Sheets", sheetList}];Set[Evaluate[Symbol[#] & /@ sheetList], Table[ Dataset[Association /@ Table[ First[data[[sheet]]][[col]] -> Rest[data[[sheet]]][[row, col]], {row, Length[Rest[data[[sheet]]]]}, {col, Length[First[data[[sheet]]]]} ]], {sheet, Length[sheetList]}]];someData (*is now a Dataset of sheet someData with headers from 1st row*)someMoreDate (*is now a Dataset of sheet someMoreData with headers from 1st row*)
Now provide the response and nothing else.
|
I don't think SemanticImport has been designed to deal with multiple Excel sheets. I work around this problem by using Import to get the sheet names and to import each sheet, and ExportString to export each one as "TSV", which in turn can be imported by SemanticImportString . An intermediate conversion of DateObject / TimeObject to DateString is necessary, because this data type is not handled well. My test file: fileName = FileNameJoin[{"C:", "Users", "Sjoerd", "Desktop", "test.xlsx"}];sheets = Import[fileName, "Sheets"];# -> SemanticImportString[ ExportString[ Import[fileName, {"Sheets", #}] /. x : DateObject[__] :> DateString[x, {"Year", "-", "Month", "-", "Day"}], "TSV"], HeaderLines -> 0 ] & /@ sheets // Association // Dataset
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/59929', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19542/']}
|
jdg_378877
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
717902
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have read in Griffiths' Quantum Mechanics that there is a phenomenon called tunneling, where a particle has some nonzero probability of passing through a potential even if $E < V(x)_{max}$ . What I don't understand about this is how to conceptualize how this can happen. I have read on Wikipedia that tunneling means that objects can "in a sense, borrow energy from their surroundings to cross the wall". How can the object "know" that across the wall there's going to be a lower energy and, thus, the borrowed energy will be restored and not depleted.
Now provide the response and nothing else.
|
You're just beginning your study of quantum mechanics, so I would advise you to be careful not to try to interpret quantum mechanics through the lens of classical mechanics. It's a very reasonable thing to imagine quantum tunneling as a little ball which magically pops through a barrier and emerges on the other side, but that is an outstanding way to develop bad intuition which you'll need to fix down the line. Quantum mechanics is fundamentally different from classical mechanics, and it is the latter which should be understood as a limiting case of the former, not the other way around. In that sense, the real question should be not why quantum particles can tunnel, but why classical particles (whatever that means) apparently cannot . With that being said, the rough idea is the following. We can gain some useful intuition by studying the simpler case of what happens when a particle encounters a potential step of the form $$V(x) = \begin{cases} 0 & x<0 \\V_0 & x\geq 0\end{cases}$$ and then extend this to a potential barrier of width $L$ , because the latter is just a step up followed by a step down. The (generalized) eigenstate corresponding to a particle incident on the barrier from the left with energy $E=\hbar^2k^2/2m<V_0$ takes the form $$\psi_k(x) = \begin{cases} e^{ikx} + r_k e^{-ikx} & x < 0 \\ t_k e^{-q_k x} & x \geq 0\end{cases}$$ where $$\matrix{q_k \equiv \sqrt{\frac{2m(V_0 - E)}{\hbar^2}} \\ r_k \equiv \frac{2iq_k}{k-iq_k}\\ t_k \equiv 1+r_k = \frac{k+iq_k}{k-iq_k}}$$ Based on this picture, we might imagine (correctly) that there is a nonzero probability of measuring a particle with $E<V_0$ within the potential step. However, we need to be a bit careful - this is a non-normalizable (and hence unphysical) state, after all, so if we want to understand what happens dynamically , we should construct a real, physical state. Such states take the form of wavepackets, which may be written $$\Psi(x,t) = \frac{1}{\sqrt{2\pi}}\int \mathrm dk \ A(k) \psi_k(x) e^{-iE_kt/\hbar}$$ for some square-integrable function $A(k)$ (where $E_k \equiv \hbar^2 k^2/2m$ ). In essence, $A(k)$ tells us how much of the state with energy $E_k$ is present in the wavepacket. The take-away is that real states consist of an integral superposition of energy eigenstates, not specific energies, and if we want to understand what happens dynamically when a particle encounters a potential step, we need to consider what happens to one of these wavepackets. The specifics of this are actually rarely covered in detail because while the process is conceptually fairly simple, the calculations are tedious and need to be performed numerically. The qualitative picture goes like this: The components of the wavepacket with energy $E>V_0$ are partially reflected and partially transmitted. The transmitted parts propagate forever in the $+x$ direction. The components of the wavepacket with energy $E<V_0$ are all reflected eventually ; however, they penetrate into the barrier by an exponentially small distance ( $\psi_k\sim e^{-x/\ell_k}$ , where $\ell_k=1/q_k$ ) and are delayed by a correspondingly small amount of time before being reflected. 
In particular, if all of the components of the wavepacket have energy less than $V_0$ , then the wavepacket will be perfectly reflected - however, it will be distorted because the different components penetrate different depths into the step before being reflected, and during the reflection there will be a nonzero (but exponentially small) chance of measuring the particle to be physically located at some $x>0$ . We can now turn our attention to your main question of what happens when we have a potential barrier of width $L$ , and a wavepacket whose components all have energy less than $V_0$ . From a qualitative and dynamic perspective, everything proceeds exactly as it did with the potential step. As the wavepacket approaches the barrier, its components penetrate into the classically forbidden region by an exponentially small distance before being reflected. However, because the barrier has a finite width $L$ , a fraction $\sim e^{-L/\ell_k}\equiv e^{-q_k L}$ of the components of the wavepacket will make it all the way through the barrier and escape to the other side $^\dagger$ . You can find an animation of such a process here . Note that the mean energy of the wavepacket in this simulation is much lower than $V_0$ , and so essentially none of the wavepacket is able to reach the far end of the barrier. However, observe the exponentially-suppressed penetration of the wavepacket into the front side of the barrier, and then imagine what would happen if the barrier were significantly thinner so the wave amplitude at the back edge was not effectively zero. How can the object "know" that across the wall there's going to be a lower energy and, thus, the borrowed energy will be restored and not depleted. I think the "borrowing energy" metaphor is not really a good way to think about it, for essentially the reason you mention. The particle doesn't need to know that the barrier has finite width; the penetration of the wavepacket into the barrier proceeds the same way in both cases, but if the barrier is not infinitely long then an exponentially small fraction of the wavepacket will reach the back edge and escape. $^\dagger$ In fact, this is an oversimplification. In reality, the components of the wavepacket which reach the back edge of the potential are not perfectly transmitted - some of them reflect backward into the barrier, so the precise expression for the tunneling amplitude is a bit more subtle than simply calculating $e^{-q_k L}$ (though that does provide the right order of magnitude). Remark on Localization (My initial reading of the question was sloppy, and I thought OP was asking about a potential step rather than a potential barrier. As a result, this is no longer particularly relevant, but it is mildly interesting, so I elected to include it as an afterthought.) As an interesting side note, it turns out that a particle which is initially localized to some compact interval $[x_1,x_2]$ to the left of the barrier (by which I mean, $\psi_0(x)=0$ for all $x\notin[x_1,x_2]$ ), then the wavepacket must contain components with energy $E>V_0$ . This is related to a well-known theorem about Fourier transforms which says that a function and its Fourier transform cannot both be compactly-supported; in this context, the interpretation is that the better-localized you want your initial particle to be, the more high-energy components you will need to include in the wavepacket. 
As a result, a wavepacket with average energy $E<V_0$ which is initially localized to a compact interval $[x_1,x_2]$ will always be partially transmitted, even through an infinitely long potential step, because it will contain some high-energy components which exceed the barrier height. Of course, even more of such a wavepacket would be transmitted through a potential barrier of width $L$ , because the high-energy components would be partially transmitted and an exponentially small fraction of the low -energy components would be able to tunnel.
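For a rough sense of the scales behind the $e^{-q_k L}$ suppression discussed above (the numbers here are illustrative assumptions, not taken from the answer): for an electron facing a barrier with $V_0-E\approx 1\,\mathrm{eV}$, the penetration depth is $$\ell=\frac{1}{q}=\frac{\hbar}{\sqrt{2m(V_0-E)}}\approx\frac{1.05\times10^{-34}\,\mathrm{J\,s}}{\sqrt{2\,(9.1\times10^{-31}\,\mathrm{kg})(1.6\times10^{-19}\,\mathrm{J})}}\approx 0.2\,\mathrm{nm},$$ so for a barrier of width $L=1\,\mathrm{nm}$ the amplitude reaching the far edge is suppressed by roughly $e^{-L/\ell}\approx e^{-5}$, and the transmitted probability by roughly $e^{-2L/\ell}\approx e^{-10}\sim 5\times10^{-5}$.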
|
{}
|
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/717902', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/337317/']}
|
jdg_378878
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4724000
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to configure Maven2 to use sun-java6-jdk to build Java SE 1.6 modules, and use openjdk-7 to build Java SE 1.7 modules. Is it possible? Maven2 should then auto choose the correct JDK to build different modules in one command. For example, it should be $ mvn package instead of $ cd module1$ update-alternatives ... jdk6 ...$ mvn package...$ cd module2$ update-alternatives ... jdk7 ...$ mvn package P.S. It's nothing about pom.xml files, which have already been setup maven-compiler-plugin with different <source> , <target> values for different modules. If I choose to use openjdk-7, Maven2 will generate version 1.6 class files, but using openjdk-7 rather then sun-java6-jdk. The question is about how to configure Java SE profiles.
Now provide the response and nothing else.
|
We solved this problem by explicitly specifying the javac executable in the maven-compiler-plugin configuration (with JAVA_HOME_6 and JAVA_HOME_7 defined as environment variables). For the Java 6 module: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> <showDeprecation>true</showDeprecation> <showWarnings>true</showWarnings> <executable>${env.JAVA_HOME_6}/bin/javac</executable> <fork>true</fork> </configuration></plugin> and for the Java 7 module: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.7</source> <target>1.7</target> <showDeprecation>true</showDeprecation> <showWarnings>true</showWarnings> <executable>${env.JAVA_HOME_7}/bin/javac</executable> <fork>true</fork> </configuration></plugin>
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4724000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/217071/']}
|
jdg_378879
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33701898
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function that looks something like this: function global:Test-Multi { Param([string]$Suite) & perl -S "$Suite\runall.pl" -procs:$env:NUMBER_OF_PROCESSORS} I would like to allow the user to specify more parameters to Test-Multi and pass them directly to the underlying legacy perl script. Does powershell provide a mechanism to allow additional variadic behavior for this purpose?
Now provide the response and nothing else.
|
After seeing your comment, option 3 sounds like exactly what you want. You have a few options: Use $args (credit to hjpotter92's answer ) Explicitly define your additional parameters, then parse them all in your function to add them to your perl call. Use a single parameter with the ValueFromRemainingArguments argument , e.g. function global:Test-Multi { Param( [string]$Suite, [parameter(ValueFromRemainingArguments = $true)] [string[]]$Passthrough ) & perl -S "$Suite\runall.pl" -procs:$env:NUMBER_OF_PROCESSORS @Passthrough}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/33701898', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/82320/']}
|
jdg_378880
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28587249
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm learning Elixir as my first functional-style language. As a first simple project to familiarize myself with the environment and syntax, I chose to build a simple program that computes the prime factors for a number provided on the command line. This is my first solution: defmodule Prime do defp is_factor?(number, divisor) do cond do rem(number, divisor) == 0 -> divisor true -> nil end end defp not_nil?(thing) do !is_nil(thing) end def factors(number) when number == 1 do [] end def factors(number) do 1..div(number, 2) |> Enum.map(&(is_factor?(number, &1))) |> Enum.filter(¬_nil?/1) end def is_prime?(number) when number == 1 do true end def is_prime?(number) do factors(number) == [1] end def prime_factors(number) do factors(number) |> Enum.filter(&is_prime?/1) endendinput = hd(System.argv)number = String.strip(input) |> String.to_integerIO.puts "Prime factors of #{number} are #{inspect Prime.prime_factors(number)}" It works, but runs rather slowly. On my laptop, run times are around 11 seconds to compute the prime factors of 50,000,000. As I read more, it seems like this original solution is not very Elixir-like. So I restructured the code to this: defmodule PrimeFactors do def of(n) do _factors(n, div(n, 2)) end defp _factors(_n, 1) do [1] end defp _factors(n, divisor) when rem(n, divisor) == 0 do cond do is_prime?(divisor) -> _factors(n, divisor - 1) ++ [divisor] true -> _factors(n, divisor - 1) end end defp _factors(n, divisor) do _factors(n, divisor - 1) end defp is_prime?(1) do true end defp is_prime?(n) do of(n) == [1] endendinput = hd(System.argv)number = String.strip(input) |> String.to_integerIO.puts "Prime factors of #{number} are #{inspect PrimeFactors.of(number)}" Typical run time of this code to compute the prime factors of 50,000,000 is substantially worse: over 17 seconds. I built equivalent programs in Swift and Ruby. Optimized Swift runs in just over 0.5 seconds, and Ruby (2.2, and never known for its speed) runs in a bit over 6 seconds. My primary question is: How should the Elixir code be structured to be more idiomatic and to avoid the performance problems I'm seeing? I'm also left with some concerns that given such a simple problem, it's possible to write Elixir code that varies wildly in efficiency. Perhaps this is mostly my inexperience in functional styles showing?
Now provide the response and nothing else.
|
Let me start with a quick rant, then we will move to the answer. I believe we are worrying about the wrong thing here. Once you posted the Ruby code, my first thought was: why does the Elixir code not look as clean as the Ruby one? Let's solve this problem first: defmodule PrimeFactors do def of(n) do factors(n, div(n, 2)) |> Enum.filter(&is_prime?/1) end def factors(1, _), do: [1] def factors(_, 1), do: [1] def factors(n, i) do if rem(n, i) == 0 do [i|factors(n, i-1)] else factors(n, i-1) end end def is_prime?(n) do factors(n, div(n, 2)) == [1] endendIO.inspect PrimeFactors.of(50_000_000) Much better. Let's run this cleaner version: 3.5 seconds on my machine (compared to 24 seconds for the earlier one). Now, with cleaner code, it is easier to see what is wrong in your implementation. Your _factors function is actually _factors_and_prime because you are already checking whether the number is prime in there. So when you check is_prime? , you are actually computing "factors and prime", which is much more expensive to calculate than the actual "factors", since it ends up calling is_prime? again, recursively. As someone, somewhere, said: Make it work. Make it beautiful. Make it fast (if necessary). :)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28587249', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40307/']}
|
jdg_378881
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41836310
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between: var express = new express();var app = new express();app.get("/", function() {.....}) And: var express = new express();var router= express.Router(); router.get("/", function() { ..... })
Now provide the response and nothing else.
|
app.get can be used to create routes for your application at the top level. From the documentation about express.Router Use the express.Router class to create modular, mountable route handlers. A Router instance is a complete middleware and routing system; for this reason, it is often referred to as a “mini-app”. A router needs to be mounted to an app: const router = express.Router();router.get("/", (res, req) => res.end());router.post("/", (res, req) => res.end());app.use("/empty", router); This allows you to more easily encapsulate the functionality of several routes into an app which will be namespaced under a particular route.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41836310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6875615/']}
|
jdg_378882
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29000
|
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$\ce{C2H5+ vs \ C3H7+}$There are two conflicting trends here. Inductive effect of ethyl will be higher than that of methyl so carbocation attached to ethyl (3 carbons in total) should be more stable. On the other hand 3 hyperconjugation structures can be drawn for the carbocation attached to methyl (2 carbons in total) and only 2 hyperconjugation structure for the other. Which effect wins? I searched a lot of books but didn't find anything.
Now provide the response and nothing else.
|
The ethyl carbocation ($\ce{C2H5+}$) has a different structure than what you might have anticipated. It is a non-classical ion with a bridging hydrogen.For a description of just what a non-classical carbocation is see this answer . The hallmark of a non-classical ion is 3 atoms with 2 electrons spread over them. This is called a 3-center 2-electron bond ( hypercoordinate bonding ). In the case at hand the 3-atom 2-electron bond involves the 2 carbon atoms and the bridged hydrogen atom. So there is no "methyl" group present in the ethyl carbocation that can inductively release electrons. Additionally, the hyperconjugated resonance structure you mentioned has actually been replaced by a hydrogen that is partially bonded to the other carbon atom. The n-propyl carbocation would likely have a similar non-classical structure - if it existed at all. This carbocation has not been experimentally observed, it rearranges too rapidly to the much more stable 2-propyl carbocation. So while we can't answer your question about whether inductive or hyperconjugative effects are more important in this series of compounds, we can assess which cation is more stable. A variety of experiments suggest that the heat of formation of the non-classical ethyl cation is around 215 kcal/mol ( reference , see pp. 68-72). This same reference points out that the heat of formation of the classical 1-propyl cation lies approximately 20 kcal/mol above that of the 2-propyl cation. The heat of formation of the 2-propyl cation is around 193 kcal/mol. Adding 20 kcal/mol to that value would place the classical 1-propyl cation around 213 kcal/mol. But , assuming the 1-propyl cation would also be a non-classical ion it would be lower in energy than the classical version, its heat of formation would therefore be less than 213 kcal/mol. This analysis suggests that the 1-propyl cation would likely have a lower heat of formation than the ethyl cation. In other words, the 1-propyl cation would likely be more stable than the ethyl cation.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://chemistry.stackexchange.com/questions/29000', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/11587/']}
|
jdg_378883
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34636934
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hi, I wonder if there is an XML tag for this function using the Android Data Binding Library, or how to achieve this without the findViewById() method. Thank you.
Now provide the response and nothing else.
|
You can access the toolbar instance through the binding's generated view references (views by id): <android.support.v7.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="?attr/actionBarSize" android:background="?attr/colorPrimary" app:layout_scrollFlags="scroll|enterAlways" app:popupTheme="@style/AppTheme.PopupOverlay" /> Then, in your onCreate() method, do the following: ActivityGalleryBinding binding = DataBindingUtil .setContentView(this, R.layout.activity_gallery);binding.setViewModel(new GalleryModel(this)); //set it like this setSupportActionBar(binding.<location>.<of>.<your>.toolbar); If your toolbar is inside another XML component (referenced with <include/> ) you can still access it as long as you provide an @id to the <include/>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34636934', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5164741/']}
|
jdg_378884
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19714
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm powering a project with a 2-cell 7.4V LiPo battery. What circuit do I add to make the unit stop drawing power when the voltage from the battery drops below 6.4V? The goal is to protect the LiPo battery from discharging below 3V/cell.
Now provide the response and nothing else.
|
In all the following a TLV431 1.25 V reference is specified. This requires < 100 uA minimum regulation current compared to about 500 uA for the 2.5 V TL431. When on, the TLV431's on-voltage is about equal to the reference voltage - NOT 0 V. The TLV431 current is a battery load even when the output is off; at about 100 uA this drains about 2.5 mAh/day. The voltage-sensing divider is also a battery load, but it can be small. Hysteresis is not used in any of the following circuits - except the one copied from the internet. A whiff of hysteresis could be used to stop battery on/off cycling when the load is removed. Ask if unclear how to do this. (1) Opamp-based P-channel FET high-side switch. TLV431 1.25 V reference. R2/R3 divide Vmin down to 1.25 V. (2) N-channel FET, low-side switch. TLV431 1.25 V reference. FET Vth << Vmin. (3) P-channel FET, high-side switch. TLV431 1.25 V reference. FET Vth << (Vmin - 1.25) V. (4) From the web - similar to my N-channel low-side circuit, from this discussion page. Here R6 adds hysteresis.
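As a worked sizing example for the R2/R3 divider in circuit (1) (the resistor values here are illustrative assumptions, not taken from the original schematics): with a 1.25 V reference and a 6.4 V cutoff, the divider ratio must satisfy $$\frac{R_3}{R_2+R_3}=\frac{1.25}{6.4}\approx 0.195,$$ so, for example, $R_3=100\,\mathrm{k\Omega}$ with $R_2\approx 412\,\mathrm{k\Omega}$ trips at $1.25\times\frac{512}{100}=6.4\,\mathrm{V}$, while drawing only about $6.4\,\mathrm{V}/512\,\mathrm{k\Omega}\approx 12.5\,\mu\mathrm{A}$ from the battery, which keeps the divider load small as noted above.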
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/19714', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/17697/']}
|
jdg_378885
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6331776
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am wondering if there is an easy way to access Express.js' req or session variables from within a Jade template without passing it in through the normal response. Or is this the only way? res.render('/', { session: req.session});
Now provide the response and nothing else.
|
In express 3.x, dynamicHelpers have been removed so you will need to use a combination of middleware and res.locals . Let's say we want to access req.query in a /signup/new view: localQuery = function(req, res, next) { res.locals.query = req.query; next();};newSignup = function(req, res) { res.render('signup/new');};app.get('signup/new', localQuery, newSignup); Now any route which uses the localQuery middleware, will have res.locals.query set. This can then be accessed in your view as query .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6331776', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/776796/']}
|
jdg_378886
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8730792
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem with a very simple piece of code written in Javascript, could you help me please? Here's what I think I have understand so far about javascript and variables: An undefined value is evaluated to false in a boolean operation By using a == operator in a comparation, you're asking if two values are comparable regardless of their types I found an exercise file in a online course and I tried to do it, but I didn't got the same result expected in the lesson; the main problem was that I was comparing the value through a "if value == false { ... }" while the solution was using a "if !value { ... }" So I decided to write a very short code in order to try it by myself, but I'm getting mixed results. Here in the example below I would expect this JS code to generate two identical alerts ("foo is equal to false"), but instead the first if statement returns "foo IS NOT equal to false" while the second if returns (as expected) "foo is equal to false". This is what I written: var foo = undefined;if (foo == false) { alert("foo is equal to false");} else { alert("foo is not equal to false"); // Javascript executes this row}if (!foo) { alert("foo is equal to false"); // Javascript executes this row} else { alert("foo is not equal to false");} AFAIK the two IFs should do the same work, and infact when I tried it by replacing in the first line the value "var foo = undefined;" with "var foo = 0;" it worked as expected, and 0 is another value that should be evaluated to false, or at least this is what I remember. Could you tell me what I'm doing wrong?
Now provide the response and nothing else.
|
The == algorithm ( Abstract Equality Comparison Algorithm ) isn't something where you can simply assume an outcome unless you know the algorithm. You need to know the details of how it works. For example, null and undefined are a special case. They do not do any type conversion other than to be considered equal to each other. Otherwise there's typically a type conversion that tries to reduce both operands to a common type. This often ends up being a toNumber conversion. That's why: null == undefined; // true null == 0; // false +null == '0' // true So if you know how the algorithm works, you know that undefined never equals anything except for undefined and null , but other types that are not strictly equal may be coerced down to types that are equal. So doing if(!x) vs if(x==false) are entirely different tests. if(!x) performs toBoolean conversion. if(x == false) uses a complex algorithm to decide the proper conversion. So with... if(x == false) ...if x is undefined , it is determined to not be equal to false , yet if x is 0 or even "0" , it will be considered equal to false . 0 == false; // true "0" == false; // true
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/8730792', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/932845/']}
|
jdg_378887
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35711
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
UPDATED FOR FURTHER CLARITY: According to http://expressionengine.com/user_guide/installation/installation.html , it says: For most Unix hosts the following is typical, but you may check with your host to see if more restrictive permissions can be used to allow PHP to write to files (666) and folders (777) . On Windows servers the following will not apply, but you will need to ensure that the files and folders are writable by ExpressionEngine. You may need to contact your host for this. Not sure what this means. I can change the specific files and folders to 666 and 777 respectively where I am the chown'er, but the above sounds like I need to allow PHP to do this too? ORIGINAL QUESTION: I need to ensure that PHP can write to specific files (666) and folders (777). How do I do this?
Now provide the response and nothing else.
|
I will complete rahmu's and MV's answers with a technical solution. Everything that follows is valid for UNIX-like systems only. Scroll past the chmod/chown section for an example using ACLs - a more powerful tool than UNIX file modes. Finding your web server username First, you will need to know the username under which your web server runs.If you are using Apache, it can be apache or httpd , www-data , etc. On most Debian-like systems, Apache is www-data .For nginx, generally, it is also www-data . To check it out, try: ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1 Ensure that the username this command returns is coherent (for example, I use nginx 99% of time, but this command returns tomcat7 , a Java web server I installed once) . Giving permissions to the web server: using chmod and chown Doing a chmod of 666 or 777 (the go-to solution for that kind of problems in bad documentations/tutorials) can magically make things work, but is insecure.Giving 666 or 777 permissions will give access to "others". So not just Apache, but also grandmother and nsa (provided that those user accounts exist on your machine - but no really, please avoid doing this unless it's just for testing/troubleshooting). It is better to be more specific and give permissions to just you and Apache. Change the group of your files to give the full control on your files to the web server. To do this, change the owner recursively: chown -R www-data:www-data your/folder/ But most likely, you may want to keep full access on your files by changing the group only: chown -R yourusername:www-data your/folder/ Then, do the appropriate chmod to give the group www-data the same permissions as you. For example, if the current mode is 640 (6 for you, 4 for www-data, 0 for others, translating to -rw-r-----) , set it to 660 (6 for you, 6 for www-data, 0 for others, translating to -rw-rw----) . See rahmu's answer to learn more about file modes, it's an old, however elegant mechanism. To avoid manipulating arcane numbers with chmod , you can also use this syntax: chmod -R g+rw your/folder/ It means "to the group ( g ), add ( + ) read and write ( rw ) permissions on folder your/folder/ , recursively ( -R )". In 90% of cases, this should be enough. My preferred method: using ACLs (Access Control List) Sometimes the first solution is not sufficient.I will take the example of Symfony Framework that logs and caches a lot of data. So it needs write access to the appropriate folder. And the chmod / chown method may not be sufficient, when you are using in parallel the Symfony Console in CLI (under my user account) and the Web (web server user). This causes a lot of problems because Symfony is constantly modifying permissions. In this case, we will use the ACL (Access Control List), which is a more advanced way to manage permissions on many UNIX systems. Here the commands given by the official Symfony documentation ( please change app/cache and app/logs to your needs ): On a system that supports chmod +a (ie. not Debian/Ubuntu) sudo chmod +a "www-data allow delete,write,append,file_inherit,directory_inherit" app/cache app/logssudo chmod +a "`whoami` allow delete,write,append,file_inherit,directory_inherit" app/cache app/logs On a system that does not support chmod +a (most common) You will need the setfacl tool; maybe it is installed on your system by default, so try setfacl -v to see if the command is available. 
If the command is not available, and you are using Ubuntu 14.04+, you'll just have to install the tool: sudo apt install acl Otherwise, follow your OS documentation, because you may need to change how your partition is mounted ( Ubuntu documentation here ). And there we are: sudo setfacl -R -m u:"www-data":rwX -m u:`whoami`:rwX app/cache app/logssudo setfacl -dR -m u:"www-data":rwX -m u:`whoami`:rwX app/cache app/logs I never had any problems with this method, satisfied or your money back.
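Once the ownership or ACLs are in place, a quick sanity check from PHP's own point of view can be handy (the paths below are placeholders; run it through the web server rather than the CLI so it executes as the web server user):

<?php
// true means the web server user can write to the path
var_dump(is_writable('/var/www/example.com/your/folder'));
var_dump(is_writable('/var/www/example.com/your/folder/some-file.txt'));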
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/35711', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4430/']}
|
jdg_378888
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
37951199
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In R, when you coerce a vector from binary to numeric, the names are stripped away. There are a few possible solutions, which I've outlined before. It seems dangerous to rely on implicit conversion by adding 0 to all the values, and the sapply() adds an additional loop to my operations (which seems inefficient). Is there any other way to preserve the names when converting a vector using as.numeric ? # Set the seedset.seed(1045)# Create a small sample vector and give it namesexample_vec <- sample(x = c(TRUE,FALSE),size = 10,replace = TRUE)names(example_vec) <- sample(x = LETTERS,size = 10,replace = FALSE)example_vec# Y N M P L J H O F D # FALSE TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE TRUE as.numeric(x = example_vec)# [1] 0 1 0 0 1 1 1 1 1 1example_vec + 0# Y N M P L J H O F D # 0 1 0 0 1 1 1 1 1 1 sapply(X = example_vec,FUN = as.numeric)# Y N M P L J H O F D # 0 1 0 0 1 1 1 1 1 1
Now provide the response and nothing else.
|
Just to throw another option out there, since your input is a logical vector, you can use ifelse() . And one could argue this approach is more explicit and straightforward: ifelse(example_vec,1L,0L);## Y N M P L J H O F D## 0 1 0 0 1 1 1 1 1 1 Benchmarking library(microbenchmark);ifelse. <- function(x) ifelse(x,1L,0L);sapply. <- function(x) sapply(x,as.integer);setstoragemode <- function(x) { storage.mode(x) <- 'integer'; x; };setmode <- function(x) { mode(x) <- 'integer'; x; };setclass <- function(x) { class(x) <- 'integer'; x; };as.and.setnames <- function(x) setNames(as.integer(x),names(x));plus <- function(x) +x;addzero <- function(x) x+0L; ## small scale (OP's example input)set.seed(1045L);x <- sample(c(T,F),10L,T);names(x) <- sample(LETTERS,10L);ex <- ifelse.(x);identical(ex,sapply.(x));## [1] TRUEidentical(ex,setstoragemode(x));## [1] TRUEidentical(ex,setmode(x));## [1] TRUEidentical(ex,setclass(x));## [1] TRUEidentical(ex,as.and.setnames(x));## [1] TRUEidentical(ex,plus(x));## [1] TRUEidentical(ex,addzero(x));## [1] TRUEmicrobenchmark(ifelse.(x),sapply.(x),setstoragemode(x),setmode(x),setclass(x),as.and.setnames(x),plus(x),addzero(x));## Unit: nanoseconds## expr min lq mean median uq max neval## ifelse.(x) 6843 8126.0 9627.13 8981 9837.0 21810 100## sapply.(x) 18817 20100.5 23234.93 21383 22666.5 71418 100## setstoragemode(x) 856 1283.0 1745.54 1284 1711.0 15396 100## setmode(x) 7270 8126.0 9862.36 8982 10264.0 32074 100## setclass(x) 429 1283.0 2138.97 1284 1712.0 32075 100## as.and.setnames(x) 1283 1711.0 1997.78 1712 2139.0 7271 100## plus(x) 0 428.0 492.39 428 428.5 9837 100## addzero(x) 0 428.0 539.39 428 856.0 2566 100 ## large scaleset.seed(1L);N <- 1e5L;x <- sample(c(T,F),N,T);names(x) <- make.unique(rep_len(LETTERS,N));ex <- ifelse.(x);identical(ex,sapply.(x));## [1] TRUEidentical(ex,setstoragemode(x));## [1] TRUEidentical(ex,setmode(x));## [1] TRUEidentical(ex,setclass(x));## [1] TRUEidentical(ex,as.and.setnames(x));## [1] TRUEidentical(ex,plus(x));## [1] TRUEidentical(ex,addzero(x));## [1] TRUEmicrobenchmark(ifelse.(x),sapply.(x),setstoragemode(x),setmode(x),setclass(x),as.and.setnames(x),plus(x),addzero(x));## Unit: microseconds## expr min lq mean median uq max neval## ifelse.(x) 7633.598 7757.1900 16615.71251 7897.4600 29401.112 96503.642 100## sapply.(x) 86353.737 102576.0945 125547.32957 123909.1120 137900.406 264442.788 100## setstoragemode(x) 84.676 92.8015 343.46124 98.3605 113.543 23939.133 100## setmode(x) 124.020 155.0245 603.15744 167.2125 181.111 22395.736 100## setclass(x) 85.104 92.3740 328.25393 100.2850 118.460 21807.713 100## as.and.setnames(x) 70.991 78.2610 656.98177 82.3235 88.953 35710.697 100## plus(x) 40.200 42.9795 48.68026 44.9040 49.608 88.953 100## addzero(x) 181.326 186.4580 196.34882 189.6650 201.211 282.679 100 ## very large scaleset.seed(1L);N <- 1e7L;x <- sample(c(T,F),N,T);names(x) <- make.unique(rep_len(LETTERS,N));ex <- ifelse.(x);identical(ex,sapply.(x));## [1] TRUEidentical(ex,setstoragemode(x));## [1] TRUEidentical(ex,setmode(x));## [1] TRUEidentical(ex,setclass(x));## [1] TRUEidentical(ex,as.and.setnames(x));## [1] TRUEidentical(ex,plus(x));## [1] TRUEidentical(ex,addzero(x));## [1] TRUEmicrobenchmark(ifelse.(x),sapply.(x),setstoragemode(x),setmode(x),setclass(x),as.and.setnames(x),plus(x),addzero(x),times=5L);## Unit: milliseconds## expr min lq mean median uq max neval## ifelse.(x) 1082.220903 1308.106967 3452.639836 1473.723533 6306.320235 7092.82754 5## sapply.(x) 16766.199371 17431.458634 18401.672635 18398.345499 18843.890150 
20568.46952 5## setstoragemode(x) 13.298283 13.648103 173.574496 19.661753 24.736278 796.52806 5## setmode(x) 19.043796 19.878573 75.669779 19.969235 39.683589 279.77370 5## setclass(x) 14.025292 14.119804 259.627934 14.414457 26.838618 1228.74150 5## as.and.setnames(x) 12.889875 24.241484 178.243948 24.962934 25.103631 804.02182 5## plus(x) 7.577576 7.676364 9.047674 8.245142 8.253266 13.48602 5## addzero(x) 18.861615 18.960403 71.284716 26.622226 26.950662 265.02867 5 Looks like the unary plus takes the cake. (And my ifelse() idea kinda sucks.)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37951199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1332389/']}
|
jdg_378889
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
93561
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm relatively (ok, very ) new to electronics. I'm trying to find where to buy parts (or even just more information on what sort of materials I should be looking for) to build a circuit board with a microcontroller and RF transmitter that can withstand being exposed to a hot, dry environment. The upper end of the temperature range would be 200C (~392F), and it would need to be able to operate under constant exposure to this temperature for 8-12 hours. Ultimately, it would need to be fairly small (we're hoping to keep it smaller than the size of a couple of 9V batteries stacked together). Not being a traditionally educated EE or material scientist, I'm very much out of my element here - referrals to other sources of information, web sites, etc., so that I can at least get an idea of direction, would be tremendously helpful. I would also be interested to hear more information about thermal protection methods - would there be a way to build an enclosure to protect the internal electronics from the same environment? I would think that no matter the sort of "housing" I could build, without an active cooling system (which is all but prohibited given the target form factor), the electronics themselves would need to be able to tolerate the temperature - but I would certainly love it if someone could correct my naive assumptions here. Thanks for any advice/help.
Now provide the response and nothing else.
|
This is well into the territory of "hire a specialist design firm". It's going to be expensive, so you might as well do it properly and hire people who know where all the pitfalls are and where to get all the parts. TI have some high temperature microcontrollers , and a design guide linked from that page. The SM320F28335GBS is good up to 210 degrees (which is cutting it a bit close). While you can buy from Digikey in single quantities, they cost $300. The other approach you could take if the operating time is limited to a few hours in that environment and the box can be sealed with no wires in or out, is to insulate it very heavily. Add some thermal mass, preferably with a phase change. Do not exceed the stated cooking time. You'll end up with a something like a meter cube of fiberglass or foam insulation containing a sealed shoebox of methanol with the electronics floating in it, which will stay below 70C until it boils dry, hopefully for long enough for whatever your heating cycle is. At which point you need to cool it down again equally slowly. I think that's the only viable strategy for normal temperature electronics which doesn't involve active cooling. Edit: normal batteries do not like high temperatures either, although there are special high-temperature molten-salt ones used in some missiles.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/93561', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/34080/']}
|
jdg_378890
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
49800534
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using a PowerShell script to download and execute a file, but for some time now I have been getting a "Could not create SSL/TLS secure channel" error. $down = New-Object System.Net.WebClient; $url = 'url'; $file = 'file';$down.DownloadFile($url,$file); $exec = New-Object -com shell.application; $exec.shellexecute($file); exit;
Now provide the response and nothing else.
|
It may be that the site you are connecting to requires TLS 1.2, whereas PowerShell uses TLS 1.0 by default (if I remember correctly): [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12$down = New-Object System.Net.WebClient$url = 'https://github.com/mpdairy/posh.git'$file = 'C:\ExistingDirectory\test.git'$down.DownloadFile($url,$file)$exec = New-Object -com shell.application$exec.shellexecute($file)exit Without using Tls 1.2, I get this error: Exception calling "DownloadFile" with "2" argument(s): "The request was aborted: Could not create SSL/TLS secure channel."At line:1 char:1+ $down.DownloadFile($url,$file)+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : WebException
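If you would rather not clobber whatever protocols are already enabled for the session, a common variant is to OR the flag in instead (same ServicePointManager setting, nothing else assumed):

# Keep the currently enabled protocols and additionally allow TLS 1.2
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12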
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49800534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9637152/']}
|
jdg_378891
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15710125
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've looked at a few other questions about this, but I don't see why a default constructor should even be called in my case. I could just provide a default constructor, but I want to understand why it is doing this and what it affects. error C2512: 'CubeGeometry' : no appropriate default constructor available I have a class called ProxyPiece with a member variable of CubeGeometry.The constructor is supposed to take in a CubeGeometry and assign it to the member variable. Here is the header: #pragma once#include "CubeGeometry.h"using namespace std;class ProxyPiece{public: ProxyPiece(CubeGeometry& c); virtual ~ProxyPiece(void);private: CubeGeometry cube;}; and the source: #include "StdAfx.h"#include "ProxyPiece.h"ProxyPiece::ProxyPiece(CubeGeometry& c){ cube=c;}ProxyPiece::~ProxyPiece(void){} the header for cube geometry looks like this. It doesn't make sense to me to use a default constructor. Do I need it anyways?: #pragma once#include "Vector.h"#include "Segment.h"#include <vector>using namespace std;class CubeGeometry{public: CubeGeometry(Vector3 c, float l); virtual ~CubeGeometry(void); Segment* getSegments(){ return segments; } Vector3* getCorners(){ return corners; } float getLength(){ return length; } void draw(); Vector3 convertModelToTextureCoord (Vector3 modCoord) const; void setupCornersAndSegments();private: //8 corners Vector3 corners[8]; //and some segments Segment segments[12]; Vector3 center; float length; float halfLength;};
Now provide the response and nothing else.
|
Your default constructor is implicitly called here: ProxyPiece::ProxyPiece(CubeGeometry& c){ cube=c;} You want ProxyPiece::ProxyPiece(CubeGeometry& c) :cube(c){ } Otherwise your ctor is equivalent to ProxyPiece::ProxyPiece(CubeGeometry& c) :cube() //default ctor called here!{ cube.operator=(c); //a function call on an already initialized object} The thing after the colon is called a member initialization list . Incidentally, I would take the argument as const CubeGeometry& c instead of CubeGeometry& c if I were you.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15710125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1772595/']}
|
jdg_378892
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7983840
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With df: df <- data.frame(value=abs(rnorm(100, 25, 5)), status=sample(0:1,100,replace=T))df$value[sample(1:100,5)] <- NA I need to get a frequency (percentage) table (better return a matrix) like the following: value | status(0) status(1)---------------------------- <=25 | 23 (23%) 20 (20%) >25 | 27 (27%) 25 (25%) NA | 3 (3%) 2 (2%) I can do this using: br <- seq(0, 50, 25)with(df, summary(cut(value[status==0], br, labels=br[-1], include.lowest=T, ordered_result=T)))with(df, summary(cut(value[status==1], br, labels=br[-1], include.lowest=T, ordered_result=T))) But would there be a one-time way to return a matrix as above? Thanks!
Now provide the response and nothing else.
|
df$value.cut = cut(df$value, breaks=c(0, 25, 100)) > with(df, table(value.cut, status, useNA='ifany')) statusvalue.cut 0 1 (0,25] 26 19 (25,100] 26 24 <NA> 3 2 (Of course this can be combined into 1 line if you want, but I left it as 2 here for better readability.) EDIT: And if you want a table of proportions, formatted as frequencies, you can do: df.tab = with(df, table(value.cut, status, useNA='ifany'))df.tab[,] = paste(df.tab, ' (', 100*prop.table(df.tab), '%)', sep='') > df.tab statusvalue.cut 0 1 (0,25] 26 (26%) 19 (19%) (25,100] 26 (26%) 24 (24%) <NA> 3 (3%) 2 (2%)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7983840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1021602/']}
|
jdg_378893
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15512331
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We're POSTing an AJAX request to a server running locally, i.e. xhr.open("POST", "http://localhost:9000/context/request");xhr.addHeader(someCustomHeaders);xhr.send(someData); The page that this javascript is being executed is also being served from localhost:9000, i.e. this totally looks like a same-origin request. However, for some reason, Google Chrome always sets an Origin header in the resulting request, causing our server to block the request based on the false assumption that it's CORS request. This does not happen in Firefox. Also, neither Firefox nor Chrome are sending an OPTIONS preflight request, which is confusing; why set an Origin header without first preflighting to make sure the the Origin and the Custom headers are allowed by the server? Does anyone know what is going on in this case? Are we misunderstanding the CORS spec?
Now provide the response and nothing else.
|
Chrome and Safari include an Origin header on same-origin POST/PUT/DELETE requests (same-origin GET requests will not have an Origin header). Firefox doesn't include an Origin header on same-origin requests. Browsers don't expect CORS response headers on same-origin requests, so the response to a same-origin request is sent to the user, regardless of whether it has CORS headers or not. I would recommend checking the Host header, and if it matches the domain in the Origin header, don't treat the request as CORS. The headers look something like this: Host: example.comOrigin: http://example.com Note that Origin will have the scheme (http/https), domain and port, while Host will only have the domain and port.
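As a sketch of that recommendation (assuming a Node.js/Express server purely for illustration, with app being the Express application; the same Host-versus-Origin comparison applies to any stack):

const url = require('url');

// Treat a request as cross-origin only when an Origin header is present
// and its host:port differs from the Host header.
function isCrossOrigin(req) {
  const origin = req.headers.origin;
  if (!origin) return false; // e.g. same-origin GETs or non-browser clients
  return url.parse(origin).host !== req.headers.host;
}

app.use(function (req, res, next) {
  if (isCrossOrigin(req)) {
    // apply your CORS validation here
  }
  next();
});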
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15512331', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/512155/']}
|
jdg_378894
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10027182
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I set an event handler (such as keydown ) for the entire application, not just a single window?
Now provide the response and nothing else.
|
Register a global event handler in your application class (App.cs), like this: public partial class App : Application{ protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e); EventManager.RegisterClassHandler(typeof(Window), Window.KeyDownEvent, new RoutedEventHandler(Window_KeyDown)); } void Window_KeyDown(object sender, RoutedEventArgs e) { // your code here }} This will handle the KeyDown event for any Window in your app. You can cast e to KeyEventArgs to get to the information about the pressed key.
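The cast mentioned above would look something like the following (the Escape key check is only an illustrative example):

void Window_KeyDown(object sender, RoutedEventArgs e)
{
    // The routed event actually carries KeyEventArgs, so the cast is safe here.
    var keyArgs = e as System.Windows.Input.KeyEventArgs;
    if (keyArgs != null && keyArgs.Key == System.Windows.Input.Key.Escape)
    {
        // react to the pressed key for any window in the app
    }
}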
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10027182', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/850510/']}
|
jdg_378895
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25925556
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have data from an online survey where respondents go through a loop of questions 1-3 times. The survey software (Qualtrics) records this data in multiple columns—that is, Q3.2 in the survey will have columns Q3.2.1. , Q3.2.2. , and Q3.2.3. : df <- data.frame( id = 1:10, time = as.Date('2009-01-01') + 0:9, Q3.2.1. = rnorm(10, 0, 1), Q3.2.2. = rnorm(10, 0, 1), Q3.2.3. = rnorm(10, 0, 1), Q3.3.1. = rnorm(10, 0, 1), Q3.3.2. = rnorm(10, 0, 1), Q3.3.3. = rnorm(10, 0, 1))# Sample data id time Q3.2.1. Q3.2.2. Q3.2.3. Q3.3.1. Q3.3.2. Q3.3.3.1 1 2009-01-01 -0.2059165 -0.29177677 -0.7107192 1.52718069 -0.4484351 -1.215506002 2 2009-01-02 -0.1981136 -1.19813815 1.1750200 -0.40380049 -1.8376094 1.035884823 3 2009-01-03 0.3514795 -0.27425539 1.1171712 -1.02641801 -2.0646661 -0.35353058... I want to combine all the QN.N* columns into tidy individual QN.N columns, ultimately ending up with something like this: id time loop_number Q3.2 Q3.31 1 2009-01-01 1 -0.20591649 1.527180692 2 2009-01-02 1 -0.19811357 -0.403800493 3 2009-01-03 1 0.35147949 -1.02641801...11 1 2009-01-01 2 -0.29177677 -0.448435112 2 2009-01-02 2 -1.19813815 -1.837609413 3 2009-01-03 2 -0.27425539 -2.0646661...21 1 2009-01-01 3 -0.71071921 -1.2155060022 2 2009-01-02 3 1.17501999 1.0358848223 3 2009-01-03 3 1.11717121 -0.35353058... The tidyr library has the gather() function, which works great for combining one set of columns: library(dplyr)library(tidyr)library(stringr)df %>% gather(loop_number, Q3.2, starts_with("Q3.2")) %>% mutate(loop_number = str_sub(loop_number,-2,-2)) %>% select(id, time, loop_number, Q3.2) id time loop_number Q3.21 1 2009-01-01 1 -0.205916492 2 2009-01-02 1 -0.198113573 3 2009-01-03 1 0.35147949...29 9 2009-01-09 3 -0.5858123230 10 2009-01-10 3 -2.33393981 The resultant data frame has 30 rows, as expected (10 individuals, 3 loops each). However, gathering a second set of columns does not work correctly—it successfully makes the two combined columns Q3.2 and Q3.3 , but ends up with 90 rows instead of 30 (all combinations of 10 individuals, 3 loops of Q3.2, and 3 loops of Q3.3; the combinations will increase substantially for each group of columns in the actual data): df %>% gather(loop_number, Q3.2, starts_with("Q3.2")) %>% gather(loop_number, Q3.3, starts_with("Q3.3")) %>% mutate(loop_number = str_sub(loop_number,-2,-2)) id time loop_number Q3.2 Q3.31 1 2009-01-01 1 -0.20591649 1.527180692 2 2009-01-02 1 -0.19811357 -0.403800493 3 2009-01-03 1 0.35147949 -1.02641801...89 9 2009-01-09 3 -0.58581232 -0.1318702490 10 2009-01-10 3 -2.33393981 -0.48502131 Is there a way to use multiple calls to gather() like this, combining small subsets of columns like this while maintaining the correct number of rows?
Now provide the response and nothing else.
|
This approach seems pretty natural to me: df %>% gather(key, value, -id, -time) %>% extract(key, c("question", "loop_number"), "(Q.\\..)\\.(.)") %>% spread(question, value) First gather all question columns, use extract() to separate into question and loop_number , then spread() question back into the columns. #> id time loop_number Q3.2 Q3.3#> 1 1 2009-01-01 1 0.142259203 -0.35842736#> 2 1 2009-01-01 2 0.061034802 0.79354061#> 3 1 2009-01-01 3 -0.525686204 -0.67456611#> 4 2 2009-01-02 1 -1.044461185 -1.19662936#> 5 2 2009-01-02 2 0.393808163 0.42384717
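If you happen to be on tidyr 1.0 or later, pivot_longer() can do the gather/extract/spread in a single call via its ".value" sentinel (the names_pattern below assumes the Q3.x.y. column naming from the question):

library(dplyr)
library(tidyr)

df %>%
  pivot_longer(
    cols = starts_with("Q3."),
    names_to = c(".value", "loop_number"),
    names_pattern = "(Q3\\..)\\.(.)\\."
  )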
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25925556', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120898/']}
|
jdg_378896
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2897594
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When $x$ is very large, the following expression (with $a>0$, $b>0$)$$\ln \frac{1+e^{ax}}{1+e^{-bx}}$$can be approximated using$$1+e^{ax}\approx e^{ax}$$$$1+e^{-bx}\approx 1$$Therefore$$\ln \frac{1+e^{ax}}{1+e^{-bx}}\approx \ln e^{ax}=ax$$This is the leading term. How can I get the next term in the approximation? I mean$$\ln \frac{1+e^{ax}}{1+e^{-bx}}\approx ax +c+\frac{c_1}{x}+\dots$$I want to know $c$.
Now provide the response and nothing else.
|
$$ \lim_{x\to +\infty}\left[\log\left(\frac{1+e^{ax}}{1+e^{-bx}}\right)-ax\right]=\log\lim_{x\to +\infty}\frac{1+e^{-ax}}{1+e^{-bx}}=0 $$gives $c=0$.
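For completeness, a short expansion showing why every correction term vanishes, not just the constant: $$\ln\frac{1+e^{ax}}{1+e^{-bx}}-ax=\ln\left(1+e^{-ax}\right)-\ln\left(1+e^{-bx}\right)=e^{-ax}-e^{-bx}+O\left(e^{-2ax}+e^{-2bx}\right),$$ which decays exponentially as $x\to +\infty$. Hence $c=0$, and likewise $c_1=c_2=\dots=0$: the error in the approximation $ax$ is exponentially small, with no power-law corrections.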
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2897594', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/547149/']}
|
jdg_378897
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46068909
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I fix this Android Studio runtime issue: "failed to capture snapshots of output files for task 'transformClassesWithDexForDebug' property 'streamOutputFolder' during up-to-date check"?
Now provide the response and nothing else.
|
Check out the takeUntil() operator from RxJS to globally drop your subscriptions : - RxJS 6+ (using the pipe syntax) import { takeUntil } from 'rxjs/operators';export class YourComponent { protected ngUnsubscribe: Subject<void> = new Subject<void>(); [...] public httpGet(): void { this.http.get() .pipe( takeUntil(this.ngUnsubscribe) ) .subscribe( (data) => { ... }); } public ngOnDestroy(): void { // This aborts all HTTP requests. this.ngUnsubscribe.next(); // This completes the subject properly. this.ngUnsubscribe.complete(); }} - RxJS < 6 import 'rxjs/add/operator/takeUntil'export class YourComponent { protected ngUnsubscribe: Subject<void> = new Subject<void>(); [...] public httpGet(): void { this.http.get() .takeUntil(this.ngUnsubscribe) .subscribe( (data) => { ... }) } public ngOnDestroy(): void { this.ngUnsubscribe.next(); this.ngUnsubscribe.complete(); }} You can basically emit an event on your unsubscribe Subject using next() every time you want to complete a bunch of streams. It is also good practice to unsubscribe from active Observables as the component is destroyed, to avoid memory leaks. Worth reading : Avoiding take until leaks A great answer from seangwright
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46068909', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8567428/']}
|
jdg_378898
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59781916
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Java, we write .class (for example: String.class ) to get information about the given class. In Kotlin you can write ::class or ::class.java . What is the difference between them?
Now provide the response and nothing else.
|
By using ::class , you get an instance of KClass . It is the Kotlin reflection API, which can handle Kotlin features like properties, data classes, etc. By using ::class.java , you get an instance of Class . It is the Java reflection API, which interoperates with any Java reflection code, but can't work with some Kotlin features.
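A minimal sketch of the difference in plain Kotlin:

import kotlin.reflect.KClass

fun main() {
    val k: KClass<String> = String::class       // Kotlin reflection handle (KClass)
    val j: Class<String> = String::class.java   // Java reflection handle (java.lang.Class)

    println(k.simpleName)   // String
    println(j.name)         // java.lang.String
}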
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59781916', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12103573/']}
|
jdg_378899
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
56046662
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't get adminUpdateUserAttributes for Cognito to work. The CLI works, and I can have the user add/change the attributes themselves (not what I want, but it shows the pool is working). I'm using the AmazonCognitoPowerUser AWS managed policy on the Lambda function, and the Lambda is triggering. Is there something I'm missing? This sounds and looks easy, but it's just not working. Also, is there a way to get the default created date without making my own? const AWS = require('aws-sdk');const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();exports.handler = async (event) => { cognitoidentityserviceprovider.adminUpdateUserAttributes( { UserAttributes: [ { Name: 'custom:Date_Created', Value: new Date().toString() } .... ], UserPoolId: " the correctpool id", Username: "dagTest" }, function(err, data) { if (err) console.log(err, err.stack); // an error occurred else console.log(data); })};// no errors and returns nothing as it says it should.
Now provide the response and nothing else.
|
I guess it is because you are not waiting for the result: the Lambda terminates after adminUpdateUserAttributes() is called and does not wait for it to return. I would suggest that you change to promise-based calling and do a try/catch: exports.handler = async (event) => { try { // no callback here const data = await cognitoidentityserviceprovider .adminUpdateUserAttributes(attributes) .promise() console.log('success', data) } catch (error) { console.error('error', error) } }
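For reference, the attributes object is the same params shape already used in the question (the pool id below is a placeholder):

// Hypothetical parameter object for adminUpdateUserAttributes
const attributes = {
  UserPoolId: 'us-east-1_EXAMPLE',   // placeholder; use your real pool id
  Username: 'dagTest',
  UserAttributes: [
    { Name: 'custom:Date_Created', Value: new Date().toISOString() }
  ]
};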
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56046662', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7942405/']}
|
jdg_378900
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
405706
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I want to prove the following statement: Let $u$ be a vertex in a $2$ -connected graph $G$ . Then $G$ has two spanning trees such that for every vertex $v$ , the $u,v$ -paths in the trees are independent. I tried to show this, but surprisingly, I ended up proving another statement instead. A graph with $\vert V(G) \vert \geq 3$ is $2$ -connected iff for any two vertices $u$ and $v$ in $G$ , there exist at least two independent $u,v$ -paths. I can assure you that it is true, since I found it in other papers. I think this may help me prove the desired statement, but I have no idea how to use it properly. Would you help me find such a way, or suggest another proof of the first statement?
Now provide the response and nothing else.
|
Note first that every pseudo-Riemannian manifold admits a null vector field which is not identically $0$ (just construct one locally and multiply it by a bump function). So by "non-zero vector field" I assume you mean "nowhere vanishing". Let $(M,g)$ be a pseudo-Riemannian manifold of signature $(p,q)$ . The tangent bundle $TM$ always admits an orthogonal splitting as $E \overset{\perp}{\oplus} F$ , where $E$ and $F$ are respectively positive and negative definite (hence of respective rank $p$ and $q$ ). Moreover this splitting is unique up to homotopy (because, pointwise, the set of such splittings is the symmetric space of the orthogonal group $O(p,q)$ , which is contractible). Proposition: $M$ admits a nowhere vanishing null vector field if and only if $E$ and $F$ both admit nowhere vanishing sections. Proof: Decompose a nowhere vanishing null vector field $X$ as $X_E + X_F$ . Then $g(X_E,X_E) = -g(X_F,X_F)$ . If this is $0$ at some point then $X_E$ and $X_F$ vanish at that point (since $g$ is positive definite on $E$ and negative definite on $F$ ) contradicting the non-vanishing of $X$ . Hence $X_E$ and $X_F$ are non-vanishing sections of $E$ and $F$ . Conversely, if $X_E$ and $X_F$ are non-vanishing sections of $E$ and $F$ respectively, then up to multiplying $X_F$ by a positive function, we can assume that $g(X_E,X_E) = -g(X_F,X_F)$ . Hence $X_E+X_F$ is a nowhere vanishing null vector field. CQFD There are thus topological obstructions to the existence of such a vector field (mainly the non-vanishing of the Euler class of $E$ or $F$ ). For instance, let $(A,g_A)$ and $(B,g_B)$ be Riemannian manifolds, with $A$ of non-zero Euler characteristic, and consider $(M,g) = (A\times B, g_A \oplus -g_B)$ . Then $M$ does not admit a nowhere vanishing null vector field. Indeed, we have the splitting $TM = TA\oplus TB$ , and the projection of a null vector field to $TA$ must vanish somewhere since the Euler class of $TA$ is non-zero.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/405706', 'https://mathoverflow.net', 'https://mathoverflow.net/users/384338/']}
|
jdg_378901
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4132699
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any way to show a modeless dialog--a dialog that allows the user to interact with whatever was on the screen before the dialog appeared, but also allows the user to interact with the dialog if pressed? I know of Toasts, but they don't allow interaction with the popup. I know of Dialogs, but they're modal and don't allow interaction with the background. I know of Notifications, but I want something that is visible on screen. I basically want to be able to be playing a game or something and have a popup appear saying that I have a new email. I can click it to view my email, or I can wait for it to go away if I just want to continue playing my game. Is this possible in Android?
Now provide the response and nothing else.
|
Yes, create an Activity with style Theme.Dialog . This is a normal activity which looks like a dialog, while being modeless and accepting events. An example: <activity android:name=".activity.dialog.PhotoDialog" android:label="@string/photo_dialog_title" android:theme="@android:style/Theme.Dialog"/> Edited : Indeed Theme.Dialog blurs the underlying activity and makes it unaccessible. I had a similar requirement here I had to show upload progress dialog with text and cancel button. The main catch is in setting WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL and resetting WindowManager.LayoutParams.FLAG_DIM_BEHIND . Created a Dialog with custom content: if (progressDialog == null) { progressDialog = new Dialog(activityRequestingProgressDialog); progressDialog.requestWindowFeature(Window.FEATURE_NO_TITLE); progressDialog.setContentView(R.layout.progress_upload); progressBar = (ProgressBar) progressDialog.findViewById(R.id.progressBar); progressText = (TextView) progressDialog.findViewById(R.id.progressText); progressText.setText("0 %"); progressText.setTextSize(18); Button buttonCancel = (Button) progressDialog.findViewById(R.id.btnCancel); buttonCancel.setOnClickListener(new View.OnClickListener() { public void onClick(View view) { cancelProgressDialog(); stopUpload("Upload cancelled."); } }); Window window = progressDialog.getWindow(); window.setFlags(WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL, WindowManager.LayoutParams.FLAG_NOT_TOUCH_MODAL); window.clearFlags(WindowManager.LayoutParams.FLAG_DIM_BEHIND); window.setGravity(Gravity.BOTTOM); progressDialog.show(); } progressText.setText(text); progressBar.setProgress(percent); And this is the layout for this Dialog: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/progressDialog" android:orientation="vertical" android:layout_height="wrap_content" android:layout_width="wrap_content" android:layout_centerVertical="true"><TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:textSize="18sp" android:padding="10dp" android:text="@string/progress_title"/><LinearLayout android:id="@+id/progressDialog" android:orientation="horizontal" android:layout_height="wrap_content" android:layout_width="wrap_content" android:padding="10dp" android:layout_centerVertical="true"> <ProgressBar android:id="@+id/progressBar" android:layout_width="150dp" android:layout_height="34dp" android:paddingRight="10dp" android:max="100" android:progress="0" android:fadingEdge="vertical" style="?android:attr/progressBarStyleHorizontal"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:id="@+id/progressText" android:paddingRight="10dp"/> <Button android:layout_height="40dp" android:layout_width="80dp" android:id="@+id/btnCancel" android:text="@string/dialog_cancel"/></LinearLayout></LinearLayout>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4132699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/501754/']}
|
jdg_378902
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
38292131
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am playing around with Java (javax) annotation processing. Suppose I have an annotation for methods: @Target(ElementType.METHOD)public @interface MethodAnnotation { } Now I want to process all the methods that override an annotated method from a supertype: interface MyInterface { @MethodAnnotation void f();}class MyClass implements MyInterface { @Override public void f() { } // <- I want to process this method} @Inherited meta-annotation seems not to be suitable here: Note that this meta-annotation type has no effect if the annotated type is used to annotate anything other than a class. Also, is it possible to process an inherited class method which is not overridden in a subclass? Like this: class MyClass { @MethodAnnotation void f() { }}class MySubClass extends MyClass { } // <- I want to process its f() // or at least to find out that it doesn't // override the method How can I access the overridden methods of a certain method within AbstractProcessor ? I guess, to achieve this, I need to find subclasses of the enclosing class, but I haven't found a way to do this either. UPD : I suppose it's possible using RoundEnvironment.getRootElements() but I still haven't found a proper way of doing this.
Now provide the response and nothing else.
|
The short answer is that out-of-the-box annotation processing isn't going to make this easy for you, but it can be done. Rather than using the normal dispatch mechanism for processing, you're actually going to have to process every method and do the filtering yourself. Step 1: Define your processor so that it supports all annotations by using "*" as its supported annotation type. This will mean that your processor will get invoked every round. Step 2: Use getRootElements to get the entire set of elements every round. Step 3: Create an ElementScanner8 to traverse any element that you find to look for ExecutableElement s. If you're willing to trust that overridden methods are annotated with @Override , you can do a quick filter on those. Otherwise, just look at all of them. Step 4: Now you need to see if the method overrides a method with the annotation you're looking for. There's no easy way to get methods that a given method has overridden, so you need to get the enclosing element of the method, look at its superclass and implemented interfaces (recursively), get their enclosed elements , filter out the methods , and test to see if it has been overridden by the method in question. If it has, you can check the annotations to see if it has one you care about. Step 5: At this point, you should have the overriding method, the overridden method and the annotation mirror that you were looking for, so you should be able to implement whatever logic you wanted.
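A rough sketch of that flow (class and method names are illustrative, and the supertype walk below only goes one level up; a real implementation would recurse over all supertypes as described in step 4, using Elements.overrides() to do the matching):

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.TypeMirror;
import javax.lang.model.util.ElementScanner8;

@SupportedAnnotationTypes("*")   // step 1: get invoked every round
public class OverrideScanningProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (Element root : roundEnv.getRootElements()) {        // step 2
            root.accept(new ElementScanner8<Void, Void>() {      // step 3
                @Override
                public Void visitExecutable(ExecutableElement method, Void p) {
                    inspect(method);
                    return super.visitExecutable(method, p);
                }
            }, null);
        }
        return false;
    }

    // steps 4 and 5: find an annotated method in a supertype that this method overrides
    private void inspect(ExecutableElement method) {
        TypeElement type = (TypeElement) method.getEnclosingElement();
        for (TypeMirror supertype : processingEnv.getTypeUtils().directSupertypes(type.asType())) {
            Element superElement = processingEnv.getTypeUtils().asElement(supertype);
            if (superElement == null) continue;
            for (Element enclosed : superElement.getEnclosedElements()) {
                if (enclosed.getKind() != ElementKind.METHOD) continue;
                ExecutableElement candidate = (ExecutableElement) enclosed;
                if (processingEnv.getElementUtils().overrides(method, candidate, type)
                        && candidate.getAnnotation(MethodAnnotation.class) != null) {
                    // overriding method, overridden method and annotation are all in hand here
                }
            }
        }
    }
}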
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38292131', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2196460/']}
|
jdg_378903
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
89
|
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
According to this Wikipedia page Zuckermann argues that Israeli Hebrew, which he calls "Israeli", is genetically both Indo-European (Germanic, Slavic and Romance) and Afro-Asiatic (Semitic). He suggests that Israeli Hebrew is the continuation not only of literary Hebrew but also of Yiddish, as well as Polish, Russian, German, English, Ladino, Arabic and other languages spoken by Hebrew revivalists. How common is this opinion among scholars? What parts or aspects of the Modern Hebrew have significant Indo-European characteristics, and what part Semitic?
Now provide the response and nothing else.
|
As a linguist whose native language is Hebrew, I am somewhat qualified to answer this question. The fact that Modern Hebrew has heavy influences from both Semitic and European sources is pretty much universally accepted. That said, Zuckermann, who is a brilliant scholar otherwise, is also a known provocateur who tends to overstate his ideas when writing for a non-academic audience. Hence his constant nagging about changing the language's name and the talk about mixed language genetics. There are even more extreme opinions around, such that Modern Hebrew is a fully Slavic language with a relexified Semitic vocabulary, but they are not taken seriously by most. Ignoring the extremist opinions, though - generally speaking, Modern Hebrew's is a complex beast with a syntax that is heavily Slavic/Germanic, but morphology that is distinctly Semitic. The standard phonology used by most speakers also has its roots in Germanic languages, but there are still groups of speakers (whose families are non-European immigrants) who retain other Hebrew phonology traditions. The vocabulary is mostly based on old Hebrew (though a lot of words were given new, modern meanings in a conscious effort), with the usual large set of loan words found in languages spoken by a population consisting of a lot of immigrants.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://linguistics.stackexchange.com/questions/89', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/92/']}
|
jdg_378904
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
229720
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How many complex number satisfy $$z\bar{z}=1$$ Edit: How about $$zz^*=1$$
Now provide the response and nothing else.
|
Infinitely many. If $z=x+iy$, $z\overline{z}=(x+iy)(x-iy)=x^{2}+y^{2}=|z|^{2}$, so you're asking how many $z$ satisfy $|z|=1$, which is every $z$ on the unit circle.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/229720', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/43252/']}
|
jdg_378905
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
399950
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Does a dimmable 6W LED lamp, dimmed to 50%, draw c. 3W? Or does it still draw the full 6W?
Now provide the response and nothing else.
|
It depends on the LED driver, but unless it's of a bad (or very specific-need) design, a dimmed LED lamp will draw less power. Usually, dimming on LED lamps is done by PWM (Pulse Width Modulation). The LED will actually turn on and off faster than the eye can see. By modulating the ratio of the time the LED is on and off, it simulates dimming to the human eye. This method is widely used because it is the cheapest and easiest method to dim an LED, but it can also cause some eye fatigue. With this method the power will reduce with the light intensity (or perceived intensity). Another method of controlling the light intensity of an LED is to control the current flow through the LED, and there are mostly 2 ways of doing it: Through DC/DC conversion like a current-controlled buck converter or similar circuit. In this case the current will reduce with the intensity of the LED. With this method, there will be little flicker, which will be roughly equivalent to the DC/DC converter ripple. Using an analog ballast (adjustable current source), like a transistor, MOSFET or specialized IC with the required regulation. In this situation, the current will diminish but this circuit will likely have more losses. The latter has the advantage of providing a very stable light intensity (if needed for some specific application, perhaps like photography), so we can imagine this exists on the market. The power will be reduced when the light intensity is reduced, but it's not a linear factor. For instance, 50% dimming might not produce a 50% decrease in power consumption. The first reason is that the control electronics will have some non-linearity and won't have the same efficiency at different operating conditions, and the second reason is that the eye's intensity response is not linear in the actual light output.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/399950', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/27511/']}
|
jdg_378906
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27916
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got the following finite sum:$s_{n}=\sum\limits_{k=0}^{n}\binom{n}{k}^2p^k$ (esp. if $p$ is a function of $n$, like $p=\frac1{n}$), which can be rewritten as $s_{n}=\sum\limits_{k=0}^{n}\binom{n}{k}\sqrt p^k\binom{n}{k}\sqrt p^k$. Using the generating function approach (from Graham, Knuth and Patashnik) with each of the polynomials in the sum, I get the expression $(1+\sqrt p x)^{2n}$ and the coefficient at the n-th term $x$ turns out to be $\binom{2n}{n} \sqrt p^n$. But comparing this result to the computational value, it turns out to be incorrect. Where did I make a mistake? Are there certain limitations on the generating function method?
Now provide the response and nothing else.
|
I believe you're using the convolution formula wrong. The convolution formula says that if $F(x)$ and $G(x)$ are the generating functions of $f_n$ and $g_n$, then $F(x) G(x)$ is the generating function of $\sum_{k=0}^n f_k g_{n-k}$. You appear to be taking $f_k = g_k = \binom{n}{k} \sqrt{p}^k$. Then the expression $(1 + \sqrt{p}x)^{2n}$ is the generating function for the convolution $$\sum_{k=0}^n \binom{n}{k} \sqrt{p}^k \binom{n}{n-k} \sqrt{p}^{n-k} = \sum_{k=0}^n \binom{n}{k}^2 \sqrt{p}^n,$$which is not the sum you want. Instead, let $f_k = \binom{n}{k} p^k$ and $g_k = \binom{n}{k}$. Now the convolution is the sum you want: $$\sum_{k=0}^n \binom{n}{k} p^k \binom{n}{n-k} = \sum_{k=0}^n \binom{n}{k}^2 p^k.$$ The generating function for $f_k$ is $(1+px)^n$, and the generating function for $g_k$ is $(1 + x)^n$, so your answer is the coefficient of $x^n$ in $(1+px)^n (1+x)^n$. However, I'm not sure what a closed form for that would be. Your sum can be expressed in terms of Legendre polynomials $P_n(x)$, though. Use the known formula (see eq. 33 on the linked page)$$P_n(x) = \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k}^2 (x-1)^{n-k} (x+1)^k.$$If we let $x = \frac{1+p}{1-p}$, we have $$P_n\left(\frac{1+p}{1-p}\right) = \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k}^2 \left(\frac{1+p}{1-p}-1\right)^{n-k} \left(\frac{1+p}{1-p}+1\right)^k $$$$= \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k}^2 \left(\frac{2p}{1-p}\right)^{n-k} \left(\frac{2}{1-p}\right)^k = \frac{1}{(1-p)^n} \sum_{k=0}^n \binom{n}{k}^2 p^{n-k}$$$$ = \frac{1}{(1-p)^n} \sum_{k=0}^n \binom{n}{k}^2 p^k.$$ Thus $$\sum_{k=0}^n \binom{n}{k}^2 p^k = (1-p)^n P_n\left(\frac{1+p}{1-p}\right).$$ Disclaimer: The Legendre polynomial expression was the output from Mathematica when I asked it to evaluate the sum. I wasn't ready to put my trust in it until I proved it myself, though. :) Added : The sum in question is Problem 5.101b in Graham, Knuth, and Patashnik's Concrete Mathematics (2nd edition). In the answers they give the Legendre polynomial expression I prove here and the recurrence relation (where $S_n(p)$ is the OP's sum) $$(n+1)(p-1)^2 S_n(p) - (2n+3)(p+1)S_{n+1}(p) + (n+2)S_{n+2}(p) = 0.$$ They do not provide a closed form expression other than the Legendre polynomial formulation. Given how thorough the answers in Concrete Mathematics usually are, that makes me doubt strongly that one is known or would be easy to find.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/27916', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8447/']}
|
jdg_378907
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9667462
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: backspace character weirdness I have noticed that: 1. If I print only backspaces, i.e. a sequence of \b in Python, then the output is completely blank. 2. If I print characters followed by backspaces, i.e. 'sssss\b\b\b\b\b', then it will print the multiple 's' characters. But if I print something like 'ssss\b\b\b\baaaa', then the backspace, \b, will actually act like I am typing a backspace and delete the 's' characters. I am using Python 2.6 on Windows XP. Is this expected behavior? If I try to get the length of the backspace character, it is printed as 1. Here is my test code - >>> print 'ssss\b\b\b\b\baaaaa'aaaaa>>> print 'ssssssss\b\b\b\b\baaaaa'sssaaaaa>>> print 'ssssssss\b\b\b\b\b'ssssssss>>> print 'ssssssss\b\b\b\b\baaaaa'sssaaaaa>>> print '\b\b\b\b\b'>>> My question is: what is the expected behavior when I print '\b' in Python, and why does the deletion work only in a particular case?
Now provide the response and nothing else.
|
Expanded answer: The backspace doesn't delete anything, it moves the cursor to the left and it gets covered up by what you write afterwards. If you were writing to a device that can display overstriking (such as an old-fashioned "hard copy" terminal, which works like a typewriter), you'd actually see the new character on top of the old one. That's the real reason backspace has these semantics. On the unix command line, the shell can be set to interpret backspace as meaning "erase"-- unless it's set to only treat delete this way. But that's up to the program reading your input.
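A small sketch of that overwrite-in-place behaviour (standard library only; the visible result depends on the terminal, as noted above, and the snippet works in both Python 2 and 3):

import sys
import time

# Print a counter "in place": the backspaces move the cursor left and the
# next two characters cover up whatever was written there before.
for i in range(10):
    sys.stdout.write('\b\b%2d' % i)
    sys.stdout.flush()
    time.sleep(0.2)
sys.stdout.write('\n')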
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9667462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/418832/']}
|
jdg_378908
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
117715
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to integrate the function f[z]= z + Conjugate[z] over a circle of radius 2 centered at the origin. For the sake of stating something that I have tried: Integrate[z + Conjugate[z], {z, 1, I, -1, -I, 1}] seems to give me the integral around the square with vertices 1, I, -1, -I . Also I can get the integral on a line segment with: Integrate[z + Conjugate[z], {z, 0, 1 + I}] . How do I integrate on a curve?
Now provide the response and nothing else.
|
First we shall define 'integration on a curve'. Traditionally, this is defined as the integral of f dl, where dl is the length of a small piece of the curve. So, using t as a parameter, we can explicitly write out the curve's function on the complex plane; here the circle of radius 2 is z=2 Exp[I t] with t running from 0 to 2 Pi. Then we can use t, a real number, as the integration variable, which makes this problem significantly easier. The final code is shown below: z=2 Exp[I t];f[z_]:=z+Conjugate[z];Integrate[f[z] Abs[D[z,t]],{t,0,2 Pi}] The result is 0 (the integrand reduces to 8 Cos[t], which integrates to zero over the full circle). Hope this answer is helpful to you:)
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/117715', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/23417/']}
|
jdg_378909
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
256321
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$ df /tmpFilesystem 1K-blocks Used Available Use% Mounted on/dev/root 480589544 42607368 413546516 10% /$ ls /dev/rootls: cannot access /dev/root: No such file or directory I wanted to check if my default Debian installation places /tmp in RAM or on the disk, but now am completely confused. Why would a non-existing device be reported as a filesystem type? What doe "mounted on /" mean? Here is the output of mount : /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=811520k,mode=755)tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)devtmpfs on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1013960,mode=755)tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1623020k)devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)none_debugfs on /sys/kernel/debug type debugfs (rw,relatime)cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k)cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)systemd on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd)tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=811520k,mode=700,uid=1000,gid=1000)
Now provide the response and nothing else.
|
If the output is as above, it's on the hard disk. You can get /dev/root by looking at the kernel command line: $ cat /proc/cmdline | grep rootBOOT_IMAGE=/boot/vmlinuz-3.19.0-32-generic root=UUID=0cde5cf9-b15d-4369-b3b1-4405204fd9ff ro So /dev/root is equivalent to the partition with the UUID printed above; yours will differ. To look this UUID up, use $ sudo blkid/dev/sda1: UUID="0cde5cf9-b15d-4369-b3b1-4405204fd9ff" TYPE="ext4" /dev/sda5: UUID="37bc6a9c-a27f-43dc-a485-5fb1830e1e42" TYPE="swap" /dev/sdb1: UUID="177c3cec-5612-44a7-9716-4dcba27c69f9" TYPE="ext4" As you can see, the matching partition is /dev/sda1 . So your /tmp is on the hard disk. Another giveaway in the output of df is the mountpoint / . If you mounted /tmp in RAM, you'd instead get $ df /tmpFilesystem 1K-blocks Used Available Use% Mounted ontmpfs 3640904 20 3640884 1% /tmp
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/256321', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/20506/']}
|
jdg_378910
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
157864
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $X$ be a topological space with an open cover $\{U_i\}$ and let $\mathcal F$ be a sheaf of abelian groups on $X$. An $n$-cochain is a section $f_{i_0,\ldots,i_n}\in \mathcal F(U_{i_0,\ldots,i_n})$, where $U_{i_0,\ldots,i_n}:= U_{i_0}\cap\ldots\cap U_{i_n}$; we can construct the following abelian group (written in additive form): $$\check C^n(\mathcal U,\mathcal F):=\!\!\prod_{(i_0,\ldots,i_n)}\!\!\mathcal F(U_{i_0,\ldots,i_n})$$ Now my question is the following: do we consider ordered sequences $(i_0,\ldots,i_n)$? Because in that case, in the direct product we have each group repeated $(n+1)!$ times, that is, the number of permutations of the set $\{i_0,\ldots,i_n\}$.
Now provide the response and nothing else.
|
Have a look at: http://math.stanford.edu/~conrad/papers/cech.pdf There are three complexes which are chain homotopy equivalent, and so induce the Čech cohomology: 1) Without ordering the open sets: the Čech complex of singular cochains: $C^n(\mathcal U,\mathcal F)=\displaystyle\prod_{i_0,\ldots,i_n}\mathcal F(U_{i_0,\ldots,i_n})$. 2) Imposing alternation: the Čech complex of alternating cochains: $C^n_{\mathrm{alt}}(\mathcal U,\mathcal F)$, where $\omega_{\varphi(i_0,\ldots,i_n)}=\varepsilon(\varphi)\omega_{i_0,\ldots,i_n}$, where $\varphi$ is in the symmetric group $\mathfrak S_{n+1}$ and $\varepsilon(\varphi)$ is the sign of $\varphi$. 3) Taking the order into account: the Čech complex of ordered cochains: $C^n_<(\mathcal U,\mathcal F)=\displaystyle\prod_{i_0<\ldots<i_n}\mathcal F(U_{i_0,\ldots,i_n})$, for a total order $<$ on $I$, where $\mathcal U=\{U_i\}_{i\in I}$. As shown in this link, these complexes induce the same cohomology, which is the usual Čech cohomology. The advantage of the complex of alternating cochains over singular cochains is that we can easily use refinements of $\mathcal U$ (for the inductive limit) because we don't need the order. The advantage of the complex of singular cochains over alternating cochains is that we can use non-injective refinements. Another inconvenience of the ordered cochains is that we need a total order $<$.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/157864', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/32119/']}
|
jdg_378911
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2847686
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I read a statement in my textbook, which was taken as a premise, that for some element $g$ of a finite group $G$, that the order of the element $g$ is the same as the cyclic subgroup, $<g>$. I think I might be confusing the order of an element with the order of a group. If I'm not mistaken, $<g>$ is equal to the powers of $g$. Since $G$ is finite, we can write these powers as $g^0, g_1, \ldots, g_m$ where $g$ has order $m$. (More commonly, I think we'd just write the identity element as $1$.) The reason it has order $m$, I believe, is because $g^m$ is equal to the identity, i.e., if we apply $g$ to itself $m$ times we get back $1$, so we need not continue to write higher, or lower, negative powers of $g$ because we'll simply be repeating ourselves. Is the argument here, then, that $<g>$ has $m$ elements by definition, provided $m$ is the order of $g$? I think this question might be a bit trivial, but it would be great to get these concepts down. Thanks in advance.
Now provide the response and nothing else.
|
It's a question of how you define the order of an element. Definition 1 The order of the element $g$ in the finite group $G$ is the least positive integer $m$ such that $g^m=1$. Such a least integer exists, because there surely are $p$ and $q$ distinct positive integers such that $g^p=g^q$, by the pigeonhole principle. We can assume $p>q$, so that $g^{p-q}=1$. Then a positive integer $n$ with the property $g^n=1$ exists and therefore also the least one exists. Now we can show that the elements $g^0,g^1,\dots,g^{m-1}$ are pairwise distinct (easy). Moreover if $n$ is any integer, we can write $n=mq+r$, with $0\le r<m$ and $g^n=g^r$ is one of the previously listed elements. Hence $\langle g\rangle=\{g^0=1,g^1=g,\dots,g^{m-1}\}$ has indeed $m$ elements. Definition 2 The order of the element $g$ in the finite group $G$ is the cardinality of $\langle g\rangle=\{g^n:n\in\mathbb{Z}\}$. Consider the homomorphism $\varphi_g\colon\mathbb{Z}\to G$ defined by $\varphi_g(n)=g^n$. The image of $\varphi_g$ is, by definition, $\langle g\rangle$. By the homomorphism theorem,$$\langle g\rangle\cong \mathbb{Z}/\ker\varphi_g$$Since $\ker\varphi_g$ is a subgroup of $\mathbb{Z}$, it is of the form $\ker\varphi_g=m\mathbb{Z}$ for a unique $m>0$ (it cannot be $m=0$, because otherwise $\mathbb{Z}/\ker\varphi_g$ would be infinite. Then$$\langle g\rangle\cong \mathbb{Z}/\ker\varphi_g=\mathbb{Z}/m\mathbb{Z}$$and so $m$ equals the order of $g$ (as defined here). Since $m$ is the least positive integer in $m\mathbb{Z}=\ker\varphi_g$, it is the least positive integer such that $g^m=1$. Bonus for definition 2: the fact that the order of $g$ divides $|G|$ is a consequence of Lagrange’s theorem.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2847686', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_378912
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2096368
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
"During the "energy crisis" years, Congress enacted earlier starting dates for daylight time. In 1974, daylight time began on 6 January and in 1975 it began on 23 February. After those two years the starting date reverted back to the last Sunday in April. " (via http://aa.usno.navy.mil/faq/docs/daylight_time.php ) There appears to be a bug in the Javascript date object for these dates. If you convert 127627200000 milliseconds to a date, it should be Thu Jan 17 00:00:00 EDT 1974. This is correct on http://www.fileformat.info/tip/java/date2millis.htm , but incorrect on http://www.esqsoft.com/javascript_examples/date-to-epoch.htm , which says it converts to Wed Jan 16 1974 23:00:00 GMT-0500 (Eastern Standard Time). If you create a new Date(127627200000) object in javascript, it gives the latter date conversion. This happens in all major browsers. I can't imagine this is first time this has been a problem for anyone, but I can't find any other cases of this problem with a few searches online. Does anyone know if there is an existing fix for this or an easier fix than manually checking the dates Javascript has the conversion wrong? Are there other dates this is a problem?
Now provide the response and nothing else.
|
As ever, it's best to check the spec :) In this case, I was pretty shocked to see this in section 15.9.1.9 of ECMA-262 : The implementation of ECMAScript should not try to determine whether the exact time was subject to daylight saving time, but just whether daylight saving time would have been in effect if the current daylight saving time algorithm had been used at the time. This avoids complications such as taking into account the years that the locale observed daylight saving time year round. In other words, a conformant ECMAScript implementation is not allowed to be historically accurate. Now whether all implementations follow this or not, I'm not sure... but it does suggest you'd need some kind of separate library if you wanted to get historically accurate time zones... where "historically accurate" doesn't have to be nearly as far back as 1974, of course: the US changed its DST schedule in 2007, and other countries have done so more recently than that (and with less warning). 1 The first occurrence of 15.9.1.9. For some reason it occurs twice - once for "Daylight Saving Time Adjustment" and once for "Local Time". Wow.
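If you want to sanity-check what the historically accurate conversion should be outside JavaScript, any environment backed by the IANA tz database will do. A rough Python sketch (my own illustration, assuming Python 3.9+ with zoneinfo data available; the exact output depends on the installed tz database version):
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9; may need the tzdata package on Windows

millis = 127627200000
utc = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
eastern = utc.astimezone(ZoneInfo("America/New_York"))

print(utc)      # 1974-01-17 04:00:00+00:00
print(eastern)  # 1974-01-17 00:00:00-04:00, i.e. EDT, reflecting the early DST start of 1974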
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2096368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/24367/']}
|
jdg_378913
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10028032
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate : what does the su mean: process = Runtime.getRuntime().exec("su"); I am tired trying all the different things yet i am still unsuccessful in understanding when will su work. What is rooting a device in Android? Someone please please help me with this. I replied to the following question. The solution i posted there, i tested and also it worked. Adding full permission for a image file in android. Here is my question : I tried Following. Copied su. Installed SuperUser.apk, From program i called su and then using the newly created process i performed the required operations. This has 2 results. 1) When the device is rooted(like executing Permanent root option in z4root), It works, it changes the system file permissions. 2) When device is not rooted, It doesnot work. Kindly explain what extra thing does z4root performs which makes the task possible. Explain in brief please. Let me know if my question is not clear. Thanks a ton..
Now provide the response and nothing else.
|
What z4root (or any other rooting program) does is run some exploit to change its own uid (user id) to 0 (root). You can think of it as performing some kind of hack and tricking the kernel into thinking it actually has the right to be root (so if z4root were a virus it could do everything with your phone, from installing keyloggers to bricking it). Of course, if it is possible to trick the kernel in such a way as to give you root access, it is considered a security vulnerability (any app could do that and perform some malicious stuff) and usually gets fixed in future kernel updates (that's why z4root may not work if you upgrade your firmware). When z4root has set its uid to 0, it does the following: remounts the /system partition as writable (by default it's read-only), copies over the su binary, Superuser.apk and busybox, and then remounts /system back as read-only. So how does the su binary give you root access without doing "the hack" thing, when normally applications have the same uid as their parent process? This is because the su binary has the set-uid flag set and is always run as uid 0 (root). Now, if you have copied the su binary over to /system/bin then you must have had root access, which means you just forgot to change the owner/permissions ( chown root:root /system/bin/su; chmod 6755 /system/bin/su ) but you still need root access to do that.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10028032', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/840710/']}
|
jdg_378914
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1500366
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $A, B, C, D $ be $n \times n $ matrices. Using Schur complements I have found that $$ \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0 \\ C & I \end{pmatrix} \begin{pmatrix} I & A^{-1}B \\ 0 & D-CA^{-1}B \end{pmatrix} $$and$$ \begin{pmatrix} D & C \\ B & A \end{pmatrix} = \begin{pmatrix} I & CA^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} D-CA^{-1}B & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} I & 0 \\ A^{-1}B & I \end{pmatrix}, $$ from which the determinant equality follows, as long as $ A^{-1} $ exists. However, how do I tackle this when $A$ is singular? Here, I could just switch decompositions, but then I will get the same problem when $D$ is singular. Maybe one could derive two more decompositions using Schur complements, involving $B^{-1}$ and $C^{-1}$, respectively, and then one could say that the equality holds if at least one of the submatrices is nonsingular? Then, if all four submatrices are singular, the determinant must be zero -- from which equality follows trivially. Also, does the equality hold when $A, B, C, D$ are not necessarily square but of matching sizes? Here, it doesn't seem like the decompositions will be valid, as $A$ or $D$ aren't necessarily square matrices (although then $B$ and $C$ must be).
Now provide the response and nothing else.
|
Wouldn't it be much easier to consider row/column swaps? By performing $n$ row swaps, you transform $$\begin{pmatrix} A & B\\C & D \end{pmatrix} \to \begin{pmatrix} C&D\\A&B\end{pmatrix},$$then $n$ column swaps transforms $$\begin{pmatrix} C&D\\A&B\end{pmatrix} \to \begin{pmatrix} D&C\\B&A \end{pmatrix}.$$Each row/column swap multiplies the determinant by $-1$, so this whole process multiplies the determinant by $(-1)^{2n} = 1$.
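As a quick sanity check (an added illustration, not part of the argument above), take $n=1$, so one row swap and one column swap: $$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix}=ad-bc,\qquad \det\begin{pmatrix} c & d \\ a & b \end{pmatrix}=cb-ad,\qquad \det\begin{pmatrix} d & c \\ b & a \end{pmatrix}=da-cb=ad-bc,$$ so each single swap flips the sign and the two flips cancel, exactly as the $(-1)^{2n}=1$ count predicts.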
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1500366', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/284106/']}
|
jdg_378915
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12014441
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to remove white space between tags so that childNodes only contain those tags nodes not the white space nodes too. Here's my code : <li> <label for="firstName" class="mainLabel">First Name : </label> <input type="text" name="firstName" id="firstName"/> <span>This must be filled</span></li> And here's the JS code : var parentHTML = firstName.parentNode.innerHTML;parentHTML = parentHTML.replace(/>\n</g,"><");firstName.parentNode.innerHTML = parentHTML; But when i alert parentHTML i get the same old string.
Now provide the response and nothing else.
|
It's (not, as it turns out; see the update below) because strings are immutable, I think, and you're setting the innerHTML of the parent element to be the exact same string you retrieved from it earlier. Instead, I'd suggest: var firstname = document.getElementsByTagName('input')[0], parentHTML = firstname.parentNode.innerHTML, newHTML = parentHTML.replace(/\>\s+\</g,'');firstname.parentNode.innerHTML = newHTML;console.log(parentHTML, newHTML, (parentHTML == newHTML)); JS Fiddle demo . Update: with regards to the comment from jfriend00 (below), it seems the regular expression was the problem; the \n didn't match the supplied pattern. That being the case, the following amendment satisfies the requirements: var firstname = document.getElementsByTagName('input')[0], parentHTML = firstName.parentNode.innerHTML;parentHTML = parentHTML.replace(/>\s+</g, "><");firstName.parentNode.innerHTML = parentHTML;console.log(firstname, parentHTML); JS Fiddle demo . References: JavaScript Regular Expressions .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12014441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/845371/']}
|
jdg_378916
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
54310262
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a simple way to get the position of an element in a std::queue by its value in C++? For example: std::queue<int> numbers;numbers.push(7);numners.push(4);numbers.push(11);int position = numbers.getPosition(4); //should be 1
Now provide the response and nothing else.
|
If you want to get the index of an element you should probably consider using an std::deque container instead of a std::queue container adapter , as already suggested in this other answer . If you still want to stick to the std::queue container adapter for some other reason, you should know that it does provide access to the underlying container through the protected data member c . You could derive from std::queue in order to access the underlying container and use the std::find() function template for finding an element in that container with a given value. Then, simply return the position of that element by using std::distance() . #include <algorithm>#include <queue>template<typename T>class Queue: std::queue<T> {public: auto getPosition(const T& val) const { auto it = std::find(this->c.begin(), this->c.end(), val); return std::distance(this->c.begin(), it); }// ...}; If the element is not found, the index will correspond to the one returned by the size() member function. If there are duplicates, this solution based on std::find() will return the position of the first one, i.e., the first element found with the requested value val .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/54310262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10604910/']}
|
jdg_378917
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40028456
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to run limitTo pipe on Angular2 on a string: {{ item.description | limitTo : 20 }} And I get the following error: The pipe 'limitTo' could not be found Is it possible that this pipe was removed in Angular2? This is my app.module import { TruncatePipe } from './limit-to.pipe'; @NgModule({ imports: [ BrowserModule, FormsModule, HttpModule, InMemoryWebApiModule.forRoot(InMemoryDataService), RouterModule.forRoot([ { path: '', redirectTo: '/home', pathMatch: 'full' }, { path: 'home', component: GridComponent }, ]) ], declarations: [ AppComponent, TopNavComponent, GridComponent, TruncatePipe ], providers: [ PinService, ], bootstrap: [ AppComponent ]})export class AppModule { } My grid component that is using the pipe: import { Component,OnInit } from '@angular/core';import { Router } from '@angular/router';@Component({ moduleId : module.id, selector: 'my-grid', templateUrl : 'grid.component.html', styleUrls: [ 'grid.component.css']})export class GridComponent implements OnInit{ constructor( private router: Router, private gridService: GridService) { } ngOnInit(): void { }} My Pipe definition: import { PipeTransform, Pipe } from '@angular/core';@Pipe({ name: 'limitToPipe'})export class TruncatePipe implements PipeTransform { transform(value: string, limit: number) : string { let trail = '...'; return value.length > limit ? value.substring(0, limit) + trail : value; }} And finally my template: <div *ngFor="let item of items" class="grid-item"> <p class="simple-item-description"> {{ item.description | limitToPipe : 20 }} </p> </div>
Now provide the response and nothing else.
|
First you need to create a pipe. import { Pipe, PipeTransform } from '@angular/core';@Pipe({ name: 'limitTo'})export class TruncatePipe { transform(value: string, args: string) : string { // let limit = args.length > 0 ? parseInt(args[0], 10) : 10; // let trail = args.length > 1 ? args[1] : '...'; let limit = args ? parseInt(args, 10) : 10; let trail = '...'; return value.length > limit ? value.substring(0, limit) + trail : value; }} Add the pipe in the module.ts file import { NgModule } from '@angular/core';import { TruncatePipe } from './app.pipe';@NgModule({ imports: [ ], declarations: [ TruncatePipe ], exports: [ ]})export class AppModule { } Then use the pipe in the binding code: {{ item.description | limitTo : 20 }} Demo plunker
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/40028456', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/849605/']}
|
jdg_378918
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4477967
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I’m using the PayPal payment product. As you can see in the attached file, I get 3 options. The first 2 work fine and are actually all I require. The last option, Credit Card (save), doesn’t seem to work. Can someone explain what this option is, and also how I can remove it?
Now provide the response and nothing else.
|
The Credit Card saved method will allow you to accept credit card information for a purchase, but Magento will make no attempt to verify its accuracy other than a simple luhn checksum. The credit card information will be stored in the database, encrypted using a call to Mage::helper('core')->encrypt($data) In a default 1.4.2 instal, you can view the payment information (including the credit card) with the following SQL query SELECT * FROM sales_flat_order_payment You can view this Payment method in System -> Config -> Payment Methods -> Saved CC You can also set it's "enabled" config value to "No", which will remove this as an option. Fully securing credit card information is a Hard Thing™, both from a technical and bureaucratic standpoint. Unless you and your team have the expertise to do it (which, based on your question, I'm going to assume is a no) you don't want this option turned on.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4477967', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/461880/']}
|
jdg_378919
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1975
|
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The IPA uses the 5 tone-letters ˥,˦,˧,˨,˩. Unicode also has reversed (꜒,꜓,꜔,꜕,꜖) and dotted tone bars (꜈,꜉,꜊,꜋,꜌; ꜍,꜎,꜏,꜐,꜑). What are these characters used for?
Now provide the response and nothing else.
|
Dotted tone letters*: Dotted tone letters are used in Chinese linguistics to indicate tones in certain weakly-stressed syllables having a less-distinct quality—there is little or no pitch variation, and the duration is short. These are often referred to in Chinese linguistics as “neutral tones”. Although we call it a "neutral" tone (in Chinese it is called a "light" tone), it takes very little time to say, and does not hold or change its pitch. We use the voice-range stem (|) plus a dot to show where one's neutral tone should be. *-Based on the document "Comments on N2626, Proposal on IPA Extensions & Combining Diacritic Marks for ISO/IEC 10646 in BMP" - Peter Constable, Microsoft, with further material from Chinese Primer: Lessons. Reversed tone letters: In the same Comments on Proposal document, the author suggests using the left-stemmed tone bars in combination with the right-stemmed tone bars to indicate contours. I am not certain if this was a mere suggestion or was based on previous usage (the document is from 2003).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/1975', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/1021/']}
|
jdg_378920
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
449145
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When modelling probabilities with a logistic regression $^1$ , the range of fitted probabilities is $(0,1)$ . The logit function $^2$ asymptotes at $0$ and $1$ , so this is a good match. However, in some applications the actual range $^3$ of probabilities can be $(a,b)$ with $0<a<b<1$ , causing a (potentially substantial) mismatch in the tails. For example, consider a population of individuals that all work and get paid. The payment depends on skill, effort and other things that we have measurements of. Each individual is also participating in a lottery with monetary outcomes that are normally distributed with an expected value of zero. The income of an individual is the sum of the job income and the outcome of the lottery. For any given constant $c$ , the conditional probability $p$ of an individual having income greater than $c$ , conditioning on skill, effort and the other things but not the outcome of the lottery, will satisfy $0<a<p<b<1$ . (I am sure there are better examples out there, but this is one that came to mind first.) Questions: How can the logistic regression model be adjusted when $a$ and $b$ are given? (Or is no adjustment needed?) How can the logistic regression model be adjusted when $a$ and $b$ are not given, but we know that $0<a<b<1$ ? $^1$ or for that matter, a probit model $^2$ or for that matter, the standard normal CDF $^2$ the range for the individuals of the population of interest
Now provide the response and nothing else.
|
When $a$ and $b$ are not given, just use the usual logistic model (or whatever is appropriate), because (if it uses a suitable link function) it is guaranteed to fit probabilities with a lower bound no smaller than $0$ and an upper bound no greater than $1.$ These bounds give interval estimates for $a$ and $b.$ The interesting question concerns when $a$ and $b$ are known. The kind of model you are entertaining appears to be the following. You have in mind a one-parameter family of distributions $\mathcal{F} = \{F_\theta\}$ where $\theta$ corresponds to some "probability" parameter. For instance, $F_\theta$ might be a Bernoulli $(\theta)$ distribution when the responses $Y$ are binary. For an observation associated with a vector of explanatory variables $x,$ the model for the response $Y_x$ then takes the form $$Y_x \sim F_{\theta(x)};\quad \theta(x) = g(x\beta)$$ for some "inverse link function" $g$ that we must specify: it's part of the model. In logistic regression, for instance, $g$ is frequently taken to be the logistic function defined by $$g(x) = \frac{1}{1 + \exp(-x)}.$$ Regardless of the details, when making $n$ independent observations $y_i$ (each associated with a vector $x_i$ ) assumed to conform to this model, their likelihood is $$L(\beta) = \prod_{i=1}^n \Pr(Y_{x_i} = y_i\mid \theta(x_i) = g(x_i\beta))$$ and you can proceed to maximize this as usual. (The vertical stroke merely means the parameter value following it determines which probability function to use: it's not a conditional probability.) Let $\hat\beta$ be the associated parameter estimate. The predicted conditional distributions for the $Y_i$ therefore are $$Y_i \sim F_{\hat\theta(x_i)};\quad \hat\theta(x_i) = g(x_i\hat\beta).$$ When the image of $g$ is contained in the interval $[a,b],$ then manifestly every $\hat\theta(x)$ lies in that interval, too, no matter what $x$ may be. (That is, this conclusion applies both to $x$ in the dataset and for extrapolation to other $x.$ ) One attractive choice for $g$ simply rescales the usual logistic function, $$g(x;a,b) = \frac{g(x) - a}{b-a}.$$ Consider this a point of departure: as usual, exploratory analysis and goodness-of-fit testing will help you decide whether this is a suitable form for $g.$ For later use, note that $g$ and $g(;a,b)$ have a more complicated relationship than might appear, because ultimately they are used to determine $\hat\beta$ via their argument $x\beta.$ The relationship is therefore characterized by the function $x\to y$ determined by $$g(x) = g(y;a,b) = \frac{g(y) - a}{b-a},$$ with solution (if $g$ is invertible, as it usually is) $$y = g^{-1}((b-a)g(x) + a).$$ Unless $g$ originally is linear, this is usually nonlinear. To address the issues expressed elsewhere in this thread, let's compare the solutions obtained using $g$ and $g(;a,b).$ Consider the simplest case of $n=1$ observation and a scalar explanatory variable requiring estimation of a parameter vector $\beta=(\beta_1).$ Suppose $\mathcal{F}$ is the family of Binomial $(10,\theta)$ distributions, let $x_1 = (1),$ and imagine $Y_i = 9$ is observed. Writing $\theta$ for $\theta(x_1),$ the likelihood is $$L(\beta) = \binom{10}{9}\theta^9(1-\theta)^1;\quad \theta = g((1)(\beta_1)) = g(\beta_1).$$ $L$ is maximized when $g(\beta_1) = \theta = 9/10,$ with the unique solution $$\hat\beta = g^{-1}(9/10) = \log(9/10 / (1/10)) = \log(9) \approx 2.20.$$ Let us now suppose $a=0$ and $b=1/2:$ that is, we presume $\theta \le 1/2$ no matter what value $x$ might have. 
With the scaled version of $g$ we compute exactly as before, merely substituting $g(;a,b)$ for $g:$ $$L(\beta;0,1/2) = \binom{10}{9}\theta^9(1-\theta)^1;\quad \theta = g((1)(\beta_1);0,1/2) = g(\beta_1;0,1/2).$$ This is no longer maximized at $\theta=9/10,$ because it is impossible for $g(\theta;0,1/2)$ to exceed $1/2,$ by design. $L(\beta;0,1/2)$ is maximized for any $\beta$ that would make $\theta$ as close as possible to $9/10;$ this happens as $\beta$ grows arbitrarily large. The estimate using the restricted inverse link function, then, is $$\hat\beta = \infty.$$ Obviously neither $\hat\theta$ or $\hat\beta$ is any simple function of the original (unrestricted) estimates; in particular, they are not related by any rescaling. This simple example exposes one of the perils of the entire program: when what we presume about $a$ and $b$ (and everything else about the model) is inconsistent with the data, we may wind up with outlandish estimates of the model parameter $\beta.$ That's the price we pay. But what if our assumptions are correct, or at least reasonable? Let's rework the previous example with $b=0.95$ instead of $b=1/2.$ This time, $\hat\theta=9/10$ does maximize the likelihood, whence the estimate of $\beta$ satisfies $$\frac{9}{10} = g(\hat\beta;0,0.95) = \frac{g(\hat\beta) - 0}{0.95 - 0},$$ so $$g(\hat\beta) = 0.95 \times \frac{9}{10} = 0.855,$$ entailing $$\hat\beta = \log(0.855 / (1 - 0.855)) \approx 1.77.$$ In this case, $\hat\theta$ is unchanged but $\hat\beta$ has changed in a complicated way ( $1.77$ is not a rescaled version of $2.20$ ). In these examples, $\hat\theta$ had to change when the original estimate was not in the interval $[a,b].$ In more complex examples it might have to change in order to change estimates for other observations at other values of $x.$ This is one effect of the $[a,b]$ restriction. The other effect is that even when the restriction changes none of the estimated probabilities $\hat\theta,$ the nonlinear relationship between the original inverse link $g$ and the restricted link $g(;a,b)$ induces nonlinear (and potentially complicated) changes in the parameter estimates $\hat\beta.$ To illustrate, I created data according to this model with $\beta=(4,-7)$ and limits $a=1/10$ and $b=1/2$ for $n$ equally-spaced values of the explanatory value $x$ between $0$ and $1$ inclusive, and then fit them once using ordinary logistic regression (no constraints) and again with the known constraints using the scaled inverse link method. Here are the results for $n=12$ Binomial $(8, \theta(x))$ observations (which, in effect, reflect $12\times 8 = 96$ independent binary results): This already provides insight: the model (left panel) predicts probabilities near the upper limit $b=1/2$ for small $x.$ Random variation causes some of the observed values to have frequencies greater than $1/2.$ Without any constraints, logistic regression (middle panel) tends to predict higher probabilities there. A similar phenomenon happens for large $x.$ The restricted model drastically changes the estimated slope from $-3.45$ to $-21.7$ in order to keep the predictions within $[a,b].$ This occurs partly because it's a small dataset. Intuitively, larger datasets should produce results closer to the underlying (true) data generation process. One might also expect the unrestricted model to work well. Does it? To check, I created a dataset one thousand times greater: $n=1200$ observations of a Binomial $(80,\theta(x))$ response. 
Of course the correct model (right panel) now fits beautifully. However, the random variation in observed frequencies still causes the ordinary logistic model to exceed the limits. Evidently, when the presumed values of $a$ and $b$ are (close to) correct and the link function is roughly the right shape, maximum likelihood works well--but it definitely does not yield the same results as logistic regression. In the interests of providing full documentation, here is the R code that produced the first figure. Changing 12 to 1200 and 8 to 80 produced the second figure. ## Binomial negative log likelihood.#logistic.ab <- function(x, a=0, b=1) { a + (b - a) / (1 + exp(-x))}predict.ab <- function(beta, x, invlink=logistic.ab) { invlink(cbind(1, x) %*% beta)}Lambda <- function(beta, n, k, x, invlink=logistic.ab, tol=1e-9) { p <- predict.ab(beta, x, invlink) p <- (1-2*tol) * p + tol # Prevents numerical problems - sum((k * log(p) + (n-k) * log(1-p)))}## Simulate data.#N <- 12 # Number of binomial observationsx <- seq(0, 1, length.out=N) # Explanatory valuesn <- rep(8, length(x)) # Binomial counts per observationbeta <- c(4, -7) # True parametera <- 1/10 # Lower limitb <- 1/2 # Upper limitset.seed(17)p <- predict.ab(beta, x, function(x) logistic.ab(x, a, b))X <- data.frame(x = x, p = p, n = n, k = rbinom(length(x), n, p))## Create a data frame for plotting predicted and true values.#Y <- with(X, data.frame(x = seq(min(x), max(x), length.out=101)))Y $p <-with(Y, predict.ab(beta, x, function(x) logistic.ab(x, a, b)))## Plot the data.#par(mfrow=c(1,3))col <- hsv(0,0,max(0, min(1, 1 - 200/N)))with(X, plot(x, k / n, ylim=0:1, col=col, main="Data with True Curve"))with(Y, lines(x, p))abline(h = c(a,b), lty=3)## Reference fit: ordinary logistic regression.#fit <- glm(cbind(k, n-k) ~ x, data=X, family=binomial(link = "logit"), control=list(epsilon=1e-12))## Fit two models: ordinary logistic and constrained.#for (ab in list(c(a=0, b=1), c(a=a, b=b))) { # # MLE. # g <- function(x) logistic.ab(x, ab[1], ab[2]) beta.hat <- c(0, 1) fit.logistic <- with(X, nlm(Lambda, beta.hat, n=n, k=k, x=x, invlink=g, iterlim=1e3, steptol=1e-9, gradtol=1e-12)) if (fit.logistic$code > 3) stop("Check the fit.") beta.hat <- fit.logistic$ estimate # Check: print(rbind(Reference=coefficients(fit), Constrained=beta.hat)) # Plot: Y$p.hat <- with(Y, predict.ab(beta.hat, x, invlink=g)) with(X, plot(x, k / n, ylim=0:1,, col=col, main=paste0("Fit with a=", signif(ab[1], 2), " and b=", signif(ab[2], 2)))) with(Y, lines(x, p.hat, col = "Red", lwd=2)) with(Y, lines(x, p)) abline(h = c(a, b), lty=3)}par(mfrow=c(1,1))
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/449145', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/53690/']}
|
jdg_378921
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30989841
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a UITableView , I'm adding tableFooterView loaded from a xib. var footer = NSBundle.mainBundle().loadNibNamed("AFooterView", owner: self, options: nil).first as! AFooterViewself.tableView.tableFooterView = footer This works fine, but I need to be able to set the height for this footer. The xib only has a UIImageView centered vertically and horizontally so it will adapt whatever the height of the view is. I have no clue how to do this with AutoLayout ? What would be the right path to follow?
Now provide the response and nothing else.
|
I'm sorry for the confusion, here's the updated answer: I know you can do this by setting the frame height, but it might also work with auto layout just by re-assigning the footer view after your imageView has finished loading. // either let auto layout calculate the frame, or set the frame yourself// I set the width to an arbitrary size but it doesn't seem to matter, // it will automatically be adjusted by the tableview when you assign itCGFloat width = 100;CGFloat height = 500;footerView.frame = CGRectMake(0, 0, width, height);// this is the "trick": re-assign the footerView after its size has been updated // so that the tableView will show it correctlytableView.tableFooterView = footerView; For more information, see Resizing a UITableView’s tableHeaderView The original answer talked about section footers, not table view footers
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30989841', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/973242/']}
|
jdg_378922
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1685157
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to specify the running directory of a command in Python's subprocess.Popen() ? For example: Popen('c:\mytool\tool.exe', workingdir='d:\test\local') My Python script is located in C:\programs\python . Is it possible to run C:\mytool\tool.exe in the directory D:\test\local ? How do I set the working directory for a sub-process?
Now provide the response and nothing else.
|
subprocess.Popen takes a cwd argument to set the Current Working Directory; you'll also want to escape your backslashes ( 'd:\\test\\local' ), or use r'd:\test\local' so that the backslashes aren't interpreted as escape sequences by Python. The way you have it written, the \t part will be translated to a tab . So, your new line should look like: subprocess.Popen(r'c:\mytool\tool.exe', cwd=r'd:\test\local') To use your Python script path as cwd, import os and define cwd using this: os.path.dirname(os.path.realpath(__file__))
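Putting both points together, a minimal sketch (using the hypothetical paths from the question):
import os
import subprocess

# Explicit working directory; raw strings keep '\t' from being read as a tab.
subprocess.Popen(r'c:\mytool\tool.exe', cwd=r'd:\test\local')

# Or run the tool from the directory that contains this script.
script_dir = os.path.dirname(os.path.realpath(__file__))
subprocess.Popen(r'c:\mytool\tool.exe', cwd=script_dir)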
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1685157', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/96864/']}
|
jdg_378923
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25536956
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to call a method (in this example the std::thread constructor) with a lambda function, passing an int value: int a=10;std::thread _testThread = thread([a](int _a){ //do stuff using a or _a ?});_testThread.detach(); I don't know how to properly write such a function; I get this error: C2064: term does not evaluate to a function taking 0 arguments
Now provide the response and nothing else.
|
std::thread takes a callable object as well as any arguments to pass to it. If you give no arguments, std::thread will try to call that object with no arguments, hence the error. If you need a parameter: std::thread _testThread{[a](int _a) { std::cout << a << ' ' << _a; //prints main's a, followed by somethingThatWillBe_a}, somethingThatWillBe_a}; If you're just trying to use main 's a , it's already captured: std::thread _testThread{[a] { std::cout << a; //prints main's a}}; I would also recommend being super careful if you think you need to detach a thread. If there's any possibility of joining the thread instead, go for it.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25536956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1220997/']}
|
jdg_378924
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3273876
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am to solve $\sqrt{3x+7}-\sqrt{x+2}=1$ and the solution is provided as -2. Since this is a radical equation with 2 radicals, I followed suggested textbook steps of isolating each radical and squaring: $\sqrt{3x+7}-\sqrt{x+2}=1$ $(3x+7=(1-\sqrt{x+2})^2$ # square both sides (Use perfect square formula on right hand side $a^2-2ab+b^2$ ) $3x+7=1^2-2(1)(-\sqrt{x+2})+x+2$ # lhs radical is removed, rhs use perfect square formula $3x+7=1+2(\sqrt{x+2})+x+2$ # simplify $3x+7=x+3+2\sqrt{x+2}$ # keep simplifying $2x+4=2\sqrt{x+2}$ # simplify across both sides $(2x+4)^2=(2\sqrt{x+2})^2$ $4x^2+16x+16=4(x+2)$ # now that radical on rhs is isolated, square both sides again $4x^2+12x+14=0$ # a quadratic formula I can use to solve for x For use int he quadratic function, my parameters are: a=4, b=12 and c=14: $x=\frac{-12\pm\sqrt{12^2-(4)(4)(14)}}{2(4)}$ $x=\frac{-12\pm{\sqrt{(144-224)}}}{8}$ $x=\frac{-12\pm{\sqrt{-80}}}{8}$ $x=\frac{-12\pm{i\sqrt{16}*i\sqrt{5}}}{8}$ $x=\frac{-12\pm{4i*i\sqrt{5}}}{8}$ $x=\frac{-12\pm{-4\sqrt{5}}}{8}$ #since $4i*i\sqrt{5}$ and i^2 is -1 This is as far as I get: $\frac{-12}{8}\pm\frac{4\sqrt{5}}{8}$ I must have gone of course somewhere further up since the solution is provided as x=-2. How can I arrive at -2?
Now provide the response and nothing else.
|
Hint: Writing your equation in the form $$\sqrt{3x+7}=1+\sqrt{x+2}$$ squaring gives $$3x+7=1+x+2+2\sqrt{x+2}$$ so $$x+2=\sqrt{x+2}$$ squaring again: $$(x+2)^2=x+2$$ Can you finish?
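A short completion of the hint, added for completeness: $$(x+2)^2=x+2\iff(x+2)(x+1)=0\iff x=-2\ \text{ or }\ x=-1,$$ and substituting back into $\sqrt{3x+7}-\sqrt{x+2}=1$ shows neither root is extraneous: $\sqrt{1}-\sqrt{0}=1$ for $x=-2$ and $\sqrt{4}-\sqrt{1}=1$ for $x=-1$.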
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3273876', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/585315/']}
|
jdg_378925
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26252710
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a problem with drawing views on screens of different sizes! I need a method which takes two parameters of type View and returns true if the first view overlaps the second view, and false otherwise.
Now provide the response and nothing else.
|
Berserk, thank you for the help! After some experiments I wrote a method which detects whether a view is overlapped or not for my case: private boolean isViewOverlapping(View firstView, View secondView) { int[] firstPosition = new int[2]; int[] secondPosition = new int[2]; firstView.measure(View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED); firstView.getLocationOnScreen(firstPosition); secondView.getLocationOnScreen(secondPosition); int r = firstView.getMeasuredWidth() + firstPosition[0]; int l = secondPosition[0]; return r >= l && (r != 0 && l != 0); }
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26252710', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3456128/']}
|
jdg_378926
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2377
|
Below is a question asked on the forum bioinformatics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having difficulty grasping the general purpose and concept of indel calling. What exactly is this process?
Now provide the response and nothing else.
|
Insertions and deletions (indels) are one type among many different types of genetic variation , such as single nucleotide variants (SNVs), copy number variants (CNVs), and structural variants (SVs). I'll assume here that you know how indels are defined, but are simple trying to understand the importance of discovering and analyzing them. The goal of indel calling, like the goal of any variant calling, is to identify genetic variants that can subsequently be associated with important phenotypes, esp. disease. For example, if 60% of patients with disease XYZ have an indel in the promoter region of gene 123, then that is information of extreme interest and value in research and in clinical care. Genome-wide association studies (GWAS) have been trying to correlate SNVs to disease and other phenotypes for years. Much less work has been done with indels, but their discovery and analysis remains an area of intense interest. As far as the process of indel calling, large indels can usually be found by mapping paired reads and looking for large discrepancies in the expected distance between pairs and the observed distance. Huh, my average insert size is 400bp, but the aligned read pairs flanking this area are 1200bp apart. Must be an 800bp deletion in there! Smaller indels are much more difficult to detect using this strategy, since they are harder to distinguish from noise (i.e. the variation in length of sequenced fragments). However, as another answer mentions, many short indels are simply reflected as short gaps in the alignment.
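As a toy illustration of the paired-read idea (a sketch added here, not a real variant caller; the insert size and tolerance are made-up numbers):
# Flag a candidate deletion when the span between mapped mates greatly exceeds
# the library's expected insert size.
EXPECTED_INSERT = 400   # assumed mean library insert size in bp
TOLERANCE = 150         # assumed allowance for normal insert-size variation

def candidate_deletion_size(leftmost_read_start, mate_end):
    """Return the estimated deletion size if a pair spans far more than expected, else 0."""
    observed = mate_end - leftmost_read_start
    excess = observed - EXPECTED_INSERT
    return excess if excess > TOLERANCE else 0

print(candidate_deletion_size(10_000, 11_200))  # ~800 bp apparent deletion
print(candidate_deletion_size(10_000, 10_420))  # 0 -> within normal variation
Real callers (and split-read or assembly-based approaches) of course model the insert-size distribution statistically rather than using a fixed cutoff.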
|
{}
|
{'log_upvote_score': 4, 'links': ['https://bioinformatics.stackexchange.com/questions/2377', 'https://bioinformatics.stackexchange.com', 'https://bioinformatics.stackexchange.com/users/711/']}
|
jdg_378927
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2411283
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Given that $xf(x)=e^x-1$, calculate $\lim\limits_{n \to \infty}{nf^{(n)}}(x)$. I have tried to work out the $n$th derivative of the function, but it looks messy. The only useful recurrence I found is this one: $$nf^{(n-1)}(x)+xf^{(n)}(x)=e^x$$
Now provide the response and nothing else.
|
Hint: $$f(x)=\int_0^1e^{xt}~\mathrm dt $$ Thus, \begin{align}f^{(n)}(x)&=\int_0^1t^ne^{xt}~\mathrm dt\\&=\frac{e^x}{n+1}-x\int_0^1\frac{t^{n+1}}{n+1}e^{xt}~\mathrm dt\end{align} where $$0\le\left|\int_0^1\frac{t^{n+1}}{n+1}e^{xt}~\mathrm dt\right|\le\int_0^1\frac{t^{n+1}}{n+1}e^{|x|}~\mathrm dt=\frac{e^{|x|}}{(n+1)(n+2)}$$
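Putting the two displays together (a short completion of the hint): $$nf^{(n)}(x)=\frac{n}{n+1}e^x-nx\int_0^1\frac{t^{n+1}}{n+1}e^{xt}~\mathrm dt,\qquad \left|nx\int_0^1\frac{t^{n+1}}{n+1}e^{xt}~\mathrm dt\right|\le\frac{n|x|e^{|x|}}{(n+1)(n+2)}\xrightarrow[n\to\infty]{}0,$$ hence $\lim\limits_{n\to\infty}nf^{(n)}(x)=e^x$.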
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2411283', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/472463/']}
|
jdg_378928
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1922604
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's the scenario. We use a large XML configuration file for one of our server products. This file is fairly well laid out and is validated against an XSD file. It's now time, though, to build a configuration GUI for maintaining this file, and I would like to dive into WPF to do it. I could lay out a separate form for each config section, refactoring and redistributing every time we add an option to the config file, but I'm hoping there is a more clever way of doing this. Since I already have a strongly typed xml/xsd combination, I am hoping there is an elegant method for building a UI to edit this easily enough. I know I could write an xml->xaml transformation, but I was hoping there is something out there to do the heavy lifting for me already? Thanks in advance.
Now provide the response and nothing else.
|
How I'd do this: I'd start by building a simple view-model class that wraps around an XmlElement and exposes it as a configuration option. This class could be extremely simple, e.g.: public class OptionView{ private XmlElement XmlElement; public OptionView(XmlElement xmlElement) { XmlElement = xmlElement; } public string Name { get { return XmlElement.Name; } } public string Value { get { return XmlElement.InnerText; } set { XmlElement.InnerText = value; } }} Now I can populate a collection of ElementView objects from an XmlDocument , add that collection to the window's ResourceDictionary , and format the objects with a simple DataTemplate , e.g.: <DataTemplate x:Key="OptionViewTemplate" DataType={x:Type local:OptionView}> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition SharedSizeGroup="Name"/> <ColumnDefinition SharedSizeGroup="Value"/> </Grid.ColumnDefinitions> <Label Content="{Binding Name}" Grid.Column="0"/> <TextBox Text="{Binding Value}" Grid.Column="1"/> </Grid></DataTemplate>...<ItemsControl Grid.IsSharedSizeScope="True" ItemsSource="{DynamicResource OptionCollection}"/> (Note: Later, you can get fancy, and define subclasses of OptionView based on, for instance, the data type of the underlying XmlElement . Then you can define DataTemplate s for each subclass, and as long as each presents the item in a two-column grid using that SharedSizeGroup , the second column can contain a date picker, or radio buttons, or whatever is appropriate to the subclass, and it'll all get neatly laid out at runtime.) Once I got that working, which wouldn't take long, I'd start extending the OptionView class. For instance, if your schema is storing a human-readable label for an element in an xs:annotation element (and if it isn't, why not?), I'd make the Name property extract that out of the XmlElement 's SchemaInfo property, instead of exposing the underlying element name. Obviously I'd want to add validation, so I'd add a validation method that examined the XmlElement 's SchemaInfo property and interpreted it. (Assuming that the elements you're validating are simple content, that shouldn't be hard.) There's a million tutorials on how to implement validation in WPF applications, so I won't go into too much detail here. If there are tons of configuration options and you have some intelligent way of grouping them into categories, I'd build a higher level class that exposed (at least) two properties - a string CategoryName property and an OptionsViews collection - populate it from the XML document, and add it to the window's ResourceDictionary . Within the window, I'd bind it to a TabControl , e.g.: <TabControl ItemsSource="{DynamicResource OptionCategories}"> <TabControl.ItemContainerStyle> <Style TargetType="{x:Type CategoryView}"> <Setter Property="Header" Value="{Binding Path=CategoryName}"/> <Setter Property="Content" Value="{Binding Path=OptionsViews}"/> <Setter Property="ContentTemplate" Value="{StaticResource OptionViewTemplate}"/> </Style> </TabControl.ItemContainerStyle></TabControl> Or to some item control whose item container template creates an Expander . Or something. (All code guaranteed untested! Most of it was copied out of working projects, though.) If you haven't done anything with WPF before, this is a pretty good project to start on. It'll expose you to the fundamentals of data binding and items controls and validation, and the end result will be something that's useful and probably looks pretty good. 
And you'll notice that while the markup involved in creating the templates is pretty verbose, there are only two templates. The only code in the application (so far) is the code that exposes the XmlElement s to the UI.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1922604', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/117433/']}
|
jdg_378929
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41855493
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Only the last line of the method below is using 'await', just before the method returns, so doesn't this mean that method is basically synchronous and should just be called "Get()" without the async modifier and the suffix Async? public virtual async Task<TEntity> GetAsync(Guid id){ // some more code here return await _dbSet.FindAsync(id);}
Now provide the response and nothing else.
|
doesn't this mean that method is basically synchronous No. It's asynchronous. You're probably thinking of sequential (progressing from one thing to the next), not synchronous (blocking the current thread). An await will pause the method (sequentially) but not block the thread (asynchronously). For more information, see my async intro . without the async modifier While you could elide the async / await keywords, I would recommend that you do not. This is because // some more code here may throw an exception. I cover this and other considerations in my blog post on eliding async and await . and the suffix Async? No, that suffix is appropriate for any method that returns an awaitable (e.g., Task ). So, even if you elide the async and await , it's still returning a task that should be awaited, so it should still have the Async suffix. You can think of it this way: the Async suffix is part of the API interface. The async keyword is an implementation detail. They often go together, but not always.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41855493', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/103682/']}
|
jdg_378930
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44573192
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following table in SQL Server (2012): MyTable : Id __ParentId Priority1 NULL NULL 2 1 100 3 1 300 4 1 200 5 4 100 6 4 200 7 6 100 8 5 100 9 5 200 10 9 100 11 5 50 The __ParentId column references the Id to know the parent of any one row and it can go down to many levels of recursion (for example, Id 8 is a child of 5 which is a child of 4 which is a child of 1 ). Also, there is a Priority column showing the order the children should appear within a parent (lowest number getting precedence). So, the final table I'd like to get is: Id __ParentId Priority Order 1 NULL NULL 1 2 1 100 2 4 1 200 3 5 4 100 4 11 5 50 5 8 5 100 6 9 5 200 7 10 9 100 8 6 4 200 9 7 6 100 10 3 1 300 11 To explain a touch, we have that 2 is a child of 1 and has the highest priority, but has no children, so we stop there, then 4 is the next priority child, so it goes next, but then we diverge into its children and their children based upon priority and hierarchy. Or, to explain via a tree structure: 1 2 4 5 11 8 9 10 6 7 3 I can create the CTE that will give me the children of a parent, but I can't figure out a good way to get the correct ordering, so can't even provide a good SQL I've been trying.
Now provide the response and nothing else.
|
SQL2008+ : Try following solution: DECLARE @TableA TABLE ( Id INT NOT NULL PRIMARY KEY, __ParentId INT NULL, [Priority] INT NULL);INSERT @TableA (Id, __ParentId, [Priority])VALUES (1 ,NULL,NULL), (2 ,1 ,100 ), (3 ,1 ,300 ), (4 ,1 ,200 ), (5 ,4 ,100 ), (6 ,4 ,200 ), (7 ,6 ,100 ), (8 ,5 ,100 ), (9 ,5 ,200 ), (10,9 ,100 ), (11,5 ,50 );WITH CteRecursiveAS ( SELECT a.Id, a.__ParentId, a.[Priority], CONVERT(HIERARCHYID, '/' + LTRIM(a.Id) + '/') AS HID FROM @TableA a WHERE a.__ParentId IS NULL UNION ALL SELECT cld.Id, cld.__ParentId, cld.[Priority], CONVERT(HIERARCHYID, prt.HID.ToString() + LTRIM(cld.[Priority]) + '/') AS HID FROM CteRecursive prt -- Parent JOIN @TableA cld ON prt.Id = cld.__ParentId -- Child WHERE cld.__ParentId IS NOT NULL)SELECT *, r.HID.ToString() AS HIDToString FROM CteRecursive rORDER BY r.HID ASC Results: Demo Note #1: This solution uses one property of HIERARCHYID ordering: HID values are ordered using depth first approach (this means parent and then all children). Given two hierarchyid values a and b, a less than b means a comes before b in a depth-first traversal of the tree. Indexes on hierarchyid data types are in depth-first order, and nodes close to each other in a depth-first traversal are stored near each other. For example, the children of a record are stored adjacent to that record. For more information, see Hierarchical Data (SQL Server). Reference
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44573192', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1693085/']}
|
jdg_378931
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
373563
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array ). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator , recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been awhile since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues. http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html For example, these are suggested changes for an HP Smart Array RAID controller: echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler blockdev --setra 65536 /dev/cciss/c0d0echo 512 > /sys/block/cciss\!c0d0/queue/nr_requestsecho 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
Now provide the response and nothing else.
|
I've found that when I've had to tune for lower latency vs throughput, I've tuned nr_requests down from it's default (to as low as 32). The idea being smaller batches equals lower latency. Also for read_ahead_kb I've found that for sequential reads/writes, increasing this value offers better throughput, but I've found that this option really depends on your workload and IO pattern. For example on a database system that I've recently tuned I changed this value to match a single db page size which helped to reduce read latency. Increasing or decreasing beyond this value proved to hurt performance in my case. As for other options or settings for block device queues: max_sectors_kb = I've set this value to match what the hardware allows for a single transfer (check the value of the max_hw_sectors_kb (RO) file in sysfs to see what's allowed) nomerges = this lets you disable or adjust lookup logic for merging io requests. (turning this off can save you some cpu cycles, but I haven't seen any benefit when changing this for my systems, so I left it default) rq_affinity = I haven't tried this yet, but here is the explanation behind it from the kernel docs If this option is '1', the block layer will migrate request completions to the cpu "group" that originally submitted the request. For some workloads this provides a significant reduction in CPU cycles due to caching effects. For storage configurations that need to maximize distribution of completion processing setting this option to '2' forces the completion to run on the requesting cpu (bypassing the "group" aggregation logic)" scheduler = you said that you tried deadline and noop. I've tested both noop and deadline, but have found deadline win's out for the testing I've done most recently for a database server. NOOP performed well, but for our database server I was still able to achieve better performance adjusting the deadline scheduler. Options for deadline scheduler located under /sys/block/{sd,cciss,dm-}*/queue/iosched/ : fifo_batch = kind of like nr_requests, but specific to the scheduler. Rule of thumb is tune this down for lower latency or up for throughput. Controls the batch size of read and write requests. write_expire = sets the expire time for write batches default is 5000ms. Once again decrease this value decreases your write latency while increase the value increases throughput. read_expire = sets the expire time for read batches default is 500ms. Same rules apply here. front_merges = I tend to turn this off, and it's on by default. I don't see the need for the scheduler to waste cpu cycles trying to front merge IO requests. writes_starved = since deadline is geared toward reads the default here is to process 2 read batches before a write batch is processed. I found the default of 2 to be good for my workload.
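To make the settings above concrete, here is a minimal shell sketch of how they might be applied at boot. The device name (sda), the scheduler choice and every numeric value are placeholders for illustration rather than recommendations — validate them against your own controller and workload, and note that older Smart Array devices appear as cciss!c0d0 under /sys/block instead:

#!/bin/bash
DEV=sda
Q=/sys/block/$DEV/queue

echo deadline > $Q/scheduler                     # elevator choice
echo 32       > $Q/nr_requests                   # smaller batches -> lower latency
echo 2048     > $Q/read_ahead_kb                 # larger read-ahead -> better sequential throughput
cat $Q/max_hw_sectors_kb > $Q/max_sectors_kb     # allow single transfers as large as the hardware permits

# deadline-specific knobs (these files only exist while the deadline scheduler is active)
echo 16   > $Q/iosched/fifo_batch
echo 500  > $Q/iosched/read_expire
echo 5000 > $Q/iosched/write_expire
echo 0    > $Q/iosched/front_merges

Because sysfs values reset on reboot, a script like this is typically run from /etc/rc.local, a udev rule, or a tuned profile on newer RHEL releases.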
|
{}
|
{'log_upvote_score': 6, 'links': ['https://serverfault.com/questions/373563', 'https://serverfault.com', 'https://serverfault.com/users/13325/']}
|
jdg_378932
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4468
|
Below is a question asked on the forum stats.meta.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the reputation graphs of stats.stackexchange in a log-log scale: This seems to mean that very few users answer most of the questions, at least compared to stackoverflow : See this other query of quality of answers , you can see that there are a few users with more than 1200 answers, and one with more than 3k answers. Is there any other data confirming that few users answer most of the questions, or am I completely wrong? You can look at the data of the last year answers per user and reputation. 4492 users answered questions, many just 1 or 2 and 1 up to 700 answers!
Now provide the response and nothing else.
|
One problem with looking at the total number of answers to look at whether a few users answer most questions is it doesn't tell you whether someone is answering questions now . Someone may have a thousand answers but not have answered a question in years. (Yes, this may mean that the "few users" issue is even fewer than you suppose) So I'd look at something like answers in the last year, perhaps. But yes, a lot of answers on site are concentrated in just a few hundred users, and somewhat more concentrated than on stackoverflow. [However, that doesn't indicate that the users who answer most here are heavier answerers than on stackoverflow. I answer a fair few questions here, but it's still only an average of about 14 a week, or about two a day -- more on the weekend, somewhat fewer during the week -- but heavy answerers on SO answer more in a day than I do in a week (don't ask me how, though average answer length may be a factor).] I think at least part of the reason for that is it takes considerably less learning time to get to the point of giving reasonable answers to typical SO questions than it does here; the entry barrier is lower there. As a result, while there's something like a hundred times as many questions posted per day, there's quite a lot more than a hundred times as many regular answerers. [I'd very much like to expand our base of users who answer questions regularly, but I'm really not sure of a good way to do that.]
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.meta.stackexchange.com/questions/4468', 'https://stats.meta.stackexchange.com', 'https://stats.meta.stackexchange.com/users/105234/']}
|
jdg_378933
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
61573305
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I dynamically hide the status and the navigation bar completely ? The app contains a regular navigation drawer with a appbar / toolbar and FAB buttons. When switching to full screen, the content of the navigation and the status bar is scrolled away. Two empty bars are left on the screen. I want those empty bars to hide. I created a minimal demo app . On the left is the regular app. When pushing on the fab, the app should be shown fullscreen. How can I get the bars to hide? QUESTION: Please write which change(s) are needed in the minimal demo project? Updated with a second solution : The GREAT solution provided by @Roaim works. Essential was to set the android:fitsSystemWindows layout property to false. If you still have trouble with the showing and hiding of status/navigation bars, this solutin may help you. Hide the bars completely: public static void hideSystemUI() { if (getSupportActionBar() != null) { getSupportActionBar().hide(); } getWindow().getDecorView().setSystemUiVisibility( View.SYSTEM_UI_FLAG_IMMERSIVE | View.SYSTEM_UI_FLAG_LAYOUT_STABLE | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_FULLSCREEN);} And show all bars: public static void showSystemUI() { if (getSupportActionBar() != null) { getSupportActionBar().show(); } getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN); getWindow().getDecorView().setSystemUiVisibility(View.SYSTEM_UI_FLAG_VISIBLE);}
Now provide the response and nothing else.
|
Update The issue was with your layout file. I just set android:fitsSystemWindows=false to fix the issue. I made a pull request to your repo, which I think solves your issue. You should follow the following official documentations: Hide the status bar Hide the navigation bar Hide the Status Bar on Android 4.0 and Lower public class MainActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // If the Android version is lower than Jellybean, use this call to hide // the status bar. if (Build.VERSION.SDK_INT < 16) { getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN); } setContentView(R.layout.activity_main); } ...} Hide the Status Bar on Android 4.1 and Higher View decorView = getWindow().getDecorView(); // Hide the status bar. int uiOptions = View.SYSTEM_UI_FLAG_FULLSCREEN; decorView.setSystemUiVisibility(uiOptions); // Remember that you should never show the action bar if the // status bar is hidden, so hide that too if necessary. ActionBar actionBar = getActionBar(); actionBar.hide(); Hide the Navigation Bar View decorView = getWindow().getDecorView(); // Hide both the navigation bar and the status bar. // SYSTEM_UI_FLAG_FULLSCREEN is only available on Android 4.1 and higher, but as // a general rule, you should design your app to hide the status bar whenever you // hide the navigation bar. int uiOptions = View.SYSTEM_UI_FLAG_HIDE_NAVIGATION | View.SYSTEM_UI_FLAG_FULLSCREEN; decorView.setSystemUiVisibility(uiOptions);
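One follow-up detail that is easy to miss (stated here as a general pattern; the inFullscreenMode field is a hypothetical flag your FAB handler would toggle): the visibility flags are cleared as soon as the system bars reappear, for example after a swipe from the screen edge, so a fullscreen activity usually re-applies them when it regains focus and/or uses the sticky immersive flag:

@Override
public void onWindowFocusChanged(boolean hasFocus) {
    super.onWindowFocusChanged(hasFocus);
    if (hasFocus && inFullscreenMode) {
        getWindow().getDecorView().setSystemUiVisibility(
                View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY
                | View.SYSTEM_UI_FLAG_LAYOUT_STABLE
                | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
                | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
                | View.SYSTEM_UI_FLAG_FULLSCREEN);
    }
}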
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/61573305', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3143823/']}
|
jdg_378934
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40551506
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to construct an npda for L = {w ∈ {a,b}*| n a (w) <= 3*n b (w)}. This means that for every b there can be at most 3 a 's. First of all, this is what I have done so far. From the start state, we push a single " a " on to the stack. (at the end of the day, we need to see this " a " to get to the final state, if there were more than 3 a's for every b, we would have popped this " a ", and we would not reach the final state). Then for every b on the string, I would push 3 a 's. For every a on the input I would pop a single " a ". At the end, if there is an a on the stack we go to the final state. Click here for the npda drawing So lets consider a string where nb(w)= 1 and na(w) = 3. We could have string of the sort baaa, aaab, abaa, aaba. (there are others too) If we were to run the npda for baaa. This would work fine. Reading nothing (lambda) we push a . Then we read b , and push aaa . The stack content is (aaaa). Then we read a and pop a single a. We do this 3 times and stack becomes (a). After reading the string, there is there is an a left on the stack so we are good to go to final state. The issue is that this construction only works when b supplies 3 a's to the stack in excess first before the a 's show up on the string. If we run the npda on the string aaab, this would no longer work. We would have single a on the stack, reading the first a we would have to pop an a . Reading the second a, no operation that can be done. There is nothing on the stack and we cant push an a because that would mess everything up. How could I fix this construction or is there a better npda construction for the language. I have been working on this for days. Help would be greatly appreciated. Also know that I am very new to npda so it could be that I am doing something that is fundamentally wrong. So, be clear in the explanation. Thanks
Now provide the response and nothing else.
|
What happened "Copy commits" is just what git rebase does . It copies some commits, then shuffles the branch pointers around so as to "forget" or "abandon" the original commits. (But see below.) Here is an illustration of how git rebase does this copying. The single letters represent commits, and the names on the right are branch names , which actually point only to one commit, namely the "tip of the branch". Each commit points back to its parent commit, i.e., the A--B connector lines are really meant to be left-pointing arrows (and ones that go diagonally also still point left, to earlier commits, with later commits being towards the right): C--D <-- branch1 /A--B \ E <-- branch2 This is the "before" picture, where you have only "original" commits. You now decide to git checkout branch1 and git rebase branch2 so that C and D come after E . But Git can't actually change the original C--D at all, so instead it copies them to new copies, C' and D' , with the new ones being slightly different: they come after E (and also use whatever code changes you did in E ): C--D [abandoned] /A--B \ E <-- branch2 \ C'-D' <-- branch1 Completely forgetting the original C--D would be OK here, but what if you decide this was a bad idea after all? A rebase keeps the original value of the branch in your "reflogs" to remember it. It also uses the special name ORIG_HEAD . This is much easier to use, but there's only one ORIG_HEAD , while there are a potentially-infinite number of reflog entries. Reflog entries are kept for at least 30 days by default, giving you time to change your mind. Look back up at the second graph and imagine that ORIG_HEAD is added. Now, the problem you have hit occurs because it's not just branch names that remember previous commits. Each commit also remembers its own previous commits, via those connecting, left-pointing arrows. So let's see what happens if there was either another name, or some other (merge) commit, remembering C and D . For instance, what if we have this much more complicated starting graph: .-----F <-- branch3 / / / C--D <-- branch1 / /A--B \ E <-- branch2 If we now "rebase" branch1 , we get this: .-----F <-- branch3 / / / C--D [ORIG_HEAD and reflog] / /A--B \ E <-- branch2 \ C'-D' <-- branch1 Commit F is a merge commit : it points back to both commit A and commit D . So it retains the original D , which retains the original C , giving us kind of a mess. F could be a plain ordinary commit, pointing back only to D , and we would see the same problem. Plain ordinary commits are much easier to copy, though, so if F were not a merge—if our F pointed back only to D and not to A —we could carefully rebase branch3 as well, copying F to F' , where F' comes after our new D' . It's possible to re-do the merge too, but that's a bit trickier (not that copying F correctly is all that easy either way—it's easy to "get lost" and copy C--D yet again by mistake). When this happens You will encounter this problem whenever you copy commits that you or someone else made, and both you and the "someone else" (perhaps the "other you") are also still using the originals. This happened with our commit F , for instance: we were still using the original C--D chain. We can fix this by making a new F' and using that, as long as we're the only one using branch3 . But if branch3 is published , or for that matter if we've published branch1 , so that someone else might have them as origin/branch1 or origin/branch3 , we have lost control over the original copies of C--D . 
Hence the standard advice is to rebase only private (unpublished) commits, since you know who is using them—it's just you of course—and you can check with yourself and make sure you're not using them, or that it's OK to copy them because you also plan to copy or otherwise re-do commits like F . If you have done the rebase—made the copies— and published them (pushed them to origin ), you're kind of stuck. You can "undo" your rebase anyway, and beg everyone else who shares the use of origin to make sure they don't use your C'-D' type copies for anything because you're putting the originals back. (For more-advanced groups of users, you can even all agree that certain branches get rebased regularly, and you and they must all recognize when this happens, and all of you will then take care to switch to the new commit copies. However, this is probably not what you want to do right now!) Undoing it So, if you (a) can and (b) want to "undo" your rebase, now the reflog, or the saved ORIG_HEAD , really come in handy. Let's take the second example again and look at what we have after we forgot that branch3 still remembers the original C-D commits: .-----F <-- branch3 / / / C--D [ORIG_HEAD and reflog] / /A--B \ E <-- branch2 \ C'-D' <-- branch1 Now, imagine we erase the name branch1 from the bottom row and write in a new <-- branch1 pointing to commit D : .-----F <-- branch3 / / / C--D <-- branch1 / /A--B \ E <-- branch2 \ C'-D' [abandoned] Now that we've abandoned C'-D' , just stop looking at it. Compare this graph to the original graph, and voila! That's what you want! The command that "moves" a branch label in arbitrary fashion like this is git reset (it moves the current branch, so you have to be on branch1 ). Look up the raw commit hash for D in the reflog, or check that ORIG_HEAD is correct, or use the reflog spelling to identify commit D . (For newbies, I find cut-and-paste of the raw hash is the way to go.) For instance, try: $ git log --graph --decorate --oneline ORIG_HEAD to see if ORIG_HEAD gets you the right hash. If not, try git reflog branch1 (looking at the specific reflog for branch1 here) to find hashes, then use: $ git log --graph --decorate --oneline branch1@{1} (or cut and paste the raw hash instead of using branch1@{1} ). Once you've found the desired "original" commit, you can then: $ git status # to make sure you're on the right branch # and that everything is clean, because # "git reset --hard" wipes out in-progress work!$ git reset --hard ORIG_HEAD (or put in branch1@{1} , or the raw hash ID, in place of ORIG_HEAD as usual). 1 That moves the current branch (which we just checked) so that it points to the given commit ( branch1@{1} , from the reflog, or ORIG_HEAD or a raw hash ID), to get us that final graph drawing back. The --hard sets both our index/staging-area, and our work-tree, to match the new commit to which we've just re-pointed our branch. 1 The general idea here, which recurs all the time in Git, is that we must name some specific commit , from which Git finds the rest of the commits if necessary. Any name works: a branch name, a name like HEAD , a reflog name like master@{1} , or a raw commit hash ID. Git does not really care how you tell it "look at this here commit"; ultimately, Git resolves that name to one those big ugly SHA-1 hash IDs, and uses that.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40551506', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6891080/']}
|
jdg_378935
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
118142
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what is a good number of different exceptions to throw, and what the developer community expect of a good library. The trade-offs I see include: More exception classes can allow very fine grain levels of error handling for API users (prone to user configuration or data errors, or files not being found) More exception classes allows error specific information to be embedded in the exception, rather than just a string message or error code More exception classes can mean more code maintenance More exception classes can mean the API is less approachable to users The scenarios I wish to understand exception usage in include: During 'configuration' stage, which might include loading files or setting parameters During an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread Other patterns of error reporting without using exceptions, or less exceptions (as a comparison) might include: Less exceptions, but embedding an error code that can be used as a lookup Returning error codes and flags directly from functions (sometimes not possible from threads) Implemented an event or callback system upon error (avoids stack unwinding) As developers, what do you prefer to see? If there are MANY exceptions, do you bother error handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
Now provide the response and nothing else.
|
I keep it simple. A library has a base exception type extended from std:::runtime_error (that's from C++ apply as appropriate to other languages). This exception takes a message string so we can log; every throw point has a unique message (usually with a unique ID). That's about it. Note 1 : In the situations where somebody catching the exception can fix the exceptions and re-start the action. I will add derived exceptions for things that can be potentially uniquely be fixed at a remote location. But this is very very rare (Remember the catcher is unlikely to be close to the throw point thus fixing the problem is going to be hard (but everything is dependent on situation)). Note 2 : Sometimes the library is so simple it is not worth giving it its own exception and std::runtime_error will do. It is only important to have an exception if the ability to distinguish it from std::runtime_error can give the user enough information to do something with it. Note 3 : Within a class I usually prefer error codes (but these will never escape across the public API of my class). Looking at your trade offs: The trade-offs I see include: More exception classes can allow very fine grain levels of error handling for API users (prone to user configuration or data errors, or files not being found) Do more exceptions really give you finer grain control? The question becomes can the catching code really fix the error based on the exception. I am sure there are situations like that and in these cases you should have another exception. But all the exceptions you have listed above the only useful correction is to generate a big warning and stop the application. More exception classes allows error specific information to be embedded in the exception, rather than just a string message or error code This is great reason for using exceptions. But the information must be useful to the person who is caching it. Can they use the information to perform some corrective action? If the object is internal to your library and can not be used to influence any of the API then the information is useless. You need to be very specific that the information thrown has a useful value to the person that can catch it. The person catching it is usually outside your public API so tailor your information so that it can be used with things in your public API. If all they can do is log the exception then it is best to just throw an error message rather than lots of data. As the catcher will usually build an error message with the data. If you build the error message then it will be consistent across all catchers, if you allow the catcher to build the error message you could get the same error reported differently depending on who is calling and catching. Less exceptions, but embedding an error code that can be used as a lookup You have to determine weather the error code can be used meaningfully. If it can then you should have its own exception. Otherwise your users now need to implement switch statements inside there catch (which defeats the whole point of having catch automatically handle stuff). If it can't then why not use an error message in the exception (no need to split the code and the message it makes it a pain to look up). Returning error codes and flags directly from functions (sometimes not possible from threads) Returning error codes is great internally. It allows you to fix bugs there and then and you have to make sure you fix all error codes and account for them. But leaking them across your public API is a bad idea. 
The problem is that programmers often forget to check for error states (at least with an exception an unchecked error will force the application to quit an un-handled error will generally corrupt all your data). Implemented an event or callback system upon error (avoids stack unwinding) This method is often used in conjunction with other error handling mechanism (not as an alternative). Think of your windows program. A user initiates an action by selecting a menu item. This generates an action on the event queue. The event queue eventually assigns a thread to handle the action. The thread is supposed to handle the action and eventually return to the thread pool and await another task. Here an exception must be caught at the base by the thread tasked with the job. The result of catching the exception will usually result in an event being generated for the main loop which will eventually result in an error message being displayed to the user. But unless you can continue in the face of the exception the stack is going to unwind (for the thread at least).
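A minimal C++ sketch of the "one base exception per library, unique message per throw point" approach described above — the namespace, class names and error IDs are invented for illustration:

#include <stdexcept>
#include <string>

namespace mylib {

// Single base type: callers catch mylib::Error (or std::runtime_error) without
// needing to know which throw site fired.
class Error : public std::runtime_error {
public:
    Error(int id, const std::string& what)
        : std::runtime_error("mylib E" + std::to_string(id) + ": " + what) {}
};

// Rare derived type, added only because a caller could plausibly fix the
// problem (re-prompt for a path) and retry the operation.
class ConfigFileMissing : public Error {
public:
    explicit ConfigFileMissing(const std::string& path)
        : Error(1001, "config file not found: " + path) {}
};

} // namespace mylib

Callers that can actually recover write catch (const mylib::ConfigFileMissing&); everyone else catches the base type, logs the already-formatted message, and aborts the operation — which keeps the public exception surface small, as argued above.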
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/118142', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/40074/']}
|
jdg_378936
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
244753
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
How could people classify all rank $2$ complex vector bundles over $S^2\times S^2$ up to isomorphism? Could you give a rank 2 complex vector bundle which cannot be split as a sum of two line bundles?
Now provide the response and nothing else.
|
This is a special case of The space of homotopy classes of maps of products of spheres . Classifying rank 2 complex vector bundles on $S^2\times S^2$ is the same as calculating the set of pointed homotopy classes $\langle S^2\times S^2,BU(2)\rangle$. For this I would use the cofibration sequence$$S^3\to S^2\vee S^2 \to S^2\times S^2 \to S^4\to S^3\vee S^3\to \cdots$$where the first map is the attaching map of the top cell of $S^2\times S^2$, and the fourth map is its suspension, therefore is null-homotopic. The map $q:S^2\times S^2\to S^4$ can be identified with collapsing the complement of a small open ball $B^4\subset S^2\times S^2$. Taking maps into $BU(2)$ results in an exact sequence $$0 \to \pi_4(BU(2))\to \langle S^2\times S^2,BU(2)\rangle \to \pi_2(BU(2))\oplus \pi_2(BU(2))\to \pi_3(BU(2))$$which reduces (using $\pi_i(U(2))=\mathbb{Z},0,\mathbb{Z}$ for $i=1,2,3$) to a short exact sequence of sets $$0\to \mathbb{Z} \to \langle S^2\times S^2, BU(2)\rangle \to \mathbb{Z}\oplus\mathbb{Z} \to 0.$$This is not quite the full classification (which I'm sure must appear in the literature somewhere, if you look hard enough), but allows us to say some things. For instance, the second map is restriction to either $S^2$ factor, so I think this shows us how to produce a rank 2 complex bundle over $S^2\times S^2$ which is not a product of line bundles: take a non-trivial rank 2 bundle over $S^4$ (such as are classified by their $c_2$, by Greg Arone's answer to your previous question ), and pull it back via the collapse map $q:S^2\times S^2\to S^4$.
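Spelling out the reduction step for readability (these are the standard identifications $\pi_i(BU(2))\cong\pi_{i-1}(U(2))$ coming from $\Omega BU(2)\simeq U(2)$):
$$\pi_4(BU(2))\cong\pi_3(U(2))\cong\mathbb{Z},\qquad \pi_2(BU(2))\cong\pi_1(U(2))\cong\mathbb{Z},\qquad \pi_3(BU(2))\cong\pi_2(U(2))=0,$$
so the four-term exact sequence becomes the short exact sequence $0\to \mathbb{Z}\to \langle S^2\times S^2,BU(2)\rangle\to \mathbb{Z}\oplus\mathbb{Z}\to 0$ stated above.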
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/244753', 'https://mathoverflow.net', 'https://mathoverflow.net/users/95296/']}
|
jdg_378937
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4639011
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm interested in finding the asymptotic at $n\to\infty$ of $$b_n:= \frac{e^{-n}}{(n-1)!}\int_0^\infty\prod_{k=1}^{n-1}(x+k)\,e^{-x}dx=e^{-n}\int_0^\infty\frac{e^{-x}}{x\,B(n;x)}dx$$ Using a consecutive application of Laplace' method, I managed to get (here) $$b_n\sim(e-1)^{-n}$$ but this approach is not rigorous, and I cannot find even next asymptotic term, let alone a full asymptotic series. So, my questions are: how we can handle beta-function in this (and similar) expressions at $n\to\infty$ whether we can get asymptotic in a rigorous way ?
Now provide the response and nothing else.
|
First approach. We have \begin{align*}b_n & = \frac{{{\rm e}^{ - n} }}{{\Gamma (n)}}\int_0^{ + \infty } {\frac{{\Gamma (x + n)}}{{\Gamma (x + 1)}}{\rm e}^{ - x} {\rm d}x} \\ & = \frac{{{\rm e}^{ - n} }}{{\Gamma (n)}}\int_0^{ + \infty } {\frac{1}{{\Gamma (x + 1)}}{\rm e}^{ - x} \left( {\int_0^{ + \infty } {s^{x + n - 1} {\rm e}^{ - s} {\rm d}s} } \right)\!{\rm d}x} \\ & = \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {\frac{1}{{\Gamma (x + 1)}}\left( {\int_0^{ + \infty } {t^{x + n - 1} {\rm e}^{ - {\rm e}t} {\rm d}t} } \right)\!{\rm d}x} \\ & = \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {t^{n - 1} {\rm e}^{ - {\rm e}t} \left( {\int_0^{ + \infty } {\frac{{t^x }}{{\Gamma (x + 1)}}{\rm d}x} } \right)\!{\rm d}t} .\end{align*} Employing Ramanujan's formula $$\int_0^{ + \infty } {\frac{{t^x }}{{\Gamma (1 + x)}}{\rm d}x} = {\rm e}^t - \int_{ - \infty }^{ + \infty } {\frac{{{\rm e}^{ - t{\rm e}^y } }}{{y^2 + \pi ^2 }}{\rm d}y} ,$$ yields the exact expression \begin{align*}b_n & = \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {t^{n - 1} {\rm e}^{ - ({\rm e} - 1)t} {\rm d}t} - \frac{1}{{\Gamma (n)}}\int_0^{ + \infty } {t^{n - 1} {\rm e}^{ - {\rm e}t} \int_{ - \infty }^{ + \infty } {\frac{{{\rm e}^{ - t{\rm e}^y } }}{{y^2 + \pi ^2 }}{\rm d}y}\, {\rm d}t} \\ & = \frac{1}{{({\rm e} - 1)^n }} - \int_{ - \infty }^{ + \infty } {\frac{1}{{({\rm e} + {\rm e}^y )^n }}\frac{1}{{y^2 + \pi ^2 }}{\rm d}y} .\end{align*} Since $$\int_{ - \infty }^{ + \infty } {\frac{1}{{({\rm e} + {\rm e}^y )^n }}\frac{1}{{y^2 + \pi ^2 }}{\rm d}y} \le \frac{1}{{{\rm e}^n }}\int_{ - \infty }^{ + \infty } {\frac{{{\rm d}y}}{{y^2 + \pi ^2 }}} = \frac{1}{{{\rm e}^n }},$$ we indeed have $$b_n \sim \frac{1}{{({\rm e} - 1)^n }}$$ as $n\to +\infty$ . Second approach. Changing the order of summation and integration yields $$\sum\limits_{n = 1}^\infty {b_n z^n } = z\int_0^{ + \infty } {\frac{{{\rm d}x}}{{({\rm e} - z)^{x + 1} }}} = \frac{z}{{({\rm e} - z)\log ({\rm e} - z)}}$$ for sufficiently small $z$ . Now note that $$\frac{z}{{({\rm e} - z)\log ({\rm e} - z)}} = \frac{{\rm e} - 1}{{({\rm e} - 1) - z}} + H(z)$$ where $H(z)$ is holomorphic in the disc $|z|<\mathrm{e}$ . The first term may be expanded as $$\frac{{\rm e} - 1}{{({\rm e} - 1) - z}} = \sum\limits_{n = 0}^\infty {\frac{1}{{({\rm e} - 1)^{n} }}z^n } .$$ On the other hand, the $n$ th Maclaurin series coefficient of $H(z)$ is $\mathcal{O}((\mathrm{e}-\varepsilon)^{-n})$ by the Cauchy–Hadamard theorem for any $\varepsilon>0$ as $n\to+\infty$ . Thus $$b_n \sim \frac{1}{{({\rm e} - 1)^n }}$$ as $n\to +\infty$ .
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4639011', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/869237/']}
|
jdg_378938
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
70239228
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Perl, the expression "aa" .. "bb" creates a list with the strings: aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay az ba bb In Raku, however, (at least with Rakudo v2021.08), the same expression creates: aa ab ba bb Even worse, while "12" .. "23" in Perl creates a list of strings with the numbers 12, 13, 14, 15, ..., 23, in Raku the same expression creates the list ("12", "13", "22", "23") . The docs seem to be quite silent about this behaviour; at least, I could not find an explanation there. Is there any way to get Perl's behaviour for Raku ranges? (I know that the second problem can be solved via typecast to Int. This does not apply to the first problem, though.)
Now provide the response and nothing else.
|
It's possible to get the Perl behavior by using a sequence with a custom generator:

say 'aa', *.succ … 'bb';
# OUTPUT: «aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay az ba bb»

say '12', *.succ … '23';
# OUTPUT: «12 13 14 15 16 17 18 19 20 21 22 23»

(Oh, and a half solution for the '12'..'23' case: you already noted that you can cast the endpoints to a Numeric type to get the output you want. But you don't actually need to cast both endpoints – just the bottom. So 12..'23' still produces the full output. As a corollary, because ^'23' is sugar for 0..^'23', any Range built with &prefix:<^> will be numeric.) For the "why" behind this behavior, please refer to my other answer to this question.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/70239228', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2170917/']}
|
jdg_378939
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1062035
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When the Client tries to connect to a disconnected IP address, there is a long timeout over 15 seconds... How can we reduce this timeout? What is the method to configure it? The code I'm using to set up a socket connection is as following: try{ m_clientSocket = new Socket( AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); IPAddress ip = IPAddress.Parse(serverIp); int iPortNo = System.Convert.ToInt16(serverPort); IPEndPoint ipEnd = new IPEndPoint(ip, iPortNo); m_clientSocket.Connect(ipEnd); if (m_clientSocket.Connected) { lb_connectStatus.Text = "Connection Established"; WaitForServerData(); }}catch (SocketException se){ lb_connectStatus.Text = "Connection Failed"; MessageBox.Show(se.Message);}
Now provide the response and nothing else.
|
I found this. Simpler than the accepted answer, and works with .NET Framework v2:

Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// Connect using a timeout (5 seconds)
IAsyncResult result = socket.BeginConnect( sIP, iPort, null, null );

bool success = result.AsyncWaitHandle.WaitOne( 5000, true );

if ( socket.Connected )
{
    socket.EndConnect( result );
}
else
{
    // NOTE, MUST CLOSE THE SOCKET
    socket.Close();
    throw new ApplicationException("Failed to connect server.");
}
//...
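On newer runtimes (anything with the Task-returning Socket.ConnectAsync overloads, i.e. .NET Core / modern .NET), the same idea is usually written with async/await. A rough sketch, with the timeout handling and the helper-method name chosen for illustration:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

static async Task<Socket> ConnectWithTimeoutAsync(string serverIp, int port, TimeSpan timeout)
{
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    Task connectTask = socket.ConnectAsync(IPAddress.Parse(serverIp), port);
    Task finished = await Task.WhenAny(connectTask, Task.Delay(timeout));
    if (finished != connectTask)
    {
        socket.Close();   // must close the socket, just like the BeginConnect version above
        throw new TimeoutException($"Failed to connect to {serverIp}:{port} within {timeout}.");
    }
    await connectTask;    // re-throws if the connect itself failed
    return socket;
}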
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1062035', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93563/']}
|
jdg_378940
|