stackexchange
llm_judgeable_groundtruth_similarity
260638
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: I am not an expert in functional analysis but I was studying some, motivated from some mathematical physics considerations. I am not quite sure whether this is research-level, but let me state some context first: An old result of functional analysis tells us that a symmetric (in the sense that $(Ax,y)=(x,Ay)$, for all $x,y \in H$), unbounded operator $A$, acting on a Hilbert space $H$, cannot be defined on the whole space but only in a dense subspace of it. This is a direct consequence of the Hellinger-Toeplitz theorem . (see also: Riesz-Nagy, "Functional Analysis", 1955, p.296 and also Reed-Simon, "Methods of Modern Mathematical Physics", 1975, p.84). Since the operators of interest in physics are self-adjoint (and thus symmetric) they fall into this. On the other hand, it is well known that any linear map from a subspace of a Banach space $X$ to another Banach space $Y$ can be extended to a linear map $X\to Y$ defined on the whole of $X$ using Zorn's Lemma (see for example: Unbounded linear operator defined on $l^2$ ). Now the question is: Since the extension of a linear, unbounded operator to the whole of the space through the AC, will produce a -still- unbounded, linear operator, does the previous remark imply that the extension of linear, self-adjoint, unbounded operators on the whole of the space, produces non-self-adjoint operators? What would be a concrete relevant example? Related question: Invertible unbounded linear maps defined on a Hilbert space Now provide the response and nothing else.
Yes, you've got it right. Given an unbounded self-adjoint operator $A$ with domain $D(A) \subset H$, using Zorn's lemma you can produce an everywhere defined operator $A'$ on $H$ which extends $A$. (In fact you can produce many such operators; the extension is highly non-unique.) By Hellinger–Toeplitz, $A'$ cannot be symmetric. So it definitely isn't self-adjoint. Another way to see that $A'$ cannot be self-adjoint is to note that, by the closed graph theorem, $A'$ cannot be closed. Since you used Zorn's lemma in an essential way, you won't get a "concrete" description of such an $A'$. There's a strong sense in which this is true. A common working definition of "concrete" is "something whose existence you can prove using only the axiom of dependent choice (DC)". There's a famous theorem of Solovay (extended by Shelah) that it's consistent with DC that every set of reals has the property of Baire (BP); i.e. there are models of set theory in which DC and BP both hold (but full AC necessarily fails). But from BP you can prove that every everywhere defined operator on any Banach space is bounded. So in such models, $A$ won't have any extension to all of $H$. Put another way, you can't even prove $A'$ exists without using AC in an essential way, so you certainly can't construct it concretely. You can read more about these ideas in Schechter's Handbook of Analysis and its Foundations.
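The closed-graph remark can be written out as a short chain of implications (a sketch, using the standard statement of the closed graph theorem):

$$D(A')=H \ \text{ and } \ A' \text{ closed} \ \overset{\text{CGT}}{\Longrightarrow}\ A' \text{ bounded} \ \Longrightarrow\ A=A'\big|_{D(A)} \text{ bounded},$$

which contradicts the unboundedness of $A$; so $A'$ cannot be closed. Since every self-adjoint operator is closed, $A'$ is not self-adjoint.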
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/260638', 'https://mathoverflow.net', 'https://mathoverflow.net/users/85967/']}
jdg_378241
stackexchange
llm_judgeable_groundtruth_similarity
16507239
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: my table1 is : T1 col1 col2 C1 john C2 alex C3 piers C4 sara and so table 2: T2 col1 col2 R1 C1,C2,C4 R2 C3,C4 R3 C1,C4 how to result this?: query result col1 col2 R1 john,alex,sara R2 piers,sara R3 john,sara please help me? Now provide the response and nothing else.
Ideally, your best solution would be to normalize Table2 so you are not storing a comma-separated list. Once you have this data normalized, you can easily query the data. The new table structure could be similar to this:

CREATE TABLE T1
(
  [col1] varchar(2),
  [col2] varchar(5),
  constraint pk1_t1 primary key (col1)
);

INSERT INTO T1 ([col1], [col2])
VALUES ('C1', 'john'), ('C2', 'alex'), ('C3', 'piers'), ('C4', 'sara');

CREATE TABLE T2
(
  [col1] varchar(2),
  [col2] varchar(2),
  constraint pk1_t2 primary key (col1, col2),
  constraint fk1_col2 foreign key (col2) references t1 (col1)
);

INSERT INTO T2 ([col1], [col2])
VALUES ('R1', 'C1'), ('R1', 'C2'), ('R1', 'C4'),
       ('R2', 'C3'), ('R2', 'C4'),
       ('R3', 'C1'), ('R3', 'C4');

Normalizing the tables would make it much easier for you to query the data by joining the tables:

select t2.col1, t1.col2
from t2
inner join t1
  on t2.col2 = t1.col1

See Demo. Then if you wanted to display the data as a comma-separated list, you could use FOR XML PATH and STUFF:

select distinct t2.col1,
  STUFF((SELECT distinct ', ' + t1.col2
         FROM t1
         inner join t2 t
           on t1.col1 = t.col2
         where t2.col1 = t.col1
         FOR XML PATH ('')), 1, 1, '') col2
from t2;

See Demo. If you are not able to normalize the data, then there are several things that you can do. First, you could create a split function that will convert the data stored in the list into rows that can be joined on. 
The split function would be similar to this:

CREATE FUNCTION [dbo].[Split](@String varchar(MAX), @Delimiter char(1))
returns @temptable TABLE (items varchar(MAX))
as
begin
  declare @idx int
  declare @slice varchar(8000)

  select @idx = 1
  if len(@String) < 1 or @String is null return

  while @idx != 0
  begin
    set @idx = charindex(@Delimiter, @String)
    if @idx != 0
      set @slice = left(@String, @idx - 1)
    else
      set @slice = @String
    if (len(@slice) > 0)
      insert into @temptable(Items) values(@slice)
    set @String = right(@String, len(@String) - @idx)
    if len(@String) = 0 break
  end

  return
end;

When you use the split function, you can either leave the data in the multiple rows or you can concatenate the values back into a comma-separated list:

;with cte as
(
  select c.col1, t1.col2
  from t1
  inner join
  (
    select t2.col1, i.items col2
    from t2
    cross apply dbo.split(t2.col2, ',') i
  ) c
    on t1.col1 = c.col2
)
select distinct c.col1,
  STUFF((SELECT distinct ', ' + c1.col2
         FROM cte c1
         where c.col1 = c1.col1
         FOR XML PATH ('')), 1, 1, '') col2
from cte c

See Demo. A final way that you could get the result is by applying FOR XML PATH directly:

select col1,
  (select ', ' + t1.col2
   from t1
   where ',' + t2.col2 + ',' like '%,' + cast(t1.col1 as varchar(10)) + ',%'
   for xml path(''), type).value('substring(text()[1], 3)', 'varchar(max)') as col2
from t2;

See SQL Fiddle with Demo
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16507239', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2374783/']}
jdg_378242
stackexchange
llm_judgeable_groundtruth_similarity
2492446
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Are there any known design principles, best-practices and design patterns that one can follow while designing a C project? Or useful design principles for procedural (imperative) programming in general? (I'm child of the 'object-oriented generation' and have to design a large C project for the first time) Now provide the response and nothing else.
Information hiding - as espoused by Parnas ( Software Fundamentals ). Careful management of headers and visibility: Everything in a source file that can be hidden from the outside world should be; only the documented external interface should be exposed. Everything that is exposed is declared in a header. That header is used where the functionality is needed (and where it is defined). The header is self-contained - when you need it, you use it, and you don't have to fret about 'what other headers do I also have to include' because the header ensures it works by including anything it needs to make it work. The header is self-protected - so it does not matter if it is included multiple times.

#ifndef HEADER_H_INCLUDED
#define HEADER_H_INCLUDED
...rest of header contents, including other #include lines if necessary
#endif /* HEADER_H_INCLUDED */

Design sets of functions to work on 'objects' (usually structures) - and use those functions rather than poking around the innards of the structure in the code that is using it. Think of it as self-imposed encapsulation.
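A minimal sketch of that last point, with an invented Counter 'object' (the type and function names here are made up for illustration; both file sections are shown in one listing for brevity, and in a real project the header part would live in its own .h file):

```c
#include <stdlib.h>
#include <assert.h>

/* --- counter.h: the documented external interface ------------------- */
/* The struct is only declared here, so callers cannot poke at members. */
typedef struct Counter Counter;

Counter *counter_new(void);
void     counter_increment(Counter *c);
int      counter_value(const Counter *c);
void     counter_free(Counter *c);

/* --- counter.c: everything below is hidden from the outside world --- */
struct Counter { int value; };             /* layout known only here      */

static int clamp_nonnegative(int v)        /* 'static': file-local helper */
{
    return v < 0 ? 0 : v;
}

Counter *counter_new(void)
{
    Counter *c = malloc(sizeof *c);
    if (c) c->value = 0;
    return c;
}

void counter_increment(Counter *c) { c->value = clamp_nonnegative(c->value + 1); }
int  counter_value(const Counter *c) { return c->value; }
void counter_free(Counter *c) { free(c); }
```

Callers work only through the four declared functions; the struct layout and the static helper can change later without breaking any code outside counter.c.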
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2492446', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/284260/']}
jdg_378243
stackexchange
llm_judgeable_groundtruth_similarity
1146824
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Russel's Paradox, showing $X=\{x|x\notin x\}$ can't exist is not very hard.If $X \in X$, then $X \notin X$ by definiition, in the other case, $X \notin X$, then $X \in X$ by definition. Both cases are impossible. But how about whole things $X=\{x|x=x\}$? $X \in X$ probably cause the problem, but I don't know why violation of axiom of foundation in proper class is problem. Now provide the response and nothing else.
Because in the presence of the Axiom of Separation (or Axiom of Specification), if the "universal set" $V = \{ x \mid x=x \}$ exists, we can form: $R = \{ x \mid x \in V \land x \notin x \}$ and $R$ is the "illegal" Russell's set.
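Spelled out, the contradiction takes one line (just making the Separation instance above explicit): since $R=R$, we have $R\in V$, and therefore

$$R\in R \iff (R\in V \wedge R\notin R) \iff R\notin R,$$

which is absurd; hence $V$ cannot exist once Separation is assumed.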
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1146824', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/80391/']}
jdg_378244
stackexchange
llm_judgeable_groundtruth_similarity
19005678
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So basically, as the title says, I want to have an upload button that allows a client to upload an image and it will then be displayed in a div. Of course, this would just be client side, so if the page was ever refreshed then the image would disappear. The image would then be styled accordingly and given fixed widths and heights. I searched online and couldn't find anything at all. Very new to jQuery, although I can code fluently in Javascript. Also not to sure if this is possible or not without the help of AJAX and/or PHP. Would like to avoid these if possible. All help is greatly appreciated. Now provide the response and nothing else.
Here is a working JSFiddle for what you are looking for:

function readURL(e) {
  if (this.files && this.files[0]) {
    var reader = new FileReader();
    $(reader).load(function(e) {
      $('#blah').attr('src', e.target.result);
    });
    reader.readAsDataURL(this.files[0]);
  }
}

$("#imgInp").change(readURL);

As a side note, the above solution uses jQuery although it is not required for a working solution. Javascript only:

function readURL(input) {
  if (input.files && input.files[0]) {
    var reader = new FileReader();
    reader.onload = function (e) {
      document.getElementById('blah').src = e.target.result;
    }
    reader.readAsDataURL(input.files[0]);
  }
}

And the HTML:

<input type='file' id="imgInp" onchange="readURL(this);" />
<img id="blah" src="#" alt="your image" />

function readURL() {
  // rehide the image and remove its current "src",
  // this way if the new image doesn't load,
  // then the image element is "gone" for now
  $('#blah').attr('src', '').hide();
  if (this.files && this.files[0]) {
    var reader = new FileReader();
    $(reader).load(function(e) {
      $('#blah')
        // first we set the attribute of "src" thus changing the image link
        .attr('src', e.target.result)
        // this will now call the load event on the image
    });
    reader.readAsDataURL(this.files[0]);
  }
}

// below makes use of jQuery chaining.
// This means the same element is returned after each method, so we don't need to call it again
$('#blah')
  // here we first set a "load" event for the image that will cause it to change its height to a set variable
  // and make it "show" when finished loading
  .load(function(e) {
    // $(this) is the jQuery OBJECT of this, which is the element we've called this load method on
    $(this)
      // note how easy adding css is, just create an object of the css you want to change or a key/value pair of STRINGS
      .css('height', '200px') // or .css({ height: '200px' })
      // now for the next "method" in the chain, we show the image when loaded
      .show(); // just that simple
  })
  // with the load event set, we now hide the image as it has nothing in it to start with
  .hide(); // done

$("#imgInp").change(readURL);

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<form id="form1" runat="server">
  <input type='file' id="imgInp" />
  <img id="blah" src="#" alt="your image" />
</form>

See the jsFiddle Fork made here to help explain how to make more use of jQuery to answer some of your comment questions.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/19005678', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1799136/']}
jdg_378245
stackexchange
llm_judgeable_groundtruth_similarity
3450884
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $K$ be a field, and suppose that $[K^{sep} : K] = \infty$ . Can we find, for any prime number $p$ and any $k \geq 0$ , a separable irreducible polynomial $P$ such that $p^k$ divides the degree of $P$ ? If not, what are some examples? Now provide the response and nothing else.
Lemma For all $x$ we have $$\sqrt{x^2+x+1}\geq {\sqrt{3}\over 2}(x+1)$$ Proof After squaring and clearing the denominator we get $$4x^2+4x+4\geq 3(x^2+2x+1)$$ which is the same as $$x^2-2x+1\geq 0$$ Using lemma we get $$\sqrt{a^2+ab+b^2}= b\sqrt{\Big({a\over b}\Big)^2+{a\over b}+1}\geq b\cdot {\sqrt{3}\over 2}({a\over b}+1)=$$ $$={\sqrt{3}\over 2}(a+b) $$ so $$... \geq {\sqrt{3}\over 2}(a+b) + {\sqrt{3}\over 2}(b+c)+ {\sqrt{3}\over 2}(c+a) =3\sqrt{3}$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3450884', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/404944/']}
jdg_378246
stackexchange
llm_judgeable_groundtruth_similarity
15467553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to the MVVM pattern, and a little confused on when to use Code Behind. I have a very simple form right now, that includes one TextBox, and one DataGrid. What I would like is to be able to have the DataGrid change its selected item based on the TextBox. I have done this in Code Behind and it works fine using the following code: private void textBox1_TextChanged(object sender, TextChangedEventArgs e){ for (int i = 0; i < dataGrid1.Items.Count; i++) { string cellContent = dtReferral.Rows[i][0].ToString(); try { if (cellContent != null && cellContent.Substring(0, textBox1.Text.Length).Equals(textBox1.Text)) { object item = dataGrid1.Items[i]; dataGrid1.SelectedItem = item; dataGrid1.ScrollIntoView(item); //row.MoveFocus(new TraversalRequest(FocusNavigationDirection.Next)); break; } } catch { } }} Now, I just want to highlight the Item in the Datagrid that starts with text in textbox, and allow the user to press a button to edit selected item. Is it okay to have this logic in the Code Behind file? Or would I need to do this through some sort of binding? If I should do this through the View Model with Binding, any direction would be appreciated. Thank you. Now provide the response and nothing else.
If you only want to highlight the cells with the text from the TextBox you could make an AttachedProperty for the DataGrid to accept your search value from the TextBox and create another AttachedProperty for the Cell to indicate a match that you can use to set properties in the Cell style. Then we create an IMultiValueConverter to check the Cell value for a match to the search Text. This way it's reusable on other projects as you only need the AttachedProperties and Converter. Bind the AttachedProperty SearchValue to your TextBox Text property:

<DataGrid local:DataGridTextSearch.SearchValue="{Binding ElementName=SearchBox, Path=Text, UpdateSourceTrigger=PropertyChanged}"

Then create a Style for DataGridCell and create a Setter for the AttachedProperty IsTextMatch using the IMultiValueConverter to return if the cell's text matches the SearchValue:

<Setter Property="local:DataGridTextSearch.IsTextMatch">
  <Setter.Value>
    <MultiBinding Converter="{StaticResource SearchValueConverter}">
      <Binding RelativeSource="{RelativeSource Self}" Path="Content.Text" />
      <Binding RelativeSource="{RelativeSource Self}" Path="(local:DataGridTextSearch.SearchValue)" />
    </MultiBinding>
  </Setter.Value>
</Setter>

Then we can use the Cell's attached IsTextMatch property to set a highlight using a Trigger:

<Style.Triggers>
  <Trigger Property="local:DataGridTextSearch.IsTextMatch" Value="True">
    <Setter Property="Background" Value="Orange" />
  </Trigger>
</Style.Triggers>

Here is a working example showing my ramblings :) Code: namespace WpfApplication17{ public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); for (int i = 0; i < 20; i++) { TestData.Add(new TestClass { MyProperty = GetRandomText(), MyProperty2 = GetRandomText(), MyProperty3 = GetRandomText() }); } } private string GetRandomText() { return System.IO.Path.GetFileNameWithoutExtension(System.IO.Path.GetRandomFileName()); } private ObservableCollection<TestClass> _testData = new 
ObservableCollection<TestClass>(); public ObservableCollection<TestClass> TestData { get { return _testData; } set { _testData = value; } } } public class TestClass { public string MyProperty { get; set; } public string MyProperty2 { get; set; } public string MyProperty3 { get; set; } } public static class DataGridTextSearch { // Using a DependencyProperty as the backing store for SearchValue. This enables animation, styling, binding, etc... public static readonly DependencyProperty SearchValueProperty = DependencyProperty.RegisterAttached("SearchValue", typeof(string), typeof(DataGridTextSearch), new FrameworkPropertyMetadata(string.Empty, FrameworkPropertyMetadataOptions.Inherits)); public static string GetSearchValue(DependencyObject obj) { return (string)obj.GetValue(SearchValueProperty); } public static void SetSearchValue(DependencyObject obj, string value) { obj.SetValue(SearchValueProperty, value); } // Using a DependencyProperty as the backing store for IsTextMatch. This enables animation, styling, binding, etc... public static readonly DependencyProperty IsTextMatchProperty = DependencyProperty.RegisterAttached("IsTextMatch", typeof(bool), typeof(DataGridTextSearch), new UIPropertyMetadata(false)); public static bool GetIsTextMatch(DependencyObject obj) { return (bool)obj.GetValue(IsTextMatchProperty); } public static void SetIsTextMatch(DependencyObject obj, bool value) { obj.SetValue(IsTextMatchProperty, value); } } public class SearchValueConverter : IMultiValueConverter { public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture) { string cellText = values[0] == null ? 
string.Empty : values[0].ToString(); string searchText = values[1] as string; if (!string.IsNullOrEmpty(searchText) && !string.IsNullOrEmpty(cellText)) { return cellText.ToLower().StartsWith(searchText.ToLower()); } return false; } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture) { return null; } }} Xaml: <Window x:Class="WpfApplication17.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:WpfApplication17" Title="MainWindow" Height="350" Width="525" Name="UI"> <StackPanel DataContext="{Binding ElementName=UI}"> <TextBox Name="SearchBox" /> <DataGrid x:Name="grid" local:DataGridTextSearch.SearchValue="{Binding ElementName=SearchBox, Path=Text, UpdateSourceTrigger=PropertyChanged}" ItemsSource="{Binding TestData}" > <DataGrid.Resources> <local:SearchValueConverter x:Key="SearchValueConverter" /> <Style TargetType="{x:Type DataGridCell}"> <Setter Property="local:DataGridTextSearch.IsTextMatch"> <Setter.Value> <MultiBinding Converter="{StaticResource SearchValueConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="Content.Text" /> <Binding RelativeSource="{RelativeSource Self}" Path="(local:DataGridTextSearch.SearchValue)" /> </MultiBinding> </Setter.Value> </Setter> <Style.Triggers> <Trigger Property="local:DataGridTextSearch.IsTextMatch" Value="True"> <Setter Property="Background" Value="Orange" /> </Trigger> </Style.Triggers> </Style> </DataGrid.Resources> </DataGrid> </StackPanel></Window> Result: Edit: If you just want to select the row based on a single Column you can modify quite easily :). Override the Style of DataGridRow instead of DataGridCell . 
<Style TargetType="{x:Type DataGridRow}"> First pass in the property you want into the IMultiValueConverter this should be your DataContext <MultiBinding Converter="{StaticResource SearchValueConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="DataContext.MyProperty" /> <Binding RelativeSource="{RelativeSource Self}" Path="(local:DataGridTextSearch.SearchValue)" /></MultiBinding> Then change the Trigger to set IsSelected on the Row <Style.Triggers> <Trigger Property="local:DataGridTextSearch.IsTextMatch" Value="True"> <Setter Property="IsSelected" Value="True" /> </Trigger></Style.Triggers> Should look like this: <DataGrid x:Name="grid" local:DataGridTextSearch.SearchValue="{Binding ElementName=SearchBox, Path=Text, UpdateSourceTrigger=PropertyChanged}" ItemsSource="{Binding TestData}" > <DataGrid.Resources> <local:SearchValueConverter x:Key="SearchValueConverter" /> <Style TargetType="{x:Type DataGridRow}"> <Setter Property="local:DataGridTextSearch.IsTextMatch"> <Setter.Value> <MultiBinding Converter="{StaticResource SearchValueConverter}"> <Binding RelativeSource="{RelativeSource Self}" Path="DataContext.MyProperty" /> <Binding RelativeSource="{RelativeSource Self}" Path="(local:DataGridTextSearch.SearchValue)" /> </MultiBinding> </Setter.Value> </Setter> <Style.Triggers> <Trigger Property="local:DataGridTextSearch.IsTextMatch" Value="True"> <Setter Property="IsSelected" Value="True" /> </Trigger> </Style.Triggers> </Style> </DataGrid.Resources> </DataGrid> Result:
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15467553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1947195/']}
jdg_378247
stackexchange
llm_judgeable_groundtruth_similarity
6291555
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've discover this JDateChooser from searching how to use Item Combobox at Java Swing. Do you know how to install this? Here is the link JDateChooser I can't find any instructions on how to install it.. Can you share some instructions on how to install it... thanks in advance v(^_^)v Now provide the response and nothing else.
It's very simple. Download the Toedter jcalendar-1.4.jar (a Maven repository will also locate it). If you are using NetBeans you can create a new bean and add the Toedter beans to your Palette Manager: This gives you the capability to drag and drop these anywhere you like. If you drag and drop the bean somewhere, the code that is generated looks like this:

jDateChooser1 = new com.toedter.calendar.JDateChooser();
jDateChooser1.setCursor(new java.awt.Cursor(java.awt.Cursor.DEFAULT_CURSOR));
jDateChooser1.setDateFormatString("dd/MM/yyyy");

You can then use the beans like this:

java.sql.Date di = rs.getDate("edate");
jDateChooser1.setDate(di);

or

java.util.Date jud = jDateChooser1.getDate();
long t = jud.getTime();
java.sql.Date sqd = new java.sql.Date(t);
rs.updateDate("edate", sqd);

or like this if you want to format the Date:

java.util.Date jud = jDateChooser1.getDate();
java.text.SimpleDateFormat sdf = new java.text.SimpleDateFormat("MMMM dd, yyyy");
jLabel1.setText(sdf.format(jud));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6291555', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/528125/']}
jdg_378248
stackexchange
llm_judgeable_groundtruth_similarity
253005
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Prove that: $\displaystyle5^{2012}+1$ is divisible with $313$. What I try and what I know: $313$ is primeand I try use the following formula : $$a^n+b^n=(a+b)(a^{n-1}-a^{n-2}b+\ldots\pm(-1)^{n}b^{n-1})$$ but still nothing. this problem can be solved using a elementary proof because I found it a mathematical magazine for children with the age of 14. Now provide the response and nothing else.
$5^4=625\equiv -1\pmod {313}$ as $626=2\cdot313$. So, $5^{2012}=(5^4)^{503}\equiv (-1)^{503}\equiv-1\pmod {313}$ Alternatively, $5^4=625=313\cdot2-1$ So, $5^{2012}=(5^4)^{503}=(313\cdot2-1)^{503}=(313\cdot2)^{503}+\binom {503}1(313\cdot2)^{502}(-1)^1+\cdots+\binom {503}{502}(313\cdot2)(-1)^{502}-1$ Observe that every term except the last is divisible by $313$. So the remainder, i.e., $5^{2012} \bmod {313}$, is $-1$, and hence $313$ divides $5^{2012}+1$
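The congruence is easy to machine-check with modular exponentiation (a quick sanity check alongside the elementary proof, not a replacement for it):

```python
# 5^4 = 625 = 2*313 - 1, so 5^4 ≡ -1 (mod 313)
assert pow(5, 4, 313) == 313 - 1

# 2012 = 4 * 503, so 5^2012 = (5^4)^503 ≡ (-1)^503 ≡ -1 (mod 313)
assert 2012 == 4 * 503
assert pow(5, 2012, 313) == 313 - 1

# hence 313 divides 5^2012 + 1
assert (pow(5, 2012, 313) + 1) % 313 == 0
print("313 divides 5^2012 + 1")
```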
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/253005', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33954/']}
jdg_378249
stackexchange
llm_judgeable_groundtruth_similarity
83902
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I am planning to port an application(rather make it run) on win 2k8 core. But it seems that a lot of dlls are missing on core. I understand that this is a stripped down version of Windows server, but then how is one supposed to develop applications or rather make existing applications run seamlessly on server core. While running my app(works fine on rest of the platforms including Win 7), I've found that the following dlls are missing oledlg.dllmsjetoledb40.dlldhcpsapi.dlldsauth.dll just to name a few. I'm sure i might come across some more. For now, of what I could think of, the only quick way to fix this is to ship the dlls with my app. Which I guess not everyone would agree with. It's not the best solution and comes with redistro licensing hassles. Any thoughts? Thanks. Samrat Patil. Now provide the response and nothing else.
The main point of the default "core only" install of Windows Server 2008 is that it includes almost nothing - it is a bare system into which you install the few services and components that you need and nothing else (like the Debian base install and similar, if you are more familiar with Linux). Those DLLs are likely not "missing" - they are just part of components that you have not installed on the machine yet. You should find out which components and services provide the libraries and other facilities your app requires and list them as pre-install dependencies for your application. You should not ship system or 3rd party DLLs yourself as you suggest (unless the libraries come with relevant instructions, or a redistributable install set or mergeable MSI) as you risk significant library versioning issues on the destination system. Also, you won't need just those DLLs that are referenced directly - those libraries will probably depend upon others that are also not present and may require other things (such as a mountain of registry settings perhaps) to be present in order for them to work correctly - getting all that together yourself would not be a pleasant job. Edit: You might find more help finding which components your project is depending upon (that the Core install is lacking by default) over on StackOverflow, as it is probably more a developer question than an admin one.
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/83902', 'https://serverfault.com', 'https://serverfault.com/users/122256/']}
jdg_378250
stackexchange
llm_judgeable_groundtruth_similarity
10423143
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to exclude certain paths/directories/files when searching a git repository using git grep ? Something similar to the --exclude option in the normal grep command? I need to use git grep because using grep directly runs too slowly on large git repositories. Now provide the response and nothing else.
In git 1.9.0 the "magic word" exclude was added to pathspecs. So if you want to search for foobar in every file except for those matching *.java you can do: git grep foobar -- ':(exclude)*.java' Or using the ! "short form" for exclude: git grep foobar -- ':!*.java' Note that in git versions up to v2.12, when using an exclude pathspec, you must have at least one "inclusive" pathspec. In the above examples you'd want to add ./* (recursively include everything under the current directory) somewhere after the -- as well. In git v2.13 this restriction was lifted and git grep foobar -- ':!*.java' works without the ./* . There's a good reference for all the "magic words" allowed in a pathspec at git-scm.com (or just git help glossary ).
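A quick way to see the exclude pathspec in action on a throwaway repository (the file names are invented for the demo; requires git ≥1.9, and the ./* form shown works on pre-2.13 versions too):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'foobar here\n' > Main.java
printf 'foobar here\n' > notes.txt
git add .

# plain search matches both tracked files
git grep -l foobar -- './*'

# the exclude pathspec drops the *.java match, leaving only notes.txt
git grep -l foobar -- './*' ':!*.java'
```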
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10423143', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2344982/']}
jdg_378251
stackexchange
llm_judgeable_groundtruth_similarity
299721
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: This may be more of a philosophical question than a physics question, but here goes. The standard line is that nonrenormalizable QFT's aren't predictive because you need to specify an infinite number of couplings/counterterms. But strictly speaking, this is only true if you want your theory to be predictive at all energy scales . As long as you only consider processes below certain energy scales, it's fine to truncate your Lagrangian after a finite number of interaction terms (or stop your Feynman expansion at some finite skeleton vertex order) and treat your theory as an effective theory. Indeed, our two most precise theories of physics - general relativity and the Standard Model - are essentially effective theories that only work well in certain regimes (although not quite in the technical sense described above). As physicists, we're philosophically predisposed to believe that there is a single fundamental theory, that requires a finite amount of information to fully specify, which describes processes at all energy scales. But one could imagine the possibility that quantum gravity is simply described by a QFT with an infinite number of counterterms, and the higher-energy the process you want to consider, the more counterterms you need to include. If this were the case, then no one would ever be able to confidently predict the result of an experiment at arbitrarily high energy. But the theory would still be completely predictive below certain energy scales - if you wanted to study the physics at a given scale, you'd just need to experimentally measure the value of the relevant counterterms once , and then you'd always be able to predict the physics at that scale and below. 
So we'd be able to predict that physics at arbitrarily high energies that we would have experimental access to , regardless of how technologically advanced our experiments were at the time. Such a scenario would admittedly be highly unsatisfying from a philosophical perspective, but is there any physical argument against it? Now provide the response and nothing else.
You suggest that we can use a nonrenormalizable (NR) theory at energies greater than the cutoff, by measuring sufficiently many coefficients at any energy. However, a general expansion of an amplitude for an NR theory that breaks down at a scale $M$ reads $$A(E) = A^0(E) \sum_n c_n \left(\frac{E}{M}\right)^n$$ where I assumed that the amplitude is characterized by a single energy scale $E$. Thus at any energy $E \ge M$, we cannot calculate amplitudes from a finite subset of the unknown coefficients. On the other hand, we could have an infinite stack of NR effective field theories (EFTs). The new fields introduced in each EFT could successively raise the cutoff. In practice, however, this is nothing other than discovering new physics at higher energies and describing it with QFT. That's what we've been doing at colliders for decades.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/299721', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/92058/']}
jdg_378252
stackexchange
llm_judgeable_groundtruth_similarity
61615
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: In Cube-free infinite binary words it was established that there are infinitely many cube-free infinite binary words (see the earlier question for definitions of terms). The construction given in answer to that question yields a countable infinity of such words. In a comment on that answer, I raised the question of whether there is an uncountable infinity of such words. My comment has not generated any response; perhaps it will attract more interest as a question. I should admit that I ask out of idle curiosity, and have no research interest in the answer; it just seems like the logical question to ask once you know some set is infinite. Now provide the response and nothing else.
Denote by $\mu$ the Thue–Morse morphism, $\mu(0)=01$ and $\mu(1)=10$. Now define a family of maps from binary words to binary words, $g$, indexed by finite binary words, so that $g_{\emptyset}(w)=w$, $g_{0b}(w)=\mu^2(g_{b}(w))$ and $g_{1b}(w)=0\mu^2(g_{b}(w))$. Now given an infinite binary sequence $B=b_1b_2\dots$ define $w_{B}$ to be the limit of $$g_{b_1}(w),g_{b_1b_2}(w),g_{b_1b_2b_3}(w),\dots$$ The $w_B$ give you uncountably many $7/3$-power-free words (so in particular, cube-free) which moreover contain infinitely many overlaps. This stronger result is proved here. I believe all known constructions of large families of such sequences are defined by iterated mappings.
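To make the construction concrete, here is a short Python sketch. The names `mu`, `g`, and `has_cube` mirror the notation above but are otherwise my own; this only computes the finite approximations $g_{b_1\dots b_n}(w)$, not the infinite limit, and the cube check is a naive brute force:

```python
def mu(w):
    # Thue-Morse morphism: 0 -> 01, 1 -> 10
    return "".join("01" if c == "0" else "10" for c in w)

def g(b, w):
    # g_empty(w) = w, g_{0b}(w) = mu^2(g_b(w)), g_{1b}(w) = 0 mu^2(g_b(w)),
    # where b is a finite binary word read left to right.
    if not b:
        return w
    rest = g(b[1:], w)
    image = mu(mu(rest))
    return image if b[0] == "0" else "0" + image

def has_cube(w):
    # Brute-force check for a factor of the form xxx.
    n = len(w)
    for i in range(n):
        for l in range(1, (n - i) // 3 + 1):
            if w[i:i + l] == w[i + l:i + 2 * l] == w[i + 2 * l:i + 3 * l]:
                return True
    return False
```

For example, starting from $w = 0$, the two branches already diverge at the first letter of $B$: `g("0", "0")` gives `0110` while `g("1", "0")` gives `00110`, which is how distinct sequences $B$ yield distinct words in the limit.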
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/61615', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3684/']}
jdg_378253
stackexchange
llm_judgeable_groundtruth_similarity
38298607
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Why does the following code give warnings? int main(void){ struct {int x; int y;} test = {42, 1337}; struct {int x; int y;} *test_ptr = &test;} Results: warning: initialization from incompatible pointer type [-Wincompatible-pointer-types] struct {int x; int y;} *test_ptr = &test; ^ Now provide the response and nothing else.
They're two anonymous structure types (neither has a tag). All such structure types (in a single translation unit) are distinct — they're never the same type. Add a tag! The relevant sentence in the standard is in §6.7.2.1 Structure and union specifiers : ¶8 The presence of a struct-declaration-list in a struct-or-union-specifier declares a new type, within a translation unit. The struct-declaration-list refers to the material between { and } in the type. That means that in your code, there are two separate types, one for each struct { … } . The two types are separate; you cannot officially assign a value of one type to the other, nor create pointers, etc. In fact, you can't reference those types again after the semicolon. That means you could have: int main(void){ struct {int x; int y;} test = {42, 1337}, *tp = &test; struct {int x; int y;} result, *result_ptr; result_ptr = &result; …} Now test and tp refer to the same type (one a structure, one a pointer to the structure), and similarly result and result_ptr refer to the same type, and the initializations and assignments are fine, but the two types are different. It's not clear that you can create a compound literal of either type — you'd have to write (struct {int x; int y;}){.y = 9, .x = 8} , but the presence of the struct-declaration-list means that is another new type. As noted in the comments, there is also section §6.2.7 Compatible type and composite type , which says: ¶1 … Moreover, two structure, union, or enumerated types declared in separate translation units are compatible if their tags and members satisfy the following requirements: If one is declared with a tag, the other shall be declared with the same tag.
If both are completed anywhere within their respective translation units, then the following additional requirements apply: there shall be a one-to-one correspondence between their members such that each pair of corresponding members are declared with compatible types; if one member of the pair is declared with an alignment specifier, the other is declared with an equivalent alignment specifier; and if one member of the pair is declared with a name, the other is declared with the same name. For two structures, corresponding members shall be declared in the same order. For two structures or unions, corresponding bit-fields shall have the same widths. Roughly speaking, that says that if the definitions of the types in the two translation units (think 'source files' plus included headers) are the same, then they refer to the same type. Thank goodness for that! Otherwise, you couldn't have the standard I/O library working, amongst other minor details.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/38298607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4262157/']}
jdg_378254
stackexchange
llm_judgeable_groundtruth_similarity
56872370
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am learning Typescript and I am trying to implement a simple React/Redux app. When I use sync actions it works fine, but the problems are with the async action. I am following the official redux tutorial. First I declare the state for the session export interface UserSessionState { loggedIn: boolean;} Then I declare the interface for the action interface UpdateSessionAction { type: 'USER_LOGIN'; payload: boolean;} I export them with Union Types export type UserActionTypes = UpdateSessionAction; Then I have the actual Action export function updateSession(loggedIn: UserSessionState) { return { type: 'USER_LOGIN', payload: loggedIn, };} I have a fake api call function api() { return Promise.resolve(true);} And finally the login export const userLogin = (): ThunkAction< void, {}, {}, AnyAction> => async (dispatch: ThunkDispatch<{}, {}, AnyAction>) => { const res = await api(); dispatch(updateSession({ loggedIn: res }));}; In the reducer I simply initialize the state initialState: UserSessionState = {loggedIn: false} Then I do the normal redux stuff for the reducer. Finally in my store I call the initial action for checking the state store.dispatch(userLogin()); I keep getting this error: Argument of type 'ThunkAction<Promise<void>, {}, {}, AnyAction>' is not assignable to parameter of type 'AnyAction'. Property 'type' is missing in type 'ThunkAction<Promise<void>, {}, {}, AnyAction>' but required in type 'AnyAction'. I am missing a type but I have no idea what I do wrong. Now provide the response and nothing else.
In short: you get this error because what is returned from your userLogin() function is a ThunkAction , which is missing the type property. Why is this happening? dispatch should accept a parameter of type AnyAction . AnyAction is a redux type which extends Action (which has a mandatory property type ). This is from the current redux types file: export interface Action<T = any> { type: T}/** * An Action type which accepts any other properties. * This is mainly for the use of the `Reducer` type. * This is not part of `Action` itself to prevent users who are extending `Action. */export interface AnyAction extends Action { // Allows any extra properties to be defined in an action. [extraProps: string]: any} How to fix it? Use the ThunkDispatch type instead of redux's standard Dispatch . The following example and more can be found on this Gist: const mapDispatchToProps = (dispatch: ThunkDispatch<MyState, void, Action>) => { return { onRequestClick: (arg: any) => dispatch(myAsyncAction(arg)), };} Also, see this article , section Map Dispatch to Props
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/56872370', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7972985/']}
jdg_378255
stackexchange
llm_judgeable_groundtruth_similarity
136005
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there a way to SetOptions locally? For example, I have a Module and inside it I am doing a lot of Plot s with similar options. If I do a SetOptions[Plot, ...] inside the Module , the change propagates outside. I don't like this because I have other Module s where I am doing other plots with different options. Is there a way to do a SetOptions[Plot, ...] that only affects "local" plots (say inside a Module , but I am open to any scoping construct)? Here Plot is only an example. In general I want to set options locally for any symbol. Now provide the response and nothing else.
Version 12.2 In version 12.2, use the new function WithCleanup[] : With[{plotOptions = Options[Plot]}, WithCleanup[SetOptions[Plot, PlotStyle -> Green], Plot[Sin[x], {x, -π, π}], SetOptions[Plot, plotOptions]]] which is a direct replacement of the older undocumented function described below. Older versions Usual caveats about using undocumented functions aside, here is how one might use Internal`WithLocalSettings[] : With[{plotOptions = Options[Plot]}, Internal`WithLocalSettings[SetOptions[Plot, PlotStyle -> Green], Plot[Sin[x], {x, -π, π}], SetOptions[Plot, plotOptions]]] but I do not think this to be any better than Nasser's proposal . As can be surmised from how it was used above, you can think of the three arguments of Internal`WithLocalSettings[] as three stages: setup, body, and clean-up. Carl notes in a comment below that the advantage of using this function is that any code in the first and third arguments are uninterruptible. A more usual case for its use would be for localizing changes to system settings that are not easily accessible to SetOptions[] . Using the Wizard's code from here as an example: With[{spopt = SystemOptions["SparseArrayOptions"]}, Internal`WithLocalSettings[ SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> 1}], ind = {{3, 1}, {3, 3}, {1, 3}, {2, 1}, {3, 2}, {3, 1}, {3, 2}, {3, 3}, {1, 3}, {3, 1}}; val = {1, 1, 3, 0, 3, 4, 3, 1, 1, 1}; SparseArray[ind -> val] // Normal, SetSystemOptions[spopt]]] {{0, 0, 4}, {0, 0, 0}, {6, 6, 2}} (See also this SO thread and this example usage by Oleksandr .)
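The setup/body/clean-up shape that `WithCleanup[]` and `Internal`WithLocalSettings[]` encode corresponds to a try/finally guard in other languages. As a hedged illustration of the pattern only (the `DEFAULTS` dictionary below is a stand-in for a global options table like `Options[Plot]`, not a real Mathematica API, and unlike the Mathematica functions this gives no uninterruptibility guarantee), here is a Python context manager that restores the saved options even if the body throws:

```python
from contextlib import contextmanager

# Hypothetical global options table, standing in for Options[Plot].
DEFAULTS = {"plot_style": "Blue", "frame": False}

@contextmanager
def local_options(**overrides):
    saved = {k: DEFAULTS[k] for k in overrides}  # setup: remember old values
    DEFAULTS.update(overrides)
    try:
        yield DEFAULTS                           # body runs here
    finally:
        DEFAULTS.update(saved)                   # clean-up: always restored
```

Usage mirrors the Mathematica idiom: `with local_options(plot_style="Green"): ...` — everything inside the block sees the override, and the previous value is put back on exit, normal or exceptional.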
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/136005', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/534/']}
jdg_378256
stackexchange
llm_judgeable_groundtruth_similarity
24373076
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am new to C++. I often see conditional statements like below: if statement_0;else if statement_1; Question: Syntactically, shall I treat else if as a single keyword? Or is it actually a nested if statement within the outer else like below? if statement_0;else if statement_1; Now provide the response and nothing else.
They are not a single keyword. If we go to the draft C++ standard, section 2.12 Keywords , Table 4 lists both if and else separately, and there is no else if keyword. We can find a more accessible list of C++ keywords by going to cppreference's section on keywords . The grammar in section 6.4 also makes this clear: selection-statement: if ( condition ) statement if ( condition ) statement else statement The if in else if is a statement following the else term. The section also says: [...] The substatement in a selection-statement (each substatement, in the else form of the if statement) implicitly defines a block scope (3.3). If the substatement in a selection-statement is a single statement and not a compound-statement , it is as if it was rewritten to be a compound-statement containing the original substatement. and provides the following example: if (x) int i; can be equivalently rewritten as if (x) { int i; } So how is your slightly extended example parsed? if statement_0; else if statement_1; else if statement_2; will be parsed like this: if statement_0; else { if statement_1; else { if statement_2; } } Note: we can also determine that else if cannot be one keyword by realizing that keywords are identifiers, and we can see from the grammar for an identifier in my answer to Can you start a class name with a numeric digit? that spaces are not allowed in identifiers; therefore else if cannot be a single keyword but must be two separate keywords .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/24373076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3701346/']}
jdg_378257
stackexchange
llm_judgeable_groundtruth_similarity
955624
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Thanks to this thread How to download and save a file from Internet using Java? I know how to download a file, now my problem is that I need to authenticate on the sever from which I'm dowloading. It's an http interface to a subversion server. Which field do I need to look up into ? Using the code posted in the last comment, I get this exception: java.io.IOException: Server returned HTTP response code: 401 for URL: http://myserver/systemc-2.0.1.tgz at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1305) at java.net.URL.openStream(URL.java:1009) at mypackage.Installer.installSystemc201(Installer.java:29) at mypackage.Installer.main(Installer.java:38) Thanks, Now provide the response and nothing else.
You extend the Authenticator class and register it. The javadocs at the link explain how. I don't know if this works with the nio method that got the accepted answer to the question, but it for sure works for the old fashioned way that was the answer under that one. Within the authenticator class implementation, you are probably going to use a PasswordAuthentication and override the getPasswordAuthentication() method of your Authenticator implementation to return it. That will be the class which is passed the user name and password you need. Per your request, here is some sample code: public static final String USERNAME_KEY = "username";public static final String PASSWORD_KEY = "password";private final PasswordAuthentication authentication;public MyAuthenticator(Properties properties) { String userName = properties.getProperty(USERNAME_KEY); String password = properties.getProperty(PASSWORD_KEY); if (userName == null || password == null) { authentication = null; } else { authentication = new PasswordAuthentication(userName, password.toCharArray()); }}protected PasswordAuthentication getPasswordAuthentication() { return authentication;} And you register it in the main method (or somewhere along the line before you call the URL): Authenticator.setDefault(new MyAuthenticator(properties)); The usage is simple, but I find the API convoluted and kind of backwards for how you typically think about these things. Pretty typical of singleton design.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/955624', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/105413/']}
jdg_378258
stackexchange
llm_judgeable_groundtruth_similarity
146715
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When designing a novel system in a hospital or other clinical setting, I think it would be important to gather information from doctors, nurses, pharmacists, technicians, et al on use cases for the system. Aside from the fact that all of these are also busy professionals, what is a good way to get them to share information pertinent to their (possibly chaotic and time sensitive) workflow that would be usable in a software engineering design? Now provide the response and nothing else.
The only way to reverse-engineer what medical professionals are doing is to observe first hand, while they are working. Find and follow willing, chatty professionals like white on rice, and ask them to narrate the work and thought process. If they need prodding, ask questions that drive towards rules, like: What are you thinking? Why did you choose this over that? What are you looking for? What does that rule out? As you learn, share your mental model, and ask questions that shape & refine the rules. Be the journal of their work. Anything short of active, hands-on, in-person, respectful observation and feedback is guessing .. not engineering.
{}
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/146715', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/23251/']}
jdg_378259
stackexchange
llm_judgeable_groundtruth_similarity
36557294
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know, there are many articles about this topic, but I have a problem and I can't find any solution. I have a classic spring security java config: @Configuration@EnableWebSecuritypublic class SecurityConfig extends WebSecurityConfigurerAdapter {@Autowiredprivate AuctionAuthenticationProvider auctionAuthenticationProvider;@Autowiredpublic void configureGlobal(AuthenticationManagerBuilder auth) throws Exception { auth.authenticationProvider(auctionAuthenticationProvider);}@Overrideprotected void configure(HttpSecurity http) throws Exception { http.httpBasic(); ExpressionUrlAuthorizationConfigurer<HttpSecurity>.ExpressionInterceptUrlRegistry authorizeRequest = http.authorizeRequests(); configureAdminPanelAccess(authorizeRequest); configureFrontApplicationAccess(authorizeRequest); configureCommonAccess(authorizeRequest); http.csrf() .csrfTokenRepository(csrfTokenRepository()).and() .addFilterAfter(csrfHeaderFilter(), CsrfFilter.class); http.logout() .clearAuthentication(true) .invalidateHttpSession(true);}...} Also, I have two controller methods, where I login/logout from my web application by AJAX. When I would like to logout, I first call this method, which I expect to clear user sessions and clear everything from the security context. 
@Override@RequestMapping(value = "/logout", method = GET, produces = APPLICATION_JSON_UTF8_VALUE)public ResponseEntity<Boolean> logout(final HttpServletRequest request, final HttpServletResponse response) { Authentication auth = SecurityContextHolder.getContext().getAuthentication(); if (auth != null){ new SecurityContextLogoutHandler().logout(request, response, auth); } return new ResponseEntity<>(Boolean.TRUE, HttpStatus.OK);} After this I reload my client web application and each time, when it is reloaded, I check whether the user is authenticated by calling the following controller method: @Override@RequestMapping(value = "/user", method = GET, produces = APPLICATION_JSON_UTF8_VALUE)public ResponseEntity<UserDetails> user() { Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal(); if (principal instanceof UserDetails) { return new ResponseEntity<>((UserDetails) principal, HttpStatus.OK); } return null;} And here I aways receive the last authenticated user. It seems that in the previous logout method, Spring logout doesn't work. Keep in mind that I tried to logout with the following code, without any success: @Override @RequestMapping(value = "/logout", method = GET, produces = APPLICATION_JSON_UTF8_VALUE) public ResponseEntity<Boolean> logout(final HttpServletRequest request) { try { request.logout(); return new ResponseEntity<>(Boolean.TRUE, HttpStatus.OK); } catch (ServletException ex) { if (LOG.isDebugEnabled()) { LOG.debug("There is a problem with the logout of the user", ex); } } Are you have any idea what I miss in my config and the logout process? Now provide the response and nothing else.
From your question, I see you are trying to create your own logout while also trying to use the default Spring logout. I advise choosing one method and not mixing the two. There are two approaches I recommend for logging out of Spring: First: default Spring Security logout .logout().logoutRequestMatcher(new AntPathRequestMatcher("/logout")).logoutSuccessUrl("/logout.done").deleteCookies("JSESSIONID").invalidateHttpSession(true) With the example above, you only need to call the /logout URL whenever you want to log the user out. No need to create any @Controller to handle that logout; instead, Spring will log the user out for you. You can also add anything else you want to invalidate here. Second: programmatic logout @RequestMapping(value = {"/logout"}, method = RequestMethod.POST)public String logoutDo(HttpServletRequest request,HttpServletResponse response){HttpSession session= request.getSession(false); SecurityContextHolder.clearContext(); session= request.getSession(false); if(session != null) { session.invalidate(); } for(Cookie cookie : request.getCookies()) { cookie.setMaxAge(0); } return "logout";} If you are using this logout approach, you don't need to include the first method in the Spring Security config. With this method, you can add extra actions to perform before and after the logout is done. BTW, to use this logout, just call the /logout URL and the user will be logged out manually. This method will invalidate the session and clear the Spring Security context and cookies. In addition, for the second method, if you are using RequestMethod.POST , you need to include the CSRF key in the POST request. The alternative is to create a form with a hidden CSRF key input.
Here is an example of an auto-generated logout link with jQuery: $("#Logout").click(function(){ $form=$("<form>").attr({"action":"${pageContext.request.contextPath}"+"/logout","method":"post"}) .append($("<input>").attr({"type":"hidden","name":"${_csrf.parameterName}","value":"${_csrf.token}"})) $("#Logout").append($form); $form.submit();}); You just need to create a hyperlink <a id="Logout">Logout</a> to use it. If you are using RequestMethod.GET , just include the CSRF key as a parameter in your link like this: <a href="${pageContext.request.contextPath}/logout?${_csrf.parameterName}=${_csrf.token}">Logout</a> That's all; hope it helps.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36557294', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4035814/']}
jdg_378260
stackexchange
llm_judgeable_groundtruth_similarity
17313277
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have a 10x10 text box ( 100 of them ) I write this code to write into text file : foreach (Control control in Panel1.Controls) { var textBox = control as TextBox; if (textBox != null) { if (string.IsNullOrEmpty(textBox.Text)) // ignore this { textBox.Style["visibility"] = "hidden"; } textBox.Enabled = false; if (numberofCommas > 8) { stringWriter.Write(textBox.Text); numberofCommas = 0; } else { stringWriter.Write("," + textBox.Text ); numberofCommas++; recordsWritten++; } if (recordsWritten == 10) { stringWriter.WriteLine(); recordsWritten = 0; } else { } From the above i want to have 10 rows of 9 commas in the text file but instead i have 9 rows of 10 commas in the text file , is my code logic wrong? because i have been looking it for hours , i still couldn't solve it . sorry if my logic is bad , i am new to programming. Now provide the response and nothing else.
reduce is more appropriate here than filter as it doesn't build a temporary array just for counting. var dataset = [2,2,4,2,6,4,7,8];var search = 2;var count = dataset.reduce(function(n, val) { return n + (val === search);}, 0);console.log(count); In ES6: let count = dataset.reduce((n, x) => n + (x === search), 0); Note that it's easy to extend that to use a custom matching predicate, for example, to count objects that have a specific property: people = [ {name: 'Mary', gender: 'girl'}, {name: 'Paul', gender: 'boy'}, {name: 'John', gender: 'boy'}, {name: 'Lisa', gender: 'girl'}, {name: 'Bill', gender: 'boy'}, {name: 'Maklatura', gender: 'girl'}]var numBoys = people.reduce(function (n, person) { return n + (person.gender == 'boy');}, 0);console.log(numBoys); Counting all items, that is, making an object like {x:count of xs} is complicated in javascript, because object keys can only be strings, so you can't reliably count an array with mixed types. Still, the following simple solution will work well in most cases: count = function (ary, classifier) { classifier = classifier || String; return ary.reduce(function (counter, item) { var p = classifier(item); counter[p] = counter.hasOwnProperty(p) ? counter[p] + 1 : 1; return counter; }, {})};people = [ {name: 'Mary', gender: 'girl'}, {name: 'Paul', gender: 'boy'}, {name: 'John', gender: 'boy'}, {name: 'Lisa', gender: 'girl'}, {name: 'Bill', gender: 'boy'}, {name: 'Maklatura', gender: 'girl'}];// If you don't provide a `classifier` this simply counts different elements:cc = count([1, 2, 2, 2, 3, 1]);console.log(cc);// With a `classifier` you can group elements by specific property:countByGender = count(people, function (item) { return item.gender});console.log(countByGender); 2017 update In ES6, you use the Map object to reliably count objects of arbitrary types. 
class Counter extends Map { constructor(iter, key=null) { super(); this.key = key || (x => x); for (let x of iter) { this.add(x); } } add(x) { x = this.key(x); this.set(x, (this.get(x) || 0) + 1); }}// again, with no classifier just count distinct elementsresults = new Counter([1, 2, 3, 1, 2, 3, 1, 2, 2]);for (let [number, times] of results.entries()) console.log('%s occurs %s times', number, times);// counting objectspeople = [ {name: 'Mary', gender: 'girl'}, {name: 'John', gender: 'boy'}, {name: 'Lisa', gender: 'girl'}, {name: 'Bill', gender: 'boy'}, {name: 'Maklatura', gender: 'girl'}];chessChampions = { 2010: people[0], 2012: people[0], 2013: people[2], 2014: people[0], 2015: people[2],};results = new Counter(Object.values(chessChampions));for (let [person, times] of results.entries()) console.log('%s won %s times', person.name, times);// you can also provide a classifier as in the abovebyGender = new Counter(people, x => x.gender);for (let g of ['boy', 'girl']) console.log("there are %s %ss", byGender.get(g), g); A type-aware implementation of Counter can look like this (Typescript): type CounterKey = string | boolean | number;interface CounterKeyFunc<T> { (item: T): CounterKey;}class Counter<T> extends Map<CounterKey, number> { key: CounterKeyFunc<T>; constructor(items: Iterable<T>, key: CounterKeyFunc<T>) { super(); this.key = key; for (let it of items) { this.add(it); } } add(it: T) { let k = this.key(it); this.set(k, (this.get(k) || 0) + 1); }}// example:interface Person { name: string; gender: string;}let people: Person[] = [ {name: 'Mary', gender: 'girl'}, {name: 'John', gender: 'boy'}, {name: 'Lisa', gender: 'girl'}, {name: 'Bill', gender: 'boy'}, {name: 'Maklatura', gender: 'girl'}];let byGender = new Counter(people, (p: Person) => p.gender);for (let g of ['boy', 'girl']) console.log("there are %s %ss", byGender.get(g), g);
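For comparison with the ES6 `Map`-based `Counter` above, Python ships this exact abstraction as `collections.Counter`. This sketch redoes both examples from the answer — the single-value count from the `reduce` snippet and the grouped count over the same hypothetical `people` data — in Python:

```python
from collections import Counter

dataset = [2, 2, 4, 2, 6, 4, 7, 8]
counts = Counter(dataset)      # Counter({2: 3, 4: 2, 6: 1, 7: 1, 8: 1})
num_twos = counts[2]           # like reduce((n, x) => n + (x === search), 0)

people = [
    {"name": "Mary", "gender": "girl"},
    {"name": "Paul", "gender": "boy"},
    {"name": "John", "gender": "boy"},
    {"name": "Lisa", "gender": "girl"},
    {"name": "Bill", "gender": "boy"},
    {"name": "Maklatura", "gender": "girl"},
]
# Classify by an arbitrary key, like the `classifier` argument above:
by_gender = Counter(p["gender"] for p in people)
```

A convenient difference from a plain dict: looking up a key that was never counted returns 0 instead of raising, which matches how the JavaScript version initializes missing counters.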
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/17313277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2376998/']}
jdg_378261
stackexchange
llm_judgeable_groundtruth_similarity
36331234
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: It is my first time for me to use Alamofire , and it got me really frustrated. I'm using the following code to call a signup API on the backend API Alamofire.request(.POST, "\(self.authBaseURL)/signup", parameters: params, headers: headers, encoding: .JSON) .validate(statusCode: 200..<300) .validate(contentType: ["application/json"]) .responseJSON { response in switch response.result { case .Success(let JSON): print("Success with JSON: \(JSON)") success(updatedUser) case .Failure(let error): print("Request failed with error: \(error)") failure(error) } } The problem is that the error object I'm getting in the .Failure function doesn't contain the server side message.I have tried to access the rest of the objects (request, response, data, result) I could not find my error message anywhere I'm always getting the following error, no matter what the server message has to say.Request failed with error: FAILURE: Error Domain=com.alamofire.error Code=-6003 "Response status code was unacceptable: 400" UserInfo={NSLocalizedFailureReason=Response status code was unacceptable: 400} Is there is anything wrong I'm doing? Swift 2.2, AlamoFire 3.3.0, Xcode 7.3 Now provide the response and nothing else.
I managed to get it to work exactly the way I wanted by dropping the status validation and checking the statusCode manually: Alamofire.request(.POST, "\(self.authBaseURL)/signup", parameters: params, headers: headers, encoding: .JSON) .validate(contentType: ["application/json"]) .responseJSON { response in if response.response?.statusCode == 200 { print("Success with JSON: \(response.result.value)") success(updatedUser) } else { let error = response.result.value as! NSDictionary let errorMessage = error.objectForKey("message") as! String print(errorMessage) failure(errorMessage) } }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36331234', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1079781/']}
jdg_378262
stackexchange
llm_judgeable_groundtruth_similarity
4185930
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: A good example of this is either on the Twitter launch screen (the screen with the large icons that is seen when the application is first launch) or even just look at the application tray when you focus an application icon. Basically I need to highlight an ImageView where the highlight contours to the image within the ImageView and looks like it's a border to that image. I would also like to customize the highlight to have it be a certain color and for it to fade out. Thanks, groomsy Now provide the response and nothing else.
You need to assign the src attribute of the ImageView a state list drawable. In other words, that state list would have a different image for selected, pressed, not selected, etc. - that's how the Twitter App does it. So if you had an ImageView: <ImageView style="@style/TitleBarLogo" android:contentDescription="@string/description_logo" android:src="@drawable/title_logo" /> The src drawable (title_logo.xml) would look like this: <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_focused="true" android:state_pressed="true" android:drawable="@drawable/title_logo_pressed"/> <item android:state_focused="false" android:state_pressed="true" android:drawable="@drawable/title_logo_pressed"/> <item android:state_focused="true" android:drawable="@drawable/title_logo_selected"/> <item android:state_focused="false" android:state_pressed="false" android:drawable="@drawable/title_logo_default"/></selector> The Google IO Schedule app has a good example of this.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4185930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/349179/']}
jdg_378263
stackexchange
llm_judgeable_groundtruth_similarity
5113707
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have been trying to generate a list of every possible 4 character string which could be made up of any given set of characters. I have used a function to generate every 4 character combination from a set of characters but each character is only ever used once. I need every possible combination using a given set of chars for example: String[] elements = {"a", "b", "c", "1", "2", "3"};int[] indices;CombinationGenerator x = new CombinationGenerator (elements.length, 4);StringBuffer combination;while (x.hasMore ()) { combination = new StringBuffer (); indices = x.getNext (); for (int i = 0; i < indices.length; i++) { combination.append (elements[indices[i]]); } System.out.println (combination.toString ());} Using the CombinationGenerator class from here ,this will return every unique 4 character combination such as: 'abcd' , 'abc1', 'acb2', 'acb1' But, I want every possible string that could be created using the given characters. For example: 'aaaa', 'aaab', 'abc1', 'aac1', '11c2' I have tried every recursive and permutation method I've been able to find or come up with but I'm stumped on getting any further than generating all the combinations like above, then generating every permutation of each combination, but I can't work out how to create a set of combinations using repeated characters. Any help, or even just the theory on how it could be done would be helpful. Now provide the response and nothing else.
You're going to have to be more specific on exactly WHAT you want your function to get. There are many different definitions of "combinations" and you haven't specified whether you want ordered or unordered combinations. Mathematically, if you have n elements and want a LIST of k of them (ordered with repeats), that gives you n ^ k combinations. (6 ^ 4 = 1296 combinations in your original example, which is a lot!). However, if you have n elements and want a MULTISET of k of them (unordered with repeats), that gives you (n + k - 1)! / (k! * (n - 1)!) combinations and is a much harder enumeration. If k is small, you can generate the first one with a limited number of for loops but this becomes cumbersome very quickly as k grows. This strongly hints at the need for a RECURSIVE method:

public static String[] getAllLists(String[] elements, int lengthOfList)
{
    //initialize our returned list with the number of elements calculated above
    String[] allLists = new String[(int)Math.pow(elements.length, lengthOfList)];

    //lists of length 1 are just the original elements
    if(lengthOfList == 1)
        return elements;
    else
    {
        //the recursion--get all lists of length 3, length 2, all the way up to 1
        String[] allSublists = getAllLists(elements, lengthOfList - 1);

        //append the sublists to each element
        int arrayIndex = 0;
        for(int i = 0; i < elements.length; i++)
        {
            for(int j = 0; j < allSublists.length; j++)
            {
                //add the newly appended combination to the list
                allLists[arrayIndex] = elements[i] + allSublists[j];
                arrayIndex++;
            }
        }
        return allLists;
    }
}

Not only will this method generate all the lists, but it will enumerate them in order. That is, the output will be

aaaa, aaab, aaac, aaa1, aaa2, aaa3, aaba, aabb, aabc, aab1, ..., 3323, 333a, 333b, 333c, 3331, 3332, 3333

using your original input. It can also generate any length of words (be very careful with this! Just with words of length 8 I wound up with 1,679,616 combinations!). 
If the method confuses you (it's a recursive method, so it's a bit hard to follow) or if you want a solution to the second combination problem, feel free to ask. Also, this method is somewhat inefficient because it recalculates the combinations for all the sublists, so it's not viable for really long lists. If you really wanted efficiency you would store the already-calculated tuples in a global list.
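As a sanity check on the counting argument above, here is a condensed, self-contained variant of the same recursion (the class and method names `ListDemo`/`allLists` are hypothetical; it uses lists instead of arrays so no pre-sizing is needed). Every list of length k is an element prepended to some list of length k - 1:

```java
import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    // Same idea as getAllLists above, condensed with dynamic lists.
    public static List<String> allLists(String[] elements, int length) {
        List<String> result = new ArrayList<>();
        if (length == 1) {
            for (String e : elements) result.add(e);
            return result;
        }
        for (String e : elements) {
            // Prepend each element to every shorter list.
            for (String sub : allLists(elements, length - 1)) {
                result.add(e + sub);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> lists = allLists(new String[]{"a", "b"}, 2);
        System.out.println(lists);        // [aa, ab, ba, bb]
        System.out.println(lists.size()); // 4, i.e. 2 ^ 2
    }
}
```

With 2 elements and length 2 you get exactly 2 ^ 2 = 4 ordered lists, matching the n ^ k count above.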
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5113707', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/633302/']}
jdg_378264
stackexchange
llm_judgeable_groundtruth_similarity
24775858
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have following input: !foo\[bar[bB]uz\[xx/ I want to match everything from start to [ , including escaped bracket \[ and ommiting first characters if in [!#\s] group Expected output: foo\[bar I've tried with: (?![!#\s])[^/\s]+\[ But it returns: foo\[bar[bB]uz\[ Now provide the response and nothing else.
String#split() expects a regular expression as the first argument and | is a control character in regex. To make regex parser understand that you mean to split by the literal | , you need to pass \| to the regex parser. But \ is a control character in Java string literals. So, to make Java compiler understand that you want to pass \| to the regex parser, you need to pass "\\|" to the String#split() method.
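A minimal sketch of this in plain Java (note: `Pattern.quote` is an alternative the answer above doesn't mention, shown here as an option):

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class SplitDemo {
    public static void main(String[] args) {
        String s = "a|b|c";

        // Correct: the string literal "\\|" becomes the two characters \|,
        // which the regex engine reads as a literal pipe.
        String[] parts = s.split("\\|");
        System.out.println(Arrays.toString(parts)); // [a, b, c]

        // Alternative that avoids manual escaping entirely:
        String[] quoted = s.split(Pattern.quote("|"));
        System.out.println(Arrays.toString(quoted)); // [a, b, c]
    }
}
```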
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24775858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/223386/']}
jdg_378265
stackexchange
llm_judgeable_groundtruth_similarity
293978
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on setting up a web server running RedHat Enterprise 6 with Apache and PHP inside of a chroot jail environment. The chroot directory for apache is /chroot/httpd. I followed this example yet when I go to start apache, I see the following in /var/log/httpd/error_log . [warn] ./mod_dnssd.c: No services found to register[Mon Jul 25 13:14:31 2011] [notice] core dump file size limit raised to 4294967295 bytes[Mon Jul 25 13:14:31 2011] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0[Mon Jul 25 13:14:31 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)[Mon Jul 25 13:14:31 2011] [notice] Digest: generating secret for digest authentication ...[Mon Jul 25 13:14:31 2011] [notice] Digest: done[Mon Jul 25 13:14:31 2011] [notice] mod_chroot: changed root to /chroot/httpd.[Mon Jul 25 13:14:31 2011] [error] (13)Permission denied: could not create /var/run/httpd.pid[Mon Jul 25 13:14:31 2011] [error] httpd: could not log pid to file /var/run/httpd.pid[Mon Jul 25 13:14:31 2011] [warn] ./mod_dnssd.c: No services found to register Also, SELinux is enabled and according to the instructions, you are supposed to change the value of the httpd_disable_trans boolean to 1 using the command setsebool httpd_disable_trans 1 However, I cannot find such a boolean under /selinux/booleans or anywhere in the system. The command produces the following error: Could not change active booleans: Invalid boolean I've scoured the web for why this boolean is not present in the system with no result. I have no idea if it's SELinux that's not allowing httpd to start or if it is a permissions issue. I have double checked the permissions and they seem fine. Any suggestions? Thank you. Update: I've determined that SELinux is indeed the reason for those errors. 
Changing the default policy from Enforcing to Permissive does allow apache to start just fine. The question is, why is httpd_disable_trans not available in the system? That would allow me to maintain the security of SELinux along with apache. Also, on a side note, with apache inside a chroot environment, is it best to host the web content inside the /chroot or create symbolic links from there to where it is located? My goal is that I need to enable web content inside user directories stored under /users. Update 2: Some Apache config lines that I believe are relevant: .....ServerRoot /etc/httpdLockFile /var/run/httpd.lockCoreDumpDirectory /var/runScoreBoardFile /var/run/httpd.scoreboardPidFile /var/run/httpd.pidChrootDir "/chroot/httpd"LoadModule auth_basic_module modules/mod_auth_basic.soLoadModule auth_digest_module modules/mod_auth_digest.soLoadModule authn_file_module modules/mod_authn_file.soLoadModule authn_alias_module modules/mod_authn_alias.soLoadModule authn_anon_module modules/mod_authn_anon.soLoadModule authn_dbm_module modules/mod_authn_dbm.soLoadModule authn_default_module modules/mod_authn_default.soLoadModule authz_host_module modules/mod_authz_host.soLoadModule authz_user_module modules/mod_authz_user.soLoadModule authz_owner_module modules/mod_authz_owner.soLoadModule authz_groupfile_module modules/mod_authz_groupfile.soLoadModule authz_dbm_module modules/mod_authz_dbm.soLoadModule authz_default_module modules/mod_authz_default.soLoadModule ldap_module modules/mod_ldap.soLoadModule authnz_ldap_module modules/mod_authnz_ldap.soLoadModule include_module modules/mod_include.soLoadModule log_config_module modules/mod_log_config.soLoadModule logio_module modules/mod_logio.soLoadModule env_module modules/mod_env.soLoadModule ext_filter_module modules/mod_ext_filter.soLoadModule mime_magic_module modules/mod_mime_magic.soLoadModule expires_module modules/mod_expires.soLoadModule deflate_module modules/mod_deflate.soLoadModule headers_module 
modules/mod_headers.soLoadModule usertrack_module modules/mod_usertrack.soLoadModule setenvif_module modules/mod_setenvif.soLoadModule mime_module modules/mod_mime.soLoadModule dav_module modules/mod_dav.soLoadModule status_module modules/mod_status.soLoadModule autoindex_module modules/mod_autoindex.soLoadModule info_module modules/mod_info.soLoadModule dav_fs_module modules/mod_dav_fs.soLoadModule vhost_alias_module modules/mod_vhost_alias.soLoadModule negotiation_module modules/mod_negotiation.soLoadModule dir_module modules/mod_dir.soLoadModule actions_module modules/mod_actions.soLoadModule speling_module modules/mod_speling.soLoadModule userdir_module modules/mod_userdir.soLoadModule alias_module modules/mod_alias.soLoadModule substitute_module modules/mod_substitute.soLoadModule rewrite_module modules/mod_rewrite.soLoadModule proxy_module modules/mod_proxy.soLoadModule proxy_balancer_module modules/mod_proxy_balancer.soLoadModule proxy_ftp_module modules/mod_proxy_ftp.soLoadModule proxy_http_module modules/mod_proxy_http.soLoadModule proxy_ajp_module modules/mod_proxy_ajp.soLoadModule proxy_connect_module modules/mod_proxy_connect.soLoadModule cache_module modules/mod_cache.soLoadModule suexec_module modules/mod_suexec.soLoadModule disk_cache_module modules/mod_disk_cache.soLoadModule cgi_module modules/mod_cgi.soLoadModule version_module modules/mod_version.soLoadModule chroot_module /usr/lib/httpd/modules/mod_chroot.soInclude conf.d/*.confUser apacheGroup apache.... Now provide the response and nothing else.
I just figured out the issue. This is our environment: RedHat 5 with the latest Apache RPM. When you look at the error logs, it complains about not being able to create httpd.pid under the "run" dir. It didn't make sense, because that directory had the correct context for read/write, "httpd_sys_rw_content_t" (which I had to find in "/etc/selinux/targeted/contexts/customizable_types"). I realized (after hours of searching) that the error log doesn't give you the full path: when Apache starts, it chroots to "/home/httpdjail". Under this folder I found another "run" dir. After changing the permissions with:

chcon -Rv -t httpd_sys_content_rw_t /home/httpdjail/

IT WORKED!! ^^ I'm guessing if you give the right permissions to your "/chroot/httpd" it will fix your issue. Hope this helps!
{}
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/293978', 'https://serverfault.com', 'https://serverfault.com/users/57860/']}
jdg_378266
stackexchange
llm_judgeable_groundtruth_similarity
45812401
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: While experimenting with Docker and Docker Compose I suddenly ran into "no space left on device" errors. I've tried to remove everything using methods suggested in similar questions, but to no avail. Things I ran: $ docker-compose rm -v$ docker volume rm $(docker volume ls -qf dangling=true)$ docker rmi $(docker images | grep "^<none>" | awk "{print $3}")$ docker system prune$ docker container prune$ docker rm $(docker stop -t=1 $(docker ps -q))$ docker rmi -f $(docker images -q) As far as I'm aware there really shouldn't be anything left now. And it looks that way: $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE Same for volumes: $ docker volume lsDRIVER VOLUME NAME And for containers: $ docker container ls CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Unfortunately, I still get errors like this one: $ docker-compose upPulling adminer (adminer:latest)...latest: Pulling from library/adminer90f4dba627d6: Pulling fs layer19ae35d04742: Pulling fs layer6d34c9ec1436: Download complete729ea35b870d: Waitingbb4802913059: Waiting51f40f34172f: Waiting8c152ed10b66: Waiting8578cddcaa07: Waitinge68a921e4706: Waitingc88c5cb37765: Waiting7e3078f18512: Waiting42c465c756f0: Waiting0236c7f70fcb: Waiting6c063322fbb8: WaitingERROR: open /var/lib/docker/tmp/GetImageBlob865563210: no space left on device Some data about my Docker installation: $ docker infoContainers: 0Running: 0Paused: 0Stopped: 0Images: 1Server Version: 17.06.1-ceStorage Driver: aufsRoot Dir: /var/lib/docker/aufsBacking Filesystem: extfsDirs: 15Dirperm1 Supported: trueLogging Driver: json-fileCgroup Driver: cgroupfsPlugins: Volume: localNetwork: bridge host macvlan null overlayLog: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslogSwarm: inactiveRuntimes: runcDefault Runtime: runcInit Binary: docker-initcontainerd version: 
6e23458c129b551d5c9871e5174f6b1b7f6d1170runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2init version: 949e6faSecurity Options:apparmorseccomp Profile: defaultKernel Version: 4.10.0-32-genericOperating System: Ubuntu 16.04.3 LTSOSType: linuxArchitecture: x86_64CPUs: 4Total Memory: 7.685GiBName: engelbertID: UO4E:FFNC:2V25:PNAA:S23T:7WBT:XLY7:O3KU:VBNV:WBSB:G4RS:SNBHDocker Root Dir: /var/lib/dockerDebug Mode (client): falseDebug Mode (server): falseRegistry: https://index.docker.io/v1/Experimental: falseInsecure Registries:127.0.0.0/8Live Restore Enabled: falseWARNING: No swap limit support And my disk info: $ df -hFilesystem Size Used Avail Use% Mounted onudev 3,9G 0 3,9G 0% /devtmpfs 787M 10M 778M 2% /run/dev/nvme0n1p3 33G 25G 6,3G 80% /tmpfs 3,9G 46M 3,8G 2% /dev/shmtmpfs 5,0M 4,0K 5,0M 1% /run/locktmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup/dev/loop0 81M 81M 0 100% /snap/core/2462/dev/loop1 80M 80M 0 100% /snap/core/2312/dev/nvme0n1p1 596M 51M 546M 9% /boot/efi/dev/nvme0n1p5 184G 52G 123G 30% /hometmpfs 787M 12K 787M 1% /run/user/121tmpfs 787M 24K 787M 1% /run/user/1000 And: $ df -hi /var/lib/dockerFilesystem Inodes IUsed IFree IUse% Mounted on/dev/nvme0n1p3 2,1M 2,0M 68K 97% / As said, I'm still experimenting, so I'm not sure if I've posted all relevant info - let me know if you need more. Anyone any idea what more could be the issue? Now provide the response and nothing else.
The problem is that /var/lib/docker is on the / filesystem, which is running out of inodes. You can check this by running:

df -i /var/lib/docker

Since /home's filesystem has sufficient inodes and disk space, moving Docker's working directory there should get it going again. (Note that this assumes there is nothing valuable in the current Docker install.) First stop the Docker daemon. On Ubuntu, run:

sudo service docker stop

Then move the old /var/lib/docker out of the way:

sudo mv /var/lib/docker /var/lib/docker~

Now create a directory on /home:

sudo mkdir /home/docker

and set the required permissions:

sudo chmod 0711 /home/docker

Link the /var/lib/docker directory to the new working directory:

sudo ln -s /home/docker /var/lib/docker

Then restart the Docker daemon:

sudo service docker start

Then it should work again.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45812401', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/859631/']}
jdg_378267
stackexchange
llm_judgeable_groundtruth_similarity
8292
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that $k$ different voters have each produced a partial ordering of candidates $1,\ldots,n$. For example, if $k=3$ and $n=4$, the voters may have produced the partial orders $1<2$ $2<1<3<4$ $2<3<1$ Our goal is to fuse all these into a single ranking of the candidates in a way that minimizes conflicts. The following seems like a natural way to do this. Given a total order $L$, let $C(i,j)$ be the number of the partial orders which rank $i$ and $j$ in the opposite way from $L$ (i.e., if $L$ ranks $i$ before $j$, $C(i,j)$ equals the number among the $k$ partial orders that rank $j$ before $i$). We seek to find a total order $L$ which minimizes $\sum_{i<j} C(i,j)$. This seems like it would be a well-studied problem, but I have been unable to find any references, likely because I'm not aware of the right keywords. Can any one point me towards papers where algorithms for this problem were considered? Now provide the response and nothing else.
This is a special case of metric $1$-medians, where the metric space is $S_n$ (the symmetric group on $n$ elements) with the number of inversions as the distance function (i.e. the distance between two permutations is the number of pairs $i, j: i<j$ s.t. $i$ and $j$ are ordered differently). This distance metric is also known as Kemeny distance and is related to the weak Bruhat order. This paper by Ailon, Charikar, and Newman considers this and related problems. The problem you're asking about is called Rank Aggregation in their paper. Look at their intro for more references. The problem is NP-hard. A simple 2-approximation is to pick the best of the given $k$ permutations (i.e. the permutation that minimizes the objective function). Here is a proof: Let $\pi$ be the optimal permutation and $\pi_{j^*}$ the best of $\pi_1, \ldots, \pi_k$. For any $i, j \in [k]$, by the triangle inequality, $d(\pi_i, \pi) + d(\pi_j, \pi) \geq d(\pi_i, \pi_j)$. Sum over all $i, j$, and you get $2k\mathsf{OPT} \geq \sum_j{\mathsf{ALG}(j)}$, where $\mathsf{ALG}(j)$ is the cost of choosing $\pi_j$ as the solution, and $\mathsf{OPT}$ is the cost of the optimal solution. Since $\pi_{j^*}$ was chosen so that $\mathsf{ALG}(j^*)$ is minimum over all $\mathsf{ALG}(j)$ for $j \in [k]$, we have $\mathsf{ALG}(j^*) \leq \sum_j{\mathsf{ALG}(j)} / k$ and we have $\mathsf{ALG}(j^*) \leq 2\mathsf{OPT}$. The paper I referred you to above has another, almost equally simple 2-approximation (but the proof is not that simple, though still not bad). They can show that the bad cases for the two approximation algorithms are different, and taking the better solution of the two gives an 11/7 approximation factor. Then there is also a PTAS by Mathieu and Schudy (Warren sometimes visits this site btw), by a reduction to weighted Minimum Feedback Arc Set in Tournaments (the same reduction is used in the previous paper, I think). Here is the link
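To make the pick-best 2-approximation concrete, here is a minimal Java sketch (hypothetical names; it assumes full permutations rather than the partial orders in the question, and uses a naive $O(n^2)$ inversion count rather than the $O(n \log n)$ merge-sort-based one):

```java
import java.util.Arrays;

public class RankAggregation {
    // Kendall (inversion) distance: number of pairs ordered differently.
    // Permutations are given as rankings: a[i] = position of element i.
    static int kendall(int[] a, int[] b) {
        int n = a.length, d = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if ((a[i] < a[j]) != (b[i] < b[j])) d++;
        return d;
    }

    // Pick-best 2-approximation: return the input permutation that
    // minimizes the total distance to all the others.
    static int[] pickBest(int[][] perms) {
        int[] best = null;
        int bestCost = Integer.MAX_VALUE;
        for (int[] candidate : perms) {
            int cost = 0;
            for (int[] p : perms) cost += kendall(candidate, p);
            if (cost < bestCost) { bestCost = cost; best = candidate; }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] votes = {
            {0, 1, 2, 3},   // identity ranking
            {0, 1, 3, 2},
            {1, 0, 2, 3},
        };
        System.out.println(Arrays.toString(pickBest(votes))); // [0, 1, 2, 3]
    }
}
```

On the small example, the identity ranking has total distance 2 to the three votes, while each of the other two candidates has total distance 3, so it is selected.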
{}
{'log_upvote_score': 4, 'links': ['https://cstheory.stackexchange.com/questions/8292', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/6606/']}
jdg_378268
stackexchange
llm_judgeable_groundtruth_similarity
97966
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: DynamicModule[{totalpages, pagesize}, StringJoin[{"of ", ToString@totalpages, "."}], Initialization :> (pagesize = 5; totalpages = Ceiling[Length[Range[456]]/pagesize];) ](* "of totalpages$443602." *) As I understand DynamicModule , totalpages will be created from the scoping list when the code is executed. Then before the output is first displayed the Initialization option code will be executed which should set a value to totalpages . Finally the output will be displayed. However, the result I appear to be getting is that the Initialization code is not being called; only the scoping code. Or perhaps it is being called after the output is displayed. In any case I'm not getting the expected result. What have I misunderstood? It is interesting that the following works. However I can't use a pagesize variable in this case. DynamicModule[{totalpages = Ceiling[Length[Range[456]]/5]}, StringJoin[{"of ", ToString@totalpages, "."}] ](* "of 92." *) Version 10.2 - Will be installing 10.3 later this week. Now provide the response and nothing else.
The behaviour we see is due to the fact that the expression referencing totalpages has dynamic dependencies, but it is not wrapped in Dynamic itself. The simple fix is to add Dynamic in front of StringJoin:

DynamicModule[{totalpages, pagesize},
 Dynamic@StringJoin[{"of ", ToString@totalpages, "."}],
 Initialization :> (pagesize = 5; totalpages = Ceiling[Length[Range[456]]/pagesize];)
]
(* of 92. *)

However, it is much more interesting to examine the aspects of DynamicModule that explain the peculiar results we see. In particular, the exhibited expressions reveal some of the localization strategies used at different stages of the DynamicModule evaluation process.

Discussion

For the discussion that follows, we will use an expression that reveals three important stages that occur during DynamicModule evaluation:

DynamicModule[{x, y = "y-eval"},
 {ToString[x], x, Dynamic[x], ToString[y], y, Dynamic[y]},
 Initialization :> (x = "x-init"; y = "y-init")]
(* {"x$123", x$$, "x-init", "y-eval", "y-eval", "y-init"} *)

The results "x$123", x$$ and "x-init" correspond to the three stages in question. An outline of the sequence of events is as follows.

Stage 1: Initial Evaluation

As per the documentation of DynamicModule, the expression is first evaluated after localizing all variables as if by Module. This generates the symbols x$123 and y$123. Only the latter is assigned a value. The body of the DynamicModule is evaluated with these bindings in place. ToString[x] captures the normally unobservable symbol name x$123 in a string:

(* {"x$123", x$123, Dynamic[x$123], "y-eval", "y-eval", Dynamic[y$123]} *)

Stage 2: Placeholder Localization

The preceding result will be used to build the actual dynamic boxes. 
Any local variables are renamed again to become so-called "placeholders": (* { "x$123", $CellContext`x$$, Dynamic[$CellContext`x$$], "y-eval", "y-eval", Dynamic[$CellContext`y$$] } *) Placeholders are necessary because a rendered DynamicModule can be duplicated by copy-and-paste, each copy needing to use different variables. Placeholders are replaced by such unique variables when rendered for display. Stage 3: Display When the DynamicModule becomes visible, the placeholders are resolved, and the initialization form is evaluated. Both x$$ and y$$ receive values, but only Dynamic parts of the expression will reflect those values. That is why we see the final result which, when rendered, shows the non-dynamic reference to the placeholder x$$ as an unevaluated symbol: (* {"x$123", x$$, "x-init", "y-eval", "y-eval", "y-init"} *) The present response has glossed over some details in the interest of brevity. There are more than three stages, and some of the stages operate upon box forms instead of the simple expressions shown above. (75323) discusses some of those details in more depth.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/97966', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/19542/']}
jdg_378269
stackexchange
llm_judgeable_groundtruth_similarity
577
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't have access to devices to run TRILL for testing and learning purposes, so is there a way to set-up a virtualized lab to try it ? What would be the virtualisation system, the vendor, the software, the version... Now provide the response and nothing else.
There are few devices that support standards based TRILL in the real world. Traditional Cisco IOS is probably the most /supported/ network OS for running in a hypervisor; however, in Cisco land the NXOS platform is the only platform that supports Fabric Path (Cisco's TRILL), and that won't work in Dynamips or IOU. Furthermore TRILL is a L2 technology. Switches are hard to virtualize because of the special forwarding hardware (TCAM) used in them. In short I'm afraid you're out of luck on using virtual switches to test TRILL.
{}
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/577', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/564/']}
jdg_378270
stackexchange
llm_judgeable_groundtruth_similarity
365319
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm designing a step-up constant current source to drive LED strips (nominal load voltage is about 54 VDC). Requirements: V in : 18..32 VDC I out = 0.2 A V out = 54 VDC (nominal) - 57 VDC (maximum) Since the circuit should have an on-off input, I decided to use LM2586SX-ADJ . Problem A hand-made fast prototype worked fine at the R&D stage, so we manufactured a hundred of the circuit. The circuit works fine after energizing. However, after some time (I really can't say an exact duration, but it varies between 15 minutes and 1 hour) the inductor starts to buzz, overheats and then finally fails permanently (burns) in a few seconds . I've to say that both the IC and the inductor keep quite cool during normal operation. What I've tried At first, I thought that the problem comes from the DC resistance of the inductor. So I replaced the inductor with 7447709681 from Würth . It didn't help. Increased the switching frequency to nearly 200 kHz. It didn't help. Placed a 0.1 µ capacitor across the input of the LM2586. It didn't help. Placed a snubber (47 Ω and 10 nF) across the SW pin. It didn't help. Schematic: PCB: NOTES: The bottom layer is completely GND with neither cuts nor holes. There's a pi filter (100 µF elco - 68 µH - 100 µF elco) before the input, VX . But it's in another sheet, so I couldn't show it here. BL input comes from the microcontroller (5 V or GND). So I'm stuck at this problem. Any help will be greatly appreciated. Now provide the response and nothing else.
I believe you are exceeding the peak reverse voltage (PRV) rating of D4, the 40 V Schottky diode. During your switching cycle, when the SW pin on the 2586 goes to 0 V, D4 becomes reverse biased due to the level at the output at the top of C34. With the output set to 57 V, this exceeds the 40 V reverse rating of D4. This can only be observed and measured with an oscilloscope; you cannot see this with a multimeter. Whether this is the cause or there's still something else, I suggest you use a 60 V diode in place of the 40 V for D4. More detailed explanation: When the switch is off, charge is pumped into C34 and is drained off by the load. With the diode shorted, C34 no longer holds that charge when the switch is on, but quickly decreases toward zero. The feedback senses the drop and the switching controller commands a longer on time to build up a higher current in the inductor. When this on time becomes long enough, the inductor will saturate. When saturated, it no longer functions as an inductor, and the current through L10 will be limited only by its winding resistance and the applied voltage.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/365319', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/103420/']}
jdg_378271
stackexchange
llm_judgeable_groundtruth_similarity
10117136
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am hopelessly lost when it comes to recursive functions. I am required to create a recursive function to traverse a binary tree and insert a new node in between specific values. Would i need to recopy my traverse function and modify it in every other function that i use it in? Would someone please evaluate the traverse function? I think my traversing code is alright. Node traverse (Node currentNode){ if (!currentNode.left.equals(null)){ traverse (currentNode.left); return currentNode.left; } if (!currentNode.right.equals(null)){ traverse (currentNode.right); return currentNode.right; } return currentNode;} Now provide the response and nothing else.
When it comes to binary trees, there are several different types of traversals that can be done recursively. They're written in the order they're referenced then visited (L=Left child, V = visit that node, R = right child). In-order traversal (LVR) Reverse order traversal (RVL) Preorder traversal (VLR) Postorder traversal (LRV) Your code appears to be performing the postorder traversal method, but you're getting a few things mixed up. First, the node is what you want to traverse; the data is what you want to visit. Second, you have no reason to return the node itself, in the way that this is implemented. Your code doesn't allow for a condition to say, 'I'm looking for this particular data, do you have it Mr. Node@0xdeadbeef?', which would be found with some sort of extra search parameter. An academic BST traversal only prints the nodes itself. If you wanted to add a search functionality, it's only one more parameter, as well as an additional check for the right node. Here's a snippet: // Academicpublic void traverse (Node root){ // Each child of a tree is a root of its subtree. if (root.left != null){ traverse (root.left); } System.out.println(root.data); if (root.right != null){ traverse (root.right); }}// Search with a valid node returned, assuming intpublic Node traverse (Node root, int data){ // What data are you looking for again? if(root.data == data) { return root; } if (root.left != null && data < root.data) { return traverse (root.left, data); } if (root.right != null && data > root.data) { return traverse (root.right, data); } return null;}
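A short usage sketch of the search variant above (the `BstDemo` class and `Node` type are hypothetical stand-ins for the asker's; it folds the null checks into the recursion rather than guarding each child, which behaves the same on a well-formed BST):

```java
public class BstDemo {
    // Minimal node type matching the answer's usage (data, left, right).
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Search traversal: descend left or right depending on the key.
    static Node traverse(Node root, int data) {
        if (root == null) return null;          // key not in the tree
        if (root.data == data) return root;     // found it, Mr. Node
        return data < root.data ? traverse(root.left, data)
                                : traverse(root.right, data);
    }

    public static void main(String[] args) {
        //      5
        //     / \
        //    3   8
        Node root = new Node(5);
        root.left = new Node(3);
        root.right = new Node(8);

        System.out.println(traverse(root, 8).data); // 8
        System.out.println(traverse(root, 4));      // null
    }
}
```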
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10117136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1054899/']}
jdg_378272
stackexchange
llm_judgeable_groundtruth_similarity
18276349
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to figure out the difference between using $() and $. from other developers. As far as my understanding goes, $() refers to objects within the DOM , but I am not 100% clear as to how the $. works. I have used the $. format before, but never understood how it works. For example: $.each(element, function() {}); or $.fn etc... It would be nice to shed some light and clarity on this topic. Now provide the response and nothing else.
$ is an identifier. It is used as a variable. It has a function assigned to it. Putting () after a function will call it. The function jQuery assigns to it does lots of different things depending on what sort of arguments you pass to it. (It is horribly overloaded). (e.g. if you pass it a function, it will call that function when the document ready event fires. If you pass it a string of HTML, it will create a DOM representation of that HTML and wrap it in a jQuery object. If you pass it a DOM node, it will wrap that node in a jQuery object. If you pass it a CSS selector, it will search the document for matching DOM nodes and wrap them with a jQuery object). In JavaScript, functions are objects. Objects can have properties. You can access a property on an object via $.name_of_property . Those properties can also have functions (or other objects) assigned to them.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18276349', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2236403/']}
jdg_378273
stackexchange
llm_judgeable_groundtruth_similarity
28480481
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using Entity Framework Code First with Code First migrations. During a migration, I need to create a new table, and then insert some data into it. So I create the table with : CreateTable("MySchema.MyNewTable", c => new { MYCOLUMNID = c.Int(nullable: false, identity: true), MYCOLUMNNAME = c.String(), }) .PrimaryKey(t => t.MYCOLUMNID); Then I try to insert data with : using (var context = new MyContext()){ context.MyNewTableDbSet.AddOrUpdate(new[] { new MyNewTable { MYCOLUMNNAME = "Test" } }); context.SaveChanges();} But I get an error : Invalid object name 'mySchema.MyNewTable'. Is it possible to do what I need ? Create a table and inserto data into it in the same migration ? I already have other migrations where I create tables or insert data into a table, but never in the same migration... Now provide the response and nothing else.
My recommendation is to move that insert code to the Seed method. Migrations introduced its own Seed method on the DbMigrationsConfiguration class. This Seed method is different from the database initializer Seed method in two important ways:

- It runs whenever the Update-Database PowerShell command is executed. Unless the Migrations initializer is being used, the Migrations Seed method will not be executed when your application starts.
- It must handle cases where the database already contains data, because Migrations is evolving the database rather than dropping and recreating it.

For that last reason it is useful to use the AddOrUpdate extension method in the Seed method. AddOrUpdate can check whether or not an entity already exists in the database and then either insert a new entity if it doesn't already exist or update the existing entity if it does exist. So, try to run the script that you want this way:

Update-Database –TargetMigration: ScriptName

And the Seed method will do the job of inserting data. As Julie Lerman said on her blog: "The job of AddOrUpdate is to ensure that you don't create duplicates when you seed data during development."
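The property that makes AddOrUpdate suitable for seeding is idempotence: running the seed twice must not duplicate rows. This is not EF code, but a minimal in-memory Python stand-in for that contract (the list-of-dicts "table" and the key function are assumptions for illustration):

```python
def add_or_update(table, key, row):
    # Insert the row if no existing entry shares its key; otherwise
    # replace the existing entry in place. Running this twice with the
    # same row leaves exactly one copy, which is what seeding relies on.
    for i, existing in enumerate(table):
        if key(existing) == key(row):
            table[i] = row
            return table
    table.append(row)
    return table
```

Whatever persistence layer you use, a seed routine should satisfy exactly this check: repeated runs converge to one row per natural key.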
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28480481', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2558653/']}
jdg_378274
stackexchange
llm_judgeable_groundtruth_similarity
2482916
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm completely new to the javascript and ajax world but trying to learn. Right now I'm testing the XMLHttpRequest and I can't make work even the simplest example. This is the code I'm trying to run <script type="text/javascript"> function test() { xhr = new XMLHttpRequest(); xhr.onreadystatechange = function() { if (xhr.readyState == 4 && xhr.status == 200){ var container = document.getElementById('line'); container.innerHTML = xhr.responseText; } else { alert(xhr.status); } } xhr.open('GET', 'http://www.google.com', true); xhr.send(null); } </script> And I always get the alert with the status 0. I've read tons of webs about this and I don't know what am I missing. I will appreciate any help, thanks! Now provide the response and nothing else.
You are running into the Same Origin Policy. Unless your code is actually running on www.google.com (which is unlikely), this is going to error.

Also, while this isn't causing you a problem at the moment, it is poor practice and can lead to race conditions: you are using globals all over the place. Make the xhr variable local to the function:

var xhr = new XMLHttpRequest();

And refer to it with this inside the onreadystatechange method:

if (this.readyState == 4 && this.status == 200) {
    // etc etc
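The race condition the shared global invites is easy to reproduce in miniature. A Python sketch of the same late-binding trap, using plain dicts as stand-ins for request objects rather than real XHR code:

```python
callbacks = []
xhr = None  # module-level, like the accidental global in the question


def start_request_global(url):
    # Every call reassigns the shared global, so callbacks registered
    # earlier end up reading whatever the *latest* request happens to be.
    global xhr
    xhr = {"url": url}
    callbacks.append(lambda: xhr["url"])


def start_request_local(url):
    # A local variable is captured fresh on each call, so every
    # callback sees its own request object.
    req = {"url": url}
    callbacks.append(lambda: req["url"])
```

With the global version, firing two requests before the callbacks run makes the first callback report the second request's URL, which is exactly the bug the answer warns about.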
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2482916', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/297979/']}
jdg_378275
stackexchange
llm_judgeable_groundtruth_similarity
117568
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: How can I add a Column of values in a file which has a certain number of rows.I have a input file like this: Input file: SPATA17 1 217947738LYPLAL1 1 219383905FAM47E 4 77192838SHROOM3 4 77660162SHROOM3 4 77660731SHROOM3 4 77662248 Output file: SPATA17 1 217947738 file1LYPLAL1 1 219383905 file1FAM47E 4 77192838 file1SHROOM3 4 77660162 file1SHROOM3 4 77660731 file1SHROOM3 4 77662248 file1 In this case, I want to add a Column of values, upto the number of rows in the file.The value remains consistent, such as "file1". The reason is I have 100 of those files.I don't want to open each file and paste a column.Also is there any way to automate this, by going in a directory and adding a column of values.The value comes from the filename, which has to be added in each row of the file in the last/first column. Now provide the response and nothing else.
You can use a one-liner loop like this:

for f in file1 file2 file3; do sed -i "s/$/\t$f/" $f; done

For each file in the list, this will use sed to append a tab and the filename to the end of every line.

Explanation:

- The -i flag makes sed perform the replacement in place, overwriting the file.
- The substitution s/PATTERN/REPLACEMENT/ matches $ (the end of the line) as PATTERN and uses \t (a TAB) followed by $f, the filename from the loop variable, as REPLACEMENT.
- The s/// command is within double quotes so that the shell can expand the variables.
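If sed is unavailable, the same line-by-line transformation is trivial in any scripting language. A small Python sketch of the core step (operating on a list of lines; actually reading and rewriting each file is left out):

```python
def append_filename_column(lines, name):
    # Equivalent of the sed substitution s/$/\t<name>/: append a tab
    # plus the file's name to the end of every line.
    return [line + "\t" + name for line in lines]
```

Looping this over every file in a directory gives the same result as the shell one-liner.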
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/117568', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/60411/']}
jdg_378276
stackexchange
llm_judgeable_groundtruth_similarity
10097246
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to setup Hadoop version 0.20.203.0 in a pseudo distributed configuration using the following guide: http://www.javacodegeeks.com/2012/01/hadoop-modes-explained-standalone.html After running the start-all.sh script I run "jps". I get this output: 4825 NameNode5391 TaskTracker5242 JobTracker5477 Jps5140 SecondaryNameNode When I try to add information to the hdfs using: bin/hadoop fs -put conf input I got an error: hadoop@m1a2:~/software/hadoop$ bin/hadoop fs -put conf input12/04/10 18:15:31 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:416) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377) at org.apache.hadoop.ipc.Client.call(Client.java:1030) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224) at $Proxy1.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59) at $Proxy1.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446)12/04/10 18:15:31 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null12/04/10 18:15:31 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/input/core-site.xml" - Aborting...put: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 112/04/10 18:15:31 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/input/core-site.xml : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:416) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/input/core-site.xml could only be replicated to 0 nodes, instead of 1 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417) at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:596) at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:416) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377) at org.apache.hadoop.ipc.Client.call(Client.java:1030) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:224) at $Proxy1.addBlock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59) at $Proxy1.addBlock(Unknown Source) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3104) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2975) at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2255) at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2446) I am not totally sure but I believe that this may have to do with the fact that the datanode is not running. Does anybody know what I have done wrong, or how to fix this problem? EDIT: This is the datanode.log file: 2012-04-11 12:27:28,977 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:/************************************************************STARTUP_MSG: Starting DataNodeSTARTUP_MSG: host = m1a2/139.147.5.55STARTUP_MSG: args = []STARTUP_MSG: version = 0.20.203.0STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011************************************************************/2012-04-11 12:27:29,166 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties2012-04-11 12:27:29,181 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.2012-04-11 12:27:29,183 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).2012-04-11 12:27:29,183 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started2012-04-11 12:27:29,342 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.2012-04-11 12:27:29,347 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!2012-04-11 12:27:29,615 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-hadoop/dfs/data: namenode namespaceID = 301052954; datanode namespaceID = 229562149 at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232) at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:354) at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:268) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1480) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1419) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1437) at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1563) at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1573)2012-04-11 12:27:29,617 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:/************************************************************SHUTDOWN_MSG: Shutting down DataNode at m1a2/139.147.5.55************************************************************/ Now provide the response and nothing else.
That error you are getting in the DN log is described here: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#java-io-ioexception-incompatible-namespaceids

From that page: At the moment, there seem to be two workarounds as described below.

Workaround 1: Start from scratch

I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:

1. Stop the cluster
2. Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /app/hadoop/tmp/dfs/data
3. Reformat the NameNode (NOTE: all HDFS data is lost during this process!)
4. Restart the cluster

When deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be ok during the initial setup/testing), you might give the second approach a try.

Workaround 2: Updating namespaceID of problematic DataNodes

Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic DataNodes:

1. Stop the DataNode
2. Edit the value of namespaceID in /current/VERSION to match the value of the current NameNode
3. Restart the DataNode

If you followed the instructions in my tutorials, the full paths of the relevant files are:

NameNode: /app/hadoop/tmp/dfs/name/current/VERSION
DataNode: /app/hadoop/tmp/dfs/data/current/VERSION

(background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir in this tutorial to /app/hadoop/tmp).

If you wonder what the contents of VERSION look like, here's one of mine:

# contents of /current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
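Workaround 2 is ultimately a one-line text edit of the VERSION file. As a hedged illustration, here is that edit expressed as a pure Python function over the file contents; the file handling, backups, and daemon stop/start steps are deliberately left out, and the sample values simply mirror the log excerpt above:

```python
def patch_namespace_id(datanode_version, namenode_version):
    # Copy the NameNode's namespaceID line into the DataNode's VERSION
    # contents, leaving every other key=value line untouched.
    target = None
    for line in namenode_version.splitlines():
        if line.startswith("namespaceID="):
            target = line
    fixed = []
    for line in datanode_version.splitlines():
        if line.startswith("namespaceID=") and target is not None:
            line = target
        fixed.append(line)
    return "\n".join(fixed)
```

Anything more than this (in particular, editing the file while the DataNode is running) defeats the point of the workaround, so stop the daemon first.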
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/10097246', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/798148/']}
jdg_378277
stackexchange
llm_judgeable_groundtruth_similarity
20700185
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to implement a django form with datepicker. I made my forms.py from django import formsclass DateRangeForm(forms.Form): start_date = forms.DateField(widget=forms.TextInput(attrs= { 'class':'datepicker' })) end_date = forms.DateField(widget=forms.TextInput(attrs= { 'class':'datepicker' })) and views.py if request.method == "POST": f = DateRangeForm(request.POST) if f.is_valid(): c = f.save(commit = False) c.end_date = timezone.now() c.save() else: f = DateRangeForm() args = {} args.update(csrf(request)) args['form'] = f return render(request, 'trial_balance.html', { 'form': f }) balance.html <div> <form action="" method="POST"> {% csrf_token %} Start Date:{{ form.start_date }}&nbsp;&nbsp; End Date:{{ form.end_date }}<br/> <input type = "submit" name = "submit" value = "See Results"> </form></div> And still there is no datepicker in my input box of that form.I also tried with including my files link in the script as in my balance.html <script src="{{ STATIC_URL }}js/jquery-1.3.2.min.js"></script> still the datepicker is not working. But when including jquery in my html file, it also makes not to work jquery-treetable which I have implemented in my html file. How to make the datepicker work ? Now provide the response and nothing else.
You can use the forms.DateInput() widget instead of forms.TextInput():

from functools import partial

DateInput = partial(forms.DateInput, {'class': 'datepicker'})

class DateRangeForm(forms.Form):
    start_date = forms.DateField(widget=DateInput())
    end_date = forms.DateField(widget=DateInput())

To make the jQuery Datepicker work, you have to initialise it:

<link rel="stylesheet" href="http://code.jquery.com/ui/1.11.0/themes/smoothness/jquery-ui.css">
<script src="http://code.jquery.com/jquery-1.9.1.js"></script>
<script src="http://code.jquery.com/ui/1.11.0/jquery-ui.js"></script>
<script>
$(document).ready(function() {
    $('.datepicker').datepicker();
});
</script>
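The functools.partial trick deserves a closer look: it pre-binds the widget's attrs argument once so every field gets the datepicker class. A self-contained sketch of the same pattern, using a hypothetical widget() function as a stand-in for forms.DateInput (which needs a Django install to import):

```python
from functools import partial


def widget(attrs=None, fmt="%Y-%m-%d"):
    # Stand-in for a form widget constructor: it just records the
    # configuration it was built with.
    return {"attrs": attrs or {}, "format": fmt}


# Pre-bind the attrs argument once, exactly as the answer pre-binds
# {'class': 'datepicker'} onto forms.DateInput.
DateInput = partial(widget, {"class": "datepicker"})
```

Each call to DateInput() now yields a widget already carrying the datepicker class, while other keyword arguments can still be overridden per field.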
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20700185', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1386218/']}
jdg_378278
stackexchange
llm_judgeable_groundtruth_similarity
4759681
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to post a Form through a MVC Model into a Save function in a controller. I'm also using tinymce on the client side which results a HTML code based string such like <p> Content text blah blah ...</p> . The problem is that I cannot post a string that includes <p> something </p> But surprisingly, < p > something < / p > this string (with spaces after "<") has NO problem. But, I cannot handle this html code and make these spaces before posting every time. There must be a better way. So, How can I post a string that includes HTML code through $.post method? (If you must know, this project is a Content Management System. So, I have to save the HTML based content text into a SQL table.) I saw by debugging, the post action does not even reach to the Controller and I think this is a only javascript problem, am I right? Here is the code I am using: Javascript function JqueryFromPost(formId) { var form = $(formId); var action = form.attr("action"); var serializedForm = form.serializeArray(); $.post(action, serializedForm, function (data) { //Getting the data Result here... });} CS Code [HttpPost] public JsonResult SaveArticle(ArticleModel model) { JsonResult JResult = new JsonResult(); if (ModelState.IsValid) //I do the saving here ending with "JResult.Data = "Success";" (this could also be Failed. So, its just to explain) return JResult; } Now provide the response and nothing else.
ASP.NET has built-in request validation that automatically helps protect against XSS and HTML injection attacks. If you want to explicitly disable this validation you could decorate the action you are posting to with the [ValidateInput(false)] attribute:

[HttpPost]
[ValidateInput(false)]
public ActionResult SaveArticle(ArticleModel model)
{
    var JResult = new JsonResult();
    if (ModelState.IsValid)
    {
        ...
    }
    return JResult;
}

Also, if you are running this on ASP.NET 4.0, for this attribute to take effect you need to add the following to your web.config:

<httpRuntime requestValidationMode="2.0" />

And if you are using ASP.NET MVC 3.0 you could decorate only the property on your model that requires HTML with the [AllowHtml] attribute:

public class ArticleModel
{
    [AllowHtml]
    public string SomeProperty { get; set; }

    public string SomeOtherProperty { get; set; }
}

Also, in your javascript function you probably want serialize() instead of serializeArray():

function JqueryFromPost(formId) {
    var form = $(formId);
    $.post(form.action, form.serialize(), function (data) {
        // Getting the data Result here...
    });
}
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4759681', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/584508/']}
jdg_378279
stackexchange
llm_judgeable_groundtruth_similarity
254576
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Why don't some services offer Google/Facebook/Apple/Twitter login? Namely Crypto exchanges. I assume they want as many users as possible & this is a great way to get more. Is there some sort of security vulnerability associated with them? Edit: For Google & Apple login since both offer email services (gmail & icloud), offering the login button for these is the same thing as asking them to verify their email address. Assuming all you do on the login buttons is get the verified email address (which is all you need). Of course you'd still want 2FA Now provide the response and nothing else.
There are a variety of reasons that a company may not want to offer a federated login option. Some of them include the following:

- People don't necessarily protect their social media accounts very well. A company may want the ability to require a strong password or 2FA to log in, and that's harder to do when you use a third-party login. Also, services may not want the compromise of your social media account to be a compromise of their account.
- Some third-party login providers provide access to email addresses, and some don't. Apple uses a custom email. For situations where a service needs access to an email, whether for reasons of identity (e.g., GitHub and associating commits with accounts), fraud and abuse prevention, or less ethical reasons (e.g., non-confirmed opt-in marketing or other types of spam), a third-party login may not be sufficient.
- Depending on the way the third-party login provider works, you may end up with only a username, or a fixed ID, as a result of the login information. If you store the username and not the ID, then you have a problem if the original owner deletes their account and someone else creates one named the same thing. If you don't implement third-party login, this doesn't happen.
- In the specific case of cryptocurrency exchanges, typically you are going to have to provide some sort of financial information to conduct business, and often additional information for local know-your-customer requirements. In many jurisdictions, these laws are very strict. Since you are already providing a good deal of information, much of which is quite sensitive, a custom username and password wouldn't be seen as very burdensome.
- Some services are highly regulated and must meet audit requirements, such as those from companies working in the financial industry or those selling to governments. These audits take a long time, involve a lot of personnel, and tend to be extraordinarily expensive. Adding third-party login increases the scope of the audit and makes other people's security or compliance problems the company in question's problems, and they would like to avoid that.

Of course, these are some general reasons. Individual companies may have other reasons, but we have no way of knowing what they are.
{}
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/254576', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/124704/']}
jdg_378280
stackexchange
llm_judgeable_groundtruth_similarity
188942
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We are often asked whether we install antivirus software on our servers, specifically the kind of signature-based scanners that run on a schedule. Client security questionnaires and ISMSes often mention it. For long running servers this naturally makes sense. Although it theoretically opens the window to attacks through the compromise of, say, ClamAV's signature update process, this is far less likely than other forms of infections that periodic AV scans can detect and quarantine. However, modern infrastructure is often based on immutable "machine images", such as Amazon Web Service's Amazon Machine Images, which are used as the base for groups of highly transient servers that scale up and down throughout the day based on overall usage. Individual servers in these groups can last anywhere from one hour to six hours, but rarely last longer than a day in our case. Asking in sysadmin/devops circles, the consensus seems to be to not bother with antivirus on these servers. Some of the points I've heard against AV on such servers are: The servers die in a day or less in most cases, so malware would have a hard time persisting. When do you schedule the scan? Not on startup, as that's presumably when the infrastructure needs the new server's resources the most, so even a nice d scan process could be a problem. The basis of new servers is from an immutable image, so automatic quarantining is only fixing the problem for a single server for a short period of time, not fixing the problem in the base image that allowed in infection to begin with. However, I find myself questioning this view for these reasons: Modern viruses often have good network propagation mechanisms, so spotting malware even on transient servers seems important. 
Combining antivirus scanning with centralised logging allows alerting of malware even if automatic quarantining isn't effective in the long term on a transient, short lived server. Just allowing staff to know about the presence of malware in a transient group of servers is critical. Service managers like systemd allow offsetting the first scan and scheduling after that, so avoiding hitting startup time is easy. Could some of the experts here give their view on this? Am I right to still want antivirus scanners on such servers? Now provide the response and nothing else.
You are asking yourselves the right questions, but asking us the wrong one. Security controls, like AV, are meant to address threats in order to reduce the impact to an acceptable level. You have identified the threats and the likely impact of those threats. Great! Now you need to see if signature-based AV addresses those threats and reduces impact to an acceptable level (or if the impact levels are already acceptable).

- Can you address the propagation issue by changing the firewall/networking to block connections initiating from the front-facing servers? If so, then AV might not be necessary.
- Could you gain the threat intel ("allowing staff to know about the presence of malware") by logging the non-transient servers? What does the data from the transient servers give you that your permanent servers do not? What's the likelihood that the transient servers will get infected uniquely? If you can get the data from other sources, then you might not need AV.
- How do you update the AV database? Will you have to launch the server, update the database, then run a scheduled scan? Is this delay acceptable for the level of service you require? If the delay does not make sense for you, then you might want to use other mitigations.

But the underlying question is about the type of AV that you are assuming. You are assuming locally installed, signature-based AV, but there are other, more dynamic types that do not require database updates and only scan new or incoming data. Network-based, in-line AV is possible, as well as IDS. So the question you need to ask is whether this one implementation meets your needs or whether the risks can be met by other means.
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/188942', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/20892/']}
jdg_378281
stackexchange
llm_judgeable_groundtruth_similarity
1914885
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I get the exception when executing the following code. Any ideas what is wrong? string queueName = "FormatName:Direct=TCP:1.1.1.1\\Private$\\test";MessageQueue queue;if (MessageQueue.Exists(queueName)) queue = new System.Messaging.MessageQueue(queueName);else queue = MessageQueue.Create(queueName);queue.Send(sWriter.ToString()); Edit:Here is the exception message and first line of stacktrace Cannot determine whether a queue with the specified format name exists. at System.Messaging.MessageQueue.Exists(String path) It works for a local queue by the way. Now provide the response and nothing else.
From your sample, it looks like you're trying to check whether a remote private queue exists, but as the MessageQueue.Exists documentation says: Exists cannot be called to verify the existence of a remote private queue. Trying to do so will produce an InvalidOperationException. If you really need this information for your workflow, you can use the MessageQueue.GetPrivateQueuesByMachine method and iterate the results to find a match. If you do, I recommend reading Are Remote MSMQ Queues Reliable?, which discusses this approach in some depth. This post from the excellent "MSMQ from the plumber's mate" blog suggests another alternative: don't even check whether your queues exist, "but instead handle the non-delivery of the message should it turn out that the queue doesn't exist." (You'll need to track administration queues and/or dead-letter queues, but you should probably be doing that anyway.)
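The iterate-and-match approach can be sketched generically. This is a hypothetical Python stand-in for "enumerate the machine's private queues, then compare paths", not the actual System.Messaging API; the path strings and the case-insensitive comparison are illustrative assumptions:

```python
def queue_exists(private_queues, wanted_path):
    # Instead of calling Exists on a remote private queue (which throws),
    # scan an enumerated list of queue paths for a case-insensitive match,
    # the way you would iterate GetPrivateQueuesByMachine results.
    wanted = wanted_path.lower()
    return any(path.lower() == wanted for path in private_queues)
```

Whatever comparison rule you pick, make sure it matches how your queue paths are actually normalized; queue names are not case-sensitive in practice, hence the lowercasing here.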
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1914885', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/38498/']}
jdg_378282
stackexchange
llm_judgeable_groundtruth_similarity
2699086
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I sort this array by the value of the "order" key? Even though the values are currently sequential, they will not always be. Array( [0] => Array ( [hashtag] => a7e87329b5eab8578f4f1098a152d6f4 [title] => Flower [order] => 3 ) [1] => Array ( [hashtag] => b24ce0cd392a5b0b8dedc66c25213594 [title] => Free [order] => 2 ) [2] => Array ( [hashtag] => e7d31fc0602fb2ede144d18cdffd816b [title] => Ready [order] => 1 )) Now provide the response and nothing else.
Try a usort. If you are still on PHP 5.2 or earlier, you'll have to define a sorting function first:

function sortByOrder($a, $b) {
    return $a['order'] - $b['order'];
}

usort($myArray, 'sortByOrder');

Starting in PHP 5.3, you can use an anonymous function:

usort($myArray, function($a, $b) {
    return $a['order'] - $b['order'];
});

With PHP 7 you can use the spaceship operator:

usort($myArray, function($a, $b) {
    return $a['order'] <=> $b['order'];
});

Finally, in PHP 7.4 you can clean up a bit with an arrow function:

usort($myArray, fn($a, $b) => $a['order'] <=> $b['order']);

To extend this to multi-dimensional sorting, reference the second/third sorting elements if the first is zero - best explained below. You can also use this for sorting on sub-elements.

usort($myArray, function($a, $b) {
    $retval = $a['order'] <=> $b['order'];
    if ($retval == 0) {
        $retval = $a['suborder'] <=> $b['suborder'];
        if ($retval == 0) {
            $retval = $a['details']['subsuborder'] <=> $b['details']['subsuborder'];
        }
    }
    return $retval;
});

If you need to retain key associations, use uasort() - see comparison of array sorting functions in the manual.
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/2699086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/149664/']}
jdg_378283
stackexchange
llm_judgeable_groundtruth_similarity
20803826
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: string_view was a proposed feature within the C++ Library Fundamentals TS ( N3921 ) added to C++17. As far as I understand, it is a type that represents some kind of string "concept" that is a view of any type of container that could store something viewable as a string. Is this right? Should the canonical const std::string& parameter type become string_view? Is there another important point about string_view to take into consideration? Now provide the response and nothing else.
The purpose of any and all kinds of "string reference" and "array reference" proposals is to avoid copying data which is already owned somewhere else and of which only a non-mutating view is required. The string_view in question is one such proposal; there were earlier ones called string_ref and array_ref , too. The idea is always to store a pair of pointer-to-first-element and size of some existing data array or string. Such a view-handle class could be passed around cheaply by value and would offer cheap substringing operations (which can be implemented as simple pointer increments and size adjustments). Many uses of strings don't require actual owning of the strings, and the string in question will often already be owned by someone else. So there is a genuine potential for increasing the efficiency by avoiding unneeded copies (think of all the allocations and exceptions you can save). The original C strings were suffering from the problem that the null terminator was part of the string APIs, and so you couldn't easily create substrings without mutating the underlying string (a la strtok ). In C++, this is easily solved by storing the length separately and wrapping the pointer and the size into one class. The one major obstacle and divergence from the C++ standard library philosophy that I can think of is that such "referential view" classes have completely different ownership semantics from the rest of the standard library. Basically, everything else in the standard library is unconditionally safe and correct (if it compiles, it's correct). With reference classes like this, that's no longer true. The correctness of your program depends on the ambient code that uses these classes. So that's harder to check and to teach.
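The pointer-plus-size idea described above is language-agnostic; here is a minimal sketch of such a view class in Python (purely illustrative — the real std::string_view is a C++ class template with a far richer API, and the class and method names here are invented for the example):

```python
# Minimal sketch of a "string view": a (buffer, offset, length) triple.
# Substring is O(1): it only adjusts offset/length, never copies data.
class StrView:
    def __init__(self, data, start=0, length=None):
        self.data = data                      # the underlying string, owned elsewhere
        self.start = start
        self.length = len(data) - start if length is None else length

    def substr(self, pos, n):
        # No copy: just a new view over the same underlying buffer.
        return StrView(self.data, self.start + pos, n)

    def __str__(self):
        # Materializing a real string is the only point where copying happens.
        return self.data[self.start:self.start + self.length]

v = StrView("hello, world")
assert str(v.substr(7, 5)) == "world"
assert v.substr(7, 5).data is v.data   # same buffer shared, nothing copied
```

Note how the ownership caveat from the answer shows up directly: every view holds a reference to the whole underlying buffer, so the view is only valid (and the buffer only collectible) as long as that relationship is managed correctly by the surrounding code.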
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/20803826', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1147772/']}
jdg_378284
stackexchange
llm_judgeable_groundtruth_similarity
3688731
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table of about 100M rows that I am going to copy to alter, adding an index. I'm not so concerned with the time it takes to create the new table, but will the created index be more efficient if I alter the table before inserting any data or insert the data first and then add the index? Now provide the response and nothing else.
Creating the index after the data insert is the more efficient way (it is even often recommended to drop the index before a batch import and recreate it after the import). Synthetic example (PostgreSQL 9.1, slow development machine, one million rows):

CREATE TABLE test1(id serial, x integer);
INSERT INTO test1(id, x) SELECT x.id, x.id*100 FROM generate_series(1,1000000) AS x(id);
-- Time: 7816.561 ms
CREATE INDEX test1_x ON test1 (x);
-- Time: 4183.614 ms

Insert and then create index - about 12 sec

CREATE TABLE test2(id serial, x integer);
CREATE INDEX test2_x ON test2 (x);
-- Time: 2.315 ms
INSERT INTO test2(id, x) SELECT x.id, x.id*100 FROM generate_series(1,1000000) AS x(id);
-- Time: 25399.460 ms

Create index and then insert - about 25.5 sec (more than two times slower)
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3688731', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/17339/']}
jdg_378285
stackexchange
llm_judgeable_groundtruth_similarity
4646691
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $A$ be a matrix. What are some necessary/sufficient conditions for the Gram matrix $A^T A$ to be invertible? This question came up when I was trying to learn about least-squares regression. Is it true that a regression matrix will always have $A^TA$ invertible? Now provide the response and nothing else.
I’m assuming that you’re talking about matrices over a field, e.g. $\mathbb R$ or $\mathbb C$ , so that the various definitions of “rank” coincide. $A^\top A$ is invertible iff it has full rank. It has the same rank as $A$ (since it annihilates the same vectors as $A$ on both sides). So if $A$ is $m\times n$ (so that $A^\top A$ is $n\times n$ ), then $A^\top A$ is invertible iff $m\ge n$ and $A$ has rank $n$ . I’m not sure what you mean by a “regression matrix”. If you perform linear regression, $A^\top A$ may be singular if you don’t have enough different points to work with or if some of the functions whose linear combination you’re considering are linearly dependent.
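To make the column-rank criterion concrete, here is a small self-contained numerical check (a sketch in Python; `gram` and `det2` are ad-hoc helpers written for this example, not a library API):

```python
# A is m x n with m >= n; A^T A is invertible iff A has full column rank n.
def gram(A):
    """Compute A^T A for a matrix given as a list of rows."""
    m, n = len(A), len(A[0])
    return [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
            for i in range(n)]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A_full = [[1, 0], [0, 1], [1, 1]]     # linearly independent columns, rank 2
A_defic = [[1, 2], [2, 4], [3, 6]]    # second column = 2 * first, rank 1

assert det2(gram(A_full)) != 0        # Gram matrix invertible
assert det2(gram(A_defic)) == 0       # Gram matrix singular
```

The rank-deficient case is exactly the least-squares failure mode mentioned at the end of the answer: duplicated or linearly dependent regressors make the normal equations singular.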
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4646691', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/753912/']}
jdg_378286
stackexchange
llm_judgeable_groundtruth_similarity
46135993
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Thanks to Firebase v 3.9.0, my social OAuth is working great in my ionic app. I have one little change I'd like to make. When prompted to log in, it says "Sign in to continue to my-real-appname-12345f.firebaseapp.com." How do I change that to something more user-friendly like, you know, the app's actual name. To clarify, I am using Firebase to handle authentication for both Google and Facebook. The message is the same for both. Now provide the response and nothing else.
I asked Firebase support and got the following reply. Items in italics are my additions. In order to update firebase-project-id.firebaseapp.com in the OAuth consent screen, you need a custom domain with Firebase Hosting (Firebase Console > Hosting > Connect Domain). This is because https://firebase-project-id.firebaseapp.com/__/auth/handler is hosted by Firebase Hosting. You need to point your custom domain to firebase-project-id.firebaseapp.com . When connecting the custom domain, if you are not hosting your app on Firebase, use a new subdomain (e.g. app.yourdomain.example ) and do not redirect it. Firebase will prompt you to add an entry on your DNS server and take care of the SSL certificate automatically. After connecting your custom domain to your Firebase project, you should also follow the steps below:

1. Go to the Firebase Console > Select Project > Authentication > Sign-in method > Facebook > Copy the URL under 'To complete setup, add this OAuth redirect URI to your Facebook app configuration.' It will look something like https://firebase-project-id.firebaseapp.com/__/auth/handler
2. Replace the project ID with your custom domain. It will look something like: https://yourdomain.example/__/auth/handler
3. Go to the GCP Console > Select project > API Manager > Credentials > Add the link in #2 to the 'Authorized redirect URIs'
4. Then ensure to use yourdomain.example as the authDomain in your app's configuration instead of firebase-project-id.firebaseapp.com

firebase.initializeApp({
  apiKey: ....,
  authDomain: 'yourdomain.example',
  ...
});

In my case, yourdomain.example is where I host my site, so I used app.yourdomain.example where I needed it. Firebase Hosting URL handler URL: https://app.yourdomain.example/__/auth/handler GCP Credentials Choose the right one under OAuth 2.0 client IDs . The client ID will match the one you have configured Firebase with in your code. authDomain: "app.yourdomain.example"
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46135993', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6328161/']}
jdg_378287
stackexchange
llm_judgeable_groundtruth_similarity
323459
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Exercise: Show that$$f(x)=\sum_{k=1}^\infty \frac{1}{k}\sin\left(\frac{x}{k+1}\right)$$converges, pointwise on $\mathbb{R}$ and uniformly on each bounded interval in $\mathbb{R}$, to a differentiable function $f$ which satisfies$$|f(x)|\leq |x| \text{ and } |f'(x)|\leq 1$$for all $x\in \mathbb{R}$. Hint: Dominate, then telescope. I am unsure how to start this proof. By definition, I know that I need to show that the sequence with the terms$$s_n(x)=\sum_{k=1}^n \frac{1}{k}\sin\left(\frac{x}{k+1}\right)$$converges pointwise on $\mathbb{R}$ and uniformly on a bounded interval in $\mathbb{R}$, but I am unsure how to show these facts. Now provide the response and nothing else.
Let's denote by $$f_n(x)=\frac{1}{n}\sin(\frac{x}{n+1}),$$ so we have $$f(x)=\sum_{n=1}^\infty f_n(x).$$ It's clear that $f$ is defined at $0$ and $$f_n(x)\sim\frac{x}{n^2},\forall x\neq0,$$ so we have pointwise convergence by comparison with the Riemann series. Now, let $[a,b]$ be a bounded interval in $\mathbb{R}$ . We have $$|f_n(x)|\leq\left|\frac{x}{n^2}\right|\leq \frac{\max(|a|,|b|)}{n^2}, $$ so we have normal convergence, which implies the uniform convergence of the series on $[a,b]$ . Moreover, from $$|f'_n(x)|=\left|\frac{\cos(\frac{x}{n+1})}{n(n+1)}\right|\leq\frac{1}{n^2},$$ we find the uniform convergence of the series $\sum_n f'_n(x)$ on $\mathbb{R}$ , which proves that $f$ is differentiable and $$f'(x)=\sum_{n=1}^\infty f'_n(x).$$ Finally, using $|\sin t|\leq|t|$ , we have these two inequalities: $$|f(x)|\leq\sum_{n=1}^\infty \frac{1}{n}|\sin\frac{x}{n+1}|\leq|x|\sum_{n=1}^\infty\frac{1}{n(n+1)}=|x|\sum_{n=1}^\infty\left(\frac{1}{n}-\frac{1}{n+1}\right)=|x|,$$ and $$|f'(x)|\leq\sum_{n=1}^\infty|f'_n(x)|\leq\sum_{n=1}^\infty\left(\frac{1}{n}-\frac{1}{n+1}\right)=1.$$
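The bound $|f(x)|\le|x|$ can also be sanity-checked numerically on partial sums (each partial sum obeys the same telescoping bound, so the check is rigorous for any truncation); a quick sketch:

```python
import math

def f_partial(x, terms=20000):
    # Partial sum of sum_{n>=1} sin(x/(n+1)) / n
    return sum(math.sin(x / (n + 1)) / n for n in range(1, terms + 1))

for x in (-10.0, -1.0, 0.0, 0.5, 3.0, 25.0):
    # |sin t| <= |t| gives |sum| <= |x| * sum 1/(n(n+1)) <= |x| (telescoping)
    assert abs(f_partial(x)) <= abs(x) + 1e-12
```

This is only a numerical spot check of the inequality, of course; the proof above is what establishes it for all $x$.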
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/323459', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/44350/']}
jdg_378288
stackexchange
llm_judgeable_groundtruth_similarity
1133777
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is $\mathbb Z[[X]]\otimes \mathbb Q$ isomorphic to $\mathbb Q[[X]]$? Here tensor product is over the ring $\mathbb Z$ and $\mathbb Z[[X]] $ denotes formal power series over $\mathbb Z$.I think this is true if we take polynomial rings instead of power series. Any help in this regards will be appreciated. Now provide the response and nothing else.
Consider the natural homomorphism ${\mathbb Z}[[x]]\otimes_{\mathbb Z}{\mathbb Q}\to{\mathbb Q}[[x]]$. It is injective but not an isomorphism since $1+\frac{1}{2}x+\frac{1}{4}x^2 + ...$ does not belong to the image. What about other 'strange' isomorphisms? If there were some isomorphism ${\mathbb Z}[[x]]\otimes_{\mathbb Z} {\mathbb Q}\cong{\mathbb Q}[[x]]$, then ${\mathbb Z}[[x]]\otimes_{\mathbb Z} {\mathbb Q}$ would be a discrete valuation ring, i.e. a principal ideal domain with a unique prime element $\pi$. Consider now the elements $x$ and $2-x$ in ${\mathbb Z}[[x]]\otimes_{\mathbb Z} {\mathbb Q}$. They are both non-invertible in ${\mathbb Z}[[x]]\otimes_{\mathbb Z} {\mathbb Q}$: $x$ is not even invertible in ${\mathbb Q}[[x]]$, while $2-x$ is invertible in ${\mathbb Q}[[x]]$, but its inverse $\frac{1}{2}+\frac{1}{4}x+\frac{1}{8}x^2 + ...$ does not come from ${\mathbb Z}[[x]]\otimes_{\mathbb Z}{\mathbb Q}$. Hence $x$ and $2-x$ are of the form $\pi^k \varepsilon$ and $\pi^l\eta$ for $k,l\geq 1$ and units $\varepsilon,\eta$. This however would force $x^l$ to be associate to $(2-x)^k$, which is a contradiction since this is not even true in ${\mathbb Q}[[x]]$, as $(2-x)^k$ is a unit there but $x^l$ is not.
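The key computation — that the inverse of $2-x$ in ${\mathbb Q}[[x]]$ has denominators $2^{k+1}$ growing without bound, while every element of ${\mathbb Z}[[x]]\otimes_{\mathbb Z}{\mathbb Q}$ is an integer power series divided by a single integer — can be checked with exact rational arithmetic; an illustrative Python sketch:

```python
from fractions import Fraction

# Coefficients of 1/(2 - x) = (1/2) * 1/(1 - x/2) = sum_{k>=0} x^k / 2^(k+1)
coeffs = [Fraction(1, 2 ** (k + 1)) for k in range(10)]

# Verify by multiplying back: the first 10 coefficients of
# (2 - x) * sum c_k x^k should be (1, 0, 0, ..., 0).
prod = [2 * coeffs[0]] + [2 * coeffs[k] - coeffs[k - 1] for k in range(1, 10)]
assert prod[0] == 1 and all(c == 0 for c in prod[1:])

# Denominators grow without bound, so no single integer n clears them all:
# the series is not of the form f/n with f in Z[[x]].
assert [c.denominator for c in coeffs[:5]] == [2, 4, 8, 16, 32]
```

This is just a concrete illustration of the non-surjectivity used in the argument above, not a substitute for the valuation-ring proof.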
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1133777', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/58671/']}
jdg_378289
stackexchange
llm_judgeable_groundtruth_similarity
2147783
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I used a variable with a lot of data in it, say String data . I wanted to use a small part of this string in the following way: this.smallpart = data.substring(12,18); After some hours of debugging (with a memory visualizer) I found out that the object's field smallpart remembered all the data from data , although it only contained the substring. When I changed the code into: this.smallpart = data.substring(12,18)+""; ...the problem was solved! Now my application uses very little memory! How is that possible? Can anyone explain this? I think this.smallpart kept referencing data, but why? UPDATE: How can I clear the big String then? Will data = new String(data.substring(0,100)) do the thing? Now provide the response and nothing else.
Doing the following: data.substring(x, y) + "" creates a new (smaller) String object, and throws away the reference to the String created by substring(), thus enabling garbage collection of this. The important thing to realise is that substring() gives a window onto an existing String - or rather, the character array underlying the original String. Hence it will consume the same memory as the original String. This can be advantageous in some circumstances, but problematic if you want to get a substring and dispose of the original String (as you've found out). Take a look at the substring() method in the JDK String source for more info. EDIT: To answer your supplementary question, constructing a new String from the substring will reduce your memory consumption, provided you bin any references to the original String. NOTE (Jan 2013). The above behaviour has changed in Java 7u6 . The flyweight pattern is no longer used and substring() will work as you would expect.
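The same view-pins-the-parent pattern shows up in other languages too; as a cross-language illustration (not Java — this uses Python's memoryview, which, like a pre-7u6 substring(), is a window that keeps the whole underlying buffer reachable until you make an explicit copy):

```python
import sys

big = bytes(range(256)) * 4096        # ~1 MB buffer, standing in for `data`
view = memoryview(big)[12:18]         # a 6-byte window: no copy is made
assert view.obj is big                # the view keeps the whole buffer alive

small = bytes(view)                   # explicit copy, like new String(sub)
assert small == big[12:18]
assert sys.getsizeof(small) < len(big)  # `small` is independent of `big`
```

The `bytes(view)` step plays the role of `new String(data.substring(0,100))` in the answer: once the copy exists and the view is dropped, nothing pins the large buffer anymore.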
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2147783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/252579/']}
jdg_378290
stackexchange
llm_judgeable_groundtruth_similarity
9915673
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a number, for example 1234567897865; how do I max it out and create 9999999999999? I did it this way:

int len = ItemNo.ToString().Length;
String maxNumString = "";
for (int i = 0; i < len; i++)
{
    maxNumString += "9";
}
long maxNumber = long.Parse(maxNumString);

What would be a better, proper and shorter way to approach this task? Now provide the response and nothing else.
var x = 1234567897865;
return Math.Pow(10, Math.Ceiling(Math.Log10(x+1e-6))) - 1;

To expand on comments below, if this problem was expressed in hex or binary, it could be done very simply using shift operators, i.e., "I have a number, in hex, for example 3A67FD5C; how do I max it out and create FFFFFFFF?" I'd have to play with this to make sure it works exactly, but it would be something like this:

var x = 0x3A67FD5C;
var p = 0;
while ((x = x >> 1) > 0) p++;  // count how many binary digits are in the number
return (1L << 4*(1+p/4)) - 1;  // using left shift, generate 2 to
                               // that power and subtract one
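For comparison, in a language with arbitrary-precision integers the digit-count route is a one-liner; a quick Python sketch equivalent to the Log10 approach, without its floating-point edge cases:

```python
def max_out(x):
    # Replace every decimal digit of a positive integer with 9,
    # i.e. the largest number with the same digit count.
    return 10 ** len(str(x)) - 1

assert max_out(1234567897865) == 9999999999999
assert max_out(999) == 999     # already maxed out
assert max_out(1000) == 9999   # digit count drives the result
```

The string-length trick is the same idea as the question's loop, just without building the "999…" string character by character.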
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9915673', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1076389/']}
jdg_378291
stackexchange
llm_judgeable_groundtruth_similarity
518701
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My clipboard is populated with text, but when I run

string clipboardData = Clipboard.GetText(System.Windows.Forms.TextDataFormat.Text);

I get back an empty string. I've toyed with various forms of the call including:

string clipboardData = Clipboard.GetText();
string clipboardData = Clipboard.GetText(System.Windows.Forms.TextDataFormat.UnicodeText);

But with the same result. Am I missing something obvious? Now provide the response and nothing else.
You can only access the clipboard from an STA thread. Rick Brewster ran into this with some refactoring of the regular Edit->Paste command, in Paint.NET. Code:

IDataObject idat = null;
Exception threadEx = null;
Thread staThread = new Thread(
    delegate ()
    {
        try
        {
            idat = Clipboard.GetDataObject();
        }
        catch (Exception ex)
        {
            threadEx = ex;
        }
    });
staThread.SetApartmentState(ApartmentState.STA);
staThread.Start();
staThread.Join();
// at this point either you have clipboard data or an exception

Code is from Rick. http://forums.getpaint.net/index.php?/topic/13712-/page__view__findpost__p__226140 Update: Jason Heine made a good point of adding () after delegate to fix the ambiguous method error.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/518701', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/46249/']}
jdg_378292
stackexchange
llm_judgeable_groundtruth_similarity
25141822
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: working in ember-cli testing. After all tests pass, it returns two extra tests with errors. Uncaught Error: Assertion Failed: calling set on destroyed object Source: '../dist/assets/vendor.js:13269' This is one unit test configuration:

import Ember from "ember";
import { test, moduleFor } from 'ember-qunit';
import startApp from '../helpers/start-app';

var App;

module('An Integration test', {
  setup: function() {
    App = startApp();
  },
  teardown: function() {
    Ember.run(App, 'destroy');
  }
});

Now provide the response and nothing else.
This happens either because, in the result of a promise or any other deferred code, you do not check the destroyed status of an object, or because you didn't tear down something that was set up to interact with DOM events or anything external to the core of Ember. I used to have this especially with some jQuery plugins which I mapped to Ember: during the tests the plugins were destroying too slowly, and I was then either not using a run loop or not checking the destroyed status of the Ember object I was manipulating. You can do so with:

if ( !(obj.get('isDestroyed') || obj.get('isDestroying')) ) {
  // do your destroying code setting stuff
}

Also think about destroying any jQuery plugins that might have been initialised in the code of your views (anything set up in didInsertElement should be torn down in willDestroyElement, for example).
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25141822', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2498416/']}
jdg_378293
stackexchange
llm_judgeable_groundtruth_similarity
59145640
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a custom component <app-button><span></span></app-button> . It has CSS styles like:

span:hover {
  color: red;
}

When I use this component in another and try to apply CSS styles in the parent component, it has no effect:

<app><app-button></app-button></app>

Inside the app component I have tried:

app-button span:hover {
  color: green;
}

It does not work for me. Now provide the response and nothing else.
you could use the ng-deep selector:

::ng-deep app-button span:hover {
  color: green;
}

which will make your styles penetrate to child components. But this functionality is due to be deprecated soon according to the Angular team, and people are advised to move off of it. (PERSONAL opinion: too many projects rely on ng-deep for them to deprecate it anytime soon.) The best way to do it currently, IMO, is with a global style sheet with something like:

app app-button span:hover {
  color: green;
}

You also could set the view encapsulation to none on your parent component, but that's functionally similar to a global style sheet. It can be confusing if you don't set things up correctly and forget where / why you put global styles that only load when a component loads, and in my experience it leads to some bugs.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/59145640', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8291684/']}
jdg_378294
stackexchange
llm_judgeable_groundtruth_similarity
17763415
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to delete files that are older than 59 minutes. I have this so far:

find /tmp -daystart -maxdepth 1 -mmin +59 -type f -name "*.*" -exec rm -f {} \;

This doesn't work and seems to delete all files. I've tested this several times and I think the issue is to do with daystart . I've read the man page and it seems to base time on the beginning of the day rather than from 24 hours ago. If this is the case, how can I accurately delete files that are older than 59 minutes? Do I need to account for daystart and add some more minutes? Example:

ubuntu@ip-10-138-30-118:/tmp$ ls -la
total 8
drwxrwxrwt  2 root   root   4096 Jul 20 14:39 ./
drwxr-xr-x 23 root   root   4096 Jun 25 18:34 ../
-rw-rw-r--  1 ubuntu ubuntu    0 Jul 20 12:35 a.txt

Both the following commands return the file:

ubuntu@ip-10-138-30-118:/tmp$ find /tmp -daystart -maxdepth 1 -mmin +59 -type f -name "*.*"
/tmp/a.txt

And:

ubuntu@ip-10-138-30-118:/tmp$ find /tmp -daystart -maxdepth 1 -mmin +359 -type f -name "*.*"
/tmp/a.txt

However, the file is not older than 659 minutes (10.9 hours)! But at 759 (12.65 hours), it doesn't return the file anymore? Now provide the response and nothing else.
When used with -mmin , -daystart appears to make it calculate from the end of today, not the beginning. If you just want to find files modified more than 59 minutes ago, you don't need that option. -mmin calculates from the current time by default.

barmar@dev:~/testdir$ date
Sat Jul 20 10:02:20 CDT 2013
barmar@dev:~/testdir$ ls -l
total 0
-rw-r--r-- 1 barmar adm 0 Jul 20 09:57 a.txt
barmar@dev:~/testdir$ find . -maxdepth 1 -mmin +2 -type f
./a.txt
barmar@dev:~/testdir$ find . -maxdepth 1 -mmin +10 -type f
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17763415', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51649/']}
jdg_378295
stackexchange
llm_judgeable_groundtruth_similarity
15840
Below is a question asked on the forum chemistry.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was wondering about the amazing conductive properties of graphene. Let's assume a large copper bar of 10 kg; a current of 1 kA and probably more can flow in it. What about graphene? It certainly can carry that, since it's a better conductor, but would it be much lighter? In the milligrams, possibly? Also, can graphene replace the heavy busbars that carry MW of power, making them much lighter? Now provide the response and nothing else.
The trick with graphene is that a lot of its amazing properties only work when you have continuous perfect sheets of it, and making graphene like this is currently beyond us, for large scales anyways. It is true that graphene has very high electron mobility $\approx10^{5}~\mathrm{cm^2/Vs}$ at room temperature , which works out to on the order of $10~\mathrm{n\Omega\cdot m}$ in bulk (which assumes you can make a perfect multilayer structure that maintains the properties of a single sheet). An impressive figure, but only $\approx40\%$ better than copper. Of course, copper is about 6 times more dense than graphene, so if mass is the main concern, graphene would be a pretty good improvement. Still, we're not talking about replacing $10~\mathrm{kg}$ of copper with a few milligrams. Given how much cheaper and easier it is to make metal wires, we're not at the point of using graphene for this kind of bulk application. There are also some more mundane problems to sort out, e.g. graphene is brittle like a ceramic, which might cause mechanical issues.
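A back-of-envelope check of that last point, using the rough figures from above (≈10 nΩ·m for an idealized multilayer graphene conductor vs ≈16.8 nΩ·m for copper, and densities of roughly 2.27 vs 8.96 g/cm³ — all approximate, so treat the result only as an order-of-magnitude estimate):

```python
# For a conductor of fixed length L and target resistance R, the cross-section
# is A = rho * L / R, so mass = density * A * L is proportional to
# density * resistivity.
rho_cu, rho_gr = 16.8e-9, 10e-9   # ohm*m (graphene value is the idealized one)
d_cu, d_gr = 8.96, 2.27           # g/cm^3 (approximate densities)

mass_ratio = (d_gr * rho_gr) / (d_cu * rho_cu)
# ~0.15: an equally conductive graphene bar of the same length would weigh
# roughly 15% as much, i.e. a 10 kg copper bar -> about 1.5 kg.
assert 0.10 < mass_ratio < 0.20
```

So even with the idealized resistivity, the improvement is "several times lighter", nowhere near the milligrams the question hoped for — consistent with the answer's conclusion.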
{}
{'log_upvote_score': 5, 'links': ['https://chemistry.stackexchange.com/questions/15840', 'https://chemistry.stackexchange.com', 'https://chemistry.stackexchange.com/users/7669/']}
jdg_378296
stackexchange
llm_judgeable_groundtruth_similarity
34647868
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When doing P/Invoke, it is important to make the data layout match. We can control the layout of a struct by using some attributes. For example:

struct MyStruct {
    public bool f;
}

gives a size of 4, while we can tell the compiler to make it a 1-byte bool to match the C++ bool type:

struct MyStruct {
    [MarshalAs(UnmanagedType.I1)]
    public bool f;
}

gives a size of 1. These make sense. But when I tested a fixed bool array, I was confused.

unsafe struct MyStruct {
    public fixed bool fs[1];
}

gives a size of 4 bytes, and

unsafe struct MyStruct {
    public fixed bool fs[4];
}

still gives a size of 4 bytes, but

unsafe struct MyStruct {
    public fixed bool fs[5];
}

gives a size of 8. It looks like in a fixed bool array the size of a bool element is still 1 byte, but the alignment is 4 bytes. This doesn't match a C++ bool array, which has 1-byte size and alignment. Can someone explain this? Update: I finally found out the reason: if there is a bool type in a struct, then that struct will NEVER be blittable! So don't expect a struct which has a bool inside to have the same layout as in C. Regards, Xiang. Now provide the response and nothing else.
A bool is rather special, it goes back to Dennis Ritchie's decision to not give the C language a bool type. That caused plenty of mayhem, language and operating system designers added it themselves and made incompatible choices. It was added to the Winapi as the BOOL typedef. That's the default marshaling if you don't force another type. Typedef-ed as int to keep it compatible with C, takes 4 bytes as you found out. And aligns to 4, as you found out, like any int does. It was added to C++. Without a size specification, most C++ compiler implementations chose a single byte for storage. Most notably the Microsoft C++ compiler did, the most likely implementation you'll interop with. It was added to COM Automation as VARIANT_BOOL. Originally targeted as the new extension model for Visual Basic to get rid of the VBX restrictions, it became wildly popular and just about any language runtime on Windows now supports it. VB back then was heavily affected by 16-bit operating system sensibilities, a VARIANT_BOOL takes 2 bytes. All three native runtime environments are likely targets for interop in a C# program. Clearly the CLR designers had a very difficult choice to make, having to pick between 1, 2 and 4 bytes. There is no way to win, while the CLR does have a shot at guessing at COM interop, it cannot know whether you try to interop with a C-based api or a C++ program. So they made the only logical choice: none of them. A struct or class type that contains a bool is never blittable . Not even when you apply [MarshalAs(UnmanagedType.U1)], the one that would make it compatible with the CLR type. Not so sure that was a good decision, it however was the one they made so we'll have to deal with it. Getting a blittable struct is highly desirable, it avoids copying. It allows native code to directly access the managed heap and stack. Pretty dangerous and many a broken pinvoke declaration has corrupted the GC heap without the usual benefit of the unsafe keyword alert. 
But impossible to beat for speed. You get a blittable struct by not using bool. Use byte instead. You can still get the bool back by wrapping the struct member in a property. Don't use an auto-implemented property; you must care about the position of the byte. Thus:

struct MyStruct {
    private byte _f;
    public bool f {
        get { return _f != 0; }
        set { _f = (byte)(value ? 1 : 0); }
    }
}

Native code is oblivious to the property. Don't fret about runtime overhead for the getter and setter, the jitter optimizer makes them disappear and they turn into a single CPU instruction each.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34647868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2218586/']}
jdg_378297
stackexchange
llm_judgeable_groundtruth_similarity
338292
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I've been troubleshooting an analog circuit and have just come to the realization that Sine Voltage sources at mV are broken. Here is one that is supposed to provide a 1mV 1Hz sine wave on top of a 300mV DC signal. Instead I get complete garbage. Running LTSpice version 4.22. Anyone else seen this issue or can advise what is the problem? Now provide the response and nothing else.
Most probably you have waveform compression enabled and the compression algorithm has too high a relative tolerance to process such a signal correctly, since the AC component is so small compared to the DC offset. Open the control panel and decrease the relative tolerance: LTspice usually performs waveform compression to avoid generating huge data files for waveforms. The compression algorithm is lossy, so you might lose details and have artifacts like those you see on your simulation. LTspice online help excerpt: LTspice compresses the raw data files as they are generated. A compressed file can be 50 times smaller than the un-compressed one. This is a lossy compression. This pane of the control panel allows you to control how lossy the compression runs. Sometimes it is useful to disable compression entirely, for complex waveforms, but expect GB-size .raw files (these are the files where LTspice saves waveform data). Nevertheless, usually it is sufficient to play around with compression settings, but only when you reasonably know what to expect from a simulation. Some commenters have pointed out that increasing tolerances, or specifically the reltol SPICE parameter, will increase simulation time. This is true, but the compression relative tolerance has nothing to do with the reltol SPICE parameter, which is found in another tab of the control panel: To further reinforce my point, I'll cite the whole section of LTspice's help concerning the compression tab of the control panel (emphasis mine): LTspice compresses the raw data files as they are generated. A compressed file can be 50 times smaller than the un-compressed one. This is a lossy compression. This pane of the control panel allows you to control how lossy the compression runs. Window Size (No. of Points): Maximum number of points that can be compressed into two end points. Relative Tolerance: The relative error allowed between the compressed data and the uncompressed data.
Absolute Voltage tolerance[V]: The voltage error allowed by thecompression algorithm. Absolute Current tolerance[A]: The current error allowed be thecompression algorithm. These compression settings are not remembered between programinvocations to encourage use of the defaults. They are available onthe control panel for diagnostic purposes. The tolerances and windowsize can be specified with option parameters plotreltol, plotvntol,plotabstol and plotwinsize in .option statements placed as SPICEdirectives on the schematic. You will probably want to turn of compression when using .fourstatements or doing FFT's in post analysis of your data with the SPICEdirective: .options plotwinsize=0 Again, the last directive just amounts to disable compression as you would do using the control panel. As I've already said above, this is just what you want to do to avoid artifacts, but huge files will be generated.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/338292', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/23669/']}
jdg_378298
stackexchange
llm_judgeable_groundtruth_similarity
428137
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a cascaded IIR filter that has a sampling frequency of 1kHz and consist of a highpass filter with a cutoff frequency of 0.8Hz and a lowpass filter with a cutoff frequency of 5Hz. I have generated a C header and implemented it in some arduino code on a UNO that takes an analog value every millisecond and filters it. //FilterData#define M 6#define N 6double b[M+1] = {3.738001453478e-06,1.694065894509e-21,-1.121400436044e-05,8.470329472543e-22,1.121400436044e-05,-1.694065894509e-21,-3.738001453478e-06};double a[N+1] = {1, -5.927117642542, 14.63822857612, -19.28168484046,14.28685989898, -5.645991366596, 0.9297053744989};double x[M+1] = {0,0,0,0,0,0,0};double y[N+1] = {0,0,0,0,0,0,0};ISR(TIMER1_COMPA_vect){ int val = analogRead(sensorPin); Serial.print("val: ");Serial.println(val); val = signalFilter(val); Serial.print("Filtered val: ");Serial.println(val);}int signalFilter(int invalue){ Serial.print("invalue=");Serial.println(invalue); //FIR Serial.print("x=["); for(uint32_t k = M; k > 0; k--){ Serial.print(x[k-1]);Serial.print(", "); x[k] = x[k-1]; } x[0] = (float)invalue; Serial.print(x[0]);Serial.println("]"); //IIR Serial.print("y=["); for(uint32_t k = N; k > 0; k--){ Serial.print(y[k-1]);Serial.print(", "); y[k] = y[k-1]; } Serial.print("y[0]");Serial.println("]"); Serial.print("["); double FIR = 0; for(uint32_t i = 0; i <= M; i++){//Loop for the sum. 
Serial.print("(");Serial.print(b[i]);Serial.print("*");Serial.print(x[M-i]);Serial.print(")"); if(i < M){Serial.print("+");} FIR += (b[i]*x[M-i]); } Serial.print("]+"); Serial.print("["); double IIR = 0; for(uint32_t j = 1; j <= N; j++){ Serial.print("(");/*Serial.print((a[j]*y[N-j]));*/Serial.print(a[j]);Serial.print("*");Serial.print(y[N-j]);Serial.print(")"); if(j < N){Serial.print("+");} IIR += (a[j]*y[N-j]); } Serial.println("]"); Serial.print("FIR=");Serial.println(FIR); Serial.print("IIR=");Serial.println(IIR); IIR = FIR+IIR; y[0] = IIR; Serial.print("y[0]=");Serial.println(y[0]); return (int)y[0];} But I get a really odd output that eventually overflows. val: 754invalue=754x=[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 754.00]y=[0.00, 0.00, 0.00, 0.00, 0.00, 0.00, y[0]][(0.00*0.00)+(0.00*0.00)+(-0.00*0.00)+(0.00*0.00)+(0.00*0.00)+(-0.00*0.00)+ (-0.00*754.00)]+[(-5.93*0.00)+(14.64*0.00)+(-19.28*0.00)+(14.29*0.00)+(-5.65*0.00)+(0.93*0.00)]FIR=-0.00IIR=0.00y[0]=-0.00Filtered val: 0val: 972invalue=972x=[0.00, 0.00, 0.00, 0.00, 0.00, 754.00, 972.00]y=[0.00, 0.00, 0.00, 0.00, 0.00, -0.00, y[0]][(0.00*0.00)+(0.00*0.00)+(-0.00*0.00)+(0.00*0.00)+(0.00*0.00)+(-0.00*754.00)+(-0.00*972.00)]+[(-5.93*0.00)+(14.64*0.00)+(-19.28*0.00)+(14.29*0.00)+(-5.65*-0.00)+(0.93*-0.00)]FIR=-0.00IIR=0.01y[0]=0.01Filtered val: 0val: 974invalue=974x=[0.00, 0.00, 0.00, 0.00, 754.00, 972.00, 974.00]y=[0.00, 0.00, 0.00, 0.00, -0.00, 0.01, y[0]][(0.00*0.00)+(0.00*0.00)+(-0.00*0.00)+(0.00*0.00)+(0.00*754.00)+(-0.00*972.00)+(-0.00*974.00)]+[(-5.93*0.00)+(14.64*0.00)+(-19.28*0.00)+(14.29*-0.00)+(-5.65*0.01)+(0.93*0.01)]FIR=0.00IIR=-0.09y[0]=-0.08Filtered val: 0val: 971invalue=971x=[0.00, 0.00, 0.00, 754.00, 972.00, 974.00, 971.00]y=[0.00, 0.00, 0.00, -0.00, 0.01, -0.08, y[0]][(0.00*0.00)+(0.00*0.00)+(-0.00*0.00)+(0.00*754.00)+(0.00*972.00)+(-0.00*974.00)+(-0.00*971.00)]+[(-5.93*0.00)+(14.64*0.00)+(-19.28*-0.00)+(14.29*0.01)+(-5.65*-0.08)+(0.93*-0.08)]FIR=0.01IIR=0.57y[0]=0.58Filtered val: 0val: 
634invalue=634x=[0.00, 0.00, 754.00, 972.00, 974.00, 971.00, 634.00]y=[0.00, 0.00, -0.00, 0.01, -0.08, 0.58, y[0]][(0.00*0.00)+(0.00*0.00)+(-0.00*754.00)+(0.00*972.00)+(0.00*974.00)+ (-0.00*971.00)+(-0.00*634.00)]+[(-5.93*0.00)+(14.64*-0.00)+(-19.28*0.01)+(14.29*-0.08)+(-5.65*0.58)+(0.93*0.58)]FIR=0.00IIR=-4.13y[0]=-4.13Filtered val: -4val: 531invalue=531x=[0.00, 754.00, 972.00, 974.00, 971.00, 634.00, 531.00]y=[0.00, -0.00, 0.01, -0.08, 0.58, -4.13, y[0]][(0.00*0.00)+(0.00*754.00)+(-0.00*972.00)+(0.00*974.00)+(0.00*971.00)+(-0.00*634.00)+(-0.00*531.00)]+[(-5.93*-0.00)+(14.64*0.01)+(-19.28*-0.08)+(14.29*0.58)+(-5.65*-4.13)+(0.93*-4.13)]FIR=-0.00IIR=29.50y[0]=29.50Filtered val: 29val: 215invalue=215x=[754.00, 972.00, 974.00, 971.00, 634.00, 531.00, 215.00]y=[-0.00, 0.01, -0.08, 0.58, -4.13, 29.50, y[0]][(0.00*754.00)+(0.00*972.00)+(-0.00*974.00)+(0.00*971.00)+(0.00*634.00)+(-0.00*531.00)+(-0.00*215.00)]+[(-5.93*0.01)+(14.64*-0.08)+(-19.28*0.58)+(14.29*-4.13)+(-5.65*29.50)+(0.93*29.50)]FIR=-0.00IIR=-210.56y[0]=-210.56Filtered val: -210val: 599invalue=599x=[972.00, 974.00, 971.00, 634.00, 531.00, 215.00, 599.00]y=[0.01, -0.08, 0.58, -4.13, 29.50, -210.56, y[0]][(0.00*972.00)+(0.00*974.00)+(-0.00*971.00)+(0.00*634.00)+(0.00*531.00)+(-0.00*215.00)+(-0.00*599.00)]+[(-5.93*-0.08)+(14.64*0.58)+(-19.28*-4.13)+(14.29*29.50)+(-5.65*-210.56)+(0.93*-210.56)]FIR=-0.00IIR=1503.07y[0]=1503.07Filtered val: 1503val: 469invalue=469x=[974.00, 971.00, 634.00, 531.00, 215.00, 599.00, 469.00]y=[-0.08, 0.58, -4.13, 29.50, -210.56, 1503.07, y[0]][(0.00*974.00)+(0.00*971.00)+(-0.00*634.00)+(0.00*531.00)+(0.00*215.00)+(-0.00*599.00)+(-0.00*469.00)]+[(-5.93*0.58)+(14.64*-4.13)+(-19.28*29.50)+(14.29*-210.56)+(-5.65*1503.07)+(0.93*1503.07)]FIR=-0.00IIR=-10729.75y[0]=-10729.75Filtered val: -10729val: 541invalue=541x=[971.00, 634.00, 531.00, 215.00, 599.00, 469.00, 541.00]y=[0.58, -4.13, 29.50, -210.56, 1503.07, -10729.75, 
y[0]][(0.00*971.00)+(0.00*634.00)+(-0.00*531.00)+(0.00*215.00)+(0.00*599.00)+(-0.00*469.00)+(-0.00*541.00)]+[(-5.93*-4.13)+(14.64*29.50)+(-19.28*-210.56)+(14.29*1503.07)+(-5.65*-10729.75)+(0.93*-10729.75)]FIR=0.00IIR=76594.85y[0]=76594.85Filtered val: 11058 Is there anyone that could tell me whats wrong? Now provide the response and nothing else.
I'm looking for an instrumentation amplifier and most of them I see have very narrow Vcm range with respect to their supply. Consider using the AMP04 instrumentation amplifier : - Although not tested and guaranteed, the AMP04 inputs are biased in a way that they can amplify signals linearly with commonmode voltage as low as –0.25 volts below ground. This holds true over the industrial temperature range from –40°C to +85°C. It operates from a single 5 volt rail too and, the output can swing down to a couple of mV. However, a limitation is that from a 5 volt supply, the upper input commonmode range is limited to +3 volts. There may be better alternatives - I suggested this one because I use it in a design.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/428137', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/216102/']}
jdg_378299
stackexchange
llm_judgeable_groundtruth_similarity
345902
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have 3 categorical independent variable and all the variables are more than 2 categories like 6 locations, 4 types, 6 maturity levels. Can I still use multiple regression? If so, how can I do that? If not, is there a better test for my experiment? I was asked to clarify more:To explain more: I have 6 dependent variables and I want to see, for example if the location has an effect on my dependent variables. should I look at this effect one-by-one? Can I look the effect of 3 categorical independent variable on a dependent variable at the same time? Now provide the response and nothing else.
There is a conjugate distribution for $\theta$ in the model $X_1, X_2, \dotsc, X_n$ iid $\text{N}(\theta, \theta^2)$ with $\theta>0$, but it is not a commonly named distribution. First find the likelihood of this "normal parabola" model, as it is called:$$L(\theta) = \prod_{i=1}^n \frac1{\sqrt{2\pi}\,\theta}\,e^{-\frac12 \left(\frac{x_i-\theta}{\theta} \right)^2} = (2\pi)^{-n/2}\,\theta^{-n}\,e^{-\frac12\sum_{i=1}^n \left(\frac{x_i-\theta}{\theta} \right)^2}$$and (leaving out constants) this is proportional to$$L(\theta) \propto \theta^{-n} e^{-\theta^{-2}\sum_i x_i^2/2 + \theta^{-1}\sum_i x_i}$$Then we need a prior distribution on $\theta$ that can "absorb" these three factors. Using a form of Bayes' theorem,$$\pi(\theta | x) \propto f(x | \theta) \pi(\theta) \\= \theta^{-n} e^{-\theta^{-2}\sum_i x_i^2/2} e^{\theta^{-1}\sum_i x_i}\pi(\theta)$$So we need a prior density having some corresponding factors. We can try with a form which is a particular generalization of a generalized inverse gamma density$$ \pi(\theta) = K^{-1} \cdot \theta^{-\alpha-1}e^{-\frac{\beta}{\theta}-\frac{\gamma}{\theta^2}},\qquad \theta>0$$whose normalizing integral converges for $\alpha>0,\beta\in\mathbb{R}, \gamma>0$. The proportionality factor $K$ has a very complicated form which will be given at the end. This distribution will have a finite expectation for $\alpha>1$ and a finite variance for $\alpha>2$. Using this we find that $$\pi(\theta | x) \propto \theta^{-(n+\alpha)-1} e^{-(\beta-\sum_i x_i)/\theta } e^{-(\gamma+\sum_i x_i^2/2)/\theta^2}$$that is, the posterior has the same form as the prior.
I calculated the constant $K$ with maple: int( (theta^(-a-1))*exp(-b/theta - c/theta^2), theta=0..infinity ) assuming a>0,c>0,b,real; which gives the following result:$$ 1/2\,{\frac {{c}^{-a/2}}{\sqrt {\pi}} \cdot \\ \left( 1/4\,{\frac {{\pi}^{2}b}{\sqrt {c}\cos \left( 1/2\,\pi\,a \right) \Gamma \left( 2-a/2 \right) } \left( 1/2\,{\frac {{b}^{2}}{c}}-1+a \right) {\it LaguerreL} \left( 1/2-a/2,1/2, \\ 1/4\,{\frac {{b}^{2}}{c}} \right) }-1/8\,{\frac {{\pi}^{2}{b}^{3}}{{c}^{3/2}\cos \left( 1/2\,\pi\,a \right) \Gamma \left( 2-a/2 \right) }{\it LaguerreL} \left( 1/2-a/2,3/2,1/4\,{\frac {{b}^{2}}{c}} \right) }+ \\ 1/2\,{\frac {{\pi}^{2}}{\sin \left( 1/2\,\pi\,a \right) \Gamma \left( 3/2-a/2 \right) } \left( 1/2\,{\frac {{b}^{2}}{c}}+1 \right) {\it LaguerreL} \left( -a/2,1/2,1/4\,{\frac {{b}^{2}}{c}} \right) }- \\ 1/4\,{\frac {{\pi}^{2}{b}^{2}}{c\sin \left( 1/2\,\pi\,a \right) \Gamma \left( 3/2-a/2 \right) }{\it LaguerreL} \left( -a/2,3/2,1/4\,{\frac {{b}^{2}}{c}} \right) } \right) }$$For the ${\it LaguerreL}$ function see https://en.wikipedia.org/wiki/Laguerre_polynomials .
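Spelling out the conjugate update implicit in the last display (this just restates the derivation above in parameter form):

```latex
\pi(\theta \mid x) \;\propto\; \theta^{-\alpha'-1}\,
   e^{-\beta'/\theta \,-\, \gamma'/\theta^{2}},
\qquad
\alpha' = \alpha + n,\quad
\beta'  = \beta - \sum_{i=1}^{n} x_i,\quad
\gamma' = \gamma + \tfrac{1}{2}\sum_{i=1}^{n} x_i^{2}.
```

So conditioning on the data only shifts the three hyperparameters, which is exactly what conjugacy means here.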
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/345902', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/208074/']}
jdg_378300
stackexchange
llm_judgeable_groundtruth_similarity
10095956
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to run code on several cores (I tried both the snow and parallel packages). I have

cl <- makeCluster(2)
y <- 1:10
sapply(1:5, function(x) x + y) # Works
parSapply(cl, 1:5, function(x) x + y)

The last line returns the error: Error in checkForRemoteErrors(val) : 2 nodes produced errors; first error: object 'y' not found Clearly parSapply isn't finding y in the global environment. Any ways to get around this? Thanks. Now provide the response and nothing else.
The nodes don't know about the y in the global environment on the master. You need to tell them somehow.

library(parallel)
cl <- makeCluster(2)
y <- 1:10

# add y to function definition and parSapply call
parSapply(cl, 1:5, function(x, y) x + y, y)

# export y to the global environment of each node
# then call your original code
clusterExport(cl, "y")
parSapply(cl, 1:5, function(x) x + y)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10095956', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/160553/']}
jdg_378301
stackexchange
llm_judgeable_groundtruth_similarity
3386162
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have recently got into set theory, and just recently started dealing with proofs involving set inclusions. I tried tackling some problems and was successful in doing so, up until I got to the following question, which I, unfortunately, got stuck on: I know that to show that both sets are equal, it's required to show that both $S⊆T$ and $T⊆S$ , but I don't really know how to show it. I have tried to brainstorm and thought about solving for x in the first set, but it didn't help me much, and I got stuck. I am not exactly sure what am I missing... Could anybody please help me get on track? Now provide the response and nothing else.
You just need to verify the axioms of the definition of a group for $\mathscr{H}=(H, \otimes_{15}).$ The set $H$ is closed under $\otimes_{15}$ by inspection of the multiplication table. (It satisfies the Latin square property.) The identity is $1$ . The inverse of $4$ is itself. The inverse of $7$ is $13$ and vice versa . Associativity of $\otimes_{15}$ is inherited from that of ordinary multiplication. Hence $\mathscr{H}$ is a group.
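The multiplication table itself was not extracted with the question, but judging from the inverses named above the set in play is H = {1, 4, 7, 13} under multiplication mod 15. A quick brute-force check of the axioms, under that assumption:

```python
# Assumed from the answer above: H = {1, 4, 7, 13} with a (x)15 b = (a * b) mod 15.
H = {1, 4, 7, 13}

def mul15(a, b):
    return (a * b) % 15

# Closure: every product of two elements of H lands back in H.
assert all(mul15(a, b) in H for a in H for b in H)

# Identity: 1 is neutral.
assert all(mul15(1, a) == a for a in H)

# Inverses: 4 is self-inverse; 7 and 13 invert each other.
assert mul15(4, 4) == 1
assert mul15(7, 13) == 1

# Associativity is inherited from ordinary integer multiplication,
# so H is a group under multiplication mod 15.
```

Every check passes, matching the verification in the answer.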
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3386162', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/671740/']}
jdg_378302
stackexchange
llm_judgeable_groundtruth_similarity
4793420
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When the text of a UILabel gets truncated there are 3 dots inserted by default.Is it possible to change these characters or disable them? Now provide the response and nothing else.
I have written a custom truncating class that you can pop into your code wherever you like. Just use the method below: it will return true if truncation has taken place, and maxWidth can be left as 0 if you just want to use the label's default frame width. Pass a maxWidth smaller than the frame's width to shorten the text within the frame's bounds. Swift 2 (with some Swift 3 comments for converting)

usage:

Truncater.replaceElipsis(forLabel: label, withString: "???")
let didTruncate = Truncater.replaceElipsis(forLabel: label, withString: "1234", andMaximumWidth: 50) //maxWidth is not number of chars, but label width in CGFloat

class:

import UIKit

class Truncater {

    class func replaceElipsis(forLabel label: UILabel, withString replacement: String) -> Bool {
        return replaceElipsis(forLabel: label, withString: replacement, andMaximumWidth: 0)
    }

    class func replaceElipsis(forLabel label: UILabel, withString replacement: String, andMaximumWidth width: CGFloat) -> Bool {
        if(label.text == nil){
            return false
        }
        let origSize = label.frame;
        var useWidth = width
        if(width <= 0){
            useWidth = origSize.width //use label width by default if width <= 0
        }
        label.sizeToFit()
        let labelSize = label.text!.sizeWithAttributes([NSFontAttributeName: label.font]) //.size(attributes: [NSFontAttributeName: label.font]) for swift 3
        if(labelSize.width > useWidth){
            let original = label.text!;
            let truncateWidth = useWidth;
            let font = label.font;
            let subLength = label.text!.characters.count
            var temp = label.text!.substringToIndex(label.text!.endIndex.advancedBy(-1)) //label.text!.substring(to: label.text!.index(label.text!.endIndex, offsetBy: -1)) for swift 3
            temp = temp.substringToIndex(temp.startIndex.advancedBy(getTruncatedStringPoint(subLength, original: original, truncatedWidth: truncateWidth, font: font, length: subLength)))
            temp = String.localizedStringWithFormat("%@%@", temp, replacement)
            var count = 0
            while temp.sizeWithAttributes([NSFontAttributeName: label.font]).width > useWidth {
                count += 1
                temp = label.text!.substringToIndex(label.text!.endIndex.advancedBy(-(1+count)))
                temp = temp.stringByTrimmingCharactersInSet(NSCharacterSet.whitespaceCharacterSet()) //remove this if you want to keep whitespace on the end
                temp = String.localizedStringWithFormat("%@%@", temp, replacement)
            }
            label.text = temp;
            label.frame = origSize;
            return true;
        } else {
            label.frame = origSize;
            return false
        }
    }

    class func getTruncatedStringPoint(splitPoint: Int, original: String, truncatedWidth: CGFloat, font: UIFont, length: Int) -> Int {
        let splitLeft = original.substringToIndex(original.startIndex.advancedBy(splitPoint))
        let subLength = length/2
        if(subLength <= 0){
            return splitPoint
        }
        let width = splitLeft.sizeWithAttributes([NSFontAttributeName: font]).width
        if(width > truncatedWidth) {
            return getTruncatedStringPoint(splitPoint - subLength, original: original, truncatedWidth: truncatedWidth, font: font, length: subLength)
        } else if (width < truncatedWidth) {
            return getTruncatedStringPoint(splitPoint + subLength, original: original, truncatedWidth: truncatedWidth, font: font, length: subLength)
        } else {
            return splitPoint
        }
    }
}

Objective C

+ (bool) replaceElipsesForLabel:(UILabel*) label With:(NSString*) replacement MaxWidth:(float) width

class:

//=============================================Header=====================================================
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface CustomTruncater : NSObject
+ (bool) replaceElipsesForLabel:(UILabel*) label With:(NSString*) replacement MaxWidth:(float) width;
@end
//========================================================================================================

#import "CustomTruncater.h"

@implementation CustomTruncater

static NSString *original;
static float truncateWidth;
static UIFont *font;
static int subLength;

+ (bool) replaceElipsesForLabel:(UILabel*) label With:(NSString*) replacement MaxWidth:(float) width {
    CGRect origSize = label.frame;
    float useWidth = width;
    if(width <= 0)
        useWidth = origSize.size.width; //use label width by default if width <= 0
    [label sizeToFit];
    CGSize labelSize = [label.text sizeWithFont:label.font];
    if(labelSize.width > useWidth) {
        original = label.text;
        truncateWidth = useWidth;
        font = label.font;
        subLength = label.text.length;
        NSString *temp = [label.text substringToIndex:label.text.length-1];
        temp = [temp substringToIndex:[self getTruncatedStringPoint:subLength]];
        temp = [NSString stringWithFormat:@"%@%@", temp, replacement];
        int count = 0;
        while([temp sizeWithFont:label.font].width > useWidth){
            count++;
            temp = [label.text substringToIndex:(label.text.length-(1+count))];
            temp = [temp stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceCharacterSet]]; //remove this if you want to keep whitespace on the end
            temp = [NSString stringWithFormat:@"%@%@", temp, replacement];
        }
        label.text = temp;
        label.frame = origSize;
        return true;
    }
    else {
        label.frame = origSize;
        return false;
    }
}

+ (int) getTruncatedStringPoint:(int) splitPoint {
    NSString *splitLeft = [original substringToIndex:splitPoint];
    subLength /= 2;
    if(subLength <= 0)
        return splitPoint;
    if([splitLeft sizeWithFont:font].width > truncateWidth){
        return [self getTruncatedStringPoint:(splitPoint - subLength)];
    }
    else if ([splitLeft sizeWithFont:font].width < truncateWidth) {
        return [self getTruncatedStringPoint:(splitPoint + subLength)];
    }
    else {
        return splitPoint;
    }
}
@end
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4793420', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/550177/']}
jdg_378303
stackexchange
llm_judgeable_groundtruth_similarity
17444689
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have several svn and git repositories. I want to have a view (web) where I can see: All changes of user "foo" on day yyyy-mm-dd. The result should be sorted by time and it should not matter if a change was made in svn or git. If you know a tool or have a hint how to get this, please let me know it. Related: https://softwareengineering.stackexchange.com/questions/129152/are-there-open-source-alternatives-to-bitbucket-github-kiln-and-similar-dvcs Now provide the response and nothing else.
You are reading all rows into a list, then processing that list. Don't do that . Process your rows as you produce them. If you need to filter the data first, use a generator function:

import csv

def getstuff(filename, criterion):
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        yield next(datareader)  # yield the header row
        count = 0
        for row in datareader:
            if row[3] == criterion:
                yield row
                count += 1
            elif count:
                # done when having read a consecutive series of rows
                return

I also simplified your filter test; the logic is the same but more concise. Because you are only matching a single sequence of rows matching the criterion, you could also use:

import csv
from itertools import dropwhile, takewhile

def getstuff(filename, criterion):
    with open(filename, "rb") as csvfile:
        datareader = csv.reader(csvfile)
        yield next(datareader)  # yield the header row
        # first row, plus any subsequent rows that match, then stop
        # reading altogether
        # Python 2: use `for row in takewhile(...): yield row`
        # instead of `yield from takewhile(...)`.
        yield from takewhile(
            lambda r: r[3] == criterion,
            dropwhile(lambda r: r[3] != criterion, datareader))
        return

You can now loop over getstuff() directly. Do the same in getdata():

def getdata(filename, criteria):
    for criterion in criteria:
        for row in getstuff(filename, criterion):
            yield row

Now loop directly over getdata() in your code:

for row in getdata(somefilename, sequence_of_criteria):
    # process row

You now only hold one row in memory, instead of your thousands of lines per criterion. yield makes a function a generator function , which means it won't do any work until you start looping over it.
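The last sentence is worth seeing in action. A minimal, self-contained illustration (toy data, unrelated to the asker's CSV) of how a generator defers work and yields one item at a time:

```python
def numbers():
    print("starting")          # runs only when iteration begins
    for i in range(3):
        yield i

gen = numbers()                # nothing printed yet: no work has been done
first = next(gen)              # prints "starting", then yields 0
rest = list(gen)               # consumes the remaining items one at a time

assert first == 0
assert rest == [1, 2]
```

The same laziness is what keeps getstuff() from ever materializing the whole CSV.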
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/17444689', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/633961/']}
jdg_378304
stackexchange
llm_judgeable_groundtruth_similarity
61385525
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm fairly sure it is, but it is beyond my capabilities at this stage. As the title says I would like to include navbar as php for each page, to avoid repeating code, but need to modify it on each page to denote which is active. I've got the following navbar code: <nav class="navbar navbar-expand-sm navbar-light" style="background-color: #68B3E2;"> <a class="navbar-brand" href="#"><img class="header_logo" src="images/logo.png" width="120" height="120" alt="..."></a> <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarSupportedContent"> <ul class="navbar-nav mr-auto"> <li class="nav-item"> <a class="nav-link" href="/index.html">Home</a> </li> <li class="nav-item"> <a class="nav-link" href="/editing.html">Editing</a> </li> <li class="nav-item"> <a class="nav-link" href="/reamping.html">Reamping</a> </li> <li class="nav-item active"> <a class="nav-link" href="/mixing.html">Mixing <span class="sr-only">(current)</span></a> </li> <li class="nav-item"> <a class="nav-link" href="/mastering.html">Mastering</a> </li> </ul> <a class="right_btn btn-primary btn-lg" href="/buypage.html" role="button">BUY NOW</a> </div> </nav> Now provide the response and nothing else.
There are three ways you can achieve your goal.

For simultaneous requests with Axios, you can use axios.all():

axios.all([
  axios.post(`/my-url`, { myVar: 'myValue' }),
  axios.post(`/my-url2`, { myVar: 'myValue' })
])
.then(axios.spread((data1, data2) => {
  // output of req.
  console.log('data1', data1, 'data2', data2)
}));

You can use Promise.allSettled(). The Promise.allSettled() method returns a promise that resolves after all of the given promises have either resolved or rejected.

You can try to use Promise.all(), but it has the drawback that if any one request fails, the whole call fails and you only get the error (in the catch block). The first option is usually the best choice.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/61385525', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13171836/']}
jdg_378305
stackexchange
llm_judgeable_groundtruth_similarity
1855471
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to convert a tuples list into a nested list using Python. How do I do that? I have a sorted list of tuples (sorted by the second value): [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1), (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)] Now I want it to have like this (second value ignored and nested in lists): [ [1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2] ] I've seen other threads in here with map used for such things, but I don't completely understand it. Can anyone provide insight as to the 'correct' python way of doing this? Now provide the response and nothing else.
from operator import itemgetter
from itertools import groupby

lst = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1),
       (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]

result = [[x for x, y in group]
          for key, group in groupby(lst, key=itemgetter(1))]

groupby(lst, key=itemgetter(1)) generates groups of consecutive elements of lst within which all elements have the same 1st (counting from zero) item. The [x for x, y in group] keeps the 0th item of each element within each group.
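A quick sanity check of the snippet above on the asker's exact list; the result is the nested structure requested. Note that groupby only merges consecutive equal keys, which is why the input must already be sorted by the second item:

```python
from operator import itemgetter
from itertools import groupby

lst = [(1, 5), (5, 4), (13, 3), (4, 3), (3, 2), (14, 1), (12, 1),
       (10, 1), (9, 1), (8, 1), (7, 1), (6, 1), (2, 1)]

result = [[x for x, y in group]
          for key, group in groupby(lst, key=itemgetter(1))]

# Second values are ignored; each run of equal second values becomes one sublist.
assert result == [[1], [5], [13, 4], [3], [14, 12, 10, 9, 8, 7, 6, 2]]
```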
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1855471', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/222474/']}
jdg_378306
stackexchange
llm_judgeable_groundtruth_similarity
46209666
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Is there any way to delete queue messages without deleting Kafka topics? I want to delete queue messages when activating the consumer. I know there are several ways like: Resetting retention time $ ./bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic MyTopic --config retention.ms=1000 Deleting kafka files $ rm -rf /data/kafka-logs/<topic/Partition_name> Now provide the response and nothing else.
In 0.11 or higher you can run the bin/kafka-delete-records.sh command to mark messages for deletion. https://github.com/apache/kafka/blob/trunk/bin/kafka-delete-records.sh For example, publish 100 messages:

seq 100 | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytest

then delete 90 of those 100 messages with the new kafka-delete-records.sh command line tool:

./bin/kafka-delete-records.sh --bootstrap-server localhost:9092 --offset-json-file ./offsetfile.json

where offsetfile.json contains

{"partitions": [{"topic": "mytest", "partition": 0, "offset": 90}], "version": 1}

and then consume the messages from the beginning to verify that 90 of the 100 messages are indeed marked as deleted.

./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytest --from-beginning
91
92
93
94
95
96
97
98
99
100
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/46209666', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8606510/']}
jdg_378307
stackexchange
llm_judgeable_groundtruth_similarity
23852252
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is for Rails 4.04 and Ruby 2.1. I'd like to allow my users to have addresses. So I generated an address table and gave it columns (NUMBER, STREET, CITY, STATE). Now when I go to the following url, I'd like to be able edit this information: webapp.com/users/edit/ However I noticed it only showed the same old information (name, password, email). So I went to the view and added simple_fields for my new relationship so the view now looks like this: <%= simple_form_for(resource, as: resource_name, url: registration_path(resource_name), html: { method: :put }) do |f| %> <div class="form-inputs"> <%= f.input :email, required: true, autofocus: true %> <%= f.input :name, required: false %> <%= f.simple_fields_for :addresses do |a| %> <%= a.input :number %> <%= a.input :street %> <%= a.input :city %> <%= a.input :state %> <%= a.input :country %> <% end %><%end%> However it still doesn't generate the fields needed for address. I think this is because none of my users currently have any addresses attached to their account profile (because this migration was just created). However, in this case there should be blank fields generated so I can ADD address information. I feel like I need to do something in the Users#Edit action like this @users.each do |user| user.address.build end Is that right? How can I override the users controller because this was created by Devise and I don't actually have a users controller (I looked for it it and couldn't find it). UPDATE Ok, I'm getting closer. 
I had to create my own controller to override Devise's default registrations controller as explained in the second answer of this stack overflow article: Override devise registrations controller So now I am getting into that controller which currently looks like this: class Users::RegistrationsController < Devise::RegistrationsController def edit super endend However, when I get to my view, it's still SKIPPING the block that starts like this: <%= f.simple_fields_for :addresses do |a| %> However, if I go manually into my DB and add a record in the addresses table and link it to my currently_signed in user via the foreign key, then the block does not get skipped. So whats the best way to generate this connection if the address record does not yet exist? Is it the build method? e.g. user.address.build in the controller SOLUTION Yes, I needed to added this method to my new registrations_controller.rb file def edit if resource.addresses.size == 0 resource.addresses.build end super end It is now working the way I intended it. Now provide the response and nothing else.
require(ggplot2)
require(plyr)

my.data <- as.data.frame(rbind(
  cbind(rnorm(1e3),     1),
  cbind(rnorm(1e3) + 2, 2),
  cbind(rnorm(1e3) + 3, 3),
  cbind(rnorm(1e3) + 4, 4)))
my.data$V2 = as.factor(my.data$V2)

calculate the density depending on V2

res <- dlply(my.data, .(V2), function(x) density(x$V1))
dd <- ldply(res, function(z){
  data.frame(Values = z[["x"]],
             V1_density = z[["y"]],
             V1_count = z[["y"]] * z[["n"]])
})

add an offset depending on V2

dd$offest = -as.numeric(dd$V2) * 0.2 # adapt the 0.2 value as you need
dd$V1_density_offest = dd$V1_density + dd$offest

and plot

ggplot(dd, aes(Values, V1_density_offest, color = V2)) +
  geom_line() +
  geom_ribbon(aes(Values, ymin = offest, ymax = V1_density_offest, fill = V2), alpha = 0.3) +
  scale_y_continuous(breaks = NULL)
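The density-plus-offset trick is language-agnostic. A sketch of the equivalent idea in Python with numpy/scipy (an illustration, not the asker's R code; the 0.2 offset plays the role of dd$offest above):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Four groups with shifted means, mirroring the R example's rnorm calls.
groups = {g: rng.normal(loc=g, size=1000) for g in range(1, 5)}

xs = np.linspace(-4, 8, 400)
curves = {}
for g, data in groups.items():
    offset = -0.2 * g                      # shift each ridge down, as in the R code
    curves[g] = gaussian_kde(data)(xs) + offset

# Each curve sits strictly below the previous one at the far left tail,
# where the densities are ~0 and only the offsets differ.
assert curves[1][0] > curves[2][0] > curves[3][0] > curves[4][0]

# To draw with matplotlib: plt.plot(xs, curves[g]) and
# plt.fill_between(xs, -0.2 * g, curves[g], alpha=0.3) for each group g.
```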
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23852252', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1316464/']}
jdg_378308
stackexchange
llm_judgeable_groundtruth_similarity
14131939
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Disable Eclipse's error discovery. (Codan false positives) With GCC 4.8/Clang 3.3 C++ support so far ahead of what Eclipse is doing with syntax checking (in terms of feature support), Eclipse is marking many things as errors that are actually valid code (template aliasing happens to be my main problem, but there are other issues as well). When I compile (or attempt to anyway, having some issues, see here for more) whatever compiler happens to be doing the work does its job, and its errors get propagated through, but the code that it says is OK is still underlines (red and gold spiders for errors and warnings respectively), which makes it much harder to see what is going on. Is there a way to get rid of these errors/warnings? Even better would be a way to get rid of warnings only after compile attempt, and for as long as the relevant parts of the code don't change, but still leave them enabled in general. (Actually the best would be a plugin for Eclipse that supports all, or at least more of C++11 than Juno does by itself, but I can't seem to find that) Now provide the response and nothing else.
UPDATE: It's been a long time since I posted the original answer and it has become outdated. I double-checked today (Mar 15, 2014): in Eclipse Kepler (Build id 20130614-0229) it is sufficient to:

1. Add the -std=c++11 flag under Project > Properties > C/C++ Build > Settings, on the Tool Settings tab, under GCC C++ Compiler > Miscellaneous.
2. Then under Window > Preferences > C/C++ > Build > Settings, on the Discovery tab, choose CDT GCC Built-in Compiler Settings and add the -std=c++11 flag to "Command to get compiler specs". On my machine it looks like this after the change: ${COMMAND} -E -P -v -dD -std=c++11 "${INPUTS}"
3. Clean and rebuild both your project and your index (Project > C/C++ Index > Rebuild), as Eclipse tends to cache error messages and show them even though they are gone after changing the settings.

This works on my machine for sure. If it doesn't on yours, then you might want to give a shot to this: C++11 full support on Eclipse, although I am neither sure about the correctness of this approach nor was it necessary to do it on my machine. As of March 7, 2014 users claim that it helped them whereas the above approach didn't.

The original post, now outdated:

These bogus errors come from Codan. The whole thing is because Codan and the compiler have different understandings of C++, and Codan is buggy. Possible workarounds:

1. Click on the project properties, then C/C++ General > Code Analysis > Syntax and Semantic Errors and deselect whatever false errors you are getting. Drawback: you will most likely end up disabling most of the errors and warnings one by one, which is quite annoying.
2. Disable the static analysis completely at C/C++ General > Code Analysis > Syntax and Semantic Errors. You won't get the true errors from Codan but only later from the compiler.

Neither of them is a real solution, but at least you can still use the Eclipse IDE.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14131939', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/725709/']}
jdg_378309
stackexchange
llm_judgeable_groundtruth_similarity
54426177
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using SVG Font Awesome icons in a simple HTML doc, and I want add shadows to them. So I tried doing... <div style="text-shadow: 2px 2px 5px #000"> <i class="fa fa-search-plus"></i></div> ...but it doesn't work. So, what is the correct way to do that? Now provide the response and nothing else.
TL;DR Use CSS filter: drop-shadow(...) . DOCUMENTATION

The reason the text-shadow property does not work is that Font Awesome is not text when you use the svg version loaded by javascript. I tried loading it using css and it works.

Font Awesome loaded with CSS:

.fa-globe{text-shadow:3px 6px rgba(255,165,0,.75)}

<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.7.0/css/all.css">
<i class="fas fa-10x fa-globe"></i>

This will not work. Text-shadow has no effect and box-shadow makes a shadow around a square.

.fa-globe{text-shadow:1px 6px rgba(255,0,0,.5)}
.fa-globe{box-shadow:0 .5rem 1rem 0 rgba(255,0,0,.5), 0 .375rem 1.25rem 0 rgba(255,165,0,.19)}

<script defer src="https://use.fontawesome.com/releases/v5.7.0/js/all.js"></script>
<i class="fas fa-10x fa-globe"></i>

EDIT: You can add the filter: drop-shadow property and it will create a shadow around svg icons. DOCS: https://developer.mozilla.org/en-US/docs/Web/CSS/filter-function/drop-shadow

.fa-globe{filter:drop-shadow(20px 10px 1px red)}

<script defer src="https://use.fontawesome.com/releases/v5.7.0/js/all.js"></script>
<i class="fas fa-10x fa-globe"></i>
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/54426177', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10426320/']}
jdg_378310
stackexchange
llm_judgeable_groundtruth_similarity
51696478
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I implement the current time into Text format? I feel like it should be fairly simple but struggling to do so. Basic example; Now provide the response and nothing else.
Using the answer here and changing it a bit, you can try the following:

import 'package:flutter/material.dart';
import 'package:intl/intl.dart';

void main() {
  runApp(TabBarDemo());
}

class TabBarDemo extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    DateTime now = DateTime.now();
    String formattedDate = DateFormat('kk:mm:ss \n EEE d MMM').format(now);
    return MaterialApp(
      home: DefaultTabController(
        length: 3,
        child: Scaffold(
          appBar: AppBar(
            bottom: TabBar(
              tabs: [
                Tab(icon: Icon(Icons.access_alarm), text: "Alarm"),
                Tab(icon: Icon(Icons.access_time), text: "Clock"),
                Tab(icon: Icon(Icons.timer), text: "Timer"),
              ],
            ),
            title: Text('Tabs Demo'),
            backgroundColor: Colors.black,
          ),
          body: TabBarView(
            children: [
              Icon(Icons.access_alarm),
              Center(
                child: Text(
                  formattedDate,
                  textAlign: TextAlign.center,
                  style: new TextStyle(fontWeight: FontWeight.bold, fontSize: 25.0),
                ),
              ),
              Icon(Icons.timer),
            ],
          ),
        ),
      ),
    );
  }
}

Should give you this:
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/51696478', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9267159/']}
jdg_378311
stackexchange
llm_judgeable_groundtruth_similarity
17254855
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to understand how well C# and F# can play together. I've taken some code from the F# for Fun & Profit blog which performs basic validation returning a discriminated union type:

type Result<'TSuccess,'TFailure> =
    | Success of 'TSuccess
    | Failure of 'TFailure

type Request = {name:string; email:string}

let TestValidate input =
    if input.name = "" then Failure "Name must not be blank"
    else Success input

When trying to consume this in C#, the only way I can find to access the values against Success and Failure (failure is a string, success is the request again) is with big nasty casts (which is a lot of typing, and requires typing actual types that I would expect to be inferred or available in the metadata):

var req = new DannyTest.Request("Danny", "fsfs");
var res = FSharpLib.DannyTest.TestValidate(req);
if (res.IsSuccess)
{
    Console.WriteLine("Success");
    var result = ((DannyTest.Result<DannyTest.Request, string>.Success)res).Item;
    // Result is the Request (as returned for Success)
    Console.WriteLine(result.email);
    Console.WriteLine(result.name);
}
if (res.IsFailure)
{
    Console.WriteLine("Failure");
    var result = ((DannyTest.Result<DannyTest.Request, string>.Failure)res).Item;
    // Result is a string (as returned for Failure)
    Console.WriteLine(result);
}

Is there a better way of doing this? Even if I have to manually cast (with the possibility of a runtime error), I would hope to at least shorten access to the types ( DannyTest.Result<DannyTest.Request, string>.Failure ). Is there a better way? Now provide the response and nothing else.
Working with discriminated unions is never going to be as straightforward in a language that does not support pattern matching. However, your Result<'TSuccess, 'TFailure> type is simple enough that there should be some nice way to use it from C# (if the type were something more complicated, like an expression tree, then I would probably suggest using the Visitor pattern). Others already mentioned a few options - both how to access the values directly and how to define a Match method (as described in Mauricio's blog post). My favourite method for simple DUs is to define TryGetXyz methods that follow the same style as Int32.TryParse - this also guarantees that C# developers will be familiar with the pattern. The F# definition looks like this:

open System.Runtime.InteropServices

type Result<'TSuccess,'TFailure> =
    | Success of 'TSuccess
    | Failure of 'TFailure

type Result<'TSuccess, 'TFailure> with
    member x.TryGetSuccess([<Out>] success:byref<'TSuccess>) =
        match x with
        | Success value -> success <- value; true
        | _ -> false
    member x.TryGetFailure([<Out>] failure:byref<'TFailure>) =
        match x with
        | Failure value -> failure <- value; true
        | _ -> false

This simply adds extensions TryGetSuccess and TryGetFailure that return true when the value matches the case and return (all) parameters of the discriminated union case via out parameters. The C# use is quite straightforward for anyone who has ever used TryParse:

int succ;
string fail;
if (res.TryGetSuccess(out succ)) {
    Console.WriteLine("Success: {0}", succ);
} else if (res.TryGetFailure(out fail)) {
    Console.WriteLine("Failure: {0}", fail);
}

I think the familiarity of this pattern is the most important benefit. When you use F# and expose its types to C# developers, you should expose them in the most direct way (the C# users should not think that the types defined in F# are non-standard in any way).
Also, this gives you reasonable guarantees (when it is used correctly) that you will only access values that are actually available when the DU matches a specific case.
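For readers coming from other languages, the calling convention of this TryParse-style API can be sketched in Python as well. This is not the F# code above, just an illustration with made-up names, where a (matched, value) tuple stands in for C#'s bool return plus out-parameter:

```python
class Result:
    """Minimal sketch of a two-case Success/Failure union (hypothetical API)."""

    def __init__(self, tag, value):
        self._tag, self._value = tag, value

    @classmethod
    def success(cls, value):
        return cls("success", value)

    @classmethod
    def failure(cls, error):
        return cls("failure", error)

    def try_get_success(self):
        # the tuple plays the role of C#'s bool return + out-parameter
        if self._tag == "success":
            return True, self._value
        return False, None

    def try_get_failure(self):
        if self._tag == "failure":
            return True, self._value
        return False, None


ok, value = Result.success(42).try_get_success()
print(ok, value)  # True 42
ok, err = Result.failure("boom").try_get_success()
print(ok, err)  # False None
```

The caller always checks the boolean before trusting the value, exactly as with Int32.TryParse.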
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17254855', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/25124/']}
jdg_378312
stackexchange
llm_judgeable_groundtruth_similarity
19292924
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a bunch of scripts running in my terminal (and I don't have the ability to edit them) which output messages to the terminal. I would like my terminal to automatically color specific words in the output. For example, some of the scripts output FAIL when a test fails. How can I configure the terminal to color this specific word, any time it appears, to be in a specific color (for example, red). Now provide the response and nothing else.
It's probably easier to colour the words yourself, rather than getting the terminal to colour them for you. If you can't edit the scripts that create the output, can you filter them through something else? At the most likely to be available end of the scale you could pipe your output through grep : tail -F logfile | grep --color -P "FAIL|" This matches either "FAIL" or "", and highlights the matched portion of the string. You could further use something more specialised, as described in this blog post , for example.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19292924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/618265/']}
jdg_378313
stackexchange
llm_judgeable_groundtruth_similarity
429760
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to run an executable on a remote server, to which I connect via ssh -Y. I think the executable uses openGL. The server runs Ubuntu and the local system runs OSX. ssh -Y normally opens a display on my local machine by X11. This works well with other applications (firefox, matlab etc..) This time I get the message:

libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error of failed request:  GLXBadContext
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  6 (X_GLXIsDirect)
  Serial number of failed request:  35
  Current serial number in output stream:  34
X Error of failed request:  BadValue (integer parameter out of range for operation)
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  24 (X_GLXCreateNewContext)
  Value in failed request:  0x0
  Serial number of failed request:  34
  Current serial number in output stream:  35

I also ran glxinfo (I was trying things I found on forums) and got this:

name of display: localhost:11.0
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
X Error of failed request:  GLXBadContext
  Major opcode of failed request:  149 (GLX)
  Minor opcode of failed request:  6 (X_GLXIsDirect)
  Serial number of failed request:  23
  Current serial number in output stream:  22

Could someone help with this? Thank you! Now provide the response and nothing else.
EDIT May 5th, 2021: With the release of XQuartz 2.8.0, the configuration path appears to have changed from org.macosforge.xquartz.X11 to org.xquartz.X11. The same instructions still apply, just replace the old path with the new if you are from the future. Although the answers here have fixes, I'm submitting another one that I can use for future reference when this issue comes up every other year in my work :) It happens often when X forwarding (via SSH, Docker, etc). You need to allow OpenGL drawing (iglx), which by default is disabled on a lot of X11 servers (like XQuartz or the standard X11 server on Ubuntu). Some other logs you may see related to this are below.

XRequest.155: GLXBadContext 0x500003a
XRequest.155: BadValue (integer parameter out of range for operation) 0x0
XRequest.155: GLXBadContext 0x500003b
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
../../src/xcb_io.c:259: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.

The fix is to enable iglx. First, check if you have an XQuartz version that supports this feature. The latest as of this writing does, but it is deprecated so may not in the future. My version is XQuartz 2.7.11 (xorg-server 1.18.4). Next, run defaults write org.macosforge.xquartz.X11 enable_iglx -bool true . You should be able to confirm it is set by running:

$ defaults read org.macosforge.xquartz.X11
{
    "app_to_run" = "/opt/X11/bin/xterm";
    "cache_fonts" = 1;
    "done_xinit_check" = 1;
    "enable_iglx" = 1;   ####### this should be truthy
    "login_shell" = "/bin/sh";
    "no_auth" = 0;
    "nolisten_tcp" = 0;
    "startx_script" = "/opt/X11/bin/startx -- /opt/X11/bin/Xquartz";
}

Finally, restart xquartz (or your whole machine). You may need to re-run xhost + to disable security & authentication (fine for isolated machines, dangerous for internet-exposed). You should now be able to run your GUI applications as expected.
Hope this helps!
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/429760', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/170338/']}
jdg_378314
stackexchange
llm_judgeable_groundtruth_similarity
323764
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $S_n$ be the group of $n$ -permutations. Denote the number of inversions of $\sigma\in S_n$ by $\ell(\sigma)$ . QUESTION. Assume $n>2$ . Does this cancellation property hold true? $$\sum_{\sigma\in S_n}(-1)^{\ell(\sigma)}\sum_{i=1}^ni(i-\sigma(i))=0.$$ Now provide the response and nothing else.
Let $n$ be some integer greater than 2. Since the number of even and odd permutations in $S_n$ is the same we have $\sum_{\sigma\in S_{n}}(-1)^{\ell(\sigma)}=0$ therefore the contribution of $\sum_{\sigma\in S_{n}}(-1)^{\ell(\sigma)}\left(\sum_{i=1}^n i^2\right)$ is zero. It remains to show that $$\sum_{\sigma\in S_{n}}(-1)^{\ell(\sigma)}\sum_{i=1}^n i\sigma(i)=0.$$ Notice that if we write $P(x)=\det\left(x^{ij}\right)_{i,j=1}^n$ then this sum is simply $P'(1)$ . However the order of vanishing of $P$ at $1$ is $\binom{n}{2}$ (notice that the matrix is pretty much a Vandermonde matrix) and this is greater than $1$ since $n>2$ , therefore $P'(1)=0$ .
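For a sanity check, the identity can also be verified by brute force for small $n$. A quick sketch in plain Python (function names are my own):

```python
from itertools import permutations

def sign(perm):
    # (-1)^ell(sigma): parity of the number of inversions
    inv = sum(1 for a in range(len(perm)) for b in range(a + 1, len(perm))
              if perm[a] > perm[b])
    return -1 if inv % 2 else 1

def identity_sum(n):
    # sum over S_n of (-1)^ell(sigma) * sum_i i*(i - sigma(i)), 1-indexed
    return sum(sign(p) * sum(i * (i - s) for i, s in enumerate(p, start=1))
               for p in permutations(range(1, n + 1)))

print([identity_sum(n) for n in (2, 3, 4, 5)])  # [-1, 0, 0, 0]
```

Note how the sum vanishes exactly once $n > 2$, matching the order-of-vanishing argument: for $n = 2$ the determinant $P(x)$ vanishes at $1$ only to order $\binom{2}{2} = 1$, so $P'(1) \neq 0$ there.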
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/323764', 'https://mathoverflow.net', 'https://mathoverflow.net/users/66131/']}
jdg_378315
stackexchange
llm_judgeable_groundtruth_similarity
41979458
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't know what I am doing wrong as no errors are reported. I have a component class

import { Component, OnInit, ViewContainerRef } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  testhtml = "<p>Hello world</p>";
  constructor(){}
}

And in my template file, I do something like this:

<div class="blog-post">[innerHtml]="testhtml"</div>

But this doesn't seem to work. Is there something else I need to import? I am using angular-cli "version": "1.0.0-beta.26", Now provide the response and nothing else.
Angular uses {{property}} for interpolation of values. That is the way that you would display plain text in your div , like so Solution 1: <div class="blog-post">{{testhtml}}</div> But that will write out text, not HTML. For HTML, you will need to bind to the property Solution 2: <div class="blog-post" [innerHtml]="testhtml"></div> Note I moved the [innerHtml] to inside the div tag. Leaving out the square brackets would bind to the attribute, so you would need to interpolate again Solution 3: <div class="blog-post" innerHtml="{{testhtml}}"></div> The property binding ( Solution 2 ) is the preferred method.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/41979458', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6180766/']}
jdg_378316
stackexchange
llm_judgeable_groundtruth_similarity
28126532
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: bit more shinnangins. I have a project I'm trying to debug, using Grails 2.4.4 and GGTS 3.6.3. If I run-app the project, it works ok. When I try and set a break point and then debug, I get this error:

|Loading Grails 2.4.4
Error |There was an error loading the BuildConfig: argument type mismatch (NOTE: Stack trace has been filtered. Use --verbose to see entire trace.)
java.lang.IllegalArgumentException: argument type mismatch
    at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1270)
    at grails.util.BuildSettings.getForkConfig(BuildSettings.groovy:1515)
    at grails.util.BuildSettings.establishProjectStructure(BuildSettings.groovy:1389)
    at grails.util.BuildSettings.postLoadConfig(BuildSettings.groovy:1121)
    at grails.util.BuildSettings.loadConfig(BuildSettings.groovy:1116)
    at grails.util.BuildSettings.loadConfig(BuildSettings.groovy:1088)
    at grails.util.BuildSettings.loadConfig(BuildSettings.groovy)
    at grails.util.BuildSettings.loadConfig(BuildSettings.groovy:1074)
    at grails.util.BuildSettings.loadConfig(BuildSettings.groovy)
    at grails.util.BuildSettings$loadConfig$0.callCurrent(Unknown Source)
    at grails.util.BuildSettings.loadConfig(BuildSettings.groovy:1054)
    at org.springsource.loaded.ri.ReflectiveInterceptor.jlrMethodInvoke(ReflectiveInterceptor.java:1270)
Error |There was an error loading the BuildConfig: argument type mismatch

This looks like a repeat of this issue on an earlier version of Grails: previous question. The note there says it was claimed fixed - but it fails for me. Has anyone else got this issue? Now provide the response and nothing else.
Pay no attention to those. Use the various output functions, such as .format() instead. See the Moment.js guidance on this topic . In short, all fields that are prefixed with an underscore ( _ ) should be considered off limits. The moment internals have some quirks due to how the Date object works. All of the functions in the public API take them into account, but you probably don't want to figure them out yourself. Just to be complete though, I'll elaborate on their purpose: _i is the input used when create the moment object. It can be a string, a number, an array, or a Date object. However, if another moment object is passed in, the _i will be copied to that moments _i , and other properties will also be copied over. _i will never be a moment object. _i can also be undefined, in the case of creating the current moment with moment() . _d is the instance of the Date object that backs the moment object. If you are in "local mode", then _d will have the same local date and time as the moment object exhibits with the public API. The timestamps returned by getTime or valueOf will also match. If you are in "UTC mode", then _d will still have the same UTC date and time as the moment object exhibits with the public API. This may be confusing, as you'd need to look at getUTCDate and other UTC-based functions on _d in order to see them match. The timestamps will still match here as well. If you've changed the time zone offset, with the utcOffset , zone , or tz functions, then the _d value cannot stand alone . It must also consider if _offset is defined. If it is, then the timestamp backing the _d object has to first be adjusted by the amount of the offset. You can see this behavior in the implementation of the valueOf method here . Also, if you look at the string output of _d when a different offset or time zone has been applied, it will appear that _d is using the local time zone. 
However, that conversion to local time is simply a side effect of the toString function of the Date object. Moment does not use that result in its functions. This is the behavior for these two fields as of the current version (2.10.6 as I'm writing this). However, there are other fields as well, and since these are internal fields, it's entirely possible the behavior could change in a future version. In particular, see issue #2616 .
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/28126532', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1313852/']}
jdg_378317
stackexchange
llm_judgeable_groundtruth_similarity
1141162
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I need to construct an automaton that recognizes the following language of strings over the alphabet $\{a,b\}$:The set of all strings over alphabet $\{a,b\}$ with the subsequence $abba$. (A subsequence is like a substring, except it doesn't have to be consecutive characters. For example, $abaaba$ and $babbbba$ are in the language.) Now provide the response and nothing else.
Without integrals. If $n\ge4$$$\sum_{k=n}^{n^2}\frac1k>\sum_{k=n}^{2n-1}\frac1k+\sum_{k=2n}^{3n-1}\frac1k+\sum_{k=3n}^{4n-1}\frac1k>n\,\frac{1}{2\,n}+n\,\frac{1}{3\,n}+n\,\frac{1}{4\,n}=\frac12+\frac13+\frac14=\frac{14}{12}.$$
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1141162', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/177703/']}
jdg_378318
stackexchange
llm_judgeable_groundtruth_similarity
2896106
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In all programming languages supporting optional parameters that I have seen, there is a limitation that the optional parameters must appear at the end of the declaration. No required parameters may be included after an optional item. What is the reason for that? I guess it can be a compiler/interpreter requirement. Now provide the response and nothing else.
Well, if they were at the front, how would you detect when they've stopped being supplied? The only way would be if the variable type was different after the optional parameters. Bit of a weird requirement, so it makes sense that you just force them to be last (save the trouble of complex rules for detecting the "final" optional parameter). Besides, it's the most natural way to do it when calling the function.
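To illustrate with Python, which enforces the same trailing-optionals rule (the function below is a made-up example): arguments are matched left to right, so whatever the caller omits must necessarily sit at the end.

```python
def greet(name, greeting="Hello", punctuation="!"):
    # 'name' is required; the optionals trail it, so a positional call
    # like greet("Ada", "Hi") is unambiguous: missing args fill from the right
    return f"{greeting}, {name}{punctuation}"

print(greet("Ada"))             # Hello, Ada!
print(greet("Ada", "Hi"))       # Hi, Ada!
print(greet("Ada", "Hi", "?"))  # Hi, Ada?
```

If the optional parameter came first, the call greet("Ada") could not be resolved without extra rules, which is exactly the complexity the usual design avoids.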
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2896106', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/303756/']}
jdg_378319
stackexchange
llm_judgeable_groundtruth_similarity
650417
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm struggling to work out this circuit. In particular, I'm not sure what component the diode-like symbols represent. It is related to a voltage adder using an op-amp, which I understand. Now provide the response and nothing else.
The Zener diode looking symbol is probably representing a precision shunt reference. 1.25 volts is a common value for a shunt reference IC like this part : - Plenty of manufacturers make devices like this so I expect the circuit symbol is for a precision reference rather than a proper Zener diode. Added to this is that I've never seen a Zener diode that has a voltage rating below around 2 volts hence, I believe it to be a precision shunt reference chip like above. What is this circuit doing? I've added voltages to the image. So the output potentiometer is capable of producing an output voltage (with respect to 0 volts) of up to +/- 1.25 volts depending on the values of R2 and R4.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/650417', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/272190/']}
jdg_378320
stackexchange
llm_judgeable_groundtruth_similarity
4997729
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I currently have one large external javascript file that is used on the page. I currently wrap the code in a self-invoking function because I have other sections that are loaded using ajax tabs, so I want to avoid naming clashes with those other external js files. The code in the file is organized like below. I would like to split some of the code inside the plannerTab namespace into smaller files, yet still have it be part of that namespace. How could I do this? Or, do you guys recommend a different approach? Thanks!

// Document Ready
$(function (){
    // initializes table
    plannerTab.plannerTable.init();
});

var plannerTab = (function (){
    // All the code for the page is in here. I would like to extract sections
    // from in here and put them into their own external files while still keeping
    // the namespacing
}();

Update: How could I separate parts from within the plannerTab variable into smaller external js files, and still maintain that they are part of the plannerTab namespace? A small example below.

// Scope: plannerTab.config - Would like to store configuration into a separate file
var config = {
    selectors: {
        tableId: '#plannerTable',
        addTaskId: '#AddTask',
        editTaskSelector: '#plannerTable .edit',
        dateFilterSelector: '#plannerTable_TimeFilter li',
        deleteTaskClass: '.delete',
        searchFilter: '#plannerTable_filter',
        selectedDateFilter: 'selected-dateFilter',
        taskCellSelector: '#plannerTable .task-col',
        taskClass: '.taskId'
    },
    urls: {
        addTaskFormURL: '/Planner/Planner/LoadAddTaskForm',
        editTaskFormURL: '/Planner/Planner/LoadEditTaskForm',
        deleteTaskURL: '/Planner/Planner/DeleteTask',
        getTasksByDateRangeURL: '/Planner/Planner/GetTasksByDateRange',
        viewTaskURL: '/Planner/Planner/ViewTask'
    }
};

Now provide the response and nothing else.
Rolling back a migration and re-editing it is only safe if that migration is local and has not been pushed out to any repository. Editing a migration that others have applied may cause issues. The safest way to add columns to an existing table is to just create a new migration: rails g migration add_public_and_private_to_document public:string private:string If you use the add_[column_names]_to_[model] naming convention, rails will work out the appropriate table and create the migration you want. Have a read here for more info: http://guides.rubyonrails.org/migrations.html
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4997729', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/173432/']}
jdg_378321
stackexchange
llm_judgeable_groundtruth_similarity
27494296
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've embedded an SVG files data directly into my html. It shows in Chrome and Firefox, but in IE11 it doesn't show at all. The pastebin link to the SVG is http://pastebin.com/eZpLXFfD I've tried adding a META TAG but to no avail. At first I thought it might be a IE Compatibility view issue, but toggling that feature did not work either. <meta http-equiv="X-UA-Compatible" content="IE=8"> Any help? What else Information Can I provide to help answer this question Now provide the response and nothing else.
It sounds as though you're not in a modern document mode. Internet Explorer 11 shows the SVG just fine when you're in Standards Mode. Make sure that if you have an x-ua-compatible meta tag, you have it set to Edge, rather than an earlier mode. <meta http-equiv="X-UA-Compatible" content="IE=edge"> You can determine your document mode by opening up your F12 Developer Tools and checking either the document mode dropdown (seen at top-right, currently "Edge") or the emulation tab: If you do not have an x-ua-compatible meta tag (or header), be sure to use a doctype that will put the document into Standards mode, such as <!DOCTYPE html> .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/27494296', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3254959/']}
jdg_378322
stackexchange
llm_judgeable_groundtruth_similarity
1358848
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: In the book Linear Algebra Done Right I came across this example for the sum of vector spaces. Did he say the second $W + U$ is still equal to 1.7 because it doesn't matter whether you say $(x+y, y, 0)$ or $(x, y, 0)$ or am I missing something? It seems like this notation can get confusing if that is the case. Thanks, Jackson Now provide the response and nothing else.
It does matter whether you say $(x + y, y, 0)$ or $(x, y, 0)$ on the level of individual vectors. But when you consider the collections \begin{align*}\{(x, y, 0)&: x, y \in \Bbb F\} \quad \text{and} \\\{(x + y, y, 0)&:x, y \in \Bbb F\}\end{align*} the individual differences get wiped away; the two sets are equal, as collections of vectors (as others have pointed out). For a concrete example using $\Bbb F = \Bbb R$, it's easy to see that $(2, 3, 0)$ is in the first set. But it's slightly less easy to see that we can write $\big(2, 3, 0\big)$ as $\big((-1) + 3, 3, 0\big)$, and thus it belongs in the second set as well. Perhaps vowing to use different variables, i.e., writing the second set as $\{(x' + y', y', 0): x', y' \in \Bbb F\}$, will help you ignore the individual differences. It can be confusing, and it can take some getting used to. Linear algebra is often a transition into higher math classes, so it's good you're paying attention to, and coping with, these details now!
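One way to make the "wiping away" concrete is to enumerate both parameterisations over a finite field, where the collections are small enough to compare outright. A quick sketch in Python over $\Bbb F = \Bbb Z/5$ (my own choice of field, purely for illustration):

```python
p = 5  # work in Z/5 so both collections are genuinely finite
F = range(p)

first = {(x, y, 0) for x in F for y in F}
second = {((x + y) % p, y, 0) for x in F for y in F}

# for each fixed y, the map x -> x + y is a bijection of the field,
# so the two parameterisations sweep out exactly the same set
print(first == second)  # True
print(len(first))       # 25
```

The same bijection argument is what makes the two sets equal over any field, including the infinite ones.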
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1358848', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/220023/']}
jdg_378323
stackexchange
llm_judgeable_groundtruth_similarity
33322455
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have always used the express versions of Visual Studio for my Asp.Net projects. In the past, I would use a basic FTP synchronizer to push updated files (*.vb) to our server, then the changes would just show up on the website instantly. Now, for some reason, when I make changes to our *.vb files, they are not being reflected on the server after I synchronize over ftp, unless I build the project first. In addition, for our .Net 4.0 project, VS 2015 14.0.23107 is adding the following directories, with tons of stuff inside of them: /.vs /My Project /Obj There are loads of files within these directories which I have no idea what they do, and for some reason our project has taken on a completely different behavior. Now when we try to synchronize over FTP, there are a ton more files, and it seems that changing the actual underlying source doesn't work. We have to synchronize all the other files in the above directories, then we can see the changes. Is this a new way they are doing things, or is this because VS is now free and we are getting a better version where we have to "publish" not "synchronize?" Is there a way to go back to the simple way of doing things, where we just have a plain directory with our source files and sync them over to the server? Should we not do it this way? If not, what method should we be using and what files should we be pushing to the server? Now provide the response and nothing else.
I'll just promote my comment to an answer. There are several aspects to this question:

Use Publish. This feature has long been available in Visual Studio and works well. There are plenty of ways to customize it, and it supports a lot of technologies, including FTP. It's also a more convenient, systematic and reliable way of deploying than manually copying files to your FTP server. You can also share your publishing configuration among developers and store several of them. No loss here.

I don't quite get why you would want to copy the source (.vb) files to the server. What you usually want is to get the compiled DLLs + resources copied to your server, with the source files kept 'secure' on developers' machines. You can compile your sources on the server if you really need to, but then just plug it into source control, use MSBuild, etc. Anyway, build/publish actions are there to prepare the deployment files for you; manual copying is plain bad.

For the new folders:

Obj is anything but new; it's created to store some resources and intermediate build output. More here: What is obj folder generated for?
.vs stores user-specific settings, and you should ignore it as well as the obj folder. More here: Should I add the Visual Studio 2015 .vs folder to source control?
My Project is most likely your own folder, nothing related to VS.

To sum up, as long as you use ASP.NET 4/4.5, nothing changes. Only ASP.NET 5 introduces somewhat different rules for deployment. Most of the problems you have are easily solved using the right tool (Publish). It will know which files to ship (binaries + resources included in the project) and which to ignore (source files, caches, junk). It's convenient, less error-prone, and can do much more for you.
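For reference, an FTP publish profile is just an MSBuild XML file stored in the project. A minimal sketch is below — the server address, site path, and user name are placeholders of mine, not values from the question:

```xml
<!-- Properties/PublishProfiles/FtpDeploy.pubxml (illustrative example) -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>FTP</WebPublishMethod>
    <publishUrl>ftp://example.com/site/wwwroot</publishUrl>
    <UserName>deploy-user</UserName>
    <SiteUrlToLaunchAfterPublish>http://example.com</SiteUrlToLaunchAfterPublish>
  </PropertyGroup>
</Project>
```

Once such a profile exists, the same publish can be triggered from the IDE or from a build script, which is what makes it shareable across the team.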
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33322455', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2152289/']}
jdg_378324
stackexchange
llm_judgeable_groundtruth_similarity
14822342
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: If I have more than one enum , eg: enum Greetings{ hello, bye, how }; enum Testing { one, two, three }; How can I enforce the usage of the correct enum ?For example, I don't want someone to use hello when they should be using one for better debugging and readability. Now provide the response and nothing else.
In C, you can fake it with boilerplate code.

typedef enum { HELLO_E, GOODBYE_E } greetings_t;
struct greetings { greetings_t greetings; };
#define HELLO ((struct greetings){HELLO_E})
#define GOODBYE ((struct greetings){GOODBYE_E})

typedef enum { ONE_E, TWO_E } number_t;
struct number { number_t number; };
#define ONE ((struct number){ONE_E})
#define TWO ((struct number){TWO_E})

void takes_greeting(struct greetings g);
void takes_number(struct number n);

void test()
{
    takes_greeting(HELLO);
    takes_number(ONE);
    takes_greeting(TWO);   /* wrong enum: error */
    takes_number(GOODBYE); /* wrong enum: error */
}

This should not incur any overhead, and produces errors instead of warnings:

$ gcc -c -std=c99 -Wall -Wextra test2.c
test2.c: In function ‘test’:
test2.c:19: error: incompatible type for argument 1 of ‘takes_greeting’
test2.c:20: error: incompatible type for argument 1 of ‘takes_number’

Notice that I'm not using GNU extensions, and no spurious warnings are generated. Only errors. Also note that I'm using a version of GCC that's as old as dirt,

$ gcc --version
powerpc-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5493)
Copyright (C) 2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

This should work with any compiler with support for C99's compound literals.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14822342', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1061193/']}
jdg_378325
stackexchange
llm_judgeable_groundtruth_similarity
317632
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have multiple (9v 1500mA to 9v 3500mA) LED light fixtures that I have tested chaining a single fan (12v 5.2w and 12v 0.16A fans) and they seem to work under the brief test with enough air flow for my needs. Anything I should be careful about while doing this and any direction to better understand if there are concerns or how to calculate? I am a software engineer, just getting into electronics via raspberry pi projects, so still learning as I go. The actual project right now is regulating / climate control of multiple dart frog terrariums. Now provide the response and nothing else.
There is generally no harm in running a fan or any DC motor on lower voltage. If the voltage is too low, then the motor will not run (or continue running if already spinning but not start), because there is not enough torque generated to oppose the “cogging” effect of the layout of magnets inside the motor. (This effect is why when a fan is turned off you will see it bounce and stop at a particular angle, and you can feel it by pushing the blades around). In this case, the motor is stalled and in principle this could burn out the motor just like any other stall, but that is less likely as the voltage and therefore the stall current is lower. If you find that a fan promptly starts up at a given lower voltage, then you should feel free to use that lower voltage. If you need a fan to operate at a wide range of speeds including very slow ones, then rather than changing the supply voltage, it is better to use a fan which has a PWM (pulse-width modulation) input, which is driven by a microcontroller or other oscillator (e.g. a 555 chip) you provide and tells the brushless motor controller inside the fan what speed to run at. Because the controller has access to the full supply voltage when needed, it can start up and run at much lower speeds. Since you're interested in climate control, this might be a good future project — a temperature-controlled fan. It can be done with a single-purpose fan control IC, a microcontroller, or even with separate chips (an oscillator and a comparator at minimum). Finally, I'm guessing that you don't have this problem, but just to avoid any confusion for other readers: from context, when you say "LED power supply" I assume you mean a regular (constant-voltage) 9 V power supply, not a constant- current LED driver. You should not run a fan off the output of a LED driver.
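Since the project is about climate control, the temperature-to-fan-speed idea can be sketched in software too. Below is a minimal Python sketch of the control logic only; the thresholds and the linear ramp are illustrative assumptions of mine, not values from this answer. On a Raspberry Pi, the returned duty cycle would be fed to whatever PWM interface you use.

```python
def fan_duty(temp_c, t_min=24.0, t_max=30.0):
    """Map a temperature (deg C) to a PWM duty cycle in [0.0, 1.0].

    Below t_min the fan is off; above t_max it runs flat out;
    in between the duty cycle ramps linearly. The 24/30 degree
    thresholds are placeholder values, not recommendations.
    """
    if temp_c <= t_min:
        return 0.0
    if temp_c >= t_max:
        return 1.0
    return (temp_c - t_min) / (t_max - t_min)
```

For example, `fan_duty(27.0)` sits halfway up the ramp and returns 0.5 with the default thresholds.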
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/317632', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/156017/']}
jdg_378326
stackexchange
llm_judgeable_groundtruth_similarity
9210542
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: From the node.js documentation: Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file. Is there a way to invalidate this cache? i.e. for unit testing, I'd like each test to be working on a fresh object. Now provide the response and nothing else.
You can always safely delete an entry in require.cache without a problem, even when there are circular dependencies. Because when you delete, you just delete a reference to the cached module object, not the module object itself; the module object will not be GCed because, in case of circular dependencies, there is still an object referencing this module object. Suppose you have:

script a.js:

var b = require('./b.js').b;
exports.a = 'a from a.js';
exports.b = b;

and script b.js:

var a = require('./a.js').a;
exports.b = 'b from b.js';
exports.a = a;

when you do:

var a = require('./a.js');
var b = require('./b.js');

you will get:

> a
{ a: 'a from a.js', b: 'b from b.js' }
> b
{ b: 'b from b.js', a: undefined }

now if you edit your b.js:

var a = require('./a.js').a;
exports.b = 'b from b.js. changed value';
exports.a = a;

and do:

delete require.cache[require.resolve('./b.js')]
b = require('./b.js')

you will get:

> a
{ a: 'a from a.js', b: 'b from b.js' }
> b
{ b: 'b from b.js. changed value', a: 'a from a.js' }

The above is valid when running node.js directly. However, if you are using tools that have their own module caching system, such as jest, the correct call would be:

jest.resetModules();
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/9210542', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120820/']}
jdg_378327
stackexchange
llm_judgeable_groundtruth_similarity
44015292
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How to sort list of doubles by the fractional part of the double. E.g: For input <1.2, 2.3, 1.12, 5.1> , after sorting, the output should be <5.1, 1.12, 1.2, 2.3> Now provide the response and nothing else.
You can achieve this through the OrderBy() and Math.Truncate() methods, as in the following. x - Math.Truncate(x) gives you the fractional part (the digits after the decimal point), and OrderBy arranges the elements in ascending order of that key. Have a look at this example and try it yourself:

List<double> input = new List<double>() { 1.2, 2.3, 1.12, 5.1 };
input = input.OrderBy(x => x - Math.Truncate(x)).ToList();
Console.WriteLine(String.Join("\n", input));

Or you can use .OrderBy(x => x - (int)x) instead of .OrderBy(x => x - Math.Truncate(x)).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/44015292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/534377/']}
jdg_378328
stackexchange
llm_judgeable_groundtruth_similarity
1586280
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm working on the following problem: Let $A$ be the ring of $n \times n$ matrices over a field $\mathbb{F}$ . (a) Show that for any subspace $V$ of $\mathbb{F}^n$ , the set $I_V$ of matrices whose kernel contains $V$ is a left ideal of $A$ . (b) Show that every left ideal of $A$ is principal. I've done part $a)$ , but would like to know if you can prove $b)$ directly from $a)$ . It seems to me that given the left ideal $J$ , it should be the case that if $V$ is the intersection of the kernels of matrices in $J$ , then we should have $J = I_V$ . I can show that $I_V$ is principal, and certainly $J$ is contained in $I_V$ , but I can't show the other direction. I think you can prove $b)$ by considering the subspace $W$ of $\mathbb{F}^n$ consisting of the rows of elements of $J$ , which is of dimension $k \leq n$ say, and then showing that $J$ is generated by any matrix whose first $k$ rows are some basis for $W$ and whose final rows are all $0$ . But it seems that we should be able to do the problem just using $a)$ , and I'd like to know how to do it! Now provide the response and nothing else.
The main result is the following: Let $V$ be a finite-dimensional vector space. Then every left ideal in $\operatorname{End}(V)$ is of the form $I(W) \colon= \{T \in \operatorname{End}(V) \mid T = 0 \textrm{ on } W\}$. Indeed, let $I$ be a left ideal of $\operatorname{End}(V)$ and $$W = Z(I) \colon= \{ w \in V \mid T(w) = 0 \textrm{ for all } T \in I\}.$$ Let us show that $I = I(W)$, or, in other words, $$I = I(Z(I))$$ for every left ideal $I$. Note that by definition $$Z(I) = \bigcap_{T \in I} \ker(T).$$ Since $V$ is a finite-dimensional space, there exist finitely many $T_1$, $\ldots$, $T_m \in I$ so that $$W = Z(I) = \bigcap_{i=1}^m \ker(T_i).$$ Consider the operator $\tilde T = (T_1, \ldots, T_m)$ from $V$ to $V^m$, with kernel $\bigcap_{i=1}^m \ker(T_i) = W$. Let now $S \in \operatorname{End}(V)$ be $0$ on $W$. It follows (by a standard universality result) that there exists $L \colon \operatorname{Im}(\tilde T) \to V$ so that $$S = L \circ \tilde T.$$ Now $L$ can be extended to the whole of $V^m$. We know the form of linear maps from $V^m$ to $V$: they are given by $L = (L_1, \ldots, L_m)$ with $L_i \in \operatorname{End}(V)$. Therefore we have $$S = \sum_{i=1}^m L_i T_i,$$ so $S \in I$. ${\bf Added.}$ Let again $I$ be a left ideal, $W = Z(I)$. We know from the above that $I = I_{W}$. From the proof above we see that any $T_i$ with $\cap_{i=1}^m \ker(T_i) = W$ form a system of generators. So take $T$ so that $\ker T = W$. Then $T \in I$ and, moreover, $T$ generates $I$. Therefore $I$ is a principal ideal. Obs: Similarly (by duality, say) one shows that every right ideal $J$ of $\operatorname{End}(V)$ is also of the form $$J = J_W = \{ T \in \operatorname{End}(V) \mid \operatorname{Im} T \subset W\}.$$ Moreover, any family $T_i$ such that $\sum_{i=1}^m \operatorname{Im} T_i = \sum_{T \in J} \operatorname{Im} T$ generates $J$. Similarly, any right ideal is principal.
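Not part of the original answer, but the key factoring step can be sanity-checked numerically. Below, $V = \Bbb Q^3$, $W = \operatorname{span}(e_3)$, and $T = \operatorname{diag}(1,1,0)$, whose kernel is exactly $W$; any $S$ vanishing on $W$ (i.e., with last column zero) then lies in the left ideal generated by $T$, and here the witness $L$ with $S = LT$ can be taken to be $S$ itself:

```python
# Sanity check of S = L T for V = F^3, W = span(e3), ker T = W.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# T = diag(1, 1, 0) has kernel exactly span(e3).
T = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]

# Any S with S(e3) = 0 has last column zero; take a generic such S.
S = [[2, 5, 0], [7, 1, 0], [4, 9, 0]]

# L = S works: multiplying by diag(1,1,0) on the right zeroes the last
# column, which was already zero, so S T = S.
L = S
assert matmul(L, T) == S  # S lies in the left ideal generated by T
```

This is of course only a check of one instance, not a proof, but it illustrates why a single $T$ with $\ker T = W$ generates all of $I_W$.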
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1586280', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45760/']}
jdg_378329
stackexchange
llm_judgeable_groundtruth_similarity
28345780
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: To interactively test my python script, I would like to create a Namespace object, similar to what would be returned by argparse.parse_args(). The obvious way,

>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.parse_args()
Namespace()
>>> parser.parse_args("-a")
usage: [-h]
: error: unrecognized arguments: - a
Process Python exited abnormally with code 2

may result in the Python REPL exiting (as above) on a silly error. So, what is the easiest way to create a Python namespace with a given set of attributes? E.g., I can create a dict on the fly (dict([("a",1),("b","c")])) but I cannot use it as a Namespace:

AttributeError: 'dict' object has no attribute 'a'

Now provide the response and nothing else.
You can create a simple class:

class Namespace:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

and it'll work the exact same way as the argparse Namespace class when it comes to attributes:

>>> args = Namespace(a=1, b='c')
>>> args.a
1
>>> args.b
'c'

Alternatively, just import the class; it is available from the argparse module:

from argparse import Namespace
args = Namespace(a=1, b='c')

As of Python 3.3, there is also types.SimpleNamespace, which essentially does the same thing:

>>> from types import SimpleNamespace
>>> args = SimpleNamespace(a=1, b='c')
>>> args.a
1
>>> args.b
'c'

The two types are distinct; SimpleNamespace is primarily used for the sys.implementation attribute and the return value of time.get_clock_info(). Further comparisons:

- Both classes support equality testing; for two instances of the same class, instance_a == instance_b is true if they have the same attributes with the same values.
- Both classes have a helpful __repr__ to show what attributes they have.
- Namespace() objects support containment testing; 'attrname' in instance is true if the namespace instance has an attribute named attrname. SimpleNamespace does not.
- Namespace() objects have an undocumented ._get_kwargs() method that returns a sorted list of (name, value) attributes for that instance. You can get the same for either class using sorted(vars(instance).items()).
- While SimpleNamespace() is implemented in C and Namespace() is implemented in Python, attribute access is no faster because both use the same __dict__ storage for the attributes. Equality testing and producing the representation are a little faster for SimpleNamespace() instances.
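A quick demonstration of some of those comparison points, runnable on Python 3.3+:

```python
from types import SimpleNamespace

a = SimpleNamespace(x=1, y='c')
b = SimpleNamespace(x=1, y='c')

assert a == b                           # equality compares attribute dicts
assert vars(a) == {'x': 1, 'y': 'c'}    # attributes live in plain __dict__
# sorted (name, value) pairs, the portable equivalent of _get_kwargs():
assert sorted(vars(a).items()) == [('x', 1), ('y', 'c')]
```

Note that `'x' in a` would raise a TypeError here; for SimpleNamespace the containment test goes through `vars(a)` instead.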
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/28345780', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/850781/']}
jdg_378330
stackexchange
llm_judgeable_groundtruth_similarity
5584380
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating a desktop app that needs to install an SDF file to the SpecialFolder.CommonDocuments folder ( C:\Users\Public\documents in Win 7). In a Visual Studio desktop Deployment Project, how do I specify this folder in the File System Editor? I tried creating a custom folder in the File System Editor and pointing it to the CommonDocuments folder in the Properties pane, like this: Unfortunately, that specification won't build. I don't want to hard-code a folder path, since it varies between Windows versions. So, how do I specify the CommonDocuments folder in the FileSystem Editor? Thanks for your help. Now provide the response and nothing else.
I figured this one out and documented it for internal purposes. So, I'll just reprint that writeup here: Visual Studio deployment projects don't support the CommonDocuments folder directly, but we can add that support by using the Launch Conditions Editor, which has a "Search Target Machine" task. We will use the task to search the Windows registry for the path to the Public Documents folder and assign the result to an installer property (actually a variable) called COMDOCFOLDER. We will then use that variable to set the path to a custom folder in the File System Editor. Here are the steps to perform the task. First, open the Launch Conditions Editor in a Visual Studio deployment project: Right-click 'Search Target Machine' and select 'Add Registry Search' from the Context menu. A new item will appear (see 1 above). Name it Get Common Documents Folder . In the properties pane (See 2 above), set the 'Property' property (the name of our variable) to COMDOCFOLDER , set the 'Root' property (the Registry root key to search) to vsdrrHKLM , and set the 'RegKey' property (The Registry key to find) to SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders . Notice that we omitted the root key from the 'RegKey' Property. Finally, set the 'Value' property (The name of the value we are searching for within the registry key) to Common Documents . The COMDOCFOLDER variable will now hold the path to the Public Documents folder. Next, go to the File System Editor in the Visual Studio deployment project: Right-click 'File System on Target Machine' and select 'Add Special Folder > Custom Folder' from the context menu. A new item will appear (see 1 above). Rename the item Common Documents . In the properties pane (See 2 above), set the 'Property' property to COMDOCFOLDER . 
I set the 'DefaultLocation' property to the hard-coded value of the CommonDocuments folder for Windows Vista and later; this value would only be used if the COMDOCFOLDER property returned a null value, which shouldn't happen. The installer now has a Common Documents folder that points to the Public Documents folder, as specified in the Windows Registry. There is more information in this Microsoft Support How-To .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5584380', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/93781/']}
jdg_378331
stackexchange
llm_judgeable_groundtruth_similarity
44786669
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: SCENARIO: A user has a dropdown and he selects an option. I want to display that dropdown and make that option a default value which was selected by that user last time. I am using the selected attribute on option, but React generates a warning asking me to use defaultValue on select. For e.g.:

render: function() {
    let option_id = [0, 1];
    let options = [{name: 'a'}, {name: 'b'}];
    let selectedOptionId = 0;
    return (
        <select defaultValue={selectedOptionId}>
            {option_id.map(id =>
                <option key={id} value={id}>{options[id].name}</option>
            )}
        </select>
    );
}
});

Problem is that I don't know the selectedOptionId as the selected option could be any option. How would I find the defaultValue?

Now provide the response and nothing else.
React uses value instead of selected for consistency across the form components. You can use defaultValue to set an initial value. If you're controlling the value , you should set value as well. If not, do not set value and instead handle the onChange event to react to user action. Note that value and defaultValue should match the value of the option.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/44786669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4433498/']}
jdg_378332
stackexchange
llm_judgeable_groundtruth_similarity
41205844
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to modify an existing migration. Here is my current migration class:

class CreateLogForUserTable extends Migration
{
    public function up()
    {
        Schema::create('log_for_user', function (Blueprint $table) {
            $table->increments('id');
            $table->integer('user_id');
            $table->string('table_name');
            $table->string('error_message');
            $table->unsignedTinyInteger('error_code');
            $table->timestamps();
        });
    }

    public function down()
    {
        Schema::drop('log_for_user');
    }
}

I've executed the php artisan migrate command once. Now I need to add the ->nullable() method to the error_message column. So I edited my migration, something like this:

$table->string('error_message')->nullable();

But when I execute php artisan migrate again, it says: Nothing to migrate. How can I apply the new version of the migration?

Now provide the response and nothing else.
You should create a new migration using the command:

php artisan make:migration update_error_message_in_log_for_user_table

Then, in that created migration class, add this line, using the change method like this:

class UpdateLogForUserTable extends Migration
{
    public function up()
    {
        Schema::table('log_for_user', function (Blueprint $table) {
            $table->string('error_message')->nullable()->change();
        });
    }

    public function down()
    {
        Schema::table('log_for_user', function (Blueprint $table) {
            $table->string('error_message')->change();
        });
    }
}

To make these changes and run the migration, use the command:

php artisan migrate

and to roll back the changes, use the command:

php artisan migrate:rollback

You may roll back a limited number of migrations by providing the step option to the rollback command. For example, the following command will roll back the last five migrations:

php artisan migrate:rollback --step=5

See more about Modifying columns with Migration
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41205844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5259594/']}
jdg_378333
stackexchange
llm_judgeable_groundtruth_similarity
1282656
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was trying to find examples about socket programming and came upon this script: http://stacklessexamples.googlecode.com/svn/trunk/examples/networking/mud.py When reading through this script I found this line:

listenSocket.listen(5)

As I understand it - it reads 5 bytes from the buffer and then does stuff with it... but what happens if more than 5 bytes were sent by the other end? In another place of that script it checks input against 4 commands and sees if there is \r\n in the string. Don't commands like "look" plus \r\n make up for more than 5 bytes? Alan

Now provide the response and nothing else.
The following is applicable to sockets in general, but it should help answer your specific question about using sockets from Python. socket.listen() is used on a server socket to listen for incoming connection requests. The parameter passed to listen() is called the backlog; it specifies how many incoming connections the socket should accept and hold in a pending queue until you get around to calling accept(). That applies to connections that are waiting to connect to your server socket between the time you have called listen() and the time you have finished a matching call to accept(). So, in your example you're setting the backlog to 5 connections — it has nothing to do with reading 5 bytes. Note: if the backlog queue is full, the following connections (6th, 7th, etc.) will be dropped and the connecting socket will receive a connection error (something like a "host actively refused the connection" message).
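To make the backlog concrete, here is a minimal self-contained Python sketch (mine, not from the linked script): the server calls listen(5), a client connects before accept() is ever called, and the connection simply waits in the backlog queue until the server pops it off.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(5)                # queue up to 5 pending, un-accepted connections
port = srv.getsockname()[1]

# The client connects while the server has NOT called accept() yet;
# the connection sits in the backlog queue in the meantime.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))

conn, addr = srv.accept()    # pops the pending connection off the queue
conn.close()
cli.close()
srv.close()
```

How many bytes you then read per call is decided entirely by recv(), not by the listen() backlog.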
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1282656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/153399/']}
jdg_378334
stackexchange
llm_judgeable_groundtruth_similarity
14852831
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm doing some edge detection on a live video feed:

- (void)processImage:(Mat&)image;
{
    cv::resize(image, smallImage, cv::Size(288,352), 0, 0, CV_INTER_CUBIC);
    edgeDetection(smallImage);
    cv::resize(smallImage, image, image.size(), 0, 0, CV_INTER_LINEAR);
}

edgeDetection does some fairly heavy lifting, and was running at quite a low framerate with the video frame size of 1280x720. Adding in the resize calls dramatically decreased the framerate, quite the reverse of what I was expecting. Is this just because a resize operation is slow, or because I'm doing something wrong? smallImage is declared in the header thus:

@interface CameraController : UIViewController<CvVideoCameraDelegate>
{
    Mat smallImage;
}

There is no initialisation of it, and it works ok.

Now provide the response and nothing else.
Resizing an image is slow, and you are doing it twice for each processed frame. There are several ways to improve your solution, but you have to provide more details about the problem you are trying to solve. To begin with, resizing an image before detecting edges will feed the edge detection with less information, so fewer edges will be detected - or at least it will make them harder to detect. Also, the resizing algorithm used affects its speed; CV_INTER_LINEAR is the fastest for cv::resize if my memory does not fail - and you are using CV_INTER_CUBIC for the first resize. One alternative to resizing an image is to instead process a smaller region of the original image. For that you should take a look at OpenCV image ROIs (regions of interest). It is quite easy to do, and you have lots of questions on this site regarding those. The downside is that you will only be detecting edges in a region and not in the whole image; that might be fine, depending on the problem. If you really want to resize the images, OpenCV developers usually use the pyrDown and pyrUp functions when they want to process smaller images, instead of resize. I think it is because it is faster, but you can test it to be sure. More information about pyrDown and pyrUp in this link. About cv::resize algorithms, here is the list:

INTER_NEAREST - a nearest-neighbor interpolation
INTER_LINEAR - a bilinear interpolation (used by default)
INTER_AREA - resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moire’-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method.
INTER_CUBIC - a bicubic interpolation over a 4x4 pixel neighborhood
INTER_LANCZOS4 - a Lanczos interpolation over an 8x8 pixel neighborhood

Can't say for sure if INTER_LINEAR is the fastest of them all, but it is for sure faster than INTER_CUBIC.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/14852831', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/319618/']}
jdg_378335
stackexchange
llm_judgeable_groundtruth_similarity
974189
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to launch a background Python job from a bash script and then gracefully kill it with SIGINT. This works fine from the shell, but I can't seem to get it to work in a script.

loop.py:

#! /usr/bin/env python
if __name__ == "__main__":
    try:
        print 'starting loop'
        while True:
            pass
    except KeyboardInterrupt:
        print 'quitting loop'

From the shell I can interrupt it:

$ python loop.py &
[1] 15420
starting loop
$ kill -SIGINT 15420
quitting loop
[1]+  Done    python loop.py

kill.sh:

#! /bin/bash
python loop.py &
PID=$!
echo "sending SIGINT to process $PID"
kill -SIGINT $PID

But from a script I can't:

$ ./kill.sh
starting loop
sending SIGINT to process 15452
$ ps ax | grep loop.py | grep -v grep
15452 pts/3  R  0:08 python loop.py

And, if it's been launched from a script I can no longer kill it from the shell:

$ kill -SIGINT 15452
$ ps ax | grep loop.py | grep -v grep
15452 pts/3  R  0:34 python loop.py

I'm assuming I'm missing some fine point of bash job control.

Now provide the response and nothing else.
You're not registering a signal handler. Try the below. It seems to work fairly reliably. I think the rare exception is when it catches the signal before Python registers the script's handler. Note that KeyboardInterrupt is only supposed to be raised "when the user hits the interrupt key". I think the fact that it works for an explicit (e.g. via kill) SIGINT at all is an accident of implementation.

import signal

def quit_gracefully(*args):
    print 'quitting loop'
    exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGINT, quit_gracefully)
    try:
        print 'starting loop'
        while True:
            pass
    except KeyboardInterrupt:
        quit_gracefully()
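To see that an explicitly delivered SIGINT really does invoke a registered handler, here is a small self-contained check (Python 3 syntax, unlike the Python 2 snippet above; POSIX only, since it signals its own process):

```python
import os
import signal

received = []

def handler(signum, frame):
    # Record the signal instead of exiting, so we can inspect it.
    received.append(signum)

signal.signal(signal.SIGINT, handler)   # register before the signal arrives
os.kill(os.getpid(), signal.SIGINT)     # deliver SIGINT to ourselves

# CPython runs pending signal handlers in the main thread between
# bytecodes, so by this point the handler has already run.
assert received == [signal.SIGINT]
```

With the handler registered, no KeyboardInterrupt is raised at all; the handler runs instead, which is exactly why the kill.sh approach works once the script registers one.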
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/974189', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/120358/']}
jdg_378336
stackexchange
llm_judgeable_groundtruth_similarity
61849
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am horribly confused about Jordan's Curve Theorem (henceforth JCT). Could you give me some reason why the validity of this theorem should be in doubt? I mean, for anyone who trusts the eye, the theorem is obvious. Therefore answers like "do not trust the eye" are not going to help me. I am looking for answers along the following lines. If the JCT were true for the "obvious" reason, then it might lead to some contradiction someplace else. To be more concrete, I can give an analogy with another theorem for which I had similar feelings -- namely the unique factorization theorem for natural numbers -- which subsided when I learnt about Kummer primes. In case you think that I am being too demanding when I ask this question, here is another direction you could help me with. In that case, I would just like one or two quick sentences about your personal experience with Jordan's curve theorem -- kind of like when you had your aha moment with this theorem. Something like, "I see, now I know (or can guess) why proving it was such a big deal". Please reply when you get time -- I am horribly confused. Thanks for your patience,

Now provide the response and nothing else.
There is exactly one way in which one can convince oneself that a statement is not obvious: try to prove it and look at your attempts very, very critically. If you think you can come up with a proof of the curve theorem, edit it into the answer and we can help you dissect it :) Later. Asaf observes that it may be the case that you are referring to "intuitive obviousness". Well... I tend to think that when someone says something is intuitively obvious without having a specific proof in mind, he is just waving hands in words. But there are two observations one can make which are independent of that. First, the full Jordan curve theorem deals with arbitrary closed curves, and here the word "arbitrary" includes things that one usually does not think about, curves so complicated that one cannot make accurate pictures of them, so it is rather unlikely one has any intuition about them at all (at least, when encountering the theorem for the first time). This is a situation that comes up all the time: one thinks a statement is intuitively true only because one is not familiar with the cases where it is not clearly true at all. One's intuition is built upon one's experience, and since our experience is, by definition, limited, our intuition is limited, too. In any case, I would suggest you try to prove the version of the Jordan curve theorem which deals with piecewise linear curves, that is, with closed polygonal curves (with finitely many segments). In this more restricted situation, we have eliminated all the wildness continuous arcs can have and are left with a geometrically sensible situation. But! It is nonetheless quite not obvious how to prove the theorem in this simple situation either, as you will find out when you try. (This version can be proved without the machinery used to prove the general theorem, though.)
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/61849', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/5391/']}
jdg_378337
stackexchange
llm_judgeable_groundtruth_similarity
65233123
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Given the following chart created in plotly. I want to add the percentage values of each count for M and F categories inside each block. The code used to generate this plot:

arr = np.array([
    ['Dog', 'M'], ['Dog', 'M'], ['Dog', 'F'], ['Dog', 'F'],
    ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'M'],
    ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'F'],
    ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Dog', 'M']
])
df = pd.DataFrame(arr, columns=['A', 'G'])
fig = px.histogram(df, x="A", color='G', barmode="stack")
fig.update_layout(height=400, width=800)
fig.show()

Now provide the response and nothing else.
As far as I know histograms in Plotly don't have a text attribute. But you could generate the bar chart yourself and then add the percentage via the text attribute.

import numpy as np
import pandas as pd
import plotly.express as px

arr = np.array([
    ['Dog', 'M'], ['Dog', 'M'], ['Dog', 'F'], ['Dog', 'F'],
    ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'M'],
    ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'F'],
    ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Dog', 'M']
])
df = pd.DataFrame(arr, columns=['A', 'G'])

df_g = df.groupby(['A', 'G']).size().reset_index()
df_g['percentage'] = df.groupby(['A', 'G']).size().groupby(level=0).apply(lambda x: 100 * x / float(x.sum())).values
df_g.columns = ['A', 'G', 'Counts', 'Percentage']

px.bar(df_g, x='A', y=['Counts'], color='G',
       text=df_g['Percentage'].apply(lambda x: '{0:1.2f}%'.format(x)))
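As a sanity check on the percentage arithmetic (independent of any plotting library), the same within-group percentages can be computed with pandas alone. A minimal sketch using the question's sample data, with `transform` instead of `apply` to keep index alignment automatic:

```python
import pandas as pd

# Sample data from the question
arr = [['Dog', 'M'], ['Dog', 'M'], ['Dog', 'F'], ['Dog', 'F'],
       ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'F'], ['Cat', 'M'],
       ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'M'], ['Fox', 'F'],
       ['Dog', 'F'], ['Dog', 'F'], ['Cat', 'F'], ['Dog', 'M']]
df = pd.DataFrame(arr, columns=['A', 'G'])

# Count each (animal, sex) combination, then divide each count by
# the total for its animal group to get a within-group percentage
counts = df.groupby(['A', 'G']).size()
pct = counts / counts.groupby(level=0).transform('sum') * 100
```

For this data that yields, e.g., Cat/F = 80%, Cat/M = 20%, Fox/M = 75%, which is what should end up in the text labels.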
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65233123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9152509/']}
jdg_378338
stackexchange
llm_judgeable_groundtruth_similarity
200477
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am conducting multiple imputation by chained equations in R using the MICE package, followed by a logistic regression on the imputed dataset. I need to compute a 95% confidence interval about the predictions for use in creating a plot -- that is, the grey shading in the image below. I followed the approach described in the answer to this question... How are the standard errors computed for the fitted values from a logistic regression? ...which uses the following lines of code to yield the std.er of prediction for any specific value of the predictor:

o <- glm(y ~ x, data = dat)
C <- c(1, 1.5)
std.er <- sqrt(t(C) %*% vcov(o) %*% C)

But of course I need to adapt this code to the fact that I am using a model resulting from multiple imputation. In that context, I am not sure which variance-covariance matrix (corresponding to "vcov(o)" in the above example) I should be using in my equation to produce the "std.er". Based on the documentation for MICE I see three candidate matrices:

ubar - The average of the variance-covariance matrix of the complete data estimates.
b - The between-imputation variance-covariance matrix.
t - The total variance-covariance matrix.

http://www.inside-r.org/packages/cran/mice/docs/is.mipo

Based on trying all three, the b matrix seems patently wrong, but both the t and the ubar matrices seem plausible. Can anybody confirm which one is appropriate? Thank you. Now provide the response and nothing else.
The t matrix is the one to use in the way you describe. Eqs. 4 through 7 in the Dong & Peng paper that Joe_74 references correspond to the elements of the same names in the mipo object (documentation here), and so t is the accurate variance-covariance matrix for the pooled regression coefficients qbar you're actually using. ubar and b only matter here in that they are/were used to compute t. Presumably you'll be using more than one predictor, so here's a MWE for that, which should be easy to modify:

library(mice)

set.seed(500)
dat <- data.frame(y = runif(20, 0, .5),
                  x1 = c(runif(15), rep(NA, 5)),
                  x2 = runif(20, 0.5))
imp <- mice(dat)
impMods <- with(imp, lm(y ~ x1 + x2))
pooledMod <- pool(impMods)

# Generate some hypothetical cases we want predictions for
newCases <- data.frame(x1 = c(4, 7), x2 = c(-6, 0))
# Tack on the column of 1's for the intercept
newCases <- cbind(1, newCases)

# Generating the actual predictions is simple: sums of values times coefficients
yhats <- rowSums(sweep(newCases, 2, pooledMod$qbar, `*`))

# Take each new case and perform the standard operation
# with the t matrix to get the pred. err.
predErr <- apply(newCases, 1, function(X) sqrt(t(X) %*% pooledMod$t %*% X))

# Finally, put together a plot-worthy table of predictions with upper and lower bounds
# (I'm just assuming normality here rather than using t-distribution critical values)
results <- data.frame(yhats, lwr = yhats - predErr * 1.96, upr = yhats + predErr * 1.96)
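For intuition about how ubar, b, and t relate, Rubin's rules can be sketched in a few lines of plain numpy. This is purely illustrative (mice's pool() already does this for you); the function names are invented for the sketch. The key identity is that the total covariance is T = Ubar + (1 + 1/m) B:

```python
import numpy as np

def pool_rubin(qs, us):
    """Rubin's rules: pool m imputation-specific estimates and covariances.

    qs: (m, k) array of coefficient vectors, one row per imputed dataset.
    us: (m, k, k) array of their covariance matrices.
    Returns (qbar, t): pooled estimates and total covariance matrix.
    """
    qs, us = np.asarray(qs, float), np.asarray(us, float)
    m = qs.shape[0]
    qbar = qs.mean(axis=0)             # pooled point estimates ("qbar")
    ubar = us.mean(axis=0)             # within-imputation variance ("ubar")
    dev = qs - qbar
    b = dev.T @ dev / (m - 1)          # between-imputation variance ("b")
    t = ubar + (1 + 1 / m) * b         # total variance ("t")
    return qbar, t

def pred_se(x, t):
    # Standard error of the linear prediction x'q, using the total covariance,
    # i.e. the same sqrt(t(X) %*% t %*% X) operation as in the R code above
    x = np.asarray(x, float)
    return float(np.sqrt(x @ t @ x))
```

This also makes it visible why ubar alone would understate the uncertainty: it omits the between-imputation term entirely.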
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/200477', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/107811/']}
jdg_378339
stackexchange
llm_judgeable_groundtruth_similarity
23173012
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This seems really poorly documented. The documentation example just has a callback being passed to update. There is a link redirecting to Model.update here and the example shows the parameters of the callback are (err, numberAffected, raw). Does the Document#update callback pass the same parameters? I was hoping to get the updated document returned. My initial search was based on how to update a document in MongoDB, but even the answer there doesn't explain or even list the params of the callback. Now provide the response and nothing else.
Poor documentation of callback parameters is something that's plagued many node.js libraries for some reason. But MongoDB's update command (regardless of the driver) doesn't provide access to the updated doc, so you can be sure it's not provided to the callback. If you want the updated document, then you can use one of the findAndModify methods like findOneAndUpdate:

MyModel.findOneAndUpdate({_id: 1}, {$inc: {count: 1}}, {new: true}, function (err, doc) {
    // doc contains the modified document
});

Starting with Mongoose 4.0 you need to provide the {new: true} option in the call to get the updated document, as the default is now false, which returns the original.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23173012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1370927/']}
jdg_378340