source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4511586
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am adding tests to a Rails app that remotely stores files. I'm using the default Rails functional tests. How can I add file uploads to them? I have: test "create valid person" do post(:create, :person => { :avatar => fixture_file_upload('avatar.jpeg') }) end This for some reason uploads a Tempfile and causes the AWS/S3 gem to fail with: NoMethodError: undefined method `bytesize' for Tempfile Is there any way that I can get the test to use an ActionDispatch::Http::UploadedFile and perform more like it does when testing with the web browser? Is fixture_file_upload the way to test uploading files to a controller? If so, why doesn't it work like the browser? As a note, I really don't want to switch testing frameworks. Thanks!
Now provide the response and nothing else.
|
I use the s3 gem instead of the aws/s3 gem. The main reasons for this are no support for European buckets and the fact that development of aws/s3 seems to have stopped. If you want to test file upload, then using the fixture_file_upload method is correct; it maps directly to Rack::Test::UploadedFile.new (which you can use directly if the test file isn't in the fixtures folder). But I've also noticed that the behavior of Rack::Test::UploadedFile objects isn't exactly the same as that of ActionDispatch::Http::UploadedFile objects (the class of files uploaded through a browser). The basic methods (original_filename, read, size, ...) all work, but there are some differences when working with the file method. So limit your controller to these methods and all will be fine. Another possible solution is creating an ActionDispatch::Http::UploadedFile object and using that:

upload = ActionDispatch::Http::UploadedFile.new({
  :filename => 'avatar.jpeg',
  :type => 'image/jpeg',
  :tempfile => File.new("#{Rails.root}/test/fixtures/avatar.jpeg")
})
post :create, :person => { :avatar => upload }
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4511586', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/259900/']}
|
jdg_376541
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23504409
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying for my data organization final and I'm going over stacks and heaps because I know they will be on the final and I'm going to need to know the differences. I know what the Stack is and what the Heap is, but I'm confused about what a stack is and what a heap is. The Stack is a place in RAM where memory is stored; if it runs out of space, a stack overflow occurs. Objects are stored here by default, memory is deallocated when objects go out of scope, and it is faster. The Heap is a place in RAM where memory is stored; if it runs out of space, the OS will assign it more. For an object to be stored on the Heap it needs to be allocated with the new operator, and it will only be deallocated if told. Fragmentation problems can occur, it is slower than the Stack, and it handles large amounts of memory better. But what is a stack, and what is a heap? Is it the way memory is stored? For example, is a static array or static vector a stack type, and a dynamic array or linked list a heap type? Thank you all!
Now provide the response and nothing else.
|
"The stack" and "the heap" are memory lumps used in a specific way by a program or operating system. For example, the call stack can hold data pertaining to function calls, and the heap is a region of memory specifically used for dynamically allocating space. Contrast these with the stack and heap data structures. A stack can be thought of as an array where the last element in will be the first element out. Operations on this are called push and pop. A heap is a data structure that represents a special type of graph where each node's value is greater than that of the node's children. On a side note, keep in mind that "the stack" or "the heap" or any of the stack/heap data structures are not unique to any given programming language; they are simply concepts in the field of computer science.
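To make the data-structure senses of the two terms concrete, here is a minimal Python sketch: a plain list used as a LIFO stack, and the standard library's heapq module, which maintains a binary min-heap (note that heapq keeps the smallest value at the root, the mirror image of the max-heap convention described above):

```python
import heapq

# A stack: the last element pushed is the first one popped (LIFO).
stack = []
stack.append(1)    # push
stack.append(2)
stack.append(3)
top = stack.pop()  # pop -> 3

# A heap: heapq arranges the list as a binary min-heap,
# so heap[0] is always the smallest element.
heap = []
for value in [5, 1, 4, 2]:
    heapq.heappush(heap, value)
smallest = heapq.heappop(heap)  # -> 1; heap[0] is now 2
```

Both are pure data-structure concepts; neither has anything to do with where the Python objects themselves live in RAM.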
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23504409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3543339/']}
|
jdg_376542
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20045015
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Visual Studio 2012 with C++, developing a Qt application. I'm able to compile and debug it, but, somehow, no .dll file is in the Debug or Release folder. I've tried some of the solutions from other posts, but none worked. So, how can I make Visual Studio copy the required .dll files into the release folder? I think it should be an option somewhere. I'm just starting to think about copying them by hand.
Now provide the response and nothing else.
|
Too much bad advice here: a DLL cannot be a resource. Windows demands that code is stored in a separate executable file with a proper PE32 header. This permits it to create a memory-mapped file to map the file content into memory, allowing the code to be shared by multiple processes and keeping it out of the paging file, and to relocate the code when the DLL's base address is already in use. Simply use Project + Properties, Build Events, Post-Build Event to xcopy the DLLs. For example, if you stored the needed DLLs in the "dlls" subdirectory of your project, then this command will copy them, only when necessary:

xcopy /d /y "$(ProjectDir)dlls\*.*" "$(OutDir)"

Use it in both the Debug and Release configurations so you'll debug exactly what you'll ship.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20045015', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1695172/']}
|
jdg_376543
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11762398
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following data:

name  id  url
John  1   someurl.com
Matt  2   cool.com
Sam   3   stackoverflow.com

How can I write an SQL statement in Postgres to select this data into a multi-dimensional array, i.e.: {{John, 1, someurl.com}, {Matt, 2, cool.com}, {Sam, 3, stackoverflow.com}} I've seen this kind of array usage before in Postgres but have no idea how to select data from a table into this array format. Assuming here that all the columns are of type text .
Now provide the response and nothing else.
|
You cannot use array_agg() to produce multi-dimensional arrays, at least not up to PostgreSQL 9.4. (But the upcoming Postgres 9.5 ships a new variant of array_agg() that can!) What you get out of @Matt Ball's query is an array of records ( the_table[] ). An array can only hold elements of the same base type. You obviously have number and string types. Convert all columns (that aren't already) to text to make it work. You can create an aggregate function for this like I demonstrated to you here before.

CREATE AGGREGATE array_agg_mult (anyarray) (
  SFUNC = array_cat
 ,STYPE = anyarray
 ,INITCOND = '{}'
);

Call:

SELECT array_agg_mult(ARRAY[ARRAY[name, id::text, url]]) AS tbl_mult_arr
FROM tbl;

Note the additional ARRAY[] layer to make it a multidimensional array (2-dimensional, to be precise). Instant demo:

WITH tbl(id, txt) AS (
  VALUES
    (1::int, 'foo'::text)
   ,(2, 'bar')
   ,(3, '}b",') -- txt has meta-characters
)
, x AS (
  SELECT array_agg_mult(ARRAY[ARRAY[id::text, txt]]) AS t
  FROM tbl
)
SELECT *, t[1][3] AS arr_element_1_1, t[3][4] AS arr_element_3_2
FROM x;
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11762398', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/761294/']}
|
jdg_376544
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
599040
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If a customer arrives according to a Poisson process with rate $\lambda$ , how can I show that the time interval $X$ taken to receive $k$ customers is an Erlang- $k$ random variable with parameters $k$ and $\lambda$ ?
Now provide the response and nothing else.
|
Those apply to a log-normal distribution. The paper says "The evidence is in practice approximately log-normally distributed." If it has parameters $\mu=\mathbb{E}[\log Z]$ and $\sigma^2=\mathrm{Var}[\log Z]$ then: $\mathbb{E}[ Z] = \exp\left(\mu + \frac{\sigma^2}{2}\right)$ $\mathrm{Var}[Z]=(\exp(\sigma^2)-1)\exp(2\mu+\sigma^2)$ $\mathbb{E}[ Z^2] =\exp(2\mu+2\sigma^2)$ which leads to the desired $2\log(\mathbb{E}[Z])-\frac12\log(\mathbb{E}[Z^2]) = 2\mu+\sigma^2 - \mu-\sigma^2=\mu=\mathbb{E}[\log Z]$ $\log(\mathbb{E}[Z^2])-2\log(\mathbb{E}[Z]) = 2\mu +2\sigma^2-2\mu-\sigma^2 = \sigma^2=\mathrm{Var}[\log Z]$
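A quick numerical sanity check of the two recovery formulas above, plugging in the closed-form moments $\mathbb{E}[Z]=\exp(\mu+\sigma^2/2)$ and $\mathbb{E}[Z^2]=\exp(2\mu+2\sigma^2)$ for arbitrarily chosen example parameters (a Python sketch):

```python
import math

mu, sigma2 = 0.7, 1.3  # arbitrary example parameters

# Closed-form moments of a log-normal Z with E[log Z] = mu, Var[log Z] = sigma2
ez  = math.exp(mu + sigma2 / 2)      # E[Z]
ez2 = math.exp(2 * mu + 2 * sigma2)  # E[Z^2]

# The two identities derived above should give back mu and sigma2 exactly.
recovered_mu     = 2 * math.log(ez) - 0.5 * math.log(ez2)
recovered_sigma2 = math.log(ez2) - 2 * math.log(ez)
```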
|
{}
|
{'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/599040', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/375427/']}
|
jdg_376545
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17032293
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have searched for this question here but couldn't find it; please redirect me if we already have it on the site. I'm looking for a way to create a CTE which uses another CTE as the data to further limit. I have a CTE which creates a report for me, but I would like to narrow this report with another input using the existing CTE. I hope my question is clear.
Now provide the response and nothing else.
|
You can chain 2 (or more) CTEs together. For example:

with ObjectsWithA as
(
  select * from sys.objects where name like '%A%'
),
ObjectsWithALessThan100 as
(
  select * from ObjectsWithA where object_id < 100
)
select * from ObjectsWithALessThan100;

Or the same example, with more "spelled out" names/aliases:

with ObjectsWithA (MyObjectId, MyObjectName) as
(
  select object_id as MyObjIdAlias, name as MyNameAlias
  from sys.objects
  where name like '%A%'
),
ObjectsWithALessThan100 as
(
  select * from ObjectsWithA theOtherCte where theOtherCte.MyObjectId < 100
)
select lessThan100Alias.MyObjectId, lessThan100Alias.MyObjectName
from ObjectsWithALessThan100 lessThan100Alias
order by lessThan100Alias.MyObjectName;
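Chained CTEs are standard SQL and work the same way in other engines. As a runnable sketch (the table name and data here are invented for the demo, not taken from the original sys.objects example), using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (object_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO objects VALUES (?, ?)",
                 [(50, 'Alpha'), (150, 'Beta'), (70, 'Gamma'), (99, 'Apple')])

# The second CTE selects from the first, just like in the example above.
rows = conn.execute("""
    WITH ObjectsWithA AS (
        SELECT * FROM objects WHERE name LIKE '%A%'
    ),
    ObjectsWithALessThan100 AS (
        SELECT * FROM ObjectsWithA WHERE object_id < 100
    )
    SELECT object_id, name FROM ObjectsWithALessThan100 ORDER BY object_id
""").fetchall()
```

SQLite's LIKE is case-insensitive for ASCII by default, so every name containing an 'a' or 'A' passes the first filter, and the second CTE then drops the rows with object_id >= 100.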
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17032293', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2166856/']}
|
jdg_376546
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28254266
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to PHP, so don't hate me if I say things that don't make sense. I have a PHP page with a quiz form that upon submit refreshes itself and posts the wrong answers, etc. Now I want to add the functionality to send this result to my email. I managed to get it to send the mail, but I can't get the whole data. This is what checks and prints the questions and answers and (tries to) mail it:

if (isset($_POST['answers'])) {
    $Answers = $_POST['answers'];
    foreach ($Questions as $QuestionNo => $Value) {
        if ($Answers[$QuestionNo] != $Value['CorrectAnswer']) {
            echo $Value['Question'].'<br />';
            echo 'Your answer: ' . $Value['Answers'][$Answers[$QuestionNo]].'<br />';
            echo 'Correct answer: ' . $Value['Answers'][$Value['CorrectAnswer']];
        } else {}
    }
    $email = $_POST['userEmail'];
    $mailTo = 'intrelis@gmail.com';
    $mailFrom = 'From: <mail@mail.com>';
    $msgSubject = 'English level test from ' . $email;
    $msgBody = ;
    mail ($mailTo, $msgSubject, $msgBody, $xHeaders);
}

I want the same information that is printed on the page sent to my mail, but I don't know how. What do I put in $msgBody? I tried different ideas, but the body of the mail is always empty...
Now provide the response and nothing else.
|
Steven was right about the HttpContext being a runtime value. Its values are not even populated at the time the application is composed. This makes sense if you think about it, because the application should be initialized outside of any individual user context. However, Steven's solution only moved the problem to a different service. After all, the class that implements IUserContext will still need to take HttpContext as a dependency. The solution is to use an Abstract Factory to allow the HttpContext instance to be accessed at runtime instead of when the factory is wired up. Important: HttpContext is not an abstraction, so it cannot be swapped or mocked. To ensure we are dealing with an abstraction, Microsoft has provided the HttpContextBase abstract class and the default concrete type HttpContextWrapper. HttpContextBase has exactly the same interface as HttpContext. You should always use HttpContextBase as the abstract reference type within your services, not HttpContext. With those 2 things in mind, you can create a factory for your HttpContext, as follows:

public interface IHttpContextFactory
{
    HttpContextBase Create();
}

public class HttpContextFactory : IHttpContextFactory
{
    public HttpContextBase Create()
    {
        return new HttpContextWrapper(HttpContext.Current);
    }
}

Your MembershipService can then be modified to accept an IHttpContextFactory in its constructor:

public class MembershipService : IMembershipService
{
    private readonly IHttpContextFactory httpContextFactory;

    // This is called at application startup, but note that it
    // does nothing except get our service(s) ready for runtime.
    // It does not actually use the service.
    public MembershipService(IHttpContextFactory httpContextFactory)
    {
        if (httpContextFactory == null)
            throw new ArgumentNullException("httpContextFactory");
        this.httpContextFactory = httpContextFactory;
    }

    // Make sure this is not called from any service constructor
    // that is called at application startup.
    public void DoSomething()
    {
        HttpContextBase httpContext = this.httpContextFactory.Create();
        // Do something with HttpContext (at runtime)
    }
}

And you need only inject the HttpContextFactory at composition time:

kernel.Bind<IHttpContextFactory>()
      .To<HttpContextFactory>();
kernel.Bind<IMembershipService>()
      .To<MembershipService>();

This alone might not solve the entire issue, though. You need to ensure that the rest of your application does not try to use HttpContext before it is ready. In terms of DI, it means you can't use HttpContext in any constructor of types that are composed at application start, or in any service members that one of those constructors calls. To solve that, you may need to create additional abstract factories to ensure those services don't call members of IMembershipService until HttpContext is ready. See this answer for more information about how to accomplish that. Steven's solution also entailed creating a Facade around HttpContext. While this does not really help solve the problem at hand, I agree that this might be a good idea if your MembershipService (and perhaps other services) only uses a small number of members of HttpContext. Generally, this pattern is used to make a complex object simpler to work with (such as flattening it down to a few members that may be nested deep within its hierarchy). But you really need to weigh the extra maintenance of adding another type against the complexity of using HttpContext within your application (or the value of swapping out a section of it) to make that decision.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28254266', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3384569/']}
|
jdg_376547
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1577825
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using a Unix socket for data transfer (SOCK_STREAM mode). I need to send a string of more than 100k chars. First, I send the length of the string - it's sizeof(int) bytes:

length = strlen(s);
send(sd, length, sizeof(int));

Then I send the whole string:

bytesSend = send(sd, s, length);

but to my surprise "bytesSend" is less than "length". Note that this works fine when I send strings that aren't so big. Maybe there are some limitations on the system call "send" that I've been missing ...
Now provide the response and nothing else.
|
The send system call is supposed to be fast, because the program may have other useful things to do. Certainly you do not want to wait for the data to be sent out and the other computer to send a reply - that would lead to terrible throughput. So, all send really does is queue some data for sending and return control to the program. The kernel could copy the entire message into kernel memory, but this would consume a lot of kernel memory (not good). Instead, the kernel only queues as much of the message as is reasonable. It is the program's responsibility to re-attempt sending of the remaining data. In your case, use a loop to send the data that did not get sent the first time:

while (length > 0) {
    bytesSent = send(sd, s, length);
    if (bytesSent == 0)
        break;          // socket probably closed
    else if (bytesSent < 0)
        break;          // handle errors appropriately
    s += bytesSent;
    length -= bytesSent;
}

At the receiving end you will likely need to do the same thing.
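The same partial-write discipline applies in higher-level languages. Here is a minimal Python sketch of the retry loop over a local socket pair; Python's socket.send has the same semantics (it may queue fewer bytes than requested), and socket.sendall is the library's packaged version of this loop. The reader runs in a thread so the sender never blocks on a full kernel buffer:

```python
import socket
import threading

left, right = socket.socketpair()
data = b"x" * 100_000  # a large message; a single send() may accept only part

received = bytearray()

def reader():
    # Receiving needs the same loop: read until the peer closes.
    while True:
        chunk = right.recv(65536)
        if not chunk:
            break
        received.extend(chunk)

t = threading.Thread(target=reader)
t.start()

# The retry loop: keep calling send() until every byte is queued.
view = memoryview(data)
while view:
    sent = left.send(view)  # may be less than len(view)
    view = view[sent:]
left.close()
t.join()
```

Using a memoryview avoids copying the remaining buffer on every iteration.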
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1577825', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/180361/']}
|
jdg_376548
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24159
|
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I added two new lines to my .bashrc so that Bash can find my Go programs:

GOROOT=/Users/jay/work
PATH=$PATH:$GOROOT/bin

But as a result, when I launch Emacs, Emacs hangs for several minutes and I get this error:

Tramp: Opening connection for bin using ssh...
Tramp: Sending command `exec ssh -o ControlMaster=auto -o ControlPath='tramp.%C' -o ControlPersist=no -e none bin'
Tramp: Waiting for prompts from remote shell...
Tramp failed to connect. If this happens repeatedly, try `M-x tramp-cleanup-this-connection'
Tramp: Waiting for prompts from remote shell...failed
Tramp: Opening connection for bin using ssh...failed

In a previous question, I found that a path declaration in my .bashrc file caused Tramp to hang. The line was this:

export PATH="$PATH:$HOME/.rvm/bin" # Add RVM to PATH for scripting

That time, I resolved the issue by simply removing the line, since it didn't seem necessary. But this time I kind of need these path declarations. How do I resolve this error without deleting these path declarations from my .bashrc?
Now provide the response and nothing else.
|
I had this exact same problem on my Ubuntu 16.10 machine, but it had nothing to do with RVM (which I also have). It was related to my custom prompt (PS1). This fixed it: https://stackoverflow.com/questions/6954479/emacs-tramp-doesnt-work Essentially, in your .bashrc:

case "$TERM" in
  "dumb")
    export PS1="> "
    ;;
  xterm*|rxvt*|eterm*|screen*)
    tty -s && export PS1="some crazy prompt stuff"
    ;;
esac
|
{}
|
{'log_upvote_score': 4, 'links': ['https://emacs.stackexchange.com/questions/24159', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/2494/']}
|
jdg_376549
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15734374
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using ASP.NET Web API, and I like the ability to add attributes to specify levels of access to the API controllers, like this:

[Authorize]
public IEnumerable<Activity> Get()

So far so good, but when I use roles the concept breaks apart:

[Authorize(Roles = "Manager")]
public IEnumerable<Activity> Get()

My user may have logged on to the system a while back, and then at some point they hit a resource that is "forbidden" to them. There is no sense in the user attempting to log on again, since their rightful account does not have access to that URL. But currently they get a 401 (Unauthorized) instead of a 403 (Forbidden), as if they had logged on with the wrong account. But the user only has one account, and it is not intended that users ask for an account that belongs to someone else. Has anyone else dealt with this problem? Does anyone know how to fix this? I am more than willing to write the code to fix it, but I am currently at a loss on where to start.
Now provide the response and nothing else.
|
Reading Parv's suggestion, I created the following custom filter called [WebApiAuthorize]. The key is the HandleUnauthorizedRequest() method. When code is executing inside this method, it is because the user is unauthorized "for some reason" .... so now we just determine why, and then either call the base method for the default behavior (return 401), or return our own response with a 403. As you can see, it returns 403 when appropriate (authenticated, but not authorized).

public class WebApiAuthorizeAttribute : AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(HttpActionContext ctx)
    {
        if (!ctx.RequestContext.Principal.Identity.IsAuthenticated)
            base.HandleUnauthorizedRequest(ctx);
        else
        {
            // Authenticated, but not AUTHORIZED. Return 403 instead!
            ctx.Response = new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden);
        }
    }
}

To use it, just put the custom filter on a controller or action like this:

[WebApiAuthorize(Roles = "YourRoleA,YourRoleB")]
public class AdminController : ApiController
{
    public List<Admin> GetAdmins() { ... }
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15734374', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/937703/']}
|
jdg_376550
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9818570
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
class Foo(bar: String) {
  import Foo.Bar
  def this() = this(Bar) // this line fails; it seems I can only do
                         // def this() = this(Foo.Bar)
}

object Foo {
  val Bar = "Hello Bar"
}

Basically, how do I use Bar after I import Foo.Bar? Do I really have to write Foo.Bar every single time?
Now provide the response and nothing else.
|
Secondary constructors have outer scope to prevent you from doing something silly like this:

class Silly(foo: String) {
  val bar = 123
  def this() = this(bar.toString)
}

where you try to pass a parameter to the constructor... after creating it in the constructor. Unfortunately, this means that import Foo.Bar is not in scope for that line. You'll have to use the full path Foo.Bar. For everything in the class except the additional constructors, Foo.Bar will be in scope as Bar.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9818570', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1024089/']}
|
jdg_376551
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
73152
|
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Russian State has been clear about its motivations in the recent Ukrainian conflict, notably its objection to NATO expansion. However, much Western media, Western discussion, and even Western politicians disregard Russia's stated motivations as "Russian propaganda". This phenomenon is not unique to Russia: I've noticed that European countries will e.g. have marches condemning Israel when Israel is attacked, selectively adhering to specific events yet ignoring other events. Similarly, we've seen this happen with e.g. the United States' attacks on Afghanistan and Iraq, NATO's attack on Libya, and many other conflicts from recent memory to before I was born. I would say that presenting both sides of a conflict is the exception, not the norm. Rather, many conflicts are narrated as "bad guys" vs "good guys" by the media, and more importantly treated as such by governments whether they interfere or not. I can understand that some issues may look different from different points of view. Back to the Russian-Ukrainian example, surely the West would see NATO as not "expanding", but rather as "welcoming new members". But why is the Russian point of view disregarded so easily by Western politicians? Is not the first step of conflict resolution to understand the other side's concerns? I see these concerns outright dismissed by Western politicians (and media): Ukraine joining NATO; Nazis in Ukrainian politics and military; potential Ukrainian war crimes. To be clear, I am not excusing Russian actions in the current conflict. This question concerns the general phenomenon of politicians' disregard of "the enemy"'s point of view, using the specific example of disregard in Western government discourse of the Russian point of view (especially voiced Russian concerns before and after the invasion). This prevents practical application of regard for such concerns, which e.g. could have possibly prevented the invasion of Ukraine by addressing Russian concerns at the NATO-Russia meeting in January.
Now provide the response and nothing else.
|
President Putin is jumping between different explanations/justifications for the invasion, and Western analysts believe that the true goal -- which is one of the stated goals -- is to undo the dissolution of the Soviet Union and to reintegrate Ukraine into Russia as the Soviet successor (Reuters). There is also analysis that the current Russian government (not the Russian people) is fundamentally threatened by a Russian-speaking, Slavic nation which embraces Western-style democracy and tells their in-laws across the border how their elections are going. But the West cannot defuse this threat without denying the universal scope of the values it proclaims to embrace. (RAND) So NATO could not have 'resolved Russian concerns' in 2022 or 2021 or earlier without a repeat of the Molotov-Ribbentrop Pact or perhaps a replay of Yalta. Russia believes that it needs a sphere of buffer states to be secure; the West doesn't want to abandon states into that buffer. If it wasn't about NATO, it would have been about EU membership or association; if it wasn't about Ukraine, it would have been about Georgia, or Moldova, or the Baltics.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://politics.stackexchange.com/questions/73152', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/5736/']}
|
jdg_376552
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
39351
|
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm curious why:

(equal (make-symbol "foo") (make-symbol "foo"))

ends up being nil, while:

(equal 'foo 'foo)

is t. Do I have a misconception of symbols? Aren't they just strings without double-quotes?
Now provide the response and nothing else.
|
Do I have a misconception of symbols? Aren't they just strings without double-quotes? You do indeed have a misconception. A symbol is a lisp object containing numerous properties (including its name, its variable/value slot, and its function slot; but you can in fact set arbitrary properties on them, and many symbols have more than just the above). A symbol (object) can be referred to in code with a textual name (in which case the lisp reader interns that text to obtain the canonical symbol of that name); so you will usefully improve your understanding of Lisp if you understand that a symbol is not its name, but rather that the name is just a way of accessing a more complicated object 1 . Furthermore, it is entirely valid (and useful) to be able to have more than one symbol with a given name, and make-symbol facilitates this by returning a new, un-interned symbol. Therefore it is expected and necessary that (equal (make-symbol "foo") (make-symbol "foo")) is nil, as you are comparing two independent objects (both of which happen to have the name "foo" -- and neither of which is the canonical symbol foo). However, most of the time you do actually want to obtain the single, canonical symbol of a given name, and this is what intern gives you:

ELISP> (eq 'foo (intern "foo"))
t

For more details, see C-h i g (elisp)Creating Symbols RET

1 Beyond this you will at some point want to understand what the lisp reader is doing in general, and the different things which are being processed in the "read" vs "eval" phases of lisp execution. In brief, the "read" phase converts the code (text) you write into lisp objects, and the "eval" phase works on those objects (rather than on the text that you wrote). You don't necessarily need to know this, but certain things may fall into place more easily later on if you're at least vaguely aware of this distinction.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://emacs.stackexchange.com/questions/39351', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/15971/']}
|
jdg_376553
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3031503
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Show that: $$\lim_{n\to\infty}\frac{1}{n}(1+\sqrt2+\dots + \sqrt{n}) = +\infty$$ I've tried the following way. Consider the following sum: $$\sqrt n + \sqrt{n-1} + \dots + \sqrt{n-\frac{n}{2}} + \dots + \sqrt{2} + 1$$ Now if we take only $n\over 2$ terms of the sum we obtain that: $$\sqrt n + \sqrt{n-1} + \dots > {n \over 2} \sqrt{n\over 2}$$ Let: $$x_n = {1 \over n}(1 + \sqrt{2} + \dots + \sqrt{n}),\ \ n\in \Bbb N$$ Using the above we have that: $$x_n > {1\over n} {n\over 2}\sqrt{n\over 2} = {1\over 2}\sqrt{n \over 2}$$ Now taking the limit for the RHS, it's obvious that: $$\lim_{n\to\infty}{1\over2}\sqrt{n\over2} = +\infty$$ Which implies: $$\lim_{n\to \infty}x_n = + \infty$$ Have I done it the right way? I would also appreciate alternative ways of showing that limit. Thanks!
Now provide the response and nothing else.
|
That seems fine. A more straightforward alternative way is by Stolz-Cesaro , that is $$\frac{1+\sqrt2+\dots + \sqrt{n+1}-(1+\sqrt2+\dots + \sqrt{n})}{n+1-n}=\sqrt{n+1}$$ As another alternative, we can use AM-GM $$\frac{1}{n}(1+\sqrt2+\dots + \sqrt{n}) \ge \sqrt[2n]{n!}$$
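A quick numerical illustration of the bound $x_n > \frac12\sqrt{n/2}$ derived in the question (a Python sketch, not a proof):

```python
import math

def x(n):
    # x_n = (1 + sqrt(2) + ... + sqrt(n)) / n
    return sum(math.sqrt(k) for k in range(1, n + 1)) / n

# The lower bound from the question holds, and x_n keeps growing
# (asymptotically x_n ~ (2/3) * sqrt(n)):
for n in [10, 100, 10_000]:
    assert x(n) > 0.5 * math.sqrt(n / 2)
```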
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3031503', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/53017/']}
|
jdg_376554
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23073346
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
http://jsbin.com/huzem/1/edit?html,css,output In the above site, how do I extend the border to the bottom of the page, compared to where it ends now (right at the edge of the content)? Also, is there a way to make the border line up on the edges of the right and left sides of the screen without using negative margin values, such as I did by setting margin-right and margin-left to -4%?
Now provide the response and nothing else.
|
To observe paths in an object, you need to use an observe block:

Polymer('x-element', {
  observe: {
    'model.title': 'modelUpdated',
    'model.text': 'modelUpdated',
    'model.slug': 'modelUpdated'
  },
  ready: function() {
    this.model = {
      title: this.noteTitle,
      text: this.noteText,
      slug: this.noteSlug
    };
  },
  modelUpdated: function(oldValue, newValue) {
    var value = Path.get('model.title').getValueFrom(this);
    // newValue == value == this.model.title
  }
});

http://www.polymer-project.org/docs/polymer/polymer.html#observeblock
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/23073346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3316430/']}
|
jdg_376555
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1290204
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $z_i$ be complex numbers such that $|z_i| = 1$ . Prove that : $$ z\, :=\, \frac{z_1+z_2+z_3 +z_1z_2+z_2z_3+z_1z_3}{1+z_1z_2z_3} \in \mathbb{R} $$ This problem was featured on my son's final exam today, I tried helping him with this, but I guess I got a bit rusty myself. Any ideas ?
Now provide the response and nothing else.
|
Since $|z_i|=1\iff z_i\overline{z_i}=1$, one has$$\begin{align}\overline{\left(\frac{z_1+z_2+z_3+z_1z_2+z_2z_3+z_3z_1}{1+z_1z_2z_3}\right)}&=\frac{\overline{z_1+z_2+z_3+z_1z_2+z_2z_3+z_3z_1}}{\overline{1+z_1z_2z_3}}\\&=\frac{\overline{z_1}+\overline{z_2}+\overline{z_3}+\overline{z_1z_2}+\overline{z_2z_3}+\overline{z_3z_1}}{1+\overline{z_1z_2z_3}}\\&=\frac{\frac{1}{z_1}+\frac{1}{z_2}+\frac{1}{z_3}+\frac{1}{z_1z_2}+\frac{1}{z_2z_3}+\frac{1}{z_3z_1}}{1+\frac{1}{z_1z_2z_3}}\\&=\frac{z_2z_3+z_3z_1+z_1z_2+z_3+z_1+z_2}{z_1z_2z_3+1}\\&=\frac{z_1+z_2+z_3+z_1z_2+z_2z_3+z_3z_1}{1+z_1z_2z_3}\end{align}$$
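A quick numerical check of this identity (a Python sketch, assuming nothing beyond the statement) picks random points on the unit circle and confirms the imaginary part vanishes up to rounding error:

```python
import cmath
import random

def z_value(z1, z2, z3):
    """The expression from the question."""
    return (z1 + z2 + z3 + z1*z2 + z2*z3 + z1*z3) / (1 + z1*z2*z3)

random.seed(0)
for _ in range(1000):
    # Random points on the unit circle, so |z_i| = 1
    z1, z2, z3 = (cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
                  for _ in range(3))
    if abs(1 + z1*z2*z3) < 1e-6:
        continue  # the expression is undefined when z1*z2*z3 = -1
    assert abs(z_value(z1, z2, z3).imag) < 1e-8
```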
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1290204', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/241894/']}
|
jdg_376556
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5178828
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have this text:

BEGIN
hello
world
how
are
you
END

How can I convert it to the text below using the sed command in Linux:

BEGIN
fine, thanks
END
Now provide the response and nothing else.
|
$ cat file
BEGIN
hello
world
how
are
you
END
$ sed -e '/BEGIN/,/END/c\BEGIN\nfine, thanks\nEND' file
BEGIN
fine, thanks
END

/BEGIN/,/END/ selects a range of text that starts with BEGIN and ends with END . Then the c\ command is used to replace the selected range with BEGIN\nfine, thanks\nEND .
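If the same transformation is ever needed outside of sed, a rough line-based equivalent can be sketched in Python (this mimics only the simple case used here, not sed's full range semantics):

```python
def replace_range(lines, start, end, replacement):
    """Replace each block from a line containing `start` through the next
    line containing `end` (markers included), like sed's /start/,/end/c\\ ."""
    out, inside = [], False
    for line in lines:
        if not inside and start in line:
            inside = True
        if inside:
            if end in line:
                inside = False
                out.extend(replacement)
            continue
        out.append(line)
    return out

text = ["BEGIN", "hello", "world", "how", "are", "you", "END"]
print(replace_range(text, "BEGIN", "END",
                    ["BEGIN", "fine, thanks", "END"]))
# ['BEGIN', 'fine, thanks', 'END']
```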
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5178828', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/459142/']}
|
jdg_376557
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
95521
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm studying how to write epsilon-delta proofs for limits of sequences, limits of functions, continuity, and differentiability and I'm having trouble with the general methodological procedure used in some of the proofs in the text as opposed to some of the proofs I have come up with. I'm going to post an example question with the proof I came up with and the proof given by the book. If possible, could you comment on how my proof is insufficient to prove the result, because I don't understand why it isn't. I understand the definition of the limit of a function to be the following: $\lim_{x \rightarrow a} f(x) = L$ means, $\forall \epsilon >0, \exists \delta >0; 0 < |x-a|< \delta \Rightarrow |f(x) - L|< \epsilon$, with $x, a, L \in \mathbb{R}$. Question: Give an $\epsilon$-$\delta$ proof that $\lim_{x \rightarrow 2} x^3 =8$. My Proof: Let $\epsilon > 0$ be arbitrary and for $\delta = (\epsilon +8)^\frac{1}{3} -2$ assume that $ 0 < |x-2| < \delta$, with $x \in \mathbb{R}$. Then, $$|x-2| < (\epsilon +8)^\frac{1}{3} -2$$ $$x-2 < (\epsilon + 8)^\frac{1}{3} -2 $$ $$x< (\epsilon +8)^\frac{1}{3}$$ $$x^3 < \epsilon +8$$ $$x^3 - 8 < \epsilon$$ $$|x^3 - 8| < \epsilon$$ Therefore, for $f(x) = x^3$ and $L = 8$, we have shown $|f(x) -L| < \epsilon$ and resultantly $\lim_{n \rightarrow 2} x^3 = 8$. Book's Proof: Let $\epsilon >0$ be given and choose $\delta = \min(1, \frac{\epsilon}{19})$. Let $x \in \mathbb{R}$ such that $0 < |x-2| < \delta = \min(1,\frac{\epsilon}{19})$. Since $|x-2|<1$, it follows that $ -1 < x-2 < 1$ and so $1 <x<3$. Thus $|x^2 + 2x +4| < 19$. Because $|x-2| < \frac{\epsilon}{19}$, it follows that $|x^3 - 8| = |x-2||x^2 + 2x +4| < 19|x-2|< 19(\frac{\epsilon}{19})=\epsilon$. Aren't we trying to show that given our assumption that $0 < |x-a|< \delta$ we need to show that $|f(x) - L|< \epsilon$. That is all that is required right? We pick some $\delta$ for which this works? 
I'm able to follow to proof in the book and understand that it is correct, I just don't understand why all of the additional details are necessary. Thank you for any responses.
Now provide the response and nothing else.
|
I think you have a fairly decent idea, but the implementation is quite seriously flawed. As I pointed out in my comment, one of the mistakes in your work is that you cannot conclude that $|x^3 - 8| \lt \varepsilon$ given only $x^3 - 8 \lt \varepsilon$. ( Exercise: Do you see why this is wrong? The trouble arises when $x$ is smaller than $2$, so that $x^3-8$ is a -- possibly large -- negative number.) One way to remedy the proof is to fork into two cases: $x < 2$ and $x > 2$. That is, we define two different thresholds $\delta_+$ and $\delta_-$ that work separately for $x > 2$ and $x < 2$ respectively; then the overall $\delta$ is defined to be the smaller of the two. Let's now see the above idea in action. Fix an $\varepsilon > 0$. Define $\delta_+ = (8 + \varepsilon)^{1/3} - 2$ and $\delta_- = 2 - (8 - \varepsilon)^{1/3}$; also define $\delta = \min \{ \delta_-, \delta_+ \}$. Note that $\delta_+, \delta_-$, and $\delta$ are all strictly positive; the proof would be incomplete without this observation. Now when $2 < x < (2 + \delta_+)$, we have $0 < x^3 - 8 < \varepsilon$; and when $(2 - \delta_-) < x < 2$, we have $- \varepsilon < x^3 - 8 < 0$. Combining these two statements, we can write that whenever $2 - \delta_- < x < 2 + \delta_+$ and $x \ne 2$, we have $- \varepsilon < x^3 - 8 < + \varepsilon$. In particular, for $x \in (2 - \delta, 2 + \delta) \smallsetminus \{ 2 \}$, we have $|x^3 - 8| \lt \varepsilon$. We have thus showed that the limit of $f(x)$ as $x \to 2$ is $8$. $\qquad \diamond$ Although the above proof is correct, it is quite unsatisfactory because of many reasons. [This list is admittedly subjective and vague, so I recommend that you do not worry if something is unclear here.] This style of argument relies in some sense on the fact that $f$ is monotonic. The monotonicity allowed us to “invert” the $\varepsilon$-$\delta$ condition in a straightforward way. (Moreover, it was useful that $f$ had a “nice” inverse.) 
For many functions, such a simple strategy does not work; so we often resort to establishing “bounds”. The textbook proof gives a good example of the latter approach. The trick of considering the left and right sides separately works only in one dimension, i.e., the real line. It wouldn't work in more general spaces like $\mathbf R^2$, for instance. Once again, the textbook proof would generalise more easily. In our proof, we were able to find a suitable $\delta$ without expending too much effort. In contrast, the textbook proof proceeds via a nontrivial estimate. Nevertheless, this effort does not totally go waste because the author manages to find a $\delta$ having a much simpler form; specifically, it is proportional to $\varepsilon$. This significance of this point will become more evident once you learn about derivatives, because the derivative of a function at a given point essentially tries to quantify the ratio $\varepsilon / \delta$ for small values of $\delta$. Let us see the textbook proof now. The textbook proof done “backwards”. For any polynomial $f$ and for any real number $a$, the difference $f(x) - f(a)$ is divisible by $x-a$. Therefore we can factor an $x-a$ out, and write $f(x) - f(a)$ as the product of $x-a$ and some other polynomial. Already this suggests that when $x-a$ is “small”, then the difference $f(x) - f(a)$ must also be small. However, to make this intuition precise, we proceed as follows. In our example, $f(x) = x^3$ and $a=2$, so$$f(x) - 8 = (x-2) \cdot (x^2 + 2x + 4).$$As mentioned before, the $(x-2)$ factor is responsible for making the difference $(f(x) - 8)$ go to $0$ as $x \to 2$. On the other hand, the second factor $x^2 + 2x + 4$ approaches $2^2 + 2 \cdot 2 + 4 = 12$ as $x \to 2$. Inspired by this observation, we want to write that for $x$ close to $2$, $$f(x) - 8 \approx 12 (x-2). 
\tag{$\dagger$}$$ Unfortunately, as intuitive as it might seem, this statement is neither precise nor correct, because we cannot selectively evaluate just one of the factors at the point $x=2$. Nevertheless this can be fixed, because we only care about establishing an upper bound on the second factor when $x$ is close to $2$. More precisely, for all $x \in (1, 3)$, we have $$|x^2 + 2x + 4| = x^2 + 2x+4 \leqslant 3^2 + 2 \cdot 3 + 4 = 19,$$ which implies that $$|x^3 - 8| \leqslant 19|x-2| \tag{$\ddagger$}$$ for all $x \in (1, 3)$. Comparing $(\dagger)$ and $(\ddagger)$, note that the right-hand side slightly worsened from $12 |x-2|$ to $19 |x-2|$, but this is not of much consequence to us for the purposes of calculating the limit. All we want is some bound that goes to $0$, and $(\ddagger)$ works just fine. [[EDIT: There is a close connection to derivatives here. Note that even though $(\dagger)$ doesn't make precise sense, the expression $12(x-2)$ feels like the “right” approximation to $(f(x)-8)$. In particular, the $19$ in $(\ddagger)$ is plainly arbitrary; we could have replaced it by any constant bigger than $12$ (for $x$ sufficiently close to $2$). In fact, we can think of $f(x) - 8$ as essentially $12(x-2)$, plus a “lower-order” correction term; derivatives formalise this idea nicely.]] Finally, given $\varepsilon > 0$, we pick our $\delta$ such that both of the following conditions hold simultaneously: First, for our bound $(\ddagger)$ to apply, we want our $x$ to lie in the interval $(1, 3)$, which requires $\delta$ to be smaller than $1$. $(\ddagger)$ gives an upper bound of $19 |x-2|$ on $|f(x) - 8|$, so we want this upper bound to be at most $\varepsilon$. This forces the constraint $19\delta \leqslant \varepsilon$. Of course, we could satisfy both these inequalities by picking $\delta = \min \{ 1 , \frac{\varepsilon}{19} \}$, which is exactly the choice made by the author.
Now it is a matter of carefully doing the proof “forwards” to ensure that the whole argument works fine. I leave this as an exercise.
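The textbook's choice of $\delta$ can also be checked numerically. The following Python sketch (an illustration, not a substitute for the proof) samples points within $\delta = \min(1, \varepsilon/19)$ of $2$ and confirms $|x^3 - 8| < \varepsilon$:

```python
import random

def delta_for(eps):
    # The textbook's choice: delta = min(1, eps / 19)
    return min(1.0, eps / 19.0)

random.seed(42)
for _ in range(5000):
    eps = random.uniform(1e-6, 50.0)
    d = delta_for(eps)
    # Sample x with |x - 2| < delta (kept strictly inside the window)
    x = 2.0 + random.uniform(-d, d) * 0.999
    assert abs(x**3 - 8.0) < eps, (eps, x)
```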
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/95521', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19723/']}
|
jdg_376558
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13503862
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm experimenting with LESS (not a fan of the SASS syntax) and have been trying to find out what the best way to do media queries with it would be. I read through this blog post on how to "do" media queries with LESS, but it points out the fact that this causes all the media queries to be separated and scattered throughout the resulting CSS. This doesn't really bother me (I care less about the result and more about it working). What did bother me was a comment that talked about issues when viewing from iOS devices, and the commenter found that once the media queries were consolidated the issue was resolved. Has anyone found a good solution for using media queries with LESS? The way I envision this working would be something like...

//Have an overall structure...
.overall(){
  //Have ALL your CSS that would be modified by media queries and heavily use
  //variables that are set inside of each media query.
}
@media only screen and (min-width: 1024px){
  //Define variables for this media query (widths/etc)
  .overall
}

I understand that there could be some issues with this, but the current setup doesn't seem to be that beneficial. So I guess my question is whether there have been any good solutions/hacks to allow for grouped media queries? (Just in case it matters, I use dotless as the engine to parse my .less files)
Now provide the response and nothing else.
|
First, your solution given in the question certainly has some usefulness to it. One thing I thought, however, was that it would be nice to define all the media query variables "near" one another (your solution would have them under each media query call). So I propose the following as an alternative solution. It also has drawbacks, one being perhaps a bit more coding up front.

LESS Code

//define our break points as variables
@mediaBreak1: 800px;
@mediaBreak2: 1024px;
@mediaBreak3: 1280px;

//this mixin builds the entire media query based on the break number
.buildMediaQuery(@min) {
  @media only screen and (min-width: @min) {

    //define a variable output mixin for a class included in the query
    .myClass1(@color) {
      .myClass1 {
        color: @color;
      }
    }

    //define a builder guarded mixin for each break point of the query
    //in these is where we change the variable for the media break (here, color)
    .buildMyClass1() when (@min = @mediaBreak1) {
      .myClass1(red);
    }
    .buildMyClass1() when (@min = @mediaBreak2) {
      .myClass1(green);
    }
    .buildMyClass1() when (@min = @mediaBreak3) {
      .myClass1(blue);
    }

    //call the builder mixin
    .buildMyClass1();

    //define a variable output mixin for a nested selector included in the query
    .mySelector1(@fontSize) {
      section {
        width: (@min - 40);
        margin: 0 auto;
        a {
          font-size: @fontSize;
        }
      }
    }

    //Again, define a builder guarded mixin for each break point of the query
    //in these is where we change the variable for the media break (here, font-size)
    .buildMySelector1() when (@min = @mediaBreak1) {
      .mySelector1(10px);
    }
    .buildMySelector1() when (@min = @mediaBreak2) {
      .mySelector1(12px);
    }
    .buildMySelector1() when (@min = @mediaBreak3) {
      .mySelector1(14px);
    }

    //call the builder mixin
    .buildMySelector1();

    //etc., etc. for as many parts needed in the media queries
  }
}

//call our code to build the queries
.buildMediaQuery(@mediaBreak1);
.buildMediaQuery(@mediaBreak2);
.buildMediaQuery(@mediaBreak3);

CSS Output

@media only screen and (min-width: 800px) {
  .myClass1 {
    color: #ff0000;
  }
  section {
    width: 760px;
    margin: 0 auto;
  }
  section a {
    font-size: 10px;
  }
}
@media only screen and (min-width: 1024px) {
  .myClass1 {
    color: #008000;
  }
  section {
    width: 984px;
    margin: 0 auto;
  }
  section a {
    font-size: 12px;
  }
}
@media only screen and (min-width: 1280px) {
  .myClass1 {
    color: #0000ff;
  }
  section {
    width: 1240px;
    margin: 0 auto;
  }
  section a {
    font-size: 14px;
  }
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13503862', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/718003/']}
|
jdg_376559
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
232935
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to find an SQL query to find rows where field1 does not contain $x. How can I do this?
Now provide the response and nothing else.
|
What kind of field is this? The IN operator cannot be used with a single field, but is meant to be used in subqueries or with predefined lists:

-- subquery
SELECT a FROM x WHERE x.b NOT IN (SELECT b FROM y);

-- predefined list
SELECT a FROM x WHERE x.b NOT IN (1, 2, 3, 6);

If you are searching a string, go for the LIKE operator (but this will be slow):

-- Finds all rows where a does not contain "text"
SELECT * FROM x WHERE x.a NOT LIKE '%text%';

If you restrict it so that the string you are searching for has to start with the given string, it can use indices (if there is an index on that field) and be reasonably fast:

-- Finds all rows where a does not start with "text"
SELECT * FROM x WHERE x.a NOT LIKE 'text%';
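These queries are easy to try out in memory. A small Python/sqlite3 sketch (note that SQLite's LIKE is case-insensitive for ASCII, but the behaviour shown here matches the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (a TEXT)")
conn.executemany(
    "INSERT INTO x VALUES (?)",
    [("some text here",), ("plain row",), ("text first",)],
)

# Rows where column a does not contain "text"
not_contains = conn.execute(
    "SELECT a FROM x WHERE a NOT LIKE '%text%'"
).fetchall()
print(not_contains)  # [('plain row',)]

# Rows where column a does not start with "text"
not_starts = conn.execute(
    "SELECT a FROM x WHERE a NOT LIKE 'text%'"
).fetchall()
print(not_starts)  # [('some text here',), ('plain row',)]
```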
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/232935', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/26823/']}
|
jdg_376560
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
965460
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Could you give me an example of function $ f \colon \mathbb N \to \mathbb Z$ that is both one-to-one and onto? Does this work: $f(n) := n \times (-1)^n$? N starts with zero.
Now provide the response and nothing else.
|
First note that $\Bbb{Z}$ contains all negative and positive integers. As such, we can think of $\Bbb{Z}$ as (more or less) two pieces. Next, we know that every natural number is either odd or even (or zero for some people) so again we can think of $\Bbb{N}$ as being in two pieces. lastly, let's try to make a map that takes advantage of the "two pieces" observation . That is, let's make a function from evens/odds to positives/negatives. Let $f: \Bbb{N} \to \Bbb{Z}$ where $$f(n) = \begin{cases} \frac{n}{2} & n\text{ is even} \\ -\frac{n + 1}{2} & \text{else}\end{cases}$$ This map is a bijection, although I will leave the proof of that up to you.
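Here is the same map written as a short Python sketch; the assertion checks injectivity and surjectivity on a finite window (an illustration only, not the general proof left as an exercise above):

```python
def f(n):
    """Bijection from N (including 0) onto Z: evens map to 0, 1, 2, ...
    and odds map to -1, -2, -3, ..."""
    return n // 2 if n % 2 == 0 else -((n + 1) // 2)

print([f(n) for n in range(9)])  # [0, -1, 1, -2, 2, -3, 3, -4, 4]

# On the window {0, ..., 2N} the image is exactly {-N, ..., N},
# so f is one-to-one and onto there
N = 500
image = {f(n) for n in range(2 * N + 1)}
assert image == set(range(-N, N + 1))
```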
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/965460', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/180065/']}
|
jdg_376561
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
68011
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I know that doing %y a will yank complete file into register a . However this doesn't help in pasting the file content in another text editor (say gedit). What alternative is there for doing so?
Now provide the response and nothing else.
|
Not sure if vi is required for this operation. There is the xclip utility, which allows you to copy anything from console output to the X server clipboard. You should set the DISPLAY=:0.0 environment variable and execute it like this:

cat file | xclip

or, for a remote file:

ssh remote "cat file" | xclip

Or from vi (note that this temporarily clears the vi buffer content; to undo it press Esc + u , and the data will stay in the X clipboard):

:%!xclip

Now you're able to paste it anywhere with the middle mouse button (note that CTRL+V or Shift+Ins won't work).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/68011', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/23301/']}
|
jdg_376562
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46603220
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a URL coming in to the AppDelegate method:

func application(_ application: UIApplication, open url: URL, sourceApplication: String?, annotation: Any) -> Bool {}

The URL looks like www.wesite.com/shareplace.html?placeid=123 . How can it be converted to a dictionary for easy access? I found some code on some website, but it's showing an error in Xcode 9:

extension URL {
  var queryDictionary: [String: AnyObject]? {
    return URLComponents(url: self, resolvingAgainstBaseURL: false)?
      .queryItems?
      .reduce([:], combine: { (var result: [String: AnyObject], queryItem) -> [String: AnyObject] in
        if queryItem.value?.containsString(",") ?? false {
          let array = queryItem.value?.componentsSeparatedByString(",")
          result[queryItem.name] = array
        } else {
          result[queryItem.name] = queryItem.value
        }
        return result
      })
  }
}

The error points at the ( var result ) parameter in the .reduce([:], combine: ...) closure: Parameters may not have the 'var' specifier
Now provide the response and nothing else.
|
Simple Extension

extension URL {
    var queryDictionary: [String: String]? {
        guard let query = self.query else { return nil }
        var queryStrings = [String: String]()
        for pair in query.components(separatedBy: "&") {
            let key = pair.components(separatedBy: "=")[0]
            let value = pair
                .components(separatedBy: "=")[1]
                .replacingOccurrences(of: "+", with: " ")
                .removingPercentEncoding ?? ""
            queryStrings[key] = value
        }
        return queryStrings
    }
}

USAGE

let urlString = "http://www.youtube.com/video/4bL4FI1Gz6s?hl=it_IT&iv_logging_level=3&ad_flags=0&endscreen_module=http://s.ytimg.com/yt/swfbin/endscreen-vfl6o3XZn.swf&cid=241&cust_gender=1&avg_rating=4.82280613104"
let url = URL(string: urlString)
print(url!.queryDictionary ?? "NONE")
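For comparison (and as a sanity check on expected results), Python's standard library performs the same parsing; a small sketch, with the extra label parameter made up purely for illustration:

```python
from urllib.parse import urlsplit, parse_qs

url = "http://www.wesite.com/shareplace.html?placeid=123&label=a+b"
query = urlsplit(url).query

# parse_qs returns lists (a key may repeat); keep the first value per key,
# mirroring the one-value-per-key Swift extension
params = {key: values[0] for key, values in parse_qs(query).items()}
print(params)  # {'placeid': '123', 'label': 'a b'}
```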
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46603220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3851580/']}
|
jdg_376563
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8721475
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
All the initialized global/static variables will go to the initialized data section. All the uninitialized global/static variables will go to the uninitialized data section (BSS). The variables in BSS get a value of 0 at program load time. If a global variable is explicitly initialized to zero ( int myglobal = 0 ), where will that variable be stored?
Now provide the response and nothing else.
|
The compiler is free to put such a variable into bss as well as into data . For example, GCC has a special option controlling such behavior:

-fno-zero-initialized-in-bss
If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code. This option turns off this behavior because some programs explicitly rely on variables going to the data section. E.g., so that the resulting executable can find the beginning of that section and/or make assumptions based on that. The default is -fzero-initialized-in-bss .

Tried with the following example ( test.c file):

int put_me_somewhere = 0;

int main(int argc, char* argv[]) { return 0; }

Compiling with no options (implicitly -fzero-initialized-in-bss ):

$ touch test.c && make test && objdump -x test | grep put_me_somewhere
cc test.c -o test
0000000000601028 g O .bss 0000000000000004 put_me_somewhere

Compiling with the -fno-zero-initialized-in-bss option:

$ touch test.c && make test CFLAGS=-fno-zero-initialized-in-bss && objdump -x test | grep put_me_somewhere
cc -fno-zero-initialized-in-bss test.c -o test
0000000000601018 g O .data 0000000000000004 put_me_somewhere
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8721475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/986020/']}
|
jdg_376564
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
453757
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to write a Bash script that takes a list of numbers as command line arguments and then outputs the sum of those numbers. So the script would be run as script.sh 1 555 22 122 66 and would then sum them all up. I know they need to be passed to the $@ variable and then likely run through a loop, but I don't understand how to convert the contents of $@ to an int to do the math. I tried to do this:

#!/bin/bash
for i in $@
do
  $@+$@
  echo "the total is `$@`"
done
Now provide the response and nothing else.
|
In general, an argument is converted to an integer automatically if used inside an arithmetic $((...)) expansion. This loop will sum all the arguments: for x; do sum=$((sum+x)); done; echo "$sum" The shell caches all the arguments in separated memory locations as a c program deals with an argv[] array. The shell user does not need to deal directly with that array, the shell helps by assigning them to $1, $2, $3, etc. The shell also abstract such list as "$@". And finally, the syntax for x is a shorthand for for x in "$@" to loop over all arguments. That is assuming that arguments are decimal numbers that do not start with zero, octal numbers starting with zero or hexadecimal numbers that start with 0x , and that the total sum does not overflow (2^63-1 in 64 bit systems) This list: $ ./script 12 021 0xab Will print 200 (the decimal result).
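The same base-handling rules can be mirrored in Python if you ever need to predict what the shell arithmetic will compute (a sketch; negative numbers and other edge cases are deliberately not handled):

```python
def parse_shell_int(s):
    """Interpret a numeric literal the way shell arithmetic does:
    0x... is hexadecimal, a leading 0 means octal, otherwise decimal."""
    if s.startswith(("0x", "0X")):
        return int(s, 16)
    if s.startswith("0") and len(s) > 1:
        return int(s, 8)
    return int(s, 10)

def total(args):
    return sum(parse_shell_int(a) for a in args)

print(total(["12", "021", "0xab"]))  # 12 + 17 + 171 = 200
```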
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/453757', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/297398/']}
|
jdg_376565
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19091221
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How Do i put this html with onclick <a href="#" onClick="addInput()"><img src="..\common_pic\add.gif" /></a> into response.Write "<td align='left'><img src=""..\common_pic\add.gif"" border=0><input type='text' value='" & adoRecordset("production_unit") & "'name='production_unit' size='10'><img src=""..\common_pic\delete.gif"" border=0></td>"
Now provide the response and nothing else.
|
It is usually caused by Spring's default CSRF protection. If you use, for example, a DELETE HTTP request from your JS code, it is required to also send the CSRF protection headers. It is not necessary to disable CSRF protection! Please, do not do that if not necessary. You can easily add CSRF AJAX/REST protection by:

1. Adding meta headers to every page (use @layout.html or something):

<head>
    <meta name="_csrf" th:content="${_csrf.token}"/>
    <meta name="_csrf_header" th:content="${_csrf.headerName}"/>
</head>

2. Customizing your ajax requests to send these headers for every request:

$(function () {
    var token = $("meta[name='_csrf']").attr("content");
    var header = $("meta[name='_csrf_header']").attr("content");
    $(document).ajaxSend(function(e, xhr, options) {
        xhr.setRequestHeader(header, token);
    });
});

Notice that I use thymeleaf, so I use th:content instead of the content attribute.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19091221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2685152/']}
|
jdg_376566
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
332061
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a dedicated server with 3 SSD drives in RAID 1. Output of cat /proc/mdstat :

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md4 : active raid1 sdc4[2] sdb4[1] sda4[0]
      106738624 blocks [3/3] [UUU]
      bitmap: 0/1 pages [0KB], 65536KB chunk
md2 : active raid1 sdc2[2] sda2[0] sdb2[1]
      5497792 blocks [3/3] [UUU]
md1 : active raid1 sda1[0] sdc1[2] sdb1[1]
      259008 blocks [3/3] [UUU]
unused devices: <none>

How can a drive be safely removed from the soft RAID without losing any data? I would like to remove a drive from the array in order to reformat it and use it independently, while keeping the most important data mirrored.
Now provide the response and nothing else.
|
You've got a three-way mirror there: each drive has a complete copy of all data. Assuming the drive you want to remove is /dev/sdc , and you want to remove it from all three arrays, you'd perform the following steps for /dev/sdc1 , /dev/sdc2 , and /dev/sdc4 .

Step 1: Remove the drive from the array. You can't remove an active device from an array, so you need to mark it as failed first.

mdadm /dev/md1 --fail /dev/sdc1
mdadm /dev/md1 --remove /dev/sdc1

Step 2: Erase the RAID metadata so the kernel won't try to re-add it:

wipefs -a /dev/sdc1

Step 3: Shrink the array so it's only a two-way mirror, not a three-way mirror with a missing drive:

mdadm --grow /dev/md1 --raid-devices=2

You may need to remove the write-intent bitmap from /dev/md4 before shrinking it (the manual isn't clear on this), in which case you'd do so just before step 3 with mdadm --grow /dev/md4 --bitmap=none , then put it back afterwards with mdadm --grow /dev/md4 --bitmap=internal .
|
{}
|
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/332061', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/202601/']}
|
jdg_376567
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25727
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The $\mathsf{W}$-hierarchy is a hierarchy of complexity classes $\mathsf{W}[t]$ in parameterized complexity, see the Complexity Zoo for definitions. An alternative definition defines $\mathsf{W}[t]$ using weighted Fagin definability for $\Pi_t$-formulas of first-order logic, see the textbook by Flum and Grohe . For the lowest classes $\mathsf{W}[1]$ and $\mathsf{W}[2]$, many natural complete problems are known, e.g. Clique and Independent Set are complete for $\mathsf{W}[1]$ , and Dominating Set and Hitting Set are complete for $\mathsf{W}[2]$, where each of these problems is defined as the corresponding well-known $\mathsf{NP}$-complete problem with the size of the required solution set as the parameter. Are there any known natural complete problems for classes higher up in the $\mathsf{W}$-hierarchy, in particular for $\mathsf{W}[3]$ and $\mathsf{W}[4]$?
Now provide the response and nothing else.
|
From the comment above: $p$-HYPERGRAPH-(NON)-DOMINATING-SET is W[3]-complete under fpt-reductions: A hypergraph $H = (V,E)$ consists of a set $V$ of vertices and a set $E$ of hyperedges. Each hyperedge is a subset of $V$. In a 3-hypergraph all edges have size 3. If $H = (V,E)$ is a 3-hypergraph, every $a \in V$ induces a graph $H^a = (V^a, E^a)$ given by: $V^a = \{ v \in V \mid v \neq a \text{ and there is } e \in E \text{ with } a, v \in e \}$ and $E^a = \{ \{u,v\} \mid \{a,u,v\} \in E \}$ Input : A 3-hypergraph $H = (V,E)$, a set $M \subseteq V$, and $k \geq 1$. Parameter : $k$. Problem : Decide whether there exists a set $D \subseteq V$ of cardinality $k$ such that: if $a \in M$, then $D$ is a dominating set of $H^a$, if $a \notin M$, then $D$ is not a dominating set of $H^a$. See Yijia Chen, Jörg Flum and Martin Grohe. An Analysis of the W*-Hierarchy. The Journal of Symbolic Logic, Vol. 72, No. 2 (Jun., 2007), pp. 513-534.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/25727', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/6579/']}
|
jdg_376568
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23169941
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I just started to learn about Express 4.0 in my Node.js app, and I found that it generated ./bin/www file, on which only the application server and port settings are written and everything others like middleware and routing is defined in ./app.js file. However, I'm not sure what this ./bin/www does. I've used Express 3.x and I have always defined server and port settings as well as routing and middleware on the identical ./app.js file, and launched my node app with node app.js . So what's the point of using the ./bin/www ? Does it only separate the server and port definition from others? Right now, when I create the package using express-generator, the package.json includes the following definition: "scripts": { "start": "node ./bin/www"} However, I wonder whether I should launch my app using node ./bin/www , or npm start . Which command should I run to start my app? And also, when I deploy my app to heroku, what should I write in the Procfile file? Is web: node app.js enough?
Now provide the response and nothing else.
|
In Express 3.0 , you would normally use app.configure() (or app.use() ) to set up the required middleware. The middleware you specified are bundled together with Express 3.0. Example:

var express = require('express');
var routes = require('./routes');
var user = require('./routes/user');
var http = require('http');
var path = require('path');

var app = express();

// all environments
app.set('port', process.env.PORT || 3000);
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
app.use(express.favicon());
app.use(express.logger('dev'));
app.use(express.compress());
app.use(express.json());
app.use(express.urlencoded());
app.use(express.methodOverride());

In Express 4.0 however, all middleware have been removed so that they can be maintained and updated independently from the core Express (except the static middleware), thus they need to be called separately (what you see in app.js ). The bin/ directory serves as a location where you can define your various startup scripts . The www is an example to start the express app as a web server. Ultimately, you could have different scripts like test , stop , or restart , etc. Having this structure allows you to have different startup configurations, without cramming everything into app.js . The correct way to start your Express app is:

npm start

To deploy an Express 4.x app to Heroku , add this to your Procfile :

web: npm start

Or, since you have the start script in your package.json , Heroku will automatically use that; read more here :

"scripts": {
    "start": "node ./bin/www"
}
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/23169941', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2360798/']}
|
jdg_376569
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30673
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have recently installed the Apache web server on my Debian Squeeze (using the command tasksel and selecting [ ] Web server ). I start it via sudo /etc/init.d/apache2 start , then go to my browser and key in http://localhost , and the " It works! " page shows up, indicating that I have installed and started the web server successfully. Now I know that the HTML file for the " It works! " page is in /var/www , and I am led to believe that if I create new pages, I need to put them into this directory. Creating pages would be easy, but in doing so I need to issue the sudo command each and every time I create/edit a file under the /var/www directory. I would like to know how I can create a directory in my HOME, e.g. ~/html/test/index.html , and have that page displayed when I enter the address http://localhost/html/test in my browser?
Now provide the response and nothing else.
|
As Ulrich said, you can do this by enabling the userdir module. On Debian, this can be done using the a2enmod utility, which enables or disables Apache modules; see man a2enmod . In this case, you just need to run sudo a2enmod userdir and then restart the Apache server for the change to take effect. Note that the userdir module ships with base Apache, so you don't have to install anything extra. For reference, the userdir config is in /etc/apache2/mods-available/userdir.conf . All a2enmod does here is create symbolic links from the /etc/apache2/mods-enabled directory to the files /etc/apache2/mods-available/{userdir.conf,userdir.load} . You could also do this manually, i.e.

faheem@orwell:/etc/apache2/mods-enabled$ ls -la userdir.*
lrwxrwxrwx 1 root root 30 Feb 6 03:11 userdir.conf -> ../mods-available/userdir.conf
lrwxrwxrwx 1 root root 30 Feb 6 03:11 userdir.load -> ../mods-available/userdir.load

Then put whatever web content you want to make available under ~/public_html , and it should be accessible from http://servername/~username .
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/30673', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/14878/']}
|
jdg_376570
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25019
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In some specifications for telescopes, I saw a value marked as f/4.6. What does it mean exactly, and how important is it when it comes to choosing a telescope?
Now provide the response and nothing else.
|
I disagree with the other answers that say this is only important for astrophotography. Think of it this way: the focal ratio is basically a measure of how steep the light cone is inside the telescope. There are a significant number of ways this impacts even visual observers.

A longer focal ratio means:

- A narrower field of view (see less of the sky at once) using the same eyepiece. Quality long f-ratio scopes are sometimes nicknamed "planet killers".
- The scope has a more forgiving "depth of focus" (a wider sweet spot of focus).
- The telescope is typically longer and heavier than shorter focal ratios of the same scope type (folded designs, like catadioptric scopes, are an exception), and therefore requires a sturdier, more expensive mount.
- Longer focal ratios have a wider tolerance in design and manufacture. This means that they are usually cheaper (for a similar quality of optics).
- Longer focal ratio telescopes are more forgiving of lower-quality eyepieces.
- Longer focal ratio reflectors are easier to collimate because they have a wider tolerance for error.

Basically reverse all of those points, and a shorter focal ratio means:

- A wider field of view (see more of the sky at once) using the same eyepiece. Depending on the exact f-ratio, these are sometimes called "wide-field" or "rich-field" scopes.
- The scope has a more unforgiving "depth of focus" (harder to hit the sweet spot of focus).
- The telescope is typically shorter and lighter than longer focal ratios of the same scope type (so it is easier to transport and handle, and allows a lighter mount).
- Shorter focal ratios have a tighter tolerance in design and manufacture. This means that they are either more expensive (apochromatic refractors) or give less perfect views (achromatic short-tube refractors that show lots of false color).
- Shorter focal ratio telescopes are less forgiving of lower-quality eyepieces, and typically require more expensive eyepieces to get the best views.
- Shorter focal ratio reflectors are harder to collimate because they have a narrower tolerance for error.
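The number itself is simple arithmetic: the focal ratio is the focal length divided by the aperture diameter, so "f/4.6" means the focal length is 4.6 times the aperture. A quick sketch (the example numbers are hypothetical, chosen only to produce f/4.6):

```javascript
// Focal ratio = focal length / aperture diameter (same units).
function focalRatio(focalLengthMm, apertureMm) {
  return focalLengthMm / apertureMm;
}

// A hypothetical 200 mm (8") aperture with a 920 mm focal length:
const f = focalRatio(920, 200); // 4.6, i.e. "f/4.6"

// The focal length (not the ratio directly) also sets magnification
// with a given eyepiece: magnification = telescope focal length /
// eyepiece focal length. At fixed aperture, a higher f-ratio means a
// longer focal length, hence more magnification and a narrower field
// with the same eyepiece.
function magnification(telescopeFocalMm, eyepieceFocalMm) {
  return telescopeFocalMm / eyepieceFocalMm;
}
```

So two scopes with the same aperture but different f-ratios show very different patches of sky through the identical eyepiece, which is the first bullet point above.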
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/25019', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/217/']}
|
jdg_376571
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
30126754
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently building a single-page AngularJS application which communicates with a backend via REST. The structure is as follows: one Spring MVC webapp project which contains all AngularJS pages and resources and all REST controllers, and a true backend which has services and repositories for backend communication, an API if you will. The REST calls will talk to these services (the second project is included as a dependency of the first one). I have been thinking about this a lot but I can't seem to find anything that can help me. Basically I just need some security on this application. I'd like some kind of session management which is extremely simple:

- the user logs in, a session id is created and stored in JS/cookie on the website
- when the user reloads the page / comes back later, a check needs to be done to see if the session id is still valid
- no calls should reach the controllers if the session id is not valid

This is the general idea of basic session management. What would be the easiest way to get this implemented in a Spring MVC webapp (no JSPs, just Angular and REST controllers)? Thanks in advance!
Now provide the response and nothing else.
|
You have 2 options for the REST API: stateful or stateless.

1st option: HTTP session authentication - the "classical" Spring Security authentication mechanism. If you plan to scale your application on multiple servers, you need a load balancer with sticky sessions so that each user stays on the same server (or use Spring Session with Redis).

2nd option: you have the choice of OAuth or token-based authentication. OAuth2 is a stateless security mechanism, so you might prefer it if you want to scale your application across several machines. Spring Security provides an OAuth2 implementation. The biggest issue with OAuth2 is that it requires several database tables in order to store its security tokens. Token-based authentication, like OAuth2, is a stateless security mechanism, so it's another good option if you want to scale on several different servers. This authentication mechanism doesn't exist by default in Spring Security. It is easier to use and implement than OAuth2, as it does not require a persistence mechanism, so it works with all SQL and NoSQL options. This solution uses a custom token, which is an MD5 hash of your user name, the expiration date of the token, your password, and a secret key. This ensures that if someone steals your token, he should not be able to extract your username and password.

I recommend you look into JHipster . It will generate a web app skeleton for you with a REST API using Spring Boot and a front end using AngularJS. When generating the application skeleton it will ask you to choose between the 3 authentication mechanisms that I described above. You can reuse the code that JHipster generates in your Spring MVC application.

Here is an example of a TokenProvider generated by JHipster:

public class TokenProvider {

    private final String secretKey;
    private final int tokenValidity;

    public TokenProvider(String secretKey, int tokenValidity) {
        this.secretKey = secretKey;
        this.tokenValidity = tokenValidity;
    }

    public Token createToken(UserDetails userDetails) {
        long expires = System.currentTimeMillis() + 1000L * tokenValidity;
        String token = userDetails.getUsername() + ":" + expires + ":" + computeSignature(userDetails, expires);
        return new Token(token, expires);
    }

    public String computeSignature(UserDetails userDetails, long expires) {
        StringBuilder signatureBuilder = new StringBuilder();
        signatureBuilder.append(userDetails.getUsername()).append(":");
        signatureBuilder.append(expires).append(":");
        signatureBuilder.append(userDetails.getPassword()).append(":");
        signatureBuilder.append(secretKey);

        MessageDigest digest;
        try {
            digest = MessageDigest.getInstance("MD5");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("No MD5 algorithm available!");
        }
        return new String(Hex.encode(digest.digest(signatureBuilder.toString().getBytes())));
    }

    public String getUserNameFromToken(String authToken) {
        if (null == authToken) {
            return null;
        }
        String[] parts = authToken.split(":");
        return parts[0];
    }

    public boolean validateToken(String authToken, UserDetails userDetails) {
        String[] parts = authToken.split(":");
        long expires = Long.parseLong(parts[1]);
        String signature = parts[2];
        String signatureToMatch = computeSignature(userDetails, expires);
        return expires >= System.currentTimeMillis() && signature.equals(signatureToMatch);
    }
}

SecurityConfiguration:

@Configuration
@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Inject
    private Http401UnauthorizedEntryPoint authenticationEntryPoint;

    @Inject
    private UserDetailsService userDetailsService;

    @Inject
    private TokenProvider tokenProvider;

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

    @Inject
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth
            .userDetailsService(userDetailsService)
            .passwordEncoder(passwordEncoder());
    }

    @Override
    public void configure(WebSecurity web) throws Exception {
        web.ignoring()
            .antMatchers("/scripts/**/*.{js,html}");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .exceptionHandling()
            .authenticationEntryPoint(authenticationEntryPoint)
            .and()
            .csrf()
            .disable()
            .headers()
            .frameOptions()
            .disable()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and()
            .authorizeRequests()
            .antMatchers("/api/register").permitAll()
            .antMatchers("/api/activate").permitAll()
            .antMatchers("/api/authenticate").permitAll()
            .antMatchers("/protected/**").authenticated()
            .and()
            .apply(securityConfigurerAdapter());
    }

    @EnableGlobalMethodSecurity(prePostEnabled = true, jsr250Enabled = true)
    private static class GlobalSecurityConfiguration extends GlobalMethodSecurityConfiguration {
    }

    private XAuthTokenConfigurer securityConfigurerAdapter() {
        return new XAuthTokenConfigurer(userDetailsService, tokenProvider);
    }

    /**
     * This allows SpEL support in Spring Data JPA @Query definitions.
     *
     * See https://spring.io/blog/2014/07/15/spel-support-in-spring-data-jpa-query-definitions
     */
    @Bean
    EvaluationContextExtension securityExtension() {
        return new EvaluationContextExtensionSupport() {
            @Override
            public String getExtensionId() {
                return "security";
            }

            @Override
            public SecurityExpressionRoot getRootObject() {
                return new SecurityExpressionRoot(SecurityContextHolder.getContext().getAuthentication()) {};
            }
        };
    }
}

And the respective AngularJS configuration:

'use strict';

angular.module('jhipsterApp')
    .factory('AuthServerProvider', function loginService($http, localStorageService, Base64) {
        return {
            login: function(credentials) {
                var data = "username=" + credentials.username + "&password=" + credentials.password;
                return $http.post('api/authenticate', data, {
                    headers: {
                        "Content-Type": "application/x-www-form-urlencoded",
                        "Accept": "application/json"
                    }
                }).success(function (response) {
                    localStorageService.set('token', response);
                    return response;
                });
            },
            logout: function() {
                // Stateless API: no server logout
                localStorageService.clearAll();
            },
            getToken: function () {
                return localStorageService.get('token');
            },
            hasValidToken: function () {
                var token = this.getToken();
                return token && token.expires && token.expires > new Date().getTime();
            }
        };
    });

authInterceptor:

.factory('authInterceptor', function ($rootScope, $q, $location, localStorageService) {
    return {
        // Add authorization token to headers
        request: function (config) {
            config.headers = config.headers || {};
            var token = localStorageService.get('token');
            if (token && token.expires && token.expires > new Date().getTime()) {
                config.headers['x-auth-token'] = token.token;
            }
            return config;
        }
    };
})

Add authInterceptor to $httpProvider:

.config(function ($httpProvider) {
    $httpProvider.interceptors.push('authInterceptor');
})

Hope this is helpful! This video from the SpringDeveloper channel may be useful too: Great single page apps need great backends . It talks about some best practices (including session management) and demonstrates working code examples.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/30126754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1997016/']}
|
jdg_376572
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
33962226
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've noticed that a frequent occurrence during training is NaNs being introduced. Often it seems to be caused by weights in inner-product/fully-connected or convolution layers blowing up. Is this occurring because the gradient computation is blowing up? Or is it because of weight initialization (if so, why does weight initialization have this effect)? Or is it likely caused by the nature of the input data? The overarching question here is simply: what is the most common reason for NaNs to occur during training? And secondly, what are some methods for combating this (and why do they work)?
Now provide the response and nothing else.
|
I came across this phenomenon several times. Here are my observations:

Gradient blow up Reason: large gradients throw the learning process off-track. What you should expect: Looking at the runtime log, you should look at the loss values per iteration. You'll notice that the loss starts to grow significantly from iteration to iteration; eventually the loss will be too large to be represented by a floating point variable and it will become nan . What can you do: Decrease the base_lr (in the solver.prototxt) by an order of magnitude (at least). If you have several loss layers, you should inspect the log to see which layer is responsible for the gradient blow up and decrease the loss_weight (in train_val.prototxt) for that specific layer, instead of the general base_lr .

Bad learning rate policy and params Reason: caffe fails to compute a valid learning rate and gets 'inf' or 'nan' instead; this invalid rate multiplies all updates and thus invalidates all parameters. What you should expect: Looking at the runtime log, you should see that the learning rate itself becomes 'nan' , for example:

... sgd_solver.cpp:106] Iteration 0, lr = -nan

What can you do: fix all parameters affecting the learning rate in your 'solver.prototxt' file. For instance, if you use lr_policy: "poly" and you forget to define the max_iter parameter, you'll end up with lr = nan ... For more information about learning rate in caffe, see this thread .

Faulty loss function Reason: Sometimes the computation of the loss in the loss layers causes nans to appear. For example, feeding the InfogainLoss layer with non-normalized values , using a custom loss layer with bugs, etc. What you should expect: Looking at the runtime log you probably won't notice anything unusual: loss is decreasing gradually, and all of a sudden a nan appears. What can you do: See if you can reproduce the error, add a printout to the loss layer and debug the error. For example: once I used a loss that normalized the penalty by the frequency of label occurrence in a batch. It just so happened that if one of the training labels did not appear in the batch at all, the loss computed produced nans. In that case, working with large enough batches (with respect to the number of labels in the set) was enough to avoid this error.

Faulty input Reason: you have an input with nan in it! What you should expect: once the learning process "hits" this faulty input, the output becomes nan . Looking at the runtime log you probably won't notice anything unusual: loss is decreasing gradually, and all of a sudden a nan appears. What can you do: re-build your input datasets (lmdb/leveldb/hdf5...) and make sure you do not have bad image files in your training/validation set. For debugging you can build a simple net that reads the input layer, has a dummy loss on top of it, and runs through all the inputs: if one of them is faulty, this dummy net should also produce nan .

Stride larger than kernel size in "Pooling" layer For some reason, choosing stride > kernel_size for pooling may result in nans. For example:

layer {
  name: "faulty_pooling"
  type: "Pooling"
  bottom: "x"
  top: "y"
  pooling_param {
    pool: AVE
    stride: 5
    kernel: 3
  }
}

results in nans in y .

Instabilities in "BatchNorm" It was reported that under some settings the "BatchNorm" layer may output nans due to numerical instabilities. This issue was raised in bvlc/caffe and PR #5136 is attempting to fix it. Recently, I became aware of the debug_info flag: setting debug_info: true in 'solver.prototxt' will make caffe print more debug information (including gradient magnitudes and activation values) to the log during training: this information can help in spotting gradient blowups and other problems in the training process .
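A cheap, framework-agnostic way to narrow down several of these cases is to scan the values you log (per-iteration losses, activations, raw input batches) and report the first place a non-finite number appears. A small sketch of that idea (my own debugging helper, not part of caffe):

```javascript
// Return the index of the first NaN/Infinity in a sequence of logged
// values (e.g. per-iteration losses), or -1 if all are finite.
function firstNonFinite(values) {
  for (let i = 0; i < values.length; i++) {
    if (!Number.isFinite(values[i])) return i;
  }
  return -1;
}

// Typical "gradient blow up" signature: the loss grows for several
// iterations and then overflows to Infinity/NaN.
const losses = [0.9, 1.4, 7.2, 513.8, 2.6e31, Infinity, NaN];
const bad = firstNonFinite(losses); // 5 -- a few steps AFTER the loss started growing
```

The useful signal is usually not the NaN itself but the iterations just before it: a steadily exploding loss points at case 1 (learning rate), while a sudden NaN after stable loss points at cases 3-4 (loss function or input).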
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/33962226', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4975126/']}
|
jdg_376573
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5458129
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I have this:

- (void)loadView {
    BOOL success;
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *databasePath = [documentsPath stringByAppendingPathComponent:@"ProxDeals.db"];
    NSError *error;
    BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:databasePath];
    if (fileExists == TRUE) {
        [[NSBundle mainBundle] loadNibNamed:@"ProxDealsViewController" owner:self options:nil];
    } else {
        NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"ProxDeals.db"];
        NSLog(@"%@", defaultDBPath);
        success = [fileManager copyItemAtPath:defaultDBPath toPath:databasePath error:&error];
        if (!success) {
            NSAssert1(0, @"Failed to create writable database file with message '%@'.", [error localizedDescription]);
        }
        [[NSBundle mainBundle] loadNibNamed:@"UserRegistration" owner:self options:nil];
    }
}

and this error:

Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<ProxDealsViewController 0x5f22160> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key givenName.'

I know that I don't do something right in the initialization of the UserRegistration nib, but I don't know how to fix this.
Now provide the response and nothing else.
|
This usually means that something is trying to access the @property "givenName". If you were doing something with IB, the usual cause is that you either:

- deleted that property from the class, but haven't deleted the hookups in IB yet

OR:

- have a File's Owner object set to the wrong class (check the properties, which are found in different places depending on which version of Xcode you're using, to find the Class Name it's set as; you probably copy/pasted a NIB file and didn't change this field in the NIB), and you've hooked up an outlet for that class, but your actual File's Owner is something different
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/5458129', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/663942/']}
|
jdg_376574
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
225207
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My teacher told me that the sun is a black body, but after reading at various sites where they say that the sun is not a black body but only emits black body radiation, because it cannot absorb all radiation, I am confused.
Now provide the response and nothing else.
|
Here is the measured radiation from the sun. It is fitted with a black body curve at the top of the atmosphere (yellow), and even though it is evident that there are deviations from the theoretical black body, it is still a good approximation. Generally, all radiating bodies are approximated with a black body spectrum. If the fit is good, that means the body would also absorb the same type of spectrum if it fell on it. The deviations are due to the differences stated in the other answer.
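The fitted curve is Planck's black body law, B(λ,T) = (2hc²/λ⁵) · 1/(e^{hc/λkT} − 1). As a sketch (my own illustration, not from the original answer), the following evaluates it and locates the spectral peak, which for the sun's effective temperature of roughly 5778 K lands near 500 nm, in agreement with Wien's displacement law:

```javascript
// Planck spectral radiance B(lambda, T), in W * sr^-1 * m^-3.
const h = 6.62607015e-34; // Planck constant, J*s
const c = 2.99792458e8;   // speed of light, m/s
const k = 1.380649e-23;   // Boltzmann constant, J/K

function planck(lambda, T) {
  const prefactor = (2 * h * c * c) / Math.pow(lambda, 5);
  return prefactor / (Math.exp((h * c) / (lambda * k * T)) - 1);
}

// Coarse 1 nm scan for the peak wavelength at a given temperature.
function peakWavelengthNm(T) {
  let best = 0, bestVal = -Infinity;
  for (let nm = 100; nm <= 3000; nm++) {
    const v = planck(nm * 1e-9, T);
    if (v > bestVal) { bestVal = v; best = nm; }
  }
  return best;
}

// Sun's effective temperature ~5778 K; Wien's law b/T gives ~501.5 nm.
const peak = peakWavelengthNm(5778);
```

The deviations visible in the measured spectrum (absorption lines, atmospheric bands) are exactly what this idealized curve leaves out.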
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/225207', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/101867/']}
|
jdg_376575
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27575779
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two elements inside a container, placed side-by-side using flexbox. On the second element ( .flexbox-2 ) I am setting its height in the CSS. However, the first element ( .flexbox-1 ) then matches the height of .flexbox-2 . How would I stop .flexbox-1 from matching the height of .flexbox-2 , and instead have it retain its natural height? Here is what I have so far (also available as a jsFiddle ):

.container {
  display: -webkit-flex;
  -webkit-flex-direction: row;
}
.flexbox-1 {
  -webkit-flex: 1;
  border: solid 3px red;
}
.flexbox-2 {
  -webkit-flex: 2;
  border: solid 3px blue;
  height: 200px;
  margin-left: 10px;
}

<div class="container">
  <div class="flexbox-1">.flexbox-1</div>
  <div class="flexbox-2">.flexbox-2</div>
</div>
Now provide the response and nothing else.
|
I know this is an old question, but a better solution is to set the flex item to align to the top using flex-start .

/* Default Styles */
.container {
  display: flex;
}
.flexbox-2 {
  flex: 2;
  border: solid 3px blue;
  height: 200px;
  margin-left: 10px;
}
.flexbox-1 {
  flex: 1;
  align-self: flex-start;
  border: solid 3px red;
}

<div class="container">
  <div class="flexbox-1">"align-self: flex-start;"</div>
  <div class="flexbox-2">.flexbox-2</div>
</div>
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27575779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1125251/']}
|
jdg_376576
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8847109
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a date/time string like 2012-01-13 04:37:20 , but I want to convert it to dd-mm-yyyy hh:mm . How can I do this? I am using the following code, but it throws an exception:

var now = "2012-01-13 04:37:20";
var dd = now.toLocaleDateString() + " " + now.toLocaleTimeString();
alert(dd);
Now provide the response and nothing else.
|
You can do simple string manipulation and create a JS Date object (your code throws because now is a string, and strings don't have toLocaleDateString ). See the function below, which accepts a date in the format yyyy-mm-dd hh:mm:ss ( DEMO here ):

function toJSDate(dateTime) {
    var parts = dateTime.split(" "); // parts[0] = date, parts[1] = time
    var date = parts[0].split("-");
    var time = parts[1].split(":");
    // new Date(year, month, day, hours, minutes, seconds, milliseconds)
    // month is 0-indexed, so date[1] - 1 corrects the format
    return new Date(date[0], date[1] - 1, date[2], time[0], time[1], time[2], 0);
}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8847109', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/406512/']}
|
jdg_376577
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3434803
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following snippet:

void Foo()
{
    // ...
}

void Bar()
{
    return Foo();
}

What is a legitimate reason to use the above in C++, as opposed to the more common approach:

void Foo()
{
    // ...
}

void Bar()
{
    Foo();
    // no more expressions -- i.e., implicit return here
}
Now provide the response and nothing else.
|
Probably no use in your example, but there are some situations where it's difficult to deal with void in template code, and I expect this rule helps with that sometimes. Very contrived example:

#include <iostream>

template <typename T>
T retval() {
    return T();
}

template <>
void retval() {
    return;
}

template <>
int retval() {
    return 23;
}

template <typename T>
T do_something() {
    std::cout << "doing something\n";
}

template <typename T>
T do_something_and_return() {
    do_something<T>();
    return retval<T>();
}

int main() {
    std::cout << do_something_and_return<int>() << "\n";
    std::cout << do_something_and_return<void*>() << "\n";
    do_something_and_return<void>();
}

Note that only main has to cope with the fact that in the void case there's nothing to return from retval . The intermediate function do_something_and_return is generic. Of course this only gets you so far: if do_something_and_return wanted, in the normal case, to store retval in a variable and do something with it before returning, then you'd still be in trouble; you'd have to specialize (or overload) do_something_and_return for void.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3434803', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/414417/']}
|
jdg_376578
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13554162
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Thanks to the replies on NFC and a few certain things, I've understood and managed to compile a code in which the users will be able to read a tag, and if the tag contains a string that is similar to my code, a coupon will be added (image changes) and an integer goes up by 1. This integer will be saved by SharedPreferences and it is used to determine how many coupons the users have collected and show it onResume. However, after compiling, when I try to run it, my application stops immediately. Can someone help me check on what I may have go wrong? I know it's kinda long but I really have no idea what went wrong. @TargetApi(10) //I have to use this line of code because I'm targetted to code at API 8 but some NFC functionalities that I use requires API 10.public class CouponManager extends Activity { private static final String TAG = "NFCReadTag"; private NfcAdapter mNfcAdapter; private IntentFilter[] mNdefExchangeFilters; private PendingIntent mNfcPendingIntent; public static final String PREF_FILE_NAME = "PrefFile"; private int[] images = new int[10]; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.coupon_layout); //List of images images[0]=R.drawable.cp0; images[1]=R.drawable.cp1; images[2]=R.drawable.cp2; images[3]=R.drawable.cp3; images[4]=R.drawable.cp4; images[5]=R.drawable.cp5; images[6]=R.drawable.cp6; images[7]=R.drawable.cp7; images[8]=R.drawable.cp8; images[9]=R.drawable.cp9; images[10]=R.drawable.cp10; //Restore preferences SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE); int storedPreference = preferences.getInt("storedInt", 0); //Image to use depending on coupon collected final ImageView img = new ImageView(this); if(storedPreference!=10) { img.setImageResource(images[storedPreference]); } else { AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setCancelable(false); builder.setTitle("Redeem Your Coupon?"); 
builder.setInverseBackgroundForced(true); builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { dialog.dismiss(); SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE); SharedPreferences.Editor editor = preferences.edit(); editor.putInt("storedInt", 0); // value to store editor.commit(); img.setImageResource(images[0]); } }); builder.setNegativeButton("No", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { dialog.dismiss(); img.setImageResource(images[10]); } }); } //Check and send Intent from NFC tag discovered mNfcAdapter = NfcAdapter.getDefaultAdapter(this); mNfcPendingIntent = PendingIntent.getActivity(this, 0, new Intent(this, getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP | Intent.FLAG_ACTIVITY_CLEAR_TOP), 0); IntentFilter coupontag = new IntentFilter(NfcAdapter.ACTION_NDEF_DISCOVERED); coupontag.addDataScheme("http"); coupontag.addDataAuthority("www.ichatime.com", null); coupontag.addDataPath(".*", PatternMatcher.PATTERN_SIMPLE_GLOB); mNdefExchangeFilters = new IntentFilter[] { coupontag }; } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_main, menu); return true; } @Override protected void onResume() { super.onResume(); if(mNfcAdapter != null) { mNfcAdapter.enableForegroundDispatch(this, mNfcPendingIntent, mNdefExchangeFilters, null); } else { Toast.makeText(getApplicationContext(), "Sorry, No NFC Adapter found.", Toast.LENGTH_SHORT).show(); } } @Override protected void onPause() { super.onPause(); if(mNfcAdapter != null) mNfcAdapter.disableForegroundDispatch(this); } @Override protected void onStop() { super.onStop(); // We need an Editor object to make preference changes. 
// All objects are from android.context.Context SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE); int storedPreference = preferences.getInt("storedInt", 0); SharedPreferences.Editor editor = preferences.edit(); editor.putInt("storedInt", storedPreference); // value to store editor.commit(); } @Override protected void onNewIntent(Intent intent) { super.onNewIntent(intent); SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE); int storedPreference = preferences.getInt("storedInt", 0); if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) { NdefMessage[] messages = null; Parcelable[] rawMsgs = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES); if (rawMsgs != null) { messages = new NdefMessage[rawMsgs.length]; for (int i = 0; i < rawMsgs.length; i++) { messages[i] = (NdefMessage) rawMsgs[i]; } } if(messages[0] != null) { String result=""; byte[] payload = messages[0].getRecords()[0].getPayload(); // this assumes that we get back am SOH followed by host/code for (int b = 1; b<payload.length; b++) { // skip SOH result += (char) payload[b]; } if (result == "ichatime.com") { final ImageView img = new ImageView(this); Toast.makeText(getApplicationContext(), "Coupon collected!", Toast.LENGTH_SHORT).show(); if (storedPreference!=10) { storedPreference++; SharedPreferences.Editor editor = preferences.edit(); editor.putInt("storedInt", storedPreference); img.setImageResource(images[storedPreference]); } if (storedPreference==10) { AlertDialog.Builder builder = new AlertDialog.Builder(this); builder.setCancelable(false); builder.setTitle("Redeem Your Coupon?"); builder.setInverseBackgroundForced(true); builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { dialog.dismiss(); SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE); SharedPreferences.Editor editor = preferences.edit(); 
editor.putInt("storedInt", 0); // value to store editor.commit(); img.setImageResource(images[0]); } }); builder.setNegativeButton("No", new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { dialog.dismiss(); img.setImageResource(images[10]); } }); } else { SharedPreferences.Editor editor = preferences.edit(); editor.putInt("storedInt", 10); img.setImageResource(images[10]); }} else { Toast.makeText(getApplicationContext(), "Wrong tag detected!", Toast.LENGTH_SHORT).show(); } //Debugging Mode to see what is contained in the tags. // Toast.makeText(getApplicationContext(), "Tag Contains " + result, Toast.LENGTH_SHORT).show(); } } }} Logcat errors: >11-26 01:16:11.869: D/AndroidRuntime(550): Shutting down VM>11-26 01:16:11.869: W/dalvikvm(550): threadid=1: thread exiting with uncaught exception (group=0x409c01f8)>11-26 01:16:11.929: I/dalvikvm(550): threadid=3: reacting to signal 3>11-26 01:16:11.979: E/AndroidRuntime(550): FATAL EXCEPTION: main>**11-26 01:16:11.979: E/AndroidRuntime(550): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.ponpon/com.example.ponpon.MainActivity}: java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.ponpon/com.example.ponpon.CouponManager}: java.lang.ArrayIndexOutOfBoundsException: length=10; index=10**>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1956)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1981)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.access$600(ActivityThread.java:123)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1147)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.os.Handler.dispatchMessage(Handler.java:99)>11-26 01:16:11.979: E/AndroidRuntime(550): at 
android.os.Looper.loop(Looper.java:137)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.main(ActivityThread.java:4424)>11-26 01:16:11.979: E/AndroidRuntime(550): at java.lang.reflect.Method.invokeNative(Native Method)>11-26 01:16:11.979: E/AndroidRuntime(550): at java.lang.reflect.Method.invoke(Method.java:511)>11-26 01:16:11.979: E/AndroidRuntime(550): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784)>11-26 01:16:11.979: E/AndroidRuntime(550): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551)>11-26 01:16:11.979: E/AndroidRuntime(550): at dalvik.system.NativeStart.main(Native Method)>11-26 01:16:11.979: E/AndroidRuntime(550): Caused by: java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.ponpon/com.example.ponpon.CouponManager}: java.lang.ArrayIndexOutOfBoundsException: length=10; index=10>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1956)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.startActivityNow(ActivityThread.java:1797)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.LocalActivityManager.moveToState(LocalActivityManager.java:135)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.LocalActivityManager.startActivity(LocalActivityManager.java:347)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.widget.TabHost$IntentContentStrategy.getContentView(TabHost.java:682)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.widget.TabHost.setCurrentTab(TabHost.java:346)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.widget.TabHost.addTab(TabHost.java:236)>11-26 01:16:11.979: E/AndroidRuntime(550): at com.example.ponpon.MainActivity.onCreate(MainActivity.java:37)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.Activity.performCreate(Activity.java:4465)>11-26 01:16:11.979: E/AndroidRuntime(550): at 
android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1920)>11-26 01:16:11.979: E/AndroidRuntime(550): ... 11 more>11-26 01:16:11.979: E/AndroidRuntime(550): Caused by: java.lang.ArrayIndexOutOfBoundsException: length=10; index=10>11-26 01:16:11.979: E/AndroidRuntime(550): at com.example.ponpon.CouponManager.onCreate(CouponManager.java:53)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.Activity.performCreate(Activity.java:4465)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049)>11-26 01:16:11.979: E/AndroidRuntime(550): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1920)>11-26 01:16:11.979: E/AndroidRuntime(550): ... 21 more What did I do wrong with my arrays? Thanks for the clarification guys!
Now provide the response and nothing else.
|
myfile->d_name is the file name, not the path, so you need to prepend the directory to the file name (e.g. "Downloads/file.txt") first, if it is not the working directory:

char buf[512];
while ((myfile = readdir(mydir)) != NULL) {
    sprintf(buf, "%s/%s", argv[1], myfile->d_name);
    stat(buf, &mystat);
    ....

As to why it prints 4096: that is the size of the links . and .. from the last call to stat() . Note: you should allocate a buffer large enough to hold the directory name, the file name, the NUL byte and the separator, something like strlen(argv[1]) + NAME_MAX + 2;
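The pitfall is language-independent: stat-like calls resolve a bare file name against the current working directory. A minimal Python analogue of the fix, for comparison (the temporary directory and file name here are made up purely for the demo):

```python
import os
import tempfile

d = tempfile.mkdtemp()  # stand-in for the "Downloads" directory
with open(os.path.join(d, "file.txt"), "w") as f:
    f.write("hello")

sizes = {}
for name in os.listdir(d):          # like readdir(): bare names, no path
    full = os.path.join(d, name)    # the sprintf("%s/%s", dir, name) step
    sizes[name] = os.stat(full).st_size  # stat the joined path, not the bare name
print(sizes)  # {'file.txt': 5}
```

Without the `os.path.join`, `os.stat(name)` would raise (or stat the wrong file) whenever the directory being listed is not the working directory, which is exactly the bug in the C code above.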
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13554162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1806905/']}
|
jdg_376579
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40433
|
Below is a question asked on the forum astronomy.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Mercury's orbital period around the Sun is about 88 days. Comets and other things have gotten closer to the Sun than Mercury does. But has there ever been an asteroid or some other body discovered that has a shorter orbital period? Are there theoretical constraints on whether such a body can exist? The IAU defines a planet as one that has swept out its orbit. Does the neighborhood of Mercury extend to the Sun?
Now provide the response and nothing else.
|
The recently discovered asteroids 2019 LF6 and 2020 AV2, each taking 151 days to orbit the Sun, have the shortest periods currently listed in the JPL Small Body Database. Vulcanoids are difficult to detect from Earth; none are known yet. To remain in such an orbit, Evans and Tabachnik 1999 estimate a minimum diameter of 100 m and a semimajor axis between 0.09 and 0.21 au. There are various competing definitions for a planet's neighborhood. Soter 2006 says: Two bodies share an "orbital zone" if their orbits cross a common radial distance from the primary and their periods are nonresonant and differ by less than an order of magnitude. A Sun-grazing asteroid with perihelion 0.01 au and aphelion 0.31 au (Mercury's perihelion) would orbit in 23 days, well above the 9-day minimum for that definition.
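The 23-day figure follows from Kepler's third law: for a body orbiting the Sun, the period in years is the semimajor axis in au raised to the power 3/2. A quick sketch using the orbital elements quoted above:

```python
def period_days(perihelion_au, aphelion_au):
    """Kepler's third law for a heliocentric orbit: P [yr] = a^(3/2), a in au."""
    a = (perihelion_au + aphelion_au) / 2.0  # semimajor axis
    return (a ** 1.5) * 365.25               # convert years to days

# Hypothetical Sun-grazer: perihelion 0.01 au, aphelion at Mercury's perihelion.
print(round(period_days(0.01, 0.31), 1))   # 23.4 days
# Sanity check against Mercury (perihelion ~0.307 au, aphelion ~0.467 au):
print(round(period_days(0.307, 0.467), 1)) # ~88 days
```

An order of magnitude below Mercury's 88 days gives the ~9-day floor mentioned for Soter's definition.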
|
{}
|
{'log_upvote_score': 6, 'links': ['https://astronomy.stackexchange.com/questions/40433', 'https://astronomy.stackexchange.com', 'https://astronomy.stackexchange.com/users/36644/']}
|
jdg_376580
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
373388
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'll soon be working on a large c# project and would like to build in multi-language support from the start. I've had a play around and can get it working using a separate resource file for each language, then use a resource manager to load up the strings. Are there any other good approaches that I could look into?
Now provide the response and nothing else.
|
Use a separate project with Resources I can tell this from our experience, having a current solution with 24 projects that includes API, MVC, Project Libraries (Core functionalities), WPF, UWP and Xamarin. It is worth reading this long post as I think it is the best way to do so. With the help of VS tools it is easily exportable and importable, to send to translation agencies or for review by other people. EDIT 02/2018: Still going strong; converting it to a .NET Standard library makes it possible to use it across .NET Framework and .NET Core. I added an extra section for converting it to JSON so that, for example, Angular can use it. EDIT 2019: Going forward with Xamarin, this still works across all platforms. E.g. Xamarin.Forms advises using resx files as well. (I did not develop an app in Xamarin.Forms yet, but the documentation, which is way too detailed to just get started with, covers it: Xamarin.Forms Documentation ). Just like converting it to JSON, we can also convert it to a .xml file for Xamarin.Android. EDIT 2019 (2): While upgrading from WPF to UWP, I found that UWP prefers another file type, .resw , which is identical in content but different in usage. I found a different way of doing this which, in my opinion, works better than the default solution . EDIT 2020: Updated some suggestions for larger (modular) projects that might require multiple language projects. So, let's get to it. Pros Strongly typed almost everywhere. In WPF you don't have to deal with ResourceDirectories . Supported for ASP.NET, Class Libraries, WPF, Xamarin, .NET Core, .NET Standard as far as I have tested. No extra third-party libraries needed. Supports culture fallback: en-US -> en. Not only back-end; works also in XAML for WPF and Xamarin.Forms, and in .cshtml for MVC. Easily manipulate the language by changing Thread.CurrentThread.CurrentCulture . Search engines can crawl in different languages and users can send or save language-specific urls.
Cons WPF XAML is sometimes buggy; newly added strings don't show up directly. Rebuild is the temp fix (vs2015). UWP XAML does not show IntelliSense suggestions and does not show the text while designing. If you know a fix, tell me. Setup Create a language project in your solution, give it a name like MyProject.Language . Add a folder to it called Resources, and in that folder, create two resource files (.resx). One called Resources.resx and another called Resources.en.resx (or .en-GB.resx to be specific). In my implementation, I have NL (Dutch) as the default language, so that goes in my first file, and English goes in my second file. Setup should look like this: The properties for Resources.resx must be: Make sure that the custom tool namespace is set to your project namespace. The reason for this is that in WPF, you cannot reference Resources inside XAML otherwise. And inside the resource file, set the access modifier to Public: If you have such a large application (let's say different modules) you can consider creating multiple projects like above. In that case you could prefix your keys and resource classes with the particular module. Use the best language editor there is for Visual Studio to combine all files into a single overview. Using in another project Reference your project: Right click on References -> Add Reference -> Projects\Solutions. Use the namespace in a file: using MyProject.Language; Use it like so in the back-end: string someText = Resources.orderGeneralError; If there is something else called Resources, then just put in the entire namespace.
Using in MVC In MVC you can do however you like to set the language, but I used parameterized url's, which can be setup like so: RouteConfig.cs Below the other mappings routes.MapRoute( name: "Locolized", url: "{lang}/{controller}/{action}/{id}", constraints: new { lang = @"(\w{2})|(\w{2}-\w{2})" }, // en or en-US defaults: new { controller = "shop", action = "index", id = UrlParameter.Optional }); FilterConfig.cs (might need to be added, if so, add FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); to the Application_start() method in Global.asax public class FilterConfig{ public static void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new ErrorHandler.AiHandleErrorAttribute()); //filters.Add(new HandleErrorAttribute()); filters.Add(new LocalizationAttribute("nl-NL"), 0); }} LocalizationAttribute public class LocalizationAttribute : ActionFilterAttribute{ private string _DefaultLanguage = "nl-NL"; private string[] allowedLanguages = { "nl", "en" }; public LocalizationAttribute(string defaultLanguage) { _DefaultLanguage = defaultLanguage; } public override void OnActionExecuting(ActionExecutingContext filterContext) { string lang = (string) filterContext.RouteData.Values["lang"] ?? _DefaultLanguage; LanguageHelper.SetLanguage(lang); }} LanguageHelper just sets the Culture info. 
//fixed number and date format for now, this can be improved.public static void SetLanguage(LanguageEnum language){ string lang = ""; switch (language) { case LanguageEnum.NL: lang = "nl-NL"; break; case LanguageEnum.EN: lang = "en-GB"; break; case LanguageEnum.DE: lang = "de-DE"; break; } try { NumberFormatInfo numberInfo = CultureInfo.CreateSpecificCulture("nl-NL").NumberFormat; CultureInfo info = new CultureInfo(lang); info.NumberFormat = numberInfo; //later, we will if-else the language here info.DateTimeFormat.DateSeparator = "/"; info.DateTimeFormat.ShortDatePattern = "dd/MM/yyyy"; Thread.CurrentThread.CurrentUICulture = info; Thread.CurrentThread.CurrentCulture = info; } catch (Exception) { }} Usage in .cshtml @using MyProject.Language;<h3>@Resources.w_home_header</h3> or if you don't want to define usings then just fill in the entire namespace OR you can define the namespace under /Views/web.config: <system.web.webPages.razor><host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.2.3.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /><pages pageBaseType="System.Web.Mvc.WebViewPage"> <namespaces> ... <add namespace="MyProject.Language" /> </namespaces></pages></system.web.webPages.razor> This mvc implementation source tutorial: Awesome tutorial blog Using in class libraries for models Back-end using is the same, but just an example for using in attributes using MyProject.Language;namespace MyProject.Core.Models{ public class RegisterViewModel { [Required(ErrorMessageResourceName = "accountEmailRequired", ErrorMessageResourceType = typeof(Resources))] [EmailAddress] [Display(Name = "Email")] public string Email { get; set; } }} If you have reshaper it will automatically check if the given resource name exists. If you prefer type safety you can use T4 templates to generate an enum Using in WPF. Ofcourse add a reference to your MyProject.Language namespace, we know how to use it in back-end. 
In XAML, inside the header of a Window or UserControl, add a namespace reference called lang like so: <UserControl x:Class="Babywatcher.App.Windows.Views.LoginView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:local="clr-namespace:MyProject.App.Windows.Views" xmlns:lang="clr-namespace:MyProject.Language;assembly=MyProject.Language" <!--this one--> mc:Ignorable="d" d:DesignHeight="210" d:DesignWidth="300"> Then, inside a label: <Label x:Name="lblHeader" Content="{x:Static lang:Resources.w_home_header}" TextBlock.FontSize="20" HorizontalAlignment="Center"/> Since it is strongly typed you are sure the resource string exists. You might need to recompile the project sometimes during setup, WPF is sometimes buggy with new namespaces. One more thing for WPF, set the language inside the App.xaml.cs . You can do your own implementation (choose during installation) or let the system decide. public partial class App : Application{ protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e); SetLanguageDictionary(); } private void SetLanguageDictionary() { switch (Thread.CurrentThread.CurrentCulture.ToString()) { case "nl-NL": MyProject.Language.Resources.Culture = new System.Globalization.CultureInfo("nl-NL"); break; case "en-GB": MyProject.Language.Resources.Culture = new System.Globalization.CultureInfo("en-GB"); break; default://default english because there can be so many different system language, we rather fallback on english in this case. MyProject.Language.Resources.Culture = new System.Globalization.CultureInfo("en-GB"); break; } }} Using in UWP In UWP, Microsoft uses this solution , meaning you will need to create new resource files. 
Plus you can not re-use the text either because they want you to set the x:Uid of your control in XAML to a key in your resources. And in your resources you have to do Example.Text to fill a TextBlock 's text. I didn't like that solution at all because I want to re-use my resource files. Eventually I came up with the following solution. I just found this out today (2019-09-26) so I might come back with something else if it turns out this doesn't work as desired. Add this to your project: using Windows.UI.Xaml.Resources;public class MyXamlResourceLoader : CustomXamlResourceLoader{ protected override object GetResource(string resourceId, string objectType, string propertyName, string propertyType) { return MyProject.Language.Resources.ResourceManager.GetString(resourceId); }} Add this to App.xaml.cs in the constructor: CustomXamlResourceLoader.Current = new MyXamlResourceLoader(); Where ever you want to in your app, use this to change the language: ApplicationLanguages.PrimaryLanguageOverride = "nl";Frame.Navigate(this.GetType()); The last line is needed to refresh the UI. While I am still working on this project I noticed that I needed to do this 2 times. I might end up with a language selection at the first time the user is starting. But since this will be distributed via Windows Store, the language is usually equal to the system language. Then use in XAML: <TextBlock Text="{CustomResource ExampleResourceKey}"></TextBlock> Using it in Angular (convert to JSON) Now days it is more common to have a framework like Angular in combination with components, so without cshtml. Translations are stored in json files, I am not going to cover how that works, I would just highly recommend ngx-translate instead of the angular multi-translation. So if you want to convert translations to a JSON file, it is pretty easy, I use a T4 template script that converts the Resources file to a json file. 
I recommend installing T4 editor to read the syntax and use it correctly because you need to do some modifications. Only 1 thing to note: It is not possible to generate the data, copy it, clean the data and generate it for another language. So you have to copy below code as many times as languages you have and change the entry before '//choose language here'. Currently no time to fix this but probably will update later (if interested). Path: MyProject.Language/T4/CreateLocalizationEN.tt <#@ template debug="false" hostspecific="true" language="C#" #><#@ assembly name="System.Core" #><#@ assembly name="System.Windows.Forms" #><#@ import namespace="System.Linq" #><#@ import namespace="System.Text" #><#@ import namespace="System.Collections.Generic" #><#@ import namespace="System.Resources" #><#@ import namespace="System.Collections" #><#@ import namespace="System.IO" #><#@ import namespace="System.ComponentModel.Design" #><#@ output extension=".json" #><#var fileNameNl = "../Resources/Resources.resx";var fileNameEn = "../Resources/Resources.en.resx";var fileNameDe = "../Resources/Resources.de.resx";var fileNameTr = "../Resources/Resources.tr.resx";var fileResultName = "../T4/CreateLocalizationEN.json";//choose language herevar fileResultPath = Path.Combine(Path.GetDirectoryName(this.Host.ResolvePath("")), "MyProject.Language", fileResultName);//var fileDestinationPath = "../../MyProject.Web/ClientApp/app/i18n/";var fileNameDestNl = "nl.json";var fileNameDestEn = "en.json";var fileNameDestDe = "de.json";var fileNameDestTr = "tr.json";var pathBaseDestination = Directory.GetParent(Directory.GetParent(this.Host.ResolvePath("")).ToString()).ToString();string[] fileNamesResx = new string[] {fileNameEn }; //choose language herestring[] fileNamesDest = new string[] {fileNameDestEn }; //choose language herefor(int x = 0; x < fileNamesResx.Length; x++){ var currentFileNameResx = fileNamesResx[x]; var currentFileNameDest = fileNamesDest[x]; var currentPathResx = 
Path.Combine(Path.GetDirectoryName(this.Host.ResolvePath("")), "MyProject.Language", currentFileNameResx); var currentPathDest =pathBaseDestination + "/MyProject.Web/ClientApp/app/i18n/" + currentFileNameDest; using(var reader = new ResXResourceReader(currentPathResx)) { reader.UseResXDataNodes = true;#> {<# foreach(DictionaryEntry entry in reader) { var name = entry.Key; var node = (ResXDataNode)entry.Value; var value = node.GetValue((ITypeResolutionService) null); if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("\n", ""); if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("\r", "");#> "<#=name#>": "<#=value#>",<# }#> "WEBSHOP_LASTELEMENT": "just ignore this, for testing purpose" }<# } File.Copy(fileResultPath, currentPathDest, true);}#> If you have a modulair application and you followed my suggestion to create multiple language projects, then you will have to create a T4 file for each of them. Make sure the json files are logically defined, it doesn't have to be en.json , it can also be example-en.json . To combine multiple json files for using with ngx-translate , follow the instructions here Use in Xamarin.Android As explained above in the updates, I use the same method as I have done with Angular/JSON. But Android uses XML files, so I wrote a T4 file that generates those XML files. 
Path: MyProject.Language/T4/CreateAppLocalizationEN.tt #@ template debug="false" hostspecific="true" language="C#" #><#@ assembly name="System.Core" #><#@ assembly name="System.Windows.Forms" #><#@ import namespace="System.Linq" #><#@ import namespace="System.Text" #><#@ import namespace="System.Collections.Generic" #><#@ import namespace="System.Resources" #><#@ import namespace="System.Collections" #><#@ import namespace="System.IO" #><#@ import namespace="System.ComponentModel.Design" #><#@ output extension=".xml" #><#var fileName = "../Resources/Resources.en.resx";var fileResultName = "../T4/CreateAppLocalizationEN.xml";var fileResultRexPath = Path.Combine(Path.GetDirectoryName(this.Host.ResolvePath("")), "MyProject.Language", fileName);var fileResultPath = Path.Combine(Path.GetDirectoryName(this.Host.ResolvePath("")), "MyProject.Language", fileResultName); var fileNameDest = "strings.xml"; var pathBaseDestination = Directory.GetParent(Directory.GetParent(this.Host.ResolvePath("")).ToString()).ToString(); var currentPathDest =pathBaseDestination + "/MyProject.App.AndroidApp/Resources/values-en/" + fileNameDest; using(var reader = new ResXResourceReader(fileResultRexPath)) { reader.UseResXDataNodes = true; #> <resources> <# foreach(DictionaryEntry entry in reader) { var name = entry.Key; //if(!name.ToString().Contains("WEBSHOP_") && !name.ToString().Contains("DASHBOARD_"))//only include keys with these prefixes, or the country ones. 
//{ // if(name.ToString().Length != 2) // { // continue; // } //} var node = (ResXDataNode)entry.Value; var value = node.GetValue((ITypeResolutionService) null); if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("\n", ""); if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("\r", ""); if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("&", "&amp;"); if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("<<", ""); //if (!String.IsNullOrEmpty(value.ToString())) value = value.ToString().Replace("'", "\'");#> <string name="<#=name#>">"<#=value#>"</string><# }#> <string name="WEBSHOP_LASTELEMENT">just ignore this</string><# #> </resources> <# File.Copy(fileResultPath, currentPathDest, true); }#> Android works with values-xx folders, so the above is for English and goes in the values-en folder. But you also have to generate a default, which goes into the values folder. Just copy the above T4 template and change the folder in the code. There you go: you can now use one single resource file for all your projects. This makes it very easy to export everything to an Excel document, let someone translate it, and import it again. Special thanks to this amazing VS extension which works great with resx files. Consider donating to the author for his awesome work (I have nothing to do with that, I just love the extension).
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/373388', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74652/']}
|
jdg_376581
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2838578
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I got myself confused over the following: We have $$\mathbb Q(\zeta_3)=\mathbb Q(\exp(2\pi i/3))=\mathbb Q\left(\cos\frac{2\pi}{3}+i\sin\frac{2\pi}{3}\right)=\mathbb Q\left(-\frac{1}{2}+\frac{i\sqrt 3}{2}\right)=\mathbb Q(i\sqrt 3),$$ but also $$\mathbb Q(\zeta_6)=\mathbb Q(\exp(2\pi i/6))=\mathbb Q\left(\cos\frac{2\pi}{6}+i\sin\frac{2\pi}{6}\right)=\mathbb Q\left(\frac{1}{2}+\frac{i\sqrt 3}{2}\right)=\mathbb Q(i\sqrt 3).$$ So the fields are absolutely identical? $\Phi_6$ splits in $\mathbb Q (\zeta_3 )$ and vice versa?
Now provide the response and nothing else.
|
Yes, because $\Phi_6$ and $\Phi_3$ are actually $x^2-x+1$ and $x^2+x+1$ respectively. So $\Bbb Q(\zeta_6)$ and $\Bbb Q(\zeta_3)$ both have degree $2$ over $\Bbb Q$ and, since one obviously contains the other, they are the same extension.
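A worked check of the mutual containment, using only the values computed in the question: since $\zeta_3=-\frac12+\frac{i\sqrt3}{2}$ and $\zeta_6=\frac12+\frac{i\sqrt3}{2}$, we have
$$\zeta_3=\zeta_6^2\in\mathbb Q(\zeta_6)\qquad\text{and}\qquad \zeta_6=1+\zeta_3\in\mathbb Q(\zeta_3),$$
so $\mathbb Q(\zeta_3)\subseteq\mathbb Q(\zeta_6)\subseteq\mathbb Q(\zeta_3)$, and the two fields coincide without any degree computation.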
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2838578', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/465097/']}
|
jdg_376582
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6098020
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I presume this atomic operation is faster than ++. I only see advantages favoring Interlocked.Increment . What are its disadvantages?
Now provide the response and nothing else.
|
Atomic means it is thread-safe (i.e. it is impossible for one thread to read the value while another is changing it.) Which makes it slower, not faster, due to the need to use thread synchronization mechanisms. You want to use ++ if you don't care about thread-safety. Here is some discussion on the relative performance of the ++ operator in different contexts.
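The trade-off can be sketched in Python, used here only as neutral pseudocode for the pattern (none of these names are .NET APIs): a plain read-modify-write is not atomic across threads, and making it atomic means paying for a synchronization primitive on every increment.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    counter += 1  # read-modify-write: not atomic across threads (the "++" case)

def interlocked_increment():
    global counter
    with lock:    # the synchronization that makes it atomic -- and slower
        counter += 1

threads = [threading.Thread(target=interlocked_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100 -- guaranteed only because of the lock
```

With `unsafe_increment` instead, lost updates are possible under contention, which is exactly why the unsynchronized form is cheaper.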
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6098020', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/48465/']}
|
jdg_376583
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14010473
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to share with you an issue I'm having while trying to process some attributes from XML elements in C++ with Boost libraries (version 1.52.0). Given the following code: #define ATTR_SET ".<xmlattr>"#define XML_PATH1 "./pets.xml"#include <iostream>#include <string>#include <boost/foreach.hpp>#include <boost/property_tree/ptree.hpp>#include <boost/property_tree/xml_parser.hpp>using namespace std;using namespace boost;using namespace boost::property_tree;const ptree& empty_ptree(){ static ptree t; return t;}int main() { ptree tree; read_xml(XML_PATH1, tree); const ptree & formats = tree.get_child("pets", empty_ptree()); BOOST_FOREACH(const ptree::value_type & f, formats){ string at = f.first + ATTR_SET; const ptree & attributes = formats.get_child(at, empty_ptree()); cout << "Extracting attributes from " << at << ":" << endl; BOOST_FOREACH(const ptree::value_type &v, attributes){ cout << "First: " << v.first.data() << " Second: " << v.second.data() << endl; } }} Let's say I have the following XML structure: <?xml version="1.0" encoding="utf-8"?><pets> <cat name="Garfield" weight="4Kg"> <somestuff/> </cat> <dog name="Milu" weight="7Kg"> <somestuff/> </dog> <bird name="Tweety" weight="0.1Kg"> <somestuff/> </bird></pets> Therefore, the console output I'll get will be the next: Extracting attributes from cat.<xmlattr>:First: name Second: GarfieldFirst: weight Second: 4KgExtracting attributes from dog.<xmlattr>:First: name Second: MiluFirst: weight Second: 7KgExtracting attributes from bird.<xmlattr>:First: name Second: TweetyFirst: weight Second: 0.1Kg However, if I decide to use a common structure for every single element laying down from the root node (in order to identify them from their specific attributes), the result will completely change. 
This may be the XML file in such case: <?xml version="1.0" encoding="utf-8"?><pets> <pet type="cat" name="Garfield" weight="4Kg"> <somestuff/> </pet> <pet type="dog" name="Milu" weight="7Kg"> <somestuff/> </pet> <pet type="bird" name="Tweety" weight="0.1Kg"> <somestuff/> </pet></pets> And the output would be the following: Extracting attributes from pet.<xmlattr>:First: type Second: catFirst: name Second: GarfieldFirst: weight Second: 4KgExtracting attributes from pet.<xmlattr>:First: type Second: catFirst: name Second: GarfieldFirst: weight Second: 4KgExtracting attributes from pet.<xmlattr>:First: type Second: catFirst: name Second: GarfieldFirst: weight Second: 4Kg It seems the number of elements hanging from the root node is being properly recognized since three sets of attributes have been printed. Nevertheless, all of them refer to the attributes of the very first element... I'm not an expert in C++ and really new to Boost, so this might be something I'm missing with respect to hash mapping processing or so... Any advice will be much appreciated.
Now provide the response and nothing else.
|
The problem with your program is located in this line: const ptree & attributes = formats.get_child(at, empty_ptree()); With this line you are asking to get the child pet.<xmlattr> from pets and you do this 3 times independently of whichever f you are traversing. Following this article I'd guess that what you need to use is: const ptree & attributes = f.second.get_child("<xmlattr>", empty_ptree()); The full code, that works with both your xml files, is: #define ATTR_SET ".<xmlattr>"#define XML_PATH1 "./pets.xml"#include <iostream>#include <string>#include <boost/foreach.hpp>#include <boost/property_tree/ptree.hpp>#include <boost/property_tree/xml_parser.hpp>using namespace std;using namespace boost;using namespace boost::property_tree;const ptree& empty_ptree(){ static ptree t; return t;}int main() { ptree tree; read_xml(XML_PATH1, tree); const ptree & formats = tree.get_child("pets", empty_ptree()); BOOST_FOREACH(const ptree::value_type & f, formats){ string at = f.first + ATTR_SET; const ptree & attributes = f.second.get_child("<xmlattr>", empty_ptree()); cout << "Extracting attributes from " << at << ":" << endl; BOOST_FOREACH(const ptree::value_type &v, attributes){ cout << "First: " << v.first.data() << " Second: " << v.second.data() << endl; } }}
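For comparison, here is the same per-element attribute walk in Python's `xml.etree` (a hypothetical analogue, not Boost): the key point is identical — the attributes are read from each child element itself (the counterpart of `f.second`), never re-fetched by name from the root.

```python
import xml.etree.ElementTree as ET

xml = """<pets>
  <pet type="cat" name="Garfield" weight="4Kg"/>
  <pet type="dog" name="Milu" weight="7Kg"/>
</pets>"""

root = ET.fromstring(xml)
names = []
for child in root:                          # each <pet>, like `f` in `formats`
    for key, value in child.attrib.items(): # attributes of *this* child only
        if key == "name":
            names.append(value)
print(names)  # ['Garfield', 'Milu']
```

Looking the child up from the root by its tag name (the analogue of `formats.get_child("pet.<xmlattr>")`) would return the first match every time, reproducing the bug in the question.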
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14010473', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1924830/']}
|
jdg_376584
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
762642
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
To generate an SSL certificate file for Apache , I am using the below command: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.cert And I manually feed it with these parameters: Country Name (2 letter code) [AU]: AU State or Province Name (full name): Myname [Some-State]: Some-State Locality Name (eg, city) []: City Organization Name (eg, company) [Internet Widgits Pty Ltd]: Internet Organizational Unit Name (eg, section) []: Section Common Name (e.g. server FQDN or YOUR name) []: yourname Email Address []: email@gmail.com Is it possible to enter them from a file or right from an OpenSSL command line using options? There is no hint from the OpenSSL man pages.
Now provide the response and nothing else.
|
You can create a configuration file and use that in your command. You could, for example, create a config file named openssl.cnf and use it like this: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.cert -config ./openssl.cnf In your case you can set the following parameters:

[ req_distinguished_name ]
# Variable name             Prompt string
#-------------------------  ----------------------------------
0.organizationName        = Organization Name (company)
organizationalUnitName    = Organizational Unit Name (department, division)
emailAddress              = Email Address
emailAddress_max          = 40
localityName              = Locality Name (city, district)
stateOrProvinceName       = State or Province Name (full name)
countryName               = Country Name (2 letter code)
countryName_min           = 2
countryName_max           = 2
commonName                = Common Name (hostname, IP, or your name)
commonName_max            = 64

More can be found at http://www.flatmtn.com/article/setting-openssl-create-certificates#SSLCert-4
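To skip the prompts entirely, the request section can also be made non-interactive with `prompt = no` — a minimal sketch (all field values here are placeholders taken from the question, not defaults):

```
[ req ]
default_bits       = 2048
prompt             = no
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
C            = AU
ST           = Some-State
L            = City
O            = Internet
OU           = Section
CN           = yourname
emailAddress = email@gmail.com
```

With this file, the same `openssl req ... -config ./openssl.cnf` command runs without asking anything. Alternatively, the `-subj` option passes the same fields directly on the command line, e.g. `-subj "/C=AU/ST=Some-State/L=City/O=Internet/OU=Section/CN=yourname"`.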
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/762642', 'https://serverfault.com', 'https://serverfault.com/users/196732/']}
|
jdg_376585
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8265328
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I make a model completely read-only in the admin interface? It's for a kind of log table, where I'm using the admin features to search, sort, filter etc, but there is no need to modify the log. In case this looks like a duplicate, here's not what I'm trying to do: I'm not looking for readonly fields (even making every field readonly would still let you create new records) I'm not looking to create a readonly user : every user should be readonly.
Now provide the response and nothing else.
|
The admin is for editing, not just viewing (you won't find a "view" permission). In order to achieve what you want you'll have to forbid adding and deleting, and make all fields readonly:

class MyAdmin(ModelAdmin):
    def has_add_permission(self, request, obj=None):
        return False

    def has_delete_permission(self, request, obj=None):
        return False

(if you forbid changing you won't even get to see the objects) For some untested code that tries to automate setting all fields read-only see my answer to Whole model as read-only EDIT: also untested but just had a look at my LogEntryAdmin and it has readonly_fields = MyModel._meta.get_all_field_names() Don't know if that will work in all cases. EDIT: QuerySet.delete() may still bulk delete objects. To get around this, provide your own "objects" manager and corresponding QuerySet subclass which doesn't delete - see Overriding QuerySet.delete() in Django
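The permission overrides are easy to factor into a reusable mixin so every log-style model gets the same behaviour. A minimal sketch (shown standalone without Django imports so the logic is visible; `ReadOnlyAdminMixin` is a name of my choosing, and in a real project it would be mixed into a `ModelAdmin` subclass):

```python
class ReadOnlyAdminMixin:
    """Deny add/delete so the admin page becomes a pure browser.

    Mix into a ModelAdmin subclass, e.g.:
        class LogEntryAdmin(ReadOnlyAdminMixin, admin.ModelAdmin):
            readonly_fields = [f.name for f in LogEntry._meta.fields]
    """

    def has_add_permission(self, request, obj=None):
        # No "Add" button, no add view.
        return False

    def has_delete_permission(self, request, obj=None):
        # No delete action, no delete view.
        return False
```

Combined with `readonly_fields` covering every field, the change form turns into a detail view that cannot persist anything.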
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8265328', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/263268/']}
|
jdg_376586
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4254
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Define the following class of "circular" languages over a finite alphabet $\Sigma$. Actually, the name already exists to denote a different thing it seems, used in the field of DNA computing. AFAICT, that's a different class of languages. A language L is circular iff for all words $w$ in $\Sigma^*$, we have: $w$ belongs to L if and only if for all integers $k > 0$, $w^k$ belongs to L. Is this class of languages known? I am interested in the circular languages which are also regular and in particular in: a name for them, if they are already known; decidability of the problem, given an automaton (in particular: a DFA), whether the accepted language obeys the above definition
Now provide the response and nothing else.
|
In the first part, we show an exponential algorithm for deciding circularity. In the second part, we show that the problem is coNP-hard. In the third part, we show that every circular language is a union of languages of the form $r^+$ (here $r$ could be the empty regexp); the union is not necessarily disjoint. In the fourth part, we exhibit a circular language which cannot be written as a disjoint sum $\sum r_i^+$. Edit: Incorporated some corrections following Mark's comments. In particular, my earlier claims that circularity is coNP-complete or NP-hard are corrected. Edit: Corrected normal form from $\sum r_i^*$ to $\sum r_i^+$. Exhibited an "inherently ambiguous" language. Continuing Peter Taylor's comment, here's how to decide (extremely inefficiently) whether a language is circular given its DFA. Construct a new DFA whose states are $n$-tuples of the old states. This new DFA runs $n$ copies of the old DFA in parallel. If the language is not circular then there is a word $w$ such that if we run it through the DFA repeatedly, starting with the initial state $s_0$, then we get states $s_1,\ldots,s_n$ such that $s_1$ is accepting but one of the other ones is not accepting (if all of them are accepting then the sequence $s_0,\ldots,s_n$ must cycle, so that $w^*$ is always in the language). In other words, we have a path from $s_0,\ldots,s_{n-1}$ to $s_1,\ldots,s_n$ where $s_1$ is accepting but one of the others is not accepting. Conversely, if the language is circular then that cannot happen. So we've reduced the problem to a simple directed reachability test (just check all possible "bad" $n$-tuples). The problem of circularity is coNP-hard. Suppose we're given a 3SAT instance with $n$ variables $\vec{x}$ and $m$ clauses $C_1,\ldots,C_m$. We can assume that $n = m$ (add dummy variables) and that $n$ is prime (otherwise find a prime between $n$ and $2n$ using AKS primality testing, and add dummy variables and clauses).
Consider the following language: "the input is not of the form $\vec{x}_1 \cdots \vec{x}_n$ where $\vec{x}_i$ is a satisfying assignment for $C_i$". It is easy to construct an $O(n^2)$ DFA for this language. If the language is not circular then there is a word $w$ in the language, some power of which is not in the language. Since the only words not in the language have length $n^2$, $w$ must be of length $1$ or $n$. If it is of length $1$, consider $w^n$ instead (it is still in the language), so that $w$ is in the language and $w^n$ is not in the language. The fact that $w^n$ is not in the language means that $w$ is a satisfying assignment. Conversely, any satisfying assignment translates to a word proving the non-circularity of the language: the satisfying assignment $w$ belongs to the language but $w^n$ does not. Thus the language is circular iff the 3SAT instance is unsatisfiable. In this part, we discuss a normal form for circular languages. Consider some DFA for a circular language $L$. A sequence $C = C_0,\ldots$ is real if $C_0 = s$ (the initial state), all other states are accepting, and $C_i = C_j$ implies $C_{i+1} = C_{j+1}$. Thus every real sequence is eventually periodic, and there are only finitely many real sequences (since the DFA has finitely many states). We say that a word behaves according to $C$ if the word takes the DFA from state $C_i$ to state $C_{i+1}$, for all $i$. The set of all such words $E(C)$ is regular (the argument is similar to the first part of this answer). Note that $E(C)$ is a subset of $L$. Given a real sequence $C$, define $C^k$ to be the sequence $C^k(t) = C(kt)$. The sequence $C^k$ is also real. Since there are only finitely many different sequences $C^k$, the language $D(C)$ which is the union of all $E(C^k)$ is also regular. We claim that $D(C)$ has the property that if $x,y \in D(C)$ then $xy \in D(C)$. Indeed, suppose that $x \in E(C^k)$ and $y \in E(C^l)$. Then $xy \in E(C^{k+l})$.
Thus $D(C) = D(C)^+$ can be written in the form $r^+$ for some regular expression $r$. Every word $w$ in the language corresponds to some real sequence $C$, i.e. there exists a real sequence $C$ that $w$ behaves according to. Thus $L$ is the union of $D(C)$ over all real sequences $C$. Therefore every circular language has a representation of the form $\sum r_i^+$. Conversely, every such language is circular (trivially). Consider the circular language $L$ of all words over $a,b$ that contain either an even number of $a$'s or an even number of $b$'s (or both). We show that it cannot be written as a disjoint sum $\sum r_i^+$; by "disjoint" we mean that $r_i^+ \cap r_j^+ = \varnothing$. Let $N_i$ be the size of some DFA for $r_i^+$, and $N > \max N_i$ be some odd integer. Consider $x = a^N b^{N!}$. Since $x \in L$, $x \in r_i^+$ for some $i$. By the pumping lemma, we can pump a prefix of $x$ of length at most $N$. Thus $r_i^+$ generates $z = a^{N!} b^{N!}$. Similarly, $y = a^{N!} b^N$ is generated by some $r_j^+$, which also generates $z$. Note that $i \neq j$ since $xy \notin L$. Thus the representation cannot be disjoint.
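The definition itself is easy to sanity-check by brute force on tiny DFAs. The sketch below is exponential enumeration, not the $n$-tuple construction described above; the alphabet, word-length bound and power bound are arbitrary choices for illustration:

```python
from itertools import product

def accepts(dfa, word):
    """Run a DFA given as (transition dict, start state, accepting set)."""
    delta, state, accepting = dfa
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

def looks_circular(dfa, alphabet, max_len=4, max_pow=5):
    """Check w in L  <=>  w^k in L for all k, over all short words only.

    This only tests words up to max_len and powers up to max_pow, so a
    True result is evidence, not a proof; a False result is a genuine
    counterexample to circularity.
    """
    for n in range(1, max_len + 1):
        for letters in product(alphabet, repeat=n):
            w = "".join(letters)
            in_l = accepts(dfa, w)
            all_powers = all(accepts(dfa, w * k) for k in range(1, max_pow + 1))
            if in_l != all_powers:
                return False
    return True
```

For example, the language "even number of a's" passes the check, while the finite language {a} fails it (a is accepted but aa is not).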
|
{}
|
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/4254', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/3176/']}
|
jdg_376587
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2621496
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Are there any decent examples of the following available: Looking through the WIF SDK, there are examples of using WIF in conjunction with ASP.NET using the WSFederationAuthenticationModule (FAM) to redirect to an ASP.NET site that is a thin skin on top of a Security Token Service (STS), which the user uses to authenticate (via supplying a username and password). If I understand WIF and claims-based access correctly, I would like my application to provide its own login screen where users provide their username and password and let this delegate to an STS for authentication, sending the login details to an endpoint via a security standard (WS-*), and expecting a SAML token to be returned. Ideally, the SessionAuthenticationModule would work as per the examples using FAM in conjunction with SessionAuthenticationModule, i.e. be responsible for reconstructing the IClaimsPrincipal from the session security chunked cookie and redirecting to my application login page when the security session expires. Is what I describe possible using FAM and SessionAuthenticationModule with appropriate web.config settings, or do I need to think about writing a HttpModule myself to handle this? Alternatively, is redirecting to a thin web site STS where users log in the de facto approach in a passive requestor scenario?
Now provide the response and nothing else.
|
An example of WIF + MVC is available in this chapter of the "Claims Identity Guide": http://msdn.microsoft.com/en-us/library/ff359105.aspx I do suggest reading the first couple of chapters to understand all the underlying principles. This blog post covers the specifics of MVC + WIF: Link Controlling the login experience is perfectly fine. You should just deploy your own STS (in your domain, with your look & feel, etc). Your apps would simply rely on it for AuthN (that's why an app is usually called a "relying party"). The advantage of the architecture is that authN is delegated to one component (the STS) and not spread out throughout many apps. But the other (huge) advantage is that you can enable more sophisticated scenarios very easily. For example, you can now federate with other organizations' identity providers. Hope it helps. -- Eugenio @RisingStar: The token (containing the claims) can be optionally encrypted (otherwise they will be in clear text). That's why SSL is always recommended for interactions between the browser and the STS. Notice that even though they are in clear text, tampering is not possible because the token is digitally signed.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2621496', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1831/']}
|
jdg_376588
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
564967
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I mean the question in the following sense. For Fourier, we can do it on compact intervals and then we get a sequence of coefficients. We can do it continuum-style, and then we get a superposition of waves of continuum-varying frequencies. We can even do it abstractly on compact groups. So why is the Laplace transform (at least the only one I've ever heard about) defined only on $[0, \infty)$?
Now provide the response and nothing else.
|
Consider the space $L=L^1(\mathbb{R})$, this is a Banach space which becomes a commutative Banach algebra under the convolution $$f*g(x)=\int_\mathbb{R}f(y)g(x-y)dy$$Similarly, the space $L^+=L^1(\mathbb{R}^+)$ is a Banach space which becomes a commutative Banach algebra under the convolution $$f*g(x)=\int_0^xf(y)g(x-y)dy$$ The Gelfand transform on $L$ is the Fourier transform. The Gelfand transform on $L^+$ is the Laplace transform. There are also discrete versions of this. Consider the space $\ell=\ell^1(\mathbb{Z})=L^1(\mathbb{Z})$, this is a Banach space which becomes a commutative Banach algebra under the convolution $$f*g(k)=\sum_\mathbb{Z}f(n)g(k-n)$$Similarly, the space $\ell^+=\ell^1(\mathbb{N})=L^1(\mathbb{N})$ is a Banach space which becomes a commutative Banach algebra under the convolution $$f*g(k)=\sum_{n=0}^kf(n)g(k-n)$$ The Gelfand transform on $\ell$ is an absolute convergent Fourier series. The Gelfand transform on $\ell^+$ is an absolute convergent Taylor series. If we have a locally compact group $G$ then we may consider complex valued functions $f:G\to\mathbb{C}$ and there is a generalisation of Lebesgue measure due to Alfred Haar which is left invariant, i.e. $\int f(yx) d\mu(x)= \int f(x) d\mu(x)$. If it happens that $\mu$ is right invariant too, then $\mu$ is called unimodular (which is the case for Abelian groups). Having a Haar measure it makes sense to talk about convolution, that is $$f*g(x)=\int_G f(y)g(y^{-1}x)d\mu(y)$$The hope to understand $L^1(G)$ through diagonalising the convolution using a kind of Fourier transform is not easy. 
Loosely speaking we look for $\phi$ such that $\phi(xy)=\phi(x)\phi(y)$ (note, for the additive group $\mathbb{R}$ this reads $\phi(x+y)=\phi(x)\phi(y)$ and leads to the exponential function) and thus $$\int_G f*g(x)\phi(x)d\mu(x)=\int_G\int_Gf(y)g(y^{-1}x)\phi(x)d\mu(y)d\mu(x)=\\ \int_G\int_Gf(y)g(z)\phi(yz)d\mu(y)d\mu(z)= \int_G\int_Gf(y)g(z)\phi(y)\phi(z)d\mu(y)d\mu(z)$$ LCA-groups (locally compact Abelian groups) can more or less be written in the form $\mathbb{R}^n\times\mathbb{Z}^m\times(\mathbb{R}/\mathbb{Z})^l$. For non-Abelian groups there is a whole subject known as Representation theory; in particular you will find several books on $SL(2,\mathbb{R})$.
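The discrete $\ell^1(\mathbb{N})$ case above can be seen very concretely: convolving coefficient sequences is exactly multiplying the corresponding Taylor series, which is what the Gelfand transform diagonalises. A quick sketch in Python (sequences truncated to finite lists for illustration):

```python
def convolve(f, g):
    """Convolution on l^1(N): (f*g)(k) = sum_{n=0}^{k} f(n) g(k-n).

    Equivalently: the coefficient list of the product of the two
    power series whose coefficients are f and g.
    """
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out
```

For instance, convolving [1, 1] with itself gives [1, 2, 1], i.e. $(1+x)^2 = 1 + 2x + x^2$: evaluating the series at a point (the Gelfand transform here) sends convolution to pointwise multiplication.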
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/564967', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33378/']}
|
jdg_376589
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
349531
|
Below is a question asked on the forum meta.stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Recently this question has been on Meta: What is the appropriate way to report someone who created a new account to get around a ban? To me that particular issue there seems like a duplicate question problem, but it raises the whole issue of circumventing bans by creating a new account. On several occasions I've seen people say that you shouldn't create a new account in order to get past the question ban. This seems strange to me. It is like asking people boarding a plane if they are terrorists. The good people aren't and the bad people say they aren't. A conscientious Stack Overflow user may refrain from opening a second account, but a spammer will have no qualms about it. By forbidding it you are only inhibiting the conscientious user. The more official stance actually seems to be "yes you can do it but you may be punished for it." (See what Shog wrote here , quoting here .) Can I simply create a new account? Yes, but doing so violates the rules for operating multiple accounts - namely, it allows you to do something you would otherwise be restricted from doing . Penalties for violating this restriction can vary: Your accounts may be deleted without warning. You and others on your network may find your questions are shown to fewer potential answerers or subjected to preemptive review when posted. You and others on your network may be prevented from asking questions entirely. If you opt to delete your account and then later return to the site, your ability to ask questions may be limited to one question per week until you have demonstrated an ability to ask useful questions. This is a site for programmers. As programmers we are constantly looking for solutions to problems. We keep trying things until something works. That applies to question bans. If I have a pressing issue that I need help with, then I'm not going to wait a month or even a week to ask it. I'll find a way to get help sooner. Even the penalties listed above are rather meaningless. 
If a user's accounts get deleted, then they can open a new one with a new email address. If you limit or block their network, then they can find a new IP by using a VPN or proxy. Most Stack Overflow users in China pretty much have to do this already because of the blocked Google APIs that SE uses . Don't get me wrong, I'm all for rate limits and bans and everything else used to cut down on the number of poor questions coming in. I'm also not talking about sock puppet voting or other types of multiple account abuse. I just think that telling people not to open a new account when they have a question ban is not useful. Or even if it is in some way meaningful, it is not practically enforceable. In my opinion, the biggest deterrent to creating a new account is that you lose all your rep and the privileges that come with it. This is a natural consequence that no one needs to remind you of.
Now provide the response and nothing else.
|
This was the reason I asked this earlier question . We know that people create question-ban evasion accounts all the time, so I wanted to kick around ideas for how to make this harder or how to detect these accounts as they happen. On a near-daily basis, I find and delete question-ban evasion accounts and / or deal with sock puppets created to prop someone up from being banned. The discussion on that question went in a bit of a different direction, though. When someone evading a question ban is brought to my attention, I can usually track down the main account and all their secondary accounts. Feeding the secondary accounts into the anti-trolling system (a process I'd like to see codified and made simpler ) works to block new accounts from their location. Despite what you might think, and what you speculate about in your question, I've found that intelligent IP-bans put in place by this system work extremely well to stop further posts by question-ban evaders. The people dumping questions on the site in order to have us do their work for them are not exactly the most capable or hard-working programmers, so they aren't as likely to take the time to figure out how to circumvent these blocks. I've seen tags transformed overnight by catching and IP-blocking a few repeated question-ban evaders. Their posts stopped immediately. Now, Shog9's stats seem to indicate that the rate at which people evade question bans is far lower than the rate at which genuinely new accounts are posting, so I don't know if it's just my irritation with these folks causing me to focus on them or if they are a legitimate problem. There might still be things that the site could do to cut down on this, like banning throwaway email providers for new account registration (mailinator, 10-minute-mail, sharklasers, etc.), identifying new accounts on IPs where someone was just question-banned, identifying new accounts with names that match others on the same IP, and so on. 
That's what I'd hoped to discuss in my earlier question, but the conversation focused more on brand new users than on ban evaders. I might propose some of these as separate feature requests if I can think through the implementation.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://meta.stackoverflow.com/questions/349531', 'https://meta.stackoverflow.com', 'https://meta.stackoverflow.com/users/3681880/']}
|
jdg_376590
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11819895
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I prevent the ScrollView from scolling content under the LinearLayout content?ScrollView just uses all the display space and I can't limit this.Tried with main Relative- and LinearLayout. My code: <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"xmlns:tools="http://schemas.android.com/tools"android:layout_width="match_parent"android:layout_height="match_parent"android:orientation="vertical"tools:context=".MainActivity" ><LinearLayout xmlns:myapp="http://schemas.android.com/apk/res/your.app.package" android:id="@+id/ad_layout" android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" ></LinearLayout><ScrollView android:id="@+id/ScrollView1" android:layout_width="match_parent" android:layout_height="0px" android:layout_weight="1" ><RelativeLayout android:id="@+id/RelativeLayout1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:gravity="bottom" android:orientation="vertical" ><ImageView android:id="@+id/imageView1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:contentDescription="@string/image" android:onClick="openImage1" android:scaleType="fitStart" android:src="@drawable/loading" android:adjustViewBounds="true" android:background="#000000" /></RelativeLayout></ScrollView></LinearLayout>
Now provide the response and nothing else.
|
Usually a back-reference is either $1 or \1 (backslash one) for the first capture group (the first match of a pattern in parentheses), and indeed Sublime supports both syntaxes. So try: my name used to be \1 or my name used to be $1 Also note that your original capture pattern: my name is (\w)+ is incorrect and will only capture the final letter of the name rather than the whole name. You should use the following pattern to capture all of the letters of the name: my name is (\w+)
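The same back-reference behaviour can be demonstrated in any scriptable regex engine (this is an analogous Python demonstration, not Sublime itself; Python's `re` accepts `\1` or `\g<1>` in the replacement string):

```python
import re

text = "my name is Alice"

# Capture the whole name with (\w+) and reuse it via \1
fixed = re.sub(r"my name is (\w+)", r"my name used to be \1", text)
print(fixed)  # my name used to be Alice

# With (\w)+ the group is a single repeated character, so it only
# retains the LAST letter matched -- the pitfall noted above
m = re.match(r"my name is (\w)+", text)
print(m.group(1))  # e
```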
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/11819895', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1516065/']}
|
jdg_376591
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20362373
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say there is a class: class Person{ String name; int age; City location;} Is there some library that will let me create a list of Strings containing each name from the list of persons in one line instead of creating a new list and looping through the other list? Something like: List<Person> people = getAllOfThePeople();List<String> names = CoolLibrary.createList("name", people); Rather than: List<Person> people = getAllOfThePeople();List<String> names = new LinkedList<String>();for(Person person : people){ names.add(person.getName());}
Now provide the response and nothing else.
|
You can use Java 8 lambda expressions: List<String> listNames = people.stream().map(u -> u.getName()).collect(Collectors.toList());

import java.util.*;
import java.util.function.*;
import java.util.stream.*;

public class Test {
    public static void main(String args[]){
        List<Person> people = Arrays.asList(new Person("Bob", 25, "Geneva"), new Person("Alice", 27, "Paris"));
        List<String> listNames = people.stream().map(u -> u.getName()).collect(Collectors.toList());
        System.out.println(listNames);
    }
}

class Person{
    private String name;
    private int age;
    private String location;

    public Person(String name, int age, String location){
        this.name = name;
        this.age = age;
        this.location = location;
    }

    public String getName(){
        return this.name;
    }
}

Output : [Bob, Alice] Demo here . Alternatively, you can define a method that will take your list as parameter and the function you want to apply for each element of this list:

public static <X, Y> List<Y> processElements(Iterable<X> source, Function<X, Y> mapper) {
    List<Y> l = new ArrayList<>();
    for (X p : source)
        l.add(mapper.apply(p));
    return l;
}

Then just do:

List<String> lNames = processElements(people, p -> p.getName()); // for the names
List<Integer> lAges = processElements(people, p -> p.getAge()); // for the ages
// etc.

If you want to group people by age, the Collectors class provides nice utilities (example):

Map<Integer, List<Person>> byAge = people.stream()
        .collect(Collectors.groupingBy(Person::getAge));
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20362373', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1267413/']}
|
jdg_376592
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19407153
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen a few (literally, only a few) links and nothing in the documentation that talks about clustering with Firebird, that it can be done. Then, I shot for the moon on this question CLUSTER command for Firebird? , but answerer told me that Firebird doesn't even have clustered indexes at all, so now I'm really confused. Does Firebird physically order data at all? If so, can it be ordered by any key, not just primary, and can the clustering/defragging be turned on and off so that it only does it during downtime? If not, isn't this a hit to performance since it will take the disk longer to put together disparate rows that naturally should be right next to each other? (DB noob) MVCC I found out that Firebird is based upon MVCC, so old data actually isn't overwritten until a "sweep". I like that a lot! Again, I can't find much, but it seems like a real shame that data wouldn't be defragged according to a key. This says that database pages are defragmented but provides no further explanation.
Now provide the response and nothing else.
|
Firebird does not cluster records. It was designed to avoid the problems that require clustering and the fragmentation problems that come with clustered indexes. Indexes and data are stored separately, on different types of pages. Each data page contains data from only one table. Records are stored in the order they were inserted, give or take concurrent inserts, which generally go on separate pages. When old records are removed, new records will be stored in their place, so new records sometimes appear on the same page as older ones. Many tables use an artificial primary key, generally ascending, which might be a database generated sequence or a timestamp. That practice causes records to be stored in key order, but that order is by no means guaranteed. Nor is it very interesting. When the primary key is artificial, most queries that return groups of related records are done on secondary indexes. That's a performance hit for records that are clustered because look-ups on secondary indexes require traversing two indexes because the secondary index provides only the key to the primary index, which must be traversed to find the data. On the larger issue of defragmentation and space usage, Firebird tracks the free space on pages so new records will be inserted on pages that have had records removed. If a page becomes completely empty, it will be reallocated. This space management is done as the database runs. As you know, Firebird uses Multi-Version Concurrency Control, so when a record is updated or deleted, Firebird creates a new record version, but keeps the old version around. When all transactions that were running before the change was committed have ended, the old record version no longer serves any purposes, and Firebird will remove it. In many applications, old versions are removed in the normal course of running the database. 
When a transaction touches a record with old versions, Firebird checks the state of the old versions and removes them if no running transaction can read them. There is a function called "Sweep" that systematically removes unneeded old record versions. Sweep can run concurrently with other database activity, though it's better to schedule it when the database load is low. So no, it's not true that nothing is removed until you run a sweep. Best regards, Ann Harrison, who's worked with Firebird and its predecessors for an embarrassingly long time. BTW - as the first person to answer mentioned, Firebird does leave space on pages so that the old version of a record stays on the same page as the newer version. It's not a fixed percentage of the space, but 16 bytes per record stored on the page, so pages of tables with very short records have more free space and tables that have long records have less.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19407153', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_376593
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
187091
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have two identical hard drives (same model) in my system. One is used for data which is not important (could be downloaded again but would take an eternity). The other is my system disk. Now my problem is: the system disk is likely to fail in the near future (the SMART stats for it are really bad) and the other was seldom used and still has good stats. Does anybody here know a program which can exchange the data between two drives without a third drive or losing data on one drive? I already ordered a new hard disk, but I want to be on the safe side.
Now provide the response and nothing else.
|
First, your snippet executes the command echo {} : ;if [ -f {} ]; then echo file; else echo directory;fi because it needs its output to evaluate the command substitution. Since there is no file named {} , this produces the output {} :directory Then the find command is executed with the arguments -exec , echo , {} , : , directory , so for every file, it outputs the file name followed by a space and : directory . What you actually want to do is to execute the shell snippet echo {} :; … on each file found by find . This snippet must be executed by a shell spawned by find , not by the shell that starts find , since it is receiving data from find on its command line. Therefore you need to instruct find to run a shell: find -exec sh -c 'echo {} : ;if [ -f {} ]; then echo file; else echo directory;fi' \; This is better, but still not right. It'll work with some (not all) find implementations if your file names don't contain any special characters, but since you are interpolating the file name in a shell script, you allow file names to execute arbitrary shell commands, e.g. if you have a file called $(rm -rf /) then the command rm -rf / will be executed. To pass file names to the script, pass them as separate arguments. Also the first echo prints a newline after the colon. Use echo -n (if your shell supports it) or printf to avoid this. find -exec sh -c 'printf "%s :" "$0"; if [ -f "$0" ]; then echo file; else echo directory; fi' {} \; You can use -exec … {} + to group shell invocations, which is faster. find -exec sh -c 'for x; do printf "%s :" "$x"; if [ -f "$x" ]; then echo file; else echo directory; fi; done' _ {} +
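A quick sandbox run of the grouped form shows the expected output (the directory layout here is a throwaway example created just for the demo):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
mkdir "$tmp/subdir"
touch "$tmp/afile"

# For each path found, print "<path> : file" or "<path> : directory"
find "$tmp" -exec sh -c '
  for x; do
    printf "%s : " "$x"
    if [ -f "$x" ]; then echo file; else echo directory; fi
  done' _ {} +

rm -rf "$tmp"
```

This prints one line per entry, e.g. `.../afile : file` and `.../subdir : directory`, with a single shell invocation handling a whole batch of paths.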
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/187091', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/104686/']}
|
jdg_376594
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
527660
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This might be stupid but I need an answer. Newton's second law stated that the force exerted on a body equals the mass of the body times the acceleration of the body. And the law of gravitation states that: $$F = G\frac{m_1m_2}{r^2}$$ My question is: if I take two identical masses and place one higher than the other then one experiences greater gravitational force than the other. Both accelerate at the same rate $g$ . If this is so, according to Newton's second law the forces on the bodies must be equal. Where am I wrong?
Now provide the response and nothing else.
|
The answer is that falling objects do not all accelerate towards the Earth at the same rate of $9.8 \text{ m/s}^2$ . All objects at the surface of the Earth accelerate the same, regardless of their mass. Likewise, all objects at the same distance from the center of the Earth accelerate at the same rate. But objects at different heights do not accelerate exactly the same; otherwise how could you ever escape the Earth's gravity? The more correct way to calculate the acceleration is to do it the way you have done, using Newton's 2nd Law and Newton's Law of Universal Gravitation. Physics teachers often teach their classes that all objects accelerate at the same rate and then don't emphasize the limits on that statement.
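A quick back-of-the-envelope computation makes the height dependence concrete (standard textbook values for G, M and R; the altitudes are arbitrary examples):

```python
# Quick numerical check: g = G*M/r^2 falls off with height.
G = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24   # kg, mass of the Earth
R = 6.371e6    # m, mean radius of the Earth

def g(height_m):
    """Gravitational acceleration at a given height above the surface."""
    r = R + height_m
    return G * M / (r * r)

print(f"at the surface:    {g(0):.4f} m/s^2")
print(f"at 10 km altitude: {g(10_000):.4f} m/s^2")
print(f"at 400 km (ISS):   {g(400_000):.4f} m/s^2")
```

The difference is tiny over everyday heights, which is why the "all objects accelerate at g" shorthand works so well near the surface.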
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/527660', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/215062/']}
|
jdg_376595
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3027149
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to have multiple submit buttons. I have a form which creates an instance of Contact_Call. One button creates it as normal. The other button creates it but needs to have a different :attribute value from the default, and it also needs to set the attribute on a different, but related model used in the controller. How do I do that? I can't change the route, so is there a way to send a different variable that gets picked up by [:params]? And if I do then, what do I do in the controller, set up a case statement?
Now provide the response and nothing else.
|
You can create multiple submit buttons and provide a different value to each:

<% form_for(something) do |f| %>
  ..
  <%= f.submit 'A' %>
  <%= f.submit 'B' %>
  ..
<% end %>

This will output:

<input type="submit" value="A" id=".." name="commit" />
<input type="submit" value="B" id=".." name="commit" />

Inside your controller, the submitted button's value will be identified by the parameter commit . Check the value to do the required processing:

def <controller action>
  if params[:commit] == 'A'
    # A was pressed
  elsif params[:commit] == 'B'
    # B was pressed
  end
end

However, remember that this tightly couples your view to the controller, which may not be very desirable.
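The branching itself can be pulled out of the controller action and exercised as plain Ruby. This is only an illustrative sketch; the method name and return values are made up, not part of the original app:

```ruby
# Illustrative sketch of the controller-side dispatch on params[:commit].
def handle_commit(params)
  case params[:commit]
  when 'A' then :create_normally
  when 'B' then :create_with_override
  else          :unknown_button
  end
end

puts handle_commit(commit: 'A')
puts handle_commit(commit: 'B')
```

A commonly suggested, looser alternative is to give each button its own name attribute (e.g. f.submit 'A', :name => 'commit_a') and branch on params.key?, which keeps the button's display text out of the controller logic.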
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3027149', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92679/']}
|
jdg_376596
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
784420
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I activate Windows 10 clients via an existing KMS Server running on Windows 2008 R2? EXISTING ENVIRONMENT Windows 2008 R2 Server running the KMS Host role Existing KMS key is Windows 2012 R2 (or lower) installed on the KMS host from a previous setup. Current activations are fine for Windows 2012 R2 and lower on servers, and Windows 8.1 and lower for clients. DESIRED OUTCOME Activation of Windows 10 clients via the KMS host
Now provide the response and nothing else.
|
In order to activate Windows 10 clients on a KMS host running Windows Server 2008 R2 you can follow the steps below. Please be aware these aren't steps to get a KMS host running from scratch; some of those steps are outlined in the TechNet article here: Activate using Key Management Service . Also be aware that deploying a KMS host server shouldn't be something just thrown into an environment without proper planning. NOTE: if you are using a 2012 R2 KMS host, you can skip to step #2. The rest of the steps should apply to you at that point. STEPS FOLLOWED 1. On the 2008 R2 KMS host, a hotfix is required: KB 3079821 2. Retrieve the correct KMS key from the Microsoft VLSC site. The correct key is labeled "Windows Srv 2012R2 DataCtr/Std KMS for Windows 10" in the "Product Keys" tab of your Relationship Summary page. 3. On the existing KMS host, run slmgr.vbs /dlv to get the current detailed KMS license summary. I would recommend taking a screenshot for historical reference. 4. Activate the new license on the KMS server by typing slmgr.vbs /ipk <key you retrieved in step #2> (be patient; it will return a command prompt immediately and then about 10 seconds later show a window pop-up stating the license was installed successfully). NOTE: If you use the Windows 10 KMS key instead of the one above you will get an error 0xC004F015 during activation. See here for more details: KB 3086418 . Servers can ONLY have a Server license key installed, so you have to use the proper "Windows Srv 2012R2 DataCtr/Std KMS for Windows 10" license key from the MVLS site. 5. The KMS server will then show as "This copy of Windows is not Genuine" on the desktop and in the activation section of the System properties. Don't freak out: you now need to re-activate just the KMS host by typing slmgr.vbs /ato . The server should then show as activated again. That's it; you can go to the "TESTING COMPLETED" section for next steps.
References: Activate using Key Management Service ; Activating Windows 10 on Windows Server 2008 R2, Windows Server 2012 and 2012 R2 KMS hosts. TESTING COMPLETED The following tests will help validate all is well. 1. On the KMS host, run slmgr.vbs /dlv again and note the new product key. Everything else in that pop-up should be very similar to what you saw before you activated the new license (refer to step #3 above in "STEPS FOLLOWED"). 2. On a Windows 10 client that has been deployed with the CVLK (the generic KMS key used in an SCCM or MDT image), try to activate the Windows 10 client now (you can use slmgr.vbs /ato , for instance).
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/784420', 'https://serverfault.com', 'https://serverfault.com/users/7861/']}
|
jdg_376597
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4830856
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to install a Python package from a private GitHub repository. For a public repository, I can issue the following command which works fine: pip install git+git://github.com/django/django.git However, if I try this for a private repository: pip install git+git://github.com/echweb/echweb-utils.git I get the following output: Downloading/unpacking git+git://github.com/echweb/echweb-utils.gitCloning Git repository git://github.com/echweb/echweb-utils.git to /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-buildComplete output from command /usr/local/bin/git clone git://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build:fatal: The remote end hung up unexpectedlyCloning into /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build...----------------------------------------Command /usr/local/bin/git clone git://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-VRsIoo-build failed with error code 128 I guess this is because I am trying to access a private repository without providing any authentication. 
I therefore tried to use Git + ssh hoping that pip would use my SSH public key to authenticate: pip install git+ssh://github.com/echweb/echweb-utils.git This gives the following output: Downloading/unpacking git+ssh://github.com/echweb/echweb-utils.gitCloning Git repository ssh://github.com/echweb/echweb-utils.git to /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-buildComplete output from command /usr/local/bin/git clone ssh://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build:Cloning into /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build...Permission denied (publickey).fatal: The remote end hung up unexpectedly----------------------------------------Command /usr/local/bin/git clone ssh://github.com/echweb/echweb-utils.git /var/folders/cB/cB85g9P7HM4jcPn7nrvWRU+++TI/-Tmp-/pip-DQB8s4-build failed with error code 128 Is what I am trying to achieve even possible? If so, how can I do it?
Now provide the response and nothing else.
|
You can use the git+ssh URI scheme, but you must set a username. Notice the git@ part in the URI: pip install git+ssh://git@github.com/echweb/echweb-utils.git Also read about deploy keys . PS: In my installation, the "git+ssh" URI scheme works only with "editable" requirements: pip install -e URI#egg=EggName Remember : Change the : character that git remote -v prints to a / character before using the remote's address in the pip command: $ git remote -vorigin git@github.com:echweb/echweb-utils.git (fetch)# ^ change this to a '/' character If you forget, you will get this error: ssh: Could not resolve hostname github.com:echweb: nodename nor servname provided, or not known
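The colon-to-slash rewrite mentioned at the end can be done mechanically. A small sketch, using the repository name from the question as the example:

```shell
# Turn the scp-style remote that `git remote -v` prints into a
# pip-compatible git+ssh URL.
remote='git@github.com:echweb/echweb-utils.git'

# Replace the colon after the host with a slash and add the scheme prefix.
pip_url="git+ssh://$(printf '%s' "$remote" | sed 's|:|/|')"
echo "$pip_url"   # git+ssh://git@github.com/echweb/echweb-utils.git
```

If you forget the rewrite, the "Could not resolve hostname github.com:echweb" error shown above is the giveaway.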
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/4830856', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/15676/']}
|
jdg_376598
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11289551
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars). When I try to delete all PDFs together using the following command: rm -f *.pdf I get the following error: /bin/rm: cannot execute [Argument list too long] What is the solution to this error? Does this error occur for mv and cp commands as well? If yes, how do I solve it for those commands?
Now provide the response and nothing else.
|
The reason this occurs is because bash actually expands the asterisk to every matching file, producing a very long command line. Try this: find . -name "*.pdf" -print0 | xargs -0 rm Warning: this is a recursive search and will find (and delete) files in subdirectories as well. Tack on -f to the rm command only if you are sure you don't want confirmation. You can do the following to make the command non-recursive: find . -maxdepth 1 -name "*.pdf" -print0 | xargs -0 rm Another option is to use find's -delete flag: find . -name "*.pdf" -delete
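The non-recursive -delete variant can be verified safely in a scratch directory first (the file names below are invented for the demonstration):

```shell
# Build a scratch tree: two PDFs at the top, one PDF in a subdirectory,
# and one non-PDF that must survive.
tmp=$(mktemp -d)
mkdir "$tmp/sub"
touch "$tmp/a.pdf" "$tmp/b.pdf" "$tmp/keep.txt" "$tmp/sub/c.pdf"

# Non-recursive delete: only the PDFs directly inside $tmp go away.
find "$tmp" -maxdepth 1 -name '*.pdf' -delete

ls "$tmp"   # keep.txt and sub/ remain; sub/c.pdf is untouched
rm -rf "$tmp"
```

Dropping -maxdepth 1 makes it recursive again, so sub/c.pdf would be deleted too.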
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/11289551', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/887235/']}
|
jdg_376599
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9980416
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
As silly as it may sound, I still haven't found an appropriate answer. Let's say I want to dynamically create a new DOM element and fill up its textContent/innerText with a JS string literal. The string is so long I would like to split it into three chunks: var h1 = document.createElement("h1");h1.textContent = "This is a very long string and I would like to insert a carriage return HERE...moreover, I would like to insert another carriage return HERE... so this text will display in a new line"; The problem is, if I write h1.textContent = "...I would like to insert a carriage return here... \n"; it doesn't work, probably because the browser considers the '\n' to be pure text and displays it as such (the \r doesn't work either). On the other hand, I could change the h1.innerHTML instead of the textContent and write: h1.innerHTML = "...I would like to insert a carriage return here...<br />"; Here the <br /> would do the job, but doing so would replace not just the text content but all the HTML content of my h1, which is not quite what I want. Is there a simple way to solve my problem? I wouldn't resort to creating multiple block elements just to have the text on different lines. Any idea would be greatly appreciated. Thanks in advance.
Now provide the response and nothing else.
|
I know this question was posted a long time ago. I had a similar problem a few days ago: passing a value from a web service in JSON format and placing it in a table cell's textContent . The value is passed as, for example, "text row1\r\ntext row2" and so on. For a new line in textContent you have to use \r\n and, finally, I had to use the CSS white-space: pre-line; (text will wrap when necessary, and on line breaks) and everything went fine. Or, you can use only white-space: pre; and then text will wrap only on line breaks (in this case \r\n ). So, here is an example of how to solve it with wrapping text only on line breaks:

var h1 = document.createElement("h1");
// setting this CSS style solves the problem with new lines in textContent
h1.setAttribute('style', 'white-space: pre;');
// add \r\n in the text wherever you want a line break (new line)
h1.textContent = "This is a very long string and I would like to insert a carriage return \r\n...";
h1.textContent += "moreover, I would like to insert another carriage return \r\n...";
h1.textContent += "so this text will display in a new line";
document.body.appendChild(h1);
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9980416', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1236489/']}
|
jdg_376600
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
301685
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Ubuntu server running Redis, which suffers from a high load problem. Forensics Uptime # uptime05:43:53 up 19 min, 1 user, load average: 2.96, 2.07, 1.52 sar # sar -q 05:24:00 AM LINUX RESTART05:25:01 AM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked05:35:04 AM 0 116 3.41 2.27 1.20 4Average: 0 116 3.41 2.27 1.20 4 htop The CPU is utilization in htop is embarrassingly low: top netstat 34 open redis-server connections: $ sudo netstat -natp | grep redis-server | wc -l34 free $ free -g total used free shared buffers cachedMem: 14 6 8 0 0 2-/+ buffers/cache: 4 10Swap: 0 0 0 How do I know which processes are causing the high load, waiting to enter the Running state? Is the number of connections too high?
Now provide the response and nothing else.
|
You're seeing the unexpected loadavg because of high iowait. 98.7 in the wa section of top shows this. From your screenshots I see the kworker process is also in uninterruptible sleep (state of D within top) which occurs when a process is waiting for disk I/O to complete. vmstat gives you visibility into the run queue. Execute vmstat 1 in typical sar fashion for updates every second. The r column shows runnable/running processes which the kernel uses to calculate loadavg and the b column shows processes blocked waiting for disk I/O aka uninterruptible sleep. Processes in b are added to the loadavg calculation, which is how iowait causes mysterious loadavg. So to answer your question of how to see which procs are causing high loadavg, in your case of iowait, use top / ps to look for procs in a state of D then troubleshoot from there.
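The last step, spotting processes stuck in state D, can be scripted. The sketch below runs against a canned ps sample so it is reproducible; on a live machine you would pipe the real output of ps -eo state=,pid=,comm= instead:

```shell
# Filter processes in uninterruptible sleep (state D) from ps-style output.
sample='S 1 systemd
D 812 kworker/u8:3
R 1204 redis-server
D 1377 jbd2/sda1-8'

# Keep only lines whose state column starts with D.
printf '%s\n' "$sample" | awk '$1 ~ /^D/ {print $2, $3}'
```

Processes that show up here repeatedly across samples are the ones feeding the iowait-driven loadavg.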
|
{}
|
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/301685', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/1079/']}
|
jdg_376601
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16316
|
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been watching some videos on YouTube to decide whether I should become a vegan. One of the things that came up during a discussion was the claim that soy contains a lot of oestrogen, which could cause undesirable side effects. However, to my great surprise, the speaker answered by claiming that "meat and cow's milk have twice the amount of oestrogen that's found in soy". I searched quickly on Google and saw many websites claiming the same thing. Is that true? Claim found in this video .
Now provide the response and nothing else.
|
It seems unlikely. Soy contains isoflavones which are said to mimic oestrogen. Many cattle are given growth promoting hormones - including oestrogen. If the sources below are to be believed, the daily amount consumed in soya may be thousands of times greater than that consumed in beef. Soy: 30000 ng Beef: 4 ng or 20 ng. The first two sources are not great and the products, quantities and consumers in them are not comparable. However I think they are at least indicative of the magnitude of the likely quantities. According to The Guardian It has been estimated that infants who are fed soya formula exclusively receive an amount of oestrogen equivalent to five birth control pills every day. One birth control pill contains 30 to 35 micrograms (mcg) of EE (ethinyl estradiol - a synthetic oestrogen). According to BeefMyths 3-ounce serving of beef from a steer treated with growth promotants contains 1.9 nanograms of estrogen. 3-ounce serving of beef from a steer raised without growth promotants, such as certified organic beef contains 1.3 nanograms of estrogen. Sources: Food and Drug Administration; Hoffman and Evers; Scanga et al.; FSIS-USDA; Dr. Harlan Ritchie, Michigan State University; NCBA In Possible health impact of animal oestrogens in food it says The theoretical maximum daily intake (TMDI) of oestradiol-17β by consumption of cattle meat is calculated to be 4.3 ng. Following the use of oestradiol-containing growth-promoting agents TMDI is increased by a factor of 4.6 to 20 ng.
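Taking the answer's own figures at face value, the implied ratios can be computed directly (these are the quoted estimates from the sources above, not measurements):

```python
# Back-of-the-envelope check using the figures quoted above (ng per day/serving).
soy_ng = 30_000          # oestrogen-equivalent estimate quoted for soy
beef_ng_untreated = 4.3  # TMDI of oestradiol-17beta from untreated cattle meat
beef_ng_treated = 20     # TMDI with growth promotants

print(f"soy vs untreated beef: ~{soy_ng / beef_ng_untreated:,.0f}x")
print(f"soy vs treated beef:   ~{soy_ng / beef_ng_treated:,.0f}x")
```

Even with generous rounding, the quoted numbers differ by three to four orders of magnitude, which is why the "twice as much" claim looks implausible.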
|
{}
|
{'log_upvote_score': 4, 'links': ['https://skeptics.stackexchange.com/questions/16316', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/7569/']}
|
jdg_376602
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
45066
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd describe myself as an averagely tech-savvy computer user. I have many accounts on forums, shopping sites, etc. where I recycle two moderately strong passwords with small variations. These are accounts where I don't care if anybody gains access to them, and that's why I have them saved in the browser's password manager. For example, I don't care if somebody gains access to my Alfa Romeo forum account or my Deal Extreme account, because they can't do me any harm. Now for my internet banking and main email, it's a different story. I use a strong password for my internet banking which I DON'T recycle and don't have saved in my browser's password manager. For banking transactions I use a hardware token. For my Gmail I use two-step verification with another strong password. To me that sounds like a secure enough method where I'm keeping what's important safe while not clogging my mind with too many passwords or worrying about the latest security breach in a password manager. Is a dedicated password manager really necessary for the average user? Thanks!
Now provide the response and nothing else.
|
Yes. The average user should use long random passwords for every site. Passwords should not be repeated, passwords should not follow a discernible pattern. The compromise of any one password (e.g. your Adobe or LinkedIn login) must not be allowed to make it any easier for the attacker to guess your other passwords. These requirements make remembering passwords very nearly impossible. But that's not the primary reason why you should use a password manager. The primary reason is that it reliably protects you against phishing attacks. A browser-integrated password manager will only fill in a site-specific password if you're actually visiting the correct site. So you won't accidentally type in your Paypal.com password into www.paypal.com.us.cgi-bin.webscr.xzy.ru. This is doubly true for average users , who on the average, rely on the general familiarity of a site to determine whether or not its legitimate (a terribly ineffective heuristic). Since you don't know your password, you can't type it in. Instead, it will only auto-fill if you're at the authentic site. Use a browser-integrated password manager, don't get phished. It literally is that simple. Phishing is far more prevalent and serious a threat than password disclosure, anyway.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://security.stackexchange.com/questions/45066', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/32908/']}
|
jdg_376603
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19202893
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to convert input (a series of integers) into a bunch of lists. Sample input:

3
2
2 2 4 5 7

Sample output:

list1=[3]
list2=[2]
list3=[2,2,4,5,7]

I am trying to do this:

import sys
list = []
for line in sys.stdin:
    list.append(line)

but print list returns ['3\n', '2\n', '2 2 4 5 7']
Now provide the response and nothing else.
|
Use split to split a string into a list, for example:

>>> '2 2 4 5 7'.split()
['2', '2', '4', '5', '7']

As you see, the elements are strings. If you want to have the elements as integers, use int and a list comprehension:

>>> [int(elem) for elem in '2 2 4 5 7'.split()]
[2, 2, 4, 5, 7]

So, in your case, you would do something like:

import sys

list_of_lists = []
for line in sys.stdin:
    new_list = [int(elem) for elem in line.split()]
    list_of_lists.append(new_list)

You will end up having a list of lists:

>>> list_of_lists
[[3], [2], [2, 2, 4, 5, 7]]

If you want to have those lists as variables, simply do:

list1 = list_of_lists[0]  # first list of this list of lists
list2 = list_of_lists[1]  # second list of this list of lists
list3 = list_of_lists[2]  # and so on ...
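To exercise the loop without a real stdin, the parsing can be factored into a function and fed an in-memory stream (the function name here is my own, not from the original):

```python
import io

def read_lists(stream):
    """Turn each line of whitespace-separated integers into a list of ints."""
    return [[int(tok) for tok in line.split()] for line in stream if line.strip()]

# Simulate the sample input instead of reading real sys.stdin.
fake_stdin = io.StringIO("3\n2\n2 2 4 5 7\n")
list_of_lists = read_lists(fake_stdin)
print(list_of_lists)  # -> [[3], [2], [2, 2, 4, 5, 7]]
```

In the real program you would pass sys.stdin to read_lists instead of the StringIO object.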
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19202893', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2833170/']}
|
jdg_376604
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19530816
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I thought I knew what I was doing with generics, but apparently not. ArraySetList<char> setA = new ArraySetList<char>(); When compiled, this gives: error: unexpected typeArraySetList<char> setA = new ArraySetList<char>(); ^required: referencefound: char As well as the same error for all subsequent char's. I'm wondering how to declare a new ArraySetList of characters. Here are all my files. http://pastebin.com/4h37Xvu4 // ArraySetList (extends ArrayUnsortedList)http://pastebin.com/FxmynzkC // Driverhttp://pastebin.com/CgVA0zjY // ArrayUnsortedList (implements ListInterface)http://pastebin.com/3iXrCsCc // ListInterface
Now provide the response and nothing else.
|
Java Generics work for objects and not for primitive data types. If you, however, need to store primitive data types, you will need to use their corresponding wrapper class objects. These classes just "wrap" around the primitive data type to give them an object appearance. For char , the corresponding wrapper class is Character and hence, you must write your line of code as so: ArraySetList<Character> setA = new ArraySetList<Character>(); Please read: http://docs.oracle.com/javase/tutorial/java/data/numberclasses.html When you add elements, however, you will add normal char . That is because Java will automatically convert it into Character for you and back to char automatically , if need be. This is called auto-boxing conversion. Autoboxing is the automatic conversion that the Java compiler makes between the primitive types and their corresponding object wrapper classes. For example, converting an int to an Integer, a double to a Double, and so on. If the conversion goes the other way, this is called unboxing. source: http://docs.oracle.com/javase/tutorial/java/data/autoboxing.html
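A minimal sketch of the boxing behaviour, using java.util.ArrayList as a stand-in for the question's ArraySetList:

```java
import java.util.ArrayList;
import java.util.List;

public class CharBoxing {
    public static void main(String[] args) {
        // The type argument must be the wrapper class Character, not char.
        List<Character> setA = new ArrayList<Character>();

        setA.add('a');             // char auto-boxes to Character
        setA.add('b');

        char first = setA.get(0);  // Character auto-unboxes back to char
        System.out.println(first);              // prints: a
        System.out.println(setA.contains('b')); // prints: true
    }
}
```

Nothing else in the calling code changes: only the type argument in the declaration needs the wrapper class.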
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19530816', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/314972/']}
|
jdg_376605
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21037242
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there any compiled language that has garbage collection built in? To my understanding right now, the purpose of an interpreter or JVM is to make binaries platform independent. Is it also because of the GC? Or is GC possible in compiled code?
Now provide the response and nothing else.
|
As you correctly point out, virtual machines are mostly used to abstract away machine-dependent properties of underlying platforms. Garbage collection is an orthogonal technology. Usually it is not mandatory for a language, but is considered a desired property of a run-time environment. There are indeed languages with primitives to allocate memory (e.g., new in Java and C# ) but without primitives to release it. They can be thought of as languages with built-in GC. One such programming language is Eiffel . Most Eiffel compilers generate C code for portability reasons. This C code is used to produce machine code by a standard C compiler. Eiffel implementations provide GC (and sometimes even accurate GC) for this compiled code, and there is no need for VM. In particular, VisualEiffel compiler generated native x86 machine code directly with full GC support.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21037242', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1525061/']}
|
jdg_376606
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17150085
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Currently I am switching my Java web application from JDBC to Hibernate. In my current JDBC implementation I load static data into static variables on application initialization, so I don't have to hit the database directly every time I need some static data. Now that I'm switching to Hibernate, I am thinking of getting rid of these static variables, since as far as I have researched, Hibernate keeps loaded data in a cache. I am fairly new to Hibernate, so I am not sure whether switching from my current method will give any performance improvements. I am going to research Hibernate's caching further and run some performance tests to see which method is better, but I would like some opinions on what others think regarding the performance of both these methods. Thanks.
Now provide the response and nothing else.
|
JDBC will always give better performance as compared to Hibernate for most database vendors. You can check the comparison made in the link below. He concludes that Hibernate is fast when querying tables with few rows; otherwise JDBC is way better: http://phpdao.com/hibernate_vs_jdbc/ Another good set of performance stats can be found in the Hibernate forum discussion at https://forum.hibernate.org/viewtopic.php?f=1&t=931708 It states the following order of performance hit taken due to use of Hibernate (please note that this can be improved by tuning Hibernate to one's needs): Objects: 8 - Hibernate: 10ms / Direct JDBC: 10ms = Ratio: 1.0 Objects: 16 - Hibernate: 10ms / Direct JDBC: 0ms = Ratio: Infinity Objects: 64 - Hibernate: 20ms / Direct JDBC: 10ms = Ratio: 2.0 Objects: 256 - Hibernate: 150ms / Direct JDBC: 30ms = Ratio: 5.0 Objects: 512 - Hibernate: 210ms / Direct JDBC: 40ms = Ratio: 5.25 Objects: 1024 - Hibernate: 410ms / Direct JDBC: 70ms = Ratio: 5.857143 Objects: 2048 - Hibernate: 681ms / Direct JDBC: 180ms = Ratio: 3.7833333 The choice of Hibernate over JDBC and SQL queries is not because of performance, but mainly because of object persistence and database independence, in terms of not writing database-specific queries. You can read the following PDF guide to get a better view: http://www.mindfiresolutions.com/mindfire/Java_Hibernate_JDBC.pdf
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17150085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1246834/']}
|
jdg_376607
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52269422
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My POJO is as follows:

class EventUser {
    private String id;
    private String userId;
    private String eventId;
}

I retrieve EventUser objects as follows: List<EventUser> eventUsers = eventUserRepository.findByUserId(userId); Say 'eventUsers' is as follows: [{"id":"id200","userId":"001","eventId":"1010"},{"id":"id101","userId":"001","eventId":"4212"},{"id":"id402","userId":"001","eventId":"1221"},{"id":"id301","userId":"001","eventId":"2423"},{"id":"id701","userId":"001","eventId":"5423"},{"id":"id601","userId":"001","eventId":"7423"}] Using streams, and without using any intermediate variable, how can I filter and collect the events at and after a given EventUser.id? ex: List<EventUser> filteredByOffSet = eventUsers.stream.SOMEFILTER_AND_COLLECT("id301"); the result should be: [{"id":"id301","userId":"001","eventId":"2423"},{"id":"id701","userId":"001","eventId":"5423"},{"id":"id601","userId":"001","eventId":"7423"}]
Now provide the response and nothing else.
|
In Java 8 you need a stateful filter: public static <T> Predicate<T> from(Predicate<T> test) { boolean[] found = { false }; // once found, always true return t -> found[0] || (found[0] = test.test(t));} NOTE: this only makes sense for single-threaded, sequential streams. List<EventUser> filteredByOffSet = eventUsers.stream() .filter(from(e -> "id301".equals(e.getId()))) .collect(Collectors.toList());
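A self-contained version of the idea, with a stripped-down stand-in for the EventUser class. On Java 9 and later, Stream.dropWhile(e -> !"id301".equals(e.getId())) achieves the same result without the mutable state:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class OffsetFilter {
    // Stripped-down stand-in for the question's EventUser POJO.
    static class EventUser {
        final String id;
        EventUser(String id) { this.id = id; }
        String getId() { return id; }
    }

    // Once the wrapped predicate matches, keep returning true forever.
    static <T> Predicate<T> from(Predicate<T> test) {
        boolean[] found = { false };
        return t -> found[0] || (found[0] = test.test(t));
    }

    public static void main(String[] args) {
        List<EventUser> eventUsers = Arrays.asList(
            new EventUser("id200"), new EventUser("id101"),
            new EventUser("id402"), new EventUser("id301"),
            new EventUser("id701"), new EventUser("id601"));

        List<EventUser> filtered = eventUsers.stream()
            .filter(from(e -> "id301".equals(e.getId())))
            .collect(Collectors.toList());

        filtered.forEach(e -> System.out.println(e.getId())); // id301, id701, id601
    }
}
```

Because the predicate carries state, the stream must stay sequential; calling .parallel() here would make the result non-deterministic.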
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/52269422', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/180904/']}
|
jdg_376608
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
261963
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Bug introduced in 12.3 and persists in 13.0 Is this a bug in AxisObject? Or have I misunderstood the usage? My understanding is that setting the option TickLabels -> {All, All} will give tick labels to both the major and minor ticks. However, currently it only gives labels to the major ticks. Graphics[ AxisObject[Line[{{0, 0}, {10, 10}}], {0, 10}, TickPositions -> {{0, 10, 2}, {1, 9, 2}}, TickLabels -> {All, All}]]
Now provide the response and nothing else.
|
It looks like this is a bug to me. Based on david's comment and Michael's answer, we see: TracePrint[ ToBoxes @ Graphics[ AxisObject[ Line[{{0, 0}, {10, 10}}], {0, 10}, TickPositions -> {{0, 10, 2}, {1, 9, 2}}, TickLabels -> {All, All} ] ], _System`Dump`formatTickLabelSet, TraceInternal -> True] System`Dump`formatTickLabelSet[All,StandardForm] System`Dump`formatTickLabelSet[All,StandardForm] GraphicsBox[AxisBox[LineBox[{{0, 0}, {10, 10}}], {0, 10},TickLabels -> {Automatic, Automatic},TickPositions -> {{0, 10, 2}, {1, 9, 2}}]] As Michael says, note that the TickLabels option has been changed from {All, All} to {Automatic, Automatic} . Now, the DownValues for System`Dump`formatTickLabelSet are: Block[{Internal`$ContextMarks = False}, Print @ Column @ DownValues @ System`Dump`formatTickLabelSet] HoldPattern[formatTickLabelSet[Automatic,fmt_]]:>AutomaticHoldPattern[formatTickLabelSet[None|False,fmt_]]:>NoneHoldPattern[formatTickLabelSet[All|True,fmt_]]:>AutomaticHoldPattern[formatTickLabelSet[ticklabels_List,fmt_]]:>(LabelToBoxes[#1,fmt,Automatic]&)/@ticklabels Notice that All gets sent to Automatic . This looks like a bug to me. One possibility is to do: System`Dump`formatTickLabelSet[All, _] := All Then: ToBoxes @ Graphics[ AxisObject[ Line[{{0, 0}, {10, 10}}], {0, 10}, TickPositions -> {{0, 10, 2}, {1, 9, 2}}, TickLabels -> {All, All} ]] GraphicsBox[AxisBox[LineBox[{{0, 0}, {10, 10}}], {0, 10}, TickLabels -> {All, All},TickPositions -> {{0, 10, 2}, {1, 9, 2}}]] And this renders correctly: RawBoxes @ %
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/261963', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/73323/']}
|
jdg_376609
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1038056
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the following XML code. <firstname> <default length="6">Örwin</default> <short>Örwin</short> <shorter>Örwin</shorter> <shortest>�.</shortest></firstname> Why does the content of the "shortest" node break? It should be a simple "Ö" instead of the tedious �. XML is UTF-8 encoded and the function which processes the output of that node also writes the content of "short" and "shorter". Where the "Ö" is clearly visible.
Now provide the response and nothing else.
|
My guess is that the XML isn't properly UTF-8 encoded. Please show the bytes within the <shortest> element in the raw file... I suspect you'll find they're not a validly encoded character. If you could show a short but complete program which generates this XML from valid input, that would be very helpful. (Preferably saying which platform it is, too :) EDIT: Something very odd is going on in this file. Here are the hex values for the "shorter" and "shortest" values: Shorter: C3 96 72 77 69 63 Shortest: EF BF BD 2E Now "C3 96" is the valid UTF-8 encoding for U+00D6 which is "Latin capital letter O with diaeresis" as you want. However, EF BF BD is the UTF-8 encoding for U+FFFD which is "replacement character" - definitely not what you want. (The 2E is just the ASCII dot.) So, this is actually valid UTF-8 - but it doesn't contain the characters you want. Again, you should examine what created the file...
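A quick way to verify this yourself (not part of the original fix) is to decode both byte sequences in Python and inspect the resulting code points:

```python
# Decode the raw bytes quoted above and inspect the first code point of each.
shorter = bytes([0xC3, 0x96, 0x72, 0x77, 0x69, 0x63])   # the "shorter" element
shortest = bytes([0xEF, 0xBF, 0xBD, 0x2E])              # the "shortest" element

s1 = shorter.decode("utf-8")
s2 = shortest.decode("utf-8")

# U+00D6 is the intended 'Ö'; U+FFFD is the Unicode replacement character.
print(hex(ord(s1[0])))  # 0xd6
print(hex(ord(s2[0])))  # 0xfffd
```

Both sequences decode without error, confirming the file is valid UTF-8 that simply contains the wrong character.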
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1038056', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63674/']}
|
jdg_376610
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27783783
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Created a userform Added a textBox and a comboBox Added a submit button When submit is clicked it adds the data to a spreadsheet From what I have been told and what I have read this is wrong ActiveCell.Value = TextBox3.Text ActiveCell.Offset(0, 1).Select ActiveCell.Value = ComboBox1.Text ActiveCell.Offset(1, -1).Select This works but I've been told I shouldn't use the .select keyword when possible . I've read that to make my code reusable I should create variables. How would a professional developer write this code, can it be written in less lines and how can I refer to the activecell offset without using select?
Now provide the response and nothing else.
|
I am assuming you want TextBox3 in column A and ComboBox1 in column B. If you want different columns just change the letter references. Sub OnClick() 'whatever your current sub is called. Dim LastRow As Long, ws As Worksheet Set ws = Sheets("Name of Sheet where data is going") LastRow = ws.Range("A" & Rows.Count).End(xlUp).Row + 1 'Finds the last blank row ws.Range("A" & LastRow).Value = TextBox3.Text 'Adds the TextBox3 into Col A & Last Blank Row ws.Range("B" & LastRow).Value = ComboBox1.Text 'Adds the ComboBox1 into Col B & Last Blank RowEnd Sub If you want a method using Offset() : Sub OnClickwithOffset() 'whatever your current sub is called. Dim LastRow As Long, ws As Worksheet Set ws = Sheets("Name of Sheet where data is going") LastRow = ws.Range("A" & Rows.Count).End(xlUp).Row + 1 'Finds the last blank row ws.Range("A" & LastRow).Value = TextBox3.Text 'Adds the TextBox3 into Col A & Last Blank Row ws.Range("A" & LastRow).Offset(0, 1).Value = ComboBox1.Text 'Adds the ComboBox1 into next cell to the right of TextBox3 data.End Sub
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27783783', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3246338/']}
|
jdg_376611
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2891225
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This problem has been asked here before. For example, this question Prove that the restriction $f|_K$ of $f$ to $K$ is globally Lipschitz where $K$ is a compact set only treated the continuity of $f$ on $K$ while this question $f \in C^1$ defined on a compact set $K$ is Lipschitz? already assumed $f$ to be $C^1$ function. Although, the latter was close to answering my question but used terms difficult for me to understand. So, in both cases, they have not answered my question. So, here it is If $f:O\subset \Bbb{R}^n\to\Bbb{R}^m$ is locally Lipschitz, then prove that for any compact set $K$ in $O,$ $f \mid_K$ is Lipschitz such that $\exists \;c\in [x,y]$ such that \begin{align}\Vert f(x)-f(y) \Vert\leq c\Vert x-y \Vert,\;\;\forall\;x,y\in K\end{align} My efforts Let $K$ be compact in $O$. Let $x,y\in K$, then by MVT, $\exists \;r\in [x,y]$ such that \begin{align}\Vert f(x)-f(y) \Vert\leq \sup\limits_{r\in [x,y]}\Vert f'(r) \Vert\Vert x-y \Vert\end{align}\begin{align}\qquad\qquad\qquad\leq \sup\limits_{r\in K}\Vert f'(r) \Vert\Vert x-y \Vert\end{align}Since $f$ is locally Lipschitz, then it is continuous and since $K$ is compact, then the maximum is reached. So, let \begin{align}c= \sup\limits_{r\in K}\Vert f'(r) \Vert\end{align}Thus, \begin{align}\Vert f(x)-f(y) \Vert\leq c\Vert x-y \Vert,\;\;\forall\;x,y\in K\end{align} Please, can anyone help me check if my proof is correct? If no, alternative proofs will be highly regarded! Thanks!
Now provide the response and nothing else.
|
While I am not a fan of proof by contradiction, it works efficiently here. Suppose $S(x,y)={\|f(x)-f(y)\| \over \|x-y\|}$ is unbounded for $x,y \in K, x \neq y$. Then we can find $x_k, y_k \in K$ such that $S(x_k,y_k) \to \infty$. Since $K$ is compact, we can assume that $x_k \to x, y_k \to y$. Since $f$ is bounded on $K$, we must have $x=y$ (otherwise $S(x_k,y_k)$ would remain bounded). By assumption, $f$ is locally Lipschitz around $x$, hence $S(x_k,y_k) \le L$ for some (finite) $L$ and all large $k$, which is a contradiction. Here is a constructive proof: Since $f$ is locally Lipschitz, for each $x$ there is some $r_x>0$ and $L_x$ such that $f$ is Lipschitz with rank $L_x$ on $B(x,r_x)$. Then the sets $B(x, {1 \over 2} r_x)$, $x \in O$ form an open cover of $K$, so a finite number of them cover $K$. For convenience, denote these by $B(x_k, {1 \over 2} r_k)$ (instead of $r_{x_k}$). Let $M= \sup_{x \in K} \|f(x)\|$, $r= {1 \over 2}\min r_k$, $L_0 = {2M \over r}$ and $L= \max (L_0, \max_k L_k)$. Then $L$ is a Lipschitz constant for $f$ on $K$. To see this, pick $x,y \in K$. If $\|x-y\| \ge r$ then we see that ${ \|f(x)-f(y) \| \over \|x - y \|} \le {2M \over r} = L_0 \le L$. If $\|x-y\| < r$, then for some $x_k$ we have $x \in B(x_k, {1 \over 2} r_k)$. Then $y \in B(x_k, r_k)$ and so $\|f(x)-f(y) \| \le L_k \|x - y \| \le L \|x - y \|$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2891225', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/544685/']}
|
jdg_376612
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21885683
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm currently attempting to add some video clip settings to a NSMutableDictionary, including two CMTime objects. I am trying to store the video stream in use (indicated with an integer), the clip duration (CMTime) and the clip start time (CMTime), which are defined elsewhere in the code. I'm probably being daft but I can't figure out how to add the CMTimes to the dictionary, I get a "Sending 'CMTime' to parameter of incompatible type 'id'" error. I tried both setObject and setValue with no success and can't find an answer anywhere. NSMutableDictionary *clipDetails = [NSMutableDictionary dictionary];[clipDetails setObject:[NSNumber numberWithInteger:currentStream] forKey:@"stream"];[clipDetails setObject:startTime forKey:@"clipStart"];[clipDetails setObject:duration forKey:@"duration"];
Now provide the response and nothing else.
|
Since CMTime is a struct, you need to wrap it in an Objective C type, usually with NSValue : CMTime startTime = (...);NSValue *startValue = [NSValue valueWithBytes:&startTime objCType:@encode(CMTime)];[clipDetails setObject:startValue forKey:@"startTime"]; You can get it out again like so: CMTime startTime;NSValue *startValue = [clipDetails objectForKey:@"startTime"];[startValue getValue:&startTime]; Sidenote, it's much easier to use the new dictionary syntax: clipDetails[@"startTime"] = ...; NSValue *value = clipDetails[@"startTime"]; Those steps will work for any struct; as it turns out, the AVFoundation framework provides convenience methods for CMTime structs: clipDetails[@"startTime"] = [NSValue valueWithCMTime:startTime]; CMTime startTime = [clipDetails[@"startTime"] CMTimeValue];
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21885683', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3328848/']}
|
jdg_376613
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
402742
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Consider the split monic $f=\prod_{i=1}^n(x-x_i)\in \mathbb Z[x_1 ,\dots ,x_n,x]$ . Its discriminant is usually defined as $$(-1)^{n(n-1)/2}\prod_{i=1}^nf^\prime(x_i)=\prod_{1\leq i<j\leq n}(x_i-x_j)^2.$$ What is the reason for taking this definition as opposed to $\prod_{i=1}^nf^\prime(x_i)$ ? The product of the derivatives at the roots "feels" to me more canonical than the product on the RHS.
Now provide the response and nothing else.
|
The reason is that the formula on the right side should be considered more fundamental, not the formula on the left, when seeking a symmetric expression in the roots. Don't use a product of anything "at" the roots, but a symmetric expression in the roots that vanishes if any pair of roots are equal. That explains the factors $(x_i-x_j)^2$ . Do you consider the simplest polynomial with a double root at 0 to be $x^2$ or $-x^2$ ? The product on the left (without the sign) is also interesting and has a name: it is called the resultant of $f(x)$ and $f'(x)$ .
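The sign relation between the two formulas can be checked numerically for a small split monic. A quick sketch (the roots are chosen arbitrarily for illustration):

```python
from itertools import combinations

def disc_from_roots(roots):
    """The symmetric definition: product over i<j of (x_i - x_j)^2."""
    p = 1
    for a, b in combinations(roots, 2):
        p *= (a - b) ** 2
    return p

def prod_fprime(roots):
    """Product over i of f'(x_i); for monic split f,
    f'(x_i) = product over j != i of (x_i - x_j)."""
    p = 1
    for i, xi in enumerate(roots):
        for j, xj in enumerate(roots):
            if i != j:
                p *= xi - xj
    return p

roots = [1, 2, 4]
n = len(roots)
sign = (-1) ** (n * (n - 1) // 2)
print(disc_from_roots(roots))       # 36
print(sign * prod_fprime(roots))    # 36 -- equal only after the sign correction
```

Here $\prod_i f'(x_i) = -36$, so without the $(-1)^{n(n-1)/2}$ factor the "discriminant" of a polynomial with distinct real roots could come out negative.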
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/402742', 'https://mathoverflow.net', 'https://mathoverflow.net/users/69037/']}
|
jdg_376614
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
408278
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have 2 Ubuntu 12.04 servers which are load balanced. If I go to anyone on them from the shell and type: wget stackoverflow.com The page is fetched into index.html . However, assuming the site hosted on those servers is called mysite.com , calling wget mysite.com I get: Resolving mysite.com (mysite.com)... 50.XXX.YY.ZZZConnecting to mysite.com (mysite.com)|50.XXX.YY.ZZZ|:80... failed: Connection refused. where 50.XXX.YY.ZZZ is the public IP of mysite.com . Any ideas what is wrong on those servers?
Now provide the response and nothing else.
|
The basic issue is this: The server has a private inside IP address. (I'll call it 192.168.0.2 for simplicity.) It opens a connection from its private inside IP address to its public address. (From 192.168.0.2 to 50.XXX.YY.ZZZ) This goes to the router, following the server's default route. (Since the machine doesn't know the public address is associated with itself.) The router applies the port forward for the public IP address and forwards the request to the machine. The request still has a source of 192.168.0.2 but now it has a destination of 192.168.0.2. The machine receives a connection from 192.168.0.2 to 192.168.0.2, accepts the connection, and sends itself a response. (Since it knows 192.168.0.2 is local.) The machine is baffled to receive a response from 192.168.0.2 since it was expecting one from 50.XXX.YY.ZZZ, and the connection attempt fails. For hairpin NAT to work, the router has to not only forward the request to the correct inside machine rewriting the destination but it also has to rewrite the source to make sure the reply packets go back through the router so they can be NATted too. Many routers can't do this, and many that can require specific configuration.
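On a Linux router, that source rewrite is typically a second NAT rule next to the usual port forward. A rough iptables sketch, for illustration only (the public IP placeholder, the 192.168.0.x addresses, and port 80 are all assumptions matching the example above; real router configs vary):

```shell
# Usual port forward: traffic to the public IP on port 80 -> inside server.
iptables -t nat -A PREROUTING -d 50.XXX.YY.ZZZ -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.0.2

# Hairpin fix: when a LAN host reaches the server via the public IP,
# also rewrite the source, so replies return through the router
# instead of going directly back to the client.
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.2 -p tcp --dport 80 \
    -j MASQUERADE
```

With only the first rule, the server sees the client's real LAN source address and replies directly, producing exactly the failure described above.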
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/408278', 'https://serverfault.com', 'https://serverfault.com/users/16033/']}
|
jdg_376615
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
49609719
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to deploy my app on Github to Heroku but getting error: ERROR in ./src/Index.tsx Module not found: Error: Can't resolve './ConfigureStore' in '/app/src' @ ./src/Index.tsx 9:23-50 Seems like Typescript issue when I deploy on Heroku. Although, works perfect in my local and webpack generates bundles and app runs fine.Below is my webpack.config: const path = require('path');const webpack = require('webpack');const merge = require('webpack-merge');const CheckerPlugin = require('awesome-typescript-loader').CheckerPlugin;const rootModulePath = "./src/";module.exports = { stats: { modules: false }, resolve: { extensions: ['.js', '.jsx', '.ts', '.tsx', '.css'] }, entry: { 'DIG': rootModulePath + "Index.tsx" }, externals: { jQuery: 'jQuery' }, node: { fs: 'empty' }, plugins: [ new webpack.ProvidePlugin({ '$': 'jquery', 'jQuery': 'jquery' }), new webpack.IgnorePlugin(/\.\/locale$/), new CheckerPlugin() ], devtool: 'source-map', output: { path: __dirname, filename: './public/dig.js' }, module: { rules: [ { test: /\.tsx?$/, include: /src/, use: ['awesome-typescript-loader?silent=true'] }, //{ test: /\.ts$/, include: /src/, use: 'awesome-typescript-loader?silent=true' }, { test: /\.html$/, use: 'html-loader?minimize=false' }, { test: /\.less$/, use: [{ loader: "style-loader" // creates style nodes from JS strings }, { loader: "css-loader" // translates CSS into CommonJS }, { loader: "less-loader" // compiles Less to CSS }] }, { test: /\.css$/, use: ['style-loader','css-loader'] }, { test: /\.(png|jpg|jpeg|gif|svg)$/, use: 'url-loader?limit=25000' } // { test: /\.js/, loader: 'imports-loader?define=>false'} ] } }; package.json { "name": "digeratia", "version": "1.0.0", "description": "", "main": "webpack.config.js", "scripts": { "build": "webpack --mode production", "start": "npm run build && node server.js" }, "author": "Vivek Singh", "dependencies": { "@types/history": "4.6.2", "@types/numeral": "0.0.22", "@types/react": "16.1.0", "@types/react-bootstrap": 
"^0.32.6", "@types/react-dom": "16.0.4", "@types/react-hot-loader": "3.0.6", "@types/react-redux": "5.0.15", "@types/react-router": "4.0.23", "@types/react-router-dom": "4.2.6", "@types/react-router-redux": "5.0.13", "@types/webpack": "4.1.3", "@types/webpack-env": "1.13.5", "awesome-typescript-loader": "4.0.1", "axios": "^0.18.0", "bootstrap": "^4.0.0-alpha.6", "chai": "^4.1.2", "create-react-class": "^15.6.2", "datatables.net": "^1.10.16", "datatables.net-dt": "^1.10.16", "domain-task": "^3.0.3", "event-source-polyfill": "0.0.12", "express": "^4.16.2", "extract-text-webpack-plugin": "3.0.2", "file-loader": "1.1.11", "history": "4.7.2", "html-to-react": "1.3.3", "it": "^1.1.1", "json-loader": "0.5.7", "less": "^3.0.1", "less-loader": "^4.0.5", "lodash": "^4.17.4", "node-noop": "1.0.0", "numeral": "^2.0.6", "react": "^16.2.0", "react-bootstrap": "^0.32.1", "react-dom": "^16.2.0", "react-element-to-jsx-string": "^13.0.0", "react-hot-loader": "4.0.1", "react-image": "^1.0.1", "react-intl": "^2.4.0", "react-moment": "^0.7.0", "react-popper": "^0.9.2", "react-redux": "5.0.7", "react-router-dom": "4.2.2", "react-router-redux": "5.0.0-alpha.6", "redux": "3.7.2", "redux-thunk": "2.2.0", "shelljs": "^0.8.1", "style-loader": "^0.20.3", "to-string-loader": "^1.1.5", "typescript": "2.8.1", "url-loader": "1.0.1", "webpack": "4.4.1", "webpack-hot-middleware": "2.21.2", "webpack-merge": "4.1.2", "xit": "^0.2.0", "abort-controller": "^1.0.0", "babel-plugin-syntax-optional-chaining": "^7.0.0-alpha.13", "babel-runtime": "^6.26.0", "core-js": "^2.5.1", "css-loader": "^0.28.4", "css-to-string-loader": "^0.1.3", "react-render-html": "^0.6.0", "toastr": "^2.1.2", "webpack-cli": "^2.0.12" }, "devDependencies": { "abort-controller": "^1.0.0", "babel-plugin-syntax-optional-chaining": "^7.0.0-alpha.13", "babel-runtime": "^6.26.0", "core-js": "^2.5.1", "css-loader": "^0.28.4", "css-to-string-loader": "^0.1.3", "react-render-html": "^0.6.0", "toastr": "^2.1.2", "webpack-cli": "^2.0.12" }} 
The GitHub code URL is here. Please help, as I have been stuck on this issue for a long time.
Now provide the response and nothing else.
|
So the issue is not what you posted it is. The issue got fixed because you assume Index.ts or Index.tsx is equivalent to index.ts or index.tsx . If you look at the below thread webpack: fine on MacOS, loader errors on linux It explains that building on Mac you wont face the issue. And I assume the same case with Windows as well. But with Linux you will face the issue. That is what is happening here. If you ssh to the heroku box $ heroku ps:execEstablishing credentials... doneConnecting to web.1 on ⬢ sleepy-sea-65699...~ $ webpack -p --verboseHash: 8c20236f8268ce043077Version: webpack 3.10.0Time: 24904ms Asset Size Chunks Chunk Names ./docs/dig.js 608 kB 0 [emitted] [big] DIG./docs/dig.js.map 4.25 MB 0 [emitted] DIGEntrypoint DIG [big] = ./docs/dig.js ./docs/dig.js.mapchunk {0} ./docs/dig.js, ./docs/dig.js.map (DIG) 1.38 MB [entry] [rendered]......ERROR in ./src/configureStore.tsModule not found: Error: Can't resolve './Stores' in '/app/src'resolve './Stores' in '/app/src' using description file: /app/package.json (relative path: ./src) Field 'browser' doesn't contain a valid alias configuration after using description file: /app/package.json (relative path: ./src) using description file: /app/package.json (relative path: ./src/Stores) no extension Field 'browser' doesn't contain a valid alias configuration /app/src/Stores is not a file .js Field 'browser' doesn't contain a valid alias configuration /app/src/Stores.js doesn't exist .jsx Field 'browser' doesn't contain a valid alias configuration /app/src/Stores.jsx doesn't exist .ts Field 'browser' doesn't contain a valid alias configuration /app/src/Stores.ts doesn't exist .tsx Field 'browser' doesn't contain a valid alias configuration /app/src/Stores.tsx doesn't exist .css Field 'browser' doesn't contain a valid alias configuration /app/src/Stores.css doesn't exist as directory existing directory using path: /app/src/Stores/index using description file: /app/package.json (relative path: ./src/Stores/index) no 
extension Field 'browser' doesn't contain a valid alias configuration /app/src/Stores/index doesn't exist .js Field 'browser' doesn't contain a valid alias configuration /app/src/Stores/index.js doesn't exist .jsx Field 'browser' doesn't contain a valid alias configuration /app/src/Stores/index.jsx doesn't exist .ts Field 'browser' doesn't contain a valid alias configuration /app/src/Stores/index.ts doesn't exist .tsx Field 'browser' doesn't contain a valid alias configuration /app/src/Stores/index.tsx doesn't exist .css Field 'browser' doesn't contain a valid alias configuration /app/src/Stores/index.css doesn't exist As you can see the case of the file in error is index and not Index Also if you check you repo config $ cat .git/config[core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true ignorecase = true precomposeunicode = true You can see the ignorecase is set to true , which is not good, because then git won't look at the file renames and cases changes and push renamed files. 
So essentially the error is that your Index.ts should be index.ts to make your build Cross-OS compatible Case sensitive plugin You should enable the case sensitive plugin for your development to avoid such issues $ npm install --save-dev case-sensitive-paths-webpack-plugin> fsevents@1.1.3 install /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/DigeratiGlobalReact/node_modules/fsevents> node install[fsevents] Success: + case-sensitive-paths-webpack-plugin@2.1.2added 117 packages from 85 contributors and removed 3 packages in 9.661s Update your webpack.config.js like below const path = require('path');const webpack = require('webpack');const merge = require('webpack-merge');const CheckerPlugin = require('awesome-typescript-loader').CheckerPlugin;const rootModulePath = "./src/";const rootBundlePath = "./src/bundle/";const isDevBuild = true;var CaseSensitivePathsPlugin = require('case-sensitive-paths-webpack-plugin');module.exports = { stats: { modules: false }, resolve: { extensions: ['.js', '.jsx', '.ts', '.tsx', '.css'] }, entry: { 'DIG': rootModulePath + "Index.tsx" }, externals: { jQuery: 'jQuery' }, node: { fs: 'empty' }, plugins: [ new CaseSensitivePathsPlugin(), new webpack.ProvidePlugin({ '$': 'jquery', 'jQuery': 'jquery' }), .... Now when you build you will see these issues directly in Windows/Mac as well sh-3.2$ npm run build> digeratiaglobal@1.0.0 build /Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/DigeratiGlobalReact> webpack -pHash: 3762a6deb22d9fabd37bVersion: webpack 3.10.0Time: 12279ms Asset Size Chunks Chunk Names ./docs/dig.js 511 kB 0 [emitted] [big] DIG./docs/dig.js.map 3.56 MB 0 [emitted] DIGERROR in ./src/configureStore.tsModule not found: Error: [CaseSensitivePathsPlugin] `/Users/tarun.lalwani/Desktop/tarunlalwani.com/tarunlalwani/workshop/ub16/so/DigeratiGlobalReact/src/Stores/index.ts` does not match the corresponding path on disk `Index.ts`.
@ ./src/configureStore.ts 6:15-34 @ ./src/Index.tsxnpm ERR! code ELIFECYCLEnpm ERR! errno 2
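If you then want to fix the filename case in the repository itself, note that on a case-insensitive filesystem git may not see a direct case-only rename, so the traditional approach is a two-step `git mv`. A sketch in a throwaway repo (filenames and commit messages are illustrative):

```shell
# Demonstrate the two-step case-only rename in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
printf 'export {};\n' > Index.tsx
git add Index.tsx
git -c user.email=a@b.c -c user.name=demo commit -qm 'add Index.tsx'

# A direct 'git mv Index.tsx index.tsx' can misbehave on case-insensitive
# filesystems, so go through a temporary name.
git mv Index.tsx __tmp__
git mv __tmp__ index.tsx
git -c user.email=a@b.c -c user.name=demo commit -qm 'rename to index.tsx'
git ls-files
```

After the second commit, `git ls-files` reports only `index.tsx`, so the correct case is pushed to Linux build machines like Heroku's.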
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49609719', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4868839/']}
|
jdg_376616
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29244351
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When we use the sort file command,the file shows its contents in a sorted way. What if I don't want to get any output on stdout, but in the input file instead?
Now provide the response and nothing else.
|
You can use the -o , --output=FILE option of sort to indicate the same input and output file: sort -o file file Without repeating the filename (with bash brace expansion ) sort -o file{,} ⚠️ Important note: a common mistake is to try to redirect the output to the same input file(e.g. sort file > file ). This does not work as the shell is making the redirections (not the sort(1) program) and the input file (as being the output also) will be erased just before giving the sort(1) program the opportunity of reading it.
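A quick demonstration of the in-place behavior (the temporary file name is arbitrary):

```shell
# Create a small unsorted file, sort it in place with -o, and show the result.
tmp=$(mktemp)
printf 'banana\napple\ncherry\n' > "$tmp"

sort -o "$tmp" "$tmp"   # safe: sort(1) itself writes the output file

cat "$tmp"              # apple, banana, cherry
```

By contrast, `sort "$tmp" > "$tmp"` would leave the file empty, because the shell truncates it before sort ever reads it.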
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/29244351', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4705190/']}
|
jdg_376617
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12424263
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If I create a modal window: Ext.define('myWindow', { extend: 'Ext.Container', alias: 'widget.myWindow', floating: true, modal: true, listeners: 'onMaskClick???': { close the window } .....} How do I know when a user has clicked on the mask outside the window? In Sench Touch, there is a config hideOnMaskTap that lets me specify. What is the event/config for extJS?
Now provide the response and nothing else.
|
Tramway's case (sort of) works on modal or non modal windows. But not in case child components like the boundlist of a combobox float outside the windows boundaries. However if you use modal windows anyway you can listen for a click event on the mask like this. Ext.define('myWindow', { extend: 'Ext.window.Window', alias: 'widget.myWindow', floating: true, modal: true, initComponent: function () { var me = this; me.callParent(arguments); me.mon(Ext.getBody(), 'click', function(el, e){ me.close(me.closeAction); }, me, { delegate: '.x-mask' }); }});
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12424263', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1671322/']}
|
jdg_376618
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14090008
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm sending out messages and I need to keep track of each message being sent out, when, to whom, etc. So I have the recipients in an extra table model besides the message, and I need to populate the recipients every time I create a new message. The Recipient of a message will be populated from a third model, which contains all the current e-mail addresses that I want to send out to. So my question is how I would go about this in the most efficient way? I know I can do something similar to: m = Message.objects.create(*args)for email in ModelWithEmails.active.values_list('email', flat=True): Recipient.objects.create(message=m, email=email) But that will still involve getting all the e-mail addresses out of the database and I would like to keep it all inside of the database if possible, as there are several thousand addresses that will be fetched every time.
Now provide the response and nothing else.
|
You can't do INSERT .. SELECT with django ORM, but you can do a bulk insert (since django 1.4): m = Message.objects.create(*args)recipients = []for email in ModelWithEmails.active.values_list('email', flat=True): recipients.append(Recipient(message=m, email=email))Recipient.objects.bulk_create(recipients) Or a tiny bit more efficient: m = Message.objects.create(*args)emails = ModelWithEmails.active.values_list('email', flat=True)Recipient.objects.bulk_create([Recipient(message=m, email=email) for email in emails]) For INSERT .. SELECT you'll have to fall back to raw SQL.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14090008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68035/']}
|
jdg_376619
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35509336
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does AWS allow using CloudFront for websites, e.g. caching web pages? The website should be accessible within a corporate VPN only. Is it a good idea to cache web pages on CloudFront when the application is restricted to one network?
Now provide the response and nothing else.
|
As @daxlerod points out, it is possible to use the relatively new Web Application Firewall service with CloudFront, to restrict access to the content, for example, by IP address ranges. And, of course, there's no requirement that the web site actually be hosted inside AWS in order to use CloudFront in front of it. However, "will it work?" and "are all the implications of the required configuration acceptable from a security perspective?" are two different questions. In order to use CloudFront on a site, the origin server (the web server where CloudFront fetches content that isn't in the cache at the edge node where the content is being requested) has to be accessible from the Internet, in order for CloudFront to connect to it, which means your private site has to be exposed, at some level, to the Internet. The CloudFront IP address ranges are public information, so you could partially secure access to the origin server with the origin server's firewall, but this only prevents access from anywhere other than through CloudFront -- and that isn't enough, because if I knew the name of your "secured" server, I could create my own CloudFront distribution and access it through CloudFront, since the IP addresses would be in the same range. The mechanism CloudFront provides for ensuring that requests came from and through an authorized CloudFront distribution is custom origin headers, which allows CloudFront to inject an unknown custom header and secret value into each request it sends to your origin server, to allow your server to authenticate the fact that the request not only came from CloudFront, but from your specific CloudFront distribution. Your origin server would reject requests not accompanied by this header, without explanation, of course. See http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/forward-custom-headers.html#forward-custom-headers-restrict-access . 
And, of course, you need https between the browser and CloudFront and https between CloudFront and the origin server. It is possible to configure CloudFront to use (or require) https on the front side or the back side separately, so you will want to ensure it's configured appropriately for both, if the security considerations addressed above make it a viable solution for your needs. For information that is not highly sensitive, this seems like a sensible approach if caching or other features of CloudFront would be beneficial to your site.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35509336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5436333/']}
|
jdg_376620
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6777629
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have 3 branches. master [ Live Server] \ stage [ Stage Server Where we test changes; merge commits ] \ Dev [ Local Machine ] I would like to downstream the changes through these branches. Each of these branches is set to track the next. Normally, to downstream the changes I do this: git checkout stage && git merge master Then I checkout dev and do the same: git checkout dev && git merge stage Then push them all: git push origin --all Is there a way to downstream those changes without checking out each branch? I may be using the wrong terminology; I'm not totally sure if I'm using upstream/downstream terminology correctly.
Now provide the response and nothing else.
|
You can indeed "merge" a branch B into branch A without having to check out branch A, but only if it's a fast-forward merge . You can use a refspec with fetch to do the "merge". If merging branch B into branch A using git merge would result in a fast-forward merge, then you can do the following without having to checkout A: git fetch <remote> B:A The Documentation The above matches the refspec format git fetch <remote> <source>:<destination> From the documentation for git fetch (emphasis mine): The remote ref that matches <src> is fetched, and if <dst> is not empty string, the local ref that matches it is fast-forwarded using <src> . See Also Git checkout and merge without touching working tree Merge, update, and pull Git branches without using checkouts Merging without changing the working directory Merging Branches Without Checkout
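A small self-contained sketch of this fast-forward behavior, using branch names A and B as above in a scratch repository (commit messages and identities are placeholders):

```shell
# Build a scratch repo where B is one commit ahead of A, then fast-forward
# A from B without ever checking A out.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m 'base'
git branch A                 # A points at the base commit
git checkout -q -b B
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m 'extra'

git fetch -q . B:A           # fast-forwards local A to B; worktree untouched

git rev-parse A B            # both now print the same commit hash
```

Note the `.` remote: fetching from the repository itself is what lets the refspec update one local branch from another. If A had diverged from B, the fetch would refuse the non-fast-forward update rather than merge.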
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6777629', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/171217/']}
|
jdg_376621
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27019752
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In C#, we have Object Initializers, like so: Person obj = new Person { FirstName = "Craig", LastName = "Playstead", }; Does Swift have this? As an example, I have this code: var config = IndicatesConfig()config.name = NSLocalizedString(Localizable.FOLDERS, comment: "").uppercaseStringconfig.style = .DetailHeaderreturn config But would like to do something along the lines of: var config = IndicatesConfig() { name = NSLocalizedString(Localizable.FOLDERS, comment: "").uppercaseString style = .DetailHeader} Thank you! edit: I'm not referring to the explicit definition of class initialisers. Please bear in mind the syntax shown in the C# example.
Now provide the response and nothing else.
|
Not as such. If you create a custom struct , Swift will, under certain conditions, create a default memberwise initializer that is close to what you're looking for. But otherwise, I don't think there's even a way to implement such a feature, since Swift lacks anything like a with keyword that would get you into the new instance's scope. Update: this is as close as I can get, by defining a custom operator:

infix operator <| { }

func <|<T>(decl: @autoclosure () -> T, f: T -> ()) -> T {
    let obj = decl()
    f(obj)
    return obj
}

let label = UILabel() <| {
    $0.frame = CGRect(x: 10, y: 10, width: 300, height: 25)
    $0.text = "Hello"
    $0.enabled = false
}

println(label)
// <UILabel: 0x7fb46240b210; frame = (10 10; 300 25); text = 'Hello'; userInteractionEnabled = NO; layer = <_UILabelLayer: 0x7fb46240c2b0>>
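For readers outside Swift, the same configure-and-return idea can be sketched in Python (the `configured` helper and `Config` class here are hypothetical illustrations, not part of any library):

```python
def configured(obj, **attrs):
    # set each attribute on obj, then hand the object back,
    # mimicking the <| operator in the Swift answer above
    for name, value in attrs.items():
        setattr(obj, name, value)
    return obj

class Config:
    def __init__(self):
        self.name = ""
        self.style = None

config = configured(Config(), name="FOLDERS", style="DetailHeader")
print(config.name, config.style)  # FOLDERS DetailHeader
```

The helper keeps the call-site close to a C#-style object initializer without needing language support.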
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27019752', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1544047/']}
|
jdg_376622
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1804396
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $D$ be the open triangle with the vertices $(0,0), (3,0), (0,2)$. For $f(x,y)=e^{ \frac{x}{2x+3y}}$ show that $f$ is integrable on $D$ and prove that $\iint_Df(x,y)dxdy=6\sqrt{e}-6$. I was able to prove that $f$ is integrable on $D$, since $f$ is continuous everywhere but $(0,0)$ and around $(0,0)$, we have that $\frac{x}{2x+3y}<\frac{x}{2x}=\frac{1}{2}$, and therefore $f$ is bounded with a finite number of points where it's not continuous, and therefore, is integrable. I also represented $D$ as $x\in (0,3)$ , $ y\in (0,-\frac{3x}{2}+3)$ since $y=-\frac{3x}{2}+2 $ is the hypotenuse of $D$. Once I got to calculating the integral itself, I tried multiple changes of variables, such as polar, $u=x, v=\frac{1}{2x+3y}$, $u=x, v=2x+3y$, $u=\frac{1}{2x+3y}, v=-\frac{3x}{2}+3$, and none of these gave an integral that could be calculated using analytical tools only (no numerical tools; I also checked this with Mathematica and all of these integrals require numerical tools to calculate). What change of variables can be used here? Thanks!
Now provide the response and nothing else.
|
I think you had the upper limit for $y$ the wrong way around, unless you switched the vertices on the axes. If the vertices are on $(3,0)$ and $(0,2)$, the equation of the line joining them is $2x+3y=6$ so $y$ runs from $0$ to $-\tfrac{2}{3}x+2$. If you let $u=x$ and $v=2x+3y$, then $u$ keeps the limits of $x$ and $v$ will go from $2u$ to $6$. The inverse relations are $x=u$ and $y=\tfrac{v}{3}-\tfrac{2u}{3}$, so the Jacobian is:$$\begin{vmatrix}1 & 0 \\-\tfrac{2}{3} & \tfrac{1}{3}\end{vmatrix} = \frac{1}{3}$$The integral becomes:$$\int_0^3 \int_0^{-\tfrac{2}{3}x+2} e^{\frac{x}{2x+3y}}\,\mbox{d}y \,\mbox{d}x = \frac{1}{3} \int_0^3 \int_{2u}^{6} e^{\frac{u}{v}}\,\mbox{d}v \,\mbox{d}u = (*)$$Now $e^{\frac{u}{v}}$ doesn't have an elementary anti-derivative w.r.t. $v$, but you can change the order of integration. In the $uv$-plane, with $u$ from $0$ to $3$ and $v$ from $2u$ to $6$, the region is the triangle with vertices $(0,0)$, $(0,6)$ and $(3,6)$. Letting $v$ run fixed from $0$ to $6$ then gives limits for $u$ running from $0$ to $\tfrac{v}{2}$. The integral becomes easy to compute:$$\begin{array}{rcl}\displaystyle (*) = \frac{1}{3} \int_0^6 \int_{0}^{\tfrac{v}{2}} e^{\frac{u}{v}}\,\mbox{d}u \,\mbox{d}v& = & \displaystyle \frac{1}{3} \int_0^6 \left[ ve^{\frac{u}{v}} \right]_{u=0}^{u=\tfrac{v}{2}} \,\mbox{d}v \\[8pt]& = & \displaystyle \frac{1}{3} \int_0^6 \left( \sqrt{e}-1 \right)v \,\mbox{d}v \\[8pt]& = & \displaystyle \frac{1}{3} \left( \sqrt{e}-1 \right) \left[ \frac{v^2}{2} \right]_{v=0}^{v=6} \\[8pt]& = & \displaystyle 6\left( \sqrt{e}-1 \right) \end{array}$$
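The closed-form answer can be sanity-checked numerically with a crude midpoint Riemann sum over the triangle (a Python sketch; the grid sizes are arbitrary):

```python
import math

def f(x, y):
    return math.exp(x / (2 * x + 3 * y))

def triangle_integral(nx=400, ny=400):
    # midpoint rule: for each x-strip, integrate y over [0, 2 - (2/3)x]
    total = 0.0
    hx = 3.0 / nx
    for i in range(nx):
        x = (i + 0.5) * hx
        ymax = 2.0 - (2.0 / 3.0) * x
        hy = ymax / ny
        total += hx * hy * sum(f(x, (j + 0.5) * hy) for j in range(ny))
    return total

exact = 6 * (math.sqrt(math.e) - 1)
print(triangle_integral(), exact)  # both close to 3.89
```

The numerical value agrees with $6(\sqrt{e}-1)$ to well within the discretization error.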
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1804396', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/74815/']}
|
jdg_376623
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2203392
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $x^2+x+1 = 0$ then find the value of $x^{1999}+x^{2000}$. I first tried finding the solution of the given equation and then substituting it in the expression whose value we have to find but I wasn't able to simplify it. In a different approach I moved the terms around a bit and arrived at $x^3 = 1$.But wouldn't that mean that $x = 1$ (which is clearly not possible since it wouldn't satisfy the given equation)? Any help would be appreciated.
Now provide the response and nothing else.
|
Hint: As you have correctly observed, we can deduce that $x^3 = 1$. Now, note that$$x^{1999} + x^{2000} = (x^{3})^{666}(x + x^2)$$
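The hint can be verified numerically with a primitive cube root of unity, which is a root of $x^2+x+1=0$ (Python sketch):

```python
import cmath

# x = e^{2*pi*i/3} satisfies x^2 + x + 1 = 0 and x^3 = 1
x = cmath.exp(2j * cmath.pi / 3)
print(abs(x**2 + x + 1))   # close to 0
print(x**1999 + x**2000)   # close to -1, since (x^3)^666 * (x + x^2) = x + x^2 = -1
```

This matches the algebra: from $x^2+x+1=0$ we get $x+x^2=-1$, so the whole expression equals $-1$.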
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2203392', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/247359/']}
|
jdg_376624
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5104765
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been doing some work with Hibernate 3.5 and Spring 3 recently, I'm fairly new with Hibernate and thought the HibernateDaoSupport class in Spring made it nice and easy to use Hibernate with my domain classes. However, while searching for an unrelated question I saw someone mention that the HibernateDaoSupport is not the best way to use Spring and Hibernate. Can anyone shed any light on: Why is it not recommended? What is the best (or at least the accepted) way to integrate Hibernate and Spring?
Now provide the response and nothing else.
|
Using HibernateDaoSupport / HibernateTemplate is not recommended since it unnecessarily ties your code to Spring classes. Using these classes was inevitable with older versions of Hibernate in order to integrate support of Spring-managed transactions. However, since Hibernate 3.0.1 you don't need it any more - you can write code against the plain Hibernate API while using Spring-managed transactions. All you need is to configure Spring transaction support, inject SessionFactory and call getCurrentSession() on it when you need to work with the session. Another benefit of HibernateTemplate is exception translation. Without HibernateTemplate the same functionality can be achieved by using the @Repository annotation, as shown in Gareth Davis's answer. See also: 13.3.2 Implementing DAOs based on plain Hibernate 3 API
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/5104765', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/563718/']}
|
jdg_376625
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
581153
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen a number of questions like this asking whether certain parameters can be treated as a hyperparameter. Why can't we just treat everything as a hyperparameter? I understand that this is an extreme stance, but the way I see it, hyperparameters (along with the parameters) ultimately determines the set of all possible models that we can fit the data against. For example, if we set a low learning rate, the set of models it can explore can be a superset of that when we set a high learning rate.
Now provide the response and nothing else.
|
Treating "everything" as a hyperparameter leads to an infinite regress of priors In principle, you can take any constant in a distribution that has an allowable range, and you can then treat it like a conditioning random variable. Consequently, in principle you can always have more hyperparameters in your analysis if you want to. But you have to stop somewhere. Treating a formerly fixed quantity in a prior distribution as a hyperparameter means that you are changing your prior distribution. To see this, suppose you have a prior for $\theta$ using some constant $\phi$ . If you treat $\phi$ as a hyperparameter with density $f$ then you get the following change in your (marginal) prior for your parameter: $$\begin{matrix}& & & \text{Prior} \\[6pt]\text{Known constant } \phi & & & \pi(\theta|\phi) \\[6pt]\text{Hyperparameter } \phi & & & \pi(\theta) = \int \pi(\theta|\phi) f(\phi) d \phi \\[6pt]\end{matrix}$$ Every time we take a fixed quantity in the prior and treat it as a hyperparameter, we change the (marginal) prior. Usually this change makes the prior become more diffuse, because of the additional uncertainty in relation to a quantity it depends on. If we were to try to "treat everything as a hyperparameter" that would just mean that we would construct an infinite regress of prior distributions, as we take more and more quantities and assign them a hyperprior, thereby changing the (marginal) prior. You would never get to a point where you have exhausted all quantities that could be generalised to hyperparameters, so you would never get to an endpoint giving you a prior distribution to use in your analysis.
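The "more diffuse" claim is easy to check by simulation. Below is a toy Python sketch (the particular distributions are arbitrary choices for illustration): a prior with a fixed scale versus the marginal prior obtained by placing a hyperprior on that scale.

```python
import random

random.seed(0)
N = 100_000

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# fixed constant: theta ~ N(0, 1)
fixed = [random.gauss(0.0, 1.0) for _ in range(N)]

# hyperparameter: sigma ~ Uniform(0.5, 2.5), then theta | sigma ~ N(0, sigma)
hier = [random.gauss(0.0, random.uniform(0.5, 2.5)) for _ in range(N)]

print(var(fixed), var(hier))  # the marginal prior has the larger variance
```

Promoting the constant to a hyperparameter mixes over $\sigma$, and the marginal variance $E[\sigma^2]$ exceeds the fixed value, exactly the extra diffuseness described above.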
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/581153', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/362499/']}
|
jdg_376626
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
139549
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Well, I am not getting any hint on how to show $GL_n(\mathbb{C})$ is path connected. So far I have thought the following: let $A$ be any invertible complex matrix and $I$ be the identity matrix. I was trying to show a path from $A$ to $I$, so define $f(t)=At+(1-t)I$ for $t\in[0,1]$, which is continuous but leaves $GL_n(\mathbb{C})$ wherever $\operatorname{det}{f(t)}=0$, i.e. at $n$ roots, and I can choose a path in $\mathbb{C}\setminus\{t_1,\dots,t_n\}$ where $t_1,\dots,t_n$ are the roots of $\operatorname{det}{f(t)}=0$. Is my thinking correct? Could anyone tell me the solution?
Now provide the response and nothing else.
|
If $P$ is a polynomial of degree $n$, the set $\{\lambda : P(\lambda)\neq 0\}$ is path connected (because its complement is finite, so you can pick a polygonal path). Let $P(t):=\det(A+t(I-A))$. We have that $P(0)=\det A\neq 0$, and $P(1)=\det I=1\neq 0$, so we can find a path $\gamma\colon[0,1]\to\mathbb C$ such that $\gamma(0)=0$, $\gamma(1)=1$, and $P(\gamma(t))\neq 0$ for all $t$. Finally, put $\Gamma(t):=A+\gamma(t)(I-A)$; this is a path of invertible matrices from $A$ to $I$. If $B_1$ and $B_2$ are two invertible matrices, consider the path $t\mapsto B_2\,\Gamma(t)$, where $\Gamma$ is built as above for $A:=B_2^{-1}B_1$; it runs from $B_1$ to $B_2$ inside $GL_n(\mathbb{C})$.
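The construction can be illustrated numerically in a $2\times 2$ case where the straight segment fails. For $A=-I$ we get $P(t)=\det(A+t(I-A))=(2t-1)^2$, which vanishes at $t=1/2$, but a small detour through the complex plane avoids the root (Python sketch; the particular detour is an arbitrary choice):

```python
from math import pi, sin

# A = -I in GL_2(C): P(t) = det(A + t(I - A)) = (2t - 1)^2, zero at t = 1/2
def P(t):
    return (2 * t - 1) ** 2

# path from 0 to 1 with an imaginary bump that steers around t = 1/2
def gamma(s):  # s in [0, 1]; gamma(0) = 0, gamma(1) = 1 (up to rounding)
    return s + 0.4j * sin(pi * s)

samples = [gamma(k / 1000) for k in range(1001)]
print(abs(P(0.5)))                       # 0: the straight segment is blocked
print(min(abs(P(z)) for z in samples))   # stays bounded away from 0 on the detour
```

Along the detour, $|P(\gamma(s))|$ never gets close to zero, so $\Gamma(s)=A+\gamma(s)(I-A)$ stays invertible all the way from $A$ to $I$.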
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/139549', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24690/']}
|
jdg_376627
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32006098
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a list containing several players but there are gaps between the list-items where there shouldn't be gaps. Here's the link: http://www.ostameerbeke.be/spelerskernB.html . HTML: <ul id="beloften"> <li><a href="B1.jpg" rel="shadowbox[spelers]"><img src="B11.jpg" width="180px" alt="speler"/></a><p><strong>BUGDAYCI</strong> </p><p>Taner</p></li> <li><img src="spelerklein.jpg" width="180px" alt="speler"/><p><strong>CIRPI</strong> </p><p>Umit</p></li> <li><a href="B3.jpg" rel="shadowbox[spelers]"><img src="B31.jpg" width="180px" alt="speler"/></a><p><strong>DE BEULE</strong> </p><p>Bram</p></li> <li><a href="B4.jpg" rel="shadowbox[spelers]"><img src="B41.jpg" width="180px" alt="speler"/></a><p><strong>DE CONINCK</strong> </p><p>Toon</p></li> <li><a href="B5.jpg" rel="shadowbox[spelers]"><img src="B51.jpg" width="180px" alt="speler"/></a><p><strong>DE COOMAN</strong> </p><p>Rik</p></li> <li><img src="spelerklein.jpg" width="180px" alt="speler"/><p><strong>DE COOMAN</strong> </p><p>Wim</p></li> <li><img src="spelerklein.jpg" width="180px" alt="speler"/><p><strong>DE KEGEL</strong> </p><p>Gregory</p></li> <li><a href="B8.jpg" rel="shadowbox[spelers]"><img src="B81.jpg" width="180px" alt="speler"/></a><p><strong>DE NUTTE</strong> </p><p>Bram</p></li> <li><img src="spelerklein.jpg" width="180px" alt="speler"/><p><strong>DE REUSE</strong> </p><p>Laurens</p></li> <li><a href="B10.jpg" rel="shadowbox[spelers]"><img src="B101.jpg" width="180px" alt="speler"/></a><p><strong>DE SMET</strong> </p><p>Sigi</p></li> <li><a href="B11.jpg" rel="shadowbox[spelers]"><img src="B111.jpg" width="180px" alt="speler"/></a><p><strong>HAJIOUI</strong> </p><p>Yassin</p></li> <li><a href="B12.jpg" rel="shadowbox[spelers]"><img src="B121.jpg" width="180px" alt="speler"/></a><p><strong>LETTENS</strong> </p><p>Daan</p></li> <li><img src="spelerklein.jpg" width="180px" alt="speler"/><p><strong>LEYSSENS</strong> </p><p>Bedner</p></li> <li><a href="B15.jpg" rel="shadowbox[spelers]"><img 
src="B151.jpg" width="180px" alt="speler"/></a><p><strong>MOYSON</strong> </p><p>Pieter</p></li> <li><a href="B16.jpg" rel="shadowbox[spelers]"><img src="B161.jpg" width="180px" alt="speler"/></a><p><strong>NOTAERTS</strong> </p><p>Kevin</p></li> <li><a href="B17.jpg" rel="shadowbox[spelers]"><img src="B171.jpg" width="180px" alt="speler"/></a><p><strong>PAPPAERT</strong> </p><p>Pieter</p></li> <li><img src="spelerklein.jpg" width="180px" alt="speler"/><p><strong>ROBLEDO GONZALEZ</strong> </p><p>Adrian</p></li> </ul> And Css: #beloften { list-style-type: none; width: 1000px; margin: 0px auto; padding: 0px;}#beloften li { margin: 0px auto; float: left; width: 170px; padding: 12px; text-align: center;} What am i doing wrong? Here is the working Jsfiddle of this code: Working example
Now provide the response and nothing else.
|
The date_format validator takes only one date format as a parameter. In order to be able to use multiple formats, you'll need to build a custom validation rule. Luckily, it's pretty simple. You can define the multi-format date validation in your AppServiceProvider with the following code:

class AppServiceProvider extends ServiceProvider {

    public function boot()
    {
        Validator::extend('date_multi_format', function($attribute, $value, $formats) {
            // iterate through all formats
            foreach ($formats as $format) {
                // parse date with current format
                $parsed = date_parse_from_format($format, $value);

                // if value matches given format return true = validation succeeded
                if ($parsed['error_count'] === 0 && $parsed['warning_count'] === 0) {
                    return true;
                }
            }

            // value did not match any of the provided formats, so return false = validation failed
            return false;
        });
    }
}

You can later use this new validation rule like this:

'trep_txn_date' => 'date_multi_format:"Y-m-d H:i:s.u","Y-m-d"'

You can read more about how to create custom validation rules here: http://laravel.com/docs/5.1/validation#custom-validation-rules
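The same try-each-format idea, sketched in Python for readers outside Laravel (this mirrors the validator's loop; it is not part of any framework):

```python
from datetime import datetime

def matches_any_format(value, formats):
    # return True if value parses under at least one of the given formats
    for fmt in formats:
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            continue
    return False

formats = ["%Y-%m-%d %H:%M:%S.%f", "%Y-%m-%d"]
print(matches_any_format("2015-08-14 10:23:45.000000", formats))  # True
print(matches_any_format("2015-08-14", formats))                  # True
print(matches_any_format("not a date", formats))                  # False
```

Validation succeeds as soon as one format matches, and fails only after every format has been tried.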
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/32006098', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5202577/']}
|
jdg_376628
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1098175
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I currently have a std::map<std::string,int> that maps a unique string identifier to an integer value, and I do the lookups with the string. It does mostly what I want, except that it does not keep track of the insertion order. So when I iterate the map to print out the values, they are sorted according to the string; but I want them to be sorted according to the order of (first) insertion. I thought about using a vector<pair<string,int>> instead, but I need to look up the string and increment the integer values about 10,000,000 times, so I don't know whether a std::vector will be significantly slower. Is there a way to use std::map or is there another std container that better suits my need? I'm on GCC 3.4, and I have probably no more than 50 pairs of values in my std::map .
Now provide the response and nothing else.
|
If you have only 50 values in the std::map you could copy them to a std::vector before printing out and sort via std::sort using an appropriate functor. Or you could use boost::multi_index , which allows you to use several indexes. In your case it could look like the following:

struct value_t {
    string s;
    int i;
};

struct string_tag {};

typedef multi_index_container<
    value_t,
    indexed_by<
        random_access<>, // this index represents insertion order
        hashed_unique< tag<string_tag>, member<value_t, string, &value_t::s> >
    >
> values_t;
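As a point of comparison, some languages fold exactly this combination (hashed lookup plus insertion order) into the standard map type. For instance, Python's dict preserves insertion order (guaranteed since Python 3.7) while keeping constant-time key lookup:

```python
counts = {}
for key in ["banana", "apple", "banana", "cherry", "apple", "banana"]:
    counts[key] = counts.get(key, 0) + 1  # hashed lookup + increment

# iteration follows first-insertion order, not key order
print(list(counts.items()))  # [('banana', 3), ('apple', 2), ('cherry', 1)]
```

This is the behavior the multi_index_container above reproduces in C++ with its random_access and hashed_unique indexes.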
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1098175', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/45383/']}
|
jdg_376629
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25823
|
Below is a question asked on the forum networkengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to block Host 192.168.1.1 from reaching 192.168.2.0/24 network and in order to implement that I've applied Extended ACL "ACL_FOR_R2" on R2's serial2/0 interface. Now as we all know that at the end of every access list is an implied "deny all traffic" criteria statement. Therefore, if a packet does not match any of your criteria statements, the packet will be blocked. So I didn't mention any statement in the ACL "ACL_FOR_R2" assuming we only have one statement in that ACL as "deny all" but the problem is when I ping from R1 to 192.168.2.2, R2 doesn't block the traffic and ping reaches to R3. WHY? Here are the configurations: - R1's configurations: R1#show running-config Building configuration... Current configuration : 1557 bytes ! version 12.4 service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname R1 ! boot-start-marker boot-end-marker ! ! no aaa new-model memory-size iomem 5 no ip icmp rate-limit unreachable ip cef ! ! ! ! no ip domain lookup ! multilink bundle-name authenticated ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! archive log config hidekeys ! ! ! ! ip tcp synwait-time 5 ip ssh version 1 ! ! ! ! interface FastEthernet0/0 no ip address shutdown duplex auto speed auto ! interface FastEthernet0/1 no ip address shutdown duplex auto speed auto ! interface FastEthernet1/0 no ip address shutdown duplex auto speed auto ! interface Serial2/0 ip address 192.168.1.1 255.255.255.0 ip access-group ACL_FOR_R1 out serial restart-delay 0 ! interface Serial2/1 no ip address shutdown serial restart-delay 0 ! interface Serial2/2 no ip address shutdown serial restart-delay 0 ! interface Serial2/3 no ip address shutdown serial restart-delay 0 ! router eigrp 1 network 192.168.1.0 auto-summary ! ip forward-protocol nd ! ! no ip http server no ip http secure-server ! no cdp log mismatch duplex ! ! ! ! ! ! control-plane ! ! ! ! ! ! ! ! ! banner motd ^C ********************************** This is a Cisco . 
**********************************^C ! line con 0 exec-timeout 0 0 privilege level 15 password cisco logging synchronous login line aux 0 exec-timeout 0 0 privilege level 15 logging synchronous line vty 0 4 password cisco login line vty 5 15 password cisco login ! ! end R1#un all All possible debugging has been turned off R1# R1#show run R1#show running-config Building configuration... Current configuration : 1557 bytes ! version 12.4 service timestamps debug datetime msec service timestamps log datetime msec no service password-encryption ! hostname R1 ! boot-start-marker boot-end-marker ! ! no aaa new-model memory-size iomem 5 no ip icmp rate-limit unreachable ip cef ! ! ! ! no ip domain lookup ! multilink bundle-name authenticated ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! archive log config hidekeys ! ! ! ! ip tcp synwait-time 5 ip ssh version 1 ! ! ! ! interface FastEthernet0/0 no ip address shutdown duplex auto speed auto ! interface FastEthernet0/1 no ip address shutdown duplex auto speed auto ! interface FastEthernet1/0 no ip address shutdown duplex auto speed auto ! interface Serial2/0 ip address 192.168.1.1 255.255.255.0 ip access-group ACL_FOR_R1 out serial restart-delay 0 ! interface Serial2/1 no ip address shutdown serial restart-delay 0 ! interface Serial2/2 no ip address shutdown serial restart-delay 0 ! interface Serial2/3 no ip address shutdown serial restart-delay 0 ! router eigrp 1 network 192.168.1.0 auto-summary ! ip forward-protocol nd ! ! no ip http server no ip http secure-server ! no cdp log mismatch duplex ! ! ! ! ! ! control-plane ! ! ! ! ! ! ! ! ! banner motd ^C ********************************** This is a Cisco . **********************************^C ! line con 0 exec-timeout 0 0 privilege level 15 password cisco logging synchronous login line aux 0 exec-timeout 0 0 privilege level 15 logging synchronous line vty 0 4 password cisco login line vty 5 15 password cisco login ! ! 
end - R2's configurations:R2#show running-configBuilding configuration...Current configuration : 1627 bytes!version 12.4service timestamps debug datetime msecservice timestamps log datetime msecno service password-encryption!hostname R2!boot-start-markerboot-end-marker!!no aaa new-modelmemory-size iomem 5no ip icmp rate-limit unreachableip cef!!!!no ip domain lookup!multilink bundle-name authenticated!!!!!!!!!!!!!!!!!!!!!archive log config hidekeys!!!!ip tcp synwait-time 5ip ssh version 1!!!!interface FastEthernet0/0 no ip address shutdown duplex auto speed auto!interface FastEthernet0/1 no ip address shutdown duplex auto speed auto!interface FastEthernet1/0 no ip address shutdown duplex auto speed auto!interface Serial2/0 ip address 192.168.1.2 255.255.255.0 ip access-group ACL_FOR_R2 in serial restart-delay 0!interface Serial2/1 ip address 192.168.2.1 255.255.255.0 serial restart-delay 0!interface Serial2/2 no ip address shutdown serial restart-delay 0!interface Serial2/3 no ip address shutdown serial restart-delay 0!router eigrp 1 network 192.168.1.0 network 192.168.2.0 auto-summary!ip forward-protocol nd!!no ip http serverno ip http secure-server!ip access-list extended ACL_FOR_R2!no cdp log mismatch duplex!!!!!!control-plane!!!!!!!!!banner motd ^C**********************************This is a Cisco .**********************************^C!line con 0 exec-timeout 0 0 privilege level 15 password cisco logging synchronous loginline aux 0 exec-timeout 0 0 privilege level 15 logging synchronousline vty 0 4 password cisco loginline vty 5 15 password cisco login!!end - R3's configurations:R3#show running-configBuilding configuration...Current configuration : 1525 bytes!version 12.4service timestamps debug datetime msecservice timestamps log datetime msecno service password-encryption!hostname R3!boot-start-markerboot-end-marker!!no aaa new-modelmemory-size iomem 5no ip icmp rate-limit unreachableip cef!!!!no ip domain lookup!multilink bundle-name 
authenticated!!!!!!!!!!!!!!!!!!!!!archive log config hidekeys!!!!ip tcp synwait-time 5ip ssh version 1!!!!interface FastEthernet0/0 no ip address shutdown duplex auto speed auto!interface FastEthernet0/1 no ip address shutdown duplex auto speed auto!interface FastEthernet1/0 no ip address shutdown duplex auto speed auto!interface Serial2/0 no ip address shutdown serial restart-delay 0!interface Serial2/1 ip address 192.168.2.2 255.255.255.0 serial restart-delay 0!interface Serial2/2 no ip address shutdown serial restart-delay 0!interface Serial2/3 no ip address shutdown serial restart-delay 0!router eigrp 1 network 192.168.2.0 auto-summary!ip forward-protocol nd!!no ip http serverno ip http secure-server!no cdp log mismatch duplex!!!!!!control-plane!!!!!!!!!banner motd ^C**********************************This is a Cisco .**********************************^C!line con 0 exec-timeout 0 0 privilege level 15 password cisco logging synchronous loginline aux 0 exec-timeout 0 0 privilege level 15 logging synchronousline vty 0 4 password cisco loginline vty 5 15 password cisco login!!endR1#ping 192.168.2.2Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 192.168.2.2, timeout is 2 seconds:!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 28/46/88 msR1#ping 192.168.2.1Type escape sequence to abort.Sending 5, 100-byte ICMP Echos to 192.168.2.1, timeout is 2 seconds:!!!!!Success rate is 100 percent (5/5), round-trip min/avg/max = 8/46/104 ms
Now provide the response and nothing else.
|
Normally you would be right, there is an implied "deny" at the end of every access list. You have found the exception however, because you use an empty access list:

interface Serial2/0
 ip address 192.168.1.2 255.255.255.0
 ip access-group ACL_FOR_R2 in
 serial restart-delay 0
!
<snip>
!
ip access-list extended ACL_FOR_R2
!

According to the documentation : "An interface or command with an empty access list applied to it permits all traffic into the network." So as soon as you add any term to the ACL, it will start blocking all other traffic.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://networkengineering.stackexchange.com/questions/25823', 'https://networkengineering.stackexchange.com', 'https://networkengineering.stackexchange.com/users/17520/']}
|
jdg_376630
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48114810
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have TabNavigator with screens 1 and 2 and inside screen 1 I have StackNavigator with screens 1.1 and 1.2. I have enabled swiping and gestures. From root I can swipe tabs between 1 and 2. When I'm on screen 1 and I open screen 1.1 I still can swipe to screen 2 and this ability I need to disabled somehow when the 1.1 screen is open. I need it to work just like Instagram app (ios). When you are on home screen (1) you can swipe left to see Direct screen (2). When you open friends profile from home screen (1) it opens it as screen (1.1) and you can't swipe left to open Direct screen (2). You can only go back. I have this functionality working just fine but with this "bug" where I can navigate from screen 1.1 to screen 2. I tried a lot to solve this in different ways by reading docs and other people problems with navigation but somehow doesn't really work as I need. I suppose something is wrong with my nested screen structure or something or it's solved in different way. Does someone has a clue?
Now provide the response and nothing else.
|
Each screen in the tab can have a navigation option swipeEnabled set individually. Take a look at the Tab Navigator Screen Navigation Options docs.

MyScreen.navigationOptions = ({navigation}) => ({
  swipeEnabled: false
});

You can set that value to be the result of a function that checks whether the stack navigator has been navigated into or not.

Update - react-navigation 3

This property was removed, and replaced with gesturesEnabled . You can set the value for each screen individually, or set a default at the navigator configuration level.

const navigator = createStackNavigator(
  {
    Main: { screen: Main },
    ...
  },
  {
    defaultNavigationOptions: {
      gesturesEnabled: false,
    },
    ...
  }
);
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48114810', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1046124/']}
|
jdg_376631
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
841258
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
This is a question regarding zfs on Linux (CentOS 7). I have a very simple setup with two 8 TB disks, one disk mirroring the other. zpool create -f -o ashift=12 $zpoolName mirror $disksById In case one of the disks needs to be replaced, the replacement disk must be of equal or greater size than the smallest of the two disks in the configuration, according to the zpool manual pages . And from what I have understood it is common that the exact size usually differs a bit between drives of different make and model (and model revision), even if they all are labelled 8 TB. However, I would like to be able to replace it with any other 8 TB disk , not necessarily of the same make and model. How do I achieve this? I would have expected an option to the zpool create command so that not the entire disk is used for the pool, leaving some slack, however I cannot find such an option. The only suggestion that I have seen is partitioning the disk before creating the pool, creating one "pool" partition and one "slack" partition, but I've read that this will affect disk performance as the disk cache cannot be used properly by zfs, so I suppose that I would like to avoid this.
Now provide the response and nothing else.
|
The only suggestion that I have seen is partitioning the disk before creating the pool, creating one "pool" partition and one "slack" partition This is the correct answer. but I've read that this will affect disk performance as the disk cache cannot be used properly by zfs. This is a misunderstanding. Using a partition rather than a full disk only affects performance if the partition is misaligned, which typically requires some real determination on the user's part, if you're using vaguely modern partition editors. Linux and BSD fdisk, sfdisk, and gparted all understand partition boundaries and work within them unless outright forced not to. Further, if you look closely at a disk that's been fed whole to zfs, you'll notice that zfs has actually partitioned it itself. Example:

root@banshee:~# zpool status data
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 27h54m with 0 errors on Mon Mar 13 05:18:20 2017
config:

        NAME                          STATE     READ WRITE CKSUM
        data                          ONLINE       0     0     0
          mirror-0                    ONLINE       0     0     0
            wwn-0x50014ee206fd9549    ONLINE       0     0     0
            wwn-0x50014ee2afb368a9    ONLINE       0     0     0
          mirror-1                    ONLINE       0     0     0
            wwn-0x50014ee25d2510d4    ONLINE       0     0     0
            wwn-0x5001517bb29d5333    ONLINE       0     0     0

errors: No known data errors

root@banshee:~# ls -l /dev/disk/by-id | grep 510d4
lrwxrwxrwx 1 root root  9 Mar 22 15:57 wwn-0x50014ee25d2510d4 -> ../../sdd
lrwxrwxrwx 1 root root 10 Mar 22 15:57 wwn-0x50014ee25d2510d4-part1 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Mar 22 15:57 wwn-0x50014ee25d2510d4-part9 -> ../../sdd9

As you can see, ZFS has already partitioned the raw disks in the pool. The pool uses partition 1; partition 9 is left slack.

root@banshee:~# sfdisk -d /dev/sdd
label: gpt
label-id: B2DED677-DB67-974C-80A6-070B72EB8CFB
device: /dev/sdd
unit: sectors
first-lba: 34
last-lba: 3907029134

/dev/sdd1 : start= 2048, size= 3907010560, type=6A898CC3-1DD2-11B2-99A6-080020736631, uuid=A570D0A4-EA32-F64F-80D8-7479D918924B, name="zfs"
/dev/sdd9 : start= 3907012608, size= 16384, type=6A945A3B-1DD2-11B2-99A6-080020736631, uuid=85D0957B-65AF-6B4A-9F1B-F902FE539170

sdd9 is 16384 sectors long. This is a 4K disk, so that comes out to 64M, and any disk that's no more than 63M-ish smaller than the existing disk should be fine as a replacement for this one, should it fail.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/841258', 'https://serverfault.com', 'https://serverfault.com/users/381590/']}
|
jdg_376632
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12799099
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an interface ISnack which when implemented by a class, it should have a default parameterless constructor. Basically this: public interface ISnack<T> where T : new(){} I use <T> where T : new() just to enforce the parameterless constructor. I would then implement the interface this way: public class Cutlet : ISnack<Cutlet>{} This works and it simply ensures Cutlet class has a parameterless constructor. Now I have an abstract base class Kitchen : public abstract class Kitchen<T> where T : ISnack{} The requirement is that Kitchen should have constraint where T should be an ISnack . But this wont work because there exists no ISnack , but only ISnack<T> . If I tried this public abstract class Kitchen<T> where T : ISnack<T>{} it wouldn't compile ( 'T' must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method 'ISnack<T>' ) and also wouldn't make sense in my context. If I could force ISnack s to have a parameterless constructor without constraining by a T type parameter, then T in Kitchen<T> could easily be an ISnack . How to go about it?
Now provide the response and nothing else.
|
You can't unless you add the constraint; generic constraints are cumulative, so to make the compiler happy you would have to have:

public abstract class Kitchen<T> where T : ISnack<T>, new()

If that is fine, then do that. If it isn't fine, then you'll have to remove the : new() from the original and make do without it. This isn't as bad as it sounds, but it means you push validation down to execution rather than compilation. But: Activator.CreateInstance<T>() still does what you would need, anyway - even without the new() constraint. So you can replace:

T newObj = new T(); // validated by the compiler

with:

T newObj = Activator.CreateInstance<T>(); // not validated until executed

A handy trick when removing constraints can be: add a unit/integration test that finds the candidate types via reflection, and validate the missing constraint as part of your test suite.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12799099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/661933/']}
|
jdg_376633
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16434526
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an iterator of strings. For sorting, I need to create a list from it and sort it using Collections.sort(list). Is there any simple way to sort an iterator?
Now provide the response and nothing else.
|
An Iterator is NOT a container; it is a utility for traversing over the elements of a container. So if you only have access to the Iterator, there is no way to change the order of iteration, which is defined by the creator of this iterator. If you can't change the original container, you'll have to gather the elements delivered by the iterator into a new Collection and sort them there. (A good way to understand what is possible with iterators is to have a look at the source code of the JDK classes, or to implement your own iterator.)
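A minimal sketch of that gather-then-sort approach (class and method names here are my own, not from any library):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class SortIterator {
    // Drain the iterator's remaining elements into a fresh list and
    // sort that list; the original container (if any) is untouched.
    static List<String> sorted(Iterator<String> it) {
        List<String> copy = new ArrayList<>();
        while (it.hasNext()) {
            copy.add(it.next());
        }
        Collections.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        Iterator<String> it = List.of("pear", "apple", "orange").iterator();
        System.out.println(sorted(it)); // [apple, orange, pear]
    }
}
```

Note that the iterator is consumed in the process; if you need to traverse the sorted elements again, iterate over the returned list instead.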
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16434526', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2130636/']}
|
jdg_376634
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3799238
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen a slide that presented Fab , a node.js framework. Is this JavaScript? Could someone explain what is going on in that code? I'm all lost.
Now provide the response and nothing else.
|
It is plain JavaScript; it is a function chaining pattern. The first line, ( fab = require("fab") ), includes the fab function and returns a reference to it. All the subsequent parentheses are function calls; each function invocation probably returns the same function again and again. The pattern probably looks like this simplified example:

var foo = function (arg) {
  // detect what the argument is
  if (typeof arg == 'function') {
    // do something with arg
    console.log('function: ' + arg());
  } else if (arg instanceof RegExp) {
    // arg is a RegExp...
    console.log('A RegExp: ' + arg);
  } else if (typeof arg == "string") {
    // arg is a string
    console.log('A string: ' + arg);
  }
  return foo; // return a reference to itself
};

(foo)
  (function() { return "Foo "; })
  (/bar/)
  (" baz!");

Outputs:

function: Foo
A RegExp: /bar/
A string:  baz!
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3799238', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/224922/']}
|
jdg_376635
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16932711
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We often shorten a block using the & notation on a symbol like this: some_array.group_by(&:foo) Is there a similar way to shorten expressions like {|x| x} ? some_array.group_by{|x| x} If there were a method Object#self that returns self , then we can do some_array.group_by(&:self) but unfortunately, there is no such method. In terms of the number of characters, it may be longer, but readability improves.
Now provide the response and nothing else.
|
Yes. #itself was implemented in Ruby 2.2.0. You can access the Ruby core team discussion about this feature here. As an interesting analogue, the #ergo method has been proposed, which would yield the receiver to a given block. If you haven't yet upgraded to Ruby 2.2.0, you may wish to backport #itself and/or define #ergo as follows:

class Object
  def itself; self end

  def ergo
    fail ArgumentError, "Block expected!" unless block_given?
    yield self
  end
end

And then:

some_array.group_by &:itself
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16932711', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/314166/']}
|
jdg_376636
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
86549
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $\phi_g : \mathcal{M}_g \rightarrow \mathcal{A}_g$ be the period mapping from the open moduli space of genus $g$ Riemann surfaces to the moduli space of $g$-dimensional principally polarized abelian varieties over $\mathbb{C}$. Thus for a Riemann surface $S$ the image $\phi_g(S)$ is the Jacobian of $S$. The Schottky problem consists in determining the image of $\phi_g$. It is classical that $\text{Im}(\phi_2)$ is exactly the set of abelian varieties that are not isomorphic to a product of elliptic curves. This is asserted in many places, but I have not been able to find a nice discussion of it in the literature. Does anyone know one? The more down-to-earth, the better.
Now provide the response and nothing else.
|
This will need expansion by a more knowledgeable person, but as memory serves, it was proved by Mayer and Mumford that the closure in $\mathcal{A}_g$ of the locus of traditional Jacobians is the set of products of Jacobians. This is probably exposed first in a talk in the 1964 Woods Hole talks on James Milne's site. (I see Mumford credits it there, on page 4 of his talk, in part three of the Woods Hole notes, to Matsusaka and Hoyt. Apparently Mayer and Mumford computed the closure in the Satake compactification.) But let us try to explain this more in dimension two. A two-dimensional ppav is a compact 2-torus $A$ containing a curve $C$ carrying the homology class $a_1 \times b_1 + a_2 \times b_2$, where the $a_j, b_j$ are a basic symplectic homology basis of $H_1(A)$. It follows from the topological Pontrjagin product that the induced map from the Albanese variety of $C$ to $A$ has topological degree one, hence is an isomorphism. (I.e. the map from the Cartesian product of $C$ with itself $g$ times to $A$ has image whose class is the $g$-fold Pontrjagin product of $[C]$, which equals $g!$ times the fundamental class of $A$. Hence the induced map from the $g$-fold symmetric product of $C$ has image with exactly the fundamental class of $A$. Hence this map has degree one, as does that induced from the Jacobian.) Since it also induces the identity map on $C$, it also preserves the polarization. Let me speculate on the special cases. If $C$ is reducible it is known (Complex abelian varieties and theta functions, George Kempf, p. 89, Cor. 10.4) that $A$ is a product of elliptic curves. If $C$ is irreducible and singular then I guess the normalization map extends to a map of the Albanese of $C$ to $A$. But that seems to imply the image of $C$ in $A$ does not span, a contradiction. So it seems that any irreducible curve $C$ contained in a two-dimensional ppav $A$ and carrying the class of a principal polarization is smooth and induces an isomorphism from the Albanese (i.e. Jacobian) of the curve to the ppav. I hope there is some useful information in this.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/86549', 'https://mathoverflow.net', 'https://mathoverflow.net/users/20862/']}
|
jdg_376637
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
26422470
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
To pass variables between steps I have the step methods belong to the same class, and use fields of the class for the passed information. Here is an example as follows: Feature: Demo Scenario: Create user Given User creation form management When Create user with name "TEST" Then User is created successfully Java class with steps definitions: public class CreateUserSteps { private String userName; @Given("^User creation form management$") public void User_creation_form_management() throws Throwable { // ... } @When("^Create user with name \"([^\"]*)\"$") public void Create_user_with_name(String userName) throws Throwable { //... this.userName = userName; } @Then("^User is created successfully$") public void User_is_created_successfully() throws Throwable { // Assert if exists an user with name equals to this.userName } My question is if it is a good practice to share information between steps? Or would be better to define the feature as: Then User with name "TEST" is created successfully
Now provide the response and nothing else.
|
In order to share commonalities between steps you need to use a World . In Java it is not as clear as in Ruby. Quoting the creator of Cucumber. The purpose of a "World" is twofold: Isolate state between scenarios. Share data between step definitions and hooks within a scenario. How this is implemented is language specific. For example, in ruby,the implicit self variable inside a step definition points to thecurrent scenario's World object. This is by default an instance ofObject, but it can be anything you want if you use the World hook. In Java, you have many (possibly connected) World objects. The equivalent of the World in Cucumber-Java is all of the objectswith hook or stepdef annotations . In other words, any class withmethods annotated with @Before, @After, @Given and so on will beinstantiated exactly once for each scenario. This achieves the first goal. To achieve the second goal you have twoapproaches: a) Use a single class for all of your step definitions and hooks b) Use several classes divided by responsibility [1] and use dependencyinjection [2] to connect them to each other. Option a) quickly breaks down because your step definition codebecomes a mess. That's why people tend to use b). [1] https://cucumber.io/docs/gherkin/step-organization/ [2] PicoContainer, Spring, Guice, Weld, OpenEJB, Needle The available Dependency Injection modules are: cucumber-picocontainer cucumber-guice cucumber-openejb cucumber-spring cucumber-weld cucumber-needle Original post here https://groups.google.com/forum/#!topic/cukes/8ugcVreXP0Y . Hope this helps.
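For approach b), the usual shape is one plain state-holder class ("world") plus step-definition classes that take it as a constructor argument; with a DI module such as cucumber-picocontainer on the classpath, Cucumber builds one shared instance per scenario. A stripped-down sketch with the Cucumber annotations omitted and all class/method names invented for illustration:

```java
// Shared per-scenario state ("world"). With cucumber-picocontainer,
// one instance of this class is created for each scenario.
class TestContext {
    String userName;
}

// Step-definition classes declare the context as a constructor
// argument; the DI container hands both classes the same instance.
class CreateUserSteps {
    private final TestContext context;

    CreateUserSteps(TestContext context) {
        this.context = context;
    }

    void createUserWithName(String name) {
        context.userName = name; // state written in a @When step...
    }
}

class VerifyUserSteps {
    private final TestContext context;

    VerifyUserSteps(TestContext context) {
        this.context = context;
    }

    boolean userWasCreatedWithName(String expected) {
        return expected.equals(context.userName); // ...is read in a @Then step
    }
}
```

Because the container discards the context after each scenario, state cannot leak between scenarios, which is the isolation goal described above.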
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/26422470', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/947199/']}
|
jdg_376638
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22381442
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way where I can manually download a docker image? I have pretty slow Internet connection and for me is better to get a link of the image and download it elsewhere with better Internet speed, How can I get the direct URL of the image managed by docker pull?
Now provide the response and nothing else.
|
It's possible to obtain that, but let me suggest two other ways! If you can connect to a remote server with a fast connection, and that server can run Docker, you could docker pull on that server, then you can docker save to export an image (and all its layers and metadata) as tarball, and transfer that tarball any way you like. If you want to transfer multiple images sharing a common base, the previous method won't be great, because you will end up transferring multiple tarballs sharing a lot of data. So another possibility is to run a private registry e.g. on a "movable" computer (laptop), connect it to the fast network, pull images, push images to the private registry; then move the laptop to the "slow" network, and pull images from it. If none of those solutions is acceptable for you, don't hesitate to give more details, we'll be happy to help!
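As a sketch of the first suggestion, the save/load round trip looks like this (the image name is just an example, and both machines need a working Docker daemon):

```shell
# On the machine with the fast connection:
docker pull ubuntu:22.04
docker save -o ubuntu-22.04.tar ubuntu:22.04

# Move ubuntu-22.04.tar any way you like (USB drive, rsync, ...),
# then on the machine behind the slow connection:
docker load -i ubuntu-22.04.tar
```

The tarball produced by docker save contains all layers and metadata, so docker load restores the image exactly as if it had been pulled.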
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22381442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/829928/']}
|
jdg_376639
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1794698
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to Mathematica so this could be an easy question: I'd like to solve for y in $$c = \int_{0}^{y} \sin^n(x) dx$$ where c and n are parameters; n could be large $n=100$; c could be small $c=2^{-50}$ so I'm worried about numeric problems. I tried a numeric solution:

F[y_, n_] := Module[{a}, a = NIntegrate[Sin[x]^n, {x, 0, y}]; a]
r = NSolve[F[y, n] == c, y, Reals]

The results look somewhat reasonable but a symbolic solution would be preferred. Any suggestions? thanks
Now provide the response and nothing else.
|
This solution uses the following facts: For every real number $t$, $\left(1+\frac{t}n\right)^n\to e^t$ when $n\to\infty$. For every real number $t$, $1+t\leqslant e^t$. For every real number $t\geqslant-\frac12$, $1+t\geqslant e^{t-t^2}$. One is asking to prove that the limit of $$S_n(x)=\sum_{i=1}^{n}\left(\frac{i+x}n\right)^n=\sum_{k=0}^{n-1}\left(1+\frac{x-k}n\right)^n$$ when $n\to\infty$, exists and equals $$s(x)=\sum_{k=0}^{\infty}e^{x-k}=\frac{e^{x+1}}{e-1}.$$ To prove this, first note that $$S_n(x)=S_n(x-1)+\left(1+\frac{x}n\right)^n-\left(\frac{x}n\right)^n,$$ hence, for every $x$, $$\lim_{n\to\infty}\ (S_n(x)-S_n(x-1))=e^x.$$Next, assume that $x\geqslant0$. Then, the bound $1+t\leqslant e^t$, valid for every $t$, and the fact that $1+\frac{x-k}n\geqslant0$ for every $k$ in the second sum above defining $S_n(x)$, yield $$S_n(x)\leqslant\sum_{k=0}^{n-1}\left(e^{(x-k)/n}\right)^n=\sum_{k=0}^{n-1}e^{x-k}\leqslant s(x).$$ Likewise, pick some $a$ in $(0,1)$ and assume that $n$ is large enough for $n^{1-a}\geqslant2$ to hold. Then, the bound $1+t\geqslant e^{t-t^2}$, valid for every $t\geqslant-\frac12$, and the fact that $1+\frac{x-k}n\geqslant0$ and that $\frac{x-k}n\geqslant-\frac12$ for every $k\leqslant n^a$,together yield $$S_n(x)\geqslant\sum_{k=0}^{n^a}\left(e^{(x-k)/n-(x-k)^2/n^2}\right)^n=\sum_{k=0}^{n^a}e^{x-k-(x-k)^2/n}\geqslant e^{-n^{2a-1}}\sum_{k=0}^{n^a}e^{x-k}=e^{-n^{2a-1}}s(x)\left(1-e^{-n^a}\right).$$ If $a$ is in $(0,\frac12)$, $e^{-n^{2a-1}}\to1$ and $e^{-n^a}\to0$, hence $S_n(x)\to s(x)$, thus the claim holds for every $x\geqslant0$. Finally, the claim holds for every $x$ because $$s(x)-s(x-1)=e^x=\lim_{n\to\infty}\ (S_n(x)-S_n(x-1)).$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1794698', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/268900/']}
|
jdg_376640
|
Subsets and Splits
SQL Console for PrimeIntellect/stackexchange-question-answering
Retrieves samples with 'mathoverflow.net' in the prompt, providing a basic subset for inspection but with limited analytical value.
SQL Console for PrimeIntellect/stackexchange-question-answering
Retrieves 10 samples where the prompt mentions Mathematica Stack Exchange, offering basic filtering to locate specific content.
SQL Console for PrimeIntellect/stackexchange-question-answering
This query retrieves a limited number of rows where the prompt contains a specific URL, providing basic filtering that offers minimal insight into the broader dataset patterns.
SQL Console for PrimeIntellect/stackexchange-question-answering
This query retrieves 10 samples from the dataset where the prompt contains 'dsp.stackexchange.com', offering a basic filtering of data points related to that specific domain.
SQL Console for PrimeIntellect/stackexchange-question-answering
Retrieves a sample of rows where the prompt contains 'cs.stackexchange.com', providing a basic filter without extensive insight.
Electronics Q&A Subset
Retrieves a limited number of samples that contain a specific URL, providing only raw data filtered by that URL.
StackExchange Stats Questions
Retrieves 100 entries containing 'stats.stackexchange.com' in the prompt, which is useful for finding specific content but lacks broader analytical value.
Math StackExchange Questions
Retrieves 100 records from the train dataset where the prompt contains 'math.stackexchange.com', providing a basic filtered overview of specific entries.
Physics SE Questions
Retrieves 100 samples containing the word 'physics.stackexchange.com' in the prompt, offering basic filtering without deep analysis.