source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21044798
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is my HTML which I'm generating dynamically using drag and drop functionality. <form method="POST" id="contact" name="13" class="form-horizontal wpc_contact" novalidate="novalidate" enctype="multipart/form-data"><fieldset> <div id="legend" class=""> <legend class="">file demoe 1</legend> <div id="alert-message" class="alert hidden"></div> </div> <div class="control-group"> <!-- Text input--> <label class="control-label" for="input01">Text input</label> <div class="controls"> <input type="text" placeholder="placeholder" class="input-xlarge" name="name"> <p class="help-block" style="display:none;">text_input</p> </div> <div class="control-group"> </div> <label class="control-label">File Button</label> <!-- File Upload --> <div class="controls"> <input class="input-file" id="fileInput" type="file" name="file"> </div> </div> <div class="control-group"> <!-- Button --> <div class="controls"> <button class="btn btn-success">Button</button> </div> </div></fieldset></form> This is my JavaScript code: <script> $('.wpc_contact').submit(function(event){ var formname = $('.wpc_contact').attr('name'); var form = $('.wpc_contact').serialize(); var FormData = new FormData($(form)[1]); $.ajax({ url : '<?php echo plugins_url(); ?>'+'/wpc-contact-form/resources/js/tinymce.php', data : {form:form,formname:formname,ipadd:ipadd,FormData:FormData}, type : 'POST', processData: false, contentType: false, success : function(data){ alert(data); } }); }
Now provide the response and nothing else.
|
For correct form data usage you need to do 2 steps.

Preparations

You can give your whole form to FormData() for processing:

    var form = $('form')[0]; // You need to use a standard javascript object here
    var formData = new FormData(form);

or specify the exact data for FormData():

    var formData = new FormData();
    formData.append('section', 'general');
    formData.append('action', 'previewImg');
    // Attach file
    formData.append('image', $('input[type=file]')[0].files[0]);

Sending the form

The Ajax request with jQuery will look like this:

    $.ajax({
        url: 'Your url here',
        data: formData,
        type: 'POST',
        contentType: false, // NEEDED, DON'T OMIT THIS (requires jQuery 1.6+)
        processData: false, // NEEDED, DON'T OMIT THIS
        // ... Other options like success etc.
    });

After this it will send the ajax request as if you had submitted a regular form with enctype="multipart/form-data".

Update: This request cannot work without type: 'POST' in the options, since all files must be sent via a POST request.

Note: contentType: false is only available from jQuery 1.6 onwards.
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/21044798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2424332/']}
|
jdg_370241
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15538125
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In all of the AngularJS examples, the Angular library is placed in the HEAD tags of the document. I have an existing project that has been built upon the HTML5 Boilerplate layout. This defines that JS libraries should be placed at the very bottom of the DOM before the </BODY> tag. Does AngularJS need to be placed in the HEAD?
Now provide the response and nothing else.
|
AngularJS does not need to be placed in the HEAD, and actually you normally shouldn't, since this would block loading the HTML. However, when you load AngularJS at the bottom of the page, you will need to use ng-cloak or ng-bind to avoid the "flash of uncompiled content". Note that you only need to use ng-cloak/ng-bind on your "index.html" page. When ng-include or ng-view or other Angular constructs are used to pull in additional content after the initial page load, that content will be compiled by Angular before it is displayed. See also https://stackoverflow.com/a/14076004/215945
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/15538125', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1284807/']}
|
jdg_370242
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20458890
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to remove a piece of a data from a json array. For example I have this array var favorites = { "userID": "12345678", "favorites": [ { "name" : "My Favorites", "id" : "87654321", "items": [ { "productID": "11234567", "added": "TIMESTAMP", "title": "Project", "type": "Weekend Project", "imageURL": "1" }, { "productID": "11223456", "added": "TIMESTAMP", "title": "Bathroom", "type": "Weekend Project", "imageURL": "2" }, { "productID": "11223345", "added": "TIMESTAMP", "title": "Curves", "type": "Collections", "imageURL": "3" } ] }, { "name" : "Bathroom", "id" : "87654323", "items": [ { "productID": "11122224", "added": "TIMESTAMP", "title": "Project", "type": "Weekend Project", "imageURL": "1" }, { "productID": "11122222", "added": "TIMESTAMP", "title": "Room", "type": "Weekend Project", "imageURL": "2" }, { "productID": "11112222", "added": "TIMESTAMP", "title": "Strais", "type": "Collections", "imageURL": "3" }, { "productID": "11111222", "added": "TIMESTAMP", "title": "Door", "type": "Collections", "imageURL": "4" } ] } ] } Say I wanted to remove the a product out of the bathroom category on the click of a button. How would I acheive this? I have been trying this to no avail: jQuery(document).on('click', ".removeFav", function() { favorites.favorites[1].items[1].splice();} error I am receiving: Uncaught TypeError: Object # has no method 'splice'
Now provide the response and nothing else.
|
To unset any variable, use the delete operator:

    delete favorites.favorites[1].items[1]

This is a correct way and it will work, but it leaves a hole at that index; if your goal is to preserve the indexes in order, then your way with the splice method is the way to go:

    favorites.favorites[1].items.splice(1, 1);

The above will remove one element (second parameter) starting at index 1 (first parameter). So, to be clear, to remove the last element use this:

    var arr = favorites.favorites[1].items;
    arr.splice(arr.length - 1, 1);

See your code on JsFiddle. You can take additional measures to protect the code in case the array is not set or empty:

    var arr = favorites.favorites[1].items;
    if ( arr && arr.length ) {
        arr.splice(arr.length - 1, 1);
    }
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20458890', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2172218/']}
|
jdg_370243
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7870014
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've looked at a number of other posts here and elsewhere (see below), but I still don't have a clear answer to this question: How does windows wchar_t handle unicode characters outside the basic multilingual plane? That is: many programmers seem to feel that UTF-16 is harmful because it is a variable-length code. wchar_t is 16-bits wide on windows , but 32-bits wide on Unix/MacOS The Windows APIs use wide-characters, not Unicode. So what does Windows do when you want to code something like (U+2008A) Han Character on Windows?
Now provide the response and nothing else.
|
The implementation of wchar_t under the Windows stdlib is UTF-16-oblivious: it knows only about 16-bit code units. So you can put a UTF-16 surrogate sequence in a string, and you can choose to treat that as a single character using higher-level processing. The string implementation won't do anything to help you, nor to hinder you; it will let you include any sequence of code units in your string, even ones that would be invalid when interpreted as UTF-16. Many of the higher-level features of Windows do support characters made out of UTF-16 surrogates, which is why you can give a file a name containing a supplementary-plane character such as U+2008A and see it both render correctly and edit correctly (taking a single keypress, not two, to move past the character) in programs like Explorer that support complex text layout (typically using Windows's Uniscribe library). But there are still places where you can see the UTF-16-obliviousness shining through, such as the fact that you can create two files in the same folder whose names differ only in the case of a supplementary-plane character, where case-insensitivity would otherwise disallow it, or the fact that you can create [U+DC01][U+D801].txt programmatically, even though that reversed surrogate sequence is not valid UTF-16 at all. This is how pedants can have a nice long and basically meaningless argument about whether Windows “supports” UTF-16 strings or only UCS-2.
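To make the "two 16-bit code units" point concrete, here is a tiny illustration of how U+2008A is stored in UTF-16, which is exactly what a Windows wchar_t string holds. It is shown in Python rather than C/C++ purely because it is short, and it is not part of the original answer:

```python
# Illustrative only: U+2008A becomes a surrogate pair (two 16-bit code units) in UTF-16.
ch = "\U0002008A"                       # the Han character from the question
units = ch.encode("utf-16-le")          # little-endian UTF-16, as used by Windows
print(len(ch), "code point ->", len(units) // 2, "UTF-16 code units")
print([hex(int.from_bytes(units[i:i+2], "little")) for i in range(0, len(units), 2)])
# -> ['0xd840', '0xdc8a']
```

Those same two units, 0xD840 followed by 0xDC8A, are what would sit in two consecutive wchar_t elements on Windows.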
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7870014', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/51167/']}
|
jdg_370244
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12950
|
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am building a text categorizer for short sentences. In addition to telling the user "the category of the text you entered is C", I want to be able to explain why I made this decision, in a short and understandable way. For example, I don't want to tell the user "I put your sentence into a complex 3-layered neural network and that's the answer that scored the best"; I want explanations such as "Your sentence contains the words U, V and W, that are characteristic of this category, because of sentences such as X, Y and Z that appeared in the training data". My question is: what classification algorithms are best suited for such application? k-nearest-neighbours seems like a good candidate, because I can tell the user "Your sentence has category C because it is similar to sentences X, Y and Z that have the same category. But its performance on text categorization problems is known to be poor. I am looking for a classifie that balances performance with explanation ability. EDIT: After spending a lot of time looking for such a classifier, I started to build a machine-learning library called limdu , that allows the classifiers to explain their decisions. It is still under development, but, it has already helped me explain to myself and my colleagues why our classifiers fail so often...
Now provide the response and nothing else.
|
Although it performs poorly on text classification tasks, if you want a clear explanation of the classification reasoning, a decision tree would be your best option, as it provides clear rules for how an instance is classified. A trace of the tree would provide the determining values, and by looking at which attributes are tested close to the root node, you might even be able to determine which attributes were most significant in the classification. Another option would be to use a Naive Bayes classifier, which performs better for text mining tasks and would give you a comprehensible evaluation scheme based on probabilities that you could use to generate the "why" you're looking for, even providing an indication of which attribute values were more important than others in making the determination. Support Vector Machines, specifically Sequential Minimal Optimization (SMO), seem to work well with these tasks and likewise provide an indication of the values used to make the classification in the form of attribute weights and support vectors, but I think you may have a harder time making explicit how these weights and vectors influenced the overall classification decision. Good luck with your project! Sounds very interesting.
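As an illustration of the Naive Bayes suggestion, here is a minimal sketch using scikit-learn; the toy sentences, the labels and the explain helper are invented for the example and are not from the original answer:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: short sentences with category labels.
train_texts = ["great fast shipping", "broken on arrival", "fast delivery", "arrived damaged"]
train_labels = ["positive", "negative", "positive", "negative"]

vec = CountVectorizer()
X = vec.fit_transform(train_texts)
clf = MultinomialNB().fit(X, train_labels)

def explain(sentence, top_k=3):
    """Return the predicted category and the words that weigh most heavily for it."""
    x = vec.transform([sentence])
    pred = clf.predict(x)[0]
    class_idx = list(clf.classes_).index(pred)
    # Per-word log-probabilities for the predicted class: the highest-weighted
    # words are the "characteristic of this category" part of the explanation.
    words = vec.inverse_transform(x)[0]
    weights = {w: clf.feature_log_prob_[class_idx, vec.vocabulary_[w]] for w in words}
    return pred, sorted(weights, key=weights.get, reverse=True)[:top_k]

print(explain("fast shipping but box arrived damaged"))
```

The words returned by explain are the ones with the largest per-class log-probabilities, which maps directly onto the "your sentence contains the words U, V and W that are characteristic of this category" style of explanation the question asks for.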
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/12950', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/1342/']}
|
jdg_370245
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
46645477
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I see the paramter npartitions in many functions, but I don't understand what it is good for / used for. http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.read_csv head(...) Elements are only taken from the first npartitions, with a default of 1. If there are fewer than n rows in the first npartitions a warning will be raised and any found rows returned. Pass -1 to use all partitions. repartition(...) Number of partitions of output, must be less than npartitions of input. Only used if divisions isn’t specified. Is the number of partitions probably 5 in this case: (Image source: http://dask.pydata.org/en/latest/dataframe-overview.html )
Now provide the response and nothing else.
|
The npartitions property is the number of Pandas dataframes that compose a single Dask dataframe. This affects performance in two main ways. If you don't have enough partitions then you may not be able to use all of your cores effectively. For example if your dask.dataframe has only one partition then only one core can operate at a time. If you have too many partitions then the scheduler may incur a lot of overhead deciding where to compute each task. Generally you want a few times more partitions than you have cores. Every task takes up a few hundred microseconds in the scheduler. You can determine the number of partitions either at data ingestion time using the parameters like blocksize= in read_csv(...) or afterwards by using the .repartition(...) method.
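A minimal sketch of the two places mentioned above where the partition count is usually set (the file path and sizes here are made up for illustration):

```python
import dask.dataframe as dd

# At ingestion time: blocksize controls how many partitions read_csv creates.
df = dd.read_csv("data/trades-*.csv", blocksize="64MB")
print(df.npartitions)          # roughly one partition per 64MB chunk of input

# Afterwards: repartition toward "a few times the number of cores".
df = df.repartition(npartitions=16)
print(df.npartitions)          # 16
```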
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/46645477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/562769/']}
|
jdg_370246
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16738581
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to get some basic data passed from PHP to Flash. From reading on the topic I understand the best way is to create XML with PHP then read it in Flash. I am trying to start out simple so here is my PHP code: <?php header('Content-Type: text/xml'); echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"; echo "<userData>"; echo "<firstName>John</firstName>"; echo "<lastName>Smith</lastName>"; echo "</userData>";?> And here is my Flash code: var xml:XML = new XML();var url:URLRequest = new URLRequest("data.php");var loader:URLLoader = new URLLoader(url);loader.addEventListener("complete", xmlLoaded);function xmlLoaded(event:Event):void{ xml = XML(loader.data); trace("Data loaded."); trace (loader.data);}; I've seen some tutorials that use this approach and it works however in Flash I receive this error: TypeError: Error #1088: The markup in the document following the root element must be well-formed. Can anyone determine why I get this error or provide another way of doing this?
Now provide the response and nothing else.
|
A great deal here depends on how many Boolean values you're working with. Both bitset and vector<bool> normally use a packed representation where a Boolean is stored as only a single bit. On one hand, that imposes some overhead in the form of bit manipulation to access a single value. On the other hand, that also means many more of your Booleans will fit in your cache. If you're using a lot of Booleans (e.g., implementing a sieve of Eratosthenes) fitting more of them in the cache will almost always end up a net gain. The reduction in memory use will gain you a lot more than the bit manipulation loses. Most of the arguments against std::vector<bool> come back to the fact that it is not a standard container (i.e., it does not meet the requirements for a container). IMO, this is mostly a question of expectations -- since it says vector, many people expect it to be a container (other types of vectors are), and they often react negatively to the fact that vector<bool> isn't a container. If you're using the vector in a way that really requires it to be a container, then you probably want to use some other combination -- either deque<bool> or vector<char> can work fine. Think before you do that though -- there's a lot of (lousy, IMO) advice that vector<bool> should be avoided in general, with little or no explanation of why it should be avoided at all, or under what circumstances it makes a real difference to you. Yes, there are situations where something else will work better. If you're in one of those situations, using something else is clearly a good idea. But, be sure you're really in one of those situations first. Anybody who tells you (for example) that "Herb says you should use vector<char>" without a lot of explanation about the tradeoffs involved should not be trusted.

Let's give a real example. Since it was mentioned in the comments, let's consider the Sieve of Eratosthenes:

    #include <vector>
    #include <iostream>
    #include <iterator>
    #include <chrono>

    unsigned long primes = 0;

    template <class bool_t>
    unsigned long sieve(unsigned max) {
        std::vector<bool_t> sieve(max, false);
        sieve[0] = sieve[1] = true;
        for (int i = 2; i < max; i++) {
            if (!sieve[i]) {
                ++primes;
                for (int temp = 2 * i; temp < max; temp += i)
                    sieve[temp] = true;
            }
        }
        return primes;
    }

    // Warning: auto return type will fail with older compilers
    // Fine with g++ 5.1 and VC++ 2015 though.
    //
    template <class F>
    auto timer(F f, int max) {
        auto start = std::chrono::high_resolution_clock::now();
        primes += f(max);
        auto stop = std::chrono::high_resolution_clock::now();
        return stop - start;
    }

    int main() {
        using namespace std::chrono;

        unsigned number = 100000000;

        auto using_bool = timer(sieve<bool>, number);
        auto using_char = timer(sieve<char>, number);

        std::cout << "ignore: " << primes << "\n";
        std::cout << "Time using bool: " << duration_cast<milliseconds>(using_bool).count() << "\n";
        std::cout << "Time using char: " << duration_cast<milliseconds>(using_char).count() << "\n";
    }

We've used a large enough array that we can expect a large portion of it to occupy main memory. I've also gone to a little pain to ensure that the only thing that changes between one invocation and the other is the use of a vector<char> vs. vector<bool>. Here are some results. First with VC++ 2015:

    ignore: 34568730
    Time using bool: 2623
    Time using char: 3108

...then the time using g++ 5.1:

    ignore: 34568730
    Time using bool: 2359
    Time using char: 3116

Obviously, the vector<bool> wins in both cases--by around 15% with VC++, and over 30% with gcc.

Also note that in this case, I've chosen the size to show vector<char> in quite favorable light. If, for example, I reduce number from 100000000 to 10000000, the time differential becomes much larger:

    ignore: 3987474
    Time using bool: 72
    Time using char: 249

Although I haven't done a lot of work to confirm, I'd guess that in this case, the version using vector<bool> is saving enough space that the array fits entirely in the cache, while the vector<char> is large enough to overflow the cache, and involve a great deal of main memory access.
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/16738581', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2382314/']}
|
jdg_370247
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12657121
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an NSManagedObject for the sections in the grouped UITableView.This object has the attributes "name" and "createdAt".I want to use "name" in te UI for the section titles, but sorted by "createdAt".According to the documentation the first sortDescriptor key has to be also the sectionNameKeyPath of the NSFetchedResultsController. I suggested using two sortDescriptors, but it doesn't work. The sections are still sorted by name. - (NSFetchedResultsController *)fetchedResultsController { if (_fetchedResultsController != nil) { return _fetchedResultsController; } NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Object" inManagedObjectContext:[CoreDataHelper instance].managedObjectContext]; [fetchRequest setEntity:entity]; NSSortDescriptor *sortName = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; NSSortDescriptor *sortDate = [[NSSortDescriptor alloc] initWithKey:@"createdAt" ascending:YES]; [fetchRequest setSortDescriptors:[NSArray arrayWithObjects:sortName, sortDate, nil]]; [fetchRequest setFetchBatchSize:20]; NSFetchedResultsController *theFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:[CoreDataHelper instance].managedObjectContext sectionNameKeyPath:@"name" cacheName:@"Root"]; self.fetchedResultsController = theFetchedResultsController; return _fetchedResultsController;}
Now provide the response and nothing else.
|
A fetched results controller (FRC) uses only the first sort descriptor to group (and sort) the objects into sections. A second sort descriptor can be added to sort the objects within each section. Also, the key path of that first sort descriptor must be the same as the sectionNameKeyPath of the FRC (or at least generate the same relative ordering). See also Creating a Fetched Results Controller in the “Core Data Programming Guide”:

... In this example you add one more NSSortDescriptor instance to the NSFetchRequest instance. You set the same key from that new sort descriptor as the sectionNameKeyPath on the initialization of the NSFetchedResultsController. The fetched results controller uses this initial sort descriptor to break apart the data into multiple sections and therefore requires that the keys match.

In your case, you can proceed as follows:

Use createdAt as sectionNameKeyPath and in the first sort descriptor.

Modify the titleForHeaderInSection delegate function to return the name property instead of createdAt:

    - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
        id <NSFetchedResultsSectionInfo> sectionInfo = [[self.controller sections] objectAtIndex:section];
        return [[[sectionInfo objects] objectAtIndex:0] name];
    }

Note: If you have multiple objects with the same name but different createdAt values, these will be grouped into different sections with the above approach. I don't know if that is a problem for you.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12657121', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/470964/']}
|
jdg_370248
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11768615
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Tomcat webapp which does some pretty memory and CPU-intensive tasks on the behalf of clients. This is normal and is the desired functionality. However, when I run Tomcat, memory usage skyrockets over time to upwards of 4.0GB at which time I usually kill the process as it's messing with everything else running on my development machine: I thought I had inadvertently introduced a memory leak with my code, but after checking into it with VisualVM, I'm seeing a different story: VisualVM is showing the heap as taking up approximately a GB of RAM, which is what I set it to do with CATALINA_OPTS="-Xms256m -Xmx1024" . Why is my system seeing this process as taking up a ton of memory when according to VisualVM, it's taking up hardly any at all? After a bit of further sniffing around, I'm noticing that if multiple jobs are running simultaneously in the applications, memory does not get freed. However, if I wait for each job to complete before submitting another to my BlockingQueue serviced by an ExecutorService , then memory is recycled effectively. How can I debug this? Why would garbage collection/memory reuse differ?
Now provide the response and nothing else.
|
You can't control what you want to control: -Xmx only controls the Java Heap; it doesn't control consumption of native memory by the JVM, which is consumed completely differently based on implementation. VisualVM is only showing you what the Heap is consuming; it doesn't show what the entire JVM is consuming as native memory as an OS process. You will have to use OS level tools to see that, and they will report radically different numbers, usually much much larger than anything VisualVM reports, because the JVM uses up native memory in an entirely different way.

From the following article, Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):

Maintaining the heap and garbage collector use native memory you can't control.

More native memory is required to maintain the state of the memory-management system maintaining the Java heap. Data structures must be allocated to track free storage and record progress when collecting garbage. The exact size and nature of these data structures varies with implementation, but many are proportional to the size of the heap.

and the JIT compiler uses native memory just like javac would

Bytecode compilation uses native memory (in the same way that a static compiler such as gcc requires memory to run), but both the input (the bytecode) and the output (the executable code) from the JIT must also be stored in native memory. Java applications that contain many JIT-compiled methods use more native memory than smaller applications.

and then you have the classloader(s), which use native memory

Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String) and may use third-party libraries. These classes need to be stored in memory for as long as they are being used. How classes are stored varies by implementation.

I won't even start quoting the section on Threads; I think you get the idea that -Xmx doesn't control what you think it controls. It controls the JVM heap, not everything goes in the JVM heap, and the heap takes up way more native memory than what you specify for management and book keeping.

Plain and simple: the JVM uses more memory than what is supplied in -Xms and -Xmx and the other command line parameters. Here is a very detailed article on how the JVM allocates and manages memory; it isn't as simple as what you expect based on the assumptions in your question, and it is well worth a comprehensive read.

ThreadStack size in many implementations has minimum limits that vary by Operating System and sometimes JVM version; the threadstack setting is ignored if you set the limit below the native OS limit for the JVM or the OS (ulimit on *nix has to be set instead sometimes). Other command line options work the same way, silently defaulting to higher values when too-small values are supplied. Don't assume that all the values passed in represent what is actually used.

The Classloaders, and Tomcat has more than one, eat up lots of memory that isn't documented easily. The JIT eats up a lot of memory, trading space for time, which is a good trade-off most of the time.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11768615', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/128967/']}
|
jdg_370249
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59213196
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Dart offers a FutureOr class, that allows writing: FutureOr<int> future;future = 42; // validfuture = Future.value(42); // also valid I would assume that FutureOr would be useful to remove the unnecessary delay caused by the event loop if the value can be read synchronously. But that doesn't seem to be the case, as showcased by: import 'dart:async';void main() async { print('START'); futureOrExample(); print('END');}void futureOrExample() async { FutureOr<int> futureOr = 42; print('before await'); await futureOr; print('end await');} which prints: STARTbefore awaitENDend await when I would expect: STARTbefore awaitend awaitEND In that case, why does FutureOr (or more generally await 42 ) work this way? Similarly, what's the purpose of FutureOr in that situation since it produces the same result as Future ? I know that I could use SynchronousFuture to achieve the desired result, but I'm just trying to understand what's the use of FutureOr .
Now provide the response and nothing else.
|
The use of FutureOr , as introduced with Dart 2, is to allow you to provide either a value or a future at a point where the existing Dart 1 API allowed the same thing for convenience, only in a way that can be statically typed. The canonical example is Future.then . The signature on Future<T> is Future<R> then<R>(FutureOr<R> action(T value), {Function onError}) . The idea is that you can have an action on the future's value which is either synchronous or asynchronous. Originally there was a then function which took a synchronous callback and a chain function which took an asynchronous callback, but that was highly annoying to work with, and in good Dart 1 style, the API was reduced to one then method which took a function returning dynamic , and then it checked whether it was a future or not. In Dart 1 it was easy to allow you to return either a value or a future. Dart 2 was not as lenient, so the FutureOr type was introduced to allow the existing API to keep working. If we had written the API from scratch, we'd probably have done something else, but migrating the existing asynchronous code base to something completely different was not an option, so the FutureOr type was introduced as a type-level hack. The await operation was also originally defined to work on any object, long before FutureOr existed. For consistency and smaller code, an await e where e evaluated to a non-future would wrap that value in a future and await that. It means that there is only one quick and reusable check on a value (is it a future, if not wrap it), and then the remaining code is the same. There is only one code-path. If the await worked synchronously on non- Future values, there would have to be a synchronous code path running through the await , as well as an asynchronous path waiting for a future. That would potentially double the code size, for example when compiling to JavaScript (or worse, if there were more await s in the same control flow, you could get exponential blow-up for a naive implementation). Even if you avoided that by just calling the continuation function synchronously, it would likely be confusing to some readers that an await would not introduce an asynchronous gap. A mistake around that can cause race conditions or things happening in the wrong order. So, the original design, predating FutureOr , was to make all await operations actually wait. The introduction of FutureOr did not change this reasoning, and even if it did, it would now be a breaking change to not wait in places where people expect their code to actually give time for other microtasks to run.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59213196', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8394265/']}
|
jdg_370250
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2677177
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to know my error in this problem. Find the complex number such that:$$ z+|z|=2+8i$$So far, I have:$$\begin{split}a+bi+\sqrt{a^2+b^2} &= 2 + 8i\\a^2-b^2+a^2+b^2&=4-64\\2a^2 -b^2 + b^2&=-60\\a^2&=-30\end{split}$$ But I should end up with$$a^2=-15$$ No matter how hard I try, I can't seem to find what I did wrong. Any suggestions?
Now provide the response and nothing else.
|
I would go about this differently. Since $|z| \in \mathbb{R}$, you know that $b=8$ immediately since $bi$ is the only imaginary term on the left and $8i$ - on the right. Now the only thing is to find $a$... UPDATE We have the equation $$a + \sqrt{a^2+64} = 2$$ (hence $a<0$), which implies $$\sqrt{a^2+64} = 2-a$$ and now squaring will yield the desired result.
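Carrying out that squaring step explicitly (this just completes the hint above):

$$\sqrt{a^2+64}=2-a\;\Longrightarrow\;a^2+64=4-4a+a^2\;\Longrightarrow\;-4a=60\;\Longrightarrow\;a=-15,$$

so $z=-15+8i$; as a check, $|z|=\sqrt{225+64}=17$ and $z+|z|=-15+8i+17=2+8i$ as required.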
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2677177', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/409277/']}
|
jdg_370251
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1321137
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I realize that this question may have been asked several times in the past, but I am going to continue regardless. I have a program that is going to get a string of numbers from keyboard input. The numbers will always be in the form "66 33 9" Essentially, every number is separated with a space, and the user input will always contain a different amount of numbers. I'm aware that using 'sscanf' would work if the amount of numbers in every user-entered string was constant, but this is not the case for me. Also, because I'm new to C++, I'd prefer dealing with 'string' variables rather than arrays of chars.
Now provide the response and nothing else.
|
I assume you want to read an entire line, and parse that as input. So, first grab the line:

    std::string input;
    std::getline(std::cin, input);

Now put that in a stringstream:

    std::stringstream stream(input);

and parse:

    while (1) {
        int n;
        stream >> n;
        if (!stream)
            break;
        std::cout << "Found integer: " << n << "\n";
    }

Remember to include:

    #include <string>
    #include <sstream>
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1321137', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/155726/']}
|
jdg_370252
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1600051
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $x_1,x_2,\ldots,x_n$ are real numbers larger than $1$, prove that $$\dfrac{1}{1+x_1}+\dfrac{1}{1+x_2}+\cdots+\dfrac{1}{1+x_n} \geq \dfrac{n}{\sqrt[n]{x_1x_2\cdots x_n}+1}$$ Attempt AM-GM doesn't work here since we will get an upper bound. I don't see Cauchy-Schwarz working either. Thus, I think a substitution might work, but I am unsure of which one to use.
Now provide the response and nothing else.
|
Consider the function $f(x)=\frac{1}{1+e^x}$ which is convex for $x>0$ . Now use Jensen's inequality : $$f( \ln x_1)+f( \ln x_2)+\ldots+f( \ln x_n) \geq n f \left (\frac{\ln x_1+\ln x_2+\ldots+\ln x_n}{n} \right)$$ This is exactly your inequality : $$\frac{1}{1+x_1}+\frac{1}{1+x_2}+\ldots+\frac{1}{1+x_n} \geq \frac{n}{1+\sqrt[n]{x_1x_2\ldots x_n}}$$
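For completeness, the convexity claim can be checked directly (a routine computation, not part of the original answer): with $f(x)=\dfrac{1}{1+e^x}$,

$$f''(x)=\frac{e^x\,(e^x-1)}{(1+e^x)^3}\;\ge\;0\quad\text{for }x\ge 0,$$

and since each $x_i>1$ we have $\ln x_i>0$, so Jensen's inequality applies on exactly the region where $f$ is convex.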
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1600051', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/300621/']}
|
jdg_370253
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
191118
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to write a function to add numbers from 1 through h: function[h_]:= x=0; For[i=1, i=<h, i++, x = x + i]; Print[x] But I am getting some strange and inconsistent results. Can someone point out what is wrong here?
Now provide the response and nothing else.
|
Put the expressions on the right-hand side in parentheses:

    function[h_] := (x = 0; For[i = 1, i <= h, i++, x = x + i]; Print[x])
    function[10]

    55

Without the parentheses, you are defining function as

    function[h_] := x = 0;

and the remaining expressions are not part of the definition of function. As mentioned by m_goldberg, there are better ways to define such a function. In addition to the ones in m_goldberg's answer, you can also use

    ClearAll[function]
    function[h_] := h (h + 1) / 2
    function[10]

    55
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/191118', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/62817/']}
|
jdg_370254
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2726343
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to port this question to Python (Windows + Linux + Mac Os) How to create ASCII animation in Windows Console application using C#? Thank you!
Now provide the response and nothing else.
|
I just ported my example with the animated gif to ASCII animation from my answer here to python. You will need to install the pyglet library from here, as python unfortunately has no built-in animated-gif support. Hope you like it :)

    import pyglet, sys, os, time

    def animgif_to_ASCII_animation(animated_gif_path):
        # map greyscale to characters
        chars = ('#', '#', '@', '%', '=', '+', '*', ':', '-', '.', ' ')
        clear_console = 'clear' if os.name == 'posix' else 'CLS'

        # load image
        anim = pyglet.image.load_animation(animated_gif_path)

        # Step through forever, frame by frame
        while True:
            for frame in anim.frames:
                # Gets a list of luminance ('L') values of the current frame
                data = frame.image.get_data('L', frame.image.width)

                # Built up the string, by translating luminance values to characters
                outstr = ''
                for (i, pixel) in enumerate(data):
                    outstr += chars[(ord(pixel) * (len(chars) - 1)) / 255] + \
                              ('\n' if (i + 1) % frame.image.width == 0 else '')

                # Clear the console
                os.system(clear_console)

                # Write the current frame on stdout and sleep
                sys.stdout.write(outstr)
                sys.stdout.flush()
                time.sleep(0.1)

    # run the animation based on some animated gif
    animgif_to_ASCII_animation(u'C:\\some_animated_gif.gif')
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2726343', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/231677/']}
|
jdg_370255
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
446840
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was given an interesting dilemma today. A co-worker saw me adding a liquid (Diisopropyl ethylamine AKA DIPEA) to a flask filled with another liquid (Tetrahydrofuran AKA THF). I needed to weigh out exactly 5 grams of DIPEA into the THF and so I zero'd the scale with the flask+THF on it, then proceeded to add the DIPEA until the scale said 5.000g. Since masses are additive I assumed this was fine. My co-worker, however, stopped and told me that although masses of two liquids are additive, the combined weights would not be, and since the scale measures weight as opposed to mass I had apparently just added an incorrect amount of DIPEA. He explained the reasoning to me but I'm a chemist, not a physicist and certainly not skilled in fluid mechanics, so I would like someone to dumb it down for me a bit or tell me if I'm way off. From what I understand, the scale measures weight which is a function of gravitational force. But gravitational force is a function of buoyant force (its less if the buoyant force is greater since the buoyant force pushed a liquid up). Finally, buoyant force is a function of density. This means that my THF (which had a density of .9 g/ml) had a greater buoyant force than my THF/DIPEA solution (DIPEA density is only .74 g/ml so the solution would be somewhere between .74 and .90). And this means that technically as I'm adding DIPEA, the added mass is not the only thing causing the weight to increase; but rather the decreased buoyant force is also causing that. And so, when the scale finally read 5.000g, I had possibly only added 4.950 or maybe 4.990 etc (something less than 5.000). Is my reasoning correct? Any help is appreciated.
Now provide the response and nothing else.
|
Of course, by common sense, if you put together two objects with masses $m_1$ and $m_2$ , and nothing comes out, then you end up with mass $m_1 + m_2$ . Weights are a little more complicated because of buoyant forces. All objects on Earth continuously experience a buoyant force from the volume of the air they displace. This doesn't matter as long as volume is conserved: if you stack two solid blocks their weights add because the total buoyant force is the same as before. But when you mix two liquids the total buoyant force can change, because the volume of the mixed liquid might not be equal to the sum of the individual volumes. To estimate this effect, let's say (generously) that mixing two liquids might result in a change of total volume of $10\%$ . The density of air is about $0.1\%$ that of a typical liquid. So the error of this effect will be, at most, around $0.01\%$ , which is irrelevant. Thus we can conclude that, rather than trying to help, your coworker just wanted to feel smug for a second.
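Putting rough numbers on that estimate (order-of-magnitude values only, not measurements):

$$\frac{\Delta W}{W}\;\approx\;\frac{\rho_{\text{air}}}{\rho_{\text{liquid}}}\cdot\frac{\Delta V}{V}\;\approx\;\bigl(10^{-3}\bigr)\bigl(10^{-1}\bigr)\;=\;10^{-4},$$

so even on a 5 g addition the buoyancy shift is on the order of 0.5 mg, far below anything that matters when weighing out a reagent.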
|
{}
|
{'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/446840', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/131809/']}
|
jdg_370256
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10413795
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to download a file from a server using System.Web.It actually works, but some links give me trouble. The links look like this: http://cdn.somesite.com/r1KH3Z%2FaMY6kLQ9Y4nVxYtlfrcewvKO9HLTCUBjU8IBAYnA3vzE1LGrkqMrR9Nh3jTMVFZzC7mxMBeNK5uY3nx5K0MjUaegM3crVpFNGk6a6TW6NJ3hnlvFuaugE65SQ4yM5754BM%2BLagqYvwvLAhG3DKU9SGUI54UAq3dwMDU%2BMl9lUO18hJF3OtzKiQfrC/the_file.ext The code looks basically like this: HttpWebRequest request = (HttpWebRequest)WebRequest.Create(link);WebResponse response = request.getResponse(); getResponse() always throws an exception (Error 400 Bad Request).However, I know the link works because I can download the file with Firefox without problems. I also tried to decode the link with Uri.UnescapeDataString(link), but that link wont even work in Firefox. Other links work perfectly fine this way.. just these won't work. Edit: Okay, i found something out using wireshark: If i open the link using Firefox, this is sent: &ME3@"dM*PNyAo PA:]GET /r1KH3Z%2FaMY6kLQ9Y4nVxYp5DyNc49t5kJBybvjbcsJJZ0IUJBtBWCgri3zfTERQught6S8ws1a%2BCo0RS5w3KTmbL7i5yytRpn2QELEPUXZTGYWbAg5eyGO2yIIbmGOcFP41WdrFRFcfk4hAIyZ7rs4QgbudzcrJivrAaOTYkEnozqmdoSCCY8yb1i22YtEAV/epd_outpost_12adb.flv HTTP/1.1Host: cdn.somesite.comUser-Agent: Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3Accept-Encoding: gzip, deflateConnection: keep-alive I think only the first line is the problem, because WebRequest.Create(link) decodes the url: &MEz.@!dM/nP9@~P>.GET /r1KH3Z/aMY6kLQ9Y4nVxYp5DyNc49t5kJBybvjbcsJJZ0IUJBtBWCgri3zfTERQught6S8ws1a%2BCo0RS5w3KTmbL7i5yytRpn2QELEPUXZTGYWbAg5eyGO2yIIbmGOcFP41WdrFRFcfk4hAIyZ7rs6Mmh1EsQQ4vJVYUwtbLBDNx9AwCHlWDfzfSWIHzaaIo/epd_outpost_12adb.flv HTTP/1.1User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:12.0) Gecko/20100101 Firefox/12.0Host: cdn.somesite.com ( %2F is replaced with / ) Another edit: I found out that the Uri class decodes the url automatically: Uri uri = new Uri(link); //link is not decoded Debug.WriteLine(uri.ToString()); //link is decoded here. How can I prevent this? Thanks in advance for your help.
Now provide the response and nothing else.
|
By default, the Uri class will not allow an escaped / character (%2f) in a URI (even though this appears to be legal in my reading of RFC 3986).

    Uri uri = new Uri("http://example.com/embed%2fded");
    Console.WriteLine(uri.AbsoluteUri);  // prints: http://example.com/embed/ded

(Note: don't use Uri.ToString to print URIs.) According to the bug report for this issue on Microsoft Connect, this behaviour is by design, but you can work around it by adding the following to your app.config or web.config file:

    <uri>
      <schemeSettings>
        <add name="http" genericUriParserOptions="DontUnescapePathDotsAndSlashes" />
      </schemeSettings>
    </uri>

(Since WebRequest.Create(string) just delegates to WebRequest.Create(Uri), you would need to use this workaround no matter which method you call.)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10413795', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1064622/']}
|
jdg_370257
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3886014
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$\lim_{x\to\infty}\frac{\ln(x+e^x+e^{2x})}{x}$$ In the book I am reading, evaluated the limit with L'Hopital rule like this: $$\lim_{x\to\infty}\frac{\ln(x+e^x+e^{2x})}{x}=\frac{\infty}{\infty}\rightarrow \lim_{x\to\infty}\frac{1+e^x+2e^{2x}}{x+e^x+e^{2x}} $$ Then it used equivalence and write $\lim_{x\to\infty}\cfrac{2e^{2x}}{e^2x}=2$ . My approach to evaluate $\lim_{x\to\infty}\frac{\ln(x+e^x+e^{2x})}{x}$ is different: in the numerator of the fraction and inside the $\ln()$ the function $e^{2x}$ goes faster to infinity than $x$ , $e^x$ . therefore we can ignore these two: $$\lim_{x\to\infty}\frac{\ln(e^{2x})}{x}=\lim_{x\to\infty}\frac{2x}{x}=2$$ Is my approach right? and was it really necessary to use L'Hopital Rule?
Now provide the response and nothing else.
|
Your sentence in the comment does not quite make sense: "It is obvious I can write $x$ equals to a polynomial with degree $n$ ". But on the other hand, suppose you know that $x$ is a root of a monic polynomial over $A$ , which means that $x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0$ where the lower term coefficients $a_0,...,a_{n-1}$ are all in $A$ . Then it is obvious that you can write $x^n$ equal to a polynomial of degree $\le n-1$ , namely $$x^n = - a_{n-1} x^{n-1} - ... - a_1 x - a_0$$ And then an easy induction shows for each $k \ge n$ that $x^k$ is also equal to a polynomial of degree $\le n-1$ . For instance, in the next step of the induction multiply both sides of that equation by $x$ , so the new equation has $x^{n+1}$ on the left hand side, and the leading term on the right hand side will be $-a_{n-1} x^n$ , and then use the above equation to substitute for $x^n$ on the right hand side. It follows that $A[x]$ is finitely generated over $A$ , the generating set being $1,x,....,x^{n-1}$ .
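Written out, the induction step described above is just

$$x^{n+1}=x\cdot x^{n}=-a_{n-1}x^{n}-a_{n-2}x^{n-1}-\cdots-a_{1}x^{2}-a_{0}x,$$

and substituting the earlier expression for $x^{n}$ into the right-hand side again yields a polynomial in $x$ of degree at most $n-1$ with coefficients in $A$.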
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3886014', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/717650/']}
|
jdg_370258
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2377887
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use instruments to see memory leaks. At least in one scenario where I am constantly flicking through slides/pages (inside UIScrollView) I don't see any memory leak. Using instruments - under "Allocation lifespan" I switch to view "Created & Still Living" and see memory around 1.17MB throughout. I assume this means my app is using only this much actual memory and rest is being properly recycled. Yet after flicking through 100 or so pages, I get a memory warning and then couple of my views are unloaded resulting in a crash of the whole app. If I am not using a lot of memory and don't have memory leak why did I receive memory warning ? Since there is nothing I can really release I don't see a way to avoid the crash. Anyone experienced this situation or know what can I do ? Am I misinterpreting anything from Instruments ? Thanks much for any comment.
Now provide the response and nothing else.
|
The documentation says: If a matching image object is not already in the cache, this method loads the image data from the specified file, caches it, and then returns the resulting object. Thus, every image you load by using imageNamed: will continue to exist in the cache after you release your last ownership of it. Instruments doesn't show this as a leak because, strictly speaking, it isn't one: Something (UIImage) still knows about these images. The Leaks instrument will only show a leak for an object that exists but nothing knows about. You can still see this in Instruments, though. Select the ObjectAlloc instrument in your trace document and sort the list of classes by current count, or by current total size. You'll see that the bulk of the memory is occupied by UIImage objects. If you mouse over the class-name column for the UIImage row, you'll see a ➲ (go-to-iTunes-Store) icon; if you click on that, you'll see a list of all of your UIImage instances. Then, if you mouse over the address column for an instance's row, you'll see the same button; this time, clicking on it will take you to the history of that address, including all creations, retentions, releases, and deallocations of objects with that address. Here, you can see the image's allocation (within the UIImage class, ordered by you a few stack-frames down), retention (by you), and release (by you). You can also see that it has not been released by UIImage—the +[UIImage imageNamed:] cache still owns the image, hence the “leak”. If you don't want images to pile up like that, load them yourself using imageWithContentsOfFile: and the -[NSBundle pathForResource:ofType:] method . UPDATE: I have read that since iOS 3, UIImage will purge its cache in (at least some) low-memory situations, so this should not be as much of a “leak” as it used to be. You'll probably still see memory pile up, but then you should see the pile collapse eventually. If you still see memory piling up and can prove that it's Apple's fault, you should document your evidence and file a bug .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2377887', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/242165/']}
|
jdg_370259
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2602191
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
$$f(x) = x + \frac{2}{x}$$I tried to find the minimum of $f(x)$ with the Cauchy–Schwarz inequality,but I find different answers. First I suppose $a = (\sqrt{\frac{2}{x}} ,\sqrt x,0)$ and $b = (\sqrt x,\sqrt{\frac{2}{x}} ,0)$ and the minimum of function becomes $2\sqrt 2$. Second I suppose $a = (\sqrt{\frac{2}{x}} ,\sqrt x,0)$ and $b = (\sqrt x,0 ,\sqrt{\frac{2}{x}})$ and the minimum of function becomes $\sqrt 2$. I know the first answer is true and in the second condition the inequality doesn't have a problem but it's not the minimum of the function. Do we know from this we can't use the Cauchy–Schwarz inequality to find the minimum of a function?
Now provide the response and nothing else.
|
We can use the CS-inequality to find the min of this function. You have actually already done most of the work. Both of your results say "whatever value of $x$ you put in, $f$ can't be smaller than this", and they're both right. Now, to find the actual min, you have to remember when the CS inequality is an equality . That happens when the two vectors are parallel. In the second attempt, the two vectors can never be parallel, and as such, the function value can never actually get as low as $\sqrt 2$. However, in your first attempt, there is a value of $x$ that makes the two vectors parallel, and that's $x = \sqrt2$, making both $a$ and $b$ into $(\sqrt[4]2, \sqrt[4]2, 0)$ and thus parallel. So the lower bound you got in your first attempt is actually attainable, and therefore the true minimum value of the function.
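As a quick check that the bound from the first attempt is attained: at $x=\sqrt2$,

$$f(\sqrt2)=\sqrt2+\frac{2}{\sqrt2}=2\sqrt2,$$

which is exactly the Cauchy-Schwarz lower bound, so $2\sqrt2$ is indeed the minimum.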
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2602191', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/514621/']}
|
jdg_370260
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28564324
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The codes are below: scala> def f(x:Int => Unit):Unit = 1<console>:7: warning: a pure expression does nothing in statement position; you may be omitting necessary parentheses def f(x:Int => Unit):Unit = 1 ^f: (x: Int => Unit)Unitscala> f(_=>2);<console>:9: warning: a pure expression does nothing in statement position; you may be omitting necessary parentheses f(_=>2); ^scala> f(_=>List(1,2)); All three expressions above worked in REPL(with some warnings), but they look a bit confusing.. In the first expression, f 's return type is Unit , which is the subtype of AnyVal but not the Supertype of Int , therefore, I can't understand why 1 can be used as the returned value. In the second expression, _=>2 also used 2 instead of Unit as the returned value, which conflicts with the definition. In the third expression, _=> List(1,2) even uses List , subtype of AnyRef as the returned value, but the REPL still doesn't complain about this.. Does anyone have ideas about why Unit can tolerate the non-subtype type conversion here? Thanks!
Now provide the response and nothing else.
|
Scala will automatically insert () (the singleton Unit value) in this case to make the typechecking work. So what you have is equivalent to: def f(x:Int => Unit):Unit = { 1; () } This is known as "value discarding" in Scala. From the spec : Value Discarding If e has some value type and the expected type is Unit , e is converted to the expected type by embedding it in the term { e; () } Like in many programming languages, this is meant to facilitate just "throwing out" the return value of the expression. This allows you to make a method of type Unit that only uses the side effects of an expression.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/28564324', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1272683/']}
|
jdg_370261
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9390368
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to parse some huge JSON file (like http://eu.battle.net/auction-data/258993a3c6b974ef3e6f22ea6f822720/auctions.json ) using gson library ( http://code.google.com/p/google-gson/ ) in JAVA. I would like to know what is the best approch to parse this kind of big file (about 80k lines) and if you may know good API that can help me processing this. Some idea... read line by line and get rid of the JSON format: but that's nonsense. reduce the JSON file by splitting this file into many other: but I did not find any good Java API for this. use this file directlly as nonSql database, keep the file and use it as my database. I would really appreciate adices/ help/ messages/ :-) Thanks.
Now provide the response and nothing else.
|
You don't need to switch to Jackson. Gson 2.1 introduced a new TypeAdapter interface that permits mixed tree and streaming serialization and deserialization. The API is efficient and flexible. See Gson's Streaming doc for an example of combining tree and binding modes. This is strictly better than mixed streaming and tree modes; with binding you don't waste memory building an intermediate representation of your values. Like Jackson, Gson has APIs to recursively skip an unwanted value; Gson calls this skipValue() .
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/9390368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1136484/']}
|
jdg_370262
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3634
|
Below is a question asked on the forum bioinformatics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A common way to model RNA-seq data is using a negative binomial distribution, where each sample-gene pair is modeled by a different negative binomial distribution with mean $\mu_{ij}$ where $i$ and $j$ are indices for genes and samples, respectively (see article ). My question is, given we only have a single observation ($X_{ij}$) from each sample-gene pair, how come we can learn the $\mu_{ij}$ from that single observation? We need multiple observations to estimate a mean, right? Maybe I am missing something critical.
Now provide the response and nothing else.
|
To follow up on Devon Ryan's answer, I thought it would be a little fun to write a Python script that demonstrates using a bit array to maintain a presence/absence table. Note: I wrote a C++ port that includes a custom bitset implementation that can be sized at runtime. This and the Python script are available on Github: https://github.com/alexpreynolds/kmer-boolean #!/usr/bin/env pythonimport sysimport osimport bitarray# read FASTAdef read_sequences(): global seqs seqs = [] seq = "" for line in sys.stdin: if line.startswith('>'): if len(seq) > 0: seqs.append(seq) seq = "" else: seq += line.strip() seqs.append(seq)# build and initialize bit arraydef initialize_bitarray(): global ba ba = bitarray.bitarray(4**k) ba.setall(False) sys.stderr.write("Memory usage of bitarray.bitarray instance is [%ld] bytes\n" % (ba.buffer_info()[1]))# process sequencesdef process_sequences(): global observed_kmers observed_kmers = {} for seq in seqs: for i in range(0, len(seq)): kmer = seq[i:i+k] if len(kmer) == k: observed_kmers[kmer] = None idx = 0 for j in range(k-1, -1, -1): idx += 4**(k-j-1) * bm[kmer[j]] ba[idx] = Truedef test_bitarray(): test_idx = 0 for j in range(k-1, -1, -1): test_idx += 4**(k-j-1) * bm[test_kmer[j]] test_result = ba[test_idx] if test_result: sys.stdout.write("%s found\n" % (test_kmer)) sys.exit(os.EX_OK) else: sys.stdout.write("%s not found\n" % (test_kmer)) sys.exit(os.EX_DATAERR)def main(): global k k = int(sys.argv[1]) global bm bm = { 'A' : 0, 'C' : 1, 'T' : 2, 'G' : 3 } read_sequences() initialize_bitarray() process_sequences() try: global test_kmer test_kmer = sys.argv[2] if len(test_kmer) == k: test_bitarray() else: raise ValueError("test kmer (%s) should be of length k (%d)" % (test_kmer, k)) except IndexError as err: keys = list(observed_kmers.keys()) for i in range(0, len(keys)): sys.stdout.write("%s found\n" % (keys[i])) sys.exit(os.EX_OK)if __name__== "__main__": main() Note that this doesn't look at canonical kmers, e.g. , AG is considered a distinct 2mer from its reverse complement CT . To use this script, you pipe in your FASTA, specify the k , and an optional kmer that you want to test for presence/absence, e.g. : $ echo -e ">foo\nCATTCTC\nGGGAC\n>bar\nTTATAT\n>baz\nTTTATTAG\nACCTCT" | ./kmer-bool.py 2 CGMemory usage of bitarray.bitarray instance is [2] bytesCG found Or: $ echo -e ">foo\nCATTCTC\nGGGAC\n>bar\nTTATAT\n>baz\nTTTATTAG\nACCTCT" | ./kmer-bool.py 3 AAAMemory usage of bitarray.bitarray instance is [8] bytesAAA not found Or if the optional test kmer is left out: $ echo -e ">foo\nCATTCTC\nGGGAC\n>bar\nTTATAT\n>baz\nTTTATTAG\nACCTCT" | ./kmer-bool.py 5Memory usage of bitarray.bitarray instance is [128] bytesCATTC foundATTCT foundTTCTC foundTCTCG foundCTCGG foundTCGGG foundCGGGA foundGGGAC foundTTATA foundTATAT foundTTTAT foundTTATT foundTATTA foundATTAG foundTTAGA foundTAGAC foundAGACC foundGACCT foundACCTC foundCCTCT found Or for the ~67M kmers in a 13-mer set, for which a roughly 8.4MB bit array is reserved: $ echo -e ">foo\nCATTCTC\nGGGAC\n>bar\nTTATAT\n>baz\nTTTATTAG\nACCTCT" | ./kmer-bool.py 13Memory usage of bitarray.bitarray instance is [8388608] bytesTTTATTAGACCTC foundTTATTAGACCTCT found
|
{}
|
{'log_upvote_score': 4, 'links': ['https://bioinformatics.stackexchange.com/questions/3634', 'https://bioinformatics.stackexchange.com', 'https://bioinformatics.stackexchange.com/users/1331/']}
|
jdg_370263
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4016984
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The Hilbert cube is defined to be the countably infinite Cartesian product of the interval $[0,1]$ (or of anything homeomorphic to $[0,1]$). Why do we care about this object?
Now provide the response and nothing else.
|
Every compact metrisable space (of any dimension ) is homeomorphic to a closed subset of the Hilbert cube. Any separable metrisable space is homeomorphic to a subspace of the Hilbert cube. Every separable metrisable topological vector space is homeomorphic to the pseudo-interior $(0,1)^{\Bbb N} \simeq \Bbb R^{\Bbb N}$ of the Hilbert cube. There is a very nice theory of Z-sets and homeomorphisms of the Hilbert cube (see van Mill’s books on infinite-dimensional topology, or Bessaga and Pelczynski’s book for more on these theorems). It’s a fundamental object in infinite-dimensional topology. The hyperspace of any Peano continuum (in the Hausdorff metric) is homeomorphic to it too, e.g.
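A quick sketch of the classical embedding behind the second item (standard material, spelled out here only for convenience): pick a compatible metric $d\le 1$ on a separable metrisable space $X$ and a countable dense subset $\{x_n\}$; then $$f:X\to[0,1]^{\Bbb N},\qquad f(x)=\bigl(d(x,x_1),\,d(x,x_2),\,d(x,x_3),\dots\bigr)$$ is injective, continuous, and a homeomorphism onto its image, so $X$ embeds into the Hilbert cube (onto a closed subset when $X$ is compact).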
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4016984', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/881180/']}
|
jdg_370264
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
9641699
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In this question, Howard Hinnant said Some implementations of std::tuple use recursive inheritance. But the good ones don't. ;-) Can someone please shed some light on that?
Now provide the response and nothing else.
|
A non-recursive implementation has better compile-time performance. Believe it or not, in a heavily used library facility like std::tuple , how it is implemented can impact (for better or worse), the compile times the client sees. Recursive implementations tend to produce compile times that are linear in the depth of recursion (or can be even worse). This impacts more than just the instantiation of the tuple itself. std::get<I>(tuple) for example will take a linear amount of compile time for one implementation and a constant amount of compile time for another implementation. This impact can rapidly deteriorate (or not) when dealing with tuples of tuples. I.e. the recursive implementation could result in O(N^2) compile time while the non-recursive implementation is still O(1). Fwiw, the libc++ implementation lays the objects out in the order specified by the client, but optimizes away space for empty components using the compiler's empty base class optimization facility.
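To make the contrast concrete, here is a toy sketch of the two layouts (heavily simplified, and not how libc++ or any real standard library actually spells it):

#include <cstddef>
#include <utility>

// Recursive layout: tuple<A, B, C> derives from tuple<B, C>, which derives from
// tuple<C>... get<I> has to peel one layer per step, so cost grows with I.
template <class... Ts> struct rec_tuple {};
template <class Head, class... Tail>
struct rec_tuple<Head, Tail...> : rec_tuple<Tail...> { Head value; };

// Flat layout: one leaf base per element, all mixed in at once via an index pack.
// get<I> is a single base-class access, independent of I.
template <std::size_t I, class T> struct leaf { T value; };

template <class Seq, class... Ts> struct flat_tuple_impl;
template <std::size_t... Is, class... Ts>
struct flat_tuple_impl<std::index_sequence<Is...>, Ts...> : leaf<Is, Ts>... {};

template <class... Ts>
struct flat_tuple : flat_tuple_impl<std::index_sequence_for<Ts...>, Ts...> {};

template <std::size_t I, class T>
T& get(leaf<I, T>& l) { return l.value; }  // deduction picks out exactly one base

int main() {
    flat_tuple<int, double, char> t{};
    get<0>(t) = 42;      // constant-depth access regardless of the index
    get<1>(t) = 3.14;
}

The point of the flat version is that instantiating the tuple and calling get<I> costs the compiler roughly the same no matter how many elements there are or which index you ask for, which is where the compile-time difference described above comes from.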
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/9641699', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/34509/']}
|
jdg_370265
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32226915
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can I have two tables with different constraints that are named the same - that is: Table A with constraint C, Table B with constraint C, where the constraints themselves are different? Thanks!
Now provide the response and nothing else.
|
You have basically asked the compiler to do this: I have this: DbRecurPatt I want this: IRecurrencePattern Please figure out a way to get from point 1. to point 2. The compiler, even though it may only have one choice, does not allow you to do this. The cast operator specifically says that DbRecurPatt can be converted to a RecurrencePattern , not to a IRecurrencePattern . The compiler only checks if one of the two types involved specifies a rule on how to convert from one to the other, it does not allow intermediary steps. Since no operator has been defined that allows DbRecurPatt to be converted directly to IRecurrencePattern , the compiler will compile this as a hard cast, reinterpreting the reference as a reference through an interface, which will fail at runtime. So, the next question would be this: How can I then do this? And the answer is you can't. The compiler does not allow you to define a user-defined conversion operator to or from an interface. A different question here on Stack Overflow has more information . If you try to define such an operator: public static implicit operator IRecurrencePattern(DbRecurPatt obj){ return new RecurrencePattern() { Data = $"{obj.Name} - {obj.Description}" };} The compiler will say this: CS0552 'DbRecurPatt.implicit operator IRecurrencePattern(DbRecurPatt)': user-defined conversions to or from an interface are not allowed
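If it helps, a common workaround, sketched here with the names from the snippets above (so purely illustrative): convert to the concrete RecurrencePattern, or just expose an ordinary method instead of an operator.

public class DbRecurPatt
{
    public string Name { get; set; }
    public string Description { get; set; }

    // Plain method instead of a user-defined conversion operator:
    public IRecurrencePattern ToRecurrencePattern()
    {
        return new RecurrencePattern { Data = $"{Name} - {Description}" };
    }

    // Or: an implicit operator to the concrete class is allowed, and the result
    // can then be used anywhere an IRecurrencePattern is expected.
    public static implicit operator RecurrencePattern(DbRecurPatt obj)
    {
        return new RecurrencePattern { Data = $"{obj.Name} - {obj.Description}" };
    }
}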
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32226915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2256986/']}
|
jdg_370266
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
34527653
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am working on one android app in which I am using CoordinatorLayout , AppBarLayout and CollapsingToolbarLayout to use the collapse toolbar functionality. I am using NestedScrollView in layout to expand and collapse AppBarLayout in same layout. When I am trying to scroll up from center of the screen then it does not work but when I try to scroll up screen from right corner of the screen then it scrolls smoothly. Below mentioned is my xml file layout.xml <android.support.v4.widget.NestedScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" xmlns:app="http://schemas.android.com/apk/res-auto" android:fillViewport="true" android:fitsSystemWindows="true" android:paddingBottom="2dp" android:paddingLeft="5dp" android:paddingRight="5dp" android:paddingTop="5dp" android:layout_gravity="fill_vertical" app:layout_behavior="@string/appbar_scrolling_view_behavior"> <RelativeLayout android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/fragment_back_color" android:orientation="vertical"> <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" android:padding="5dp"> <ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/new_recharge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginLeft="10dp" android:text="NEW PAYMENT" android:textColor="@color/offer_name_text_color" /> </LinearLayout> <RelativeLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginLeft="5dp" android:layout_marginRight="5dp" android:background="@color/white" android:paddingBottom="20dp" android:paddingLeft="10dp" android:paddingRight="10dp" android:paddingTop="20dp"> <HorizontalScrollView android:id="@+id/hsv" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:fillViewport="true" android:measureAllChildren="false" android:scrollbars="none"> <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerInParent="true" android:orientation="horizontal"> <LinearLayout android:id="@+id/wallet_layout" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:orientation="vertical"> <ImageView android:id="@+id/wallet_recharge" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/wallet_recherge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="65dp" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:gravity="center" android:text="WALLET" android:textColor="@color/offer_name_text_color" android:textSize="12sp" /> > </LinearLayout> <View android:layout_width="5dp" android:layout_height="20dp" android:background="@color/white" /> <LinearLayout android:id="@+id/prepaid_layout" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:orientation="vertical"> <ImageView android:id="@+id/prepaid_recharge" android:layout_width="wrap_content" 
android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/prepaid_recherge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="65dp" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:gravity="center" android:text="PREPAID" android:textColor="@color/offer_name_text_color" android:textSize="12sp" /> > </LinearLayout> <View android:layout_width="5dp" android:layout_height="20dp" android:background="@color/white" /> <LinearLayout android:id="@+id/postpaid_layout" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:orientation="vertical"> <ImageView android:id="@+id/postpaid_recharge" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/postpaid_recherge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="65dp" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:gravity="center" android:text="POSTPAID" android:textColor="@color/offer_name_text_color" android:textSize="12sp" /> > </LinearLayout> <View android:layout_width="5dp" android:layout_height="20dp" android:background="@color/white" /> <LinearLayout android:id="@+id/dth_layout" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:orientation="vertical"> <ImageView android:id="@+id/dth_recharge" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/dth_recherge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="65dp" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:gravity="center" android:text="DTH" android:textColor="@color/offer_name_text_color" android:textSize="12sp" /> > </LinearLayout> <View android:layout_width="5dp" android:layout_height="20dp" android:background="@color/white" /> <LinearLayout android:id="@+id/landline_layout" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:orientation="vertical"> <ImageView android:id="@+id/landline_recharge" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/landline_recherge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="65dp" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:gravity="center" android:text="LANDLINE" android:textColor="@color/offer_name_text_color" android:textSize="12sp" /> > </LinearLayout> <View android:layout_width="5dp" android:layout_height="20dp" android:background="@color/white" /> <LinearLayout android:id="@+id/datacard_layout" android:layout_width="wrap_content" android:layout_height="wrap_content" android:gravity="center" android:orientation="vertical"> <ImageView android:id="@+id/datacard_recharge" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/datacard_recherge" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="65dp" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:gravity="center" android:text="DATACARD" android:textColor="@color/offer_name_text_color" 
android:textSize="12sp" /> > </LinearLayout> </LinearLayout> </HorizontalScrollView> </RelativeLayout> <LinearLayout android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" android:paddingBottom="10dp" android:paddingLeft="5dp" android:paddingRight="5dp" android:paddingTop="10dp"> <ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:background="@drawable/recent" /> <com.spiceladdoo.views.RobotTextviewRegular android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginLeft="10dp" android:text="RECENT" android:textColor="@color/offer_name_text_color" /> </LinearLayout> <ListView android:id="@+id/recent_recharge_list" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_marginLeft="5dp" android:layout_marginRight="5dp"> </ListView> </LinearLayout> <FrameLayout xmlns:tools="http://schemas.android.com/tools" android:id="@+id/recharge_container" android:layout_width="match_parent" android:layout_height="match_parent" android:visibility="visible" tools:ignore="MergeRootFrame"> </FrameLayout> </RelativeLayout></android.support.v4.widget.NestedScrollView> The desired result is when I try to scroll up from center of the screen then it should work as smmoth as I scroll up from right corner of the mobile. Please watch below mentioned video to look at the problem more clearly https://www.dropbox.com/s/gscfc8vfc7kkpxp/device-2015-12-30-160119.mp4?dl=0
Now provide the response and nothing else.
|
I believe, I've nailed it: You can find the source code here - feel free to try; I took as an example google\designlibdemo . Here's how my Activity looks like: <?xml version="1.0" encoding="utf-8"?><android.support.v4.widget.DrawerLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:id="@+id/drawer_layout" android:layout_height="match_parent" android:layout_width="match_parent" android:fitsSystemWindows="true"> <include layout="@layout/include_list_viewpager"/> <android.support.design.widget.NavigationView android:id="@+id/nav_view" android:layout_height="match_parent" android:layout_width="wrap_content" android:layout_gravity="start" android:fitsSystemWindows="true" app:headerLayout="@layout/nav_header" app:menu="@menu/drawer_view"/></android.support.v4.widget.DrawerLayout> It hosts ViewPager : include_list_viewpager.xml : <?xml version="1.0" encoding="utf-8"?><android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:id="@+id/main_content" android:layout_width="match_parent" android:layout_height="match_parent"> <android.support.design.widget.AppBarLayout android:id="@+id/appbar" android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"> <android.support.design.widget.CollapsingToolbarLayout android:id="@+id/collapsing_toolbar" android:layout_width="match_parent" android:layout_height="match_parent" app:layout_scrollFlags="scroll|enterAlways|snap" app:contentScrim="?attr/colorPrimary" app:expandedTitleMarginStart="48dp" app:expandedTitleMarginEnd="64dp" android:fitsSystemWindows="true"> <ImageView android:id="@+id/image" android:src="@drawable/header_image" android:layout_width="match_parent" android:layout_height="200dp" android:scaleType="centerCrop" android:fitsSystemWindows="true" app:layout_collapseMode="parallax"/> <View android:background="#AA50AA00" android:layout_width="match_parent" android:layout_height="200dp" app:layout_collapseMode="parallax"/> <android.support.v7.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:title="" android:layout_height="?attr/actionBarSize" app:popupTheme="@style/ThemeOverlay.AppCompat.Light" app:layout_collapseMode="pin" /> </android.support.design.widget.CollapsingToolbarLayout> <android.support.design.widget.TabLayout android:id="@+id/tabs" android:background="#50AA00" app:tabMode="scrollable" app:tabIndicatorColor="#FFF" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </android.support.design.widget.AppBarLayout> <android.support.v4.view.ViewPager android:id="@+id/viewpager" android:layout_width="match_parent" android:layout_height="match_parent" app:layout_behavior="@string/appbar_scrolling_view_behavior" /></android.support.design.widget.CoordinatorLayout> CollapsingToolbarLayout hosts Toolbar ( app:layout_collapseMode="pin" ) and above ImageView ( app:layout_collapseMode="parallax" ). 
The hosted in the ViewPager Fragment has this layout: <?xml version="1.0" encoding="utf-8"?><android.support.v4.widget.NestedScrollView xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" xmlns:app="http://schemas.android.com/apk/res-auto" android:fillViewport="true" android:fitsSystemWindows="true" android:layout_gravity="fill_vertical" app:layout_behavior="@string/appbar_scrolling_view_behavior"> <LinearLayout android:orientation="vertical" android:background="#DDD" android:layout_width="match_parent" android:layout_height="match_parent"> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" android:padding="16dp"> <ImageView android:src="@drawable/mobile" android:layout_marginTop="2dp" android:layout_width="16dp" android:layout_height="16dp" /> <TextView android:text="@string/prepaid_recharge" android:layout_marginStart="16dp" android:layout_gravity="center_vertical" android:textColor="#000" android:fontFamily="sans-serif-medium" android:textSize="14sp" android:textAllCaps="true" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout> ........ <ListView android:id="@+id/recent_recharge_list" android:layout_width="match_parent" android:layout_height="match_parent"/> </LinearLayout></android.support.v4.widget.NestedScrollView> So it has this behaviour app:layout_behavior="@string/appbar_scrolling_view_behavior" and filling ViewPort ( android:fillViewport="true" ) And actually - that's all you need. Fragment class is very standard: public class RechargeFragment extends Fragment { @Nullable @Override public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) { return inflater.inflate(R.layout.recharge_fragment, container, false); }} Activity is also remained very standard. Again, you can find my code example here . NB! I found it works pretty poor(not smooth at all) on the emulator. I hope, it helps.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34527653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5226511/']}
|
jdg_370267
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14090353
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to have widgets communicating with each other. My central widget has 2 widget instances: widget A (an instance of A() ) and widget B (an instance of B() ). When widget A is subject to an event (e.g. mouseRelease ), I need widget A to call a function and do some computing, processing data. Then I need this processed data to be passed to widget B , which in turn, upon receiving this data from widget A , will call a function with this data, process it, and do something more with it. Both of these widgets are custom widgets derived from QWidget . I am almost certain I need to use signals and slots, but I don't understand how to implement the function-call parts. I've read the explanations of Qt here over and over again: http://qt-project.org/wiki/Signals_and_Slots_in_PySide however I cannot figure out how to connect two widgets which both process emitted data. I would really appreciate some help here. If you cannot answer in Python but are competent with C++, please explain using it nonetheless rather than refraining altogether.
Now provide the response and nothing else.
|
Here is a example demonstrating signals/slots connections between two widgets in PyQt4: #!/usr/bin/env python#-*- coding:utf-8 -*-from PyQt4 import QtCore, QtGuiclass widgetB(QtGui.QWidget): procDone = QtCore.pyqtSignal(str) def __init__(self, parent=None): super(widgetB, self).__init__(parent) self.lineEdit = QtGui.QLineEdit(self) self.button = QtGui.QPushButton("Send Message to A", self) self.layout = QtGui.QHBoxLayout(self) self.layout.addWidget(self.lineEdit) self.layout.addWidget(self.button) self.button.clicked.connect(self.on_button_clicked) @QtCore.pyqtSlot() def on_button_clicked(self): self.procDone.emit(self.lineEdit.text()) @QtCore.pyqtSlot(str) def on_procStart(self, message): self.lineEdit.setText("From A: " + message) self.raise_()class widgetA(QtGui.QWidget): procStart = QtCore.pyqtSignal(str) def __init__(self, parent=None): super(widgetA, self).__init__(parent) self.lineEdit = QtGui.QLineEdit(self) self.lineEdit.setText("Hello!") self.button = QtGui.QPushButton("Send Message to B", self) self.button.clicked.connect(self.on_button_clicked) self.layout = QtGui.QHBoxLayout(self) self.layout.addWidget(self.lineEdit) self.layout.addWidget(self.button) @QtCore.pyqtSlot() def on_button_clicked(self): self.procStart.emit(self.lineEdit.text()) @QtCore.pyqtSlot(str) def on_widgetB_procDone(self, message): self.lineEdit.setText("From B: " + message) self.raise_()class mainwindow(QtGui.QMainWindow): def __init__(self, parent=None): super(mainwindow, self).__init__(parent) self.button = QtGui.QPushButton("Click Me", self) self.button.clicked.connect(self.on_button_clicked) self.setCentralWidget(self.button) self.widgetA = widgetA() self.widgetB = widgetB() self.widgetA.procStart.connect(self.widgetB.on_procStart) self.widgetB.procDone.connect(self.widgetA.on_widgetB_procDone) @QtCore.pyqtSlot() def on_button_clicked(self): self.widgetA.show() self.widgetB.show() self.widgetA.raise_()if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) main = mainwindow() main.show() sys.exit(app.exec_())
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14090353', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/853934/']}
|
jdg_370268
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40120
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Question. Is it true that each infinite hyperbolic grouphas a torsion-free subgroup of finite index? Are there counterexamples, or positive results for some large subclasses of hyperbolic groups?For example, is the answer positive for orbifold fundamental groups of negatively curved orbifolds? More precisely, I am interested the most in the case of orbifolds with locally $CAT(0)$ metric. I guess it will be hard to construct a counterexample in this category. Related question . Is it known that every nontrivial hyperbolic group has a proper subgroup of finite index? Just to recall, a definition of hyperbolic group is here https://en.wikipedia.org/wiki/Hyperbolic_group . Added. Note, that every hyperbolic group is finitely presented (thanks to Sam Nead).
Now provide the response and nothing else.
|
This is a well-known open problem. The following properties are equivalent: a) Every hyperbolic group is residually finite. b) Every hyperbolic group has a finite-index torsion-free subgroup. The proof is either here: Olʹshanskiĭ, A. Yu. On the Bass-Lubotzky question about quotients of hyperbolic groups. J. Algebra 226 (2000), no. 2, 807--817, or here: Kapovich, Ilya; Wise, Daniel T. The equivalence of some residual properties of word-hyperbolic groups. J. Algebra 223 (2000), no. 2, 562--583, or can be given by exactly the same methods as in these two papers (I do not remember which of these three possibilities holds).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/40120', 'https://mathoverflow.net', 'https://mathoverflow.net/users/943/']}
|
jdg_370269
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27347802
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have the default project template of an ASP.NET MVC 5 web site and I am trying to list all users with role names (not IDs). The query is: db.Users.Include(u => u.Roles).ToList() Then I want to print the role names with something like: @string.Join(", ", user.Roles.Select(r => r.RoleId)) The problem is that I can reach only RoleId , not the Role class where Name and other properties are stored. I could run another select to get all roles and then use that as a lookup. Or write a join in the query directly? I am not sure how, because I do not have access to the table with the IdentityUserRole entities that bind users and roles together. The root of the problem seems to be the fact that Roles is a collection of IdentityUserRole (not Role ), which contains only RoleId and UserId . public class IdentityUserRole<TKey> { public virtual TKey RoleId { get; set; } public virtual TKey UserId { get; set; }} I thought that if one wants an N-to-N relationship in EF, one should put a collection of Roles directly and then override OnModelCreating and specify the relationships. This approach seems to complicate browsing from one object to another. Why did they decide to include IdentityUserRole as an extra entity? To be able to add extra data to the relationship? At the cost of not being able to navigate from users to roles?
Now provide the response and nothing else.
|
The way I do it is: using (var userManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationContext())){ var rolesForUser = await userManager.GetRolesAsync(userId); // rolesForUser now has a list role classes.} The identity team made two managers: RoleManager for sorting out roles (not user roles though) and UserManager basically for everything authentication wise. There is also a SignInManager as well but not needed. So UserManager finds users, creates users, deletes users, sends emails .... the list goes on. So my Action could look like this: public async Task<ActionResult> GetRolesForUser(string userId) { using ( var userManager = new UserManager<ApplicationUser>(new UserStore<ApplicationUser>(new ApplicationDbContext()))) { var rolesForUser = await userManager.GetRolesAsync(userId); return this.View(rolesForUser); } } To execute raw SQL then you can do something like this: Create the class that Entity Framework can map to, based on the output of your query: public class UserWithRole{ public string UserName {get;set;} // You can alias the SQL output to give these better names public string Name {get;set;}}using (var context = new DbContext()){ var sql = @" SELECT AspNetUsers.UserName, AspNetRoles.Name FROM AspNetUsers LEFT JOIN AspNetUserRoles ON AspNetUserRoles.UserId = AspNetUsers.Id LEFT JOIN AspNetRoles ON AspNetRoles.Id = AspNetUserRoles.RoleId WHERE AspNetUsers.Id = @Id"; var idParam = new SqlParameter("Id", theUserId); var result = context.Database.ExecuteQuery<UserWithRole>(sql, idParam);} Pretty simple! If you alias your SQL return columns: SELECT AspNetUSers.UserName, AspNetRoles.Name As RoleName Then your DTO class can look like this: public class UserWithRole{ public string UserName {get;set;} public string RoleName {get;set;}} Which is obviously a lot cleaner.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27347802', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1030376/']}
|
jdg_370270
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11963193
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am going through some Clojure tutorials using Closure Box, and entered the following code: user> (def stooges (vector "Moe" "Larry" "Curly"))#'user/stoogesuser> (contains? stooges "Moe")false Shouldn't this evaluate to TRUE ? Any help is appreciated.
Now provide the response and nothing else.
|
A vector is similar to an array. contains? returns true if the key exists in the collection. You should be looking for the "key/index" 0, 1 or 2 user=> (def stooges (vector "Moe" "Larry" "Curly"))#'user/stoogesuser=> (contains? stooges 1)trueuser=> (contains? stooges 5) false If you were using a hash... user=> (def stooges {:moe "Moe" :larry "Larry" :curly "Curly"})#'user/stoogesuser=> (contains? stooges :moe)trueuser=> (contains? stooges :foo)false As mikera suggests, you probably want something like clojure.core/some
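For example, a membership-by-value check along the lines mikera suggests (just a sketch):

;; a set can be used directly as the predicate for `some`
(some #{"Moe"} stooges)    ;=> "Moe"  (truthy)
(some #{"Shemp"} stooges)  ;=> nil
;; or, if you want a strict boolean
(boolean (some #(= % "Moe") stooges)) ;=> true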
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11963193', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1017102/']}
|
jdg_370271
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
70154
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Hi all. Is there any explicit matrix expression for a general element of the special orthogonal group $SO(3)$? I have been searching both texts and the net, but could not find one. Kindly provide any references.
Now provide the response and nothing else.
|
Here is the standard quaternion answer: Given $(a,b,c,d)$ such that $a^2+b^2+c^2+d^2=1$, the matrix$$\begin{pmatrix}a^2+b^2-c^2-d^2&2bc-2ad &2bd+2ac \\2bc+2ad &a^2-b^2+c^2-d^2&2cd-2ab \\2bd-2ac &2cd+2ab &a^2-b^2-c^2+d^2\\\end{pmatrix}$$is a rotation and every rotation matrix is of this form. Note that $(a,b,c,d)$ and $(-a, -b,-c,-d)$ give the same rotation.
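As a quick sanity check (my own, not part of the original answer): taking $(a,b,c,d)=\bigl(\cos\tfrac\theta2,\ \sin\tfrac\theta2,\ 0,\ 0\bigr)$ the matrix collapses to$$\begin{pmatrix}1&0&0\\0&\cos\theta&-\sin\theta\\0&\sin\theta&\cos\theta\end{pmatrix},$$i.e. the rotation by $\theta$ about the $x$-axis. In general the unit quaternion $(a,b,c,d)$ encodes the rotation by $2\arccos a$ about the axis $(b,c,d)$.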
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/70154', 'https://mathoverflow.net', 'https://mathoverflow.net/users/16391/']}
|
jdg_370272
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
147529
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The openid connect specification adds a nonce parameter to the authorize endpoint, which must be echoed back as a claim in the id_token. It claims that the purpose of this parameter is to prevent replay attacks and has some implementation suggestions around using http only cookies. I understand what replay attack is in general, but I'd like to get a better understanding of the details of what a replay attack this might protect against might look like, so I can have a better idea about whether I have implemented it effectively, or made some subtle mistake. I assume the replay attack is against the authorize endpoint, for some third party to get a token corresponding to a different user. Is that it or is it something else? What level of information do we expect the attacker has to replay? Do they have the traffic between the user agent and authorization server but not the client application, which is why they'd be able to replay the traffic to the authorize endpoint but would not be able to replay previous a cookie to the client application? Or they have the browser history of both but not the network traffic so they don't have old cookie values? Does it matter whether or not the user agent still has the previous user's session with the authorization server when the replay occurs? If the previous user does have a valid session with the authorization server and the attacker has access to that, can't they get access to a token without doing a replay? If they don't have a valid session with the authorization server, won't the user be prompted anyway, even if they replay an old request? If the user will be prompted to authenticate, presumably with credentials they don't have, what's the danger of a replay? I have done a lot of searching and there are some similar questions ( Replay attack example for validating nonce? and Purpose of nonce validation in OpenID Connect implicit flow ) neither of these actually answers the question about how does this specific replay attack work. The first just quotes the recommended implementation form the spec, then talks about leaked tokens, which seems different from a replay attack to me (if the attacker has a leaked token, haven't they already won anyway?). The second explains the general idea of a replay attack, but doesn't talk about how this particular attack actually works. As some background I'm making an openid connect app using the authorization code flow (so replaying the redirect from the AS to my app shouldn't do anything because you can only exchange the same authorization code for a token once AFAICT, so you would need to replay the authorize endpoint as well for a replay attack to be meaningful.) My app is using tokens I get back as bearer tokens to authenticate requests to my web server, and they are the entirety of the session state, there is no other meaningful session state or locally managed identity to tie the tokens to. My openid connect provider requires a nonce because it's a security best practice. Is it actually useful in my particular use case? Is the reason I'm so confused because doesn't actually provide meaningful protection for the particular thing that I'm doing? If this does have security implications for me I 110% want to make sure I do the right thing but I'm having a very hard time understanding what it protects me against. 
Some implementation detail questions I'm fuzzy on due to not understanding the threat vector exactly: Do I validate the nonce matches what was sent to the authorize endpoint after pulling it back from the token endpoint and then assume it's good for the session? Or should validating the id_token cookie against the nonce cookie be part of every request just like checking the signature? The nonce is supposed to be tied to the session but the term nonce implies it should only be used once. If I need to send a user with a valid session back through the authorize endpoint for some reason (get a new id_token with an expiration further in the future? Request a wider set of scopes?) do I create a new nonce? Is it one nonce per session or per call to authorize?
Now provide the response and nothing else.
|
Reading your question, I believe the confusion stems from where the replay attack is taking place. The nonce prevents replay attacks against the client (your application), not the authorization server. I understand that you're using the code flow, but for the sake of simplicity, let's assume you're using the implicit flow in a single-page application and you have no back-end server (e.g. there's no server-side session). Here's the attack vector that the nonce helps to mitigate: The client application redirects the user agent to the auth server with a response_type of id_token . The user establishes identity with the auth server, i.e. logs in. The auth server redirects the user agent back to the client application with an id_token . The response looks something like this: https://your-single-page.app/auth#token_type=bearer&state=some-state&id_token=some-token An attacker obtains that response, possibly via packet sniffing, client server logs (e.g the web server that's hosting your SPA), the browser's developer tools, shoulder surfing, or some other means. The nonce helps to prevent the attacker from taking the authorization server's response, pasting it into their URL bar, and establishing identity with your client application. Here's how: The client application generates a secure random nonce and stores it as is, in clear text, in a cookie, session storage, or somewhere persistent. The client application hashes that nonce and sends the hash as an authentication request parameter. When the client application handles the authentication response, it pulls and removes the nonce from persistent storage, hashes it, and compares it against the nonce in the id_token . If they don't match, then the client application refuses to establish identity. The attacker may have intercepted the response, including the id_token , but here the nonce effectively acts like a password for the client application. The attacker would need the clear-text nonce to directly establish identity with the client application. (I say "directly" because, depending on the application, an attacker may be able to bypass the nonce check.) To be clear, obtaining an auth response only gives the attacker access to a hash of the nonce, not the actual nonce. The same attack vector is present when using the code flow, albeit harder to successfully leverage. As an example, imagine an attacker intercepts the authentication response . The attacker could then paste the response (the 302 location) into their URL bar and get your client application to make a token request. When the authorization server responds, your client application can verify the nonce in the ID token against something that your server has tied to your user's user agent (e.g. a cryptographically random value that's stored in an HTTP-only cookie). Again, the nonce acts as a password for your client application . I'll point out that to successfully exploit this exact attack vector there's an assumption that the authorization server does not verify that an authorization code has already been used. That's optional in the spec: "If possible, verify that the Authorization Code has not been previously used." In my opinion, it would be difficult to leverage this type of replay attack when using the code flow if the authorization server prevents authorization codes from being reused (and it should). However, replaying ID tokens with the implicit flow is trivial given access to authentication responses. 
As such, the nonce is optional in the code flow and required in the implicit flow. Since the implicit flow was mentioned a number of times in this answer, it's worth pointing out that the OAuth Group recommends not using it due to its numerous security risks , "not all of which have sufficient mitigation strategies." The code flow with PKCE should be used instead.
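If a concrete shape helps, here is a small sketch in Python of the generate/store/hash/verify dance described above (purely illustrative; cookie handling and id_token validation depend entirely on your stack and OIDC library):

import hashlib
import secrets

def new_nonce():
    # Keep the clear-text value client-side (e.g. an HTTP-only cookie or session
    # storage) and send only the hash as the `nonce` request parameter.
    clear = secrets.token_urlsafe(32)
    hashed = hashlib.sha256(clear.encode()).hexdigest()
    return clear, hashed

def nonce_matches(clear_from_storage, nonce_claim_from_id_token):
    # On the authentication response: re-hash the stored clear-text value and
    # compare it, in constant time, with the nonce claim echoed in the id_token.
    expected = hashlib.sha256(clear_from_storage.encode()).hexdigest()
    return secrets.compare_digest(expected, nonce_claim_from_id_token)

# usage sketch:
# clear, hashed = new_nonce()      -> store `clear`, send `hashed` to /authorize
# ... later, after verifying the id_token signature ...
# assert nonce_matches(stored_clear, claims["nonce"])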
|
{}
|
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/147529', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/135508/']}
|
jdg_370273
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36925261
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I understand the difference between export (for containers) and save (for images). But at the end of the day the tarball produced by either save or export should be used as an image . So why are there 2 commands to make an image from a tarball?
Now provide the response and nothing else.
|
docker save will indeed produce a tarball, but with all parent layers, and all tags + versions. docker export does also produce a tarball, but without any layer/history. It is often used when one wants to "flatten" an image , as illustrated in " Flatten a Docker container or image " from Thomas Uhrig : docker export <CONTAINER ID> | docker import - some-image-name:latest However, once those tarballs are produced, load/import are there to: docker import creates one image from one tarball which is not even an image (just a filesystem you want to import as an image) Create an empty filesystem image and import the contents of the tarball docker load creates potentially multiple images from a tarred repository (since docker save can save multiple images in a tarball). Loads a tarred repository from a file or the standard input stream
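A side-by-side usage sketch (standard docker CLI flags, shown only to make the pairing explicit):

# image-level round trip: keeps layers, history and tags
docker save -o myimage.tar myimage:latest
docker load -i myimage.tar

# container-level round trip: a flattened filesystem, re-imported as a fresh image
docker export -o rootfs.tar mycontainer
docker import rootfs.tar flattened-image:latest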
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/36925261', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5006469/']}
|
jdg_370274
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
58186739
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to connect to Google Cloud SQL from my machine (Ubuntu) using this command : mysql --host='Public IP' --user='' --password However, I'm getting this error : ERROR 2003 (HY000): Can't connect to MySQL server on 'Public IP' (110) I need any help resolving my issue.
Now provide the response and nothing else.
|
You need to create your own scoring function since IsolationForest does not have score method inbuilt. Instead you can make use of the score_samples function that is available in IsolationForest (can be considered as a proxy for score ) and create your own scorer as described here and pass it to the GridSearchCV . I have modified your code to do this: import pandas as pdimport numpy as npfrom sklearn.ensemble import IsolationForestfrom sklearn.model_selection import GridSearchCVdf = pd.DataFrame({'first': [-112,0,1,28,5,6,3,5,4,2,7,5,1,3,2,2,5,2,42,84,13,43,13], 'second': [42,1,2,85,2,4,6,8,3,5,7,3,64,1,4,1,2,4,13,1,0,40,9], 'third': [3,4,7,74,3,8,2,4,7,1,53,6,5,5,59,0,5,12,65,4,3,4,11], 'result': [5,2,3,0.04,3,4,3,125,6,6,0.8,9,1,4,59,12,1,4,0,8,5,4,1]})x = df.iloc[:,:-1]tuned = {'n_estimators':[70,80], 'max_samples':['auto'], 'contamination':['legacy'], 'max_features':[1], 'bootstrap':[True], 'n_jobs':[None,1,2], 'behaviour':['old'], 'random_state':[None,1,], 'verbose':[0,1,2], 'warm_start':[True]} def scorer_f(estimator, X): #your own scorer return np.mean(estimator.score_samples(X))#or you could use a lambda aexpression as shown below#scorer = lambda est, data: np.mean(est.score_samples(data)) isolation_forest = GridSearchCV(IsolationForest(), tuned, scoring=scorer_f)model = isolation_forest.fit(x) SAMPLE OUTPUT print(model.best_params_){'behaviour': 'old', 'bootstrap': True, 'contamination': 'legacy', 'max_features': 1, 'max_samples': 'auto', 'n_estimators': 70, 'n_jobs': None, 'random_state': None, 'verbose': 1, 'warm_start': True}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58186739', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12149012/']}
|
jdg_370275
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10677020
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The NLTK book has a couple of examples of word counts, but in reality they are not word counts but token counts. For instance, Chapter 1, Counting Vocabulary says that the following gives a word count: text = nltk.Text(tokens)len(text) However, it doesn't - it gives a word and punctuation count.How can you get a real word count (ignoring punctuation)? Similarly, how can you get the average number of characters in a word? The obvious answer is: word_average_length =(len(string_of_text)/len(text)) However, this would be off because: len(string_of_text) is a character count, including spaces len(text) is a token count, excluding spaces but including punctuation marks, which aren't words. Am I missing something here? This must be a very common NLP task...
Now provide the response and nothing else.
|
Removing Punctuation Use a regular expression to filter out the punctuation import refrom collections import Counter>>> text = ['this', 'is', 'a', 'sentence', '.']>>> nonPunct = re.compile('.*[A-Za-z0-9].*') # must contain a letter or digit>>> filtered = [w for w in text if nonPunct.match(w)]>>> counts = Counter(filtered)>>> countsCounter({'this': 1, 'a': 1, 'is': 1, 'sentence': 1}) Average Number of Characters Sum the lengths of each word. Divide by the number of words. >>> float(sum(map(len, filtered))) / len(filtered)3.75 Or you could make use of the counts you already did to prevent some re-computation. This multiplies the length of the word by the number of times we saw it, then sums all of that up. >>> float(sum(len(w)*c for w,c in counts.iteritems())) / len(filtered)3.75
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10677020', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1406413/']}
|
jdg_370276
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3618744
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I observed strange behavior while working with AVAudioPlayer . Following is the code: AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL: [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@",fileName]] error: &error]; Here, I downloaded the file from the server and stored it in the application's Cache directory. I am getting the following error: Error in playing = Domain = NSOSStatusErrorDomain Code = -43 Description = Error Domain=NSOSStatusErrorDomain Code=-43 "The operation couldn’t be completed. (OSStatus error -43.)" I also verified that the file is present at that location. Every time I restart my application, I get the same error when playing the song. After some time, when I try to run the same code, the player works fine without any error. Can anyone tell me how to handle this error? Also, can anyone explain what the problem was?
Now provide the response and nothing else.
|
I had the same error with this code, even though I could verify that the songCacheURL was valid and that the file was available: self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:songCacheURL error:&error]; I was able to fix the issue by loading the file first into an NSData element and then using that to initialize the AVAudioPlayer instead, like so: NSData *songFile = [[NSData alloc] initWithContentsOfURL:songCacheURL options:NSDataReadingMappedIfSafe error:&error1 ];self.audioPlayer = [[AVAudioPlayer alloc] initWithData:songFile error:&error2]; I hope that helps someone else. Robin
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3618744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1528551/']}
|
jdg_370277
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1227245
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the quotient space ${[0,1]}/{\sim}$, where $[0,1]$is equipped with the standard Euclidean topology, and where and the equivalencerelation $\sim$ identifies its endpoints, i.e. gives rise to a partition$$\left\{ \left\{ x\right\} :x\in(0,1)\right\} \cup\{0,1\}$$of $[0,1]$. Now it is clear that $[0,1]/{\sim}$ is homeomorphic tothe circle $S^{1}$. My question is: Suppose we replace the equivalencerelation with another, $\approx$, which identifies a different pair of points,say $0$ and $\frac{1}{2}$; it would thus now give rise to the partition$$\{ \left\{ x\right\} :x\in(0,1]\setminus\{ \frac{1}{2}\} \} \cup\{0,\frac{1}{2}\}.$$ Is then ${[0,1]}/{\approx}$ still homeomorphic to $S^{1}$? (Note: I suppose that's not the case, due to some high-level theoremsthat immediately tell us, that it can't be the case; but I don't knowthose theorem. For whose interested, here's a sketch of the proof of the claim about $[0,1]/{\sim}$: The map $$f:[0,1]\rightarrow S^{1},\ t\mapsto(\cos2\pi t,\sin2\pi t)$$is a continuous surjection that makes the same identifications asthe canonical projection $p$ from $[0,1]$ to $[0,1]/{\sim}$,i.e. $p(x_{1})=p(x_{2})$ iff $f(x_{1})=f(x_{2})$. Some high-leveltheorem then guarantee us that $f$is a quotient map, so $[0,1]/{\sim}$ ishomeomorphic to $S^{1}$.)
Now provide the response and nothing else.
|
(The original answer was given as a picture; only LaTeX spacing commands survived the text extraction.)
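Since the picture itself did not survive, here is one way to see it (my own sketch, presumably in the spirit of the drawing): the image point $q$ of $\{0,\tfrac12\}$ in $[0,1]/{\approx}$ has arbitrarily small connected open neighbourhoods $U$ (images of $[0,\varepsilon)\cup(\tfrac12-\varepsilon,\tfrac12+\varepsilon)$) for which $U\setminus\{q\}$ has three connected components, whereas every connected open proper neighbourhood of a point of $S^{1}$ is an open arc, and removing the point leaves only two components. So $[0,1]/{\approx}$ is not homeomorphic to $S^{1}$: it is a circle (the image of $[0,\tfrac12]$) with an arc (the image of $[\tfrac12,1]$) attached at $q$.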
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1227245', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_370278
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
175896
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say you have two points, $(x_0, y_0)$ and $(x_1, y_1)$. The gradient of the line between them is: $$m = (y_1 - y_0)/(x_1 - x_0)$$ And therefore the equation of the line between them is: $$y = m (x - x_0) + y_0$$ Now, since I want another point along this line, but a distance $d$ away from $(x_0, y_0)$, I will get an equation of a circle with radius $d$ with a center $(x_0, y_0)$ then find the point of intersection between the circle equation and the line equation. Circle Equation w/ radius $d$: $$(x - x_0)^2 + (y - y_0)^2 = d^2$$ Now, if I replace $y$ in the circle equation with $m(x - x_0) + y_0$ I get: $$(x - x_0)^2 + m^2(x - x_0)^2 = d^2$$ I factor is out and simplify it and I get: $$x = x_0 \pm d/ \sqrt{1 + m^2}$$ However, upon testing this equation out it seems that it does not work! Is there an obvious error that I have made in my theoretical side or have I just been fluffing up my calculations?
Now provide the response and nothing else.
|
Another way, using vectors: Let $\mathbf v = (x_1,y_1)-(x_0,y_0)$. Normalize this to $\mathbf u = \frac{\mathbf v}{||\mathbf v||}$. The point along your line at a distance $d$ from $(x_0,y_0)$ is then $(x_0,y_0)+d\mathbf u$, if you want it in the direction of $(x_1,y_1)$, or $(x_0,y_0)-d\mathbf u$, if you want it in the opposite direction. One advantage of doing the calculation this way is that you won't run into a problem with division by zero in the case that $x_0 = x_1$.
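A tiny numeric sketch of the same computation (plain Python, nothing beyond the formula above):

import math

def point_at_distance(x0, y0, x1, y1, d):
    vx, vy = x1 - x0, y1 - y0          # v
    norm = math.hypot(vx, vy)          # ||v||
    ux, uy = vx / norm, vy / norm      # u
    return (x0 + d * ux, y0 + d * uy)  # use -d for the opposite direction

# point_at_distance(0, 0, 3, 4, 5) -> (3.0, 4.0), i.e. 5 units along the line toward (3, 4)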
|
{}
|
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/175896', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/36410/']}
|
jdg_370279
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
57776529
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a table that I want to edit using Angular. When the user clicks the edit button, a dialog window pops up that is the same as the add entry dialog window. However, when the fields are still blank. I would like the input fields to be populated with the current information for that entry that can be edited. How is this done? onEdit() { const dialogConfig = new MatDialogConfig(); dialogConfig.disableClose = true; dialogConfig.autoFocus = true; dialogConfig.width = "475px"; this.dialog.open(ApplicationEditComponent, dialogConfig); } ```EditList.ts applicationForm = this.fb.group({ appName: ['', Validators.required], appCode: ['', Validators.required] })```EditList.html <form [formGroup]="applicationForm" class="form"> <div fxLayout="row" fxLayoutAlign="startcenter"> <mat-form-field> <input matInput placeholder="App Code:" type="text" formControlName="appCode" required autofocus> </mat-form-field> <mat-form-field> <input matInput placeholder="Application Name:" type="text" formControlName="appName" required autofocus> </mat-form-field> </div></form>```ListServices.ts updateApplication(application: IApplicationData) { return this.http.put<IApplicationData>(this.myAppUrl + 'api/Application/Edit', application) .pipe(catchError(this.errorHandler)); }
Now provide the response and nothing else.
|
You can use the aws_ssm_parameter data source to fetch the value of a parameter at runtime: data "aws_ssm_parameter" "ami" { name = "/path/to/ami"}resource "aws_instance" "nginx" { ami = data.aws_ssm_parameter.ami.value # pull value from parameter store instance_type = "t2.micro" provisioner "remote-exec" { inline = [ "sudo yum install nginx -y", "sudo service nginx start" ] }} However, a better approach might be to use the aws_ami data source to filter for the AMI you want more directly instead of pushing the AMI ID to SSM parameter store and then looking it up later. You can filter on a number of criteria including name, account owner and tags. Here's the example from the aws_instance resource documentation that is looking for the latest Ubuntu 20.04 AMI: data "aws_ami" "ubuntu" { most_recent = true filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"] } filter { name = "virtualization-type" values = ["hvm"] } owners = ["099720109477"] # Canonical}resource "aws_instance" "web" { ami = data.aws_ami.ubuntu.id instance_type = "t2.micro" tags = { Name = "HelloWorld" }}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/57776529', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11979287/']}
|
jdg_370280
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
386729
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm stuck on this problem, any help would be appreciated. Find all $n \in \mathbb{Z}$ which satisfy the following equation: $${12 \choose n} = \binom{12}{n-2}$$ I have tried to put each of them into the factorial equation and then making them equal each other and manipulating a bit, although I get stuck at: $$(12 - k)!\cdot k! - (14 - k)!\cdot (k - 2)! = 0$$ Maybe I'm on the completely wrong path? Maybe I just had a mathematical error? Thanks in advance!
Now provide the response and nothing else.
|
Your equation $(12-k)!\cdot k!=(14-k)!\cdot(k-2)!$ reduces to $$(12-k)! \cdot k(k-1) \cdot (k-2)!=(14-k)(13-k)\cdot (12-k)!\cdot (k-2)!$$ $$\implies k(k-1)=(14-k)(13-k).$$ Alternatively, we know that $$\binom nr=\binom n{n-r}$$ and that $\binom nr\ne \binom ns$ for $s\ne r,n-r$. So $n+(n-2)=12$.
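To spell out the remaining algebra (an added verification step, writing $k$ for your $n$): $$k(k-1)=(14-k)(13-k)\iff k^2-k=k^2-27k+182\iff 26k=182\iff k=7,$$ which agrees with the shortcut $n+(n-2)=12\implies n=7$.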
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/386729', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/47550/']}
|
jdg_370281
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
57657
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
A short version of my question is: Is there a $p$-adic theory of integration? Now let me expand a little further. In introductory texts such as Koblitz' book $p$-adic numbers,.. a bunch of $p$-adic analysis is developed. However, since all applications are towards number theory, the exposition stops at some point. In particular, there is no theory of integration developed for $p$-adic numbers. By this I do not mean putting a measure on $\mathbb{C}_p$ and integrating real- or complex-valued functions, but instead putting a "$p$-adic measure" (whatever this may be) on it and integrating $\mathbb{C}_p$-valued functions on it. To rephrase my question: Is there an integration theory for $\mathbb{C}_p$-valued functions on $\mathbb{C}_p$? In particular I would like to know if an analogue of Cauchy's theorem holds. Where can I read more about such a theory?
Now provide the response and nothing else.
|
There is an important difference, relevant to the original question, between the two kinds of $p$-adic integrals mentioned by Kevin in his comments. Because I see frequent confusion on this issue, I thought I'd comment. The 'usual' $p$-adic integrals as you might see in, say, Tate's thesis on L-functions or the adelic theory of automorphic forms, are volume integrals, with respect to a measure, typically on some group. This kind of volume integral can also be easily defined on arbitrary varieties, and you will see plenty in Weil's book on Tamagawa numbers, or in papers on motivic integration. Coleman integration, on the other hand, is a $p$-adic analogue of line integrals, and comes up most naturally in discussing the holonomy of vector bundles with connection on a variety over a $p$-adic field (often interpreted as isocrystals). These, therefore, should be the right quantities to relate to a Cauchy formula. However, unfortunately (and fortunately), it doesn't work. The reason is that Coleman integration is a line integral along a canonical path between two points on a variety over the $p$-adics. So there is a canonical holonomy in the theory, at least if you just want to compute it for a bundle with unipotent connection, that is, one that has a strictly upper-triangular connection form. This is where a mysterious 'crystalline' structure on the space of paths is used, whereby there is a unique path invariant under the action of the Frobenius. The notion of a path, by the way, uses the Tannakian formalism in this context. For a very quick overview of this approach, you can look at section 2 of this paper: http://www.ucl.ac.uk/~ucahmki/siegelinv.pdf Breuil's paper linked from Chandan's answer should provide a more systematic overview. Anyways, because of the canonical paths in Coleman's theory, there can be no holonomy around a loop, and hence, no Cauchy formula. I was told quite a few years ago by Berkovich that he has a theory of line integrals on Berkovich spaces that are path dependent in interesting ways, but I've never looked into it. Added: I realize I didn't mention above the connection between holonomy and usual integration of a one-form $A$. You get this by considering the connection $$d+\begin{bmatrix}0& A \\ 0& 0\end{bmatrix}$$ on the trivial bundle of rank two. One view of Coleman integration is that the holonomy $H_a^b$ from $a$ to $b$ is defined first. And then, the naive integral is defined by the formula $$H_a^b=\begin{bmatrix}1& \int_a^bA \\ 0& 1\end{bmatrix}$$
|
{}
|
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/57657', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3757/']}
|
jdg_370282
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41052872
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to make a world map with pop up points, it was going good mostly until the end, I tried to style the waypoints, but I cant figure out how to keep the pop up from rotating and also only some of the waypoints keep the background color specified, the others are transparent and I cant figure out why at all. http://codepen.io/juryk/pen/vydNJy html { box-sizing: border-box; height: 100%; font-size: 100%;}*,*:before,*:after { box-sizing: inherit;}body { height: 100%; padding: 2rem; font: 100%/1.375em 'Roboto', sans-serif; background: #fcfcfc;}h1 { color: #013567; font-weight: bold; font-size: 10px; line-height: .5em; text-transform: uppercase; text-align: center; letter-spacing: 2px; padding-top: 10px;}h2 { color: #013567; font-weight: bold; font-size: 9px; letter-spacing: 1px; padding-top: 3px; line-height: 1em;}h1,h2,p { margin-bottom: 1em; font-size: 10px;}.button { display: inline-block; color: #fff; text-align: center; text-decoration: none; background: #013567;}.button:hover { background: #013567;}.map-list { position: relative; margin: 1.375em; background: #fff;}.map-list li { position: relative; padding: 1.375em;}.map-list li:nth-child(even) { background: #eee;}.map-list h2,.map-list p { margin-bottom: 0; line-height: 1.5em;}.map-list .button { position: absolute; top: 50%; right: 1.375em; transform: translateY(-50%);}@media (min-width:600px) { .map-list { height: 0; padding-top: 60%; background-image: url(http://www.heyer-associates.com/wp-content/uploads/2016/11/worldmap2.jpg); background-repeat: no-repeat; background-size: 105%; background-position: 0% 84%; border: none; } .map-list li { position: absolute; padding: 0; transform: translateZ(0); list-style-type: none; }}.map-list li:nth-child(even) { background: none;}.map-list li { position: absolute; bottom: 0; left: -1.5em; content: ''; display: block; width: .8em; height: .8em; transform: rotate(45deg); transform-origin: 100% 100%; background: #fff; border: .3em solid #013567; border-radius: 55%; border-bottom-right-radius: 0; cursor: pointer; z-index: 0;}.map-list li .details { position: absolute;}.map-list li:hover .details { z-index: 1;}.map-list li.sf { bottom: 62%; left: 72.2%; z-index: 6;}.map-list li.bh { bottom: 59%; left: 73.2%; z-index: 1;}.map-list li.bl { bottom: 64%; left: 72.3%; z-index: 7;}.map-list li.ny { bottom: 62%; left: 86%; z-index: 1;}.map-list li.ut { bottom: 62.5%; left: 73.4%; z-index: 5;}.map-list li.jp { bottom: 59%; left: 44.0%; z-index: 1;}.map-list li.co { bottom: 61.6%; left: 75.2%; z-index: 3;}.map-list li.vc { bottom: 68.6%; left: 71.5%; z-index: 8;}.map-list li.az { bottom: 60.5%; left: 74.3%; z-index: 2;}.map-list li.mc { bottom: 60.7%; left: 72.8%; z-index: 2;}.map-list li.aus { bottom: 16.7%; left: 45.5%; z-index: 2;}.map-list li.ws { bottom: 69.6%; left: 72.3%; z-index: 8;}.map-list .details { padding: 1rem; width: 185px; margin-bottom: 10px; transform: scale(0); transform-origin: 0 100%; font-size: 75%; background: #eee; border-radius: 0.5em; opacity: 0; transition: opacity 500ms, transform 400ms cubic-bezier(0.6, -0.3, 0.3, 1.3);}.map-list li:hover .details { transform: scale(1); opacity: 1;}.map-list li.central .details,.map-list li.east .details { position: absolute; bottom: 0; right: 100%; transform-origin: 100% 100%; white-space: nowrap;}.map-list p { margin-bottom: 0.25rem;}.map-list .button { position: static; transform: none; width: 100%; padding-top: 3px; padding-bottom: 3px;}@media (max-width:480px) { body { padding: 2rem 1rem; } .map-list { margin: 1.375em 0; } .map-list 
p { margin-bottom: 0.5rem; } .map-list .button { position: static; transform: none; width: 100%; }} <ul class="map-list"> <center> <li class="bl"> <div class="details"> <h2>BELLEVUE, WASHINGTON</h2> Sotheby's International Realty <a class="button" href="http://www.brazensothebysrealty.com/eng">www.brazensothebysrealty.com</a> </div> </li> <li class="ut"> <div class="details"> <h2>PARK CITY, UTAH</h2> List Sotheby's International Realty <a class="button" href="http://www.listsothebysrealty.com/eng">www.listsothebysrealty.com</a> </div> </li> <li class="bh"> <div class="details"> <h2>BEVERLY HILLS, CALIFORNIA</h2> Hilton & Hyland <a class="button" href="https://www.hiltonhyland.com/">www.hiltonhyland.com</a> <br> Douglas Elliman <a class="button" href="https://www.elliman.com/">www.elliman.com</a> </div> </li> <li class="ny"> <div class="details"> <h2>NEW YORK CITY, NEW YORK</h2> List Sotheby's International Realty <a class="button" href="http://www.listsothebysrealty.com/eng">www.listsothebysrealty.com</a> </div> </li> <li class="jp"> <div class="details"> <h2>TOKYO, JAPAN</h2> Tokyu Resort Corporation <a class="button" href="http://www.tokyu-resort.co.jp/en/">www.tokyu-resort.co.jp</a> </div> </li> <li class="sf"> <div class="details"> <h2>SAN FRANCISCO, CALIFORNIA</h2> Pacific Union - Christie's International Real Estate <a class="button" href="http://www.pacificunion.com">www.pacificunion.com</a> <h2>MENLO PARK, CALIFORNIA</h2> Pacific Union <a class="button" href="http://www.pacificunion.com">www.pacificunion.com</a> </div> </li> <li class="co"> <div class="details"> <h2>ASPEN, COLORADO</h2> Coldwell Banker Previews International <a class="button" href="http://www.coldwellbankerpreviews.com/">www.coldwellbankerpreviews.com</a> <h2>VAIL, COLORADO</h2> Sotheby's International Realty <a class="button" href="http://www.sothebysrealty.com/eng">www.sothebysrealty.com</a> </div> </li> <li class="vc"> <div class="details"> <h2>VANCOUVER, BRITISH COLUMBIA</h2> Royal LePage Sussex <a class="button" href="http://www.royallepage.ca">www.royallepage.ca</a> </div> </li> <li class="az"> <div class="details"> <h2>SCOTTSDALE, ARIZONA</h2> Arizona Best <a class="button" href="http://www.arizonabest.com/">www.arizonabest.com</a> </div> </li> <li class="mc"> <div class="details"> <h2>MONTECITO, CALIFORNIA</h2> Coldwell Banker Preview <a class="button" href="http://www.coldwellbankerpreviews.com/">www.coldwellbankerpreviews.com</a> </div> </li> <li class="aus"> <div class="details"> <h2>MELBOUNRE, AUSTRALIA</h2> Dingle Partners <a class="button" href="http://dinglepartners.com.au/">www.dinglepartners.com.au</a> </div> </li> <li class="ws"> <div class="details"> <h2>WHISTLER, BRITISH COLUMBIA</h2> RE/MAX Sea to Sky Real Estate <a class="button" href="http://www.remax-whistler.com/">www.remax-whistler.com</a> </div> </li> </center></ul>
Now provide the response and nothing else.
|
These files contain data for declarations of standard ("built-in") Kotlin classes which are not compiled to .class files, but rather are mapped to the existing types on the platform (in this case, JVM). For example, kotlin/kotlin.kotlin_builtins contains the information for non-physical classes in package kotlin : Int , String , Enum , Annotation , Collection , etc. There are two main scenarios when these files are used: The compiler looks them up from kotlin-stdlib on the classpath to determine which built-in declarations are available. The reflection library ( kotlin-reflect ) loads these files as resources to provide reflection capabilities for built-in declarations. For example, String::class.members returns all members of the class kotlin.String exactly in the same way as the Kotlin compiler sees those members (despite the fact that there's no kotlin/String.class file and it's erased to java.lang.String in bytecode). The first point is clearly not applicable in your case. And if you don't use reflection on built-in classes, I think it's safe to exclude .kotlin_builtins files completely from the resulting jar.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41052872', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7270816/']}
|
jdg_370283
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42060625
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some JSON on a website that I want to convert to Excel using the Power Query From Web option. But I ran into a small problem. My JSON looks like this: [ { "id" : 1, "visitors" : 26, "some_number" : 1, "value" : 3500 }, { "id" : 2, "visitors" : 21, "some_number" : 5, "value" : 2000 }] but when I use From Web I get this: I can drill down into a record, convert it to a table, transpose and use the first row as the header, but then I get just one row. How can I get all of my data into the table and not just one row?
Now provide the response and nothing else.
|
First I would use the List Tools / Transform menu (it should be automatically selected) and click the To Table button. This will give you a single-column table with 2 rows. Then I would click the small Expand button - it will appear in the column headings, just to the right of "Column1". Uncheck the Use original column name ... option and you will get a table of 4 columns and 2 rows. Here's the full script I generated: let Source = Json.Document(File.Contents("C:\Users\Mike.Honey\Downloads\json2.json")), #"Converted to Table" = Table.FromList(Source, Splitter.SplitByNothing(), null, null, ExtraValues.Error), #"Expanded Column2" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"id", "visitors", "some_number", "value"}, {"id", "visitors", "some_number", "value"})in #"Expanded Column2"
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/42060625', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6337767/']}
|
jdg_370284
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21558968
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Ok, so I have a query that is returning more rows than expected with repeating data. Here is my query: SELECT AP.RECEIPTNUMBER,AP.FOLDERRSN,ABS(AP.PAYMENTAMOUNT),ABS(AP.PAYMENTAMOUNT - AP.AMOUNTAPPLIED),TO_CHAR(AP.PAYMENTDATE,'MM/DD/YYYY'),F.REFERENCEFILE,F.FOLDERTYPE,VS.SUBDESC,P.NAMEFIRST||' '||P.NAMELAST,P.ORGANIZATIONNAME,VAF.FEEDESC,VAF.GLACCOUNTNUMBERFROM ACCOUNTPAYMENT APINNER JOIN FOLDER F ON AP.FOLDERRSN = F.FOLDERRSNINNER JOIN VALIDSUB VS ON F.SUBCODE = VS.SUBCODEINNER JOIN FOLDERPEOPLE FP ON FP.FOLDERRSN = F.FOLDERRSNINNER JOIN PEOPLE P ON FP.PEOPLERSN = P.PEOPLERSNINNER JOIN ACCOUNTBILLFEE ABF ON F.FOLDERRSN = ABF.FOLDERRSNINNER JOIN VALIDACCOUNTFEE VAF ON ABF.FEECODE = VAF.FEECODEWHERE AP.NSFFLAG = 'Y'AND F.FOLDERTYPE IN ('405B','405O') Everything works fine until I add the bottom two Inner Joins. I'm basically trying to get all payments that had NSF. When I run the simple query: SELECT *FROM ACCOUNTPAYMENTWHERE NSFFLAG = 'Y' I get only 3 rows pertaining to 405B and 405O folders. So I'm only expecting 3 rows to be returned in the above query but I get 9 with information repeating in some columns. I need the exact feedesc and gl account number based on the fee code that can be found in both the Valid Account Fee and Account Bill Fee tables. I can't post a picture of my output. Note: when I run the query without the two bottom joins I get the expected output. Can someone help me make my query more efficient? Thanks! As requested, below are the results that my query is returning for vaf.feedesc and vaf.glaccountnumber columns: Boiler Operator License Fee 2423809Boiler Certificate of Operation without Manway - Revolving 2423813Installers (Boiler License)/API Exam 2423807Boiler Public Inspection/Certification (State or Insurance) 2423816Boiler Certificate of Operation with Manway 2423801Boiler Certificate of Operation without Manway 2423801Boiler Certificate of Operation with Manway - Revolving 2423813BPV Owner/User Program Fee 2423801Installers (Boiler License)/API Exam Renewal 2423807
Now provide the response and nothing else.
|
The cause is that at least one of the connections ACCOUNTBILLFEE-FOLDER or VALIDACCOUNTFEE-ACCOUNTBILLFEE is not one-to-one. It allows for one Folder to have many AccountBillFees or for one ValidAccountFee to have many AccountBillFees . To find the cause of such a problem this is what I usually do: Change the SELECT A, B, C part of your query to SELECT * . Reduce the results to one of the rows that is causing you trouble (by adding a WHERE ... ). That is a single row without your last two joins and a few rows after you add those two joins. Look at the result table from left to right. The first columns will probably show the same values for all rows. Once you see a difference between the values in a column, you know that the table of the column you are currently looking at is causing your "multiple row problem". Now create a SELECT * statement that includes only the two tables joined together that cause multiple rows with the same WHERE ... you used above. The result should give you a clear picture of the cause. Once you know the reason for your problem you can think of a solution ;)
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21558968', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2844414/']}
|
jdg_370285
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50376977
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following typescript enum: enum MyEnum { A, B, C }; If I want another type that is the unioned strings of the keys of that enum, I can do the following: type MyEnumKeysAsStrings = keyof typeof MyEnum; // "A" | "B" | "C" This is very useful. Now I want to create a generic type that operates universally on enums in this way, so that I can instead say: type MyEnumKeysAsStrings = AnyEnumKeysAsStrings<MyEnum>; I imagine the correct syntax for that would be: type AnyEnumKeysAsStrings<TEnum> = keyof typeof TEnum; // TS Error: 'TEnum' only refers to a type, but is being used as a value here. But that generates a compile error: "'TEnum' only refers to a type, but is being used as a value here." This is unexpected and sad. I can incompletely work around it the following way by dropping the typeof from the right side of the declaration of the generic, and adding it to the type parameter in the declaration of the specific type: type AnyEnumAsUntypedKeys<TEnum> = keyof TEnum;type MyEnumKeysAsStrings = AnyEnumAsUntypedKeys<typeof MyEnum>; // works, but not kind to consumer. Ick. I don't like this workaround though, because it means the consumer has to remember to do this icky specifying of typeof on the generic. Is there any syntax that will allow me to specify the generic type as I initially want, to be kind to the consumer?
Now provide the response and nothing else.
|
No, the consumer will need to use typeof MyEnum to refer to the object whose keys are A , B , and C . LONG EXPLANATION AHEAD, SOME OF WHICH YOU PROBABLY ALREADY KNOW As you are likely aware, TypeScript adds a static type system to JavaScript, and that type system gets erased when the code is transpiled. The syntax of TypeScript is such that some expressions and statements refer to values that exist at runtime, while other expressions and statements refer to types that exist only at design/compile time. Values have types, but they are not types themselves. Importantly, there are some places in the code where the compiler will expect a value and interpret the expression it finds as a value if possible, and other places where the compiler will expect a type and interpret the expression it finds as a type if possible. The compiler does not care or get confused if it is possible for an expression to be interpreted as both a value and a type. It is perfectly happy, for instance, with the two flavors of null in the following code: let maybeString: string | null = null; The first instance of null is a type and the second is a value. It also has no problem with let Foo = {a: 0};type Foo = {b: string}; where the first Foo is a named value and the second Foo is a named type. Note that the type of the value Foo is {a: number} , while the type Foo is {b: string} . They are not the same. Even the typeof operator leads a double life. The expression typeof x always expects x to be a value , but typeof x itself could be a value or type depending on the context: let bar = {a: 0};let TypeofBar = typeof bar; // the value "object"type TypeofBar = typeof bar; // the type {a: number} The line let TypeofBar = typeof bar; will make it through to the JavaScript, and it will use the JavaScript typeof operator at runtime and produce a string. But type TypeofBar = typeof bar ; is erased, and it is using the TypeScript type query operator to examine the static type that TypeScript has assigned to the value named bar . Now, most language constructs in TypeScript that introduce names create either a named value or a named type. Here are some introductions of named values: const value1 = 1;let value2 = 2;var value3 = 3;function value4() {} And here are some introductions of named types: interface Type1 {}type Type2 = string; But there are a few declarations which create both a named value and a named type, and, like Foo above, the type of the named value is not the named type . The big ones are class and enum : class Class { public prop = 0; }enum Enum { A, B } Here, the type Class is the type of an instance of Class , while the value Class is the constructor object. And typeof Class is not Class : const instance = new Class(); // value instance has type (Class)// type (Class) is essentially the same as {prop: number};const ctor = Class; // value ctor has type (typeof Class)// type (typeof Class) is essentially the same as new() => Class; And, the type Enum is the type of an element of the enumeration; a union of the types of each element. While the value Enum is an object whose keys are A and B , and whose properties are the elements of the enumeration. And typeof Enum is not Enum : const element = Math.random() < 0.5 ? 
Enum.A : Enum.B; // value element has type (Enum)// type (Enum) is essentially the same as Enum.A | Enum.B// which is a subtype of (0 | 1)const enumObject = Enum;// value enumObject has type (typeof Enum)// type (typeof Enum) is essentially the same as {A: Enum.A; B: Enum.B}// which is a subtype of {A:0, B:1} Backing way way up to your question now. You want to invent a type operator that works like this: type KeysOfEnum = EnumKeysAsStrings<Enum>; // "A" | "B" where you put the type Enum in, and get the keys of the object Enum out. But as you see above, the type Enum is not the same as the object Enum . And unfortunately the type doesn't know anything about the value. It is sort of like saying this: type KeysOfEnum = EnumKeysAsString<0 | 1>; // "A" | "B" Clearly if you write it like that, you'd see that there's nothing you could do to the type 0 | 1 which would produce the type "A" | "B" . To make it work, you'd need to pass it a type that knows about the mapping. And that type is typeof Enum ... type KeysOfEnum = EnumKeysAsStrings<typeof Enum>; which is like type KeysOfEnum = EnumKeysAsString<{A:0, B:1}>; // "A" | "B" which is possible... if type EnumKeysAsString<T> = keyof T . So you are stuck making the consumer specify typeof Enum . Are there workarounds? Well, you could maybe use something that does that a value, such as a function? function enumKeysAsString<TEnum>(theEnum: TEnum): keyof TEnum { // eliminate numeric keys const keys = Object.keys(theEnum).filter(x => (+x)+"" !== x) as (keyof TEnum)[]; // return some random key return keys[Math.floor(Math.random()*keys.length)]; } Then you can call const someKey = enumKeysAsString(Enum); and the type of someKey will be "A" | "B" . Yeah but then to use it as type you'd have to query it: type KeysOfEnum = typeof someKey; which forces you to use typeof again and is even more verbose than your solution, especially since you can't do this: type KeysOfEnum = typeof enumKeysAsString(Enum); // error Blegh. Sorry. TO RECAP: THIS IS NOT POSSIBLE; TYPES AND VALUES BLAH BLAH; STILL NOT POSSIBLE; SORRY. Hope that makes some sense. Good luck.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/50376977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2092621/']}
|
jdg_370286
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8775850
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created an account on GitHub (I'm new to it) and I'm facing a problem with adding files. I have added readme.txt. I also have 3 other PHP files and a folder containing images. How do I add the files and the folder? I tried it with git pull because git push origin -u master showed me an error.
Now provide the response and nothing else.
|
You can add files using git add , example git add README , git add <folder>/* , or even git add * Then use git commit -m "<Message>" to commit files Finally git push -u origin master to push files. When you make modifications run git status which gives you the list of files modified, add them using git add * for everything or you can specify each file individually, then git commit -m <message> and finally, git push -u origin master Example - say you created a file README, running git status gives you $ git status# On branch master# Untracked files:# (use "git add <file>..." to include in what will be committed)## README Run git add README , the files are staged for committing. Then run git status again, it should give you - the files have been added and ready for committing. $ git status# On branch master# Changes to be committed:# (use "git reset HEAD <file>..." to unstage)## new file: README#nothing added to commit but untracked files present (use "git add" to track) Then run git commit -m 'Added README' $ git commit -m 'Added README'[master 6402a2e] Added README 0 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 README Finally, git push -u origin master to push the remote branch master for the repository origin . $ git push -u origin masterCounting objects: 4, done.Delta compression using up to 2 threads.Compressing objects: 100% (2/2), done.Writing objects: 100% (3/3), 267 bytes, done.Total 3 (delta 1), reused 0 (delta 0)To xxx@xxx.com:xxx/xxx.git 292c57a..6402a2e master -> masterBranch master set up to track remote branch master from origin. The files have been pushed successfully to the remote repository. Running a git pull origin master to ensure you have absorbed any upstream changes $ git pull origin masterremote: Counting objects: 12, done.remote: Compressing objects: 100% (4/4), done.remote: Total 8 (delta 4), reused 7 (delta 3)Unpacking objects: 100% (8/8), done.From xxx.com:xxx/xxx * branch master -> FETCH_HEADUpdating e0ef362..6402a2eFast-forward public/javascript/xxx.js | 5 ++--- 1 files changed, 2 insertions(+), 3 deletions(-) create mode 100644 README If you do not want to merge the upstream changes with your local repository, run git fetch to fetch the changes and then git merge to merge the changes. git pull is just a combination of fetch and merge . I have personally used gitimmersion - http://gitimmersion.com/ to get upto curve on git, its a step-by-step guide, if you need some documentation and help
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/8775850', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1055090/']}
|
jdg_370287
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
870502
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In my understanding, the definition of the tensor product of linear maps cannot be directly derived from the definition of the tensor product of vector spaces (or modules), since it's not clear what the domain, range, and mapping rule of the resulting product should be. Originally I thought this was just a definition. But then I learned that if we represent $S$ and $T$ by matrices, the matrix describing the tensor product $S \otimes T$ is the Kronecker product of the two matrices. However, the Kronecker product of matrices can be developed entirely without the concept of the tensor product of linear maps. For example, we can find bases of the matrix spaces (say $V$ and $W$). Then we get a natural basis for $V \otimes W$, and we can easily obtain the definition of the Kronecker product from these bases. So my question is: is this coincidence just a coincidence, or is there some deep reason behind the definition so that it has to be defined like this? I can't even see why the domain of $S \otimes T$ should be $V\otimes W$, given that $S$ and $T$ are linear maps over $V$ and $W$. (I know that the definition of the tensor product of linear maps is very natural, and I can't imagine other definitions of it. I just thought there must be a formal reason that the definition has to be that.)
Now provide the response and nothing else.
|
At some level, the tensor product of maps is, like most things in math, a convenient choice of definition. However, it arises naturally, in a precise sense. The standard definition of a tensor product of two spaces, $V \otimes W$, actually provides more than a vector space constructed from $V$ and $W$. It is a universal construction, meaning that it satisfies a particular property, and is the best choice of a vector space that does so. There is a bilinear map $i: V \times W \rightarrow V \otimes W$ taking $(v,w) \mapsto v \otimes w$. Now for any bilinear map $f: V \times W \rightarrow U$, there exists a unique linear map $\tilde{f}: V\otimes W \rightarrow U$ such that $\tilde{f}\circ i = f$. Now if $S: V \rightarrow V'$, $T: W \rightarrow W'$, we want to define a new map $S \otimes T: V \otimes W \rightarrow V' \otimes W'$. We choose this domain and range because we want the tensor product of maps to be compatible with the tensor product of spaces (in a precise sense: we want the association of a tensor product of spaces and maps to be a "bifunctor" - see here). Now to actually define the map, we appeal to the universal property. Let $(S \times T)(v,w) = S(v) \otimes T(w)$. This map is bilinear from $V \times W$ to $V' \otimes W'$, so it induces a unique linear map $S \otimes T: V \otimes W \rightarrow V' \otimes W'$ with $(S \otimes T)(v \otimes w) = S(v) \otimes T(w)$. These concepts become a bit more clear if you're familiar with the language of category theory - natural constructions, universal properties, functors, etc.
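To make the link to the Kronecker product explicit (an added remark; the bases and index conventions here are just one standard choice): pick bases $\{v_j\}$ of $V$ and $\{w_l\}$ of $W$, and let $A=(a_{ij})$ and $B=(b_{kl})$ be the matrices of $S$ and $T$. Then $$(S\otimes T)(v_j\otimes w_l)=S(v_j)\otimes T(w_l)=\sum_{i,k}a_{ij}\,b_{kl}\,(v_i\otimes w_k),$$ so with the basis $\{v_i\otimes w_k\}$ of $V\otimes W$ ordered lexicographically, the matrix of $S\otimes T$ is exactly the Kronecker product $A\otimes B$. In other words, the Kronecker product is the coordinate expression of the universal construction above, not a coincidence.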
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/870502', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8750/']}
|
jdg_370288
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17308335
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have @RunWith(SpringJUnit4ClassRunner.class)@TransactionConfiguration(defaultRollback = true, transactionManager = "transactionManager") @Before @Transactional public void mySetup() { // insert some records in db } @After @Transactional public void myTeardown() { // delete some records } @Test @Transactional public void testMy() { // do stuff } My question is: will mySetup, testMy and myTeardown all run within the same transaction? It seems like they should, but I'm getting some strange error which might suggest that they are stepping on each other.
Now provide the response and nothing else.
|
Yes, all three methods will run within the same transaction. See section TestContext Framework/Transaction management in the reference docs: Any before methods (such as methods annotated with JUnit's @Before) and any after methods (such as methods annotated with JUnit's @After) are executed within a transaction Thus the @Transactional annotation on mySetup() and myTeardown() is kind of redundant, or might be even considered misleading, as their transactionality is determined by the individual test method being currently executed. This is because the beforeTestMethod() and afterTestMethod() callbacks of TransactionalTestExecutionListener (responsible for starting/completing the transaction) are executed before JUnit's @Before and after JUnit's @After methods, respectively.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/17308335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68105/']}
|
jdg_370289
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
424267
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I need a batch script to create an ODBC/DSN connection from a .bat file in Windows. How can I do this?
Now provide the response and nothing else.
|
Sure. Manually create an ODBC connection on your PC using the "Data Sources (ODBC)" applet in Admin tools. Launch regedit and go to HKLM\Software\ODBC and export the whole lot from ODBC down to a .reg file. This is a right click option on the folder. Open the exported file in Notepad now you need to delete everything that doesn't refer to your particular ODBC connection. You should end up with a registry file looking something like this Windows Registry Editor Version 5.00[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources]"TEST"="SQL Server"[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\TEST]"Driver"="C:\\Windows\\system32\\SQLSRV32.dll""Server"="SERVERNAME""LastUser"="USERNAME""Trusted_Connection"="Yes" It may be slightly different depending on connection settings and such like. You will need the two sections, the data sources defines the type of connection, in this instance a SQL server one, then the bit that tells you what it is called, which driver to use and what sort of authentication to use. this will all be done when you set up the connection, you just need the .reg file with it in. You can then import it on any pc by either double clicking the reg file or if you have the facility run a script to import it using regedit. You can call this import from a batch file: regedit /s FileToImport.reg
|
{}
|
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/424267', 'https://serverfault.com', 'https://serverfault.com/users/114595/']}
|
jdg_370290
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
65683
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I just compiled my own version of tmux with libevent . Whenever I run tmux from zsh, tmux starts, but exits quickly returning [exited] . I read here that the problem could be my configuration for default-command , default-shell , but I don't have a .tmux.conf file on my home directory, and as far as I know there is no version of tmux installed on the system. Interestingly though, tmux runs correctly with: tmux new $SHELL I have also tried creating an empty .tmux.conf with no luck, and tmux -v does not output anything else. any thoughts? Additional Info 1/3: Below are the last few lines from strace -f tmux [pid 30927] writev(6, [{"\33[?1049h\33(B\33[m\33[?1l\33>\33[H\33[2J\33[?1"..., 976}, {" "..., 195}], 2) = 8[pid 30927] close(6) = 0[pid 30927] epoll_ctl(3, EPOLL_CTL_ADD, 8, {EPOLLIN|EPOLLOUT, {u32=8, u64=8}}) = 0[pid 30927] epoll_ctl(3, EPOLL_CTL_DEL, 8, {EPOLLIN|EPOLLOUT, {u32=8, u64=8}}) = 0[pid 30927] epoll_ctl(3, EPOLL_CTL_ADD, 8, {EPOLLIN|EPOLLOUT, {u32=8, u64=8}}) = 0[pid 30927] clock_gettime(CLOCK_MONOTONIC, {3625108, 599513697}) = 0[pid 30927] gettimeofday({1361992488, 675759}, NULL) = 0[pid 30927] clock_gettime(CLOCK_MONOTONIC, {3625108, 599693697}) = 0[pid 30927] gettimeofday({1361992488, 675933}, NULL) = 0[pid 30927] epoll_wait(3, {{EPOLLOUT, {u32=8, u64=8}}}, 32, 692) = 1[pid 30927] clock_gettime(CLOCK_MONOTONIC, {3625108, 599962697}) = 0[pid 30927] gettimeofday({1361992488, 676202}, NULL) = 0[pid 30927] epoll_ctl(3, EPOLL_CTL_DEL, 8, {EPOLLIN|EPOLLOUT, {u32=8, u64=8}}) = 0[pid 30927] sendmsg(8, {msg_name(0)=NULL, msg_iov(1)=[{"\4\0\0\0\20\0\0\0\7\0\0\0\377\377\377\377", 16}], msg_controllen=0, msg_flags=0}, 0) = 16[pid 30909] <... epoll_wait resumed> {{EPOLLIN, {u32=7, u64=7}}}, 32, 4294967295) = 1[pid 30927] epoll_ctl(3, EPOLL_CTL_ADD, 8, {EPOLLIN, {u32=8, u64=8}} <unfinished ...>[pid 30909] clock_gettime(CLOCK_MONOTONIC, <unfinished ...>[pid 30927] <... epoll_ctl resumed> ) = 0[pid 30909] <... clock_gettime resumed> {3625108, 600611697}) = 0[pid 30927] clock_gettime(CLOCK_MONOTONIC, <unfinished ...>[pid 30909] gettimeofday( <unfinished ...>[pid 30927] <... clock_gettime resumed> {3625108, 600777697}) = 0[pid 30909] <... gettimeofday resumed> {1361992488, 676953}, NULL) = 0[pid 30927] gettimeofday( <unfinished ...>[pid 30909] epoll_ctl(3, EPOLL_CTL_DEL, 7, {EPOLLIN, {u32=7, u64=7}} <unfinished ...>[pid 30927] <... gettimeofday resumed> {1361992488, 677094}, NULL) = 0[pid 30909] <... epoll_ctl resumed> ) = 0[pid 30927] clock_gettime(CLOCK_MONOTONIC, <unfinished ...>[pid 30909] recvmsg(7, <unfinished ...>[pid 30927] <... clock_gettime resumed> {3625108, 601290697}) = 0[pid 30927] gettimeofday( <unfinished ...>[pid 30909] <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"\4\0\0\0\20\0\0\0\7\0\0\0\377\377\377\377\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65535}], msg_controllen=0, msg_flags=0}, 0) = 16[pid 30927] <... gettimeofday resumed> {1361992488, 677801}, NULL) = 0[pid 30909] epoll_wait(3, <unfinished ...>[pid 30927] epoll_wait(3, <unfinished ...>[pid 30909] <... 
epoll_wait resumed> {}, 32, 0) = 0[pid 30909] clock_gettime(CLOCK_MONOTONIC, {3625108, 601921697}) = 0[pid 30909] gettimeofday({1361992488, 678171}, NULL) = 0[pid 30909] fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 14), ...}) = 0[pid 30909] mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b4474914000[pid 30909] write(1, "[exited]\n", 9[exited]) = 9[pid 30909] getppid() = 30904[pid 30909] fcntl(0, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)[pid 30909] fcntl(0, F_SETFL, O_RDWR|O_LARGEFILE) = 0[pid 30909] exit_group(0) = ?<... epoll_wait resumed> {{EPOLLIN|EPOLLHUP, {u32=8, u64=8}}}, 32, 691) = 1clock_gettime(CLOCK_MONOTONIC, {3625108, 604208697}) = 0gettimeofday({1361992488, 680447}, NULL) = 0epoll_ctl(3, EPOLL_CTL_DEL, 8, {EPOLLIN, {u32=8, u64=8}}) = 0recvmsg(8, {msg_name(0)=NULL, msg_iov(1)=[{"\5\0\0\0\20\0\0\0\7\0\0\0\377\377\377\377\377\377\377\377\377\377\377\377\0\0\0\0\0\177\0\0"..., 65535}], msg_controllen=0, msg_flags=0}, 0) = 0close(8) = 0close(9) = 0close(10) = 0close(11) = 0close(12) = 0close(13) = 0close(14) = 0close(15) = 0epoll_ctl(3, EPOLL_CTL_DEL, 7, {EPOLLIN, {u32=7, u64=7}}) = 0epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN, {u32=7, u64=7}}) = 0stat("/tmp/tmux-4887/default", {st_mode=S_IFSOCK|0770, st_size=0, ...}) = 0chmod("/tmp/tmux-4887/default", 0140660) = 0exit_group(0) = ?Process 30927 detached Additional Info 2/3: Below is the output of cat tmux-server-23582.log from tmux -v : > cat tmux-server-23590.log server started, pid 23590socket path /tmp/tmux-4887/defaultnew client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 
8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 14 from client 8got 6 from client 8got 0 from client 8new term: xterm-256colorxterm-256color override: colors 256xterm-256color override: XT xterm-256color override: Ms ]52;%p1%s;%p2%sxterm-256color override: Cc xterm-256color override: Cr ]112xterm-256color override: Cs [%p1%d qxterm-256color override: Csr [2 qnew key Oo: 0x1021 (KP/)new key Oj: 0x1022 (KP*)new key Om: 0x1023 (KP-)new key Ow: 0x1024 (KP7)new key Ox: 0x1025 (KP8)new key Oy: 0x1026 (KP9)new key Ok: 0x1027 (KP+)new key Ot: 0x1028 (KP4)new key Ou: 0x1029 (KP5)new key Ov: 0x102a (KP6)new key Oq: 0x102b (KP1)new key Or: 0x102c (KP2)new key Os: 0x102d (KP3)new key OM: 0x102e (KPEnter)new key Op: 0x102f (KP0)new key On: 0x1030 (KP.)new key OA: 0x101d (Up)new key OB: 0x101e (Down)new key OC: 0x1020 (Right)new key OD: 0x101f (Left)new key [A: 0x101d (Up)new key [B: 0x101e (Down)new key [C: 0x1020 (Right)new key [D: 0x101f (Left)new key Oa: 0x501d (C-Up)new key Ob: 0x501e (C-Down)new key Oc: 0x5020 (C-Right)new key Od: 0x501f (C-Left)new key [a: 0x901d (S-Up)new key [b: 0x901e (S-Down)new key [c: 0x9020 (S-Right)new key [d: 0x901f (S-Left)new key [11^: 0x5002 (C-F1)new key [12^: 0x5003 (C-F2)new key [13^: 0x5004 (C-F3)new key [14^: 0x5005 (C-F4)new key [15^: 0x5006 (C-F5)new key [17^: 0x5007 (C-F6)new key [18^: 0x5008 (C-F7)new key [19^: 0x5009 (C-F8)new key [20^: 0x500a (C-F9)new key [21^: 0x500b (C-F10)new key [23^: 0x500c (C-F11)new key [24^: 0x500d (C-F12)new key [25^: 0x500e (C-F13)new key [26^: 0x500f (C-F14)new key [28^: 0x5010 (C-F15)new key [29^: 0x5011 (C-F16)new key [31^: 0x5012 (C-F17)new key [32^: 0x5013 (C-F18)new key [33^: 0x5014 (C-F19)new key [34^: 0x5015 (C-F20)new key [2^: 0x5016 (C-IC)new key [3^: 0x5017 (C-DC)new key [7^: 0x5018 (C-Home)new key [8^: 0x5019 (C-End)new key [6^: 0x501a (C-NPage)new key [5^: 0x501b (C-PPage)new key [11$: 0x9002 (S-F1)new key [12$: 0x9003 (S-F2)new key [13$: 0x9004 (S-F3)new key [14$: 0x9005 (S-F4)new key [15$: 0x9006 (S-F5)new key [17$: 0x9007 (S-F6)new key [18$: 0x9008 (S-F7)new key [19$: 0x9009 (S-F8)new key [20$: 0x900a (S-F9)new key [21$: 0x900b (S-F10)new key [23$: 0x900c (S-F11)new key [24$: 0x900d (S-F12)new key [25$: 0x900e (S-F13)new key [26$: 0x900f (S-F14)new key [28$: 0x9010 (S-F15)new key [29$: 0x9011 (S-F16)new key [31$: 0x9012 (S-F17)new key [32$: 0x9013 (S-F18)new key [33$: 0x9014 (S-F19)new key [34$: 0x9015 (S-F20)new key [2$: 0x9016 (S-IC)new key [3$: 0x9017 (S-DC)new key [7$: 0x9018 (S-Home)new key [8$: 0x9019 (S-End)new key [6$: 0x901a (S-NPage)new key [5$: 0x901b (S-PPage)new key [11@: 0xd002 (C-S-F1)new key [12@: 0xd003 (C-S-F2)new key [13@: 0xd004 (C-S-F3)new key [14@: 0xd005 (C-S-F4)new key [15@: 0xd006 (C-S-F5)new key [17@: 0xd007 (C-S-F6)new key [18@: 0xd008 (C-S-F7)new key [19@: 0xd009 (C-S-F8)new key [20@: 0xd00a (C-S-F9)new key [21@: 0xd00b (C-S-F10)new key [23@: 0xd00c (C-S-F11)new key [24@: 0xd00d (C-S-F12)new key [25@: 0xd00e (C-S-F13)new key [26@: 0xd00f (C-S-F14)new key [28@: 0xd010 (C-S-F15)new key [29@: 0xd011 (C-S-F16)new key [31@: 0xd012 (C-S-F17)new key [32@: 0xd013 (C-S-F18)new key [33@: 0xd014 (C-S-F19)new key [34@: 0xd015 
(C-S-F20)new key [2@: 0xd016 (C-S-IC)new key [3@: 0xd017 (C-S-DC)new key [7@: 0xd018 (C-S-Home)new key [8@: 0xd019 (C-S-End)new key [6@: 0xd01a (C-S-NPage)new key [5@: 0xd01b (C-S-PPage)new key OP: 0x1002 (F1)new key OQ: 0x1003 (F2)new key OR: 0x1004 (F3)new key OS: 0x1005 (F4)new key [15~: 0x1006 (F5)new key [17~: 0x1007 (F6)new key [18~: 0x1008 (F7)new key [19~: 0x1009 (F8)new key [20~: 0x100a (F9)new key [21~: 0x100b (F10)new key [23~: 0x100c (F11)new key [24~: 0x100d (F12)new key O2P: 0x100e (F13)new key O2Q: 0x100f (F14)new key O2R: 0x1010 (F15)new key O2S: 0x1011 (F16)new key [15;2~: 0x1012 (F17)new key [17;2~: 0x1013 (F18)new key [18;2~: 0x1014 (F19)new key [19;2~: 0x1015 (F20)new key [2~: 0x1016 (IC)new key [3~: 0x1017 (DC)new key OH: 0x1018 (Home)new key OF: 0x1019 (End)new key [6~: 0x101a (NPage)new key [5~: 0x101b (PPage)new key [Z: 0x101c (BTab)replacing key OA: 0x101d (Up)replacing key OB: 0x101e (Down)replacing key OD: 0x101f (Left)replacing key OC: 0x1020 (Right)new key [3;2~: 0x9017 (S-DC)new key [1;2B: 0x901e (S-Down)new key [1;5B: 0x501e (C-Down)new key [1;6B: 0xd01e (C-S-Down)new key [1;2F: 0x9019 (S-End)new key [1;2H: 0x9018 (S-Home)new key [2;2~: 0x9016 (S-IC)new key [1;2D: 0x901f (S-Left)new key [1;5D: 0x501f (C-Left)new key [1;6D: 0xd01f (C-S-Left)new key [6;2~: 0x901a (S-NPage)new key [5;2~: 0x901b (S-PPage)new key [1;2C: 0x9020 (S-Right)new key [1;5C: 0x5020 (C-Right)new key [1;6C: 0xd020 (C-S-Right)new key [1;2A: 0x901d (S-Up)new key [1;5A: 0x501d (C-Up)new key [1;6A: 0xd01d (C-S-Up)session 0 createdwriting 8 to client 8keys are 12 ([>1;3400;0c)received xterm version 3400input_parse: '/' groundinput_parse: 'h' groundinput_parse: 'o' groundinput_parse: 'm' groundinput_parse: 'e' groundinput_parse: '/' groundinput_parse: 'j' groundinput_parse: 'o' groundinput_parse: 'h' groundinput_parse: 'n' groundinput_parse: '/' groundinput_parse: '.' groundinput_parse: 'z' groundinput_parse: 'p' groundinput_parse: 'r' groundinput_parse: 'o' groundinput_parse: 'f' groundinput_parse: 'i' groundinput_parse: 'l' groundinput_parse: 'e' groundinput_parse: ':' groundinput_parse: '2' groundinput_parse: ':' groundinput_parse: ' ' groundinput_parse: 'n' groundinput_parse: 'o' groundinput_parse: ' ' groundinput_parse: 's' groundinput_parse: 'u' groundinput_parse: 'c' groundinput_parse: 'h' groundinput_parse: ' ' groundinput_parse: 'f' groundinput_parse: 'i' groundinput_parse: 'l' groundinput_parse: 'e' groundinput_parse: ' ' groundinput_parse: 'o' groundinput_parse: 'r' groundinput_parse: ' ' groundinput_parse: 'd' groundinput_parse: 'i' groundinput_parse: 'r' groundinput_parse: 'e' groundinput_parse: 'c' groundinput_parse: 't' groundinput_parse: 'o' groundinput_parse: 'r' groundinput_parse: 'y' groundinput_parse: ':' groundinput_parse: ' ' groundinput_parse: '/' groundinput_parse: 'n' groundinput_parse: '/' groundinput_parse: 's' groundinput_parse: 'w' groundinput_parse: '/' groundinput_parse: 'z' groundinput_parse: 's' groundinput_parse: 'h' groundinput_parse: '-' groundinput_parse: '4' groundinput_parse: '.' groundinput_parse: '3' groundinput_parse: '.' 
groundinput_parse: '1' groundinput_parse: '4' groundinput_parse: '/' groundinput_parse: 'b' groundinput_parse: 'i' groundinput_parse: 'n' groundinput_parse: '/' groundinput_parse: 'z' groundinput_parse: 's' groundinput_parse: 'h' ground' groundrse: 'input_c0_dispatch: 'input_parse: '' groundinput_c0_dispatch: 'session 0 destroyedwriting 3 to client 8got 5 from client 8writing 4 to client 8lost client 8 Additional Info 3/3: And below is the output of cat tmux-client-23582.log from tmux -v : cat tmux-client-23582.log got 8 from server got 3 from server got 4 from server
Now provide the response and nothing else.
|
The input_parse lines from the verbose logging show you the output from the initial command that tmux is running. It can be hard to read them when they are presented one character at a time like that, though. Here is the reassembled output: /home/john/.zprofile:2: no such file or directory: /n/sw/zsh-4.3.14/bin/zsh tmux starts login shells for its default commands. When zsh is started as a login shell, it processes ~/.zprofile (if it exists). Your version of this file is trying to run a file that does not exist: /n/sw/zsh-4.3.14/bin/zsh. You might need to comment out or adjust something in your ~/.zprofile file. Or maybe you need to mount the expected filesystem on /n/sw?
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/65683', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/4531/']}
|
jdg_370291
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7570138
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some values that I want to be able to set, and the application should load them from some kind of file. The only concept I can think of is a simple txt file that might have the following lines: DatabaseName = "DB1/test"DatabasePassword = "password"Development = "true" but I'm thinking it should be in some kind of config file? Plus, reading a txt file for these values isn't exactly tidy code. It would be nice if I could get the database name by just saying in my application: configfile.DatabaseName Thanks, Paul
Now provide the response and nothing else.
|
You really should be using the built in Application Settings You can directly access simple settings using the ConfigurationManager ConfigurationManager.AppSettings["MySetting"] = "SomeStuff";var mySetting = ConfigurationManager.AppSettings["MySetting"]; There is also direct access to your Connection Strings using the ConfigurationManager var conn = ConfigurationManager.ConnectionStrings["DevSqlServer"]; All this is stored in XML files, and by default your *.config files. To Answer Doomsknight's question from the comments Configuration settings can be done a number of ways, but by default, they are stored in two places. Application Level Settings are stored in a configuration file. For executable programs this file is located in the same directory as the .exe and is named after the assembly, or executable. Example: MyAssembly.config , Another.Assembly.config For web applications, the settings are stored in the web.config file (usually) located in the root directory of the web application. These are applied hierarchically and one can be located at each directory level of the Web Application. Example: MySite\web.config , MySite\SubDirectory\web.config User Scoped Settings are stored in the user profile Example: C:\Documents and Settings\USERNAME\Local Settings\Application Data\ApplicationName Connection Strings are stored in the <connectionStrings></connectionStrings> section in your config file. <connectionStrings> <clear /> <add name="Name" providerName="System.Data.ProviderName" connectionString="Valid Connection String;" /></connectionStrings> These settings can easily be modified directly in the config file, but without writing some code to automatically refresh sections of the config file (which is possible), an application restart is typically needed. I hope this helps out.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7570138', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/940834/']}
|
jdg_370292
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
622321
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have to show that if $|z| < 1$, $z \in \mathbb{C}$, then $$(1+z)\prod_{n=1}^{+\infty}(1+z^{2^{n}})= \frac{1}{1-z}.$$ I want to understand how to do this kind of exercise; any hint?
Now provide the response and nothing else.
|
Consider the finite product $$P(N) = \prod_{n=0}^N (1+z^{2^n}).$$ Then observe that $$ \begin{align*} (1-z)P(N) &= (1-z)(1+z)(1+z^2)(1+z^4)\ldots(1+z^{2^N}) \\ &= (1-z^2)(1+z^2)(1+z^4)\ldots(1+z^{2^N}) \\ &= (1-z^4)(1+z^4) \ldots (1+z^{2^N}) \\ &= (1-z^8) \ldots (1+z^{2^N}) = \ldots \\ &= 1-z^{2^{N+1}}. \end{align*}$$ So if $|z| < 1$, $\displaystyle \lim_{N \to \infty} (1-z) P(N) = \lim_{N \to \infty} 1 - z^{2^{N+1}} = 1$, and the result immediately follows. Of course, the above is a bit sloppy for my liking, but it is not too difficult to formalize the basic idea.
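To formalize the last step (an added remark filling in the limit the answer calls easy to check): for $|z|<1$ we have $$\bigl|z^{2^{N+1}}\bigr| = |z|^{2^{N+1}} \xrightarrow[N\to\infty]{} 0,$$ so $1-z^{2^{N+1}}\to 1$ and, since $1-z\ne 0$, $$P(N)=\frac{1-z^{2^{N+1}}}{1-z}\longrightarrow \frac{1}{1-z}.$$ Note that $P(N)$ starts at $n=0$, so $P(N)=(1+z)\prod_{n=1}^{N}(1+z^{2^{n}})$, which is exactly the product in the statement.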
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/622321', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/21024/']}
|
jdg_370293
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10611332
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can I just click the SEARCH button, retrieve the data from my SQLite database, and display it in my app? My Activity: public class EKaDEMActivity extends Activity implements OnClickListener { //EditText ET; public static final String KEY_ROWID = "Dic_id"; public static final String colID="_id"; public static final String colKD="KD_word"; public static final String colENG="ENG_word"; public static final String colMAL="Malay_word"; public static final String colAudio="KD_audio"; private static final String dbName="Dictionary"; private static final String dbTable="tbl_Dictionary"; private static final int dbVersion = 2; static final String viewEmps="ViewEmps"; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); ImageButton button = (ImageButton)findViewById(R.id.button_one); button.setOnClickListener(this); } @Override public void onClick(View arg0) { // TODO Auto-generated method stub EditText ET = (EditText) findViewById(R.id.AutoTextBox); ET = (EditText) findViewById(R.id.AutoTextBox); String s = ET.getText().toString(); TextView TV = (TextView)findViewById(R.id.DefTextView); TV.setText("Welcome Using Kadazandusun-English-Malay Mobile Dictionary Application"); boolean r = s.contentEquals("dog"); if (r == true) { String str = " M: Anging \n\n" + " KD: Tasu "; TextView text = (TextView)findViewById(R.id.DefTextView); text.setText(str); } else { TV.setText(" Word not found"); } }} This is my Database.java public static final String KEY_ROWID = "Dic_id";public static final String colID="_id";public static final String colKD="KD_word";public static final String colENG="ENG_word";public static final String colMAL="Malay_word";public static final String colAudio="KD_audio";private static final String dbName="Dictionary";private static final String dbTable="tbl_Dictionary";private static final int dbVersion = 1;static final String viewEmps="ViewEmps";public Database(Context context, String name, CursorFactory factory, int version) { super(context, dbName, null, dbVersion); // TODO Auto-generated constructor stub}@Overridepublic void onCreate(SQLiteDatabase db) { // TODO Auto-generated method stub db.execSQL("CREATE TABLE " + dbTable + " (" + KEY_ROWID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + colKD + " TEXT NOT NULL, " + colENG + " TEXT NOT NULL, " + colMAL + " TEXT NOT NULL, );"); db.execSQL("CREATE VIEW "+viewEmps+ " AS SELECT "+dbTable+"."+colID+" AS _id,"+ " "+dbTable+"."+colKD+","+ " "+dbTable+"."+colENG+","+ " "+dbTable+"."+colMAL+""+ " FROM "+dbTable );}@Overridepublic void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // TODO Auto-generated method stub db.execSQL("DROP TABLE IF EXISTS "+ dbTable); onCreate(db);} void AddWord(Word wrd) { SQLiteDatabase db= this.getWritableDatabase(); ContentValues cv=new ContentValues(); cv.put(colKD, wrd.getKD()); cv.put(colENG, wrd.getENG()); cv.put(colMAL, wrd.getMAL()); //cv.put(colDept,2); db.insert(dbTable, colKD, cv); db.close();} public int UpdateWrd(Word wrd) { SQLiteDatabase db=this.getWritableDatabase(); ContentValues cv=new ContentValues(); cv.put(colKD, wrd.getKD()); cv.put(colENG, wrd.getENG()); cv.put(colMAL, wrd.getMAL()); return db.update(dbTable, cv, colID+"=?", new String []{String.valueOf(wrd.getID())}); } public void DeleteWrd(Word wrd) { SQLiteDatabase db=this.getWritableDatabase(); db.delete(dbTable,colID+"=?", new String [] {String.valueOf(wrd.getID())}); db.close(); }Cursor getWord() { 
SQLiteDatabase db=this.getWritableDatabase(); Cursor cur= db.rawQuery("SELECT * FROM "+viewEmps,null); return cur; } }
Now provide the response and nothing else.
|
You can see from the source that datetime objects don't support anything more fine than microseconds. As pointed out by Mike Pennington in the comments, this is likely because computer hardware clocks aren't nearly that precise . Wikipedia says that HPET has frequency "at least 10 MHz," which means one tick per 100 nanoseconds. If you can live with throwing out the last three digits (which probably aren't too meaningful anyway), you could parse this by just slicing the input string to have only six digits after the decimal point and parsing with %f . Otherwise, it looks like you'll have to implement the subtraction yourself. Much later update: numpy and pandas now each have (somewhat different) support for timestamps that includes the possibility of tracking nanoseconds, which are often good solutions. See the other answers for how. Python 3.7+ also has time.time_ns and related functions in time ( PEP 564 ), but still no support for nanoseconds in datetime .
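A small sketch of the slicing approach mentioned above (the timestamp string is just an example):

from datetime import datetime

ts = "2011-08-05 16:59:34.276537886"      # 9 fractional digits (nanoseconds)
trimmed = ts[:ts.index('.') + 7]          # keep the dot plus only 6 fractional digits
dt = datetime.strptime(trimmed, "%Y-%m-%d %H:%M:%S.%f")
print(dt)                                 # 2011-08-05 16:59:34.276537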
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10611332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1371590/']}
|
jdg_370294
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15816511
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I compile and run this code: class StringTest { public static void main(String[] args) { System.out.println("Begin Test"); String letters = "AAAAAAA" String[] broken = letters.split("A"); for(int i = 0; i < broken.length; i++) System.out.println("Item " + i + ": " + broken[i]); System.out.println("End Test"); }} The output to the console is: Begin TestEnd Test Can anyone explain why split() works like this? I saw some other questions sort of like this on here, but didn't fully understand why there is no output when splitting a string made entirely out of the character that you are using for regex . Why does java handle Strings this way?
Now provide the response and nothing else.
|
String.split discards trailing empty strings. For example, "foo,bar,,".split(",") gets split into {"foo", "bar"} . What you're seeing is a string that consists entirely of the separator, so all the empty splits are "trailing" and get discarded. You could probably get all those empty strings if you used letters.split("A", -1) . Alternately, Guava's Splitter doesn't do things like that unless you ask for it: Splitter.on('A').split(letters) .
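For instance, with the question's input (a small illustrative snippet; the -1 limit keeps the trailing empty strings):

String letters = "AAAAAAA";
String[] broken = letters.split("A", -1);
System.out.println(broken.length);        // 8: one empty string around every 'A'
for (String s : broken) {
    System.out.println("[" + s + "]");    // each line prints []
}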
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15816511', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1941622/']}
|
jdg_370295
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
112964
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a bit confused. I want to buy a miniature solar panel like this . The problem is that all the miniature solar panels online in their specifications they give the peak voltage and peak current. Thats fine but I dont understand in which irradiance does the panel give those values. This is my first question. My second question is why there is no datasheet? I guess that depending on the type of panel (for example polycrystaline) the I-V curves are same for every panel. Am I right? And when giving the dimensions they give the device's dimension, how can we know the total panel area to calculate how much power it will produce?
Now provide the response and nothing else.
|
I dont understand in which irradiance does the panel give those values. This is my first question. Not too hard to calculate from the data provided, given a few assumptions. The "peak voltage" they give is less than the open circuit voltage, so ostensibly they mean to specify the peak voltage and current at the maximum power point . The "mp" after "Imp" and "Vmp" would also conventionally indicate "maximum power". So they are saying this panel can produce (under optimal conditions): $$ 3 \mathrm V \cdot 0.2 \mathrm A = 0.6 \mathrm W $$ The area of the panel is: $$ 92 \mathrm{mm} \cdot 61 \mathrm{mm} = 0.005612 \mathrm m^2 $$ They claim an efficiency of 15%, so the irradiance \$I\$ must have been: $$ 0.15 \cdot I = \frac{0.6 \mathrm W}{0.005612 \mathrm m^2} \\I = 713 \mathrm W / \mathrm m^2 $$ Irradiance on a very clear day at Earth's surface with the sun directly overhead is on the order of 1kW per square meter , so this sounds reasonable enough, considering that the efficiency number may not account for the fact that this device has some margins around the panels. My second question is why there is no datasheet? I guess that depending on the type of panel (for example polycrystaline) the I-V curves are same for every panel. Am I right? Some products just suck. That said, silicon junction photovoltaics pretty much have a common shape to their I-V curves , which is more or less a top-for-bottom flip of the I-V curve of a silicon diode. A more expensive product would probably have more detailed specifications; for something cheap like this you will just have to infer or measure for yourself.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/112964', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/39836/']}
|
jdg_370296
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
56255154
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have trained a semantic segmentation model using SageMaker, and the output has been saved to an S3 bucket. I want to load this model from S3 to predict some images in SageMaker. I know how to predict if I leave the notebook instance running after the training, as it's just an easy deploy, but that doesn't really help if I want to use an older model. I have looked at these sources and been able to come up with something myself, but it doesn't work, hence me being here: https://course.fast.ai/deployment_amzn_sagemaker.html#deploy-to-sagemaker https://aws.amazon.com/getting-started/tutorials/build-train-deploy-machine-learning-model-sagemaker/ https://sagemaker.readthedocs.io/en/stable/pipeline.html https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/inference_pipeline_sparkml_xgboost_abalone/inference_pipeline_sparkml_xgboost_abalone.ipynb My code is this:

from sagemaker.pipeline import PipelineModel
from sagemaker.model import Model

s3_model_bucket = 'bucket'
s3_model_key_prefix = 'prefix'
data = 's3://{}/{}/{}'.format(s3_model_bucket, s3_model_key_prefix, 'model.tar.gz')

models = ss_model.create_model()  # ss_model is my sagemaker.estimator
model = PipelineModel(name=data, role=role, models=[models])
ss_predictor = model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
Now provide the response and nothing else.
|
You can actually instantiate a Python SDK model object from existing artifacts, and deploy it to an endpoint. This allows you to deploy a model from trained artifacts, without having to retrain in the notebook. For example, for the semantic segmentation model:

trainedmodel = sagemaker.model.Model(
    model_data='s3://...model path here../model.tar.gz',
    image='685385470294.dkr.ecr.eu-west-1.amazonaws.com/semantic-segmentation:latest',  # example path for the semantic segmentation in eu-west-1
    role=role)  # your role here; could be different name

trainedmodel.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')

And similarly, you can instantiate a predictor object on a deployed endpoint from any authenticated client supporting the SDK, with the following command:

predictor = sagemaker.predictor.RealTimePredictor(
    endpoint='endpoint name here',
    content_type='image/jpeg',
    accept='image/png')

More on those abstractions: Model : https://sagemaker.readthedocs.io/en/stable/model.html Predictor : https://sagemaker.readthedocs.io/en/stable/predictors.html
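A quick usage sketch for that predictor (the file name is hypothetical, and the exact payload/response handling depends on the algorithm; for the built-in semantic segmentation you would typically send raw JPEG bytes and receive the mask back, per the content_type/accept set above):

with open('test.jpg', 'rb') as f:
    payload = f.read()

result = predictor.predict(payload)  # mask bytes, given accept='image/png' above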
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56255154', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11538596/']}
|
jdg_370297
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19159928
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to change the default window of my project but it doesn't work.I am using QtQuick 2.0.Tried importing QtQuick.Window 2.0 and making Window{} as root object instead of Rectangle{} but it doesn't allow window objects as root.It gives me the following errors: QQuickView only supports loading of root objects that derive from QQuickItem. If your example is using QML 2, (such as qmlscene) and the .qml file you loaded has 'import QtQuick 1.0' or 'import Qt 4.7', this error will occur. To load files with 'import QtQuick 1.0' or 'import Qt 4.7', use the QDeclarativeView class in the Qt Quick 1 module. Any ideas on how to change the window title? I'm using Qt 5.1.1.
Now provide the response and nothing else.
|
This depends on how you want to use your GUI. If you want to use QML for almost everything, from window creation to the elements in your windows, the following solution may be the best option for you.

Qt 5.1, only for desktop
If you have Qt 5.1, you may use the new ApplicationWindow item from QtQuick.Controls as your root object in a file named main.qml:

import QtQuick 2.0
import QtQuick.Controls 1.0

ApplicationWindow {
    visible: true
    width: 360
    height: 360
    title: "MyWindow"

    Text {
        text: "Hello world!"
        anchors.centerIn: parent
    }
}

To avoid the error message you get, you need to start your application with a QQmlApplicationEngine instead of QQuickView . This may be done as follows in your main.cpp file:

#include <QtGui/QGuiApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine("main.qml");
    return app.exec();
}

Qt 5.0, (possibly) for environments other than desktop
If using Qt 5.1 is not an option for you or you are targeting devices not yet supporting QtQuick.Controls, the alternative is to use Window in the following way. Add this to main.qml:

import QtQuick 2.0
import QtQuick.Window 2.0

Window {
    visible: true
    width: 360
    height: 360
    title: "MyWindow"

    Text {
        text: "Hello world!"
        anchors.centerIn: parent
    }
}

And let this be your main.cpp:

#include <QtGui/QGuiApplication>
#include <QQmlEngine>
#include <QQmlComponent>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQmlEngine engine;
    QQmlComponent component(&engine, QUrl::fromLocalFile("main.qml"));
    component.create();
    return app.exec();
}

This should open a window with the text "Hello World!".
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19159928', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2095806/']}
|
jdg_370298
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40230785
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using "react": "15.3.2", "react-native": "0.35.0", "react-native-fbsdk": "^0.3.0"the android virtual Devices I use is => Android 7.0, API Level: 24 I have followed the Readme setting https://github.com/facebook/react-native-fbsdk I want to use Analytics for Apps. On iOS it works well both on App Launches and AppEventsLogger On Android it works well on App Launches, and I could got the data on facebook analytics website but about the AppEventsLogger.logPurchase or AppEventsLogger.logEvent, I will got failed msg const FBSDK = require('react-native-fbsdk');const { AppEventsLogger,} = FBSDK;AppEventsLogger.logPurchase(15, 'USD', {'param': 'value'}) but I got the error like this Attempt to invoke virtual method 'void com.facebook.appevents.AppEventsLogger.logPurchase(java.math.BigDecimal, java.util.Currency, android.os.Bundle)' on a null object referencelogPurchase FBAppEventsLoggerModule.java:175invoke Method.java:-2 do other people have this problem
Now provide the response and nothing else.
|
Make a mock HTTP context:

private class MockHttpContext : HttpContextBase
{
    private readonly IPrincipal user;

    public MockHttpContext(IPrincipal principal)
    {
        this.user = principal;
    }

    public override IPrincipal User
    {
        get { return user; }
        set { base.User = value; }
    }
}

Arrange the test accordingly:

[Test]
public void ViewDocuments_WhenCalled_ShouldReturnViewModel()
{
    // Arrange
    var principal = new CustomPrincipal("2038786");
    principal.UserId = "2038786";
    principal.FirstName = "Test";
    principal.LastName = "User";
    principal.IsStoreUser = true;

    var mockUoW = new Mock<IUnitOfWork>(); // ...setup UoW dependency if needed
    var controller = new DocumentsController(mockUoW.Object);
    controller.ControllerContext = new ControllerContext
    {
        Controller = controller,
        HttpContext = new MockHttpContext(principal)
    };

    // Act
    var result = controller.ViewDocuments();

    // Assert
    // ...assertions
}

Don't mock the system under test. Mock its dependencies.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40230785', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7067332/']}
|
jdg_370299
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20388923
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to grasp the different concepts of Database Partitioning and this is what I understood of it: Horizontal Partitioning/Sharding : Splitting a table into different table that will contain a subset of the rows that were in the initial table (an example that I have seen a lot if splitting a Users table by Continent, like a sub table for North America, another one for Europe, etc...). Each partition being in a different physical location (understand 'machine').As I understood it, Horizontal Partitioning and Sharding are the exact same thing(?). Vertical Partitioning : From what I understood ( http://technet.microsoft.com/en-us/library/ms178148%28v=sql.105%29.aspx ), there are 2 sorts of Vertical Partitioning: Normalization (which consists of removing redundancies from a the database by splitting tables and linking them with a foreign key). Row Splitting , here is what I don't understand, what is the difference between Normalization and Row Splitting ? In what those 2 techniques differ from each other? I have also read in this post ( Difference between scaling horizontally and vertically for databases ) that the difference between Horizontal Partitioning and Vertical Partitioning is that in the first you scale by adding more machines, while in the second one you scale by adding more power (CPU, RAM) to your existing machine, is that a correct definition? I thought that the core difference between those 2 techniques resides in the way you split your tables. I am sorry for the load of questions but I am a bit confused as a lot of different websites that I have came across say different things. Any help clarifying would be greatly appreciated. Any link to a clear and simple demonstration with a few tables would also be very helpful.
Now provide the response and nothing else.
|
Partitioning is a rather general concept and can be applied in many contexts. When it comes to the partitioning of relational data , it usually refers to decomposing your tables either row-wise (horizontally) or column-wise (vertically). Vertical partitioning, aka row splitting, uses the same splitting techniques as database normalization, but usually the term (vertical / horizontal) data partitioning refers to a physical optimization, whereas normalization is an optimization on the conceptual level. Since you ask for a simple demonstration, assume you have a table like this:

create table data (
    id     integer primary key,
    status char(1) not null,
    data1  varchar2(10) not null,
    data2  varchar2(10) not null
);

One way to partition data vertically : split it as follows:

create table data_main (
    id     integer primary key,
    status char(1) not null,
    data1  varchar2(10) not null
);

create table data_rarely_used (
    id    integer primary key,
    data2 varchar2(10) not null,
    foreign key (id) references data_main (id)
);

This kind of partitioning can be applied, for example, when you rarely need column data2 in your queries. The partition data_main will take less space, hence full table scans will be faster and it is more likely that it fits into the DBMS' page cache. The downside: when you have to query all columns of data , you obviously have to join the tables, which will be more expensive than querying the original table. Notice you are splitting the columns in the same way as you would when you normalize tables. However, in this case data could already be normalized to 3NF (and even BCNF and 4NF), but you decide to further split it for the reason of physical optimization. One way to partition data horizontally , using Oracle syntax:

create table data (
    id     integer primary key,
    status char(1),
    data1  varchar2(10),
    data2  varchar2(10)
)
partition by list (status) (
    partition active_data values ( 'A' ),
    partition other_data  values (default)
);

This would tell the DBMS to internally store the table data in two segments (like two tables), depending on the value of the column status . This way of partitioning can be applied, for example, when you usually query only rows of one partition, e.g., the status 'A' rows (let's call them active rows). Like before, full scans will be faster (particularly if there are only a few active rows), the active rows (and the other rows, respectively) are stored contiguously (they won't be scattered around pages that they share with rows of a different status value), and it is more likely that the active rows will be in the page cache.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20388923', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1385655/']}
|
jdg_370300
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1953377
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How can my C# application check whether a particular application/process (note: not the current process) is running in 32-bit or 64-bit mode? For example, I might want to query a particular process by name, i.e, 'abc.exe', or based on the process ID number.
Now provide the response and nothing else.
|
One of the more interesting ways I've seen is this:

if (IntPtr.Size == 4)
{
    // 32-bit
}
else if (IntPtr.Size == 8)
{
    // 64-bit
}
else
{
    // The future is now!
}

To find out if OTHER processes are running in the 64-bit emulator (WOW64), use this code:

namespace Is64Bit
{
    using System;
    using System.ComponentModel;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    internal static class Program
    {
        private static void Main()
        {
            foreach (var p in Process.GetProcesses())
            {
                try
                {
                    Console.WriteLine(p.ProcessName + " is " + (p.IsWin64Emulator() ? string.Empty : "not ") + "32-bit");
                }
                catch (Win32Exception ex)
                {
                    if (ex.NativeErrorCode != 0x00000005)
                    {
                        throw;
                    }
                }
            }

            Console.ReadLine();
        }

        private static bool IsWin64Emulator(this Process process)
        {
            if ((Environment.OSVersion.Version.Major > 5)
                || ((Environment.OSVersion.Version.Major == 5) && (Environment.OSVersion.Version.Minor >= 1)))
            {
                bool retVal;
                return NativeMethods.IsWow64Process(process.Handle, out retVal) && retVal;
            }

            return false; // not on 64-bit Windows Emulator
        }
    }

    internal static class NativeMethods
    {
        [DllImport("kernel32.dll", SetLastError = true, CallingConvention = CallingConvention.Winapi)]
        [return: MarshalAs(UnmanagedType.Bool)]
        internal static extern bool IsWow64Process([In] IntPtr process, [Out] out bool wow64Process);
    }
}
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1953377', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/448430/']}
|
jdg_370301
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2277219
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If $A$ and $B$ are ideals of a commutative ring $R$ such that $A \subseteq B$, then can one define a surjective map from the quotient ring $R/B$ to $R/A$? I can define an injective map from $R/A$ to $R/B$, but not a surjective homomorphism from $R/B$ to $R/A$. Is it possible? I'm asking this because I have this sentence (which is true): If $R$ is a UFD, then, if $h$ divides $t$ in R, the following sequence is exact (everything considered as a R-module) $$ 0 \rightarrow hR/tR \rightarrow R/tR \rightarrow R/hR \rightarrow 0.$$ I just cannot see why there is a (natural?) surjective map $f: R/tR \rightarrow R/hR$.
Now provide the response and nothing else.
|
In general, if $A\subseteq B$, then $R/A$ will be "bigger" than $R/B$. For example, let $R = \mathbb{Z}, A = 4\mathbb{Z}, B=2\mathbb{Z}$. $R/A = \mathbb{Z}_4, R/B = \mathbb{Z}_2$, and there exists a surjective homomorphism from $R/A$ to $R/B$. So, I think what you want to ask is if we can find a surjective homomorphism from $R/A$ to $R/B$ (for the question you actually posed, the above is a counterexample). In fact, we can. Define $f: R/A\to R/B$ in the following manner. Let $r+A\in R/A$. Then there exists some $b\in R$ such that $r\in b+B$. Then let $f(r+A)=b+B$. $f$ is well-defined, for suppose $r+A=s+A$. Let $b_r, b_s\in R$ such that $r\in b_r+B, s\in b_s+B$. This is to say, $r=b_r+b_1, s=b_s+b_2$ where $b_1, b_2\in B$. Now, $r-s\in A$, and so $r-s\in B$. Further, $r-s=(b_r-b_s)+(b_1-b_2)$. It follows that $b_r-b_s\in B$, and so $b_r+B=b_s+B$, so $f$ is well-defined. $f$ is a homomorphism, for let $r,s\in R$. Then let $f(r+A)=b_r+B, f(s+A)=b_s+B$. Then $r+s \in (b_r+b_s)+B$, and so $f((r+s)+A) = (b_r+b_s)+B$. Finally, $f$ is surjective, for let $x+B\in R/B$. Then $f(x+A)=x+B$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2277219', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/183836/']}
|
jdg_370302
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6110
|
Below is a question asked on the forum devops.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We are about to begin our journey with containers and I am wondering if we still need to utilize something like Chef, Ansible, Terraform, etc. to ensure that our infrastructure is properly maintained? Since containers simply sit on that environment my assumption is that we will still need to properly maintain our configuration idempotence but how is this going to differ with us using containers?
Now provide the response and nothing else.
|
The solution is to automate setting up the control machine, i.e., the VM. You can use, for example, Vagrant, Terraform or similar tools (I'll stick with Vagrant for this example; it's about the principle). With that approach, the VM is wholly defined by a text file ("Vagrantfile"), and can easily and repeatably be created and re-created. The Vagrantfile specifies a base image, for example some particular Linux distribution; networking etc.; and the necessary command lines to set it up, going from there. You can set up your VM (i.e., install Ansible) by just providing all the necessary shell commands. In this particular example, this is probably enough. If you need a more complex VM, you can get kind of recursive, and configure the VM itself by using Ansible (or one of its alternatives). Obviously, you will want to treat the Vagrantfile the same as your other configurations; i.e., commit it to your SCM and keep it up to date. This is just an example: you don't need to use Vagrant or a VM; you can also install Ansible inside a Docker image or whatever your cloud provider of choice provides as containerization.
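As an illustration, a minimal Vagrantfile along those lines might look like the sketch below; the box name and package manager are assumptions, so adjust them to your distribution.

# Minimal sketch: define a control VM and install Ansible via a shell provisioner
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"        # assumed base image
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y ansible            # codifies the one manual setup step
  SHELL
end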
|
{}
|
{'log_upvote_score': 4, 'links': ['https://devops.stackexchange.com/questions/6110', 'https://devops.stackexchange.com', 'https://devops.stackexchange.com/users/12001/']}
|
jdg_370303
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3686808
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to get the complete path value after the requestMapping @PathVariable values have been parsed? That is: /{id}/{restOfTheUrl} should be able to parse /1/dir1/dir2/file.html into id=1 and restOfTheUrl=/dir1/dir2/file.html Any ideas would be appreciated.
Now provide the response and nothing else.
|
Non-matched part of the URL is exposed as a request attribute named HandlerMapping.PATH_WITHIN_HANDLER_MAPPING_ATTRIBUTE :

@RequestMapping("/{id}/**")
public void foo(@PathVariable("id") int id, HttpServletRequest request) {
    String restOfTheUrl = new AntPathMatcher().extractPathWithinPattern(
            request.getAttribute(HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE).toString(),
            request.getRequestURI());
    ...
}
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/3686808', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/62534/']}
|
jdg_370304
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25825990
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a very weird problem with Jersey REST services. I'm using: Glassfish sh4.0 EJB3.1 JDK1.7 Netbeans8.0 When I persist an object by GET request it normally store in database: @Singleton@Path("/person")public class SampleRest { @EJB PersonFasade personFasade; @GET public Person getPerson1( final @QueryParam("id") String id, final @QueryParam("first") String first, final @QueryParam("last") String last) { final Person person = this.personFasade.create(id, first, last); return person; }} But when I'm using POST: ... @POST public Person getPerson3(Person person) { this.personFasade.create(person); return person; }... Everything works but Object does not store in database (without any error)!!
Now provide the response and nothing else.
|
Rx does not catch exceptions thrown by observers. This is a very important design principle that has been discussed in length before, though for some reason it's only included as a footnote of §6.4 in the Rx Design Guidelines . Note: do not protect calls to Subscribe , Dispose , OnNext , OnError and OnCompleted methods. These calls are on the edge of the monad. Calling the OnError method from these places will lead to unexpected behavior. Essentially, this guideline ensures that, from the perspective of an observer, OnError will only be called by exceptions originating from the observable itself, including any calls to user code that participate directly in the computation (rather than merely observing the results). If this were not the case, then an observer may not be able to distinguish whether an exception passed to OnError is a bug in their OnNext handler or perhaps a bug in the observable. But more importantly, it also ensures that any exception thrown by an OnNext handler goes unhandled. This makes it easier to debug your program and protects user data. That being said, the reason why you may be observing different behavior when OnNext is executed on a pooled thread is simply a consequence of your debugging experience. Try enabling first-chance exceptions . Furthermore, I'd also avoid the race condition by changing Thread.Sleep to Console.ReadKey() .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25825990', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1079877/']}
|
jdg_370305
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31368
|
Below is a question asked on the forum quant.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When ATM implied volatility is higher than OTM put and call I believe that the volatility smile is no longer arbitrage free? Why is that? On the other hand, when ATM implied volatility is lower than OTM put and call is the volatility smile always arbitrage free? Why is that?
Now provide the response and nothing else.
|
I generally agree with @dm63's answer: A convex (concave) smile around the forward usually indicates and leptokurtic (platykurtic) implied risk-neutral probability density. Both situations can or cannot admit arbitrage. I provide you with two counterexamples to your statements. A volatility smile that is concave around the forward does not necessarily represent an arbitrage. Concave smiles often arise when a significant jump with a predictable time of occurrence is priced in. This is often the case for single stocks around quarterly earnings announcements or for indices around macro events such as elections, referendums or rate decisions. Consider for example an underlying asset that does not move except for by a single jump. Let $X_t = \ln \left( S_t / S_0 \right)$ and define \begin{equation}X_t = \int_0^t \gamma(u) \mathrm{d}u + Y \mathrm{1} \left\{ t \geq t_J \right\}.\end{equation} Here, the jump time $t_J$ is known and has the random jump size $Y$. $\gamma$ is a deterministic drift that is chosen such that the discounted asset prices is a martingale under the risk-neutral probability measure $\mathbb{P}^*$. It is given by \begin{equation}\gamma(t) = r - \ln \left( \phi_Y(-\mathrm{i}) \right) \delta \left( t - t_J \right),\end{equation} where $\phi_Y(\omega)$ is the characteristic function of $Y$ and $\delta$ is the Dirac delta function. Assume that $Y$ follows a normal mixture distribution, i.e. \begin{equation}Y \sim \begin{cases} Y_1 & \text{with probability }p\\ Y_2 & \text{with probability } 1 - p.\end{cases}\end{equation} where $Y_1 \sim \mathcal{N} \left( \mu_1, \sigma_1^2 \right)$ and $Y_2 \sim \mathcal{N} \left( \mu_2, \sigma_2^2 \right)$. This model usually generates platykurtic implied densities and concave implied volatility smiles. Here is a numerical example. Let $t_J = 1 \text{ day}$, $\mu_1 = -5\%$, $\mu_2 = +5\%$, $\sigma_1 = \sigma_2 = 2\%$ and $p = 50\%$. Further let $S_0 = 100$, $r = 0\%$ and consider a maturity of $T = 1 \text{ week}$. We get the following implied density and volatility smile. In practice, you would consider more complex/realistic underlying dynamics such as e.g. a stochastic volatility and/or jump-diffusion model. To give you a real-world example: Here is the DAX 30 implied volatility smile as of December 1, 2016 for the maturity December 9, 2016. There was a jump priced in on the night of Sunday December 5th due to the Italian referendum which roughly had the implied parameters $\mu_1 = +2\%$, $\mu_2 = -3.5\%$, $\sigma_1 = \sigma_2 = 1.5\%$ and $p = 70\%$. A volatility smile that is convex around the forward is not necessarily arbitrage-free. A few popular implied volatility smile parametrizations are not arbitrage-free for their full parameter range. Roper (2010) for example shows that the so-called "arbitrage free" original SVI parametrization due to Gatheral (2004) is actually not arbitrage free, even for realistic parameter combinations; see Figures 1 and 2 in his paper. Another example is the Hagan et al. (2002) SABR parametrization, which is known to generate negative densities on the far downside strikes. For both examples, there exists a vast body of literature that aims at providing arbitrage-free alternative formulations. References Gatheral, Jim (2004) "A Parsimonious Arbitrage-Free Implied Volatility Parametrization", Presentation, Global Derivatives & Risk Management 2004 Hagan, Patrick S., Deep Kumar, Andrew S. Lesniewski and Diana E. 
Woodward (2002) "Managing Smile Risk", Wilmott Magazine Roper, Michael (2010) "Arbitrage Free Implied Volatility Surfaces", Working Paper, University of Sydney
|
{}
|
{'log_upvote_score': 5, 'links': ['https://quant.stackexchange.com/questions/31368', 'https://quant.stackexchange.com', 'https://quant.stackexchange.com/users/6428/']}
|
jdg_370306
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
25587773
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
With RStudio and knitr I see that I can add a TOC with the following code in my .rmd file. ----------------output: html_document: toc: yes----------------- However, this places the TOC at the very beginning of the HTML document. Is there a way to move the TOC lower on the page? Say after an introductory paragraph? I tried to use __TOC__ and __FORCETOC__ but it did not change the TOC position.
Now provide the response and nothing else.
|
The position of the TOC is fixed in the R Markdown default HTML template. If you want to change its position in the document, you'll need to modify the template:

1. Make a copy of the R Markdown HTML template to use as a starting point. You can find it by running this R command: system.file("rmd/h/default.html", package="rmarkdown")
2. Move the $toc section to where you want the table of contents to appear.
3. Save the modified template in the same folder as the document you're rendering, as e.g. lowertitle.html
4. Add template: lowertitle.html to the html_document settings.

From the standpoint of the template, all of the document's content is an atomic unit, so it might be necessary to put any content you want to appear before the TOC in the template itself.
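For instance, after step 4 the YAML header might look like this (using the example file name from above):

output:
  html_document:
    toc: yes
    template: lowertitle.html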
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/25587773', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2878990/']}
|
jdg_370307
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
648463
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Every Customer has a physical address and an optional mailing address. What is your preferred way to model this? Option 1. Customer has foreign key to Address Customer (id, phys_address_id, mail_address_id) Address (id, street, city, etc.) Option 2. Customer has one-to-many relationship to Address , which contains a fieldto describe the address type Customer (id) Address (id, customer_id, address_type, street, city, etc.) Option 3. Address information is de-normalized and stored in Customer Customer (id, phys_street, phys_city, etc. mail_street, mail_city, etc.) One of my overriding goals is to simplify the object-relational mappings, so I'm leaning towards the first approach. What are your thoughts?
Now provide the response and nothing else.
|
I tend towards the first approach for all the usual reasons of normalisation. This approach also makes it easier to perform data cleansing on mailing details. If you are possibly going to allow multiple addresses (mail, residential, etc.) or wish to be able to use effective dates, consider this approach:

Customer (id, phys_address_id)
Cust_address_type (cust_id, mail_address_id, address_type, start_date, end_date)
Address (id, street, city, etc.)
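A rough DDL sketch of that last layout; the column types and key choices are assumptions, not part of the original answer:

CREATE TABLE address (
    id     INTEGER PRIMARY KEY,
    street VARCHAR(100) NOT NULL,
    city   VARCHAR(60) NOT NULL
    -- etc.
);

CREATE TABLE customer (
    id              INTEGER PRIMARY KEY,
    phys_address_id INTEGER NOT NULL REFERENCES address (id)
);

CREATE TABLE cust_address_type (
    cust_id         INTEGER NOT NULL REFERENCES customer (id),
    mail_address_id INTEGER NOT NULL REFERENCES address (id),
    address_type    VARCHAR(20) NOT NULL,
    start_date      DATE NOT NULL,
    end_date        DATE,
    PRIMARY KEY (cust_id, mail_address_id, address_type, start_date)
);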
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/648463', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/67063/']}
|
jdg_370308
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4761
|
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone tell me what is the difference between the following regular expressions: $(0^*1^*)^*$ and $(0+1)^*$ ? To me they look like generating the same string.
Now provide the response and nothing else.
|
The language of both regular expressions is the same, $L((0+1)^*)=L((0^*1^*)^*)$. This follows from the following three claims: Claim 1: if $L_1 \subseteq L_2$, then $L_1^* \subseteq L_2^*$. Claim 2: $L(0+1) \subseteq L(0^*1^*)$ Claim 3: $ (0^*1^*)^* \subseteq (0+1)^* \equiv \Sigma^*$ The 2nd and 3rd claims are trivial. Prove the first claim and you're done. Note however, that the two regular expressions are not the same (ie., they are different!). They are equivalent in the sense of the language they generate. They are different in the way they generate it.
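For completeness, a short sketch of Claim 1, the only step left as an exercise: let $w \in L_1^*$. Then $w = w_1 w_2 \cdots w_n$ for some $n \geq 0$ with each $w_i \in L_1$. Since $L_1 \subseteq L_2$, each $w_i \in L_2$, so $w$ is a concatenation of words of $L_2$ and therefore $w \in L_2^*$ (for $n = 0$, $w = \varepsilon \in L_2^*$ as well). Claims 1 and 2 then give $L((0+1)^*) \subseteq L((0^*1^*)^*)$, and Claim 3 gives the reverse inclusion.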
|
{}
|
{'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/4761', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/2987/']}
|
jdg_370309
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50878691
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I created new angular project with sass, and I created folder with name sass which contain a file named _variables.scss, in app component I tried to import variables like this. @import 'variables' when I run ng serve i get the following error: ./src/app/app.component.scssModule build failed: @import 'variables'^ File to import not found or unreadable: variables. in C:\Users\Bonge\Documents\Projects\movies_database\movies-client\src\app\app.component.scss (line 1, column 1) Note I added the following to angular.json: "stylePreprocessorOptions": { "includePaths": [ "src/", "src/sass/" ] Directory structure just a angular starter app: |- src/ |- sass/ |- _variables.scss |- _mixins.scss |- styles.scss still i get the same error: what am I doing wrong here? any help
Now provide the response and nothing else.
|
I realize this is an older question, but it keeps coming up in searches, so I figure an update is in order. There is a way to define your own import paths for SASS, like node_modules libraries: all you need to do is make a stylePreprocessorOptions entry in the options section of the angular.json file. You do not need to include everything using src\sass.

angular.json

"options": {
    "outputPath": "dist/App",
    "index": "src/index.html",
    "main": "src/main.ts",
    "polyfills": "src/polyfills.ts",
    "tsConfig": "src/tsconfig.app.json",
    "assets": [
        "src/favicon.ico",
        "src/assets"
    ],
    "styles": [
        "src/sass/styles.scss"
    ],
    "stylePreprocessorOptions": {
        "includePaths": [
            "src/sass"
        ]
    },
    "scripts": []
},

Then in your styles you can simply import them.

styles.scss

Note: Don't include the file extension or an initial ~ .

@import 'variables'; // Imports from src/sass
@import 'mixins';
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50878691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9790792/']}
|
jdg_370310
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7963755
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
collections.defaultdict is great. Especially in conjunction with lambda : >>> import collections>>> a = collections.defaultdict(lambda : [None,None])>>> a['foo'][None, None] Is there a way to use the key given (e.g. 'foo' ) in the lambda? For example (doesn't work): >>> a = collections.defaultdict(lambda : [None]*key)>>> a[1][None]>>> a[2][None, None]>>> adefaultdict(<function <lambda> at 0x02984170>, {1: [None], 2: [None, None]})
Now provide the response and nothing else.
|
You probably want __missing__ which is called on dict whenever you try to access an item not present in the dict; the vanilla __missing__ raises an exception, but you could do whatever you like in a subclass:

class A(dict):
    def __missing__(self, key):
        value = self[key] = [None] * key
        return value
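A quick interactive check of that subclass (the expected output is shown):

>>> a = A()
>>> a[1]
[None]
>>> a[3]
[None, None, None]
>>> a
{1: [None], 3: [None, None, None]}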
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7963755', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/348545/']}
|
jdg_370311
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
601775
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am from an engineering background, and the concepts of Borel set, Borel field and measure sound abstract to me. Can someone please explain them in a simplified way, i.e., without assuming the reader has any knowledge of measure theory? For example, when I read that a function is continuous over some interval, I can visualize it and I know I can integrate it over that interval. But when I read that X is the intensity measure for a Borel set, I don't get it, and most importantly I don't get what this should imply for the next steps (i.e., if it is a Borel set, what can I do, and if it is not, what are the things that I can't do). Assuming someone will answer these, can you please explain the Lebesgue measure in the same simplified manner as well? Hopefully this makes sense.
Now provide the response and nothing else.
|
This is only a partial answer to your question, but I think it's important. A large part of the reason for introducing the Lebesgue measure and Lebesgue integral is taking limits. Riemann integrable functions are not closed under taking limits. For example, if $\{f_n\}\to f$ is a sequence of Riemann integrable functions, then the limit is not necessarily Riemann integrable. However, if the $f_n$'s are all measurable, then we can conclude that $f$ is measurable (which is basically the same as saying Lebesgue integrable). It is natural to want to say things like $\lim_{n\to\infty}\int f_n=\int f$. Obviously this is not always the case, but the point is that we need much weaker assumptions on the functions $f_n$ and the manner in which they converge to $f$ in order to make such a statement using the Lebesgue integral than we would need to make this statement using the Riemann integral. For this statement to hold with Riemann integrable functions, we must assume $\{f_n\}\to f$ uniformly, while for Lebesgue integrable functions, we can almost get away with assuming only pointwise convergence. See the dominated and monotone convergence theorems.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/601775', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/98114/']}
|
jdg_370312
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
419777
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have searched for this question but could not find a good answer for my case. Suppose I have a SuperClass for a bank loan with properties Amount and InterestRate . public class BankLoan{ public double Amount {get; set;} public double InterestRate {get; set;}} Then I create a derived class called SmallLoan, which will have a variable InterestRate based on its amount. public class SmallLoan{ public SmallLoan(){ InterestRate = CalculateInterest(); } private double CalculateInterest(){ if(Amount < 100) return 0.10; else return 0.05; }} Would this be violating the LSP? I would think that it doesn't, because even if an instance of SmallLoan tries to call CalculateInterest outside of the class definition, it's going to throw an error, not related to the substitution but simply because it has a different access modifier. Even if only BankLoan existed and it had the same private method, a call to it from the outside would also result in an error. But I can see how some people would say that the presence of the error in general is a violation of the principle. Lastly: I am aware that these classes might not be the best and brightest implementations of inheritance or even just basic class design, but the question isn't about that, I merely want to know if private methods are a violation of LSP, and I chose to illustrate it with this very simple example.
Now provide the response and nothing else.
|
Let's start with a definition: ' if S is a subtype of T, then objects of type T may be replaced with objects of type S ' The fact that S or T may or may not have a private method that handles part of the behavior for the public interface cannot in itself be a violation of the above principle. Since (normally) private methods are invisible and cannot be called from the outside, they are not part of the interface and clients cannot depend on them. Therefore the fact that one type has delegated some of its inner workings to a private method does not hinder the substitution of objects of this type for objects of its super or derived type. Note that I'm not saying that your code example isn't violating LSP. LSP is also about correct behavior, and the way you set InterestRate in the constructor of SmallLoan with a reference to Amount, which still has its default value at that point, could be considered a violation. But let's ignore this, since you said the question is not about the implementation of your example, but simply whether private methods are an LSP violation.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/419777', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/380729/']}
|
jdg_370313
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17195868
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
So I've been slowly replacing a lot of my normal jQuery code with native javascript, and I happened upon the document.createTextNode() and related MDN documentation . After reading I'm somewhat confused what a text node is. I understand it can be used to put text inside div 's, but I'm sure there's a bit more to it than just "use it to put words inside elements". Looking at this, it appears a text node can also refer to the text of attributes as well. Can anyone provide a bit more of a definition of what a text node is and what it's used for? Are there practical uses for this other than basic stuff like this? var div = document.createElement('div');var text = document.createTextNode('Y HALO THAR');div.appendChild(text);
Now provide the response and nothing else.
|
All viewable HTML text in a page (except text in form elements or custom embedded objects) is in text nodes. The page consists of a number of different types of nodes (you can see a listing of the different node types here: https://developer.mozilla.org/en-US/docs/Web/API/Node.nodeType ), some of which can have child nodes and some of which cannot. For example, a div is an ELEMENT node which can contain child nodes. Those child nodes can be other ELEMENT nodes or they can be TEXT nodes or COMMENT nodes or other types of nodes. When you set the .innerHTML property of an element node, it creates the appropriate nodes and makes them child nodes of the element that you set the innerHTML property on. If there is text in the innerHTML you set, then text nodes will be created to hold it. DOCUMENT_NODE , ELEMENT_NODE and TEXT_NODE are the most common node types and are in every page that has text. In your code example: var div = document.createElement('div');var text = document.createTextNode('Y HALO THAR');div.appendChild(text); This creates one text node and puts it into the div you created. It generates the same DOM structure as this: var div = document.createElement('div');div.innerHTML = 'Y HALO THAR'; In the latter case, the system creates the text node for you. In plain javascript programming (jQuery tends to shield developers from nodes that aren't of type ELEMENT_NODE ), you will encounter text nodes any time you walk the child nodes of an element that has text in it. You will need to check the .nodeType of each child to know whether it is another element or a text node or some other type of node. In general, there aren't a lot of reasons to manipulate text nodes directly as you can often use the higher level .innerHTML property more simply. But, to give you an idea, here are a couple reasons you might want to deal directly with text nodes: You want to change some text without affecting any of the elements around it. .innerHTML creates all new elements for the affected elements which kills any event handlers which might have been set on them, but setting the .nodeValue on a text node doesn't cause any elements to get recreated. If you want to find just the text in a document without any of the resulting HTML markup and know exactly where each piece of text is in the DOM hieararchy, you can just search for all the text nodes. For example, if you were doing a text search of the document and then highlighting found text, you would probably search text nodes directly. You want to display some text without any security risks that it might contain other markup that the browser would parse and interpret if you used .innerHTML . So, you create a text node and set the value of its text and the browser won't interpet any HTML in it. Modern browsers can also use the .textContent property of an element instead of .innerHTML to solve this problem too.
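As a small illustration of the node-walking point above (a sketch; the element ID is made up):

// Collect the text of every direct child text node of #container
var container = document.getElementById('container');
var textParts = [];
for (var i = 0; i < container.childNodes.length; i++) {
    var node = container.childNodes[i];
    if (node.nodeType === 3) { // 3 === Node.TEXT_NODE
        textParts.push(node.nodeValue);
    }
}
console.log(textParts.join(''));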
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17195868', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23875/']}
|
jdg_370314
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
22850546
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've installed the latest RabbitMQ server (rabbitmq-server-3.3.0-1.noarch.rpm) on a fresh Centos 5.10 VM according to the instructions on the official site. I've done this many times before during development and never had any issues. However, this time I cannot log into the management web interface using the default guest/guest user. In the logs, I see the following: =ERROR REPORT==== 4-Apr-2014::00:55:15 ===webmachine error: path="api/whoami""Unauthorized" What could be causing this?
Now provide the response and nothing else.
|
This is new behaviour since version 3.3.0: http://www.rabbitmq.com/release-notes/README-3.3.0.txt

server
------
...
25603 prevent access using the default guest/guest credentials except via localhost.

If you want to enable the guest user, read this or this: RabbitMQ 3.3.1 can not login with guest/guest

# remove guest from loopback_users in rabbitmq.config like this
[{rabbit, [{loopback_users, []}]}].

# It is dangerous to keep the default user and default password for remote access;
# better to change the password
rabbitmqctl change_password guest NEWPASSWORD

If you want to create a new user with admin grants:

rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
rabbitmqctl set_permissions -p / test ".*" ".*" ".*"

Now you can access using test / test.
|
{}
|
{'log_upvote_score': 10, 'links': ['https://Stackoverflow.com/questions/22850546', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138860/']}
|
jdg_370315
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18894206
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is the following define statement valid in C? #define TRUE FALSE
Now provide the response and nothing else.
|
You have to remember that preprocessor macros are simply substituted. If you do e.g. #define TRUE FALSE then the preprocessor simply replaces every place where it finds TRUE with whatever FALSE is defined to. So indeed it's a valid definition. And yes, it will most likely change the program workflow, possibly in very unexpected ways that may even cause undefined behavior.
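A small, self-contained illustration of the effect (assuming FALSE is defined as 0, which is the usual convention):

#include <stdio.h>

#define FALSE 0
#define TRUE FALSE   /* TRUE now expands to FALSE, which expands to 0 */

int main(void)
{
    if (TRUE)
        printf("this never prints\n");
    printf("TRUE == %d\n", TRUE);   /* prints: TRUE == 0 */
    return 0;
}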
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18894206', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1433665/']}
|
jdg_370316
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
24412418
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking at a Jenkins job and trying to understand it. I have an Execute shell command box in my Build section: > mkdir mydir > cd mydir> > svn export --force https://example.com/repo/mydir . When Jenkins is done executing that command, and moves on to the next build step, what is its working directory? workspece-root/ or workspace-root/mydir ? As the next step, I have Invoke top-level Maven targets (still in the Build section). What I really want to know is: why does that execute successfully? Is it because Jenkins automatically moves back to the workspace-root/ folder after executing a shell command box, or is it because the next job is a "top-level" job, and Jenkins therefore changes back to the workspace-root/ ?
Now provide the response and nothing else.
|
Each build step is a separate process that Jenkins spawns off. They don't share anything: neither the current directory, nor environment variables set/changed within the build step. Each new build step starts by spawning a new process off the parent process (the one running Jenkins). It's not that Jenkins "moves back" to $WORKSPACE ; it's that Jenkins discards the previous session.
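A small sketch of the practical consequence (the commands are hypothetical): if later commands depend on a directory change, keep them in the same "Execute shell" step, e.g. by chaining them.

# Build step 1
mkdir -p mydir
cd mydir
svn export --force https://example.com/repo/mydir .

# Build step 2 runs in a fresh process and starts back in $WORKSPACE,
# so cd again (or keep everything in one step):
cd mydir && mvn clean install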
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24412418', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2018047/']}
|
jdg_370317
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4983915
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm looking at some code that I have not written myself. The code tries to hash a password with SHA512 and uses just time() as the salt. Is time() too simple a salt for this or is this code safe? Thanks for the answers and comments. I will sum it up here for the new readers: salt should be different for each user, so if 2 users register at the same time, their salts won't be unique. This is a problem, but not a big one. but salt shouldn't be in any way related to the user, so time() is not a good salt. " Use a random, evenly distributed, high entropy salt. " -- That's a mouthful, so what code could possibly generate a random, evenly distributed, high entropy salt? Ok, so how about I replace time() with a random string 32 char long. The random string could be generated from looping 32 times over a set of alphabet chars. Does that sound good?
Now provide the response and nothing else.
|
Short answer: No, time() is not a good salt. Long answer: copied from my answer to Salt Generation and open source software What is a salt? A salt is a random set of bytes of a fixed length that is added to the input of a hash algorithm. Why is salting (or seeding) a hash useful? Adding a random salt to a hash ensures that the same password will produce many different hashes. The salt is usually stored in the database, together with the result of the hash function. Salting a hash is good for a number of reasons: Salting greatly increases the difficulty/cost of precomputated attacks (including rainbow tables ) Salting makes sure that the same password does not result in the same hash. This makes sure you cannot determine if two users have the same password. And, even more important , you cannot determine if the same person uses the same password across different systems. Salting increases the complexity of passwords, thereby greatly decreasing the effectiveness of both Dictionary- and Birthday attacks . (This is only true if the salt is stored separate from the hash). Proper salting greatly increases the storage need for precomputation attacks, up to the point where they are no longer practical. (8 character case-sensitive alpha-numeric passwords with 16 bit salt, hashed to a 128 bit value, would take up just under 200 exabytes without rainbow reduction). There is no need for the salt to be secret. A salt is not a secret key, instead a salt 'works' by making the hash function specific to each instance. With salted hash, there is not one hash function, but one for every possible salt value. This prevent the attacker from attacking N hashed passwords for less than N times the cost of attacking one password. This is the point of the salt. A "secret salt" is not a salt, it is called a "key", and it means that you are no longer computing a hash, but a Message Authentication Code (MAC). Computing MAC is tricky business (much trickier than simply slapping together a key and a value into a hash function) and it is a very different subject altogether. The salt must be random for every instance in which it is used. This ensures that an attacker has to attack every salted hash separately. If you rely on your salt (or salting algorithm) being secret, you enter the realms of Security Through Obscurity (won't work). Most probably, you do not get additional security from the salt secrecy; you just get the warm fuzzy feeling of security. So instead of making your system more secure, it just distracts you from reality. So, why does the salt have to be random? Technically, the salt should be unique . The point of the salt is to be distinct for each hashed password. This is meant worldwide . Since there is no central organization which distributes unique salts on demand, we have to rely on the next best thing, which is random selection with an unpredictable random generator, preferably within a salt space large enough to make collisions improbable (two instances using the same salt value). It is tempting to try to derive a salt from some data which is "presumably unique", such as the user ID, but such schemes often fail due to some nasty details: If you use for example the user ID , some bad guys, attacking distinct systems, may just pool their resources and create precomputed tables for user IDs 1 to 50. A user ID is unique system-wide but not worldwide . The same applies to the username : there is one "root" per Unix system, but there are many roots in the world. 
A rainbow table for "root" would be worth the effort, since it could be applied to millions of systems. Worse yet, there are also many "bob" out there, and many do not have sysadmin training: their passwords could be quite weak. Uniqueness is also temporal. Sometimes, users change their password. For each new password , a new salt must be selected. Otherwise, an attacker obtained the hash of the old password and the hash of the new could try to attack both simultaneously. Using a random salt obtained from a cryptographically secure, unpredictable PRNG may be some kind of overkill, but at least it provably protects you against all those hazards. It's not about preventing the attacker from knowing what an individual salt is, it's about not giving them the big, fat target that will be used on a substantial number of potential targets. Random selection makes the targets as thin as is practical. In conclusion: Use a random, evenly distributed, high entropy salt. Use a new salt whenever you create a new password or change a password. Store the salt along with the hashed password. Favor big salts (at least 10 bytes, preferably 16 or more). A salt does not turn a bad password into a good password. It just makes sure that the attacker will at least pay the dictionary attack price for each bad password he breaks. Usefull sources: stackoverflow.com: Non-random salt for password hashes Bruce Schneier: Practical Cryptography (book) Matasano Security: Enough with the Rainbow Tables usenix.org: Unix crypt used salt since 1976 owasp.org : Why add salt openwall.com : Salts Disclaimer: I'm not a security expert. (Although this answer was reviewed by Thomas Pornin ) If any of the security professionals out there find something wrong, please do comment or edit this wiki answer. As for what seems to be a good source for your random salt Also read: What is the most secure seed for random number generation? In the absence of dedicated, hardware based, random generators, the best way of obtaining random data is to ask the operating system (on Linux, this is called /dev/random or /dev/urandom [both have advantages and problems, choose your poison]; on Windows, call CryptGenRandom() ) If for some reason you do not have access to the above mentioned sources of random, in PHP you could use the following function: From the source of phpass v0.3 <?php/** * Generate pseudo random bits * @copyright: public domain * @link http://www.openwall.com/phpass/ * @param int $length number of bits to generate * @return string A string with the hexadecimal number * @note don't try to improve this, you will likely just ruin it */function random_bits($entropy) { $entropy /= 8; $state = uniqid(); $str = ''; for ($i = 0; $i < $entropy; $i += 16) { $state = md5(microtime().$state); $str .= md5($state, true); } $str = unpack('H*', substr($str, 0, $entropy)); // for some weird reason, on some machines 32 bits binary data comes out as 65! hex characters!? // so, added the substr return substr(str_pad($str[1], $entropy*2, '0'), 0, $entropy*2);}?>
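To make the "random salt stored next to the hash" idea concrete outside of PHP, here is a minimal Python sketch (illustrative only: the function names and the 100,000-iteration count are my own choices, and a dedicated scheme such as bcrypt or argon2 is still preferable in practice):

import os
import hashlib

def hash_password(password):
    salt = os.urandom(16)                      # fresh random salt per password
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
    return salt, digest                        # store both; the salt is not secret

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
    return candidate == stored_digest

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))      # True
print(verify_password("wrong guess", salt, digest))  # False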
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4983915', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/605098/']}
|
jdg_370318
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1341033
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When a function is additive, $$\color{blue}{f(x+y)=f(x)+f(y)},$$ you can extend the property to the product by an integer $$f(nx)=f(x+x+\cdots x)=f(x)+f(x)+\cdots f(x)=nf(x),$$ then to the product by a rational number $$f(\frac pqx)=\frac qqf(\frac pqx)=\frac1qf(px)=\frac pqf(x),$$ and, presumably by exploiting continuity, to the product by a real number. So why are all proofs of linearity decomposed in a proof of additivity and a proof of commutativity with scalar multiplication $$\color{blue}{f(\lambda x)=\lambda f(x)}\ ?$$ Are there conditions on $f$ such that the second part can be omitted ? Update: Regarding continuity, one may reason as follows: By additivity$$\|f(x)-f(x_0)\|=\|f(x-x_0)\|$$ can be made arbitrarily small, as $$\|f(r(x-x_0))\|=|r|\|f(x-x_0)\|,$$ where $r$ is a rational. Hence, any additive function $f(x)$ is continuous. Is that correct ? What are required hypothesis ?
Now provide the response and nothing else.
|
As you noted yourself, there are two situations in which the second part cannot be omitted: if you are considering a field other than $\Bbb{F} = \Bbb{R}$ (for example $\Bbb{F} = \Bbb{C}$), since then $\Bbb{Q}$ will in general not be dense in $\Bbb{F}$; and if you do not require $f$ to be continuous, because additivity alone does not imply continuity (discontinuous additive functions exist, built from a Hamel basis of $\Bbb{R}$ over $\Bbb{Q}$). Conversely, if your base field is $\Bbb{F}=\Bbb{R}$ and you know/assume that $f$ is continuous, then showing additivity suffices.
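To spell out the density step this relies on (a sketch, assuming $f:\Bbb{R}\to\Bbb{R}$ is additive and continuous): additivity gives $f(rx)=rf(x)$ for every rational $r$, so for real $\lambda$ pick rationals $r_n\to\lambda$ and compute $$f(\lambda x)=f\Big(\lim_{n\to\infty}r_n x\Big)=\lim_{n\to\infty}f(r_n x)=\lim_{n\to\infty}r_n f(x)=\lambda f(x),$$ where the second equality is exactly the place where continuity is used.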
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1341033', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
|
jdg_370319
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10481277
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to have an activity (2) with translucent aspect over another activity (1), aligned at the top of the screen (4). I have tried assigning these themes to activity number 2: <style name="Theme.CustomDialog" parent="android:style/Theme.Dialog"> <item name="android:windowBackground">@android:color/black</item></style> <style name="CustomTheme"> <item name="android:windowBackground">@android:color/transparent</item> <item name="android:background">@android:color/transparent</item> <item name="android:windowIsFloating">true</item> <item name="android:windowNoTitle">true</item></style> But the result is always 3. If I set <item name="android:windowIsFloating">false</item> in the CustomTheme the result is 2. Can anybody tell me how can I get 4? Thanks! UPDATE: This is my activity 2 layout: <?xml version="1.0" encoding="utf-8"?><RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" android:background="#0000"> <RelativeLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:background="#FFFFFF"> <Button android:id="@+id/button1" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Menu" android:layout_centerHorizontal="true"/> </RelativeLayout></RelativeLayout>
Now provide the response and nothing else.
|
Finally, this theme worked to get a result like image number 4:
<style name="Theme.CustomTranslucent" parent="android:style/Theme.Translucent">
    <item name="android:backgroundDimEnabled">true</item>
    <item name="android:backgroundDimAmount">0.5</item>
    <item name="android:windowAnimationStyle">@android:style/Animation.Dialog</item>
    <item name="android:background">@android:color/transparent</item>
</style>
In my activity 2 layout, I could either set android:background="@android:color/transparent" or not set any value at all to make it work. Thanks to MikeIsrael and Veer for their help.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10481277', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/871102/']}
|
jdg_370320
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13788676
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to implement simple parser in haskell using parsec library (for learning purposes). So I wrote bunch of data structutes and related functions like this: data SourceElement = StatementSourceElement Statement | FunctionSourceElement FunctionName FunctionBodydata Statement = IfStatement Expr Statement Statement | WhileStatement Expr Statementdata FunctionBody = FunctionBody [SourceElement]parseSourceElement :: Parser SourceElementparseSourceElement = ...parseFunctionBody :: Parser FunctionBodyparseFunctionBody = ... It works fine. Now I want to split this stuff into two modules to separate FunctionBody and Statement data structures (because of readability issues). But I can't! The reason is cyclic dependency between SourceElement and FunctionBody . So, is there any way to solve this problem ?
Now provide the response and nothing else.
|
The typical way I break dependency cycles is by parameterizing something out. In this case, your Function module might do function parsers for your language, but expressed in such a way that it can do so no matter what the rest of the language is like. Thus:
module Function where

data FunctionBody e = FunctionBody [e]

parseFunctionBody :: Parser e -> Parser (FunctionBody e)
And
module AST where

data SourceElement = StatementSourceElement Statement
                   | FunctionSourceElement FunctionName (FunctionBody SourceElement)
Thus the mutual recursion is abstracted into a simple recursion + parameterization. I think parameterization is at least as important as separating different things into different files, so it's kind of nice (and kind of annoying) that one forces the other.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13788676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1888247/']}
|
jdg_370321
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8349686
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
All these 3 libraries allow to manipulate history object. OK, backbone does a lot of more but let consider only it history part. http://documentcloud.github.com/backbone/#History https://github.com/olivernn/davis.js https://github.com/browserstate/history.js What is a difference between these 3 implementation?
Now provide the response and nothing else.
|
History.js is more of a polyfill for browsers that do not natively support HTML5 pushState, and for ironing out the differences between different browser implementations of pushState. Davis.js is a routing layer, primarily on top of pushState (although this can be switched out easily for hash routing). It gives you a simple API to define and respond to routes within your application. Backbone's router is similar to Davis; however, it tries to automatically fall back to hash-based routing when pushState is not available. As the author of Davis I'm slightly biased, but I think the Davis API is slightly nicer and more powerful than Backbone's. I also think that trying to gracefully handle falling back to hash-based routing when pushState is not available is not worth the effort and complexity that it can introduce.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8349686', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/344527/']}
|
jdg_370322
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
80400
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having trouble conceptualizing why the voltage drop between two points of an ideal wire (i.e. no resistance) is $0~V$. Using Ohm's Law, the equation is such: $$V = IR \\V = I(0~\Omega) \\V = 0$$ However, conceptually I can't see how there is no change in energy between these two points. It is my understanding that the electrical field of this circuit produces a force running counterclockwise and parallel to the wire which acts continuously on the electrons as they move through the wire. As such, I expect there to be a change in energy equal to the work . Voltage drop is the difference in electric potential energy per coulomb, so it should be greater than $0~V$: $$\Delta V = \frac{\Delta J}C \\\Delta J > 0 \\\therefore \Delta V > 0$$ For example, suppose I have a simple circuit consisting of a $9~V$ battery in series with a $3~k\Omega$ resistor: If the length from point 4 to point 3 is $5~m$, I would expect the following: $$W = F \cdot d \\W = \Delta E \\F > 0 \\d = 5 > 0 \\\therefore W > 0 \\\therefore \Delta E > 0$$ Since work is positive for any given charge, the change in energy for any given charge is positive -- therefore the voltage drop must be positive . Yet, according to Ohm's Law it is $0~V$ since the wire has negligible resistance. Where is the fault in my logic?
Now provide the response and nothing else.
|
The key thing is that there is NO electric field within the perfect wire. So, there is no force acting on the electron, and thus no work done on it (while it's in the perfect wire). This goes back to the definition of a perfect conductor (which the perfect wire is). Within a perfect conductor, there is no electric field. Instead, the charges (which have infinite mobility) rearrange themselves on the surfaces of the conductor in such a way as to perfectly cancel out any internal field. So, the only fields in your circuit would be 1) in the battery, and 2) in the resistor. I should also add that this is due to the approximation of the wire as 'perfect'. A real wire has some resistance, or equivalently, its charges don't perfectly reorder so as to perfectly cancel an internal field.
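To put numbers on it for the circuit in the question (a quick check using only the values given there): with an ideal wire the field inside the wire is zero, so $W=\int \vec{F}\cdot d\vec{\ell}=0$ along the wire and $\Delta V_{\text{wire}}=0~V$ no matter whether the run from point 4 to point 3 is $5~m$ long or not, while the full $9~V$ appears across the $3~k\Omega$ resistor, giving $I = 9~V/3~k\Omega = 3~mA$ and $P = VI = 27~mW$, all of it dissipated in the resistor.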
|
{}
|
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/80400', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/30457/']}
|
jdg_370323
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
270177
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
After answering another question ( The number of values of $f(x)/x$ when $f$ is a linearized polynomial ), I stumbled upon an interesting polynomial in multiple variables. Let $\mathbb{F}_q$ be the field of $q$ elements, and let $K$ be a field containing it. Then define \begin{equation}L(X_1, X_2, ...) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n \text{Frob}^{i-1}(X_{\sigma(i)}).\end{equation}This polynomial is $\mathbb{F}_q$-multilinear, and detects whether the elements are linearly dependent over $\mathbb{F}_q$. In other words, it is nonzero exactly when they are linearly independent. I think this can be extended to the case where $K$ is an algebra, but haven't checked. This seems interesting enough that I am guessing it's been found and used before; does this polynomial have a name, and is it used anywhere interesting?
Now provide the response and nothing else.
|
This is a CW answer to remove this question from the unanswered list (once someone upvotes it). This is the determinant of the Moore matrix $\left( x_i^{q^{j-1}} \right)_{1 \leq i,j \leq n}$. This determinant can be expressed as a product of linear factors:$$\det \left(x_i^{q^{j-1}} \right) = \prod_{(c_1:c_2:\cdots:c_n) \in \mathbb{F}_q \mathbb{P}^{n-1}} (c_1 x_1 + \cdots + c_n x_n),$$up to a scalar factor depending on how we choose the representatives $(c_1, \ldots, c_n)\in \mathbb{F}_q^n$ of the points of $\mathbb{F}_q \mathbb{P}^{n-1}$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/270177', 'https://mathoverflow.net', 'https://mathoverflow.net/users/44191/']}
|
jdg_370324
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
42220569
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why Final variables in Scala are allowed to change values. As per my understanding once declared final, they shouldn't be allowed to change. class foo(){ final var name = "abc" name = "xyz" // why this is allowed}
Now provide the response and nothing else.
|
final definition per the Scala Specification (emphasis mine): 5.2.6 final The final modifier applies to class member definitions and to class definitions. A final class member definition may not be overridden in subclasses . A final class may not be inherited by a template . final is redundant for object definitions. Members of final classes or objects are implicitly also final, so the final modifier is generally redundant for them, too. Note, however, that constant value definitions do require an explicit final modifier , even if they are defined in a final class or object. final may not be applied to incomplete members, and it may not be combined in one modifier list with sealed. Since a val definition in Scala already means that the reference to it is immutable (unlike Java), you don't have to explicitly specify that when defining it. If you want the value to be inlined as a constant in the byte code, you can specify add the final modifier. As per var , final here only means "may not be overridden in subclasses" , but says nothing about the immutability of the variable: scala> :pa// Entering paste mode (ctrl-D to finish)class Foo { final var name = "abc"}class Bar extends Foo { override var name = "yuval"}// Exiting paste mode, now interpreting.<console>:16: error: overriding variable name in class Foo of type String; variable name cannot override final member override var name = "yuval"
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/42220569', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7259113/']}
|
jdg_370325
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5973883
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to create a horizontal UIScrollView which to hold hundreds of thumbnail images, just like a slide of thumbnails. For example, there will be 10 thumbnails showing in a single screen, each of them are horizontally adjacent to each other. My problem is that I don't know how to make a horizontal UIScrollView to hold the multiple thumbnails which showing at the same time ? A sample photo is as below. See the bottom part of the screen. Thanks.
Now provide the response and nothing else.
|
You can add all the thumbnails programatically to your scrollview and use the setContentSize method of UIScrollView. you have to pass 2 values in contentOffset. 1 for width and 1 for height. Please follow link to explore more on this. If you need further help please leave a comment. Hope it helps. Please consider Following example. - (void)setupHorizontalScrollView{scrollView.delegate = self;[self.scrollView setBackgroundColor:[UIColor blackColor]];[scrollView setCanCancelContentTouches:NO];scrollView.indicatorStyle = UIScrollViewIndicatorStyleWhite;scrollView.clipsToBounds = NO;scrollView.scrollEnabled = YES;scrollView.pagingEnabled = YES;NSUInteger nimages = 0;NSInteger tot=0;CGFloat cx = 0;for (; ; nimages++) { NSString *imageName = [NSString stringWithFormat:@"image%d.jpg", (nimages + 1)]; UIImage *image = [UIImage imageNamed:imageName]; if (tot==15) { break; } if (4==nimages) { nimages=0; } UIImageView *imageView = [[UIImageView alloc] initWithImage:image]; CGRect rect = imageView.frame; rect.size.height = 40; rect.size.width = 40; rect.origin.x = cx; rect.origin.y = 0; imageView.frame = rect; [scrollView addSubview:imageView]; [imageView release]; cx += imageView.frame.size.width+5; tot++;}self.pageControl.numberOfPages = nimages;[scrollView setContentSize:CGSizeMake(cx, [scrollView bounds].size.height)];}
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5973883', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/403015/']}
|
jdg_370326
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21649
|
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I just did my first automotive AC repair, after the compressor in my 2004 Lancer 2.0 ate itself, cracked its casing, and blew all the refrigerant out. I think I did the repair & recharge correctly-- and the AC is blowing very cold-- but I'm concerned that the operating pressures are not anywhere close to what they are spec'd to be. (During recharge, I added ~18oz of r134a, per the yellow sticker under the hood.) Before recharging, I googled 'r134a pressure chart', and found the following: At the time, it was 82F in my garage, so I was expecting to end up in the middle of the range (both low- and high-pressure) for the 80F row.... 45psi for low, and about 190psi for the high side. (When referring to this chart, I assumed that the target low readings were for when the clutch was disengaged, and the target high readings were for when the clutch was active.)However, after my recharge the highest low reading I see (right before clutch engages) is 40psi, and the highest reading I ever see for the high side is 150 (right before the clutch disengages.) Should I be concerned about these low operating pressures, given my starting ambient temperature? This car is 11 years old, and so the computer & pressure switch have about 160K miles on them. As I said, AC appears to be working very well. The compressor clutch cycles about 12 seconds on, and 12 seconds off. (I'm not sure what's an exceptable duty cycle there. Also I was parked in a garage, not running down the road, air flow through the condenser wasn't exactly real-world. Of course the fans kicked on whenever the clutch engaged.) Static pressures (with engine off): 80-85 for both high and low sides. In case anyone has questions about my methodology, this is what I did: Remove old compressor, condenser, dryer, and expansion valve Flush lines and evaporator with recommended flush solvent Installed (all new) compressor, condenser, dryer, and expansion valve, coating all rubber seals with PAG oil Added UV dye and 1.75 oz of PAG-46 oil to condenser (compressor came pre-filled with 3 oz, and total required by manufacturer was 4.75) (Mitsubishi originally called for 'SUN PAG-56', but after-market compressor called for PAG-46, so I used the latter) Connected hi/low gauge set Pulled vacuum to almost -30 psi inches Hg, observed that level held for 1 hour Resumed pulling vacuum for 1 additional hour Closed off high & low sides, detached vacuum pump, attached 12oz can of r134a (no additives) Pierced first 12oz can, allowed yellow line to pressurize. Burped yellow line @ the manifold to purge air 12oz can+valve+hose weighed 20.7oz at this point Opened up low side, allowed vacuum in system to suck in some refrigerant Started car, resumed slowly-turning can... compressor clutch engaged for the first time pretty early into this step. After first can felt empty, new weight reading was 8.2 (so, 12.5oz discharged from 1st "12oz" can) Closed off low side Disconnected yellow hose from can's adapter, then connected to different adapter already installed on 2nd can (at this point, I realized I had made a minor mistake, should have moved the first adapter over to the second can without disconnecting the hose. But I compensated for this by burping the yellow line again.) Initial weight of second (partial) can+adapter+hose: 15.1oz Final weight of second: 9.6 (so, 5.5oz discharged from 2nd "partial" can) Total estimated charge: 12.5+5.5=18oz (actually a little less because of burping and final contents of hoses. 
I think this is OK because the service manual's specifications page gives an acceptable range of "16.93 - 18.34")
Now provide the response and nothing else.
|
That chart is not universal. You should be fine. Every model will have its own characteristic low-side and high-side pressures. This is why some manufacturers recommend charging refrigerant by mass rather than by high-side, low-side and ambient temperature values. As long as you have charged the system with the correct amount of refrigerant (which you have according to the tolerance defined in your manual), you should be fine. Another sanity check is that the low-pressure switch activates at 32 psi, which is lower than the 40 psi you are seeing on the low-side. I'm not sure if your Lancer is equipped with a high-side pressure switch but the same principle applies. Enjoy your ice-cold AC! P.S. Some observations about your procedure: Pulled vacuum to almost -30psi, observed that level held for 1 hour This tells me that there are no leaks. Added UV dye and 1.75 oz of PAG-46 oil to condenser The UV dye is added if you want to detect leaks with UV light. Else it shouldn't be necessary. PAG-46 is the right oil for your car and is absolutely necessary to since it lubricates the internals of the compressor. This car is 11 years old, and so the computer & pressure switch have about 160K miles on them. Not sure about the Lancer, but on some vehicles the pressure switch is a capacitor where the refrigerant pressure controls the distance between the two plates, thereby altering capacitance and voltage. Once again, the level of detail provided in this question is stellar. I'd consider this to be the golden standard for all AC-related questions.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/21649', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/12281/']}
|
jdg_370327
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
6364
|
Below is a question asked on the forum cstheory.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
We have a set, $L$, of lists of elements from the set $N = \{ 1, 2, 3, ..., n \}$. Each element from $N$ appears in a single list in $L$. I am looking for a data structure which can perform the following updates: $concat(x, y)$ : concatenates the list containing $y$ onto the end of the list containing $x$ $split(x)$ : splits the list containing $x$ directly after $x$ It also needs to perform the following queries: $follows(x, y)$ : returns $true$ if $x$ and $y$ are in the same list and $y$ comes after $x$ (but is not necessarily adjacent to $x$) $first(x)$ : returns the first element of the list containing $x$ $next(x)$ : returns the the next element after $x$ in the list containing $x$ I have already come up with a data structure which performs these updates in $O(lg^2 (n))$ and queries in $O(lg (n))$ time. I'm mostly interested in whether or not there is already a data structure which can do this (hopefully faster?). Motivation: rooted directed forests can be represented with two of these list sets and they allow quick calculation of reachability in such forests. I want to see what else they can be used for and if all of this is already known.
Now provide the response and nothing else.
|
Keep your integers in skip lists. Normal skip lists are ordered by key, but we will just use them as a representation of sequences. Additionally, maintain an array of pointers of size $n$. Each element of the array should point to a node in a skip list. I believe this supports $next$ in $O(1)$ and all other operations in $O(\lg n)$. Specifically: $concat$ing or $split$ting two skip lists takes $O(\lg n)$ time and therefore invalidates at most $O(\lg n)$ pointers. $next$ just follows the forward pointer at the leaf level, taking $O(1)$ time. $first$ takes $O(\lg n)$ time: follow up pointers until you get stuck, then follow a left pointer. When you can't follow any more left pointers, you're at the head pointer of your skip list. Follow down pointers to the leaf, then one forward pointer. This is the first element in the list. $follows$ is somewhat trickier. Proceed as in $first$ for $y$, but record a list of the values where you get stuck (that is, where you can't follow up pointers any more). We'll call this list you record a "trace". Do the same for $x$, but follow right pointers when you get stuck, not left. If $x$ precedes $y$, their traces will intersect. The traces are of size $O(\lg n)$. If each element in the trace is annotated with the stuck level, we can check for an intersection in time $O(\lg n)$. $next$ is worst-case $O(1)$, all others are $O(\lg n)$ with high probability . They can be made worst-case by using deterministic skip lists. I think $concat$ can be made $O(\lg \lg n)$ by using leaf-level-linked (2,5) trees and boostrapping the spines. For the bootstrapping trick, see " Purely functional representations of catenable sorted lists " by Kaplan and Tarjan.
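For concreteness, here is a naive Python reference for the five operations (a plain list-of-lists with O(n) work per operation, not the O(log n) skip-list structure described above; it is only meant to pin down the intended semantics, and the class and field names are mine):

class ListSet:
    def __init__(self, n):
        # start from singleton lists [1], [2], ..., [n]
        self.lists = {i: [i] for i in range(1, n + 1)}
        self.owner = {i: i for i in range(1, n + 1)}   # element -> id of its list

    def _list_of(self, x):
        return self.lists[self.owner[x]]

    def concat(self, x, y):
        # append the list containing y to the end of the list containing x
        lx, ly = self.owner[x], self.owner[y]
        if lx == ly:
            return
        for e in self.lists[ly]:
            self.owner[e] = lx
        self.lists[lx].extend(self.lists[ly])
        del self.lists[ly]

    def split(self, x):
        # cut the list containing x directly after x
        lst = self._list_of(x)
        i = lst.index(x)
        tail = lst[i + 1:]
        del lst[i + 1:]
        if tail:
            new_id = tail[0]
            self.lists[new_id] = tail
            for e in tail:
                self.owner[e] = new_id

    def follows(self, x, y):
        lst = self._list_of(x)
        return self.owner[x] == self.owner[y] and lst.index(y) > lst.index(x)

    def first(self, x):
        return self._list_of(x)[0]

    def next(self, x):
        lst = self._list_of(x)
        i = lst.index(x)
        return lst[i + 1] if i + 1 < len(lst) else None   # None at the end of a list

s = ListSet(5)
s.concat(1, 2)
s.concat(1, 3)                                   # list of 1 is now [1, 2, 3]
print(s.follows(1, 3), s.first(3), s.next(2))    # True 1 3
s.split(2)                                       # now [1, 2] and [3]
print(s.follows(1, 3))                           # False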
|
{}
|
{'log_upvote_score': 5, 'links': ['https://cstheory.stackexchange.com/questions/6364', 'https://cstheory.stackexchange.com', 'https://cstheory.stackexchange.com/users/4765/']}
|
jdg_370328
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12876543
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
{%#include<stdio.h>#include<stdlib.h>%} %token ID NUM IF THEN LE GE EQ NE OR AND ELSE%right '='%left AND OR%left '<' '>' LE GE EQ NE%left '+''-'%left '*''/'%right UMINUS%left '!'%% The code mentioned above is part of the yacc for a simple IF ELSE program. What do %right and %left mean?
Now provide the response and nothing else.
|
%left and %right specify the associativity of an operator. The associativity of an operation determines which of two operations of the same precedence level is carried out first. Suppose we have the grammar rules:
exp ::= exp + exp
exp ::= ID
and suppose we have to parse the expression x+y-z. You see, as the precedence level of plus and minus is the same, this expression can be interpreted as (x+y)-z or x+(y-z). This does not seem like a big deal, but it introduces an ambiguity into the grammar. Parsing issues and theory aside, suppose we're parsing the expression 6+5-7, and suppose that our language can only work with natural numbers and throws an exception when underflow occurs. The result of (6+5)-7 (which is 4) will not be equal to 6+(5-7) (an exception), so we won't be able to predict the result -- unless we define the evaluation order by specifying the associativity of the operators. Also consider the case of expressions like f()+g()+h(), where the operands are functions which may have side effects.
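To make the ambiguity concrete, here is a small illustrative sketch in plain Python (not yacc; the operator table and names are made up for the example), evaluating 6+5-7 with a natural-number subtraction, grouped to the left versus to the right:

def nat_sub(a, b):
    # subtraction restricted to natural numbers: underflow is an error
    if a - b < 0:
        raise ValueError("underflow")
    return a - b

ops = {'+': lambda a, b: a + b, '-': nat_sub}

def eval_left(tokens):
    # left-associative grouping: ((6 + 5) - 7)
    acc = tokens[0]
    for i in range(1, len(tokens), 2):
        acc = ops[tokens[i]](acc, tokens[i + 1])
    return acc

def eval_right(tokens):
    # right-associative grouping: (6 + (5 - 7))
    if len(tokens) == 1:
        return tokens[0]
    return ops[tokens[1]](tokens[0], eval_right(tokens[2:]))

expr = [6, '+', 5, '-', 7]
print(eval_left(expr))                 # 4
try:
    print(eval_right(expr))
except ValueError as err:
    print("right grouping:", err)      # underflow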
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12876543', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1456914/']}
|
jdg_370329
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55155550
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What exactly is the reason for Control Freak: Commit rejected. Foxtrot merges not allowed We keep receiving this error quite often, is this is caused due to a combination of pull , rebase and amend by users while committing? Need clarity to get rid of this permanently. I know and understand the branch has diverged and it has lost the trace but what exactly has caused that to happen in simple language is highly appreciable It's a time killer for us to rebase every time when we see this error. We are manually cherry-picking the change to get rid of this. How to identify the committed type like was that a right commit afterrebase or pull or amended and which exactly is the commit that and bywhom? We want to educate developers to come out of similar commit mistakeshenceforth. Would love to hear about the best practices. Also, want to understand is there any reason for combination tools like git-bash/source-tree? Can we turn off this cause by any chance?
Now provide the response and nothing else.
|
That is related to Foxtrot merges, specifically prohibited on BitBucket: A foxtrot merge is a specific sequence of git commits. A particularly nefarious sequence. Out in the open, in lush open grasslands, the sequence looks like this: But foxtrots are rarely seen in the open. They hide up in the canopy, in-between the branches. I call them foxtrots because, when caught mid-pounce, they look like the foot sequence for the eponymous ballroom dance: Foxtrot merges are bad because they change origin/master's first-parent history. The parents of a merge commit are ordered. The first parent is HEAD. The second parent is the commit you reference with the git merge command. You can think of it like this:
git checkout 1st-parent
git merge 2nd-parent
If pushed: As explained in "GIT: How can I prevent foxtrot merges in my 'master' branch?", commit 'D' is a foxtrot merge because 'origin/master' is its 2nd parent. This is the result of a pull (fetch + merge). Once that foxtrot merge D is pushed... the first-parent history of 'origin/master' no longer contains commit 'B'! As explained by torek in "how to avoid foxtrot merge in git", this is the result of working directly on master (new commit C) and doing a git pull (instead of a pull --rebase, as I always advise). That will merge B and C into D (the foxtrot merge), which, once pushed, means origin/master no longer has B as a direct ancestor, but C. Your work 'C' now becomes the main published branch history (origin/master), instead of B, which is relegated to something that was merged.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55155550', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4057790/']}
|
jdg_370330
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
340796
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This so-called "zero drift" opamp droops .001V/sec, at temp 85C with a 1 uF cap. If I'm reading the spec correctly, that's 3.6V/hour! http://www.ti.com/lit/ds/symlink/lf398-n.pdf Is there a method to store a low current V for up to about 5 hours with drift or droop within my desired resolution of approx 250 ppm, or equivalent 12 bits? "Low current" meaning mA or uA scale. The sample rate is between once per second and once per 5 hours. Prefer to stay in analog domain, because I want to explore and expand my analog knowledge. The solution should be practical, and use commonly-available components Digital solutions are ok, but should be code-free, so it's accessible to non coders, and not require a computer to implement, so it's accessible to people who don't own a computer (such as the economically-disadvantaged teens i mentor). Not asking for specific part numbers, just the basic method. update: The manufacturer confirmed that my estimate droop-per-hour is correct. According to the manufacturer, droop is largely influenced by the input bias current of the buffer amplifier and whatever leakage may occur through the switch - not just normal capacitor leakage. https://e2e.ti.com/support/amplifiers/precision_amplifiers/f/14/p/641041/2365384#2365384
Now provide the response and nothing else.
|
Well, there seem to be solutions, although it's a bit of a blast from the past... A Survey of Analog Memory Devices (from 1962) says: The transpolarizer, an electrostatic analog of the more widely known transfluxor [...] For a more modern solution, a micro with ADC and DAC seems the way to go. Also, unlike the analog solutions, it is much more likely to be stable with temperature, which is always a nice bonus. Regarding large capacitors, there are several problems: Capacitor value depends on temperature, thus with a constant amount of charge in your capacitor, the voltage will vary with temperature. The effect will be tiny, or huge, depending on capacitor type. Capacitor leakage depends a lot on temperature (for electrolytic ones). X7R is a piezoelectric microphone. Dielectric absorption (DA) means that you charge your capacitor, then disconnect it, wait a bit, and then the voltage on it is now different! And it depends on the voltage that was there before you charged (or discharged it). Also, for large capacitors intended for supply decoupling, the effect is absolutely harmless, so no-one cares about it, and therefore there is no specification. I don't know if it depends on temperature and aging, but there is no reason why it would not. You will only get a useful specification for capacitors that are intended for high precision integrators and stuff like that. I remember measuring leakage on a 470µF 6V3 Panasonic FR capacitor. I charged it to 5V for a few minutes, then measured it every few minutes. The voltage would drop quickly due to DA, then it stabilized at around 4V. I left the capacitor on a shelf for a week, and measured again. Calculated leakage was in the nanoamps, but you'd have to keep it at the target voltage for a while (like at least several hours, if not days) in order to overcome dielectric absorption, so it would be completely useless in this application.
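As a back-of-the-envelope check tying the question's numbers to the leakage discussion above (my own arithmetic, not a datasheet figure): a droop of 0.001 V/s on a 1 uF hold capacitor corresponds to a total leakage/bias current of I = C*dV/dt = 1 uF x 0.001 V/s = 1 nA, and 0.001 V/s x 3600 s is about 3.6 V per hour, which matches the estimate in the question; so even nanoamp-level leakage paths matter at this resolution.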
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/340796', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/45643/']}
|
jdg_370331
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
7614429
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The best way I can describe my problem is like this: A notification is created at boot (with a BroadcastReceiver ). My app main activity is opened and the home button is pressed (the app is still running in the background until the system closes it). I pull down the status bar and press on the notification previously created at boot. Some activity, different from the main one, is started. I press the back button and the main activity is displayed. How can I prevent that last step? What I want with the back button is to go back where I was, which is the home screen (the desktop with all the widgets and app icons). My app's main activity was supposed to be running on the background, why was it called with the back button? In case it's relevant, my code to create a notification goes like this: public void createNotification(int notifyId, int iconId, String contentTitle, String contentText) { Intent intent = new Intent(mContext, NewNoteActivity.class); intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); intent.putExtra(AgendaNotesAdapter.KEY_ROW_ID, (long)notifyId); PendingIntent contentIntent = PendingIntent.getActivity(mContext, notifyId, intent, 0); Notification notification = new Notification(iconId, contentTitle, 0); notification.setLatestEventInfo(mContext, contentTitle, contentText, contentIntent); mNotificationManager.notify(notifyId, notification); I tried to add a couple of more flags combinations to intent but neither of them solved my problem... Suggestions?
Now provide the response and nothing else.
|
For whose who still might need answer. It looks like this is what you want to achieve: When you start an Activity from a notification, you must preserve the user's expected navigation experience. Clicking Back should take the user back through the application's normal work flow to the Home screen, and clicking Recents should show the Activity as a separate task. http://developer.android.com/guide/topics/ui/notifiers/notifications.html#NotificationResponse Your situation is - Setting up a regular activity PendingIntent See full steps in the link. Basically you need to: 1. Define Activity hierarchy in AndroidManifest.xml <activity android:name=".MainActivity" android:label="@string/app_name" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter></activity><activity android:name=".ResultActivity" android:parentActivityName=".MainActivity"> <meta-data android:name="android.support.PARENT_ACTIVITY" android:value=".MainActivity"/></activity> 2. Create a back stack based on the Intent that starts the Activity : ...Intent resultIntent = new Intent(this, ResultActivity.class);TaskStackBuilder stackBuilder = TaskStackBuilder.create(this);// Adds the back stackstackBuilder.addParentStack(ResultActivity.class);// Adds the Intent to the top of the stackstackBuilder.addNextIntent(resultIntent);// Gets a PendingIntent containing the entire back stackPendingIntent resultPendingIntent = stackBuilder.getPendingIntent(0, PendingIntent.FLAG_UPDATE_CURRENT);...NotificationCompat.Builder builder = new NotificationCompat.Builder(this);builder.setContentIntent(resultPendingIntent);NotificationManager mNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);mNotificationManager.notify(id, builder.build());
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7614429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/40480/']}
|
jdg_370332
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35261924
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to set the password when it changes in my User model. So I'm using boot method of the model: <?phpnamespace App\Model;class User extends \Illuminate\Database\Eloquent\Model{ protected $table = 'users'; public static function boot() { //die('here'); // this happens User::saving(function ($user) { //die('here'); // this doesn't happen if ($user->isDirty('password')) { $user->password = // hash password... } }); }} I'm using the save() method on the model to create the entry in the data base, apparently this should fire the creating event. I've emptied the database table to ensure that a new row is being creating (it is), this event does not fire - and my password is raw un-ecrypted. By the way, I'm using illuminate/database ^5.2 in my app (not Laravel). UPDATE - capsule initialization $capsule = new Illuminate\Database\Capsule\Manager;$capsule->addConnection([ 'driver' => 'mysql', 'host' => 'localhost', 'charset' => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix' => '', 'database' => 'mydb', 'username' => 'myuser', 'password' => 'mypass',]);$capsule->bootEloquent();
Now provide the response and nothing else.
|
If you want your events to work, you need to setup an event dispatcher for the capsule. First, you will need to add illuminate/events to your dependencies. Add "illuminate/events": "5.2.*" to your composer.json file: "require": { // other requires... "illuminate/events": "5.2.*"}, Next, you'll need to setup the event dispatcher on the capsule. Make sure you do this before your call to bootEloquent() . From the docs : // new capsule...// add connection...// Set the event dispatcher used by Eloquent models... (optional)use Illuminate\Events\Dispatcher;use Illuminate\Container\Container;$capsule->setEventDispatcher(new Dispatcher(new Container));// Setup the Eloquent ORM... (optional; unless you've used setEventDispatcher())$capsule->bootEloquent(); Now you should be good to go. While not related, I also wanted to point out that your boot method should make sure to call parent::boot(); before it does anything else (like setting up events). Optional solution If this is the only thing you're trying to do with the events, you can skip this altogether by setting up a mutator function for your password attribute. The mutator method will be called any time you assign a value to the mutated attribute (i.e. $user->password = "hello" ). To do this, just add the following function to your User model: public function setPasswordAttribute($value) { $this->attributes['password'] = bcrypt($value);}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35261924', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3265472/']}
|
jdg_370333
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4637383
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible to return a generic type using extension methods ? For example, I have the following method: // Convenience method to obtain a field within a row (as a double type) public static double GetDouble(this DataRow row, string field) { if (row != null && row.Table.Columns.Contains(field)) { object value = row[field]; if (value != null && value != DBNull.Value) return Convert.ToDouble(value); } return 0;} This is currently used as follows: double value = row.GetDouble("tangible-equity"); but I would like to use the following code: double value = row.Get<double>("tangible-equity"); Is this possible and if so, what would the method look like?
Now provide the response and nothing else.
|
How about this one: public static T Get<T>(this DataRow row, string field) where T: IConvertible { if (row != null && row.Table.Columns.Contains(field)) { object value = row[field]; if (value != null && value != DBNull.Value) return (T)Convert.ChangeType(value, typeof(T)); } return default(T); } Convert.ChangeType is much more flexible handling conversions as opposed to just casting. This pretty much reflects your original code, just generic.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4637383', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/448568/']}
|
jdg_370334
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4566769
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a function called runquery that makes calls to a database and then yields the rows, one by one. I wrote a memoize decorator (or more accurately, I just stole one from this stackoverflow question ) but on subsequent calls it just yields an empty sequence, presumably because a generator's values can only be yielded once. How could I modify the memoization decorator that works for Python generators? I realise I will need to store it in memory at some point but I'd like to handle this within the decorator and not modify the original function. The current code of the memoization function is: def memoized(f): # Warning: Doesn't work if f yields values cache={} def ret(*args): if args in cache: return cache[args] else: answer=f(*args) cache[args]=answer return answer return ret
Now provide the response and nothing else.
|
I realise this is somewhat of an old question, but for those who want a full solution: here's one, based on jsbueno's suggestion: from itertools import teefrom types import GeneratorTypeTee = tee([], 1)[0].__class__def memoized(f): cache={} def ret(*args): if args not in cache: cache[args]=f(*args) if isinstance(cache[args], (GeneratorType, Tee)): # the original can't be used any more, # so we need to change the cache as well cache[args], r = tee(cache[args]) return r return cache[args] return ret
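A quick usage check (this assumes the memoized decorator defined just above; the sample generator is my own):

@memoized
def countdown(n):
    print("generator body runs")        # should be printed only once per distinct n
    for i in range(n, 0, -1):
        yield i

print(list(countdown(3)))   # prints "generator body runs", then [3, 2, 1]
print(list(countdown(3)))   # [3, 2, 1] again, without re-running the body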
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4566769', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/383270/']}
|
jdg_370335
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3866491
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen in multiple projects a kind of catch all exception to catch all unexpected exception so the app won't crash, i see this usually with : AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(myUnexpectedExhandler);Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(threadExHandler); Is this a good or bad practice.
Now provide the response and nothing else.
|
Catching exceptions at the top level of your project is fine and correct. There, you can do things such as log it, report the details back to your team, etc. Exceptions should definitely be published somewhere if at all possible -- that helps a lot in terms of developing a rock-solid product (see Jeff Atwood's blog post " Exception-Driven Development " for a commentary on this). What is bad practice is catching exceptions inappropriately further down the call stack. The only time you should catch an exception is when you know exactly what to do with it. Certainly, you should never, ever, ever, ever silently swallow exceptions.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3866491', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/159527/']}
|
jdg_370336
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
29912773
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am new to Ionic.I am using Ionic Framework (1.3.20), Angular JS, Cordova 5.0.0 Template file browse.html code: <div class="col-50"><img ng-src="{{availableImages[currentImage].src}}" /></div> app.js code: .state('app.browse', { url: "/browse", views: { 'menuContent': { templateUrl: "templates/browse.html", controller: 'Ctrl' } } }) controller.js code .controller('Ctrl',function($scope) { $scope.currentImage = 0; $scope.availableImages = [ { src: "http://lorempixel.com/160/160/people/3" } ]; console.log("reading image in controller !!!");}) Header details : Request URL:http://lorempixel.com/160/160/people/3Request Method:GETStatus Code:404 Not Found (from cache)Response HeadersClient-Via:shouldInterceptRequestRequest HeadersProvisional headers are shownAccept:image/webp,*/*;q=0.8User-Agent:Mozilla/5.0 (Linux; Android 5.0.2; XT1033 Build/LXB22.46-28; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/42.0.2311.129 Mobile Safari/537.36 Config.xml file: <access origin="*"/> Error on console: GET http://lorempixel.com/160/160/people/3 404 (Not Found) I verified the url http://lorempixel.com/160/160/people/3 is showing image in my mobile browser. but the image is not coming on the app.
Now provide the response and nothing else.
|
Whitelisting the domains using cordova-plugin-whitelist solves the issue. Add the plugin via CLI: cordova plugin add cordova-plugin-whitelist and then add the following line of code to your app's config.xml : <allow-navigation href="http://*/*" /> and this meta tag in your index.html <meta http-equiv="Content-Security-Policy" content="default-src *; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' 'unsafe-eval'"> EDIT: The reason for this issue: From Cordova 4.0.0 for Android's update: Whitelist functionality is revamped You will need to add the new cordova-plugin-whitelist plugin to continue using a whitelist Setting a Content-Security-Policy (CSP) is now supported and is the recommended way to whitelist (see details in plugin readme) Network requests are blocked by default without the plugin, so install this plugin even to allow all requests, and even if you are using CSP. This new whitelist is enhanced to be more secure and configurable, but the Legacy whitelist behaviour is still available via a separate plugin (not recommended). Note: while not strictly part of this release, the latest default app created by cordova-cli will include this plugin by default.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/29912773', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2206295/']}
|
jdg_370337
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23799091
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
@Html.DropDownListFor(model => model.Status, new List<SelectListItem> { new SelectListItem{Text="Active", Value="True"}, new SelectListItem{Text="Deactive", Value="False"}}) In view I am using this drop dowenlist coding. I run my application default deactive value is display in dropdown list box. I want display default Active
Now provide the response and nothing else.
|
Like this: @Html.DropDownListFor(model => model.Status, new List<SelectListItem> { new SelectListItem{Text="Active", Value="True"}, new SelectListItem{Text="Deactive", Value="False"}},"Select One") If you want Active to be selected by default then use Selected property of SelectListItem : @Html.DropDownListFor(model => model.Status, new List<SelectListItem> { new SelectListItem{Text="Active", Value="True",Selected=true}, new SelectListItem{Text="Deactive", Value="False"}},"Select One") If using SelectList , then you have to use this overload and specify SelectListItem Value property which you want to set selected: @Html.DropDownListFor(model => model.title, new SelectList(new List<SelectListItem> { new SelectListItem { Text = "Active" , Value = "True"}, new SelectListItem { Text = "InActive", Value = "False" } }, "Value", // property to be set as Value of dropdown item "Text", // property to be used as text of dropdown item "True"), // value that should be set selected of dropdown new { @class = "form-control" })
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/23799091', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3663614/']}
|
jdg_370338
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
240496
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Find the MLE of the unknown parameter $\theta$ when $X_1,X_2,...,X_n$ is a sample from the distribution whose density function is: $$f_X(x) = \frac12e^{-|x-\theta|}, -\infty<x<\infty$$What I did so far:$$\text{maximize }(\frac12)^ne^{-|x_1-\theta|-|x_2-\theta|-...-|x_n-\theta|} \\\text{thus we want to minimize }|x_1-\theta|+|x_2-\theta|+...+|x_n-\theta|$$Since $|x_i-\theta|$ is positive for all $i$, we want to solve for $\theta$ so:$$|x_1-\theta|+|x_2-\theta|+...+|x_n-\theta|=0$$What do I do from here?
Now provide the response and nothing else.
|
$$f_X(x) = \frac12e^{-|x-\theta|}, -\infty<x<\infty$$ is a special case of the Laplace distribution given as follows: $$f_X(x|\mu,\sigma)=\frac{1}{\sqrt{2}\sigma}e^{-\frac{\sqrt{2}|x-\mu|}{\sigma}},x\in\mathbb{R}$$ for $\sigma=\sqrt{2}$ and $\mu:=\theta$ . To be more general, lets consider the Laplace distribution with parameters $(\mu,\sigma)$ . Consider the likelihood function for $N$ data samples: $$L(\mu,\sigma;x)=\prod_{t=1}^N \frac{1}{\sqrt{2}\sigma}e^{-\frac{\sqrt{2}|x_t-\mu|}{\sigma}}=(\sqrt{2}\sigma)^{-N}e^{\frac{-\sqrt{2}}{\sigma}\sum_{t=1}^N |x_t-\mu|}$$ Take the log likelihood funtion as $l(\mu,\sigma;x)=\log(L(\mu,\sigma;x))$ and we get $$l(\mu,\sigma;x)=-N\ln (\sqrt{2}\sigma)-\frac{\sqrt{2}}{\sigma}\sum_{t=1}^N |x_t-\mu|$$ Take the derivative with respect to the parameter $\mu$ $$\frac{\partial l}{\partial \mu}=-\frac{\sqrt{2}}{\sigma}\sum_{t=1}^N \frac{\partial|x_t-\mu|}{\partial\mu}$$ which is equal to $$=\frac{\sqrt{2}}{\sigma}\sum_{t=1}^N\mbox{sgn}(x_t-\mu)$$ using the identity $$\frac{\partial |x|}{\partial x}=\frac{\partial \sqrt{x^2}}{\partial x}=x(x^2)^{-1/2}=\frac{x}{|x|}=\mbox{sgn(x)}$$ To maximize the likelihood function we need to solve $$=\frac{\sqrt{2}}{\sigma}\sum_{t=1}^N\mbox{sgn}(x_t-\mu)=0 \quad\quad (1)$$ For which we have two cases; $N$ is even or odd. If $N$ is odd and we choose $\hat{\mu}=\mbox{median}(x_1,\ldots ,x_N)$ , then there are $\frac{N-1}{2}$ cases where $x_t<\mu$ and for the other $\frac{N-1}{2}$ cases $x_t>\mu$ , therefore $\hat{\mu}$ satisfies ( $1$ ) and is the Maximum likelihood estimator for the parameter $\mu$ If $N$ is even, we can not simply choose one $x_t$ which will satisfy ( $1$ ), however we can still minimize it through ranking the observations as $x_1\leq x_2\leq \ldots,x_N$ and then choosing either $x_{N/2}$ or $x_{(N+1)/2}$ In summary $\hat{\mu}=\mbox{median}(x_1,\ldots ,x_N)$ is the maximum likelihood estimator for any $N$
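As a quick numerical illustration of the final claim (illustrative Python on an arbitrary simulated sample; not part of the derivation): the sum of absolute deviations is minimised at the sample median.

import random

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(101)]        # odd N

def sum_abs_dev(mu):
    return sum(abs(x - mu) for x in xs)

sample_median = sorted(xs)[len(xs) // 2]
grid = [i / 1000.0 for i in range(-4000, 4001)]      # brute-force search over [-4, 4]
best = min(grid, key=sum_abs_dev)
print(sample_median, best)                 # the brute-force minimiser sits next to the median
print(abs(sample_median - best) < 1e-3)    # True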
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/240496', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/45586/']}
|
jdg_370339
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3219025
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried both calculations on Wolfram Alpha and it returns different results , but I can't get a grasp of why it is like that.From my point of view, both calculations should be the same, as $2.5=25/10,$ and $(-2)^{2.5}$ is equal to $(-2)^{25/10},$ relying on a general rule $(a^m)^n=a^{mn}$ . Links to sources: https://www.wolframalpha.com/input/?i=(-2)%5E(2.5) https://www.wolframalpha.com/input/?i=((-2)%5E(25))%5E(1%2F10)
Now provide the response and nothing else.
|
J.W. Tanner has communicated the main point and provided some links to questions that provide more details. I'd like to try to tell the (mostly) whole story in one place. Recall that the standard definition of $a^b$ for $a \in \mathbb{R}_{>0}$ , $b\in \mathbb{R}$ is $$a^b := e^{b\ln(a)}$$ Where the exponential function can be defined in several ways-- through its power series, as the solution to the differential equation $y'=y$ , or the inverse to the natural logarithm (which is in turn defined as the integral $\ln(x)=\int_1^x\frac{1}{t}dt$ ). From this definition, it's clear that $b\ln(a)=\ln(a^b)$ , so we have $$a^{bc} = e^{bc\ln(a)}=e^{c\ln(a^b)}=(a^b)^c.$$ However, for $a \leq 0$ , this definition requires us to make sense of $\ln(a)$ , and the integral definition referenced above diverges. How might we do this? Since we're trying to understand exponentiation of negative numbers, we surely must include the case of $(-1)^{1/2} = \pm i \in \mathbb{C}$ , so we can't get around working in the complex plane. If we want to try to extend our earlier definition of $a^b$ , then, we're forced to confront the extension of the exponential function to the complex plane. Fortunately, the exponential function's power series definition extends naturally to the complex plane, and from it we can easily derive Euler's identity, which states $$e^{i\theta} = \cos(\theta)+i\sin(\theta)$$ for $\theta \in \mathbb{R}$ , so $e^{i\theta}$ is a point on the unit circle at angle $\theta$ from the positive real axis, measured counterclockwise. In particular, we see that any nonzero complex number $z$ can be written uniquely as $z=re^{i\theta}$ for some $r \in \mathbb{R}_{>0}$ and $-\pi < \theta \leq \pi$ . If we want a defining property of our extension of the natural logarithm to be that the exponential function inverts it (which it had better, if the original formula is to always return $a^1=a$ ), then, one way to define the natural logarithm of $z$ is $\ln(z) := \ln(r)+i\theta$ , as this gives $$e^{\ln(z)}=e^{\ln(r)+i\theta}=re^{i\theta}=z,$$ as desired. Note $z=r$ and $\theta=0$ if $z$ is real and positive, so this is indeed an extension of the usual natural logarithm. However, this choice was not unique-- we had to restrict $-\pi < \theta \leq \pi$ to make this definition. If our defining property is just inversion by the exponential function, it's clear that $\ln(z)=\ln(r)+i(\theta+2\pi n)$ works just as well for any integer $n$ , and in general one could define a natural logarithm by instead restricting $\theta$ to be in any interval of length $2\pi$ we want, even making the interval a function of $r$ -- making this choice is called choosing a branch of the logarithm. The original definition I gave is called the principal branch, and this is what most calculators like Wolfram Alpha will use. Going back to our definition of $a^b$ and declaring it true for any $a,b \in \mathbb{C}$ , we see the result depends on our choice of branch. This is what people mean when they say that exponentiation isn't uniquely defined in $\mathbb{C}$ . Now, let's finally see what goes wrong in your example using the principal branch of the logarithm to define $(-2)^{2.5}$ and $((-2)^{25})^{1/10}$ . We have $$(-2)^{2.5}=e^{2.5\ln(-2)}=e^{2.5(\ln(2)+i\pi)}=e^{2.5\ln(2)+2.5\pi i}=e^{2.5\ln(2)}e^{i\frac{\pi}{2}} = 2^{2.5}i,$$ while $$((-2)^{25})^{1/10}=(-2^{25})^{1/10} = e^{\frac{1}{10}\ln(-2^{25})} = e^{\frac{1}{10}(\ln(2^{25})+i\pi)} = 2^{2.5}e^{i\pi/10}=2^{2.5}(\cos(\pi/10)+i\sin(\pi/10)),$$ and these are clearly different. 
This example demonstrates precisely that, in general, the identity $a^{bc}=(a^b)^c$ does not hold if $a$ is not a positive real number, and you can similarly see that this identity breaks down if $b$ is not real, even if $a \in \mathbb{R}_{>0}$ .
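For readers who want to see the two branches numerically, here is a short Python check (Python's complex power also uses the principal branch, so the values should line up with the hand computation above, up to floating-point rounding):

import cmath

a = (-2 + 0j) ** 2.5                 # principal-branch (-2)^2.5
b = ((-2 + 0j) ** 25) ** 0.1         # principal-branch ((-2)^25)^(1/10)
print(a)                             # roughly 5.657j, i.e. 2^2.5 * i
print(b)                             # roughly 5.380 + 1.748j, i.e. 2^2.5 * e^(i*pi/10)
print(2 ** 2.5 * cmath.exp(1j * cmath.pi / 2))    # matches a up to rounding
print(2 ** 2.5 * cmath.exp(1j * cmath.pi / 10))   # matches b up to rounding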
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/3219025', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/665894/']}
|
jdg_370340
|