source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id
---|---|---|---|---|---|---|---|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
17937003
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I haven't quite found an elegant way to solve this issue. I have an abstract class that several other classes are inheriting with an abstract method that can contain anywhere from zero to 4-5 arguments of varying types. public abstract class Item {public abstract void use();} For instance, I have a Book class that inherits this and takes no arguments when overriding use(), I have a Key class that inherits and takes a String and a Queue as arguments when overriding, etc... I've tried using generics but I have to input the number used, such as Item, when it actually depends on the class. public abstract class Item<T,U> {public abstract void use(T arg1, U arg2); //Number of arguments/types could be more or less} I've tried sending a variable list of Objects but the object types are always variable and I've unsure as to the syntax to receive in the inheriting classes. public abstract class Item<T> {public abstract void use(T... arguments);}public class Book extends Item<?> {public void use(?);}public class Book extends Item<String, Queue> { //Wrong number of arguments since I can't use Item<T...>public void use(String str, Queue q); //fails} I may just be doing something wrong - can anyone offer any assistance or insight?
Now provide the response and nothing else.
|
I've struggled with the same question, and there's not a perfect answer, but I can give you a few things to consider. First, you're basically trying to do something that is inherently against Object Oriented Programming, which is that you're trying to create a variable interface. The point of an interface is that code that gets an abstract version of the object (the Item rather than the Book, for example) knows how to invoke the use() method. This means that it must know what can be passed to the use() method. If the answer depends on the implementation of the abstract class or interface, then you need to ensure that the code using it actually knows what kind of implementation (Book, etc.) it's using, otherwise it's not going to know how to invoke use() with the appropriate parameters anyway. It sounds like you need to refactor your code, in all honesty. However, there is a way to answer your question as stated without refactoring the architecture. You could create a class whose data is all of the different types of parameters that could possibly be passed to the use() method, have the calling code set the fields of that class, and then pass that to the use() method. For example: public class UseParameters { private String string; private Queue queue; /* any other potential parameters to use(...) */ public void setString(String string) { this.string = string; } public String getString() { return string; } /* all of the other accessor methods, etc. */ } Then, you could define the use method in Item like this: public abstract void use(UseParameters params); And any code using an Item would have to set the parameters of the object appropriately: Item item = ...; /* however you're going to get the item */ UseParameters params = new UseParameters(); params.setString("good string"); params.setQueue(new Queue()); item.use(params); I just want to point out that if the code above knows the Item is a Book (which is how it knows to set the String and Queue), then why not just get a Book and skip needing an abstract class with a variable use() method altogether? But I digress. Anyway, the Book would then implement the use() method like so: @Override public void use(UseParameters params) { if (params.getString() == null || params.getQueue() == null) { /* throw exception */ } /* do what books do with strings and queues */ } I think that gets you what you want, but you should still consider refactoring.
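For illustration, here is a minimal, self-contained sketch (in Java, with hypothetical Book and Key classes) of the refactoring direction hinted at above: each subclass captures whatever it needs when it is constructed, so use() keeps a uniform, argument-free signature and callers never have to know the concrete type:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Uniform interface: callers never need to know which arguments a concrete item requires.
abstract class Item {
    public abstract void use();
}

// A Book needs nothing extra.
class Book extends Item {
    @Override
    public void use() {
        System.out.println("Reading the book");
    }
}

// A Key captures its collaborators up front instead of taking them in use().
class Key extends Item {
    private final String doorId;
    private final Queue<String> events;

    Key(String doorId, Queue<String> events) {
        this.doorId = doorId;
        this.events = events;
    }

    @Override
    public void use() {
        events.add("unlocked " + doorId);
    }
}

public class ItemDemo {
    public static void main(String[] args) {
        Queue<String> events = new ArrayDeque<>();
        Item[] items = { new Book(), new Key("front-door", events) };
        for (Item item : items) {
            item.use(); // same call for every item
        }
        System.out.println(events);
    }
}
```

This is only one possible shape for such a refactoring; the point is that the varying data moves into the constructors rather than into the shared method signature.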
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17937003', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2632304/']}
|
jdg_373141
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
161967
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This is code for timer in my project on STM32F429: //timer initialization void timerInit() { uwPrescalerValue2 = (uint32_t) ((SystemCoreClock / 2) / 100000) - 1; RS485Timer.Instance = TIM5; RS485Timer.Init.Period = 67400000; // high value to notice interrupt even without debugging RS485Timer.Init.Prescaler = 400000; RS485Timer.Init.ClockDivision = 0; RS485Timer.Init.CounterMode = TIM_COUNTERMODE_UP; HAL_TIM_Base_Init(&RS485Timer); } void timerReset() { HAL_TIM_Base_Stop_IT(&RS485Timer); HAL_TIM_Base_DeInit(&RS485Timer); HAL_TIM_Base_Init(&RS485Timer); HAL_TIM_Base_Start_IT(&RS485Timer); printf("%d timer reset\n", countereset); countereset++; } void HAL_TIM_Base_MspInit(TIM_HandleTypeDef *htim) { /*##-1- Enable peripherals and GPIO Clocks #################################*/ /* TIMx Peripheral clock enable */ __TIM5_CLK_ENABLE(); /*##-2- Configure the NVIC for TIMx #########################################*/ /* Set the TIMx priority */ HAL_NVIC_SetPriority(TIM5_IRQn, 7, 1); /* Enable the TIMx global Interrupt */ HAL_NVIC_EnableIRQ(TIM5_IRQn); } void HAL_TIM_Base_MspDeInit(TIM_HandleTypeDef *htim) { __TIM5_FORCE_RESET(); __TIM5_RELEASE_RESET(); HAL_NVIC_DisableIRQ(TIM5_IRQn); } void TIM5_IRQHandler(void) { if (__HAL_TIM_GET_FLAG(&RS485Timer, TIM_FLAG_UPDATE) != RESET) //In case other interrupts are also running { if (__HAL_TIM_GET_ITSTATUS(&RS485Timer, TIM_IT_UPDATE) != RESET) { __HAL_TIM_CLEAR_FLAG(&RS485Timer, TIM_FLAG_UPDATE); HAL_TIM_IRQHandler(&RS485Timer); printf("timer interrupt\n"); } } } And after running timerReset() function in the middle of my program, interrupt starts not few seconds later, but almost immediately. I tried few other timers to check if there is no hardware problem, but no, it isn't.
Now provide the response and nothing else.
|
I ran into this with an STM32F105. The STM32F1xx Standard Peripheral Library functions are a bit different than what you are using, but the idea should be the same. Issuing the TIM_TimeBaseInit() function caused the TIM_SR_UIF flag to become set. I haven't gone back yet to figure out why. Once this bit is set, the interrupt will trigger as soon as it is enabled. To fix it, after calling TIM_TimeBaseInit() , I immediately called TIM_ClearITPendingBit() . Then I would enable the interrupt with TIM_ITConfig() . This fixed the problem. My complete initialization routine looks like this: // Enable the peripheral clockRCC_APB1PeriphClockCmd(RCC_APB1Periph_TIM5, ENABLE);// Configure the timebaseTIM_TimeBaseInitStructure.TIM_Prescaler = 1;TIM_TimeBaseInitStructure.TIM_Period = 35999;TIM_TimeBaseInit(TIM5, &TIM_TimeBaseInitStructure);// That last function caused the UIF flag to get set. Clear it.TIM_ClearITPendingBit(TIM5, TIM_IT_Update);// Configure so that the interrupt flag is only set upon overflowTIM_UpdateRequestConfig(TIM5, TIM_UpdateSource_Regular);// Enable the TIM5 Update Interrupt typeTIM_ITConfig(TIM5, TIM_IT_Update, ENABLE);
|
{}
|
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/161967', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/54506/']}
|
jdg_373142
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
36681078
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Angular 2 application. A service is requests data from an api that returns the results like the following: { "data":[ {"id":1,"timestamp":"2016-04-17T19:52:53.4510935+01:00","sourceDatabaseServer":"127.0.0.1","sourceDatabaseName":"Database1","targetDatabaseServer":"192.168.99.101","targetDatabaseName":"Database2"}, {"id":2,"timestamp":"2016-04-17T19:52:53.4510935+01:00","sourceDatabaseServer":"127.0.0.2","sourceDatabaseName":"Database3","targetDatabaseServer":"192.168.99.102","targetDatabaseName":"Database4"}, {"id":3,"timestamp":"2016-04-17T19:52:53.4510935+01:00","sourceDatabaseServer":"127.0.0.3","sourceDatabaseName":"Database5","targetDatabaseServer":"192.168.99.103","targetDatabaseName":"Database6"} ]} My Angular 2 service looks like this (I've cut the error handling for brevity as we're on the happy path here): getList() : Observable<SomeModel[]> { return this._http.get(this._getListUrl).map(this.extractData);} private extractData(res: Response) { return res.json().data || {};} and my component like this: results: SomeModel[];errorMessage: string;ngOnInit() { this._someService.getList() .subscribe( results => this.results = results, error => this.errorMessage = <any>error);} and my model like this: export class SomeModel { constructor( public id: number, public timestamp: Date, public sourceDatabaseServer: string, public sourceDatabaseName: string, public targetDatabaseServer: string, public targetDatabaseName: string ) { }} Everything looked like it was working however when I tried to display timestamp using the DatePipe like so {{item.timestamp | date:'short'}} the application blows up with the following error message: Invalid argument '2016-04-17T19:40:38.2424240+01:00' for pipe 'DatePipe' in [{{result.timestamp | date:'short'}} After some investigation I believe that timestamp is not actually being converted to the Date type but is instead just being set a string . I'm guessing this is becuase the Date type isn't known at the time Response.json() is called? or am I missing something else entirely? Is there a fix or work around for this?
Now provide the response and nothing else.
|
I would map the string field to a date one: getList(): Observable<SomeModel[]> { return this._http.get(this._getListUrl).map(this.extractData); } private extractData(res: Response) { var data = res.json().data || []; data.forEach((d) => { d.timestamp = new Date(d.timestamp); }); return data; }
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/36681078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/300744/']}
|
jdg_373143
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
51142
|
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I saw a tutorial here that shows wiring a button without a resistor. I have heard that this is bad if you set the pin as an output by mistake and press the button because it shorts the pin to ground. Besides that issue, can this be done with multiple buttons as long as I make sure to code it correctly? I read somewhere else that this only works with the I2C pins and thus can only work with a maximum of 2 buttons. I am working with a Raspberry Pi b+, but I also have a Pi 3 available.
Now provide the response and nothing else.
|
If the code is written correctly such that the GPIO pins are input only, you should not have an issue. However, the use of resistors may still be warranted, not to limit the current in case of bad coding, but to pull the default state of the button to a defined logic level. When the button is "up", it is effectively an open circuit. What then is the logic level on the GPIO? If you don't have pull-up or pull-down resistors, the logic level will "float" and you could get unexpected results which appear to be button presses when no button is pressed. You can code around that with the Pi's internal pull-up and pull-down resistors: the GPIO configuration lets you declare a pin as an input that should have a pull-up or pull-down (logically) attached, and the SoC then engages internal resistors to provide that capability.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://raspberrypi.stackexchange.com/questions/51142', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/29246/']}
|
jdg_373144
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50194925
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to append multiple files named lab_X.txt to one single output file final.txt. Knowing that all the files in one folder are the ones that I need I moved them into the current directory and used cat *.txt > final.txt , knowing that > overwrites the file. I want to insert a simple message between the files similar to ============= , is that possible?
Now provide the response and nothing else.
|
Even though you're going to the ngrok url, the host header in the request is still set as the name of your site. Laravel uses the host header to build the absolute url for links, assets, etc. ngrok includes the ngrok url in the X-Original-Host header, but Laravel doesn't know anything about that. There are two basic solutions to the issue: update the request with the proper server and header values, or use the forceRootUrl() method to ignore the server and header values. TrustedProxies and Forwarded Host If you're using TrustedProxies (default in Laravel >= 5.5), and you have it configured to trust all proxies ( protected $proxies = '*'; ), you can set the X-Forwarded-Host header to the X-Original-Host header. Laravel will then use the value in the X-Forwarded-Host header to build all absolute urls. You can do this at the web server level. For example, if you're using apache, you can add this to your public/.htaccess file: # Handle ngrok X-Original-Host HeaderRewriteCond %{HTTP:X-Original-Host} \.ngrok\.io$ [NC]RewriteRule .* - [E=HTTP_X_FORWARDED_HOST:%{HTTP:X-Original-Host}] If you prefer to handle this in your application instead of the web server, you will need to update the Laravel request. There are plenty of places you could choose to do this, but one example would be in your AppServiceProvider::boot() method: public function boot(\Illuminate\Http\Request $request){ if ($request->server->has('HTTP_X_ORIGINAL_HOST')) { $request->server->set('HTTP_X_FORWARDED_HOST', $request->server->get('HTTP_X_ORIGINAL_HOST')); $request->headers->set('X_FORWARDED_HOST', $request->server->get('HTTP_X_ORIGINAL_HOST')); }} Not Using TrustedProxies If you're not using TrustedProxies, you can't use the .htaccess method. However, you can still update the server and headers values in your application. In this case, you'd need to overwrite the Host header: public function boot(\Illuminate\Http\Request $request){ if ($request->server->has('HTTP_X_ORIGINAL_HOST')) { $request->server->set('HTTP_HOST', $request->server->get('HTTP_X_ORIGINAL_HOST')); $request->headers->set('HOST', $request->server->get('HTTP_X_ORIGINAL_HOST')); }} Using forceRootUrl() If you don't want to modify any headers or the Laravel request, you can simply tell the URL generator what root url to use. The URL generator has a forceRootUrl() method that you can use to tell it to use a specific value instead of looking at the request. Again, in your AppServiceProvider::boot() method: public function boot(\Illuminate\Http\Request $request){ if ($request->server->has('HTTP_X_ORIGINAL_HOST')) { $this->app['url']->forceRootUrl($request->server->get('HTTP_X_FORWARDED_PROTO').'://'.$request->server->get('HTTP_X_ORIGINAL_HOST')); }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50194925', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7971087/']}
|
jdg_373145
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3541378
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How would I code a reversible shuffle algorithm in C# which uses a key to shuffle and can be reversed to the original state? For instance, I have a string: "Hello world", how can I shuffle it so that later I could be able to reverse the shuffled string back to "Hello world".
Now provide the response and nothing else.
|
Look at Fisher-Yates shuffle for a way to permute the string based on a key. Feed the key as the seed into a PRNG, use that to generate the random numbers used by the shuffle. Now, how to reverse the process? Fisher-Yates works by swapping certain pairs of elements. So to reverse the process you can feed the same key into the same PRNG, then run through the Fisher-Yates algorithm as if you were shuffling an array the size of your string. But actually you don't move anything, just record the indexes of the elements that would be swapped at each stage. Once you've done this, run through your list of swaps in reverse , applying them to your shuffled string. The result is the original string. So for example, suppose we've shuffled the string "hello" using the following swaps (I haven't used a PRNG here, I rolled dice, but the point about a PRNG is it gives you the same sequence of numbers given the same seed): (4,0): "hello" -> "oellh"(3,3): "oellh" -> "oellh"(2,1): "oellh" -> "olelh"(1,0): "olelh" -> "loelh" So, the shuffled string is "loelh". To deshuffle, I generate the same series of "random" numbers, 0, 3, 1, 0. Then apply the swaps in reverse order: (1,0): "loelh" -> "olelh"(2,1): "olelh" -> "oellh"(3,3): "oellh" -> "oellh"(4,0): "oellh" -> "hello" Success! The downside of this of course is that it uses a lot of memory for the deshuffle: an array of indexes as long as your original array of chars. So for truly huge arrays, you might want to choose a PRNG (or anyway a sequence-generation function) that can be stepped either forwards or backwards without having to store all the output. This rules out hash-based cryptographically secure PRNGs, but LFSRs are reversible. Btw, why do you want to do this?
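To make the swap-recording idea concrete, here is a minimal sketch in Java (the question is about C#, but the structure ports directly to System.Random); the seed plays the role of the key, and the names here are illustrative only:

```java
import java.util.Random;

public class ReversibleShuffle {

    // Fisher-Yates shuffle driven by a seeded PRNG; the seed acts as the key.
    static char[] shuffle(char[] input, long key) {
        char[] a = input.clone();
        Random rng = new Random(key);
        for (int i = a.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1);
            char tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
        return a;
    }

    // Regenerate the same swap sequence from the key, then apply the swaps in reverse order.
    static char[] unshuffle(char[] shuffled, long key) {
        char[] a = shuffled.clone();
        int n = a.length;
        int[] swapWith = new int[n];           // swapWith[i] = index that position i was swapped with
        Random rng = new Random(key);
        for (int i = n - 1; i > 0; i--) {
            swapWith[i] = rng.nextInt(i + 1);
        }
        for (int i = 1; i < n; i++) {          // reverse order: undo the last swap first
            int j = swapWith[i];
            char tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
        return a;
    }

    public static void main(String[] args) {
        long key = 42L;
        char[] shuffled = shuffle("Hello world".toCharArray(), key);
        System.out.println(new String(shuffled));
        System.out.println(new String(unshuffle(shuffled, key))); // prints "Hello world"
    }
}
```

Note that this relies on the PRNG producing the same sequence for the same seed, exactly as described above.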
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3541378', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427303/']}
|
jdg_373146
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27504256
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I tried everything that is written in this article: http://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api , but nothing works.I'm trying to get data from webAPI2 (MVC5) to use in another domain using angularJS. my controller looks like this: namespace tapuzWebAPI.Controllers{ [EnableCors(origins: "http://local.tapuz.co.il", headers: "*", methods: "*", SupportsCredentials = true)] [RoutePrefix("api/homepage")] public class HomePageController : ApiController { [HttpGet] [Route("GetMainItems")] //[ResponseType(typeof(Product))] public List<usp_MobileSelectTopSecondaryItemsByCategoryResult> GetMainItems() { HomePageDALcs dal = new HomePageDALcs(); //Three product added to display the data //HomePagePromotedItems.Value.Add(new HomePagePromotedItem.Value.FirstOrDefault((p) => p.ID == id)); List<usp_MobileSelectTopSecondaryItemsByCategoryResult> items = dal.MobileSelectTopSecondaryItemsByCategory(3, 5); return items; } }}
Now provide the response and nothing else.
|
You need to enable CORS in your Web Api . The easier and preferred way to enable CORS globally is to add the following into web.config <system.webServer> <httpProtocol> <customHeaders> <add name="Access-Control-Allow-Origin" value="*" /> <add name="Access-Control-Allow-Headers" value="Content-Type" /> <add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" /> </customHeaders> </httpProtocol></system.webServer> Please note that the Methods are all individually specified, instead of using * . This is because there is a bug occurring when using * . You can also enable CORS by code. Update The following NuGet package is required: Microsoft.AspNet.WebApi.Cors . public static class WebApiConfig{ public static void Register(HttpConfiguration config) { config.EnableCors(); // ... }} Then you can use the [EnableCors] attribute on Actions or Controllers like this [EnableCors(origins: "http://www.example.com", headers: "*", methods: "*")] Or you can register it globally public static class WebApiConfig{ public static void Register(HttpConfiguration config) { var cors = new EnableCorsAttribute("http://www.example.com", "*", "*"); config.EnableCors(cors); // ... }} You also need to handle the preflight Options requests with HTTP OPTIONS requests. Web API needs to respond to the Options request in order to confirm that it is indeed configured to support CORS . To handle this, all you need to do is send an empty response back. You can do this inside your actions, or you can do it globally like this: # Global.asax.csprotected void Application_BeginRequest(){ if (Request.Headers.AllKeys.Contains("Origin") && Request.HttpMethod == "OPTIONS") { Response.Flush(); }} This extra check was added to ensure that old APIs that were designed to accept only GET and POST requests will not be exploited. Imagine sending a DELETE request to an API designed when this verb didn't exist. The outcome is unpredictable and the results might be dangerous .
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/27504256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4366233/']}
|
jdg_373147
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18113164
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In one of my classes, I am trying to use std::priority queue with a specified lambda for comparison: #pragma once#include <queue>#include <vector>auto compare = [] (const int &a, const int &b) { return a > b; };class foo{public: foo() { }; ~foo() { }; int bar();private: std::priority_queue< int, std::vector<int>, decltype(compare)> pq;}; My program compiles perfectly until I add a .cpp file to accompany the header: #include "foo.h"int foo::bar(){ return 0;} This time, my compiler generates an error: >main.obj : error LNK2005: "class <lambda> compare" (?compare@@3V<lambda>@@A) already defined in foo.obj Why can't I create a accompanying .cpp file if my header file contains a lambda? Compiler: Visual Studio 2012 My main.cpp : #include "foo.h"int main(){ return 0;}
Now provide the response and nothing else.
|
As @Rapptz suggested, const auto compare = [] (const int &a, const int &b) { return a > b; }; solved the problem. Why? Internal vs. external linkage. By default, a namespace-scope variable declared with auto, just like one declared with int, has external linkage. So a plain int j = 5; in foo.h that is later included by foo.cpp throws Error 2 error LNK2005: "int j" (?j@@3HA) already defined in Header.obj (VS 2013), and the non-const lambda variable fails in exactly the same way. However, const makes the linkage internal by default, which means each translation unit gets its own copy that is only visible inside that translation unit, thereby avoiding the duplicate-definition error.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18113164', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1783614/']}
|
jdg_373148
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15875199
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've found that JasperReports is really slow when filling a report from a Java app. The program hangs at this line: print = JasperFillManager.fillReport(report, parameters, xmlDataSource); It usually stays there for 3 minutes, consuming up to 300Mb of RAM and 50% CPU . report is a compiled (.jasper) report that uses 3 subreports. The datasource is a pretty big XML file (about 100k lines, 1.5Mb) The machine is a 3Ghz dual core with 4Gb of RAM So, how can I improve report filling performance?
Now provide the response and nothing else.
|
The Problem It seems that the problem is the XPath engine. That is, the library that parses the XML file looking for data. While iReport Designer uses Jaxen , JasperReport uses Xalan . Xalan is really slow compared to Jaxen (really really slow). That's why the problem only occurs when filling the report from a Java application and not from iReports. The Solution Well, the solution is simple, just add the following line in your Java application to select Jaxen lib instead of the default Xalan lib (it's deprecated, but it works): JRProperties.setProperty("net.sf.jasperreports.xpath.executer.factory", "net.sf.jasperreports.engine.util.xml.JaxenXPathExecuterFactory"); EDIT : That line was deprecated, I've found the correct way to set properties: DefaultJasperReportsContext context = DefaultJasperReportsContext.getInstance();JRPropertiesUtil.getInstance(context).setProperty("net.sf.jasperreports.xpath.executer.factory", "net.sf.jasperreports.engine.util.xml.JaxenXPathExecuterFactory"); You will also need to add the Jaxen .jar to your build path. Here is a link: https://mvnrepository.com/artifact/jaxen/jaxen While the report filling was taking 3-5 minutes with Xalan, it now completes in just a few seconds with Jaxen. The answer was found here: http://community.jaspersoft.com/questions/536842/jasperreports-too-slow And also here: http://community.jaspersoft.com/wiki/xml-data-source-very-slow-parse
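For context, a minimal, self-contained sketch of how the property fits into a fill run might look like the following (the report name, XML file and record path here are hypothetical placeholders, not taken from the question):

```java
import net.sf.jasperreports.engine.DefaultJasperReportsContext;
import net.sf.jasperreports.engine.JRPropertiesUtil;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.data.JRXmlDataSource;

import java.util.HashMap;
import java.util.Map;

public class ReportFiller {
    public static void main(String[] args) throws Exception {
        // Switch the XPath executer from Xalan to Jaxen before any report is filled.
        DefaultJasperReportsContext context = DefaultJasperReportsContext.getInstance();
        JRPropertiesUtil.getInstance(context).setProperty(
                "net.sf.jasperreports.xpath.executer.factory",
                "net.sf.jasperreports.engine.util.xml.JaxenXPathExecuterFactory");

        // Hypothetical file names and record path; adjust to your report and XML layout.
        JRXmlDataSource xmlDataSource = new JRXmlDataSource("data.xml", "/rows/row");
        Map<String, Object> parameters = new HashMap<>();

        JasperPrint print = JasperFillManager.fillReport("report.jasper", parameters, xmlDataSource);
        System.out.println("Filled " + print.getPages().size() + " page(s)");
    }
}
```

The important part is that the property is set before JasperFillManager.fillReport() is called, so the Jaxen executer is used for every XPath evaluation during the fill.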
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15875199', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/920173/']}
|
jdg_373149
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1259534
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In calculus the general Leibniz rule asserts that Let $n$ be a natural numbers, if $f$ and $g$ are $n$-times differentiable functions at a point $x$, then the function $fg$ is also $n$-times differentiable and it's $n$-th derivative at this point is given by $$(fg)^{(n)}(x)=\sum_{k=1}^n \binom nkf^{(k)}(x)g^{(n-k)}(x)$$ Now a similar theorem in algebra named after Newton asserts that Let $n$ be a natural number, if $a$ and $b$ are two real numbers, then we have $$(a+b)^n=\sum_{k=1}^n \binom nka^kb^{n-k}$$ I'm going to show that the Newton's binomial theorem can be deduced from Leibniz general rule. Let $n$ be a natural number and $a,b$ are two real numbers. And let $f(t)=e^{at}$ and $g(t)=e^{bt}$ and use general Leibniz rule to find $n$-th derivative of the function $f(t)g(t)=e^{(a+b)t}$ at the point $x=0$ to find$$\begin{align}(fg)^{(n)}(0)&=(e^{(a+b)t})^{(n)}|_{t=0}\\&=\sum_{k=1}^n \binom nkf^{(k)}(0)g^{(n-k)}(0)\\&=\sum_{k=1}^n \binom nk(e^{at})^{(k)}|_{t=0}(e^{bt})^{(n-k)}|_{t=0}\end{align}\tag{I}\label{I}$$On the other hand for every real number $c$ and every natural number $k$ we have$$(e^{ct})^{(k)}|_{t=0}=c^ke^{ct}|_{t=0}=c^k\tag{II}\label{II}$$Now apply $\eqref{II}$ in $\eqref{I}$ to find $$(a+b)^n=\sum_{k=1}^n \binom nka^kb^{n-k}$$ $\square$ The question is to find a proof of Leibniz general rule directly from Newton's binomial theorem. Thanks in advance... Note . You can use this method with other functions to find other interesting formulas.
Now provide the response and nothing else.
|
It is not necessary to reprove the binomial formula if we are willing to wade through some abstract nonsense. Denote by $X$ the space of sufficiently differentiable functions $x\mapsto f(x)$ defined in some neighborhood $U$ of $a\in{\Bbb R}$. The maps $$p(f,g):=f\cdot g,\quad D_l(f,g):=(f',g),\quad D_r(f,g):=(f,g')$$are bilinear on $X\times X$ and can therefore be lifted to linear maps on $Y:=X\otimes X$ such that$$p(f\otimes g)=f\cdot g,\quad D_l(f\otimes g)=f'\otimes g,\quad D_r(f\otimes g)=f\otimes g'\ .$$This means, e.g., that $$p\left(\sum_k \lambda_k (f_k\otimes g_k)\right)=\sum_k \lambda_k\> f_k\cdot g_k\ ,$$ and that $D_l$ and $D_r$ are now maps $Y\to Y$. The product rule $(f\cdot g)'=f'\cdot g+f\cdot g'$ can be written as$${d\over dx}\bigl( p(f,g)\bigr)=p\bigl(D_l(f,g)\bigr)+p\bigl(D_r(f,g)\bigr)\ ,$$which lifts to $${d\over dx}\circ p\>(f\otimes g)=p\circ(D_l+D_r)(f\otimes g)\ .$$As $D_l\circ D_r=D_r\circ D_l$ it then follows by induction that$$\left({d\over dx}\right)^n\circ p=p\circ(D_l+D_r)^n=p\circ\sum_{k=0}^n{n\choose k} D_l^{n-k}\>D_r^k\ .$$When applied to a single $f\otimes g$ this is Leibniz' formula.
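To see the operator identity in a concrete case (a routine check, not part of the argument above): for $n=2$, $$\left(\frac{d}{dx}\right)^{2}\circ p=p\circ(D_l+D_r)^2=p\circ\bigl(D_l^2+2\,D_lD_r+D_r^2\bigr),$$ and applying this to $f\otimes g$ gives the familiar $$(fg)''=f''\,g+2f'g'+f\,g''.$$ The binomial expansion of $(D_l+D_r)^2$ is legitimate precisely because $D_l\circ D_r=D_r\circ D_l$.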
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1259534', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/132351/']}
|
jdg_373150
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
59576
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an inequality of the form $$a^k +b^k \leq c$$ with $a,b,c,k \in\mathbb{Z^+}$. For known $a,b,c$ I want to find out the largest $k$ for which this inequality holds. I am able to write a program that does this for me, but cannot come up with any clever way to do it analytically. Is there a good way to do this?
Now provide the response and nothing else.
|
Suppose $1< b$ and that $a\leq b$. Then in all cases either we have $$k=\lfloor\log_b c\rfloor$$ or $$k=\lfloor\log_b c\rfloor-1.$$ Here is the justification: Certainly $\lfloor\log_b c\rfloor$ is an upper bound for $k$. Now, since $a^k+b^k\leq 2b^k\leq b^{k+1}$ for all positive $k$ we see that $\lfloor\log_b c\rfloor-1$ is a lower bound. Notice that both cases are possible as well. Let $c=65$, and let $a=2$, $b=4$. Then $k=2=\lfloor\log_b c\rfloor-1$. Alternatively, let $b=4$, $a=2$ and $c=63$. Then the maximal $k$ is $k=2$ which is $\lfloor\log_b c\rfloor$. Hope that helps,
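For instance, the arithmetic behind the first example ($a=2$, $b=4$, $c=65$) is $$\lfloor\log_4 65\rfloor=3,\qquad 2^{3}+4^{3}=72>65,\qquad 2^{2}+4^{2}=20\le 65,$$ so the maximal exponent is $k=2=\lfloor\log_b c\rfloor-1$. In practice you therefore only ever need to test the two candidates $\lfloor\log_b c\rfloor$ and $\lfloor\log_b c\rfloor-1$.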
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/59576', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/8365/']}
|
jdg_373151
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
586781
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here . Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer);insert into lives values ('2000-01-01 07:00', 1, 1, 1);insert into lives values ('2000-01-01 09:00', 4, 2, 2);insert into lives values ('2000-01-01 10:00', 2, 3, 3);insert into lives values ('2000-01-01 10:00', 1, 2, 4);insert into lives values ('2000-01-01 11:00', 4, 1, 5);insert into lives values ('2000-01-01 11:00', 3, 1, 6);insert into lives values ('2000-01-01 13:00', 3, 3, 1);
Now provide the response and nothing else.
|
On a table with 158k pseudo-random rows (usr_id uniformly distributed between 0 and 10k, trans_id uniformly distributed between 0 and 30), By query cost, below, I am referring to Postgres' cost based optimizer's cost estimate (with Postgres' default xxx_cost values), which is a weighed function estimate of required I/O and CPU resources; you can obtain this by firing up PgAdminIII and running "Query/Explain (F7)" on the query with "Query/Explain options" set to "Analyze" Quassnoy's query has a cost estimate of 745k (!), and completes in 1.3 seconds (given a compound index on ( usr_id , trans_id , time_stamp )) Bill's query has a cost estimate of 93k, and completes in 2.9 seconds (given a compound index on ( usr_id , trans_id )) Query #1 below has a cost estimate of 16k, and completes in 800ms (given a compound index on ( usr_id , trans_id , time_stamp )) Query #2 below has a cost estimate of 14k, and completes in 800ms (given a compound function index on ( usr_id , EXTRACT(EPOCH FROM time_stamp) , trans_id )) this is Postgres-specific Query #3 below (Postgres 8.4+) has a cost estimate and completion time comparable to (or better than) query #2 (given a compound index on ( usr_id , time_stamp , trans_id )); it has the advantage of scanning the lives table only once and, should you temporarily increase (if needed) work_mem to accommodate the sort in memory, it will be by far the fastest of all queries. All times above include retrieval of the full 10k rows result-set. Your goal is minimal cost estimate and minimal query execution time, with an emphasis on estimated cost. Query execution can dependent significantly on runtime conditions (e.g. whether relevant rows are already fully cached in memory or not), whereas the cost estimate is not. On the other hand, keep in mind that cost estimate is exactly that, an estimate. The best query execution time is obtained when running on a dedicated database without load (e.g. playing with pgAdminIII on a development PC.) Query time will vary in production based on actual machine load/data access spread. When one query appears slightly faster (<20%) than the other but has a much higher cost, it will generally be wiser to choose the one with higher execution time but lower cost. When you expect that there will be no competition for memory on your production machine at the time the query is run (e.g. the RDBMS cache and filesystem cache won't be thrashed by concurrent queries and/or filesystem activity) then the query time you obtained in standalone (e.g. pgAdminIII on a development PC) mode will be representative. If there is contention on the production system, query time will degrade proportionally to the estimated cost ratio, as the query with the lower cost does not rely as much on cache whereas the query with higher cost will revisit the same data over and over (triggering additional I/O in the absence of a stable cache), e.g.: cost | time (dedicated machine) | time (under load) |-------------------+--------------------------+-----------------------+some query A: 5k | (all data cached) 900ms | (less i/o) 1000ms |some query B: 50k | (all data cached) 900ms | (lots of i/o) 10000ms | Do not forget to run ANALYZE lives once after creating the necessary indices. 
Query #1 -- incrementally narrow down the result set via inner joins-- the CBO may elect to perform one full index scan combined-- with cascading index lookups, or as hash aggregates terminated-- by one nested index lookup into lives - on my machine-- the latter query plan was selected given my memory settings and-- histogramSELECT l1.* FROM lives AS l1 INNER JOIN ( SELECT usr_id, MAX(time_stamp) AS time_stamp_max FROM lives GROUP BY usr_id ) AS l2 ON l1.usr_id = l2.usr_id AND l1.time_stamp = l2.time_stamp_max INNER JOIN ( SELECT usr_id, time_stamp, MAX(trans_id) AS trans_max FROM lives GROUP BY usr_id, time_stamp ) AS l3 ON l1.usr_id = l3.usr_id AND l1.time_stamp = l3.time_stamp AND l1.trans_id = l3.trans_max Query #2 -- cheat to obtain a max of the (time_stamp, trans_id) tuple in one pass-- this results in a single table scan and one nested index lookup into lives,-- by far the least I/O intensive operation even in case of great scarcity-- of memory (least reliant on cache for the best performance)SELECT l1.* FROM lives AS l1 INNER JOIN ( SELECT usr_id, MAX(ARRAY[EXTRACT(EPOCH FROM time_stamp),trans_id]) AS compound_time_stamp FROM lives GROUP BY usr_id ) AS l2ON l1.usr_id = l2.usr_id AND EXTRACT(EPOCH FROM l1.time_stamp) = l2.compound_time_stamp[1] AND l1.trans_id = l2.compound_time_stamp[2] 2013/01/29 update Finally, as of version 8.4, Postgres supports Window Function meaning you can write something as simple and efficient as: Query #3 -- use Window Functions-- performs a SINGLE scan of the tableSELECT DISTINCT ON (usr_id) last_value(time_stamp) OVER wnd, last_value(lives_remaining) OVER wnd, usr_id, last_value(trans_id) OVER wnd FROM lives WINDOW wnd AS ( PARTITION BY usr_id ORDER BY time_stamp, trans_id ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING );
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/586781', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/68623/']}
|
jdg_373152
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
349376
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let A be a commutative ring with 1. 1) Prove that a sum of a nilpotent element and an invertible element is invertible. 2) Prove that if $f=a_0+a_1x+\dots+a_nx^n \in A[x]$ a) $\exists f^{-1}\in A[x] \Leftrightarrow a_0$ is invertible and the other coefficients are nilpotent. b) f is nilpotent $\Leftrightarrow $ all its coefficients are nilpotent. p.s. Those are the first two in a series of problems. The rest easily follow from each other. I'm only struggling with the first two.
Now provide the response and nothing else.
|
Here is an abstract argument, which I really like. All you have to know is, that the intersection of all prime ideals of $A$ (called "nilradical") consists exactly of the nilpotent elements of $A$. Let $u,n \in A$, $u$ invertible, $n$ nilpotent. Let further $p$ be any prime ideal of $A$. Since $n \in p$ we have $u+n \equiv u \not\equiv 0\mod p$. So $u+n$ lies in no prime ideal of $A$ and must therefore be a unit. a) "$\Leftarrow$" follows from 1. For "$\Rightarrow$" let $p$ be any prime ideal of $A$. Since $A/p$ is an integral domain and the reduction $\overline{f} \in (A/p)[x]$ remains invertible, it follows $\deg(\overline{f}) = 0$, which means $a_0 \notin p$ and $a_i \in p$ for $i = 1, \dots, n$. Since this holds for any prime ideal $p$ of $A$ we conclude that $a_0$ is invertible while $a_i$ is nilpotent for $i = 1, \dots, n$. b) Proceed similar to a) and use the fact that the only nilpotent element of an integral domain is $0$.
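A concrete instance of 1) and 2a), just to see the statements in action: in $A=\mathbb{Z}/4\mathbb{Z}$ the element $2$ is nilpotent and $1$ is a unit, so $1+2=3$ is a unit, and correspondingly the polynomial $1+2x$ is invertible in $A[x]$: $$(1+2x)^2=1+4x+4x^2\equiv 1\pmod 4,\qquad\text{so}\qquad (1+2x)^{-1}=1+2x.$$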
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/349376', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34371/']}
|
jdg_373153
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47651864
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use c3p0-0.9.5.2.jar and mchange-commons-java-0.2.11.jar to manage the pool connection,And I use postgreSql 9.3. I get these messages at least once a day in my Prod environment : Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database! at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:118) at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:692) at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:140) at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:81) at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446) ... 212 moreCaused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source. at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1469) at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:644) at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:554) at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutAndMarkConnectionInUse(C3P0PooledConnectionPool.java:758) at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:685) ... 215 moreCaused by: org.postgresql.util.PSQLException: FATAL: remaining connection slots are reserved for non-replication superuser connections at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:712) I have this config in my aplication : <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN""http://www.springframework.org/dtd/spring-beans.dtd"><beans> <bean id="dataSource" class = "com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="org.postgresql.Driver"/> <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/Test"/> <property name="user" value="postgres"/> <property name="password" value="postgres"/> <!-- pool sizing --> <!-- pool sizing --> <property name="initialPoolSize" value="32" /> <property name="minPoolSize" value="30" /> <property name="maxPoolSize" value="300" /> <property name="acquireIncrement" value="10" /> <property name="maxStatements" value="0" /> <!-- retries --> <property name="acquireRetryAttempts" value="30" /> <property name="acquireRetryDelay" value="1000" /> <!-- 1s --> <property name="breakAfterAcquireFailure" value="false" /> <!-- refreshing connections --> <property name="maxIdleTime" value="180" /> <!-- 3min --> <property name="maxConnectionAge" value="10" /> <!-- 1h --> <!-- timeouts and testing --> <property name="checkoutTimeout" value="0" /> <!-- 60s --> <property name="idleConnectionTestPeriod" value="60" /> <!-- 60 --> <property name="testConnectionOnCheckout" value="true" /> <property name="preferredTestQuery" value="SELECT 1" /> <property name="testConnectionOnCheckin" value="true" /> </bean></beans> in postgresql.conf I have this config : max_connections = 300shared_buffers = 32GB my server has this performance: 24 cpu, 256 GB memory the number of users using the application is around 1300 is there anyone who can help me to solve this problem thank you in advance
Now provide the response and nothing else.
|
There are superuser_reserved_connections connection slots (3 by default) that are reserved for superusers so that they can connect even in a situation where all other connection slots are taken. So you effectively only have 297 slots available. Either reduce the maximum number of connections in your connection pool or increase max_connections in PostgreSQL. By the way, a pool maximum of 300 is much too high; you should use a much lower setting with a connection pool (unless you have hundreds of cores in your database machine).
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47651864', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1377850/']}
|
jdg_373154
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14294497
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to manipulate some xml files using Linq to xml. I have an existing XDocument that I Load Now I cannot seem to be able to add a namespace to it. I do: //Load an existing xml into a XDocumentXDocument xdoc=XDocument.Load(myXml);//Create a namespace XNamespace myNS="http://www.w3.org/2001/XMLSchema-instance/MyShinyNewNamespace"; xAttribute myAttr=new XAttribute(XNamespace.Xmlns +"myNS",myNS); //Add new namepsace to root xdoc.Root ????What do you do here? How do I retrieve my namespace? How do I Remove/Replace? many thanks
Now provide the response and nothing else.
|
First of all, while XML markup allows you to use <root xmlns="http://example.com/ns"> <foo> <bar>baz</bar> </foo></root> to use a single namespace declaration attribute to put the root element as well as those descendant elements into the declared namespace, when you manipulate the tree model you need to change the Name of all elements so you need e.g. XNamespace myNs = "http://example.com/ns";foreach (XElement el in xdoc.Descendants()) { el.Name = myNs + el.Name.LocalName;} If you also want to set a certain prefix pf then addionally set xdoc.Root.Add(new XAttribute(XNamespace.Xmlns + "pf", myNs));
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14294497', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/231465/']}
|
jdg_373155
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32577607
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Hy, I'm trying to install the CSS-Framework "Semantic UI" on a Windows 10 PC with npm from node.js. I've followed the official install-instructions carefully. I've installed successful node.js (v4.0.0) with the official Windows Installer . Typed in Windows cmd npm install -g gulp to install gulp (npm v2.14.2). First time I've got an error ECONNRESET , which I could solve . So gulp was successful installed globally. At least I've tried several times to install Semantic UI with this code: npm install semantic-ui --savecd semanticgulp build Which does work halfway - WARN peerdependencies & the missing dependency package jQuery was the result. But I'm still able to build my fresh installed default Semantic UI Framework with gulp build . I said it works halfway on my Windows 10 System, but also when I try to call my local file copy of the default template Fixed Menu I get in the google chrome developer tools following error: Uncaught ReferenceError: jQuery is not defined / semantic.min.js:11 Okay, it's just a unknown reference, but that pointed me to the missing jQuery package. After googling for it I've found a npm package npm-install-missing (best result for my predicament) and try it out in my project folder - nothing happend, because there is no package.json dependency-file. So I went deeper in my given project structure by npm "project folder\node_modules\semantic_ui and ran it again. The result was a complete package update of every package in the node_modules -folder with enclosed jQuery package and some more: github, gulp-concat-filenames, gulp-debug, gulp-git, gulp-json-editor, gulp-prompt, gulp-tap, merge-stream, mkdirp and wrench . So 11 packages were missed due to the dependencies of Semantic UI. But the jQuery ReferenceError is still available. When you try google chromes developer tools on the official semantic-ui.com/ , which is build with it's own framework, so you couldn't get any errors, although they put the semantic.min.js file in the same default directory-structure dist/semantic.min.js . Okay, my path has just one directory in front of: semantic/dist/semantic.min.js - but this is, how it's done in the official documentation. Hopefully someone can help me to get this framework fully alive. :) Thanks, Robert
Now provide the response and nothing else.
|
While jQuery is required by Semantic UI, it's not a npm requirement. To clarify, jQuery is a client-side JavaScript Library. Using it requires you to include its .js file on your webpages inside a <script> tag. You can download it from the official website or use a CDN . The jquery npm package is related, but in no way the same thing. This package is used when you want to build your own jQuery file (i.e. when you want to make some changes or have some specific requirements) - you usually don't want to do this. In short, if gulp build worked for you, then you're all set - the only two files you need are semantic.css and semantic.js . Make sure jQuery (found on jquery.com, not the one installed using npm) is also included in your web pages, right before semantic.js. So your "base" HTML file should look something like this (assuming the generated semantic.css and semantic.js are in the dist folder): <!doctype html><html><head> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0"> <title>Title</title> <link rel="stylesheet" type="text/css" href="dist/semantic.css"></head><body> Body goes here <script src="https://code.jquery.com/jquery-2.1.4.min.js"></script> <script src="dist/semantic.js"></script></body></html>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32577607', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1542878/']}
|
jdg_373156
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
798640
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
It seems to me that there is a clear, intuitive reason for extending the real number system to the complex number system. Namely, some polynomial equations that have no solutions in $\mathbb{R}$ become soluble in $\mathbb{C}$. When we do this, we lose almost none of the nice algebraic properties of the reals, and pick up some nice new ones along the way (e.g. fundamental theorem of algebra). However, I cannot see such an intuitive reason for similarly extending $\mathbb{C}$ to $\mathbb{H}$, other than "because we can." In so doing, we actually lose the important property of commutative multiplication. So precisely what problems do we solve by moving from $\mathbb{C}$ to $\mathbb{H}$? I am of course aware of the quaternions' numerous applications to 3D geometry; what I am really interested in here are the analytic properties they provide. Does quaternionic analysis offer results comparable to those of complex analysis?
Now provide the response and nothing else.
|
I assume that you're referring solely to solving equations (rather than applications of quaternions, like modelling 3D vectors), in which case there is not really much point in extending $\mathbb{C}$ to $\mathbb{H}$; in terms of solving polynomial equations, it's actually superfluous to solve in $\mathbb{H}$. e.g. $w^2=-1$ has two solutions in $\mathbb{C}$, but infinitely many in $\mathbb{H}$. Also, some (non-polynomial) equations cannot be solved in $\mathbb{C},$ but can be solved in $\mathbb{H}$- take, for example, $(xi-ix)^2=-1.$ In $\mathbb{C}$, $xi=ix$, so you get $0=-1$, which clearly has no solution, but in $\mathbb{H},$ using the fact that $xi \neq ix,$ we can, indeed, solve this. So, whilst you imply a lack of commutativity is a bad thing, we see that, in this case, it's necessary in order to solve this equation. Another example, regarding associativity this time is: $(xi)y-x(iy)=1$. Note that, in $\mathbb{H}$ (and any subsets therof), there are no solutions to this equation; we end up with $0=1,$ which has no solutions.Now, in the octonions, $\mathbb{O}$, this equation does have a solution, since $(xi)y \neq x(iy)$ (due to lack of associativity in the octonions). So, to conclude, purely with regard to equation solving, extending $\mathbb{C}$ to higher dimensions ($\mathbb{H, O, S, }$ etc.) allows us to solve previously-unsolvable equations (for example, by exploiting a lack of commutativity and/or lack of associativity).It also lets us find even more equations to polynomials (although why anyone would want so many solutions is beyond me).
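For completeness, here is one explicit solution of $(xi-ix)^2=-1$ in $\mathbb{H}$, namely $x=\tfrac{1}{2}j$: using $ij=k$, $ji=-k$ and $k^2=-1$, $$xi-ix=\tfrac{1}{2}(ji-ij)=\tfrac{1}{2}(-k-k)=-k,\qquad (xi-ix)^2=(-k)^2=k^2=-1.$$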
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/798640', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/80762/']}
|
jdg_373157
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
471450
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
X10 is a protocol for communication among electronic devices used for home automation where the signals involve brief radio frequency (RF) bursts over a home's A/C power lines. Household wiring often has two different power circuits; each of the two 220V offers a single phase circuit to neutral. This means if the X10 controller is on one of those 110V circuits, the signals are not passed to the other of those circuits. The solution to this is to bridge the two 110V circuits so the RF signals may pass between them. This question is about what the specs for the capacitor should be if we wanted it to be very reliable over time. I bought and installed a capacitor like this (for a dollar) and it worked for probably five or more years: But then it failed, and looked like this when I found it: Capacitors, when operating properly, do not conduct [significant] current across their leads, but this season (putting up Christmas lights), the signals were not passing to all outlets, and when I looked in the breaker box, the capacitor had dropped away and fallen to the bottom. The leads were gone (apparently melted). So the dielectric apparently broke down over time and the capacitor failed. This got me to thinking, would there be a higher quality / higher spec capacitor that would be less likely to fail? If it meant I could count on it when I needed it, for decades to come, it would be worth spending more on it. But I don't know what to buy. I suppose I could install more than one, then if one failed, I'd have another. But I'm not going to be pulling the cover off of the breaker box to see if one or the other blew. Obviously not a critical problem, but just curious if there's a solution where a little more investment would mean no problems in the future. Follow-up Appended Edit: I ordered another one from the same place as I got the one the that had earlier failed:
Now provide the response and nothing else.
|
I like X-10, use it myself, and have put it into several friends' homes. What has worked for me for decades is a 0.22 uF plastic box capacitor, rated X, X1, or X2. This is a film capacitor similar to the one pictured, but rated for continuous connection across AC lines such as Line-to-Line (240 Vac). Most are rated for something like 275 Vac. This type of capacitor is called "self-healing". If an internal short happens, the transient current blows away the shorting material, often such that the part continues to have enough capacitance to function normally. https://www.ecicaps.com/tech-tools/technical-papers/self-healing-affect-metallized-capacitors/ UPDATE: Seeing the added photos of the old cap with the "104" marking (0.1 uF) jogged a memory. Long ago I knew that was the recommended value, and used it for a while. But as I expanded the X-10 stuff around the house, some controllers could not "reach" some devices, so I increased the value to 0.22 uF and that solved it. That is why I recommend 0.22 uF over 0.1 uF. If you want to compromise, 0.15 uF also will work.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/471450', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/171992/']}
|
jdg_373158
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43502332
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Spring Boot 1.4.1 which includes spring-web-4.3.3. I have a class annotated with @ControllerAdvice and methods annotated with @ExceptionHandler to handle exceptions thrown by the service code. When handling these exceptions, I would like to log the @RequestBody that was part of the request for PUT and POST operations so I can see the request body that caused the problem which in my case is crucial for diagnosis. Per Spring Docs the method signature for @ExceptionHandler methods can include various things including the HttpServletRequest . The request body can normally be obtained from here via getInputStream() or getReader() , but if my controller methods parse the request body like "@RequestBody Foo fooBody" as all of mine do, the HttpServletRequest's input stream or reader is already closed by the time my exception handler method is called. Essentially the request body has already been read by Spring, similar to the issue described here . It is a common problem working with servlets that the request body can only be read once. Unfortunately @RequestBody is not one of the options available for the exception handler method, if it were then I could use that. I can add an InputStream to the exception handler method, but that ends up being the same thing as the HttpServletRequest's InputStream and so has the same issue. I also tried getting the current request with ((ServletRequestAttributes) RequestContextHolder.currentRequestAttributes()).getRequest() which is another trick for getting the current request, but this ends up being the same HttpServletRequest that Spring passes into the exception handler method and so has the same problem. I have read about a few solutions like this and this that involve inserting a custom request wrapper in the filter chain that will read the contents of the request and cache them so they can be read more than once. I don't like this solution because I don't want to interrupt the entire filter/request/response chain (and potentially introduce performance or stability problems) just to implement logging, and if I have any large requests such as uploaded documents (which I do), I don't want to cache that in memory. Besides, Spring probably has the @RequestBody cached somewhere already if I could only find it. Incidentally many solutions recommend using the ContentCachingRequestWrapper Spring class but in my experience this does not work. Aside from not being documented, looking at its source code it looks like it only caches the parameters, but not the request body. Trying to get the request body from this class always results in an empty string. So I am looking for any other options that I may have missed. thanks for reading.
Now provide the response and nothing else.
|
You can reference the request body object to a request-scoped bean. And then inject that request-scoped bean in your exception handler to retrieve the request body (or other request-context beans that you wish to reference). // @Component// @Scope("request")@ManagedBean@RequestScopepublic class RequestContext { // fields, getters, and setters for request-scoped beans}@RestController@RequestMapping("/api/v1/persons")public class PersonController { @Inject private RequestContext requestContext; @Inject private PersonService personService; @PostMapping public Person savePerson(@RequestBody Person person) throws PersonServiceException { requestContext.setRequestBody(person); return personService.save(person); }}@ControllerAdvicepublic class ExceptionMapper { @Inject private RequestContext requestContext; @ExceptionHandler(PersonServiceException.class) protected ResponseEntity<?> onPersonServiceException(PersonServiceException exception) { Object requestBody = requestContext.getRequestBody(); // ... return responseEntity; }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43502332', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1218704/']}
|
jdg_373159
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18368712
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have snprintf and it can avoid a buffer overflow, but why is there no function called snscanf ? Code: int main(){ char * src = "helloeveryone"; char buf1[5]; sscanf(src,"%s",buf1); // here is an array out of bounds} So, I think an snscanf is also needed. Why do we only have snprintf ?
Now provide the response and nothing else.
|
The controversial (and optional) Annex K to C11 adds a sscanf_s function which takes an additional argument of type rsize_t (also defined in Annex K) after the pointer argument, specifying the size of the pointed-to array. For better or worse, these functions are not widely supported. You can achieve the same results by putting the size in the conversion specifier, e.g. char out[20];sscanf(in, "%19s", out); but this is awkward and error-prone if the size of the destination object may vary at runtime (you would have to construct the conversion specifier programmatically with snprintf ). Note that the field width in the conversion specifier is the maximum number of input characters to read, and sscanf also writes a terminating null byte for %s conversions, so the field width you pass must be strictly less than the size of the destination object.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18368712', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2595939/']}
|
jdg_373160
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
5622442
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I made a .xib with a button, and referenced it with an outlet to "btnActies". When I do the following (at any given time) in the back-end C# code: btnActies.TitleLabel.Text = "This is a new label!"; When I build and run the app, the label on the button changes to "This is a new label", but then, if I touch the button, the label reverts to the 'default' text I set in the .xib file. How do I change the label on a MonoTouch UIButton and keep this from happening?
Now provide the response and nothing else.
|
When you want to set some text on a UIButton, you do not do it by altering the text of its TitleLabel property. You do it by calling its SetTitle method, passing, as the second argument, the button state for which the title will be set at runtime. Chetan Bhalara's answer is correct; here is the C#/MonoTouch equivalent: btnActies.SetTitle ("title", UIControlState.Normal); The way you are doing it right now doesn't work, because the label's text is changed internally, whenever needed, to the title set in Interface Builder (if you have set it) - in this case, when you tap the button.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5622442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/694736/']}
|
jdg_373161
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16000262
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I write T min(T& a,T& b){return a<b?a:b;} and call min(3,4) , it will yield an error. How can I implement a generic min?
Now provide the response and nothing else.
|
That's because non- const lvalue references ( T& ) cannot bind to rvalues ( 3 and 4 are rvalues, which intuitively means that they do not have an object identity). Try using lvalue references to const instead, which can bind to rvalues (after all, the min() function is not supposed to alter the state of its arguments). Besides, do not forget the template<typename T> part if you are writing a function template : template<typename T> // <== Don't forget this, if you are writing a templateT min(T const& a, T const& b)// ^^^^^ ^^^^^{ return (a < b) ? a : b;} For instance, consider this small program: #include <iostream>template<typename T> // <== Don't forget this, if you are writing a templateT min(T const& a, T const& b)// ^^^^^ ^^^^^{ return (a < b) ? a : b;}int main(){ int x = 42; int y = 1729; std::cout << min(x, y) << std::endl; // Passing lvalues, that would be OK also // with your original code. std::cout << min(42, 1729) << std::endl; // Passing rvalues, this would not be OK // with your original code (non-const // lvalue references cannot bind to rvalues)} Here is a live example . UPDATE: The above solution only allows passing values of the same type to min() , otherwise the compiler won't be able to perform type deduction (if the first and second argument have different types, what should T be?): min(3.14, 42); // Huh? What is `T` here, `double` or `int`? To force the compiler to use a specific type for T , you can specify the template argument explicitly: min<double>(3.14, 42); However, this is not a very elegant choice (the user has to type the correct template argument manually every time). Rather, you could let your function template accept two template type parameters instead of one: #include <type_traits>template<typename T, typename U>typename std::common_type<T, U>::type min(T const& a, U const& b){ return (a < b) ? a : b;} And use the std::common_type<> type trait (available since C++11) to figure out the right type to be used as the return type. Once again, here is a live example .
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16000262', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2248425/']}
|
jdg_373162
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2123298
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have been trying to evaluate the series $$\, _4F_3\left(1,1,1,\frac{3}{2};\frac{5}{2},\frac{5}{2},\frac{5}{2};1\right) = 1.133928715547935...$$ using integration techniques, and I was wondering if there is any simple way of finding a closed-form evaluation of this hypergeometric series. What is a closed-form expression for the above series?
Now provide the response and nothing else.
|
A complete answer now . If we exploit the identities$$\frac{4^n}{(2n+1)\binom{2n}{n}}=\int_{0}^{\pi/2}\sin(x)^{2n+1}\,dx \tag{1}$$$$\frac{\arcsin(x)}{\sqrt{1-x^2}}=\frac{1}{2}\sum_{n\geq 1}\frac{4^n x^{2n-1}}{n\binom{2n}{n}},\qquad \arcsin^2(x)=\frac{1}{2}\sum_{n\geq 1}\frac{(4x^2)^n}{n^2\binom{2n}{n}}\tag{2}$$we get:$$(\pi-2)=\int_{0}^{\pi/2}\theta^2\sin(\theta)\,d\theta = \frac{1}{2}\sum_{n\geq 1}\frac{16^n}{(2n+1)n^2 \binom{2n}{n}^2}=\frac{1}{2}\sum_{n\geq 0}\frac{16^n}{(2n+3)(2n+1)^2\binom{2n}{n}^2} $$and in a similar way:$$\begin{eqnarray*}\frac{7\pi}{9}-\frac{40}{27}=\int_{0}^{\pi/2}\theta^2\sin^3(\theta)\,d\theta=\frac{1}{2}\sum_{n\geq 1}\frac{4^n 4^{n+1}}{n^2 (2n+3)\binom{2n}{n}\binom{2n+2}{n+1}}\end{eqnarray*}$$If we integrate $\arcsin^2(x)$ and exploit $(1)$, we get:$$ \sum_{n\geq 1}\frac{16^n}{(2n+1)^2 n^2 \binom{2n}{n}^2} = 4(\pi-3) $$and maybe it is enough to integrate $\arcsin^2(x)$ once more to get a closed expression for the series of interest:$$ \sum_{n\geq 0}\frac{16^n}{(2n+3)^3(2n+1)^2\binom{2n}{n}^2}. $$In such a case it appears a dependence on a dilogarithm, arising from the primitive of $\frac{\arcsin x}{x}\sqrt{1-x^2}$. At the moment I do not know if that is manageable or not, I have to carry out further experiments. Probably a logarithm appears from $\int_{0}^{\pi/2}\theta\cot(\theta)\,d\theta=\frac{\pi}{2}\log(2).$ Now that the path to the answer is a bit more clear, let us put $(1)$ and $(2)$ in a slightly more convenient way:$$ \int_{0}^{\pi/2}\sin(x)^{2n+3}\,dx = \frac{4^{n}(2n+2)}{(2n+3)(2n+1)\binom{2n}{n}}\tag{1bis}$$$$\arcsin^2(x)=\frac{1}{2}\sum_{n\geq 0}\frac{4^{n+1} x^{2n+2}}{(2n+2)(2n+1)\binom{2n}{n}}\tag{2bis}$$If we integrate both sides of $(2\text{bis})$ we get:$$ -2x+2\sqrt{1-x^2}\arcsin(x)+x\arcsin^2(x) = \frac{1}{2}\sum_{n\geq 0}\frac{4^{n+1} x^{2n+3}}{(2n+3)(2n+2)(2n+1)\binom{2n}{n}}\tag{3}$$We just have to gain an extra $\frac{1}{(2n+3)}$ factor. For such a purpose, we divide both sides of $(3)$ by $x$ and perform termwise integration again, leading to:$$ -4x+2\sqrt{1-x^2}\arcsin(x)+x\arcsin^2(x)+2\int_{0}^{\arcsin(x)}\frac{u\cos^2(u)}{\sin(u)}\,du\\= \frac{1}{2}\sum_{n\geq 0}\frac{4^{n+1} x^{2n+3}}{(2n+3)^2(2n+2)(2n+1)\binom{2n}{n}}\tag{4}$$Now we evaluate both sides of $(4)$ at $x=\sin\theta$ and exploit $(1\text{bis})$ to perform $\int_{0}^{\pi/2}(\ldots)\, d\theta$. That leads to:$$ \sum_{n\geq 0}\frac{16^n}{(2n+3)^3(2n+1)^2\binom{2n}{n}^2}=(\pi-4)+\int_{0}^{\pi/2}\int_{0}^{\theta}\frac{u\cos^2(u)}{\sin(u)}\,du\,d\theta\tag{5} $$and we may start buying beers, since the last integral boils down to $\int_{0}^{\pi/2}\int_{0}^{\theta}\frac{u}{\sin u}\,du\,d\theta$, that is well-known. We get:$$\boxed{\begin{eqnarray*}\phantom{}_4F_3\left(1,1,1,\frac{3}{2};\frac{5}{2},\frac{5}{2},\frac{5}{2};1\right)&=&27\sum_{n\geq 0}\frac{16^n}{(2n+3)^3 (2n+1)^2 \binom{2n}{n}^2}\\&=&\color{red}{\frac{27}{2}\left(7\,\zeta(3)+(3-2K)\,\pi-12\right)}\end{eqnarray*}}\tag{6}$$where $K$ is Catalan's constant. Please, do not ask me to do the same for other values of $\phantom{}_4 F_3$. However, this instantly goes in my best of collection. Addendum (15/08/2017) This result, together with another interesting identity relating $\phantom{}_4 F_3$ and $\text{Li}_2$, is going to appear on Bollettino UMI . You may have a glance at it on Arxiv .
|
{}
|
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/2123298', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/320063/']}
|
jdg_373163
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1731267
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I have problems in seeing what exactly is the all point of first category and second category sets. Finally, I've found a reference (Bredon's "Topology and Geometry") that introduces the topic in a way that is sort of comprehensible. Still, there is a piece in that reference that I do not get: "Often, one is interested in a condition on points of a space that is satisfied by an open dense set of points. For example, if $p (x_1, \dots, x_n)$ is a polynomial function on $\mathbb{R}^n$, then the condition $p (x) \neq 0$ has this property, and a special case of that is the determinant function on square matrices." (Bredon - "Topology and Geometry" p.57) I have no clue why this is the case. Is there somebody who can enlighten me? As always, thank you for your time.
Now provide the response and nothing else.
|
The author certainly means that this holds whenever $p$ is a non-zero polynomial. The set of points $x \in \mathbb{R}^n$ such that $p(x) \neq 0$ is open because its complement is the preimage of the closed singleton $\{0\}$ by the continuous map $p$, which is therefore closed. It is dense because a polynomial is analytic, and an analytic function that is zero on some neighborhood is zero everywhere on the connected component of that neighborhood.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1731267', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/53464/']}
|
jdg_373164
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
28372597
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Python's curve_fit calculates the best-fit parameters for a function with a single independent variable, but is there a way, using curve_fit or something else, to fit for a function with multiple independent variables? For example: def func(x, y, a, b, c): return log(a) + b*log(x) + c*log(y) where x and y are the independent variables and we would like to fit for a, b, and c.
Now provide the response and nothing else.
|
You can pass curve_fit a multi-dimensional array for the independent variables, but then your func must accept the same thing. For example, calling this array X and unpacking it to x , y for clarity: import numpy as npfrom scipy.optimize import curve_fitdef func(X, a, b, c): x,y = X return np.log(a) + b*np.log(x) + c*np.log(y)# some artificially noisy data to fitx = np.linspace(0.1,1.1,101)y = np.linspace(1.,2., 101)a, b, c = 10., 4., 6.z = func((x,y), a, b, c) * 1 + np.random.random(101) / 100# initial guesses for a,b,c:p0 = 8., 2., 7.print(curve_fit(func, (x,y), z, p0)) Gives the fit: (array([ 9.99933937, 3.99710083, 6.00875164]), array([[ 1.75295644e-03, 9.34724308e-05, -2.90150983e-04], [ 9.34724308e-05, 5.09079478e-06, -1.53939905e-05], [ -2.90150983e-04, -1.53939905e-05, 4.84935731e-05]]))
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/28372597', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1803782/']}
|
jdg_373165
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15850417
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I can't find a clear answer in Apple's documentation regarding Cocoa Autolayout about the difference between content hugging and compression resistance. Can somebody explain their usages and the difference between them?
Now provide the response and nothing else.
|
A quick summary of the concepts: Hugging => content does not want to grow Compression Resistance => content does not want to shrink Example: Say you've got a button like this: [ Click Me ] and you've pinned the edges to a larger superview with priority 500. Then, if Hugging priority > 500 it'll look like this: [Click Me] If Hugging priority < 500 it'll look like this: [ Click Me ] If the superview now shrinks then, if the Compression Resistance priority > 500, it'll look like this [Click Me] Else if Compression Resistance priority < 500, it could look like this: [Cli..] If it doesn't work like this then you've probably got some other constraints going on that are messing up your good work! E.g. you could have it pinned to the superview with priority 1000.Or you could have a width priority. If so, this can be helpful: Editor > Size to Fit Content
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/15850417', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115939/']}
|
jdg_373166
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
15682590
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a Django (1.5) template tag that I am using in a partial view to render some random content. The tag makes a query for a single record. I have no problem returning a single item but when trying to send back the full object for use or multiple items I am having trouble. Here is my tag @register.inclusion_tag('_footer.html')def get_random_testimonial(): # Grab random record record = Testimonials.objects.order_by('?')[0] return record.text I would like to be able to return both record.text and record.id to the template tag To render the tag I have this in my _footer.html view {% load current_tags %}{% get_random_testimonial %} Is there a way I can just return the record object and be able to get all of the values of that object with something like: {% get_random_testimonial.text %} {% get_random_testimonial.id %}
Now provide the response and nothing else.
|
This doesn't make sense as an inclusion tag. Especially as you seem to be using it in the same template as you have told it to render. What you need is an assignment tag : @register.assignment_tagdef get_random_testimonial(): return Testimonials.objects.order_by('?')[0] Then you can do: {% get_random_testimonial as my_testimonial %}{{ my_testimonial.text }}{{ my_testimonial.id }}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15682590', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/975478/']}
|
jdg_373167
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48814875
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I try to enter a dataframe as a function parameter in Python 3.6, I'm getting the error 'Missing 1 Required Keyword-Only Argument' for the following function, where df is a dataframe and rel_change is an array: def get_mu(*rel_change, df): row_count = len(df.index) print("mu count") print(row_count) mu_sum = 0 for i in range (0, len(rel_change)): mu_sum += rel_change[i] mu = (mu_sum) / row_count return mu Then I access it like mu = get_mu(g, df) which gives the error. I've also tried writing the dataframe access in another function that just calculates row_count, and passing that into mu, but that gives the same error. What could I be doing wrong?
Now provide the response and nothing else.
|
You have defined a function with a variable number of positional arguments, *rel_change , which can only ever be followed by keyword-only arguments. In this case, you have to pass df by name like so: mu = get_mu(g, df=df) Or redefine get_mu() such that df appears before *rel_change .
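For completeness, here is a minimal sketch of that second option (hypothetical, simply mirroring the question's own code):

def get_mu(df, *rel_change):
    # df is now an ordinary positional parameter, so no keyword is needed
    row_count = len(df.index)
    mu_sum = 0
    for value in rel_change:
        mu_sum += value
    return mu_sum / row_count

mu = get_mu(df, g)  # the dataframe comes first, then the rel_change values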
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48814875', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9005999/']}
|
jdg_373168
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4190442
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
If you create a Greasemonkey script with @include * and go to a site like youtube, it runs the script 20+ times every time you refresh. This is on Firefox, not sure about Chrome. Is there a way to prevent this?
Now provide the response and nothing else.
|
First, you probably don't want the script to run in iFrames. You can block that using the @noframes directive which now works in both Greasemonkey and Tampermonkey as of October, 2014. For older versions, or for script engines that don't support @noframes , you can use this code, just after the metadata block: if (window.top != window.self) //don't run on frames or iframes{ //Optional: GM_log ('In frame'); return;} Second, you can wait and fire your GM code, once, on page load. Wrap everything in a main() and call it on the load event, like so: window.addEventListener ("load", LocalMain, false);function LocalMain () { // Your code goes here.} Third, you can exclude sites or pages by adding // @exclude directives to the metadata block. Overall, it's best to avoid universally included GM scripts, if possible. Other methods might set flags or reload the page with URL parameters. These get tricky so save them as a last resort.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4190442', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/92259/']}
|
jdg_373169
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
23153159
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to use Chromium cookies in Python, because Chromium encrypts its cookies using AES (with CBC) I need to reverse this. I can recover the AES key from OS X's Keychain (it's stored in Base 64): security find-generic-password -w -a Chrome -s Chrome Safe Storage# From Python:python -c 'from subprocess import PIPE, Popen; print(Popen(['security', 'find-generic-password', '-w', '-a', 'Chrome', '-s', 'Chrome Safe Storage'], stdout=PIPE).stdout.read().strip())' Here's the code I have, all I'm missing is decrypting the cookies: from subprocess import PIPE, Popenfrom sqlite3 import dbapi2def get_encryption_key(): cmd = ['security', 'find-generic-password', '-w', '-a', 'Chrome', '-s', 'Chrome Safe Storage'] return Popen(cmd, stdout=PIPE).stdout.read().strip().decode('base-64')def get_cookies(database): key = get_encryption_key() with dbapi2.connect(database) as conn: conn.rollback() rows = conn.cursor().execute('SELECT name, encrypted_value FROM cookies WHERE host_key like ".example.com"') cookies = {} for name, enc_val in rows: val = decrypt(enc_val, key) # magic missing cookies[name] = val return cookies I tried a bunch of things with pyCrypto's AES module but: I have no Initialization Vector (IV) enc_val is not a multiple of 16 in length Here are some links that seem useful: the commit that started it all components/encryptor/keychain_password_mac.mm AES key generation (not used in OS X but could help someone else) cookie insertion function Can you help me figure this out?
Now provide the response and nothing else.
|
You're on the right track! I've been working on this for a few days and finally figured it out. (Many thanks to the OP for the helpful links to the Chromium source.) I've put up a post with a little more detail and a working script, but here is the basic idea: #! /usr/bin/env python3from Crypto.Cipher import AESfrom Crypto.Protocol.KDF import PBKDF2# Function to get rid of paddingdef clean(x): return x[:-x[-1]].decode('utf8')# replace with your encrypted_value from sqlite3encrypted_value = ENCRYPTED_VALUE # Trim off the 'v10' that Chrome/ium prependsencrypted_value = encrypted_value[3:]# Default values used by both Chrome and Chromium in OSX and Linuxsalt = b'saltysalt'iv = b' ' * 16length = 16# On Mac, replace MY_PASS with your password from Keychain# On Linux, replace MY_PASS with 'peanuts'my_pass = MY_PASSmy_pass = my_pass.encode('utf8')# 1003 on Mac, 1 on Linuxiterations = 1003key = PBKDF2(my_pass, salt, length, iterations)cipher = AES.new(key, AES.MODE_CBC, IV=iv)decrypted = cipher.decrypt(encrypted_value)print(clean(decrypted))
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/23153159', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2464397/']}
|
jdg_373170
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
703632
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My friend has this plan where he implies that it's impossible to lose, as long as the odds of winning are 50/50 on each bet. His idea is that basically you keep doubling your bet until you win and then start over again. So for example, you bet 1 dollar and you lose, so your net profit is now -1 dollar. Now you double your bet to 2 dollars and you lose again, so your net profit is -3 dollars. Now you double your bet to 4 dollars and you win. This means you gain 4 dollars and now your net profit is 1 dollar. So you've made a profit. Now you start again. The reasoning here being that it is highly unlikely for you to lose a 50/50 toss x number of times in a row. My counter-argument here is that basically if you go in with 50 dollars with the aim of doubling up to 100 dollars, you have the same odds of winning if you do one bet of 50 dollars or the technique outlined above. I cannot wrap my head around explaining this issue in a clear manner though, so maybe you wonderful folk at Mathematics can help! Oh, and I've pointed out that he uses the gambler's fallacy in a very obscure way, as he insists you need to go back to betting 1 dollar once you've won. This appears to be an obscure case of the gambler's fallacy to me, as it implies there is some hidden force which is changing the odds on each individual coin toss.
Now provide the response and nothing else.
|
Let's play out your specific scenario: you start with \$50 and want to stop at \$100. That means you need to play your sequence (double until you win) 50 times without busting. However, if your sequence begins with six losses, then you are busted, because you just lost $1+2+4+8+16+32=63$ dollars, and your system requires you to have \$64 left to bet. Six losses in a row happens $1/2^6=\frac{1}{64}$ of the time. Avoiding six losses in a row happens $\frac{63}{64}$ of the time, and you need to do this 50 times, so $(\frac{63}{64})^{50}\approx 0.455$. Hence, if all you need to do is avoid six losses, there is about a 45% chance of "winning", i.e. getting to \$100. However, there are other circumstances when you bust; at the beginning even five losses in a row will bust you, since those losses total $1+2+4+8+16=31$ dollars, leaving you with only \$19 when your system requires you to bet \$32. Hence until you get to \$63, you need to avoid five losses in a row. Thus, the true answer is $$\left(\frac{31}{32}\right)^{13}\left(\frac{63}{64}\right)^{37}\approx 0.370$$Thus your friend's system has a 37% chance of getting to \$100, and a 63% chance of getting to \$0. You are much better off just betting the whole \$50 in a single bet.
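If you want to convince your friend empirically, here is a small self-contained simulation (my own sketch, not part of the original answer). It plays the double-until-you-win system starting from \$50 with a \$100 target, and - matching the analysis above - counts "cannot afford the next doubled bet" as a bust; the printed estimate lands close to 0.37:

import random

def play_system(bankroll=50, goal=100):
    # Play double-until-you-win sequences until we reach the goal or bust.
    money = bankroll
    while 0 < money < goal:
        bet = 1
        while True:
            if bet > money:
                return False           # cannot place the required bet: bust
            if random.random() < 0.5:  # win the 50/50 toss
                money += bet           # net result of the whole sequence: +1 dollar
                break
            money -= bet               # lose, so double the bet and try again
            bet *= 2
    return money >= goal

trials = 100_000
wins = sum(play_system() for _ in range(trials))
print(wins / trials)                   # typically prints something close to 0.37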
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/703632', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/133872/']}
|
jdg_373171
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11005036
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In Microsoft SQL Server, it's possible to specify an "accent insensitive" collation (for a database, table or column), which means that it's possible for a query like SELECT * FROM users WHERE name LIKE 'João' to find a row with a Joao name. I know that it's possible to strip accents from strings in PostgreSQL using the unaccent_string contrib function, but I'm wondering if PostgreSQL supports these "accent insensitive" collations so the SELECT above would work.
Now provide the response and nothing else.
|
Update for Postgres 12 or later Postgres 12 adds nondeterministic ICU collations , enabling case-insensitive and accent-insensitive grouping and ordering. The manual: ICU locales can only be used if support for ICU was configured when PostgreSQL was built. If so, this works for you: CREATE COLLATION ignore_accent (provider = icu, locale = 'und-u-ks-level1-kc-true', deterministic = false);CREATE INDEX users_name_ignore_accent_idx ON users(name COLLATE ignore_accent);SELECT * FROM users WHERE name = 'João' COLLATE ignore_accent; fiddle Read the manual for details.This blog post by Laurenz Albe may help to understand. But ICU collations also have drawbacks . The manual: [...] they also have some drawbacks. Foremost, their use leads to aperformance penalty. Note, in particular, that B-tree cannot usededuplication with indexes that use a nondeterministic collation.Also, certain operations are not possible with nondeterministiccollations, such as pattern matching operations. Therefore, theyshould be used only in cases where they are specifically wanted. My "legacy" solution may still be superior: For all versions Use the unaccent module for that - which is completely different from what you are linking to. unaccent is a text search dictionary that removes accents (diacriticsigns) from lexemes. Install once per database with: CREATE EXTENSION unaccent; If you get an error like: ERROR: could not open extension control file"/usr/share/postgresql/<version>/extension/unaccent.control": No such file or directory Install the contrib package on your database server like instructed in this related answer: Error when creating unaccent extension on PostgreSQL Among other things, it provides the function unaccent() you can use with your example (where LIKE seems not needed). SELECT *FROM usersWHERE unaccent(name) = unaccent('João'); Index To use an index for that kind of query, create an index on the expression . However , Postgres only accepts IMMUTABLE functions for indexes. If a function can return a different result for the same input, the index could silently break. unaccent() only STABLE not IMMUTABLE Unfortunately, unaccent() is only STABLE , not IMMUTABLE . According to this thread on pgsql-bugs , this is due to three reasons: It depends on the behavior of a dictionary. There is no hard-wired connection to this dictionary. It therefore also depends on the current search_path , which can change easily. Some tutorials on the web instruct to just alter the function volatility to IMMUTABLE . This brute-force method can break under certain conditions. Others suggest a simple IMMUTABLE wrapper function (like I did myself in the past). There is an ongoing debate whether to make the variant with two parameters IMMUTABLE which declares the used dictionary explicitly. Read here or here . Another alternative would be this module with an IMMUTABLE unaccent() function by Musicbrainz , provided on Github. Haven't tested it myself. I think I have come up with a better idea : Best for now This approach is more efficient than other solutions floating around, and safer . Create an IMMUTABLE SQL wrapper function executing the two-parameter form with hard-wired, schema-qualified function and dictionary. Since nesting a non-immutable function would disable function inlining, base it on a copy of the C-function, (fake) declared IMMUTABLE as well. Its only purpose is to be used in the SQL function wrapper. Not meant to be used on its own. 
The sophistication is needed as there is no way to hard-wire the dictionary in the declaration of the C function. (Would require to hack the C code itself.) The SQL wrapper function does that and allows both function inlining and expression indexes. CREATE OR REPLACE FUNCTION public.immutable_unaccent(regdictionary, text) RETURNS text LANGUAGE c IMMUTABLE PARALLEL SAFE STRICT AS'$libdir/unaccent', 'unaccent_dict'; Then: CREATE OR REPLACE FUNCTION public.f_unaccent(text) RETURNS text LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT AS$func$SELECT public.immutable_unaccent(regdictionary 'public.unaccent', $1)$func$; In Postgres 14 or later, an SQL-standard function is slightly cheaper, yet: CREATE OR REPLACE FUNCTION public.f_unaccent(text) RETURNS text LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT BEGIN ATOMICSELECT public.immutable_unaccent(regdictionary 'public.unaccent', $1);END; See: What does BEGIN ATOMIC mean in a PostgreSQL SQL function / procedure? Drop PARALLEL SAFE from both functions for Postgres 9.5 or older. public being the schema where you installed the extension ( public is the default). The explicit type declaration ( regdictionary ) defends against hypothetical attacks with overloaded variants of the function by malicious users. Previously, I advocated a wrapper function based on the STABLE function unaccent() shipped with the unaccent module. That disabled function inlining . This version executes ten times faster than the simple wrapper function I had here earlier. And that was already twice as fast as the first version which added SET search_path = public, pg_temp to the function - until I discovered that the dictionary can be schema-qualified, too. Still (Postgres 12) not too obvious from documentation. If you lack the necessary privileges to create C functions, you are back to the second best implementation: An IMMUTABLE function wrapper around the STABLE unaccent() function provided by the module: CREATE OR REPLACE FUNCTION public.f_unaccent(text) RETURNS text LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT AS$func$SELECT public.unaccent('public.unaccent', $1) -- schema-qualify function and dictionary$func$; Finally, the expression index to make queries fast : CREATE INDEX users_unaccent_name_idx ON users(public.f_unaccent(name)); Remember to recreate indexes involving this function after any change to function or dictionary, like an in-place major release upgrade that would not recreate indexes. Recent major releases all had updates for the unaccent module. Adapt queries to match the index (so the query planner will use it): SELECT * FROM usersWHERE f_unaccent(name) = f_unaccent('João'); We don't need the function in the expression to the right of the operator. There we can also supply unaccented strings like 'Joao' directly. The faster function does not translate to much faster queries using the expression index . Index look-ups operate on pre-computed values and are very fast either way. But index maintenance and queries not using the index benefit. And access methods like bitmap index scans may have to recheck values in the heap (the main relation), which involves executing the underlying function. See: "Recheck Cond:" line in query plans with a bitmap index scan Security for client programs has been tightened with Postgres 10.3 / 9.6.8 etc. You need to schema-qualify function and dictionary name as demonstrated when used in any indexes. 
See: 'text search dictionary “unaccent” does not exist' entries in postgres log, supposedly during automatic analyze Ligatures In Postgres 9.5 or older ligatures like 'Œ' or 'ß' have to be expanded manually (if you need that), since unaccent() always substitutes a single letter: SELECT unaccent('Œ Æ œ æ ß');unaccent----------E A e a S You will love this update to unaccent in Postgres 9.6 : Extend contrib/unaccent 's standard unaccent.rules file to handle alldiacritics known to Unicode, and expand ligatures correctly (ThomasMunro, Léonard Benedetti) Bold emphasis mine. Now we get: SELECT unaccent('Œ Æ œ æ ß');unaccent----------OE AE oe ae ss Pattern matching For LIKE or ILIKE with arbitrary patterns, combine this with the module pg_trgm in PostgreSQL 9.1 or later. Create a trigram GIN (typically preferable) or GIST expression index. Example for GIN: CREATE INDEX users_unaccent_name_trgm_idx ON usersUSING gin (f_unaccent(name) gin_trgm_ops); Can be used for queries like: SELECT * FROM usersWHERE f_unaccent(name) LIKE ('%' || f_unaccent('João') || '%'); GIN and GIST indexes are more expensive (to maintain) than plain B-tree: Difference between GiST and GIN index There are simpler solutions for just left-anchored patterns. More about pattern matching and performance: Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL pg_trgm also provides useful operators for "similarity" ( % ) and "distance" ( <-> ) . Trigram indexes also support simple regular expressions with ~ et al. and case insensitive pattern matching with ILIKE : PostgreSQL accent + case insensitive search
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/11005036', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/31493/']}
|
jdg_373172
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4756989
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have some R scripts, where I have to load several dataframes in R as quickly as possible. This is quite important as reading the data is the slowest part of the procedure. E.g.: plotting from different dataframes. I get the data in sav (SPSS) format, but I could transform it to any format as suggested. Merging the dataframes is not an option unfortunately. What could be the fastest way to load the data? I was thinking of the following: transform from sav to a binary R object ( Rdata ) the first time, and later always load this, as it seems a lot quicker than read.spss . transform from sav to csv files and reading data from those with given parameters discussed in this topic, or is it worth setting up a MySQL backend on localhost and loading data from that? Could it be faster? If so, can I also save any custom attr values of the variables (e.g. variable.labels from Spss imported files)? Or should this be done in a separate table? Any other thoughts are welcome. Thank you for every suggestion in advance! I made a little experiment below based on the answers you have given, and also added (24/01/2011) a quite "hackish" but really speedy solution loading only a few variables/columns from a special binary file. The latter seems to be the fastest method I can imagine now, that is why I made up (05/03/2011: ver. 0.3) a small package named saves to deal with this feature. The package is under "heavy" development, any recommendation is welcome! I will soon post a vignette with accurate benchmark results with the help of the microbenchmark package.
Now provide the response and nothing else.
|
It depends on what you want to do and how you process the data further. In any case, loading from a binary R object is always going to be faster, provided you always need the same dataset. The limiting speed here is the speed of your harddrive, not R. The binary form is the internal representation of the dataframe in the workspace, so there is no transformation needed anymore. Any kind of text file is a different story, as you invariably include an overhead: each time you read in the text file, the data has to be transformed to the binary R object. I'd forget about them. They are only useful for porting datasets from one application to another. Setting up a MySQL backend is very useful if you need different parts of the data, or different subsets in different combinations. Especially when working with huge datasets, the fact that you don't have to load in the whole dataset before you can start selecting the rows/columns, can gain you quite some time. But this only works with huge datasets, as reading a binary file is quite a bit faster than searching a database. If the data is not too big, you can save different dataframes in one RData file, giving you the opportunity to streamline things a bit more. I often have a set of dataframes in a list or in a separate environment (see also ?environment for some simple examples). This allows for lapply / eapply solutions to process multiple dataframes at once.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4756989', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/564164/']}
|
jdg_373173
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
221418
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it true that any square matrix $A $ can be factorized as a product of only triangular matrices? That is, can we write $A $ as $\prod_{i=1}^k B_i$, where every $B_i$ is either a lower or an upper triangular matrix (for some natural $ k $)? Note $ k $ is not assumed to be 2 above. So the question is not (directly) about the $LU$ decomposition. Comment : We know that for any square matrix $A$ we have: $A=PLU$ where $P$ is a permutation matrix, $L$ is a lower triangular matrix and $U$ is an upper triangular matrix. So the question possibly boils down to whether any permutation matrix can be factorized to triangular matrices.
Now provide the response and nothing else.
|
The answer to your question is yes. A permutation matrix is in fact a product of permutation matrices associated to transpositions, that is, matrices obtained from the identity matrix by interchanging two rows. Reduce the problem to this case and notice that this case can be easily deduced from the case of the $2\times 2$ matrix $$\left(\begin{array}{cc}0 & 1 \\ 1 & 0\end{array}\right).$$ This matrix can be transformed into the identity matrix by using the following elementary transformations (which correspond to triangular matrices): add the second row to the first, subtract the first column from the second, subtract the first row from the second, and last multiply the second row by $-1$. Moreover, in the paper of Nagarajan et al., Products of three triangular matrices , Linear Algebra and its Applications, 292 (1999), 61-71, it is proved that any $n\times n$ matrix over a field is a product of at most three triangular matrices.
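Undoing those four elementary operations gives an explicit factorization of the swap matrix into triangular factors (my own worked check of the argument above); merging the two lower-triangular factors in the middle even exhibits it as a product of three, matching the bound of Nagarajan et al.:
$$\left(\begin{array}{cc}0 & 1 \\ 1 & 0\end{array}\right)=\left(\begin{array}{cc}1 & -1 \\ 0 & 1\end{array}\right)\left(\begin{array}{cc}1 & 0 \\ 1 & 1\end{array}\right)\left(\begin{array}{cc}1 & 0 \\ 0 & -1\end{array}\right)\left(\begin{array}{cc}1 & 1 \\ 0 & 1\end{array}\right)=\left(\begin{array}{cc}1 & -1 \\ 0 & 1\end{array}\right)\left(\begin{array}{cc}1 & 0 \\ 1 & -1\end{array}\right)\left(\begin{array}{cc}1 & 1 \\ 0 & 1\end{array}\right).$$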
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/221418', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/24541/']}
|
jdg_373174
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
21653085
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to remove all of the non-numeric characters from an NSString , but I also need to keep the spaces. Here is what I have been using. NSString *strippedBbox = [_bbox stringByReplacingOccurrencesOfString:@"[^0-9]" withString:@"" options:NSRegularExpressionSearch range:NSMakeRange(0, [_bbox length])]; If I give it a NSString of Test 333 9599 999 It will return 3339599999 but I need to keep the spaces in. How can I do this?
Now provide the response and nothing else.
|
Easily done by creating a character set of characters you want to keep and using invertedSet to create an "all others" set. Then split the string into an array separated by any characters in this inverted set and reassemble the string again. Sounds complicated but very simple to implement: NSCharacterSet *setToKeep = [NSCharacterSet characterSetWithCharactersInString:@"0123456789 "];NSCharacterSet *setToRemove = [setToKeep invertedSet];NSString *newString = [[someString componentsSeparatedByCharactersInSet:setToRemove] componentsJoinedByString:@""]; result: 333 9599 999
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21653085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_373175
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
205489
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is it possible for a computer to decide if a given real algebraic (or semi-algebraic) variety is a differential manifold ? Let $f_1,…,f_p, g_1,…,g_q$ polynomials in $n$ variables with coefficients in $\mathbb{R}$. Let $V$ be the variety defined by $f_1=f_2=…=f_p=0$ and $g_1>0,…,g_q>0$. Is it possible to compute if $V$ is a differential manifold ? (the topology on $V$ is the topology induced by the distance in $\mathbb{R}^n$)
Now provide the response and nothing else.
|
Yes it is. For algebraic varieties, you need only consider the implicit function theorem which tells you when the level sets are regular submanifolds . Say you have $f_i : X \to \mathbb{R}$ and you consider the set of $x \in X$ such that $f_i(x) = 0$ for all $1 \le i \le n.$ The set is a regular submanifold if the map $F : X \to \mathbb{R}^n$ given by $x\mapsto (f_1(x),\ldots,f_n(x))$ has zero as a regular value. As an example, consider $f_i : \mathbb{R}^3 \to \mathbb{R}$ given by $f_1(x,y,z) = x^2 + y^2 + z^2 -1$ and $f_2(x,y,z) = z.$ We are interested in the variety given by $(x,y,z) \in \mathbb{R}^3$ such that $f_1(x,y,z) = f_2(x,y,z)=0.$ This is actually the intersection of the unit sphere with the $xy$-plane and so is a circle in the $xy$-plane. To verify this, let $F:\mathbb{R}^3 \to \mathbb{R}^2$ be given by $F(x,y,z) := (f_1(x,y,z),f_2(x,y,z)).$ To verify that $(0,0) \in \mathbb{R}^2$ is a regular value, we consider the Jacobian matrix: $$ J_F = \left[\begin{array}{ccc} 2x & 2y & 2z \\ 0 & 0 & 1 \end{array}\right] .$$ The critical points are given by $x=y=0$ and so the critical values are of the form $(z^2-1,z),$ none of which equals $(0,0).$ It follows that $(0,0)$ is a regular value of $F$ and so $F^{-1}(0,0)$ is a regular submanifold. In the case of semi-algebraic sets, we have to use the idea of transversality and, in particular, Thom's Transversality Theorem . Transversality generalises the idea of critical points/values. Instead of a mapping having a point in the image as a regular value, we talk about a mapping being transverse to a submanifold in the image. If you had some functions $g_i : X \to \mathbb{R}$ and you wanted the set of $x \in X$ such that $g_i(x) > 0$ for all $1 \le i \le n$ then you first consider the map $G : X \to \mathbb{R}^n$ given by $x \mapsto (g_1(x),\ldots,g_n(x)).$ Then you need to prove that $G$ is transverse to the set $S := \{(y_1,\ldots,y_n) \in \mathbb{R}^n : y_i > 0\}.$ If $G$ is transverse to $S$ then $G^{-1}(S)$ will be a regular submanifold of $X$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/205489', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/16467/']}
|
jdg_373176
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
43031
|
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
After testing some cases, it appears to be true that if you create partitions in a dataset, average the data in the partitions, then average the averages, you get the same result as if you averaged the entire dataset. For instance: avg(1,3,5,8,4,2) = 3.8333avg(1,3,5) = 3avg(8,4,2) = 4.6667avg(3, 4.6667) = 3.8333 I'd like to be able to show that this is always true. Any ideas on this? Thanks!
Now provide the response and nothing else.
|
Welcome to the field of metric learning. If you use this as a Google search query, you will get lots of material on your problem. Here is a quick idea on how you can do it. One way is to find coefficients $\alpha_m$ for each of your similarity functions, and combine them into a global similarity: $S(I_x, I_y) = \frac{1}{M} \sum_m \alpha_m S_m(I_x, I_y)$. Given the squared error, this is a linear least squares problem. One key issue with metric learning is that the targets scale quadratically with the number of samples. This might be a hindrance for some least squares procedures, and you might have to resort to a stochastic gradient based optimization technique.
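A minimal numpy sketch of that least-squares idea (the variable names are made up, and it assumes the base similarities $S_m$ and the target similarities have already been computed for every pair of samples):

import numpy as np

def fit_alpha(sims, target):
    """sims: list of M arrays of shape (N, N), one per base similarity S_m.
    target: array of shape (N, N) holding the desired similarities.
    Returns the weights alpha_m minimizing the squared error of
    S(I_x, I_y) = (1/M) * sum_m alpha_m * S_m(I_x, I_y)."""
    M = len(sims)
    A = np.column_stack([s.ravel() for s in sims]) / M  # (N*N, M) design matrix
    b = target.ravel()                                  # N*N target values
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

def combined_similarity(sims, alpha):
    # The learned global similarity S(I_x, I_y) for all pairs at once.
    return sum(a * s for a, s in zip(alpha, sims)) / len(sims)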
|
{}
|
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/43031', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/16600/']}
|
jdg_373177
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
8356336
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a program in Python with PyQt, designed to run on Windows. This program performs a lot of operations and prints a lot of info. But as I want to freeze it and don't want the prompt screen to appear, I want all that info to appear in the main application, in a QTextEdit or so. How can I make the program work so it gets the output from the interpreter and shows it on the textEdit at the same time, just like it does on the real interpreter?
Now provide the response and nothing else.
|
I assume that with "output from the interpreter", you mean output written to the console or terminal window, such as output produced with print() . All console output produced by Python gets written to the program's output streams sys.stdout (normal output) and sys.stderr (error output, such as exception tracebacks). These are file-like objects. You can replace these streams with your own file-like object. All your custom implementation must provide is a write(text) function. By providing your own implementation, you can forward all output to your widget: class MyStream(object): def write(self, text): # Add text to a QTextEdit...sys.stdout = MyStream()sys.stderr = MyStream() If you ever need to reset these streams, they are still available as sys.__stdout__ and sys.__stderr__ : sys.stdout = sys.__stdout__sys.stderr = sys.__stderr__ Update Here is some working code for PyQt4. First define a stream that reports data written to it with a Qt signal: from PyQt4 import QtCoreclass EmittingStream(QtCore.QObject): textWritten = QtCore.pyqtSignal(str) def write(self, text): self.textWritten.emit(str(text)) Now, in your GUI, install an instance of this stream to sys.stdout and connect the textWritten signal to a slot that writes the text to a QTextEdit : # Within your main window class...def __init__(self, parent=None, **kwargs): # ... # Install the custom output stream sys.stdout = EmittingStream(textWritten=self.normalOutputWritten)def __del__(self): # Restore sys.stdout sys.stdout = sys.__stdout__def normalOutputWritten(self, text): """Append text to the QTextEdit.""" # Maybe QTextEdit.append() works as well, but this is how I do it: cursor = self.textEdit.textCursor() cursor.movePosition(QtGui.QTextCursor.End) cursor.insertText(text) self.textEdit.setTextCursor(cursor) self.textEdit.ensureCursorVisible()
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8356336', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/798758/']}
|
jdg_373178
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2324490
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a function on a sequence of integers or natural numbers such that, when applied collectively to them, it gives a different result based on the order of the sequence's elements? Suppose $A$ contains $a,b,c$ and $B$ contains $b,c,a$. Then what is a function such that $F(A)=X$, $F(B)=Y$, and $X$ is not equal to $Y$? EDIT : I believe I have used the wrong terminologies and tags; what I meant was a function on, say, a sequence of integers, where if the arrangement of the elements in that sequence is changed, then the function's output for the sequence changes.
Now provide the response and nothing else.
|
Your purported equivalence can fail in both directions, if we don't add extra assumptions on $Y$. The implication $G(f)$ closed implies $f$ continuous can fail for compact $X$: Let $X = [0,1]$ in the cofinite topology; this is a compact space. Let $Y = [0,1]$ in the discrete topology. Define $f(x) =x$ from $X$ to $Y$, then $f$ is not continuous, as $f^{-1}[\{0\}] = \{0\}$ is not open in $X$, but $\{0\}$ is open in $Y$. But $G(f)$ is closed in $X \times Y$: suppose $(p,q) \notin G(f)$, then $q \neq p$ and then the set $(X\setminus\{q\}) \times \{q\}$ is an open neighbourhood of $(p,q)$ that misses $G(f)$. We can drop the compactness of $X$ and replace it by the compactness of $Y$; then the implication does hold: Suppose then that $G(f)$ is closed. Kuratowski's theorem says that $\pi_X: X \times Y \to X$ is a closed map for compact $Y$. Let $C \subseteq Y$ be closed and check that: $$f^{-1}[C] = \pi_X[(X \times C)\cap G(f)]$$ which is the image of a closed set of $X \times Y$ under $\pi_X$, so $f^{-1}[C]$ is closed for all closed $C \subseteq Y$, meaning that $f$ is continuous. The implication $f$ continuous implies $G(f)$ closed can also fail for compact $X$ (even for compact $Y$): Let $X = \{0,1\}$ in the discrete topology, $Y$ the same set in the indiscrete (trivial) topology. Again $f$ is the identity. This $f$ is continuous, but any basic open neighbourhood of $(0,1)$ contains $\{0\} \times \{0,1\}$, which intersects $G(f)$. So $(0,1) \in \overline{G(f)} \setminus G(f)$, so $G(f)$ is not closed. If we add the condition that $Y$ is Hausdorff, we don't need compactness of $X$ at all to see that $f: X \to Y$ continuous implies $G(f)$ is closed. This then always holds.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2324490', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/337938/']}
|
jdg_373179
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1305954
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have several small div s which are utilizing jQuery draggable. These div s are placed in an UpdatePanel , and on dragstop I use the _doPostBack() JavaScript function, where I extract necessary information from the page's form. My problem is that when I call this function, the whole page is re-loaded, but I only want the update panel to be re-loaded.
Now provide the response and nothing else.
|
Here is a complete solution Entire form tag of the asp.net page <form id="form1" runat="server"> <asp:LinkButton ID="LinkButton1" runat="server" /> <%-- included to force __doPostBack javascript function to be rendered --%> <input type="button" id="Button45" name="Button45" onclick="javascript:__doPostBack('ButtonA','')" value="clicking this will run ButtonA.Click Event Handler" /><br /><br /> <input type="button" id="Button46" name="Button46" onclick="javascript:__doPostBack('ButtonB','')" value="clicking this will run ButtonB.Click Event Handler" /><br /><br /> <asp:Button runat="server" ID="ButtonA" ClientIDMode="Static" Text="ButtonA" /><br /><br /> <asp:Button runat="server" ID="ButtonB" ClientIDMode="Static" Text="ButtonB" /></form> Entire Contents of the Page's Code-Behind Class Private Sub ButtonA_Click(sender As Object, e As System.EventArgs) Handles ButtonA.Click Response.Write("You ran the ButtonA click event")End SubPrivate Sub ButtonB_Click(sender As Object, e As System.EventArgs) Handles ButtonB.Click Response.Write("You ran the ButtonB click event")End Sub The LinkButton is included to ensure that the __doPostBack javascript function is rendered to the client. Simply having Button controls will not cause this __doPostBack function to be rendered. This function will be rendered by virtue of having a variety of controls on most ASP.NET pages, so an empty link button is typically not needed What's going on? Two input controls are rendered to the client: <input type="hidden" name="__EVENTTARGET" id="__EVENTTARGET" value="" /><input type="hidden" name="__EVENTARGUMENT" id="__EVENTARGUMENT" value="" /> __EVENTTARGET receives argument 1 of __doPostBack __EVENTARGUMENT receives argument 2 of __doPostBack The __doPostBack function is rendered out like this: function __doPostBack(eventTarget, eventArgument) { if (!theForm.onsubmit || (theForm.onsubmit() != false)) { theForm.__EVENTTARGET.value = eventTarget; theForm.__EVENTARGUMENT.value = eventArgument; theForm.submit(); }} As you can see, it assigns the values to the hidden inputs. When the form submits / postback occurs: If you provided the UniqueID of the Server-Control Button whose button-click-handler you want to run ( javascript:__doPostBack('ButtonB','') , then the button click handler for that button will be run. What if I don't want to run a click handler, but want to do something else instead? You can pass whatever you want as arguments to __doPostBack You can then analyze the hidden input values and run specific code accordingly: If Request.Form("__EVENTTARGET") = "DoSomethingElse" Then Response.Write("Do Something else") End If Other Notes What if I don't know the ID of the control whose click handler I want to run? If it is not acceptable to set ClientIDMode="Static" , then you can do something like this: __doPostBack('<%= myclientid.UniqueID %>', '') . Or: __doPostBack('<%= MYBUTTON.UniqueID %>','') This will inject the unique id of the control into the javascript, should you wish it
|
{}
|
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/1305954', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/142122/']}
|
jdg_373180
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
41446833
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I found out a curious thing today and was wondering if somebody could shed some light on what the difference is here?

import numpy as np

A = np.arange(12).reshape(4,3)
for a in A:
    a = a + 1

B = np.arange(12).reshape(4,3)
for b in B:
    b += 1

After running each for loop, A has not changed, but B has had one added to each element. I actually use the B version to write to an initialized NumPy array within a for loop.
Now provide the response and nothing else.
|
The difference is that one modifies the data-structure itself (in-place operation) b += 1 while the other just reassigns the variable a = a + 1 . Just for completeness: x += y is not always doing an in-place operation, there are (at least) three exceptions: If x doesn't implement an __iadd__ method then the x += y statement is just a shorthand for x = x + y . This would be the case if x was something like an int . If __iadd__ returns NotImplemented , Python falls back to x = x + y . The __iadd__ method could theoretically be implemented to not work in place. It'd be really weird to do that, though. As it happens your b s are numpy.ndarray s which implements __iadd__ and return itself so your second loop modifies the original array in-place. You can read more on this in the Python documentation of "Emulating Numeric Types" . These [ __i*__ ] methods are called to implement the augmented arithmetic assignments ( += , -= , *= , @= , /= , //= , %= , **= , <<= , >>= , &= , ^= , |= ). These methods should attempt to do the operation in-place (modifying self) and return the result (which could be, but does not have to be, self). If a specific method is not defined, the augmented assignment falls back to the normal methods. For instance, if x is an instance of a class with an __iadd__() method, x += y is equivalent to x = x.__iadd__(y) . Otherwise, x.__add__(y) and y.__radd__(x) are considered, as with the evaluation of x + y . In certain situations, augmented assignment can result in unexpected errors (see Why does a_tuple[i] += ["item"] raise an exception when the addition works? ), but this behavior is in fact part of the data model.
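To see the difference in action, here is a minimal runnable sketch (NumPy assumed installed; the array names simply mirror the question):

import numpy as np

A = np.arange(12).reshape(4, 3)
for a in A:
    # 'a' is a view (one row) of A, but this line rebinds the *name* 'a'
    # to a brand-new array; A itself is never touched.
    a = a + 1

B = np.arange(12).reshape(4, 3)
for b in B:
    # ndarray.__iadd__ mutates the row view in place,
    # so the change is visible through B.
    b += 1

print(A)  # unchanged: 0..11
print(B)  # every element incremented by 1

# If you want the "a = a + 1" style to write back, assign into the
# view's contents instead of rebinding the loop variable:
for a in A:
    a[:] = a + 1  # slice assignment writes through the view

The last loop shows the usual idiom when you do want to keep the "compute then assign" style but still modify the original array.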
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/41446833', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2758652/']}
|
jdg_373181
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35036653
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Why doesn't this Golang code to select among multiple time.After channels work? See the code below. The 'timeout' message is never issued. Why?

package main

import (
    "fmt"
    "time"
)

func main() {
    count := 0
    for {
        select {
        case <-time.After(1 * time.Second):
            count++
            fmt.Printf("tick %d\n", count)
            if count >= 5 {
                fmt.Printf("ugh\n")
                return
            }
        case <-time.After(3 * time.Second):
            fmt.Printf("timeout\n")
            return
        }
    }
}

Run it on Playground: http://play.golang.org/p/1gku-CWVAh

Output:

tick 1
tick 2
tick 3
tick 4
tick 5
ugh
Now provide the response and nothing else.
|
Because time.After is a function, it returns a new channel on every iteration. If you want this channel to be the same for all iterations, you should save it before the loop:

timeout := time.After(3 * time.Second)
for {
    select {
    // ...
    case <-timeout:
        fmt.Printf("timeout\n")
        return
    }
}

Playground: http://play.golang.org/p/muWLgTxpNf
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35036653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1011695/']}
|
jdg_373182
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50907437
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Getting Below exception , when i tried to perform unit tests for my spark streaming code on SBT windows using scalatest. sbt testOnly <<ClassName>> * * * * * * 2018-06-18 02:39:00 ERROR Executor:91 - Exception in task 1.0 in stage 3.0 (TID 11) java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream.(Ljava/io/InputStream;Z)V at org.apache.spark.io.LZ4CompressionCodec.compressedInputStream(CompressionCodec.scala:122) at org.apache.spark.serializer.SerializerManager.wrapForCompression(SerializerManager.scala:163) at org.apache.spark.serializer.SerializerManager.wrapStream(SerializerManager.scala:124) at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50) at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50) at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:417) at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:61) at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435) at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.sort_addToSorter$(Unknown Source) at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614) at org.apache.spark.sql.execution.GroupedIterator$.apply(GroupedIterator.scala:29) at org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec$StateStoreUpdater.updateStateForKeysWithData(FlatMapGroupsWithStateExec.scala:176)** Tried couple of things to exclude net.jpountz.lz4 jar( with suggestions from other posts) but again same error in output. Currently using spark 2.3 , scalatest 3.0.5, Scala 2.11 version . i see this issue only after upgrade to spark 2.3 and scalatest 3.0.5 Any suggestions ?
Now provide the response and nothing else.
|
Kafka has a conflicting dependency with Spark, and that's what caused this issue for me. This is how you can exclude the dependency in your sbt file:

lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

lazy val kafkaClients = "org.apache.kafka" % "kafka-clients" % userKafkaVersionHere excludeAll(excludeJpountz) // add more exclusions here

When you use this kafkaClients dependency, it will now exclude the problematic lz4 library.

Update: This appears to be an issue with Kafka 0.11.x.x and earlier versions. As of 1.x.x, Kafka seems to have moved away from using the problematic net.jpountz.lz4 library. Therefore, using the latest Kafka (1.x) with the latest Spark (2.3.x) should not have this issue.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/50907437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6760910/']}
|
jdg_373183
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
371011
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am confused about my teachers notation.I have a set of problems about "gravitational potential $\Phi$." But is $\Phi$ normally written as $U$ is a lot of caseses? Is there a difference? Rant or My Understanding I think that he wants an equation for the field rather than a given point but I am confused on really how to do that. My brain will not grasp the idea of finding the potential everywhere in these problems even though I know what that means. Examples Note: I am not asking for solutions I am posting these for insight on what my teacher means. ex.1) A thin uniform disk of radius $a$ and mass $M$ lies in the $(x,y)$ plane centered on the origin. Find an integral expression for the gravitational potential $\Phi (x,y)$ for a general point in the (x,z) plane. ex.2) Find the gravitational force at a point a distance, $D$,from the base of a homogeneous cone of length $L$ base radius $R$, and mass $M$. The point is along the $z$ axis of the cone. Conclusion Any insight is helpful. I may not be asking my question properly in which case leave a comment and I can revise this accordingly. I find that I am stuck at writing the integral on these problem; even though I think I have all the parts to go into the integral. I am confused on what I am integrating. Is there an equation that looks like this:$$\Phi (r)=\int F(r)dr\quad\text{?}$$
Now provide the response and nothing else.
|
I'm going to assume you (and your prof or teacher) are using standard notation. In that case, $U$ denotes the gravitational potential energy of a configuration of two (or more) objects interacting gravitationally. $\Phi$ denotes the gravitational potential of one object. The difference between them is that $U$ requires (at least) two objects in order to be defined, while $\Phi$ is the potential of one object. If we are working in the two-body case, with one (usually more massive) body of mass $M$ and one (usually smaller, test) body of mass $m$, then once I have the gravitational potential $\Phi=-GM/r$, I can easily obtain the gravitational potential energy of the two bodies together by $U=m\Phi = -GMm/r$. $U$ and $\Phi$ are related, but they are definitely NOT the same thing! You can tell just by the units: $U$ has units of energy whereas $\Phi$ has units of energy/mass. Generally you obtain the gravitational potential $\Phi$ by breaking the gravitating object down into small (differential) chunks, and then adding up each chunk's contribution to the overall gravitational potential. So the general method of finding $\Phi$ usually starts with an integral like (treating each $dm$ as a point source): $$\Phi(\vec{r}) = -\int \frac{G\,dm}{|\vec{r}-\vec{r}\,'|},$$ and then expressing $dm$ as the product of a density function $\rho(\vec{r}\,')$ and a small volume element $dV$, and taking the volume integral over the gravitating object. The gravitational field is then the vector field $\vec{g}=-\nabla\Phi$, and the force on a test mass $m$ is $\vec{F} = -m\nabla \Phi$.
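As a concrete illustration of that recipe applied to the first example in the question (the uniform thin disk of radius $a$ and mass $M$), here is a sketch of the integral; the polar variables $s$ and $\varphi$ on the disk are my own labels, and the field point is taken at $(x,0,z)$ in the $(x,z)$ plane: $$\sigma=\frac{M}{\pi a^{2}},\qquad dm=\sigma\,s\,ds\,d\varphi,$$ $$\Phi(x,z)=-G\sigma\int_{0}^{2\pi}\!\int_{0}^{a}\frac{s\,ds\,d\varphi}{\sqrt{x^{2}+z^{2}+s^{2}-2xs\cos\varphi}}.$$ Evaluating this in closed form is messy, but the structure is exactly the method described above: break the disk into chunks $dm$, divide each by its distance to the field point, and integrate.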
|
{}
|
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/371011', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/123260/']}
|
jdg_373184
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
50739048
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to achieve something like this in Flutter.
Now provide the response and nothing else.
|
One way is with CustomPainter and an animation. Also look at SpriteWidget.

import 'dart:math';
import 'package:flutter/material.dart';

class SpritePainter extends CustomPainter {
  final Animation<double> _animation;

  SpritePainter(this._animation) : super(repaint: _animation);

  void circle(Canvas canvas, Rect rect, double value) {
    double opacity = (1.0 - (value / 4.0)).clamp(0.0, 1.0);
    Color color = Color.fromRGBO(0, 117, 194, opacity);
    double size = rect.width / 2;
    double area = size * size;
    double radius = sqrt(area * value / 4);
    final Paint paint = Paint()..color = color;
    canvas.drawCircle(rect.center, radius, paint);
  }

  @override
  void paint(Canvas canvas, Size size) {
    Rect rect = Rect.fromLTRB(0.0, 0.0, size.width, size.height);
    for (int wave = 3; wave >= 0; wave--) {
      circle(canvas, rect, wave + _animation.value);
    }
  }

  @override
  bool shouldRepaint(SpritePainter oldDelegate) {
    return true;
  }
}

class SpriteDemo extends StatefulWidget {
  @override
  SpriteDemoState createState() => SpriteDemoState();
}

class SpriteDemoState extends State<SpriteDemo>
    with SingleTickerProviderStateMixin {
  late final AnimationController _controller;

  @override
  void initState() {
    super.initState();
    _controller = AnimationController(
      vsync: this,
    );
    //_startAnimation();
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  void _startAnimation() {
    _controller
      ..stop()
      ..reset()
      ..repeat(period: const Duration(seconds: 1));
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('Pulse')),
      body: CustomPaint(
        painter: SpritePainter(_controller),
        child: SizedBox(
          width: 200.0,
          height: 200.0,
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _startAnimation,
        child: new Icon(Icons.play_arrow),
      ),
    );
  }
}

void main() {
  runApp(
    MaterialApp(
      home: SpriteDemo(),
    ),
  );
}
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/50739048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3700517/']}
|
jdg_373185
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4593701
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Consider the following water-pouring puzzle. Given $n$ initially empty jugs with maximum volumes $V_1, V_2,\dots ,V_n$ , and the following three actions on a jug: A jug is filled completely. A jug is emptied completely. The contents of a jug are poured into another jug until it is completely full, leaving any excess in the original jug. Is the closed form for the total number of possible configurations of $(V_{\rm jug\,1},V_{\rm jug\,2},\dots,V_{{\rm jug}\,n})$ given by $$\prod_{i=1}^{n}\left(\frac{V_i}{\gcd(V_1,\dots,V_n)}+1\right)-\prod_{i=1}^{n}\left(\frac{V_i}{\gcd(V_1,\dots,V_n)}-1\right)?$$ Example: Suppose we have two jugs with specified volumes, $V_1=4, V_2=3$ . Then the possible permutations of $(V_1, V_2)$ are $(0,0),(0,3),(3,0),(3,3),(4,0),(1,3),(1,0),(0,1),(4,1),(2,3)$ , $(2,0),(0,2),(4,2),(4,3)$ i.e. $14$ tuples, agreeing with $(4+1)(3+1)-(4-1)(3-1)=14$ , since $\gcd(4,3)=1$ . By considering the expansion of the monic polynomial $\prod\limits_{i=1}^n(\lambda-V_i)$ and substituting $\lambda=\pm1$ , we can also express this in terms of the elementary symmetric polynomial as the following $$2\sum_{k=1}^{n/2}\frac{e_{2k-1}(V_1,\dots ,V_n)}{\gcd(V_1,\dots ,V_n)^{2k-1}}\quad\text{for $n$ even and }2+2\sum_{k=1}^{(n-1)/2}\frac{e_{2k}(V_1,\dots ,V_n)}{\gcd(V_1,\dots ,V_n)^{2k}}\quad\text{for $n$ odd}, $$ for which I credit @TheSimpliFire for finding all the above general expressions. For example: $n=2$ (the $2$ jug case): the number of possible permutations for $2$ jugs of maximum volume $V_1,V_2$ is $2(V_1+V_2)/\gcd(V_1,V_2)$ ; or $n=3$ (the $3$ jug case): with maximum volumes $V_1,V_2,V_3$ , this has a total number of possible permutations of $2+2(V_1V_2+V_1V_3+V_2V_3)/\gcd(V_1,V_2,V_3)^2$ . However, one can simplify the problem by assuming WLOG that $V_1,V_2,\dots ,V_n$ are all coprime to each other and asking whether the closed form for the total number of possible combinations is given by $$\prod_{i=1}^{n} (V_i+1) - \prod_{i=1}^{n}(V_i-1)$$ since suppose for example that $\gcd(V_1,\dots ,V_n)=3$ , then we cannot make a jug have a volume that is not a multiple of $3$ , giving the same number of possible combinations as if we began with an initial step of scaling all the maximum volumes down by a factor of $3$ such that all the maximum volumes would now be coprime. Since writing this question, I have uploaded a Python script of mine, which is able to generate a table of the number of possible states for up to $4$ jugs, to GitHub and which was used to greatly assist in determining the formula in the question. I have linked it here in the hope of aiding anyone attempting to determine the number of states without the need for tedious by-hand computations. The code can likely be optimised massively, and I would be interested if anyone knows how to make it better. The context for this problem is due to a computer science coursework question of a friend of mine which asked to write a Java program using the DFS algorithm to find the number of possible states for the $V_1=8,V_2=5,V_3=3$ case, however, I wanted to think of the problem more mathematically and abstractly and out of my own interest tried to generalise it and find a general formula. Note that there are $160$ possible states with $V_1=8,V_2=5,V_3=3$ for which the above formulae agree.
Now provide the response and nothing else.
|
I had an earlier answer which had a huge error. This new answer should be completely correct. Let $g=\gcd(V_1,\dots, V_n)$ . I will use $\newcommand\per{\,\%\,}\%$ to denote the modulo operation, so $x \per V$ is the unique integer in $\{0,1,\dots,V-1\}$ which is congruent to $x\pmod V$ . A configuration is a vector $(x_1,\dots,x_n)$ , where $x_i$ is the amount of water in jug $i$ , so $0\le x_i \le V_i$ . Call a configuration valid if every jug has a multiple of $g$ units of water, and at least jug is empty or full. In order to prove that the number of reachable configurations is $\prod_{i=1}^n (V_i/g+1)-\prod_{i=1}^n (V_i/g-1)$ , it suffices that every valid configuration is reachable. Let $(y_1,\dots,y_n)$ be the valid target configuration. We can assume that each $y_i<V_i$ , because it is trivial to convert between empty and full. Without loss of generality, we assume $y_n=0$ . In this solution, jug $n$ will generally remain empty, to be available for "storage". Here are the methods my solution uses. Below, $s$ and $t$ are indicies of jugs, where $s,t\in \{1,\dots,n-1\}$ . Storage : If jug $s$ is nonzero, then storing $s$ consists of pouring from $s$ into jug $n$ until $s$ is empty, possibly emptying $n$ to make space as many times as necessary. At the end, if jug $s$ started with $x_s$ liters, then jug $n$ will have $x_s\per V_n$ liters, while $s$ will be empty. As long as jug $s$ is empty later, and jug $n$ is never touched, then storage is reversible. By reversing the pourings, you can make it so jug $s$ has $x_s$ liters again, and jug $n$ is empty again. For example, if jug $s$ has $19$ liters of water, and $V_n=4$ , then storing $s$ involves pouring from jug $s$ into jug $n$ five times, where the last pour puts $3$ liters in jug $n$ . To undo, you pour those $3$ liters from $n$ back into $s$ , then fill $n$ and dump into $s$ four times, so $s$ returns to $19$ . Addition: The addition operation takes two jugs, the source $s$ and the target $t$ . This operation only affects the target jug. If jug $t$ starts with $x_t$ units, then at the end jug $t$ will have $$(x_t + V_s)\per V_t$$ To accomplish this, there are two cases: If jug $s$ is initially empty, this is accomplished by filling jug $s$ , and then pouring from jug $s$ into jug $t$ until jug $s$ is empty, emptying jug $t$ to make room if necessary. If jug $s$ is initially non-empty, you first store jug $s$ , then perform the steps described in 1. Afterwards, reverse the storage procedure so jug $s$ returns to its initial value. Once we have the addition operation, solving the puzzle is a piece of cake. For each jug, $i$ , with target value $y_i\in \{0,1,\dots,V_i-1\}$ , such that $y_i$ is a multiple of $\gcd(V_1,\dots,V_n)$ , we know from Bézout's Lemma that we can write $$y_i = c_1 V_1+\dots+c_iV_i+\dots+c_nV_n,\tag1$$ for some integers $c_1,\dots,c_n$ . Looking at this equation modulo $V_i$ , we see $$y_i\equiv \tilde c_1 V_1+\dots+\tilde c_{i-1} V_{i-1}+\tilde c_{i+1} V_{i+1}+\dots+\tilde c_n V_n\pmod {V_i},\tag2$$ where each $\tilde c_j=c_j\per V_i$ . Solution: For each $i\in \{1,\dots,n-1\}$ , and for each $j\in \{1,\dots,n\}$ such that $j\neq i$ , perform the addition operation with source $j$ and target $i$ , a total of $\tilde c_j$ times. If we let $w_i$ be the final contents of jug $i$ after performing the claimed solution, then $(2)$ implies that $w_i\equiv y_i\pmod{V_i}$ . Since both $w_i,y_i\in \{0,\dots,V_i-1\}$ , this further implies $w_i=y_i$ , so we have reached the target state.
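For readers who want to sanity-check the counting formula without working through the proof, here is a small brute-force sketch in Python (my own illustration, not the OP's linked script; math.gcd with several arguments needs Python 3.9+). It enumerates every configuration reachable by the three allowed moves and compares the total with the closed form from the question; for $(V_1,V_2,V_3)=(8,5,3)$ both give 160.

from math import gcd, prod

def reachable_states(vols):
    start = tuple(0 for _ in vols)
    seen = {start}
    frontier = [start]
    while frontier:
        state = frontier.pop()
        candidates = []
        n = len(vols)
        for i in range(n):
            # fill jug i completely
            candidates.append(state[:i] + (vols[i],) + state[i+1:])
            # empty jug i completely
            candidates.append(state[:i] + (0,) + state[i+1:])
            # pour jug i into jug j until j is full or i is empty
            for j in range(n):
                if i == j:
                    continue
                amount = min(state[i], vols[j] - state[j])
                s = list(state)
                s[i] -= amount
                s[j] += amount
                candidates.append(tuple(s))
        for s in candidates:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return seen

def closed_form(vols):
    g = gcd(*vols)
    return prod(v // g + 1 for v in vols) - prod(v // g - 1 for v in vols)

vols = (8, 5, 3)
print(len(reachable_states(vols)), closed_form(vols))  # both print 160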
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/4593701', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/885224/']}
|
jdg_373186
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10962379
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone explain how to check whether one rotated rectangle intersects another rectangle?
Now provide the response and nothing else.
|
For each edge in both polygons, check if it can be used as a separating line. If so, you are done: no intersection. If no separation line was found, you have an intersection.

/// Checks if the two polygons are intersecting.
bool IsPolygonsIntersecting(Polygon a, Polygon b)
{
    foreach (var polygon in new[] { a, b })
    {
        for (int i1 = 0; i1 < polygon.Points.Count; i1++)
        {
            int i2 = (i1 + 1) % polygon.Points.Count;
            var p1 = polygon.Points[i1];
            var p2 = polygon.Points[i2];

            var normal = new Point(p2.Y - p1.Y, p1.X - p2.X);

            double? minA = null, maxA = null;
            foreach (var p in a.Points)
            {
                var projected = normal.X * p.X + normal.Y * p.Y;
                if (minA == null || projected < minA) minA = projected;
                if (maxA == null || projected > maxA) maxA = projected;
            }

            double? minB = null, maxB = null;
            foreach (var p in b.Points)
            {
                var projected = normal.X * p.X + normal.Y * p.Y;
                if (minB == null || projected < minB) minB = projected;
                if (maxB == null || projected > maxB) maxB = projected;
            }

            if (maxA < minB || maxB < minA)
                return false;
        }
    }
    return true;
}

For more information, see this article: 2D Polygon Collision Detection - Code Project

NB: The algorithm only works for convex polygons, specified in either clockwise or counterclockwise order.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10962379', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1219610/']}
|
jdg_373187
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
31872639
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say that my program have max 8 buttons and I should be able to switch them on/off (visible/hidden) according to my settings in configuration file.My chosen buttons should be then visible in two rows (if more than 4) or one row (cnt<=4) on the form leaving no gap between them.(I mean when 1,2,3,4 are active they would be in the first row same 1,2,5,8 is active so the 5 will take place of the button 3 and 8 will take the 4th place) <add key="butactive" value="1;3;4;8"/><add key="but1" value="START"/><add key="but2" value="END"/><add key="but3" value="PAUSE"/><add key="but4" value="RESET"/>... The program should be able to determine from "butactive" key which of those buttons will be visible and then change their Text property and add specific action which is also taken from config file. private void Form2_Load(object sender, EventArgs e) { radButton1.Text = ConfigurationManager.AppSettings["but1"]; radButton2.Text = ConfigurationManager.AppSettings["but2"]; radButton3.Text = ConfigurationManager.AppSettings["but3"]; radButton4.Text = ConfigurationManager.AppSettings["but4"]; ... radButton1.Click += getAction(ConfigurationManager.AppSettings["but1a"]); radButton2.Click += getAction(ConfigurationManager.AppSettings["but2a"]); ... } Any idea how to select needed buttons when the form loads and place them in desired positions?
Now provide the response and nothing else.
|
While Liviu Costea's answer is correct, it still took me quite some time to figure out how it is actually done. So here is my step-by-step guide starting from a new ASP.NET 4.5.2 MVC project. This guide includes client-side package management using bower but does not (yet) cover bundling/grunt/gulp. Step 1 (Create Project) Create a new ASP.NET 4.5.2 Project (MVC Template) with Visual Studio 2015. Step 2 (Remove Bundling/Optimization from Project) Step 2.1 Uninstall the following Nuget Packages: bootstrap Microsoft.jQuery.Unobstrusive.Validation jQuery.Validation jQuery Microsoft.AspNet.Web.Optimization WebGrease Antlr Modernizr Respond Step 2.2 Remove App_Start\BundleConfig.cs from project. Step 2.3 Remove using System.Web.Optimization; and BundleConfig.RegisterBundles(BundleTable.Bundles); from Global.asax.cs Step 2.4 Remove <add namespace="System.Web.Optimization"/> from Views\Web.config Step 2.5 Remove Assembly Bindings for System.Web.Optimization and WebGrease from Web.config Step 3 (Add bower to Project) Step 3.1 Add new package.json file to project ( NPM configuration file item template) Step 3.2 Add bower to devDependencies : { "version": "1.0.0", "name": "ASP.NET", "private": true, "devDependencies": { "bower": "1.4.1" }} The bower package is automatically installed when package.json is saved. Step 4 (Configure bower) Step 4.1 Add new bower.json file to project ( Bower Configuration file item template) Step 4.2 Add bootstrap , jquery-validation-unobtrusive , modernizr and respond to dependencies: { "name": "ASP.NET", "private": true, "dependencies": { "bootstrap": "*", "jquery-validation-unobtrusive": "*", "modernizr": "*", "respond": "*" }} These packages and their dependencies are automatically installed when bower.json is saved. Step 5 (Modify Views\Shared\_Layout.cshtml ) Step 5.1 Replace @Styles.Render("~/Content/css") with <link rel="stylesheet" href="~/wwwroot/lib/bootstrap/dist/css/bootstrap.min.css" /><link rel="stylesheet" href="~/Content/Site.css" /> Step 5.2 Replace @Scripts.Render("~/bundles/modernizr") with <script src="~/wwwroot/lib/modernizr/modernizr.js" ></script> Step 5.3 Replace @Scripts.Render("~/bundles/jquery") with <script src="~/wwwroot/lib/jquery/dist/jquery.min.js"></script> Step 5.4 Replace @Scripts.Render("~/bundles/bootstrap") with <script src="~/wwwroot/lib/bootstrap/dist/js/bootstrap.min.js"></script><script src="~/wwwroot/lib/respond/dest/respond.min.js"></script> Step 6 (Modify other sources) In all other Views replace @Scripts.Render("~/bundles/jqueryval") with <script src="~/wwwroot/lib/jquery-validation/dist/jquery.validate.min.js"></script><script src="~/wwwroot/lib/jquery-validation-unobtrusive/jquery.validate.unobtrusive.min.js"></script> Useful Links http://idisposable.co.uk/2015/02/switching-the-client-side-build-library-in-visual-studio-2013-mvc-template-to-gulp-and-bower/ http://www.baconapplications.com/running-bower-grunt-in-visual-studio-2013/ https://web.archive.org/web/20190611132417/http://old.devkimchi.com:80/2015/01/06/integrating-grunt-and-bower-with-visual-studio-2013 http://www.dotnetcurry.com/visualstudio/1096/using-grunt-gulp-bower-visual-studio-2013-2015 http://andy-carter.com/blog/a-beginners-guide-to-package-manager-bower-and-using-gulp-to-manage-components http://www.jeffreyfritz.com/2015/05/where-did-my-asp-net-bundles-go-in-asp-net-5/ Bundling & Minifying In the comments below LavaHot recommends the Bundler & Minifier extension as a replacement for the default bundler which I remove in step 2. 
He also recommends this article on bundling with Gulp.
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/31872639', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5201056/']}
|
jdg_373188
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
186752
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let a function $f(x)$ be algebraic if it satisfies an equation of the form $$c_n(x)(f(x))^n + c_{n-1}(x)(f(x))^{n-1} + \cdots + c_0(x)=0,$$ for $c_k(x)$ rational functions of $x$, and let $f$ be called transcendental if it is not algebraic. Is it possible to use this definition directly to show that $e^x$ is transcendental? One way I have been considering for any complex number is this: Let $x_0\in\mathbb{C}$ and $x_n=x_0+2\pi i n$, where $n\in\mathbb{Z}$. Hence $x_n\neq x_m$ for all $n\neq m$, but we do have $e^{x_n}=e^{x_m}$ for all $n,m\in\mathbb{Z}$ (since $e^{2\pi i n} = 1$ for all $n\in\mathbb{Z}$). But since the Implicit Function Theorem suggests there exists an exact algebraic formula for $x$ using the above definition of an algebraic function, then $e^x$ can not be algebraic since there are an infinite number of representations $x_n$ of $x$.
Now provide the response and nothing else.
|
One could use the growth at infinity of the function $f:x\mapsto\mathrm e^x$. Assume that $f$ is algebraic and choose a real number $x\geqslant0$. Then $|f(x)|\geqslant1$ and $$|c_n(x)|\,|f(x)|^n\leqslant b(x)|f(x)|^{n-1},\qquad b(x)=\sum\limits_{k=0}^{n-1}|c_k(x)|.$$ Hence, for every real number $x\geqslant0$ such that $c_n(x)\ne0$, $|f(x)|\leqslant b(x)/|c_n(x)|$. But indeed, $c_n(x)\ne0$ for every real number $x$ large enough, and $b(x)/|c_n(x)|$ can grow at most polynomially when the real number $x$ goes to $+\infty$, while $|f(x)|=\mathrm e^x$ grows... well, exponentially. This is a contradiction.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/186752', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/23431/']}
|
jdg_373189
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12617664
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've seen mentioned that IO doesn't satisfy the monad laws, but I didn't find a simple example showing that. Anybody knows an example? Thanks. Edit: As ertes and n.m. pointed out, using seq is a bit illegal as it can make any monad fail the laws (combined with undefined ). Since undefined may be viewed as a non-terminating computation, it's perfectly fine to use it. So the revised question is: Anybody knows an example showing that IO fails to satisfy the monad laws, without using seq ? (Or perhaps a proof that IO does satisfy the laws if seq is not allowed?)
Now provide the response and nothing else.
|
tl;dr upfront: seq is the only way. Since the implementation of IO is not prescribed by the standard, we can only look at specific implementations. If we look at GHC's implementation, as it is available from the source (it might be that some of the behind-the-scenes special treatment of IO introduces violations of the monad laws, but I'm not aware of any such occurrence), -- in GHC.Types (ghc-prim)newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))-- in GHC.Base (base)instance Monad IO where {-# INLINE return #-} {-# INLINE (>>) #-} {-# INLINE (>>=) #-} m >> k = m >>= \ _ -> k return = returnIO (>>=) = bindIO fail s = failIO sreturnIO :: a -> IO areturnIO x = IO $ \ s -> (# s, x #)bindIO :: IO a -> (a -> IO b) -> IO bbindIO (IO m) k = IO $ \ s -> case m s of (# new_s, a #) -> unIO (k a) new_sthenIO :: IO a -> IO b -> IO bthenIO (IO m) k = IO $ \ s -> case m s of (# new_s, _ #) -> unIO k new_sunIO :: IO a -> (State# RealWorld -> (# State# RealWorld, a #))unIO (IO a) = a it's implemented as a (strict) state monad. So any violation of the monad laws IO makes, is also made by Control.Monad.State[.Strict] . Let's look at the monad laws and see what happens in IO : return x >>= f ≡ f x:return x >>= f = IO $ \s -> case (\t -> (# t, x #)) s of (# new_s, a #) -> unIO (f a) new_s = IO $ \s -> case (# s, x #) of (# new_s, a #) -> unIO (f a) new_s = IO $ \s -> unIO (f x) s Ignoring the newtype wrapper, that means return x >>= f becomes \s -> (f x) s . The only way to (possibly) distinguish that from f x is seq . (And seq can only distinguish it if f x ≡ undefined .) m >>= return ≡ m:(IO k) >>= return = IO $ \s -> case k s of (# new_s, a #) -> unIO (return a) new_s = IO $ \s -> case k s of (# new_s, a #) -> (\t -> (# t, a #)) new_s = IO $ \s -> case k s of (# new_s, a #) -> (# new_s, a #) = IO $ \s -> k s ignoring the newtype wrapper again, k is replaced by \s -> k s , which again is only distinguishable by seq , and only if k ≡ undefined . m >>= (\x -> g x >>= h) ≡ (m >>= g) >>= h:(IO k) >>= (\x -> g x >>= h) = IO $ \s -> case k s of (# new_s, a #) -> unIO ((\x -> g x >>= h) a) new_s = IO $ \s -> case k s of (# new_s, a #) -> unIO (g a >>= h) new_s = IO $ \s -> case k s of (# new_s, a #) -> (\t -> case unIO (g a) t of (# new_t, b #) -> unIO (h b) new_t) new_s = IO $ \s -> case k s of (# new_s, a #) -> case unIO (g a) new_s of (# new_t, b #) -> unIO (h b) new_t((IO k) >>= g) >>= h = IO $ \s -> case (\t -> case k t of (# new_s, a #) -> unIO (g a) new_s) s of (# new_t, b #) -> unIO (h b) new_t = IO $ \s -> case (case k s of (# new_s, a #) -> unIO (g a) new_s) of (# new_t, b #) -> unIO (h b) new_t Now, we generally have case (case e of case e of pat1 -> ex1) of ≡ pat1 -> case ex1 of pat2 -> ex2 pat2 -> ex2 per equation 3.17.3.(a) of the language report, so this law holds not only modulo seq . Summarising, IO satisfies the monad laws, except for the fact that seq can distinguish undefined and \s -> undefined s . The same holds for State[T] , Reader[T] , (->) a , and any other monads wrapping a function type.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12617664', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1333025/']}
|
jdg_373190
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
4560192
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
While I have a basic understanding of what this definition states, I've been running into trouble interpreting the results when I test it with certain sets other than the traditional examples used to show that the Rationals do not have this property.

$\textbf{Definition 1.10:}$ "An ordered set $S$ is said to have the least-upper-bound property if the following is true: $$\text{If } E\subset S,\ E \text{ is not empty, and } E \text{ is bounded above, then } \sup(E) \text{ exists in } S."$$

I am interpreting this as a series of nested conditional statements: $$\Big[ \big( \left( E\subset S \right) \land \left( E \neq \varnothing \right) \land \left( E \text{ bounded above}\right) \big) \implies (\ \sup(E) \text{ exists in }S)\ \Big] \implies S \text{ has LUB property}$$

From here, we usually let $S=\mathbb{Q}$ and $E=\{q\in\mathbb{Q}:q^2<2 \}$ to show that $\mathbb{Q}$ does not have the LUB property. This makes sense to me because while the antecedent within brackets is true, the consequent is false. Thus the bracketed conditional is false, forcing the entire definition to be true. But what about if (for example) $S=\mathbb{Q}$ and $E=\{q\in\mathbb{Q}:q^2<-1\}$? In this case, $E=\varnothing$, thus the antecedent within the brackets is false. This renders the entire conditional in brackets true. Then the whole definition can only be true if $\mathbb{Q}$ has the LUB property - which is clearly wrong. Can someone help me track down my misunderstanding?
Now provide the response and nothing else.
|
There is something wrong with your series of conditionals: $E$ is a free variable. That means that the statement depends on both $S$ and $E$ . But " $S$ has the least upper bound property" doesn't depend on $E$ , so we need to make $E$ a bound variable by using a universal quantifier: $$\Big[ \forall E : \left\{ \big( \left( E\subset S \right) \land \left( E \neq \varnothing \right) \land \left( E \text{ bounded above}\right) \big) \implies (\ \sup(E) \text{ exists in }S)\ \right\} \Big] \leftrightarrow S \text{ has LUB property}$$
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/4560192', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/798991/']}
|
jdg_373191
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3583
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
As an intermediate step, consider the function: f[x_] := x Sin[Pi/x] I want to prove that this function is increasing for $x\ge 1$ . This can be done with the first derivative. We have to prove that $f'[x]\ge 0$ for $x\ge 1$ . First I tried: FullSimplify[f'[x] >= 0, x >= 1](*Out: x Sin[π/x] >= π Cos[π/x] *) Then I realized that Mathematica probably doesn't know about the inequality $\tan(x)>x$ for $0<x<\pi/2$ . So I tried: FullSimplify[f'[x] >= 0, x >= 1 && ForAll[y, 0 < y < Pi/2, Tan[y] > y]](*Out: x Sin[π/x] >= π Cos[π/x] *) This inequality seems to be easy, I tried replacing x >= 1 with x >= Pi and I get the same result. How can I properly use Mathematica to prove some theorems like this one? EDIT: The main point of the problem is not trying to prove that the function is increasing (that is the motivation of the problem) , the main point is trying to prove that $x\sin(\frac{\pi}{x})\ge\pi \cos(\frac{\pi}{x})$ for $x\ge 1$ using that $\tan(x)>x$ for $0<x<\pi/2$ , i.e. i want to use the proving features of Mathematica.
Now provide the response and nothing else.
|
The proof of the original statement that $f(x)\equiv x\sin\frac{\pi}{x}$ is a monotonically increasing function of $x$ for $x>1$ can be done as follows: First, we show that the second derivative $f''(x)$ of the function is negative: Simplify[D[x Sin[π/x], x, x] < 0, Assumptions -> x > 1] True This means that the first derivative $f'(x)$ is a monotonically decreasing function of $x$ for $x>1$. Now we show that the derivative of the function approaches zero as $x\to\infty$: Limit[D[x Sin[π/x], x], x -> ∞] 0 Since the derivative has been shown to be decreasing and to have a limit of zero for $x\to\infty$, it follows that $f'(x) > 0$ for $x>1$. This proves the desired statement about $f(x)$. Edit To take the other route proposed in the edited version of the question, you could do the following: Resolve[ForAll[{x}, x > 1 && Tan[π/x] >= π/x, f'[x] >= 0], Reals] True Edit 2 In the Resolve statement above, ForAll has three arguments: the variable {x} , a condition , and the statement to be proved. In words, this says the following: for all $x$ that satisfy the condition $x>1$ and $\tan(\pi/x)\ge \pi/x$, it holds that $f'(x)\ge0$. Of course, the condition can actually be simplified because the tangent inequality as stated here only holds for $x>2$. To make the condition fully consistent with the desired interval $x>1$, we simply have to replace $x$ by $2 x$ in the tangent inequality. This leaves the inequality unaffected but extends its range of validity to $x>1$. Therefore, we get the following statement that can be fed to Mathematica: Resolve[ForAll[x, x > 1 && Tan[π/(2 x)] >= π/x/2, f'[x] >= 0], Reals] True
|
{}
|
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/3583', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/676/']}
|
jdg_373192
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18533820
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I use chrome.tabs.insertCSS(null, {file: "style.css"}); to insert a css file into the page, and it works fine. But is there a way to remove the file? Also, if I inject another file named style.css , does it override the first one? And by the way, where can I see the injected CSS files? Scripts can be viewed in "Sources/Content Scripts" (of chrome developer tools), but I cannot find the CSS files.
Now provide the response and nothing else.
|
I'm not sure what the context of this question is, but it sounds like maybe you want to highlight specific elements of the page a certain way and be able to toggle that custom highlight. (rather than styling elements your extension would create) Since you can't actually remove the CSS file or just erase it by adding a file with the same name, I suggest the following: wrap your custom styles in something like body.JMaylinCustomStyles use JavaScript to add or remove the JMaylinCustomStyles class to the body element. there is no step 3. The benefits are that it doesn't add much work on top of what you have, that you don't have to figure out how to override your custom styles in a second stylesheet (it's always very annoying and error-prone) and that you even gain a bit in CSS specificity so your styles are more likely to be applied. Note that the best way to "wrap" your styles would be to use Sass or LESS since you can literally just add body.JMaylinCustomStyles { at the top of the file and } at the bottom.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18533820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/962893/']}
|
jdg_373193
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
52458430
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a snackbar / toast version with Bootstrap 4. I start with this tutorial from w3schools. Updated: I was trying to implement a custom snackbar or toast for Bootstrap 4 but, right now, it isn't necessary beacause Bootstrap 4 includes this option from version 4.2 as @Zim says.
Now provide the response and nothing else.
|
I (emphasis = full disclosure) created a jQuery plugin for the toasts components to make them easier to use, the repo can be found here . The purpose was to be able to call the toasts on the fly through JavaScript. Toast $.toast({ title: 'Toast', subtitle: '11 mins ago', content: 'Hello, world! This is a toast message.', type: 'info', delay: 5000}); Snack $.toast({ title: 'A small bitesize snack, not a toast!', type: 'info', delay: 5000}); Live Example (function(b){b.toast=function(a,h,g,l,k){b("#toast-container").length||(b("body").prepend('<div id="toast-container" aria-live="polite" aria-atomic="true"></div>'),b("#toast-container").append('<div id="toast-wrapper"></div>'));var c="",d="",e="text-muted",f="",m="object"===typeof a?a.title||"":a||"Notice!";h="object"===typeof a?a.subtitle||"":h||"";g="object"===typeof a?a.content||"":g||"";k="object"===typeof a?a.delay||3E3:k||3E3;switch("object"===typeof a?a.type||"":l||"info"){case "info":c="bg-info";f=e=d="text-white";break;case "success":c="bg-success";f=e=d="text-white";break;case "warning":case "warn":c="bg-warning";f=e=d="text-white";break;case "error":case "danger":c="bg-danger",f=e=d="text-white"}a='<div class="toast" role="alert" aria-live="assertive" aria-atomic="true" data-delay="'+k+'">'+('<div class="toast-header '+c+" "+d+'">')+('<strong class="mr-auto">'+m+"</strong>");a+='<small class="'+e+'">'+h+"</small>";a+='<button type="button" class="ml-2 mb-1 close" data-dismiss="toast" aria-label="Close">';a+='<span aria-hidden="true" class="'+f+'">×</span>';a+="</button>";a+="</div>";""!==g&&(a+='<div class="toast-body">',a+=g,a+="</div>");a+="</div>";b("#toast-wrapper").append(a);b("#toast-wrapper .toast:last").toast("show")}})(jQuery);const TYPES = ['info', 'warning', 'success', 'error'], TITLES = { 'info': 'Notice!', 'success': 'Awesome!', 'warning': 'Watch Out!', 'error': 'Doh!' }, CONTENT = { 'info': 'Hello, world! This is a toast message.', 'success': 'The action has been completed.', 'warning': 'It\'s all about to go wrong', 'error': 'It all went wrong.' 
};function show_random_toast(){ let type = TYPES[Math.floor(Math.random() * TYPES.length)], title = TITLES[type], content = CONTENT[type]; $.toast({ title: title, subtitle: '11 mins ago', content: content, type: type, delay: 5000 });}function show_random_snack(){ let type = TYPES[Math.floor(Math.random() * TYPES.length)], content = CONTENT[type].replace('toast', 'snack'); $.toast({ title: content, type: type, delay: 5000 });} #toast-container { position: sticky; z-index: 1055; top: 0}#toast-wrapper { position: absolute; top: 0; right: 0; margin: 5px}#toast-container > #toast-wrapper > .toast { min-width: 150px}#toast-container > #toast-wrapper > .toast >.toast-header strong { padding-right: 20px} <!DOCTYPE html><html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>repl.it</title> <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/css/bootstrap.min.css" integrity="sha384-ggOyR0iXCbMQv3Xipma34MD+dH/1fQ784/j6cY/iJTQUOhcWr7x9JvoRxT2MZw1T" crossorigin="anonymous"> </head> <body> <button class="btn-block btn-primary" onclick="show_random_toast();">Show Random Toast</button> <br> <button class="btn-block btn-primary" onclick="show_random_snack();">Show Random Snack</button> <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js" integrity="sha384-JjSmVgyd0p3pXB1rRibZUAYoIIy6OrQ6VrjIEaFf/nJGzIxFDsf4x0xIM+B07jRM" crossorigin="anonymous"></script> </body></html>
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52458430', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3377046/']}
|
jdg_373194
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2643010
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a small program I want to execute to test something #include <map>#include <iostream>using namespace std;struct _pos{ float xi; float xf; bool operator<(_pos& other){ return this->xi < other.xi; }};struct _val{ float f;};int main(){ map<_pos,_val> m; struct _pos k1 = {0,10}; struct _pos k2 = {10,15}; struct _val v1 = {5.5}; struct _val v2 = {12.3}; m.insert(std::pair<_pos,_val>(k1,v1)); m.insert(std::pair<_pos,_val>(k2,v2)); return 0;} The problem is that when I try to compile it, I get the following error $ g++ m2.cpp -o mtestIn file included from /usr/include/c++/4.4/bits/stl_tree.h:64, from /usr/include/c++/4.4/map:60, from m2.cpp:1:/usr/include/c++/4.4/bits/stl_function.h: In member function ‘bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = _pos]’:/usr/include/c++/4.4/bits/stl_tree.h:1170: instantiated from ‘std::pair<typename std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::iterator, bool> std::_Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::_M_insert_unique(const _Val&) [with _Key = _pos, _Val = std::pair<const _pos, _val>, _KeyOfValue = std::_Select1st<std::pair<const _pos, _val> >, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]’/usr/include/c++/4.4/bits/stl_map.h:500: instantiated from ‘std::pair<typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename _Alloc::rebind<std::pair<const _Key, _Tp> >::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const std::pair<const _Key, _Tp>&) [with _Key = _pos, _Tp = _val, _Compare = std::less<_pos>, _Alloc = std::allocator<std::pair<const _pos, _val> >]’m2.cpp:30: instantiated from here/usr/include/c++/4.4/bits/stl_function.h:230: error: no match for ‘operator<’ in ‘__x < __y’m2.cpp:9: note: candidates are: bool _pos::operator<(_pos&)$ I thought that declaring the operator< on the key would solve the problem, but its still there. What could be wrong? Thanks in advance.
Now provide the response and nothing else.
|
The problem is this:

bool operator<(_pos& other)

Should be this:

bool operator<(const _pos& other) const {
//             ^^^^^              ^^^^^

Without the first const, the right-hand side of the comparison (b in a < b) cannot be const, since without const the function may modify its argument. Without the second const, the left-hand side of the comparison (a in a < b) cannot be const, since without const the function may modify this. Internally, the keys of a map are always const.

It should be noted that you should prefer to use nonmember functions. That is, better is a free function:

bool operator<(const _pos& lhs, const _pos& rhs)
{
    return lhs.xi < rhs.xi;
}

in the same namespace as your class. (For our example, just underneath it.)

By the way, in C++ there is no need to prefix the declaration of a struct type variable with struct. This is perfect, and preferred:

_pos k1 = {0,10};
_pos k2 = {10,15};
_val v1 = {5.5};
_val v2 = {12.3};

(Though your type names are admittedly named in an unorthodox manner. :P)

Lastly, you should prefer the make_pair utility function for making pairs:

m.insert(std::make_pair(k1,v1));
m.insert(std::make_pair(k2,v2));

It saves you from having to write out the types for the pair, and is generally easier to read. (Especially when longer type names come along.)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2643010', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63309/']}
|
jdg_373195
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
502661
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I wanted to check the fuse of the SMPS pictured. In order to do this safely, I followed the discharging procedure shown in many tutorial videos, i.e. shorting the capacitor's terminals with needle-nose pliers. The fuse seemed alright, but while discharging the capacitor a loud bang occurred (no power cable was attached)! It was quite a loud bang. Does it mean the capacitor is bad? None of the videos warned about such a bang! The capacitor was 330 mF. I cleaned up the soot caused by the bang and checked the capacitance and conductance of the capacitor; it seemed fine. The fuse, after rechecking, was also fine, and pretty much everything else I had checked before also seemed fine (the power supply is still bad, though). Question: Shouldn't the capacitor have gone bad if such a huge bang occurred, followed by soot on the PCB?
Now provide the response and nothing else.
|
Shouldn't the capacitor go bad if a huge bang had occurred followed by soot on the PCB? It appears that you short-circuited high-voltage, high-capacity capacitors. These store a lot of energy and you discharged it all in an instant so you generated a very high current and generated an arc due to poor contact. The soot on the PCB was generated externally to the capacitor. Your capacitor is probably fine. In future you should discharge the capacitor in a controlled fashion by using a resistor. Work out the power that would be dissipated in the resistor if it was permanently wired across the capacitor while powered. (You could probably use half the calculated wattage for a brief discharge.) Make up some insulated probes attached to the resistor.
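To put rough numbers on that resistor sizing, here is a quick back-of-the-envelope sketch; all values are my own illustrative assumptions (a ~325 V DC bus from rectified 230 V mains, a bulk capacitance of a few hundred microfarads, and a candidate 2.2 kΩ discharge resistor), not figures from the question:

# Back-of-the-envelope sizing for a capacitor discharge resistor.
# All numbers below are assumptions for illustration only.
V = 325.0    # assumed DC bus voltage after rectifying 230 V mains [V]
C = 330e-6   # assumed bulk capacitance [F]
R = 2200.0   # candidate discharge resistor [ohm]

P_continuous = V**2 / R      # power if permanently wired across the cap
E_stored = 0.5 * C * V**2    # energy stored in the capacitor
tau = R * C                  # discharge time constant
t_safe = 5 * tau             # ~5 time constants leaves under 1% of the voltage

print(f"Continuous dissipation: {P_continuous:.1f} W")  # ~48 W
print(f"Stored energy:          {E_stored:.1f} J")      # ~17 J
print(f"Time constant: {tau*1000:.0f} ms, roughly safe after {t_safe:.1f} s")

Per the advice above, a resistor rated for a good fraction of that continuous figure (or one used only briefly), fitted to insulated probes, discharges the capacitor in a few seconds without the bang.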
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/502661', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/235296/']}
|
jdg_373196
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10989
|
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
From the 1970s–1980s, UK government local authorities stopped building homes: (Source: BBC News ) Or, with raw figures from the Ministry of Housing, Communities & Local Government spreadsheet , extracting figures for England and Scotland for completed council homes in selected years: England Scotland1970 130,180 34,6601975 116,330 23,1901980 74,840 7,4901985 23,310 2,8301990 12,960 1,6301995 760 7202000 180 1102005 300 02010 1,140 6102015 1,900 1,140 Source: Ministry of Housing, Communities & Local Government This has resulted in the housing crisis (for example, see Financial Times , Shelter , The Independent , The Telegraph ). Why did UK local authorities stop building homes since the 1980s? Although houses were built by local councils, funding came from the central government so the political decision to no longer do so was taken in Westminster. So the direct answer would be because they cannot afford to , but what motivated the political decisions stripping councils of money to build homes? Although there's been some recovery in the past 15 years, building 2,000 council homes per year is far below demand. Scotland does a little better, it has 1/10th of the population that England does — per capita, Scotland has usually built more council homes, and are now building around a factor 5 more. Scotland does not have the right to buy, and has had more progressive governments than the UK (which decides for England), so either the right to buy and/or the more progressive governments may be part of the answer to the lack of council house building in England. I'm not sure how the right to buy would lead to lack of council house building; one might equally reason that the privatisation of council houses through the right to buy should necessitate a steady construction of new council homes. Background reading: Tom de Castella, Why can't the UK build 240,000 houses per year? , in: BBC Magazine, 13 January 2015.
Now provide the response and nothing else.
|
Much of the social housing stock has been sold off under Right to Buy and it will not be built at replacement rate without the removal of significant barriers to new builds. [1] Successive governments since 1979 have created a set of circumstances that make it economically infeasible to build social housing stock. Governments have tried to compel the building of replacement stock without significantly reversing those economic circumstances. These include: secure tenants' Right to Buy local authority housing stock (and former local authority stock owned by housing associations) at large discounts [2], introduced in 1980 in England, Wales and Scotland LAs have been obliged to set-aside large proportions of capital receipts from RTB to pay off debts before spending on replacement builds, not allowed to use RTB receipts to fund more than 30% of the cost of a replacement home, not allowed to use RTB receipts in conjunction with subsidies such as government grant or public land, have had to commit receipts within a tight three year turnaround or lose them to the Treasury reductions in central government grants to local authorities for the purpose of building new stock [3] rules changes that effectively put central government in charge of local housing budgets [4] reductions in rents LAs are allowed to charge compulsory sales of land owned by local authorities [5] central government limits on the freedom of LAs to borrow Now housing associations and other social housing landlords (e.g. the NHS and the military) will likely face similar problems with building new properties since the Right to Buy was extended to them. Scotland abolished the Right to Buy in 2014 and it will end in Wales in January 2019, so we'll have a 'natural experiment' to see whether social housing stock will recover in those countries. As to why, the Thatcher government (and successive Conservative governments) purported to want to restructure the relationship between the individual and the state. They said the state owned too much, people had a right to own their homes, the pride of ownership would reduce urban deterioration and ownership would increase labour mobility. But there were Conservatives who believed that making more people owner-occupiers would destroy British 'socialism'. One even proposed giving away social housing to the secure tenants. Right to Buy is the most expensive privatisation in UK history, it is the largest transfer of wealth from the public to the private sector - more than 2m properties sold at an average net loss. [1] " it is certain that reform of the RTB scheme would significantly help the council sector address the current shortfall of homes, both overall and in most individual local authority areas. Taking everything into account only one in five (21%) of the authorities who participated in our research said that they currently expect to be able to replace at least the majority of the homes they have sold. By contrast almost three quarters (73%) said that they only expect to replace half or fewer, including one in 10 (12%) who said that they will not be able to replace any at all. 
http://www.cih.org/resources/PDF/Policy%20free%20download%20pdfs/Keeping%20pace%20-%20replacing%20right%20to%20buy%20sales.pdf and https://www.local.gov.uk/about/news/right-buy-replacement-will-be-all-eliminated-just-five-years-warns-lga [2] At the time of writing, if you live in a house you get a 35% discount if you’ve been a public sector tenant for between 3 and 5 years, after 5 years the discount goes up by 1% for every extra year you’ve been a public sector tenant up to a maximum of 70%. In a flat you get a 50% discount if you’ve been a public sector tenant for between 3 and 5 years, after 5 years the discount goes up by 2% for every extra year you’ve been a public sector tenant up to a maximum of 70%. The maximum discount is £80,900 in England, except in London where it’s £108,000. https://www.gov.uk/right-to-buy-buying-your-council-home/discounts [3] page 11 https://www.ifs.org.uk/uploads/publications/bns/BN178.pdf [4] "By explicitly ring-fencing housing revenue accounts and controlling these net additional subsidies, central government was effectively able to impose external budget constraints on local authorities. In economic terms, therefore, local council housing budgets have effectively been set exogenously by central government and unaffected (at least directly) by revenues arising from council house sales." https://www.ifs.org.uk/uploads/publications/bns/BN162.pdf [5] https://www.nao.org.uk/wp-content/uploads/2016/07/Disposal-of-public-land-for-new-homes.pdf
|
{}
|
{'log_upvote_score': 4, 'links': ['https://politics.stackexchange.com/questions/10989', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/130/']}
|
jdg_373197
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
11273823
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Possible Duplicate: String, StringBuffer, and StringBuilder We know that String is immutable whereas StringBuffer / StringBuilder are mutable. But sometimes we get confused about what to use in our code: String or StringBuffer/StringBuilder? In practice, in most of our quick code we tend to prefer String over StringBuffer / StringBuilder. This question is to clear up the confusion; if you have an answer and a proper reason for it, please reply.
Now provide the response and nothing else.
|
Below are the main differences between these three most commonly used classes. String objects are immutable whereas StringBuffer and StringBuilder objects are mutable. StringBuffer is synchronized while StringBuilder is not synchronized. The concatenation operator "+" is internally implemented using either StringBuffer or StringBuilder. Criteria to choose among String , StringBuffer and StringBuilder : If the value is not going to change, use String , because a String object is immutable. If the value can change and will only be accessed from a single thread, use a StringBuilder , because StringBuilder is unsynchronized. If the value can change and will be modified by multiple threads, use a StringBuffer , because StringBuffer is synchronized.
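A minimal Java sketch of the difference (the class and variable names here are illustrative, not from the original answer):

public class ConcatDemo {
    public static void main(String[] args) {
        // String: immutable -- "modifying" it creates a brand new object each time.
        String s = "a";
        s = s + "b";                 // a new String is allocated; the original "a" is unchanged

        // StringBuilder: mutable and unsynchronized -- fine for single-threaded use.
        StringBuilder sb = new StringBuilder("a");
        sb.append("b").append("c");  // mutates the same internal buffer

        // StringBuffer: same API, but its methods are synchronized for multi-threaded use.
        StringBuffer sf = new StringBuffer("a");
        sf.append("b");

        System.out.println(s + " " + sb + " " + sf);  // ab abc ab
    }
}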
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11273823', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1481665/']}
|
jdg_373198
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3798874
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using an ORM called Ohm in Ruby that works on top of Redis and am curious to find out how the data is actually stored. I was wondering if there is way to list all the keys/values in a Redis db. Any lead will go a long way in helping me out (I'm basically stuck atm). Thanks in advance! Update: A note for others trying this out using redis-cli, use this: $ redis-cli keys* (press * followed by Ctrl-D)... (prints a list of keys and exits)$ Thanks @antirez and @hellvinz!
Now provide the response and nothing else.
|
You can explore the Redis dataset using the redis-cli tool included in the Redis distribution. Just start the tool without arguments, then type commands to explore the dataset. For instance KEYS will list all the keys matching a glob-style pattern; for instance, with keys * you'll see all the keys available. Then you can use the TYPE command to check what type a given key is; if it's a list you can retrieve the elements inside using LRANGE mykey 0 -1 . If it is a Set you'll use SMEMBERS mykey instead, and so forth. Check the Redis documentation for a list of all the available commands and how they work.
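A short illustrative session; the key names and stored values below are made up, but the commands themselves (KEYS, TYPE, LRANGE, SMEMBERS) are standard Redis commands:

$ redis-cli
redis> KEYS *
1) "User:all"
2) "User:1:comments"
redis> TYPE User:1:comments
list
redis> LRANGE User:1:comments 0 -1
1) "first comment"
2) "second comment"
redis> TYPE User:all
set
redis> SMEMBERS User:all
1) "1"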
|
{}
|
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/3798874', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129912/']}
|
jdg_373199
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20004310
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
My app that contains a UITableViewController embedded in UIPageViewController raises this exception from time to time: Invalid parameter not satisfying: [views count] == 3 Backtrace: * thread #1: tid = 0x6239fa, 0x03d1d88a libobjc.A.dylib`objc_exception_throw, queue = 'com.apple.main-thread, stop reason = breakpoint 25.3 frame #0: 0x03d1d88a libobjc.A.dylib`objc_exception_throw frame #1: 0x0404f448 CoreFoundation`+[NSException raise:format:arguments:] + 136 frame #2: 0x03428fee Foundation`-[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 116 frame #3: 0x01e7c535 UIKit`-[_UIQueuingScrollView _replaceViews:updatingContents:adjustContentInsets:animated:] + 185 frame #4: 0x01e800ca UIKit`-[_UIQueuingScrollView _didScrollWithAnimation:force:] + 1231 frame #5: 0x01e7bb57 UIKit`-[_UIQueuingScrollView _scrollViewAnimationEnded:finished:] + 104 frame #6: 0x0190583c UIKit`-[UIScrollView(UIScrollViewInternal) animator:stopAnimation:fraction:] + 62 frame #7: 0x0197096e UIKit`-[UIAnimator stopAnimation:] + 533 frame #8: 0x0197100a UIKit`-[UIAnimator(Static) _advanceAnimationsOfType:withTimestamp:] + 325 frame #9: 0x01970b76 UIKit`-[UIAnimator(Static) _LCDHeartbeatCallback:] + 67 frame #10: 0x01663b8a QuartzCore`CA::Display::DisplayLinkItem::dispatch() + 48 frame #11: 0x01663a46 QuartzCore`CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long) + 310 frame #12: 0x01663f6b QuartzCore`CA::Display::TimerDisplayLink::callback(__CFRunLoopTimer*, void*) + 123 frame #13: 0x0400dbd6 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__ + 22 frame #14: 0x0400d5bd CoreFoundation`__CFRunLoopDoTimer + 1181 frame #15: 0x03ff5628 CoreFoundation`__CFRunLoopRun + 1816 frame #16: 0x03ff4ac3 CoreFoundation`CFRunLoopRunSpecific + 467 frame #17: 0x03ff48db CoreFoundation`CFRunLoopRunInMode + 123 frame #18: 0x0533b9e2 GraphicsServices`GSEventRunModal + 192 frame #19: 0x0533b809 GraphicsServices`GSEventRun + 104 frame #20: 0x01874d3b UIKit`UIApplicationMain + 1225 Does anyone have seen this already or have an idea what the reason could be?
Now provide the response and nothing else.
|
Edit: after using this fix for more time, I can still see the bug on occasion so this isn't the complete fix (well... it was always kind of a hack). I'll update with the actual solution once I find it. I've encountered this same error using UIPageViewController. After hours debugging the issue, I've found the cause was using UIView animations inside the completion handler of UIPageViewController's setViewControllers:direction:animated:completion:. I don't know why animating at that stage causes the assertion error (I wasn't animating the UIPageViewController or its child view controllers), but wrapping the code block with dispatch_async on the main queue solves the issue and stops the crashing.
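A minimal Objective-C sketch of that workaround; pageViewController, nextVC and someView are placeholders, and the animation body is only an example, not code from the original answer:

[pageViewController setViewControllers:@[nextVC]
                             direction:UIPageViewControllerNavigationDirectionForward
                              animated:YES
                            completion:^(BOOL finished) {
    // Defer any UIView animation to the next run-loop pass instead of running it
    // directly inside the completion handler.
    dispatch_async(dispatch_get_main_queue(), ^{
        [UIView animateWithDuration:0.3 animations:^{
            someView.alpha = 0.0;  // whatever animation was previously crashing here
        }];
    });
}];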
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/20004310', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/235297/']}
|
jdg_373200
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
48476011
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to loop trough every key of mystruct and print its key and its value for every property. struct mystruct { var a = "11215" var b = "21212" var c = "39932"}func loopthrough { for (key, value) in mystruct { print("key: \(key), value: \(value)") // Type mystruct.Type does not conform to protocol 'Sequence' }} But using the few lines from above I always get this error message: Type mystruct.Type does not conform to protocol 'Sequence' How can I avoid getting this message?
Now provide the response and nothing else.
|
First of all let's use CamelCase for the struct name struct MyStruct { var a = "11215" var b = "21212" var c = "39932"} Next we need to create a value of type MyStruct let elm = MyStruct() Now we can build a Mirror value based on the elm value. let mirror = Mirror(reflecting: elm) The Mirror value does allow us to access all the properties of elm , here's how for child in mirror.children { print("key: \(child.label), value: \(child.value)")} Result: key: Optional("a"), value: 11215 key: Optional("b"), value: 21212 key: Optional("c"), value: 39932
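If the Optional(...) wrapper around the keys is unwanted, the label can be unwrapped first; a small follow-up sketch using the same mirror value:

for child in mirror.children {
    if let label = child.label {            // label has type String?
        print("key: \(label), value: \(child.value)")
    }
}
// key: a, value: 11215
// key: b, value: 21212
// key: c, value: 39932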
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48476011', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_373201
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
636032
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Can someone help me with this series? It was on my exam and I don't know how to do it. For $|x| < 1$ determine the sum of$$\sum\limits_{i=1}^\infty (-1)^i\frac{x^{2i+1}}{2i+1}$$
Now provide the response and nothing else.
|
The derivative of the given sum is the geometric sum $$\sum_{n=1}^\infty (-x^2)^n=-\frac{x^2}{1+x^2}=-1+\frac{1}{1+x^2}$$ so the given sum, which vanishes at $0$, is $$\int_0^x\left(-1+\frac{1}{1+t^2}\right)dt=-x+\arctan x$$
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/636032', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/101749/']}
|
jdg_373202
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
206693
|
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I was recently refactoring a method that was both a command and a query method. After separating it into one command method and one query method, I found that there are now multiple places in the code where I am calling the command and then getting the value from the query, which seems like a violation of the DRY principle. But if I were to wrap that common code into a method, that method would be both a command and a query. Is this acceptable?
Now provide the response and nothing else.
|
There are always trade offs to consider between conflicting design principles. The way to resolve it is to look at the underlying reasons behind the principles. In this case, being unable to run a query without running the command is problematic, but being unable to run a command without running the query is generally harmless. As long as there's a way to run the query standalone, I see no reason not to add the query result to the command, especially if done something like this: QueryResult command(){ // do command stuff return query();}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://softwareengineering.stackexchange.com/questions/206693', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/81943/']}
|
jdg_373203
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
882973
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm failing to understand how to come to the answer to this question. If you roll a fair die six times, what is the probability that the numbers recorded are $1$, $2$, $3$, $4$, $5$, and $6$ in any order? The answer given is $6!(1/6)^6 = 5/324$. Can anyone explain to me how to get to that answer? I would really appreciate the help! :)
Now provide the response and nothing else.
|
On your first roll, you need to get any of the six possible outcomes (that is, anything will do). This has probability 6/6. On your second roll, you need to get something different than your previous result. This has probability 5/6. On your third roll you need to avoid the two previous values, which has probability 4/6. Carrying on like this, the total probability is$$\frac66\times\frac56\times\frac46\times\frac36\times\frac26\times\frac16=\frac{6!}{6^6}.$$
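Numerically, $\frac{6!}{6^6}=\frac{720}{46656}=\frac{5}{324}\approx 0.0154$, i.e. about a 1.5% chance.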
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/882973', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/153954/']}
|
jdg_373204
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
758256
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Is there a way to minimize to tray in PyQt4? I've already worked with the QSystemTrayIcon class, but now I would like to minimize or "hide" my app window, and show only the tray icon. Has anybody done this? Any direction would be appreciated. Using Python 2.5.4 and PyQt4 on Window XP Pro
Now provide the response and nothing else.
|
It's pretty straightforward once you remember that there's no way to actually minimize to the system tray . Instead, you fake it by doing this:

1. Catch the minimize event on your window.
2. In the minimize event handler, create and show a QSystemTrayIcon.
3. Also in the minimize event handler, call hide() or setVisible(false) on your window.
4. Catch a click/double-click/menu item on your system tray icon.
5. In your system tray icon event handler, call show() or setVisible(true) on your window, and optionally hide your tray icon.
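A minimal PyQt4 sketch of those steps, assuming a QMainWindow subclass and a PyQt4 build with new-style signals; the MainWindow class and the icon choice are illustrative:

from PyQt4 import QtGui, QtCore

class MainWindow(QtGui.QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        # Any QIcon will do for the tray; here we reuse the window's own icon.
        self.tray = QtGui.QSystemTrayIcon(self.windowIcon(), self)
        self.tray.activated.connect(self.on_tray_activated)

    def changeEvent(self, event):
        # Steps 1-3: catch the minimize, show the tray icon, hide the window.
        if event.type() == QtCore.QEvent.WindowStateChange and self.isMinimized():
            self.tray.show()
            # On some platforms it is safer to defer the hide to the next event loop pass.
            QtCore.QTimer.singleShot(0, self.hide)
        super(MainWindow, self).changeEvent(event)

    def on_tray_activated(self, reason):
        # Steps 4-5: restore the window on a click and hide the tray icon again.
        if reason == QtGui.QSystemTrayIcon.Trigger:
            self.tray.hide()
            self.showNormal()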
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/758256', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/91873/']}
|
jdg_373205
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
650211
|
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
When the earth revolves around the sun, the sun attracts the earth by a gravitational force $F_{se}$ (centripetal force), and the earth attracts the sun by a gravitational force $F_{es}$ (centrifugal force). The two forces are equal and opposite according to Newton's third law. We know that a centrifugal force is a fictitious force. So, $F_{es}$ is also a fictitious force, but wait, how is this possible? Gravitational force is not fictitious! But if a gravitational force is a centrifugal force, it has to be fictitious, right (since all centrifugal forces are fictitious)? So, is gravitational force fictitious or not?
Now provide the response and nothing else.
|
The best way to avoid this kind of confusion is to start from the beginning in a purely Newtonian description of the motion, i.e., working in an inertial frame. Only after understanding the situation in the inertial system it is possible to analyze it in a non-inertial frame without terminology or conceptual confusion. For the present discussion, we can neglect the effect of the presence of other planets. In an inertial frame, both Sun and Earth move with an almost circular trajectory around the common center of mass. If centripetal means towards the center of rotation , both $F_{es}$ and $F_{se}$ are centripetal . In this inertial frame, no centrifugal force is present. In the non-rotating non-inertial frame centered on the Sun, thus accelerating with acceleration ${\bf a}_s$ with respect to any inertial system, a fictitious (or inertial) force ${\bf F}_f = -m {\bf a}_s$ appears on each body of mass $m$ . As a consequence, there is no net force on the Sun, and the force on the Earth is the sum of the usual gravitational force plus a fictitious force $${\bf F}_f=-m_e {\bf a}_s$$ where ${\bf a}_s=\frac{Gm_e}{r_{es}^2}{\bf \hat r}_{es}$ is the acceleration of Sun in an inertial frame, ${\bf \hat r}_{es}$ is the unit vector from Sun to Earth. Therefore, this fictitious force points toward the Sun and should be called centripetal in this reference frame. It has to be added to the gravitational force on the Earth, again a centripetal force. The reference frames where a centrifugal fictitious force appears are all the non-inertial reference frames rotating with respect to the inertial frames. For example, if we assume circular orbits for simplicity, in the non-inertial frame centered on Sun and co-rotating with Earth, a fictitious centrifugal force on Earth appears, exactly equal to the gravitational force. Indeed, in such a rotating system, the Earth is at rest at a fixed distance from the Sun.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/650211', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/167872/']}
|
jdg_373206
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
27956779
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Suppose I have a class in one big file like this: export default class { constructor () {} methodA () {} methodB () {} methodC () {}} And I want to break up the class definition so that methodA , methodB , and methodC are each defined in their own separate files. Is this possible?
Now provide the response and nothing else.
|
You should be able to, as class is supposed to just be syntax sugar for the usual prototype workflow: import methodOne from 'methodOne'import methodTwo from 'methodTwo'class MyClass { constructor() { }}Object.assign(MyClass.prototype, {methodOne, methodTwo})export default MyClass
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27956779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4172219/']}
|
jdg_373207
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3418128
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I convert a factor to a numeric or integer, I get the underlying level codes, not the values as numbers. f <- factor(sample(runif(5), 20, replace = TRUE))## [1] 0.0248644019011408 0.0248644019011408 0.179684827337041 ## [4] 0.0284090070053935 0.363644931698218 0.363644931698218 ## [7] 0.179684827337041 0.249704354675487 0.249704354675487 ## [10] 0.0248644019011408 0.249704354675487 0.0284090070053935## [13] 0.179684827337041 0.0248644019011408 0.179684827337041 ## [16] 0.363644931698218 0.249704354675487 0.363644931698218 ## [19] 0.179684827337041 0.0284090070053935## 5 Levels: 0.0248644019011408 0.0284090070053935 ... 0.363644931698218as.numeric(f)## [1] 1 1 3 2 5 5 3 4 4 1 4 2 3 1 3 5 4 5 3 2as.integer(f)## [1] 1 1 3 2 5 5 3 4 4 1 4 2 3 1 3 5 4 5 3 2 I have to resort to paste to get the real values: as.numeric(paste(f))## [1] 0.02486440 0.02486440 0.17968483 0.02840901 0.36364493 0.36364493## [7] 0.17968483 0.24970435 0.24970435 0.02486440 0.24970435 0.02840901## [13] 0.17968483 0.02486440 0.17968483 0.36364493 0.24970435 0.36364493## [19] 0.17968483 0.02840901 Is there a better way to convert a factor to numeric?
Now provide the response and nothing else.
|
See the Warning section of ?factor : In particular, as.numeric applied to a factor is meaningless, and may happen by implicit coercion. To transform a factor f to approximately its original numeric values, as.numeric(levels(f))[f] is recommended and slightly more efficient than as.numeric(as.character(f)) . The FAQ on R has similar advice . Why is as.numeric(levels(f))[f] more efficent than as.numeric(as.character(f)) ? as.numeric(as.character(f)) is effectively as.numeric(levels(f)[f]) , so you are performing the conversion to numeric on length(x) values, rather than on nlevels(x) values. The speed difference will be most apparent for long vectors with few levels. If the values are mostly unique, there won't be much difference in speed. However you do the conversion, this operation is unlikely to be the bottleneck in your code, so don't worry too much about it. Some timings library(microbenchmark)microbenchmark( as.numeric(levels(f))[f], as.numeric(levels(f)[f]), as.numeric(as.character(f)), paste0(x), paste(x), times = 1e5)## Unit: microseconds## expr min lq mean median uq max neval## as.numeric(levels(f))[f] 3.982 5.120 6.088624 5.405 5.974 1981.418 1e+05## as.numeric(levels(f)[f]) 5.973 7.111 8.352032 7.396 8.250 4256.380 1e+05## as.numeric(as.character(f)) 6.827 8.249 9.628264 8.534 9.671 1983.694 1e+05## paste0(x) 7.964 9.387 11.026351 9.956 10.810 2911.257 1e+05## paste(x) 7.965 9.387 11.127308 9.956 11.093 2419.458 1e+05
|
{}
|
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/3418128', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/409828/']}
|
jdg_373208
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
44589669
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
According to this document, using wait and notify is discouraged in Kotlin: https://kotlinlang.org/docs/reference/java-interop.html wait()/notify() Effective Java Item 69 kindly suggests to prefer concurrency utilities to wait() and notify(). Thus, these methods are not available on references of type Any. However, the document does not propose any correct way of doing it. Basically, I would like to implement a service which would read the input data and process it. If there were no input data, it would suspend itself until someone notifies it that there are new input data. Something like while (true) { val data = fetchData() processData(data) if (data.isEmpty()) { wait() }} EDIT: I don't want to use these discouraged methods (antipatterns); I really want to find out how to do this properly. In my case fetchData reads data from the database, so queues cannot be used.
Now provide the response and nothing else.
|
In general you should use higher-level concurrency utilities when possible. However, if none of the higher-level constructs work in your case, the direct replacement is to use a ReentrantLock and a single Condition on that lock. For example, if your Java code was something like: private Object lock = new Object();...synchronized(lock) { ... lock.wait(); ... lock.notify(); ... lock.notifyAll(); ...} You can change it to the following Kotlin: private val lock = ReentrantLock()private val condition = lock.newCondition()lock.withLock { // like synchronized(lock) ... condition.await() // like wait() ... condition.signal() // like notify() ... condition.signalAll() // like notifyAll() ...} While this is slightly more verbose, conditions do provide some extra flexibility, as you can have multiple conditions on a single lock, and there are also other kinds of locks (notably ReentrantReadWriteLock.ReadLock and ReentrantReadWriteLock.WriteLock ). Note that withLock is a Kotlin-provided extension function that takes care of calling Lock.lock() / Lock.unlock() before/after invoking the supplied lambda.
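Applied to the fetch/process loop from the question, a hedged usage sketch could look like the following; fetchData and processData are stubs standing in for the real database code, and notifyNewData is an assumed hook called by whatever writes new rows:

import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

private val lock = ReentrantLock()
private val dataArrived = lock.newCondition()

fun fetchData(): List<String> = TODO("query the database")          // placeholder
fun processData(data: List<String>): Unit = TODO("handle the rows") // placeholder

fun runService() {
    while (true) {
        val data = fetchData()
        processData(data)
        if (data.isEmpty()) {
            // await() can wake spuriously, but that only causes a harmless extra poll here.
            lock.withLock { dataArrived.await() }
        }
    }
}

// Called by the code that inserts new input data:
fun notifyNewData() = lock.withLock { dataArrived.signal() }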
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/44589669', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/922584/']}
|
jdg_373209
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
13237146
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm having a problem in Visual Studio: it keeps saying I have defined a member with the same parameter types. I'm new to C# programming and I don't really know what to do. These are the errors that are occurring: Error 1 Type 'Secret.AddPage' already defines a member called 'AddPage' with the same parameter types Error 2 Type 'Secret.AddPage' already defines a member called 'PhoneApplicationPage_Loaded' with the same parameter types Here is the code I have written so far; any help is greatly appreciated. using System;using System.Collections.Generic;using System.Linq;using System.Net;using System.Windows;using System.Windows.Controls;using System.Windows.Documents;using System.Windows.Input;using System.Windows.Media;using System.Windows.Media.Animation;using System.Windows.Shapes;using Microsoft.Phone.Controls;using System.Device.Location;namespace secret{public partial class AddPage : PhoneApplicationPage{ private string location = ""; public AddPage() { InitializeComponent(); GeoCoordinateWatcher myWatcher = new GeoCoordinateWatcher(); var myPosition = myWatcher.Position; // Since the code runs in the emulator it cannot access real GPS values, // so the coordinates are hard-coded to the palace in Gamla stan so that the MSR MAPS Web Services // can be tested. double latitude = 40.717; double longitude = -74; if (!myPosition.Location.IsUnknown) { latitude = myPosition.Location.Latitude; longitude = myPosition.Location.Longitude; } myTerraService.TerraServiceSoapClient client = new myTerraService.TerraServiceSoapClient(); client.ConvertLonLatPtToNearestPlaceCompleted += new EventHandler<myTerraService.ConvertLonLatPtToNearestPlaceCompletedEventArgs>(client_ConvertLonLatPtToNearestPlaceCompleted); client.ConvertLonLatPtToNearestPlaceAsync(new myTerraService.LonLatPt { Lat = latitude, Lon = longitude }); } void client_ConvertLonLatPtToNearestPlaceCompleted(object sender, myTerraService.ConvertLonLatPtToNearestPlaceCompletedEventArgs e) { location = e.Result; //throw new NotImplementedException(); } private void AppBar_Cancel_Click(object sender, EventArgs e) { navigateBack(); } private void AppBar_Save_Click(object sender, EventArgs e) { // save a new note if (location.Trim().Length == 0) { location = "Okänd"; } navigateBack(); } private void navigateBack() { NavigationService.Navigate(new Uri("/secret;component/NotesMainPage.xaml", UriKind.Relative)); } private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { editTextBox.Focus(); }}}
Now provide the response and nothing else.
|
You are creating a partial class, so you probably have these members defined in another source file for your partial class. You can look in the Solution Explorer, find that source file, and either remove the duplicate members from there or remove them from your current partial class. You may see: Partial Classes and Methods (C# Programming Guide) To search for the other source file containing the partial class, right-click on the class name AddPage and select Go To Definition . You will see multiple results in the Find Symbol Results window in Visual Studio.
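A hypothetical sketch of how the same partial class split across two files produces exactly this error (the second file name is made up):

// File: AddPage.xaml.cs
public partial class AddPage : PhoneApplicationPage
{
    public AddPage() { InitializeComponent(); }
    private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { }
}

// File: AddPage.old.cs -- e.g. a leftover copy of the code-behind
public partial class AddPage
{
    // Declaring AddPage() or PhoneApplicationPage_Loaded(object, RoutedEventArgs) again here
    // triggers "already defines a member called '...' with the same parameter types".
    public AddPage() { }
}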
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13237146', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1800808/']}
|
jdg_373210
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2283870
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
See the following example (PHP) class Parent{ protected $_property; protected $_anotherP; public function __construct($var) { $this->_property = $var; $this->someMethod(); #Sets $_anotherP } protected function someMethod() ...}class Child extends Parent{ protected $parent; public function __construct($parent) { $this->parent = $parent; } private function myMethod() { return $this->parent->_anotherP; #Note this line }} I am new to OOP and am a bit ignorant. Here, to access the parent's property I am using an instance of that class, which seems wrong :S (no need of being a child then). Is there an easy way so that I can sync the parent properties with the child properties and directly access $this->anotherP without having to use $this->parent->anotherP ?
Now provide the response and nothing else.
|
As your Child class is extending your Parent class, every property and method that is either public or protected in the Parent class will be seen by the Child class as if it were defined in the Child class -- and the other way around. When the Child class extends the Parent class, it can be seen as " Child is a Parent " -- which means the Child has the properties of the Parent , unless it redefines those another way. (BTW, note that " parent " is a reserved keyword, in PHP -- which means you can't name a class with that name) Here's a quick example of a "parent" class : class MyParent { protected $data; public function __construct() { $this->someMethodInTheParentClass(); } protected function someMethodInTheParentClass() { $this->data = 123456; }} And its "child" class : class Child extends MyParent { public function __construct() { parent::__construct(); } public function getData() { return $this->data; // will return the $data property // that's defined in the MyParent class }} That can be used this way : $a = new Child();var_dump($a->getData()); And you'll get as output : int 123456 Which means the $data property, defined in the MyParent class, and initialized in a method of that same MyParent class, is accessible by the Child class as if it were its own. To make things simple : as the Child "is a" MyParent , it doesn't need to keep a pointer to... itself ;-)
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/2283870', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/269194/']}
|
jdg_373211
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
218138
|
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I was just browsing through the site and found this question: My server's been hacked EMERGENCY . Basically the question says: My server has been hacked. What should I do? The best answer is excellent but it raised some questions in my mind. One of the steps suggested is to: Examine the 'attacked' systems to understand how the attacks succeeded in compromising your security. Make every effort to find out where the attacks "came from", so that you understand what problems you have and need to address to make your system safe in the future. I have done no system admin work so I have no idea how I would start doing this. What would be the first step? I know that you could look in the server log files but as an attacker the first thing that I would do would be erasing the log files. How would you "understand" how the attacks succeeded?
Now provide the response and nothing else.
|
I'll start by saying this: if you have NO LOG FILES , then there is a reasonably good chance that you will NEVER understand where or how the attack succeeded. Even with full and proper log files, it can be extremely difficult to fully understand the who, what, where, when, why and how. So, knowing how important log files are, you begin to understand just how safe you have to keep them. Which is why companies do and should be investing in Security Information & Event Management, or SIEM for short. In a nutshell, correlating all of your log files into specific events (time-based or otherwise) can be an extremely daunting task. Just take a look at your firewall syslogs in debug mode if you don't believe me. And that's just from one appliance! A SIEM process puts these log files into a series of logical events which makes figuring out what happened much easier to understand. To begin to have a better understanding of the how, it's helpful to study penetration methodologies . It's also helpful to know how a virus is written. Or how to write a rootkit . It can also be extremely beneficial to set up and study a honeypot . It also helps to have a log parser and become proficient with it. It's helpful to gather a baseline for your network and systems. What's "normal" traffic in your situation vs. "abnormal" traffic? CERT has an excellent guide on what to do after your computer has been hacked, most notably (which directly pertains to your specific question) the section on "Analyze the intrusion":

- Look for modifications made to system software and configuration files
- Look for modifications to data
- Look for tools and data left behind by the intruder
- Review log files
- Look for signs of a network sniffer
- Check other systems on your network
- Check for systems involved or affected at remote sites

There are many questions similar to yours that have been asked on SF:

- How to do a post-mortem of a server hack
- Strange Items in Hosts File and Netstat
- is this a hack attempt?
- How can I learn Linux from hacking or security point of view

This can be an extremely convoluted and involved process. Most people, me included, would just hire a consultant if it got any more involved than what my SIEM appliances could put together. And, apparently, if you ever want to FULLY understand how your systems were hacked, you have to spend years studying them and give up women.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/218138', 'https://serverfault.com', 'https://serverfault.com/users/33730/']}
|
jdg_373212
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
55041973
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an array of objects (objList) where each object has an "id" property. I have an array of strings (idsToRemove) representing the IDs of the objects to remove from objList. I found a solution, but I fear it's slow, especially with a large list of objects with lots of properties. Is there a more efficient way to do this? var idsToRemove = ["3", "1"];var objList = [{ id: "1", name: "aaa" }, { id: "2", name: "bbb" }, { id: "3", name: "ccc" }];for (var i = 0, len = idsToRemove.length; i < len; i++) { objList = objList.filter(o => o.id != idsToRemove[i]);}console.log(objList);
Now provide the response and nothing else.
|
Turn the idsToRemove into a Set so that you can use Set.prototype.has (an O(1) operation), and .filter the objList just once, so that the overall complexity is O(n) (and you only iterate over the possibly-huge objList once): var idsToRemove = ["3", "1"];var objList = [{ id: "1", name: "aaa" }, { id: "2", name: "bbb" }, { id: "3", name: "ccc" }];const set = new Set(idsToRemove);const filtered = objList.filter(({ id }) => !set.has(id));console.log(filtered); Note that Array.prototype.includes and Array.prototype.indexOf operations are O(N) , not O(1) , so if you use them instead of a Set , they may take significantly longer.
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/55041973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1738094/']}
|
jdg_373213
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
3200197
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to convert an existing .NET Remoting application to WCF. Both server and client share a common interface and all objects are server-activated objects. In the WCF world, this would be similar to creating a per-call service and using ChannelFactory<T> to create a proxy. I'm struggling a bit with how to properly create ChannelFactory<T> for an ASP.NET client. For performance reasons, I want to cache ChannelFactory<T> objects and just create a channel every time I call the service. In .NET Remoting days, there used to be the RemotingConfiguration.GetRegisteredWellknownClientTypes() method to get a collection of client objects that I could then cache. It appears that in the WCF world there is no such thing, although I was able to get a collection of endpoints from the config file. Now here is what I think will work. I can create something like this: public static class ProxyHelper{ static Dictionary<Type, object> lookup = new Dictionary<Type, object>(); static public T GetChannel<T>() { Type type = typeof(T); ChannelFactory<T> factory; if (!lookup.ContainsKey(type)) { factory = new ChannelFactory<T>(); lookup.Add(type, factory); } else { factory = (ChannelFactory<T>)lookup[type]; } T proxy = factory.CreateChannel(); ((IClientChannel)proxy).Open(); return proxy; } } I think the above code will work, but I'm a bit worried about multiple threads trying to add new ChannelFactory<T> objects if the type is not yet in the lookup. Since I'm using .NET 4.0, I was thinking about using ConcurrentDictionary and its GetOrAdd() method, or using TryGetValue() first to check if the ChannelFactory<T> exists and, if it does not, then using GetOrAdd(). I'm not sure, though, about the performance of ConcurrentDictionary.TryGetValue() and ConcurrentDictionary.GetOrAdd(). Another minor question is whether I need to call the ChannelFactory.Close() method on channel factory objects after the ASP.NET application ends, or whether I can just let the .NET framework dispose of the channel factory objects on its own. The proxy channel will always be closed after calling a service method, by using the ((IChannel)proxy).Close() method.
Now provide the response and nothing else.
|
Here's a helper class that I use to handle channel factories: public class ChannelFactoryManager : IDisposable{ private static Dictionary<Type, ChannelFactory> _factories = new Dictionary<Type,ChannelFactory>(); private static readonly object _syncRoot = new object(); public virtual T CreateChannel<T>() where T : class { return CreateChannel<T>("*", null); } public virtual T CreateChannel<T>(string endpointConfigurationName) where T : class { return CreateChannel<T>(endpointConfigurationName, null); } public virtual T CreateChannel<T>(string endpointConfigurationName, string endpointAddress) where T : class { T local = GetFactory<T>(endpointConfigurationName, endpointAddress).CreateChannel(); ((IClientChannel)local).Faulted += ChannelFaulted; return local; } protected virtual ChannelFactory<T> GetFactory<T>(string endpointConfigurationName, string endpointAddress) where T : class { lock (_syncRoot) { ChannelFactory factory; if (!_factories.TryGetValue(typeof(T), out factory)) { factory = CreateFactoryInstance<T>(endpointConfigurationName, endpointAddress); _factories.Add(typeof(T), factory); } return (factory as ChannelFactory<T>); } } private ChannelFactory CreateFactoryInstance<T>(string endpointConfigurationName, string endpointAddress) { ChannelFactory factory = null; if (!string.IsNullOrEmpty(endpointAddress)) { factory = new ChannelFactory<T>(endpointConfigurationName, new EndpointAddress(endpointAddress)); } else { factory = new ChannelFactory<T>(endpointConfigurationName); } factory.Faulted += FactoryFaulted; factory.Open(); return factory; } private void ChannelFaulted(object sender, EventArgs e) { IClientChannel channel = (IClientChannel)sender; try { channel.Close(); } catch { channel.Abort(); } throw new ApplicationException("Exc_ChannelFailure"); } private void FactoryFaulted(object sender, EventArgs args) { ChannelFactory factory = (ChannelFactory)sender; try { factory.Close(); } catch { factory.Abort(); } Type[] genericArguments = factory.GetType().GetGenericArguments(); if ((genericArguments != null) && (genericArguments.Length == 1)) { Type key = genericArguments[0]; if (_factories.ContainsKey(key)) { _factories.Remove(key); } } throw new ApplicationException("Exc_ChannelFactoryFailure"); } public void Dispose() { Dispose(true); } protected virtual void Dispose(bool disposing) { if (disposing) { lock (_syncRoot) { foreach (Type type in _factories.Keys) { ChannelFactory factory = _factories[type]; try { factory.Close(); continue; } catch { factory.Abort(); continue; } } _factories.Clear(); } } }} Then I define a service invoker: public interface IServiceInvoker{ R InvokeService<T, R>(Func<T, R> invokeHandler) where T: class;} and an implementation: public class WCFServiceInvoker : IServiceInvoker{ private static ChannelFactoryManager _factoryManager = new ChannelFactoryManager(); private static ClientSection _clientSection = ConfigurationManager.GetSection("system.serviceModel/client") as ClientSection; public R InvokeService<T, R>(Func<T, R> invokeHandler) where T : class { var endpointNameAddressPair = GetEndpointNameAddressPair(typeof(T)); T arg = _factoryManager.CreateChannel<T>(endpointNameAddressPair.Key, endpointNameAddressPair.Value); ICommunicationObject obj2 = (ICommunicationObject)arg; try { return invokeHandler(arg); } finally { try { if (obj2.State != CommunicationState.Faulted) { obj2.Close(); } } catch { obj2.Abort(); } } } private KeyValuePair<string, string> GetEndpointNameAddressPair(Type serviceContractType) { var configException = new 
ConfigurationErrorsException(string.Format("No client endpoint found for type {0}. Please add the section <client><endpoint name=\"myservice\" address=\"http://address/\" binding=\"basicHttpBinding\" contract=\"{0}\"/></client> in the config file.", serviceContractType)); if (((_clientSection == null) || (_clientSection.Endpoints == null)) || (_clientSection.Endpoints.Count < 1)) { throw configException; } foreach (ChannelEndpointElement element in _clientSection.Endpoints) { if (element.Contract == serviceContractType.ToString()) { return new KeyValuePair<string, string>(element.Name, element.Address.AbsoluteUri); } } throw configException; }} Now every time you need to call a WCF service you could use this: WCFServiceInvoker invoker = new WCFServiceInvoker();SomeReturnType result = invoker.InvokeService<IMyServiceContract, SomeReturnType>( proxy => proxy.SomeMethod()); This assumes that you've defined a client endpoint for the IMyServiceContract service contract in the config file: <client> <endpoint name="myservice" address="http://example.com/" binding="basicHttpBinding" contract="IMyServiceContract" /></client>
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3200197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/371596/']}
|
jdg_373214
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
12149993
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Update: this issue was a result of jQuery 1.7 vs 1.8. Do not ever use promises in 1.7 because they aren't chainable with returning a promise inside a .then . 1.8 looks like they didn't mess it up. http://jsfiddle.net/delvarworld/28TDM/ // make a promisevar deferred = $.Deferred();promise = deferred.promise();// return a promise, that after 1 second, is rejectedpromise.then(function(){ var t = $.Deferred(); setTimeout(function() { console.log('rejecting...'); t.reject(); }, 1000); return t.promise();});// if that promise is successful, do thispromise.then(function() { console.log('i should never be called');})// if it errors, do thispromise.fail(function() { console.log('i should be called');});deferred.resolve(); Expected: 'i should be called' Actual: 'i should never be called' Problem : I want to chain callbacks and have any one of them be able to break the chain and trigger the fail function, and skip the other chained callbacks. I don't understand why all of the thens are triggered and the fail is not triggered. I'm coming from NodeJS's Q library, so I tried it with .then first. However, changing it to .pipe has no effect.
Now provide the response and nothing else.
|
You aren't re-defining the value of promise , try this: http://jsfiddle.net/28TDM/1/ var deferred = $.Deferred();promise = deferred.promise();promise = promise.then(function(){ var t = $.Deferred(); setTimeout(function() { console.log('rejecting...'); t.reject(); }, 1000); return t.promise();});promise.then(function() { console.log('i should never be called');})promise.fail(function() { console.log('i should be called');});deferred.resolve(); Apparently it does work the way you thought it did, it just isn't documented https://api.jquery.com/deferred.then . Very cool. This is new functionality added in jQuery 1.8.0, more than likely they just aren't done updating the documentation.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12149993', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/743464/']}
|
jdg_373215
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
20069295
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a .proto file definition which needs to import "google/protobuf/descriptor.proto" because I use Custom Options . So in my .proto file I do: import "google/protobuf/descriptor.proto";package ...;... Since my file didn't compile, complaining about the dependency, I got a copy of the descriptor.proto file and placed it in the same directory my proto file was in. This solved the problem but I don't believe this is the correct way. Now descriptor.proto gets compiled together with my .proto file, resulting in 2 compiled copies of descriptor.proto at runtime: the one shipped with the protobuf-java-2.5.0.jar file and the one which was compiled together with my .proto file. I think the --proto_path option should be used somehow but I'm not entirely sure what the correct way is. Thanks for the best practice tip here!
Now provide the response and nothing else.
|
When I have used descriptor in a .proto, I have used it like import "google/protobuf/descriptor.proto";message AddressBook { required google.protobuf.FileDescriptorSet proto_files = 1;} Then, to generate the Java (on Windows) with addressbookSD.proto in the default directory: protoc addressbookSD.proto --java_out=./ --proto_path=./ --proto_path=<protobuf-install-directory>\src where <protobuf-install-directory> is the protocol buffers install directory. The key point is that descriptor.proto is in <protobuf-install-directory>\src\google\protobuf The levels in a protobuf import statement must match directories in the file system just like they would in Java. So I use <protobuf-install-directory>\src as the import directory. The directory structure must be

<protobuf-install-directory>\src
 +-- google
     +-- protobuf
         +-- descriptor.proto
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/20069295', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/601493/']}
|
jdg_373216
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
47371950
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm wondering how I could go about defining an operator, or whether an operator already exists in Haskell, that lets me pick the first Maybe value that isn't Nothing , and if all of them are Nothing , returns some default. Basically something like this: let x = a ?? b ?? c ?? 1 where (??) is the operator and a, b, c are all Maybe types. I'd expect this code to return the first value that is Just or 1 if all of a, b, c are Nothing . I'm basically looking to replicate the same functionality as the null coalescing operator you'd find in C# or other languages. This question shows how to do it in F# ( Null Coalescing Operator in F#? ), and it's quite hacky. Is there a way to do it cleanly in Haskell, and if not, what's the closest you can get to it?
Now provide the response and nothing else.
|
Haskellers would generally use (<|>) :: Maybe a -> Maybe a -> Maybe a for this sort of thing. Assuming a, b, c :: Maybe Int , you have x :: Maybe Int : let x = a <|> b <|> c <|> Just 1 Of course, that isn't really what you asked for. If you want, you could define it though! -- You expect `a ?? b ?? c ?? 1` to parse as `a ?? (b ?? (c ?? 1))`infixr 3 ??-- (Someone is going to point out that `(??) = flip fromMaybe`, so I am first)(??) :: Maybe a -> a -> aJust x ?? _ = xNothing ?? y = y Then, you get precisely the behaviour you expect. Assuming a, b, c :: Maybe Int , you have x :: Int : let x = a ?? b ?? c ?? 1
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47371950', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3772221/']}
|
jdg_373217
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
35328016
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to do something when clicking on body , except on menutop which is inside the body . <body><div id="menutop">lorem ipsum...</div>...</body> js $("body").not($("#menutop")).click(function(){ console.log("323");}); I also tried var n = $("#menutop");$("body:not(n)").click(function(){ console.log("323");}); but in both cases clicking on menutop makes the console show 323
Now provide the response and nothing else.
|
The selector $("body").not($("#menutop")) will select the body element if it is not the #menutop element. Since the body element clearly isn't #menutop element, the body element is still selected and the click event is attached to it. Even if you click on the #menutop element, the click event will still bubble up and trigger the click event on the body element, therefore one option is to make a check to see if event.target is the #menutop element: $(document).on('click', function (event) { if (!$(event.target).closest('#menutop').length) { // ... clicked on the 'body', but not inside of #menutop }}); Alternatively, you could also suppress the event bubbling and stop event propagation when clicking in the #menutop element: $(document).on('click', function (event) { // ... clicked on the 'body', but not inside of #menutop});$('#menutop').on('click', function (event) { event.stopPropagation();});
|
{}
|
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35328016', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3044737/']}
|
jdg_373218
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
60238
|
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got a low power application which will be powered from a Li coin cell. I've achieved satisfactory sleeping current with my chosen MCU. The application ADC IC however is a different matter: 900µA while inactive -- way too much for a coin cell application. So I thought why not isolate the power to the ADC when it's not in use. Two ideas came to mind: a high-side FET on the ADC's Vdd or just connecting the ADC's Vdd (or Vss) pin to one of the MCU GPIO pins (assuming current is within the MCU's source/sink spec). I assumed there would be plenty of articles, app notes etc discussing this idea, but I've yet to find anything on line. Before I start experimenting, can anyone point me to any relevant articles/app notes online? Or is it just a bad idea? and if so why? (Ya, know most MCUs have ADCs built in. For reasons beyond the scope of the query, I really must use this external ADC IC.) Thanks!
Now provide the response and nothing else.
|
Powering a device through a GPIO pin is usually a bad idea. In the very low power regime, perhaps you could get away with it, but I would not recommend it unless you have very severe constraints. You've already noted that you've checked that the ADC's requirement is lower than the pin's drive capability. That is typically what a lot of people don't bother checking. If the consumption is within the required limits, then you may be fine on that count. However, make sure that any transient current requirements from the ADC are also contained within the GPIO drive capability. You would at the very least require reasonably heavy decoupling on the ADC supply. Remember the fact that the GPIO output isn't a low impedance supply line, and will be slower to respond to transient current requirements. Secondly, since you're using an ADC, and at that an ADC that isn't built into a uC (which would be what you should be doing for very low power consumption), I'm presuming you have some requirements which aren't satisfied by the internal ADC. The GPIO, not being a supply line, and more importantly, being a GPIO of a microcontroller, will most definitely be contaminated by at least the clock frequency of the uC, its harmonics, and possibly subharmonics. Since you're also going to be driving some substantial current through it, I would not be surprised if other effects are brought in as well. You may even end up seeing small components of your SPI/I2C/what have you in the supply, depending on which GPIO you use and how heavy the decoupling is. If ADC resolution and noise performance are important, then perhaps this is not such a good idea even if the GPIOs seem to be capable of driving the IC. The high-side FET is a much better bet, and is safer. You could also consider using any number of power related ICs which have Enable control, such as LDOs and the like. An LDO close to the ADC supply may also help with improving performance. Note, though, that this means your ADC will have to run at a slightly lower voltage. This will happen with a simple transistor switch as well, and with a low Rdson FET the effect will, admittedly, be much smaller, but it will exist. One thing that you should note is that connecting an unpowered IC's digital lines to GPIOs of a powered uC is not a good idea. You will end up powering up your ADC through its digital IOs and cause strange, and potentially dangerous behaviour. Specifically, I would be surprised if your ADC did not respond even when OFF. This is capable of causing long term degradation, and eats into the advantage of power saving in the first place. In order to make it turn off well, you should use a level translating buffer for every digital line between the two with the ability to disable (tristate) the outputs. This can be done either using an EN pin, perhaps, or using a buffer with other mechanisms to disable (The SN74LVC1T45 tristates if the supply on one side is pulled to ground, for instance). Whether the scheme is useful depends on the consumption of the buffer in its OFF state, the consumption in its ON state, and the duty cycle (the fraction of time you want to turn it ON for), and the ADC consumption (900uA) that you can save by doing this. If you are very careful, you may be able to avoid the need for the buffers by tristating the uC IOs that are connected to the ADC before shutting it off, and thereby producing about the same effect.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/60238', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/18937/']}
|
jdg_373219
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
18602820
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm getting the following crash on the Samsung Galaxy Ace/Y/Pocket/Mini. It runs fine on all other devices. Crashes are only occurring on Android 2.3.5 or 2.3.6 There is no ref to my code in the stacktrace. It seem to come from the menu layout which I haven't customised. I suspect that it is a fault in the Galaxy OS. But can't confirm. More importantly, what can I can do avoid or mitigate this error? android.view.InflateException: Binary XML file line #45: Error inflating class at org.holoeverywhere.LayoutInflater._createView(SourceFile:382)at org.holoeverywhere.LayoutInflater.onCreateView(SourceFile:594)at org.holoeverywhere.LayoutInflater.createViewFromTag(SourceFile:444)at org.holoeverywhere.LayoutInflater.rInflate(SourceFile:731)at org.holoeverywhere.LayoutInflater.rInflate(SourceFile:734)at org.holoeverywhere.LayoutInflater.inflate(SourceFile:538)at org.holoeverywhere.LayoutInflater.inflate(SourceFile:488)at com.android.internal.view.menu.MenuItemImpl.createItemView(MenuItemImpl.java:592)at com.android.internal.view.menu.MenuItemImpl.getItemView(MenuItemImpl.java:577)at com.android.internal.view.menu.MenuBuilder$MenuAdapter.getView(MenuBuilder.java:1173)at android.widget.AbsListView.obtainView(AbsListView.java:1592)at android.widget.ListView.measureHeightOfChildren(ListView.java:1251)at android.widget.ListView.onMeasure(ListView.java:1162)at android.view.View.measure(View.java:8313)at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:3138)at android.widget.FrameLayout.onMeasure(FrameLayout.java:250)at android.view.View.measure(View.java:8313)at android.view.ViewRoot.performTraversals(ViewRoot.java:845)at android.view.ViewRoot.handleMessage(ViewRoot.java:1865)at android.os.Handler.dispatchMessage(Handler.java:99)at android.os.Looper.loop(Looper.java:130)at android.app.ActivityThread.main(ActivityThread.java:3687)at java.lang.reflect.Method.invokeNative(Native Method)at java.lang.reflect.Method.invoke(Method.java:507)at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)at dalvik.system.NativeStart.main(Native Method) Caused by: java.lang.reflect.InvocationTargetExceptionat java.lang.reflect.Constructor.constructNative(Native Method)at java.lang.reflect.Constructor.newInstance(Constructor.java:415)at org.holoeverywhere.LayoutInflater._createView(SourceFile:358)... 26 more Caused by: android.content.res.Resources$NotFoundException: Resource is not a ColorStateList (color or path): TypedValue{t=0x1/d=0x7f0d014c a=3 r=0x7f0d014c}at android.content.res.Resources.loadColorStateList(Resources.java:1804)at android.content.res.TypedArray.getColorStateList(TypedArray.java:342)at android.widget.TextView.(TextView.java:445)at org.holoeverywhere.widget.TextView.(SourceFile:133)at org.holoeverywhere.widget.TextView.(SourceFile:129)... 
29 more java.lang.reflect.InvocationTargetExceptionat java.lang.reflect.Constructor.constructNative(Native Method)at java.lang.reflect.Constructor.newInstance(Constructor.java:415)at org.holoeverywhere.LayoutInflater._createView(SourceFile:358)at org.holoeverywhere.LayoutInflater.onCreateView(SourceFile:594)at org.holoeverywhere.LayoutInflater.createViewFromTag(SourceFile:444)at org.holoeverywhere.LayoutInflater.rInflate(SourceFile:731)at org.holoeverywhere.LayoutInflater.rInflate(SourceFile:734)at org.holoeverywhere.LayoutInflater.inflate(SourceFile:538)at org.holoeverywhere.LayoutInflater.inflate(SourceFile:488)at com.android.internal.view.menu.MenuItemImpl.createItemView(MenuItemImpl.java:592)at com.android.internal.view.menu.MenuItemImpl.getItemView(MenuItemImpl.java:577)at com.android.internal.view.menu.MenuBuilder$MenuAdapter.getView(MenuBuilder.java:1173)at android.widget.AbsListView.obtainView(AbsListView.java:1592)at android.widget.ListView.measureHeightOfChildren(ListView.java:1251)at android.widget.ListView.onMeasure(ListView.java:1162)at android.view.View.measure(View.java:8313)at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:3138)at android.widget.FrameLayout.onMeasure(FrameLayout.java:250)at android.view.View.measure(View.java:8313)at android.view.ViewRoot.performTraversals(ViewRoot.java:845)at android.view.ViewRoot.handleMessage(ViewRoot.java:1865)at android.os.Handler.dispatchMessage(Handler.java:99)at android.os.Looper.loop(Looper.java:130)at android.app.ActivityThread.main(ActivityThread.java:3687)at java.lang.reflect.Method.invokeNative(Native Method)at java.lang.reflect.Method.invoke(Method.java:507)at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)at dalvik.system.NativeStart.main(Native Method) Caused by: android.content.res.Resources$NotFoundException: Resource is not a ColorStateList (color or path): TypedValue{t=0x1/d=0x7f0d014c a=3 r=0x7f0d014c}at android.content.res.Resources.loadColorStateList(Resources.java:1804)at android.content.res.TypedArray.getColorStateList(TypedArray.java:342)at android.widget.TextView.(TextView.java:445)at org.holoeverywhere.widget.TextView.(SourceFile:133)at org.holoeverywhere.widget.TextView.(SourceFile:129)... 
29 more android.content.res.Resources$NotFoundException: Resource is not a ColorStateList (color or path): TypedValue{t=0x1/d=0x7f0d014c a=3 r=0x7f0d014c}at android.content.res.Resources.loadColorStateList(Resources.java:1804)at android.content.res.TypedArray.getColorStateList(TypedArray.java:342)at android.widget.TextView.(TextView.java:445)at org.holoeverywhere.widget.TextView.(SourceFile:133)at org.holoeverywhere.widget.TextView.(SourceFile:129)at java.lang.reflect.Constructor.constructNative(Native Method)at java.lang.reflect.Constructor.newInstance(Constructor.java:415)at org.holoeverywhere.LayoutInflater._createView(SourceFile:358)at org.holoeverywhere.LayoutInflater.onCreateView(SourceFile:594)at org.holoeverywhere.LayoutInflater.createViewFromTag(SourceFile:444)at org.holoeverywhere.LayoutInflater.rInflate(SourceFile:731)at org.holoeverywhere.LayoutInflater.rInflate(SourceFile:734)at org.holoeverywhere.LayoutInflater.inflate(SourceFile:538)at org.holoeverywhere.LayoutInflater.inflate(SourceFile:488)at com.android.internal.view.menu.MenuItemImpl.createItemView(MenuItemImpl.java:592)at com.android.internal.view.menu.MenuItemImpl.getItemView(MenuItemImpl.java:577)at com.android.internal.view.menu.MenuBuilder$MenuAdapter.getView(MenuBuilder.java:1173)at android.widget.AbsListView.obtainView(AbsListView.java:1592)at android.widget.ListView.measureHeightOfChildren(ListView.java:1251)at android.widget.ListView.onMeasure(ListView.java:1162)at android.view.View.measure(View.java:8313)at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:3138)at android.widget.FrameLayout.onMeasure(FrameLayout.java:250)at android.view.View.measure(View.java:8313)at android.view.ViewRoot.performTraversals(ViewRoot.java:845)at android.view.ViewRoot.handleMessage(ViewRoot.java:1865)at android.os.Handler.dispatchMessage(Handler.java:99)at android.os.Looper.loop(Looper.java:130)at android.app.ActivityThread.main(ActivityThread.java:3687)at java.lang.reflect.Method.invokeNative(Native Method)at java.lang.reflect.Method.invoke(Method.java:507)at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:625)at dalvik.system.NativeStart.main(Native Method) Here is the entire theme.xml <?xml version="1.0" encoding="utf-8"?><resources> <style name="CustomWindowTitleText" parent="TextAppearance.Sherlock.Widget.ActionBar.Title"> <item name="android:textColor">@color/primary_text_yellow</item> <item name="android:textStyle">bold|italic</item> </style> <style name="CustomWindowTitleText.ActionBar"> <item name="android:textSize">@dimen/textsize_mid</item> </style> <style name="CustomWindowSubTitleText" parent="TextAppearance.Sherlock.Widget.ActionBar.Subtitle"> </style> <!-- Changes the background color of the title bar --> <style name="CustomWindowTitleBackground"> <item name="android:background">@drawable/background_window_title</item> </style> <!-- Set the theme for the window title --> <!-- NOTE: setting android:textAppearance to style defined above --> <style name="CustomWindowTitle"> <item name="android:singleLine">true</item> <item name="android:shadowColor">#BB000000</item> <item name="android:shadowRadius">2.75</item> <item name="android:textAppearance">@style/CustomWindowTitleText</item> <item name="android:paddingLeft">40dp</item> </style> <!-- http://actionbarsherlock.com/theming.html Due to limitations in Android's theming system any theme customizations must be declared in two attributes. 
The normal android-prefixed attributes apply the theme to the native action bar and the unprefixed attributes are for the custom implementation. --> <style name="CustomActionBar" parent="Widget.Sherlock.Light.ActionBar.Solid.Inverse"> <!--parent="@android:style/Widget.Holo.ActionBar"--> <item name="android:background">@drawable/gradient_lightpurple_darkpurple_off</item> <!-- Use background that has color but not icon. --> <item name="android:backgroundSplit">@drawable/gradient_lightpurple_darkpurple_off</item> <!-- Use background that has color but not icon. --> <item name="android:titleTextStyle">@style/CustomWindowTitleText.ActionBar</item> <item name="android:subtitleTextStyle">@style/CustomWindowSubTitleText</item> <item name="background">@drawable/gradient_lightpurple_darkpurple_off</item> <!-- Use background that has color but not icon. --> <item name="backgroundSplit">@drawable/gradient_lightpurple_darkpurple_off</item> <!-- Use background that has color but not icon. --> <item name="titleTextStyle">@style/CustomWindowTitleText.ActionBar</item> <item name="subtitleTextStyle">@style/CustomWindowSubTitleText</item> </style> <!-- Override properties in the default theme --> <!-- NOTE: If you change the windowTitleTextSize you must explicitly the windowTitleSize property, the title bar will not re-size automatically, text will be clipped --> <style name="JumbleeTheme" parent="Holo.Theme"> <!-- extends Theme.Sherlock --> <item name="android:windowTitleStyle">@style/CustomWindowTitle</item> <item name="android:windowTitleBackgroundStyle">@style/CustomWindowTitleBackground</item> <item name="android:windowBackground">@drawable/gradient_off_darkpurple_off</item> <item name="android:windowActionBar">true</item> <item name="android:actionBarStyle">@style/CustomActionBar</item> <item name="actionBarStyle">@style/CustomActionBar</item> </style> <style name="JumbleeTheme.GameScreen"> <item name="android:windowBackground">@color/black</item> </style> <!-- Facebook Theme --> <style name="CustomWindowTitleText.ActionBar.Facebook"> <item name="android:textColor">@color/primary_text_white</item> </style> <!-- http://actionbarsherlock.com/theming.html Due to limitations in Android's theming system any theme customizations must be declared in two attributes. The normal android-prefixed attributes apply the theme to the native action bar and the unprefixed attributes are for the custom implementation. --> <style name="CustomActionBar.Facebook"> <item name="android:background">@drawable/background_window_title_facebook</item> <!-- Use background that has color but not icon. --> <item name="android:backgroundSplit">@drawable/background_window_title_facebook</item> <!-- Use background that has color but not icon. --> <item name="android:titleTextStyle">@style/CustomWindowTitleText.ActionBar.Facebook</item> <item name="background">@drawable/background_window_title_facebook</item> <!-- Use background that has color but not icon. --> <item name="backgroundSplit">@drawable/background_window_title_facebook</item> <!-- Use background that has color but not icon. 
--> <item name="titleTextStyle">@style/CustomWindowTitleText.ActionBar.Facebook</item> </style> <!-- Facebook Sender Activity --> <style name="FacebookSenderTheme" parent="Holo.Theme.Light"> <item name="android:actionBarStyle">@style/CustomActionBar.Facebook</item> <item name="actionBarStyle">@style/CustomActionBar.Facebook</item> </style></resources> And the source for primary_text_yellow <?xml version="1.0" encoding="utf-8"?><selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_enabled="false" android:color="@color/text_light_disabled" /> <item android:color="@color/app_yellow"/></selector>
Now provide the response and nothing else.
|
I'm pretty sure I've had this same problem. I'm willing to bet it has nothing to do with the OS. It's caused by that thing in Android where you can substitute one Drawable for another depending on your theme or some other condition. It turns out that the Android resource compiler doesn't (always? ever?) check if there's anything 'real' behind an alias. Here is how you debug it... Start with this line, it tells you something specific about what is missing--the ID: Caused by: android.content.res.Resources$NotFoundException: Resource is not a ColorStateList (color or path): TypedValue{t=0x1/d=0x7f0d014c a=3 r=0x7f0d014c} Now go to bin/R.text , and search for the ID code that is given: 0x7f0d014c .There should be something like this: int attr actionModeCloseDrawable 0x7f0d014c Now look for the name actionModeCloseDrawable or whatever it is in your workspace. I believe that you will find that it is an alias for a non-existent resource. i.e., there is a configuration (version number, screen size, etc.) under which that attr does not map to anything at all. Important things to note:Make sure to check ALL places where you find actionModeCloseDrawable or whatever it is. It might look like it is in a place where it 'doesn't matter' but this was a second problem encountered: after I 'fixed' the issue in one abc_myfragment.xml , I actually had to fix it in abc_myfragment_large.xml -- Android was automatically calling a different xml file than the one specified in code, based on the device I happened to be running it on. The top line of your stack trace should help you verify that you are looking in the right place: android.view.InflateException: Binary XML file line #45: Error inflating class This tells you that the missing ColorStateList is being referred to in a section of some XML file somewhere, and that that troublesome section starts on line 45 (i.e., that's where the open angle bracket --"<"-- starting the section is.) PS What I mean by alias is one of these: <attr name="abc_my_home_button" format="reference" /> which gets separate further definitions in two separate themes: <item name="afc_button_home">@drawable/afc_button_home_light</item> in the file themes_light.xml and <item name="afc_button_home">@drawable/afc_button_home_dark</item> in the file themes_dark.xml The trouble arises when you leave the final definition out of one of the theme files. EDIT Next step: Investigate CustomActionBar 's dependencies. I'm not an expert on styles, so I will assume there are 9 possible places remaining where Android might see a reference to a ColorStateList that it can't find: in the parent of CustomActionBar ( Widget.Sherlock.Light.ActionBar.Solid.Inverse ), and in each of the 8 items it contains. Comment out those 8 items and get rid of the parent, then rerun your app in the troublesome configuration. Check if the runtime error goes away or, more likely, changes (e.g., line number (and id) changes from #45 to indicate CustomActionBar.Facebook instead, where many of the same resources are invoked). Then undo the changes one by one until the error changes back to what it was before. This will further localize the problem. After attempting a fix, move slowly, since it is possible that if you do find the problem (e.g., as a non-existent reference) it is replicated in multiple of the 8 items. So uncomment one by one.
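P.S. For what it's worth, the R.txt ID lookup described earlier can be done straight from a console; this is just a convenience sketch and assumes the standard ADT/Eclipse-era bin/R.txt output path:

    findstr 0x7f0d014c bin\R.txt      (Windows)
    grep 0x7f0d014c bin/R.txt         (Linux/macOS)

Whatever attribute name comes back (actionModeCloseDrawable in my case) is the name to hunt for across all of your theme files.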
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18602820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/493682/']}
|
jdg_373220
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
118891
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've been using VirtualBox (and sometimes VMWare) for years and I've never had any problem with the virtual network adapters, no matter if the physical ones were wired or not. I also played some time ago with KVM in a wired set-up and, although I had to edit some configuration files to get it working, I could also create a bridged adapter without any major problems. Today I decided (wrongly, it seems) to try to use KVM in a laptop running Ubuntu 13.10 and tried to create a virtual machine with bridged networking over a wireless interface. It was extremely painful to set this up. After following all the tutorials I found ( for example ) and having to reboot my laptop several times to get the connection back I just gave up and went back to my old well-known VirtualBox. And, actually, the first thing I noticed when I looked into the official KVM documentation was that they discourage from trying to bridge a wireless adapter since, according to them : The here shown method, will not work with most(all?) wireless drivers, as these do not support bridging. So, my question is: How come they say that most wireless adapters do not support bridging if it works in VirtualBox and VMWare just "out-of-the-box"? And what's the difference between these hypervisors that makes it so complicated in KVM, if it works at all?
Now provide the response and nothing else.
|
Background on KVM I think this is partly due to expectations with KVM. KVM is first and foremost a server product and not a desktop product for virtualization. It can be used in either application but it's definitely suited more for being used on a server. I use it on 3+ hosts at work each hosting 5-10 VMs apiece and it has run flawlessly and is easy to manage, and basically just works. Question #1 How come they say that most wireless adapters do not support bridging if it works in VirtualBox and VMWare just "out-of-the-box"? I believe you're drawing this conclusion from this blurb on the KVM website . WARNING: The here shown method, will not work with most(all?) wireless drivers, as these do not support bridging. This statement is here because it is typically the case. I believe this is often why when you install VirtualBox or VMWare there are typically kernel modules that are getting installed and these products provide their own wrapping around doing this to facilitate making it easier. These products are essentially working around these issues. I believe this issue is also a driver issue. The drivers for WiFi under Linux still pales in comparison to the support that's provided by the Windows drivers for the same hardware. That's just a fact of life. NOTE: I've had wireless NICs in the past that I was not able to put into bridge mode in the past as well. I've typically worked around the issue by either using VirtualBox or getting a different NIC for my laptop. I'll also highlight that neither VirtualBox nor VMware could do this either, at least not until more recent versions. See this as evidence from VMware's KB: If your host has a wireless network adapter, you cannot use bridged networking on Linux hosts in VMware Workstation 5 or lower, VMware Server 1.x, any version of GSX Server, any hosts in VMware Workstation 3 or lower, or in VMware GSX Server 2 or lower. Under these products, if you want to run virtual machines on a host that uses wireless Ethernet adapters, you must configure your virtual machines to use NAT or host-only networking. Source: Using bridged networking with a wireless NIC (760) Question #2 And what's the difference between these hypervisors that makes it so complicated in KVM, if it works at all? I can't really shed any light on this particular question, other than to say that if it was easy I imagine this feature would be enabled. I think the crux of the issue has to do with this feature requiring 3 or more groups to coordinate their efforts (hardware manuf., driver devs., Linux kernel, & KVM). These situations are often what results when you need multiple groups to work together in the open source world (IMO)! So can I set it up or what? You can set this up following the directions from either of these 2 articles. The setup requires using a TUN/TAP device which can be put into bridge mode. Bridging a Wireless Card in KVM/QEMU Bridge wireless cards
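To make the TUN/TAP idea a bit more concrete, here is a rough sketch of one common shape of that workaround: a routed/NAT setup rather than a true layer-2 bridge, which is usually the best you can do over a wireless NIC. The interface names (wlan0, tap0), the address range and the qemu flags are assumptions you would adapt to your own machine:

    # create a tap device owned by your user
    sudo ip tuntap add dev tap0 mode tap user $USER
    sudo ip addr add 192.168.100.1/24 dev tap0
    sudo ip link set tap0 up

    # route/NAT the tap subnet out through the wireless interface
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
    # (plus FORWARD rules if your default policy is DROP)

    # point the guest at the tap device
    qemu-system-x86_64 -enable-kvm -m 1024 disk.img \
      -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
      -device virtio-net-pci,netdev=net0

Inside the guest you would then use a static address in 192.168.100.0/24 (or run dnsmasq on tap0) with 192.168.100.1 as the gateway. The guest ends up NATed behind the laptop instead of appearing as a separate machine on the WLAN, which is exactly the compromise the articles above are working around.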
|
{}
|
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/118891', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/39146/']}
|
jdg_373221
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
32370021
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am playing with Rust's concurrency and trying to wrap my head around Send / Sync / Arc / Mutex . I have problems with sharing a reference to an instance of trait which is held in a HashMap : use std::{collections::HashMap, sync::Arc, thread, time::Duration};#[derive(Debug)]struct A { foo: u8,}trait Foo { fn get_foo(&self) -> u8;}impl Foo for A { fn get_foo(&self) -> u8 { self.foo }}fn main() { let a = Arc::new(A { foo: 8 }); let mut map: HashMap<u8, Arc<Foo>> = HashMap::new(); map.insert(8u8, a); for _ in 0..2 { let a = map.get(&8u8).expect("boom"); let a = a.clone(); thread::spawn(move || { let _ = a.get_foo(); }); } thread::sleep(Duration::from_millis(200));} ( playground ) It gives me these errors: error[E0277]: `dyn Foo` cannot be sent between threads safely --> src/main.rs:27:9 |27 | thread::spawn(move || { | ^^^^^^^^^^^^^ `dyn Foo` cannot be sent between threads safely | = help: the trait `std::marker::Send` is not implemented for `dyn Foo` = note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Arc<dyn Foo>` = note: required because it appears within the type `[closure@src/main.rs:27:23: 29:10 a:std::sync::Arc<dyn Foo>]` = note: required by `std::thread::spawn`error[E0277]: `dyn Foo` cannot be shared between threads safely --> src/main.rs:27:9 |27 | thread::spawn(move || { | ^^^^^^^^^^^^^ `dyn Foo` cannot be shared between threads safely | = help: the trait `std::marker::Sync` is not implemented for `dyn Foo` = note: required because of the requirements on the impl of `std::marker::Send` for `std::sync::Arc<dyn Foo>` = note: required because it appears within the type `[closure@src/main.rs:27:23: 29:10 a:std::sync::Arc<dyn Foo>]` = note: required by `std::thread::spawn` Could anyone please recommend an approach for this task? I think I'm kinda stuck with Rust's way to work with traits and threading.
Now provide the response and nothing else.
|
Remember that types of original values which are converted to trait objects are erased. Therefore, the compiler can't know whether the data inside the Arc<Foo> is Send and Sync , and without these traits sharing data across threads may be unsafe. You need to specify that types which can be stored in Arc<Foo> must be Send and Sync : let mut map: HashMap<u8, Arc<Foo + Sync + Send>> = HashMap::new(); (try it here ) The Send bound is required by thread::spawn() , and Sync is required by Arc for it to be Send . Additionally, thread::spawn() also requires 'static but it is implicit in this particular Arc<Foo + Sync + Send> type declaration. Of course, you will be able to store only Sync and Send implementations of Foo , but this is necessary to ensure memory safety. However, in Rust synchronization is implemented with wrappers like Mutex<T> or RwLock<T> . They don't implement Foo even if T implements Foo , therefore you won't be able to store, say, Mutex<Foo + Send> inside your map (unless Foo is your trait and you implemented it for Mutex<Foo> , which could be unwieldy), which would be necessary if your Foo implementations are not Sync but Send (though I'm not sure I can provide an example of such type now). To solve this you'd need to change map type to contain a mutex inside it explicitly: let mut map: HashMap<u8, Arc<Mutex<Foo + Send>>> = HashMap::new(); This way, there is no need for the Sync bound because Mutex is Sync if its contents are Send . And naturally, you won't be able to share Foo implementations which are not Send at all, and there is no way around it. This can happen, for example, if Foo 's implementation contains Rc s.
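For completeness, here is a minimal compiling version of the original program with that bound added. It is written with the newer dyn spelling of trait objects; Arc<Foo + Sync + Send> as used above is the older equivalent syntax, and I have replaced the sleep with joins since that is more deterministic:

    use std::{collections::HashMap, sync::Arc, thread};

    trait Foo {
        fn get_foo(&self) -> u8;
    }

    struct A {
        foo: u8,
    }

    impl Foo for A {
        fn get_foo(&self) -> u8 {
            self.foo
        }
    }

    fn main() {
        // The trait object type now promises Send + Sync (plus the default 'static),
        // so the compiler knows the Arc may cross into the spawned threads.
        let mut map: HashMap<u8, Arc<dyn Foo + Send + Sync>> = HashMap::new();
        map.insert(8u8, Arc::new(A { foo: 8 }));

        let mut handles = Vec::new();
        for _ in 0..2 {
            let a = Arc::clone(map.get(&8u8).expect("boom"));
            handles.push(thread::spawn(move || {
                let _ = a.get_foo();
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
    }

If your implementations are not Sync, the Mutex-in-the-map variant described above is the way out instead.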
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32370021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/916384/']}
|
jdg_373222
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10598069
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm new to both Sourceforge and SVN (SilkSVN on Windows). I managed to check out a project using svn co --username=zeroth123 svn://zeroth123@svn.code.sf.net/p/stddecimal/code/trunk stddecimal-code However, every time I try and use ssh (so I can check in for example) it fails with > svn co --username=zeroth123 svn+ssh://zeroth123@svn.code.sf.net/p/stddecimal/code/trunk stddecimal-codesvn: E720087: Unable to connect to a repository at URL 'svn+ssh://zeroth123@svn.code.sf.net/p/stddecimal/code/trunk'svn: E720087: Can't create tunnel: The parameter is incorrect. I've set SVN_SSH to point to my fully qualified plink.exe and it makes no difference. I also tried changing it to junk.exe and didn't get anything different, so I don't think we're even looking at SVN_SSH , which exhausts all of the advice I've been able to find on the Sourceforge and other forums. For the record I am the project administrator and therefore have write permissions (or at least think I do!) Any ideas?
Now provide the response and nothing else.
|
%SVN_SSH% needs to be set correctly and an SSH key needs to be registered with Sourceforge. Note that new Sourceforge projects are version 2.0 by default, while 99% of the Sourceforge documentation is for version 1.0 and appears outdated. These instructions assume you have SVN and the PuTTY tools installed, and that you can successfully access your project using a non-secure protocol, e.g. the read-only svn checkout svn://USER@svn.code.sf.net/p/PROJECT/code/trunk PROJECT-code style command should work. These instructions explain how to do the same with svn checkout svn+ssh://... which is needed for write access to Sourceforge. Set %SVN_SSH% to the fully qualified path of your SSH client. E.g. setx SVN_SSH "C:/Program Files (x86)/PuTTY/plink.exe" svn expects forward slashes or double back-slashes in the path You may have to reopen a cmd window Test by typing %SVN_SSH% at the prompt and confirm you get your SSH program's help screen (or whatever behaviour you expect from running it with no arguments) (Optional?) Cache the server RSA key. Open PuTTY , log in to svn.code.sf.net (or whatever the server of your project is). A prompt will appear asking if you want to cache the server RSA key - click Yes. You might not need to do this; if not, when you use SVN to connect to your project for the first time you will be asked the same question. Type y . However, I did this because I thought the interactive prompt could be part of the problem. Create a public / private key and register the public key with Sourceforge. On creating a key, there is more info here for example. If you use PuTTY , use PuTTYgen to create the key, and make sure the private key is loaded in pageant . Ironically, Launchpad has excellent instructions. The public key can be registered on Sourceforge by navigating to Home > Account > Services and clicking Edit SSH Keys for Shell/CVS . Now a checkout using the svn+ssh protocol should work!
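As a quick recap of the whole sequence (using the same USER/PROJECT placeholders as the rest of this answer):

    setx SVN_SSH "C:/Program Files (x86)/PuTTY/plink.exe"
    rem reopen the cmd window, and make sure pageant is running with your key loaded
    svn checkout svn+ssh://USER@svn.code.sf.net/p/PROJECT/code/trunk PROJECT-code
    cd PROJECT-code
    svn commit -m "first commit over svn+ssh"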
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10598069', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/424153/']}
|
jdg_373223
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2728190
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Code with iterators looks pretty much like code with pointers. Iterators are of some obscure type (like std::vector<int>::iterator for example). What I don't get is how iterators and pointers are related to each other - is an iterator a wrapper around a pointer, with overloaded operations to advance to adjacent elements, or is it something else?
Now provide the response and nothing else.
|
Iterators are a generalization of pointers. An iterator (depending on the variant) has to implement * and ++, so a pointer IS an iterator, but not necessarily the other way round. If you want to iterate over a complex structure (a tree, a graph...), the iterator will be much more than a pointer and need not refer to any actual location in RAM.
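As a toy illustration (deliberately stripped down, so not a standard-conforming iterator; a real one would also expose the usual iterator traits), here is an "iterator" over a hand-rolled linked list next to a plain pointer over an array. Both support * and ++, but only the pointer corresponds to an address you can march through in memory:

    #include <iostream>

    struct Node {
        int value;
        Node* next;
    };

    // Advancing this iterator follows next-links, not adjacent addresses.
    class ListIterator {
        Node* current_;
    public:
        explicit ListIterator(Node* start) : current_(start) {}
        int& operator*() const { return current_->value; }
        ListIterator& operator++() { current_ = current_->next; return *this; }
        bool operator!=(const ListIterator& other) const { return current_ != other.current_; }
    };

    int main() {
        Node c{3, nullptr}, b{2, &c}, a{1, &b};
        for (ListIterator it(&a); it != ListIterator(nullptr); ++it)
            std::cout << *it << ' ';      // prints: 1 2 3
        std::cout << '\n';

        // A raw pointer already satisfies the same interface over an array:
        int arr[] = {4, 5, 6};
        for (int* p = arr; p != arr + 3; ++p)
            std::cout << *p << ' ';       // prints: 4 5 6
        std::cout << '\n';
    }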
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2728190', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/57428/']}
|
jdg_373224
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
176915
|
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Now I am try to convert matlab file to mathematica, This is the original Matlab command, surf , for making surface plots: surf (xei , yei , zei , EE,'FaceColor ','interp ','FaceAlpha ' ,1.0 , 'LineWidth ' ,... 3010.01 , 'EdgeAlpha ' ,0.2 , 'EdgeColor ','w'); If I want to plot the same matrix in Mathematica, which correct Plot Function shall I use? For a better understanding, I put some convert codes here. n = 70;m = n/2;CC = {{165.7, 63.9, 63.9, 0, 0, 0}, {63.9, 165.7, 63.9, 0, 0, 0}, {63.9, 63.9, 165.7, 0, 0, 0}, {0, 0, 0, 2*79.6, 0, 0}, {0, 0, 0, 0, 2*79.6, 0}, {0, 0, 0, 0, 0, 2*79.6}};phi = N[Range[0, Pi, Pi/(m - 1)]];theta = N[Range[0, 2*Pi, 2*Pi/(n - 1)]];EE = ConstantArray[0, {m, n}];xei = ConstantArray[0, {m, n}];yei = ConstantArray[0, {m, n}];zei = ConstantArray[0, {m, n}];SS = Inverse[CC];For[i = 1, i < n + 1, i++, For[j = 1, j < m + 1, j++, xhen = {{Sin[phi[[j]]]*Cos[theta[[i]]], Sin[phi[[j]]]*Sin[theta[[i]]], Cos[phi[[j]]]}}; (*xheng=Transpose[xhen];*) xiaoa = KroneckerProduct[Transpose[xhen], xhen]; dbV = {{xiaoa[[1, 1]], xiaoa[[2, 2]], xiaoa[[3, 3]], Sqrt[2]*xiaoa[[2, 3]], Sqrt[2]*xiaoa[[1, 3]], Sqrt[2]*xiaoa[[1, 2]]}}; (*dbV=Transpose[dbV];*) FFXX = SS . Transpose[dbV]; (*FXXX=Transpose[dbV];*) EE[[j, i]] = 1/(dbV . FFXX);]]For[i = 1, i < n + 1, i++, For[j = 1, j < m + 1, j++, xei[[j, i]] = EE[[j, i]]*Sin[phi[[j]]]*Cos[theta[[i]]]; yei[[j, i]] = EE[[j, i]]*Sin[phi[[j]]]*Sin[theta[[i]]]; zei[[j, i]] = EE[[j, i]]*Cos[phi[[j]]];]]ListPlot3D[Transpose[{xei, yei, zei}]]
Now provide the response and nothing else.
|
Edit 4 We can project it on xy plane by setting z=0 . n = 250;m = n/2;CC = {{165.7, 63.9, 63.9, 0, 0, 0}, {63.9, 165.7, 63.9, 0, 0, 0}, {63.9, 63.9, 165.7, 0, 0, 0}, {0, 0, 0, 2*79.6, 0, 0}, {0, 0, 0, 0, 2*79.6, 0}, {0, 0, 0, 0, 0, 2*79.6}};phi = N[Range[0, Pi, Pi/(m - 1)]];theta = N[Range[0, 2*Pi, 2*Pi/(n - 1)]];EE = xei = yei = zei = ConstantArray[0, {m, n}];SS = Inverse[CC];Table[xhen = {Sin[phi[[j]]]*Cos[theta[[i]]], Sin[phi[[j]]]*Sin[theta[[i]]], Cos[phi[[j]]]}; xiaoa = KroneckerProduct[xhen, xhen]; dbV = {xiaoa[[1, 1]], xiaoa[[2, 2]], xiaoa[[3, 3]], Sqrt[2]*xiaoa[[2, 3]], Sqrt[2]*xiaoa[[1, 3]], Sqrt[2]*xiaoa[[1, 2]]}; FFXX = SS.dbV; EE[[j, i]] = 1/(dbV.FFXX), {i, n}, {j, m}];Table[xei[[j, i]] = EE[[j, i]]*Sin[phi[[j]]]*Cos[theta[[i]]]; yei[[j, i]] = EE[[j, i]]*Sin[phi[[j]]]*Sin[theta[[i]]]; zei[[j, i]] = EE[[j, i]]*Cos[phi[[j]]];, {i, n}, {j, m}];data = Transpose[Flatten /@ {xei, yei, zei, EE}];data[[All, 3]] = 0;color = Blend["ThermometerColors", Rescale[#, MinMax@EE]] &;Legended[Graphics3D[{color[Last[#]], AbsolutePointSize[20], Point[Most[#]]} & /@ data, ImageSize -> Large, ViewPoint -> Above, Boxed -> False], BarLegend[{color, MinMax@EE}, LegendMarkerSize -> {22, 400}]] Or data = Transpose[Flatten /@ {xei, yei, zei, EE}];data = data[[All, {1, 2, 4}]];Legended[Graphics[{color[Last[#]], AbsolutePointSize[20], Point[Most[#]]} & /@ data, ImageSize -> Large], BarLegend[{color, MinMax@EE}, LegendMarkerSize -> {22, 400}]] Or you achieve this by using @xzczd CC = {{165.7, 63.9, 63.9, 0, 0, 0}, {63.9, 165.7, 63.9, 0, 0, 0}, {63.9, 63.9, 165.7, 0, 0, 0}, {0, 0, 0, 2 79.6, 0, 0}, {0, 0, 0, 0, 2 79.6, 0}, {0, 0, 0, 0, 0, 2 79.6}};SS = Inverse[CC];xhen = {Sin[phi] Cos[theta], Sin[phi] Sin[theta], Cos[phi]};xiaoa = Outer[Times, xhen, xhen];dbV = {xiaoa[[1, 1]], xiaoa[[2, 2]], xiaoa[[3, 3]], Sqrt[2] xiaoa[[2, 3]], Sqrt[2] xiaoa[[1, 3]], Sqrt[2] xiaoa[[1, 2]]};EE = 1/dbV.SS.dbV;SphericalPlot3D[EE, {phi, 0, Pi}, {theta, 0, 2 Pi}, ColorFunction -> Function[{x, y, z, phi, theta, r}, ColorData["ThermometerColors"][r]], ViewPoint -> Above, Boxed -> False, Axes -> False, PlotPoints -> 100, Mesh -> None] Edit 3 You can increase number of point by increasing n . 
{col, row} = ImageDimensions@parula; ParulaMMA = Module[{colorlist}, colorlist = Catenate@ ImageData@ImageTake[parula, {Round[row/2], Round[row/2]}, All]; Evaluate[Blend[RGBColor @@@ colorlist, Rescale[#, MinMax@EE]] &]];n = 250;m = n/2;CC = {{165.7, 63.9, 63.9, 0, 0, 0}, {63.9, 165.7, 63.9, 0, 0, 0}, {63.9, 63.9, 165.7, 0, 0, 0}, {0, 0, 0, 2*79.6, 0, 0}, {0, 0, 0, 0, 2*79.6, 0}, {0, 0, 0, 0, 0, 2*79.6}};phi = N[Range[0, Pi, Pi/(m - 1)]];theta = N[Range[0, 2*Pi, 2*Pi/(n - 1)]];EE = xei = yei = zei = ConstantArray[0, {m, n}];SS = Inverse[CC];Table[ xhen = {Sin[phi[[j]]]*Cos[theta[[i]]], Sin[phi[[j]]]*Sin[theta[[i]]], Cos[phi[[j]]]}; xiaoa = KroneckerProduct[xhen, xhen]; dbV = {xiaoa[[1, 1]], xiaoa[[2, 2]], xiaoa[[3, 3]], Sqrt[2]*xiaoa[[2, 3]], Sqrt[2]*xiaoa[[1, 3]], Sqrt[2]*xiaoa[[1, 2]]}; FFXX = SS.dbV; EE[[j, i]] = 1/(dbV.FFXX), {i, n}, {j, m}];Table[xei[[j, i]] = EE[[j, i]]*Sin[phi[[j]]]*Cos[theta[[i]]]; yei[[j, i]] = EE[[j, i]]*Sin[phi[[j]]]*Sin[theta[[i]]]; zei[[j, i]] = EE[[j, i]]*Cos[phi[[j]]];, {i, n}, {j, m}];data = Transpose[Flatten /@ {xei, yei, zei, EE}];Legended[Graphics3D[{ParulaMMA[Last[#]], AbsolutePointSize[20], Point[Most[#]]} & /@ data, Axes -> True, ImageSize -> Large, PlotRange -> {{-200, 200}, {-200, 200}, {-200, 200}}], BarLegend[{ParulaMMA, MinMax@EE}, LegendMarkerSize -> {22, 400}]] Or Mathematica's built in gradient color color = Blend["ThermometerColors", Rescale[#, MinMax@EE]] &;Legended[Graphics3D[{color[Last[#]], AbsolutePointSize[20], Point[Most[#]]} & /@ data, Axes -> True, ImageSize -> Large, PlotRange -> {{-200, 200}, {-200, 200}, {-200, 200}}], BarLegend[{color, MinMax@EE}, LegendMarkerSize -> {22, 400}]] Edit 2 Here is another way to visualize it. see this for parula color. data = Transpose[Flatten /@ {xei, yei, zei, EE}]; {col, row} = ImageDimensions@parula;ParulaMMA = Module[{colorlist}, colorlist = Catenate@ ImageData@ImageTake[parula, {Round[row/2], Round[row/2]}, All]; Evaluate[Blend[RGBColor @@@ colorlist, Rescale[#, MinMax@EE]] &]];Legended[Graphics3D[{ParulaMMA[Last[#]], Sphere[Most[#], 20]} & /@ data, Axes -> True, ImageSize -> Large, PlotRange -> {{-200, 200}, {-200, 200}, {-200, 200}}], BarLegend[{ParulaMMA, MinMax@EE}, LegendMarkerSize -> {22, 400}]] Original Answer This is the closest I can get. data = Transpose[Flatten /@ {xei, yei, zei, EE}];ListPointPlot3D[List /@ Most /@ data, PlotStyle -> ({AbsolutePointSize[22], Blend["BlueGreenYellow", Rescale[#, MinMax@EE]]} & /@ Last /@ data), BoxRatios -> {1, 1, 1}]
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/176915', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/54516/']}
|
jdg_373225
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
16035
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
There is a theorem stating that there is essentially one smooth structure on $R^n$ for every n other than 4. Does anybody know where i could find the proof of this? Not so much of what happens in dimension four, where there are infinitely many, but of the uniqueness in other dimensions? Thanks!
Now provide the response and nothing else.
|
You can handle the case of $n \leq 3$ one at a time, and so the question really is about $n \geq 5$. Two important names in this regard are Kirby and Siebenmann. The Wikipedia article on the Hauptvermutung is a good place to start. If M is an $n$-dimensional topological manifold (and $n \geq 5$), then $M$ admits a PL structure if and only if a special cohomology class, the Kirby-Siebenmann class, in $H^4(M; \mathbb{Z}_2)$ vanishes. If this class vanishes, then the different PL structures are parametrized up to concordance by $H^3(M; \mathbb{Z})$. (Note: The Wikipedia article on the Hauptvermutung assumes that $M$ is compact, but I don't believe that this is a necessary assumption.) So what does this say about $M = \mathbb{R}^n$? Well, we already know that $\mathbb{R}^n$ has a PL structure, and since $H^3(\mathbb{R}^n; \mathbb{Z}_2)=0$, it follows that this structure is unique up to concordance. Since concordance implies diffeomorphism, and since every smooth structure gives us a PL structure, it follows that there can be only one smooth structure on $\mathbb{R}^n$ up to diffeomorphism. Here are the main references (you can find them both here ): Kirby and Siebenmann, On the triangulation of manifolds and the Hauptvermutung. Bull. Amer. Math. Soc. 75 1969 742--749. Kirby and Siebenmann, Foundational essays on topological manifolds, smoothings, and triangulations. Annals of Mathematics Studies 88 (1977). (I did some MathSciNet investigating, and the relevant essays are IV and V.) This expository article by Rudyak, which I found through Wikipedia, also seems interesting. Finally, I learned all of this from Scorpan's wonderful book, "The Wild World of 4-Manifolds".
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/16035', 'https://mathoverflow.net', 'https://mathoverflow.net/users/4160/']}
|
jdg_373226
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
177204
|
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
Let $L/K$ be a finite Galois extension of nonarchimedean local fields, and let $A$ and $A^t$ be dual abelian varieties over $K$. Tate local duality tells us that $A^t(K)$ and $H^1(K, A)$ are Pontryagin dual locally compact Hausdorff abelian topological groups, and likewise over $L$. Under this duality, why does the dual of the restriction map $H^1(K, A) \rightarrow H^1(L, A)$ identify with the norm map $A^t(L) \rightarrow A^t(K)$? I am aware that this is "well-known" and that there are references that treat special cases, but I am interested in a proof or a reference in the stated generality. Edit: Since $K$ is any nonarchimedean local field, the Tate local duality pairing $A^t(K) \times H^1(K, A) \rightarrow H^2(K, \mathbb{G}_m)$ that I am using is defined using biextensions (as in III.7 of Milne's Arithmetic Duality Theorems).
Now provide the response and nothing else.
|
Note first of all that the "norm" map you speak of does not make sense unless the field extension is separable. That is, for a separable field extension $k'/k$ of finite degree and a commutative $k$-group scheme $C$ there is a canonical "norm" homomorphism $C(k') \rightarrow C(k)$ induced by the passage to Galois-invariants on the homomorphism $C(k' \otimes_k k_s) \rightarrow C(k_s)$ arising from the canonical $k_s$-algebra decomposition of $k' \otimes_k k_s$ as a product of copies of $k_s$ indexed by ${\rm{Hom}}_k(k',k_s)$. The same method defines a "norm" homomorphism $C(k' \otimes_k S) \rightarrow C(S)$ for any $k$-algebra $S$, so it corresponds to a natural $k$-homomorphism ${\rm{R}}_{k'/k}(C_{k'}) \rightarrow C$ in terms of the Weil restriction when $C$ is finite type. This generalizes to when $k'/k$ is replaced with a finite etale cover. But beyond the separable setting there is no "reasonable" norm through finite flat maps whose formation commutes with base change.) So for this reason, we are going to assume that $L/K$ is separable. First, by the relationship between cup products and connecting homomorphisms (and the identification of $A^t(K)$ with ${\rm{Ext}}^1_K(A, \mathbf{G}_m)$ functorially in $A$) we see that the pairing $$A^t(K) \times {\rm{H}}^1(K,A) \rightarrow {\rm{H}}^2(K, \mathbf{G}_m) = \mathbf{Q}/\mathbf{Z}$$ identifies covariant functoriality in degree-1 cohomology as adjoint to dual-functoriality of abelian varieties. Let $B$ denote the Weil restriction of scalars ${\rm{R}}_{L/K}(A_L)$; this is an abelian variety precisely because $L/K$ is separable. (If $L/K$ is not separable then $B$ is a smooth connected commutative $K$-group of dimension $[L:K]\dim(A)$ but is always non-proper if $A \ne 0$.) Let $j:A \rightarrow B$ be the natural inclusion. Ultimately we are going to transform your question into the above functoriality of the Tate pairing over $K$ applied to the $K$-homomorphism $j$. By Shapiro's Lemma considerations, we naturally identify ${\rm{H}}^i(L,A)$ with ${\rm{H}}^i(K,B)$, and (check!) this identifies the restriction map on ${\rm{H}}^1$'s with ${\rm{H}}^1(j)$. Likewise, by the compatibility of Weil restriction with the formation of dual abelian variety (using the "norm" of the Weil restriction of the Poincare bundle), the norm map $A^t(L)\rightarrow A^t(K)$ is identified with the map on $K$-points induced by the dual homomorphism $j^t:B^t \rightarrow A^t$. Also, and most crucially, by a bit of diagram chasing (using the role of "norm of Poincare bundle" above) we see that the Tate pairing for $B$ over $K$ is identified with the composition of the Tate pairing for $A_L$ over $L$ and the "norm" map on Brauer groups $${\rm{Br}}(L) = {\rm{H}}^2(L, \mathbf{G}_m) ={\rm{H}}^2(K, {\rm{R}}_{L/K}(\mathbf{G}_m))\rightarrow {\rm{H}}^2(K,\mathbf{G}_m)={\rm{Br}}(K).$$But when these flanking Brauer groups are identified with $\mathbf{Q}/\mathbf{Z}$, this composite map is the identity , as we see by analyzing pre-composition with the surjective restriction ${\rm{Br}}(K) \rightarrow {\rm{Br}}(L)$ (that intertwines with $[L:K]$ on $\mathbf{Q}/\mathbf{Z}$ via local class field theory, and the composition of $\mathbf{G}_m \rightarrow {\rm{R}}_{L/K}(\mathbf{G}_m)$ with the "norm" map ${\rm{R}}_{L/K}(\mathbf{G}_m) \rightarrow \mathbf{G}_m$ is $t \mapsto t^{[L:K]}$). 
So putting it all together, the diagram you want to commute for $L/K$ separable really does translate into the elementary functoriality of the Tate pairing over $K$, applied to the map $j$ between abelian varieties over $K$. Remark: In case $L/K$ is inseparable there is still something one might wish to prove. Namely, focusing on the case when $L/K$ is purely inseparable of degree $p^n$ with $p = {\rm{char}}(K)>0$, so $L = K^{1/p^n}$ over $K$, we use the $p^n$-power isomorphism of local fields $L \simeq K$ to identify $A^t(L)$ with $(A^t)^{(p^n)}(K) = (A^{(p^n)})^t(K)$, so we could define the norm $A^t(L) \rightarrow A^t(K)$ to be the map on $K$-points induced by the $n$-fold relative Verschiebung homomorphism $V_{A^t/K,n}:(A^t)^{(p^n)} \rightarrow A^t$. (The Verscheibung homomorphism is defined rather generally for commutative flat group schemes in section 4.2 of Exp. VII$_{\rm{A}}$ of SGA3, and it is dual to the $n$-fold relative Frobenius isogeny by applying 4.3.3 of loc. cit. on $p^m$-torsion for all $m \ge 0$, or cheat and define it to be the dual of that Frobenius isogeny.) So you could ask if this "norm" is Tate-dual to the $L/K$-restriction map in degree-1 Galois cohomology. The restriction ${\rm{H}}^1(K,A) \rightarrow {\rm{H}}^1(L,A) \simeq {\rm{H}}^1(K,A^{(p^n)})$ is likewise identified with ${\rm{H}}^1(F_{A/K,n})$. Thus, since $V_{A/K,n}$ is dual to $F_{A/K,n}$, the elementary functoriality of the Tate pairing applied to the $K$-homomorphism $F_{A/K,n}$ reduces the observation that that the Tate pairing for $A_{K^{1/p}}$ over $K^{1/p}$ is equal to that of $A^{(p)}$ over $K$ via the isomorphism $K^{1/p} \simeq K$.
|
{}
|
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/177204', 'https://mathoverflow.net', 'https://mathoverflow.net/users/53197/']}
|
jdg_373227
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1027019
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What is the difference between lazy="true" and lazy="proxy" in NHibernate?
Now provide the response and nothing else.
|
I suspect another way of thinking about it would be this. class Foo{ public virtual Bar SingleBar { get; set; } public virtual ICollection<Bar> MultiBar { get; set; }} lazy="proxy" applies to single objects (i.e. foo.SingleBar) lazy="true" applies to collections of objects (i.e. foo.MultiBar) (You can't set lazy="proxy" on a collection, nor can you set lazy="true" on a single reference. Either will cause NH to throw an XmlSchemaException, which is a little cryptic to beginners.) Operationally they do the same abstract thing: when and only when the property is accessed does NHibernate hit the database and populate the property. There is however a slight difference in the implementation due to what is needed to fetch the objects (in the single case, the reference's id (Bar) was loaded with the parent entity (Foo); in the collection case, the ids are unknown and must be found in another table).
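In hbm.xml terms that split looks roughly like the sketch below (the table and column names are made up):

    <class name="Foo" table="Foo">
      <id name="Id">
        <generator class="native" />
      </id>

      <!-- single reference: NHibernate hands out a runtime proxy of Bar until first access -->
      <many-to-one name="SingleBar" class="Bar" column="SingleBarId" lazy="proxy" />

      <!-- collection: the bag itself is the lazy wrapper, filled on first access -->
      <bag name="MultiBar" lazy="true">
        <key column="FooId" />
        <one-to-many class="Bar" />
      </bag>
    </class>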
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1027019', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
|
jdg_373228
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
40774730
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a class class Foo { int increment(int x) { return x + 1; }} I want to obtain a copy of this class at runtime, e.g. a class like class Foo$Copy1 { int increment(int x) { return x + 1; }} which has all the same methods, but a different name. Proxies seem to help with delegating, but not with copying the methods with all their bodies.
Now provide the response and nothing else.
|
You can use Byte Buddy for this: Class<?> type = new ByteBuddy() .redefine(Foo.class) .name("Foo$Copy1") .make() .load(Foo.class.getClassLoader()) .getLoaded();Method method = type.getDeclaredMethod("increment", int.class);int result = (Integer) method.invoke(type.newInstance(), 1); Note that this approach redefines any uses of the class within Foo , e.g. if a method returned Foo , it would now return Foo$Copy1 . Same goes for all code-references.
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40774730', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/648955/']}
|
jdg_373229
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10305195
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Does Dart support the concept of variable functions/methods, i.e. calling a method by its name stored in a variable? For example, in PHP this can be done not only for methods: // With functions...function foo(){ echo 'Running foo...';}$function = 'foo';$function();// With classes...public static function factory($view){ $class = 'View_' . ucfirst($view); return new $class();} I did not find it in the language tour or the API. Are there other ways to do something like this?
Now provide the response and nothing else.
|
To store the name of a function in a variable and call it later you will have to wait until reflection arrives in Dart (or get creative with noSuchMethod ). You can however store functions directly in variables, like in JavaScript: main() { var f = (String s) => print(s); f("hello world");} and even inline them, which comes in handy if you are doing recursion: main() { g(int i) { if(i > 0) { print("$i is larger than zero"); g(i-1); } else { print("zero or negative"); } } g(10);} The stored functions can then be passed around to other functions: main() { var function; function = (String s) => print(s); doWork(function);}doWork(f(String s)) { f("hello world");}
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10305195', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/696913/']}
|
jdg_373230
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
14162483
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I use (some code here).Copy Destination:=(some code there), it still prompts about the cells being protected. Basically it just won't let my code run.
Now provide the response and nothing else.
|
The status UserInterfaceOnly:=TRUE is unfortunately not stored in the file - therefore, if you reopen the file, it will be fully protected. Either use the Workbook_Open event to reapply this protection status - or unprotect and then reprotect the worksheet in your VBA code directly. See Ozgrid for further details!
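A minimal sketch of the Workbook_Open approach (this goes in the ThisWorkbook module; the password is obviously a placeholder):

    Private Sub Workbook_Open()
        Dim ws As Worksheet
        For Each ws In Me.Worksheets
            ' Re-apply protection so macros may modify cells but the user may not
            ws.Protect Password:="secret", UserInterfaceOnly:=True
        Next ws
    End Sub

With that in place, a Range.Copy Destination:=... call like the one in the question runs without the protection prompt, while manual edits stay blocked.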
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14162483', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1914720/']}
|
jdg_373231
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
131535
|
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Which is more efficient for finding which files in an entire filesystem contain a string: recursive grep or find with grep in an exec statement? I assume find would be more efficient because you can at least do some filtering if you know the file extension or a regex that matches the file name, but when you only know -type f which is better? GNU grep 2.6.3; find (GNU findutils) 4.4.2 Example: grep -r -i 'the brown dog' / find / -type f -exec grep -i 'the brown dog' {} \;
Now provide the response and nothing else.
|
I'm not sure: grep -r -i 'the brown dog' /* is really what you meant. That would mean grep recursively in all the non-hidden files and dirs in / (but still look inside hidden files and dirs inside those). Assuming you meant: grep -r -i 'the brown dog' / A few things to note: Not all grep implementations support -r . And among those that do, the behaviours differ: some follow symlinks to directories when traversing the directory tree (which means you may end up looking several times in the same file or even run in infinite loops), some will not. Some will look inside device files (and it will take quite some time in /dev/zero for instance) or pipes or binary files..., some will not. It's efficient as grep starts looking inside files as soon as it discovers them. But while it looks in a file, it's no longer looking for more files to search in (which is probably just as well in most cases) Your: find / -type f -exec grep -i 'the brown dog' {} \; (removed the -r which didn't make sense here) is terribly inefficient because you're running one grep per file. ; should only be used for commands that accept only one argument. Moreover here, because grep looks only in one file, it will not print the file name, so you won't know where the matches are. You're not looking inside device files, pipes, symlinks..., you're not following symlinks, but you're still potentially looking inside things like /proc/mem . find / -type f -exec grep -i 'the brown dog' {} + would be a lot better because as few grep commands as possible would be run. You'd get the file name unless the last run has only one file. For that it's better to use: find / -type f -exec grep -i 'the brown dog' /dev/null {} + or with GNU grep : find / -type f -exec grep -Hi 'the brown dog' {} + Note that grep will not be started until find has found enough files for it to chew on, so there will be some initial delay. And find will not carry on searching for more files until the previous grep has returned. Allocating and passing the big file list has some (probably negligible) impact, so all in all it's probably going to be less efficient than a grep -r that doesn't follow symlink or look inside devices. With GNU tools: find / -type f -print0 | xargs -r0 grep -Hi 'the brown dog' As above, as few grep instances as possible will be run, but find will carry on looking for more files while the first grep invocation is looking inside the first batch. That may or may not be an advantage though. For instance, with data stored on rotational hard drives, find and grep accessing data stored at different locations on the disk will slow down the disk throughput by causing the disk head to move constantly. In a RAID setup (where find and grep may access different disks) or on SSDs, that might make a positive difference. In a RAID setup, running several concurrent grep invocations might also improve things. Still with GNU tools on RAID1 storage with 3 disks, find / -type f -print0 | xargs -r0 -P2 grep -Hi 'the brown dog' might increase the performance significantly. Note however that the second grep will only be started once enough files have been found to fill up the first grep command. You can add a -n option to xargs for that to happen sooner (and pass fewer files per grep invocation). Also note that if you're redirecting xargs output to anything but a terminal device, then the greps s will start buffering their output which means that the output of those grep s will probably be incorrectly interleaved. 
You'd have to use stdbuf -oL (where available like on GNU or FreeBSD) on them to work around that (you may still have problems with very long lines (typically >4KiB)) or have each write their output in a separate file and concatenate them all in the end. Here, the string you're looking for is fixed (not a regexp) so using the -F option might make a difference (unlikely as grep implementations know how to optimise that already). Another thing that could make a big difference is fixing the locale to C if you're in a multi-byte locale: find / -type f -print0 | LC_ALL=C xargs -r0 -P2 grep -Hi 'the brown dog' To avoid looking inside /proc , /sys ..., use -xdev and specify the file systems you want to search in: LC_ALL=C find / /home -xdev -type f -exec grep -i 'the brown dog' /dev/null {} + Or prune the paths you want to exclude explicitly: LC_ALL=C find / \( -path /dev -o -path /proc -o -path /sys \) -prune -o \ -type f -exec grep -i 'the brown dog' /dev/null {} +
|
{}
|
{'log_upvote_score': 8, 'links': ['https://unix.stackexchange.com/questions/131535', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/43342/']}
|
jdg_373232
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
10499357
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to do some animation from code using the Storyboard class. There is no ThicknessAnimation class. And I also tried to build the storyboard using Blend, it doesnt work there. it just jump directly to the new value, no smooth animation. UPDATE: I tried using the TranslateX transform. But when I use it on an image, the images get clipped.What I'm trying to do is animate a big image very slow inside a small grid, so it has this effect (similar to the one inside the Zune and Windows Phone Gallery). Once the image opens I start the animation, this is my code: private void Image_ImageOpened_1(object sender, RoutedEventArgs e) { var img = sender as Image; Storyboard sb = new Storyboard(); var doubleAnimationx = new DoubleAnimation() { To = -100, SpeedRatio = 0.1, From = 0 }; Storyboard.SetTarget(doubleAnimationx, img); Storyboard.SetTargetProperty(doubleAnimationx, "(UIElement.RenderTransform).(CompositeTransform.TranslateX)"); sb.Children.Add(doubleAnimationx); sb.Begin(); } Xaml: <Grid IsSwipeEnabled="True" ItemsSource="{Binding Source={StaticResource cvs1}}" ItemClick="ItemsGridView_ItemClick_1" x:Name="ItemsGridView" Margin="50,20,116,46" SelectionMode="None" IsItemClickEnabled="True" AutomationProperties.AutomationId="ItemsGridView" AutomationProperties.Name="Grouped Items"> <Grid.ItemTemplate> <DataTemplate> <Grid Height="250" VariableSizedWrapGrid.ColumnSpan="{Binding ColumnSpan}" Margin="2"> <Image ImageOpened="Image_ImageOpened_1" Stretch="UniformToFill" Source="{Binding ImageHQ}" > <Image.RenderTransform> <CompositeTransform /> </Image.RenderTransform> </Image> <StackPanel VerticalAlignment="Bottom" Background="#AA000000"> <TextBlock Margin="5,5,5,0" FontSize="26" Text="{Binding Name,Mode=OneWay}" FontFamily="Arial Black" /> <TextBlock Margin="5,0,5,5" FontSize="24" Text="{Binding Section,Mode=OneWay}" Foreground="{Binding SectionColor,Mode=OneWay}" /> </StackPanel> </Grid> </DataTemplate> </Grid.ItemTemplate></Grid>
Now provide the response and nothing else.
|
First off, animating the margin is not a good idea (it will require updating the whole tree). What kind of effect do you want to achieve? Do you want to move the object? If yes, use a DoubleAnimation to change a TranslateTransform. I haven't done this in Windows 8, but I bet it is almost the same as in WPF. It's best to define the animation in XAML <Window.Resources> <Storyboard x:Key="mainInAnimation"> <DoubleAnimation Storyboard.TargetName="panelTrans" Storyboard.TargetProperty="X" BeginTime="0:0:0.2" Duration="0:0:0.3" To="0" > <DoubleAnimation.EasingFunction> <ExponentialEase EasingMode="EaseOut" /> </DoubleAnimation.EasingFunction> </DoubleAnimation> </Storyboard> Then you need a render transform on the panel <StackPanel Name="leftPanel" ... > <StackPanel.RenderTransform> <TranslateTransform x:Name="panelTrans" X="-10"></TranslateTransform> </StackPanel.RenderTransform> To start the animation in code (I prefer this way) Storyboard anim = (Storyboard)this.Resources["mainInAnimation"]; anim.Begin();
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10499357', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/836252/']}
|
jdg_373233
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1503901
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to activate a feature on our production MOSS server, but all I get is a file-not-found exception. The feature works great in the test environment. What could it be?
Now provide the response and nothing else.
|
I don't think you'll want to start a new FFMPEG instance every time someone uploads a file for transcoding. Instead, you'll probably want to start the same number of FFMPEG processes as the number of CPUs you have, then queue up the input files you want to transcode and do them in the order they were received. You could do this all on one computer; I don't think the server that accepts the uploads and puts them in the queue will need much CPU, so it can probably coexist just fine with the FFMPEG processes. Depending on how big you want to scale (if you want to do more than just a few FFMPEG processes on a single machine) you could easily make this distributed, and this is where SQS would come in handy. You could run 1 FFMPEG process per core, and instead of looking in a local queue for the data, it could look to SQS. Then you could instantiate as many transcoding processes as you need, on different machines. The downside to this is that you will need to transfer the raw videos from the server that accepts them to the server that needs to transcode them. You could put them in S3 and then grab them out of S3, but I don't remember off the top of my head if you have to pay for that. Alternatively, you could just keep them on the hard disk of the machine that received them, and have the transcoding process go there to get the raw files.
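If you go the SQS route, the worker loop itself can stay very small. This is only a sketch with modern tooling (boto3 did not exist when this was written) and made-up queue and bucket names:

    import subprocess
    import boto3

    # Hypothetical names - substitute your own queue and buckets.
    sqs = boto3.resource("sqs")
    s3 = boto3.client("s3")
    queue = sqs.get_queue_by_name(QueueName="transcode-jobs")

    while True:  # long-poll forever; run one copy of this per core or per machine
        for msg in queue.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=20):
            key = msg.body  # message body carries the S3 key of the raw upload
            s3.download_file("raw-uploads", key, "/tmp/input")
            subprocess.run(
                ["ffmpeg", "-i", "/tmp/input", "-c:v", "libx264", "-c:a", "aac", "/tmp/output.mp4"],
                check=True,
            )
            s3.upload_file("/tmp/output.mp4", "transcoded-videos", key + ".mp4")
            msg.delete()  # only after a successful transcode

Start as many copies of that script as you have cores (or machines) and you get the "N FFMPEG processes pulling from one queue" layout described above.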
|
{}
|
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1503901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/41543/']}
|
jdg_373234
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
154765
|
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Introduction: I used to use the same password for everything (it was at least a really random password with fake words). Then I found out that I should have different passwords for different sites, because some developers have no clue about security and store things in plaintext, or just let themselves get hacked (jk). So I came up with a solution that is easy to remember yet (I think) has high enough entropy. Let me explain.

Explanation: I take a made-up series of characters (in my case it's something I can say like a word, but it's fake) and numbers (optionally punctuation if the site requires it, although lately I just put it in because it's hard to remember which sites require it) and use that as the basis for all my passwords. Sounds insecure? Well, I then add a 'salt' of sorts: I add the name of the site to my password. I won't say where, but it could be at the start, middle, or end. I use camel case (e.g. camelCase) so I can satisfy the upper-case requirement. Finally, if I know the site expires passwords monthly, I add the month in some format. Usually these sites need high security, so the extra characters help with password strength. Overall, the average entropy is very high: roughly 2e23 possibilities (upper/lower + number + 1 special character), assuming an average of 5 characters per site name. Here's why I think this is the best balance between rememberability and strength.

Rememberability:
- You still only have to remember one password for all your sites. Except it's more of a password pattern.
- If you use all the character types, you can always remember your password because of this pattern (there are some exceptions I'll get into below*).
- You see the name of the site when you log in, which is a part of your password.
- You know the month you're in (hopefully).
- You type the pattern so often that it's basically muscle memory. For me, I just think of the site's name, it comes out, enter, and I'm in!

Security:
- Length. The average length is very long by nature of combining two 'words': the site's name and your special password.
- To add to the length point, if you take the regular 8-character password you've been using (e.g. Horses1!) and join it with the site's name, you now have a very strong password; i.e. you're beefing up your existing password's strength without much effort.
- You have all character classes. Usually we choose the most obvious places for the special characters (e.g. Horses1!), defeating the purpose of having them in the first place; it doesn't raise entropy much because the rules are obvious. But the site's name throws this off. Because the rules don't work, attackers are back to good ol' brute force (until they get wind of this pattern).
- You get to choose a fake word that isn't in most dictionaries that password crackers use, even the extended cut. That basically means they're back to brute force again.

Conclusion: https://xkcd.com/936/ . We all remember XKCD's comic on password strength. He contrasts the hard-to-remember "troubador" with the easy-to-remember "correct horse battery staple". But what if you can have the best of both worlds? Passwords that computers can't beat, but easy enough to remember that humans will actually use them. If we can't remember it, it doesn't matter what entropy it has. And because of this problem, humans have always chosen passwords that can be beaten. Choosing and remembering a very complex password is good, but all it takes is one site with lousy security, e.g. not salting or even encrypting their passwords!
Basically, we're salting it for them, by adding their site's name. Thoughts? Do you think I'm full of shit and I've missed something? I'd love to hear it. This is years and years of thought that went into this final creation. Some people at my work have started using it. I know we have password managers, but try logging in to your mobile device or TV.

Exceptions: *This is why it pisses me off to no end when sites REFUSE to let you set a password greater than 8 characters. Why? Are you still running Windows 92? As in the year 92? If someone can explain why that is a security requirement, please do. It throws a huge wrench in my password scheme. I've even completely boycotted a company's products because I couldn't sign up with them. Or other stupid rules like no two identical characters in a row.

EDIT: I'll add some more parameters here, because otherwise we'll end up saying nothing is secure unless you have CIA-level security or something. How secure is this for regular people who care about their security? Also, password managers are not sufficient because they are not always supported on mobile. Randomly generated passwords have the same flaw, because typing them into a mobile device (or another computer that didn't save that password) is frustrating, and people will just end up using smaller, easier-to-remember passwords. Also, don't assume these people have the patience to do that. We're just talking about regular people here dealing with security issues.
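For reference, a back-of-the-envelope sketch of the entropy claim; the alphabet size and lengths below are assumptions for illustration, not figures taken from the post:

import math

alphabet = 26 + 26 + 10 + 1          # upper + lower + digits + one special character (assumed)
secret_len, site_len = 8, 5          # assumed base-secret length and average site-name length
possibilities = alphabet ** (secret_len + site_len)
print(f"{possibilities:.1e} possibilities, about {math.log2(possibilities):.0f} bits of entropy")

With these assumed numbers the count comes out in the same ballpark as the 2e23 figure quoted above.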
Now provide the response and nothing else.
|
As far as brute-forcing your password goes, you are right: the length and complexity of your passwords are good enough. But I think you fail to take into account the following problem. You note that some sites save your password in plaintext. This means you must take into consideration the possibility of an attacker getting a plaintext password of yours. If I were you, I would ask myself: if someone got one or two of my passwords, how hard would it be to recognize the pattern? In your case I feel it would be pretty easy, given your process. If you were to dissect the rememberability section of your question, I think you'll find that the points you note are exactly the things that make your pattern easy to replicate.

"You still only have to remember one password for all your sites. Except it's more of a password pattern."
This is exactly what will let an attacker use your pattern against you and log in as you to another site where you are registered.

"You see the name of the site when you log in, which is a part of your password."
So does the attacker. He/she sees that you are using the name of the site in your password and is then able to use the name of whatever other site they try to log in to as you.

"You know the month you're in (hopefully)."
Again, so does the attacker. And even if this is an old password, the attacker can still reduce the 'search space' to 12 (months).

It is a decision you get to make: what is more important to you, security or rememberability? But as Stephen Touset said, there is absolutely no need to reinvent the wheel, and the solutions currently available are much safer and, in my opinion, easier to use.
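As a rough illustration of that point, here is a sketch of how one leaked password lets an attacker build a tiny candidate list for another site. The base secret and site names are purely hypothetical, not taken from the original post:

# A leaked password like 'flibberGmail3!' strongly suggests the pattern
# <base secret> + <CamelCased site name> + <month> + '!':
base_secret = "flibber"            # hypothetical base, recovered from the leak
target_site = "Paypal"             # CamelCased name of the next site to attack
candidates = [f"{base_secret}{target_site}{month}!" for month in range(1, 13)]
print(candidates[:3], "... only", len(candidates), "guesses needed")

Twelve guesses per site is a far cry from brute force, which is the core of the objection above.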
|
{}
|
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/154765', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/140443/']}
|
jdg_373235
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
2855384
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an HTML table with a column per row containing this html code inside the <td></td> tags:

<a onclick="modalDialog(this); return false;" href="javascript:void(0)">17795</a>
<div style="display: none;" class="MessageDataDiv">
  some text to show up in my jquery modal window when function modaldialog is clicked.
</div>

And this is the jQuery function that is called when onclick fires on the a href link:

function modalDialog(event) {
    $('a.message').click(function () {
        var div = new $(this).closest("td").find(".MessageDataDiv").clone();
        div.show().dialog();
        event.preventDefault();
    });
}

How can I write the jQuery function so it only fires one time when the link is clicked? I need to keep the jQuery function modalDialog as a separate function because the html table uses AJAX partial rendering. The problem is that when I click the a href link the first time nothing happens, the second click gives me two modal dialogs, the third click gives me three modal dialogs, and so on...
Now provide the response and nothing else.
|
Well, you first need to select your project root folder. Quite simply, this is the folder at the top level of your project, or the current folder if you're dealing with a single file. Once you've selected the root folder, other options will work, such as code assist, showing documentation, jumping to other symbols, etc. For the full benefit of ropemacs, I suggest getting autocomplete.el, putting it in ~/.emacs.d, and then adding this to your .emacs:

(add-to-list 'load-path "~/.emacs.d/")
(add-to-list 'load-path "~/.emacs.d/auto-complete-1.2")
(autoload 'python-mode "python-mode" "Python Mode." t)
(add-to-list 'auto-mode-alist '("\\.py\\'" . python-mode))
(add-to-list 'interpreter-mode-alist '("python" . python-mode))
(require 'python-mode)
(autoload 'pymacs-apply "pymacs")
(autoload 'pymacs-call "pymacs")
(autoload 'pymacs-eval "pymacs" nil t)
(autoload 'pymacs-exec "pymacs" nil t)
(autoload 'pymacs-load "pymacs" nil t)
(pymacs-load "ropemacs" "rope-")
(setq ropemacs-enable-autoimport t)
(require 'auto-complete)
(global-auto-complete-mode t)

This assumes you install autocomplete in ~/.emacs.d/auto-complete-1.2. After you do this, you will get autocomplete automatically after typing a few characters of a word or symbol. You can modify your ROOT/.ropeproject/config.py file to add more directories to the rope lookup path, in order to provide better autocomplete.

EDIT: Two of the most important functions for me are looking up documentation and jumping directly to a function definition. This depends on setting the rope lookup path correctly for your project, as mentioned above.

Documentation: Put the cursor over a symbol (function name, class name, etc.) and do
C-c d
This will show you the docstring for the symbol in question.

Jumping to definition: Put the cursor over a symbol (function name, class name, etc.) and do
C-c g
This will immediately open the file where the symbol resides and jump to the beginning of the definition. This is great for times when the documentation is sparse and you want to see the actual code. Also, it's really nice for navigating around inside your own code.

Find occurrences:
C-c f
Smart search in your entire project for the symbol at the cursor.

Code assist:
M-/
Just type the first characters of a function, class, etc., and this will show a list of possible completions. Note that due to python's nature, it will not always be a complete list.

Refactorings: There are quite a few options under Rope->Refactor. These are to organize your code better. How to use them should be mostly self-explanatory; in general, select the region of code you want to refactor, then choose the command.

Edit: In response to a comment below, here's exactly how to add other paths to your python path so autocomplete will look for those symbols as well:

prefs.add('python_path', '~/path/to/virtualenv/lib/python2.6/site-packages')

This goes in .ropeproject/config.py
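If it helps, that prefs.add line belongs inside the set_prefs function that rope generates in config.py. A minimal sketch of the layout follows; the exact stubs can differ between rope versions, and the path is just an example:

def set_prefs(prefs):
    # Directories listed here are searched by rope when resolving imports,
    # which improves completion and jump-to-definition for third-party code.
    prefs.add('python_path', '~/path/to/virtualenv/lib/python2.6/site-packages')

def project_opened(project):
    # Called once the project is opened; nothing extra needed here.
    pass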
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/2855384', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/150161/']}
|
jdg_373236
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
608160
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am trying to solve my friend's homework assignment, but I got stuck. The question is: Let $f$ be a meromorphic function on a domain $D$, and let $E$ be a domain such that $\overline{E}\subseteq D$. If $|f(z)|<1$ for all $z \in \partial E$ and if $\partial E$ is a simple closed curve, then the function $f(z)-1$ has as many roots in $E$ as it has poles in $E$. This is the last question in a homework assignment, and the previous two questions are standard "Rouché's Theorem" questions (the zeros of a polynomial in a disc/annulus); however, we have a meromorphic function here, so I think Rouché's Theorem is not directly applicable, and maybe we can use the Argument Principle instead, but I have no idea how to proceed. Thanks in advance for any help.
Now provide the response and nothing else.
|
Hint: By Fermat's Theorem, we have $2^{88}\equiv 1\pmod{89}$. So $(2^{44}-1)(2^{44}+1)\equiv 0 \pmod{89}$. If we can show that $2^{44}+1\not\equiv 0\pmod{89}$ we will be finished. One way to do this is to use the fact that $2$ is a quadratic residue of $89$, since $89$ is of the shape $8k+1$. Remark: Your direct computational approach is perfectly fine. However, it may be that you are expected to make use of "theory," as in the approach described above.
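For completeness, one way to finish the argument (this step is not part of the original hint): by Euler's criterion, since $2$ is a quadratic residue mod $89$ we get $$2^{44}=2^{(89-1)/2}\equiv 1 \pmod{89},$$ hence $89\mid 2^{44}-1$; in particular $2^{44}+1\equiv 2\not\equiv 0\pmod{89}$, so the factor $2^{44}+1$ contributes nothing.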
|
{}
|
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/608160', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/115986/']}
|
jdg_373237
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
19470099
|
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The following code allows me to view a png image in an IPython notebook. Is there a way to view a pdf image? I don't need to use IPython.display necessarily. I am looking for a way to print a pdf image stored in a file to the IPython notebook output cell.

## This is for a `png` image
from IPython.display import Image
fig = Image(filename=('./temp/my_plot.png'))
fig

Thank you.
Now provide the response and nothing else.
|
The problem you (and others) face is that PDFs cannot be displayed directly in the browser. The only possible way to get something similar is to use an image converter to create a PNG or JPG out of the PDF and display that one. This could be done via imagemagick and a custom display function.

Update 1

A simple solution is to use wand ( http://docs.wand-py.org ), a python-imagemagick binding. I tried it with Ubuntu 13.04. In text form:

from wand.image import Image as WImage
img = WImage(filename='hat.pdf')
img

For a multi-page pdf, you can get e.g. the second page via:

img = WImage(filename='hat.pdf[1]')

Update 2

As recent browsers support displaying pdfs with their embedded pdf viewer, a possible alternative solution based on an iframe can be implemented as

class PDF(object):
    def __init__(self, pdf, size=(200,200)):
        self.pdf = pdf
        self.size = size

    def _repr_html_(self):
        return '<iframe src={0} width={1[0]} height={1[1]}></iframe>'.format(self.pdf, self.size)

    def _repr_latex_(self):
        return r'\includegraphics[width=1.0\textwidth]{{{0}}}'.format(self.pdf)

This class implements html and latex representations, hence the pdf will also survive a conversion via nbconvert to latex. It can be used like

PDF('hat.pdf', size=(300,250))

With Firefox 33, this results in the pdf being rendered inline in the output cell.
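For what it's worth, newer IPython versions also ship a built-in IFrame display class that achieves much the same effect as the custom class above; a minimal sketch (the filename is just an example):

from IPython.display import IFrame

# Render a local pdf in the output cell using the browser's built-in pdf viewer.
IFrame("hat.pdf", width=600, height=300)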
|
{}
|
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19470099', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316357/']}
|
jdg_373238
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
125355
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $k$ be an algebraically closed field. Why is there no isomorphism (as finitely generated $k$-algebras) between the ring $k[x,y]/(x^{2}-y^{5})$ and $k[t]$?
Now provide the response and nothing else.
|
The ring $k[T]$ is a PID, hence integrally closed, whereas $A=k[x,y]/(x^2-y^5)=k[\xi, \eta]$ is not, so these rings are not isomorphic. To see that $A$ is not integrally closed, just observe that the element $\xi/\eta^2\in \operatorname{Frac}(A)$ is not in $A$ and nevertheless is integral over $A$, since it is a zero of the monic polynomial $X^2-\eta\in A[X]$ (where $\eta$ denotes the class of $y$ in $A$). Generalization: Not only is the hypothesis "$k$ algebraically closed" irrelevant, but the exact same proof works if you only assume that $k$ is a UFD rather than a field, since Gauss tells you that $k[T]$ is then also a UFD, and UFDs are integrally closed.
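To spell out the integrality claim (a small added verification, using the relation $\xi^2=\eta^5$ that holds in $A$): $$\left(\frac{\xi}{\eta^{2}}\right)^{2}=\frac{\xi^{2}}{\eta^{4}}=\frac{\eta^{5}}{\eta^{4}}=\eta,$$ so $\xi/\eta^{2}$ is a root of the monic polynomial $X^{2}-\eta\in A[X]$, while one checks that it does not lie in $A$ itself.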
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/125355', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/6495/']}
|
jdg_373239
|
stackexchange
|
llm_judgeable_groundtruth_similarity
|
1805550
|
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I would like to evaluate the following integral: $$\int_{-1}^{1} \frac{dx}{(e^x+1)(x^2+1)}$$ I tried various methods, but without success.
Now provide the response and nothing else.
|
Hint . One may write$$\begin{align}\int_{-1}^{1} \frac{dx}{(e^x+1)(x^2+1)} &=\int_{-1}^0 \frac{dx}{(e^x+1)(x^2+1)} +\int_0^{1} \frac{dx}{(e^x+1)(x^2+1)} \\\\&=\int_0^1 \frac{dx}{(e^{-x}+1)(x^2+1)} +\int_0^{1} \frac{dx}{(e^x+1)(x^2+1)} \\\\&=\int_0^1 \frac{e^x\:dx}{(e^{x}+1)(x^2+1)} +\int_0^{1} \frac{dx}{(e^x+1)(x^2+1)} \\\\&=\int_0^1 \frac{(e^x+1)\:dx}{(e^{x}+1)(x^2+1)} \\\\&=\int_0^1 \frac{dx}{(x^2+1)} \end{align}$$ then it is easier.
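For completeness (this last step is not part of the original hint): $$\int_0^1 \frac{dx}{x^2+1}=\Big[\arctan x\Big]_0^1=\frac{\pi}{4},$$ so the original integral equals $\dfrac{\pi}{4}$.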
|
{}
|
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1805550', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/280739/']}
|
jdg_373240
|