http://stats.stackexchange.com/questions/22796/compare-the-difference-of-two-probabilities-or-a-ratio-of-probabilities
# Compare the difference of two probabilities or a ratio of probabilities?

In an experiment with a continuous-valued random variable, e.g. measuring the length of cucumbers, I would like to compare the probability of getting a particular length range under two different conditions, e.g. $P_1=P(0.1<L<0.2,\ \text{fertilizer A})$, $P_2=P(0.1<L<0.2,\ \text{fertilizer B})$. So I create two histograms of the lengths, $H_A$ and $H_B$, one for each fertilizer. From these histograms, probabilities over ranges of $L$ are calculated from bin counts, etc. My aim is to show that only for some ranges (or even one range) of $L$ do the probabilities (or counts in the above histograms) differ, while in most other length ranges the probabilities do not differ.

1) What is a good way to express this difference? The usual ratio $\frac{P_1}{P_2}$ suffers when $P_1$ or $P_2$ is zero. A good one is $P_1-P_2$, but I would like this to be normalised so as to have something like a percentage change for comparison; I thought of something along the lines of $\frac{P_1-P_2}{P_1+P_2}$.

2) I would like to plot these probability differences for various $L$ and, if possible, use a statistical test which tests the significance of probability differences for just one range and not overall (which is what a t-test does). I guess the significance of the difference also depends on the counts for that particular range of lengths.

-

## 1 Answer

Are these distributions sufficiently normal? You could check a qq-plot to see if they're good enough for your satisfaction. If so, the area under the fitted normal distribution will never be exactly 0 as it is in a finite sample, so that would be one way to address that issue.

On another issue, I would take the proportions within given ranges and convert them into the odds of a cucumber falling within that range given that it was grown with that fertilizer. Then I would use the odds ratio to compare the two. I think this will be a better approach than using the ratio of probabilities.

One final note: if the distributions for the two fertilizers differ, then realistically the probabilities of being within a given range couldn't be exactly identical, and so testing a given range for 'significance' doesn't make a lot of sense to me. I would just do a t-test on the two distributions themselves (I should think Levene's test would also suffice for your purpose). Having shown that the distributions differ, it follows that the proportion within a given range will differ, and you could represent the magnitude of the difference for the range that you care about with an odds ratio.

-

Thanks, I followed your advice on Levene and the odds ratio. The problem may not appear when prob=1 for the reasons you mentioned, but there are cases when it is zero. For this I just replaced 0 and 1 with 0.001 and 0.999 and hope for the best. We cannot be sure about the distribution of lengths; they appear normal, but my suspicion is that other quantities (e.g. weight) consist of subpopulations (e.g. let's say those cucumbers inseminated [!] by a bee in the morning and those done late in the afternoon, etc.). So, correct me if wrong, but I may need to fit a Gaussian mixture and then apply the same procedure. – bliako Feb 14 '12 at 19:38
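To make the odds-ratio suggestion concrete, here is a small Python sketch (my own illustration, not from the thread above); the sample data, the 0.5 continuity correction, and the function names are assumptions for the example, not part of the original answer.

```python
import numpy as np

def odds_ratio(in_a, total_a, in_b, total_b, correction=0.5):
    """Odds ratio for falling in a given length range under fertilizer A vs B.

    A small continuity correction is added to every cell so that empty
    bins (probability 0) do not make the ratio blow up or vanish.
    """
    a_in, a_out = in_a + correction, (total_a - in_a) + correction
    b_in, b_out = in_b + correction, (total_b - in_b) + correction
    return (a_in / a_out) / (b_in / b_out)

# Hypothetical cucumber lengths (metres) under the two fertilizers.
rng = np.random.default_rng(0)
lengths_a = rng.normal(0.18, 0.04, size=200)
lengths_b = rng.normal(0.15, 0.04, size=200)

# Counts falling in the range 0.1 < L < 0.2 for each sample.
in_a = np.sum((lengths_a > 0.1) & (lengths_a < 0.2))
in_b = np.sum((lengths_b > 0.1) & (lengths_b < 0.2))

print("P1 ~", in_a / len(lengths_a), " P2 ~", in_b / len(lengths_b))
print("odds ratio:", odds_ratio(in_a, len(lengths_a), in_b, len(lengths_b)))
```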
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9491427540779114, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/143698/topology-of-the-tangent-bundle-of-a-smooth-manifold?answertab=oldest
# Topology of the tangent bundle of a smooth manifold

I am having trouble understanding what topology is given to the tangent bundle of a smooth manifold that allows it to be a smooth manifold itself. In my understanding, among other things the topology must be second countable and Hausdorff.

The definition of the tangent bundle $TM$ of a smooth manifold $M$ I am using is $TM = \bigsqcup_{p\in M} T_pM$, that is, the disjoint union of all $T_pM$, where $T_pM$ is the tangent space at $p$ consisting of all derivations at $p$.

Since there is no further specification of what topology this space is given, I assume we take the natural disjoint union topology. However, in that case it seems that $TM$ is not second countable, because then every set $(O,p)$, where $O$ is an open subset of $T_pM$, would be open and disjoint from any $(O,q)$ for $q \neq p$. So unless $M$ is countable there would be an uncountable number of disjoint open sets, which contradicts second countability.

The only alternative I can think of is using the natural smooth structure of $TM$ as the topology. That is, for every open subset $O$ of $M$ the open sets of $TM$ are defined as $\pi^{-1}(O)$, where $\pi$ is the natural projection $TM \rightarrow M$. But then $TM$ cannot be Hausdorff, since any two elements of the same fiber of $\pi$ could not be separated by open sets.

In conclusion, in both cases $TM$ could not be a manifold, so I must be missing something very obvious. Thus, I would really appreciate it if someone could point out my misconception.

-

2 I'll try to find time to write an answer, but the topology is definitely not the disjoint union topology. You want the different tangent spaces to vary in a nice way. – Dylan Moreland May 10 '12 at 22:55

1 For a definition of the topology of $TM$ and $T^*M$, check Warner, Foundations of Differentiable Manifolds and Lie Groups, in I think the second chapter. – Neal May 10 '12 at 23:37

Thanks, he outlines the steps to construct the topology explicitly in 1.25. – erlking May 11 '12 at 14:01

## 3 Answers

Take some atlas on $M$, and let $U$ be an element of that atlas. Then $TU=\pi^{-1}(U) \cong U \times \mathbb{R}^n$ as a set, so it inherits a topology. Moreover, all these topologies (for different $U$) are compatible with each other, so together they give you a topology on the total space $TM$. Note that this is very much like your second idea, except that we don't require open sets to contain entire fibres $\pi^{-1}(x)$ -- just open subsets of them.

-

You give $TM$ a topology and a manifold structure as follows. Suppose that $\varphi\colon U\subseteq M\to V\subseteq\mathbb{R}^n$ is a local chart of $M$. Let $x_1,\ldots, x_n$ be the corresponding coordinate functions, i.e., $\varphi(p) =(x_1(p),\ldots, x_n(p))$. Then you get a bijective map $\pi^{-1}(U)\to V\times \mathbb{R}^n$ given by $$\left(p, \sum_{i=1}^n\lambda_i\left.\frac{\partial}{\partial x_i}\right|_p\right)\mapsto (\varphi(p), (\lambda_1,\ldots, \lambda_n)).$$ The topology on $\pi^{-1}(U)$ is defined by pulling back the topology on $V\times \mathbb{R}^n$. Moreover, this map $\pi^{-1}(U)\to V\times\mathbb{R}^n$ is a chart map for the manifold $TM$. You must check, of course, that if you choose different charts $\varphi$, this does not change the topology or the manifold structure.

-

Maybe a broader perspective would be useful.
Let $F$ and $B$ be topological spaces, with a given open cover $\{U_\alpha\}$ of $B$ and continuous transition functions $\theta_{\alpha\beta}:U_\alpha\cap U_\beta\to \operatorname{Aut}(F)$ on nonempty intersections of the sets in the open cover, which satisfy the conditions that, always, $\theta_{\alpha\beta} = \theta_{\beta\alpha}^{-1}$, $\theta_{\alpha\alpha} = 1$, and $\theta_{\alpha\beta}\theta_{\beta\gamma} = \theta_{\alpha\gamma}$. These data are exactly what we need to put together a fiber bundle with fiber $F$ and base $B$. We do this with the following construction; one consequence of the construction is insight into the topology of the total space of the fiber bundle.

The prospective bundle atlas will be composed of prospective bundle trivializations $\{U_\alpha\times F\}$, indexed by the open cover of $B$. Now define $$E = \bigg(\coprod_\alpha U_\alpha\times F\bigg)/\sim,$$ where $(x,f)\sim (y,g)$ if and only if $x=y\in U_\alpha\cap U_\beta$ and $\theta_{\alpha\beta}(x)f = g$. (Note that the topology of each component of the disjoint union is just the product topology.) While it is an exercise (left to you) to check that this is in fact the total space of a fiber bundle, the idea should be clear enough: we have taken the locally trivial neighborhoods and pasted them together with knowledge of how they transform into each other. We now see that the topology of the total space is just the quotient topology.

This construction encompasses all topological fiber bundles. To construct smooth fiber bundles, replace "continuous" by "smooth" and "topological space" by "smooth manifold." Nice examples include (but are certainly not limited to):

• the Hopf fibrations $\mathbb{S}^1\to\mathbb{S}^{2n+1}\to\mathbb{C}P^n$, $\mathbb{S}^3\to\mathbb{S}^{4n+3}\to\mathbb{H}P^n$;
• symmetric and homogeneous spaces, such as the fibration $SO(n)\to SO(n+1)\to\mathbb{S}^n$;
• Seifert fibered spaces and surface bundles in $3$-manifold theory; and
• all real and complex vector bundles, such as $TM$, $T^*M$, the exterior bundles $\Lambda^k(M)$, and the tensor bundles $\mathcal{T}^r_s(M)$, for a smooth manifold $M$.

In the particular case of your question, we have $F = \mathbb{R}^n$, the transition functions are maps $U_\alpha\cap U_\beta \to Gl(n;\mathbb{R})$, and the topology is given locally by the product topology on $U_\alpha\times\mathbb{R}^n$. The "compatibility" mentioned in Micah's answer, and the check froggie suggests, are just verification that the topology on $TM$ is the quotient topology in my above definition.

-
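For completeness, a short worked note (my addition, not from the answers above) on what the transition functions look like in the particular case of $TM$: for overlapping charts $\varphi_\alpha$, $\varphi_\beta$ on $M$, the fiberwise maps are the Jacobians of the chart changes.

```latex
% Transition functions of the tangent bundle TM (a sketch).
% For charts \varphi_\alpha, \varphi_\beta on overlapping U_\alpha, U_\beta:
\[
\theta_{\alpha\beta}(p) \;=\; D\!\left(\varphi_\alpha\circ\varphi_\beta^{-1}\right)\!\big(\varphi_\beta(p)\big)
\;\in\; GL(n;\mathbb{R}), \qquad p \in U_\alpha\cap U_\beta ,
\]
% so the glueing on (U_\alpha\cap U_\beta)\times\mathbb{R}^n identifies
\[
(p,\,v)_\beta \;\sim\; \big(p,\;\theta_{\alpha\beta}(p)\,v\big)_\alpha ,
\]
% which is smooth in p; the cocycle conditions follow from the chain rule.
```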
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321997761726379, "perplexity_flag": "head"}
http://patternsinpractice.wordpress.com/2011/06/02/guess-check-generalize-and-the-scrubbing-calculator/
#### Patterns in Practice
the blog of the Mathematical Practices Institute

# Guess-Check-Generalize and the Scrubbing Calculator

June 2, 2011

Several other blogs have been talking about Bret Victor's Kill Math website, including its Scrubbing Calculator. I'd like to talk about how the Scrubbing Calculator is both very similar to and very different from an approach to solving word problems we call "Guess-Check-Generalize". Here's a graphic from a sample problem solved Scrubbingly.

The challenge is to find the height of each bar, given the information about other heights. When I first taught Algebra 1, my approach to this would be to get students to "translate" the problem into algebra, trying to get them to write an equation that would be true for the right height. And the results were a mixed bag, for a lot of reasons that might be good for a different post. I think there's something inherently challenging about trying to write a fully symbolic statement immediately from a problem situation.

The concept of guess-check-generalize starts by changing the nature of the problem. The question to start with changes: from "What is the correct bar height?" … to "Is 100 the correct bar height?" Here, 100 could have been any number at all, it's a total guess. (Some teachers using this method ask students to write down their first guess before even presenting the problem, since students may be afraid to guess incorrectly.)

Now we see if the guess is right. Up until now, I agree completely with the philosophy of the Scrubbing Calculator: make a guess at the bar height, then see if it's right. This is where things get interesting, because there's more than one way to check the guess. The most conventional way is to add up the heights on the right side, and a student might do this:

60 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 20 + 100 + 140 = nope

It doesn't actually matter what that equals, as long as it doesn't equal 768. Guess-check-generalize is about determining a process you can use to check any guess; then, the process you've described becomes an equation to solve. And the process can evolve from one guess to another, as students realize they've used the same number 8 times or that this thing is twice that thing.

So 100 was wrong; take a second guess. It doesn't have to be a better guess, because you're not trying to nail the numeric answer, you're trying to nail the process of checking a guess. Let's guess 36. Checking this guess a student might notice they could combine some terms from before:

$60 + (9 \cdot 36) + (8 \cdot 20) + 140 = 684, \text{nope}$

No more guessing. The third guess is $h$, a variable. (Students may need more guesses, especially at first; eventually some only need one or zero guesses.) Take all the places the guess was found and replace them with the variable, noting that the correct guess yields 768:

$60 + (9 \cdot h) + (8 \cdot 20) + 140 = 768$

Solving that equation and bringing the answer back into context are still issues, but I always found the largest difficulty with the dreaded "word problem" is an inability to take the situation and make a mathematical statement about it. When almost every real mathematical situation an adult encounters is a "word problem", this is a major issue that needs to be addressed.

Here's why I think guess-check-generalize is a good way of dealing with word problems.

• The method is general in nature. The method presented here works equally well for linear and nonlinear situations, for problems with a variable on each side of the equation, for rates, coin values, painting houses, counting beans, whatever. This is a general-purpose tool that is useful over many years, including some surprising topics like generating the equations of lines and circles. (More on this some other time.)

• This is what people do with problems. When a problem is new or overly complicated, picking a few cases and following them through leads to an understanding of what happens in general. Traditional word problem methods expect students to have the generalization at the ready, and it just doesn't work that way in reality. The concept of generalizing from repeated example is a fundamental one that all students should learn, not just those heading into STEM careers.

• Students have a simple place to start from. By asking students to guess at the answer, the difficulty level of word problems can be reduced by 2 or 3 grade levels immediately. Students with language difficulty can learn what is happening by calculating with numbers, connecting the new language to the calculations they know, then advancing to symbols when appropriate.

• There are no black boxes. Students construct equations and can understand where they come from. Multiple equations with the same answer can be found from different techniques used on the same problem, leading to good discussions about the basic moves of algebra and how different equations and formulas are related.

• Connections between arithmetic and algebra are reinforced. Bret Victor says this: "We are accustomed to assuming that variables must be symbols. But this isn't true — a variable is simply a number that varies." I'd like this to change. Too many students only see variables as symbols for manipulation, and not as numbers that vary. Students make mistakes with variables they would never make with numbers. When this happens, it is because they don't see that the symbol represents a number. Since arithmetic is at the heart of guess-check-generalize, students are asked to solidify their number skill and sense. Students begin to guess "nice" numbers, like a multiple of 3 when they see that dividing by 3 will be part of the process.

It is on this last point that I disagree deeply with the philosophy of the Scrubbing Calculator; students don't really do any of the calculating. In the end, a student might see that the answer produced by Scrubbing works, but if there is more than one answer, there's no way for a student to discern this. If the problem changes slightly from its original form (say, to a 1024-high screen), the Scrubbing solution method is to start from scratch, which doesn't help students generalize toward functions and formulas (in this case, a relationship between the screen height and the bar height).

What if the correct answer to the equation is $\sqrt 2$ or even $\frac 2 3$? I don't see how the Scrubbing Calculator could get these answers. I agree that too many students don't see the real meaning of a variable, but this is no reason to ditch symbolic algebra, this is a reason to make the connections between arithmetic and algebra as strong as possible, as often as possible.

The Scrubbing Calculator's method is an opportunity for students to make deep connections between arithmetic and algebra, between real problems and symbolic algebra.
I'm disappointed that its intended purpose is to remove symbolic algebra altogether, because it could be pretty cool. What do you think?

For homework, solve this problem using guess-check-generalize or come up with a better one. No scrubbing, please!

Nancy takes a long car trip from Boston. In one direction she drives at an average speed of 60 miles per hour, and in the other direction she drives at an average speed of 50 miles per hour. She's in the car a total of 38 hours for the round trip. How far from Boston was her destination? (Bonus: what city did she drive to?)

Bowen is one of the lead authors of CME Project, a high school mathematics curriculum focused on mathematical habits of mind. Bowen leads professional development nationally, primarily on how math content can be taught with a focus on higher-level goals. Bowen is also a champion pinball player and once won \$1,000 for knowing the number of degrees in a right angle.

### 10 Responses to Guess-Check-Generalize and the Scrubbing Calculator

1. jim says: I say Atlanta at 1045 miles, but I'm just guessing.

2. Sean says: Great stuff here. Some questions:
1. How do the majority of students get from guess 2 to step 3 without explicit guidance? Most students, when guessing and checking, continue doing so, unaware that there is something to generalize. When it's modeled like step 3, some students check out. Their intuition and number sense have been preyed upon. Now it becomes a 'math class' problem.
2. Why does the first guess have to be wrong? When the first guess is right, through coincidence or refined number sense, a whole new set of questions comes out, like: 'Is that the only answer?' 'Can we prove that it's the only answer?' 'Can you provide answers that are wrong?' 'Is there a visual representation for why it's the only answer?'

3. Bowen Kerins says: Thanks for the questions!
1. The reality is that it will take more than two guesses for the first such problem. Ask students to continue focusing on keeping track of their steps, getting the "rhythm of the calculations" (as Al would say). What I'd want to hear is a student who gets sick of guessing and says "Stop it already! Whatever number you use, it's just going to be…" That kid is ready to generalize. I have seen it taught explicitly by asking students to take three guesses, then the fourth guess is a variable. I feel it should be up to the student to decide when to generalize, which may at first take many guesses. You can also "preload" this behavior by using the same tactics when building expressions for things like "3 less than a number" — do "3 less than 20" then "3 less than 50" until "3 less than n" makes sense.
2. These are great, great questions, and well worth asking. I also feel that if a student can determine the correct answer by some means, they shouldn't be required to also create an equation to solve the same problem. It's math for no purpose at that point. It's good to present a mix of problems that have 'nice' answers and ones that deliberately do NOT have nice answers. The first problem we present using this method is one from Benjamin Banneker; we call it the "four fours" problem: There's this number. When you add 4 to it, subtract 4 from it, multiply it by 4, or divide it by 4, you get four different answers. That's not too interesting, but the four different answers add up to exactly 60. What is the number? (Banneker's original phrasing flips the question: Divide 60 into four such parts that the first being increased by 4, the second decreased by 4, the third multiplied by 4, the fourth part divided by 4, that the sum, the difference, the product and the quotient shall be one and the same number.) I like this problem because it's simple to take a guess, students should eventually be generalizing, and the equation is relatively necessary to find the answer. Plus, the problem is over 200 years old!
• jsb16 says: As another bonus, the writer of the problem wasn't a white guy…

4. Pingback: dy/dan » Blog Archive » Guess-Check-Generalize

5. P.J. Karafiol says: Another advantage of this method is that it puts metacognition right into the process from the beginning: because students start by asking the question "is this actually right?" and maybe "how far off am I?", I think they would be more likely to ask those questions at the end, too. Good stuff!

6. Christopher says: Chattanooga. She wanted to ride the Choo-Choo. Here's my concern with the scrubbing calculator, different from yours, I think. I don't get how the scrubbing calculator helps the student who is struggling with the setup. In the original post I read about it, Bret writes: This is a simple problem, but it's not obvious. It typically would require either writing out and solving an equation: 2910-x=426+x or recognizing the "trick" that we have to split the difference. Then he sets up two equations, 2910-1000=1910 and 426+1000=1426, and "scrubs" until they are equal. I'm fine with this, but I find that for this problem, the same insight is required for either technique. Namely, that we need to add to one person's total and subtract from the other one. This seems like a fundamental structural insight. In my experience with community college developmental mathematics, this is the insight students struggle with and not so much the solving of the equation. So let's just be clear about what the scrubbing calculator does and what it does not. When Bret alludes to the calculator being an alternative to using division and subtraction to solve an equation, I agree. And I'm curious about the consequences of the tool from that perspective. But he seems also to imply that the scrubbing calculator is an alternative to setting up an equation. And I disagree. I think we need the same structural insights into the problem to set up a scrubbing solution as a symbolic algebra one. Not that there's anything wrong with that. I'm just not sure the scrubbing calculator really solves the pedagogical problem it claims to.

7. Bret says: @bowen "If the problem changes slightly from its original form (say, to a 1024-high screen), the Scrubbing solution method is to start from scratch, which doesn't help students generalize toward functions and formulas (in this case, a relationship between the screen height and the bar height)." You may have missed the section on unlocked numbers, which addresses this very issue. If scrubbing bothers you, take a closer look at unlocking to see how it allows you to turn any number into a variable, and solve for it without scrubbing at all. @christopher "I'm just not sure the scrubbing calculator really solves the pedagogical problem it claims to." Believe me, I have never claimed to address any pedagogical problems whatsoever. My interest in these tools is purely practical. Most people I know (adults, solving problems they care about) have no trouble with the insight that we have to add to one person's total and subtract from the other — that's what it means to pay for something. But these people won't go near anything with an "x", and don't know or care about "moving terms to the other side of the equation".

8. Bowen Kerins says: Thanks for the comments, Bret. My first thought is that philosophically we are a lot closer than I thought! I agree that there is a disservice to a large number of students when they are taught only symbolic algebra. I feel that many of the real purposes of learning algebra are ones that can and should apply to adults solving problems they care about: generalizing from examples, looking for structure and similarities between different kinds of problems, reasoning about and picturing calculations, and more. These are what an algebra course should really be about. I feel that symbolic algebra, and the connections between arithmetic and algebra, are critical to a deep understanding of algebraic habits of mind — and that these habits of mind are what school mathematics should be about. You posted a link to a paper by William Thurston, and this paper contains a quote we (at CME Project) frequently cite when we introduce the philosophy of the program: "What mathematicians most wanted and needed from me was to learn my ways of thinking, and not in fact to learn my proof of the geometrization conjecture for Haken manifolds." I feel this should still be true if you replace that last part with "the quadratic formula" or even "the basic moves of solving equations". Mathematics education should be preparing students for their adult lives, and courses that are purely about symbol manipulation do not accomplish this. I was unclear in my comment about relationships between variables. I meant that there doesn't seem to be a way to find the overall relationship between two variables when using the Scrubbing Calculator. For example, if a proportional relationship emerged between two variables, it could be observed through several specific cases but not generalized. Similarly, it would be difficult to identify when variables were in an inverse, quadratic, or exponential relationship. There's more to say about Guess-Check-Generalize (more posts some other time), but I think there is a lot to talk about, educationally, as a result of these types of tools. I am hopeful that school mathematics courses can better serve students by targeting high-level thinking goals, the habits of mind that Thurston talks about, instead of just being about content goals and "mindless manipulation" of symbols. But I still think the symbols of algebra are necessary to accomplish those higher goals…

9. Pingback: Lessons from the CME project – Verbal Systems | gealgerobophysiculus
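To tie the post's walkthrough to something executable, here is a small Python sketch of the guess-check-generalize flow for the bar-height example (checking a guess against the 768 total) and for the homework problem; the function names and the final "solve it directly" step are my own illustration, not part of the original post.

```python
# Guess-check-generalize, sketched in code (illustrative names, not from the post).

def check_bar_height(h):
    """The 'check a guess' step for the bar-height example: the resulting total."""
    return 60 + 9 * h + 8 * 20 + 140  # same structure as 60 + 9h + 8*20 + 140

for guess in (100, 36):
    total = check_bar_height(guess)
    print(f"guess h = {guess}: total = {total} -> {'yes' if total == 768 else 'nope'}")

# 'Generalize': the checking process becomes the equation 60 + 9h + 160 + 140 = 768,
# which can now be solved instead of guessed at.
h = (768 - 60 - 8 * 20 - 140) / 9
print("generalized answer: h =", h)

def check_trip(distance):
    """Check a guess for the homework problem: total driving time for the round trip."""
    return distance / 60 + distance / 50

for guess in (900, 1200):
    print(f"guess d = {guess} miles: {check_trip(guess):.1f} hours (want 38)")

# Generalizing the check gives d/60 + d/50 = 38.
d = 38 / (1 / 60 + 1 / 50)
print(f"generalized answer: d = {d:.1f} miles")
```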
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451592564582825, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/2748/why-is-the-time-dimension-special
# Why is the time dimension special?

Generally speaking, I've heard numerical analysts utter the opinion that "Of course, mathematically speaking, time is just another dimension, but still, time is special." How can this be justified? In what sense is time special for computational science?

Moreover, why do we so often prefer to use finite differences (leading to "time-stepping") for the time dimension, while we apply finite differences, finite elements, spectral methods, ..., for the spatial dimensions? One possible reason is that we tend to have an IVP in the time dimension, and a BVP in the spatial dimensions. But I don't think this fully justifies it.

-

## 3 Answers

Causality indicates that information only flows forward in time, and algorithms should be designed to exploit this fact. Time-stepping schemes do this, whereas global-in-time spectral methods or other ideas do not. The question is of course why everyone insists on exploiting this fact -- but that's easy to understand: if your spatial problem already has a million unknowns and you need to do 1000 time steps, then on a typical machine today you have enough resources to solve the spatial problem by itself one time step after the other, but you don't have enough resources to deal with a coupled problem of $10^9$ unknowns.

The situation is really not very different from what you have with spatial discretizations of transport phenomena either. Sure, you can discretize a pure 1d advection equation using a globally coupled approach. But if you care about efficiency, then by far the best approach is to use a downstream sweep that carries information from the inflow to the outflow part of the domain. That's exactly what time-stepping schemes do in time.

-

That's a good point... memory is definitely a major constraint! :) – Paul♦ Jul 9 '12 at 17:05

I definitely see the point that causality comes naturally with finite differences, but not with "global coupling". Conversely, "shooting methods" for solving BVPs sort of do the opposite: they introduce unwanted causality. Analytically speaking, for certain equations (e.g. 2nd-order hyperbolic PDEs) causality is needed for uniqueness. However, in some cases it is not, and I guess then one may very well do spectral methods in time as well. As you say, I think reducing the size of the system is also a big one. And it makes more sense to do FD in time than in some arbitrary spatial dimension. – Patrick Jul 11 '12 at 10:22

While there are some exceptions (e.g. fully discrete finite element methods), temporal discretization generally implies an inherently sequential dependence in the flow of information. This dependency restricts semi-discrete algorithms (BVP in space, IVP in time) to computing solutions to subproblems in a sequential manner. This discretization is usually preferred for its simplicity and because it offers the analyst many well-developed algorithms for higher accuracy both in space and time. It is possible (and simpler) to use finite differences in the spatial dimensions as well, but finite element methods offer easier flexibility in the type of domain of interest (e.g. non-regular shapes) than finite difference methods. A "good" choice of spatial discretization is often very problem dependent.
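To illustrate the causality/sweep idea in the first answer concretely, here is a minimal Python sketch (my own illustration, assuming a first-order upwind scheme and arbitrary grid sizes): each time step only needs the current spatial slice, so memory scales with the spatial problem, not with the space-time problem.

```python
import numpy as np

# First-order upwind time stepping for u_t + a u_x = 0 on a periodic 1D grid.
# Each step needs only the current spatial slice: memory ~ O(nx), not O(nx * nt).
a, nx, nt = 1.0, 200, 200
dx = 1.0 / nx
dt = 0.5 * dx / a                      # CFL-limited step
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)      # initial pulse centred at x = 0.3

for _ in range(nt):
    # Upwind difference: information is carried with the advection velocity,
    # the discrete analogue of the "downstream sweep" / causality argument.
    u = u - a * dt / dx * (u - np.roll(u, 1))

# After nt steps the pulse has travelled a*dt*nt = 0.5, so its centre is near x = 0.8.
print("pulse centre now at x ~", x[np.argmax(u)])
```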
-

Similar to the causality Wolfgang mentioned in his post, we could see the reason why the time dimension is special from a Minkowski spacetime point of view: $\newcommand{\rd}{\mathrm{d}}$ the $(3+1)$-dimensional spacetime has an inner product defined as $$(A,B) = A_x B_x + A_y B_y + A_z B_z - \dfrac{1}{c^2}A_t B_t$$ if $A$ and $B$ are two 1-forms in Minkowski spacetime: $A = A_x \rd x + A_y \rd y + A_z \rd z + A_t \rd t$, and $B$ is defined in a similar fashion. The intuition behind defining an inner product (or rather, a metric) is to impose the idea of an absolute light speed, such that two different points (events) in the spacetime have zero distance (happen at the "same time", like we are observing the motion of galaxies billions of light-years away as if they are moving right now) if they are on the same light cone.

As you can see, this inner product is not positive definite due to the presence of the time dimension scaled by the light speed $c$. Therefore, intuitively speaking, when treating a problem concerning a quantity propagating in spacetime, we cannot simply apply theorems for the 3-dimensional Euclidean metric to a $(3+1)$-dimensional spacetime; just think of how 3-dimensional elliptic PDE theories and their corresponding numerical methods differ drastically from hyperbolic PDE theories.

Maybe off-topic, but another major difference of space vs spacetime (elliptic vs hyperbolic) is that most elliptic equations model equilibrium, and ellipticity gives us "nice" regularity, while there are all kinds of discontinuities in hyperbolic problems (shocks, rarefactions, etc).

EDIT: I don't know of a dedicated article about the difference beyond the definitions, but based on what I learned before: a typical elliptic equation, like the Poisson equation or elasticity, models a static phenomenon and has a "smooth" solution if the data and the boundary of the domain of interest are "smooth"; this is due to the ellipticity (or rather, the positive-definiteness) of the governing differential operator. This type of equation leads us to a very intuitive Galerkin-type approach (multiply by a test function and integrate by parts), and typical continuous finite elements work well. Similar things apply to a parabolic equation like the heat equation, which is essentially an elliptic equation marching in time and has a similar "smoothing" property: an initial sharp corner will be smoothed out over time. We call this "diffusive" or "dissipative".

A hyperbolic problem, normally derived from a conservation law, is "conservative" or "dispersive". For example, the linear advection equation, describing how a certain quantity flows with a vector field, conserves what that quantity looks like initially; it just moves spatially along the vector field, and discontinuities will propagate. The Schrödinger equation, another hyperbolic equation, is however dispersive: it describes the propagation of a complex quantity, and a non-oscillatory initial state will become different oscillatory wave packets over time.

Since you mentioned "time-stepping", you could think of the quantity as "flowing" in the time "field" with a certain velocity, with causality playing the role it does in a linear advection BVP: we only have to impose the inflow boundary condition, i.e., what the quantity is like when flowing into the domain of interest, and the solution tells us what the quantity is like when flowing out — an idea very similar to every method that uses time-stepping. Solving a 2D advection equation in space is like solving a 1D one-sided propagation problem in spacetime. For numerical schemes, you could google spacetime FEM.

-

I must say that most of what you say is above my head. But the last paragraph was very interesting, and definitely lends insight. Do you have a link to (space and spacetime) vs (elliptic and hyperbolic)? – Patrick Jul 11 '12 at 10:08

@Patrick Thanks for the interest, I have edited more into my answer. – Shuhao Cao Jul 11 '12 at 17:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400760531425476, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/6583/can-gravity-be-described-in-terms-of-velocities-and-rotations-of-local-minkowski
# Can Gravity be described in terms of velocities and rotations of local Minkowski space?

In the paper "The River Model of Black Holes" (Am. J. Phys. 76:519-532, 2008), Andrew J. S. Hamilton and Jason P. Lisle, http://arxiv.org/abs/gr-qc/0411060, the authors give a way of describing the action of a rotating (and/or charged) black hole through a collection of local Minkowski frames, that is, as a sort of collection of preferred reference frames, or more accurately, by the tidal effects arising from the movement from one frame to another.

Each frame is defined by a "river field" $\omega_{km}$ (see around equation 74). This field is composed as follows: $\omega_{0m} = \beta_m$ is the "velocity" of the river, while $\mu^i = 0.5 \epsilon^{ikm}\omega_{km}$ gives the "twist" of the river. Then the motion of objects due to the black hole can be calculated from the tidal change $\delta\omega^k_m$, which is a local infinitesimal Lorentz transformation.

My question is this: Can this description of a black hole be used to describe general relativity? Note that there is an obvious limitation: since this is based on a flat background metric, you can't get wormholes and the like. But I mean, subject to the requirement of trivial topology, can every GR situation be described by a "river field"?

-

## 2 Answers

Daniel's answer may be a little bit generic; although I am not exactly sure what you are looking for, one similar concept is that of a Tetrad; there are other names, like Frame Field, for the same concept. Here is the introductory paragraph from Wikipedia to see if this is what you are interested in:

Frame fields always correspond to a family of ideal observers immersed in the given spacetime; the integral curves of the timelike unit vector field are the worldlines of these observers, and at each event along a given worldline, the three spacelike unit vector fields specify the spatial triad carried by the observer. The triad may be thought of as defining the spatial coordinate axes of a local laboratory frame, which is valid very near the observer's worldline.

The Tetrad formalism is closely related to 2-spinors, and Tetrads can twist and turn as they move through space-time, encoding lots of GR properties more directly than other formalisms.

-

As I said in a response to a previous question of yours, a good way to think about these issues is in terms of the Frobenius theorem (in differential topology) — or, more generally, in terms of foliations. What it seems to me was done in the work you cited is exactly this: a particular 'distribution' (in the sense of Frobenius's theorem above) was chosen, i.e., a particular collection of flat approximations [to the curved spacetime] — or, if you will, a certain foliation was chosen.

(Note that there's a clear analogy between GR and Fluid Mechanics: both are so-called Classical Field Theories. Landau's "Course of Theoretical Physics", in 10 vols, comes in handy here. My point being that, effectively, it doesn't matter whether you're talking about GR or Fluid Mechanics — and this is the basis behind Unruh's "sonic holes". ;-)

So, at the end of the day, you can use the very same construction in GR, Fluid Mechanics, or any other 'Classical Field Theory'. I think you can take a look at MacDowell-Mansouri gravity and Cartan geometry :: arXiv:gr-qc/0611154 and see how the tools of Cartan Geometry can be used in GR (and, more generally, in Differential Geometry) — in particular, you may be interested in the so-called repère mobile (moving frame).
PS: I should have said something similar to this in the answer to your previous question. Note how the Equivalence Principle fits nicely in this framework (moving frames, foliations, 'distributions', etc). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923675537109375, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/156810-analysis-subsequential-limits.html
# Thread:

1. ## Analysis subsequential limits

I need to prove:
i) $\limsup_{n\to\infty} S_n$ is an element of $SL(S_n)$;
ii) the same thing, but with sup replaced by inf.

I know $SL(S_n)$ is the set of all limits of all convergent subsequences of $S_n$. Also, $\limsup_{n\to\infty} S_n$ can be defined as $\sup SL(S_n)$, so it may be easier to show that $\sup SL(S_n)$ is an element of $SL(S_n)$.

2. Sorry, not a specialist in this area, but what is $S_n$? Is it just a sequence of real numbers $S_1, S_2, \dots$? If so, then what is $\sup S_n$?

3. Can only help you if I can understand your question. Explain or state the definition of what $S_n$ and $SL(S_n)$ are.

4. $S_n$ is a sequence of real numbers; $SL(S_n)$ is the set of all limits of all convergent subsequences of the sequence $S_n$.

5. Yes, but what is $\sup S_n$? Is it a constant sequence where each element is equal to the supremum of the sequence $\{S_n\}_{n=0}^\infty$? Or is it a supremum of an individual number $S_n$? Or maybe, what I think is likely, it is $\sup_{m\ge n}S_m$, as in here, so the whole expression is $\displaystyle\limsup_{n\to\infty}S_n := \lim_{n\to\infty}(\sup_{m\geq n}S_m)$?

6. Originally Posted by emakarov [...] It is the last definition that you gave with the link.

7. Let's prove that there exists a subsequence $S_{n_1}, S_{n_2},\dots$ whose limit is $\lim_{n\to\infty}(\sup_{m\geq n}S_m)$. Let $T_n=\sup_{m\geq n}S_m$ and let $a=\lim_{n\to\infty}T_n$. We can ensure that for every $i$ and every $j\ge i$, $|S_{n_j}-a|<1/i$, i.e., the entire tail of the subsequence starting from $S_{n_i}$ lies in the $1/i$-neighborhood of $a$. Indeed, suppose $i$ is given and $S_{n_1}, \dots, S_{n_{i-1}}$ have already been chosen. From the definition of $a$, there is a point $m$ such that for all $n\ge m$, $|T_n-a|<1/(2i)$ (*). Without loss of generality, we can assume that $m>n_{i-1}$; otherwise, choose a new $m$ as $n_{i-1}+1$; then (*) still holds. In particular, $|T_{m}-a|<1/(2i)$, i.e., $|(\sup_{k\ge m}S_k)-a|<1/(2i)$. From the definition of sup, there exists an index, which we'll call $n_i$, such that $n_i\ge m>n_{i-1}$ and $|S_{n_i}-\sup_{k\ge m}S_k|<1/(2i)$. Thus, $|S_{n_i}-a|<1/i$.

TL;DR: We choose a subsequence whose limit is the limit superior of $\{S_n\}$ (call it $a$). Since $a$ is a limit, sups of tails of $\{S_n\}$ come close to $a$. Since those are sups, there are individual elements $S_n$ that come close to $a$.
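A concrete instance may help fix the idea (my own example, not from the thread above):

```latex
% Example: S_n = (-1)^n (1 + 1/n).
% Here T_n = \sup_{m \ge n} S_m = 1 + 1/n' for the smallest even n' \ge n, so T_n \to 1,
% i.e. \limsup_{n\to\infty} S_n = 1, and the even-indexed subsequence realizes it:
\[
S_{2k} = 1 + \tfrac{1}{2k} \;\longrightarrow\; 1 = \limsup_{n\to\infty} S_n ,
\]
% exhibiting \limsup S_n as an element of SL(S_n); similarly \liminf S_n = -1 is the
% limit of the odd-indexed subsequence S_{2k+1} = -\bigl(1 + \tfrac{1}{2k+1}\bigr).
```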
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 40, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402971863746643, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-applied-math/9471-gas-liquid-pressure-temperature-density.html
# Thread:

1. ## Gas/Liquid pressure from temperature and density

How can I calculate a gas's or a liquid's pressure from its density and its temperature? For example, for air, I suppose the pressure $p$ and the density $\rho$ are proportional, $p\ \propto\ \rho$. Is this true? That would mean that since the unit of pressure is $Pa$ and the unit for density is $kg/m^3$, the unit for pressure per density would be $\frac{Pa}{kg/m^3}\ =\ \frac{Pa\cdot m^3}{kg}$, or $\frac{Nm^3/m^2}{kg}\ =\ \frac{Nm}{kg}$, or $\frac{m^2}{s^2}$. I'm just thinking a little bit. And since the atmospheric pressure is $101\ 325\ Pa$, and the air density at one atm of pressure is $1.2$ to a good $1.3\ kg/m^3$ at normal temperatures (here in Sweden), we can then say that $\frac{p}{\rho}\ =\ \frac{101\ 325\ Pa}{1.25\ kg/m^3}\ =\ 81\ 060\ Pa\cdot m^3/kg$. Now I don't know if this is true. And this is only for the temperature $11^\circ\ C\ \sim\ 52^\circ\ F$. Then I guess it is not the same thing at all with liquids, not water at least, since the density of water (at $4^\circ\ C$) is always $1\ kg/dm^3$ but the pressure can change. Okay, I guess that is not completely true, but almost.

2. Originally Posted by TriKri: How can I calculate a gas's or a liquid's pressure from its density and its temperature? [...] Start with the Ideal Gas Law (see here). If you need something more, follow the link at the bottom of the linked page labeled "Equation of State". RonL

3. Also, your temperatures should be in Kelvin (K) if in metric or Rankine (R) if in English units. Both scales place their zero at absolute zero. -Dan

4. Thanks! That was just what I was looking for. Edit: topsquark: Yes, I suppose it should. I prefer Kelvin since it is closer to the temperature system in Sweden (Celsius). By the way, did you know Anders Celsius was from Sweden?

5. Originally Posted by TriKri: By the way, did you know Anders Celsius was from Sweden? I would have guessed he was from France.

6. I also would have guessed France. I'm not surprised you would choose to do the temperature in K. I don't know anyone who uses the R scale anymore. (In fact I'm one of the few I know that have ever heard of it!) -Dan

7. Originally Posted by topsquark: I don't know anyone who uses the R scale anymore. [...] Rankine scale, right? I think the magic number is 84 R (for boiling). I use the C scale; in fact, I need to convert from F to C because my parents use C, for they come from the Mother Country.

8. Originally Posted by ThePerfectHacker: Rankine scale, right? I think the magic number is 84 R (for boiling). [...] 84 R for boiling what? 1 degree R is equal to 1 degree F, so the freezing temp of water is about 459 R; then adding 212 R to get to boiling water gives about 671 R for boiling water. -Dan

9. Why is it not possible to reach absolute zero, again? I have heard of this law.

10. Originally Posted by topsquark: 84 R for boiling what? [...] Ahem, the freezing temp of water is 491.67 R, and you have to add 180 R to get the boiling temperature. So 671 is right at least. - Kristofer

11. Originally Posted by TriKri: Ahem, the freezing temp of water is 491.67 R [...] This is what happens when I use my Swiss-cheese-like memory instead of looking it up like I should have. Thanks for the spot! -Dan
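A small Python sketch of the ideal-gas-law calculation suggested in the thread (my own code, for air only; the specific gas constant of dry air, about 287 J/(kg·K), is a standard value, and the input numbers are just the thread's example):

```python
# Ideal gas law in mass form: p = rho * R_specific * T
# For dry air, R_specific = R / M_air ~ 8.314 / 0.02897 ~ 287 J/(kg*K).
R_SPECIFIC_AIR = 287.05  # J/(kg*K)

def pressure(rho, temperature_c):
    """Pressure in Pa from density (kg/m^3) and temperature (degrees C)."""
    temperature_k = temperature_c + 273.15  # absolute temperature, as noted in the thread
    return rho * R_SPECIFIC_AIR * temperature_k

# The thread's numbers: rho ~ 1.25 kg/m^3 at about 11 degrees C.
p = pressure(1.25, 11.0)
print(f"p ~ {p:.0f} Pa")                   # ~ 1.02e5 Pa, close to 101 325 Pa
print(f"p/rho ~ {p / 1.25:.0f} Pa*m^3/kg")  # ~ 81 600, close to the 81 060 in the post
```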
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9608381390571594, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/239189/a-theorem-on-transient-and-recurrent-state-in-a-dtmc?answertab=oldest
# A theorem on transient and recurrent states in a DTMC

Is the following statement true: in a finite Markov chain, if $i$ is a transient state then there is at least one recurrent state $j$ such that $j$ is reachable from $i$?

-

## 2 Answers

I'm not sure if you're looking for an algebraic proof, but intuitively I was thinking: assume your finite state space is $S = \{1, 2, ... , n\}$. Let $X_i$ denote the state you are in, for all $i \in S$. The basic form of a transient "map" would be when $$X_1 \rightarrow X_2 \rightarrow X_3 \rightarrow ... \rightarrow X_i$$ It is possible that $i = n$. Now when you reach state $i$, you have two options: either move to another state, or remain in state $i$. There is no option of "nothing happening" in state $i$, as this would just mean that you remain in state $i$, i.e. the probability of going from $i \rightarrow i$ is $1$. Looking at option $1$, if you go from $X_i$ to some other state $X_j$, you have just created a recurrence class between states $X_i$ and $X_j$, and so you have gone from a transient state (say $X_1$) to a recurrent state ($X_i$ or $X_j$). Looking at option 2, if $X_i \rightarrow X_i$, then clearly state $i$ is recurrent and again, you have gone from a transient state to a recurrent state. So I think your statement is correct.

-

Yes, for any finite-state Markov chain its state space admits the following Doeblin decomposition $$S = \sum_{i=1}^n H_i+E$$ where the sets $H_i$ are absorbing and recurrent, and $E$ is the set of all transient states. In particular, it means that $E$ does not have an absorbing subset, and for any $x\in E$ there is $1\leq i\leq n$ such that $H_i$ is reachable from $x$. Since $H_i$ is recurrent, each state of it is reachable from $x$. This answers your question; however, if you can be more precise about your background on Markov chains, I could provide an answer that may fit you better.

-
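A tiny numerical illustration of the statement (my own example, not from the answers above): in the 3-state chain below, state 0 is transient and the closed recurrent class {1, 2} is reachable from it, which shows up as all long-run probability mass leaving state 0.

```python
import numpy as np

# Transition matrix: state 0 is transient (it leaks to state 1 and never returns);
# states 1 and 2 form a closed, recurrent class.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
])

# Starting in the transient state 0, the probability of still being in state 0
# after n steps is 0.5**n -> 0, while the mass in the recurrent class -> 1.
for n in (1, 5, 20):
    print(n, np.linalg.matrix_power(P, n)[0])  # row 0: distribution after n steps from state 0
```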
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.957102358341217, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/52907/wheres-the-energy-in-a-boosted-capacitor
# Where's the energy in a boosted capacitor?

Suppose I look at a parallel plate capacitor in its rest frame and calculate the electrostatic energy, $E$. Next, I look at the same capacitor in a primed frame boosted in the direction perpendicular to the plane of the plates. In this frame, the $E$-field is the same strength, there is no magnetic field, and the volume over which the $E$-field extends is less by a factor $1/\gamma$. This suggests $E' = \frac{1}{\gamma} E$, but relativity states that energy transforms as $E' = \gamma E$. Where is the missing energy?

-

Surely in the primed frame there are large magnetic fields generated at the leading and trailing edges of the capacitor from the $\frac{\partial \vec{E}}{\partial t}$ term, and from the plates themselves from the $\vec{J}$ term, no? – dmckee♦ Feb 2 at 18:59

@dmckee I did think of that, but am not sure how to make it work. By "no magnetic field", I just meant that if you transform the constant E-field, it doesn't create a B-field. – Mark Eichenlaub Feb 2 at 19:04

– Raindrop Feb 3 at 8:39

I already did that search and found those notes you linked. Why are you linking to them when they don't answer the question? – Mark Eichenlaub Feb 3 at 16:37

1 Rindler and Denur wrote a paper on this paradox in 1987 in the AJP: "A simple relativistic paradox about electrostatic energy". – Larry Harson Mar 4 at 17:07

## 3 Answers

First of all, thanks for this question because it made me think about relativity, which was always fun! It's true that $E'=\frac{1}{\gamma} E$. You say that relativity states that the energy should increase by a factor of $\gamma$. This is certainly true for a massive particle whose energy is $\gamma mc^2$, but why would you expect this to hold for the energy in the fields in this situation? I think the answer simply is that there is no contradiction; the energy in the fields transforms by a factor of $\frac{1}{\gamma}$ and that's that!

Actually, not quite! (as Mark argued in the comments) After the discussion in the comments below, I realized that perhaps "that's that" was both premature and doesn't get at the heart of Mark's question. So I dug deeper (namely I scoured Jackson's EM) and I found an answer that is significantly more complete.

The definition of the energy and momentum densities in the fields, given by the $\Theta^{00}$ and $\Theta^{0i}$ components of the (symmetric-traceless version of the) stress tensor (see Jackson 12.114) $$\Theta^{00} = \frac{1}{8\pi}(\mathbf E^2+\mathbf B^2), \qquad \Theta^{0i} = \frac{1}{4\pi}(\mathbf E\times\mathbf B)^i$$ leads to the following candidate for the electromagnetic four-momentum: $$P_\mathrm{cand}^\mu=\left(\int d^3 x\,\Theta^{00}, \int d^3x\, \Theta^{0i}\right)$$ Unfortunately, this quantity does not transform as a four-vector should in the presence of sources. The basic reason for this is that $$\partial_\alpha\Theta^{\alpha\beta} = -F^{\beta\lambda}J_\lambda/c \neq 0$$ and the spatial integrals of $\Theta^{0\alpha}$ yield a four-vector only if the four-divergence of the tensor vanishes identically. To remedy this, one needs to add a term $P^{\mu\nu}$ to the stress tensor that takes into account the so-called Poincare stresses of the sources: $$S^{\mu\nu} = \Theta^{\mu\nu} + P^{\mu\nu}$$ This new tensor does have vanishing four-divergence, provided the Poincare stresses are chosen appropriately for the system at hand, and therefore the spatial integrals of the $S^{0\mu}$ are the components of a four-vector.
Jackson indicates that the Poincare stresses should be thought of as the contributions to the energy of the system that come from the non-electromagnetic forces necessary to ensure the stability of electric charges. From this vantage point, the answer to the question is that the extra energy that seems to go missing is the energy present in the sources. Perhaps this is begging the question in the sense that I have nowhere attempted to write down the Poincare stresses present in the parallel plate capacitor system, but for the time being, I'm more satisfied, and hopefully, Mark, you are too. BTW see Ch. 16 in Jackson for many more details including the explicit calculation of Poincare stresses for a charged shell of uniform density. Cheers! - Energy is a component of a four-vector. It must transform as a four-vector. Imagine putting the capacitor in a black box, which we examine in the rest frame of the box. Coming out of the box are leads that we can use to charge the capacitor. If we charge the capacitor, we increase the mass of the black box according to E = mc^2. But now this is just a box with that extra mass added to it. Its energy must transform the same way as that of a particle. – Mark Eichenlaub Feb 4 at 5:00 I'm not convinced that the energy we are talking about here (that obtained by integrating the energy density $T^{00}$ of the fields) is the time component of a four-vector. In fact, see the bottom of page 607 in Jackson's EM where he writes "the integrals in 12.106 do not appear to have the transformation properties of a 4-vector. For source-free fields they do in fact transform properly, but in general do not." The integrals he is referring to are $\int d^3 x\,T^{00}$ and $\int d^3 x\, T^{0i}$. – joshphysics Feb 4 at 16:29 Which simply indicates that the energy is somewhere else; the black box argument holds. – Mark Eichenlaub Feb 4 at 19:30 2 Great, much clearer, thanks! – Mark Eichenlaub Feb 5 at 0:08 1 Np Mark. Thanks again for this (in my opinion) great question. I must admit I've been thinking about it constantly, and it has really added to my understanding of EM! Also I'm glad you stuck to your guns in the comments. – joshphysics Feb 5 at 0:14 show 2 more comments The field energy in the capacitor rest frame: $$E_0 = \frac{1}{8 \pi} \int \left[ \boldsymbol{E}^2+\boldsymbol{B}^2 \right] d^3x$$ does not have the character of the time-component of a 4-vector, as required for an energy. Instead it's the 00 component of the EM stress tensor $\Theta$ times a non-invariant volume element. One cannot apply this formula to boosted frames. One way to generalize to any frame is to re-write this quantity as the contraction of the stress tensor with a time-like 4-vector: $$E = \int \Theta^{0 \beta} \eta_\beta \, d^3 \sigma_i$$ where $\eta = (1,0,0,0)$ and the invariant volume element $d^3 \sigma = d^3 x$ in the capacitor rest frame. In this particular case, an equivalent but more appealing approach is to note that $\boldsymbol{B}=0$ in the capacitor rest frame, so one can write: $$E = \gamma \int \frac{\boldsymbol{E}^2 - \boldsymbol{B}^2}{8 \pi} d^3 \sigma$$ where $\gamma$ is referenced to the capacitor rest frame (and hence is unity there). The corresponding field momentum is 0 in this frame, again because $\boldsymbol{B}=0$. 
The minus sign is unsettling, but the beautiful feature here is that $\left[\boldsymbol{E}^2 - \boldsymbol{B}^2 \right]$ is Lorentz-invariant, so its value at any space-time point in any frame is the same as at that point in the capacitor rest frame, where it is always non-negative because $\boldsymbol{B}=0$. Combining, one finds: $$E = \gamma m_e c^2$$ where $$m_e = \frac{1}{8 \pi c^2} \int \left[\boldsymbol{E}^2-\boldsymbol{B}^2 \right] d^3\sigma$$ is a frame-invariant electromagnetic mass, just as one would expect for boosts from a rest frame. Ref Jackson Chapter 17. - +1: Yeah I saw this in Jackson as well, but I feel that the viewpoint of including Poincare stresses (slightly earlier in Jackson) is a bit more physical and speaks more to the spirit of the question. – joshphysics Feb 6 at 3:41 I also came up with this problem a month ago and wrote a post in my blog. But I resolved it in a much different way than any other answers posted here. I'm still not quite sure about my argument here but it seems plausible and interesting to me. The total field energy in the capacitor's rest frame is $U=\int \frac{\epsilon_0}{2}E^2dV=\frac{\epsilon_0E^2Ad}{2}$ Now an important point to note is that the capacitor plates are attracting one another, and they cannot simply stay there without crashing into each other. So let's say there's a rigid massless rod between the plates to hold the plates in place. In the capacitor's rest frame, we can calculate the magnitude forces acting on left and right plates. $F=\int (\frac{E}{2})\sigma dA=\frac{\epsilon_0 E^2 A}{2}$ In this frame obviously these forces do not provide any work. However in the primed frame, the rod plays a role as an energy transmitter. I mean, first imagine that there is no rod between the plates. Since the plates are attracting each other, the right plate will slow down and the left plate will speed up. Now if there is a rigid rod between them, their velocities will not change at all. In other words, the rod is taking energy from the left plate at a rate $F.v$ and transfer it into the right plate to account for the attraction. But, remember that the energy can’t teleport from one plate to the other plate instantaneously. Thus perhaps some of it has not reach the right plate yet, and still located between the plates. Or we can also say that the rod's mass is increased. * In the capacitor's rest frame, we can safely say that the event $1$ “force start acting on the left plate” and event $2$ “force start acting on the right plate” must happen simultaneously due to symmetry. However, in the primed frame there's a loss of simultaneity. Event $1$ happens $\Delta t=\gamma \frac{vd}{c^2}$ seconds before event $2$. During this time, the rod steals an amount of energy $\Delta U$ from the left plate without paying any energy to the right plate. $\Delta U=F.v \Delta t=\frac{\gamma \epsilon_0 E^2 Ad}{2} \frac{v^2}{c^2}$ if we take into account this "hidden energy" to the total energy in the primed frame $U'=U/\gamma+\Delta U=\frac{\epsilon_0E^2Ad}{2\gamma}+\frac{\gamma \epsilon_0 E^2 Ad}{2} \frac{v^2}{c^2}$ $U'=\frac{\gamma\epsilon_0E^2Ad}{2}=\gamma U$ * EDIT: The arguments starting from * until the horizontal rule can be replaced with an alternate way of viewing as suggested by Larry Harson: Now suppose that the whole rod suddenly disappear simultaneously in the capacitor's rest frame. Thus the event $1$ “force stop acting on the left plate” and event $2$ “force stop acting on the right plate” must happen simultaneously. 
However, in the primed frame there's a loss of simultaneity. Event $1$ happens $\Delta t=\gamma \frac{vd}{c^2}$ seconds before event $2$. During this time, the rod has done extra an amount of work $\Delta U$ to the right plate without returning any energy to the left plate. $\Delta U=F.v \Delta t=\frac{\gamma \epsilon_0 E^2 Ad}{2} \frac{v^2}{c^2}$ That means the same amount of energy was contained in the rod before disappearance. if we take into account this "hidden energy" to the total energy in the primed frame $U'=U/\gamma+\Delta U=\frac{\epsilon_0E^2Ad}{2\gamma}+\frac{\gamma \epsilon_0 E^2 Ad}{2} \frac{v^2}{c^2}$ $U'=\frac{\gamma\epsilon_0E^2Ad}{2}=\gamma U$ - What events are we talking about? The capacitor can just sit there and never accelerate at all. We just look at it in a boosted frame. The concept of "not reaching the right plate yet" doesn't seem to make sense if the capacitor is just sitting still, and always has sat still. – Mark Eichenlaub Feb 26 at 4:54 I was't saying that the capacitor is accelerated. I mean the plates are attracting each other through electrostatic forces, so if there is nothing that counters these forces(i.e. a rigid rod) the plates will crash into each other. Now if there is a rod between the plates everything looks the same as in the problem description, I know that we are just viewing it in a moving frame. But the rod plays a role here. In the frame where the capacitor appears to be moving, the rod does work on the capacitor. And the total work turned out to negative, as a result the rod's mass is increased. – Emitabsorb Feb 26 at 5:20 But what events are you talking about? That's what I asked. What event are you referring to with 'event 1 “force start acting on the left plate”'. There is no stopping or starting involved; the capacitor just sits there. – Mark Eichenlaub Feb 26 at 6:19 I think it is not quite a problem, we can always recreate the starting condition. Suppose that initially there is already a rod between the plates, call this rod 1. Then at $t=0$, this rod suddenly disappears and rod 2 suddenly appears to replace it. From the point of view of the rest frame of capacitor, doing this won't change the energy of the system. And thus it must be so in any other frame. In the moving frame, rod 2 does not receive work from the left plate and the right plate simultaneously. Thus some energy is transferred into rod 2. And this energy must be taken into account. – Emitabsorb Feb 26 at 7:26 As best I can tell, you are trying to argue that rod 2 is going to make physical changes after it pops into existence. But if it pops into existence in exactly the same state as rod 1 was, then clearly nothing will change. – Mark Eichenlaub Feb 26 at 8:30 show 8 more comments
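To tie the threads of this discussion together, here is a condensed restatement of the energy bookkeeping in the last answer (a summary of that argument, not an independent derivation; $U$ denotes the rest-frame field energy):
$$U = \frac{\epsilon_0 E^2 A d}{2}, \qquad U'_{\text{field}} = \frac{U}{\gamma}, \qquad \Delta U_{\text{rod}} = F v\,\Delta t = \frac{\epsilon_0 E^2 A}{2}\, v \cdot \gamma\frac{v d}{c^2} = \gamma U \frac{v^2}{c^2},$$
$$U' = \frac{U}{\gamma} + \gamma U\frac{v^2}{c^2} = \gamma U\left(\frac{1}{\gamma^2} + \frac{v^2}{c^2}\right) = \gamma U.$$
Once the non-electromagnetic stresses holding the plates apart are counted, the total energy picks up exactly the factor $\gamma$ expected for the time component of a four-vector, consistent with the Poincaré-stress discussion in the first answer.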
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454106688499451, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/46782/projecting-projective-curves
# Projecting Projective Curves I've been stuck for quite a while on what is probably a trivial problem. Let $X\subset\mathbb{P}^n$ be a smooth projective curve, and let $$\mathcal{I}=\{(p,q,r):p,q\in X,p\neq q,r\in\overline{pq}\}$$ (where $\overline{pq}$ is the line that joins $p$ and $q$) and let $$\mathcal{J}=\{(p,r):p\in X,r\mbox{ lies on the tangent line to }X\mbox{ at }p\}.$$ It is easy to see that $\mathcal{I}$ is a complex 3-manifold and $\mathcal{J}$ is a complex 2-manifold. Let $\alpha:\mathcal{I}\to\mathbb{P}^n$ so that $(p,q,r)\mapsto r$, and let $\beta:\mathcal{J}\to\mathbb{P}^n$ so that $(p,r)\mapsto r$. Why is it that if $n\geq 4$ then there is a point in $\mathbb{P}^n$ that is not in the image of either function? I can see that the image of $\alpha$ has complex dimension at most 3 and the image of $\beta$ at most 2, but I can't see why their images can't cover all $\mathbb{P}^n$. For reference, this was taken from Algebraic Curves and Surfaces by Rick Miranda, page 101. Thanks. - Robert, you need to change the letter "e" just after the second time you write $\overline{pq}$ for the letter "i". It seems that your Spanish tried to come out, that sometimes happens to me also ;) – Adrián Barquero Jun 21 '11 at 20:58 Thanks, I'm used to writing in Spanish! – Robert Auffarth Jun 21 '11 at 21:23 If you can prove that the image of $\alpha$ has dimension at most $3$, then you know that image is not all of $P^n$ because $P^n$ has dimension at least $4$. – Mariano Suárez-Alvarez♦ Jun 21 '11 at 21:25 Yeah, but I want to find an element that is neither in the image of $\alpha$ nor $\beta$... – Robert Auffarth Jun 21 '11 at 22:30
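For what it is worth, the standard way to finish the argument (a sketch of the usual dimension count, not taken from the thread) uses the fact that $\mathbb{P}^n$ is an irreducible variety of dimension $n$. The images of $\alpha$ and $\beta$ are contained in closed subvarieties
$$\overline{\alpha(\mathcal{I})} \subset \mathbb{P}^n,\quad \dim \overline{\alpha(\mathcal{I})} \le \dim\mathcal{I} = 3, \qquad \overline{\beta(\mathcal{J})} \subset \mathbb{P}^n,\quad \dim \overline{\beta(\mathcal{J})} \le \dim\mathcal{J} = 2,$$
and the union of two closed subvarieties of dimension at most $3 < n$ is still a proper closed subset when $n \ge 4$, so its complement is a non-empty (indeed dense open) set. Any point in that complement lies in the image of neither map, and can serve as the centre of projection.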
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9668910503387451, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/41605-convergence-hilbert-spaces.html
# Thread:

1. ## convergence in Hilbert spaces

I have the following problem: Let X be a Hilbert space with the inner product <.|.>. Let $C_n \in C(X)$ be a family of compact operators that converges in norm to an operator C. Let $(x_n)_n$ be a sequence that converges weakly to $x_0$. Does $\langle C_n x_n | x_n\rangle$ converge? If it does, where to? I'm completely lost, please help.

2. Originally Posted by marianne (question quoted above)

Three things you need to know.

(1) Weakly convergent sequences are bounded. (This follows from the uniform boundedness principle, because a weakly convergent sequence is obviously weakly bounded.)

(2) Compact operators convert weakly convergent sequences into norm-convergent sequences.

(3) The set of compact operators is norm-closed, so that if $C_n \to C$ in norm and each $C_n$ is compact, then so is C.

We want to show that $\langle C_n x_n | x_n\rangle\to \langle C x_0 | x_0\rangle$. Here's an outline of how to do it. First, for n large enough, $Cx_n$ is close to $Cx_0$ in norm, by (2) and (3). Next, for n large enough, $C_n x_n$ is close to $Cx_n$ (this follows from (1), together with the fact that $C_n \to C$ in norm). Putting those two statements together, you see that (for n large enough) $\langle C_n x_n|x_n\rangle$ is close to $\langle C x_0|x_n\rangle$. But the weak convergence tells you that (again for n large enough) $\langle C x_0|x_n\rangle$ is close to $\langle C x_0|x_0\rangle$. Therefore $\langle C_n x_n | x_n\rangle\to \langle C x_0 | x_0\rangle$ as n→∞.

3. Thank you so much! I hope nobody here will mind if I post another question or two in the next couple of days. :-)

4. THERE ARE LOTS OF PEOPLE WHO MIND THINGS.. just kidding.. post your questions and everyone will do his/her best to help.. Ü
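For completeness, the outline in the second post can be packaged into a single estimate (this expansion is mine; $M$ denotes a bound on $\|x_n\|$, which exists by (1), and $C$ is the norm limit of the $C_n$):
$$\bigl|\langle C_n x_n | x_n\rangle - \langle C x_0 | x_0\rangle\bigr| \le \|C_n - C\|\,M^2 + \|C x_n - C x_0\|\,M + \bigl|\langle C x_0 | x_n\rangle - \langle C x_0 | x_0\rangle\bigr|.$$
The first term tends to $0$ because $C_n \to C$ in norm, the second by (2) and (3) since $C$ is compact and $x_n$ converges weakly to $x_0$, and the third by the weak convergence itself; hence $\langle C_n x_n | x_n\rangle \to \langle C x_0 | x_0\rangle$.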
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259346723556519, "perplexity_flag": "head"}
http://leftcensored.skepsi.net/2012/01/26/filtering-a-list-with-the-filter-higher-order-function/
Where to begin… Filtering a list with the Filter higher-order function Posted on January 26, 2012

Last week markbulling over at Drunks & Lampposts posted a method of using `sapply` to filter a list by a predicate. Today the @RLangTip tip of the day was to use `sapply` similarly. This makes me wonder if R's very useful higher-order functions aren't as well known as they should be. In this case, the `Filter` higher-order function would be the tool to use. `Filter` works more or less like the `*apply` family of functions, but it performs the subsetting (the filtering) of a list based on a predicate in a single step. As an example, let's say we have a list of 1000 vectors, each of length 2 with $$x_1,\,x_2 \in [0,\,1]$$, and we want to select only those vectors whose elements sum to a value greater than 1. With `Filter`, this is all we have to do:

```
mylist <- lapply(1:1000, function(i) c(runif(1), runif(1)))
method.1 <- Filter(function(x) sum(x) > 1, mylist)
```

Which is at least a bit more transparent than the `sapply` alternative:

`method.2 <- mylist[sapply(mylist, function(x) sum(x) > 1)]`

In some very quick tests, I found no performance difference between the two approaches. There are other useful higher-order functions. If you are interested, check out `?Filter`.
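In case it helps, here is a small sketch (mine, not from the original post; the object names are invented) of a few of the other base-R higher-order functions documented on the same help page as `Filter`. No packages are needed.

```
# Other base-R higher-order functions, applied to the same kind of list as above.
mylist <- lapply(1:1000, function(i) c(runif(1), runif(1)))

# Filter: keep only the vectors whose elements sum to more than 1
big <- Filter(function(x) sum(x) > 1, mylist)

# Map: apply a function element-wise; here, the sum of each pair (returns a list)
sums <- Map(sum, mylist)

# Reduce: fold a collection down to a single value, e.g. the grand total
total <- Reduce(`+`, unlist(mylist))

# Find / Position: the first element (and its index) satisfying the predicate
first.big     <- Find(function(x) sum(x) > 1, mylist)
first.big.idx <- Position(function(x) sum(x) > 1, mylist)
```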
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433125257492065, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/5283?sort=oldest
## Are there Generalisations of a Limit (for Just-divergent Sequences)? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) There are certain sequences such as 0, 1, 0, 1, 0, 1, 0, 1, ... that do not converge, but that may be assigned a generalised limit. Such a sequence is said to diverge, although in this case a phrase such as has an orbit might be preferable. One way to generalise a limit is by considering the sequence of accumulated means: given a sequence a1, a2, a3, a4, ... the accumulated mean sequence would be a1, (a1+a2)/2, (a1+a2+a3)/3, (a1+a2+a3+a4)/4, ... If this sequence has a limit, then the original sequence may be said to have that value as its generalised limit. In this way, the example sequence above has the generalised limit of 1/2; this seems natural as the sequence oscillates around this 'mean' value. Is there a name for this kind of generalised limit? Are there other ways to define such a thing. Do you know of any good on-line references for this? Thanks. - 4 Your example is the sequence of partial sums of the series (-1)^n, which is Cesaro summable to 1/2: see en.wikipedia.org/wiki/Cesaro_summation. – Steven Sivek Nov 12 2009 at 23:47 8 There are much worse things you can read than G.H. Hardy's `Divergent Series'. – Mariano Suárez-Alvarez Nov 12 2009 at 23:51 5 You can read Hardy's book online at archive.org/details/divergentseries033523mbp – lhf Nov 13 2009 at 0:24 Hardy's book is very good. I would have listed it as an answer, but really Mariano/lhf should, and I can vote their answers up. – Theo Johnson-Freyd Nov 13 2009 at 3:24 From the preface of Hardy's book: "Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever". Attributed to Abel. – Kevin Buzzard Nov 13 2009 at 9:57 ## 7 Answers Another common technique is Abel summation, which works a little better than Cesaro summation. Zeta regularization is also important in physics. You might enjoy reading these posts at The Everything Seminar and this column from John Baez. - Another approach (no pun intended) to the limit of sums is Borel Summation en.wikipedia.org/wiki/Borel_summation – Dan Piponi Nov 13 2009 at 1:19 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. A nice book on this kind of stuff is "Classical and modern methods in summability" by Boos and Cass. - On less practical terms, you can assign a(n extended) limit to any bounded sequence once you have an ultrafilter (on the natural numbers) at hand: Let F be your ultrafilter (that's what makes it less practical). Then for any bounded sequence xn there exists a unique x such that for all ε>0 the set {n: |xn-x|<ε} is contained in F. Define this x to be the limit of xn. For your sequence 0,1,0,1,... this will assign either 0 or 1 as the limit depending on whether the chosen ultrafilter contains the set of even or the set of odd natural numbers. This extended notion of limit still is • an algebra homomorphism (from bounded sequences to numbers), • is bounded (ie. takes its value between the infimum and supremum of the sequence), and • is non-principal (that is sequences differing at finitely many indices only get assigned the same limit). 
Note that boundedness and non-principality alone suffice to show that for convergent sequences (in the usual sense) we don't get anything new: the extended notion agrees with the classical one. Of course, there's something to be sacrificed: the extended limit will, for instance, no longer be shift-invariant (meaning that $x_n$ and $x_{n+h}$ may have different limits). More details can be found in the following very informal handout I wrote for a student colloquium talk a few years ago. I also very much recommend Terry Tao's related blog post. - Cesàro summation (the process which you describe) defines a linear functional on a subspace of the Banach space of bounded sequences (namely those sequences which are Cesàro summable). Using Hahn-Banach (or one of its variants), one can extend this linear functional to the whole space of bounded sequences, and the extension WILL be shift invariant. However, the extension is not unique and existence depends on the Axiom of Choice. See the Wikipedia entry for Banach limit for more info. - A good site (other than Wikipedia) for summation methods is the Encyclopaedia of Mathematics on SpringerLink. You can start at: http://eom.springer.de/s/s091140.htm and then look at the Cesàro, Abel, Borel and matrix summation methods for an introduction (but there are many more: Voronoi, Lindelöf, Riesz, Hölder...). - You can take a look at this paper: http://wbabin.net/science/moreta23.pdf The author explains divergent series in a simple fashion. - Another possibility is to look at how the values are distributed and see whether that converges to some distribution. This is mostly used in stochastic series (e.g. people want to construct Markov chains that converge to a certain distribution of interest). - I'm afraid I don't really see the connection; and as many other comments have noted, there is a well-established body of techniques to handle divergent series, which seem more relevant to the question at hand. – Yemon Choi Feb 4 2010 at 21:29 The question (specifically the part that said "Are there other ways to define such a thing") was, are there any ways to talk about and define sequence convergence, other than the standard definition from freshman calculus. My response is, look at the distribution of the values of the sequence and see if that distribution can be characterized somehow. For example, if the sequence is $a_n = (-1)^n$, then the distribution of its values is peaked at two points, +1 and -1. For a sequence $a_n = \sin(n)$, the distribution is different (covering almost uniformly much of $[-1, 1]$). – sheldon-cooper Feb 5 2010 at 1:32 The connection is that this gives a different and (depending on the application) potentially useful way to characterize a sequence and its limiting behavior. – sheldon-cooper Feb 5 2010 at 1:33 1 If I'm understanding it correctly, this technique subsumes some of the other limits mentioned, such as the one in the original question. Statistics of the distribution can give various limit notions. Seems like an interesting approach! – Matt Noonan Feb 5 2010 at 3:33 Thanks for this reply. Any additional information and any additional ideas are appreciated. – Rhubbarb Mar 9 2010 at 19:47
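As a quick numerical illustration of the accumulated-mean (Cesàro) limit described in the question, here is a small R sketch (mine, not from the thread; the object names and the cut-off values are arbitrary) applied to the sequence 0, 1, 0, 1, …, together with a rough Abel-style check on the closely related series $\sum (-1)^n$:

```
# Cesàro (accumulated mean) of the sequence 0, 1, 0, 1, ...
a <- rep(c(0, 1), 500)              # first 1000 terms of the sequence
cesaro <- cumsum(a) / seq_along(a)  # running means a1, (a1+a2)/2, (a1+a2+a3)/3, ...
tail(cesaro, 3)                     # approaches 1/2

# Abel-style check on the related alternating series: sum_{n>=0} (-x)^n -> 1/2 as x -> 1-
x <- 0.999
sum((-x)^(0:10000))                 # close to 1/2
```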
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185508489608765, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/281593/n-1th-fibonacci-number-modulo-n
# $(n+1)$th Fibonacci number modulo $n$ The Pisano period studies the $n$th Fibonacci number $F_{n}$ modulo $n$. Is there anything about $F_{n + 1} \pmod n$? - 2 The Pisano period studies the $n$th Fibonacci number modulo $m$. The modulus is unrelated to the index as far as the Pisano period is concerned - the period is a function of the modulus. – Jan Dvorak Jan 18 at 20:28 There is hardly anything special about the sequence $F_{n+1} \pmod n$ – Jan Dvorak Jan 18 at 20:34 ## 2 Answers The sequence $F_{n + 1} \pmod n$ is interesting enough to have been added to the Online Encyclopedia of Integer Sequences, but not interesting enough for its entry to have been expanded past the bare minimum: https://oeis.org/A002726 However, its plot (available at the OEIS entry, on log-y and linear scales) does not indicate any periodicity. The plot indicates a higher density of points around $y=1$, $y={x\over2}$, $y={x\over3}$ and $y={2\over3} x$, but the function looks random except for that. The positions of zeroes and ones do not indicate much regularity either - except all zeroes seem to be at prime positions. Note, however, the sequence $F_n \pmod n$ that you have mentioned does not show any striking regularity either, except the dense lines are now just two, $y=1$ and $y=x$: https://oeis.org/A002708 The Pisano period is defined as the period of the sequence $F_n \pmod m$ - for any fixed $m$ this sequence is periodic with the period being a function of $m$. Stated mathematically, $F_n \equiv F_{n+\pi(m)} \pmod m$ where $\pi(m)$ is the Pisano period. Ref: http://en.wikipedia.org/wiki/Pisano_period https://oeis.org/A001175 - If you are looking for some regularity, you have that for $p\ne 5$ prime: $$F_{p+1} \operatorname{mod}p = \begin{cases} 1 \quad &\text{if the congruence } x^2 \equiv 5 \pmod{p}\text{ has a solution}\\0 \quad &\text{otherwise}\end{cases}$$ -
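To see the pattern in the last answer numerically, here is a small R sketch (mine, not from the thread; the function name and range are arbitrary) that computes $F_{n+1} \bmod n$ by iterating the recurrence modulo $n$:

```
# F_{n+1} mod n, computed by iterating the Fibonacci recurrence mod n
fib_np1_mod_n <- function(n) {
  a <- 0; b <- 1                  # F_0 and F_1
  for (i in 1:n) {                # after i steps, b holds F_{i+1} mod n
    tmp <- (a + b) %% n
    a <- b
    b <- tmp
  }
  b
}
vals <- sapply(2:60, fib_np1_mod_n)
names(vals) <- 2:60
vals
# For odd primes n != 5 the value is 1 when 5 is a square mod n
# (e.g. n = 11, 19, 29, 31) and 0 otherwise (e.g. n = 3, 7, 13, 17, 23),
# matching the last answer; composite n show no obvious pattern.
```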
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183908700942993, "perplexity_flag": "middle"}
http://nrich.maths.org/6659
### Lunar Leaper Gravity on the Moon is about 1/6th that on the Earth. A pole-vaulter 2 metres tall can clear a 5 metres pole on the Earth. How high a pole could he clear on the Moon? ### Which Twin Is Older? A simplified account of special relativity and the twins paradox. ### Whoosh A ball whooshes down a slide and hits another ball which flies off the slide horizontally as a projectile. How far does it go? # Electromagnetism ##### Stage: 5 Article by Simon Cattell Electromagnetism describes the relationship between magnetism and electricity. The concept of the electric field was first introduced by Michael Faraday; the electric field not only describes the region surrounding an electrically charged body but in addition the force experienced by any further charges placed within this region. When electrical charges are in motion they induce magnetic fields. Such phenomena are observed when iron shavings align in the presence of a magnetic field induced by the passage of a current through a nearby wire. A changing magnetic field will induce an electric field; similarly a changing electric field will induce a magnetic field. This is the concept of electromagnetic induction, it is the principle used to drive generators, motors, transformers, amplifiers and many more electrical devices. Electric Field Theory Coulomb's Law The electric force between two charged bodies is a non contact force. This forces acts along a line which connects the geometrical centres of the two charges. The force is proportional to the product of the charges and inversely proportional to the square of their separation. By Newton's third law it is seen that the force exerted by charge one on charge two is equal and opposite to that exerted by charge two on charge one. The force exerted by charge one on charge two is given in vector form below. $\bf{F_{12}} = \frac{1}{4 \pi \epsilon_{0}} \frac{Q_1Q_2}{\bf{r^2}} \bf{r_{12}}$ where $\epsilon_{0} \approx 8.854 \times 10^{-12} Fm^{-1}$ (electric constant) $F$ - Force $\bf r$ - separation between charge centres $Q$ - Charge $\bf{r_{12}}$ is a unit vector which points from the centre of charge one to the centre of charge two. The force is repulsive if the two charges are the same sign and attractive if the charges are of opposite sign. It should be noted that ordinary matter will acquire only a small amount of charge (measured in coulombs). The example below best illustrates this point. Example: Find the force between two point charges, each charged to +1 coulomb and separated by 1 metre. $\bf{F_{12}} = \frac{1}{4 \pi \epsilon_{0}} \frac{Q_1Q_2}{\bf{r^2}} {\bf r_{12}}$ = $\frac{1}{4 \pi \epsilon_0} \approx 9 \times 10^9$ N This is obviously an extremely large force; typically a charged body will carry charge of the order of a nano/micro coulomb. Electric Potential The potential difference between two points is the work done on (or by) a unit of positive charge when moving from one point to the other. When the charge passes through impedance (when the circuit is driven with direct current, there is no difference between impedance and resistance) it must do work but when passing through a battery has work done on it. $v_b - v_a = \int_a^b \bf{E} d\bf {r}$ In order to define the electric potential at some point we must find the potential difference between this point and some other point at which the potential is zero (this condition is satisfied at an infinite distance from the point charge whose field we describe). 
The electric potential is therefore the work done against an electric field in moving a unit positive charge from infinity to some distance r from the centre of the charge whose field it enters. Mathematically it is described as follows: $\bf V = \int_{r}^{\infty} \bf{E} \bf{dl}$ The electric potential is typically measured in joules per coulomb or volts. Electric Field Intensity, E The electric field intensity $\bf{E}$ is defined as the force exerted on a positive test charge placed in a field. The field lines hence point in the direction that a positive charge would accelerate if placed in the field. $\bf E = \frac{\bf F}{q}$ If we substitute $\bf {F}$ from Coulomb's law, we see that the electric field intensity due to charge $Q_1$ is: $\bf{E} =\frac{1}{4 \pi \epsilon_{0}} \frac{Q_1}{r^2} \bf{r_{12}}$ The above shows us that $\bf{E}$ obeys an inverse square law: the field intensity decays as $\frac{1}{r^2}$. In fact, it is true that any point source which spreads its influence equally in all directions will obey such a law, this can be deduced from geometrical considerations alone, examples include gravitational fields, EM radiation and sound. The common conventions adopted for drawing electric fields are described below. Density of field lines: The density of the field lines describes the magnitude of the field. A closer packing of field lines indicates a stronger field. Field lines commonly diverge as distance from the charged body increases -` this indicates a decaying field strength. Orientation of field lines for conductors: Field lines are always drawn perpendicular to the surface of the body whose field they describe; there is never a component of the electric field parallel to the surface of the body. Intersection of field lines: Electric field lines must never intersect. The electric field lines indicate the direction of the electromagnetic force in a given region of space. If field lines were to allowed to cross at some point in space then the direction of the force would be indeterminate at this point of intersection (we would effectively be defining two separate fields) Electric Flux The electric flux is equal to the total amount of electric field passing through a virtual surface area perpendicular to the field. Flux = $\phi = \int \bf{E} \cdot \bf {dA}$ Here, $\bf E$ is the electric field and $dA$ is a differential element of unit area on the closed surface with an outward facing normal defining its direction. Superposition The principle of superposition states that the response of any linear systems to several inputs is equal to the sum of the responses produced if each of the inputs was applied separately. Applying this idea to electrostatics we see that when several charges are present, the resulting electric field may be found by the vector summation of the electric field produced by each individual charge. It should be noted that the principle of superposition is a very important concept occurring in many engineering applications. Gauss's Law Gauss's law states that the total amount of electric flux emerging from and normal to a surface is equal to the total electric charge enclosed. As a consequence, the electric charge enclosed within a surface is zero when the flux entering the surface is equal to the flux emerging. In integral form, Gauss's law is as follows: $\int \bf{D} \cdot \bf{da} = Q$, where $\bf {D}$ is the dielectric field intensity, the dielectric field intensity takes the same value when passing from one dielectric to another. 
$\bf{D} = \epsilon_0 \epsilon_r \bf{E}$ ($\epsilon_r$ is the relative permittivity of the material)

If we can find a surface which is always normal to the electric field we can simplify this calculation significantly by removal of the dot product.

For a point charge we should select a sphere as our Gaussian surface: $A =4 \pi r^2$. For a line of charge we should select a cylinder as our Gaussian surface: $A =2\pi r l$. For a plane of charge (such as a capacitor) we should simply select a plane as our Gaussian surface. Care should be taken to ensure both sides of the plane are considered.

Capacitance

We define capacitance as the charge stored per unit volt: $C = \frac{dQ}{dV}$

A capacitor is simply a pair of conductors separated by a dielectric material, a device which provides short term storage of energy in the form of displaced charge. When a potential difference is applied across the conductors an electric field is produced in the dielectric; it is this electric field which provides a means of energy storage. At a sufficiently high voltage the molecular structure of an insulator breaks down, electrons are torn out of their atoms and the material begins to conduct. The maximum voltage which we may apply to a material before it collapses and begins to conduct is known as the breakdown voltage. We must always operate a capacitor below this breakdown voltage.

The choice of dielectric material will determine the application of a capacitor; typical applications are listed below.

Air: Used in radio tuning devices.

Glass: Used in high voltage applications; NASA have been known to use glass dielectric capacitors to initialize space shuttle circuitry and help to deploy space probes.

Ceramic: Used in high frequency applications such as antennas and X-ray machines.

We know that the voltage is the work done per unit charge; we may now use this idea to find the energy stored within an electric field.

$dW = V dQ = \frac{Q}{C} dQ$

$W = \int \frac{Q}{C} dQ = \frac{Q^2}{2C} = \frac{QV}{2} = \frac{CV^2}{2}$

The simplest form of a capacitor is the parallel plate capacitor, shown below:

Each plate is charged to an equal and opposite charge; if the plates are of infinite size a uniform electric field will exist between them. It is the even distribution of charge caused by mutual repulsion of like charges (on a plate) that leads to the generation of the uniform field. In practice, a plate area much greater than the plate separation gives minimal field divergence at the plate ends (fringing effects), so we can treat the field as uniform.

Question: Using the concepts of Gauss's Law, superposition, electric potential and capacitance, prove that the E field between the plates of a parallel plate capacitor is uniform and find the capacitance. The plate separation is d and the relative permittivity of the dielectric $\epsilon_r$; in addition it can be assumed A is much greater than d.

Solution:

Step 1: Gauss's Law defines the electric field produced by a single plate.

$\int \bf{D} \cdot \bf{da} = \epsilon_0\epsilon_r \bf{E} A = Q$

$\bf{E} = \frac{Q}{A \epsilon_0\epsilon_r }$

Step 2: Principle of superposition

We have two plates; each plate is storing charge and hence produces an electric field. At a distance x from the upper plate both the upper and lower plate are each producing a field of $\bf{E} = \frac{Q}{A \epsilon_0\epsilon_r }$. Only half of this field enters the dielectric between the conductors; the other half emerges from the opposite side of the plate and does not contribute.
The total electric field is therefore found by superposition.

$\bf E = \frac{1}{2} \frac{Q}{A \epsilon_0\epsilon_r} + \frac{1}{2} \frac{Q}{A \epsilon_0 \epsilon_r }= \frac{Q}{A \epsilon_0\epsilon_r }$

This expression for $\bf E$ is independent of the position x between the plates; the field is therefore uniform.

Step 3: Electric Potential

$V = \int_0^d E(r)\, dr = \int_0^d \frac{Q}{A \epsilon_0\epsilon_r } dr = \frac{Qd}{A \epsilon_0\epsilon_r }$

Step 4: Capacitance

$C=\frac{dQ}{dV}$, which reduces to $C = \frac{Q}{V}$ if the charge is not changing with time.

$C = \frac{Q}{V} = \frac{ A \epsilon_0\epsilon_r }{d}$

This shows us that the capacitance is proportional to the area of the plates and inversely proportional to their separation.

Capacitor Charge and Discharge

It is the potential difference between the supply and capacitor that causes charge to flow from one plate to the other. The system wishes to be in a state of equilibrium. Charge will therefore flow until the potential difference across the capacitor matches that of the supply (until there is no potential difference between supply and capacitor). Electrons are taken from one plate and transferred to the other; one becomes positive and the other progressively more negative. As charge builds up on the capacitor the voltage across it increases, the potential difference between the battery and the capacitor decreases and hence the rate of charging decays. This can be analysed using Kirchhoff's voltage law; we know that the sum of the emfs is equal to the sum of the voltage drops.

$V_{supply} = V_c + V_R = V_c + IR$

$I =\frac{V_{supply} - V_c}{R}$

We see that the current (rate of flow of charge), and hence the rate of charging/discharging, is proportional to the potential difference between supply and capacitor. The voltage of the supply is constant but the voltage of the capacitor varies with charge according to $C = \frac{Q}{V}$.

Charging: $V = V_{supply} ( 1 - e^{\frac{-t}{RC}})$

Discharging: $V = V_0 e^{\frac{-t}{RC}}$ (where $V_0$ is the capacitor voltage at the start of the discharge)

Question: Use Ohm's law, Kirchhoff's voltage law and the definition of capacitance to derive the equation of a charging capacitor for a simple circuit consisting of a cell, resistor and capacitor.

Solution:

$I = \frac{dQ}{dt}$

$V_{supply} = V_c + IR = \frac{Q}{C} + R\frac{dQ}{dt}$

This is a first order differential equation in Q.

$\frac{dQ}{dt} + \frac{Q}{RC} = \frac{V_{supply}}{R}$

The integrating factor is $e^{\frac{t}{RC}}$

$Q e^{\frac{t}{RC}} = \int \frac{V_{supply}}{R} e^{\frac{t}{RC}} dt = V_{supply}Ce^{ \frac{t}{RC}} + D$, where D is a constant

$Q =V_{supply}C + D e^{\frac{-t}{RC}}$

At $t = 0$, $Q = 0$, therefore $D = -V_{supply}C$ and $Q = V_{supply}C(1 - e^{\frac{-t}{RC}} )$

Using the definition of capacitance ($V =\frac{Q}{C}$),

$V = V_{supply}(1 - e^{\frac{-t}{RC}} )$

Capacitors in Series: For capacitors placed in series we know that each will store the same amount of charge but may have a different voltage across it.

$V_{total} = V_1 + V_2 +... +V_n$

Using $C=\frac{Q}{V}$,

$\frac{Q}{C_{total}} = \frac{Q}{C_1} + \frac{Q}{C_2} +... +\frac{Q}{C_n}$

$\frac{1}{C_{total}} = \frac{1}{C_1} + \frac{1}{C_2} +...+ \frac{1}{C_n}$

Capacitors in parallel: For capacitors placed in parallel we know that each has the same voltage across it but may have a different charge stored.

$Q_{total} = Q_1 + Q_2 +... +Q_n$

$C_{total}V = C_1 V + C_2 V +... +C_n V$

$C_{total} = C_1 + C_2 +...+ C_n$

Question: Find the capacitance of the circuit below.
Solution: The 1µF capacitor and the 2µF capacitor combine in series:

$\frac{1}{C_{series}} =\frac{1}{2 µF} + \frac{1}{1 µF}$

$C_{series} = 6.67 \times 10^{-7} F$

We can now combine this capacitance in parallel with the 3µF capacitor to find that:

$C_{total} = 6.67 \times 10^{-7} + 3 \times 10^{-6} = 3.67 \times 10^{-6} F = 3.67 µF$

Magnetic Field Theory

Magnetic fields are the result of electric charges in motion; this can be currents flowing through wires or simply electrons in their atomic orbits.

Magnetic Field Intensity

The magnetic flux density (or field intensity), $\bf{B}$, is defined by the Lorentz force. The Lorentz force describes the force experienced by a charge $q$ moving at a velocity $\bf{v}$ in superposed electric and magnetic fields.

$\bf{F} = q( \bf{E} +\bf{v} \times \bf{B})$

In the absence of the electric field the above becomes $\bf{F} = q\bf{v} \times \bf{B}$, where:

$q$ - magnitude of the charge

$\bf{v}$ - velocity vector of the charge

$\bf{B}$ - magnetic flux density

The above equation describes the vector nature of the B field. Attention should be paid to the cross product: it indicates that the magnetic force is always perpendicular to both the flow of charge and the B-field.

The direction of the magnetic flux density can be found from the right hand thumb rule. When your right hand thumb points in the direction of the current (in the direction of the velocity vector) then your fingers will curl in the direction of $\bf{B}$.

By noting that $I = \frac{dq}{dt}$ we can derive the force experienced by a current carrying wire placed within a magnetic field. If we substitute $q=It$ into the Lorentz force law we find that:

$\bf{F} = q\bf{v} \times \bf{B} = It\,\bf{v}\times\bf{B}$

But the velocity of the charge multiplied by the time is simply the length of wire traversed in that time ($L$), so

$\bf F = I\, \bf{L} \times \bf B$, with magnitude $F = BIL \sin \theta$.

If we know the direction of any one of the force, field or current then the other two may be deduced by Fleming's left hand rule. Extend the thumb, forefinger, and the middle finger of your left hand such that all three are mutually perpendicular to each other; if the thumb points in the direction of the force then the forefinger points in the direction of the B field and the middle finger in the direction of the current. This is illustrated below.

Magnetic Flux

The concept of magnetic flux is very convenient for the description of Faraday's law (which will be discussed shortly). The magnetic flux is the total amount of magnetic field passing through a virtual surface area perpendicular to the field.

Flux $\phi = \int \bf{B} \cdot \bf{dA}$

where $\bf{B}$ is the magnetic flux density and $dA$ is a differential area on the surface with an outward facing surface normal defining its direction. The flux linkage is equal to the product of the flux and the number of turns of the object that is being linked: $\phi ' = \phi N$

Ampere's Law

Ampere's law describes the relationship between magnetic fields and currents in a similar way to how Gauss's law describes the relationship between electric fields and charge. Ampere's law states that for any closed loop, the line integral of the field intensity ($H =\frac{B}{\mu_0\mu_r}$) is equal to the current linked.

$\oint \bf{H} \cdot d\bf{l} = N I$

Ampere's law allows us to find an expression for the B field as a function of current. Below is an example to illustrate this.

Question: Using Ampere's law find an expression for the magnetic field intensity around a current carrying wire.
Solution: We must first identify the path which the B-field will take. Using the right hand rule we know that the B-field will encircle the current, so the natural closed path is a circle of radius r centred on the wire (where r is the distance from the wire), of circumference $2\pi r$. Around this path $\bf{H}$ is constant in magnitude and parallel to $d\bf{l}$, so

$\oint \bf{H} \cdot d\bf{l} = H \, 2\pi r = I$ (N=1 as we are considering only one wire)

$\frac{ B \, 2\pi r }{\mu_0\mu_r } = I$

$B = \frac{\mu_0\mu_r I}{2\pi r}$

Faraday's Law of Induction

Faraday's law of electromagnetic induction is an extremely important principle; it is fundamental to the generation of most of the electrical power in the modern world. Faraday's law describes how a change in magnetic flux threading a circuit induces a voltage seeking to oppose this change in flux. The induced voltage is proportional to the rate of change of flux linkage.

$V =- N \frac{d\phi}{dt}$

The minus sign simply indicates that the direction of the induced current is such that its magnetic field opposes the change in flux (Lenz's Law). There are typically two methods for inducing a voltage, flux cutting and flux linking.

Flux cutting: When lines of flux are cut by conductors, e.g. dropping a magnet through a coil of wire.

Flux linking: Varying the direction or magnitude of the B-field.

Question: A plane of wingspan 42m flies through a vertical field of strength 5 x $10^{-4}$T. Calculate the emf induced across the wing tips if its velocity = 130 ms$^{-1}$.

Solution: $V = N \frac{d\phi}{dt} = \frac{\Delta (BA)}{\Delta t} = B L v = 5 \times 10^{-4}\times 42\times 130 = 2.73 V$

Inductors

Inductors are passive electrical components. They store energy in the form of moving charge (or a magnetic field). Typically inductors take the form of a coil of wire (similar to a solenoid). The reason for looping the coil is to increase the flux linkage and hence increase the potential for energy storage. Electric current passing through the inductor induces a magnetic field around it; a time varying current will produce a time varying magnetic field and, in accordance with Faraday's law, a voltage. The inductance is therefore the voltage induced per unit rate of change of current; its unit is the henry (H).

$V = L \frac{dI}{dt}$

Inductors in Series: For inductors placed in series we know that the current through each (and hence its rate of change) will be the same, but each may have a different voltage across it.

$V_{total} = V_1 + V_2 +... +V_n$

Using $V = L\frac{dI}{dt}$ with a common $\frac{dI}{dt}$,

$L_{total}\frac{dI}{dt} = L_1\frac{dI}{dt} + L_2\frac{dI}{dt} +... +L_n\frac{dI}{dt}$

$L_{total} = L_1 + L_2 +... +L_n$

Inductors in parallel: For inductors placed in parallel we know that each has the same voltage across it but may have a different current through it.

$I_{total} = I_1 + I_2 +... +I_n$

Differentiating and using $\frac{dI_k}{dt} = \frac{V}{L_k}$,

$\frac{V}{L_{total}}= \frac{V}{L_1} + \frac{V}{L_2} +... +\frac{V}{L_n}$

$\frac{1}{L_{total}}= \frac{1}{L_1} + \frac{1}{L_2} +...+ \frac{1}{L_n}$

Applications of Electromagnetic Induction

Many electrical devices operate on the principle of electromagnetic induction. These include motors, generators, transformers, microphones and speakers (some of which are described below).

AC Generator

AC generators operate on the principle of electromagnetic induction; they are devices which convert mechanical energy into electrical energy. The turning of a coil within a magnetic field induces a potential difference across the coil and hence a current through it. The induced emf forces charge (already present within the wire) to flow through an external circuit and hence produces electricity.
The initial mechanical energy can be produced by a range of devices, such as water falling through a turbine, wind turbines or steam turbines. AC Motor A current carrying wire within a magnetic field will experience a force; this is the basic principle of a motor. A current is passed through a coil, this produces a B-field around the coil, this B-field then interacts with the permanent B field in which the coil sits, vector superposition of the fields leads to cancelation in some regions and addition in others, the net effect is a resultant force and hence a turning moment about the coil. This torque forces the wire to rotate; mechanical energy has thus been converted into electrical. Transformer A transformer is a device which is used to transform electrical power from one voltage and current level to another. In its most simple form a transformer is comprised of a primary winding, a secondary winding and an iron core. the iron core transmits the flux from one winding to the other, this is illustrated below. Transformers operate in accordance with Faradays law of electromagnetic induction. By passing an alternating current through the primary winding we induce a time varying B-field, this B-field is then transmitted via the iron core to the secondary winding. Varying this B-field over the secondary winding will induce an alternating voltage across it and hence an alternating current through it. If the transformer is ideal the voltage induced is described by the formula below. $\frac{V_1}{V_2} = \frac{N_1}{N_2}$ We see that two types of transformer are therefore possible, one which steps a voltage up and one which steps a voltage down. If the number of turns of the secondary winding is greater than that of the primary the voltage will be stepped up. If the number of turns of the primary is greater than that of the secondary the voltage will be stepped down. In order for a real transformer to be modelled as ideal it must satisfy the conditions listed below: The internal resistance of the windings must be negligibly small. The reluctance of the iron must be negligibly small (we must assume iron is a perfect conductor of flux). All the flux linking the primary must link the secondary; no flux leakage occurs as the flux bends around the corners. No real or reactive power is consumed. Real transformers are typically able to achieve very high efficiencies, often as high as 99%. Transformer losses are divided into two subcategories; copper losses (losses associated with the windings) and iron losses (losses associated with the core). Copper losses are a consequence of resistive heating, these losses are also referred to as $I^2 R$ losses and are very much dependent on the magnitude of the current. At high frequencies additional losses occur due to a phenomenon known as the skin effect, electrical current is effectively only allowed to flow through the outer skin of the conductor. We know from faraday and Lenz that the induced voltage is proportional to the rate of change of flux, furthermore it acts in a direction opposing the change which caused it. A high frequency current will therefore induce a large current which will oppose itself. We know that the B-field induced by a current carrying wire decays with distance from the wire centre, the B-field and hence the opposing current is therefore largest at the centre of a winding, this strong opposing current results in current cancelation throughout the centre of the wire, current hence only flows in the skin. 
Iron losses are most commonly associated with hysteresis and eddy currents. Each time the magnetic field reverses it must do some work against the atomic dipoles it previously aligned; this is known as hysteresis loss. Eddy currents circulate throughout the core in a plane which is perpendicular to the flux, and these therefore result in resistive heating.

Typical transformer applications are described below:

Power Transmission: When power is transmitted it is desirable to step the voltage up and hence step the current down; this is done to minimise the $I^2 R$ losses associated with joule heating. When the power is then received by the consumer, the voltage must be stepped down again before domestic use.

Measurement of High Voltages and Currents: Transformers are often used to step down a high voltage or current to a safe level before measurement.

Continuous Variation of Voltage and Current Levels: Machinery often requires power at a continuously varying voltage and current level.
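As a small numerical illustration of why stepping the voltage up reduces transmission losses, here is an R sketch (the numbers are made-up example values, not taken from the article) using the ideal relations $\frac{V_1}{V_2} = \frac{N_1}{N_2}$ and, for an ideal transformer, equal power on both sides:

```
# I^2 R loss on a transmission line, delivering the same power at two voltages.
# All values below are illustrative assumptions.
P      <- 100e3                  # power to be delivered: 100 kW
R      <- 0.5                    # line resistance in ohms
V_low  <- 230                    # transmit at consumer-level voltage
V_high <- 11000                  # transmit after stepping up (turns ratio ~ 48:1)

I_low  <- P / V_low              # ~435 A
I_high <- P / V_high             # ~9.1 A

loss_low  <- I_low^2  * R        # ~95 kW lost as heat
loss_high <- I_high^2 * R        # ~41 W lost as heat

c(I_low = I_low, I_high = I_high, loss_low = loss_low, loss_high = loss_high)
```

Stepping the voltage up by a factor of about 48 cuts the current, and hence the $I^2 R$ loss, by roughly $48^2 \approx 2300$, which is why transmission networks operate at high voltage and step down near the consumer.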
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 110, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934380054473877, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/9219/can-spin-be-infinite/9478
# Can spin be infinite? Can spin of a particle or a group of particles become infinity?Explain plz.Is there any representation for spins like dot(for s=0) and arrow(for s=1)?If so what for s= infinity? - please don't use capslock (I changed the question title) – Mark Eichenlaub Apr 28 '11 at 4:52 ok.thanx for changing the name.sorry 4 my mistake.!! – chaos Apr 28 '11 at 14:15 ## 3 Answers Compact group $SU(2)$ really has only finite-dimensional unitary irreducible representations, but formally it is not enough to close the question, because there are unitary irreducible infinite-dimensional representations of spin group $SL(2,C)$ of four-dimensional relativistic Lorentz group and they were used in some models, below is a cite from: N.N. Bogolubov, A. A. Logunov, A.I. Oksak, I. Todorov, General principles of quantum field theory, Springer, 1989. Appendix I for chapter 9 The concept of an infinite-component field (ICF for short) is the result of abandoning the "technical" requirement that the representations of the Lorentz group according to which the fields transform (say, in the Wightman formalism) be finite-dimensional. This idea turned up at the earliest stages of quantum field theory: in 1932, Majorana gave an example of an infinite-dimensional wave equation $(i \Gamma^\mu \partial_\mu – M) \psi(x) = 0$ without negative-energy solutions of non-negative square mass, that is, without "antiparticles". … Running ahead (see §1.3), it should be noted, however, that the description of composite systems by means of ICF's has met with difficulties which, it would seem, require a weakening of the postulate of (strict) locality. - 2 Dear @Alex 'qubeat'. Sorry for this nitpicking comment, but the statement that the compact group $SU(2)$ has only finite-dimensional irreducible representations is not correct. There are non-unitary infinite-dimensional irreducible representations of the Lie group $SU(2)$. Of course, in physics one usually requires unitarity. – Qmechanic♦ May 7 '11 at 21:13 @Qmechanic: why does the unitarian trick fail on these irreducible representations? – wnoise May 7 '11 at 21:41 Sorry, of course I missed word "unitary" – Alex 'qubeat' May 8 '11 at 9:21 1 @Qmechanic: Anyway, I would like to see an example of representation you are talking about. – Alex 'qubeat' May 8 '11 at 10:38 1 @Qmechanic: I believe your statement is not correct. By usual Weylish arguments one can always unitarize a representation of the compact group (just impose arbitrary Hilbert structure and then define new scalar product by averaging over the group action). By Peter-Weyl theorem then every such representation decomposes into sum of unitary irreducibles. What you had in mind was probably the rep. theory of $SL(2, \mathbb{R})$ which indeed admits non-unitary reps. But this is precisely because the group is not compact. – Marek May 8 '11 at 11:03 show 4 more comments I am not aware of any theory involving a notion of infinite spin. Generally, spin is a quantum number that takes (typically) small integer or half integer values. In principle, you can have a system with as high a spin as you would like, but that's not infinite. So I'd say the answer is no. - There is an exotic phenomenon in in general relativity regarding large "spin"/angular momentum. It does not have much to do with your question, so I'll add it as a comment: A massive rotating body will drag space along around itself. This "frame-dragging" has been measured for the earth, but it is a very small effect. 
However, around a quickly rotating black hole it can get so strong that something in orbit would have to move with light speed against it, just to stay where it is (relative to the surrounding ~flat space)! With a lot of handwaving, one could speak of an infinity here :-) – jdm May 5 '11 at 6:47 the angular momentum of such a rotating black hole is perfectly finite, though--with a maximum magnitude proportional to $M^{2}$ – Jerry Schirmer May 7 '11 at 23:09 @Jerry: Thanks for adding the clarification. I was not claiming that the angular momentum was infinite :-), it was more of an interesting anecdote (hence a comment and not an answer). – jdm May 9 '11 at 15:37 But intrinsic angular momentum, $L = nh/(2\pi)$, is equal to spin. So when $n$ becomes infinite, $L$ is also infinite. So my question is: is there any physical significance to this statement, rather than it being a purely theoretical statement? – chaos Jun 2 '11 at 2:34 The three-dimensional spin group is the compact group $SU(2)$, which has only finite-dimensional representations. Hence, the spin is always a non-negative integer or half-integer. - Dear @Oluf. The answer (v1) contains a wrong statement. There exist infinite-dimensional unitary representations of the compact group $SU(2)$. Take, e.g., infinitely many copies of the trivial or adjoint representation. – Qmechanic♦ May 7 '11 at 11:16 1 @Qmechanic: Sorry, I meant to say irreducible representation. And of course it is not obvious that a particle has to transform in an irreducible representation, so I guess my answer is a bit too naive. By the way, doesn't infinitely many copies of the trivial representation still just contain a single state? – Olof May 7 '11 at 11:40 @Oluf. In the usual terminology, the trivial (=singlet) group representation is $1$-dim, and the adjoint (=triplet) group representation is $3$-dim. Hope this answers your comment. – Qmechanic♦ May 7 '11 at 12:09 @Qmechanic: Sure. I'm just saying that a tensor product of even an infinite number of singlets still is a singlet. Or am I missing something? – Olof May 7 '11 at 12:20 @Oluf. No, you're right, the tensor product of two singlet representations is again a singlet representation. In my comment above, I was alluding to direct sums (as opposed to tensor products) of singlet representations. – Qmechanic♦ May 7 '11 at 12:34
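As a concrete companion to the answers above (this sketch is mine, not part of the original thread): the finite-dimensional unitary irreducible representations of $SU(2)$ are labelled by $j = 0, 1/2, 1, 3/2, \dots$ and have dimension $2j+1$. The snippet below builds the standard spin-$j$ matrices from the ladder-operator construction and checks the su(2) commutation relation $[J_x, J_y] = i J_z$; any finite $j$ works, but there is no "$j = \infty$" member of this family.

```python
import numpy as np

def spin_matrices(j):
    """(Jx, Jy, Jz) for the (2j+1)-dimensional spin-j representation (hbar = 1).

    Standard ladder-operator construction on the basis |j, m>,
    m = j, j-1, ..., -j, with J+|j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>.
    """
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)                      # m = j, j-1, ..., -j
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))                   # raising operator J+
    for k in range(1, dim):
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jm = Jp.T                                   # lowering operator J-
    return 0.5 * (Jp + Jm), -0.5j * (Jp - Jm), Jz

for j in (0.5, 1, 1.5, 2):
    Jx, Jy, Jz = spin_matrices(j)
    assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)   # [Jx, Jy] = i Jz
    print(f"spin {j}: dimension {Jz.shape[0]}, su(2) commutator OK")
```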
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287174344062805, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/2111/how-long-a-straw-could-superman-use/2139
# How long a straw could Superman use? To suck water through a straw, you create a partial vacuum in your lungs. Water rises through the straw until the pressure in the straw at the water level equals atmospheric pressure. This corresponds to drinking water through a straw about ten meters long at maximum. By taping several straws together, a friend and I drank through a $3.07m$ straw. I think we may have had some leaking preventing us going higher. Also, we were about to empty the red cup into the straw completely. My question is about what would happen if Superman were to drink through a straw by creating a complete vacuum in the straw. The water would rise to ten meters in the steady state, but if he created the vacuum suddenly, would the water's inertia carry it higher? What would the motion of water up the straw be? What is the highest height he could drink from? Ignore thermodynamic effects like evaporation and assume the straw is stationary relative to the water and that there is no friction. - 1 The "Fluid Dynamics" tag makes this question more intimidating – Thomas Dec 21 '10 at 8:37 10 +1 for the picture, cause we all know: pics or liar! ;p – Raskolnikov Dec 21 '10 at 20:55 1 Stupid nit-picky comment - but you actually make the vacuum with your mouth, not your lungs. Do an experiment to prove it... start by breathing in and then while still breathing in place the straw in your mouth and close your lips. See if anything different happens than normal. Also have the phone handy to dial 911. – John Berryman Dec 22 '10 at 6:12 @John Berryman: It depends how much of a vacuum you want to create. The mouth can create so much of a pressure differential, which is usually enough, but the lungs need to be used to create a larger one. – Noldorin Jan 19 '11 at 20:45 1 @MarkEichenlaub I may be late but Hagen Poisseulle? – drN Oct 22 '12 at 18:46 ## 11 Answers I think we can most easily consider the problem from the perspective of energy. For a unit area column the external energy put in equals the volume of the column times the air pressure. The gravitational energy is the mass of water raised times the average height of the water. So as we pull the water up, we the water is gaining kinetic energy until the water reaches the static limit (roughly 10meters), but at this point the average water in the tube has only risen by half that amount, so the rest is kinetic energy of (upward) water motion. So the vacuum energy in dimensionless units is $h$, while the gravitational energy is $\frac{1}{2}h^2$. The solution is $h=2$. When we reach 2 times the static limit (20 meters) then the gravitational energy in the water matches the "vacuum" energy we put in, so that would represent the high point of the oscillation. So I think we would get 2 times the static limit. The water velocity will be messy to solve for, as the amount of water moving in the column depends upon height, so just look only at net energy as a function of water height... Of course he will only get a sip of water, then the column would start to fall........ - 3 @Omega lol - 17 hours after the question is asked we posted the same solution within two minutes – Mark Eichenlaub Dec 21 '10 at 22:24 3 @Martin: what? How is discussing energy balance philosophy? And what use is it doing the long-winded calculations as you did if there is mistake in your approach in the first place? I wish I could give -1 to your comment... – Marek Dec 22 '10 at 8:01 1 @Marek: So where is a mistake in my solution? 
– Martin Gales Dec 22 '10 at 8:17 2 @Martin: I am not saying there necessarily is one. Just that there is a possibility. I argued against your stance of doing the math first. Thinking and physics comes first, math only second. By the way, are you able to find a flaw in Omega's and Mark's solutions? I am not, because they are conceptually very simple. But I am not so sure about your solution because it deals with gory details and it's not clear that you didn't forget to account for something (like the Bernoulli equation Mark has mentioned). – Marek Dec 22 '10 at 8:24 1 @Martin: call it Euler equation if it helps you. By making the flow unsteady you are not getting rid of the phenomena found in Bernoulli. Just introducing other, more complicated ones. – Marek Dec 22 '10 at 8:46 show 5 more comments I have an argument that the water in the straw will rise to twice the equilibrium height. David and Martin's answers consider the system of water in the straw. I will consider the system of the water in the straw plus the water in the reservoir. As water goes into the straw, the water level in the reservoir drops, and the atmosphere does work on the system. If a volume $V$ of water enters the straw, the work done on the system is $PV$, with $P$ the atmospheric pressure. Assume that the reservoir has a large surface area so that the level the reservoir drops is negligible. When the water is at its peak in the straw, the kinetic energy of the system is zero, so the potential energy is $PV$. The potential energy is also $\rho g V h/2$. So the maximum height of the water is $$h = \frac{2P}{\rho g}$$ This answer is different from Martin and David's. I think this might be because when the water starts moving, the pressure at the entrance to the straw may not be $P$ any more. - Heh, nice argument. Hard to find a flaw in this. – Marek Dec 21 '10 at 22:32 1 @Mark: If you are right then the pressure at the entrance to the straw must be greater than $P_0$. How do you explain this? – Martin Gales Dec 22 '10 at 8:06 1 @Mark: If we follow Bernoulli's equation then the pressure at the entrance to the straw must be even lower than $P_0$. – Martin Gales Dec 22 '10 at 10:12 1 @Mark: True! Marek would have had to write this comment. – Martin Gales Dec 22 '10 at 11:08 1 I think this makes perfect sense. Also, if you imagine an oscillating system without friction, oscillating around the stable height, and starting from h=0, then the maximum height also turns out to be twice the stable height. In reality I think this oscillation is damped and so the water won't reach the full height and will very quickly settle to the stable height. – Sklivvz♦ Dec 22 '10 at 20:56 show 4 more comments If I follow up on keenan pepper's suggestion, if the water is deep, and especially if you can mess with the topology of the straw you can go to almost unlimited height! Consider a straw that is stuck very deeply into the ocean. Then coil the straw around at great depth many many times. I then blow very hard (I am superman afterall), and create a huge volume of airfilled straw at great depth. This configuration has a great deal of potential energy, so if we simply stop blowing we have the pressure at the great depth of the bottom of the straw accelerating water into and up the straw. Since by coiling the straw at great depth I can obtain an unlimited ratio of volume of the straw underwater to volume above water, the energy analysis allows me to reach an unlimited height. So the issue becomes if there is some other sort of limit. 
Can we get cavitation of water trying to enter the straw or something if the velocity gets too high? But, in any case you should be able to get really high, tens or hundreds of times the static limit, by preenergizing the system in this way. In the real world friction will limit how far you can take it. - -1) Show this mathematically! – Martin Gales Dec 22 '10 at 7:02 1 @Martin I think it is sufficiently clear from energy considerations that what Omega said makes sense. Of course, you'd be very welcome to submit a more mathematical analysis of the same ideas if you want. – Mark Eichenlaub Dec 22 '10 at 9:27 @Mark:This must be done by Omega. Without any math the whole discussion goes dead. Why not add at least some back-of-the-envelope calculations to confirm their claims. This is an excellent quantitative problem. – Martin Gales Dec 22 '10 at 10:34 2 The math request seems unneeded. The energy stored by depressing the water level is simply the integral of d volume times depth under the surface. The energy needed above water is the same integral above the water level. Equate the two. If you can make the first integral grow unboundedly then it is solved. – Omega Centauri Dec 22 '10 at 14:25 I went back and took a more careful look at this. I'm still not convinced it's correct, but I'm hoping this is at least better than what I had before. Let $h$ be the height of the column of water inside the straw. As this height rises by an amount $\delta h$, the work done on the column is $$\delta W = P A \delta h$$ where $A$ is the cross-sectional area. The change in potential energy is $$\delta U = \rho A \delta h g h$$ so by conservation of energy, $$\delta K = \delta W - \delta U = \left(P - \rho g h\right) A \delta h$$ This excess kinetic energy comes from two contributions: the added mass, $$\delta K_1 = \frac{1}{2}mv^2 = \frac{1}{2}(\rho A \delta h)\dot{h}^2$$ and any change in speed of the column of water, $$\delta K_2 = \frac{1}{2}m(2v\delta v) = (\rho A h)\dot{h} \delta\dot{h}$$ Putting it all together, we get $$P A \delta h - \rho A g h \delta h = \frac{1}{2}\rho A \dot{h}^2 \delta h + \rho A h\dot{h}\delta\dot{h}$$ If you assume (or prove) that $P$ is dependent on $h$ through Bernoulli's theorem, $$P + \frac{1}{2}\rho\dot{h}^2 = P_0$$ Substituting in (and canceling the common factor of $A$), you get $$P_0 \delta h - \rho g h \delta h = \rho \dot{h}^2 \delta h + \rho h \dot{h}\delta\dot{h}$$ which at least accounts for the mysterious factor of $\frac{1}{2}$ that appeared in previous versions of my answer. Now, I don't think we can simply assume that $\delta h \neq 0$ and divide it out, because if we do that, we get a factor of $\frac{\delta\dot{h}}{\delta h}$ which is undefined at the initial and maximum heights. (Roughly speaking, the variation $\delta h$ is second-order at those points whereas the variation $\delta\dot{h}$ is still first-order.) Instead, I'll divide by $\delta t$, which certainly should not produce any singularities, to get $$P_0 \dot{h} - \rho g h \dot{h} = \rho \dot{h}^3 + \rho h \dot{h}\ddot{h}$$ At the initial and maximum heights, $\dot{h} = 0$, so the equation is trivially satisfied there. But consider the situation when displaced from either initial or maximum height by an arbitrarily small amount, such that $\dot{h}\neq 0$. Here we can cancel out $\dot{h}$ to get $$P_0 - \rho g h = \rho \dot{h}^2 + \rho h \ddot{h}$$ Since $\dot{h}$ will be infinitesimally small around the maximum height, we can neglect the first term on the right, but not the second. 
So we're left with $$P_0 - \rho g h = \rho h \ddot{h}$$ Note that this agrees with a simple analysis using Newton's second law. (The forces acting on the column of water at its maximum height are the pressure force $P_0 A$ acting upwards and gravity $\rho Ahg$ acting downwards, and the difference is equal to $ma = \rho Ah\ddot{h}$.) So the differential equation passes at least one basic consistency test. Anyway, this equation no longer admits the solution $h = \frac{P_0}{\rho g}$. Instead we have $$h = \frac{P_0}{\rho (g + \ddot{h})}$$ Unfortunately I can't think of a way to determine $\ddot{h}$ at maximum without solving the equation, so for now I'm limited to a numerical solution. For a quick estimation, I plugged the full differential equation from above into Mathematica's `NDSolve` function. With boundary conditions $h(0) = 0$ and $\dot{h}(0) = 0$, it complained about undefined expressions, so I used boundary conditions at a nonzero time, $$h(\epsilon_t) = \epsilon_h$$ and $$\dot{h}(\epsilon_t) = \epsilon_{\dot{h}}$$ for values of the various $\epsilon$ constants ranging from $10^{-3}$ to $10^{-8}$. In my tests, I get this graph, seemingly independently of the values of $\{\epsilon\}$ or the ratios between them: Mathematica indicates that the graph peaks at $15.5\,\mathrm{m}$, so if this analysis is correct, that would be the maximum height. (FWIW I am still very suspicious of this calculation though) - I can't dispute your argument, but it seems to me (and is implied in the OP) that the maximum height will be higher than the balance height you found. – Sklivvz♦ Dec 21 '10 at 21:10 @David: why would $\delta h$ need to be zero at the maximum? For me it's just a testing parameter that helps you determine the equation. In the maximum it amounts to putting the system a little out of equilibrium, so it's a principle of virtual work, right? As for that $1 \over 2$ factor, it's also bugging me. I can't understand why yours and Martin's derivations differ :-) – Marek Dec 21 '10 at 22:00 @Sklivvz: yeah, that's what I thought too. I was kind of surprised to see $P_0/\rho g$ pop out of the equation at the end (and that's part of the reason I'm a little suspicious of this). – David Zaslavsky♦ Dec 22 '10 at 2:24 @Marek: I guess that works. When I first wrote this up I used $\delta t$ instead, and I had stuck in my head the fact that, at the maximum, $\delta h \equiv \dot{h}\delta t = 0$ to first order. – David Zaslavsky♦ Dec 22 '10 at 2:27 @David:There is a fundamental error in your analysis:Bernoulli's equation is applicable only to the steady flow of a fluid. – Martin Gales Dec 22 '10 at 8:46 show 8 more comments Trick question, he'd use his super strength to bend the straw into an Archimedes' screw, then hold it at an angle to the surface of the water and rotate it about the axis. This lets him draw it up to any height, and then he can drain the world's oceans to prove a point or do whatever other superdickery he's trying to do. - Superman is going to manipulate the air pressure in the straw. To get the water to go up, he must provide a reduction in the pressure. It's clear that if this reduction is provided at a very slow rate, then he will not be able to significantly exceed 10 meters or so (as limited by atmospheric pressure). On the other hand, if he reduces the pressure quickly, it's at least possible that the water could reach a higher height. How high can the water go under this assumption? 
The idea is to use the momentum of the water to get the water higher, so the figure of merit will be the maximum speed of the water at surface level. Reducing the pressure cannot move the water faster than the air and the air speed is limited by the speed of the gas molecules in the air or about 330 meters per second. By equating kinetic energy $0.5 m v^2$ with potential energy $mgh$, water with that initial speed can reach a height of $h = 0.5 v^2/g =$ 2775 meters. The height is small enough to justify the assumption that $g$ is a constant. Maybe you should add 10 meters for the usual vacuum effect. Hmmm. Ah, what the heck, I ought to just do the fracking calculation for how high the water goes in a wide straw when a vacuum is applied to it. - I'm not sure I understand. How is he getting the water up to this speed? – Mark Eichenlaub Jan 17 '11 at 21:33 Er, is your Superman blowing into the ocean surface to increase atmospheric pressure? Or are you simply planning some method to store energy in the straw using forced oscilations? – arivero Jan 18 '11 at 1:10 Mark, Alejandro: Okay, I'll edit the answer to give an idea why this sort of calculation comes to mind. – Carl Brannen Jan 19 '11 at 4:44 I see your point, but the water needs to be accelerated by the pressure from the water below it, so I guess that's a good upper bound, but I don't think the water picks up nearly the velocity you mentioned. – Mark Eichenlaub Jan 19 '11 at 5:12 @Mark @Carl I could see the point if he uses a really deep straw say 10 or 11 kilometers. First he blows air inside, up to one thousand atmospheres, then he releases, no need to suck at all... – arivero Jan 19 '11 at 17:52 I can not comment yet, so I will put it as an answer :-( I like the energy approach, but why not to use Archimedes principle? First, substitute the air atmosphere by an extra ten meters of water around the straw, so that now the initial conditions are a vacuum straw inserted a lenght h in a fluid. The energy to produce such vacuum in the fluid you can see by Archimedes; and it is h/2 times g times the mass of the removed fluid. Let it move, and it can go up until filling a column of lenght h above the level, because the energy (now gravitational) of this column is, again, h/2 times its mass times g. - Actually, should it be possible to answer without using a reference to the energy? Old pal Archimedes had not this concept. The equivalent problem is, I have a closed barrel half filled of water, floating. I push it until it is exactly covered, then I release it. Can I prove, using old knowledge, that it jumps until exactly out of the water? – arivero Jan 16 '11 at 1:00 You have some interesting ideas, but it very difficult to parse your writing. I am unsure exactly what you mean by Archimedes' principle, and in general I am only partially confident I understand what points you're trying to make. – Mark Eichenlaub Jan 16 '11 at 1:20 Archimedes principle is that a body sumerged in a fluid suffers a force equal to its volume times the density of the fluid. The principle of equality between work and variation of energy implies that the energy to move down an empty body (say a crystal sphere with vacuum inside) in a fluid to a depth h is force times h. In this way you can calculate the energy needed to do a vacuum hole drilled in water, without using explicitly the concept of pressure. – arivero Jan 16 '11 at 3:06 Okay, thanks. – Mark Eichenlaub Jan 16 '11 at 3:39 @Mark, glad to serve. I hope also that the edits will do it more readable. 
Of course, it is still to be proved that hydrodinamically all the stored energy can be used to move the column up. If it can not, the argument gives only an upper bound. – arivero Jan 16 '11 at 3:41 I am not fully convinced by this argument, but can't find a flaw in it. Let's analyse a similar experiment, which I believe to be equivalent. Assume that initially, the water is already at the stable level $H=\frac{P_{atm}}{\rho g}$. The vacuum is already present. Now, by some unimportant means, we lower the water to $h=0$, and then let it go up freely. How much energy are we storing in the system by lowering the water? We can find out by calculating the work done. The work is done against pressure and in favour of gravity. $$W= sP_{atm}H + \int^0_H{m(h)g\ \mathrm{d}h}$$ Substituting $H=\frac{P_{atm}}{\rho g}$, $m(h)=\rho s h$ $$W= \frac{sP_{atm}^2}{\rho g} + s\rho g\int^0_H{h\ \mathrm{d}h}$$ $$W= \frac{sP_{atm}^2}{\rho g} - \frac{1}{2}s\rho gH^2$$ $$W= \frac{sP_{atm}^2}{2\rho g}$$ Now when the water is released and allowed to rise, all this energy will used to make the water rise. No energy is assumed to be wasted on friction. At the highest point, all the energy will be converted in gravitational potential energy. This can be expressed through the following formula: $$U(h)=\frac{1}{2}s\rho gh^2$$ Therefore, at the top point, $W=U(h)$ $$\frac{sP_{atm}^2}{2\rho g} = \frac{1}{2}s\rho gh^2$$ Solving for $h$: $$h^2=\frac{P_{atm}^2}{g^2\rho^2}$$ $$h = \frac{P_{atm}}{g\rho} = H$$ Therefore, the water will raise up to $H$, which it will reach with zero velocity. - @Sklivvz Your expression for the work done is correct, but the total energy is the work done plus the original potential energy. When this is accounted for, your method shows that the water rises to twice its original height. – Mark Eichenlaub Dec 23 '10 at 21:04 @Mark Eichenlaub: I don't understand - the original potential energy is the second term in the RHS of the first equation. Do I need to add it twice? – Sklivvz♦ Dec 24 '10 at 9:40 @Sklivvz The system starts out with some energy. Call it $E_0$. Next, you add energy to the system. That's the work you calculated, $W$. Now the total energy of the system is $E_0 + W$. Finally, you want to turn all the energy into potential energy at the highest point. Thus, the potential energy is $U(h) = E_0 + W$, not $U(h) = W$ as you have it now. As written, your calculation says that you start with the water at equilibrium height, then you add energy and that energy doesn't dissipate, and nonetheless it goes back to equilibrium height. If so, where is all the energy you added? – Mark Eichenlaub Dec 24 '10 at 9:47 @Mark Eichenlaub: My line of thought is actually a bit different. Initially the system has potential energy $E_0$. To lower the level we do some work, but less than expected because we use the potential energy as well - the water goes down, the potential energy goes down. So when the water is at the bottom level, there is no potential energy left. This makes sense to me, because if we actually started in that position, where would the potential energy come from? – Sklivvz♦ Dec 24 '10 at 9:54 1 @Mark, I think I am getting it now... :-) I basically need to redo the answer from scratch :-/ – Sklivvz♦ Dec 27 '10 at 14:08 show 29 more comments Unless I'm missing something, this is simply the height of a water-based barometer, since it is really the atmospheric pressure that is pushing the water up the straw. at STP, the answer is 33 1/2 feet. 
If he were sucking Hg up the straw (not recommended for non-superheroes!), the height would be ~30 inches. - 1 You are missing something. Everybody knows the height of water in a water barometer is about 10 meters. It even says so in the question. The question is what would happen if Superman created a vacuum really suddenly, or perhaps alternately sucked and blew. – Keenan Pepper Dec 21 '10 at 20:29 ahhh. thanks for pointing that out! – Jeremy Dec 21 '10 at 21:17 If you immerse the straw in 10m of water, while holding the end closed, and then release it, the water will accelerate upwards past the level point and it will overshoot up to a height of about 6m. No sucking needed. Add some suction to this and you can go higher. A square section straw should minimize the friction loss allowing for better "spring back". Overall you can help the suction by immersing the straw deeper and deeper. Note that it takes work to immerse the straw (displacing the water) and that is the energy converted into potential energy that allows the water to rise. Close the end when the water reaches the maximum height and you can measure how high you can reach. - 1 -1) Show the math! – Martin Gales Dec 22 '10 at 7:43 2 @Martin There's nothing innately wrong with a qualitative answer. In this case, jalexiou was trying to point out a way of thinking about the problem that differed from the way I described in the original question. The point of the answer was this new physical insight, and so especially for this circumstance a detailed calculation is unnecessary. (I didn't upvote, though, because the answer restates one posted a few hours earlier.) – Mark Eichenlaub Dec 22 '10 at 9:31 1 @Mark: The problem is that jalexiou gives a quantitative argument: "the water will accelerate upwards past the level point and it will overshoot up to a height of about 6m" without any calculation. – Martin Gales Dec 22 '10 at 10:56 @Martin Well, I agree with you there. – Mark Eichenlaub Dec 22 '10 at 11:07 @Mark, I am an engineer, and as such I am allowed to guesstimate results based on experience. Yes, in reality the bounce back will not be 6.0000m but somewhere near there. I estimate the damping ratio of a typical flow through a pipe as between 0.1 and 0.6, and from my experience with 2nd-order damped systems the overshoot (bounce) is about 60% of the initial excitation. If I show the math will you remove the (-1)? Additionally, I have performed this experiment various times with actual drinking straws and water and I stand by my guesstimate. – ja72 Dec 23 '10 at 16:21 You can manipulate the vacuum-suction limitation of a maximum height of 33.9 feet or 10 meters (14.7 psia or 0.1 MPa) by using oscillating blowing and sucking. Use a longer straw submerged deep enough into the water, blow into it until the air almost reaches the bottom of the straw, then suck! You will get a height boost! - Welcome to physics SE! Physics lives through discussion and coherent reason. -1 At least mention a physical law (Newton's $F=m\cdot a$). This height boost is already mentioned as inertia in the OP's question. – Stefan Bischof Apr 25 at 19:54
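A quick numerical cross-check of David Zaslavsky's differential equation above (this sketch is mine and is not part of the original thread). Writing his equation $P_0 - \rho g h = \rho\dot{h}^2 + \rho h \ddot{h}$ in terms of $u = h^2$ gives $\tfrac{1}{2}\ddot{u} = P_0/\rho - g\sqrt{u}$, which is regular at $u = 0$ and sidesteps the boundary-condition trouble he mentions; its first integral puts the peak at exactly $3P_0/(2\rho g) \approx 15.5$ m, between the 10.3 m static column and the factor-of-two (roughly 20.7 m) energy-argument answers, consistent with the Mathematica figure quoted in that answer.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, g, P0 = 1000.0, 9.81, 101325.0   # water, standard atmosphere (SI units)

# P0 - rho*g*h = rho*h'^2 + rho*h*h'', rewritten for u = h^2 so that
# u(0) = u'(0) = 0 is a regular initial condition:  u'' = 2*(P0/rho - g*sqrt(u))
def rhs(t, y):
    u, udot = y
    return [udot, 2.0 * (P0 / rho - g * np.sqrt(max(u, 0.0)))]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=1e-3)
h = np.sqrt(np.maximum(sol.y[0], 0.0))
print(f"peak height  ~ {h.max():.2f} m")         # ~15.5 m
print(f"static height  {P0 / (rho * g):.2f} m")  # ~10.3 m
```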
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 23, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390078186988831, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/49917/what-is-resonance-width-why-we-use-it-to-distinguish-different-regimes-of-the-a
# What is the resonance width? Why do we use it to distinguish different regimes of the Anderson model? The single-impurity Anderson Hamiltonian is $H=\sum_{\sigma}\epsilon_{d}n_{d,\sigma}+Un_{d,\uparrow}n_{d,\downarrow}+\sum_{k,\sigma}\epsilon_{k}c_{k,\sigma}^{+}c_{k,\sigma}+\sum_{k,\sigma}(V_{k}c_{d,\sigma}^{+}c_{k,\sigma}+h.c.)$ where $n_{d,\sigma}$ is the occupation number of the d electron, $\epsilon_d$ is the energy level of the d electron, and $U$ is the Hubbard interaction. I know that there are several regimes of the parameters $\epsilon_d, U, \Delta$ ($\Delta$ is the resonance width; I do not know why it is called the resonance width, and this is actually my question), in which the physics of the Anderson model is different. For example, there is the intermediate valence regime, where $\epsilon_d$ and $\epsilon_d+U$ are comparable with $\Delta$. For another example, when $\epsilon_d-\epsilon_F\gg\Delta$ and $\epsilon_d+U-\epsilon_F\ll\Delta$, this is called the empty orbital regime. My questions are 1. What is the resonance width? What is its physical meaning? 2. Why should we compare $\epsilon_d-\epsilon_F$ and $\epsilon_d+U-\epsilon_F$ with $\Delta$, rather than 0, when we distinguish different regimes of the Anderson model? Thanks very much! -
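For reference only, and not taken from the page above: the standard textbook definition is that the hybridization term broadens the localized d level into a resonance of Lorentzian shape, whose half-width at the Fermi level is $$\Delta = \pi\sum_{k}|V_{k}|^{2}\,\delta(\epsilon_F-\epsilon_{k}) \approx \pi\rho(\epsilon_F)\,|V|^{2}$$ for a slowly varying conduction-band density of states $\rho$. Loosely speaking, $\Delta$ is the energy resolution of the broadened d level, which is why $\epsilon_d-\epsilon_F$ and $\epsilon_d+U-\epsilon_F$ are measured against $\Delta$ rather than against $0$ when classifying regimes.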
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498843550682068, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=0e9396f867ed4a7661124cc1417d0819&p=4077625
Physics Forums Fluid dynamics, friction and pressure Hello, this is not homework, I am trying to derive some physics results using intuition, I am currently looking at some fluid dynamics problems. Consider a water droplet on a frictionless horizontal plane, subject to gravity. The cross-section of the droplet would look something like this (water volume at any given position along the cross-section discretized into water columns): Now because the surface on which the droplet is resting is frictionless, the water droplet will spread out forever until it covers the plane completely and uniformly. This is logical and can be tested easily with a material with a very low friction coefficient, such as glass. This happens because at the bottom of each water column, there is some amount of pressure, depending on the height of the column, as: $$P = \rho gh$$ Water being incompressible, fluid at the bottom of the column will be "pushed" aside by the column's pressure, and will attempt to enter the neighboring columns. Because the plane is frictionless, nothing prevents it to do so, so it does and the droplet spreads out. Well, actually, water will only go from higher-height columns to lower-height columns, because the column "receiving" the water will oppose a pressure force dependent on its current height; clearly, if the "receiving" column has a higher height, its pressure force will exceed the force pushing the water towards it, and the water will not move. But from the argument that the maximum height of the columns has to be bounded, it follows that the system's equilibrium is a uniform water distribution (possibly of infinitesimal height, if the plane has an infinite surface area). Now suppose the plane has some amount of friction. We would expect the droplet to enter a more interesting state of equilibrium, because there is now one force opposing water transfer between columns: friction. This means that as soon as the friction force exceeds the pressure force (pressure multiplied by the column's surface area) the water cannot move from that column, it is "held back" by friction. We know that friction depends on the normal force on the surface, conveniently this is the pressure force multiplied by the friction coefficient: $$F_{\mathrm{friction}} = \mu N = \mu P A = A \mu \rho gh$$ The force exerted by the water column on the water at the bottom of said column is more complicated to calculate. Basically, the column receiving the water will oppose a pressure force dependent on its current height. Let h' denote this height, then the opposing force is: $$F_{\mathrm{opposing}} = A \rho g h'$$ Since the columns have the same area. Then, the net force without friction on the water to transfer is: $$F_{\mathrm{net}} = A \rho g h - A \rho g h' = A \rho g(h - h')$$ So the water will be transferred if: $$F_{\mathrm{net}} > F_{\mathrm{friction}}$$ $$A \rho g(h - h') > A \mu \rho gh$$ $$h - h' > \mu h$$ $$(1 - \mu) h > h'$$ Which yields: $$\displaystyle h > \frac{h'}{1 - \mu}$$ This conclusion agrees with our previous observations: with a frictionless surface (μ = 0) we get h > h' which is what we found at the beginning of the post (water will only be transferred if the receiving column has less water than the "donor" column. Note I ignored the fact that the water spreads to more than one column (probably four) since we are in two dimensions, but it should just be a constant factor on the (h - h') term - but I haven't checked. 
With this relation and some integration, it should be possible to compute the steady-state height and water distribution of the droplet depending on its original height, its initial water distribution and the surface friction. Now my question - is this actually correct? It seems to make sense to me, and the results seem to be coherent and consistent with experiment, but can anyone check through my work and see if they agree with it? Does it make sense, is this what is actually happening physically, or are there extra things to consider? I also have an extra question: this analysis ignores surface tension as the droplet's surface area changes - how important are those effects and what would be the first step to introducing them into the equation? I am unsure how to calculate the effects of surface tension - my first idea was to consider the change in (h - h') after the water has moved, which is what is causing the droplet's area to change. Thanks in advance :) Recognitions: Gold Member Friction is not an issue with a fluid because of the well-known no-slip boundary condition at a solid surface. This is a fluid mechanics problem, and viscous forces play a major role. For this particular problem, surface tension effects are also very important. Surface tension affects the pressure distribution within the fluid, which feeds into the viscous flow. Another factor to consider is the contact angle at the interface between the solid, liquid, and gas, which is determined by the chemical nature of the three materials (primarily the liquid and solid). The contact angle is typically not zero. Non-stick surfaces have lower contact angles. So, if you are going to start modeling this problem properly, you need to forget about friction, but include viscous flow, surface tension, the no-slip boundary condition, and the contact angle. Recognitions: Gold Member In this problem, because of the very thin nature of the fluid film on the surface and the slow velocity of flow, fluid inertia is not significant, and the behavior is dominated by the creeping viscous flow equations. The Euler equations do not apply to this situation.
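Purely as an illustration of the original poster's discrete column model, and not of the viscous-flow treatment the replies recommend, here is a minimal sketch (the function name, the transfer increment, and the sweep order are my own choices) that iterates the transfer rule $(1 - \mu) h > h'$ on a one-dimensional row of columns until no further transfer is possible.

```python
import numpy as np

def relax(heights, mu, dh=1e-3, max_sweeps=200000):
    """Iterate the post's column-transfer rule on a 1-D row of water columns.

    An amount dh moves from column i to a neighbour j whenever
    (1 - mu) * h[i] > h[j] + dh; sweeping stops when no transfer is possible.
    The increment dh and the sweep order are arbitrary modelling choices.
    """
    h = np.asarray(heights, dtype=float).copy()
    for _ in range(max_sweeps):
        moved = False
        for i in range(len(h)):
            for j in (i - 1, i + 1):
                if 0 <= j < len(h) and (1.0 - mu) * h[i] > h[j] + dh:
                    h[i] -= dh
                    h[j] += dh
                    moved = True
        if not moved:
            break
    return h

# A "droplet": all of the water initially piled into the middle columns.
droplet = [0, 0, 0, 1.0, 1.0, 1.0, 0, 0, 0]
print(np.round(relax(droplet, mu=0.0), 3))   # frictionless: spreads out ~uniformly
print(np.round(relax(droplet, mu=0.3), 3))   # with friction: a mound survives
```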
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374606013298035, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/262049/a-non-well-ordered-set-where-principle-of-transfinite-induction-holds
# A non-well-ordered set where the principle of transfinite induction holds? A theorem in my textbook says: Let $(A, < )$ be a totally ordered set. Then A has a least element and the principle of transfinite induction holds in A if and only if A is well ordered. I understand why you need the assumption that A has a least element to prove the left-to-right implication in my textbook's proof of this theorem. But I can't find an example of a non-well-ordered set where the principle of transfinite induction holds... Obviously, such a set doesn't have a least element, but that "hint" didn't take me far. So, can someone help me? EDIT: (Due to Brian M. Scott) We state the principle of transfinite induction as follows: Let $(A, <)$ be a totally ordered set and $B \subseteq A$ a subset which satisfies: $$(\forall x \in A) (p_A(x) \subseteq B \implies x \in B)$$ Then B = A. (Here $p_A(x) = \{ a \in A : a < x \}$.) - 1 Exactly how do you state the principle of transfinite induction? – Brian M. Scott Dec 19 '12 at 12:26 I'll add the exact statement to the post. – ante.ceperic Dec 19 '12 at 12:49 ## 2 Answers You are stating the principle as essentially $$(\forall x)[(\forall y < x) P(y) \to P(x)] \to (\forall x) P(x)$$ where the quantifiers range over an ordered set $A$. I claim that if the set has no least element then the principle does not hold. Take $P(z)$ to be $z \not = z$. Fix any $x \in A$. Because $x$ is not minimal, there is some $y < x$, and $P(y)$ is false, so $(\forall y < x)P(y)$ is false. Also $P(x)$ is false. Thus $(\forall x)[(\forall y < x) P(y) \to P(x)]$ is true. But $P(x)$ is false for all $x$, so the principle gives an incorrect result. Thus, by contraposition, if the transfinite induction principle holds then $A$ does have a least element. The assumption of a least element in the theorem mentioned in the question is superfluous. If the set did have a least element $x_0$, then $(\forall y < x_0) Q(y)$ would be true regardless of what $Q$ is. That is the way that the transfinite induction principle is able to avoid proving identically false statements such as the $P$ I chose above. The intuition to have is that when we look at non-minimal elements, the "inductive" part of the principle of mathematical induction or the principle of transfinite induction will always go through for false statements. - I'm guessing that your version of the principle of transfinite induction is something like this: If $(\forall y<x)P(y)$ implies $P(x)$ for each $x\in X$, then $(\forall x\in X)P(x)$. If so, try taking $X=\Bbb Z$. I've also included a spoiler-protected hint for a property $P(x)$ that would work. (There are many.) Mouse-over to see it. For $P(x)$ you could try $x=x+1$. - Yes, you provided an example of a non-well-ordered set where the principle of transfinite induction does not hold. I'm interested in a non-well-ordered set where the principle of transfinite induction HOLDS (its use gives good results). I'm actually wondering why we NEED the "A has a least element" assumption in the theorem I cited. Can we leave it out? – ante.ceperic Dec 19 '12 at 12:48 1 @ante: It's the absence of a least element that makes my example work. But we need to know exactly how you're stating the principle of transfinite induction: in some versions it incorporates the assumption of a least element. – Brian M. Scott Dec 19 '12 at 12:50 You are providing me with an example of a non-well-ordered set where the transfinite induction principle doesn't hold. You use it to get to the wrong conclusion.
I'm interested in a non-well-ordered set where PTI holds (its use gives us a good conclusion for every P). Are there any? – ante.ceperic Dec 19 '12 at 12:57 @ante: Okay, I see what you're asking now. I'll have to give that some thought; I don't see a proof that doesn't use the assumption of a least element, but I don't immediately see a counterexample without it. – Brian M. Scott Dec 19 '12 at 13:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9204013347625732, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/117189/evaluation-of-lim-n-rightarrow-infty-left-e-nanb-right-when-a
# Evaluation of $\lim _{n\rightarrow \infty }\left( e^{-na}n^{b}\right)$ when $a > 0, b > 0$ Does $$\lim _{n\rightarrow \infty }\left( e^{-na}n^{b}\right)$$ evaluate to $\infty$ when $a > 0, b > 0$? I tried the expansion of $e^{-na}$ but could not shake off the $n$ from the numerator. - Think of the expression as $n^b\over e^{na}$ and note all powers are positive. Which wins out, exponentials or powers? – David Mitra Mar 6 '12 at 19:17 it should evaluate to 0, since for large numbers the exponential is way larger than the polynomial. – quartz Mar 6 '12 at 19:19 ## 3 Answers To try and complete your attempt: You can use the expansion of $e^n$ to show that for any $c \gt 0$, $e^n \gt Kn^c$ for some constant $K \gt 0$ (dependent on $c$). Let $[c] = m-1$ ($[x]$ is the integer part of $x$). Since $e^n = 1+ n + \frac{n^2}{2} + \dots + \frac{n^{m}}{m!} + \dots \gt \frac{n^m}{m!}$, and $n^m \gt n^c$, we get $e^n \ge \frac{n^c}{m!}$. In your case, we can pick $c = \frac{b+1}{a}$. So we get $$e^n \ge K n^{\frac{b+1}{a}}$$ i.e. $$e^{na} \ge K^a n^{b+1}$$ and so $$\frac{n^b}{e^{na}} \le \frac{1}{K^a n}$$ And so your sequence converges to $0$. By the way, you don't really need the infinite series of $e^x$. Try proving, by induction on $n$, that $e^x \ge 1 + x + \frac{x^2}{2!} + \dots + \frac{x^n}{n!}$ for $x \ge 0$. - The answer is no. Expressing $n$ as $e^{\log n}$, the expression simplifies to $\text{exp}[{b \log n-na}]$. Since $\log n$ does not grow as fast as $n$, it evaluates to zero as $n\rightarrow \infty$. - To me, this problem just screams for a certain Frenchman's aid: Let $k$ be an integer greater than or equal to $b$. Then $$0\le{e^{-na}n^b}={n^b\over e^{na}}\le {n^k\over e^{na}}.$$ Now evaluate $\displaystyle\lim\limits_{n\rightarrow\infty}{ {n^k\over e^{na}}} =\lim\limits_{x\rightarrow\infty}{ {x^k\over e^{xa}}}$, by applying L'Hôpital's rule $k$ times. (The first step isn't necessary, but in my opinion makes the write-up a bit prettier.) - Nice, also thanks for using L'Hôpital's rule, I needed a reminder of that. – Hardy Mar 6 '12 at 19:56
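A quick numerical illustration of the answers above; this check is mine, not from the thread, and the parameter values are arbitrary. Working with the logarithm $b\log n - na$ (as in the second answer) avoids overflow and underflow, and shows that the expression can grow at first but eventually decreases without bound, so $e^{-na}n^{b} \to 0$.

```python
import math

def log_value(n, a, b):
    """log of e^(-n*a) * n^b, computed in log space to avoid underflow."""
    return b * math.log(n) - n * a

a, b = 0.01, 5.0                       # arbitrary positive parameters
for n in (10, 10**2, 10**4, 10**6):
    print(f"n = {n:>7}: log value = {log_value(n, a, b):.1f}")
# The log value rises at first, then heads to -infinity, i.e. the limit is 0.
```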
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9244123101234436, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/16098/complexity-of-testing-integer-square-freeness/16108
## Complexity of testing integer square-freeness How fast can an algorithm tell if an integer is square-free? I am interested in both deterministic and randomized algorithms. I also care about both unconditional results and ones conditional on GRH (or other reasonable number-theoretic conjectures). One reference I could find was on the Polymath4 wiki, where it states: No unconditional polynomial-time deterministic algorithm for square-freeness seems to be known. (It is listed as an open problem in this paper from 1994.) I can't tell if that quote implies that both conditional and randomized polynomial-time algorithms exist, but it might (the exception that proves the rule?). Thanks in advance. - ## 4 Answers As an upper bound, the problem is clearly in NP at worst. Given a putative factorization, we can check that it is indeed a correct factorization of n and whether it is square-free in (very low degree) polynomial time. Another way of saying this is that there is a non-deterministic polynomial-time algorithm. (But this is a far cry, of course, from having a polynomial-time random algorithm.) - 4 It is also in co-NP. – Rune Mar 1 2010 at 23:49 MathWorld says that "there is no known polynomial time algorithm for recognizing squarefree integers or for computing the squarefree part of an integer". - 1 That's good, but unfortunately it doesn't specify whether it means deterministic/randomized and conditional/unconditional. – aorq Feb 23 2010 at 1:28 For quantum computers it is in BQP, since factoring is in BQP; see the Wikipedia article on Shor's algorithm. The general number field sieve is the most efficient classical algorithm for factoring numbers larger than 100 digits, according to Wikipedia. According to the Wikipedia article on factorization, for $b$ bits there is a published asymptotic running time of $O\left(\exp\left(\left(\tfrac{64}{9} b\right)^{1/3} (\log b)^{2/3}\right)\right)$ for this algorithm. So this time will be an upper bound for the problem of recognizing square-free integers if this algorithm is used. - but he only wanted to know if it's square-free - not what the square-free part is – David Lehavi Feb 23 2010 at 6:00 I was involved in the conversations about this topic on the Polymath4 blog (actually, looking back, it looks like I was the one who dug up that old paper...) and I came to believe that there was no such algorithm (randomized, conditional, whatever). Certainly I searched the literature as best I could and didn't find one. But I'm pessimistic about finding a reduction from factoring, for reasons I touched on in the linked post. I was going to mention this beautiful argument, but actually I don't think it applies here -- you can only use squarefreeness to tell if a prime factor $p | N$ ramifies over some extension (Edit: I think this is true -- but something weird might happen if the extension isn't Galois? Maybe? I know so little algebraic number theory it's not even funny), but that's only possible if p divides the discriminant -- but you can do that already by the Euclidean algorithm.
So squarefreeness would only let you maybe factor if for some reason you could do the algorithm quickly in number fields with huge discriminant, which admittedly might be possible. Edit: Although of course if the discriminant is big enough to make a difference, it's unclear how you'd extract information about p anyway. Which, modulo a whole bunch of holes and handwaving, would seem to rule out any naive attempt to adapt that "reduction." -
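To make concrete why the open question is about time polynomial in the number of bits, here is a simple deterministic check (my sketch, not from the thread) that takes roughly $n^{1/3}$ trial divisions, i.e. time exponential in the bit length $b$. It uses the elementary fact that after removing every prime factor up to $n^{1/3}$, the remaining cofactor has at most two prime factors, so the only way it can fail to be squarefree is by being a perfect square.

```python
from math import isqrt

def is_squarefree(n):
    """Deterministic squarefree test by trial division up to n**(1/3).

    After dividing out each prime p <= n**(1/3) (returning False if any p^2 | n),
    the cofactor m has all prime factors > n**(1/3), hence at most two of them,
    so n is squarefree unless m is a perfect square > 1.
    Cost: about n**(1/3) steps, i.e. exponential in the bit length of n.
    """
    if n <= 0:
        raise ValueError("n must be a positive integer")
    m, p = n, 2
    while p * p * p <= n:
        if m % p == 0:
            m //= p
            if m % p == 0:        # p^2 divides n
                return False
        p += 1
    r = isqrt(m)
    return not (m > 1 and r * r == m)

print([k for k in range(1, 31) if not is_squarefree(k)])  # 4, 8, 9, 12, 16, ...
```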
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507390260696411, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/kakeya-conjecture/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘Kakeya conjecture’ tag. ## The two-ends reduction for the Kakeya maximal conjecture 15 May, 2009 in math.CA, math.CO, tricks | Tags: Kakeya conjecture, Kakeya maximal function, rescaling, two-ends reduction | by Terence Tao | 10 comments In this post I would like to make some technical notes on a standard reduction used in the (Euclidean, maximal) Kakeya problem, known as the two ends reduction. This reduction (which takes advantage of the approximate scale-invariance of the Kakeya problem) was introduced by Wolff, and has since been used many times, both for the Kakeya problem and in other similar problems (e.g. by Jim Wright and myself to study curved Radon-like transforms). I was asked about it recently, so I thought I would describe the trick here. As an application I give a proof of the ${d=\frac{n+1}{2}}$ case of the Kakeya maximal conjecture. Read the rest of this entry » ## Recent progress on the Kakeya conjecture 11 May, 2009 in math.AG, math.AP, math.AT, math.CO, talk, travel | Tags: additive combinatorics, heat flow, incidence geometry, Kakeya conjecture, polynomial method | by Terence Tao | 22 comments Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference. Read the rest of this entry » ## The Kakeya set and maximal conjectures for algebraic varieties over finite fields 12 March, 2009 in math.AG, math.CA, math.CO, paper | Tags: Jordan Ellenberg, Kakeya conjecture, polynomial method, random projection trick, random rotations method, Richard Oberlin, Zeev Dvir | by Terence Tao | 23 comments Jordan Ellenberg, Richard Oberlin, and I have just uploaded to the arXiv the paper “The Kakeya set and maximal conjectures for algebraic varieties over finite fields“, submitted to Mathematika.  This paper builds upon some work of Dvir and later authors on the Kakeya problem in finite fields, which I have discussed in this earlier blog post.  Dvir established the following: Kakeya set conjecture for finite fields. Let F be a finite field, and let E be a subset of $F^n$ that contains a line in every direction.  Then E has cardinality at least $c_n |F|^n$ for some $c_n > 0$. The initial argument of Dvir gave $c_n = 1/n!$.  This was improved to $c_n = c^n$ for some explicit $0 < c < 1$ by Saraf and Sudan, and recently to $c_n =1/2^n$ by Dvir, Kopparty, Saraf, and Sudan, which is within a factor 2 of the optimal result. In our work we investigate a somewhat different set of improvements to Dvir’s result.  The first concerns the Kakeya maximal function $f^*: {\Bbb P}^{n-1}(F) \to {\Bbb R}$ of a function $f: F^n \to {\Bbb R}$, defined for all directions $\xi \in {\Bbb P}^{n-1}(F)$ in the projective hyperplane at infinity by the formula $f^*(\xi) = \sup_{\ell // \xi} \sum_{x \in \ell} |f(x)|$ where the supremum ranges over all lines $\ell$ in $F^n$ oriented in the direction $\xi$.  Our first result is the endpoint $L^p$ estimate for this operator, namely Kakeya maximal function conjecture in finite fields. We have $\| f^* \|_{\ell^n({\Bbb P}^{n-1}(F))} \leq C_n |F|^{(n-1)/n} \|f\|_{\ell^n(F^n)}$ for some constant $C_n > 0$. This result implies Dvir’s result, since if f is the indicator function of the set E in Dvir’s result, then $f^*(\xi) = |F|$ for every $\xi \in {\Bbb P}^{n-1}(F)$.  
However, it also gives information on more general sets E which do not necessarily contain a line in every direction, but instead contain a certain fraction of a line in a subset of directions.  The exponents here are best possible in the sense that all other $\ell^p \to \ell^q$ mapping properties of the operator can be deduced (with bounds that are optimal up to constants) by interpolating the above estimate with more trivial estimates.  This result is the finite field analogue of a long-standing (and still open) conjecture for the Kakeya maximal function in Euclidean spaces; we rely on the polynomial method of Dvir, which thus far has not extended to the Euclidean setting (but note the very interesting variant of this method by Guth that has established the endpoint multilinear Kakeya maximal function estimate in this setting, see this blog post for further discussion). It turns out that a direct application of the polynomial method is not sufficient to recover the full strength of the maximal function estimate; but by combining the polynomial method with the Nikishin-Maurey-Pisier-Stein “method of random rotations” (as interpreted nowadays by Stein and later by Bourgain, and originally inspired by the factorisation theorems of Nikishin, Maurey, and Pisier), one can already recover a “restricted weak type” version of the above estimate.  If one then enhances the polynomial method with the “method of multiplicities” (as introduced by Saraf and Sudan) we can then recover the full “strong type” estimate; a few more details below the fold. It turns out that one can generalise the above results to more general affine or projective algebraic varieties over finite fields.  In particular, we showed Kakeya maximal function conjecture in algebraic varieties. Suppose that $W \subset {\Bbb P}^N$ is an (n-1)-dimensional algebraic variety.  Let $d \geq 1$ be an integer. Then we have $\| \sup_{\gamma \ni x; \gamma \not \subset W} \sum_{y \in \gamma} f(y) \|_{\ell^n_x(W(F))} \leq C_{n,d,N,W} |F|^{(n-1)/n} \|f\|_{\ell^n({\Bbb P}^N(F))}$ for some constant $C_{n,d,N,W} > 0$, where the supremum is over all irreducible algebraic curves $\gamma$ of degree at most d that pass through x but do not lie in W, and W(F) denotes the F-points of W. The ordinary Kakeya maximal function conjecture corresponds to the case when N=n, W is the hyperplane at infinity, and the degree d is equal to 1.  One corollary of this estimate is a Dvir-type result: a subset of ${\Bbb P}^N(F)$ which contains, for each x in W, an irreducible algebraic curve of degree d passing through x but not lying in W, has cardinality $\gg |F|^n$ if $|W| \gg |F|^{n-1}$.  (In particular this implies a lower bound for Nikodym sets worked out by Li.)  The dependence of the implied constant on W is only via the degree of W. The techniques used in the flat case can easily handle curves $\gamma$ of higher degree (provided that we allow the implied constants to depend on d), but the method of random rotations does not seem to work directly on the algebraic variety W as there are usually no symmetries of this variety to exploit.  Fortunately, we can get around this by using a “random projection trick” to “flatten” W into a hyperplane (after first expressing W as the zero locus of some polynomials, and then composing with the graphing map for such polynomials), reducing the non-flat case to the flat case. Below the fold, I wish to sketch two of the key ingredients in our arguments, the random rotations method and the random projections trick.  
(We of course also use some algebraic geometry, but mostly low-tech stuff, on the level of Bezout’s theorem, though we do need one non-trivial result of Kleiman (from SGA6), that asserts that bounded degree varieties can be cut out by a bounded number of polynomials of bounded degree.) [Update, March 14: See also Jordan's own blog post on our paper.] Read the rest of this entry » ## A remark on the Kakeya needle problem 31 December, 2008 in math.CA, math.MG, question | Tags: Kakeya conjecture, Kakeya needle problem | by Terence Tao | 15 comments In 1917, Soichi Kakeya posed the following problem: Kakeya needle problem. What is the least amount of area required to continuously rotate a unit line segment in the plane by a full rotation (i.e. by $360^\circ$)? In 1928, Besicovitch showed that given any $\varepsilon > 0$, there exists a planar set of area at most $\varepsilon$ within which a unit needle can be continuously rotated; the proof relies on the construction of what is now known as a Besicovitch set – a set of measure zero in the plane which contains a unit line segment in every direction.  So the answer to the Kakeya needle problem is “zero”. I was recently asked (by Claus Dollinger) whether one can take $\varepsilon = 0$; in other words, Question. Does there exist a set of measure zero within which a unit line segment can be continuously rotated by a full rotation? This question does not seem to be explicitly answered in the literature.  In the papers of von Alphen and of Cunningham, it is shown that it is possible to continuously rotate a unit line segment inside a set of arbitrarily small measure and of uniformly bounded diameter; this result is of course implied by a positive answer to the above question (since continuous functions on compact sets are bounded), but the converse is not true. Below the fold, I give the answer to the problem… but perhaps readers may wish to make a guess as to what the answer is first before proceeding, to see how good their real analysis intuition is.  (To partially prevent spoilers for those reading this post via RSS, I will be whitening the text; you will have to highlight the text in order to see it.  Unfortunately, I do not know how to white out the LaTeX in such a way that it is visible upon highlighting, so RSS readers may wish to stop reading right now; but I suppose one can view the LaTeX as supplying hints to the problem, without giving away the full solution.) [Update, March 13: a non-whitened version of this article can be found as part of this book.] Read the rest of this entry » ## The Kakeya conjecture and the Ham Sandwich theorem 27 November, 2008 in expository, math.AG, math.AT, math.CA, math.CO | Tags: Borsuk-Ulam theorem, Ham sandwich theorem, Kakeya conjecture, Larry Guth, polynomial method | by Terence Tao | 16 comments One of my favourite family of conjectures (and one that has preoccupied a significant fraction of my own research) is the family of Kakeya conjectures in geometric measure theory and harmonic analysis.  There are many (not quite equivalent) conjectures in this family.  The cleanest one to state is the set conjecture: Kakeya set conjecture: Let $n \geq 1$, and let $E \subset {\Bbb R}^n$ contain a unit line segment in every direction (such sets are known as Kakeya sets or Besicovitch sets).  Then E has Hausdorff dimension and Minkowski dimension equal to n. 
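The Minkowski (box-counting) dimension appearing in the conjecture can be illustrated numerically. The sketch below is an editorial aside, not from the post: cover a set by boxes of side $\delta$, count the boxes $N(\delta)$ that are hit, and look at $\log N(\delta)/\log(1/\delta)$. For the middle-thirds Cantor set this recovers $\log 2/\log 3 \approx 0.63$; a genuine Besicovitch set in the plane would instead have dimension $2$ (Davies).

```python
# Box-counting estimate of the Minkowski dimension of the middle-thirds Cantor set.
# Left endpoints of the level-L intervals are m / 3**L with m built from digits 0 and 2,
# so box indices at scale 3**-k can be computed exactly with integer arithmetic.
import math

def cantor_endpoints(level):
    ints = [0]
    for _ in range(level):
        ints = [3 * m for m in ints] + [3 * m + 2 for m in ints]
    return ints                                  # endpoints are m / 3**level

level = 12
ints = cantor_endpoints(level)
for k in range(2, 9):
    boxes = len({m // 3 ** (level - k) for m in ints})   # boxes of side 3**-k that are hit
    print(k, math.log(boxes) / math.log(3 ** k))         # ~0.6309 for each k
print(math.log(2) / math.log(3))                          # exact dimension, for comparison
```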
One reason why I find these conjectures fascinating is the sheer variety of mathematical fields that arise both in the partial results towards this conjecture, and in the applications of those results to other problems.  See for instance this survey of Wolff, my Notices article and this article of Łaba on the connections between this problem and other problems in Fourier analysis, PDE, and additive combinatorics; there have even been some connections to number theory and to cryptography.  At the other end of the pipeline, the mathematical tools that have gone into the proofs of various partial results have included: • Maximal functions, covering lemmas, $L^2$ methods (Cordoba, Strömberg, Cordoba-Fefferman); • Fourier analysis (Nagel-Stein-Wainger); • Multilinear integration (Drury, Christ) • Paraproducts (Katz); • Combinatorial incidence geometry (Bourgain, Wolff); • Multi-scale analysis (Barrionuevo, Katz-Łaba-Tao, Łaba-Tao, Alfonseca-Soria-Vargas); • Probabilistic constructions (Bateman-Katz, Bateman); • Additive combinatorics and graph theory (Bourgain, Katz-Łaba-Tao, Katz-Tao, Katz-Tao); • Sum-product theorems (Bourgain-Katz-Tao); • Bilinear estimates (Tao-Vargas-Vega); • Perron trees (Perron, Schoenberg, Keich); • Group theory (Katz); • Low-degree algebraic geometry (Schlag, Tao, Mockenhaupt-Tao); • High-degree algebraic geometry (Dvir, Saraf-Sudan); • Heat flow monotonicity formulae (Bennett-Carbery-Tao) [This list is not exhaustive.] Very recently, I was pleasantly surprised to see yet another mathematical tool used to obtain new progress on the Kakeya conjecture, namely (a generalisation of) the famous Ham Sandwich theorem from algebraic topology.  This was recently used by Guth to establish a certain endpoint multilinear Kakeya estimate left open by the work of Bennett, Carbery, and myself.  With regards to the Kakeya set conjecture, Guth’s arguments assert, roughly speaking, that the only Kakeya sets that can fail to have full dimension are those which obey a certain “planiness” property, which informally means that the line segments that pass through a typical point in the set must be essentially coplanar. (This property first surfaced in my paper with Katz and Łaba.)  Guth’s arguments can be viewed as a partial analogue of Dvir’s arguments in the finite field setting (which I discussed in this blog post) to the Euclidean setting; in particular, both arguments rely crucially on the ability to create a polynomial of controlled degree that vanishes at or near a large number of points.  Unfortunately, while these arguments fully settle the Kakeya conjecture in the finite field setting, it appears that some new ideas are still needed to finish off the problem in the Euclidean setting.  Nevertheless this is an interesting new development in the long history of this conjecture, in particular demonstrating that the polynomial method can be successfully applied to continuous Euclidean problems (i.e. it is not confined to the finite field setting). In this post I would like to sketch some of the key ideas in Guth’s paper, in particular the role of the Ham Sandwich theorem (or more precisely, a polynomial generalisation of this theorem first observed by Gromov). 
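The "vanishing polynomial" step mentioned above is, at its core, a dimension count in linear algebra: a polynomial of degree at most $d$ in two variables has $(d+1)(d+2)/2$ coefficients, so whenever a point set has fewer points than that, a nonzero polynomial of degree at most $d$ vanishing on all of them can be found by solving a homogeneous linear system. The short Python sketch below is an editorial illustration of just this step (it is not taken from Guth's or Dvir's arguments); the degree and the random point set are arbitrary choices.

```python
# Find a nonzero polynomial of degree <= d vanishing on a given point set,
# by taking a null vector of the evaluation matrix (more unknowns than equations).
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((20, 2))                 # 20 points in the plane
d = 5                                                 # (5+1)(5+2)/2 = 21 monomials > 20 points

monomials = [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]
A = np.array([[x**i * y**j for (i, j) in monomials] for (x, y) in points])

_, _, Vh = np.linalg.svd(A)
coeffs = Vh[-1]                                       # (approximate) null vector of A
p = lambda x, y: sum(c * x**i * y**j for c, (i, j) in zip(coeffs, monomials))
print(max(abs(p(x, y)) for (x, y) in points))         # ~1e-13: vanishes at every point
```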
Read the rest of this entry » ## Dvir’s proof of the finite field Kakeya conjecture 24 March, 2008 in expository, math.AG, math.CO | Tags: extremal combinatorics, Kakeya conjecture, polynomial method | by Terence Tao | 34 comments One of my favourite unsolved problems in mathematics is the Kakeya conjecture in geometric measure theory. This conjecture is descended from the Kakeya needle problem. (1917) What is the least area in the plane required to continuously rotate a needle of unit length and zero thickness around completely (i.e. by $360^\circ$)? For instance, one can rotate a unit needle inside a disk of diameter 1, which has area $\pi/4$. By using a deltoid one requires only $\pi/8$ area. In 1928, Besicovitch showed that in fact one could rotate a unit needle using an arbitrarily small amount of positive area. This unintuitive fact was a corollary of two observations. The first, which is easy, is that one can translate a needle using arbitrarily small area, by sliding the needle along the direction it points in for a long distance (which costs zero area), turning it slightly (costing a small amount of area), sliding back, and then undoing the turn. The second fact, which is less obvious, can be phrased as follows. Define a Kakeya set in ${\Bbb R}^2$ to be any set which contains a unit line segment in each direction. (See this Java applet of mine, or the Wikipedia page, for some pictures of such sets.) Theorem. (Besicovitch, 1919) There exist Kakeya sets in ${\Bbb R}^2$ of arbitrarily small area (or more precisely, Lebesgue measure). In fact, one can construct such sets with zero Lebesgue measure. On the other hand, it was shown by Davies that even though these sets had zero area, they were still necessarily two-dimensional (in the sense of either Hausdorff dimension or Minkowski dimension). This led to an analogous conjecture in higher dimensions: Kakeya conjecture. A Besicovitch set in ${\Bbb R}^n$ (i.e. a subset of ${\Bbb R}^n$ that contains a unit line segment in every direction) has Minkowski and Hausdorff dimension equal to n. This conjecture remains open in dimensions three and higher (and gets more difficult as the dimension increases), although many partial results are known. For instance, when n=3, it is known that Besicovitch sets have Hausdorff dimension at least 5/2 and (upper) Minkowski dimension at least $5/2 + 10^{-10}$. See my Notices article for a general survey of this problem (and its connections with Fourier analysis, additive combinatorics, and PDE), my paper with Katz for a more technical survey, and Wolff’s survey for a systematic treatment of the field (up to about 1998 or so). In 1999, Wolff proposed a simpler finite field analogue of the Kakeya conjecture as a model problem that avoided all the technical issues involving Minkowski and Hausdorff dimension. If $F^n$ is a vector space over a finite field F, define a Kakeya set to be a subset of $F^n$ which contains a line in every direction. Finite field Kakeya conjecture. Let $E \subset F^n$ be a Kakeya set. Then E has cardinality at least $c_n |F|^n$, where $c_n > 0$ depends only on n. This conjecture has had a significant influence in the subject, in particular inspiring work on the sum-product phenomenon in finite fields, which has since proven to have many applications in number theory and computer science. Modulo minor technicalities, the progress on the finite field Kakeya conjecture was, until very recently, essentially the same as that of the original “Euclidean” Kakeya conjecture.
Last week, the finite field Kakeya conjecture was proven using a beautifully simple argument by Zeev Dvir, using the polynomial method in algebraic extremal combinatorics. The proof is so short that I can present it in full here. Read the rest of this entry »
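To make the statement of the finite field Kakeya conjecture concrete, here is a small Python illustration (mine, not part of the post) for $n=2$: it builds a set in $F_p^2$ containing a line in every direction by choosing, for each slope $m$, the line with intercept $-m^2/4$ (a classical choice that forces the lines to overlap heavily), plus one vertical line, and compares the resulting cardinality with $|F|^2$. The observed size is roughly $|F|^2/2$, consistent with the conclusion that Kakeya sets in $F^n$ have cardinality at least $c_n|F|^n$.

```python
# Build a Kakeya set in F_p^2 as a union of lines, one in each of the p+1 directions,
# and compare its size with p^2.
p = 101                                    # an odd prime
inv4 = pow(4, -1, p)                       # inverse of 4 mod p (Python 3.8+)

K = set()
for m in range(p):                         # one line in each non-vertical direction (1, m)
    c = (-m * m * inv4) % p
    K |= {(x, (m * x + c) % p) for x in range(p)}
K |= {(0, y) for y in range(p)}            # and one line in the vertical direction

print(len(K), p * p, len(K) / p**2)        # size is roughly p^2 / 2
```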
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 50, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9048609137535095, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64370?sort=newest
## Simplest examples of rings that are not isomorphic to their opposites ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What are the simplest examples of rings that are not isomorphic to their opposite rings? Is there a science to constructing them? ## The only simple example known to me: In Jacobson's Basic Algebra (vol. 1), Section 2.8, there is an exercise that goes as follows: Let `$u=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix}\in M_3(\mathbf Q)$` and let `$x=\begin{pmatrix} u & 0 \\ 0 & u^2 \end{pmatrix}$`, `$y=\begin{pmatrix}0&1\\0&0\end{pmatrix}$`, where $u$ is as indicated and $0$ and $1$ are zero and unit matrices in $M_3(\mathbf Q)$. Hence `$x,y\in M_6(\mathbf Q)$`. Jacobson gives hints to prove that the subring of $M_6(\mathbf Q)$ generated by $x$ and $y$ is not isomorphic to its opposite. ## Examples seem to be well-known to the operator algebras crowd: See for example the paper: "A Simple Separable C*-Algebra Not Isomorphic to Its Opposite Algebra" by N. Christopher Phillips, Proceedings of the American Mathematical Society Vol. 132, No. 10 (Oct., 2004), pp. 2997-3005. - 2 For the same operator algebra people: see also Alain Connes, A factor not anti-isomorphic to itself Ann. Math. (2) 101, 1975, 536-554. – Alain Valette May 9 2011 at 11:23 1 Out of curiosity: why do operator people publish papers about these examples? Are they much rarer there? Us mere algebrist don't think much of this, I think (all my algebra undergrad students have had to check that some example or another works) – Mariano Suárez-Alvarez May 9 2011 at 15:21 4 @Mariano: It is extraordinarily hard to see whether von Neumann algebras are isomorphic. Results like these are usually hard-won and are accompanied by deep new understanding of the structure of the algebras. (Notice that the above paper of Connes is an Annals paper.) – Jon Bannon May 9 2011 at 15:34 Thanks everyone for the nice answers. I would like to have accepted more than one. – Amritanshu Prasad May 10 2011 at 10:36 @Mariano: My understanding is that there are no analogs of the kinds of examples that the operator algebra people have which mere algebraists could fathom: they are like having semisimple $\mathbf C$-algebras which are not isomorphic to their opposites! – Amritanshu Prasad May 17 2011 at 10:07 ## 8 Answers Here's a factory for making examples. If $\Gamma$ is a quiver, and $k$ a field, then we get a quiver algebra $k\Gamma$. If $\Gamma$ has no oriented cycles, we can recover $\Gamma$ from $k\Gamma$ by taking the Ext-construction. Also, the opposite algebra of a quiver algebra is obtained by reversing all the arrows in the quiver. Hence you can produce an example by taking the quiver algebra of any quiver with no oriented cycles, which is not isomorphic to its reverse. It's easy to construct lots of quivers with these properties. - 4 It's clear if you believe that theorem, that you can recover the original quiver! – James Cranch May 9 2011 at 14:16 1 Incidentally, the smallest such quiver is the V-shaped quiver with two arrows emanating from one point. This gives a five-dimensional algebra: that's quite neat and tidy! – James Cranch May 9 2011 at 15:09 2 First, one finds the vertices of the quiver $\Gamma$, by finding the unique maximal set of orthogonal idempotents $\{e_i\}$; these will correspond to vertices. Then, the number of arrows from $i$ to $j$ will be $Ext^1(\Gamma e_j,\Gamma e_i)$. 
– Greg Muller May 10 2011 at 15:49 1 Alternatively: since the algebra is finite dimensional, there are a finite number of isomorphism classes of simple modules. Use them as vertices of a graph, and draw $\dim\operatorname{Ext}^1(S,T)$ arrows from the class of $S$ to the class of $T$. – Mariano Suárez-Alvarez May 10 2011 at 16:00 1 Whoops, sorry, I confused two different ways of getting the arrows. The $Ext^1$ should be between $k e_j$ and $ke_i$, the simple modules supported at the appropriate vertices. You can also get the arrows as degree 1 graded $Hom$ between projectives $\Gamma e_i$ and $\Gamma e_j$, but the $Ext$ between them certainly vanishes. – Greg Muller May 10 2011 at 16:05 Hi Amri, This is a bit late, but it's my favorite class of examples. If $X$ is a smooth affine variety over $\mathbb{C}$ (say), and $\mathcal{D} = \mathcal{D}(X)$ is its algebra of differential operators, then the opposite algebra $\mathcal{D}^{op}$ is isomorphic to $\mathcal{D}(K) = K\otimes \mathcal{D}\otimes K^{-1}$, where $K$ denotes the canonical module of $X$. [This is also true when $X$ is Gorenstein but not necessarily smooth---see work of Yekutieli.] So one gets answers to your question when $X$ doesn't have trivial canonical bundle. [And of course the story sheafifies for any smooth variety.] EDIT: I was writing carelessly the first time (thanks to Amri's comment for highlighting this). Note that $\mathcal{D}(K)$ acts on $K$ on the left. Since a left $\mathcal{D}$-module structure on a vector bundle (finitely generated projective module) is the same as a flat connection, one has $\mathcal{D}\cong \mathcal{D}(K)$ if and only if $K$ admits a flat connection. The first Chern class of $K$ is an obstruction to the existence of a flat connection. So just pick your favorite such affine variety (see also this MO question for discussion of that). A pretty complete discussion of the (non)triviality of rings of twisted differential operators (TDOs) can be found in Beilinson-Bernstein "A proof of Jantzen conjectures." This story also illuminates a little bit why differential operators on half-densities, i.e. $\mathcal{D}(K^{1/2}) = K^{1/2}\otimes \mathcal{D}\otimes K^{-1/2}$, play a special role in the study of rings of differential operators and (twisted) $\mathcal{D}$-modules (it's canonically isomorphic to its opposite algebra). - Is it always the case that $\mathcal D(K)$ is not isomorphic to $\mathcal D(X)$ if $K$ is not trivial? Why? – Amritanshu Prasad May 11 2011 at 6:11 Thanks Tom! I miss our discussions in the corridor where I learned so many things. – Amritanshu Prasad May 12 2011 at 4:04 A general idea to construct rings which behave differently on the left and on the right is the following, which is already contained in Martin's answer: One considers triangular rings ```$$ A=\begin{pmatrix} R & M \\ 0 & S \end{pmatrix} $$``` where $R$ and $S$ are rings and $M$ is an $R$-$S$-bimodule. The left and right ideals of such a ring can be described: for example, the left ideals are isomorphic to $U\oplus J$, where $J$ is a left ideal of $S$, and $U$ an $R$-submodule of $R\oplus M$ with $MJ \subseteq U$. (See Lam's book A First Course in Noncommutative Rings, §1) Suitable choices of $R$, $M$ and $S$ lead to examples with quite different left and right structure.
For example, the finite ring ```$$\begin{pmatrix} \mathbb{Z}/4\mathbb{Z} & \mathbb{Z}/2\mathbb{Z} \\ 0 & \mathbb{Z}/2\mathbb{Z} \end{pmatrix}$$``` has 11 left ideals and 12 right ideals, if my counting is right. (This may be the smallest example of a unital ring not isomorphic to its opposite ring, but I'm not sure here.) Of course, there are lots of examples, since there are many ring theoretic notions which are known to be not left-right symmetric. T. Y. Lam, in his two books (First Course mentioned above and Lectures on Modules and Rings), usually contructs at least one example of a ring being left blah but not right blah, whenever blah is a property which is not left-right symmetric. (Lam's books are generally worth reading, in particular when looking for examples!) - 1 I don't know whether or not there are others of order 16 (well, aside from its opposite), but there are definitely none of order smaller than 16. – Harry Altman May 9 2011 at 22:03 Thanks for pointing out Lam's book. I looked at it and indeed it is worth reading. Jacobson's example looks suspiciously similar to a triangular ring construction. – Amritanshu Prasad May 10 2011 at 6:15 A particularly simple example of an algebra not isomorphic to its (graded) opposite is the $\mathbb{R}$-algebra $\mathbb{C}$, where $1$ is even and $i$ is odd. This is the ($\mathbb{Z}/2$-graded) real Clifford algebra $Cl(-1) = \langle f \mid f^2 = -1 \rangle$. Its opposite is the Clifford algebra $Cl(1) = \langle e \mid e^2 = 1 \rangle$, whose underlying ungraded algebra is isomorphic to $\mathbb{R} \oplus \mathbb{R}$. Per the discussion in the other answers, these two algebras represent $1$ and $-1 = 7$ in the graded Brauer group $\mathbb{Z}/8$ of $\mathbb{R}$. - Here is an explicit example of a central simple algebra over $\mathbb{Q}$ not isomorphic to its opposite (which is merely a detailed example of what Pete explained). First take a cubic cyclic Galois extension $L/\mathbb{Q}$, for instance $L = \mathbb{Q}[x] / (x^3 + x^2 − 2x − 1)$, and let $\rho$ be a non-trivial element of $\operatorname{Gal}(L/\mathbb{Q})$. Now take an arbitrary element $\gamma \in \mathbb{Q}$ which is not the norm of an element in $L$. Define $$D = L \oplus zL \oplus z^2L,$$ where $z$ is a new "symbol" subject to the relations $z^3 = \gamma$ and $zt = t^\rho z$ for all $t \in L$. Then $D$ is a central simple division algebra of degree $3$ (i.e. of dimension $9$), and since its image in $\operatorname{Br}(\mathbb{Q})$ has order $3$, it is not isomorphic to its opposite. As you can imagine, this procedure works for any field admitting a cyclic extension (of degree $>2$) for which the norm is non-surjective. - Here is an easy example. Consider the abelian group $M = \mathbb{Z} \times \mathbb{Q}$. I claim that $R:=\text{End}(M)$ does not have any anti-endomorphism at all. EDIT: My previous proof is flawed. Thanks to Leon Lampret who pointed this out to me. The new proof shows that $R$ has several anti-endomorphisms, but no one is invertible. Thus $R$ is not isomorphic to $R^{\mathrm{op}}$. Identify $R$ with the matrix ring $\begin{pmatrix} \mathbb{Z} & 0 \\ \mathbb{Q} & \mathbb{Q} \end{pmatrix}$. The endomorphism ring of the underlying abelian group $\mathbb{Z} \times \mathbb{Q} \times \mathbb{Q}$ of $R$ can be identified with the matrix ring $\begin{pmatrix} \mathbb{Z} & 0 & 0 \\ \mathbb{Q} & \mathbb{Q} & \mathbb{Q} \\ \mathbb{Q} & \mathbb{Q} & \mathbb{Q} \end{pmatrix}$. 
Assume an anti-endomorphism $\alpha$ of $R$ is given by such a matrix $\begin{pmatrix}a & 0 & 0 \\ b & c & d \\ e & f & g \end{pmatrix}$. Then $\alpha(1)=1$ yields $a=1, b+d=0, e+g=1$. The determinant is $cg-df$. For all six-tuples $(u,v,w,p,q,r)$ (with $u,p$ integer) we have $\alpha\left(\begin{pmatrix} u & 0 \\ v & w \end{pmatrix} \begin{pmatrix} p & 0 \\ q & r \end{pmatrix}\right) = \alpha \begin{pmatrix} p & 0 \\ q & r \end{pmatrix} \alpha\begin{pmatrix} u & 0 \\ v & w \end{pmatrix}$ which yields the three equations 1) $a^2 pu = pu$ 2) $ap(bu + cv + dw) + (bp + cq + dr)(eu + fv + gw) = bpu + c(qu + rv) + drw$ 3) $(ep + fq + gr)(eu + fv + gw) = epu + f(qu + rv) + grw$ If we plug in the three equations we already know from $\alpha(1)=1$, this simplifies of course. Now insert some tuples to get the following equations: $(0,1,0,0,1,0) \leadsto f^2 = 0 \Rightarrow f = 0$ $(0,1,0,1,0,0) \leadsto c = 0$ This already shows that the determinant of $\alpha$ is zero, thus $\alpha$ cannot be bijective. But we can go even further: $(1,0,0,1,0,0) \leadsto be=0 \wedge e^2=e \Rightarrow e \in \{0,1\}$ For $e = 0$ we get $\alpha=\begin{pmatrix}1 & 0 & 0 \\ b & 0 & -b \\ 0 & 0 & 0 \end{pmatrix}$ and for $e=1$ we get $\alpha=\begin{pmatrix}1 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}$. Here $b \in \mathbb{Q}$ may be chosen arbitrary. These are all anti-endomorphisms of $R$. There is a more advanced proof that $R$ is not isomorphic to $R^{\mathrm{op}}$: Observe that $R$ is right noetherian, but not left noetherian. - 4 Nice answer, Martin! Here's a link to supplement your last line: planetmath.org/encyclopedia/… – Jon Bannon May 9 2011 at 14:46 Thanks Martin. A very striking example from the triangular ring family. – Amritanshu Prasad May 10 2011 at 6:44 To amplify on Bugs Bunny's answer: let $D$ be a finite dimensional central division algebra over a field $K$. Then $D \otimes_K D^{\operatorname{op}} \cong \operatorname{End}_K(D)$. From this it follows that in the Brauer group of $K$, the class of $D^{\operatorname{op}}$ is the inverse of the class of $D$. So a central division algebra over a field is isomorphic to its opposite algebra iff it has order $2$ in the Brauer group, or, in the lingo of that field, period $2$. So you can get examples by taking any field $K$ with $\operatorname{Br}(K) \neq \operatorname{Br}(K)[2]$. In particular the Brauer group of any non-Archimedean locally compact field is $\mathbb{Q}/\mathbb{Z}$ and the Brauer group of any global field is close to being the direct sum of the Brauer groups of its completions (there is one relation, the so-called reciprocity law, which says that a certain "sum of invariants" map is zero). So for instance a division algebra of dimension $9$ over its center will do and these things can be constructed over the above fields. If you don't want to restrict to such nice algebras, it becomes much easier to construct examples. (Thanks to L. Moret-Bailly and Todd Trimble for fixing this!) Let $S$ be a set with at least two elements, and consider the left semigroup on $S$: for all $x,y \in S$, $x \cdot y = x$. For any field $K$, let $K[S]$ be the semigroup algebra, and let $A = K[S]^1 = K \oplus K[S]$ be the associated unital algebra. Then, if I am not still mistaken, it should be clear that $A$ is not isomorphic to its opposite algebra. (Since I know nothing about operator algebras, my opinion is of course going to be that such examples are much more complicated than the ones given above...) 
- 1 Unless I miss something, this $L(V)$ is not a ring: we have $x.(y+z)=x$ while $(x.y)+(x.z)=x+x$. – Laurent Moret-Bailly May 9 2011 at 12:17 True, but this can be repaired by passing to a monoid algebra for a monoid $M$ with a similar definition for multiplication (and making due allowance for the identity). – Todd Trimble May 9 2011 at 12:32 Whoops! Good grief, thanks for catching this. I tried to repair it along the lines of Todd's comment, but let me know if it still seems fishy. (I'm off to delete something from my notes...) – Pete L. Clark May 9 2011 at 12:43 Your example is not simple, i.e., it is not a simple algebra! If you want a simple algebra, you need a field whose Brauer group has elements of order more than 2 (the opposite algebra = inversion in Brauer group). If I remember correctly, the p-adic field will do the trick... - This is so much simpler! – Amritanshu Prasad May 10 2011 at 6:46
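Going back to the $16$-element ring $\begin{pmatrix} \mathbb{Z}/4\mathbb{Z} & \mathbb{Z}/2\mathbb{Z} \\ 0 & \mathbb{Z}/2\mathbb{Z} \end{pmatrix}$ from the triangular-ring answer above: since it is finite, the claim about its left and right ideals can simply be checked by brute force. The Python sketch below is an editorial check, not from the thread; it encodes elements as triples $(a,b,c)$ standing for $\begin{pmatrix} a & b \\ 0 & c\end{pmatrix}$, counts the subsets closed under addition and under multiplication by the ring on the appropriate side, and prints the two counts so they can be compared with the 11 and 12 quoted above (the computation takes a few seconds).

```python
# Brute-force count of the left and right ideals of the 16-element triangular ring.
elems = [(a, b, c) for a in range(4) for b in range(2) for c in range(2)]
idx = {e: i for i, e in enumerate(elems)}
n = len(elems)                                           # 16

def add(x, y):
    return ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2, (x[2] + y[2]) % 2)

def mul(x, y):                                           # [[a,b],[0,c]] * [[a',b'],[0,c']]
    return ((x[0] * y[0]) % 4, (x[0] * y[1] + x[1] * y[2]) % 2, (x[2] * y[2]) % 2)

ADD = [[idx[add(x, y)] for y in elems] for x in elems]
MUL = [[idx[mul(x, y)] for y in elems] for x in elems]

def is_ideal(mask, side):
    if not mask >> idx[(0, 0, 0)] & 1:                   # must contain 0
        return False
    members = [i for i in range(n) if mask >> i & 1]
    for i in members:
        for j in members:                                # closed under addition
            if not mask >> ADD[i][j] & 1:
                return False
        for r in range(n):                               # closed under R-multiplication
            k = MUL[r][i] if side == 'left' else MUL[i][r]
            if not mask >> k & 1:
                return False
    return True

left = sum(is_ideal(m, 'left') for m in range(1 << n))
right = sum(is_ideal(m, 'right') for m in range(1 << n))
print(left, right)                                       # compare with the 11 and 12 above
```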
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 154, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936964213848114, "perplexity_flag": "head"}
http://en.m.wikipedia.org/wiki/Weyl's_inequality
# Weyl's inequality

In mathematics, there are at least two results known as "Weyl's inequality".

## Weyl's inequality in number theory

In number theory, Weyl's inequality, named for Hermann Weyl, states that if M, N, a and q are integers, with a and q coprime, q > 0, and f is a real polynomial of degree k whose leading coefficient c satisfies $|c-a/q|\le tq^{-2}$ for some t greater than or equal to 1, then for any positive real number $\varepsilon$ one has $\sum_{x=M}^{M+N}\exp(2\pi if(x))=O\left(N^{1+\varepsilon}\left({t\over q}+{1\over N}+{t\over N^{k-1}}+{q\over N^k}\right)^{2^{1-k}}\right)\text{ as }N\to\infty.$ This inequality is only useful when $q < N^k$, for otherwise estimating the modulus of the exponential sum by means of the triangle inequality as $\le N$ provides a better bound.

## Weyl's inequality in matrix theory

In linear algebra, Weyl's inequality is a theorem about how the eigenvalues of a Hermitian matrix change when the matrix is perturbed. It is useful if we wish to know the eigenvalues of a Hermitian matrix H but there is uncertainty about the entries of H. We let H be the exact matrix and P be a perturbation matrix that represents the uncertainty; the matrix we 'measure' is $M = H + P$. The theorem says that if M, H and P are all n by n Hermitian matrices, where M has eigenvalues $\mu_1 \ge \cdots \ge \mu_n$, H has eigenvalues $\nu_1 \ge \cdots \ge \nu_n$ and P has eigenvalues $\rho_1 \ge \cdots \ge \rho_n$, then the following inequalities hold for $i = 1,\dots,n$: $\nu_i + \rho_n \le \mu_i \le \nu_i + \rho_1.$ If P is positive definite (i.e. $\rho_n > 0$) then this implies $\mu_i > \nu_i$ for all $i = 1,\dots,n$. Note that we can order the eigenvalues because the matrices are Hermitian, and therefore the eigenvalues are real.

## References

• Joel N. Franklin, Matrix Theory, Dover Publications, 1993. ISBN 0-486-41179-6
• H. Weyl, "Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen", Math. Ann. 71 (1912), 441–479
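As a quick sanity check of the matrix inequalities above, here is a small NumPy sketch (not part of the article) that draws random Hermitian H and P and verifies $\nu_i + \rho_n \le \mu_i \le \nu_i + \rho_1$ for $M = H + P$; the matrix size and the random seed are arbitrary choices.

```python
# Draw random Hermitian H and P, set M = H + P, and check
# nu_i + rho_n <= mu_i <= nu_i + rho_1 for all i.
import numpy as np

rng = np.random.default_rng(1)
n = 6

def random_hermitian(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

H = random_hermitian(n)
P = random_hermitian(n)
M = H + P

# eigvalsh returns ascending eigenvalues; reverse to match mu_1 >= ... >= mu_n
mu, nu, rho = (np.linalg.eigvalsh(X)[::-1] for X in (M, H, P))

assert np.all(nu + rho[-1] <= mu + 1e-10) and np.all(mu <= nu + rho[0] + 1e-10)
print("Weyl's inequalities hold for this random example.")
```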
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8331959843635559, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Magnetostatics
# Magnetostatics Electromagnetism Magnetostatics Scientists Magnetostatics is the study of magnetic fields in systems where the currents are steady (not changing with time). It is the magnetic analogue of electrostatics, where the charges are stationary. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on time scales of nanoseconds or less.[1] Magnetostatics is even a good approximation when the currents are not static — as long as the currents do not alternate rapidly. Magnetostatics is widely used in applications of micromagnetics such as models of magnetic recording devices. ## Applications ### Magnetostatics as a special case of Maxwell's equations Starting from Maxwell's equations and assuming that charges are either fixed or move as a steady current $\scriptstyle\vec{J}$, the equations separate into two equations for the electric field (see electrostatics) and two for the magnetic field.[2] The fields are independent of time and each other. The magnetostatic equations, in both differential and integral forms, are shown in the table below. Name Partial differential form Integral form Gauss's law for magnetism: $\vec{\nabla} \cdot \vec{B} = 0$ $\oint_S \vec{B} \cdot \mathrm{d}\vec{S} = 0$ Ampère's law: $\vec{\nabla} \times \vec{H} = \vec{J}$ $\oint_C \vec{H} \cdot \mathrm{d}\vec{l} = I_{\mathrm{enc}}$ The first integral is over a surface $S$ with oriented surface element $\scriptstyle d\vec{S}$. The second is a line integral around a closed loop $C$ with line element $\scriptstyle\vec{l}$. The current going through the loop is $\scriptstyle I_\text{enc}$. The quality of this approximation may be guessed by comparing the above equations with the full version of Maxwell's equations and considering the importance of the terms that have been removed. Of particular significance is the comparison of the $\scriptstyle \vec{J}$ term against the $\scriptstyle \partial \vec{D} / \partial t$ term. If the $\scriptstyle \vec{J}$ term is substantially larger, then the smaller term may be ignored without significant loss of accuracy. ### Re-introducing Faraday's law A common technique is to solve a series of magnetostatic problems at incremental time steps and then use these solutions to approximate the term $\scriptstyle \partial \vec{B} / \partial t$. Plugging this result into Faraday's Law finds a value for $\scriptstyle \vec{E}$ (which had previously been ignored). This method is not a true solution of Maxwell's equations but can provide a good approximation for slowly changing fields.[citation needed] ## Solving for the magnetic field ### Current sources If all currents in a system are known (i.e., if a complete description of $\scriptstyle \vec{J}$ is available) then the magnetic field can be determined from the currents by the Biot-Savart equation: $\vec{B}= \frac{\mu_{0}}{4\pi}I \int{\frac{\mathrm{d}\vec{l} \times \hat{r}}{r^2}}$ This technique works well for problems where the medium is a vacuum or air or some similar material with a relative permeability of 1. This includes Air core inductors and Air core transformers. One advantage of this technique is that a complex coil geometry can be integrated in sections, or for a very difficult geometry numerical integration may be used. Since this equation is primarily used to solve linear problems, the complete answer will be a sum of the integral of each component section. 
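As an illustration of applying the Biot-Savart equation directly (a hedged numerical sketch of mine, not part of the article), the following Python code integrates the field of a circular current loop segment by segment at the loop's centre and compares the result with the standard closed form $B = \mu_0 I/(2R)$; the current, radius and number of segments are arbitrary choices.

```python
# Biot-Savart integration for a circular loop of radius R carrying current I,
# evaluated at the centre of the loop and compared with B = mu0*I/(2R).
import numpy as np

mu0 = 4e-7 * np.pi                 # vacuum permeability, T*m/A
I, R, N = 2.0, 0.05, 2000          # current (A), radius (m), number of segments

phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
pts = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(N)], axis=1)
dl = np.roll(pts, -1, axis=0) - pts                  # segment vectors d_l
mid = (pts + np.roll(pts, -1, axis=0)) / 2           # segment midpoints
r = -mid                                             # from each segment to the origin

B = mu0 * I / (4 * np.pi) * np.sum(
    np.cross(dl, r) / np.linalg.norm(r, axis=1, keepdims=True) ** 3, axis=0)

print(B[2], mu0 * I / (2 * R))                       # z-components agree closely
```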
For problems where the dominant magnetic material is a highly permeable magnetic core with relatively small air gaps, a magnetic circuit approach is useful. When the air gaps are large in comparison to the magnetic circuit length, fringing becomes significant and usually requires a finite element calculation. The finite element calculation uses a modified form of the magnetostatic equations above in order to calculate magnetic potential. The value of $\scriptstyle \vec{B}$ can be found from the magnetic potential. The magnetic field can be derived from the vector potential. Since the divergence of the magnetic flux density is always zero, $\vec{B} = \nabla \times \vec{A},$ and the relation of the vector potential to current is: $\vec{A} = \frac{\mu_{0}}{4\pi} \int{ \frac{\vec{J} } {r} dV}$ where $\scriptstyle \vec{J}$ is the current density. ### Magnetization Further information: Demagnetizing field and Micromagnetics Strongly magnetic materials (i.e., Ferromagnetic, Ferrimagnetic or Paramagnetic) have a magnetization that is primarily due to electron spins. In such materials the magnetization must be explicitly included using the relation $\vec{B} = \mu_0(\vec{M}+\vec{H}).$ Except in metals, electric currents can be ignored. Then Ampère's law is simply $\nabla\times\vec{H} = 0.$ This has the general solution $\vec{H} = -\nabla U,$ where $U$ is a scalar potential. Substituting this in Gauss's law gives $\nabla^2 U = \nabla\cdot\vec{M}.$ Thus, the divergence of the magnetization, $\scriptstyle \nabla\cdot\vec{M},$ has a role analogous to the electric charge in electrostatics [3] and is often referred to as an effective charge density $\rho_M$. The vector potential method can also be employed with an effective current density $\vec{J_M} = \nabla \times \vec{M}.$ ## References • • • Hiebert, W; Ballentine, G; Freeman, M (2002). "Comparison of experimental and numerical micromagnetic dynamics in coherent precessional switching and modal oscillations". 65 (14). p. 140404. doi:10.1103/PhysRevB.65.140404.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9048594832420349, "perplexity_flag": "head"}
http://cust-serv@ams.org/bookstore?fn=20&arg1=whatsnew&ikey=MEMO-222-1045
A Study of Singularities on Rational Curves Via Syzygies David Cox, Amherst College, MA, Andrew R. Kustin, University of South Carolina, Columbia, SC, Claudia Polini, University of Notre Dame, IN, and Bernd Ulrich, Purdue University, West Lafayette, IN Memoirs of the American Mathematical Society 2013; 116 pp; softcover Volume: 222 ISBN-10: 0-8218-8743-2 ISBN-13: 978-0-8218-8743-1 List Price: US\$72 Individual Members: US\$43.20 Institutional Members: US\$57.60 Order Code: MEMO/222/1045 Consider a rational projective curve $$\mathcal{C}$$ of degree $$d$$ over an algebraically closed field $$\pmb k$$. There are $$n$$ homogeneous forms $$g_{1},\dots ,g_{n}$$ of degree $$d$$ in $$B=\pmb k[x,y]$$ which parameterize $$\mathcal{C}$$ in a birational, base point free, manner. The authors study the singularities of $$\mathcal{C}$$ by studying a Hilbert-Burch matrix $$\varphi$$ for the row vector $$[g_{1},\dots ,g_{n}]$$. In the "General Lemma" the authors use the generalized row ideals of $$\varphi$$ to identify the singular points on $$\mathcal{C}$$, their multiplicities, the number of branches at each singular point, and the multiplicity of each branch. Let $$p$$ be a singular point on the parameterized planar curve $$\mathcal{C}$$ which corresponds to a generalized zero of $$\varphi$$. In the "Triple Lemma" the authors give a matrix $$\varphi'$$ whose maximal minors parameterize the closure, in $$\mathbb{P}^{2}$$, of the blow-up at $$p$$ of $$\mathcal{C}$$ in a neighborhood of $$p$$. The authors apply the General Lemma to $$\varphi'$$ in order to learn about the singularities of $$\mathcal{C}$$ in the first neighborhood of $$p$$. If $$\mathcal{C}$$ has even degree $$d=2c$$ and the multiplicity of $$\mathcal{C}$$ at $$p$$ is equal to $$c$$, then they apply the Triple Lemma again to learn about the singularities of $$\mathcal{C}$$ in the second neighborhood of $$p$$. Consider rational plane curves $$\mathcal{C}$$ of even degree $$d=2c$$. The authors classify curves according to the configuration of multiplicity $$c$$ singularities on or infinitely near $$\mathcal{C}$$. There are $$7$$ possible configurations of such singularities. They classify the Hilbert-Burch matrix which corresponds to each configuration. The study of multiplicity $$c$$ singularities on, or infinitely near, a fixed rational plane curve $$\mathcal{C}$$ of degree $$2c$$ is equivalent to the study of the scheme of generalized zeros of the fixed balanced Hilbert-Burch matrix $$\varphi$$ for a parameterization of $$\mathcal{C}$$.
• Introduction, terminology, and preliminary results • The general lemma • The triple lemma • The BiProj Lemma • Singularities of multiplicity equal to degree divided by two • The space of true triples of forms of degree $$d$$: the base point free locus, the birational locus, and the generic Hilbert-Burch matrix • Decomposition of the space of true triples • The Jacobian matrix and the ramification locus • The conductor and the branches of a rational plane curve • Rational plane quartics: A stratification and the correspondence between the Hilbert-Burch matrices and the configuration of singularities • Bibliography
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 42, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8314357399940491, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/classical-mechanics+momentum
# Tagged Questions 2answers 72 views ### Conservation of Linear Momentum at the point of collision This is a pretty basic conceptual question about the conservation of linear momentum. Consider an isolated system of 2 fixed-mass particles of masses $m_1$ and $m_2$ moving toward each other with ... 4answers 198 views ### Bat hitting a ball When a bat hits a ball, consider two cases: 1) The batsman goes for a defense, and stonewalls it, to reduce its speed. 2) the batsman goes for a shot, e.g. a home-run, etc. in which case will the ... 1answer 83 views ### Spring coupled platforms & conservation of momentum - can it be solved with freshman physics? This question came up as an exercise in a first year undergraduate course I was a TA for. It turned out to be a lot more difficult (impossible?) than anticipated... Two platforms of mass $M_1$ and ... 1answer 101 views ### Non-relativistic Kepler orbits Consider the Newtonian gravitational potential at a distance of Sun: $$\varphi \left ( r \right )~=~-\frac{GM}{r}.$$ I write the classical Lagrangian in spherical coordinates for a planet with mass ... 3answers 202 views ### Explanation for classic mechanics puzzle I'm trying to figure out a nice way to describe to a kid the physics behind these experiments: Assuming ideal conditions, we have a small boat with a sale, close to a lake's shore and a fan fixed on ... 1answer 476 views ### Conservation of linear and angular momentum Suppose I have two rigid bodies A and B and they are connected by a spring which is attached off-center (thus possibly causing torques). Due to the spring a force $f$ acts on A and a force $-f$ acts ... 2answers 155 views ### Having Trouble With The Principle Of Conservation Of Momentum For a Multiparticle System I'am reading John Taylor's Classical Mechanics chapter 1 page 20 where he proves the principle of conservation of momentum which states "If the net external force $F^{ext}$ on an $N$-particle system ... 0answers 34 views ### Does the direction of a rocket relative to an orbiting mass reduce the orbiting mass' orbital velocity? Does a rocket taking off in the same direction of an orbiting mass (asteroid to planetary size) reduce the mass' orbital velocity versus the same rocket taking off in the opposite direction of the ... 3answers 492 views ### Why do we need the quantity momentum? Why do we need the quantity Momentum in physics when we have the quantities like Force and Energy? Isn't it possible to substitute the usage of Momentum with equivalent of Force and Energy? 1answer 162 views ### How to determine n equidistant vectors from point P in three dimensions As an assignment for uni I need to figure out an algorithm that explodes a particle of mass $m$, velocity $v$, into $n$ pieces. For the first part of the assignment, the particle has mass $m$, ... 2answers 88 views ### The time for which rear moving block remain in contact with spring in the following situation? [closed] I'm a physics tutor. I'm stuck up with this question. I've no clue about how to proceed with this question. Can any one help? A 2 Kg block moving with 10 m/s strikes a spring of constant π^2 N/m ... 1answer 591 views ### Converting angular velocity to linear velocity through friction A very basic question here; it's related to this one, but not quite the same. If a rotating rigid body (a sphere for the sake of discussion) with mass $m$, radius $r$ and inertial tensor $I$ has ... 
4answers 225 views ### Applications of recoil principle in classical physics Are there any interesting, important or (for the non physicist) astonishing examples where the recoil principle (as special case of conservation of linear momentum) is applied beside rockets and guns? ... 2answers 3k views ### Difference between momentum and kinetic energy From a mathematical point of view it seems to be clear what's the difference between momentum and $mv$ and kinetic energy $\frac{1}{2} m v^2$. Now my problem is the following: Suppose you want to ... 2answers 491 views ### What are the properties of two bodies for their collision to be elastic? For example, must the shock wave in each body be of a particular form which influences the shape and material properties of the bodies? I suspect part of the the answer is that the objects must be ... 9answers 845 views ### How to explain independence of momentum and energy conservation in elementary terms? I'm trying to explain to someone learning elementary physics (16 year old) that linear momentum and energy are conserved independently. I'm not a professional physicist and haven't tried to explain ... 4answers 956 views ### Examples where momentum is not equal to $mv$? I am aware that momentum is the thing which is conserved due to symmetries in space (rotational symmetry, translaitonal symmetry, etc). I am aware that in some systems, the generalized momentum, ... 4answers 6k views ### Is pushing actually easier than pulling? It is generally assumed that pushing a cart is more easier than pulling one. But why? Is there any difference in terms of force required to achieve the same amount of displacement? Or is it just a ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9187356233596802, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/55945/find-greatest-value-of-an-angle?answertab=active
# Find greatest value of an angle I am not sure how to go about the last bit of this problem. With respect to an origin $O$, the points $P$ and $Q$ have variable position vectors $p$ and $q$ respectively, given by, $$\begin{align} p &= (\cos t)i + (\sin t)j - k \\ q &= (\cos 2t)i - (\sin 2t)j + \frac12k \end{align}$$ where $t$ is a real parameter such that $0 \le t \le 2\pi$. Show that $p \cdot q = \cos 3t - \frac12$ and hence, or otherwise, find the greatest value of the $\angle{POQ}$. I have worked out the first part of this problem like below, $$\begin{align} p &= (\cos t, \sin t, - 1) \\ q &= (\cos 2t, -\sin 2t, \frac12) \end{align}$$ $$\begin{align} p \cdot q &= \cos t \cos 2t - \sin t \sin 2t - \frac12 \\ &= \cos (t + 2t) - \frac12 \\ &= \cos 3t - \frac12 \end{align}$$ For finding the angle, $$\begin{align} |p| &= \sqrt{\cos^2 t + \sin^2 t} = 1 \\ |q| &= \sqrt{\cos^2 2t + \sin^2 2t} = 1 \end{align}$$ $$\begin{align} \cos \angle{POQ} &= \dfrac{p \cdot q}{|p| |q|} \\ &= \cos 3t - \frac12 \end{align}$$ This is the part I am unsure about. The problem asks for the maximum value of the angle and not the cosine of the angle. I could have gotten max cosine like, Max $\cos 3t = 1$ Max $\cos POQ = 1 - \frac12 = 1/2$ Which would give $\frac\pi3$. But I don't think that's quite right, and doesn't match the given answer which is $2.82$ radians. How do I get the max of $\angle{POQ}$. Thanks for all your help. - ## 1 Answer The norms of $p$ and $q$ aren't right because you didn't include the last component in the sum of the squares of the components. Also, to maximize the angle, you need to minimize, not maximize the cosine. - A Doh! moment if there ever was one. It makes sense now. Thanks for your help. – mathguy80 Aug 6 '11 at 10:59
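A quick numerical check of the corrected calculation (an editorial addition, not part of the original exchange): with the last components included, $|p|=\sqrt2$ and $|q|=\sqrt5/2$, so $\cos \angle{POQ} = (\cos 3t - \frac12)/(\sqrt2\cdot\sqrt5/2)$, which is smallest when $\cos 3t = -1$; the maximum angle is therefore $\arccos(-3/\sqrt{10}) \approx 2.82$ radians, matching the given answer.

```python
# Sweep t over [0, 2*pi], compute the angle POQ from the dot product, and
# compare its maximum with the closed form arccos(-3/sqrt(10)).
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200001)
p = np.stack([np.cos(t), np.sin(t), -np.ones_like(t)])
q = np.stack([np.cos(2 * t), -np.sin(2 * t), 0.5 * np.ones_like(t)])

cos_angle = (p * q).sum(axis=0) / (np.linalg.norm(p, axis=0) * np.linalg.norm(q, axis=0))
print(np.arccos(cos_angle).max())                          # ~2.8198
print(np.arccos(-1.5 / (np.sqrt(2) * np.sqrt(5) / 2)))     # same value, ~2.82 rad
```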
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412087798118591, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/53602/what-is-movement-through-time/53603
# What is movement through time? [duplicate] This question already has an answer here: • What is the speed of time 1 answer In general, when I think of movement through space, I think of this: $$\frac{dx}{dt}$$ But in special relativity, we also have a concept of relative duration, which means that $t$ must have a rate of change, but with respect to what? $$\frac{dt}{d?}$$ - ## marked as duplicate by John Rennie, Manishearth♦Feb 11 at 8:13 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 1 Answer This is a near duplicate of What is the speed of time and Terry has given a comprehensive answer there. However there is one point that wasn't made in the previous question. In special and general relativity there is an invarient called proper time, $\tau$, which is is the time measured by a freely moving observer, and it is perfectly reasonable to ask what $dt/d\tau$ is provided you're clear what you mean by $t$. For a freely moving observer $dt/d\tau$ is always one, but this won't be so for other observers. For example if we watch someone falling into a black hole we will see their time slow as they approach the event horizon. So if by $t$ we mean our (Schwarzschild) co-ordinate time then $dt/d\tau$ is not unity. - Then what is the dimension through which we move when we get closer to tomorrow and farther from yesterday? – user912 Feb 11 at 7:49 @user912: questions about the experience of moving through time tend to verge on the philosophical and therefore have no answer. For physicists time is a co-ordinate just like $x$, $y$ and $z$. How/why we move through time is probably more to do with the way the brain works than with physics. Be cautious about the point made in my answer because co-ordinate time is a specific concept in physics that probably doesn't have much meaning for non-physicists or may even be misleading. – John Rennie Feb 11 at 8:06 Would I be right in saying that coordinate time is $\sqrt{t^2 - x^2 - y^2 - z^2}$? – user912 Feb 11 at 8:13 No, that's proper time, $\tau$, or more precisely $d\tau^2 = dt^2 - dx^2 - dy^2 - dz^2$ (in units where $c$ is 1). Co-ordinate time is the time measured in a particular set of co-ordinates, e.g. the Schwarzschild co-ordinates. Co-ordinate time is in general not equal to proper time, though just to confuse matters it can be if you use co-moving co-ordinates. – John Rennie Feb 11 at 8:17 @user912: if you're interested in the difference between co-ordinate time and proper time there is an interesting question to be asked there ... – John Rennie Feb 11 at 9:24
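A minimal numerical illustration of the flat-spacetime case discussed in the comments (my own addition): with $c=1$ and $d\tau^2 = dt^2 - dx^2 - dy^2 - dz^2$, an observer moving at constant speed $v$ has $dt/d\tau = 1/\sqrt{1-v^2}$, the usual Lorentz factor.

```python
# Proper time for a straight worldline segment at speed v (units with c = 1).
import numpy as np

v = 0.8                                   # speed as a fraction of c
dt = 1.0
dx, dy, dz = v * dt, 0.0, 0.0

dtau = np.sqrt(dt**2 - dx**2 - dy**2 - dz**2)
print(dt / dtau, 1.0 / np.sqrt(1.0 - v**2))   # both ~1.667: the Lorentz factor
```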
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9635246992111206, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/132834-fixed-point-theorem.html
# Thread: 1. ## fixed point theorem Prove that for every function $f: \omega_1 \rightarrow \omega_1$, there is a $\beta \in \omega_1$ such that $f[\beta] \subseteq \beta$. Hint: A fixed point theorem may be useful. We may use the Countable Principle of Choice, but not the Axiom of Choice. 2. Are you referring to the Knaster–Tarski fixed point theorem? Since it requires a monotonic function, maybe one can consider $g(\beta)=\bigcup_{\alpha\le\beta}f(\alpha)$. Then $g$ is monotonic, so it has a fixpoint $\beta_0$: $g(\beta_0)=\beta_0$, which implies $f(\beta_0)\subseteq\beta_0$. And the Countable Principle of Choice is used to show that $\omega_1$ is a complete lattice. I am not sure about this, but it may be a start...
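One way to make the hint concrete (an added sketch, not from the original posters; it uses only the fact that the supremum of a countable set of countable ordinals is again countable, which is where the Countable Principle of Choice enters): given $f:\omega_1 \rightarrow \omega_1$, set $\beta_0 = 1$ and $\beta_{k+1} = \sup\left(\{\beta_k\} \cup \{f(\alpha)+1 : \alpha < \beta_k\}\right)$. Each $\beta_k$ is countable, so $\{f(\alpha)+1 : \alpha < \beta_k\}$ is a countable set of countable ordinals and $\beta_{k+1} < \omega_1$. Let $\beta = \sup_k \beta_k$, again an element of $\omega_1$. If $\alpha < \beta$ then $\alpha < \beta_k$ for some $k$, hence $f(\alpha) < \beta_{k+1} \le \beta$; that is, $f[\beta] \subseteq \beta$.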
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92710280418396, "perplexity_flag": "head"}
http://mathoverflow.net/questions/91246/other-homology-theories-still-count-holes/91264
## Other Homology Theories still Count Holes? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This may be a naive question, but since first learning homology I considered it as a tool which counts appropriate holes in your space (on top of orientation and torsion phenomena). Then I was introduced to homology of groups and now Floer/Morse homologies. Do these homologies still count "holes" in some fashion? In the case of group homology, $H_\ast(G)\cong H_\ast(BG)$, so we can view this homology as a count of holes in the Milnor construction (CW-complex assembled from points in the discrete group with the group structure). In Floer homology we're counting holomorphic curves (flow-lines in Morse homology), but it isn't viewed as having these curves "wrap around holes", so I am not sure if this hole-detecting view of homology breaks down. []: I will narrow down my question. Are there instances where I can treat $HF_\ast$ as $H_\ast$ of a particular space? For instance, I just realized that with nice conditions we have $HF^\ast(L,L)=H^\ast(L)$ in Lagrangian-Floer homology, so here it counts the holes of the Lagrangian submanifold. Thanks to Steven Landsburg's response, we can usually find such a space (but ideally would be looking for something explicit, such as Floer homotopy type with $SH_\ast(T^\ast M)=H_\ast(\mathcal{L}M)$). - 3 Technically, singular homology does not quite count holes. $H_0 X$ is free abelian on the path-components of $X$, so there's one more copy of $\mathbb Z$ than the number of $0$-dimensional holes. Said another way ,if you treat a contractible space as "having no holes", then $H_0$ can't be measuring holes as it's not trivial. There's a calibration issue -- you need to take the associated reduced homology. That way, the homology theory is trivial on a contractible space. So sure, it measures holes, in that you can describe non-trivial homology classes as extension problems that... – Ryan Budney Mar 15 2012 at 4:22 have no solution. – Ryan Budney Mar 15 2012 at 4:23 5 This is a pretty vague question. – Andy Putman Mar 15 2012 at 5:03 @Andy, is this better? – Chris Gerig Mar 15 2012 at 7:21 ## 4 Answers If your homology theory is of the form $H_n(X) = H_n(S(X))$ where $S$ is some functor from your original category to non-negative chain complexes, then the Dold-Kan correspondence gives you a corresponding simplicial abelian group $\Gamma(S(X))$ and hence (by realization) a topological space in which you are "counting holes". - 3 If I understand what you're proposing correctly, you give a space $Y$ where $\pi_*(Y) = HF_*(X)$ (for instance). This is much easier than the question as asked, and gives less information. See front.math.ucdavis.edu/1202.1856 for a paper by Everitt, Lipshitz, Sarkar, and Turner comparing two such constructions for an example. – Dylan Thurston Apr 3 2012 at 9:54 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Symplectic homology of the cotangent bundle is the homology of loop space (see Viterbo's "Functors and computations in Floer homology" or Abbondandolo-Schwartz). Also, Cohen-Jones-Segal have a paper in the Floer memorial volume which outlines the construction (modulo analytical details of e.g. defining smooth structures on compactified moduli spaces) of a spectrum whose homology recovers a given Floer homology. 
See early work of Manolescu on the analogous problem in Seiberg-Witten-Floer homology or this paper of Lipyanskiy which extends the Viterbo-Abbondandolo-Schwartz result to the level of Floer bordism. EDIT: I would like to say a little more about this. Cohen-Jones-Segal prove that one can construct a manifold up to homeomorphism from Morse data alone (this shouldn't be too surprising when you remember that any compact manifold admitting a Morse function with two critical points has to be homeomorphic to a sphere). So although it's true (as Steven Landsberg says) that you can construct a space by geometric realisation of a Dold-Kan construction applied to the chain complex used to define homology, it's not clear to me that this will reconstruct the original space you started with (maybe it works up to homotopy?). The idea of Floer homotopy theory is therefore strictly deeper than just 'constructing a space whose homology gives you Floer homology'. It should really give new Floer theoretic invariants (e.g. the work of Barraud and Cornea on the 'quantum' Serre spectral sequence). - 1 regarding the lack of analytical details in the CJS paper, you might want to check out the work of Lizhen Qin. He is at Purdue now, and his thesis was precisely on this issue of smooth structures and compactifications. – Sean Tilson Mar 15 2012 at 21:09 Ah yes, Michael Hutchings mentioned this to me along with references but I have not checked it out yet. – Chris Gerig Mar 16 2012 at 2:10 Similar to the Floer homotopy type results Jonny cites, there are a few recent papers by Lipshitz-Sarkar on constructing a spectrum whose (singular) homology is Khovanov homology. Besides giving an alternate construction of these various homologies, and the intrinsically interesting question of what spaces/spectra might underlie (or at least be related to) the Floer-theoretic constructions, I think there was some hope that other topological invariants of the spectra (e.g. generalized homology theories) would give new interesting invariants attached to, say, the underlying 3-manifold (in the Seiberg-Witten case). But I don't know whether anything along these lines ever panned out. - In finite dimensional Floer homology, the connecting trajectory count encodes a more traditional count. Namely the unstable manifolds of the flow define, under certain assumptions, a cellular decomposition of the manifold. The associated cellular chain complex is isomorphic to the Floer complex. The isomorphism associates to each critical point, viewed as an element in the Floer complex, its unstable manifold, regarded as an element of the cellular complex. This interpretation is meaningless in infinite dimensions. -
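To make the last answer's point concrete with a standard textbook example (not taken from the thread): for the height function on $S^2$ there are exactly two critical points, a maximum of index $2$ and a minimum of index $0$. Since their index difference is $2$, the Morse differential (which counts flow lines between critical points of index difference $1$) vanishes, so the Morse/Floer-style complex gives $H_0\cong H_2\cong\mathbb{Z}$ and $H_1=0$ — exactly the singular homology of the sphere — with each critical point corresponding to a cell (its unstable manifold) of the CW decomposition.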
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9206388592720032, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/62144?sort=votes
## Are the Millennium Prize Problems all decidable? [closed] ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am an inexperienced logician, so I may be completely missing something major in this question. I also may be misconstruing the idea of decidability. However, I was wondering if all 6 of the remaining Millennium Prize Problems are decidable in the sense of Gödel. If any of the associated theories were not decidable, wouldn't that have far-reaching applications in the world of mathematics? Thanks in advance, and I hope that my question makes sense. - 4 Your question does seem a little bit off, because if any such problem was proved to be independent of some mainstream axiomatic system such as ZFC, you would have definitely heard about it. As for the question is it possible that one of them is, I seem to recall that this has been extensively discussed on MO already. – Thierry Zell Apr 18 2011 at 17:20 4 +1. I voted to re-open, since the question seems quite reasonable to me, and I can imagine that additional answers might be posted about whether any of the specific problems have such a nature that one can prove something about the possiblility that they are independent. (Which is to say nothing against David's excellent current answer.) – Joel David Hamkins Jul 29 2011 at 12:59 can someone provide the link to the duplicate question if there really is one? – vzn Oct 19 at 5:21 ## 2 Answers There are very few results which allow us to know that a mathematical claim will be provable or disprovable within ZFC without actually proving or disproving it. To the best of my knowledge, the only exceptions are theories which have quantifier elimination. Few1 open mathematical problems which people are interested in are of this sort, and none of the Millenium problems are. So any of the Millenium problems could be independent of ZFC (except for the Poincare conjecture, because it has been proved!) You might be particularly interested in Scott Aaronson's survey on whether or not it is likely that $P \neq NP$ is independent of ZFC. 1 Here is an example of a question which I know is decidable in ZFC, yet whether the answer is "yes" or "no" is open. Do there exist $44$ vectors $(u_i, v_i, w_i, x_i, y_i)$ in $\mathbb{R}^5$, each with length $1$, and with the dot product between each pair $\leq 1/2$? See Wikipedia for background. This is the a first order question about real numbers, so it is decidable by Tarski's theorem. The analogous result for four dimensional vectors was only obtained in 2003; if you can get the answer for $5$ dimensions, it should be publishable in a good journal. I think this about as interesting a question as one can find which is definitely settleable in ZFC, yet still open. Most questions mathematicians care about are not of this form (and, in my opinion, are much more interesting). - 1 I was about to write a comment on the OP to this effect, but this answer is much more comprehensive, so +1. – Harry Gindi Apr 18 2011 at 17:24 2 So the problem in $\mathbb{R}^5$ involves testing the emptiness of semi-algebraic set in 220 variables with 946 inequalities and 44 equations? Hmm... I can see why brute force will not help us here. :) – Thierry Zell Apr 18 2011 at 17:30 8 If you know that some particular mathematical claim is provable or disprovable in ZFC, then you can find the proof/disproof by exhaustive search. Therefore the only possible obstacle to actually finding the proof/disproof is computational infeasibility. 
Conversely, it is easy to cook up examples of decidable mathematical claims that are infeasible to decide. For instance, is the first decimal digit of Graham's number greater than 5? en.wikipedia.org/wiki/Graham%27s_number A more interesting example is whether there exists a projective plane of order 12. – Timothy Chow Apr 18 2011 at 20:25 5 Another example is the existence and uniqueness of the monster simple group. It's obviously decidable (check all the possible multiplication tables), but not in any feasible way, and the actual proofs involve deep ideas. – Henry Cohn Apr 19 2011 at 0:09 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Most of the Millennium Prize Problems are individual problems, with a single yes or no answer. Decidability as a question only really makes sense in the context of a countably infinite family questions, where you can ask whether it's decidable which of those questions should be answered yes. - 14 I imagine the question meant whether they are independent of ZFC. – Henry Cohn Apr 18 2011 at 16:25 4 I imagine that the questioner should have used the correct terminology then. – Harry Gindi Apr 18 2011 at 16:55 12 To be fair, he said "decidable in the sense of Gödel", and Gödel referred to "formally undecidable propositions" (or rather "formal unentscheidbare Sätze") in the very title of his paper. This terminology has definitely become less standard since then, but it is still relatively common in informal usage. – Henry Cohn Apr 18 2011 at 17:08 4 @Henry. Whether or not there exists a set whose cardinality is strictly between the cardinality of the natural numbers and the cardinality of the real numbers is also a yes-or-no question. – Felipe Voloch Apr 18 2011 at 18:50 8 @Henry and Felipe: I think the two of you are quibbling over the definition of "undecidable". As the comments above attest, there is a sense in which each of you is correct. – Pete L. Clark Apr 18 2011 at 19:16 show 2 more comments
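To make the footnote's decidable-but-open example above concrete (the $44$ unit vectors in $\mathbb{R}^5$ with pairwise dot products $\leq 1/2$): the small sketch below, in Python with NumPy and not part of the original page, only checks whether a *candidate* configuration satisfies the stated first-order conditions; the open problem is whether any such configuration exists at all, and Tarski's theorem guarantees that question has a definite, algorithmically findable answer.

```python
import numpy as np

def satisfies_conditions(V, tol=1e-9):
    """Check the footnote's conditions: each row of V is a unit vector in R^5
    and every pairwise dot product is <= 1/2."""
    V = np.asarray(V, dtype=float)
    unit = np.allclose(np.linalg.norm(V, axis=1), 1.0, atol=tol)
    G = V @ V.T                                   # Gram matrix of pairwise dot products
    off_diag = G[~np.eye(len(V), dtype=bool)]
    return unit and bool(np.all(off_diag <= 0.5 + tol))

# Hypothetical candidate: 44 random unit vectors (will almost surely fail the test).
rng = np.random.default_rng(0)
V = rng.normal(size=(44, 5))
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(satisfies_conditions(V))
```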
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9494589567184448, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/31624/cross-validation-and-prediction-for-unknown-data
# Cross validation and prediction for unknown data How do we build a model, cross validate it and use it to predict for unknown data? Say I have a known dataset of 100 points. Steps for 10 fold cross-validation are- 1. Divide the data randomly into training and test datasets in a ratio of 90:10 2. Make a model on the training dataset (90 points). (I used libSVM grid.py to optimize `C` and `gamma`) 3. Test the optimized model on the test dataset (10 points) and calculate the error. 4. Repeat steps (1,2,3) 10 times for 10-fold cross validation. Average the error from each repeat to get the average error. Now, after repeating the steps 10 times, I will have 10 different optimized models. To predict for an unknown dataset (200 points), should I use the model which gave me minimum error OR should I do step 2 once again on the full data (run grid.py on full data) and use it as model for prediction of unknowns? Also I would like to know, is the procedure same for other machine-learning methods (like ANN, Random Forest, etc.) - Your steps are almost correct. Initially you want to split the data into 10 disjoint sets. You do not repeat your first step. You simply go through steps 2 and 3 ten times with the $i^{th}$ step having a training set of all data less the $i^{th}$ set and a test set of the $i^{th}$ set. – Max Jul 4 '12 at 11:03 You've tagged this with `SVM` and `neural-networks`, but your question seems to be more general and not related to those methods in particular. If I'm wrong about that, please consider adding some text that explains what your question is for these methods. – MånsT Jul 4 '12 at 11:17 @Max - If I will not repeat first step I will train and test on same data 10 time!! – d.putto Jul 4 '12 at 12:03 @d.putto - That's not what I'm suggesting. You want to randomly split your data into 10 (approximately) equally sized sets. You only do that once. Now on the first iteration, your training set consists of sets 2 through 10, while your test set is set 1. On the second iteration, your training set consists of sets 1 and 3 through 10, while your test set is set 2. This continues until you've gone through 10 iterations (until you've used each individual set as a test set). – Max Jul 4 '12 at 12:08 @Max - I think I got your point.. So 3 fold cross validation, divide data in 3 sets (say X,Y,Z). Take X as test and remaining (Y+Z) as training set. For next iteration take Y as test and (X+Z) as training set and so on... – d.putto Jul 4 '12 at 12:16 show 1 more comment ## 2 Answers Now, after repeating the steps 10 times, I will have 10 different optimized models. yes. Cross validation (like other resampling based validation methods) implicitly assumes that these models are at least equivalent in their predictions, so you are allowed to average/pool all those test results. Usually there is a second, stronger assumption: that those 10 "surrogate models" are equvalent to the model built on all 100 cases: To predict for an unknown dataset (200 points), should I use the model which gave me minimum error OR should I do step 2 once again on the full data (run grid.py on full data) and use it as model for prediction of unknowns? Usually the latter is done (second assumption). However, personally I would not do a grid optimization on the whole data again (though one can argue about that) but instead use cost and γ parameters that turned out to be a good choice from the 10 optimizations you did already (see below). However, there are also so-called aggregated models (e.g. 
random forest aggregates decision trees), where all 10 models are used to obtain 10 predictions for each new sample, and then an aggregated prediction (e.g. majority vote for classification, average for regression) is used. Note that you validate those models by iterating the whole cross validation procedure with new random splits. Here's a link to a recent question about what such iterations are good for: Variance estimates in k-fold cross-validation Also I would like to know, is the procedure same for other machine-learning methods (like ANN, Random Forest, etc.) Yes, it can be applied very generally. As you optimize each of the surrogate models, I recommend looking a bit closer into those results: • are the optimal cost and γ parameters stable (= equal or similar for all models)? • The difference between the error reported by the grid optimization and the test error you observe for the 10% unknown data is also important: if the difference is large, the models are likely to be overfit - particularly if the optimization reports very small error rates. - @d.putto - That's not what I'm suggesting. You want to randomly split your data into 10 (approximately) equally sized sets. You only do that once. Now on the first iteration, your training set consists of sets 2 through 10, while your test set is set 1. On the second iteration, your training set consists of sets 1 and 3 through 10, while your test set is set 2. This continues until you've gone through 10 iterations (until you've used each individual set as a test set). – Max @Max - I think I got your point.. So 3 fold cross validation, divide data in 3 sets (say X,Y,Z). Take X as test and remaining (Y+Z) as training set. For next iteration take Y as test and (X+Z) as training set and so on... – d.putto I see a contradiction: according to Max, at each new iteration the previous test set is put back into the training set, so the size of the training set tends to increase (until all sets have been used as test sets), whereas according to d.putto the size of the training set stays constant. -
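As a concrete illustration of the workflow discussed above (split once into 10 disjoint folds, optimize the hyperparameters on each training portion, estimate the error on the held-out fold, then refit on all data for the unknown points), here is a minimal sketch in Python. It assumes scikit-learn's `SVC` as a stand-in for libSVM's grid.py; the data, labels and grid values are placeholders, not anything from the original question.

```python
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.svm import SVC

# Placeholder data standing in for the 100 known points (5 features, 2 classes)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)

param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1]}   # illustrative values only

# Split ONCE into 10 disjoint folds; each fold serves as the test set exactly once
outer = KFold(n_splits=10, shuffle=True, random_state=0)
errors, best_params = [], []
for train_idx, test_idx in outer.split(X):
    # optimize C and gamma on the 90 training points only
    grid = GridSearchCV(SVC(), param_grid, cv=5).fit(X[train_idx], y[train_idx])
    # test the optimized surrogate model on the 10 held-out points
    errors.append(1.0 - grid.score(X[test_idx], y[test_idx]))
    best_params.append(grid.best_params_)

print("cross-validated error estimate:", np.mean(errors))
print("are C and gamma stable across the folds?", best_params)

# For the 200 unknown points: refit on ALL 100 points, using parameters that
# turned out to be a good (stable) choice in the folds above
final_model = SVC(**best_params[0]).fit(X, y)
```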
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9353312253952026, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/angular-velocity+angular-momentum
Tagged Questions 0answers 44 views Closed-form equation for orientation and angular velocity over time If a rigid body, rotating freely in 3d, experiences no friction or other external forces and has an initially diagonal inertia matrix $\mathbf{I}_0$ (with $I_{11}>I_{22}>I_{33}>0$) and ... 1answer 100 views Appearing To Reverse Object's Rotation Can it be done, and if so, how does one you explain mathematically the ability to cause a rotating object to appear to change the direction of rotation? I believe it has something to do with angular ... 1answer 202 views what happens when I roll a gyroscope along its axis of spin Say: I have a gyroscope that is spinning in the xy plane along the z axis. I then roll its spinning axis by some angle theta Now I know the gyroscope will resist my attempting to change its axis ... 2answers 283 views motion in the body-fixed frame? This is really basic, I'm sure: For rigid body motion, Euler's equations refer to $L_i$ and $\omega_i$ as measured in the fixed-body frame. But that frame is just that: fixed in the body. So how ... 1answer 273 views Cases in which angular velocity and angular momentum point into same direction I know that angular momentum $\vec{L}$ and angular velocity $\vec{\omega}$ of a rigid body doesn't point into the same direction in general. However if your body spins around a principal axis, ... 3answers 1k views Angular momentum equations I do not understand this because angular momentum is $L=I\omega$ ($I$ is moment of inertia;$\omega$ is angular velocity) but it I have also seen equations where $L= rmv\sin(x)$. I do not understand ... 1answer 312 views Equation that tells me the rpm and mass of a spinning disk needed to keep a second large mass stable using gyroscopic effects I am trying to figure out how large of a mass and how quickly I need to spin said mass to keep a two-wheeled robot stable. Ideally, I am looking for a formula that relates m1=mass of robot, m2=mass of ... 2answers 769 views Dynamics of moment of inertia I'd like to be able to determine the angular acceleration of a system of two rotating masses, which are connected so as to have a variable mechanical advantage between the two. My background with ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394586086273193, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-applied-math/175314-question-curl-cross-product.html
Thread: 1. Question in Curl of a cross product. $\nabla X (\vec A X \vec B) \;=\; (\vec B \cdot \nabla)\vec A - \vec B(\nabla \cdot \vec A) -(\vec A \cdot \nabla)\vec B + \vec A ( \nabla \cdot \vec B)$. What is $\vec A \cdot \nabla$? 2. Have they told you what $\displaystyle \mathbf{A}$ and $\displaystyle \mathbf{B}$ are equal to? 3. Originally Posted by Prove It Have they told you what $\displaystyle \mathbf{A}$ and $\displaystyle \mathbf{B}$ are equal to? Nop, this is just a general formula of the product of two vectors. 4. Originally Posted by yungman $\nabla X (\vec A X \vec B) \;=\; (\vec B \cdot \nabla)\vec A - \vec B(\nabla \cdot \vec A) -(\vec A \cdot \nabla)\vec B + \vec A ( \nabla \cdot \vec B)$. What is $\vec A \cdot \nabla$? It's the vector $\vec{A}$ dotted with the gradient operator. The result of this dot product is a scalar operator. So, for example, the expression $\displaystyle(\vec{A}\cdot\nabla)\vec{B}=\left(\sum_{j=1}^{3}A_{j}\,\dfrac{\partial}{\partial x_{j}}\right)\vec{B}=\left\langle\sum_{j=1}^{3}A_{j}\,\dfrac{\partial B_{1}}{\partial x_{j}},\sum_{j=1}^{3}A_{j}\,\dfrac{\partial B_{2}}{\partial x_{j}},\sum_{j=1}^{3}A_{j}\,\dfrac{\partial B_{3}}{\partial x_{j}}\right\rangle.$ Does that make sense?
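Not part of the original thread, but the identity and the componentwise meaning of $\vec A \cdot \nabla$ can also be checked symbolically. Below is a small sketch using SymPy's vector module with two arbitrarily chosen polynomial fields; all names and the specific field choices are illustrative assumptions.

```python
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')

# Two concrete (but arbitrary) smooth vector fields -- purely illustrative choices
A = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k
B = N.x**2*N.i + N.y**2*N.j + N.x*N.y*N.z*N.k

def a_dot_del(P, Q):
    """(P . nabla) Q: the scalar operator P . nabla applied to each
    Cartesian component of Q."""
    comps = [P.dot(gradient(Q.dot(e))) * e for e in (N.i, N.j, N.k)]
    return comps[0] + comps[1] + comps[2]

lhs = curl(A.cross(B))
rhs = a_dot_del(B, A) - B*divergence(A) - a_dot_del(A, B) + A*divergence(B)

# Each component of the difference simplifies to 0, confirming the identity
print([simplify((lhs - rhs).dot(e)) for e in (N.i, N.j, N.k)])   # [0, 0, 0]
```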
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258071184158325, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/64543/showing-bary-rightarrow-mu-have-i-done-enough-converges-in-probability
# Showing $\bar{y} \rightarrow \mu$ Have I done enough? Converges in probability From the title I'm supposed to show $\bar{y} \rightarrow \mu$ (converges in probability) where $$y_t = \mu + u_t$$ $$u_t = \rho u_{t-2} + \epsilon_t$$$$E(\epsilon) = 0, E(\epsilon^2) = \sigma^2, E(\epsilon_t\epsilon_s) = 0, 0 \leq \rho \leq 1$$ So I'm not sure if my math is right, but If i show that $\bar{y} = \frac{1}{N}\sum (\mu + u_t) = \frac{1}{N}N\mu+\sum u_t$ and $\sum u_t$ goes to 0 will that be enough? I can show each term of $\sum u_t$ will include a $\epsilon_t$ which the ExpValue is 0. But I'm not sure if that's enough. I'm mildy mathematically mature finance grad student in my first econometrics class so any help would be really appreciated. Thanks. - There are some misprints in your question: first, $u_t=\rho u_{t-1}+\epsilon_t$, not $u_t=\rho u_{t-2}+\epsilon_t$; second, one should assume that $\rho<1$, not that $\rho\le1$; third, one wants to show that $\frac1N\sum u_t$ goes to $0$, not that $\sum u_t$ goes to $0$. See below for the steps of a proof. – Did Sep 14 '11 at 20:20 ## 1 Answer Since $\bar y_N=\mu+\bar u_N$, it is sufficient to show that $\bar u_N=\dfrac1N\sum\limits_{t=1}^Nu_t$ converges to $0$ in $L^2$ (which implies the convergence in probability), in other words, that $\mathrm E((\bar u_N)^2)\to0$. Here are the steps of the proof. 1. Write every $u_t$ as a linear combination of the random variables $u_0$ and $\epsilon_s$ for $1\le s\le t$. 2. Deduce from 1. an expression of $N\bar u_N$ as a linear combination of the random variables $u_0$ and $\epsilon_s$ for $1\le s\le N$. 3. Use the independence of all these random variables and the fact that $\mathrm E(\epsilon)=0$ to deduce from 3. an expression of $N^2\mathrm E((\bar u_N)^2)$ in terms of the parameters $\rho$, $\sigma^2=\mathrm E(\epsilon^2)$ and $\mathrm E(u_0^2)$. 4. Using the fact (which you shall prove) that $1+\rho+\rho^2+\cdots+\rho^t\le\dfrac1{1-\rho}$ for every nonnegative $t$, show that, for every $N$, $$N^2\mathrm E((\bar u_N)^2)\le N\sigma^2+\mathrm E(u_0^2).$$ 5. Conclude that $\mathrm E((\bar u_N)^2)\to0$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408754706382751, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/39005/finite-extension-of-mathbb-q-p
# Finite extension of $\mathbb Q_p$ Let $\mathbb K/\mathbb Q_p$ be a finite extension of $p-$adic field $\mathbb Q_p$. Let ${\mathcal O}=\{x\in K\;:\;|x|\leq1\}$ and ${\mathcal P}=\{x\in K:\;|x|<1\}$, here $|\cdot|$ is the absolute value. Show that the quotient ring $\mathcal O/\mathcal P$ is a finite field. What is the cardinal of its and show a complete system of representatives of the residue classes of this quotient ring. - 2 This looks like homework. What have you tried? – Soarer May 14 '11 at 3:03 2 Please don't post in the imperative mode ("Show", "prove", "construct"). You aren't giving us an assignment, you are, I think, trying to ask a question. So ask, don't tell. – Arturo Magidin May 14 '11 at 3:22 ## 1 Answer ${\cal{O}}$ is a very distinguished ring inside $\mathbf{K}$...what is it? Try studying the situation first when $\mathbf{K}=\mathbf{Q}_p$. -
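As a hedged illustration of the hint (a sketch of the base case only, not a full solution): for $\mathbb K=\mathbb Q_p$ one has ${\mathcal O}=\mathbb Z_p$ and ${\mathcal P}=p\mathbb Z_p$, so $${\mathcal O}/{\mathcal P}=\mathbb Z_p/p\mathbb Z_p\cong\mathbb F_p,$$ a field with $p$ elements, and $\{0,1,\dots,p-1\}$ is a complete system of representatives. For a general finite extension $\mathbb K/\mathbb Q_p$ the same kind of argument produces a finite field of cardinality $p^f$, where $f=[{\mathcal O}/{\mathcal P}:\mathbb F_p]$ is the residue degree; one possible complete system of representatives is $0$ together with the Teichmüller lifts, i.e. the solutions of $x^{p^f}=x$ in ${\mathcal O}$.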
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319125413894653, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/4068/formalizing-quantum-field-theory
# Formalizing Quantum Field Theory I'm wondering about current efforts to provide mathematical foundations and more solid definition for quantum field theories. I am aware of such efforts in the context of the simpler topological or conformal field theories, and of older approaches such as algebraic QFT, and the classic works of Wightman, Streater, etc. etc . I am more interested in more current approaches, in particular such approaches that incorporate the modern understanding of the subject, based on the renormalization group. I know such approaches exists and have had occasions to hear interesting things about them, I'd be interested in a brief overview of what's out there, and perhaps some references. Edit: Thanks for all the references and the answers, lots of food for thought! As followup: it seems to me that much of that is concerned with formalizing perturbative QFT, which inherits its structure from the free theory, and looking at various interesting patterns and structures which appear in perturbation theory. All of which is interesting, but in addition I am wondering about attempts to define QFT non-perturbatively, by formalizing the way physicists think about QFT (in which the RNG is the basic object, rather than a technical tool). I appreciate this is a vague question, thanks everyone for the help. - When you mention renormalization group, do you mean explicitly including a UV cutoff? – QGR Jan 28 '11 at 17:40 I mean for example, provide a more solid and rigorous treatment of renormalization group along the lines of Wilson or Polchinslki. But I'd rather someone more knowledgeable will let me know what I mean precisely. – user566 Jan 28 '11 at 17:45 Just in case it is not clear, I meant someone more knowledgeable than myself...as this is the internet one has to be explicit. – user566 Jan 28 '11 at 17:56 Dear Moshe, I would bet you won't find anything away from 1) topological field theories and theories without bulk degrees of freedom which is what knot-like mathematicians are obsessed with, 2) free field theories, 3) two-dimensional conformal field theories, 4) lattice definitions, especially of pure QCD. An explanation why such a thing probably doesn't exist is that there are no people in this world who are actually obsessed about math-style rigor and who have mastered modern QFT, including RG, at the same moment. – Luboš Motl Jan 28 '11 at 18:03 2 Moshe: Have you tried to ask Joe? ;-) Or check papers referring to his RG papers? – Luboš Motl Jan 28 '11 at 18:50 show 6 more comments ## 6 Answers There are a number of high level mathematicians who are working on giving a more mathematically precise description of perturbative QFT and the renormalization procedure. For example there is a recent paper by Borcherds http://arxiv.org/pdf/1008.0129, work of Connes and Kreimer on Hopf algebras and the work of Bloch and Kreimer on mixed Hodge structures and renormalization http://www.math.uchicago.edu/~bloch/monodromy.pdf just to name a few. To be honest, I am not mathematically sophisticated enough to judge what has been accomplished in these papers, but I think there are some problems in QFT which will probably involve some rather high-powered mathematics of the type being developed in these papers. For example, the current attempt to reformulate N=4 SYM in terms of Grassmannians apparently has some connection to rather deep mathematical objects called Motives. 
Results on the degree of transcendentality which show up in perturbative N=4 SYM amplitudes also seem beyond what physicists really understand and I believe the presence of transcendental objects (like $\zeta(3)$) in QFT amplitudes provides some of the motivation for the work of Bloch and Kreimer. I'm not an expert on this stuff, so perhaps someone else will chime in with a more complete explanation and additional references. Edit: One more reference which is closer to the spirit of the original question is a book in progress by Costello on perturbative quantum field theory treated from the Wilsonian, effective field theory point of view. Notes are available online at http://www.math.northwestern.edu/~costello/renormalization - – Luboš Motl Jan 28 '11 at 19:25 The Borcherds paper looks more like modern QFT, but maybe too much. It seems that the objects in it are things like propagators, so it may be really inventing fancy names for the objects that physicists use. But of course I may be missing many things, especially after the fast browsing through the preprint. ;-) – Luboš Motl Jan 28 '11 at 19:27 1 Well, you may find out one day that this authoritarian method of estimating the content of papers is flawed. While I am not sure about this particular one, I have no doubt about another Fields medalist from your list who learned enough to teach a QFT course by your criteria - a great guy - who wrote several papers convinced that loop corrections to everything including the masses are strictly equal to zero at the GUT scale (the theory reduces to a classical one), among many other surprising things. Your method really doesn't work, Jeff. – Luboš Motl Jan 28 '11 at 21:06 1 I have to say I agree with Luboš here. No one disputes the mathematical skills of these guys but when it comes to physics mathematicians produce lots of nonsense because they focus too much on formalism and rigour and forget about all physical content. – Marek Jan 28 '11 at 21:41 1 Well, I agree with Jeff Harvey. – MBN Jan 29 '11 at 5:41 show 1 more comment you'll find a lot of information on the nLab, the open online Wiki of a bunch of people working on n-categories. You should really click around and see what is there, here is the page about the "functorial" POV on QFT, the formalization of the Schrödinger picture of QFT, including TQFTs: there is also a page about the Heisenberg picture aka axiomatic quantum field theory: The wiki software written by Jaques Distler has a nice search function, use it! You'll find there is a lot about (formalizing) string theory, too, also about the work of Jacob Lurie et alt. on TQFTs, a long page about CQFT, and references about recent work on the picture of perturbative QFT and renormalization groups from the AQFT point of view. Well, since you asked specifically for the latter, here is the direct link (but this is also on the nLab, together with a whole plethora of other resources): • Romeo Brunetti, Michael Duetsch, Klaus Fredenhagen: Perturbative Algebraic Quantum Field Theory and the Renormalization Groups. There is also information about Connes' work on formalizing the standard model and unifying it with gravity by using noncommutative spaces. Besides, if anyone does not find anything that should be there, go to the nForum and tell the folks over there about it! Edit: Explanation of "noncommutative spaces": When you take a real smooth manifold like a spacetime, for example, this manifold is completely described by the algebra of charts. 
In fact, the very definition of manifold can be done this way. Every property of the manifold corresponds to a property of the algebra of charts. This algebra is commutative, of course. Connes' great idea of "noncommutative geometry" is that we could replace the commutative algebra of charts with a noncommutative operator algebra and see what geometric concepts we could transfer from the commutative to the noncommutative setting. Operator algebras ($C^*$-algebras, to be more precise) are then considered as a noncommutative analog of charts of a "noncommutative" space. Connes's did a lot of work on the standard model and perturbative QFT using this idea, but unfortunately it is mathematically quite sophisticated. For a good introduction for physicists, see for example: • J. Madore: "An Introduction to Noncommutative Differential Geometry and its Physical Applications", 2nd edition, Cambridge University Press This book also explains ideas of extending classical spacetime with noncommutative aspects. - "... noncommutative spaces" - it is cool! I know that the vector $(x_1,x_2)$ is not the same as the vector $(x_2,x_1)$ but I think it is something different ;-). – Vladimir Kalitvianski Jan 28 '11 at 22:27 – Marek Jan 29 '11 at 8:45 – Urs Schreiber Oct 9 '11 at 21:11 The question here is how to organize a response. Names can be given, but I hope I can give some conceptual order. Hopefully other answers will find other ways to organize their response. The Wightman axioms are classic. I take the approach here of organizing how other approaches sit with the Wightman axioms, even though they may not be axiomatic. A useful critique of the Wightman axioms can be found in Streater R F, Rep. Prog. Phys. 1975 38 771-846. More recent is the assessment of Fredenhagen, Rehren, and Seiler, “Quantum Field Theory: Where We Are” in http://arxiv.org/abs/hep-th/0603155, which I recommend. In my scheme here, however, the approach you particularly ask about, formalizations of the renormalization group, do not figure, because, as you say, they have a completely different starting point. I’d say that the starting point is perhaps the concept of Feynman integrals rather than the renormalization group in itself, but I’d also say that that’s a quibble. There is a large question of what we hope to achieve by axiomatizing. (1) We can loosen the axioms, so that we have more models, some of which might be useful in Physics, but we have to figure out which on a case-by-case basis. This makes Engineering somewhat quixotic. (2) We can tighten the axioms, with the ambition that all the models are useful in Physics, but some Physically useful models might be ruled out. Mathematicians are often happy to work with axioms that a Physicist would consider too tight. So, the Wightman axioms, more-or-less in Haag’s presentation in “Local Quantum Physics”: 1. The state space (a) is a separable Hilbert space. There are people trying to use non-associative algebras, amongst other things. (b) which supports a representation of the Poincaré group. There are people doing QG, QFT on CST, and many ways of breaking Lorentz symmetry at small scales. (c) There is a unique Poincaré invariant state. Thermal sectors don’t satisfy this. Non-unique vacuums are an old favorite, but the vacuum state is pervasive in Particle Physics. (d) The spectrum of the generator of translations is confined to the closed forward light-cone. This is an elephant, IMO. The underlying reason for this is “stability”, which has no axiomatic formulation. 
The belief that the spectrum condition is necessary for stability may rely on classical thinking, particularly on the primacy of the Hamiltonian or Lagrangian. Feynman integrals for loops introduce negative frequencies, however, so there’s something of a case against it. 2. The observables (which, implicitly, correspond in some way to statistics of experimental data) (a) Are operator-valued distributions. People have introduced other Generalized function spaces. Haag-Kastler tightens this, to bounded operators, but the mapping from space-time regions to operators is looser. In Particle Physics, the S-matrix, which discusses transitions between free field states on time-like hyperplanes at t=+/-infinity, has been the supreme observable for decades: trying to reconcile this with the Lorentz invariant operator-valued distributions of the Wightman axioms pretty much killed the latter. Condensed matter Physics, optics, etc., take correlation functions quite seriously, which seems to me to be at the heart of the split between Particle and other Physicists. Another elephant. (b) Are Hermitian. There’s a complex structure. People have also introduced quaternions in various ways. (c) The fields transform under the Poincaré group. This goes with 1b. (d) The observables are jointly measurable at space-like separation, but in general are not jointly measurable at time-like separation. Stepping away from the Poincaré group almost always results in violation of this axiom. Random fields, which are always jointly measurable operator-valued distributions, and the differences between them and QFT, are something I have published on. The Haag-Kastler approach to some extent brings the states and observables into the single structure of von Neumann algebras, but essentially the distinction of linear operators and their duals remains. Refusing to split the world into states and observables, which we might call “holism”, makes Physics almost impossible. There’s always the question of exactly where the Heisenberg cut should be put, but pragmatically we just put it somewhere. Bell tries to square that circle while still doing Physics in his ‘Against “Measurement” ’, and Bohm as much as left Physics behind. There are people trying to do that kind of thing, but I find very little of it useful. Returning back to earth, there’s also a question of how we deform a system that we have managed to construct so that it’s significantly different in interesting ways. This isn’t in the axioms, but the standard has been to deform the Hamiltonian or Lagrangian. Both methods, however, require a choice of one or two space-like hypersurfaces, which goes against the spirit of the Poincaré group. Algebraic deformations, the other known alternative (others?), have hardly left the ground because the constraints of positive energy, microcausality, and the primacy of the S-matrix have hitherto ruled them out (I have also published on this, based on Lie fields from the 1960s). If we deform the algebra of observables instead of the dynamics, the question arises of what "stability" might be. There is of course a question whether one ought to start from the Wightman axioms at all, but one has to choose somewhere. Then, with Lee Smolin, one has to set off into the valleys, hoping to find a bigger hill. Best wishes. - I think if you want a rigorous approach to QFT that incorporates the ideas of renormalization and effective field theory, you might want to take a long look at the work of the 'constructive QFT' school. 
Their work on rigorous Euclidean functional integration is very much in the spirit of Wilson. Two examples, chosen randomly among many: 1) Euclidean functional measures are usually constructed as cylinder measures on spaces of distributions. (These spaces of distributions arise as duals of nuclear spaces, which are vector spaces of functions which get their topology from families of norms which quantify "how sharply concentrated is this function?".) These measures are constructed as limits of cutoff measures, as the cutoff scale goes to zero, and the cutoff measures are measures are constructed in such a way that they approximate a renormalization group flow. For example, the construction of continuum Yang-Mills theory on a torus is done by rather explicitly block-spinning lattice Yang-Mills theory. Same thing for 2d Maxwell-Higgs. 2) In order to be an honest measure, a cylinder measure must satisfy one additional property: it must be countably additive. In Glimm & Jaffe's work, this property is obtained a consequence of a property called 'vanishing at infinity', which states, basically, that the measure is insensitive to the region of field space probed by test functions which are large and very sharply localized. This is almost exactly what one means by an 'effective' field theory. - Can you please give some references, especially to the construction of Yang-Mills on torus and 2d Maxwell-Higgs? – timur Apr 28 '11 at 4:05 1 This is by no means a complete listing, but: For 2d Maxwell-Higgs, there's a series of papers by Balaban, Brydges, Imbrie, & Jaffe, the last of which is "Effective Action & Cluster Properties of the Abelian Higgs Model". For YM on a torus, there's papers by Balaban, papers by Magnen, Rivasseau, & Seneor, and papers by Federbush. – user1504 Apr 28 '11 at 14:32 Here is my answer from a condensed matter physics point of view: Quantum field theory is a theory that describes the critical point and the neighbor of the critical point of a lattice model. (Lattice models do have a rigorous definition). So to rigorously define/classify quantum field theories is to classify all the possible critical points of lattice models, which is a very important and very hard project. (One may replace "lattice model" in the above by "non-perturbatively regulated model") - 2 – mbq♦ Jun 1 '12 at 15:08 I think they have already given you the necessary references before me. I return to your phrase: "... in addition I am wondering about attempts to define QFT non-perturbatively ...". I think the maximum that we may hope for is a partially taking into account some interactions. In this sense this part is non-perturbative. This may give us a better initial approximation (in/out states); the rest of interactions can be taken into account perturbatively. I started such an activity, see here, but no renormalization group appears in my approach, sorry. Besides, it is not formalizing but reformulating QFT basing on physical phenomena. Please, don't kill me for that! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427530169487, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4200357
Thread: several questions in electromagnetics

I went through the book of engineering electromagnetics and I have several questions I don't understand.

1. If the line integral of the electric field is not zero in the presence of a changing magnetic field, then how can KVL hold with AC current flowing in the circuit? I do not know what's wrong with my understanding. Doesn't the AC current generate a changing magnetic field through the surface of the circuit, which gives a non-zero line integral of the electric field along the path and contradicts KVL?

2. The book mentions that conduction current density J = σE is the motion of charge in a region of zero net charge density and convection current density J = ρv is the motion of volume charge density. I couldn't understand this, because the formula for conduction current density J = σE is derived from the one for convection current density J = ρv. OK, if they are different, would you please give me specific examples of what conduction current is and what convection current is? Personally I thought they were both just conduction current.

3. I really couldn't understand the boundary conditions Etangential1 = Etangential2 and Htangential1 = Htangential2 for time-varying fields. Where do $\partial B/\partial t$ and $\partial D/\partial t$ go? These are time-varying fields, so should these two terms really be zero? The book says that they should be zero between any two real physical media, but it also mentions that surface charge density is physically possible for dielectrics, perfect conductors or imperfect conductors, and surface current density for perfect conductors. I am confused. If surface charge density and surface current density are possible, why should these two terms be zero?

I have time to answer one tonight. I'll be back tomorrow if no one else chimes in. 1 - You are astute to notice that KVL appears to be violated in the case of an AC circuit. Time-varying current flowing in the wires causes a time-varying magnetic field (Ampere's law). According to Maxwell's equations this should cause the EMF around the loop to be non-zero. Here is how we get around this problem. 1 - We pretend that the wires do not create the aforementioned magnetic fields. 2 - Gather up all of the magnetic flux that should have been generated by the wires and assign it to an inductor that we insert into the circuit. We call this the circuit's "parasitic inductance". 3 - The parasitic inductor now carries all of the Maxwell's-equation voltage drop, so that we can still say that the EMF around the circuit is zero. Sometimes we break the parasitic inductance into several parts, each corresponding to a section of the wires. In this case they are called "partial inductances".

Hey, many thanks for your answer; it reminds me of the model of a transmission line. It is clear for me now.

Those are very good questions, I can answer #3 right off my head: For the line integral of E: $$\int_s \nabla X \vec E\cdot d\vec s=\int_c \vec E\cdot d\vec l=-\int_s\frac{\partial \vec B}{\partial t} \cdot d\vec s$$ Remember the closed loop rectangle in the boundary condition?
As the length of two sides that is normal to the surface approach zero, the area inside the loop approach zero $$\int_s\frac{\partial \vec B}{\partial t} \cdot d\vec s \;\rightarrow \;0$$ So the term disappeared. Therefore $$\int_s \nabla X \vec E\cdot d\vec s=\int_c \vec E\cdot d\vec l=0$$ This reasoning is the same for the magnetic boundary condition. One thing very important, when you get down to it, time varying signal travels as EM wave, not as current. electrons move very slow, it's the EM wave that travels at close to light speed. The current you measure is really due to the boundary condition of the EM wave........namely $$\int_s \nabla X \vec H \cdot d\vec s = \int_s \vec J \cdot d\vec s$$ Where J is surface current at the boundary surface. Also you have to be careful about looking at KVL for everything. There was a big debate on a professor of MIT proving KVL don't hold in magnetic induction. In EE, people use equivalent circuits, using KVL, Thevenin, Norton, super position etc. They don't necessary hold in physics. If you are interested, read this long post. I spent my whole Christmas holiday on this and more. http://www.physicsforums.com/showthr...3575&highlight Quote by CheyenneXia 2. The book mentions about that conduction current density J= σE is the motion of charge in a region of zero net charge density and convection current density J=ρv is the motion of volume charge density. I couldnt understand because the formula of conduction current density J= σE is derived from the one of convection current density J=ρv. Ok, if they are different, would you please give me specific examples about what conduction current is and what convection current is. Personally I thought they were just conduction current. I did some digging as convection current is less common than conduction current, I have to read up on it. This is my understanding: Convection current mainly talking about actual charge particles moving in vacuum. The velocity is more governed by Newton force F=ma where F is eE. The velocity obey the Newton's law where du/dR=a ( u is velocity, R is displacement, a is acceleration.). Then ρ=-J/u. For conduction current, it is really electrons jumping from one atom to the next atom of a conductive material under the electric field applied. At any given time, there is no net charge in any atom inside the conductive material. The velocity is governed by both E and the mobility $\mu_e\;$ of the conductive material where $\vec u =\mu_e\;\vec E$. In conduction, $\rho=\mu_e\vec E$. The velocity is very slow, the better the conductor, the slower the velocity is. I think it's because the electron keep hitting the atoms like a pin ball machine!!! It never gain velocity......at least this is my guessing. The two might use the same symbol, but the mechanism is very different. This is just my understanding. BTW, Electromagnetics is a very difficult subject. It is like peeling an onion, you have layer after layer. You peel one layer and you might think you understand. Then you read again, then you discover you have more question than answer. Then you study, and you peel another layer.....and so on. I studied three different times with three different books. I only feel good..........if I don't open the book and look at it. If I open the book, then I start to ask question and I have no idea how to explain it. I just post a question something related to how current travel that I thought I understand. 
But upon reading over, something really missing and I posted in the Classical physics forum here. I got no answer. If someone here have an answer, please join in. http://www.physicsforums.com/showthr...6127&highlight Sorry to be the only one that post on this thread, by the time I thought of something, it's too late to edit my last post!!! I have been thinking about the first question and the big debate I had about the MIT professor Levin. The lesson I learned from the debate is a lot of theorems in electronics don't necessary stand the test of physics. For example, we use super position and super impost two separate circuits together to become one. We use it so much that we kind of take it as the LAW. But in physical world, there is no two circuit. Like Thevinin, we replace parallel resistor with a series resistor and create a equiv Vth. That does not hold up in physical world. I am not comfortable to explain using "consider there is two sources, one doing......, the other doing......." or even explain EM wave propagation by those 5 balls hanging on strings and if you swing one ball to hit on the right side, the left most ball bounce up immediately. Because in physical world, I don't think this is the physics. I did some thinking and digging. The reason KVL call into question and not working out in circuit in varying magnetic field is because E is no longer conservative. Remember the definition of conservative field? A conservative field is a Gradient of a scalar function. If E is conservative, $$\vec E=-\nabla V \;\hbox { where }\; V \; \hbox{ is some scalar function.}$$ Also: $$\vec E=-\nabla V \;\Rightarrow\; \nabla X \vec E= 0,\;\hbox{ which implies E is Irrotational.}$$ Remember $\nabla X \vec E=0$ means the closed loop of a conservative field is zero. This is same as the difinition of KVL. KVL only work in static condition where the E is conservative field. But in Maxwell's equation for varying field: $$\nabla X \vec E =-\frac {\partial \vec B}{\partial t}$$ It is no longer conservative, therefore KVL have issue. I know this is not the best explanation, I know it would be so much simpler to use equivalent circuits and all. That's the reason I was ranting about the theorem vs law. This post is more about me finally moving a step forward in understanding the debate of the MIT professor that I spent a month typing. Anyone have different idea, please join in, this is only my revelation of the day, peeling one layer of the onion......hopefully. Recognitions: Science Advisor Quote by yungman For conduction current, it is really electrons jumping from one atom to the next atom of a conductive material under the electric field applied. At any given time, there is no net charge in any atom inside the conductive material. The velocity is governed by both E and the mobility $\mu_e\;$ of the conductive material where $\vec u =\mu_e\;\vec E$. In conduction, $\rho=\mu_e\vec E$. The velocity is very slow, the better the conductor, the slower the velocity is. I think it's because the electron keep hitting the atoms like a pin ball machine!!! It never gain velocity......at least this is my guessing. This post contains numerous errors. Electrons do not jump from atom to atom, they are in the metallic crystal's conduction band where they act as an "electron gas". The electron drift velocity is usually directly proportional to electric field. Better conductors do not have "slower" electrons, I don't even know what this means. 
Better conductors have fewer scattering defects, hence a longer mean free path between collisions, which results in higher net drift velocity. There are also misconceptions in other posts in this thread. Caveat emptor, beware... Quote by marcusl This post contains numerous errors. Electrons do not jump from atom to atom, they are in the metallic crystal's conduction band where they act as an "electron gas". The electron drift velocity is usually directly proportional to electric field. Better conductors do not have "slower" electrons, I don't even know what this means. Better conductors have fewer scattering defects, hence a longer mean free path between collisions, which results in higher net drift velocity. There are also misconceptions in other posts in this thread. Caveat emptor, beware... Electrons in the outer valency band of the conductor do move around loosely. The total net charge is zero but yes the electrons do move from one atom to the other. This is in the books about conduction electrons. You can call it jump or a conduction cloud as electrons of the outer band move freely from one atom to another and they do fall back into the valency band of the atom occasionally. How ever which way you call it, they move around. Please correct any misconceptions in my post here. I would like to learn. I read my post #6 again, I should be more specific about the theorem, that it is my impression, opinion and observation only. I am not a theoretician, that's the reason I did say people please join in the last sentence. The over one month of typing in the thread debating about the validity of KVL in the MIT professor's video really get me thinking about those theorems in EE. I was using equivalent voltage source and equivalent circuit in magnetic induction.....which went nowhere. I love to hear others opinions. Recognitions: Gold Member Science Advisor I saw this video http://www.youtube.com/watch?v=EwIk2gew-R8 some time ago and decided it's sophistry. emi guy answered it - electrmagnetically induced voltage is another siource and must be included in any correct implementation of kirchoff's method. Including the voltmeter leads. at least to my simple , alleged mind old jim Quote by jim hardy I saw this video http://www.youtube.com/watch?v=EwIk2gew-R8 some time ago and decided it's sophistry. emi guy answered it - electrmagnetically induced voltage is another siource and must be included in any correct implementation of kirchoff's method. Including the voltmeter leads. at least to my simple , alleged mind old jim Ha ha, that's what was exactly what I based on to argue and I really spent the whole Christmas typing and debating two years ago!!! I was absolutely out theory by those people, I finally gave up because they site too many articles and it's over my head. If you go through the long thread, I actually did experiment and took picture and drew equivalent circuits to support my argument. http://www.physicsforums.com/showthr...3575&highlight If you go to page 14 post 224, you'll see the pictures and the argument I put out. I even show holding the probe steady at the same point, and I can change the reading on the scope just by swinging the ground lead of the probe in different position. I even analyzed and explained the reason with drawing of the ground lead of the probe, showing that the EMF was induced onto the ground lead of the probe that cause the reading to change. Anyone have a better theory, go revive that thread, I love to be vindicated from that!!! 
Quote by marcusl: (the critique quoted in full above)

Can you please point out where my error is? It is important for me and others to know. As for velocity, good conductors like Ag, Cu, and Al have mobilities in the range of $6\times10^{-3}$ to $1.4\times10^{-4}$. But if you look at Si and Ge, which are not as good conductors, the mobilities are 0.14 and 0.32. They are higher.
$$\vec u= \mu_e \vec E.$$
So given the same current, the velocity is higher with Si and Ge according to the formula. AND also, Si and Ge have much lower conductivity, so it takes a higher E to get the same current. Both points indicate a higher velocity for Si and Ge compared to the good conductors.

I further question the limitations of Ohm's Law; I even posted a specific example that Ohm's Law cannot accommodate in the Classical Physics forum, and it looks like there is a limitation: http://www.physicsforums.com/showthread.php?t=659307 This has nothing to do with magnetic induction and conservative fields; it has more to do with the EM propagation of the signal rather than current and voltage. Feel free to join in the other post.

Thank you. I know why I was confused. Surface current density exists, but there are no surface time-varying field densities. A time-varying field exists in 3D.

Quote by yungman: Those are very good questions; I can answer #3 right off the top of my head. For the line integral of E:
$$\int_s \nabla \times \vec E\cdot d\vec s=\int_c \vec E\cdot d\vec l=-\int_s\frac{\partial \vec B}{\partial t} \cdot d\vec s$$
Remember the closed rectangular loop in the boundary condition? As the lengths of the two sides normal to the surface approach zero, the area inside the loop approaches zero:
$$\int_s\frac{\partial \vec B}{\partial t} \cdot d\vec s \;\rightarrow \;0$$
So that term disappears. Therefore
$$\int_s \nabla \times \vec E\cdot d\vec s=\int_c \vec E\cdot d\vec l=0$$
This reasoning is the same for the magnetic boundary condition. One thing is very important: when you get down to it, a time-varying signal travels as an EM wave, not as current. Electrons move very slowly; it's the EM wave that travels at close to light speed. The current you measure is really due to the boundary condition of the EM wave........namely
$$\int_s \nabla \times \vec H \cdot d\vec s = \int_s \vec J \cdot d\vec s$$
where J is the surface current at the boundary surface. Also, you have to be careful about applying KVL to everything. There was a big debate about an MIT professor proving that KVL doesn't hold under magnetic induction. In EE, people use equivalent circuits, KVL, Thevenin, Norton, superposition, etc. They don't necessarily hold in physics. If you are interested, read this long post. I spent my whole Christmas holiday on this and more. http://www.physicsforums.com/showthr...3575&highlight

I am more and more confused. I guess I should get a book on electronic materials and read some chapters on conductors. Thanks.

Quote by yungman: (the mobility and Ohm's Law post quoted at the top of this page)
Quote by CheyenneXia: (the post quoted above)

As for your post about the skin effect, I do not really get your question, sorry. If you look at the drawing......which is a more detailed version, with color, of a diagram from "Field and Wave Electromagnetics" by David K. Cheng. There are two sources of current: one is the surface current from the curl of H, which explains directly where the current comes from. BUT there is another source of current arising from the divergence of E, where
$$\nabla \cdot \vec D= \rho_{free}$$
You can see the charge as "+" and "-" at the boundary of the two plates. There is no account of these free charges that I can find in any book.

Also, about the skin effect: we know the skin depth is just a definition of the thickness over which the field falls by a factor of $e^{-1}$. The important thing is that even at lower frequencies, the signal travels as an EM wave. At low frequency the skin depth is very deep, not just a surface current like the boundary condition indicates. How do you account for the skin depth when the Maxwell boundary conditions are all about surface current? Hope I don't confuse you more.

Quote by CheyenneXia: (the "I am more and more confused" post quoted above)

I came up with a case to challenge the validity of Ohm's Law using an example of a microstrip transmission line. As I have repeated many times, a time-varying signal travels as an EM wave, not as current and voltage. The case I presented cannot be explained by the Ohm's Law that electronics people hold on to so dearly. It is just like what you so wisely asked about the problem with KVL in the presence of a magnetic field. In my example it is not related to magnetic induction as in your case; it is related to the signal traveling as an EM wave, not current and voltage. This really does not have a lot to do with electronic materials; as long as it is a good conductor, it will behave like this.

Electromagnetics is the most difficult subject in EE by a mile. People spend their lifetimes studying it, and it's mostly peeling onions; some manage to peel more layers and some don't. I answer your questions to the best of my knowledge, but I cannot be absolutely sure I am right. When marcusl claimed I made misguided statements on many occasions in this thread, I really wanted to know where, so I can look at them, judge their validity, and learn. I know I made statements that are quite out there, but I would rather speak what I think I understand so people can come back and say whether it's right or wrong. I don't want to just keep quiet on things that I believe are right just so I won't be called out.
That's the only way to learn......keep peeling the onion. You should read some of the posts in the MIT professor Lewin's thread; there is a lot of info you might be interested in. I have to say, after spending all that time debating there, I am really starting to see things their way........that you have to distinguish an equivalent circuit from a general law, and that I am really hesitant to use "think of it as if......". If you want to understand EM, my best advice is to get really good at vector calculus: line integrals, divergence, and curl. Each of them really means something, just like English sentences. The original Maxwell's equations are INTERPRETED through vector calculus; use the calculus to explain it, don't just rely on equivalent circuits. Those are the original words. Vector calculus is the language of EM; learn the language.

Quote by jim hardy: (the post quoted in full above)

I watched this video in dismay. "Sophistry" is exactly the right word. When he introduces his time-varying magnetic flux he no longer has a 2-node circuit consisting of two resistors. He will have two resistors and induced voltage sources adding up to 1V in the equivalent circuit. This is at least three nodes. He is showing only two nodes (A and D) even after magnetic induction is creating an emf across the wires.
- Do we insert an induced 1V source in series with the top wire (splitting node D into two nodes)?
- Do we insert an induced 1V source in series with the bottom wire (splitting node A into two nodes)?
- Do we divide up the voltage drop by adding sources on both top and bottom?
It turns out you can't tell, because the problem itself is *defective*. He shows magnetic flux leaving the blackboard inside the circuit loop (arrowheads) but he does not show how the flux returns back *into* the blackboard (feathers). In other words, he is employing a fictional magnetic monopole to make his argument. Magnetic flux must always flow in closed loops, and it is the details of those complete loops, which he has omitted, that allow us to construct the correct equivalent circuit for application of Kirchhoff's circuit laws. For example, if he was using a U-core to pipe magnetic flux into the circuit loop from the top (around the D wire), then we are inducing 1V across the top wire. The circuit would be drawn to include a voltage source between the top ends of the resistors (a three-node circuit). If he was using an E-core to pipe flux into the center from around both wires, then the induced voltage will be split between the top and bottom wires in proportion to the quantity of flux taking each path (a 4-node circuit). If he used a radially symmetrical pot-core, we would have sources equal to half of the induced voltage in series with both top and bottom wires. I would hope that this was part 1 of a two-part lecture and that the students were given the full picture in the next class after pondering it for a few days.
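To put numbers on the equivalent-circuit picture described above, here is a minimal worked example. The resistor values and the 1 V induced EMF are assumed for illustration only (the posts above do not give the actual component values), and the flux is taken to link the loop so that Kirchhoff's method can be repaired by inserting the induced source explicitly:

$$\oint \vec E\cdot d\vec l=-\frac{d\Phi}{dt}\equiv\mathcal{E}=1\ \mathrm V,\qquad R_1=100\ \Omega,\quad R_2=900\ \Omega,\qquad i=\frac{\mathcal{E}}{R_1+R_2}=1\ \mathrm{mA}.$$

A voltmeter attached to the two nodes whose leads run alongside $R_1$ (so that the meter loop encloses none of the changing flux) reads $iR_1 = 0.1\ \mathrm V$, while a meter attached to the same two nodes but routed around the other side of the loop reads $-iR_2 = -0.9\ \mathrm V$. The two readings differ by exactly the 1 V of induced EMF enclosed between the two lead paths, which is the point both jim hardy and emi guy make: include the induced source (and the voltmeter leads) in each loop, and Kirchhoff's method gives consistent answers.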
Thread closed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493101835250854, "perplexity_flag": "middle"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Ricci_flow
# Ricci flow

In differential geometry, Ricci flow is the flow of Riemannian metrics given by the equation $\partial_t g_{ij}=-2Ric_{ij}$, where $g$ is the metric and $Ric$ is the Ricci curvature. Richard Hamilton first considered this flow in 1981, showing that any 3-manifold which admits a metric of positive Ricci curvature also admits a metric of constant curvature. More recent work in analysis has focused on the question of how metrics evolve under the flow, and what types of parametric singularities may form. For instance, a certain class of solutions to the Ricci flow demonstrates that neckpinch singularities will form on an evolving n-dimensional metric of positive Euler characteristic as the flow approaches some characteristic time $t_0$. In certain cases such neckpinches are asymptotically modeled by a special class of solutions known as Ricci solitons. The Ricci flow can be used formally to prove various important results, like the uniformization theorem or possibly the geometrization conjecture, which includes the famous Poincaré conjecture.
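As a simple illustration of the evolution equation (a standard textbook example, not part of the article above): on the round sphere the flow just shrinks the metric homothetically until it collapses in finite time,

$$g(t)=c(t)\,g_{S^n},\qquad Ric\big(c(t)\,g_{S^n}\big)=(n-1)\,g_{S^n}\ \Longrightarrow\ c'(t)=-2(n-1),\qquad c(t)=c_0-2(n-1)t,$$

so a round $S^n$ of initial radius $r_0$ (i.e. $c_0=r_0^2$) becomes singular at the characteristic time $t_0=r_0^2/\big(2(n-1)\big)$.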
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.849765419960022, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/54895/nanotube-chiral-angle-as-a-function-of-n-and-m/54898
# Nanotube chiral angle as a function of $n$ and $m$ I'm looking into nanotubes and I thought I'd assure myself that the basic geometry equations are indeed correct. No problems for the radius, I quickly found the known formula: $$R = \sqrt{3(n^2+m^2+nm)}\frac{d_{CC}}{2\pi}$$ if $d_{CC}$ is the carbon bond length. For the chiral angle, however, the equation should be $$\tan{\theta} = \frac{m\sqrt{3}}{2n+m}$$ but I got $$\tan{\theta} = \frac{m\sqrt{3}}{2n-m},$$ a sign difference. I thought I'd look for the mistake and quickly correct it, but I cannot for the life of me find it. I'll show my derivation below. The starting point is of course the nanotube structure: I'll simplify this picture to make it more clear what I did: From the above picture, it's easy to see that $$\sin{\theta} = \frac{h}{na_1}$$ and $$\sin{\left(120^{\circ} - \theta\right)} = \frac{h}{ma_2}.$$ This can be rewritten using the trigonometric identity $$\sin{\left(\alpha-\beta\right)} = \sin{\alpha}\cos{\beta} - \cos{\alpha}\sin{\beta}$$ where $\alpha = 120^{\circ}$ and $\beta = \theta$, giving rise to $$\frac{\sqrt{3}}{2}\cos{\theta} + \frac{1}{2}\sin{\theta} = \frac{h}{ma_2}.$$ Now, using the fact that $a_1 = a_2 (=\sqrt{3}d_{CC})$, we find the following equations: $$\left\{\begin{array}{rcl}n\sin{\theta} & = & \frac{h}{a_1}\\ m\left[\frac{\sqrt{3}}{2}\cos{\theta} + \frac{1}{2}\sin{\theta}\right] & = & \frac{h}{a_1}\end{array}\right.$$ which obviously allow for combination: $$\begin{eqnarray} \frac{n}{m}\sin{\theta} & = & \frac{\sqrt{3}}{2}\cos{\theta} + \frac{1}{2}\sin{\theta} \\ \left(\frac{n}{m}-\frac{1}{2}\right)\sin{\theta} & = & \frac{\sqrt{3}}{2}\cos{\theta} \\ \frac{2n-m}{2m}\sin{\theta} & = & \frac{\sqrt{3}}{2}\cos{\theta} \\ \tan{\theta} & = & \frac{m\sqrt{3}}{2n-m}. \end{eqnarray}$$ This differs from the known expression in that it has a minus sign in front of $m$ in the denominator, but I fail to see my mistake. It could be really silly and I'm just being blind... Thanks for taking a look at it. - ## 1 Answer The angle at $C$ is not $120^\circ-\theta$, it's $180^\circ-(120^\circ+\theta)=60^\circ-\theta$. The rest of the algebra is right, and you can quickly see where the sign comes from. - Ah yes! Of course, I knew it was going to be some blind spot I was having. Thank you :) – Wouter Feb 24 at 0:53 Sure thing. I think we've all had those kind of days, and to be fair I completely missed it on the first read through as well. – wsc Feb 24 at 2:35
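The two formulas above are easy to put into a few lines of code. The sketch below (Python) assumes a typical bond length $d_{CC}\approx 0.142$ nm, which is not part of the question, and uses the corrected denominator $2n+m$:

```python
import math

def nanotube_geometry(n, m, d_cc=0.142):
    """Return (radius, chiral angle in degrees); radius is in the same units as d_cc."""
    radius = math.sqrt(3 * (n**2 + m**2 + n*m)) * d_cc / (2 * math.pi)
    theta = math.degrees(math.atan2(m * math.sqrt(3), 2*n + m))
    return radius, theta

# Sanity checks: armchair (n, n) tubes give 30 degrees, zigzag (n, 0) tubes give 0 degrees.
print(nanotube_geometry(5, 5))   # ~ (0.339 nm, 30.0)
print(nanotube_geometry(9, 0))   # ~ (0.352 nm, 0.0)
print(nanotube_geometry(10, 5))  # a chiral tube
```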
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9715454578399658, "perplexity_flag": "head"}
http://mathoverflow.net/questions/6780?sort=newest
## Triangle-free Lemma

Theorem (Triangle-free Lemma). For all $\eta>0$ there exist $c > 0$ and $n_0$ so that every graph $G$ on $n>n_0$ vertices which contains at most $cn^3$ triangles can be made triangle-free by removing at most $\eta\binom{n}{2}$ edges.

I am trying to find some information related to this topic; I am unable to access the original paper by Ruzsa & Szemerédi. Does anyone know any useful papers/books on the triangle-free lemma?

-

## 4 Answers

Possibly an even better place to look is in surveys on graph property testing; this is by far the most common use nowadays of the triangle removal lemma, and any sufficiently good introduction to the subject should have some information on it. (I haven't actually read it, but I believe the most recent edition of Alon and Spencer's The Probabilistic Method has a chapter on property testing. If you have access to a copy, assuming the new chapter is anywhere near as good as the rest of the book, that would be my first recommendation.) If you want a proof and some applications to more traditional combinatorics, Tim Gowers has a wonderful two-paragraph sketch and some discussion here (about halfway down the post).

-

Jacob Fox has just posted A new proof of the graph removal lemma to the arXiv, containing a proof that does not use regularity directly, and thereby achieves some more effective bounds.

-

If you're already comfortable with ultrafilters, the new proof of Elek and Szegedy (on the arXiv) derives this (and the more general simplex removal lemma for hypergraphs) from Lebesgue's "points of density" theorem. It's not an easy read, but it is a nice proof.

- I agree it is an interesting perspective on the subject, but I am unsure whether one can call it a truly different proof, as it still hides the regularity lemma underneath. – Boris Bukh Nov 26 2009 at 12:09

@Boris: Last I checked, though, there wasn't a proof of triangle removal that didn't go through a regularity lemma, so this isn't a particularly useful argument against it... :) – Harrison Brown Nov 26 2009 at 18:07

@Boris: hides how? As I read Elek/Szegedy, they get a simplex removal lemma without needing to state regularity (for hypergraphs). Is there some sense in which "points of density" should feel like regularity? – Kevin O'Bryant Nov 26 2009 at 20:51

1 @Harrison: I agree that there are none, but that was not an argument against Elek-Szegedy. It was an observation that the proof is in essence the 'same', though of course the framework of Elek-Szegedy is more general, and thus potentially might offer insight in other settings. @Kevin: Yes, there is such a sense. The Lebesgue density theorem essentially says that one can approximate every measurable set by a union of boxes. For a more detailed connection, see terrytao.wordpress.com/2007/06/18/… – Boris Bukh Nov 27 2009 at 10:10

The standard name is 'triangle removal lemma'. A Google search gives many results. The original paper of Ruzsa and Szemerédi is not the best reference, as they use an early version of the Szemerédi regularity lemma, rather than the more convenient modern version. I recommend reading some surveys on the regularity lemma. A good example is one by Komlós and Simonovits.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345703125, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/29875/applying-nabla-times-mathbfb-mu-0-mathbfj-in-the-presence-of-magnetic?answertab=votes
# Applying $\nabla\times\mathbf{B} = \mu_0\mathbf{J}$ in the presence of magnetic shielding

2012-06-13 - Revised question in experimental format

(This is a thought experiment for which RF experts may have an immediate answer.) I'll assume (I could be wrong) the possibility of creating a strongly insulating material with low permittivity and very high permeability. From this material fabricate two identical 30 cm tubes with inside diameters of 1 cm, and wall thicknesses sufficient to provide robust blocking of magnetic fields. A silver wire 30 cm in length and 1 cm in diameter is inserted into one of the tubes. Call this one the silver wire. The other cylinder is placed within a vacuum chamber with an electron gun at one end and a conductive target at the other end. The electron gun is designed to fill the 1 cm interior with an even flow of electrons, so that it can simulate the flow of electrons in the wire as closely as possible. Call this one the vacuum wire.

Now for the experiment. Identical currents $I=\frac{dQ}{dt}$ are sent through both wires. An experimenter then looks for induced magnetic fields around both wires. Will these magnetic fields be: (a) identical, (b) greater around the vacuum wire, or (c) greater around the silver wire? My bet is (b). Anyone? (See below for the history of this question. Again, it may turn out to be easy for RF experts who have to deal with weird combinations of permittivity, permeability, and conductivity on a daily basis.)

2012-06-10: Original version of question

Original title: Where are the electric field gradients in coil-generated magnetic fields?

Background

I tried to apply Feynman's SR-focused explanation of the relationship between electric and magnetic fields in wires to this, but in the end concluded he was addressing a rather different set of issues -- and even that only incompletely, since his purposes were more instructive (his Lectures) than a complete analysis. My question is not about the mathematics of Maxwell's equations, but how such equations may be applied a bit too casually to situations that are actually quite different physically.

Case 1: Magnetic field induction in CRTs

In an old-style cathode ray tube (CRT) or television screen, the electrons that cross the vacuum of the CRT create a detectable electric field gradient between the electrons and the surrounding tube. This field can be roughly imagined by picturing the tube as a large capacitor (which it is; I know someone with the scar on his shoulder that proves just how large) in which the central vacuum area carries most of the negative charge and the surrounding tube interior the positive charge. In the case of the CRT, the interior negative charges are also in rapid motion towards the screen.

Any accurate assessment of the above model requires explicit use of the modern vector-based version of Maxwell's equations. However, Maxwell was also fascinated by and made extensive conceptual use of hydrodynamic-inspired models of electric and magnetic fields. For example, Maxwell originated or at least popularized the term "flux lines," meaning flow lines, to describe both electric and magnetic field structure. The phrase "field lines" is more common these days, but means the same thing. The flux line model is still used in beginning courses on electromagnetics, where experiments using magnetic and non-conductive powders can make such lines starkly easy to visualize and comprehend.
The flux line model can be defined with mathematical precision for classical velocities. For the CRT example, electrons are surrounded by flux lines that extend out to the interior of the tube, and the orthogonal motion of those flux lines in turn generates a strong magnetic field with flux lines orthogonal both to the electric flux lines and to the direction of motion of the electric flux lines. So, this is all quite straightforward: the component of electric flux perpendicular to the direction of travel generates a magnetic flux line that is perpendicular to both that electric flux and its direction of motion. The electric flux lines are in turn defined by a field gradient -- a voltage -- that extends from the electron to the interior of the tube. That electric gradient is quite real and easily measured. The resulting magnetic field is equally real and measurable, and is in fact what is used to steer the electron beam and paint the screen with an image.

Case 2: Magnetic field induction in wires

Now as it turns out, you can also generate a very similar magnetic field using a rather different method. That method is to embed (in the ideal case) the same number of electrons as in the CRT case, moving at the same average velocity, within a conductive wire. The conductive wire would extend along the same path as the vacuum electrons, and electrons of similar number and velocity moving along inside the wire can generate a field that, with careful physical adjustments, can be made identical in appearance and strength to that of the CRT case of the electrons moving through a region of vacuum.

Comparing the two cases

So, two cases give very similar magnetic field results: electrons moving through a vacuum, and electrons moving through a wire. Both give strong circular magnetic fields that surround the path of the moving electrons, and both results can be estimated easily using Maxwell's equations. One reflex reaction at this point may be (should be) "so what?" After all, moving electrons give magnetic fields, so why in the world shouldn't similar motions give the same magnetic fields?

The interesting point is that if you look at the two cases carefully, they are not the same experimentally, and here's why: one case (the CRT) has a large-scale set of electric field lines that are very explicitly associated with the corresponding magnetic field structure. For example, if you use a tight, wire-like beam of electrons reaching from the back of the CRT to the center of its screen, then at 20 cm out from the path of the electrons there will be a noticeable electric field gradient that in terms of the flux line model is "moving" and thereby generating the magnetic field that can be measured at the same location. In the wire case, no such electric gradient exists. Because the charge of the electrons is cancelled out within the positively charged atomic background of the wire, the electric field makes no appreciable showing outside of the wire. Yet if you measure 20 cm out from the wire, you still see essentially the same magnetic field result, even though there are no longer any "moving electric flux lines" to generate the magnetic field locally.

The actual question

So, after all that preparation, my question is really quite simple: Why do moving electrons seem to generate approximately the same long-range magnetic field regardless of whether their field gradients (electric flux lines) are cancelled nearby or very far away?
As often is the case in trying to ask questions like this, working through it has helped me look at my own question differently, so I now think I have some inkling of how to answer my own question. (And no, it's not SR based, since this question is about why remote electric fields differ given the same "moving electron parts.")

So, is induction being taught accurately?

I'm asking anyway, in part because I'm not sure of my answer, but even more so because I think there needs to be some updating of how such situations are taught. Specifically, the moving-flux-line model (which I believe is still used instructively and is certainly seductive) flatly does not give correct results. If it were really accurate, there would be no such things as electromagnets and electric motors. That's because the field gradients of the moving electrons in such devices all cancel out very quickly and very locally, on the scale of atoms, leaving no appreciable external electric fields at the ranges of the stable magnetic fields they generate.

References

Possibly related past Physics.SE questions have been asked by: (1) The equivalent electric field of a magnetic field by Hans de Vries; asked 2012-04-19, answered 2012-04-19. A typically insightful question by Hans de Vries about the SR relationships of electric and magnetic. (2) Mechanism by which electric and magnetic fields interrelate by Nitin Nizhawan; asked 2012-02-02, no final answer. Another interesting and mostly SR question. (3) Moving conductors in magnetic fields: is there electric field or not? by Giuseppe Negro; asked 2011-05-14, answered 2011-05-14. A similar title, but not quite the same topic, I think.

- You're asking if $B$ is produced by $E$, then why does $\nabla E$ not affect it? Maxwell's equation shows that $B$ cares about the curl of $E$, not its gradient… – Chris Gerig Jun 11 '12 at 1:55

And yes, obviously induction is being taught accurately, by simply stating the definition of induction. – Chris Gerig Jun 11 '12 at 1:58

Yes, curl $E$, not $\nabla E$. But hopefully I'm not the only person who used to think it was OK to approximate curl by visualizing "$E$ field lines" moving through space. That image fits well enough with the explicit $\nabla E$ field lines of the CRT example, so I'd never really thought much about the complete absence of such gradients in the wire case. My curiosity is more along the lines of whether the equilibrium ideas from Maxwell's early mechanical models might provide a more concrete way to explain why the same magnetic fields form quite nicely with or without co-located $\nabla E$. – Terry Bollinger Jun 11 '12 at 4:15

Hmm, since in retrospect Maxwell's mechanistic methods are likely not that well known... :), my point is that a stable magnetic field is an end state that does not come into existence instantly, but must instead grow outward as the electrons start moving. I'm pretty sure (didn't try) that growth works differently for the CRT and wire conductor cases, but ends in the same stable $B$ field. – Terry Bollinger Jun 12 '12 at 2:57

All I hear is gibberish terms, but if you make everything precise, with formulas, then we'll all see the correct answer. – Chris Gerig Jun 12 '12 at 2:59

2 Answers

In a CRT, a power supply creates a high-voltage DC electric field that accelerates an electron beam to a desired energy/velocity. The resulting DC beam current does not create that accelerating electric field.
(Although space charge effects can modify that field to some extent, and means may be required to keep the beam focussed, since the electrons repel each other.) Also, the DC magnetic field produced by that current is small, and independent of the accelerating electric field. The magnetic fields in your two cases are identical: within the current distribution, a circumferential field increasing proportional to the radius, which falls to zero when it hits your magnetic-field-blocking (superconducting) tubes. The magnetic fields are identical because the currents are identical. ...unless I'm completely missing your intent...

- Art, thanks. I think your answer (a), identical fields, is likely correct. My intent was this: moving electrons in the vacuum case have $\mathbf{E}$ fields that extend outside the magnetic shield, while moving electrons in the silver case do not. Does this excursion outside the shield of an $\mathbf{E}$ field that is associated with moving charge carriers translate into an increase in the $\mathbf{B}$ field in the same exterior region? I'm now thinking "probably not," because the $\mathbf{E}$ field outside the shield should be steady state for a constant current. Other comments, anyone? – Terry Bollinger Jun 15 '12 at 2:37

This is another of my hand-waving answers; it is too long for comments. Your high permeability material exists, it is called mu metal. We have used it around photomultipliers to shield them from stray magnetic fields. You will have to use insulation inside the walls of your tubes. Are you talking of the very weak field outside the mu metal tubes? If you can make a thin enough and coherent enough electron gun to simulate the silver wire, the result will be the same. If you fill up the space with the electrons, the result will be different due to the differing boundary conditions according to the distance of the electron to the walls, since there will be a metal mirror effect. It is (b). Even in the hypothetical complete insulator with high magnetic permeability, the boundary conditions will be different for a silver wire and a full space, in my hand-waving opinion.

- Nice analysis. Thanks especially for the specifics about mu metal. Your argument about it being (b) due to boundary conditions sounds fascinating, very plausible, and tempting since it would make me sound right... but alas, I was trying to minimize such effects rather than rely on them, so I can't claim to have been intending my (b) answer that way. Since @ArtBrown answered (a), which I now think is correct, with the same basic points you also made, I'll wait a day for other comments and then mark his as the answer. – Terry Bollinger Jun 15 '12 at 2:49

It is not clear to me whether the diameter of your free electron current is constrained to be the same as the wire. If it is not, I believe there will be a difference in the map of the field depending on the radius. At the impermeable walls it will be the same because the integral will have the same value I. – anna v Jun 15 '12 at 3:48

My intent was "identical" electron paths, e.g. ballistic electrons traveling at the same velocity along identical paths; so yes, the diameter of the free electron current would be constrained to the same form factor and current densities as the wire. Both radii could be substantially smaller than the mu metal cylinder, since trying to send free electrons that close to matter would cause strong (and interesting) interactions that were not my intent.
So: Your answer is very good, but since @ArtBrown was first to give a well-written and accurate answer, I'll stick to my guns and award it to him. – Terry Bollinger Jun 15 '12 at 21:33

Fair enough. I am not arguing for a check :), just to understand your boundary conditions. – anna v Jun 16 '12 at 4:12
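A small numerical sketch of the field profile Art Brown describes (circumferential $B$ growing linearly with radius inside a uniform current distribution, and falling as $1/r$ outside it in the unshielded case). The 1 A current is assumed purely for illustration; the 0.5 cm radius matches the 1 cm wire of the question; the shield itself is not modeled:

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def b_field(r, current=1.0, wire_radius=0.005):
    """Circumferential B (tesla) at radius r (m), from Ampere's law for a uniform current density."""
    if r <= wire_radius:
        return MU_0 * current * r / (2 * math.pi * wire_radius**2)  # grows linearly inside
    return MU_0 * current / (2 * math.pi * r)                        # falls off as 1/r outside

for r in (0.001, 0.0025, 0.005, 0.01, 0.02):
    print(f"r = {100*r:5.2f} cm   B = {b_field(r):.3e} T")
```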
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467169642448425, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/wavefunction+operators
# Tagged Questions

### Question about the linearity of wave functions
For piece-wise constant potential, the potential energy is constant so the time dependent wave function can take the form $\psi(x,t)=C_1e^{i(kx- \omega t)}+C_2e^{i(-kx-\omega t)}$ where ...

### Once I have the eigenvalues and the eigenvectors, how do I find the eigenfunctions?
I am using Mathematica to construct a matrix for the Hamiltonian of some system. I have built this matrix already, and I have found the eigenvalues and the eigenvectors, I am uncertain if what I did ...

### How do I solve these integrals of wave function and operator?
First integral $$\int \Psi^*({\bf r},t)\hat {\bf p} \Psi({\bf r},t)\, d^3r,$$ where the $\Psi({\bf r},t)=e^{i({\bf k}\cdot{\bf r}-\omega t)}\,\,\,$ and $\hat {\bf p}=-i\hbar \nabla$. Second one ...

### Weird operator and wavefunctions
How can one show that $\int_{-\infty}^{\infty}\psi^*(x)(d/dx+\tanh x)(-d/dx+\tanh x)\psi(x) dx=\int_{-\infty}^{\infty} |(d/dx+\tanh x)\psi(x)|^2 dx$, where $\psi$ is normalized?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8705071210861206, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/48800/list
## Return to Answer

3: typos

It is easy to make examples of such subrings. For example, take $A=k[x,y]$ and consider the subring $$B=k[x^a y^b : 0\le \frac{b}{a}<\sqrt{2}].$$ Geometrically, $B$ is spanned by monomials whose exponent vectors lie below the line $y=\sqrt{2}x$. I think your question is quite interesting in the setting where $B=A^G\subset A$ is the invariant ring of some group action on $A$ (or equivalently, on the space $X=\mbox{Spec }A$). In many cases this subalgebra is finitely generated, which allows one to define a quotient space $X/G$ by $Y=\mbox{Spec }A^G$ with many good properties. This happens for example if $G$ is finite or reductive. However, as shown by Nagata's famous counterexample to Hilbert's 14th problem, $A^G$ may be infinitely generated, so the problem of defining such quotients in general is subtle. (Nagata's construction is indeed very geometrical, but a bit too complicated to restate here.)

2: added 647 characters in body (this revision added the discussion of invariant rings and Nagata's counterexample; the rest repeats revision 1)

1: [made Community Wiki]

Let $A=k[x,y]$ and consider the subring $$B=k[x^a y^b : 0\le \frac{b}{a}<\sqrt{2}].$$ Geometrically, $B$ is spanned by monomials whose exponent vectors lie below the line $y=\sqrt{2}x$. Since this line does not have any lattice points, $B$ is infinitely generated.
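A one-line sketch (added here, not part of any of the revisions above) of why no finite set of monomials can generate $B$; note that any finite generating set could be replaced by the finitely many monomials appearing in it:

$$x^{a}y^{b}=\prod_i\big(x^{a_i}y^{b_i}\big)^{n_i}\ \Longrightarrow\ \frac{b}{a}=\frac{\sum_i n_i b_i}{\sum_i n_i a_i}\ \le\ \max_i \frac{b_i}{a_i}<\sqrt{2},$$

yet, because $\sqrt{2}$ is irrational, there are monomials $x^a y^b\in B$ with $\max_i b_i/a_i < b/a < \sqrt{2}$, so finitely many monomial generators never suffice.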
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085332751274109, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/3263/graphs-that-cause-dfs-and-bfs-to-process-nodes-in-the-exact-same-order
# Graphs that cause DFS and BFS to process nodes in the exact same order

For some graphs, the DFS and BFS search algorithms process nodes in the exact same order, provided that they both start at the same node. Two examples are graphs that are paths and graphs that are star-shaped (trees of depth $1$ with an arbitrary number of children). Is there some way of categorizing graphs that satisfy this property?

- 5 Note that in both cases this only works if you start at some specific node. If you pick a central node in a long path, for example, you will get back different orderings from DFS and BFS. – templatetypedef Aug 20 '12 at 4:49

1 Are there any other interesting possibilities than a star or a path? At first glance it would seem that if you had a vertex with both a sibling and a child then you immediately get different traversals, so either no vertex has children (apart from the root) and you get a star, or no vertex has a sibling and you get a path. I guess a clique also works, but it has both the star and path embedded. – Luke Mathieson Aug 20 '12 at 8:38

2 @LukeMathieson I'm thinking of a star with the rightmost child being the root of another star. I guess that would work as well. We can even make a general statement: if $G = (V,E)$ satisfies the property when the search starts at node $v\in V$, then so does a star whose rightmost child $= v$. Even better, if $G_1$ and $G_2$ satisfy the property and node $v_1$ is the last one processed in $G_1$ and $v_2$ is where the search starts in $G_2$, then adding the bridge edge $(v_1, v_2)$ creates a graph that satisfies the property. Replacing $v_1$ by $v_2$ also works I think. – saadtaame Aug 20 '12 at 9:51

2 Good point, so there's some sort of right-recursive composition where you could identify the right leaf of the first graph with the root of the second. – Luke Mathieson Aug 20 '12 at 11:53

@LukeMathieson It looks like you can fix the case where a node $v$ has a sibling and a child by adding an edge between that child and the parent of $v$. Here is my proposition: given a graph $G=(V,E)$, if for every $x \in V$ it holds that whenever there exist $y,z,w \in V$ with $(y,x),(z,y),(x,w) \in E$ we also have $(x,z) \in E$, then the property holds for $G$. The next step is to prove or disprove this proposition. – saadtaame Aug 20 '12 at 17:48

## 1 Answer

Assume our BFS and DFS have a rule to start from a specific node and, whenever there is a choice, to visit the node with the lowest degree first (the answer refers to a figure of triangles with black and red nodes that is not reproduced here): start from the leftmost black node, then both BFS and DFS visit the leftmost red node, then they visit the next black node, and so on. To make it more general, you could add some paths in between the triangles, or add a star after the triangles ...

- That's correct under your assumption. You raised a good point actually; we should specify in what order the nodes are added to the agenda (stack or queue) when faced with a choice. – saadtaame Aug 21 '12 at 17:37

Bearing in mind that LIFO and FIFO for scheduling yield DFS and BFS respectively, one might argue that scheduling such as this (in which the scheduling may not either be stack- or queue-like) is neither depth- nor breadth-first search — though you can in some cases describe its tendency to resemble one or the other. – Niel de Beaudrap Aug 22 '12 at 0:00

1 I think it can be implemented in terms of a stack or queue. It doesn't change how things are taken off (LIFO or FIFO), it changes the order in which children are added (in this case, lowest degree first).
– SamM Aug 22 '12 at 6:57

@NieldeBeaudrap actually this is just a structure to show that for some graphs both traversals are the same. – Saeed Amiri Nov 8 '12 at 16:45
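A small sketch (Python; the graphs and the neighbour ordering are chosen only for illustration) that checks the claim for a path searched from an endpoint and a star searched from its center, with neighbours taken in a fixed list order:

```python
from collections import deque

def bfs_order(adj, start):
    seen, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def dfs_order(adj, start):
    seen, order, stack = set(), [], [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        order.append(u)
        for v in reversed(adj[u]):  # reversed so neighbours are expanded in list order
            if v not in seen:
                stack.append(v)
    return order

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a path, searched from an endpoint
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # a star, searched from the center
for g in (path, star):
    print(bfs_order(g, 0), dfs_order(g, 0))     # the two orders coincide for these graphs
```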
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256495237350464, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/30869/exercise-qft-and-cft?answertab=votes
# Exercise QFT and CFT

Consider the action functional $S[z;t_1,t_2]=\int_{t_1}^{t_2}[g(z,\bar{z})\dot{z}\dot{\bar{z}}]^{\frac{1}{2}}dt$ with $z(t)$ a complex path with end points $z_i=z(t_i),\; i=1,2$. $g(z,\bar{z})$ is a positive real function on $H=\{z\in C: Im(z)>0\}$. After some easy questions, the exercise asks me to determine (up to a multiplicative constant) the function $g(z,\bar{z})$ such that the action is symmetric under the transformation $z\rightarrow \gamma(z)= \frac{az+b}{cz+d}$, with $a,b,c,d \in R$. Can somebody help me or give me some hints?

-

## 1 Answer

If you took a course in GR or even remember the action for a point particle in SR, you should at least recognize the integral as the path length; the "function" $g$ you are looking for is a metric on the upper half plane, and the transformations are called Möbius transformations. This should give you enough search phrases, and you should find an expression for the metric on Wikipedia, for example.

- Thanks for the answer. In fact I recognized that $g$ is the metric, but I have no idea about the way to determine it. I simply determined the way it must transform under the Möbius transformation. – Gauge Jun 29 '12 at 11:14
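A hedged sketch of the computation the answer points to: the invariant choice is the hyperbolic (Poincaré) metric, $g(z,\bar z)\propto(\operatorname{Im}z)^{-2}$. Assuming $ad-bc>0$, the invariance can be checked directly:

$$d\gamma=\frac{(ad-bc)\,dz}{(cz+d)^2},\qquad \operatorname{Im}\gamma(z)=\frac{(ad-bc)\,\operatorname{Im}z}{|cz+d|^2},$$

$$\frac{\dot\gamma\,\dot{\bar\gamma}}{(\operatorname{Im}\gamma)^2} =\frac{(ad-bc)^2\,\dot z\,\dot{\bar z}/|cz+d|^4}{(ad-bc)^2\,(\operatorname{Im}z)^2/|cz+d|^4} =\frac{\dot z\,\dot{\bar z}}{(\operatorname{Im}z)^2},$$

so with $g(z,\bar z)=(\operatorname{Im}z)^{-2}$ the integrand $[g\,\dot z\dot{\bar z}]^{1/2}$, and hence the action, is unchanged.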
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201486706733704, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/2497/what-is-the-biggest-number-ever-used-in-a-mathematical-proof/181085
# What is the biggest number ever used in a mathematical proof?

Probably a proof (if any exist) that calls upon Knuth's up-arrow notation or Busy Beaver.

- 5 – Qiaochu Yuan Aug 15 '10 at 4:44

1 This should be community wiki. Perhaps closed – Casebash Aug 15 '10 at 6:34

1 Infinity $\infty$ is used in lots of proofs :) Anything bigger? – Pratik Deoghare Aug 16 '10 at 13:05

1 I once saw a programming contest along the following lines: Write a C program, under 5K bytes, that outputs the biggest number possible. Assume (contrary to fact) that C can handle arbitrarily large integers and that your program has unlimited computational resources (i.e. memory). The winning entries were amazing. – Frank Thorne Aug 10 '12 at 18:51

@Casebash: There is a unique answer to the question, so I do not see why it should be a community wiki. – user1729 Sep 19 '12 at 11:06

## 4 Answers

The mathematician Harvey Friedman observed a special finite form of Kruskal's Tree Theorem. Regarding this form, Friedman discusses the existence of a rapidly growing function he calls $TREE(n)$. The $TREE$ sequence begins $TREE(1)=1$ and $TREE(2)=3$, but $TREE(3)$ is a number so extremely large that a weak lower bound for it is $A(A(\ldots A(1)\ldots))$, where the number of $A$'s is $A(187196)$, and $A()$ is a version of Ackermann's function: $A(x) = 2\uparrow\uparrow\cdots\uparrow x$ with $x-1$ up-arrows (Knuth arrows). Whereas Graham's Number is $A^{64}(4)$, the above-mentioned lower bound is $A^{A(187196)}(1)$. As you can imagine, the $TREE$ function keeps on growing rather quickly. For a discussion of the hierarchy of fast-growing functions see here: http://en.wikipedia.org/wiki/Fast-growing_hierarchy There are other examples of numbers greater than Graham's Number, as can be seen here: http://en.wikipedia.org/wiki/Goodstein_function#Sequence_length_as_a_function_of_the_starting_value, although I'm not sure if this number is larger than Friedman's $TREE(3)$.

-

In one of Friedman's posts on the FOM mailing list, he mentions a number called SCG(13) that is far larger than TREE(3): http://www.cs.nyu.edu/pipermail/fom/2006-April/010362.html I couldn't find a lot of other information about it, though.

-

TREE[3] is much larger than the numbers derived from Goodstein sequences for any reasonable input. See: http://www.cs.nyu.edu/pipermail/fom/2006-March/010279.html The Goodstein function is upper bounded by ε₀, whereas the TREE function is lower bounded by the small Veblen ordinal.

-

BB(n) eventually surpasses any recursive (computable) function; it is itself not computable. Perhaps BB(1000) is already inexpressible in any existing notation. You can also form BB(BB(n)), or BB(BB(BB(...(BB(n))...))) with $n$ nested applications.

- 2 Correct, but the question asked for a specific number, used in a proof. Do you know of a proof where BB(1000) appears? – Rick Decker Sep 21 '12 at 18:30
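For readers unfamiliar with the up-arrow notation used in $A(x)$ above, here is a tiny illustrative implementation (Python). It is only practical for very small arguments, which is rather the point of the question:

```python
import sys
sys.setrecursionlimit(100000)

def up(a, n, b):
    """Knuth's a ↑^n b: one arrow is exponentiation, each extra arrow iterates the previous operator."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 1, 4))  # 2^4 = 16
print(up(2, 2, 4))  # 2↑↑4 = 2^2^2^2 = 65536
print(up(3, 2, 3))  # 3↑↑3 = 3^3^3 = 7625597484987
```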
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8794773817062378, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/21278/discrete-point-particles-stress-energy-tensor/21700
# Discrete point particles stress energy tensor

I am trying to solve an exercise in Sean Carroll's GR book "Spacetime and Geometry". Basically we need to derive the stress-energy tensor of a perfect fluid (i.e. $T^{\mu\nu}=(\rho +p)U^{\mu}U^{\nu} + p\eta^{\mu\nu}$) from the stress-energy tensor of a discrete set of particles (i.e. $T^{\mu\nu}=\sum_a \frac{p^{\mu}_a p^{\nu}_a}{p^0_a}\delta^{(3)}(\mathbf x - \mathbf x^{(a)})$), under the hypothesis of isotropy. I managed to get it for the $T^{00}$ component and the $T^{0i}$ components: by replacing $p^{\mu}$ by $p^0$ a trivial sum appears, giving the energy density for the 00-component and the momentum density for the 0i-components (vanishing by isotropy). But I am still struggling with the purely spatial part. I was thinking of substituting the sum by an integral; then the non-diagonal part vanishes by isotropy again. Could it be something like $\sum_a = \int d^3x \rho(x)$? Because then I need a relation that relates the density distribution $\rho$ and $p^{\mu}$ to the pressure (more exactly, the definition of the pressure from these two concepts).

-

## 1 Answer

There are two points I wish to highlight.

1) Simple substitution of the sum by an integral would not work and is not justified. However, one should switch from microscopic quantities to macroscopic ones by averaging over a 4-volume throughout which interparticle distances and times can be considered small. The macroscopic stress-energy tensor will then be:
$${\bf T}^{\mu\nu}=\dfrac{1}{\Delta V_4}\int_{\Delta V_4}T^{\mu\nu}d V=\dfrac{1}{\sqrt{-g} d^3x^i dx^0}\int_{\Delta V_4}T^{\mu\nu} \sqrt{-g} d^3x^i dx^0$$
Then (a) in $T^{\mu\nu}$ only the delta-functions depend on $x$, and (b) the metric determinant $g$ is a macroscopic quantity, is constant over the selected volume, and can also be taken out of the integral. One arrives then at:
$${\bf T}^{\mu\nu}=\dfrac{1}{d^3x^i dx^0}\sum_a\dfrac{p_a^\mu p_a^\nu}{p_a^0}\int_{\Delta V_4} \delta^{(3)}({\bf x}-{\bf x}^{(a)}) d^3 x^i dx^0 = \dfrac{1}{d^3x^i}\sum_a\dfrac{p_a^\mu p_a^\nu}{p_a^0}.$$
In the last expression the sum is taken over the particles whose world lines pass through $\Delta V_4$ (we ignore the fact that some particles could have left or entered the volume through its 3-boundary, as there are far fewer of them than particles inside the volume). Now the expression ${\bf T}^{\mu\nu}= \dfrac{1}{d^3x^i}\sum_a\dfrac{p_a^\mu p_a^\nu}{p_a^0}$ can be treated more comfortably.

2) The symmetry considerations themselves. Consider the components of the macroscopic tensor:

${\bf T}^{0 0} = \dfrac{1}{d^3x^i}\sum_a p_a^0 \equiv \rho$

${\bf T}^{i 0} = \dfrac{1}{d^3x^i}\sum_a p_a^i$. As the sum $\sum_a p_a^i$ of 3-vectors is taken over a macroscopic volume, the result should be a macroscopic 3-vector. However, if this vector were not zero, it would violate isotropy, which states that there exists no preferred direction. Hence ${\bf T}^{i 0} = 0$.

${\bf T}^{i j} = \dfrac{1}{d^3x^i}\sum_a\dfrac{p_a^i p_a^j}{p_a^0}$. As just before, the sum should produce a symmetric macroscopic 3-tensor of second order. But every symmetric 3-tensor is determined by its 3 eigenvectors. If the eigenvalues are non-degenerate, then there exist 3 preferred directions (the 3 eigenvectors); if the eigenvalues are singly degenerate, then there are 2 preferred directions, etc. No preferred direction corresponds to the case when the matrix has all eigenvalues equal, that is, when the matrix is proportional to the Kronecker delta.
The coefficient of proportionality is the pressure: ${\bf T}^{i j} \equiv P \delta^{ij}$. Expressing $\delta^{ij}$ as the spatial components of $\eta^{\mu\nu}+U^{\mu} U^{\nu}$ ($U^i$ is zero by symmetry considerations, and $U^0$ is hence equal to unity), and $T^{00}$ as $\rho\, U^0 U^0$, one arrives at the final expression for $T$.

- Thanks a lot for the 'rigorous' derivation. – toot Mar 1 '12 at 22:26
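A short consistency check of the last step (a standard kinetic-theory identification, not spelled out in the answer above): taking the trace of the spatial block identifies the pressure explicitly,

$${\bf T}^{ij}=P\,\delta^{ij}\quad\Longrightarrow\quad P=\frac{1}{3}\,\delta_{ij}{\bf T}^{ij}=\frac{1}{3\,d^3x^i}\sum_a\frac{|\vec p_a|^2}{p_a^0},$$

which behaves as expected in both limits: for non-relativistic particles $|\vec p_a|^2/p_a^0\approx m_a v_a^2$ (the ideal-gas pressure), while for ultra-relativistic particles $|\vec p_a|^2/p_a^0\approx p_a^0$, giving $P=\rho/3$.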
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.909314751625061, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/29649/list
## Return to Answer There might be many ways to prove a variety has rational singularities. I certainly agree with Zsolt's comment above that you should be careful with your notation: Kov\'acs' theorem refers to a $Y \to X$ and above you mention $X \subset Y$, so I'm assuming you are simply abusing notation. With regards to your initial question, I would try to put $R^i \phi_* \omega_Y$ into a long exact sequence (so it depends on what $Y$ is mapping to $X$), see #3 below. Anyway, here are some things I would try for a subvariety $X$ in $Y$ (obtained in some way), some of which don't use the subvariety structure. 2. Is $O_X$ (locally) a summand of something with rational singularities? (Apply Boutot's theorem.) 3. Sandor's result, but you may as well try with a resolution $\pi : X' \to X$ and try to show that $\pi_* \omega_{X'} = \omega_X$ (at which point this is due to Kempf, not Kovacs; I assume you have already shown that $X$ is CM), which is easier to compute. If you have some other $Y$ with rational singularities mapping to it in some natural way, then you might be in business via Sandor's theorem. Of course, the quickest hope for computing some higher cohomology like this is sticking it in some long exact sequence. 4. If $X$ is a divisor, you could try to show that the pair $(Y, X)$ is purely log terminal (see also Lazarsfeld's book, and adjoint ideals). In this same direction, if $X$ is NOT a divisor, you could try to show that $X$ is a minimal log canonical center of some log canonical pair and then apply Kawamata's subadjunction theorem. Of course, you could also just try to show that $X$ is log terminal directly. 5. If you have specific equations, you could also try some reduction to characteristic p techniques (like things related to F-splitting and F-rationality; some of these are very effective if you have explicit equations). Even without specific equations, some of these techniques still might be useful. 6. I suppose you could also do some Bertini type tricks if somehow this subvariety is sufficiently general (for example, a general section of a base point free linear system of something with rational singularities still has rational singularities). 7. You can also see this question: http://mathoverflow.net/questions/23091/is-there-an-obvious-way-for-showing-singularities-are-quotient/23137#23137 8. Does your variety have a small resolution ($Y \to X$)? If it is also Cohen-Macaulay and normal, then it has rational singularities. 9. Does your variety have a Cartier divisor $D$ on it with log canonical (or maybe Du Bois) singularities such that $X \setminus D$ has log terminal singularities (or maybe is smooth)? Then $X$ can be shown to have rational singularities. Some things like this appeared in a paper of Koll\'ar and Shepherd-Barron (also see the related work of Karu as well as a paper of mine on Du Bois singularities). That's all I can think of right now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9599778652191162, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=134791
Physics Forums ## Dissociation of H2O due to Photon energy. I can't figure out how to do this question: Given that energetic photons are striking Venus at a rate of about 10^30 per second and the age of the solar system is about 10^17 seconds, estimate the total mass of water that could have been lost from Venus since it formed. The mass of a water molecule is 18 x 10^-27 kg. Any help will be much appreciated. Edit: all I have figured out thus far is that the number of photons striking, multiplied by the charge of an electron, gives the energy transferred to the H2O, but how do I figure out how much is needed to dissociate the H2O and in turn how much H2O will be lost because of it? Recognitions: Homework Help Science Advisor You need to take another look at the energy of a photon. Unless you have been given specific information, you will have to make some assumptions about the energy of a typical photon, and how much of its energy can be absorbed by the water (all of it, perhaps?). You will then need to know how much energy a water molecule would need to escape the gravity of Venus. Interesting old post. Where would one find information on how much energy a "typical" photon holds? Are there non-typical photons? Admin The question asked for "energetic" photons - my bet is that they meant "those able to shoot water molecules out of the Venus gravity field" and the whole question is just an exercise in dimensional analysis. Edit: no, it must be more complicated; unless I made some mistake in my calculations, this approach gives an absurd result (too much water). By absurd, was it on the order of $$10^{21}$$ kg? The Earth has $$1.36 * 10^{21}$$ kg of water, so having an answer that is similar or somewhat larger seems to make sense considering that there is hardly any water on Venus - 20 ppm in the atmosphere. This also sets an upper bound: the water could have been lost a long time ago, but the answer estimates the maximum amount of water Venus could have had when it was first formed so as to have no water today. Rhetorical question: if we fast forward to 1 trillion years in the future (assuming that we can go that far), our answer would be $$10^3$$ larger, but would it be any more or less unreasonable? On the other hand, the title refers to dissociation rather than ejection of water. It is possible that some fraction would have recombined back into water. Admin Hm, somehow I managed to miss the result by $$10^3$$ and got $$10^{24}$$ kg - comparable to the Earth's mass. $$10^{21}$$ looks much better.
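A minimal sketch of the estimate the thread converges on, assuming (as the posters suggest) that the problem only wants dimensional analysis, i.e. that each energetic photon removes at most one water molecule; the numbers are the ones quoted in the problem statement.

```python
# Rough estimate along the lines discussed above, assuming (as the problem
# seems to intend) that each energetic photon removes at most one water
# molecule from Venus. All numbers are the ones quoted in the thread.
photon_rate = 1e30        # energetic photons per second striking Venus
age = 1e17                # age of the solar system in seconds
m_water = 18e-27          # mass of one water molecule in kg

mass_lost = photon_rate * age * m_water
print(f"maximum water lost: {mass_lost:.1e} kg")   # ~1.8e21 kg, close to Earth's ocean mass
```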
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548373222351074, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/26461/are-the-stiefel-whitney-classes-of-the-tangent-bundle-determined-by-the-mod-2-coh
## Are the Stiefel-Whitney classes of the tangent bundle determined by the mod 2 cohomology? Let $G=\mathbb{Z}/2\mathbb{Z}$. Let $f\colon L \to N$ be a smooth map of connected smooth closed $n$-dimensional manifolds such that the induced map $f^* \colon H^*(N,G) \to H^*(L,G)$ is an isomorphism. Question: Are the pullbacks of the Stiefel-Whitney classes of the tangent bundle of $N$ the Stiefel-Whitney classes of the tangent bundle of $L$? This is in fact true for the first Stiefel-Whitney class by considering coverings and degrees, but what about the higher degree classes? Motivation: This came up because (relative) spin is important in defining Floer homology with $\mathbb{Z}$ coefficients. So I am in fact mostly interested in the following sub-question. Question: In particular, what about the second Stiefel-Whitney class in the case where both $N$ and $L$ are also assumed to be oriented? And if the answer is negative: what extra conditions do I need to make it positive? The idea is that I a priori have to use $G$ coefficients, but can prove that it is a $G$-cohomology equivalence, and want to use that to start the argument over again with other coefficients; but for that I need this property of the second Stiefel-Whitney class. This sub-question and the relation to Floer homology are related to orientations in real $K$-theory and delooping in the following sense: take a map $h\colon X \to U/O$; by delooping we get a map $\Omega h \colon \Omega X \to \Omega U/O \simeq \mathbb{Z}\times BO$ which classifies a virtual bundle over the loop space of $X$. This bundle is oriented iff the original map composed with the canonical map $U/O \to BO$ classifies a virtual $0$-dimensional bundle with vanishing second Stiefel-Whitney class. This is the main point of why orientations in Floer homology are intimately linked with spin! In the case of a Lagrangian submanifold $L\subset T^*N$ the difference of the tangent bundles precisely defines such a map $L \to U/O$ ($U(n)/O(n)$ classifies Lagrangians in $\mathbb{R}^{2n}$) such that the composition to $BO$ classifies the virtual bundle $TN-TL$. So in fact you may add this lifting property as an extra condition to the sub-question if you like, and then I would lose no generality. I believe that this condition implies that all the relative Pontryagin classes vanish, which may be helpful. ADDED: in light of the answer, all this motivation and these extra possible assumptions are neither important nor relevant for the actual question. - "Floer homology is oriented iff you have spin!" Um... Relative pin structures are sufficient to determine coherent orientations, but I'm not sure I'd care to formulate a converse. Two disjoint Lagrangians have a natural (but trivial!) Floer complex over the integers. – Tim Perutz May 30 2010 at 18:48 By the way, the (excellent!) idea that one should prove that nearby Lagrangians are mod 2 cohomology equivalent and hence relatively spin is I believe one that Fukaya-Seidel-Smith were aware of when they wrote their papers on nearby Lagrangians. They couldn't use it because they invoke a theorem that requires char $\neq 2$. But it comes up at the very end of Abouzaid's preprint arxiv.org/pdf/1005.0358.
– Tim Perutz May 30 2010 at 19:04 @Tim: The Floer homology I am referring to is the one for the action of a given Hamiltonian on a symplectic manifold $M$, not the intersection Floer homology for two Lagrangians, which I admit is not obvious from the above since I am explicitly considering a Lagrangian in $T^*N$ (this is also what Viterbo did in "Exact Lagrange submanifolds, periodic orbits and the cohomology of the free loop space", and I am working in the same spirit); in this case the Floer homology can be given coherent orientations without employing any tricks iff $M$ is spin, – Thomas Kragh May 31 2010 at 7:51 and in the case Viterbo considers, the generating functions are related by an oriented bundle iff $L\to N$ is relative spin. Thanks for the reference, I had not seen that one; I am looking forward to seeing if the passage from homology equivalence to homotopy equivalence can work in my framework as well. – Thomas Kragh May 31 2010 at 7:56 I realize now that something is not quite right about this statement, because Floer homology can be oriented in the case of $T^*N$ even if $N$ is not spin. I am not sure anymore what precisely the general statement is, but at least the orientation is intimately linked to spin, because of what I write, and the relative spin statement about generating functions is valid in the Viterbo case. (I have changed the statement.) – Thomas Kragh May 31 2010 at 8:05 show 4 more comments
## 3 Answers The answer to the question is positive, due to Wu's formula. See e.g. Milnor-Stasheff, Characteristic Classes, Lemma 11.13 and Theorem 11.14. In fact, all one needs to compute the Stiefel-Whitney classes of a smooth compact manifold (orientable or not) is the cohomology mod 2 (as an algebra) and the action of the Steenrod algebra on it. Both structures are preserved under cohomology isomorphisms induced by continuous maps. - How does this relate to exotic differentiable structures on the sphere? It seems that none of the things you mention as sufficient to calculate S.-W. classes depend on the differentiable structure at all. Are S.-W. classes of the tangent bundle not enough to detect weird smoothness structures? How is the situation different for Pontryagin classes (which is I think what Milnor used to distinguish them)? – Ilya Grigoriev May 30 2010 at 18:29 Ilya, homotopy spheres are stably parallelizable. Milnor distinguished exotic 7-spheres via Pontryagin numbers of bounding 8-manifolds. – Tim Perutz May 30 2010 at 18:51 4 Ilya -- yes, the (integral) Pontrjagin classes depend on the smooth structure, but the Stiefel-Whitney classes don't and neither do the rational Pontrjagin classes. – algori May 30 2010 at 18:57 The Stiefel-Whitney classes don't even depend on the topological structure: they are preserved by homotopy equivalences between closed manifolds. This follows from the positive answer to the question asked. Another perspective on this is that these characteristic classes can be defined for spherical fibrations; you don't need a vector bundle (or even a topological disk bundle or microbundle). They can be defined as the classes that correspond by the Thom isomorphism to the Steenrod squares of the Thom class (and you need only a spherical fibration to get a Thom class and Thom isomorphism). – Tom Goodwillie Jul 20 2010 at 20:25 1 But my last comment does not apply at all to the rational Pontrjagin classes.
You can give examples of homotopy-equivalent smooth closed manifolds where one is stably parallelizable and the other has nontrivial Pontrjagin classes: the total spaces of the sphere bundles over two vector bundles over a sphere -- one bundle trivial, the other corresponding to a nontrivial element of the kernel of $\pi_{4k}(BO_j)\to \pi_{4k}(BG_j)$. – Tom Goodwillie Jul 20 2010 at 20:32 show 3 more comments The most conceptual way of understanding the relation between the mod 2 Wu and Stiefel-Whitney classes of a manifold and the action of the Steenrod algebra on the mod 2 cohomology is to use the homotopy theory of Poincare duality spaces and the Spivak normal fibration, and also the chain homotopy theory of symmetric Poincare complexes and the normal chain bundle expounded in my 1980 paper The algebraic theory of surgery (Part I, Part II). A map $f:L\to N$ of $n$-dimensional manifolds which induces isomorphisms in $Z_2$-coefficient cohomology also induces a chain equivalence of $n$-dimensional symmetric Poincare complexes over $Z_2$. Such a chain equivalence automatically preserves the Spivak normal chain bundles. The mod 2 Wu and Stiefel-Whitney classes of the manifolds are preserved by $f$ because they depend only on the underlying chain homotopy structure. It is also worth reminding ourselves that Atiyah's 1960 paper Thom complexes established the $S$-duality between the Thom space of the normal bundle of a manifold $X$ and $X_+=X \cup {*}$, and so proved a conjecture of Milnor and Spanier: the stable fibre homotopy type of the tangent sphere bundle of a differentiable manifold $X$ depends only on the homotopy type of $X$. - There was a fundamental error in my answer. The error was in misunderstanding the naturality of the $w_i$. $f: L \to N$ inducing an isomorphism in cohomology does not imply anything about the induced map of the tangent bundles. This is where I made my fundamental error. Please see the comments for details or look at earlier versions of this answer. I had hoped that there would be a more "axiomatic" proof in the sense of Milnor and Stasheff. If anyone comes up with one, please feel free to put it here. Thanks to Tom and Dan for the comments! - This doesn't answer the question, because a smooth map $f:N\to L$ need not give a bundle isomorphism, even if it induces isomorphisms on cohomology. – Tom Goodwillie Jul 20 2010 at 22:24 Are you saying that f being an isomorphism in mod 2 cohomology implies that `$f^*(\tau_N) = \tau_L$`? – Dan Ramras Jul 20 2010 at 22:27 (My comment is the same as Tom's) – Dan Ramras Jul 20 2010 at 22:28 I did not mean to imply that an isomorphism in cohomology implied that the bundles were isomorphic. My mistake. Do the equations I have above actually require that the map of bundles be an isomorphism of bundles? I believe I only need the bundle map. If this is wrong, is there a way it can be fixed? – Sean Tilson Jul 20 2010 at 23:56 Any map $X\to Y$ of spaces is covered by a bundle map, for any vector bundles on $X$ and $Y$. (Zero map.) Maybe some books say bundle map when they mean fiberwise isomorphism? – Tom Goodwillie Jul 21 2010 at 1:59 show 2 more comments
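For reference, here is the statement behind the accepted answer (Wu's formula, as in Milnor-Stasheff, Lemma 11.13 and Theorem 11.14); this is a standard formulation written out here for convenience and is not part of the original thread. For a closed $n$-manifold $M$, the Wu classes $v_k \in H^k(M;\mathbb{Z}/2)$ are characterized by $$\langle v_k \smile x, [M]\rangle = \langle Sq^k(x), [M]\rangle \quad \text{for all } x \in H^{n-k}(M;\mathbb{Z}/2),$$ and Wu's formula expresses the total Stiefel-Whitney class of the tangent bundle as $$w(TM) = Sq(v), \qquad \text{i.e.} \qquad w_k = \sum_{i+j=k} Sq^i(v_j).$$ Since a map inducing an isomorphism on mod 2 cohomology preserves both the cup product and the Steenrod squares, it preserves the Wu classes and hence the Stiefel-Whitney classes, which is exactly the claim in the first answer.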
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9104366898536682, "perplexity_flag": "head"}
http://www.math.uah.edu/stat/applets/SecretaryExperiment.html
### The Secretary Experiment [Applet display: two rows of 25 balls showing the relative and absolute ranks of the candidates, and a distribution graph.] #### Description In the secretary problem, there are $$n$$ candidates, totally ranked from best to worst, with no ties. The candidates arrive sequentially, in random order. We can not observe the absolute ranks of the candidates as they arrive, only the relative ranks. Our goal is to choose the best candidate; any other outcome is failure. In the secretary experiment, the candidates are represented as balls. For $$k \in \{1, 2, \ldots, n\}$$ strategy $$k$$ is to let the first $$k - 1$$ candidates go by, and then pick the first candidate (if she exists) who is better than all previous candidates. If this candidate does not exist, we must pick the last candidate (regardless of rank). The first row of balls shows the relative ranks of the candidates, up to the candidate that is selected. The second row of balls shows all candidates with their absolute ranks. On each run, the number (arrival order) of the selected candidate $$X$$, the number of the best candidate $$Y$$, and the indicator variable of a win $$W$$ are recorded. The number of candidates $$n$$ and the strategy parameter $$k$$ can be varied with input controls.
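For readers without access to the applet, here is a small stand-alone simulation of the experiment described above (an illustrative sketch, not the applet's actual code); the function and variable names are made up for this example.

```python
import random

# Run the "skip the first k-1 candidates, then take the first record" strategy
# and estimate the probability W of selecting the best of n candidates.
def secretary_trial(n, k, rng=random):
    ranks = list(range(1, n + 1))          # 1 = best candidate
    rng.shuffle(ranks)                     # candidates arrive in random order
    best_seen = min(ranks[:k - 1]) if k > 1 else float("inf")
    for j in range(k - 1, n):
        if ranks[j] < best_seen:           # first candidate better than all previous
            return ranks[j] == 1
    return ranks[-1] == 1                  # forced to take the last candidate

n, runs = 25, 100_000
for k in (1, 5, 10, 15):
    wins = sum(secretary_trial(n, k) for _ in range(runs))
    print(f"k = {k:2d}: P(win) ~ {wins / runs:.3f}")   # peaks near k ~ n/e + 1 at roughly 1/e ~ 0.37
```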
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8526157140731812, "perplexity_flag": "middle"}
http://en.wikiversity.org/wiki/Continuum_mechanics/Thermodynamics_of_continua
# Continuum mechanics/Thermodynamics of continua From Wikiversity ## Governing Equations The equations that govern the thermomechanics of a solid include the balance laws for mass, momentum, and energy. Kinematic equations and constitutive relations are needed to complete the system of equations. Physical restrictions on the form of the constitutive relations are imposed by an entropy inequality that expresses the second law of thermodynamics in mathematical form. The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes: 1. the physical quantity itself flows through the surface that bounds the volume, 2. there is a source of the physical quantity on the surface of the volume, or/and, 3. there is a source of the physical quantity inside the volume. Let $\Omega$ be the body (an open subset of Euclidean space) and let $\partial \Omega$ be its surface (the boundary of $\Omega$). Let the motion of material points in the body be described by the map $\mathbf{x} = \boldsymbol{\varphi}(\mathbf{X}) = \mathbf{x}(\mathbf{X})$ where $\mathbf{X}$ is the position of a point in the initial configuration and $\mathbf{x}$ is the location of the same point in the deformed configuration. Recall that the deformation gradient ($\boldsymbol{F}$) is given by $\boldsymbol{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}} = \boldsymbol{\nabla}_{\circ} \mathbf{x} ~.$ ### Balance Laws Let $f(\mathbf{x},t)$ be a physical quantity that is flowing through the body. Let $g(\mathbf{x},t)$ be sources on the surface of the body and let $h(\mathbf{x},t)$ be sources inside the body. Let $\mathbf{n}(\mathbf{x},t)$ be the outward unit normal to the surface $\partial \Omega$. Let $\mathbf{v}(\mathbf{x},t)$ be the velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface $\partial \Omega$ is moving be $u_n$ (in the direction $\mathbf{n}$). Then, balance laws can be expressed in the general form $\cfrac{d}{dt}\left[\int_{\Omega} f(\mathbf{x},t)~\text{dV}\right] = \int_{\partial \Omega } f(\mathbf{x},t)[u_n(\mathbf{x},t) - \mathbf{v}(\mathbf{x},t)\cdot\mathbf{n}(\mathbf{x},t)]~\text{dA} + \int_{\partial \Omega } g(\mathbf{x},t)~\text{dA} + \int_{\Omega} h(\mathbf{x},t)~\text{dV} ~.$ Note that the functions $f(\mathbf{x},t)$, $g(\mathbf{x},t)$, and $h(\mathbf{x},t)$ can be scalar valued, vector valued, or tensor valued - depending on the physical quantity that the balance equation deals with. 
It can be shown that the balance laws of mass, momentum, and energy can be written as #### Balance laws in spatial description ${ \begin{align} \dot{\rho} + \rho~\boldsymbol{\nabla} \cdot \mathbf{v} & = 0 & & \qquad\text{Balance of Mass} \\ \rho~\dot{\mathbf{v}} - \boldsymbol{\nabla} \cdot \boldsymbol{\sigma} - \rho~\mathbf{b} & = 0 & & \qquad\text{Balance of Linear Momentum} \\ \boldsymbol{\sigma} & = \boldsymbol{\sigma}^T & & \qquad\text{Balance of Angular Momentum} \\ \rho~\dot{e} - \boldsymbol{\sigma}:(\boldsymbol{\nabla}\mathbf{v}) + \boldsymbol{\nabla} \cdot \mathbf{q} - \rho~s & = 0 & & \qquad\text{Balance of Energy.} \end{align} }$ In the above equations $\rho(\mathbf{x},t)$ is the mass density (current), $\dot{\rho}$ is the material time derivative of $\rho$, $\mathbf{v}(\mathbf{x},t)$ is the particle velocity, $\dot{\mathbf{v}}$ is the material time derivative of $\mathbf{v}$, $\boldsymbol{\sigma}(\mathbf{x},t)$ is the Cauchy stress tensor, $\mathbf{b}(\mathbf{x},t)$ is the body force density, $e(\mathbf{x},t)$ is the internal energy per unit mass, $\dot{e}$ is the material time derivative of $e$, $\mathbf{q}(\mathbf{x},t)$ is the heat flux vector, and $s(\mathbf{x},t)$ is an energy source per unit mass. With respect to the reference configuration, the balance laws can be written as #### Balance laws in material description ${ \begin{align} \rho~\det(\boldsymbol{F}) - \rho_0 &= 0 & & \qquad \text{Balance of Mass} \\ \rho_0~\ddot{\mathbf{x}} - \boldsymbol{\nabla}_{\circ}\cdot\boldsymbol{P}^T -\rho_0~\mathbf{b} & = 0 & & \qquad \text{Balance of Linear Momentum} \\ \boldsymbol{F}\cdot\boldsymbol{P}^T & = \boldsymbol{P}\cdot\boldsymbol{F}^T & & \qquad \text{Balance of Angular Momentum} \\ \rho_0~\dot{e} - \boldsymbol{P}^T:\dot{\boldsymbol{F}} + \boldsymbol{\nabla}_{\circ}\cdot\mathbf{q} - \rho_0~s & = 0 & & \qquad\text{Balance of Energy.} \end{align} }$ In the above, $\boldsymbol{P}$ is the first Piola-Kirchhoff stress tensor, and $\rho_0$ is the mass density in the reference configuration. 
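As a small symbolic sanity check of the spatial balance of mass (the continuity equation above), one can plug in a simple made-up flow and density field and confirm that the equation is satisfied identically; the uniform-expansion velocity field below is an arbitrary choice used for illustration only.

```python
import sympy as sp

# Symbolic check of the balance of mass (continuity equation) for a simple
# made-up flow: uniform expansion v = H*x with H constant. The density field
# below is chosen so that the equation should hold identically.
x, y, z, t, H, rho0 = sp.symbols('x y z t H rho_0', positive=True)

v = sp.Matrix([H * x, H * y, H * z])              # spatial velocity field
rho = rho0 * sp.exp(-3 * H * t)                   # spatially uniform density

# material time derivative: d(rho)/dt + v . grad(rho)
rho_dot = sp.diff(rho, t) + v.dot(sp.Matrix([sp.diff(rho, s) for s in (x, y, z)]))
div_v = sum(sp.diff(v[i], s) for i, s in enumerate((x, y, z)))

print(sp.simplify(rho_dot + rho * div_v))         # prints 0: mass balance satisfied
```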
The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by $\boldsymbol{P} = J~\boldsymbol{\sigma}\cdot\boldsymbol{F}^{-T} ~\text{where}~ J = \det(\boldsymbol{F})$ We can alternatively define the nominal stress tensor $\boldsymbol{N}$ which is the transpose of the first Piola-Kirchhoff stress tensor such that $\boldsymbol{N} = \boldsymbol{P}^T = J~\boldsymbol{F}^{-1}\cdot\boldsymbol{\sigma} ~.$ Then the balance laws become ${ \begin{align} \rho~\det(\boldsymbol{F}) - \rho_0 &= 0 & & \qquad \text{Balance of Mass} \\ \rho_0~\ddot{\mathbf{x}} - \boldsymbol{\nabla}_{\circ}\cdot\boldsymbol{N} -\rho_0~\mathbf{b} & = 0 & & \qquad \text{Balance of Linear Momentum} \\ \boldsymbol{F}\cdot\boldsymbol{N} & = \boldsymbol{N}^T\cdot\boldsymbol{F}^T & & \qquad \text{Balance of Angular Momentum} \\ \rho_0~\dot{e} - \boldsymbol{N}:\dot{\boldsymbol{F}} + \boldsymbol{\nabla}_{\circ}\cdot\mathbf{q} - \rho_0~s & = 0 & & \qquad\text{Balance of Energy.} \end{align} }$ Keep in mind that: The gradient and divergence operators are defined such that $\boldsymbol{\nabla} \mathbf{v} = \sum_{i,j = 1}^3 \frac{\partial v_i}{\partial x_j}\mathbf{e}_i\otimes\mathbf{e}_j = v_{i,j}\mathbf{e}_i\otimes\mathbf{e}_j ~;~~ \boldsymbol{\nabla} \cdot \mathbf{v} = \sum_{i=1}^3 \frac{\partial v_i}{\partial x_i} = v_{i,i} ~;~~ \boldsymbol{\nabla} \cdot \boldsymbol{S} = \sum_{i,j=1}^3 \frac{\partial S_{ij}}{\partial x_j}~\mathbf{e}_i = \sigma_{ij,j}~\mathbf{e}_i ~.$ where $\mathbf{v}$ is a vector field, $\boldsymbol{S}$ is a second-order tensor field, and $\mathbf{e}_i$ are the components of an orthonormal basis in the current configuration. Also, $\boldsymbol{\nabla}_{\circ} \mathbf{v} = \sum_{i,j = 1}^3 \frac{\partial v_i}{\partial X_j}\mathbf{E}_i\otimes\mathbf{E}_j = v_{i,j}\mathbf{E}_i\otimes\mathbf{E}_j ~;~~ \boldsymbol{\nabla}_{\circ}\cdot\mathbf{v} = \sum_{i=1}^3 \frac{\partial v_i}{\partial X_i} = v_{i,i} ~;~~ \boldsymbol{\nabla}_{\circ}\cdot\boldsymbol{S} = \sum_{i,j=1}^3 \frac{\partial S_{ij}}{\partial X_j}~\mathbf{E}_i = S_{ij,j}~\mathbf{E}_i$ where $\mathbf{v}$ is a vector field, $\boldsymbol{S}$ is a second-order tensor field, and $\mathbf{E}_i$ are the components of an orthonormal basis in the reference configuration. The inner product is defined as $\boldsymbol{A}:\boldsymbol{B} = \sum_{i,j=1}^3 A_{ij}~B_{ij} = A_{ij}~B_{ij} ~.$ ### The Clausius-Duhem Inequality The Clausius-Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved. Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, and an internal entropy density per unit mass ($\eta$) in the region of interest. Let $\Omega$ be such a region and let $\partial \Omega$ be its boundary. Then the second law of thermodynamics states that the rate of increase of $\eta$ in this region is greater than or equal to the sum of that supplied to $\Omega$ (as a flux or from internal sources) and the change of the internal entropy density due to material flowing in and out of the region. Let $\partial \Omega$ move with a velocity $u_n$ and let particles inside $\Omega$ have velocities $\mathbf{v}$. 
Let $\mathbf{n}$ be the unit outward normal to the surface $\partial \Omega$. Let $\rho$ be the density of matter in the region, $\bar{q}$ be the entropy flux at the surface, and $r$ be the entropy source per unit mass. Then the entropy inequality may be written as $\cfrac{d}{dt}\left(\int_{\Omega} \rho~\eta~\text{dV}\right) \ge \int_{\partial \Omega} \rho~\eta~(u_n - \mathbf{v}\cdot\mathbf{n})~\text{dA} + \int_{\partial \Omega} \bar{q}~\text{dA} + \int_{\Omega} \rho~r~\text{dV} ~.$ The scalar entropy flux can be related to the vector flux at the surface by the relation $\bar{q} = -\boldsymbol{\psi}(\mathbf{x})\cdot\mathbf{n}$. Under the assumption of incrementally isothermal conditions, we have $\boldsymbol{\psi}(\mathbf{x}) = \cfrac{\mathbf{q}(\mathbf{x})}{T} ~;~~ r = \cfrac{s}{T}$ where $\mathbf{q}$ is the heat flux vector, $s$ is an energy source per unit mass, and $T$ is the absolute temperature of a material point at $\mathbf{x}$ at time $t$. We then have the Clausius-Duhem inequality in integral form: ${ \cfrac{d}{dt}\left(\int_{\Omega} \rho~\eta~\text{dV}\right) \ge \int_{\partial \Omega} \rho~\eta~(u_n - \mathbf{v}\cdot\mathbf{n})~\text{dA} - \int_{\partial \Omega} \cfrac{\mathbf{q}\cdot\mathbf{n}}{T}~\text{dA} + \int_{\Omega} \cfrac{\rho~s}{T}~\text{dV} ~. }$ We can show that the entropy inequality may be written in differential form as ${ \rho~\dot{\eta} \ge - \boldsymbol{\nabla} \cdot \left(\cfrac{\mathbf{q}}{T}\right) + \cfrac{\rho~s}{T} ~. }$ In terms of the Cauchy stress and the internal energy, the Clausius-Duhem inequality may be written as Clausius-Duhem inequality ${ \rho~(\dot{e} - T~\dot{\eta}) - \boldsymbol{\sigma}:\boldsymbol{\nabla}\mathbf{v} \le - \cfrac{\mathbf{q}\cdot\boldsymbol{\nabla} T}{T} ~. }$ ### References 1. T. W. Wright. (2002) The Physics and Mathematics of Adiabatic Shear Bands. Cambridge University Press, Cambridge, UK. 2. R. C. Batra. (2006) Elements of Continuum Mechanics. AIAA, Reston, VA. 3. G. A. Maugin. (1999) The Thermomechanics of Nonlinear Irreversible Behaviors: An Introduction. World Scientific, Singapore. 4. M. E. Gurtin. (1981) An Introduction to Continuum Mechanics. Academic Press, New York.
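To make the stress measures introduced earlier concrete, here is a small numerical sketch (added for illustration; the deformation gradient and Cauchy stress below are arbitrary made-up values). It only exercises the algebraic relations stated above: $\boldsymbol{P} = J\,\boldsymbol{\sigma}\cdot\boldsymbol{F}^{-T}$, $\boldsymbol{N} = \boldsymbol{P}^T$, and the material form of the balance of angular momentum $\boldsymbol{F}\cdot\boldsymbol{P}^T = \boldsymbol{P}\cdot\boldsymbol{F}^T$.

```python
import numpy as np

# Numerical illustration of the stress measures defined above, for an
# arbitrary (made-up) deformation gradient F and symmetric Cauchy stress sigma.
rng = np.random.default_rng(1)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # deformation gradient
A = rng.standard_normal((3, 3))
sigma = 0.5 * (A + A.T)                             # symmetric Cauchy stress

J = np.linalg.det(F)
P = J * sigma @ np.linalg.inv(F).T                  # first Piola-Kirchhoff stress
N = P.T                                             # nominal stress

# Balance of angular momentum in material form: F . P^T is symmetric and equals P . F^T.
print(np.allclose(F @ P.T, (F @ P.T).T))            # True
print(np.allclose(F @ P.T, P @ F.T))                # True
```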
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 76, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8767645955085754, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/harmonic-analysis+harmonic-functions
# Tagged Questions 2answers 103 views ### Mean Value Property of Harmonic Function on a Square A friend of mine presented me the following problem a couple days ago: Let $S$ in $\mathbb{R}^2$ be a square and $u$ a continuous harmonic function on the closure of $S$. Show that the average of ... 1answer 24 views ### Extracting Harmonic series components I have a number which is made up of a Harmonic series. 1/2 + 1/3 + 1/4 etc. Some of the components may not be in the number.. 1/2 + 1/7 + 1/11 etc. Is it possible to recover the individual ... 2answers 47 views ### Harmonic function, existence of a constant May i ask you for a little help about a problem with harmonic function? It seems to be not that difficult, in a way even intiutively obvious but i don't really know how to show this explicitly. We ... 1answer 109 views ### Simply connected domain and harmonic function Let $\Omega$ be a simply connected domain that is properly contained in $\mathbb C$, and $u(x,y)$ is harmonic on the unit disk $\mathbb D$, then there is a funtion $f(z)$, that is one-one and ... 1answer 158 views ### Laplace equation Dirichlet problem on punctured unit ball. Let $\Omega = \{ x \in \mathbb{R}^n: 0<|x|<1 \}$ and consider the Dirichlet problem \begin{align} \Delta u &= 0 \\ u(0) &= 1 \\ u &= 0 ~~~\text{if} ~~|x|=1 \end{align} By considering ... 1answer 82 views ### Is this function a subharmonic function? Does anyone know, is $h\left(z,w\right):=\frac{\left|zw\right|}{\left|z\right|+\left|w\right|}$ for $z$ and $w$ in the unit disk $\mathbb{D}$ of the complex plane a (pluri)subharmonic function? ... 1answer 96 views ### Dirichlet Problem: Uniqueness of solution Let $u$ be the solution to a Dirichlet Problem on a bounded open domain $D \subset \Bbb R^n$. Is the uniqueness of $u$ guaranteed by the maximum principle or by the smoothness of the boundary of $D$? ... 1answer 53 views ### Dirichlet problem: Obtaining the harmonic measure through Riesz representation theorem For the Dirichlet problem on a bounded open domain $D \subset \Bbb R^n$ $$\Delta u=0, \text{ on } D, \\ \left. u\right|_{\partial D}=f \in C\left( \partial D\right).$$ With a fix $x$ in $D$, an ... 0answers 81 views ### Dirichlet Problem: Example where the Green function is not the Poisson kernel Give an example of a Dirichlet problem where the Green function is not the Poisson kernel. For a bounded open domain $D$ with a sufficiently smooth boundary and $f \in C\left(\partial D \right)$, the ... 1answer 77 views ### Dirichlet problem: Is the Poisson Integral always a solution? Let $f$ be continuous on the sufficiently smooth boundary $\partial D$ of a domain $D \subset \Bbb R^n$. Is the Poisson integral of $f$, Pf(x)=\int_{\partial D} f(t) ... 1answer 112 views ### Harmonic function with condition on part of its boundary Suppose $u$ is harmonic in the interior of the unit square $0 \leq x \leq 1$, $0\leq y\leq1$. Suppose furthermore that $u$ and its first derivatives continuously extend to the bottom side \$0\leq x ... 0answers 55 views ### Suggestion for a project on Harmonic measure and Fourier analysis I have a course project on harmonic measure and Fourier analysis. The goal is to give a presentation on a part of harmonic measure theory which relates to Fourier analysis. Harmonic measure is a vast ... 0answers 55 views ### Show that function is a constant Let $\phi \in L^2(S^{n})$. Let $f=\phi^2$ and let $f_j^m$ be a Fourier coefficients of $f$. Help me please to show that if $$\sum_{j,i}c_jf^m_jY^i_j=\phi,$$ then $f=constant$. 
Here $Y_j^i$ is the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8874155879020691, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/4071/popular-depictions-of-electromagnetic-wave-is-there-an-error/4090
# Popular depictions of electromagnetic wave: is there an error? Here are some depictions of electromagnetic wave, similar to the depictions in other places: Isn't there an error? It is logical to presume that the electric field should have maximum when magnetic field is at zero and vise versa, so that there is no moment when the both vectors are zero at the same time. Otherwise one comes to a conclusion that the total energy of the system becomes zero, then grows to maximum, then becomes zero again which contradicts the conservation law. - ## 2 Answers The depictions you're seeing are correct, the electric and magnetic fields both reach their amplitudes and zeroes in the same locations. Rafael's answer and certain comments on it are completely correct; energy conservation does not require that the energy density be the same at every point on the electromagnetic wave. The points where there is no field do not carry any energy. But there is never a time when the fields go to zero everywhere. In fact, the wave always maintains the same shape of peaks and valleys (for an ideal single-frequency wave in a perfect classical vacuum), so the same amount of energy is always there. It just moves. To add to Rafael's excellent answer, here's an explicit example. Consider a sinusoidal electromagnetic wave propagating in the $z$ direction. It will have an electric field given by $$\mathbf{E}(\mathbf{r},t) = E_0\hat{\mathbf{x}}\sin(kz - \omega t)$$ Take the curl of this and you get $$\nabla\times\mathbf{E}(\mathbf{r},t) = \left(\hat{\mathbf{z}}\frac{\partial}{\partial y} - \hat{\mathbf{y}}\frac{\partial}{\partial z}\right)E_0\sin(kz - \omega t) = -E_0 k\hat{\mathbf{y}}\cos(kz - \omega t)$$ Using one of Maxwell's equations, $\nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$, you get $$-\frac{\partial\mathbf{B}(\mathbf{r},t)}{\partial t} = -E_0 k\hat{\mathbf{y}}\cos(kz - \omega t)$$ Integrate this with respect to time to find the magnetic field, $$\mathbf{B}(\mathbf{r},t) = -\frac{E_0 k}{\omega}\hat{\mathbf{y}}\sin(kz - \omega t)$$ Comparing this with the expression for $\mathbf{E}(\mathbf{r},t)$, you find that $\mathbf{B}$ is directly proportional to $\mathbf{E}$. When and where one is zero, the other will also be zero; when and where one reaches its maximum/minimum, so does the other. For an electromagnetic wave in free space, conservation of energy is expressed by Poynting's theorem, $$\frac{\partial u}{\partial t} = -\nabla\cdot\mathbf{S}$$ The left side of this gives you the rate of change of energy density in time, where $$u = \frac{1}{2}\left(\epsilon_0 E^2 + \frac{1}{\mu_0}B^2\right)$$ and the right side tells you the electromagnetic energy flux density, in terms of the Poynting vector, $$\mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times\mathbf{B}$$ Poynting's theorem just says that the rate at which the energy density at a point changes is the opposite of the rate at which energy density flows away from that point. 
If you plug in the explicit expressions for the wave in my example, after a bit of algebra you find $$\frac{\partial u}{\partial t} = -\omega E_0^2\left(\epsilon_0 + \frac{k^2}{\mu_0\omega^2}\right)\sin(kz - \omega t)\cos(kz - \omega t) = -\epsilon_0\omega E_0^2 \sin\bigl(2(kz - \omega t)\bigr)$$ (using $c = \omega/k$) and $$\nabla\cdot\mathbf{S} = \frac{2}{\mu_0}\frac{k^2}{\omega}E^2 \sin(kz - \omega t)\cos(kz - \omega t) = \epsilon_0 \omega E_0^2 \sin\bigl(2(kz - \omega t)\bigr)$$ thus confirming that the equality in Poynting's theorem holds, and therefore that EM energy is conserved. Notice that the expressions for both sides of the equation include the factor $\sin\bigl(2(kz - \omega t)\bigr)$ - they're not constant. This mathematically shows you the structure of the energy in an EM wave. It's not just a uniform "column of energy;" the amount of energy contained in the wave varies sinusoidally from point to point ($S$ tells you that), and as the wave passes a particular point in space, the amount of energy it has at that point varies sinusoidally in time ($u$ tells you that). But those changes in energy with respect to space and time don't just come out of nowhere. They're precisely synchronized in the manner specified by Poynting's theorem, so that the changes in energy at a point are accounted for by the flux to and from neighboring points. - – Anixx Jan 29 '11 at 9:06 3 @Anixx: The picture shows no such thing. In fact, I have no idea what it is showing, and I certainly don't see what it has to do with photon propagation. Cite it in context if you want to get a useful interpretation. – David Zaslavsky♦ Jan 29 '11 at 9:54 There is no contradiction with conservation law, you are just applying it wrongly. To do it right, you have to consider a small closed region and check whether the variation of energy inside this domain is the same as (minus) the energy flux through its boundaries. In the case of a electromagnetic wave, when the energy density decreases inside the region, it just means that the energy has left the region you are considering. - The pictures may be depicting a static wave, but certainly, not a propagating wave. – Anixx Jan 28 '11 at 18:02 Energy stored in the wave is proportional to $c_1B^2+c_2E^2$. From the images it follows that in certain moments of time the total energy is zero. – Anixx Jan 28 '11 at 18:06 1 On the other hand, if the phases of B and E are shifted, the total energy is constant because $(\sin t)^2+(\cos t)^2$=1 which is what is expected. – Anixx Jan 28 '11 at 18:08 That's of course the right answer, Rafael. I would just add that a linearly polarized electromagnetic wave looks exactly as the illustrations suggest - and its time evolution is simply that it moves uniformly in space, by the speed of light $c$. The local conservation of energy is therefore self-evident because if a small volume $dV$ located near $\vec x$ contained energy $dE$ at time $t$, there will simply be a box $dV$ near $\vec x+c\, dt\vec n$ that contains the same energy $dE$ at moment $t+dt$. The energy was conserved, even locally: it just moved a little bit further. – Luboš Motl Jan 28 '11 at 18:10 Dear Anixx, please listen to Rafael, he is completely right. There is absolutely no reason why the total energy density should be constant. – Luboš Motl Jan 28 '11 at 18:11 show 5 more comments
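A symbolic check of the bookkeeping in the accepted answer may be helpful: starting only from the electric field of the plane wave, derive the magnetic field from Faraday's law and confirm that $\partial u/\partial t + \nabla\cdot\mathbf{S} = 0$ once $\omega = ck$ is imposed. This is an illustrative stand-alone sketch, independent of the intermediate sign conventions used above.

```python
import sympy as sp

# Start from E = E0 x_hat sin(kz - w t), obtain B from Faraday's law, and
# verify du/dt + div(S) = 0 for the plane wave (with w = c k imposed at the end).
x, y, z, t = sp.symbols('x y z t')
E0, k, w, eps0, mu0 = sp.symbols('E_0 k omega epsilon_0 mu_0', positive=True)
c2 = 1 / (eps0 * mu0)                                # c^2

E = sp.Matrix([E0 * sp.sin(k * z - w * t), 0, 0])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

# Faraday: dB/dt = -curl E  ->  integrate in time (dropping any static field)
B = -curl(E).applyfunc(lambda f: sp.integrate(f, t))

u = (eps0 * E.dot(E) + B.dot(B) / mu0) / 2           # energy density
S = E.cross(B) / mu0                                  # Poynting vector
residual = sp.diff(u, t) + sum(sp.diff(S[i], v) for i, v in enumerate((x, y, z)))

print(sp.simplify(residual.subs(w, sp.sqrt(c2) * k)))   # prints 0: Poynting's theorem holds
```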
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404486417770386, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/181026/group-cohomology-of-finite-groups
# Group cohomology of finite groups I wonder if the group cohomology of a finite group $G$ with coefficients in $\mathbb{Z}$ is finite. This statement may be too strong. I am interested in, for instance, the dihedral group $$G=D_{2n}=\langle a,b | \ a^n=b^2=abab=e \rangle$$ Assume that $a$ acts trivially and $b$ acts as $-id$ on $\mathbb{Z}$. First cohomology is $\mathbb{Z}^G=0$. The second cohomology already seems quite involved to me. I read several posts about group cohomology on StackExchange and MathOverflow, but I still have trouble computing explicit examples and getting intuition behind the concept. - 1 do you mean each cohomology group $H^p(G;\mathbb{Z})$ is a finite group or do you mean the whole cohomology ring $H^*(G;\mathbb{Z})$? The latter statement is of course false, seen e.g. in the finite cyclic groups. – mland Aug 10 '12 at 14:32 Sorry for the confusion. I meant that each $H^{p}(G,M)$ is finite. – Michel Aug 10 '12 at 17:20 There are projective resolutions for the dihedral group (due to Wall) that can be used to compute the cohomology for every coefficient module. In particular it shouldn't be too hard to figure out $H^2(D_{2n};-)$. – Ralph Aug 10 '12 at 23:05 I will check it up. Thanks, Ralph. – Michel Aug 11 '12 at 0:30
## 1 Answer The answer is yes, since it's torsion (killed by the order of $G$) and finitely generated (since you can pick a resolution by finitely generated abelian groups). - (I mean, more precisely, that the underlying abelian groups of the free $ZG$-modules appearing in the standard resolution of $Z$ are finitely generated.) – countinghaus Aug 10 '12 at 13:16 Each term of the standard resolution is a direct sum of finitely many copies of $\mathbb{Z}[G]$, but I don't quite see why it's torsion. – Michel Aug 10 '12 at 17:34 In general, if $G$ is a finite group, the order of $G$ kills any cohomology group $H^i(G, M)$. To see this, use the fact that there are corestriction and restriction maps for any normal subgroup $H$ of $G$ such that $H^i(G, M) \to H^i(H, M) \to H^i (G, M)$ is multiplication by $[G:H]$. Now apply this in the special case where $H$ is trivial to get that multiplication by the order of $G$ is the zero map. – countinghaus Aug 10 '12 at 18:49 I didn't know the sequence. Thanks ^_^ – Michel Aug 10 '12 at 19:13 1 An alternative argument to see that $H^i(G;M)$ is annihilated by $|G|$ for $i > 0$ is to consider $\hat{H}^\ast(G;M)$ as a (unitary) module over $\hat{H}^\ast(G;\mathbb{Z})$. Then it's clear because $1 \in \hat{H}^0(G;\mathbb{Z})=\mathbb{Z}/|G|$. – Ralph Aug 10 '12 at 23:02 show 4 more comments
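To illustrate the finiteness claim in the answer on the concrete module in the question, consider just the cyclic subgroup $C_2 = \langle b \rangle \le D_{2n}$, which acts on $\mathbb{Z}$ by $-\mathrm{id}$; this worked example is added here for illustration and is the standard computation with the periodic resolution for cyclic groups. The norm element $N = 1 + b$ acts as zero on $\mathbb{Z}$ (since $x + (-x) = 0$) while $b - 1$ acts as multiplication by $-2$, so $$H^0(C_2;\mathbb{Z}) = \mathbb{Z}^{C_2} = 0, \qquad H^{2k+1}(C_2;\mathbb{Z}) = \ker N/\operatorname{im}(b-1) = \mathbb{Z}/2\mathbb{Z}, \qquad H^{2k+2}(C_2;\mathbb{Z}) = \ker(b-1)/\operatorname{im} N = 0$$ for $k \ge 0$. Every group in sight is finite and killed by $|C_2| = 2$, in line with the restriction-corestriction argument in the comments.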
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345507621765137, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/46998/maximum-points-of-intersection-between-a-circle-and-rectangle
# maximum points of intersection between a circle and rectangle I had today this mathematical question: What is the maximum number of points of intersection between a circle and a rectangle such that the length of the rectangle is greater than the circle's diameter, and its width is less than the circle's diameter? - I tried to fix the English -- I hope this reflects what you want to ask? – joriki Jun 22 '11 at 21:25 exactly yes sorry for bad writing – dato Jun 22 '11 at 21:31
## 2 Answers I'll try to add a graph to help demonstrate that the maximal number of points of intersection between a circle and a rectangle, such that the length of the rectangle is greater than the circle's diameter and its width is less than the diameter, is 6: two points of intersection along each of the longer sides, and two points of intersection along one of the shorter sides. There is no way that there can be any points of intersection along the second of the two shorter sides if the circle is intersecting the opposing side, since its diameter is less than the length of the rectangle. Consider, for example, a circle of diameter 10 (radius 5) centered at the origin; hence its equation is $x^2 + y^2 = 25$. Consider a rectangle with vertices $(x_i, y_i)$ at $(-4, -8)$, $(4, -8)$, $(4, 4)$, $(-4, 4)$. Hence its length (height) is $4 - (-8) = 12 > 10$, and its width is $4 - (-4) = 8 < 10$ (where 10 is the diameter of the circle). Then there are 2 points of intersection between the circle $x^2 + y^2 = 25$ and each of the line segments $y = 4$ ($-4 \leq x \leq 4$), $x = 4$ ($-8 \leq y \leq 4$), and $x = -4$ ($-8 \leq y \leq 4$), but no points of intersection between $x^2 + y^2 = 25$ and the rectangle's fourth side, which lies on the line $y = -8$ ($-4 \leq x \leq 4$). Solving for the points of intersection yields a total of 6 points of intersection of the circle and the rectangle: $(-4, -3), (-4, 3), (-3,4), (3, 4), (4, 3), (4, -3)$. (If we move the circle vertically so it intersects the line $y = -8$, then it will no longer intersect the side along $y = 4$.) And there is no way a circle can intersect any given (straight) line in more than two points. - That depends on the rectangle; for some values of a, b and r, it's possible to intersect in 8 points, and apparently the condition on a, b and r is somewhat complex. – ilius Jun 27 '11 at 15:03 Depending on the circle and rectangle, this maximum would be 2, 4, 6 or 8 (these are values that I'm sure of), but the exact conditions (on a, b and r) for each of these values is the problem. – ilius Jun 27 '11 at 15:21 @ilius: in this case the answer is Maximum = 6 (all other values less than six are possible: e.g., 0 if they are placed so they don't intersect at all, 1 if they are placed so only one side of the rectangle is tangent to the circle, etc.). OP asked for the maximum, given the constraints listed in the post. Yes: if a square (say 4x4) and a circle, radius 5, were both centered at the origin, then there would be 8 points of intersection (which is the maximum number of points of intersection in that case). – amWhy Jun 27 '11 at 15:52 ah, yes. I was searching for 8-point exact conditions which is another issue. – ilius Jun 28 '11 at 4:52 A circle intersects a straight line in at most two points. Your circle can't intersect your rectangle on all four sides. So the answer is six. -
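As a quick numerical confirmation of the worked example in the first answer (an illustrative sketch added here, with made-up helper names), one can solve the circle equation on each side of that specific rectangle and count the admissible solutions.

```python
import math

# Check of the worked example above: circle x^2 + y^2 = 25 against the rectangle
# with corners (-4, -8) and (4, 4). For each side we solve the circle equation on
# that line and keep the solutions lying within the segment.
r = 5.0
corners = dict(xmin=-4.0, xmax=4.0, ymin=-8.0, ymax=4.0)

def on_vertical(x, ylo, yhi):
    d = r * r - x * x
    return [(x, s * math.sqrt(d)) for s in (-1, 1)
            if d >= 0 and ylo <= s * math.sqrt(d) <= yhi]

def on_horizontal(y, xlo, xhi):
    d = r * r - y * y
    return [(s * math.sqrt(d), y) for s in (-1, 1)
            if d >= 0 and xlo <= s * math.sqrt(d) <= xhi]

points = (on_vertical(corners['xmin'], corners['ymin'], corners['ymax'])
          + on_vertical(corners['xmax'], corners['ymin'], corners['ymax'])
          + on_horizontal(corners['ymin'], corners['xmin'], corners['xmax'])
          + on_horizontal(corners['ymax'], corners['xmin'], corners['xmax']))

print(len(points), sorted(points))   # 6 points: (+-4, +-3) on the long sides, (+-3, 4) on y = 4
```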
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365118741989136, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/newtonian-mechanics?page=1&sort=votes&pagesize=15
# Tagged Questions Newtonian mechanics covers the discussion of the movement of classical bodies under the influence of forces by making use of Newton’s three laws. For more general discussion of energy, momentum conservation etc., use classical-mechanics, for Newton’s description of gravity, use newtonian-gravity. 7answers 3k views ### Does juggling balls reduce the total weight of the juggler and balls? A friend offered me a brain teaser to which the solution involves a $195$ pound man juggling two $3$-pound balls to traverse a bridge having a maximum capacity of only $200$ pounds. He explained that ... 13answers 6k views ### Why does kinetic energy increase quadratically, not linearly, with speed? As Wikipedia says: [...] the kinetic energy of a non-rotating object of mass $m$ traveling at a speed $v$ is $mv^2/2$. Why does this not increase linearly with speed? Why does it take so much ... 8answers 9k views ### Why does the atmosphere rotate along with the earth? I was reading somewhere about a really cheap way of travelling: using balloons to get ourselves away from the surface of the earth. The idea held that because the earth rotates, we should be able to ... 6answers 22k views ### Could someone jump from the international space station and live? Felix Baumgartner just completed his breathtaking free-fall skydiving jump from $120,000\,\text{feet} = 39\,\text{km}$ above the Earth, breaking the speed of sound during the process. I was wondering ... 6answers 1k views ### Can I survive a free fall using a ramp and a rope? Can I survive a free fall by carrying a very light and resistant ramp using a rope? Note: lets assume the ramp is a little bit heavier at the bottom and I am very skilled at making it always land ... 5answers 2k views ### With Newton's third law, why are things capable of moving? I've got a rather humiliating question considering newton's third law "If an object A exterts a force on object B, then object B exerts an equal but opposite force on object A" -> $F_1=-F_2$ ... 4answers 3k views ### How does a mobile phone vibrate without any external force? How does a mobile phone vibrate without any external force? By Newton's law, any body can't move without any external force 1answer 703 views ### Why does it take so long to get to the ISS? I don't understand why when first launched Space X's Dragon capsule had to orbit the Earth many times in order to match up with the ISS? Was this purely to match it's speed, or to get closer (as in ... 3answers 514 views ### Is gecko-like friction Coulombic? What is the highest known Coulombic $\mu_s$ for any combination of surfaces? Materials with large coefficients of static friction would be cool and useful. Rubber on rough surfaces typically has $\mu_s\sim1-2$. When people talk about examples with very high friction, often ... 2answers 902 views ### Is there an intuitive reason the brachistochrone and the tautochrone are the same curve? The brachistochrone problem asks what shape a hill should be so a ball slides down in the least time. The tautochrone problem asks what shape yields an oscillation frequency that is independent of ... 6answers 5k views ### Newton's cradle Why, when one releases 2 balls in Newton's cradle, two balls on the opposite side bounce out at approximately the same speed as the 1st pair, rather than one ball at higher speed, or 3 balls at lower ... 8answers 4k views ### Why don't spinning tops fall over? 
One topic which was covered in university, but which I never understood, is how a spinning top "magically" resists the force of gravity. The conservation of energy explanations make sense, but I don't ... 5answers 273 views ### Does the mass point move? There is a question regarding basic physical understanding. Assume you have a mass point (or just a ball if you like) that is constrained on a line. You know that at $t=0$ its position is $0$, i.e., ... 6answers 3k views ### Is two cars colliding at 50mph the same as one car colliding into a wall at 100 mph? I was watching a youtube video the other day where an economist said that he challenged his physics professor on this question back when he was in school. His professor said each scenario is the same, ... 2answers 986 views ### History of interpretation of Newton's first law Nowadays it seems to be popular among physics educators to present Newton's first law as a definition of inertial frames and/or a statement that such frames exist. This is clearly a modern overlay. ... 9answers 3k views ### Why are orbits elliptical? Almost all of the orbits of planets and other celestial bodies are elliptical, not circular. Is this due to gravitational pull by other nearby massive bodies? If this was the case a two body system ... 9answers 1k views ### What is the difference between weight and mass? My science teacher is always saying the words "weight of an object" and "mass of an object," but then my physics book (that I read on my own) tells me completely different definitions from the way ... 2answers 699 views ### An example which contradict to Newton's 3rd law? Let a,b be two charged particles. $$\vec{r}_a(0)=\vec{0}$$ $$\vec{r}_b(0)=r\hat{j}$$ $$\vec{v}_a(t)=v_a \hat{i}$$ $$\vec{v}_b(t)=v_b\hat{j}$$ In which both $v_a$ and $v_b$ $<<c$. Then ... 6answers 5k views ### What is the difference between Newtonian and Lagrangian mechanics in a nutshell? What is Lagrangian mechanics, and what's the difference compared to Newtonian mechanics? I'm a mathematician/computer scientist, not a physicist, so I'm kind of looking for something like the ... 4answers 495 views ### Anti-gravity in an infinite lattice of point masses Another interesting infinite lattice problem I found while watching a physics documentary. Imagine an infinite square lattice of point masses, subject to gravity. The masses involved are all $m$ and ... 4answers 130 views ### How can I determine whether the mass of an object is evenly distributed? How can I determine whether the mass of an object is evenly distributed without doing any permanent damage? Suppose I got all the typical lab equipment. I guess I can calculate its center of mass and ... 1answer 193 views ### Modelling the movement and jumps of a chalk while drawing a dashed line on a blackboard You probably know that if you try to draw a line using a piece of chalk on a blackboard , under some conditions (for example, $\alpha<\frac{\pi}{2}$ in the picture below) you will have a dashed ... 5answers 1k views ### Why Won't a Tight Cable Ever Be Fully Straight? I posted this picture of someone on a zipline on Facebook. One of my friends saw it and asked this question, so he could try to calculate the speed at which someone on the zipline would be going ... 6answers 2k views ### Why is torque not measured in Joules? Recently, I was doing my homework and I found out that Torque can be calculated using $\tau = rF$. This means the units of torque are Newton meters. Energy is also measured in Newton meters which are ... 
7answers 15k views ### Is Melancholia's orbit impossible? In the recent movie "Melancholia", a planet, also called Melancholia, enters the solar system and hits the Earth. I want to leave aside the (also unreasonable) aspect that planet "hides behind the ... 4answers 541 views ### Is there a deep reason why springs combine like capacitors? I was solving a practice Physics GRE and there was a question about springs connected in series and parallel. I was too lazy to derive the way the spring constants add in each case. But I knew how ... 2answers 818 views ### Why do ships lean to the outside, but boats lean to the inside of a turn? Small vessels generally lean into a turn, whereas big vessels lean out. Why do ships lean to the outside, but boats lean to the inside of a turn? 5answers 788 views ### What causes the back of a bike to lift when the front brake is applied? What causes the back of a bike to lift when the front brake is applied? (Like in an endo.) Also, if I were to replicate this effect with a wood block with wheels that crashes against a wall (only the ... 3answers 2k views ### Why are Saturn's rings so thin? Take a look at this picture (from APOD http://apod.nasa.gov/apod/ap110308.html): I presume that rocks within rings smash each other. Below the picture there is a note which says that Saturn's rings ... 6answers 2k views ### How do you explain spinning tops to a nine year old? Why don't spinning tops fall over? (The young scientist version) My nine year old son asked me this very question when playing with his "Battle Strikers" set. Having studied Physics myself, I am very ... 5answers 7k views ### jumping into water Two questions: Assuming you dive head first or fall straight with your legs first, what is the maximal height you can jump into water from and not get hurt? In other words, an H meter fall into ... 1answer 501 views ### How to calculate the number of glass sheets that will be broken by a falling object? In season 1, episode 7 of King of the nerds the contestants are asked to calculate how many sheets of glass will be broken by a falling object. They are shown 1 example case and then asked to ... 5answers 1k views ### Is the energy conserved in a moving frame of reference? Consider this situation: When the box is at the bottom of the frictionless incline, it will have a velocity of $v_f$. The person is an inertial frame of reference that moves at a constant ... 3answers 391 views ### infinite grid of planets with newtonian gravity Assuming only Newtonian gravity, suppose that the universe consists of an infinite number of uniform planets, uniformly distributed in a two-dimensional grid infinite in both directions and not moving ... 2answers 856 views ### Norton's dome and its equation Norton's dome is the curve $$h(r) = -\frac{2}{3g} r ^{3/2}.$$ Where $h$ is the height and $r$ is radial arc distance along the dome. The top of the dome is at $h = 0$. Via Norton's web. If we put ... 4answers 621 views ### Does it matter how you order your tug-of-war participants? In a tug-of-war match today, my summer camp students were very concerned about putting the biggest people at the back of the rope. Is there any advantage to this strategy? 1answer 209 views ### Can a fly pierce itself onto a cactus needle? Somebody on reddit posted a ridiculous picture today of a fly pierced onto a needle of a cactus: http://www.reddit.com/r/pics/comments/xarue/what_are_the_odds_of_this_accident/ Whilst the OP claims ... 
3answers 371 views ### In a universe where the speed of light is infinite, are relativistic models and Newtonian models equivalent? Take our universe. Observations are consistent with relativity, but not consistent with Newtonian mechanics. Assume that our current (relativistic) model of gravitation is correct. Now increase $c$ ... 2answers 179 views ### Intuitive meaning of Newton (units) Is there any intuitive reasoning behind why the units of a "Newton" is $kg \frac{m}{s^{2}}$ and how it represents force? I always wanted to understand why objects of different mass fall at the same ... 1answer 1k views ### What is the maximum efficiency of a trebuchet? Using purely gravitational potential energy, what is the highest efficiency one can achieve with a trebuchet counter-weight type of machine? Efficiency defined here as transformation of potential ... 4answers 464 views ### What causes a soccer ball to follow a curved path? Soccer players kick the ball in a linear kick, though you find it to turn sideways, not even in one direction. Just mid air it changes that curve's direction. Any physical explanation? Maybe this ... 7answers 2k views ### Does leaning (banking) help cause turning on a bicycle? I think it's clear enough that if you turn your bicycle's steering wheel left, while moving, and you don't lean left, the bike will fall over (to the right) as you turn. I figure this is because the ... 1answer 786 views ### Are all central forces conservative? Wikipedia must be wrong It might be just a simple definition problem but I learned in class that a central force does not necessarily need to be conservative and the German Wikipedia says so too. However, the English ... 4answers 5k views ### Can the coefficient of static friction be less than that of kinetic friction? I was recently wondering what would happen if the force sliding two surfaces against each other were somehow weaker than kinetic friction but stronger than static friction. Since the sliding force is ... 1answer 248 views ### Is acceleration an average? Background I'm new to physics and math. I stopped studying both of them in high-school, and I wish I hadn't. I'm pursuing study in both topics for personal interest. Today, I'm learning about ... 3answers 309 views ### Would a bicycle stay upright if moving on a treadmill and why? I suspect not, because moving forward (or backwards for that matter) is an important part, but I would like to confirm. UPDATE: Clearly it's possible ... 4answers 245 views ### How to guess the content of a christmas present? Let us assume that the present does not make any recognizable sounds when shaken (meow splat - the present now contains a dead kitten). Let us furthermore assume ... 3answers 1k views ### Static Friction - Only thing that can accelerate a train? I'm a computer programmer that never studied physics in school and now it's coming back to bite me a bit in some of the stuff I'm being asked to program. I'm trying to self study some physics and ... 4answers 2k views ### How long would a lever have to be to move the planet Earth? Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes How long would that lever have to be? That is to say, how long a lever ... 1answer 249 views ### One strategy in a snowball fight Here's a common college physics problem: One strategy in a snowball fight is to throw a first snowball at a high angle over level ground. While your opponent is watching the first one, you ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451800584793091, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/13844/heaviest-convex-polygon
Heaviest Convex Polygon Suppose we have an arbitrary function $f : \mathbb{R}^2 \to \mathbb{R}$. For any subset $s \subseteq \mathbb{R}^2$, we can define $g_f(s)$ as the integral* of $f$ over the region $s$. Suppose further that we have access to an oracle that will tell us the value of $g_f(s)$ for any $s$. Now, restrict our attention to subsets of $s$ that are the convex hull of a given subset of points $\bar x_c \subseteq \{x_1, \ldots, x_N \}$ with $x_i \in \mathbb{R}^2$. Assuming calls to the oracle are O(1), what is the complexity (in terms of $N$) of finding $\bar x_c^* = \arg \max_{\bar x_c} g_f(conv(\bar x_c))$? Is there a known algorithm or reduction to a known problem? EDIT: *Previous statement that Scott answered said "average value" here. - 1 g_f doesn't always exist without some kind of assumption on f. For example, is f continuous? – Qiaochu Yuan Feb 2 2010 at 18:26 If you're asking about computational complexity, then you'll need to be more specific about the inputs. How will f be described as an input? – tylern Feb 2 2010 at 18:28 We can assume $f$ is bounded. Is that good enough? – Andrew Feb 2 2010 at 18:30 I'm only interested in complexity in terms of the number of points N. – Andrew Feb 2 2010 at 18:31 1 @Konrad, there are no computability issues if one assumes that the oracle g works. g, restricted to subsets of the given set of points, only knows a finite amount of data - more precisely, the integral of f over the regions cut out by all lines among the given points. (Also, the answer I gave below - which I have deleted - was in response to the original formulation of the question.) – Qiaochu Yuan Feb 2 2010 at 20:27 2 Answers It should be polynomial (probably O(N^3)) in the number of input points using the dynamic programming technique in my paper with Overmars et al, "Finding minimum area k-gons", Disc. Comput. Geom. 7:45-58, 1992, doi:10.1007/BF02187823. The idea is: for each three points p,q,r, let W[p,q,r] be the optimal convex polygon that has p as its bottommost point (smallest y-coordinate) and qr and rp as edges. We can calculate W[p,q,r] by looking at all choices of s for which psqr is convex and combining the (previously computed) value W[p,s,q] with the weight of triangle pqr. As described above this takes time O(N^4) but I think that, for each pair of p and q one can examine the points s and r in the order of the slopes of the lines sq and sr, keeping track of the best s seen so far and using that choice of s for each r in this slope ordering, to reduce the time to O(N^3) - Excellent--this makes sense and I will think about it further. Thank you! – Andrew Feb 2 2010 at 19:57 If I could downvote my own reply, I would: Scott Carnahan's is much better. – David Eppstein Feb 2 2010 at 20:07 I responded to his comment and upvoted his answer. Somehow you read what I intended to write even though I completely mis-stated it. – Andrew Feb 2 2010 at 20:11 I'm assuming the N points are fixed ahead of time. In that case, it seems to me that you can just use the oracle on each triple of points, since any convex polygon with more than three sides will have average at most the maximum of the averages over triangles in any triangulation. This gives you O(N^3) at worst.
- 2 Oh oh, of course. I apologize, I want $g$ to be the integral, not average value. Somehow David knew what I was talking about even though I wrote it completely wrong. I updated the question and profusely apologize for the mis-statement of the problem. – Andrew Feb 2 2010 at 20:10 No problem. I'm glad the confusion got cleared up. – S. Carnahan♦ Feb 2 2010 at 20:24
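For readers who want to experiment, here is a brute-force sketch of the problem setup. It is not the $O(N^3)$ dynamic program from the first answer, only an exponential baseline that could be used to sanity-check such an implementation on tiny inputs; the oracle below is an assumption made purely for the example (it takes $f$ to be a sum of point masses, so $g_f$ of a hull is just the total mass inside it), and the sample coordinates are arbitrary.

```python
# Brute-force baseline: try every subset of the N points, take its convex
# hull, ask the oracle for its weight, keep the best.  Exponential in N.

import itertools

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; vertices in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return list(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def contains(hull, p):
    """True if p lies in the (counter-clockwise) convex hull, boundary included."""
    return all(cross(hull[i], hull[(i + 1) % len(hull)], p) >= 0
               for i in range(len(hull)))

# Toy oracle: f is a sum of point masses (some negative), so g_f(conv(s))
# is just the total mass falling inside the hull.
masses = [((0.1, 0.2), 3.0), ((-0.5, 0.4), -2.0), ((0.6, -0.3), 1.5),
          ((0.0, -0.7), -1.0), ((0.3, 0.6), 2.0)]

def oracle(hull):
    return sum(w for (pt, w) in masses if contains(hull, pt))

points = [(-0.9, -0.8), (0.9, -0.7), (0.8, 0.9), (-0.7, 0.8), (0.0, 0.1), (0.2, -0.9)]

best = (float("-inf"), None)
for k in range(3, len(points) + 1):
    for sub in itertools.combinations(points, k):
        hull = convex_hull(sub)
        if len(hull) >= 3:
            best = max(best, (oracle(hull), hull))
print(best)   # best oracle value and the hull achieving it
```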
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475707411766052, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/2617-too-much-money-please-help-print.html
# Too much money!!!!!!! Please help! • April 20th 2006, 04:54 AM Slipknotfanatic89 In a country whose currency consists of \$6 bills and \$11 bills, the price of every item sold is a whole number of dollars that can be paid (exactly) using these two types of bills. What is the largest whole number of dollars which could not be the price of an item sold in this country? • April 20th 2006, 11:37 AM topsquark Quote: Originally Posted by Slipknotfanatic89 In a country whose currency consists of \$6 bills and \$11 bills, the price of every item sold is a whole number of dollars that can be paid (exactly) using these two types of bills. What is the largest whole number of dollars which could not be the price of an item sold in this country? There is no upper limit on the price. Any possible price in whole dollars will be of the form: \$6*n + \$11*m where n and m are non-negative integers. There are an infinite number of prices that don't fit this requirement. -Dan • April 20th 2006, 09:49 PM CaptainBlack Quote: Originally Posted by topsquark There is no upper limit on the price. Any possible price in whole dollars will be of the form: \$6*n + \$11*m where n and m are non-negative integers. There are an infinite number of prices that don't fit this requirement. -Dan Not so; when I get the chance I will post the proof that there is always a maximum price which cannot be made with bills of denominations \$b1 and \$b2 where b1 and b2 are co-prime. RonL • April 21st 2006, 02:15 AM CaptainBlack Quote: Originally Posted by Slipknotfanatic89 In a country whose currency consists of \$6 bills and \$11 bills, the price of every item sold is a whole number of dollars that can be paid (exactly) using these two types of bills. What is the largest whole number of dollars which could not be the price of an item sold in this country? This is not as clear as I would like and there is probably a more elegant way to do this, but here it is anyway: Consider a whole number of dollars $\$N$. Now consider $R(k)=N-k\cdot 6,\ k \in \{0,\ \dots,\ 11 \}$. Now as $11$ and $6$ are co-prime, for each $r \in \{0,\ \dots,\ 11 \}$: $R(k) \equiv r \mod 11,\ \mbox{for some }k \in \{0,\ \dots,\ 11 \}$. So there exists $\rho$ and $\kappa \ge 0$ such that: $N=\rho\cdot 11+\kappa\cdot 6$; moreover if $N \ge 11 \times 6$, $\rho \ge 0$. Hence if $N \ge \$ 66$ it can be made up by a combination of $\$ 11$ and $\$ 6$ bills. Now trial and error shows that $\$65=1\times \$11+9\times \$6$, but that there is no such representation for $N = \$64$. So $\$64$ is the largest whole number of dollars which could not be the price of an item sold in this country. RonL • April 21st 2006, 04:56 AM topsquark Huh! Well, I guess you DO learn something new every day! :) Sorry about that, Slipknotfanatic89! -Dan • April 21st 2006, 05:20 AM c_323_h wow, CaptainBlack, you're good, I thought it was infinite too • April 22nd 2006, 05:08 PM ThePerfectHacker Quote: Originally Posted by SlipKnotfanatic89 In a country whose currency consists of \$6 bills and \$11 bills, the price of every item sold is a whole number of dollars that can be paid (exactly) using these two types of bills. What is the largest whole number of dollars which could not be the price of an item sold in this country? Mathematically, what you are trying to find is the largest $n$ such that you cannot find non-negative integers $x,y$ with $6x+11y=n$. ----- This problem is from number theory.
It uses something called a linear Diophantine equation (Bezout's Identity): if $ax+by=c$ and $d|c$ where $d=\gcd(a,b)$, then this Diophantine equation has solutions. If $x_0,y_0$ is a solution pair, then all solutions (and only these) are: $\left\{ \begin{array}{c}x=x_0+\frac{b}{d}t\\y=y_0-\frac{a}{d}t \end{array} \right.$ ----- Notice that $6(2)+11(-1)=1$; multiply by $n$ to get $6(2n)+11(-n)=n$. By the theory of linear Diophantine equations you have that all solutions are $\left\{ \begin{array}{c}x=2n+11t\\ y=-n-6t \end{array} \right.$ for an integer $t$. But you want whole numbers for $x,y$; thus $x\geq 0 \mbox{ and }y\geq 0$. Thus, $2n+11t\geq 0\mbox{ and }-n-6t\geq 0$. Solving these inequalities we find that $t$ must satisfy $-\frac{2n}{11}\leq t\leq -\frac{n}{6}$. Placing a common denominator (and removing the negative), $\frac{11n}{66}\leq -t\leq \frac{12n}{66}$. Thus, $11n\leq k\leq 12n$ where $k=-66t$ is also an integer. The only requirement that $k$ must satisfy is to be a multiple of 66. All the integers that satisfy this inequality are $S=\{11n,11n+1,11n+2,...,11n+(n-1),12n\}$. Thus, given this set we need to find the largest $n$ such that there is no multiple of 66. In total we have $n$ distinct integers in increasing order. Thus, if $n\geq 66$ then by the Pigeonhole Principle we have that there must be a multiple of 66. Instead of approaching this problem theoretically, simply do trial and error beginning with $n=65$; soon we will see that $n=49$ contains no multiples of 66. ----- Wow, two mistakes in a row, first topsquark, then CaptainBlack. Hope I am not the next one: $64=6(7)+11(2)$. • April 23rd 2006, 12:01 AM CaptainBlack Quote: Originally Posted by ThePerfectHacker Wow, two mistakes in a row, first topsquark, then CaptainBlack. Hope I am not the next one: $64=6(7)+11(2)$. Must have been something wrong with my calculator yesterday. It probably needs an oil change :mad: RonL • April 23rd 2006, 06:28 AM ThePerfectHacker My calculator is a Turing machine. I use my computer's calculator: C:\WINDOWS\system32\calc.exe; it has 34 digits. I was thinking of downloading a super advanced calculator with more than 100 digits. • April 23rd 2006, 07:50 AM CaptainBlack Quote: Originally Posted by ThePerfectHacker My calculator is a Turing machine. I use my computer's calculator: C:\WINDOWS\system32\calc.exe; it has 34 digits. I was thinking of downloading a super advanced calculator with more than 100 digits. For arbitrary precision work I use a CAS, though I have been thinking about using UBASIC for some number theory calculations recently (it works with 2000+ :D digits). RonL • April 23rd 2006, 08:38 AM rgep This is Sylvester's Coin Problem, also known as the Frobenius Coin Problem. Sylvester showed that the largest sum that cannot be made using coins of values a and b (where a and b are coprime) is $(a-1)(b-1)-1 = ab - a - b$. There are articles in Wikipedia and MathWorld. In the specific case of coins of value 6 and 11, note that 50 can be made up as 4.11 + 1.6 and then 51=3.11+3.6, 52=2.11+5.6, 53 = 1.11 + 7.6, 54 = 0.11 + 9.6, 55=5.11 + 0.6, 56 = 4.11 + 2.6 and the pattern repeats with an interval of 6. • April 23rd 2006, 08:56 AM ThePerfectHacker Quote: Originally Posted by rgep This is Sylvester's Coin Problem, also known as the Frobenius Coin Problem. Sylvester showed that the largest sum that cannot be made using coins of values a and b (where a and b are coprime) is $(a-1)(b-1)-1 = ab - a - b$. There are articles in Wikipedia and MathWorld.
In the specific case of coins of value 6 and 11, note that 50 can be made up as 4.11 + 1.6 and then 51=3.11+3.6, 52=2.11+5.6, 53 = 1.11 + 7.6, 54 = 0.11 + 9.6, 55=5.11 + 0.6, 56 = 4.11 + 2.6 and the pattern repeats with an interval of 6. It looks interesting. I was able to demonstrate that $n\geq ab$ (where a and b are co-prime) can always be expressed. But I was not able to demonstrate the second proposition; I just used trial and error. ----- Does it generalize: if $\gcd(a_1,a_2,\dots,a_n)=1$, is $\prod^n_{k=1}a_k-\sum^n_{k=1}a_k$ the largest non-expressible number? • April 24th 2006, 04:02 PM Slipknotfanatic89 The largest number I have found thus far is 49, by 11(-1) + 6(10). I can't remember whose formula this follows, but it is the largest number I have found that cannot be solved using 6 and 11. • April 24th 2006, 05:29 PM ThePerfectHacker Quote: Originally Posted by Slipknotfanatic89 The largest number I have found thus far is 49, by 11(-1) + 6(10). I can't remember whose formula this follows, but it is the largest number I have found that cannot be solved using 6 and 11. In the previous posts CaptainBlack and I give two different solutions to this problem (CaptainBlack was wrong because Quote: my calculator needs an oil change ). Then rgep made another interesting post that this is a theorem. Anyway, yes, the largest non-expressible number is 49. Notice the important fact that $\gcd(6,11)=1$; otherwise it is impossible. Let me explain: let $\gcd(a,b)=d>1$. Then there are no integral solutions to the equation $ax+by=c$ where $d\not |c$, because then the left hand side is divisible by $d$ while the right hand side is not. But when $d=1$ it is possible, because all numbers are divisible by one.
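A brute-force check of the thread's conclusion takes only a few lines. The sketch below (not from the thread; the denominations 6 and 11 are hard-coded) confirms that 49 is the largest amount that cannot be paid, that 64 can be paid, as ThePerfectHacker pointed out, and that Sylvester's formula quoted by rgep gives the same value:

```python
def representable(n, a=6, b=11):
    """Can n be written as a*x + b*y with non-negative integers x, y?"""
    return any((n - b * y) % a == 0 for y in range(n // b + 1))

gaps = [n for n in range(1, 200) if not representable(n)]
print(gaps)                      # the non-payable amounts; the list stops at 49
print(max(gaps))                 # 49
print(representable(64))         # True: 64 = 6*7 + 11*2
print((6 - 1) * (11 - 1) - 1)    # 49, Sylvester's (a-1)(b-1) - 1
```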
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 58, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268753528594971, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/94841?sort=newest
## Group PGL(2,p) where p is prime Is there a solvable group $G$ such that the prime graph of $G$ equals the prime graph of $PGL(2,p)$ and $|G|=|PSL(2,p)|$? - Why not remind us what the prime graph is. – Derek Holt Apr 22 2012 at 13:01 I think the prime graph is defined as follows: Vertices: all the primes dividing $|G|$. The vertices $p, q$ have an edge if and only if there exists $g \in G$ such that $|g|=pq$. – Wei Zhou Apr 22 2012 at 13:24 ## 3 Answers You should consult the papers of Akhlaghi, Khosravi and Khatam - they have two that are relevant. I don't have subscription access to the full text of the articles but I can access enough to say the following. With regard to the group $PGL(2,q)$, the situation depends dramatically on whether or not $q$ is prime. Case 1: $q=p$, a prime. Let me quote from the mathscinet review of this paper: There are infinitely many nonisomorphic finite groups with the same prime graph as $PGL(2,p)$. In this paper, the authors determine the structure of finite groups $G$ such that $\Gamma(G)=\Gamma(PGL(2,p))$, where $11\neq p \neq19$ and $p$ is not a Mersenne or Fermat prime. In particular, if $p\neq 13$ then $G$ has a unique nonabelian composition factor which is isomorphic to $PSL(2,p)$ and if $p=13$ then G has a unique nonabelian composition factor which is isomorphic to $PSL(2,13)$ or $PSL(2,27)$. Here I'm writing $\Gamma(G)$ to mean the prime graph of a group $G$. So, to answer your question, this result means that if a solvable group $G$ is to satisfy $\Gamma(G)=\Gamma(PGL(2,p))$ for some prime $p$, then $p$ is a Mersenne or Fermat prime. Case 2: $q$ is not prime. Then this paper proves that the group $PGL(2,q)$ is characterized by its prime graph, i.e. there are no other groups sharing the same prime graph. - 1 Though "sara" has long since disappeared, your answer sheds better light on this kind of old problem, which is easy to formulate after a basic course in group theory. It's not so easy to resolve, but of course there is a long paper trail by now, including numerous multi-author papers (including the ones you cite) on the recognition problem relative to prime graphs or sets of element orders. Probably a more useful question than the one asked here would be whether any single source gives a complete survey of what's known and unknown at this point along with basic methods. – Jim Humphreys Dec 19 at 20:33 Jim, good comment. This is potentially a useful community-wiki type question for MO. Something like "What is the current status and priority of group-recognition questions?" Experts may then be able to explain (a) why a particular group-recognition question is useful, and (b) how much is known on the given question... – Nick Gill Dec 20 at 10:25 I have to say that I have not worked on this problem, but I know something related to it. Let $\pi_i$ ($i=1, \cdots, t$) be the connected components of the prime graph. Then $|G|=m_1\cdots m_t$, where $\pi(m_i)$ is the vertex set of $\pi_i$. The integers $m_i$ are called the order components of $G$. Then $PSL(2,q)$ ($q$ an odd prime power) is uniquely determined by its order components. (see G.Y. Chen, A new characterization of PSL(2,q), Southeast Asian Bull. Math. 22 (1998), 257-263).
In your problem, $G$ and $PSL(2,q)$ have the same order and prime graph, so their order components are the same, and hence $G \cong PSL(2,q)$. (I am sorry that I have not read this paper.) By the way, the problem is related to Thompson's conjecture, and G. Y. Chen has done some good work on this conjecture. (see G.Y. Chen, On Thompson's conjecture, J. Algebra 15 (1996), 184-193.) - I am very sorry for misunderstanding your question. – Wei Zhou Apr 23 2012 at 13:17 There is such an example for $p=7$, namely the group ${\rm A \Gamma L}(1,8)$ of order 168. Its prime graph has vertices 2, 3 and 7, and a single edge joining 2 and 3. The group ${\rm PGL}(2,7)$ has the same prime graph. (This is different from the prime graph of ${\rm PSL}(2,7)$, which has no edges.) This might be the only example. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425173401832581, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Erasure_code
# Erasure code In information theory, an erasure code is a forward error correction (FEC) code for the binary erasure channel, which transforms a message of k symbols into a longer message (code word) with n symbols such that the original message can be recovered from a subset of the n symbols. The fraction r = k/n is called the code rate; the fraction k’/k, where k’ denotes the number of symbols required for recovery, is called the reception efficiency. ## Optimal erasure codes Optimal erasure codes have the property that any k out of the n code word symbols are sufficient to recover the original message (i.e., they have optimal reception efficiency). Optimal erasure codes are maximum distance separable codes (MDS codes). Optimal codes are often costly (in terms of memory usage, CPU time, or both) when n is large. Except for very simple schemes, practical solutions usually have quadratic encoding and decoding complexity. Using FFT techniques, the complexity may be reduced to O(n log(n)); however, this is not practical. ### Parity check Parity check is the special case where n = k + 1. From a set of k values $\{v_i\}_{1\leq i \leq k}$, a check-sum is computed and appended to the k source values: $v_{k+1}= - \sum_{i=1}^k v_i.$ The set of k + 1 values $\{v_i\}_{1\leq i \leq k+1}$ is now consistent with regard to the check-sum. If one of these values, $v_e$, is erased, it can be easily recovered by summing the remaining variables: $v_{e}= - \sum_{i=1 ,i \neq e }^{k+1}v_i.$ ### Polynomial oversampling #### Example: Err-mail (k = 2) In the simple case where k = 2, redundancy symbols may be created by sampling different points along the line between the two original symbols. This is pictured with a simple example, called err-mail: Alice wants to send her telephone number (555629) to Bob using err-mail. Err-mail works just like e-mail, except 1. About half of all the mail gets lost.[1] 2. Messages longer than 5 characters are illegal. 3. It is very expensive (similar to air-mail). Instead of asking Bob to acknowledge the messages she sends, Alice devises the following scheme. 1. She breaks her telephone number up into two parts a = 555, b = 629, and sends 2 messages – "A = 555" and "B = 629" – to Bob. 2. She constructs a linear function, $f(i) = a + (b-a)(i-1)$, in this case $f(i) = 555 + 74(i-1)$, such that $f(1) = 555$ and $f(2) = 629$. 3. She computes the values f(3), f(4), and f(5), and then transmits three redundant messages: "C = 703", "D = 777" and "E = 851". Bob knows that the form of f(i) is $f(i) = a + (b-a)(i-1)$, where a and b are the two parts of the telephone number. Now suppose Bob receives "D = 777" and "E = 851". Bob can reconstruct Alice's phone number by computing the values of a and b from the values (f(4) and f(5)) he has received. Bob can perform this procedure using any two err-mails, so the erasure code in this example has a rate of 40%. Note that Alice cannot encode her telephone number in just one err-mail, because it contains six characters, and the maximum length of one err-mail message is five characters. If she sent her phone number in pieces, asking Bob to acknowledge receipt of each piece, at least four messages would have to be sent anyway (two from Alice, and two acknowledgments from Bob). So the erasure code in this example, which requires five messages, is quite economical. This example is a little bit contrived. For truly generic erasure codes that work over any data set, we would need something other than the f(i) given.
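The err-mail scheme is easy to reproduce in code. The following sketch is not part of the article; it simply re-derives Alice's five messages and shows Bob recovering both halves of the phone number from any two of them (the function names are ad hoc):

```python
def encode(a, b, n=5):
    """Alice's messages: f(1), ..., f(n) with f(i) = a + (b - a)*(i - 1)."""
    return [a + (b - a) * (i - 1) for i in range(1, n + 1)]

def decode(i, fi, j, fj):
    """Bob recovers (a, b) = (f(1), f(2)) from two surviving samples."""
    slope = (fj - fi) // (j - i)
    a = fi - slope * (i - 1)
    return a, a + slope

sent = encode(555, 629)
print(sent)                      # [555, 629, 703, 777, 851]  ->  A..E
print(decode(4, 777, 5, 851))    # (555, 629): only "D" and "E" arrived
print(decode(1, 555, 3, 703))    # (555, 629): any two messages suffice
```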
#### General case The linear construction above can be generalized to polynomial interpolation. Additionally, points are now computed over a finite field. First we choose a finite field F with order of at least n, but usually a power of 2. The sender numbers the data symbols from 0 to k − 1 and sends them. He then constructs a (Lagrange) polynomial p(x) of order k such that p(i) is equal to data symbol i. He then sends p(k), ..., p(n − 1). The receiver can now also use polynomial interpolation to recover the lost packets, provided he receives k symbols successfully. If the order of F is less than 2^b, where b is the number of bits in a symbol, then multiple polynomials can be used. The sender can construct symbols k to n − 1 'on the fly', i.e., distribute the workload evenly between transmission of the symbols. If the receiver wants to do his calculations 'on the fly', he can construct a new polynomial q, such that q(i) = p(i) if symbol i < k was received successfully and q(i) = 0 when symbol i < k was not received. Now let r = p − q. Firstly we know that r(i) = 0 if symbol i < k has been received successfully. Secondly, if symbol i ≥ k has been received successfully, then r(i) = p(i) − q(i) can be calculated. So we have enough data points to construct r and evaluate it to find the lost packets. So both the sender and the receiver require O(n (n − k)) operations and only O(n − k) space for operating 'on the fly'. #### Real world implementation This process is implemented by Reed–Solomon codes, with code words constructed over a finite field using a Vandermonde matrix. ## Near-optimal erasure codes Near-optimal erasure codes require (1 + ε)k symbols to recover the message (where ε>0). Reducing ε can be done at the cost of CPU time. Near-optimal erasure codes trade correction capabilities for computational complexity: practical algorithms can encode and decode with linear time complexity. Fountain codes (also known as rateless erasure codes) are notable examples of near-optimal erasure codes. They can transform a k symbol message into a practically infinite encoded form, i.e., they can generate an arbitrary amount of redundancy symbols that can all be used for error correction. Receivers can start decoding after they have received slightly more than k encoded symbols. Regenerating codes address the issue of rebuilding (also called repairing) lost encoded fragments from existing encoded fragments. This issue arises in distributed storage systems where communication to maintain encoded redundancy is a problem. ## Examples ### Optimal erasure codes • Parity: used in RAID storage systems. • Reed–Solomon coding • Erasure Resilient Systematic Code, an MDS code outperforming Reed–Solomon in the maximal number of redundant packets, see RS(4,2) with 2 bits or RS(9,2) with 3 bits • any other MDS code • Regenerating Codes[2] see also [1]. ## See also • Forward error correction codes. ## References 1. Some versions of this story refer to the err-mail daemon. 2. Original paper. CiteSeerX: .
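To make the construction in the "General case" section above concrete, here is a sketch of encoding and erasure recovery by Lagrange interpolation. It is an illustration only: it works over the prime field GF(257) rather than the characteristic-2 fields used by practical Reed–Solomon codes, and the field size, symbol values and erasure pattern are all assumptions chosen just to keep the example short.

```python
Q = 257  # prime field size; must be at least n and larger than any symbol value

def lagrange_at(points, x):
    """Value at x (mod Q) of the unique polynomial through the given (xi, yi)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % Q
                den = den * (xi - xj) % Q
        total = (total + yi * num * pow(den, Q - 2, Q)) % Q  # Fermat inverse
    return total

def encode(data, n):
    """Send p(0), ..., p(n-1), where p(i) = data[i] for i < k."""
    pts = list(enumerate(data))
    return data + [lagrange_at(pts, x) for x in range(len(data), n)]

data = [112, 34, 200, 89]                     # k = 4 source symbols
codeword = encode(data, n=7)                  # n - k = 3 redundancy symbols
lost = {1, 2, 5}                              # any n - k = 3 erasures are fine
received = [(i, s) for i, s in enumerate(codeword) if i not in lost]
recovered = [lagrange_at(received, x) for x in range(len(data))]
print(codeword)
print(recovered)                              # [112, 34, 200, 89]
```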
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9065905213356018, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/8793/compute-integral-of-a-lebesgue-measurable-set?answertab=oldest
# Compute integral of a Lebesgue measurable set Let $B \subset [0,2\pi]$ be a Lebesgue measurable set. Prove that: $\displaystyle \lim_{n \to \infty} \int_{B} \cos(nx) dx = 0$ OK, I did this assuming B is an open interval; this is pretty easy using the fact that the sine function is bounded by 1. Now I'm stuck on the general case. I'm somewhat confused by "Lebesgue measurable": I know this means that the measure is given by the outer measure, i.e. the infimum, over all open covers of B, of the sum of the measures. But I'm having trouble writing it; I get confused when working with Lebesgue measurable sets. Can you please help me? - @Marc: Well, for the Lebesgue integral to even be defined you need the domain of integration to be Lebesgue measurable... – Arturo Magidin Nov 3 '10 at 19:20 Can I proceed like this? If B = (a,b) then using the FTC we can easily see the limit is zero. Now assume B is the disjoint union of open intervals, so the integral over B is equal to: $\sum_{j=1}^{\infty} \int_{(a_{j},b_{j})} f$ and then? – student Nov 3 '10 at 19:26 ## 3 Answers Hint: Rewrite the integral as $$\int_0^{2\pi} \chi_B(x) \cos(nx) dx$$ where $\chi_B$ is the characteristic function of the set $B$ (so that it equals 1 on $B$ and 0 elsewhere). Now, you've already proven the case where $B$ is an open interval. Now take a sequence of decreasing coverings for $B$ by finitely many disjoint open intervals. For each covering, the result is true by what you've already shown. Take the limit using the dominated convergence theorem (if $B \subset C$, then $|\chi_B \cos| \leq |\chi_C \cos|$). - Each Lebesgue integrable function can be approximated by a finite step function in norm. That is, if $f\in L^1$ and $\varepsilon>0$ then there are intervals $I_1,\ldots,I_N$, and scalars $a_1,\ldots,a_N$ such that $$\int |f-\sum_{n=1}^N a_n\chi_n|<\varepsilon,$$ where $\chi_n$ is the characteristic function on $I_n$, that is, $\chi_n(x)=1$ for $x\in I_n$ and $\chi_n(x)=0$ otherwise. Now, on each bounded interval $I$ you have already proved that $\int_I \cos(nx)dx\to 0$, hence given $\varepsilon>0$ there is a step function as above and we get $$\limsup\left|\int_B\cos(kx)dx\right|= \limsup\left|\int (\chi_B-\sum_{n=1}^N a_n\chi_n +\sum_{n=1}^N a_n\chi_n)\cos(kx)dx\right|$$ $$\leq \limsup\int |\chi_B-\sum_{n=1}^N a_n\chi_n|dx +\limsup\left|\int\sum_{n=1}^N a_n\chi_n\cos(kx)dx\right|$$ $$\le\varepsilon + \sum_{n=1}^N |a_n|\cdot \limsup\left|\int\chi_n\cos(kx)dx\right| =\varepsilon +0.$$ From which you conclude the result, since this holds for any $\varepsilon>0$. This works not only for $\chi_B$, but for any $f\in L^1$ - it is called the Riemann-Lebesgue Lemma. - Thanks to both. I'm still confused because books define Lebesgue measurable sets in different ways (Caratheodory extension, outer measure, etc). What is the exact definition you are using for Lebesgue measurable set? I'm sure once I understand how it is defined I will be able to understand fully your hint(s). Thanks again. - Which book do you read? There are several settings leading to the very same Lebesgue integral. – AD. Nov 3 '10 at 20:25 Bartle, Folland, Zygmund, etc. Here is my try: take an open cover of B consisting of countably many disjoint open intervals (I don't get why it must be finite, so I take it countable).
Define $g_{n}(x)= \chi_{\cup_{k=1}^{n} (a_k,b_k)}(x) \cos(nx)$; then $g_n$ converges pointwise to $\chi_{B}(x)\cos(nx)$, and $g_n$ is increasing, so we can use the monotone convergence theorem to take out the series and then integrate term by term, but each of the remaining integrals is an integral over an open interval, which we already know is 0. OK? – student Nov 3 '10 at 20:39 @Marc: sounds about right; I chose that at each step you approximate by finitely many because a sum of finitely many $\epsilon$s is still small. The total number of intervals used increases for each subsequent approximation. So no, $B$ is not necessarily covered by finitely many intervals, but it is approached by a sequence of covers, each having finitely many intervals, but the number increases. Sorry for the confusion. – Willie Wong♦ Nov 3 '10 at 21:09
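None of this is needed for the proofs above, but a quick numerical experiment makes the statement tangible. For a finite union of intervals, the building block both answers reduce to, the integral has a closed form and visibly decays like $1/n$; the particular set $B$ below is an arbitrary choice made only for illustration.

```python
import math

B = [(0.3, 1.1), (2.0, 2.25), (4.0, 5.9)]   # a toy union of intervals in [0, 2*pi]

def integral_cos(n):
    # On each interval (a, b): integral of cos(n x) dx = (sin(n b) - sin(n a)) / n
    return sum((math.sin(n * b) - math.sin(n * a)) / n for a, b in B)

for n in (1, 5, 25, 125, 625, 3125):
    print(n, integral_cos(n))   # shrinks roughly like 1/n
```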
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941979706287384, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/97469-trigonometry-question.html
# Thread: 1. ## Trigonometry Question If Sin[x] + Sin[x]^2 + Sin[x]^3 == 1 Find Cos[x]^6 + 4 Cos[x]^4 + 8 Cos[x]^2 Any help/solution would be appreciated. Thanks. 2. Is this the correct problem? $\sin(x)+\sin^2(x)+\sin^3(x)=1$ $\cos^6(x)+4\cos^4(x)+8\cos^2(x)$ 3. Originally Posted by adkinsjr Is this the correct problem? $\sin(x)+\sin^2(x)+\sin^3(x)=1$ $\cos^6(x)+4\cos^4(x)+8\cos^2(x)$ I'm not too sure if the question is correct. I'll reconfirm it. 4. Are you taking a high school trig class? If so, lmk what topics you are covering. We're going to need that insight because this is a difficult problem and I'm not sure where to begin. Have you studied composite angle formulae? For example $\cos(2x)=1-2\sin^2(x)$ 5. Originally Posted by adkinsjr Are you taking a high school trig class? If so, lmk what topics you are covering. We're going to need that insight because this is a difficult problem and I'm not sure where to begin. Have you studied composite angle formulae? For example $\cos(2x)=1-2\sin^2(x)$ Yes, I covered multiple angle formulas last year. A friend of mine gave me this problem; I tried, but couldn't solve it. 6. Originally Posted by adkinsjr Is this the correct problem? $\sin(x)+\sin^2(x)+\sin^3(x)=1$ I think it has no solution. Nor does sin(x) + sin(x^2) + sin(x^3) = 1 have any solutions. 7. I disagree. This is a continuous function which takes the value 0 at 0 and takes the value 3 at pi/2. So there is certainly some point between 0 and pi/2 where it is 1. It's somewhere between 30 and 36 degrees in fact. Factorise the LHS as sin(x)(1+sin^2(x)) + sin^2(x). Then you can replace the sin^2 terms with 1-cos^2(x) and rearrange so that the remaining sin(x) term is on its own. Then square and make the same substitution. Now you have got rid of all the sin(x) terms and only have cos(x) terms left. Presumably after some messing about you can find the required function of cos(x). 8. Originally Posted by alunw I disagree. This is a continuous function which takes the value 0 at 0 and takes the value 3 at pi/2. So there is certainly some point between 0 and pi/2 where it is 1. It's somewhere between 30 and 36 degrees in fact. Factorise the LHS as sin(x)(1+sin^2(x)) + sin^2(x). Then you can replace the sin^2 terms with 1-cos^2(x) and rearrange so that the remaining sin(x) term is on its own. Then square and make the same substitution. Now you have got rid of all the sin(x) terms and only have cos(x) terms left. Presumably after some messing about you can find the required function of cos(x). I agree with the rest. The question is probably wrong, but that's what I was given, sorry about that. Wolfram|Alpha shows 2 possible answers, both of which, when plugged into the equation that was to be solved, gave an irrational answer. The options for the answer were integers, I'm sure (don't remember). 9. I was merely pointing out that sin(x)+sin^2(x)+sin^3(x)=1 undoubtedly has a solution. The fact that it is irrational is completely irrelevant.
However the question probably is wrong, because if you follow the strategy I outlined this is what happens: $\sin(x)(1+\sin^2(x)) + \sin^2(x) = 1$ $\sin(x)(1+1-\cos^2(x)) + 1-\cos^2(x) = 1$ $\sin(x)(2-\cos^2(x)) -\cos^2(x) = 0$ $\sin(x)(2-\cos^2(x)) = \cos^2(x)$ $\sin^2(x)(4-4\cos^2(x)+\cos^4(x)) = \cos^4(x)$ $(1-\cos^2(x))(4-4\cos^2(x)+\cos^4(x)) = \cos^4(x)$ $(4-4\cos^2(x)+\cos^4(x) -4\cos^2(x)+4\cos^4(x)-\cos^6(x)) = \cos^4(x)$ $(4-8\cos^2(x)+4\cos^4(x)-\cos^6(x)) = 0$ $\cos^6(x)-4\cos^4(x)+8\cos^2(x)=4$ 10. Further to my last post the angle, which is slightly under 33 degrees (you could find an exact expression by solving the cubic in cos^2(x) given by the final line above) is very close to, but not the same as, the angle at which $\cos^6(x)+4\cos^4(x)+8\cos^2(x)=8$ and also to the angle at which $\cos^4(x)=0.5$ but all three are different angles. There is less than 1/4 of a degree between the smallest and largest of these angles.
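A quick numerical check of the derivation above (not part of the thread): solve the constraint by bisection on $(0,\pi/2)$ and evaluate both the corrected expression and the one originally asked for.

```python
import math

def g(x):
    s = math.sin(x)
    return s + s * s + s ** 3 - 1.0     # zero at the angle in question

lo, hi = 0.0, math.pi / 2               # g(0) = -1 < 0 < 2 = g(pi/2)
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid

x = (lo + hi) / 2
c = math.cos(x) ** 2
print(math.degrees(x))                  # a bit under 33 degrees
print(c**3 - 4 * c**2 + 8 * c)          # 4.0, as derived above
print(c**3 + 4 * c**2 + 8 * c)          # about 7.97: close to, but not, 8
```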
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9594640135765076, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/17343/what-is-the-speed-of-a-divergent-series/17354
# what is the speed of a divergent series? How to characterize the speed of a divergent series? I have a divergent series with a parameter $x$ in it. How can I characterize the speed of divergence for different $x$? - While there are various rates of convergence, based on big-O, little-o, etc. notations, it will not make equal sense to talk about "speed of divergence". A divergent series need not grow uniformly. It might oscillate. Or there might be subsequences that grow and subsequences that oscillate in bounded fashion. One way to impose some consistency on the behavior of nonconvergent sequences is to look at its lim sup and lim inf. Perhaps you should explain the purpose of the characterization you seek. – hardmath Jan 13 '11 at 4:57 @hardmath : the series is non-decreasing – Rajesh D Jan 13 '11 at 5:22 This is a very vague question. Why don't you show us what series you have... – Aryabhata Jan 13 '11 at 5:30 @Moron : I don't really have any series to work with, the only restriction is that it is non-decreasing. – Rajesh D Jan 13 '11 at 5:41 ## 2 Answers This is related to Hausdorff's "Pantachie" problem. Suppose $x_i$ is a monotone decreasing sequence whose sum is divergent. We say that a similar sequence $y_i$ diverges slower if $x_i/y_i \rightarrow \infty$. Example: $\sum n^{-1}$ diverges slower than $\sum n^{-0.5}$. Similarly, if $x_i$ is a monotone decreasing sequence whose sum is convergent, a similar sequence $y_i$ converges more slowly if $y_i/x_i \rightarrow \infty$. Example: $\sum n^{-2}$ converges more slowly than $\sum n^{-3}$. Hausdorff proved the following theorem: For any sequence of divergent (convergent) series, there's a sequence diverging (converging) slower than any of them. That means that there is no "expressible by finite strings" characterization of the speed of divergence (convergence), since such a characterization would not allow any series which is diverging (converging) slower. For more on the subject, look up Hausdorff gaps. - I guess ‘big O notation’ and its relatives are what you’re after, or something like that? If $(a_n)$ is a sequence with $a_n \rightarrow \infty$ as $n \rightarrow \infty$, then this notation defines statements like $(a_n) = O(n^2)$, formalising the idea that “in the long run, the sequence $(a_n)$ grows no faster than the sequence $(n^2)$”. (The use of ‘=’ in this notation is slightly confusing: the precise statement doesn’t assert that any two things are equal.) - Under the given circumstances (the sequence $\{a_n\}$ tends monotonically to plus infinity), faster rate of divergence (growth) would be equivalent to faster convergence of $\{1/a_n\}$ to zero. – hardmath Jan 13 '11 at 6:12
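As a concrete illustration of the ordering in the first answer, here is a tiny numerical experiment with the two series from its example (the cutoff is arbitrary):

```python
N = 10**6
sum_x = sum(i ** -0.5 for i in range(1, N + 1))   # partial sum of n^(-1/2): about 2*sqrt(N) ~ 2000
sum_y = sum(1.0 / i for i in range(1, N + 1))     # harmonic partial sum: about ln(N) ~ 14
print(sum_x, sum_y)
print((N ** -0.5) / (1.0 / N))                    # x_N / y_N = sqrt(N), which grows without bound
```

Both partial sums diverge, but the harmonic one crawls; that is the sense in which $\sum n^{-1}$ diverges slower than $\sum n^{-0.5}$.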
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237635731697083, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/5720/list
Return to Answer 2 Improved several things; added 16 characters in body You also have the rather new field of Leavitt Path Algebras (in which I happen to be working right now), where you take a field $K$ and a directed graph $E$, generate its extended graph $E'$ (add to $E$ its own edges reversed, denoted as $e^*$ for every edge $e$), and compute the Leavitt path algebra of $E$, $L(E)$, as the path algebra $KE'$ modulo some relations called the Cuntz-Krieger relations, inherited from the $C^*$-algebras setting, concretely: (CK1) $e^* f=\delta_{ef}$ for any two edges $e,f$ of $E'$. (CK2) $\sum_{e\in s^{-1}(v)}ee^* = v$, for $v$ a vertex which emits a nonzero finite number of edges, and $s^{-1}(v)$ the set of those edges. (One can look at (CK1) and (CK2) as an abstract generalization of the product of matrix units). These associative algebras provide us simultaneously with a purely algebraic analog of $C^*$-algebras of graphs and a generalization of the Leavitt algebras (some associative algebras which do not satisfy the IBN property). The full matrix rings over $K$ of order $n$ then arise as the Leavitt path algebras of the graphs with $n$ (consecutive) vertices and $n-1$ arrows, one between every pair of consecutive vertices. Another simple example of a Leavitt path algebra is the ring of Laurent polynomials over $K$, $K[x,x^{-1}]$, which appears associated to the graph with one vertex and a single loop. The theory of LPAs is useful, and even beautiful: • They provide simple, visually attractive representations of well-known algebras. • They allow us to look at their algebraic properties by means of the graph-theoretic combinatorial properties of their associated graphs. This happens to equip us with some rather powerful tools. • Conversely, they also enable "algebraic engineering", since they give us a straightforward, visual way to construct new algebras, customized with any algebraic or ring-theoretic properties we may desire. For example, we can show an algebra generated by five elements such that it is exchange but not purely infinitely simple, by constructing a particular (small) graph with some (easy) graph-theoretic features. Some references: • G. Abrams, G. Aranda Pino. "The Leavitt path algebra of a graph", J. Algebra 293 (2), 319-334 (2005). (Available at http://agt.cie.uma.es/~gonzalo/papers/AA1_Web.pdf). • P. Ara, M.A. Moreno, E. Pardo. "Nonstable K-Theory for graph algebras", Algebra Repr. Th. DOI 10.1007/s10468-006-9044-z (electronic). (Available at http://www.springerlink.com/content/pu701474q5300m63/). • G. Abrams, G. Aranda Pino, F. Perera, M. Siles Molina. "Chain conditions for Leavitt path algebras". (Available at http://agt.cie.uma.es/~gonzalo/papers/AAPS1_Web.pdf). • K.R. Goodearl. "Leavitt path algebras and direct limits", Contemp. Math. 480 (2009), 165-187.
(CK2) $\sum_{e\in s^{-1}(v)}ee^* = v$, for $v$ a vertex which emits a nonzero finite number of edges, and $s^{-1}(v)$ the set of those edges. (One can look at (CK1) and (CK2) as an abstract generalization of the product of matrix units.) These associative algebras provide us simultaneously with a purely algebraic analog of the $C^*$-algebras of graphs and a generalization of the Leavitt algebras (some associative algebras which do not satisfy the IBN property). The full matrix rings over $K$ of order $n$ then arise as the Leavitt path algebras of the graphs with $n$ (consecutive) vertices and $n-1$ arrows, one between every pair of consecutive vertices. Another simple example of a Leavitt path algebra is the ring of Laurent polynomials over $K$, $K[x,x^{-1}]$, which appears associated to the graph with one vertex and a single loop. The theory of LPAs is a beautiful one because it allows us to identify ring-theoretic properties of associative algebras from the graph-theoretic properties of their associated graphs in a visual and straightforward way.

Some references:

G. Abrams, G. Aranda Pino. "The Leavitt path algebra of a graph", J. Algebra 293 (2), 319-334 (2005). (Available at http://agt.cie.uma.es/~gonzalo/papers/AA1_Web.pdf).

P. Ara, M.A. Moreno, E. Pardo. "Nonstable K-Theory for graph algebras", Algebra Repr. Th. DOI 10.1007/s10468-006-9044-z (electronic). (Available at http://www.springerlink.com/content/pu701474q5300m63/).

G. Abrams, G. Aranda Pino, F. Perera, M. Siles Molina. "Chain conditions for Leavitt path algebras". (Available at http://agt.cie.uma.es/~gonzalo/papers/AAPS1_Web.pdf).

K.R. Goodearl. "Leavitt path algebras and direct limits", Contemp. Math. 480 (2009), 165-187.
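To make the matrix-ring statement concrete in the smallest case (a standard check, added here for illustration, and reading (CK1) with the range vertex, i.e. $e^*f=\delta_{ef}\,r(f)$, as is usual): take the graph with two vertices $v_1, v_2$ and a single edge $e$ from $v_1$ to $v_2$. Sending

$$v_1 \mapsto E_{11},\qquad v_2 \mapsto E_{22},\qquad e \mapsto E_{12},\qquad e^* \mapsto E_{21}$$

into the $2\times 2$ matrix units satisfies the relations, since $e^*e = E_{21}E_{12} = E_{22} = v_2$ and $ee^* = E_{12}E_{21} = E_{11} = v_1$, and it extends to an isomorphism $L(E)\cong M_2(K)$.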
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8868892192840576, "perplexity_flag": "middle"}
http://alanrendall.wordpress.com/2009/11/05/influenza-vaccines/
# Hydrobates

A mathematician thinks aloud

## Influenza vaccines

I have recently been reading about influenza vaccines and I am summarizing some of the information I found here. I start with some remarks on the classification of influenza viruses. The first distinction is between influenza A and influenza B viruses. The former are classified further into subtypes HnNm for numbers $m$ and $n$. Well-known examples are H5N1 (which includes the recent ‘bird flu’) and H1N1 (which includes the pandemic of 1918 and the current ‘swine flu’). Influenza B does not carry a pandemic threat and will not be considered further here. Every year a vaccine is produced for the seasonal flu epidemic (in fact two – one for the southern and one for the northern hemisphere). It is trivalent, being directed against three types of virus. In recent years this has always been of the form H3N2 + H1N1 + B. In particular this is the case for the present vaccine for seasonal flu. It is not expected that this vaccine will be effective against the pandemic H1N1 swine flu. Thus a separate type of vaccine has been developed for that.

In the classification H and N stand for haemagglutinin and neuraminidase, two proteins which occur on the surface of the virus and come in different forms in different strains. These are the main molecules of the virus recognized by antibodies. They are involved in the processes by which the virus enters and leaves host cells, respectively.

Next I come to some details concerning the vaccines themselves. I concentrate on those being applied in Germany since this is what would be relevant for me if I got vaccinated myself. I get the impression that there are a lot of unreliable and misleading statements on this subject in the media and so some care is necessary in judging the information available. On the web page of the Paul Ehrlich Institute there is a list of vaccines against seasonal flu approved in Germany in this season. Twenty products are listed. All are classified as inactivated. This means that if manufactured successfully the vaccine cannot lead to any reproduction of the virus. In other words the vaccine uses (parts of) ‘dead’ virus particles. Three of the vaccines are described as ‘virosomal’ which means that they can be administered as a nasal spray. Presumably all the others are administered by injection. Two of them include an adjuvant, a substance which is intended to amplify the immune response. This is one theme which has led to recent controversy in connection with swine flu vaccines and I will return to it later. One vaccine (Optaflu) is said to be produced in cell culture. This is connected to another theme of recent controversy, with discussion in the media about vaccines produced using cancer cells.

Having looked at the vaccines for seasonal flu I now come to swine flu vaccines. The web page of the Paul Ehrlich Institute lists three vaccines approved in Germany for the new H1N1 influenza. These are called Celvapan, Focetria and Pandemrix. All three are inactivated. The second and third include an adjuvant. The first is produced in cell culture. Apparently Pandemrix is intended to be the main vaccine used in Germany. There is a statement on the web page of the PEI that, contrary to some claims in the media, this is also what has been used to immunize the employees of that institute. There has been discussion of the fact that apparently politicians and the army are to get Celvapan so that a debate about ‘second class citizens’ has taken place.
This is likely to obscure the real issues.

Consider next the topic of adjuvants, substances which have recently been getting some bad press. An adjuvant is a substance which increases the reaction of the immune system to an antigen given as a vaccine. A stronger immune response can lead to better immunity for a given amount of antigen. It could also in principle lead to an excessive and damaging immune reaction although I have not seen any convincing evidence that this has happened in the context of the swine flu vaccine. It would be wrong to think that the name adjuvant denotes a particular class of substances. Many different things can act as adjuvants. What they have in common is that they activate some part of the immune system. Given that the immune system is so interconnected this can lead to a stronger immune response on a wider basis. An interesting example is that in the combination vaccination for diphtheria, whooping cough and tetanus the diphtheria toxoid acts as an adjuvant for the other two vaccinations. What has just been said about the nature of adjuvants makes it clear that it is nonsensical to say that all adjuvants are bad. Each one must be considered on its own merits. In the case of Pandemrix the adjuvant is called AS03. I am unable to give any judgement on it, since I have not spent enough time studying the question. In any case my basic assumption is that what has been approved by the relevant medical authorities is OK. In other words, for these things my default attitude is trust, not mistrust.

Now to the question of the cancer cells. Celvapan is produced using a cell line called Vero cells which is derived from monkeys. It seems that these cells arose from kidneys of normal monkeys and have nothing to do with cancer. Usually normal cells can only undergo a limited number of divisions while cancer cells can be immortal. Vero cells are not cancer cells but they do seem to be able to survive in cell culture for an unlimited time. I do not understand how this works. It may be noted that Vero cells have been used in a routine way to produce millions of doses of polio vaccine and so there has been ample opportunity to discover any possible dangers associated to their use. In the production of Optaflu a cell line called Madin-Darby canine kidney cells is used. This looks like the same kind of tissue as with Vero cells, except derived from a different animal. I have not found a reference anywhere claiming the use of cancer cells in this context which looks trustworthy.

To finish, here is a piece of news from the web site of the ECDC in Stockholm. In the Ukraine there have now been 500 000 reported cases of acute respiratory illness and it seems that the expert opinion (cf. also the WHO website) is that most of these are related to the new H1N1 influenza. There have been 86 deaths reported from there. So it seems that the pandemic is alive and well.

This entry was posted on November 5, 2009 at 9:03 pm and is filed under diseases, immunology.

### 6 Responses to “Influenza vaccines”

1. Uwe Brauer Says: November 10, 2009 at 4:13 pm | Reply

Hello, I hope not to start a “flamewar” on this, but the article http://www.spiegel.de/wissenschaft/mensch/0,1518,druck-637567,00.html (unfortunately in German) gives some interesting insights.
Tom Jefferson, an expert in epidemics from the Cochrane Collaboration, claims that, based on statistical analysis, only 7% of all the infections which have flu-like symptoms are actually caused by the influenza virus. Moreover washing the hands is the best protection. It calls his attention (and I must say mine) that the influenza virus is unique among all the other groups of viruses in the sense that it is the only one for which a vaccine has been developed. Uwe Brauer

2. hydrobates Says: November 10, 2009 at 9:40 pm | Reply

Hello Uwe, I am not sure you should believe everything you read in ‘Der Spiegel’ on this subject. You might like to look at the comments on an article there on the home page of the Paul Ehrlich Institute (http://www.pei.de/) By the way, there must be some misunderstanding with the ‘no vaccine for a virus’ statement. For instance, I got vaccinated against yellow fever before I visited Cameroon.

• Uwe Brauer Says: November 17, 2009 at 4:27 pm | Reply

Hello Alan, first I think I have to clarify a misunderstanding. I did not want to claim that there are no vaccines for viruses. Yellow fever is one example, smallpox another. Now the claim is about viruses which can cause flu-like diseases. As I understand the information (whose links I will provide below), there are around 200 viruses which can cause these illnesses. http://www.spiegel.de/international/world/0,1518,grossbild-1594590-637119,00.html states: 29% of the infections are caused by rhinovirus, 14.4% by coronavirus, 3.6% by RSV, 7% by influenza, and the rest, 46%, by viruses unknown to man. However there is another issue which I find annoying: The WHO has changed its definition of pandemic! The old definition was a new virus, which went around quickly, for which you didn’t have immunity, and which created a high morbidity and mortality rate. Now the last two have been dropped, and that’s how swine flu has been categorized as a pandemic. Here are some links in addition: http://www.bmj.com/cgi/content/full/333/7574/912 http://www.jefferson.edu/information/ Jefferson’s interview in English: http://www.spiegel.de/international/world/0,1518,637119,00.html Uwe

3. hydrobates Says: November 18, 2009 at 10:59 pm | Reply

This evening I went to a lecture on pandemics at the Einstein Forum in Potsdam. The speaker was Stefan Kaufmann who is director at the Max Planck Institute for Infection Biology in Berlin. I have a very high opinion of him and the lecture, which was excellent, only served to reinforce this. His main subject was tuberculosis, not swine flu, but he said some things which were relevant to your comments. I should mention that this talk was not mainly about medical science, but rather about public health policy and its relations to economics and politics. He mentioned the fact that the WHO changed its definition of ‘pandemic’ at a critical moment, confirming what you mentioned in your comment. His reaction was that it was unwise of them to do that since it just led to making people suspicious. In that context he said that reacting to a disease with a vaccine is 10 per cent medicine and 90 per cent mass psychology. Another interesting remark concerned adjuvants. He believes that with a sufficiently good adjuvant (which does not yet exist) vaccinations against the seasonal flu could be made to last ten years instead of one. To finish let me mention that Kaufmann has written a book entitled ‘Waechst die Seuchengefahr?
Globale Epidemien und Armut: Strategien zur Seucheneindaemmung in einer vernetzten Welt’ of which I bought a copy. [For readers who do not speak German: there is also an English version with the title 'The new plagues. Pandemics and poverty in a globalized world']

4. Some vaccinations I have had « Hydrobates Says: November 30, 2009 at 9:08 am | Reply [...] consult. I will start with hepatitis A and B since that involves some themes which I mentioned in a previous post on influenza vaccines. A point I want to make is that the swine flu vaccines about which there has been so much public [...]

5. The immortal Henrietta Lacks « Hydrobates Says: January 23, 2010 at 9:41 pm | Reply [...] By hydrobates I find immortal cell lines a fascinating topic and I mentioned the subject in a previous post on influenza vaccines. From time to time I had heard about HeLa cells. I knew that this was a cell line derived from [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9613221883773804, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/108895/total-variation-distance-between-a-poisson-and-a-distribution-with-known-mean-var
## Total variation distance between a Poisson and a distribution with known mean/variance

Suppose that $\mu$ is the law of a Poisson distribution of mean 1, and that $\nu$ is the law of an unknown distribution on the non-negative integers, though I do know that its mean and variance are both $\lambda$. What, if anything, can be said about the minimum total variation distance between $\mu$ and $\nu$? It would seem natural that the TV distance is minimized when $\nu$ is also a Poisson, but I'm having trouble proving it. -

## 2 Answers

I don't know the answer, but I don't think that Poisson($\lambda$) is best. What's the word for one minus the total variation distance, i.e. the maximum over all couplings of the probability that two random variables agree? Let's call it the "agreement probability". The agreement probability between Poisson($1$) and Poisson($\lambda$) decays at least exponentially fast in $\lambda$ (since the probability that Poisson(1) is at least $\lambda/2$ decays faster than exponentially, and the probability that Poisson($\lambda$) is at most $\lambda/2$ decays exponentially). I think you can do better than this by using a distribution that puts more weight at 0. For example, suppose $\lambda$ is an integer, and look at a distribution that puts weight $p$ at $0$, weight $p$ at $2\lambda$, and weight $1-2p$ at $\lambda$. This has mean $\lambda$ as required, and to get variance $\lambda$ we need $\lambda = (1-2p)\lambda^2 + p (2\lambda)^2 - \lambda^2$, which gives $p=1/(2\lambda)$. So for large $\lambda$, the agreement probability with Poisson($1$) is then at least $1/(2\lambda)$ (because both distributions have weight at least $1/(2\lambda)$ at $0$). Anyway, this is just an observation; probably one can do much better than that. - Thank you for your response. What if $\lambda$ is between 0 and 1 (in fact, $\lambda$ is very close to 1), so that the weight of $\nu$ at 0 is already greater than the weight of $\mu$ at 0? – Shanshan Ding Oct 5 at 14:06

Maybe you could use Pinsker's Inequality to get an upper bound on the quantity you're interested in? There are a lot of results about finding distributions minimizing the Kullback-Leibler divergence (this problem is also sometimes called Information Projection). Though I am not sure this is useful, since you seem more interested in a lower bound... - Yes, exactly, I'm only interested in the lower bound. – Shanshan Ding Oct 5 at 14:07
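To see the first answer's construction numerically, here is a small sketch (not part of the thread; it assumes SciPy is available, and the choice $\lambda = 10$ is arbitrary). It compares the total variation distance from Poisson(1) to Poisson($\lambda$) and to the three-point distribution with mass $p$ at $0$, $1-2p$ at $\lambda$, and $p$ at $2\lambda$, where $p = 1/(2\lambda)$.

```
# Numerical sketch (illustrative only): TV distance from Poisson(1) to
# (a) Poisson(lam) and (b) the three-point distribution described above.
from scipy.stats import poisson

def tv_to_poisson1(pmf, support):
    """TV distance between Poisson(1) and the distribution given by `pmf` on `support`."""
    q = {k: pmf(k) for k in support}
    ks = set(support) | set(range(200))  # truncate the Poisson(1) tail far out
    return 0.5 * sum(abs(poisson.pmf(k, 1) - q.get(k, 0.0)) for k in ks)

lam = 10
p = 1.0 / (2 * lam)
three_point = {0: p, lam: 1 - 2 * p, 2 * lam: p}   # mean = variance = lam

print(tv_to_poisson1(lambda k: poisson.pmf(k, lam), range(200)))       # very close to 1
print(tv_to_poisson1(lambda k: three_point.get(k, 0.0), three_point))  # about 1 - 1/(2*lam)
```

The second number stays bounded away from 1 by roughly $1/(2\lambda)$, matching the "agreement probability" lower bound in the answer.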
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9618691205978394, "perplexity_flag": "head"}
http://mathoverflow.net/questions/49348?sort=votes
## What is the simplest, most elementary proof that a particular number is transcendental?

I teach, among many other things, a class of wonderful and inquisitive 7th graders. We've recently been studying and discussing various number systems (N, Z, Q, R, C, algebraic numbers, and even quaternions and surreals). One thing that's been hanging in the air is giving a proof that there really do exist transcendental numbers (and in particular, real ones). They're willing to take my word for it, but I'd really like to show them if I can. I've brainstormed two possible approaches:

1) Use diagonalization on a list of algebraic numbers enumerated by their heights (in the usual way) to construct a transcendental number. This seems doable to me, and would let me share some cool facts about cardinality along the way. The asterisk by it is that, while the argument is constructive, we don't start with a number in hand and then prove that it's transcendental--a feature that I think would be nice.

2) More or less use Liouville's original proof, put as simply as I can manage. The upshots of this route are that we start with a number in hand, it's a nice bit of history, and there are some cool fraction things that we could talk about (we've been discussing repeating decimals and continued fractions). The downside is that I'm not sure if I can actually make it accessible to my students.

So here is where you come in. Is there a simple, elementary proof that some particular number is transcendental? Two kinds of responses that would be helpful would be: a) to point out some different kind of argument that has a chance of being elementary enough, and b) to suggest how to recouch or bring to its essence a Liouville-like argument. My model for this is the proof Conway popularized of the fact that $\sqrt{2}$ is irrational. You can find it as proof 8''' on this page. I realize that transcendence is deep waters, and I certainly don't expect something easy to arise, but I thought I'd tap this community's expertise and ingenuity. Thanks for thinking on it. -

I would be really surprised if going through 1) gave you a number with a reasonably concrete description. Have you tried it? – Qiaochu Yuan Dec 14 2010 at 4:21

You can describe a fairly short program for a specific enumeration of integer polynomials and use fairly rapid numerical methods to find the real roots to a finite precision. So you can get a number out to a fair number of digits without great grief. But the ease does not matter because there is no real interest in doing it, only that one can (and there are many nice candidates for an order). – Aaron Meyerowitz Dec 14 2010 at 4:50

4 A mind opener for students (at some level) is that since the algebraic numbers are enumerable we can list them (in principle) and put the kth one in the center of an interval of diameter $1/2^k$. Then we have a collection of intervals covering a set of which the (dense) set of rationals is a "tiny" part. But together these intervals have combined length 1 so "most" real numbers are excluded (hence transcendental). At least this shows that our intuition is far from the full story. – Aaron Meyerowitz Dec 14 2010 at 4:55

## 6 Answers

The original Liouville's number is probably the easiest, but most of the proofs tend to invoke calculus (because why not?), so let me try to show it in a more 7th-grade friendly way. I'll call this the swaths-of-zero approach.
So we know that Liouville's number $L$ looks like this: .1100010000000000000000010... with a 1 in the $n!$ places. When we square it, we get this: .012100220001000000000000220002... What happens is that in the $2n!$ places we get a 1, and in the $p!+q!$ places we get a 2. (The great thing about this is that it can be explained using the elementary-school algorithm, the one they are all familiar with, for multiplication.) If we multiply $L$ by an integer and write down the answer, the value of that integer will be "laid bare" as we go deeply enough into $L$'s decimal expansion, as eventually the 1s are far enough away to become that integer without stepping on each other. Similarly, if we multiply $L^2$ by an integer, we will see that integer in some places, and 2 times that integer in others. For large enough $n,$ if we look between the $n!$ place and the $(n+1)!$ place, the last thing we'll see is that integer written at the $2n!$ place. Thus the swaths of zero in the multiple of $L$ are, $n!-(n-1)!=(n-1)(n-1)!$ long (minus a constant), whereas the widest swaths of zero in the multiple of $L^2$ are $n!-2(n-1)!=(n-2)(n-1)!$ (minus a constant) long, which is shorter, so there is no way to add positive multiples of $L$ and $L^2$ together to clear everything after the decimal point, or find positive multiples of each so that everything after the decimal point is equal. More generally: Suppose $a_jL^j+...$ and $a_kL^k+...$ are integer polynomials in $L,$ where $j>k.$ We show that their values cannot match up fully past the decimal point. The swaths of zero in the first polynomial, moving back from the $n!$ spot, are a constant away from $(n-j)(n-1)!$ long (the constant being the length of the sum of the coefficients), whereas in the second they are a constant away from $(n-k)(n-1)!$ long, in the same place (moving back from the $n!$ spot). I don't know if this explanation holds up to the standards of rigor you like to maintain when teaching them, but I think they will find it fascinating. - @David Feldman: Agreed! It was, for me, a very thought-provoking question. – Daniel Briggs Dec 14 2010 at 6:55 Daniel, thanks so much for your thoughtful answer. A few clarifying questions: 1) How do you know that a 4 doesn't pop up somewhere in L^2, or more generally that the sum of some factorials doesn't equal the sum of some others? 2) Do you mean if we look between the 2(n-1)! and the 2n! place for large enough n, we'll see the integer multiplier bare at the end of that stretch? If so, how do we know that there aren't any 2's cropping up along the way that would mess things up? 3) What with the 2's, I feel lost on how you're calculating the length of the swaths of zeros. Could you expand on this? – Justin Lanier Dec 15 2010 at 1:08 Given mL^2 for an integer m, let's go back from the n! spot towards the 2(n-1)! spot. Near the n! spot there can be contributions of the form 2m 10^-n!10^-k! for small k, but notice that the effect of the 10^-k! is to move right, rather than left, and the smallest values k! can take on are 1, 2, 6, 24, ... . So .2m+.02m+.000002m+... will be seen here, but that makes at most one more positive digit left than .2m does, and it's the same near the n! spot for any n. Once you get past it, there's nothing going back to 2(n-1)!, and then there's something (if m has final 0s, once we get past them). – Daniel Briggs Dec 15 2010 at 13:53 Or, increasing from 2(n-1)!, the first factorial sum to be seen is n!+1!. 
(And decreasing from 2(n-1)!, it's (n-1)!+(n-2)!, which is very far away, and this shows that the factorial sums can't conspire to make "magic" 0s.) Similarly, with mL^3, all the products involving at least one 1 from the n! place on make less than 3m 10^-n! L^2 (the 3 is from choosing the 1 to be in the n! place in the first, second, third L, and the "less than" from the microscopic overcounting); moving left from here, the first thing we see is m at the 3(n-1)! place, which is sooner than in a multiple of L^2. – Daniel Briggs Dec 15 2010 at 14:20

I keep meaning to write this up nicely, but one can prove the transcendentality of Liouville's number in a very, very elementary way. Write $L$ for the Liouville number. Suppose $p(L)=0$ for some polynomial with integer coefficients. Then $p_+(L)=p_-(L)$ where $p=p_+-p_-$ and polynomials $p_+$ and $p_-$ have only positive coefficients and no terms of the same degree. Assume WLOG that $p_+$ has the higher degree, say, $k$, so $p_+(x)=cx^k+\cdots$. Then when you calculate $p_+(L)$ (via the distributive law) you'll get contributions of $c10^{-kn!}$ for $n=1,2,3\ldots$. With $n$ large enough, nothing arising out of $p_-(L)$ can balance these contributions, so contradiction. So no calculus! Just a little thought about how grade school arithmetic goes. I believe that the distinction between Liouville and Cantor actually turns out to be artificial. My argument above shows that one can view Liouville's construction as a diagonalization against the polynomials. Each nonzero digit "kills" a collection of polynomials of low degree and low height until one has killed them all! -

Hi Daniel Briggs...two minds with but a single thought... your answer appeared just as I posted! – David Feldman Dec 14 2010 at 6:46

2 This type of idea came up in a letter from Goldbach to Daniel Bernoulli. Goldbach claimed that the number with 1s in every 2^k-th place and 0 everywhere else is irrational because the decimal expansion was not periodic. Liouville, in his article containing his number, actually refers to a letter by Goldbach. – Franz Lemmermeyer Dec 14 2010 at 18:17

It would be interesting to see what portion of Liouville numbers can be taken care of without much trouble by using an appropriate base for expansion and discussing the number in terms of the base (it seems that (1) q must be able to be chosen in a relatively uniform way, such as powers of the base, and (2) the places with the positive digits would have to become sparse enough so that the number wouldn't get muddy: would this requirement be equivalent to the Liouville criterion? Or stronger?) – Daniel Briggs Dec 14 2010 at 23:15

Hi, David. Thanks for your answer. Can you expand upon why "With n large enough, nothing arising out of p_-(L) can balance these contributions"? I like the approach of focusing on the term of highest power, but I only see that for large n, the contribution is very small, and so I don't see the contradiction. I also like your remark about Cantor and Liouville being equivalent. But I don't see how exactly it could be that each nonzero digit "kills" a collection of polynomials, since if you removed one 1 and left the rest of L the same, wouldn't it still be transcendental? Thanks again.
– Justin Lanier Dec 15 2010 at 1:21

I favour the Liouvillian approach over the Cantorian approach, because although the diagonal argument will in principle let you write down the decimal expansion of a transcendental number, you will never see the whole expansion "all at once" (so to speak), and there will be nothing special about the truncated expansion you do see; it will just be some random decimal expansion, and any finite decimal expansion can be completed to be transcendental. In the approach via Liouville, of course, you get to actually see the transcendental number. And Liouville's argument is not so difficult; it boils down to the pigeonhole principle (which, by the sounds of what you've been teaching them, your 7th graders will have no trouble understanding, if they don't already know it). I don't know an optimal reference, but if I was to pursue this path, I would fix my Liouville number first, and just focus on proving that that particular number is transcendental. (In other words, don't prove a general criterion and then check that your number satisfies it; keep things more concrete by just directly proving the criterion for your chosen Liouville number.) I think that if you do this, and you just write down a putative polynomial equation with integer coeffs. that your Liouville number is supposed to satisfy, it won't be hard to argue your way to a contradiction. And because you have a concrete number, you can really work this through with your class, e.g. by beginning with a quadratic, actually plugging in your Liouville number, and staring at it and seeing why this couldn't give zero. I think this would make things quite intuitive and concrete. Added: See Daniel Briggs's very nice answer for exactly this kind of explicit argument with a concrete Liouville number. -

The Liouville number is certainly the first concrete example to mention (it is also, not by chance, the first one historically). I love the way transcendence can be shown by means of elementary operations as shown here by David Feldman and Daniel Briggs. I don't know how much time you will devote to this fascinating topic, but I wouldn't omit to mention the origin, the quadrature of the circle, the most famous problem of antiquity, and possibly the one that remained open the longest in all the history of mathematics. A small historical perspective does justice to Liouville's example (1851), since it then appears to be not just a mathematical curiosity, but a first step towards a possible proof of transcendence of other important constants. In a sense, Liouville's number is transcendental because it admits too fast a rational approximation. So it also illustrates a kind of paradoxical character of transcendental numbers: rational numbers seem to be "closer" to transcendental numbers than to non-rational algebraic numbers. Indeed, the first case of an already known constant proven to be transcendental was e (Hermite, 1873), exploiting the very good rational approximation given by the exponential series (depending on time, you may also consider including in the course Hilbert's simpler version of the proof, which only requires elementary calculus); and the case of e was certainly a starting point for Lindemann's work on the more difficult transcendence of $\pi$ (1882). -

How about Chaitin's Constant $\Omega$ (for some fixed encoding)?
The proof of transcendence is doable, and is in some ways a compromise between the two approaches, with the drawback that for any given encoding, one obviously can't actually write down the constant beyond the first few digits, though the proof is in some sense "constructive." Of course this is basically diagonalization (hidden in the proof of the insolubility of the Halting Problem) but I think it might be easier for a seventh-grader to get his hands on the first few digits of the constant than in your option (1). Plus the language of computability is wonderful, and I think easily understood by middle-schoolers. -

Full details require a short paper rather than a long MO comment and Daniel Briggs and I have started discussing writing one jointly. But let me address some of what you ask right now. A finite collection of polynomials has only a finite number of roots on the real line. So you can find an interval in the real line that avoids all of them. Fixing the first digit of Liouville's number together with a minimum gap till the next digit does just that. Note that $1/10$ itself can't be the root of an irreducible polynomial of degree higher than $1$. Now you increase the degree and the height (maximum magnitude of the coefficients) you'll allow your polynomials and this gives you new roots to avoid, but you can still find a small "safe" interval inside the interval you had before. This is the essence of diagonalization. Each further specification of the number kills more polynomials. The trouble is that the set of roots of polynomials of bounded degree and height looks very complicated, so Liouville's trick takes extreme measures to avoid them all but without much computation (where Cantor would determine merely one more digit to avoid the fully calculated root of just one more polynomial). Now of course you're right - changing a transcendental number by a rational leaves it transcendental. That just means that each polynomial gets killed many times (who says you can't beat a dead horse in mathematics). -

Justin - it occurs to me that there's time value on this for you, right? Email me and I'll be glad to discuss this with you in detail so you have something soon to show your students! – David Feldman Dec 15 2010 at 2:29

Thanks! Will do! – Justin Lanier Dec 15 2010 at 3:21
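Following up on the digit-pattern arguments above, here is a small sketch (an illustration added to this copy, not from the thread) that prints the leading decimal digits of a truncation of Liouville's number and of its square, so the 1s, 2s and swaths of zeros described by Daniel Briggs can be seen directly. The truncation depth and number of printed places are arbitrary choices.

```
# Print the leading digits of a truncation of Liouville's number L = sum 10^(-n!)
# and of L^2, using exact rational arithmetic (illustrative sketch only).
from fractions import Fraction
from math import factorial

terms = 5  # truncation: keep only the first few 1s of L
L = sum(Fraction(1, 10 ** factorial(n)) for n in range(1, terms + 1))

def digits_after_point(x, places):
    """First `places` decimal digits of a rational x in [0, 1)."""
    return str(int(x * 10 ** places)).rjust(places, "0")

print("L   = 0." + digits_after_point(L, 60))      # 1s in the n! places
print("L^2 = 0." + digits_after_point(L * L, 60))  # 1s at the 2*n! places, 2s at the m!+n! places
```

The printed strings reproduce the expansions quoted in the first answer, with the widening runs of zeros between nonzero digits.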
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445004463195801, "perplexity_flag": "middle"}
http://en.wikibooks.org/wiki/Algorithms/Greedy_Algorithms
# Algorithms/Greedy Algorithms

In the backtracking algorithms we looked at, we saw algorithms that found decision points and recursed over all options from that decision point. A greedy algorithm can be thought of as a backtracking algorithm where at each decision point "the best" option is already known and thus can be picked without having to recurse over any of the alternative options.

The name "greedy" comes from the fact that the algorithms make decisions based on a single criterion, instead of a global analysis that would take into account the decision's effect on further steps. As we will see, such a backtracking analysis will be unnecessary in the case of greedy algorithms, so it is not greedy in the sense of causing harm for only short-term gain.

Unlike backtracking algorithms, greedy algorithms can't be made for every problem. Not every problem is "solvable" using greedy algorithms. Viewing the solution of an optimization problem as hill climbing, greedy algorithms can be used only for those hills where, at every point, taking the steepest step always leads to the peak.

Greedy algorithms tend to be very efficient and can be implemented in a relatively straightforward fashion, often with O(n) complexity, since there is only a single choice at every point. However, most attempts at creating a correct greedy algorithm fail unless a precise proof of the algorithm's correctness is first demonstrated. When a greedy strategy fails to produce optimal results on all inputs, we refer to it as a heuristic instead of an algorithm. Heuristics can be useful when speed is more important than exact results (for example, when "good enough" results are sufficient).

## Event Scheduling Problem

The first problem we'll look at that can be solved with a greedy algorithm is the event scheduling problem. We are given a set of events that have a start time and finish time, and we need to produce a subset of these events such that no events intersect each other (that is, have overlapping times), and that we have the maximum number of events scheduled as possible.

Here is a formal statement of the problem:

Input: events: a set of intervals $(s_i, f_i)$ where $s_i$ is the start time, and $f_i$ is the finish time.

Solution: A subset S of Events.

Constraint: No events can intersect (start time exclusive). That is, for all intervals $i=(s_i, f_i), j=(s_j, f_j)$ where $s_i < s_j$ it holds that $f_i\le s_j$.

Objective: Maximize the number of scheduled events, i.e. maximize the size of the set S.

We first begin with a backtracking solution to the problem:

```
// event-schedule -- schedule as many non-conflicting events as possible
function event-schedule(events array of s[1..n], f[1..n]): set
    if n == 0: return $\emptyset$ fi
    if n == 1: return {events[1]} fi
    let event := events[1]
    let S1 := union(event-schedule(events - set of conflicting events), event)
    let S2 := event-schedule(events - {event})
    if S1.size() >= S2.size(): return S1
    else return S2 fi
end
```

The above algorithm will faithfully find the largest set of non-conflicting events. It brushes aside details of how the set `events - set of conflicting events` is computed, but computing it would require $O(n)$ time.
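For readers who want to run the backtracking procedure directly, here is a Python transcription (a sketch, not part of the original wikibook; representing events as (start, finish) tuples is an assumption made for the example):

```
# A direct Python transcription of the backtracking pseudocode above.
def conflicts(a, b):
    """Two events conflict if their intervals overlap (start time exclusive)."""
    return a[0] < b[1] and b[0] < a[1]

def event_schedule(events):
    """Return a largest subset of mutually non-conflicting events (exponential time)."""
    if not events:
        return set()
    event = events[0]
    rest = events[1:]
    # Option 1: keep the first event, dropping everything that conflicts with it.
    compatible = [e for e in rest if not conflicts(e, event)]
    s1 = event_schedule(compatible) | {event}
    # Option 2: skip the first event entirely.
    s2 = event_schedule(rest)
    return s1 if len(s1) >= len(s2) else s2

if __name__ == "__main__":
    events = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
    print(event_schedule(events))  # e.g. {(1, 4), (5, 7), (8, 11)}
```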
Because the algorithm makes two recursive calls on itself, each with an argument of size $n - 1$, and because removing conflicts takes linear time, a recurrence for the time this algorithm takes is: $T(n) = 2\cdot T(n - 1) + O(n)$ which is $O(2^{n})$.

To do: a tighter bound is possible

But suppose instead of picking just the first element in the array we used some other criterion. The aim is to pick the "right" one so that we wouldn't need two recursive calls. First, let's consider the greedy strategy of picking the shortest events first, until we can add no more events without conflicts. The idea here is that the shortest events would likely interfere less than other events. There are scenarios where picking the shortest event first produces the optimal result. However, here's a scenario where that strategy is sub-optimal:

Above, the optimal solution is to pick events A and C, instead of just B alone. Perhaps instead of the shortest event we should pick the events that have the least number of conflicts. This strategy seems more direct, but it fails in this scenario:

Above, we can maximize the number of events by picking A, B, C, D, and E. However, the events with the least conflicts are 6, 2 and 7, 3. But picking one of 6, 2 and one of 7, 3 means that we cannot pick B, C and D, which includes three events instead of just two.

## Dijkstra's Shortest Path Algorithm

With two (high-level, pseudocode) transformations, Dijkstra's algorithm can be derived from the much less efficient backtracking algorithm. The trick here is to prove the transformations maintain correctness, but that's the whole insight into Dijkstra's algorithm anyway. [TODO: important to note the paradox that to solve this problem it's easier to solve a more-general version. That is, shortest path from s to all nodes, not just to t. Worthy of its own colored box.]

To see the workings of Dijkstra's Shortest Path Algorithm, take an example: There is a start and an end node, with two paths between them; one path has cost 30 on the first hop, then 10 on the last hop to the target node, with total cost 40. The other path costs 10 on the first hop, 10 on the second hop, and 40 on the last hop, with total cost 60.

The start node is given distance zero so it can be at the front of a shortest-distance queue; all the other nodes are given infinity or a large number, e.g. 32767. This makes the start node the first current node in the queue.

With each iteration, the current node is the first node of the shortest-distance queue. It looks at all nodes adjacent to the current node; for the case of the start node, in the first path it will find a node of distance 30, and in the second path, an adjacent node of distance 10. The current node's distance, which is zero at the beginning, is added to the distances of the adjacent nodes, and the distances from the start node of each node are updated, so the nodes will be 30+0 = 30 in the 1st path, and 10+0 = 10 in the 2nd path. Importantly, also updated is a previous pointer attribute for each node, so each node will point back to the current node, which is the start node for these two nodes.

Each node's priority is updated in the priority queue using the new distance. That ends one iteration. The current node was removed from the queue before examining its adjacent nodes.
In the next iteration, the front of the queue will be the node in the second path of distance 10, and it has only one adjacent node of distance 10; that adjacent node's distance will be updated from 32767 to 10 (the current node's distance) + 10 (the distance from the current node) = 20.

In the next iteration, the second-path node of cost 20 will be examined, and it has one adjacent hop of 40 to the target node, so the target node's distance is updated from 32767 to 20 + 40 = 60. The target node has its priority updated.

In the next iteration, the shortest-path node will be the first-path node of cost 30, and the target node has not yet been removed from the queue. It is also adjacent to the target node, with a total distance cost of 30 + 10 = 40. Since 40 is less than 60, the previously calculated distance of the target node, the target node's distance is updated to 40, and the previous pointer of the target node is updated to the node on the first path.

In the final iteration, the shortest-path node is the target node, and the loop exits. Looking at the previous pointers starting with the target node, a shortest path can be constructed in reverse as a list back to the start node.

Given the above example, what kind of data structures are needed for the nodes and the algorithm?

```
# author, copyright under GFDL
class Node:
    def __init__(self, label, distance=32767):
        # the original constructor had a bug: it used a shared (mutable) map as a
        # default argument, ", adjacency_distance_map = {}", so the map is created
        # inside the body instead
        self.label = label
        self.adjacent = {}  # adjacency map: keys are adjacent nodes, values are the adjacent distances
        self.distance = distance  # updated distance from the start node, used as the node's priority; default is 32767
        self.shortest_previous = None  # the last shortest-distance adjacent node
        # the logic is that the last adjacent distance added is recorded, for any distances of the same node added

    def add_adjacent(self, local_distance, node):
        self.adjacent[node] = local_distance
        print "adjacency to ", self.label, " of ", self.adjacent[node], " to ", node.label

    def get_adjacent(self):
        return self.adjacent.iteritems()

    def update_shortest(self, node):
        # node's adjacency map gives the adjacent distance for this node;
        # the new distance for the path to this (self) node is the adjacent distance
        # plus the other node's distance
        new_distance = node.adjacent[self] + node.distance
        updated = False
        if new_distance < self.distance:
            # if it is the shortest distance so far, record it and make that node the previous node
            self.distance = new_distance
            self.shortest_previous = node
            updated = True
        return updated


MAX_IN_PQ = 100000

class PQ:
    def __init__(self, sign=-1):
        self.q = [None] * MAX_IN_PQ  # preallocate the array
        self.sign = sign             # a negative sign makes this a minimum priority queue
        self.end = 1                 # the next free slot of self.q; valid entries occupy 1 .. end-1
        self.map = {}                # maps each item to its position in the heap

    def insert(self, priority, data):
        self.q[self.end] = (priority, data)
        # sift up after insert
        p = self.end
        self.end = self.end + 1
        self.sift_up(p)

    def sift_up(self, p):
        # p is the current node's position;
        # q[p][0] is the priority, q[p][1] is the item or node.
        # while the parent exists (p >= 1) and the parent's priority is less than the current node's priority
        while p / 2 != 0 and self.q[p / 2][0] * self.sign < self.q[p][0] * self.sign:
            # swap the parent and the current node, and make the current node's position the parent's position
            tmp = self.q[p]
            self.q[p] = self.q[p / 2]
            self.q[p / 2] = tmp
            self.map[self.q[p][1]] = p
            p = p / 2
        # this maps the node to its position in the priority queue
        self.map[self.q[p][1]] = p
        return p

    def remove_top(self):
        if self.end == 1:
            return (-1, None)
        (priority, node) = self.q[1]
        # put the end of the heap at the top of the heap, and sift it down to adjust the heap
        # after the heap's top has been removed. this takes log2(N) time, where N is the size of the heap.
        self.q[1] = self.q[self.end - 1]
        self.end = self.end - 1
        self.sift_down(1)
        return (priority, node)

    def sift_down(self, p):
        while 1:
            l = p * 2
            # if the left child's position is past the last used slot (end-1),
            # then neither child exists
            if l >= self.end:
                break
            r = l + 1
            # the selected child node should have the greatest priority
            t = l
            if r < self.end and self.q[r][0] * self.sign > self.q[l][0] * self.sign:
                t = r
            print "checking for sift down of ", self.q[p][1].label, self.q[p][0], " vs child ", self.q[t][1].label, self.q[t][0]
            # if the selected child with the greatest priority has a higher priority than the current node
            if self.q[t][0] * self.sign > self.q[p][0] * self.sign:
                # swap the current node with that child, and update the mapping of the child node to its new position
                tmp = self.q[t]
                self.q[t] = self.q[p]
                self.q[p] = tmp
                self.map[tmp[1]] = p
                p = t
            else:
                break  # stop swapping if the greatest-priority child has a lesser priority than the current node
        # after the sift down, update the new position of the current node.
        self.map[self.q[p][1]] = p
        return p

    def update_priority(self, priority, data):
        p = self.map.get(data)
        if p is None:
            return -1
        print "priority prior to update", p, "for priority", priority, " previous priority", self.q[p][0]
        self.q[p] = (priority, self.q[p][1])
        p = self.sift_up(p)
        p = self.sift_down(p)
        print "updated ", self.q[p][1].label, p, "priority now ", self.q[p][0]
        return p


class NoPathToTargetNode(BaseException):
    pass


def test_1():
    st = Node('start', 0)
    p1a = Node('p1a')
    p1b = Node('p1b')
    p2a = Node('p2a')
    p2b = Node('p2b')
    p2c = Node('p2c')
    p2d = Node('p2d')
    targ = Node('target')

    st.add_adjacent(30, p1a)
    # st.add_adjacent(10, p2a)
    st.add_adjacent(20, p2a)
    # p1a.add_adjacent(10, targ)
    p1a.add_adjacent(40, targ)
    p1a.add_adjacent(10, p1b)
    p1b.add_adjacent(10, targ)
    # testing alternative
    # p1b.add_adjacent(20, targ)
    p2a.add_adjacent(10, p2b)
    p2b.add_adjacent(5, p2c)
    p2c.add_adjacent(5, p2d)
    # p2d.add_adjacent(5, targ)
    # chooses the alternate path
    p2d.add_adjacent(15, targ)

    pq = PQ()
    # st.distance is 0, but the others have the default starting distance 32767
    pq.insert(st.distance, st)
    pq.insert(p1a.distance, p1a)
    pq.insert(p2a.distance, p2a)
    pq.insert(p2b.distance, p2b)
    pq.insert(targ.distance, targ)
    pq.insert(p2c.distance, p2c)
    pq.insert(p2d.distance, p2d)
    pq.insert(p1b.distance, p1b)

    node = None
    while node != targ:
        (pr, node) = pq.remove_top()
        if node is None:
            print "target node not in queue"
            raise NoPathToTargetNode
        # debug
        print "node ", node.label, " removed from top "
        if pr == 32767:
            print "max distance encountered so no further nodes updated. No path to target node."
            raise NoPathToTargetNode
        # update the distances of all nodes adjacent to this node, and update an adjacent node's
        # priority if a shorter distance was found for it (.update_shortest(..) returns True).
        # this is the greedy part of Dijkstra's algorithm: always greedy for the shortest
        # distance, using the priority queue.
        for adj_node, dist in node.get_adjacent():
            # debug
            print "updating adjacency from ", node.label, " to ", adj_node.label
            if adj_node.update_shortest(node):
                pq.update_priority(adj_node.distance, adj_node)

    print "node and targ ", node, targ, node != targ
    print "length of path", targ.distance
    print " shortest path"
    # create a reversed list from the target node, through the shortest-path nodes, to the start node
    node = targ
    path = []
    while node is not None:
        path.append(node)
        node = node.shortest_previous
    for node in reversed(path):  # new iterator version of list.reverse()
        print node.label


if __name__ == "__main__":
    test_1()
```

## Minimum spanning tree

Wikipedia has related information at Minimum spanning tree
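A classic greedy example for this topic is Kruskal's algorithm: sort the edges by weight and repeatedly accept the cheapest edge that joins two different components. Below is a minimal illustrative sketch (not taken from the wikibook); the function names and the example graph are made up for the demonstration, and a small union-find helper keeps track of components.

```
# A minimal sketch of Kruskal's greedy MST algorithm (illustrative only).
# Edges are (weight, u, v) tuples; vertices are any hashable labels.

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):
        # find the representative of v's component, with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for weight, u, v in sorted(edges):   # greedy: cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                     # accept only edges joining two different components
            parent[ru] = rv
            tree.append((weight, u, v))
    return tree

if __name__ == "__main__":
    vs = ["a", "b", "c", "d"]
    es = [(1, "a", "b"), (4, "a", "c"), (3, "b", "c"), (2, "c", "d"), (5, "b", "d")]
    print(kruskal(vs, es))  # [(1, 'a', 'b'), (2, 'c', 'd'), (3, 'b', 'c')]
```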
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309988021850586, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/88014/list
## Return to Question

2 added 266 characters in body; edited body

Let $M$ be a commutative monoid with zero. Then the condition $M^* = M \setminus \{0\}$ is very similar to the condition for a commutative ring to be a field. This analogy is also used in the work "Schemes over $\mathbb{F}_1$ and Zeta functions" by Connes and Consani. However they don't give these monoids a name. A very silly idea might be to call them "monoid fields". Question. What are these monoids called in the literature? If there is no existing terminology yet, which one would you propose? The answer by BS tells us that in the non-commutative case these are called groups with zero. My question deals with the commutative case. I would like to have a proper name, not just a combination such as "abelian group with zero" (which is confusing anyway).

1 [made Community Wiki]

# Terminology for certain monoids which are to monoids as fields are to rings

Let $M$ be a commutative monoid with zero. Then the condition $M^* = M \setminus \{0\}$ is very similar to the condition for a commutative ring to be a field. This analogy is also used in the work "Schemes over $\mathbb{F}_1$ and Zeta functions" by Connes and Consani. However they don't give these monoids a name. A very silly idea might be to call them "monoid fields". Question. What are these monoids called in the literature? If there is no existing terminology yet, which one would you propose?
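For orientation (an added remark, not part of the original question): if $M \neq \{0\}$ and $M^* = M \setminus \{0\}$, then the nonzero elements are closed under multiplication (a product of units is a unit, and $0$ is not a unit), so

$$M = G \cup \{0\},\qquad G = M^* \text{ an abelian group},$$

and conversely adjoining an absorbing zero to any abelian group $G$ produces such a monoid. This is exactly why "abelian group with zero" suggests itself as a name.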
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9632751941680908, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/5283/de-sitter-cosmologic-limit?answertab=votes
# de Sitter cosmological limit

It has been said that our universe is going to eventually become a de Sitter universe. Expansion will accelerate until the relative recession speeds of distant galaxies become higher than the speed of light. So I want to understand what happens after this point: from our point of view, we see a progressively shrinking event horizon (each galaxy sees one, arguably each spacetime point sees one). Now, what is the Hawking radiation expected from this event horizon? It would seem that the event horizon area is shrinking around us, but it is actually like a black hole turned inside out: the black hole is actually outside the event horizon, and the visible "well-behaved" spacetime without singularities is on the inside of this horizon. In any case, intuitively (I don't have any arguments to support this) I would expect that the Hawking radiation inside the horizon would grow larger as this horizon shrinks, but I would love to hear what is actually expected to happen at this point. -

I think it is a very good question. +1 – user1355 Feb 16 '11 at 15:59

## 4 Answers

The final fate of the universe can be computed. The de Sitter spacetime cloaks every region with a cosmological horizon, which is similar in some ways to a Rindler horizon. $10^{100}$ years from now there will be nothing going on: black holes will have decayed away and largely the universe is a de Sitter vacuum. There may be a smattering of neutron stars around, which, as I recall, exist for a very long time. This horizon does emit quanta, though the flux of this radiation is exceedingly small. The universe will then eventually decay as the vacuum decays and the cosmological horizon retreats off "to infinity." Nothing else much will be going on. We consider the decay of the de Sitter vacuum by quantum means and the prospect for this as a mechanism for the production of nascent cosmologies or baby universes. The observable universe, under eternal inflation from dark energy, will asymptotically evolve to a de Sitter spacetime. This spacetime is a vacuum configuration with a cosmological constant $\Lambda$. The stationary metric for this spacetime is $$ds^2~=~A(r)dt^2~-~A(r)^{-1}dr^2~-~r^2d\Omega^2,~ A(r)~=~(1~-~\Lambda r^2/3)$$ A radial null geodesic with $ds^2~=~0$ and $d\Omega^2~=~0$ gives the velocity ${\dot r}~=~dr/dt$ $=~A(r)$, where this pertains to both outgoing and ingoing geodesics near the cosmological horizon $r~=~\sqrt{3/\Lambda}$ as measured from $r~=~0$. The total action for the motion of a particle is $S~=~\int p_r dr$ $-~\int Hdt$. Consider the bare action of massless particles, using methods found in [1], across the horizon from $r$ to $r'$, $$S~=~\int_r^{r'}p_rdr~=~\int_r^{r'}\int_0^{p_r}dp_rdr.$$ The radial velocity of a particle is ${\dot r}~=~dr/dt$ $=~dH/dp_r$, which enters into the action as $$S~=\int_r^{r'}\int_0^H{{dH'}\over{\dot r}}dr.$$ The field defines $H^\prime~=~\hbar\omega'$. The integration over frequencies is from $E$ to $E~-~\omega$, for the ADM energy.
The action is properly written as $$S~=~-\hbar\int_r^{r'}\int_E^{E-\omega}{{d\omega'}\over{\dot r}}dr,$$ where the negative sign indicates the quanta are tunneling across the horizon to escape the Hubble region with radius $\sqrt{3/\Lambda}$. The radial velocity $${\dot r}~=~\sqrt{\Lambda/3}r$$ defines the action $$S~=~-\hbar\int_r^{r'}\int_0^\omega{{d\omega dr}\over{\pm 1~-~\sqrt{\Lambda r^2/3} }}~=~\sqrt{3/\Lambda}\tanh^{-1}(\sqrt{\Lambda/3}r)$$ The action is then the delay coordinate evaluated as $$r^*~=~\int {{dr}\over{1~-~\Lambda r^2/3}}~=~\sqrt{3/\Lambda}\tanh^{-1}(\sqrt{\Lambda/3}r).$$ The domain $[0,~\sqrt{3/\Lambda})$ defines a real valued action. Since $\tanh^{-1}(x)~=~{1\over 2}\ln((1~+~x)/(1~-~x))$, for $r~>~\sqrt{3/\Lambda}$ the argument of the logarithm is negative. In this case the action is $$S~=~\sqrt{3/\Lambda}\ln\Big({{\sqrt{\Lambda/3}r~+~1}\over{\sqrt{\Lambda/3}r~-~1 }}\Big)~+~i\pi\sqrt{3/\Lambda}.$$ The imaginary part represents the action for the quantum field emission as $r~\rightarrow~\infty$. The delay coordinate is defined on $[0,~\infty)$ which assures an S-matrix is defined on an unbounded causal domain, and this holds in general as well. This action does describe the emission of photons by the cosmological horizon. A tiny production of bosons occurs which causes the horizon to slowly retreat away. Eventually the de Sitter spacetime decays away into a Minkowski spacetime as $t~\rightarrow~\infty$. -

"The final fate of the universe can be computed"---I love it! But I thought that the answer was supposed to be 42 :)+1 – Gordon Feb 16 '11 at 18:15

The last two lines are not clear. There are two integral signs with limits over $\Lambda$, and the integration to obtain the last line seems wrong? – MBN Feb 16 '11 at 18:24

2 @lurscher: This answer is utter crap. He made up the physics of deSitter horizons--- this is not what happens, the "photon emission" is not emission, and it doesn't make the cosmological horizon go away. This is nonsense, -1, and please, please, unaccept it. – Ron Maimon Sep 9 '12 at 3:04

1 The action you compute is meaningless, the imaginary part you are considering is nonsense, the emission of a deSitter horizon is given by redshifting a local Unruh temperature of the near-horizon limit as always. – Ron Maimon Sep 9 '12 at 3:10

3 This answer is problematic and nonstandard. There are parts that are correct, parts that are false and parts that are pure speculation. It's better to keep things ordered by what happens classically, semiclassically and then in specific proposals of QG. Therefore I prefer Lubos' answer – Columbia Sep 9 '12 at 6:00

We are already living in a nearly empty de Sitter space - the cosmological constant already represents 73% of the energy density in the Universe - and the Universe won't experience any qualitative change in the future: the percentage will just approach 100%. However, once the space may be approximated as an empty de Sitter space, all moments of time are physically equivalent. It's because de Sitter space belongs among the so-called "maximally symmetric spacetimes" - in which each point may be mapped to any other point by a symmetry transformation (isometry). So nothing will change qualitatively: the radius of the cosmic horizon will converge towards those 100 billion light years or so and never change again; we are not far from that point.
Yes, it is true that de Sitter space is analogous to a black hole except that the interior of the black hole is analogous to the space behind the cosmic horizon - outside the visible Universe. The de Sitter space also emits its thermal radiation, analogous to the Hawking radiation. It's a radiation emitted from the cosmic horizon "inwards". Because the interior of the de Sitter patch is compact, unlike its black hole counterpart (the exterior of the black hole), the radiation is reabsorbed by the cosmic horizon after some time and the de Sitter space no longer loses energy. While the right theoretical description of the thermal radiation in de Sitter space is a theoretician's puzzle par excellence (we are only "pretty sure" about the semiclassical limit, and don't even know whether there exists any description that is more accurate than that), it has absolutely no impact on observable physics because the typical wavelength of the de Sitter thermal radiation is comparable to the radius of the Universe. (Note that it's true for black holes, too: the wavelength of the Hawking radiation is mostly comparable to the black hole radius.) Such low-energy quanta are obviously unobservable in practice - and in some sense, they're probably unobservable even in theory. You should imagine that there are just $O(1)$ thermal photons emitted by the cosmic horizons inside the visible cosmos whose energy is $10^{-60}$ Planck energies per photon. From an empirical viewpoint, it's ludicrous. - so the event horizon will reach an stable radius? what happens with the dark energy/cosmological constant expansion acceleration? is there an inflexion point after it starts to lower the acceleration and become zero? thanks for your answer @Lubos – lurscher Feb 16 '11 at 16:16 Hi @lurscher, the energy density in de Sitter space is a universal constant - that's why it's called its (positive) cosmological constant. By Einstein's equations, a constant energy density causes the same spacetime curvature - with the same (proper) curvature radius - which is locally interpreted as the Hubble constant. So $H=\dot a/a$ is constant, too. This can be solved by $a$ growing exponentially with the proper time of the static observer. But this exponential growth is just a coordinate effect. In other coordinates, you may see that de Sitter space is maximally symmetric. – Luboš Motl Feb 16 '11 at 16:49 There is no inflection point or anything like that: the acceleration - given by the spacetime curvature - is complete and eternal (positive) constant because it is linked to the cosmological constant. I feel that I must have written the same thing many times in the main answer so I wonder why you keep on asking the same question. – Luboš Motl Feb 16 '11 at 16:51 2 @lurscher: +1, and please accept this answer. The answer you accepted is completely, totally, unequivocally, wrong. – Ron Maimon Sep 9 '12 at 3:05 1 @lurscher: The horizon is NOT receding after reaching deSitter state, it is staying put. It is not contracting either. The local temperature is given by the surface gravity of the horizon, redshifted consistently to the entire spacetime, as always. This doesn't cause the horizon to grow or shrink, and this has been well established since before 1980. – Ron Maimon Sep 9 '12 at 3:08 show 3 more comments The Hawking radiation of a deSitter space is given, as always, from the near-horizon metric, consistently redshifted to fill all space. 
Normalizing the cosmological constant appropriately: $$ds^2 = - (1- {\Lambda\over 3} r^2) dt^2 + {dr^2 \over (1 - {\Lambda\over 3} r^2)} + r^2 d\Omega^2$$ If you flip the sign of $dt$, you can identify the metric of a 4-sphere after appropriate coordinate transformations, and from this read off the periodicity of t, which goes around the sphere. But this is not necessary. The near horizon metric is Rindler (as usual for hot horizons) and writing $r=r_0 - {u^2\over 2r_0}$ where $r_0$ is the deSitter radius, you find: $$ds^2 = - {\Lambda\over 3} u^2 dt^2 + du^2$$ Which gives an imaginary time period of $2\pi\sqrt{3/\Lambda}$ (equivalently, a local Rindler temperature $1\over 2\pi u$ at proper distance $u$ from the horizon). Extending this using the redshift factor, the temperature at the center is $$1\over {2\pi r_0}$$ Where $$r_0 = \sqrt{3\over\Lambda}$$ Which is the usual deSitter temperature. This temperature is locally the same everywhere, because the space is isotropic. The horizon is static and stable, in equilibrium with this thermal bath; it doesn't grow and it doesn't shrink. - but i don't see why you say the horizon is static and in equilibrium. can you elaborate? – lurscher Sep 9 '12 at 18:20 @lurscher: Because the metric doesn't change in t once it reaches deSitter. The horizon stays put in r, sucking things into it, and that's the exponential inflation. – Ron Maimon Sep 9 '12 at 19:32 Hawking radiation from a de Sitter horizon is the cosmic microwave background radiation. Both are black body radiation. The Big Bang is a myth. - that is an interesting viewpoint i haven't heard or read before, not sure right off if it makes sense or what direct evidence contradicts. I would appreciate if you could provide some references – lurscher Sep 4 '12 at 16:53 No references, just intuition. Sorry, I don't usually operate on that level. So, I can't create a new paragraph in this format. Strange. More like a telegram. OK--consider, if de Sitter's model is correct, then there really exists a space-time horizon at a finite distance from the local observer. All inside this is called the "observable universe". At this horizon, we will see the same kinds of effect we see at the horizon of a black hole. Including Hawking radiation. This is what Bell Labs observed, not relic radiation from an imaginary, anthropocentric "big bang" Copyright Breton Carr 2012 – breton carr Sep 9 '12 at 0:32 2 Maimon: I don't think your attitude is logical. It is possible to argue against a false statement, people do it all the time, without having a full-blown argument to deflate. It's not necessary to be annoying to be honest, but academia can train that talent out of the unwary. You may be the next Newton, son, but I think your real talent lies in Bunny of Love! Love and neutrinos to all. – breton carr Sep 10 '12 at 4:30 1 People sometimes forget that there are several pieces of observational support for a hot dense epoch. The point is, ok, you may assume that all CMB radiation comes from some other place. But to take your idea seriously you have to be compatible with all other observations. For instance, say that you explain the black body radiation, what about the $10^{-5}$ perturbations around it with an almost Harrison-Zeldovich spectrum? And how can you explain the abundance of light elements with a radiation dominated phase? The baryonic acoustic oscillations which match the CMB perturbations? – Sandro Vitenti Sep 21 '12 at 22:40 1 To throw ideas is easy.
The difficult part is to know the literature, the observations and work on the ideas to match everything already known (and more). – Sandro Vitenti Sep 21 '12 at 22:44
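As a quick numerical aside (not part of the thread above): plugging the observed cosmological constant into the standard de Sitter temperature $T=\frac{1}{2\pi}\sqrt{\Lambda/3}$ (in units $\hbar=c=k_B=1$) shows just how tiny the effect is. The value of $\Lambda$ below is an assumed round figure, and the script is only a back-of-the-envelope sketch.

```python
import math

# Assumed inputs: SI constants and Lambda ~ 1.1e-52 m^-2 (a rough observed value).
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
k_B  = 1.380649e-23      # J / K
Lam  = 1.1e-52           # 1 / m^2

r0 = math.sqrt(3.0 / Lam)                    # de Sitter horizon radius, r0 = sqrt(3/Lambda)
T  = hbar * c / (2.0 * math.pi * k_B * r0)   # T = (hbar c / k_B) * sqrt(Lambda/3) / (2 pi)

E_quantum  = k_B * T                         # typical energy of an emitted quantum
E_planck   = 1.956e9                         # Planck energy in joules (approx.)
wavelength = 2.0 * math.pi * hbar * c / E_quantum

print(f"horizon radius r0     ~ {r0:.2e} m (~{r0/9.461e15/1e9:.0f} Gly)")
print(f"de Sitter temperature ~ {T:.1e} K")
print(f"quantum energy        ~ {E_quantum/E_planck:.1e} Planck energies")
print(f"typical wavelength    ~ {wavelength:.1e} m (a few tens of times r0)")
```

The printed numbers (a temperature around $10^{-30}\,$K, quanta of roughly $10^{-62}$ Planck energies, wavelengths comparable to the horizon radius) are in line with the ludicrously small figures quoted in the answers above.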
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9239750504493713, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/75756/example-of-a-simple-pole?answertab=oldest
# Example of a simple pole I was told that $\operatorname{sech} x$ has a simple pole. Could someone please explain what that means? I have looked up the definition but it involves too much jargon like holomorphic, etc. Is there a simple definition and why is this true? Thanks. - 2 For an even simpler example of a simple pole: $x=0$ is a simple pole for the function $\dfrac1{x}$. – J. M. Oct 25 '11 at 15:26 1 Stanisław Łojasiewicz was a fantastic mathematician who proved very difficult inequalities about the growth of functions on complex analytic spaces, and yet he was very modest and always behaved like a simple Pole. – Georges Elencwajg Oct 25 '11 at 16:17 @GeorgesElencwajg: Haha :D – simpleton Oct 25 '11 at 23:45 ## 1 Answer Recall that $\operatorname{sech}(x) = \frac{1}{\cosh(x)}$. When $x \in \mathbb{R}$, the hyperbolic cosine satisfies $\cosh(x) \geq 1$, so it never vanishes and $\operatorname{sech}(x)$ has no poles on the real axis. Zeros of the hyperbolic cosine are all along the imaginary axis at $z_n = i \frac{\pi}{2} + i \pi n$. Consider a vicinity of such a zero: $$\begin{align} \frac{1}{\cosh(z_n + \epsilon)} &= \frac{1}{\cosh(z_n) \cosh(\epsilon) + \sinh(z_n) \sinh(\epsilon)}\\ &= \frac{1}{\sinh(z_n)} \frac{1}{\sinh(\epsilon)}\\ &\sim \frac{1}{\sinh(z_n)} \left( \frac{1}{\epsilon} + o(1) \right) \end{align}$$ The order of the pole is one, so it is called simple. But as you see, $\operatorname{sech}(x)$ has infinitely many simple poles. Added: The series expansion for $\frac{1}{\sinh(\epsilon)}$ follows from the series expansion $\sinh(\epsilon) \sim \epsilon + \frac{\epsilon^3}{3!} + \ldots + \frac{1}{(2n+1)!}\epsilon^{2n+1} + o(\epsilon^{2n+2})$. - Thanks, Sasha and @J.M. ! – simpleton Oct 25 '11 at 15:22 @simpleton: Because of those poles, $\mathrm{sech}$ is what would be termed a meromorphic function. – J. M. Oct 25 '11 at 15:25
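For a reader who wants to see the simple pole numerically rather than through the algebra above, here is a small Python check (an editorial addition, standard library only): the product $(z-z_0)\operatorname{sech}(z)$ should tend to the finite residue $1/\sinh(i\pi/2)=-i$, while $(z-z_0)^2\operatorname{sech}(z)$ should tend to $0$.

```python
import cmath

z0 = 1j * cmath.pi / 2           # the zero of cosh nearest the origin

for eps in (1e-2, 1e-4, 1e-6):
    z = z0 + eps                  # approach the suspected pole
    sech = 1.0 / cmath.cosh(z)
    print(eps, (z - z0) * sech, (z - z0) ** 2 * sech)

# Expected: the first product approaches 1/sinh(i*pi/2) = -i (a finite, nonzero
# residue, matching the 1/sinh(z_n) factor in the answer), while the second
# approaches 0 -- exactly the behaviour of a simple (order-one) pole.
```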
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314473271369934, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/180945-joint-probability-distribution-function-two-r-vs.html
# Thread: 1. ## Joint probability distribution of a function of two r.v.s If X and Y are independent standard normal random variables, determine the joint density function of $U=X, \ V=\frac{X}{Y}$. Then use your result to show that $\frac{X}{Y}$ has a Cauchy distribution. I get: $f(x,y)=f_X(x)f_Y(y)= \frac{1}{2\pi}e^{-(x^2+y^2)/2}$ now $u=x \ \text{and} \ v=\frac{x}{y} \rightarrow y=\frac{u}{v} \ \text{and} \ x=u$ $J(x,y)= \begin{vmatrix} 1 & 0 \\ \frac{1}{y} & \frac{-x}{y^2} \end{vmatrix} =\frac{-x}{y^2}$ so $J(x,y)^{-1} = \frac{y^2}{x}$ now $f_{U,V}(u,v)=f_{X,Y}(x,y)\,|J(x,y)|^{-1}= f_{X,Y}\!\left(u,\frac{u}{v}\right)\frac{u}{v^2}$ so $f_{U,V}(u,v)=\frac{1}{2\pi}e^{-\frac{u^2}{2}\left(1+\frac{1}{v^2}\right)}\cdot\frac{u}{v^2}$ and $f_V(v)= \int^\infty_{-\infty} \frac{1}{2\pi}e^{-\frac{u^2}{2}\left(1+\frac{1}{v^2}\right)}\cdot\frac{u}{v^2}\ du$ However when you evaluate the integral the answer seems to be meaningless. 2. Hello, The mistake is in there: $|J(x,y)|=\frac{|x|}{y^2}$. The absolute value will change the boundaries. I trust you for the formula of the change of variables. I've never seen it this way, so I can't really check. 3. It works: $f_V(v)= \int^\infty_{-\infty} \frac{1}{2\pi}e^{-\frac{u^2}{2}\left(1+\frac{1}{v^2}\right)}\cdot\frac{|u|}{v^2}\ du$ $f_V(v)=2 \int^\infty_0 \frac{1}{2\pi}e^{-\frac{u^2}{2}\left(1+\frac{1}{v^2}\right)}\cdot\frac{u}{v^2}\ du$ $f_V(v)= \int^\infty_0 \frac{1}{\pi}e^{-\frac{u^2}{2}\left(1+\frac{1}{v^2}\right)}\cdot\frac{u}{v^2}\ du$ Now integrate with the substitution $w=-\frac{u^2}{2}\left(1+\frac{1}{v^2}\right)$
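As a sanity check on the end result of this thread (an editorial addition, not part of the original posts): a quick Monte Carlo experiment with NumPy confirms that $X/Y$ for independent standard normals follows the standard Cauchy density $f_V(v)=\frac{1}{\pi(1+v^2)}$, which is what the corrected integral evaluates to.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
v = x / y                                     # the ratio X/Y

edges = np.linspace(-5, 5, 41)
counts, _ = np.histogram(v, bins=edges)
width = edges[1] - edges[0]
hist = counts / (v.size * width)              # empirical density (total count in the denominator,
                                              # since a Cauchy puts real mass outside [-5, 5])
centers = 0.5 * (edges[:-1] + edges[1:])
cauchy = 1.0 / (np.pi * (1.0 + centers**2))   # standard Cauchy density

print(np.max(np.abs(hist - cauchy)))          # small (~1e-3): the histogram matches the density
```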
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8827225565910339, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/50986/need-help-in-simplifying-the-root-of-a-cubic?answertab=votes
# Need help in simplifying the root of a cubic I have this not so great looking cubic polynomial in $x$: $$(a - b)^6 + 3 (a - b)^4 (-a^2 c - b^2 d) x + 3 (a - b)^2 (a^4 c^2 - 7 a^2 b^2 c d + b^4 d^2) x^2 + (-a^2 c - b^2 d)^3 x^3$$ where $a,b,c,d>0$ and $c+d=1$ and I need to find a real root of the above cubic (in fact there should be only one, as I know that the discriminant is negative). I know there are general solutions and Mathematica can easily whip out a complicated solution for it (which it does). Now, I know that the following nifty expression is a root (the root I'm looking for): $$x_1 = \frac{(a - b)^2}{\left((a^2 c)^{1/3} + (b^2 d)^{1/3}\right)^3}$$ and I see patterns of the solution in the coefficients of my cubic. However, I can't seem to obtain $x_1$ from my cubic in that form. Here's the cubic and the solution in Mathematica code: ````eqn = (a - b)^6 + 3 (a - b)^4 (-a^2 c - b^2 d) x + 3 (a - b)^2 (a^4 c^2 - 7 a^2 b^2 c d + b^4 d^2) x^2 + (-a^2 c - b^2 d)^3 x^3; sol = ((a - b)^2)/((a^2 c)^(1/3) + (b^2 d)^(1/3))^3; (*verify sol is a root of eqn*) In[3]:= Simplify[eqn /. x -> sol] Out[3]= 0 ```` As to how I know that it is a root, I found it in a paper (non-math) with no explanation other than "tedious algebra" and mailing the authors didn't help. I'll agree it was tedious to work out the math and arrive at this cubic, but I feel like I'm almost at the finish line and need a little push to cross the line. In my end use application, it's going to be a numerical routine and for my needs, it doesn't matter if it's one expression or the other as long as they are accurate to a certain tolerance. However, there's a certain elegance to the above solution and I'm very interested in nailing it down. I tried using the general formula for the roots of a cubic but it doesn't seem like a pretty place to begin. Are there alternate ways to approach this? - 1 May I ask where this cubic equation came from? – gorilla Jul 12 '11 at 7:22 ## 2 Answers Yes, it's a bit tough to derive by hand; here's my Mathematica session on how I arrived at your expression: ````{w,v,u}=FullSimplify[(Most[#]/Last[#])&[CoefficientList[(a-b)^6+3(a-b)^4 (-a^2 c-b^2 d)x+3 (a-b)^2 (a^4 c^2-7 a^2 b^2 c d+b^4 d^2) x^2+(-a^2 c-b^2 d)^3 x^3, x]]]; q = FullSimplify[(u^2 - 3v)/9]; r = FullSimplify[(2u^3 - 9u v + 27w)/54]; y = -PowerExpand[(FullSimplify[r + PowerExpand[Sqrt[FullSimplify[r^2 - q^3]]]])^(1/3)]; FullSimplify[y + q/y -u/3] (a - b)^2/(a^(2/3)*c^(1/3) + b^(2/3)*d^(1/3))^3 ```` - Knowing a root in this case does help. Putting $x=y(a-b)^2$ and introducing new coefficients $A=(a^2 c)^{1/3}$, $B=(b^2 d)^{1/3}$ simplifies the equation. Edit: For $z=y/(A+B)\$ the equation becomes $z^3-3Cz^2+3z-1=0\$ where $C=\frac{ A^2-7 A B+B^2}{(A+B)^2}$. - 3 I think you mean $x=y(a-b)^2$. – Gerry Myerson Jul 12 '11 at 7:02 Whoah, I wrote the same comment, character for character. Hivemind. – anon Jul 12 '11 at 7:04 You may mean $A=(a^2c)^{1/3}$ rather than $A=(ac^2)^{1/3}$ – Henry Jul 12 '11 at 7:05 @Gerry Yes, thanks. – Andrew Jul 12 '11 at 7:08
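Since the question is ultimately about a numerical routine, here is a short stand-alone check in Python (an alternative to the Mathematica verification above, with randomly chosen parameters) that the closed-form $x_1$ really is a root of the cubic:

```python
import random

random.seed(1)
for _ in range(5):
    a, b = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    c = random.uniform(0.05, 0.95)
    d = 1.0 - c                                   # enforce c + d = 1

    # the nifty closed-form root quoted in the question
    x1 = (a - b) ** 2 / ((a * a * c) ** (1 / 3) + (b * b * d) ** (1 / 3)) ** 3

    # evaluate the cubic at x1; the residual should be tiny compared with (a-b)^6
    eqn = ((a - b) ** 6
           + 3 * (a - b) ** 4 * (-a * a * c - b * b * d) * x1
           + 3 * (a - b) ** 2 * (a ** 4 * c ** 2 - 7 * a * a * b * b * c * d + b ** 4 * d ** 2) * x1 ** 2
           + (-a * a * c - b * b * d) ** 3 * x1 ** 3)
    print(f"residual {eqn:.2e}  vs  (a-b)^6 {(a - b) ** 6:.2e}")
```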
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171983003616333, "perplexity_flag": "middle"}
http://electronics.stackexchange.com/questions/29496/s-vi-2-derivation/29516
# S = VI*/2 derivation I was wondering Where I could find the derivation for the complex power formula, S=VI*/2, where S, V and I are complex phasors. I've seen a whole bunch of verifications where people sub stuff into the equation to show that it happens to work. Here's what I know so far, If $V=V_{M} ∠ \phi _{V}$ and $I=I_{M} ∠ \phi _{I}$ and $S = V_{RMS} \cdot I_{RMS}$, then $V_{RMS}= \dfrac{V_{M} ∠ \phi _{V}}{\sqrt{2}}$ and $I_{RMS}= \dfrac{I_{M} ∠ \phi _{I}}{\sqrt{2}}$ and S = Vm∠ø_v*Im∠ø_i/2 $S = \dfrac{V_{M} ∠ \phi _{V} \cdot I_{M} ∠ \phi _{I}}{2}$ - You'll have to define S, V, I, and whatever "*/" is supposed to mean. – Olin Lathrop Apr 8 '12 at 14:59 1 @OlinLathrop, it is I* for complex conjugate of I(current) and divided by two, since they are both sin waves(V and I*) so they both have their RMS conversion. – Kortuk♦ Apr 8 '12 at 16:19 ## 1 Answer Let V and I be the instantaneous voltage and current on a load. From the definition of power, voltage and current, we have the relation for instantaneous power: $p(t) = v(t) \cdot i(t)$ Which means that the power on a given instant $t$ is equal to the product of the voltage and the current exactly on that instant. I'll assume you're familiar with what the phasor representation actually means. Just to state that shortly: a phasor is a mathematical shorthand for representing a sinusoid at a given unknown frequency. So, $V=V_{M} ∠ \phi _{V}$ is a shorthand for $v(t) = V_{M} \cdot cos(\omega t+ \phi _{V})$. Similarly: $I=I_{M} ∠ \phi _{I}$ means $i(t) = I_{M} \cdot cos(\omega t+ \phi _{I})$. Multiplying $v(t) \cdot i(t)$ for all $t$, gives us the waveform of the instantaneous power for every $t$. Working on that multiplication: $s(t) = v(t) \cdot i(t) = V_{M} \cdot cos(\omega t+ \phi _{V}) \cdot I_{M} \cdot cos(\omega t+ \phi _{I})$ As $cos(u) \cdot cos(v) = \cfrac{1}{2} \cdot [cos(u-v)+cos(u+v) ]$, with $u = \omega t+ \phi _{V}$ and $v = \omega t+ \phi _{I}$, we can simplify the equation above to: $s(t) = v(t) \cdot i(t) = \cfrac{V_{M}I_{M}}{2} \cdot [cos(\phi _{V} - \phi _{I}) + cos(2\omega t+ \phi _{V} + \phi _{I})]$ This waveform is pretty interesting for itself: it is a constant value $\cfrac{V_{M}I_{M}}{2} \cdot cos(\phi _{V} - \phi _{I})$ summed by a sinusoid $\cfrac{V_{M}I_{M}}{2} cos(2\omega t+ \phi _{V} + \phi _{I})]$. This clearly shows that the instantaneous power is not constant with time. Based on that result, we can see that the mean power is equal to the non-varying component of $s(t)$ (it's pretty straightforward to prove that mathematically, one just have to solve the integral $\cfrac{1}{T}\int_{t}^{t+T}{s(t)dt}$ ) Motivated by this result, and by the pretty sweet geometrical interpretation of $VIcos(\phi _{V} - \phi _{I})$, that value has been defined as the real power, that is, the power that is actually delivered to the load. Now you know that this so called real power is nothing more than the mean power at the load. Diving into this concept a little bit (it's a pitty I can't draw here, but I'll try): Let v be a vector with magnitude ||v|| and phase $\phi_v$, and i be a vector with magnitude ||i|| and phase $\phi_i$ If you multiply ||i|| by $cos(\phi_v-\phi_i)$ you have the projection of i over v. On the other hand, $||i||sin(\phi_v-\phi_i)$ is said to be the component of i in quadrature with v. Now you can understand why the mean power has a cool geometric interpretation: the mean power is the voltage multiplied by the projection of the current over the voltage, on the phasor space. 
This motivated the creation of the complex power S as: ````S = P + jQ ```` With this definition, the real part of the vector is exactly the mean power delivered to the load, and the complex part is the power said to be in quadrature, called reactive power (google for Power Triangle to see the geometrical interpretation of this result). Ok, now going back to the $s(t)$ definition, we see that $P = \cfrac{V_M I_M}{2} \cdot cos(\phi_v - \phi_i)$ and $Q$, by definition, and to comply with the definition of S, is equal to $\cfrac{V_M I_M}{2} \cdot sin(\phi_v - \phi_i)$ So, as we wanted to prove at the begining: $S = P + jQ = \cfrac{V_M I_M}{2} \cdot cos(\phi_v - \phi_i) + j\cfrac{V_M I_M}{2} \cdot sin(\phi_v - \phi_i)$ $S = \cfrac{V_M I_M}{2} \cdot [cos(\phi_v - \phi_i) + jsin(\phi_v - \phi_i) ]$ $S = \dfrac{V_{M} ∠ \phi _{V} \cdot I_{M} ∠ -\phi _{I}}{2}$ $S = \cfrac{V \cdot I*}{2}$ So, there you go, what you wanted to see ;) edit: What's the physical interpretation of Q? I've shown above what's the physical interpretation of the real part of the complex power, P, that is, the mean power delivered to the load. But what's exactly Q, how can one visualize it? It's based on the fact that cos and sin are orthogonal, and the principle of superposition can be applied to power if the two waveforms involved in the calculation are orthogonal. Let's go into the math, because that's really what matters. Using the result obtained above: $s(t) = \cfrac{V_{M}I_{M}}{2} \cdot [cos(\phi _{V} - \phi _{I}) + cos(2\omega t+ \phi _{V} + \phi _{I})]$ • First case: purely resistive load, so that $\phi _{V} - \phi _{I} = 0$ $s(t) = \cfrac{V_{M}I_{M}}{2} \cdot [1 + cos(2(\omega t+ \phi _{V}))]$ That is a sinusoid centered on $\cfrac{V_{M}I_{M}}{2}$ with that same amplitude (its minimum value is 0 and its maximum value is $V_{M}I_{M}$ ). Let's call it P • Second case: purely inductive load, so that $\phi _{V} - \phi _{I} = \cfrac{\pi}{2}$ $s(t) = \cfrac{V_{M}I_{M}}{2} \cdot [0 - cos(2(\omega t+ \phi _{V}) - \cfrac{\pi}{2} )]$ $s(t) = \cfrac{V_{M}I_{M}}{2} \cdot [sin(2(\omega t+ \phi _{V}))]$ That is a purely oscillatory waveform with mean value equal to 0. Let's call this result Q. • Third case: the generic case $\phi _{V} - \phi _{I} = \theta$ In this case, s(t) is exactly the general equation we found on the discussion above. But we can rewrite that to make use of the result of the two previous cases, like this: First, we rewrite the equation in terms of $\theta$ (notice that $\phi_V + \phi_I = \phi_V -\phi_V + \phi_V + \phi_I = 2\phi_V - \theta$): $s(t) = \cfrac{V_{M}I_{M}}{2} \cdot [cos(\theta) + cos(2(\omega t+ \phi _{V}) - \theta)]$ Knowing that: $cos(x-y) = cos(x)cos(y) + sin(x)sin(y)$, letting $x = 2(\omega t+ \phi _{V})$ and $y = \theta$ $s(t) = \cfrac{V_{M}I_{M}}{2} \cdot [cos(\theta) + cos(\theta)cos(2(\omega t + \phi_V)) + sin(\theta)sin(2(\omega t + \phi_V))]$ Rearranging the terms: $s(t) = cos(\theta) \cdot \cfrac{V_{M}I_{M}}{2} \cdot [1 + cos(2(\omega t + \phi_V))] + sin(\theta) \cdot \cfrac{V_{M}I_{M}}{2} sin(2(\omega t + \phi_V))$ Using the result of the two first cases above: $s(t) = cos(\theta)P + sin(\theta)Q$ An amazing result, right? What does that mean? 
Let's go back to what we are doing: calculating the power for the generic case where $\phi _{V} - \phi _{I} = \theta$, that is, evaluating the product $s(t) = V_{M}cos(\omega t + \phi_V) \cdot I_{M}cos(\omega t + \phi_I)$ Can we rewrite $i(t) = I_{M}cos(\omega t + \phi_I)$ in the form $i(t) = K_1 cos(\omega t + \phi_V) + K_2 sin(\omega t + \phi_V)$? Let's try: $\phi_I = \phi_V - \theta$, so $i(t) = I_{M}cos(\omega t + \phi_V - \theta)$. Letting $\omega t + \phi_V = u$ and $\theta = v$, and using the relation $cos(u-v) = cos(u)cos(v) + sin(u)sin(v)$, we have: $i(t) = I_{M}cos(\theta)cos(\omega t + \phi_V) + I_{M}sin(\theta)sin(\omega t + \phi_V)$ Just what we wanted: to rewrite i(t) as a sum of two components, one in phase with v(t), and one in quadrature with v(t)! Now the result of case 3 can be explained: i(t) can be decomposed into two components, as shown above, and the power generated by i(t) is equal to the sum of the powers generated by each of these components individually. Whoa, just like superposition but for power! (Remember that this is only true, and it was proven above, because cos and sin are orthogonal.) So Q is the amount of power generated by the component of i(t) that's in quadrature with v(t). It is purely oscillatory and has no mean value. P is the amount of power generated by the component of i(t) that's in phase with v(t). It is oscillatory but has a mean value that's equal to the mean power delivered to the load. And the complex power S, the total power, is exactly the sum of these two components. - Thank you for your good explanation! I have a few questions though: 1. I don't follow what happened to $-\cfrac{V_{M}I_{M}}{2} cos(2\omega t+ \phi _{V} + \phi _{I})$. I thought this term would be the reactive power, Q; however, $Q = ||i||sin(\phi_v-\phi_i)$. 2. I don't understand how you went from $S = \cfrac{V_M I_M}{2} \cdot [cos(\phi_v - \phi_i) + jsin(\phi_v - \phi_i) ]$ to $S = \dfrac{V_{M} ∠ \phi _{V} \cdot I_{M} ∠ -\phi _{I}}{2}$. It's as though $\cos(\phi_v - \phi_i)$ is a phasor, but it's just a constant. Thanks again for your answer! – user968243 Apr 9 '12 at 1:49 Yep, you're right, that's NOT Q. The reactive power is defined only in terms of the phase difference between voltage and current, and it's a value that's directly related to the definition of S as a phasor. It's the power that would be delivered by the current in quadrature with the voltage. The time varying component is not taken into account, because in this sense what really matters is the mean power at the load. The varying part EXISTS, is really there (watch an incandescent light bulb, for example), but, over time, the power is related only to the static part of s(t). ;) – Castilho Apr 9 '12 at 2:10 Okay, so does this varying part have a special name? Anyway, so if I understand it correctly, the amount of I in the direction of V is the real power, and the amount of I perpendicular to V is the complex power. – user968243 Apr 9 '12 at 4:48 almost that, the amount of I in the direction of V multiplied by V is the real power P, the amount of I perpendicular to V multiplied by V is the REACTIVE power Q, P+jQ is the complex power, or apparent power ;) – Castilho Apr 9 '12 at 9:37 Okay, that makes sense! Actually in my previous comment, I was asking what the name for this is: $-\cfrac{V_M I_M}{2}\cos(2\omega t+\phi_V+\phi_I)$ I really thought that it was the reactive power... Thanks for your replies by the way, I'm grateful! – user968243 Apr 9 '12 at 12:11
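To tie the derivation together, here is a small numerical cross-check (an editorial addition, with arbitrary test amplitudes and phases): averaging the instantaneous power $v(t)i(t)$ over one period reproduces $\operatorname{Re}(VI^*)/2$, i.e. the real part of $S=VI^*/2$.

```python
import numpy as np

Vm, phi_v = 3.0, 0.4          # arbitrary test values
Im, phi_i = 2.0, -0.9
w = 2 * np.pi * 50.0          # any angular frequency works

# one full period, endpoint excluded so the sample mean is the true time average
t = np.linspace(0.0, 2 * np.pi / w, 200_000, endpoint=False)
p = Vm * np.cos(w * t + phi_v) * Im * np.cos(w * t + phi_i)   # instantaneous power

V = Vm * np.exp(1j * phi_v)   # voltage phasor
I = Im * np.exp(1j * phi_i)   # current phasor
S = V * np.conj(I) / 2        # complex power

print(p.mean())               # time average of v(t) i(t)
print(S.real)                 # P = Vm*Im/2 * cos(phi_v - phi_i): matches the line above
print(S.imag)                 # Q = Vm*Im/2 * sin(phi_v - phi_i), the quadrature part
```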
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517526626586914, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/249598/how-can-i-apply-a-inclusionexclusion-principle-in-this-task?answertab=active
# How can I apply the inclusion–exclusion principle in this task? A simple task from combinatorics: how many sequences exist that consist of the letters A, B, C, D, ..., O, P, if no sequence may contain any of these words: PONK, DOBA, COP? This task is about the inclusion–exclusion principle, but I can't understand why. I think that the sets of sequences that include PONK, COP or DOBA don't have any common elements, because there is just one letter O. Can anyone explain to me how to solve this task? Thanks. Oh, I have understood the complexity of this question just now. The real task also says: edit: a sequence can't contain any of the words {PONK, DOBA, COP} even if we delete some letters. That means that the sequence A P C O D N E K F ... also should not be counted - ## 1 Answer In the corrected problem there are still $16!$ sequences altogether. The number that include the letters PONK in that order (but not necessarily adjacent) can be computed as follows: there are $\binom{16}4$ ways to choose the four positions to be filled by the letters P, O, N, and K, and there are then $12!$ ways to permute the remaining letters, so there are $\binom{16}412!$ sequences that contain ...P...O...N...K... . In similar fashion you can count the sequences that contain ...D...O...B...A... and those that contain ...C...O...P... . However, these events are not mutually exclusive: a sequence can contain more than one of these subsequences at once (PONK together with DOBA, for instance, or DOBA together with COP), so we'll need to go through the complete inclusion-exclusion calculation. (PONK and COP, on the other hand, can never occur together, since one requires P before O and the other O before P, so those intersection terms vanish.) Let's try to count the sequences that contain both ...P...O...N...K... and ...D...O...B...A... . First we choose $7$ of the $16$ places for the letters P, O, N, K, D, B, and A; this can be done in $\binom{16}7$ ways. The O must fill the third of these seven positions. The P and D must fill the first two, but they can do so in either order. There are $\binom42$ ways to choose the two positions for N and K, which must go in that order, and B and A will then fill the remaining positions, again in that order. Finally, the other $9$ letters can be permuted arbitrarily. The total is therefore $$\binom{16}7\cdot2\cdot\binom42\cdot9!\;.$$ You should be able to use similar analyses to calculate the remaining terms of the inclusion-exclusion calculation. - Thanks a lot, I didn't think that it would be so easy. total nr. of sequences = 16! nr. with PONK = 13! nr. with DOBA = 13! nr. with COP = 14! so result should be 16! - 13! - 13! - 14! ? I just can't forget the main axiom of math - if something is solved easily, it is solved wrong – George Dec 3 '12 at 0:10 @George: You’re welcome, and yes, your answer is correct. I like that axiom! :-) – Brian M. Scott Dec 3 '12 at 0:21 Unfortunately, the task was given in a very interesting form, so I made a mistake in the description. I am sorry. – George Dec 3 '12 at 0:37 @George: I’ve updated the answer to reflect the corrected problem. – Brian M. Scott Dec 3 '12 at 1:09 I really appreciate that. Thanks for your time and attention. – George Dec 3 '12 at 1:28
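As a cross-check of the trickiest term above (an editorial addition, not from the original thread): only the relative order of the seven letters P, O, N, K, D, B, A matters, so the count of sequences containing both PONK and DOBA should equal $\frac{16!}{7!}$ times the number of admissible orderings of those seven letters. A brute-force enumeration confirms the closed form $\binom{16}7\cdot2\cdot\binom42\cdot9!$.

```python
from itertools import permutations
from math import comb, factorial

letters = "PONKDBA"
good = 0
for perm in permutations(letters):
    pos = {ch: i for i, ch in enumerate(perm)}
    if pos["P"] < pos["O"] < pos["N"] < pos["K"] and pos["D"] < pos["O"] < pos["B"] < pos["A"]:
        good += 1                     # ordering compatible with both subsequences

brute  = factorial(16) // factorial(7) * good          # 16!/7! * (# good orderings)
stated = comb(16, 7) * 2 * comb(4, 2) * factorial(9)   # the answer's closed form
print(good, brute == stated)                           # 12, True
```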
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9463489055633545, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/96488?sort=newest
## Function with Fourier coefficient of order $o(n^{-m})$ Let $(a_n),(b_n)$ be the Fourier coefficients of a periodic, locally integrable function $f: \mathbb{R} \rightarrow \mathbb{R}$. Assume that $n^ma_n, n^m b_n \rightarrow 0$ as $n \rightarrow \infty$. By the Weierstrass test $f$ is of class $C^{m-2}$. Is $f^{(m-2)}$ perhaps absolutely continuous or differentiable almost everywhere, everywhere, etc.? - ## 1 Answer Under these assumptions the function is in the Sobolev space $H^{m-1/2-\epsilon}$ for any $\epsilon>0$. This implies in particular that $f^{(m-1)}$ is in $L^p$ for any $p<\infty$. -
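For completeness, here is the Weierstrass-test step behind the claim that $f\in C^{m-2}$, written out as a sketch (an editorial addition, not part of the original exchange). Differentiating the Fourier series term by term $m-2$ times multiplies the coefficients by $n^{m-2}$ in absolute value, and
$$\sum_{n\ge 1} n^{m-2}\bigl(|a_n|+|b_n|\bigr) \;=\; \sum_{n\ge 1} n^{-2}\cdot n^{m}\bigl(|a_n|+|b_n|\bigr) \;<\;\infty,$$
since the factors $n^{m}(|a_n|+|b_n|)$ tend to $0$ (hence are bounded) and $\sum_n n^{-2}$ converges. So the differentiated series converges uniformly and $f$ is indeed of class $C^{m-2}$.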
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8802517652511597, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/166148-partial-derivatives-word-problem-help.html
# Thread: 1. ## Partial derivatives, word problem help Heat is being conducted radially through a cylindrical pipe. The temperature at a radius r is T(r). In Cartesian co-ordinates, $r = \sqrt{x^{2}+ y^{2}}$; show that $\displaystyle \frac{\partial T}{\partial x} = \frac{x}{r} \frac{dT}{dr}$ 2. $T(r)=T(\sqrt{x^2+y^2})$ You have to differentiate r with respect to x, this is just $\frac{\partial r}{\partial x}=\frac{2x}{2\sqrt{x^2+y^2}}=\frac{x}{r}$. So $\frac{\partial T}{\partial x}=\frac{\partial r}{\partial x}\frac{dT}{dr}=\frac{x}{r}\frac{dT}{dr}$ 3. Originally Posted by adkinsjr $T(r)=T(\sqrt{x^2+y^2})$ You have to differentiate r with respect to x, this is just $\frac{\partial r}{\partial x}=\frac{2x}{2\sqrt{x^2+y^2}}=\frac{x}{r}$. So $\frac{\partial T}{\partial x}=\frac{\partial r}{\partial x}\frac{dT}{dr}=\frac{x}{r}\frac{dT}{dr}$ Thank you so much, but I don't understand what the first expression is and how you got it? You're saying $T(r) = T(\sqrt{ x^{2} + y^{2}})$? How come? Also for $\frac{\partial r}{\partial x}$, you took the partial derivative of r with respect to x, from the equation $r = \sqrt{x^{2} + y^{2}}$? So you get this expression: $\frac{2x}{2\sqrt{x^2+y^2}}$, but how does that equal the expression $\frac{x}{r}$? Thank you. 4. edit; I actually understand how you got the first expression, thanks, could you please explain the rest? Thank you 5. Thank you so much, but I don't understand what the first expression is and how you got it? You're saying $T(r) = T(\sqrt{ x^{2} + y^{2}})$? How come? All I'm doing is substituting the equation $r=\sqrt{x^2+y^2}$ into the expression $T(r)$ so you can see that T is a composite function of x and y, and therefore the partial of T with respect to x can be calculated using the chain rule. Also for $\frac{\partial r}{\partial x}$, you took the partial derivative of r with respect to x, from the equation $r = \sqrt{x^{2} + y^{2}}$? So you get this expression: $\frac{2x}{2\sqrt{x^2+y^2}}$, but how does that equal the expression $\frac{x}{r}$? Thank you. $r=\sqrt{x^2+y^2}$ The derivative of a function like $\sqrt{u}$ is $\frac{u'}{2\sqrt{u}}$. You have to apply the chain rule to find the partial of r with respect to x. $\frac{\partial r}{\partial x}=\frac{1}{2\sqrt{x^2+y^2}}\frac{\partial}{\partial x}(x^2+y^2)=\frac{2x}{2\sqrt{x^2+y^2}}$ Remember that $r=\sqrt{x^2+y^2}$, so just substitute that into the partial derivative of r with respect to x, and you get $\frac{x}{r}$ 6. Thank you, I understand your method now. 7. Sorry, just one question: how does $\frac{dT}{dr}$ come into the equation? It seems like it's just been put there? Thank you
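To complement the algebra in this thread, here is a short SymPy check (an editorial addition; the temperature profiles are arbitrary test functions, not from the problem) that $\frac{\partial T}{\partial x}=\frac{x}{r}\frac{dT}{dr}$ holds:

```python
import sympy as sp

x, y, r = sp.symbols("x y r", positive=True)
r_xy = sp.sqrt(x**2 + y**2)

for T in (sp.exp(-r), r**3 + sp.sin(r), sp.log(r)):
    dTdr = sp.diff(T, r)                       # dT/dr for this test profile
    lhs = sp.diff(T.subs(r, r_xy), x)          # dT/dx, with the chain rule done by SymPy
    rhs = (x / r_xy) * dTdr.subs(r, r_xy)      # the formula derived in the thread
    print(sp.simplify(lhs - rhs) == 0)         # True, True, True
```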
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9560649394989014, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/161808-integration-problem-exponantial.html
# Thread: 1. ## An integration problem with an exponential I have encountered a problem with the following form: $\int x e^{ax^2-bx}dx$ I can only find formulas for $\int x e^{ax^2}dx$ and $\int x e^{bx}dx$ and don't know how to make use of these two formulas. Can someone help me on this? Thanks a lot!! 2. Originally Posted by billzhao I have encountered a problem with the following form: $\int x e^{ax^2-bx}dx$ I can only find formulas for $\int x e^{ax^2}dx$ and $\int x e^{bx}dx$ and don't know how to make use of these two formulas. Can someone help me on this? Thanks a lot!! Complete the square in the exponent to get $\displaystyle a(x^2-\frac{b}{a}x+\frac{b^2}{4a^2})-\frac{b^2}{4a}=a(x-\frac{b}{2a})^2-\frac{b^2}{4a}$ Now the integral looks like $\displaystyle \int xe^{a(x-\frac{b}{2a})^2-\frac{b^2}{4a}}dx=e^{-\frac{b^2}{4a}}\int xe^{a(x-\frac{b}{2a})^2} dx$ From here just make a u-substitution for the exponent and then you will use both of your formulas for the integral 3. ## Thanks for the quick reply!! But... Is there a closed form solution for the integral below? $\int e^{ax^2}dx$ 4. Yes, but you have to use the "special function" $erf(x)= \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}dt$. Let $iu= \sqrt{a}x$. Then $dx= \frac{i}{\sqrt{a}}du$, $e^{ax^2}= e^{-u^2}$ and the integral becomes $\frac{i}{\sqrt{a}}\int e^{-u^2}du= \frac{i\sqrt{\pi}}{2\sqrt{a}}erf(-i\sqrt{a}x)+ C$. The fact that you have $x^2$ in the exponential rather than $-x^2$ complicates things and causes those "i" factors.
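As a numerical sanity check of the completing-the-square approach (an editorial addition; the values of $a$, $b$ and the interval are arbitrary, with $a<0$ so that everything stays real and the ordinary erf applies):

```python
import math
import numpy as np

a, b = -1.3, 0.7                       # test values with a < 0
c, k = b / (2 * a), -b**2 / (4 * a)    # from a x^2 - b x = a (x - c)^2 + k

def F(x):
    """Antiderivative of x*exp(a x^2 - b x) for a < 0, via the completed square."""
    s = math.sqrt(-a)
    return math.exp(k) * (math.exp(a * (x - c)**2) / (2 * a)
                          + c * math.sqrt(math.pi) / (2 * s) * math.erf(s * (x - c)))

# compare with a plain midpoint-rule integral over [0, 2]
n = 400_000
dx = 2.0 / n
xs = (np.arange(n) + 0.5) * dx
numeric = np.sum(xs * np.exp(a * xs**2 - b * xs)) * dx

print(numeric, F(2.0) - F(0.0))        # the two values agree closely
```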
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359790086746216, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/81876/question-about-picks-theorem
# Question about Pick's Theorem Is there a Pick's Theorem for a general lattice in $\mathbb{R}^{2}$? - 8 Can't you just apply a linear transformation to convert the lattice to the standard lattice, and then apply the standard Pick's theorem to the image of the polygon? – Grumpy Parsnip Nov 14 '11 at 3:22 So if I do apply a linear transformation to convert the lattice to the standard lattice, why is it guaranteed that we have the same number of lattice points inside and on the boundary of the polygon before and after the transformation? – user4269 Nov 14 '11 at 3:57 Because the transformation maps lines to lines. Also, the intersection of two lines is mapped to the intersection of the mapped images of the lines. So parallel lines are mapped to parallel lines. Each lattice point is the intersection of two lines, parallel to the lines/vectors defining the lattice. This shows the number of lattice points in the boundary is invariant. – Andres Caicedo Nov 14 '11 at 4:05 A similar geometric argument (perhaps a little bit harder to make rigorous; but this is just a general fact about invertible linear transformations) shows the interior of a polygon is mapped to the interior of the image of the polygon. So also the number of lattice points in the interior is invariant. – Andres Caicedo Nov 14 '11 at 4:08 +1 I was just about to ask the same. – draks ... Mar 30 '12 at 17:48 show 1 more comment ## 1 Answer Certainly there is. One such version is stated as Theorem 4.1 in this paper of mine: Theorem (Pick's Theorem): Let $\Lambda$ be a two-dimensional lattice in $\mathbb{R}^k$ with 2-volume $\delta$. Let $P$ be a $\Lambda$-lattice polygon containing $h$ interior lattice points and $b$ boundary lattice points. Then the area $A(P)$ of $P$ is equal to $\delta \cdot (h + \frac{b}{2} - 1)$. As a reference to the proof I give a 2003 book of Erdős and Surányi. But -- especially for the $k = 2$ case that you asked about -- Jim Conant's comment is right on: the proof consists simply of making a linear change of variables to get from $\Lambda$ to $\mathbb{Z}^2$. -
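To make the transformation argument concrete, here is a small Python illustration of the generalized formula $A(P)=\delta\,(h+\tfrac b2-1)$ (an editorial sketch; the lattice basis and the convex polygon below are made up, not taken from the cited paper):

```python
from math import gcd

basis = ((1.0, 0.5), (0.2, 1.3))     # lattice basis vectors in R^2 (hypothetical)
delta = abs(basis[0][0]*basis[1][1] - basis[0][1]*basis[1][0])   # 2-volume of the lattice

poly = [(0, 0), (4, 0), (5, 3), (1, 4)]   # convex lattice polygon, vertices in lattice coords (CCW)
edges = list(zip(poly, poly[1:] + poly[:1]))

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

# boundary lattice points: gcd of the coordinate steps, summed over the edges
b_pts = sum(gcd(abs(q[0]-p[0]), abs(q[1]-p[1])) for p, q in edges)

# interior lattice points: brute force over a bounding box, strict convexity test
h_pts = sum(1 for i in range(-1, 7) for j in range(-1, 6)
            if all(cross(p, q, (i, j)) > 0 for p, q in edges))

# area of the image polygon in R^2, via the shoelace formula on the mapped vertices
mapped = [(i*basis[0][0] + j*basis[1][0], i*basis[0][1] + j*basis[1][1]) for i, j in poly]
area = abs(sum(p[0]*q[1] - q[0]*p[1] for p, q in zip(mapped, mapped[1:] + mapped[:1]))) / 2

print(area, delta * (h_pts + b_pts/2 - 1))   # the two numbers coincide (both ~17.4 here)
```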
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8994585871696472, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/6352/encrypting-a-key-with-the-same-key/6355
# Encrypting a key with the same key I am new to crypto and trying to understand why it would be insecure to use AES to encrypt a key with the same key. Basically, something like this: encrypt(key, key) What happens when both key and message are the same (128bits). The obvious question would be why I would want to do that. If I have a secure channel to send the key, why encrypt it? My questions is more for understanding. So any comments are welcome. I tried searching for an answer for the above, but couldnt not find one. Sorry if its been answered before. A link to the older posts would also be a great help in that case. Thanks in advance. - This question makes no sense. You're asking us why something would be insecure without telling us why you want to do it. Something can't be insecure unless there's something you're trying to prevent that would constitute a security failure. Without requirements whose violation would constitute a security problem, it's logically impossible for something to be "insecure". – David Schwartz Feb 15 at 10:04 @madhukar2k2 "I tried searching for an answer for the above, but couldnt not find one." - try looking at circular security, it might give you some answers. Additionally, I've added some things in my answer below. – hakoja Feb 15 at 10:48 Is fgrieu's premise along the lines of the security situation that you had in mind? As it is, the question does not specify the context of the security you are interested in (as @David pointed out). Could you add some detail or incorporate fgrieu's premise into the question? – B-Con Feb 15 at 20:31 Note that all the problems mentioned are only relevant if it is known from other sources that there is the key encrypted, not if the key is included in the plaintext by chance. – Paŭlo Ebermann♦ Feb 17 at 11:24 ## 4 Answers I'm taking the question as: given some cryptosystem using AES-128 with some random secret key $K$, what are benefits and drawbacks of computing and making public $\hat K=\operatorname{AES}_K(K)$? Benefit - $\hat K$ can be used as a KCV: A legitimate holder of $K$ could do the same calculation with the $K'$ he holds, and compare $\operatorname{AES}_{K'}(K')$ to the public $\hat K$. If there's no match, then one of $K'$, $K$, or the copy $\hat K$ used in the test is altered. If there's a match, it is almost as certain that $K'=K$ that it is certain that $\hat K$ is genuine. In this usage, $\hat K$ would be a Key Check Value (and far from the worst KCV ever used). And if the purpose of such a KCV and test is to guard against alteration of $K$, or of the AES encryption engine, and $\hat K$ is not made public but rather stored secretly along $K$, that seems a byzantine but effective idea (caveat emptor: side channel leakage considerations may apply). Drawback 1 - $\hat K$ enables an attack on a reduced-round version of AES: A passive attacker knowing $\hat K$ has the same test as above to determine (with high confidence) if a guess of $K$ is right. That test is slightly more useful than just a random plaintext/ciphertext pair would be, because, in AES-128, the first computational step is to XOR the input and key; that gives zero in case of computation of $\hat K$; and thus the result of the first AES round is a constant (0x6363636363636363). Revealing $\hat K$ is like revealing a plaintext/ciphertext pair for a variant of AES-128 with 9 rounds instead of 10. 
This saves close to 10% of the work in an attack by brute force; and might conceivably open to a cryptanalytic attack that has a benefit at 9 rounds, but not the full 10; I do not see either as truly worrying in practice, though. Drawback 2 - in some protocol, $K$ can leak: An active attacker might trick a legitimate party holding $K$ into revealing $K$, if that legitimate party uses $K$ to compute the function $\operatorname{AES}^{-1}_K()$. For example, assume a (dumbed down) authentication protocol where Alice draws a random $R$; computes $C=\operatorname{AES}_K(R)$; sends that to Bob, who computes $R'=\operatorname{AES}^{-1}_K(C)$ and sends that back to Alice (which compares $R$ to $R'$). An active adversary knowing $\hat K$ can obtain $K$ from Bob by submitting $\hat K$ instead of $C$, and will get $K$ as $R'$ (because $\operatorname{AES}^{-1}_K(\hat K)=K$). More generally, availability of $\hat K$ could entirely ruin the security of any protocol or encryption mode where $\operatorname{AES}^{-1}_K()$ is used anywhere, and invalidate the security argument of others. - Thanks for providing the benefits and drawbacks. – madhukar2k2 Feb 15 at 19:51 There is no point to this. It would be like having two keys for a locked chest and then locking one of them in the chest. The only way to get the key out is by using the other key. You already have the key so why do you need the one in the chest? You can just make copies of the key without needing to go into the chest. - Thank you for the comment. I understand your point. I was trying to see what happens in a case when key is the same as the message (unintentional). For example, if I have something like this: key = Random(); message = Random(); cipherText = aes_encrypt(key, message). assume that both key and message are from the same space (size 2 ^ 128). The probability of key = message is very low, but wanted to understand if such a case happens, will the system be considered insecure. – madhukar2k2 Feb 15 at 7:54 There are no known different attacks against AES when the key and message are the same. – ponsfonze Feb 15 at 8:36 Practically speaking it's so unlikely that it cannot be called "insecure". Theoritically speaking, if you take a look at the insides of aes you'll see that the first operation is xoring the plaintext with the key, effectively cancelling the state whatever the key. From then on the only difference between $AES_K(K)$ and $AES_{K'}(K')$ will come from the difference in the key scheduling of both keys. Hope it helps – Alexandre Yamajako Feb 15 at 8:36 @madhukar2k2: No, that would not make the system insecure. If you pick a random 128-bit number, you might pick zero, and an attacker might start searching at zero. But if you exclude zero as "insecure", then an attacker doesn't have to start at zero and the problem repeats for one. You only want to exclude unsafe choices if they have a high enough probability that the benefit from excluding them exceeds the cost. Reducing the search space by excluding a matching key has a cost (reducing an attacker's search space) greater than the benefit (eliminating one unsafe random key). – David Schwartz Feb 15 at 10:06 All: Thank you for your comments. – madhukar2k2 Feb 15 at 16:35 If the encryption scheme is AES, then I would guess (as mentioned in the comments), that this would probably not be a big concern. HOWEVER, in general, a secure encryption scheme (and here I define secure particularly to mean IND-CPA-secure), can still fail miserably when encrypting its own keys. 
To see this, assume $\mathcal{E}$ is an IND-CPA secure scheme. Then I can create another IND-CPA secure scheme, $\mathcal{E}'$, which is very insecure when given the secret key to encrypt. In particular, we define $\mathcal{E'}$ as: $$\mathcal{E}'_k(m) = \begin{cases} \mathcal{E}_k(m), & \text{ when } m \neq k \\ k, & \text{ when } m = k \end{cases}$$ Intuitively we understand that this must be IND-CPA secure, since the probability that the attacker will choose $m = k$ (in the IND-CPA game) is negligible. But, when given the secret key as the message, this encryption scheme is not exactly good. This is of course an artificial scheme, but it shows nevertheless that strange things can happen when you encrypt the secret key, in general. This question is very much related to the notion of circular security, which you can read more about in this post, on the excellent blog of Matthew Green (where admittedly, I took the above idea from). - Thank you @hakoja. Will definitely read more on circular sercurity. – madhukar2k2 Feb 15 at 16:36 For many things in cryptography, you must be careful to note the difference between "not known to be secure" and "insecure." If you use (an ideal representation of) AES to encrypt its own key, it is the case that its "not known to be secure" rather than it is "insecure." Thus, you can stare at the details of AES all day long and never see the problem. What you need to stare at instead are the details of why we think that AES is secure; or more specifically, why we think AES with certain modes of operation (e.g., CBC or CTR) has one of the most basic definitions of security: something called semantic security. The definition of semantic security involves a game where an adversary chooses messages to be encrypted by a blackbox with an embedded secret key, gets back ciphertexts, and has to guess some information about which ciphertexts correspond to which submitted messages. The reason the proof of semantic security does not "go through" when the the adversary chooses to encrypt the secret key is because the adversary doesn't know the secret key (or rather, if the adversary did know it, she could win always win the game). In other words, the hardness of the game is directly tied to the hardness of guessing the secret key (in addition to some assumptions about the block cipher). This is more a peculiarity of provable security than a practical concern. It is possible to construct a pathological cryptosystem that is provably semantically secure but trivially insecure when you encrypt your own key. This fine but there are also pathological constructions of ideal block ciphers and hashes that become trivially insecure when implemented with a real function like AES or SHA2. These constructions, and the more general theoretic point, are routinely written off by cryptographers designing practical algorithms (my point isn't that these cryptographers are right to do that; only that they are prevalent --- don't kill the messenger). - Thank you @PulpSpy – madhukar2k2 Feb 15 at 16:37
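To make fgrieu's "drawback 2" tangible, here is a toy Python demonstration (an editorial addition; it assumes the third-party pycryptodome package and is emphatically not a recommendation to build such a protocol): publishing $\hat K=\operatorname{AES}_K(K)$ turns any decryption oracle under $K$ into a key-recovery oracle.

```python
import os
from Crypto.Cipher import AES   # pycryptodome

K = os.urandom(16)                                   # the secret AES-128 key
K_hat = AES.new(K, AES.MODE_ECB).encrypt(K)          # the published value AES_K(K)

def bob_oracle(challenge: bytes) -> bytes:
    """Bob's side of the dumbed-down protocol: return AES^{-1}_K(challenge)."""
    return AES.new(K, AES.MODE_ECB).decrypt(challenge)

recovered = bob_oracle(K_hat)    # the attacker submits K_hat instead of a real challenge
print(recovered == K)            # True: Bob has just handed back the key
```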
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947227418422699, "perplexity_flag": "middle"}
http://divisbyzero.com/2010/03/14/an-application-of-graph-theory-to-architecture/?like=1&_wpnonce=5e79d40c83
# Division by Zero A blog about math, puzzles, teaching, and academic technology Posted by: Dave Richeson | March 14, 2010 ## An application of graph theory to architecture Several years ago I came across a fascinating application of graph theory to architecture. It is in the 1983 book Incidence and symmetry in design and architecture, by Jenny A. Baglivo and Jack E. Graver. I don’t know if it is well known among experts in the field, but I’ve never seen it elsewhere. So I thought I’d share it with all of you. Here’s the simplest version of the problem. Suppose you have an ${m\times n}$ grid of beams. The beams are steel and can’t stretch, compress, or bend, but they are attached together by pin-joints. In other words, the joints provide no rigidity in the structure. To make the structure rigid you are allowed to add steel cross-beams (which would force a rhombus to be square). Question. Where should you place the cross-beams, and what is the fewest number needed to stabilize the structure? For example, consider the ${5\times 4}$ structures below. They both have nine cross-beams. Is either one rigid? It turns out that the one on the left is rigid. In fact, it is over-braced; it can be braced with only eight cross-beams (we can remove the dotted beam shown in the figure below). The one on the right is not rigid, as can be seen below. I encourage you to read the discussion by Baglivo and Graver, as they do a beautiful job of walking the reader through the problem. Here is a sketch of the argument. 1. In any bracing configuration (rigid or not) all of the vertical bars in a given row are parallel and all of the horizontal bars in a given column are parallel. You can see that in the deformed example on the right. 2. A cross-beam forces a parallelogram to be a square. By (1), if the square is in the ith row and jth column, then all of the vertical bars in the ith row are perpendicular to all of the horizontal bars in the jth column. 3. To have a rigid structure it suffices to show that the vertical bars in any row are perpendicular to the horizontal bars in any column. 4. We can construct a bipartite graph from any bracing structure. In particular, we have vertices representing row numbers and vertices representing column numbers. We draw an edge from the vertex for the ith row to the vertex for the jth column if there is a cross-beam in the ith row and jth column. The graphs for our two structures are shown below. 5. Using property (2) repeatedly we find that if there is a path in the graph from the ith row to the jth column, then the vertical bars in the ith row are perpendicular to the horizontal bars in the jth column. 6. We conclude that property (3) holds (i.e., the structure is rigid) if and only if the bipartite graph is connected. 7. The smallest connected graph is a tree, and the the largest tree in the bipartite graph with ${(m,n)}$ vertices has ${m+n-1}$ edges. Thus we have the following theorem. Theorem. A bracing of an ${m\times n}$ grid is rigid iff the corresponding bipartite graph is connected. Moreover, the rigid bracing has the fewest possible cross-beams iff the bipartite graph is a tree. In this case it has ${m+n-1}$ cross-beams. Returning to our examples, we see that the graph on the left is connected, so that structure is rigid. In fact, we can remove one edge to obtain a tree, so the structure is over-braced. The graph on the right is not connected, so it is not rigid. However, we could add one more bracing to make it rigid (where?). 
Again, I encourage you to check out the original source. They go on to discuss the following two related problems, both of which have graph theoretical solutions.

Question. What if we want the structure to be double-braced? That is, we want it to be rigid even if one of the cross-beams happens to fail (they allow cross-beams to form X's in a square). [You may not be surprised to learn that a tree in the bipartite graph is not sufficient—you need cycles.]

Question. What if the cross-beams are replaced by wires? The wires are not rigid (they can collapse), but they cannot stretch. How do you cross-brace your structure so that it is rigid? [In this case you need to look at directed graphs.]

Posted in Math | Tags: applied mathematics, architecture, discrete mathematics, graph theory, rigid structures

## Responses

1. Very cool! Note, in the graph on the left you are missing the edge (3,2). By: Brent on March 16, 2010 at 4:10 pm
• Thank you for noticing that! The graphs are fixed now. By: Dave Richeson on March 16, 2010 at 8:36 pm
2. [...] An application of graph theory to architecture. Very cool application of one of my favourite areas of mathematics. Unlike many mathematicians, I [...] By: Top ten math posts of 2010 | 11:23 on December 31, 2010 at 7:41 am
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255833625793457, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2011/02/23/an-example-of-a-manifold/?like=1&source=post_flair&_wpnonce=fe7f791e1e
The Unapologetic Mathematician

An Example of a Manifold

Let's be a little more explicit about our example from last time. The two-dimensional sphere consists of all the points in $\mathbb{R}^3$ of unit length. If we pick an orthonormal basis for $\mathbb{R}^3$ and write the coordinates with respect to this basis as $x$, $y$, and $z$, then we're considering all triples $(x,y,z)$ with $x^2+y^2+z^2=1$. We want to show that this set is a manifold.

We know that we can't hope to map the whole sphere into a plane, so we have to take some points out. Specifically, let's remove those points with $z\leq0$, just leaving one open hemisphere. We will map this hemisphere homeomorphically to an open region in $\mathbb{R}^2$. But this is easy: just forget the $z$-component! Sending the point $(x,y,z)$ down to the point $(x,y)$ is clearly a continuous map from the open hemisphere to the open disk with $x^2+y^2<1$. Further, for any point $(x,y)$ in the open disk, there is a unique $z>0$ with $x^2+y^2+z^2=1$. Indeed, we can write down

$\displaystyle(x,y)\mapsto\left(x,y,\sqrt{1-x^2-y^2}\right)$

This inverse is also continuous, and so our map is indeed a homeomorphism.

Similarly we can handle all the points in the lower hemisphere $z<0$. Again we send $(x,y,z)$ to $(x,y)$, but this time for any $(x,y)$ in the open unit disk (satisfying $x^2+y^2<1$) we can write

$\displaystyle(x,y)\mapsto\left(x,y,-\sqrt{1-x^2-y^2}\right)$

which is also continuous, so this map is again a homeomorphism.

Are we done? No, since we haven't taken care of the points with $z=0$. But in these cases we can treat the other coordinates similarly: if $y>0$ we have our inverse pair

$\displaystyle(x,y,z)\mapsto(x,z)\qquad(x,z)\mapsto\left(x,\sqrt{1-x^2-z^2},z\right)$

while if $y<0$ we have

$\displaystyle(x,y,z)\mapsto(x,z)\qquad(x,z)\mapsto\left(x,-\sqrt{1-x^2-z^2},z\right)$

Similarly if $x>0$ we have

$\displaystyle(x,y,z)\mapsto(y,z)\qquad(y,z)\mapsto\left(\sqrt{1-y^2-z^2},y,z\right)$

while if $x<0$ we have

$\displaystyle(x,y,z)\mapsto(y,z)\qquad(y,z)\mapsto\left(-\sqrt{1-y^2-z^2},y,z\right)$

Now are we done? Yes, since every point on the sphere must have at least one coordinate different from zero, every point must fall into one of these six cases. Thus every point has some neighborhood which is homeomorphic to an open region in $\mathbb{R}^2$.

This same approach can be generalized to any number of dimensions. The $n$-dimensional sphere consists of those points in $\mathbb{R}^{n+1}$ with unit length. It can be covered by $2(n+1)$ open hemispheres, each with a projection just like the ones above.

Posted by John Armstrong | Differential Topology, Topology

4 Comments »

1. is there a requirement to make sure the maps are compatible? ie if a point's domain is in more than one map that the composite is continuous (in both directions)? Comment by scot | February 23, 2011 | Reply
2. We're getting to that scot, stay tuned! Comment by | February 23, 2011 | Reply
3. [...] Let's look back at yesterday's example of a manifold. Not only did we cover the entire sphere by open neighborhoods with homeomorphisms to [...] Pingback by | February 23, 2011 | Reply
4. [...] our example of a manifold, we covered the two-dimensional sphere with coordinate patches, and so we have an [...] Pingback by | March 2, 2011 | Reply
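As a quick numerical sanity check on the six charts constructed above (my own addition, not part of the post), one can sample random points of the sphere, project each onto the plane of its two smaller coordinates, and confirm that the corresponding square-root formula recovers the point. The rule of dropping the largest-magnitude coordinate is only an illustrative way of choosing one of the six charts.

```python
import numpy as np

rng = np.random.default_rng(0)

def chart(p):
    """Project p onto the plane of its two smaller coordinates; remember
    which coordinate was dropped and its sign (this picks one of the six charts)."""
    i = int(np.argmax(np.abs(p)))           # index of the dropped coordinate
    rest = [k for k in range(3) if k != i]
    return i, np.sign(p[i]), p[rest]

def chart_inverse(i, s, uv):
    """The explicit square-root inverse from the post, for the chosen chart."""
    p = np.empty(3)
    rest = [k for k in range(3) if k != i]
    p[rest] = uv
    p[i] = s * np.sqrt(1.0 - uv @ uv)
    return p

for _ in range(1000):
    p = rng.normal(size=3)
    p /= np.linalg.norm(p)                  # a random point on the unit sphere
    assert np.allclose(chart_inverse(*chart(p)), p, atol=1e-12)
print("all sampled points are recovered by their chart's inverse")
```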
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 36, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279263615608215, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/182281/is-there-a-relationship-between-lnx-pm-y-and-lnx-pm-lny
# Is there a relationship between $\ln(x\pm y)$ and $\ln(x)\pm\ln(y)$? I am dealing with points on a 2d space $(x, y)$ where $x$ and $y$ are always positive integers. In an algorithm, I have pre-computed $\log_2(x)$ and $\log_2(y)$ for given points of interest. I now need to compute $\log_2(x+y)$ and $\log_2(x-y)$ but don't have the luxury to do so from a computational perspective so I started looking for some sort of correlation. I wasn't sure if there was a relationship between $\ln(x\pm y)$ and $\ln(x)\pm\ln(y)$ so I charted some numbers in Excel and came across the following: • The ratio of $(Log(X+Y) / (Log(X)+Log(Y)))$ seems to mostly lie between 0.5 and 0.7. • For a large set of numbers the average ratio and median ratio seem to be ALWAYS between 0.56 and 0.58. • Of course, there are corner cases in subtraction where x-y turns out negative. How can I avoid that by the way? So the question is: • Am I missing some fundamental concepts to have to find this relationship this way? • If the answer to the above is no, how reliable would this correlation be for all integers x and y? The magnitude I am dealing with is around 2 to the power 10,000,000. SOME EXTRA CONTEXT: Some have suggested more context may bring about a different approach altogether so here it is. In 2D space, I have a starting point $(x, y)$ and need to move around following some rules. Allowed directions are $\pm (horizontal$, $vertical$, $diagonal$ and $anti-diagonal)$. Some other restrictions include not moving along the diagonal of $(x, y)$ where $(x*y)$ is a power of 2. The target is to get to the top left of the grid where the concentration of power of 2 numbers is high. Lastly, we can only change direction after encountering the diagonal of a power of 2. So we start looping at the starting point, find neighboring power of 2 coordinates and filter out all diagonal intersections between our current position and power of 2 neighbors (which become potential turning points). Once we have this list, we need to determine optimal direction so we land closest to (1, 1) in euclidean distance. This is where we cannot afford any more multiplication, division, logarithms, etc. - When $x-y$ is negative, $\log(x-y)$ does not make any sense by itself. To decide what to do about that, we'd need information about what you're trying to do and what led you to conclude that $\log(x-y)$ is what you need to compute. – Henning Makholm Aug 14 '12 at 1:44 – Raheel Khan Aug 14 '12 at 1:51 1 Keep in mind that my answer probably isn't the best if those logarithms are expensive. I'd suggest giving a more thorough account of what exactly you're trying to accomplish, so that we can potentially suggest completely new routes (e.g, do you actually need to calculate euclidean distance or can it be approximated by something else?) – Robert Mastragostino Aug 14 '12 at 1:53 1 @RaheelKhan: Do you really have all 5 million bits of each of these numbers stored? (Mind reels). Or do you mean that each number is always an exact power of 2? In the latter case, their squares are also exact powers of 2, and the squared distances will therefore have exactly 2 bits set, which you can compare easily just by remembering their positions. – Henning Makholm Aug 14 '12 at 2:15 1 How sure are you that the Euclidean distance is what you need? If it's just for a heuristic, for example, Manhattan distance would be a lot easier to calculate and compare. 
– Henning Makholm Aug 14 '12 at 11:15

## 1 Answer

The defining property of the logarithm is that it converts multiplication to addition. Thus we have $\log xy = \log x + \log y$ and $\log \frac{x}{y} = \log x - \log y$. But it does not really convert addition to anything. More precisely, conditional on Schanuel's conjecture the numbers $\log p$, as $p$ runs over the primes, are algebraically independent; that is, there is no polynomial relationship between them whatsoever (with rational coefficients). In particular, there is no polynomial relationship between the numbers $\log (2 + 3), \log 2, \log 3$ (for example). However, if you're willing to settle for approximate relationships and if one of $x$ or $y$ is small relative to the other, you can use the Taylor series of the logarithm. -
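For the questioner's concrete problem of getting $\log_2(x+y)$ and $\log_2(x-y)$ from the stored $\log_2(x)$ and $\log_2(y)$, the standard trick is the one this answer hints at: factor out the larger term, so that $\log_2(x+y)=\log_2 x+\log_2\!\left(1+2^{\log_2 y-\log_2 x}\right)$ for $x\ge y$, and only the small correction term needs a cheap series or table. A hedged sketch of my own, not from the thread:

```python
import math

def log2_add(a, b):
    """Given a = log2(x), b = log2(y) with x, y > 0, return log2(x + y)
    without reconstructing x and y at full size."""
    if a < b:
        a, b = b, a
    # 2**(b - a) <= 1, so there is no overflow; if b - a is very negative
    # the correction underflows to 0 and the result is just a.
    return a + math.log2(1.0 + 2.0 ** (b - a))

def log2_sub(a, b):
    """Return log2(x - y); requires x > y, i.e. a > b."""
    if a <= b:
        raise ValueError("x - y is not positive, so its log is undefined")
    return a + math.log2(1.0 - 2.0 ** (b - a))

# quick check against direct computation on small sample values
x, y = 1536.0, 17.0
assert abs(log2_add(math.log2(x), math.log2(y)) - math.log2(x + y)) < 1e-12
assert abs(log2_sub(math.log2(x), math.log2(y)) - math.log2(x - y)) < 1e-12
```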
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9565656185150146, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/183020-trigonomentic-mapping-question.html
# Thread:

1. ## Trigonometric mapping question

The question: For the mapping f(z) = sinh(z), find and sketch the image of Im(z) = d.

If I'm not mistaken, this is just a horizontal line in the z-plane through some constant d. With mapping questions involving z, I usually try to write the image in terms of z, then substitute it into the equation. However this one has me stumped. I tried:

Let f(z) = w

$w = \frac{e^z - e^{-z}}{2}$

$2w = e^z - e^{-z}$

Now I'm unsure of how to write this in terms of z. Is this the correct approach? If so, how do I progress? Thank you.

2. ## Re: Trigonometric mapping question

$\displaystyle \sinh{(z)} = \sinh{(x + iy)} = \sinh{(x)}\cos{(y)} + i\cosh{(x)}\sin{(y)}$. You need to plot the imaginary part of this equal to $\displaystyle d$, so

$\displaystyle \begin{align*} \cosh{(x)}\sin{(y)} &= d \\ \sin{(y)} &= \frac{d}{\cosh{(x)}} \\ y &= \arcsin{\left[\frac{d}{\cosh{(x)}}\right]} \end{align*}$

Now you need to plot this.
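For actually producing a sketch, one quick check (my own, with the value of d picked arbitrarily) is to push sample points of the line Im(z) = d through sinh numerically and plot the result. For 0 < d < π/2 the points u = sinh(x)cos(d), v = cosh(x)sin(d) trace one branch of the hyperbola (v/sin d)² − (u/cos d)² = 1.

```python
import numpy as np
import matplotlib.pyplot as plt

d = 0.8                      # arbitrary sample value of the constant
x = np.linspace(-3, 3, 400)
w = np.sinh(x + 1j * d)      # image of the line Im(z) = d under sinh

plt.plot(w.real, w.imag, label="image of Im(z) = %.1f" % d)
plt.xlabel("Re w")
plt.ylabel("Im w")
plt.axis("equal")
plt.legend()
plt.show()

# Consistent with u = sinh(x)cos(d), v = cosh(x)sin(d), i.e.
# (v/sin d)^2 - (u/cos d)^2 = 1: one branch of a hyperbola.
```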
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9154515862464905, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/reference-request?sort=votes&pagesize=30
# Tagged Questions Use this tag for questions seeking a single specific paper or a short, non-open-ended list of references, like "What paper first discovered X?", "Where can I find the original derivation of X?", or "What is the canonical source for X?" etc. 6answers 844 views ### What are the justifying foundations of statistical mechanics without appealing to the ergodic hypothesis? This question was listed as one of the questions in the proposal (see here), and I didn't know the answer. I don't know the ethics on blatantly stealing such a question, so if it should be deleted or ... 8answers 2k views ### Comprehensive book on group theory for physicists? I am looking for a good source on group theory aimed at physicists. I'd prefer one with a good general introduction to group theory, not just focusing on Lie groups or crystal groups but one that ... 3answers 93 views ### Paper listing known Seiberg-dual pairs of N=1 gauge theories Is there a nice list of known Seiberg-dual pairs somewhere? There are so many papers from the middle 1990s but I do not find comprehensive review. Could you suggest a reference? Seiberg's original ... 1answer 368 views ### Minimum viscosity of liquids In a lecture by Purcell he mentions that he notices that there aren't any liquids with viscosities much less than that of water, even though they go up seemingly unbounded. In an endnote (endnote 1 in ... 6answers 306 views ### Classic Literature in Quantum Gravity? I've seen it said in various places that a major reason people like string theory as a theory of quantum gravity is that it does a good job of matching our prejudices about how a quantum gravity ... 3answers 1k views ### Good reading on the Keldysh formalism I'd like some suggestions for good reading materials on the Keldysh formalism in a condmat context. I'm familiar with the imaginary time, coherent state, path integral formalism, but lately I've been ... 2answers 174 views ### Possible research implications of proof of John Cardy's a-theorem in QFT According to this recent article in Nature magazine, John Cardy's a-theorem may have found a proof. Question: What would the possible implications be in relation to further research in QFT? ... 6answers 1k views ### Where should a physicist go to learn chemistry? I took an introductory chemistry course long ago, but the rules seemed arbitrary, and I've forgotten most of what I learned. Now that I have an undergraduate education in physics, I should be able to ... 2answers 50 views ### Numerical Analysis of Elliptic PDEs I am looking for an elementary reference regarding issues of stability in numerical analysis of non-linear elliptic PDEs, particularly using the finite difference method (but something more ... 1answer 92 views ### 6d Massive Gravity Massive gravity (with a Fierz-Pauli mass) in 4 dimensions is very well-studied, involving exotic phenomena like the vDVZ discontinuity and the Vainshtein effect that all have an elegant and physically ... 1answer 383 views ### A reading list to build up to the spin statistics theorem Wikipedia's article on the spin-statistics theorem sums it up thusly: In quantum mechanics, the spin-statistics theorem relates the spin of a particle to the particle statistics it obeys. The spin ... 2answers 128 views ### Literature on fractal properties of quasicrystals At the seminar where the talk was about quasicrystals, I mentioned that some results on their properties remind the fractals. The person who gave the talk was not too fluent in a rigor mathematics ... 
2answers 234 views ### Searching books and papers with equations Sometimes I may come up with an equation in mind, so I want to search for the related material. It may be the case that I learn it before but forget the name, or, there is no name for the equation ... 2answers 141 views ### Gauge invariance for electromagnetic potential observables in test function form This is a reference request for a relationship in quantum field theory between the electromagnetic potential and the electromagnetic field when they are presented in test function form. $U(1)$ gauge ... 2answers 70 views ### Discussions of the axioms of AQFT The most recent discussion of what axioms one might drop from the Wightman axioms to allow the construction of realistic models that I'm aware of is Streater, Rep. Prog. Phys. 1975 38 771-846, ... 2answers 607 views ### Treatment of boundary terms when applying the variational principle One of the main sources of subtlety in the AdS/CFT correspondence is the role played by boundary terms in the action. For example, for a scalar field in AdS there is range of masses just above the ... 1answer 452 views ### Entanglement in time Quantum entanglement links particles through time, according to this study that received some publicity last year: New Type Of Entanglement Allows 'Teleportation in Time,' Say Physicists at The ... 4answers 292 views ### Applications of Geometric Topology to Theoretical Physics Geometric topology is the study of manifolds, maps between manifolds, and embeddings of manifolds in one another. Included in this sub-branch of Pure Mathematics; knot theory, homotopy, manifold ... 2answers 304 views ### What is the current state of research in quantum gravity? I was browsing through this and was wondering what progress in quantum gravity research has taken place since the (preprint) publication. If anyone can provide some helpful feedback I would be ... 1answer 57 views ### Introduction to neutron star physics I enjoy thinking about theoretical astrophysics because I want to understand black holes. Given that no one understands black holes, I like to ponder the nearest thing to a black hole: a neutron star! ... 4answers 317 views ### The Schwinger model The Schwinger model is the 2d QED with massless fermions. An important result about it (which I would like to understand) is that this is a gauge invariant theory which contains a free massive vector ... 1answer 28 views ### “Blue Bumper” Stars I was recently overviewing various massive compact halo object studies (the Anglo-Australian MACHO collaboration and the French I/II EROS collaboration), and they frequently reference "blue bumper ... 3answers 134 views ### Hilbert-Schmidt basis for many qubits - reference Every density matrix of $n$ qubits can be written in the following way \hat{\rho}=\frac{1}{2^n}\sum_{i_1,i_2,\ldots,i_n=0}^3 t_{i_1i_2\ldots i_n} ... 1answer 40 views ### Functional relations for Kochen-Specker proofs Many proofs of the Kochen-Specker theorem use some form of the following argument (from Mermin's "Simple Unified Form for the major No-Hidden-Variables Theorems" ) [I]f some functional relation ... 2answers 336 views ### What Hermitian operators can be observables? We can construct a Hermitian operator $O$ in the following general way: find a complete set of projectors $P_\lambda$ which commute, assign to each projector a unique real number \$\lambda\in\mathbb ... 
3answers 184 views ### References on the physics of anyons Anyone know some good introductory references on the physics of anyons? 6answers 71 views ### Papers and preprints worth reading, Jan-midFeb 2012 [closed] Which recent (i.e. Jan-midFeb 2012) papers and preprint do you consider really worth reading? References should be followed by a summary saying what is the result and (implicitly or explicitly) why ... 1answer 73 views ### Many body quantum states analyzed as probabilistic sequences Measurements of consecutive sites in a many body qudit system (e.q. a spin chain) can be interpreted as generating a probabilistic sequence of numbers $X_1 X_2 X_3 \ldots$, where \$X_i\in ... 0answers 120 views ### Intuitive sketch of the correspondence of a string theory to its limiting quantum field theory I'm looking for an intuitive sketch of how one shows the correspondence of string theory to a certain QFT. My best guess is that one calculates the scattering amplitudes in the string theory as a ... 0answers 36 views ### Minimal strings and topological strings In http://arxiv.org/abs/hep-th/0206255 Dijkgraaf and Vafa showed that the closed string partition function of the topological B-model on a Calabi-Yau of the form $uv-H(x,y)=0$ coincides with the free ... 6answers 847 views ### What is a tensor? I have a pretty good knowledge of physics but couldn't understand what a tensor is. I just couldn't understand it, and the wiki page is very hard to understand as well. Can someone refer me to a good ... 3answers 405 views ### Noether theorem with semigroup of symmetry instead of group Suppose You have semigroup instead of typical group construction in Noether theorem. Is this interesting? In fact there is no time-reversal symmetry in the nature, right? At least not in the same ... 2answers 658 views ### Modern and complete references for the $k\cdot p$ method? I've recently started studying the $k\cdot p$ method for describing electronic bandstructures near the centre of the Brillouin zone and I've been finding it hard to find any pedagogical references on ... 2answers 51 views ### “tmf(n) is the space of supersymmetric conformal field theories of central charge -n” I read this intriguing statement in John Baez' week 197 the other day, and I've been giving it some thought. The post in question is from 2003, so I was wondering if there has been any progress in ... 2answers 116 views ### Simulation of QED Can anyone point me to a paper dealing with simulation of QED or the Standard Model in general? I will particularly appreciate a review paper. 2answers 445 views ### Majorana zero mode in quantum field theory Recently, Majorana zero mode becomes very hot in condensed matter physics. I remember there was a lot of study of fermion zero mode in quantum field theory, where advanced math, such as index ... 4answers 347 views ### Hamiltonian and the space-time structure I'm reading Arnold's "Mathematical Methods of Classical Mechanics" but I failed to find rigorous development for the allowed forms of Hamiltonian. Space-time structure dictates the form of ... 1answer 87 views ### Fourier Methods in General Relativity I am looking for some references which discuss Fourier transform methods in GR. Specifically supposing you have a metric $g_{\mu \nu}(x)$ and its Fourier transform $\tilde{g}_{\mu \nu}(k)$, what does ... 
1answer 49 views ### Canonical averages in a Fermi gas aka generalized Fermi-Dirac distribution I am in the process of applying Beenakker's tunneling master equation theory of quantum dots (with some generalizations) to some problems of non-adiabatic charge pumping. As a part of this work I ... 3answers 451 views ### Boundary layer theory in fluids learning resources I'm trying to understand boundary layer theory in fluids. All I've found are dimensional arguments, order of magnitude arguments, etc... What I'm looking for is more mathematically sound arguments. ... 1answer 164 views ### Are Born-Oppenheimer energies analytic functions of nuclear positions? I am looking for references to bibliography that explores the smoothness and analyticity of eigenvalues and eigenfunctions (and matrix elements in general) of a hamiltonian that depends on some ... 1answer 78 views ### Characters of $\widehat{\mathfrak{su}}(2)_k$ and WZW coset construction I am currently studying affine Lie algebras and the WZW coset construction. I have a minor technical problem in calculating the (specialized) character of $\widehat{\mathfrak{su}}(2)_k$ for an affine ... 0answers 205 views ### Lower bounds on spectral gaps of ferromagnetic spin-1/2 XXX Hamiltonians? Question. Are there any references or techniques which can be applied to obtain energy gaps for ferromagnetic XXX spin-1/2 Hamitlonians, on general interaction graphs, or tree-graphs? I'm interested ... 10answers 2k views ### Physics for mathematicians How and from where does a mathematician learn physics from a mathematical stand point? I am reading the book by Spivak Elementary Mechanics from a mathematicians view point. The first couple of pages ... 8answers 4k views ### Which Mechanics book is the best for beginner in math major? I'm a bachelor student majoring in math, and pretty interested in physics. I would like a book to study for classical mechanics, that will prepare me to work through Goldstein's Classical Mechanics. ... 2answers 277 views ### Wilson/Polyakov loops in Weinberg's QFT books I wanted to know if the discussion on Wilson loops and Polyakov loops (and their relationship to confinement and asymptotic freedom) is present in the three volumes of Weinberg's QFT books but in some ... 1answer 69 views ### Quantum mechanics as a Markov process I am currently involved in some understanding on this matter with a colleague of mine. I know all the literature about but I do not know the state of art. Please, could you provide some relevant ... 1answer 105 views ### Request for Reference: BRST formalism/transformations Could anyone please suggest a very basic paper/reference/literature on BRST symmetry/formalism that requires rudimentary knowledge of Dirac's method for dealing with constrained systems and generation ... 2answers 352 views ### What's a good reference for the electrodynamics of moving media? The answer to a previous question suggests that a moving, permanently magnetized material has an effective electric polarization $\vec{v}\times\vec{M}$. This is easy to check in the case of ... 2answers 282 views ### Is there a published upper limit on the electron's electric quadrupole moment? I understand an electric quadrupole moment is forbidden in the standard electron theory. In this paper considering general relativistic corrections (Kerr-Newman metric around the electron), however, ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9197720885276794, "perplexity_flag": "middle"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Gradient_descent
# Gradient descent

Gradient descent is an optimization algorithm that approaches a local minimum of a function by taking steps proportional to the negative of the gradient (or the approximate gradient) of the function at the current point. If instead one takes steps proportional to the gradient itself, one approaches a local maximum of that function; that variant is usually called gradient ascent, and it is the convention used in the illustration below. The algorithm is also known as steepest descent, or the method of steepest descent (not to be confused with the method of the same name for approximating integrals).

## Description of the method

Gradient descent is based on the observation that if the real-valued function $F(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $F(\mathbf{x})$ increases fastest if one goes from $\mathbf{a}$ in the direction of the gradient of F at $\mathbf{a}$, $\nabla F(\mathbf{a})$. It follows that, if $\mathbf{b}=\mathbf{a}+\gamma\nabla F(\mathbf{a})$ for γ > 0 a small enough number, then $F(\mathbf{a})\leq F(\mathbf{b})$.

With this observation in mind, one starts with a guess $\mathbf{x}_0$ for a local maximum of F, and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \dots$ such that

$\mathbf{x}_{n+1}=\mathbf{x}_n+\gamma \nabla F(\mathbf{x}_n),\ n \ge 0.$

We have

$F(\mathbf{x}_0)\le F(\mathbf{x}_1)\le F(\mathbf{x}_2)\le \dots,$

so hopefully the sequence $(\mathbf{x}_n)$ converges to the desired local maximum. Note that the value of the step size γ is allowed to change at every iteration.

Let us illustrate this process in the picture below. Here F is assumed to be defined on the plane, and its graph is assumed to look like a hill. The blue curves are the contour lines, that is, the regions on which the value of F is constant. A red arrow originating at a point shows the direction of the gradient at that point. Note that the gradient at a point is perpendicular to the contour line going through that point. We see that the iteration leads us to the top of the hill, that is, to the point where the value of the function F is largest. To have the iteration go towards a local minimum instead (gradient descent proper), one replaces γ with −γ.

## Comments

Note that gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones.

Two weaknesses of gradient descent are:

1. The algorithm can take many iterations to converge towards a local maximum/minimum if the curvature in different directions is very different.
2. Finding the optimal γ per step can be time-consuming. Conversely, using a fixed γ can yield poor results.

Conjugate gradient is often a better alternative. A more powerful algorithm is given by the BFGS method, which at every step computes a matrix by which the gradient vector is multiplied so as to step in a "better" direction, combined with a more sophisticated line search algorithm to find the "best" value of γ.
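A minimal implementation of the update rule above, written in the minimization convention (step along the negative gradient) with a fixed step size γ; the quadratic test function and the numbers in it are only an illustrative choice.

```python
import numpy as np

def gradient_descent(grad, x0, gamma=0.1, tol=1e-8, max_iter=10_000):
    """Iterate x_{n+1} = x_n - gamma * grad(x_n) until the step becomes tiny."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = gamma * grad(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: minimize F(x, y) = (x - 3)^2 + 10*(y + 1)^2
grad_F = lambda p: np.array([2.0 * (p[0] - 3.0), 20.0 * (p[1] + 1.0)])
print(gradient_descent(grad_F, [0.0, 0.0], gamma=0.05))   # approaches (3, -1)
```

With γ = 0.1 on the same test function the stiff coordinate stops converging (its per-step factor is |1 − 0.1·20| = 1, so it just oscillates), which is exactly the step-size sensitivity described in the comments above.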
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8739789724349976, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/92131/fx-fx1-and-f-1-x-fx/92164
# $f(x)=f(x+1)$ and $f(-1/x)=f(x)$

Is there any function $f$ which would satisfy $f(x)=f(x+1)$ and $f(-1/x)=f(x)$ for every $x$, or at least for positive $x$? For the widest possible domains of $x$? If I could turn this functional equation into differential equations, I could use some approximate analytic method to get the solution. Thanks in advance.

In a more general case, is there a function $g$ so that $f \left( \frac{ax+b}{cx+d} \right) = g(x)$? For real $a$, $b$, $c$ and $d$?

- I think you want to ask: "Find all the function satisfy..." instead of asking "is there any function which satisfy..." since there are many functions satisfying the functional equation you stated, e.g. constant functions. – Paul Dec 16 '11 at 23:22
@Jose Garcia: You would want to impose additional conditions on $f$ relevant to your needs. For example, continuity everywhere is too strong, for then only the constant functions work. – André Nicolas Dec 17 '11 at 0:12
2 – user17762 Jun 26 '12 at 21:18

## 5 Answers

The general solution of $f(x)=f(x+1)$, according to http://eqworld.ipmnet.ru/en/solutions/fe/fe1101.pdf, should be $f(x)=\Theta(x)$, where $\Theta(x)$ is an arbitrary periodic function with unit period. The general solution of $f(-\frac{1}{x})=f(x)$, according to http://eqworld.ipmnet.ru/en/solutions/fe/fe1120.pdf, should be $f(x)=\Phi(x,-\frac{1}{x})$, where $\Phi(x,-\frac{1}{x})$ is any symmetric function of $x$ and $-\frac{1}{x}$. So the function $f(x)$ that satisfies $f(x)=f(x+1)$ and $f(-\frac{1}{x})=f(x)$ is the intersection of the above two general solutions. -

1 Is it really interesting ? – Lierre Jun 24 '12 at 9:09
11 (-1) "The general solution of $f(−1/x)=f(x)$, according to [...], should be $f(x)=\Phi(x,−1/x)$, where $\Phi(x)$ is any symmetric function of $x$ and $−1/x$." -- This is as empty as a statement can be. – TMM Jun 26 '12 at 21:27
2 I am not sure why this was accepted by the OP. While the answer is correct, it is vacuous. – Eric♦ Jul 1 '12 at 20:36
Captain obvious nervously smoking in aside – Norbert Jul 4 '12 at 0:32

I'm not sure if this is of interest, but if, instead of real $x$, you consider $z$ in the complex upper half plane, then the two linear fractional transformations $$z\to z+1,\quad z\to -1/z$$ generate the modular group. I.e., writing the linear fractional transformations as matrices, they generate $\text{PSL}_2(\mathbb Z)$. The classical $j$ invariant http://en.wikipedia.org/wiki/J-invariant is an example of a function invariant under the modular group. -

This neat property of the Klein invariant also makes for a good numerical algorithm for evaluating it... – J. M. Dec 17 '11 at 5:52

A very simple solution to these functional equations is any constant function of the form $f(x)=c$ for some constant $c$. Let $f(x)=3$. Then $f(x+1)=f(x)=f(-1/x)=3$. -

May I ask why this was downvoted? If this answer does not respond to the question appropriately, I will remove it. – analysisj Dec 16 '11 at 23:42
Your answer is correct, I can not understand why anyone would have downvoted it. – Arjang Dec 16 '11 at 23:59

(Edit) You have $$\frac{-1}{\frac{-1}{\frac{-1}{x}+1}+1}=x-1,$$ so $$f(x)=f\left(\frac{-1}{x}\right)=f\left(\frac{-1}{x}+1\right)=\dots =f\left(\frac{-1}{\frac{-1}{\frac{-1}{x}+1}+1}\right)=f(x-1),$$ and so you can even explicitly shift to the left. One can actually construct any continued fraction, and so you see that the function takes the same value on all rational values (even with only a finite number of operations for each number).
Due to $f(x)=f(x\pm 1)$ you have a grid with gap distance $1$ at the right of any starting value. There the function always takes the same value, i.e. $f(n)=f(0)$ for all integers. Now consider $n=-2$; then $f(1/2)=f(-2)=f(0-2)=f(0),$ and consequently also $f(1/2+n)=f(0)$. So the grid really only has distance $1/2$. We can go on and collect more points with that value. Look at numbers with bigger absolute values, like $-3:\ f(1/3)=f(-3)=f(0)$. Obviously $f(1/n)=f(0)$ for any $n$. And so in general we have $f(1/n+m)=f(0)$ for all integers $n$ and $m$. Therefore also $f(1/(1/n+m))=f(0)$, and so $f(1/(1/n+m)+k)=f(0)$ and so on.

Sidenote: The first condition is obviously periodicity. And regarding the other condition, notice that whenever you have an operation $g$ with $g(g(x))=x$, as is the case for $g(x)=-\frac{1}{x}$ or in fact all real functions that can be mirrored w.r.t. the $45°$ axis, then for any function $f$, the function $$\hat{f}(x):=f(x)+f(g(x))$$ fulfills $$\hat{f}(g(x))=f(g(x))+f(g(g(x)))=f(g(x))+f(x)=\hat{f}(x)$$ Actually, via the identity $$f(x)=\frac{1}{2}\left(f(x)+f(g(x))\right)+\frac{1}{2}\left(f(x)-f(g(x))\right),$$ all functions have a part which fulfills the relation, except the ones which fulfill the anti-relation $f(g(x))=-f(x)$, for which that part is zero. -

(+1) I like the first part of your answer. – TMM Jun 26 '12 at 21:30

As stopple noted, the transformations $z \to z+1$ and $z \to -1/z$ generate the modular group. For any two rationals $r$ and $s$ there is a transformation in this group that takes $r$ to $s$, and thus $f(r) = f(s)$. In particular, if $f$ is continuous that says $f$ is constant. -
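The last two answers can be made concrete with a small computation (my own sketch, not from the thread): a Euclidean-algorithm-style reduction drives any rational to an integer and then to 0 using only the moves x → x ± 1 and x → −1/x. Since both moves preserve the value of f, this exhibits explicitly that f takes the same value at every rational.

```python
from fractions import Fraction
import math

def reduce_to_zero(r):
    """Return the values visited while driving the rational r to 0 using only
    the moves x -> x - 1, x -> x + 1 and x -> -1/x; f is constant along the path."""
    r = Fraction(r)
    path = [r]
    while r != 0:
        if r.denominator == 1:
            # an integer: unit shifts take it to 0
            r += 1 if r < 0 else -1
        elif 0 < r < 1:
            # a proper fraction p/q: inverting gives -q/p, whose denominator p < q,
            # so each inversion strictly shrinks the denominator and the loop ends
            r = -1 / r
        else:
            # shift into [0, 1); a shift by an integer is just a run of unit shifts
            r -= math.floor(r)
        path.append(r)
    return path

print(reduce_to_zero(Fraction(22, 7)))
```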
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9122698903083801, "perplexity_flag": "head"}
http://mathoverflow.net/questions/103817/seeking-reference-on-regularity-theory-for-nonlinear-elliptic-pde
## Seeking reference on regularity theory for nonlinear elliptic PDE

Hello, I am searching for a reference on a result I know must exist proving regularity for weak solutions of a (nonlinear, but well-behaved) elliptic homogeneous PDE. Working over say a bounded open subset of $\mathbb{R}^n$ would be fine, and I don't need to deal with very nonlinear PDE - quasilinear is enough for me. If someone can help locate a reference, I'd be very grateful! -

## 1 Answer

"Elliptic Partial Differential Equations of Second Order" by David Gilbarg and Neil S. Trudinger -

Oh, thank you! I really appreciate it. Headed to library. :-) – Idempotent Aug 2 at 21:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919419527053833, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/05/18/
# The Unapologetic Mathematician

## Galois Connections

I want to mention a topic I thought I'd hit back when we talked about adjoint functors. We know that every poset is a category, with the elements as objects and a single arrow from $a$ to $b$ if $a\leq b$. Functors between such categories are monotone functions, preserving the order. Contravariant functors are so-called "antitone" functions, which reverse the order, but the same abstract nonsense as usual tells us this is just a monotone function to the "opposite" poset with the order reversed.

So let's consider an adjoint pair $F\dashv G$ of such functors. This means there is a natural isomorphism between $\hom(F(a),b)$ and $\hom(a,G(b))$. But each of these hom-sets is either empty (if $a\not\leq b$) or a singleton (if $a\leq b$). So the adjunction between $F$ and $G$ means that $F(a)\leq b$ if and only if $a\leq G(b)$. The analogous condition for an antitone adjoint pair is that $b\leq F(a)$ if and only if $a\leq G(b)$.

There are some immediate consequences to having a Galois connection, which are connected to properties of adjoints. First off, we know that $a\leq G(F(a))$ and $F(G(b))\leq b$. This essentially expresses the unit and counit of the adjunction. For the antitone version, let's show the analogous statement more directly: we know that $F(a)\leq F(a)$, so the adjoint condition says that $a\leq G(F(a))$. Similarly, $b\leq F(G(b))$. This second condition is backwards because we're reversing the order on one of the posets.

Using the unit and the counit of an adjunction, we found a certain quasi-inverse relation between some natural transformations on functors. For our purposes, we observe that since $a\leq G(F(a))$ we have the special case $G(b)\leq G(F(G(b)))$. But $F(G(b))\leq b$, and $G$ preserves the order. Thus $G(F(G(b)))\leq G(b)$. So $G(b)=G(F(G(b)))$. Similarly, we find that $F(G(F(a)))=F(a)$, which holds for both monotone and antitone Galois connections.

Chasing special cases further, we find that $G(F(G(F(a))))=G(F(a))$, and that $F(G(F(G(b))))=F(G(b))$ for either kind of Galois connection. That is, $F\circ G$ and $G\circ F$ are idempotent functions. In general categories, the composition of two adjoint functors gives a monad, and this idempotence is just the analogue in our particular categories. In particular, these functions behave like closure operators, but for the fact that general posets don't have joins or bottom elements to preserve in the third and fourth Kuratowski axioms. And so elements left fixed by $G\circ F$ (or $F\circ G$) are called "closed" elements of the poset. The images of $F$ and $G$ consist of such closed elements.
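A concrete toy example (my own, not from the post): for a fixed positive integer k, the monotone maps F(a) = ka and G(b) = ⌊b/k⌋ on the poset (ℤ, ≤) form a Galois connection, since ka ≤ b exactly when a ≤ ⌊b/k⌋. The sketch below spot-checks the adjunction, the unit and counit inequalities, and the idempotence of the composites on a small range.

```python
k = 7
F = lambda a: k * a            # left adjoint
G = lambda b: b // k           # right adjoint (floor division, which floors toward -inf)

sample = range(-50, 51)
for a in sample:
    for b in sample:
        assert (F(a) <= b) == (a <= G(b))          # the adjunction itself
for a in sample:
    assert a <= G(F(a))                            # unit
    assert G(F(G(F(a)))) == G(F(a))                # G o F is idempotent
for b in sample:
    assert F(G(b)) <= b                            # counit
    assert F(G(F(G(b)))) == F(G(b))                # F o G is idempotent
print("all Galois-connection identities hold on the sample range")
```

Here F∘G rounds an integer down to the nearest multiple of k, so its fixed points (the "closed" elements on that side) are exactly the multiples of k.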
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.90891432762146, "perplexity_flag": "head"}
http://citizendia.org/Solenoid
For other uses, see Solenoid (disambiguation).

Magnetic field created by a solenoid

A solenoid is a 3-dimensional coil. In physics, the term solenoid refers to a loop of wire, often wrapped around a metallic core, which produces a magnetic field when an electrical current is passed through it. Solenoids are important because they can create controlled magnetic fields and can be used as electromagnets. The term solenoid refers specifically to a magnet designed to produce a uniform magnetic field in a volume of space (where some experiment might be carried out).

In engineering, the term solenoid may also refer to a variety of transducer devices that convert energy into linear motion. The term is also often used to refer to a solenoid valve, which is an integrated device containing an electromechanical solenoid which actuates either a pneumatic or hydraulic valve, or a solenoid switch, which is a specific type of relay that internally uses an electromechanical solenoid to operate an electrical switch; for example, an automobile starter solenoid or a linear solenoid, which is an electromechanical solenoid.

## Magnetic field

This is a derivation of the magnetic field around a solenoid that is long enough so that fringe effects can be ignored. In the diagram to the right, we immediately know that the field points in the positive z direction inside the solenoid, and in the negative z direction outside the solenoid.

A solenoid with 3 Ampèrian loops

We see this by applying the right hand grip rule for the field around a wire. If we wrap our right hand around a wire with the thumb pointing in the direction of the current, the fingers show how the field behaves.
Since we are dealing with a long solenoid, all of the components of the magnetic field not pointing upwards cancel out by symmetry. Outside, a similar cancellation occurs, and the field is only pointing downwards.

Now consider loop "c". By Ampère's law, we know that the path integral of B around this loop is zero, since no current passes through it (and it can be assumed that the circuital electric field through the loop is constant under conditions such as a constant or steadily changing current in the solenoid). We have shown above that the field is pointing upwards inside the solenoid, so the horizontal portions of loop "c" don't contribute anything to the integral. Thus the integral up side 1 is equal to the integral down side 2. Since we can arbitrarily change the dimensions of the loop and get the same result, the only physical explanation is that the integrands are actually equal, that is, the magnetic field inside the solenoid is constant. A similar argument can be applied to loop "a" to conclude that the field outside the solenoid is constant.

A solenoid with a looping magnetic field line

An intuitive argument can be used to show that the field outside the solenoid is actually zero. Magnetic field lines only exist as loops; they cannot diverge from or converge to a point like electric field lines can. The magnetic field lines go up the inside of the solenoid, so they must go down the outside so that they can form a loop. However, the volume outside the solenoid is much greater than the volume inside, so the density of magnetic field lines outside is greatly reduced. Recall also that the field outside is constant. In order for the total number of field lines to be conserved, the field outside must go to zero as the solenoid gets longer.

Now we can consider loop "b". Take the path integral of B around the loop, with the height of the loop set to h. The horizontal components vanish, and the field outside is zero, so Ampère's law gives us:

Bh = μ0NI

From which we get:

$B = \mu_0 \frac{N I}{h}$

This equation is for a solenoid with no core. The inclusion of a metal core, such as iron, increases the magnitude of the magnetic field of the solenoid when everything else is unchanged (same current, length, number of coils). This is expressed by the formula

$B = \kappa \mu_0 \frac{N I}{h}$

where κ is the relative permeability of the core material, and μ = κμ0 is its permeability, such that:

$B = \mu \frac{N I}{h}$

## Rotary Voice Coil

This is a rotational version of a solenoid. Typically the fixed magnet is on the outside, and the coil part moves in an arc controlled by the current flow through the coils. Rotary voice coils are widely employed in devices such as disk drives.

## Electromechanical solenoids

A 1920 explanation of a commercial solenoid used as an electromechanical actuator

Electromechanical solenoids consist of an electromagnetically inductive coil, wound around a movable steel or iron slug (termed the armature).
The coil is shaped such that the armature can be moved in and out of the center, altering the coil's inductance and thereby becoming an electromagnet. The armature is used to provide a mechanical force to some mechanism (such as controlling a pneumatic valve). Although typically weak over anything but very short distances, solenoids may be controlled directly by a controller circuit, and thus have very low reaction times. The force applied to the armature is proportional to the rate of change of the coil's inductance with respect to the position of the armature and to the square of the current flowing through the coil, and it always moves the armature in the direction that increases the coil's inductance.

The magnetic field inside a solenoid is given by:

$B=\mu_0 n I=\mu_0 \frac{NI}{h}$

where $\mu_0=4\pi \times 10^{-7}$ henries per meter, B is the magnetic field magnitude in teslas, n is the number of turns per meter, I is the current in amperes, N is the number of turns and h is the length of the solenoid in meters. See also: Electromagnet.

Electromechanical solenoids are commonly seen in electronic paintball markers, pinball machines, dot matrix printers and fuel injectors.

## Pneumatic solenoid valves

A pneumatic solenoid valve is a switch for routing air to any pneumatic device, usually an actuator of some kind. The valve consists of a balanced or easily movable core, which channels the gas to the appropriate port, coupled to a small linear solenoid. The valve allows a small current applied to the solenoid to switch a large amount of high-pressure gas, typically up to 100 psi (7 bar, 0.7 MPa, 0.7 MN/m²). Some solenoids are capable of operating at far greater pressures. Pneumatic solenoids may have one, two, or three output ports, and the requisite number of vents. The valves are commonly used to control a piston or other linear actuator. The pneumatic solenoid is akin to a transistor, allowing a relatively small signal to control a large device. It is also the interface between electronic controllers and pneumatic systems.
## Hydraulic solenoid valves

Hydraulic solenoid valves are in general similar to pneumatic solenoid valves except that they control the flow of hydraulic fluid (oil), often at around 3000 psi (210 bar, 21 MPa, 21 MN/m²). Hydraulic machinery uses solenoids to control the flow of oil to rams or actuators to (for instance) bend sheets of titanium in aerospace manufacturing. Solenoid-controlled valves are often used in irrigation systems, where a relatively weak solenoid opens and closes a small pilot valve, which in turn activates the main valve by applying fluid pressure to a piston or diaphragm that is mechanically coupled to the main valve. Transmission solenoids control fluid flow through an automatic transmission and are typically installed in the transmission valve body.

## Automobile starter solenoid

In a car or truck, the starter solenoid is part of an automobile ignition system. Also called a starter relay, the starter solenoid receives a large electric current from the car battery and a small electrical current from the ignition switch. When the ignition switch is turned on (when the key is turned to start the car), the small electrical current forces the starter solenoid to close a pair of heavy contacts, thus relaying the large electrical current to the starter motor. Starter solenoids can also be built into the starter itself, often visible on the outside of the starter. If a starter solenoid receives insufficient power from the battery, it will fail to start the motor, and may produce a rapid 'clicking' or 'clacking' sound. This can be caused by a low or dead battery, by corroded or loose connections in the cable, or by a broken or damaged positive (red) cable from the battery. Any of these will result in some power to the solenoid, but not enough to hold the heavy contacts closed, so the starter motor itself never spins, and the engine is not rotated (does not start).
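As a rough numerical illustration of the field formula B = κμ0NI/h used above (the solenoid parameters and the core permeability here are made up for the example, and core saturation is ignored):

```python
import math

mu_0 = 4 * math.pi * 1e-7      # vacuum permeability, in henries per meter

def solenoid_field(N, I, h, kappa=1.0):
    """B = kappa * mu_0 * N * I / h for a long solenoid.
    kappa is the relative permeability of the core (1 for an air core)."""
    return kappa * mu_0 * N * I / h

# Hypothetical solenoid: 500 turns, 10 cm long, carrying 2 A
print(f"air core:  {solenoid_field(500, 2.0, 0.10):.4f} T")                 # ~0.0126 T
print(f"iron core: {solenoid_field(500, 2.0, 0.10, kappa=200):.2f} T")      # ~2.5 T (idealized)
```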
http://math.stackexchange.com/questions/70500/finding-inverse-cosh
# Finding inverse cosh

I am trying to find $\cosh^{-1}1$. I end up with something that looks like $e^y+e^{-y}=2x$. I followed the formula correctly, so I believe that is correct up to this point. I then plug in $1$ for $x$ and I get $e^y+e^{-y}=2$ which, according to my mathematical knowledge, is still correct. From here I have absolutely no idea what to do, as anything I do gives me an incredibly complicated problem or the wrong answer.

- Multiply both sides by $e^y$, then subtract the right-hand side from the left-hand side. What does that give you? – anon Oct 7 '11 at 1:51
- $e^{2y}-2e^y+1=0$ I think. – Jordan Oct 7 '11 at 1:52
- Look up what I did in detail for the inverse $\sinh$. Almost everything will be the same except for one change of sign, and the fact that $x=1$. – André Nicolas Oct 7 '11 at 1:59
- @Jordan: If even such a basic thing as the quadratic formula goes over your head, why on earth are you tormenting yourself by trying to learn calculus? You're just wasting your time. You would be better off spending your time on something that you're good at... (If you really really need to learn calculus, then – as many people have already pointed out in comments to your previous questions – you should first learn the pre-calculus stuff reasonably well.) – Hans Lundmark Oct 7 '11 at 5:53
- @Jordan: Please don't think of yourself as stupid just because you're struggling with math. At the university where I work, I meet lots of students that are obviously intelligent and ambitious, but still have a hard time with the math courses. In many cases, I think it can be blamed on their being exposed to years of bad teaching in elementary school. When I see the types of errors and misunderstandings that you display here, I don't think "That's one stupid fellow!", but rather "How is it possible that no math teacher has explained this and set it straight years ago?". – Hans Lundmark Oct 8 '11 at 6:46

## 5 Answers

Start from $$\cosh(y)=x.$$ Since $$\cosh^2(y)-\sinh^2(y)=1,$$ we have $$x^2-\sinh^2(y)=1,$$ so $$\sinh(y)=\sqrt{x^2-1}.$$ Now add $\cosh(y)=x$ to both sides to get $$\sinh(y)+\cosh(y) = \sqrt{x^2-1} + x.$$ The left-hand side simplifies to $\exp(y)$, so the answer is $$y=\ln\left(\sqrt{x^2-1}+x\right).$$

- I don't follow at all what happened. I am assuming you are using hyperbolic identities which I have not memorized. – Jordan Oct 7 '11 at 2:56
- And you are looking up the definition and identities for hyperbolic functions right now .. correct? – ja72 Oct 7 '11 at 11:32
- I have them written down on an index card. – Jordan Oct 7 '11 at 21:18

You have found out that the unknown $y$ satisfies the equation $e^y+e^{-y}=2$. Multiply by $e^y$ and rearrange terms. You then get $$e^{2y}-2e^y+1=0\ .$$ Now use the following trick: put $e^y=:u$ with a new unknown $u$. This $u$ has to satisfy the quadratic equation $$u^2-2u+1=0\ ,\quad{\rm i.e.,}\quad (u-1)^2=0\ .$$ The last equation has the unique solution $u=1$. The corresponding $y$ therefore satisfies the equation $e^y=1$, and there is only one such real $y$, namely $y=0$. All in all we have shown that $\cosh^{-1}(1)=0$, which is corroborated by the fact that conversely $\cosh(0)={1\over2}(e^0+e^{-0})=1$.

- (+1) In my view: very, very nice! – Gottfried Helms Oct 7 '11 at 15:46

It may be more helpful to consider the significant hyperbolic identities first.
We have in general: $\small \begin{array} {rcllll} 1)& \exp(z) &=& \cosh(z) + \sinh(z) \\ 2)& 1 &=& \cosh(z)^2 - \sinh(z)^2 \\ &&& \implies \\ 3)&\sinh(z) &=& \pm \sqrt{\cosh(z)^2-1} & \text{ using 2)}\\ 4)& \exp(z)&=& \cosh(z) \pm \sqrt{\cosh(z)^2-1} & \text{ using 1) and 3)}\\ \end{array}$

Now the given problem is to find another expression for $\small y=\cosh^{-1}(x)$, which means $\small x = \cosh(y)$. We use 4) and insert our current $y$ for the general $z$ to get

$\small \begin{array} {rcllll} 5)& \exp(y)&=& \cosh(y) \pm \sqrt{\cosh(y)^2-1} & \text{ using 4)}\\ 6)& \exp(y)&=& x \pm \sqrt{x^2-1} & \text{ inserting } x \text{ for } \cosh(y)\\ 7)& y&=& \log(x \pm \sqrt{x^2-1} ) & \\ 8)& \cosh^{-1}(x)&=& \log(x \pm \sqrt{x^2-1} ) &\text{ inserting } \cosh^{-1}(x) \text{ for } y \\ 9)& \cosh^{-1}(1)&=& ??? \\ \end{array}$

Now 8) can be used as a new, general hyperbolic identity like those in the list from 1) to 4), and 9) is your remaining little to-do ...

- What does exp in this stand for? – Jordan Oct 7 '11 at 21:19

$$e^y+e^{-y}=2$$ Letting $u = e^y$, this becomes $$u + \frac 1u = 2$$ Multiplying both sides by $u$: $$u^2 + 1 = 2u$$ That's just a quadratic equation.

Let $\cosh^{-1}(y)=x\implies \cosh(x)=y\implies e^x+e^{-x}=2y$. Let $t=e^x$; then $t^2-2yt+1=0$, whose solutions are $t=y+\sqrt{y^2-1}$ or $t=y-\sqrt{y^2-1}$, so $x=\ln(y+\sqrt{y^2-1})$ or $x=\ln(y-\sqrt{y^2-1})$. Since $y=1$, this gives $x=0$.
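A quick numerical sanity check of the closed form derived in these answers (this snippet is an illustration added here, not part of the original thread): it compares $\ln(x+\sqrt{x^2-1})$ against the standard library's `acosh` and confirms $\cosh^{-1}(1)=0$.

```python
import math

def acosh_closed_form(x: float) -> float:
    """Inverse cosh via the log formula derived above: ln(x + sqrt(x^2 - 1)), valid for x >= 1."""
    return math.log(x + math.sqrt(x * x - 1.0))

for x in (1.0, 1.5, 2.0, 10.0):
    assert math.isclose(acosh_closed_form(x), math.acosh(x), abs_tol=1e-12)

print(acosh_closed_form(1.0))  # 0.0, matching cosh^{-1}(1) = 0
```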
http://crypto.stackexchange.com/tags/sha-2/hot?filter=all
# Tag Info

## Hot answers tagged sha-2

16

### Is SHA-512 bijective when hashing a single 512-bit block?

It would be very freakish if it turned out to be true. It is not an expected property of SHA-512 to have such bijectivity. It would be worrisome, even, because that's a kind of structure that should not appear in a proper cryptographic hash function. Actually proving that SHA-512, for 512-bit blocks, is not bijective, would already be a kind of a problem. ...

9

### Is SHA-256 secure as a CTR block cipher?

Well, as far as we know, the mode you suggest should be secure. Now, to be honest, AES256 versus your mode isn't quite a fair comparison; your mode gives somewhat less theoretical security; if you encrypt a known $2^n$ block message, the key can be recovered with $2^{256-n}$ effort; however, this observation doesn't really affect the practical security. ...

6

### Why are these specific values used to initialise the hash buffer in SHA-512?

The initial hash values for SHA-512 are the 64-bit binary expansion of the fractional part of the square root of the 9th through 16th primes (23, 29, 31, ..., 53). That is: $$I_0 = \left \lfloor \mathrm{frac} \left (\sqrt{23} \right ) \cdot 2^{64} \right \rfloor$$ $$I_1 = \left \lfloor \mathrm{frac} \left (\sqrt{29} \right ) \cdot 2^{64} \right \rfloor$$ $$\cdots$$ ...

6

### A simple block cipher based on the SHA-256 hash function

First of all, this is no block cypher at all. It's a stream cypher. Thus you can use every key only once, and you can't use any cypher modes built on block cyphers. Your scheme is vulnerable to a known plaintext attack. If the attacker knows 32 aligned (or 63 unaligned) bytes of plaintext, he can calculate the state of your cypher: $S_i = P_i \oplus C_i$ ...

5

### Is SHA-512 bijective when hashing a single 512-bit block?

No. Cryptographic hash functions model a random function, not a random permutation. A significant fraction of output hash values are expected to be unreachable and another fraction have multiple preimages. While bijectivity in general does not mean that the inverse is easy to calculate, for the types of constructs which are used in hash functions in ...

5

### Is calculating a hash code for a large file in parallel less secure than doing it sequentially?

Actually a tree-based hashing as you describe it (your method 2) somewhat lowers resistance to second preimages. For a hash function with an n-bit output, we expect resistance to: collisions up to $2^{n/2}$ effort, second preimages up to $2^n$, preimages up to $2^n$. "Effort" is here measured in number of invocations of the hash function on a short, "elementary" ...

5

### Is the last step of an iterated cryptographic hash still as resistant to preimage attacks as the original hash?

If the hash function is any good, then it should behave as a "random function" (i.e. a function chosen randomly and uniformly among all possible functions). For a random function with output size $n$ bits, it is expected that nested application will follow a "rho" pattern: the sequence of successive values ultimately enters a cycle with an expected size of ...

5

### A simple block cipher based on the SHA-256 hash function

Your cipher looks a bit like the output feedback mode of operation for block ciphers. While OFB for block ciphers is considered safe (as long as it is used right), OFB for a hash function like you are using it has the problem that the key is only used at the start, to generate the "initialization vector", not at each step of the algorithm. Thus, as ...
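To make the CTR-style construction discussed in the "Is SHA-256 secure as a CTR block cipher?" answers above concrete, here is a minimal sketch of my own (not code from any of the cited posts). It uses HMAC-SHA256 as the keyed PRF $F(k,i)$, which is one common way to key a hash function; the function and variable names are invented for the example.

```python
import hmac, hashlib

def prf(key: bytes, counter: int) -> bytes:
    # F(k, i): HMAC-SHA256 over a fixed-width counter block
    return hmac.new(key, counter.to_bytes(16, "big"), hashlib.sha256).digest()

def ctr_keystream(key: bytes, nbytes: int) -> bytes:
    # Concatenate F(k,0) || F(k,1) || ... until enough bytes are produced
    blocks = []
    i = 0
    while 32 * len(blocks) < nbytes:
        blocks.append(prf(key, i))
        i += 1
    return b"".join(blocks)[:nbytes]

def ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream; decryption is the same operation
    keystream = ctr_keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

msg = b"attack at dawn"
key = b"\x00" * 32  # demo key only; never reuse a (key, counter) pair in practice
assert ctr_encrypt(key, ctr_encrypt(key, msg)) == msg
```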
5

### Is calculating a hash code for a large file in parallel less secure than doing it sequentially?

If you want to use Skein (one of the SHA-3 candidates) anyway: it has a "mode of operation" (configuration variant) for tree hashing, which works just like your method 2. It does this internally, as multiple calls of UBI on the individual blocks. This is described in section 3.5.6 of the Skein specification paper (version 1.3). You will ...

5

### Using SHA-256 with different initial hash value

With the message padding scheme of SHA-2/SHA-256 as it stands (add one 1 bit, a minimal number of 0 bits so that the overall padded message will end on a block boundary, then the original message length over some fixed number of bits), I know no attack enabled by allowing a different IV. However, allowing an arbitrary IV renders ineffective one of the two ...

5

### What does Maj and Ch mean in SHA-256 algorithm?

The definitions given in FIPS 180-4 are $$\mathtt{Maj}(x, y, z)=(x\land y)\oplus(x\land z)\oplus(y\land z)$$ $$\mathtt{Ch}(x,y,z)=(x\land y)\oplus(\lnot x\land z)$$ where $\land$ is bitwise AND, $\oplus$ is bitwise exclusive-OR, and $\lnot$ is bitwise negation. The functions are defined for bit vectors (of 32 bits in the case of SHA-256). I'm positive $\mathtt{Maj}$ stands for majority: the result is set according to ...

4

### Does the SHA hash function always generate a fixed length hash?

Essentially yes, they do. The exact hash function you choose determines the length of output you'd expect. For example, SHA256 produces 256 bits of output. This does then beg the question "but the length of the hash is fixed and there are infinite possible inputs??!!". That's correct, except that $2^{256}$ is ...

4

### Are there any known collisions for the SHA-2 family of hash functions?

In short, no. So, what is the current state of cryptanalysis with SHA-1 (for reference only, as this question relates to SHA-2) and SHA-2? Bruce Schneier has declared SHA-1 broken. That is because researchers found a way to break full SHA-1 in $2^{69}$ operations. Much less than the $2^{80}$ operations it should take to find a collision due to the birthday ...

4

### Is SHA-256 secure as a CTR block cipher?

The CTR mode of encryption is defined in general for any cryptographically strong pseudo-random function (PRF). You can build such a PRF from a hash function. For CTR, you produce a key stream by concatenating: $$F(k,0) || F(k,1) || ... || F(k,m)$$ where $F$ is your secure PRF, $k$ is your key, and $m$ is the length of your plaintext divided by the ...

3

### Are derived hashes weakening the root?

To the best of our knowledge, SHA256 does not leak any additional information from related hashes. On the other hand, the state of "our knowledge" might not be that comprehensive; this security property of SHA256 cannot be derived from the base security assumptions of a hash function (preimage resistance, second preimage resistance and collision ...

2

### A simple block cipher based on the SHA-256 hash function

If I understood your code correctly, what you are doing is encrypting a message $m$ with a key $k$ by: $c=m\oplus h(k)$, in an ECB mode where $h$ is some hash function. Take two encrypted blocks $c_1$ and $c_2$ and add them: $c_1\oplus c_2 = m_1 \oplus h(k) \oplus m_2 \oplus h(k)=m_1\oplus m_2$. Moreover, you may lose entropy if the initial secret is ...

2

### Is SHA-256 secure as a CTR block cipher?

What you explain in the question resembles SHACAL-2 cipher's forward cipher function, see http://en.wikipedia.org/wiki/SHACAL#Security_of_SHACAL-2.
SHACAL-2 is the NESSIE-accepted way of using SHA-256 as a cipher, so it has appeared somewhat secure.

2

### Are derived hashes weakening the root?

Given just h1 and h2, if the salts are of any significant length then it will be impossible to uniquely determine "root" even if the hash function is very weak, so long as it performs enough "compression". If both salts are known as well as h1 and h2, then the value of root is impractical to determine as long as the hash function is secure. Recovery of root ...

2

### How to represent a 32-byte SHA2 hash in the shortest possible string?

There are 94 printable ASCII characters, not all of which are valid for file names, however. There should be $64=2^6$ that are valid for file names, so read $6$ bits at a time and map those to one of the $64$ characters that are valid for file names. That would give you $256/6\approx 43$ characters. It will be hard to get much smaller than that. That ...

2

### Does the SHA hash function always generate a fixed length hash?

By the definition in FIPS 180-4, published March 2012, there are:

- 160 bits in the output of SHA-1
- 224 bits in the output of SHA-224
- 256 bits in the output of SHA-256
- 384 bits in the output of SHA-384
- 512 bits in the output of SHA-512
- 224 bits in the output of SHA-512/224
- 256 bits in the output of SHA-512/256

1

### Are derived hashes weakening the root?

The answer depends on assumptions about the plaintext. If an adversary can enumerate the possible plaintext (e.g. if the plaintext is a password, mediocre passphrase, or a published file) then yes: knowledge of h1 or h2 allows finding what the plaintext is, by verifying beyond reasonable doubt an hypothesis made. For some level of protection against that, use a ...

1

### Are derived hashes weakening the root?

Presuming root contains enough entropy to make a brute force search infeasible given only $h1$ and $salt1$, the (presumed) preimage resistance of SHA256 means that finding root would still be infeasible even if the attacker also knows the value $h2$ and $salt2$. Update: First order preimage resistance is usually defined as, for a random value $h$, it is ...

1

### Is calculating a hash code for a large file in parallel less secure than doing it sequentially?

To sum up other contributions, the proposed construction:

- is at least as secure as SHA-256 against collision attacks, that is, against the ability for an adversary to construct two files with the same hash; if SHA-256 were perfect, the difficulty would be on the order of $2^{128}$ hashes.
- is slightly less secure than SHA-256 against second-preimage attacks, that is the ...

1

### Is calculating a hash code for a large file in parallel less secure than doing it sequentially?

If a hash function is suitable for general use, it will be suitable for this use. So long as an attacker cannot find two binary strings that hash to the same value, your method is secure. If you aren't confident that's true of the hash algorithm you are using, you picked a bad algorithm. Saying that an attacker has 32,768 opportunities to find a collision ...

1

### Is calculating a hash code for a large file in parallel less secure than doing it sequentially?

Method 2 is no less secure than method 1. Here's why: the cryptographic property that a hash function possesses is that it is supposed to be computationally infeasible to find any two distinct preimages that hash to the same value. Method 1 relies on this directly. However, if we were to have an example of a collision with method 2, this implies that ...
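As an illustration of the "about 43 characters" estimate in the answers above (a snippet added here, not from the original posts), a 32-byte SHA-256 digest can be mapped to 43 filename-safe characters using the URL-/filename-safe base64 alphabet:

```python
import base64, hashlib

digest = hashlib.sha256(b"example input").digest()   # 32 bytes
# URL-/filename-safe base64 uses A-Z, a-z, 0-9, '-' and '_' (64 symbols = 6 bits each).
name = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
print(len(name), name)  # 43 characters, since ceil(256 / 6) = 43
```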
http://www.cfd-online.com/W/index.php?title=Skin_friction_coefficient&diff=9027&oldid=9026
# Skin friction coefficient

### From CFD-Wiki

## Revision as of 17:45, 7 April 2008

The skin friction coefficient, $C_f$, is defined by:

$C_f \equiv \frac{\tau_w}{\frac{1}{2} \, \rho \, U_\infty^2}$

where $\tau_w$ is the local wall shear stress, $\rho$ is the fluid density and $U_\infty$ is the free-stream velocity (usually taken outside of the boundary layer or at the inlet).

It is related to the momentum thickness $\theta$ as follows:

$C_f = 2 \, \frac{d\theta}{dx}$

Someone should add some correlations and references for them here.
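A small numerical sketch of the definition above (added here as an illustration; the flow values are arbitrary example numbers, not data from the wiki page):

```python
def skin_friction_coefficient(tau_w: float, rho: float, u_inf: float) -> float:
    """C_f = tau_w / (0.5 * rho * U_inf^2), all quantities in consistent SI units."""
    return tau_w / (0.5 * rho * u_inf**2)

# Example: air (rho ~ 1.2 kg/m^3) at 10 m/s with a measured wall shear stress of 0.2 Pa
print(skin_friction_coefficient(tau_w=0.2, rho=1.2, u_inf=10.0))  # ~0.0033
```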
http://mathoverflow.net/revisions/44029/list
## Return to Answer

Revision 3 (added 63 characters in body):

Edit again: this answer is wrong, see the comments. The new set of families (for each object $X$) is called the sieve generated by the existing covers of $X$. One term for a Grothendieck pretopology is a basis for a Grothendieck topology, and different bases can give rise to the same Grothendieck topology. All of them, and the topology they generate, have the same sheaves. See here for example. Edit: Actually it is proposition C.2.1.9 in Johnstone's Sketches of an Elephant (Google Books).

Revision 2 (added reference):

The new set of families (for each object $X$) is called the sieve generated by the existing covers of $X$. One term for a Grothendieck pretopology is a basis for a Grothendieck topology, and different bases can give rise to the same Grothendieck topology. All of them, and the topology they generate, have the same sheaves. See here for example. Edit: Actually it is proposition C.2.1.9 in Johnstone's Sketches of an Elephant (Google Books).

Revision 1:

The new set of families (for each object $X$) is called the sieve generated by the existing covers of $X$. One term for a Grothendieck pretopology is a basis for a Grothendieck topology, and different bases can give rise to the same Grothendieck topology. All of them, and the topology they generate, have the same sheaves. See here for example.
http://quant.stackexchange.com/questions/27/why-does-implied-volatility-show-an-inverse-relation-with-strike-price-when-exam/1603
# Why does implied volatility show an inverse relation with strike price when examining option chains?

When looking at option chains, I often notice that the (broker-calculated) implied volatility has an inverse relation to the strike price. This seems true both for calls and puts. As a current example, I could point at the SPY calls for MAR31'11: the 117 strike has 19.62% implied volatility, which decreases quite steadily until the 139 strike that has just 11.96%. (SPY is now at 128.65.) My intuition would be that volatility is a property of the underlying, and should therefore be roughly the same regardless of strike price. Is this inverse relation expected behaviour? What forces would cause it, and what does it mean? Having no idea how my broker calculates implied volatility, could it be the result of them using alternative (wrong?) inputs for calculation parameters like interest rate or dividends?

- @Arjen Kruithof, you've got a 0% accept rate - people won't answer questions unless you at least do them the courtesy of accepting some of them, which ups their reputation ... – Gravitas Sep 1 '12 at 19:03

## 7 Answers

The skew is almost always bid for puts on the stock market. When stocks go down, people tend to panic and volatility goes up as a result. Since the puts get more vega when the market goes down, they trade at higher vols. Read up on stochastic volatility for a more in-depth explanation.

It can be shown, using a combination of a calendar spread and a butterfly, that one can lock in now the future variance conditional on the spot being around some specific level (local vol). So if you bought it and variance gets realized higher while the spot is at that level, you make money; if the spot is not there, you are neutral. Another way to look at the dependency between spot level and vol level is a regular delta-hedging strategy, whose P&L is path-dependent on where the spot sits relative to the strike when volatility gets realized. This dependency, combined with the market sentiment that volatility is higher when the spot goes down, leads to a higher volatility price for options with lower strikes.

That implied volatility you are observing was calculated using the standard Black-Scholes model (BSM). As we all know, no model is a perfect representation of reality. The variation (or skew) you observe is a consequence of the model being wrong. Let's think about the implications of the BSM not being exactly correct and everybody knowing that fact. Market prices cannot come solely from the model in this case. In particular, an important result is that (since the model is incorrect) even if you were to plug in the "right" value for every parameter, you would not get the market option prices. Any model, including the BSM, can be run "backwards", by which we mean here that it can start with an option price and derive an implied parameter. If the model has $M$ parameters $p_1, p_2, \dots, p_M$ that are normally used to find a price $V$, then we can also choose any one of the parameters, call it $p_n$, to derive from an observed price $W$ (normally by root-finding techniques). It so happens that for the BSM most of the parameters are reasonably easy to observe (strike, interest rate, etc.) while volatility is a rather more mysterious quantity, especially because the BSM needs future volatility rather than past volatility.
Therefore, the market practitioners tend to pick on that parameter and talk about implied volatility, even though in principle we could do everything in terms of, say, implied dividend yield. In any case, since the model is wrong, we don't expect to get the exact right option prices when we run the model forward, and therefore don't expect to get one "right" parameter when we run it backward. That's why you see variation in volatility by option strike. Now, as to the exact shape of that variation (decreasing implied volatility with strike), there are quite a few explanations and they are not mutually exclusive. For example, a somewhat more credible model than the plain old BSM is Black-Scholes With Jumps (BSJ), where the underlying price can take a sudden dive. You need extra parameters to describe the jumps of course, but the result is a model whose implied volatility skew is "flatter". Because those jumps are to the downside, they show up as higher prices (= higher BSM implied volatility) for the low-strike options. Other explanations involve transaction costs, discrete stock price processes, bankruptcy, stochastic volatility, market psychology, etc.

The Black-Scholes model is based on a set of assumptions about the distribution of asset returns which are incorrect for real markets. For mathematical simplicity, returns are assumed to be normally distributed, but in reality the distribution is asymmetrical (skew) and has fat tails (kurtosis). If you back out implied volatility from option prices across a range of strikes at the same expiry, you will observe that it is not a constant, but a function of strike that tilts downwards (skew) and curves upwards (smile). You can think of the value of an option in a number of ways, including:

• the expected value, e.g. the sum of the size of the payout multiplied by the probability of getting it
• the cost of hedging the option

Where implied volatility demonstrates skew, it is because the Black-Scholes model is an approximation, and to get the right price for the option you need to adjust its value.

"My intuition would be that volatility is a property of the underlying, and should therefore be roughly the same regardless of strike price". I agree, but the market doesn't. People who buy out-of-the-money calls tend to be more optimistic than those who buy at-the-money calls, so out-of-the-money calls are "overpriced" and thus have a higher implied volatility. Oddly, people who buy deep-in-the-money calls ALSO tend to be more optimistic, so these calls ALSO have a higher implied volatility. Why? You can buy a deep-in-the-money call for much less than the stock price. When the stock goes up 1 point, the deep-in-the-money call goes up almost 1 point too, so you get the same gain for less investment (i.e., leverage). You probably also noticed that implied volatility varies with expiration date too. Ultimately, the market determines how much an option is worth, and thus the volatility. Black-Scholes' belief that volatility was a fundamental characteristic of an instrument isn't really accurate.

It's well known that when market prices go up, implied volatility tends to go down, and vice versa. So implied volatility shows this inverse relationship with strike when you examine option chains.

Short answer: volatility skew. Longer answer: investors are willing to pay more for out-of-the-money puts (disaster hedge). This buying bids up the price of puts, which makes the volatility implied by those prices go up.
Calls and puts at the same strike must trade at roughly the same implied volatility, otherwise there is arbitrage; this is why you see the same phenomenon for lower-strike calls. (Investors are less willing to pay up when buying out-of-the-money calls (higher strikes), and so those options typically trade at lower bids and lower implied volatility.)
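To make the "run the model backwards by root-finding" idea from the Black-Scholes answer above concrete, here is a minimal sketch of my own (not from any of the answers). The option quote, rate and expiry are made-up example numbers; it backs an implied volatility out of a call price by bisection.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, time, vol):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * time) / (vol * sqrt(time))
    d2 = d1 - vol * sqrt(time)
    return spot * norm_cdf(d1) - strike * exp(-rate * time) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, time, lo=1e-6, hi=5.0, tol=1e-8):
    """Back out the volatility that reproduces an observed call price, by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, time, mid) < price:
            lo = mid      # model price too low -> volatility must be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical quote: spot 128.65, strike 117, 2 months to expiry, 1% rate, call priced at 12.50
print(implied_vol(12.50, 128.65, 117.0, 0.01, 2 / 12))
```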
http://mathhelpforum.com/calculus/107873-twisting-turning-infinite-sum.html
# Thread:

1. ## the twisting and turning infinite sum of...

   $\frac{2}{5} - \frac{3}{10} + \frac{4}{15} - \frac{5}{20} + \frac{6}{25} - \frac{7}{30} + ...$

   So I think the series looks like: $\sum_{n=1}^{\infty}{\frac{(-1)^{n-1}*(n+1)}{5n}}$

   It looks a little bit geometric. The problem is to show that it either converges or diverges. We have the powerful limit comparison test at our disposal, but I am not sure what to compare it to, or whether there is an alternate path to a solution; I have tried the integral, basic comparison and nth-term tests... any help kindly appreciated! Thank you!!

2. The limit of the general term doesn't exist, so your series diverges.

3. The limit of the general term, i.e. the limit as $n$ goes to infinity of ${\frac{(-1)^{n-1}*(n+1)}{5n}}$ — I don't see how it does not exist? It looks like an indeterminate form to me; if you apply L'H we still end up with a $(-1)$ to the infinity term in the numerator?? Should I take the log of the limit? I hope I am not missing something obvious!

4. ## Continued Confusion

   So I have re-written the sum as $\sum_{n=2}^{\infty}{\frac{(-1)^{n}*(n)}{5n-5}}$

   But I still can't see that the limit doesn't exist (I am aware that if it doesn't exist, then the series diverges). Does L'Hopital's rule work here (does the limit comparison test work for a lower sum limit of n=2?), because I don't see any other way of showing the limit is non-existent, and showing, therefore, that the sum diverges! Sorry if I am missing something obvious! Thank you!

5. Originally Posted by matt.qmar
   I don't see how it does not exist? It looks like an indeterminate form to me; if you apply L'H we still end up with a $(-1)$ to the infinity term in the numerator?? Should I take the log of the limit?

   Let $a_n=\frac{(-1)^{n-1}(n+1)}{5n}$ (just so I don't have to keep writing it). As $n\to\infty$, $\frac{n+1}{5n}\to\frac{1}{5}\neq0$, so the sum diverges. Conceptually, for large enough $n$, $\sum_{n>N} a_n\approx \frac{1}{5}-\frac{1}{5}+\frac{1}{5}...$ You see why that diverges?

6. Thank you so much! I am not sure how I missed the $(n+1)/5n$. Still not exactly sure how $(-1)^\infty$ was taken care of, but I suppose it doesn't matter (always either +/- 1). Thanks again!

7. Originally Posted by matt.qmar
   Still not exactly sure how $(-1)^\infty$ was taken care of, but I suppose it doesn't matter (always either +/- 1)

   It wasn't taken care of. The terms oscillate, and they do not approach 0; as Krizalid said, the limit does not exist.
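As a quick numerical illustration of the nth-term argument in this thread (a snippet added here, not part of the original posts): the terms approach ±1/5 rather than 0, so consecutive partial sums keep oscillating instead of settling.

```python
from fractions import Fraction

def term(n: int) -> Fraction:
    # a_n = (-1)^(n-1) * (n + 1) / (5n)
    return Fraction((-1) ** (n - 1) * (n + 1), 5 * n)

print([float(term(n)) for n in (10, 11, 1000, 1001)])  # magnitude tends to 1/5, sign keeps alternating

partial = Fraction(0)
sums = []
for n in range(1, 2001):
    partial += term(n)
    sums.append(partial)
print(float(sums[-2]), float(sums[-1]))  # consecutive partial sums still differ by about 1/5
```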