http://math.stackexchange.com/questions/85365/what-are-the-best-known-results-for-the-stable-homotopy-groups-of-spheres
# What are the best known results for the stable homotopy groups of spheres?
There are a number of proposed ways to compute the stable homotopy groups of spheres. One can rather peculiarly consider stable (co)homotopy of an Eilenberg–MacLane spectrum as a generalised (co)homology theory and use the Atiyah–Hirzebruch spectral sequence (in the same way one sometimes uses the Serre spectral sequence knowing information about the $E_{\infty}$ page). Another approach is to use the Adams spectral sequence. Here one takes a so-called Adams resolution of the sphere (it is more sensible to do this with spectra, as we then get a genuine free resolution of $\mathbb{Z}/p\mathbb{Z}$ over the Steenrod algebra). One gets a spectral sequence which converges to the $p$-part of the stable homotopy groups. A variant is to do this with some (nice enough, I guess) generalised cohomology theory, which leads to the Adams–Novikov spectral sequence. I have a few different questions:
1. What are the best results on this? I see here it says that the best known result as of 2007 was up to the 64th stem.
2. Which method gives the best known results?
3. In relation to the (classical) Adams spectral sequence, one has that the $E_{2}$ terms (mod 2) are given by $\mathrm{Ext}_{A}(\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/2\mathbb{Z})$. Now this is rather difficult to compute on the face of it, as one must find a workable free resolution of $\mathbb{Z}/2\mathbb{Z}$. There is in fact a certain differential graded algebra, called the lambda algebra, whose cohomology is precisely this. Does anyone know of a good source where the details of this are worked out?
4. Following the last question I wonder if anyone knows any good sources on differentials in the Adams spectral sequence?
[I guess an answer to the last 2 questions is probably just Ravenel's book, but if anyone knows some other fairly readable stuff then that would be more than welcome.]
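For reference (stating only the standard forms, which the question assumes), the two spectral sequences in play here are the classical mod $p$ Adams spectral sequence and the Adams–Novikov spectral sequence based on $BP$:

$$E_2^{s,t} = \mathrm{Ext}_{\mathcal{A}}^{s,t}(\mathbb{Z}/p\mathbb{Z}, \mathbb{Z}/p\mathbb{Z}) \Longrightarrow \pi_{t-s}^{S}(S^0)^{\wedge}_p, \qquad E_2^{s,t} = \mathrm{Ext}_{BP_*BP}^{s,t}(BP_*, BP_*) \Longrightarrow \pi_{t-s}^{S}(S^0)_{(p)},$$

where $\mathcal{A}$ is the mod $p$ Steenrod algebra; the lambda algebra referred to in question 3 is a differential graded algebra whose cohomology computes the left-hand $E_2$ term at $p=2$.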
-
wouldn't that be more appropriate for MathOverflow ? – Glougloubarbaki Nov 24 '11 at 21:37
I'm not a specialist, but I was under the impression that even nowadays nothing beats Ravenel's book. – PseudoNeo Nov 24 '11 at 22:16
## 3 Answers
1) This question is a little vague. You could be asking how far out we know these stable homotopy groups and what the best up-to-date reference is. This is not the way people are approaching this problem currently. I believe Christian Nassau has the most extensive Adams spectral sequence computations. Is that what you are interested in?
2) The Adams spectral sequence is not easy to improve upon at the prime 2. However, at odd primes it is a result of Haynes Miller that the Adams–Novikov SS is a strict improvement over the Adams SS. The $E_2$ term of the ANSS is very hard to compute; it is a huge obstruction to progress. In fact, the chromatic SS was developed in order to compute the $E_2$ term of the ANSS.
There are other methods, though. The current approach seems to be studying the $K(n)$- and $E_n$-local homotopy groups of spheres by means of a descent/Adams/homotopy fixed point SS. These are what people are working on now; the main obstruction is, again, the $E_2$ term. For these we need to know the cohomology of certain profinite groups with coefficients in some non-trivial modules.
3) I don't know about the lambda algebra, but you should be able to write down a minimal resolution in a small range.
4) There is an approach to a large number of differentials in the Adams SS due to Bob Bruner. You take advantage of the highly multiplicative structure on the Adams filtration. This allows you to propagate differentials in low dimensions to get new ones. It's pretty cool.
-
Thanks for the answer! With the first question I guess I was coming from the perspective of the ASS and wondering specifically what is the largest stable homotopy group that we know (for example, in Mahowald and Tangora's paper "Some differentials in the Adams spectral sequence" they work out the first 45 stems), but you obviously have a point that this approach is limited and not really getting to the heart of the problem. Do you know some good sources in relation to Bob Bruner's work? – CPM Nov 26 '11 at 23:38
All your answers are somewhere in the green book!
In fact, in Chapter 7 of the book, Ravenel uses the Adams–Novikov spectral sequence to calculate the first thousand (stable) stems for $p=5$.
The best method seems to be the Adams-Novikov spectral sequence with Brown-Peterson (co)homology.
The Lambda algebra can be found in Chapter 3 of the green book.
You could also try having a read of McCleary's 'A User's Guide to Spectral Sequences': the classical Adams spectral sequence is treated there (Chapter 9), along with a brief introduction to the Adams–Novikov spectral sequence (Chapter 11).
Edit: One further comment. You don't really need the lambda algebra to find a (minimal) free resolution of $\mathbb{Z}/2\mathbb{Z}$ over the Steenrod algebra. Have a look in Mosher and Tangora's book, or in Allen Hatcher's spectral sequences book.
-
Well, I know you can just write one down naively, and I think Adams did precisely that. It gets quite demanding, though, as you need deep knowledge of the Adem relations to calculate the necessary number of generators. As such, doing this for large values becomes, I believe, very difficult, and so it is more sensible computationally to consider the lambda algebra. – CPM Nov 26 '11 at 4:50
I think you are correct! It all depends on how far you wish to push the calculation. You might want to read about the May spectral sequence as well. – Juan S Nov 26 '11 at 12:38
If you are looking into the lambda algebra, the green book has some introduction to it. For some extensive examples you can check out two papers: W.H. Lin's $\mathrm{Ext}^{4,\ast}_A(\Bbb Z/2,\Bbb Z/2)$ and $\mathrm{Ext}^{5,\ast}_A(\Bbb Z/2,\Bbb Z/2)$, and T.W. Chen's Determination of $\mathrm{Ext}^{5,*}_A(\Bbb Z/2,\Bbb Z/2)$, where they work out the $E_2$ terms of the Adams spectral sequence of spheres for filtrations 4 and 5, completely with all relations. The results are rather complicated, but you can get the idea of how it works.
-
http://math.stackexchange.com/questions/21128/when-to-learn-category-theory/21134
# When to learn category theory?
I'm an undergraduate who wishes to learn category theory, but I only have basic knowledge of linear algebra and set theory; I've also had a short course on number theory which used some basic concepts about groups and modular arithmetic. Is it too early to start learning category theory? Should I wait until I take a course on abstract algebra?
Is it very important to use category-theoretic facts in a first course on group theory, ring theory, fields and Galois theory, or modules and tensor products (each of those is a one-semester course)? Would that make it a 'better' course?
I was unsure about learning category theory early, but the post Mathematical subjects you wish you learned earlier inspired me to ask, given my background.
-
Unless you are an abstract person, I guess it's better to see more concrete mathematical constructions before diving into category theory. It won't be late after a first course in abstract algebra, or even in the middle of algebraic topology, for otherwise you may not appreciate the generality. – Soarer Feb 9 '11 at 7:00
If possible could you please also change the question to "how to get started in category theory" rather than "when"? I highly disagree that there has to be a specific order in which a subject can be learned; I passed a PDE course before passing the DE course, where it was considered that one has to do DE before they can do PDE. So you don't need others' permission to study category theory. Ask for a list of recommended books/free web lectures etc. – Arjang Feb 9 '11 at 7:41
@Arjang Why should he change the question to something different from what he wants to ask? You can ask your own question if you want. You can also leave an answer to the effect "whenever you want" if you know what you are talking about. – Alex B. Feb 9 '11 at 7:50
@Arjang As I said, you are welcome to post your (not very informed, I dare say) opinion as an answer. – Alex B. Feb 10 '11 at 1:05
Start reading Lang's 'Algebra', and when you'll get to categories, you'll see for yourself if it's time for you or not ;) – Alexei Averchenko Nov 23 '11 at 9:25
## 8 Answers
Luckily, these days there is a beautiful text that teaches algebra and category theory at the same time: Aluffi's Algebra: Chapter 0. It deserves to be better known. Besides the fact that it uses (basic) categorical language from the outset, it is very well written. If I were ever to teach an algebra course, this would probably be the text I would use.
-
I find that book way too "wordy". It is indeed well-written, but reading it takes ages. It has tons of exercises though, which is the best thing about the book. – Fredrik Meyer Feb 10 '11 at 4:43
I just finished a three-semester sequence of algebra using Aluffi's book. I am very happy that I learned some category theory from this book. The only negative thing about the book is that it lacks detailed examples in some sections (I was lucky to have an excellent prof who gave lots of examples and assigned a lot of exercises). However, if you combine it with Dummit and Foote, you will enjoy learning algebra. – yaa09d Feb 25 '12 at 17:40
I very often find some knowledge of category theory useful to understand things conceptually.
A book that one could read before studying mathematics at the university is Lawvere's and Schanuel's Conceptual mathematics. This is an introduction to category theoretic ideas on a most elementary level.
Edit: Two days ago there appeared a very interesting-looking book by David I. Spivak on the arXiv, called Category Theory for Scientists.
-
I wholeheartedly agree with the recommendation of Lawvere and Schanuel's text as an elementary introduction to category theory. – Greg Graviton Feb 9 '11 at 15:19
and @Greg Graviton, thank you both for mentioning "Conceptual Mathematics"; getting into category theory finally! – Arjang Feb 10 '11 at 3:49
I tried reading Mac Lane's classic Categories for the working mathematician the summer after I finished first year. That didn't go very well, and I didn't learn much. I suspect the reason was that the examples were too inaccessible at that stage. On the other hand, I did understand Lawvere and Rosebrugh's Sets for Mathematics at that time, but again the lack of examples meant that I didn't appreciate the significance and elegance of categorification.
I would recommend not learning category theory until you've seen enough concrete examples to be able to motivate its study properly — at the very least one course in group theory, one in linear algebra, and one in general point-set topology. Generalised abstract nonsense is better appreciated when you realise, for example, that the Cartesian product of sets, the direct product of groups, the direct product of vector spaces, and the product topology all satisfy the same universal property: Given two objects $A$ and $B$, their product is an object $A \times B$ together with a pair of arrows (structure-preserving maps) $p_1 : A \times B \to A$, $p_2 : A \times B \to B$ which are universal, in the sense that for any pair of arrows $f: X \to A$, $g: X \to B$, there is a unique arrow $(f, g): X \to A \times B$ such that $p_1 \circ (f, g) = f$ and $p_2 \circ (f, g) = g$.
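To see the universal property in the most familiar case, here is a small sketch in Python (the names `p1`, `p2`, `pairing` are made up for illustration): in the category of sets the arrows are ordinary functions, and the mediating arrow $(f, g)$ is forced to be $x \mapsto (f(x), g(x))$.

```python
# Product in the category of sets: projections and the mediating arrow.
# All names here (p1, p2, pairing) are illustrative, not from any library.

def p1(pair):
    """First projection A x B -> A."""
    return pair[0]

def p2(pair):
    """Second projection A x B -> B."""
    return pair[1]

def pairing(f, g):
    """The unique arrow (f, g): X -> A x B with p1 . (f, g) = f and p2 . (f, g) = g."""
    return lambda x: (f(x), g(x))

f = lambda x: 2 * x        # f : X -> A
g = lambda x: str(x)       # g : X -> B
h = pairing(f, g)

# The defining equations of the universal property hold pointwise:
assert all(p1(h(x)) == f(x) and p2(h(x)) == g(x) for x in range(10))
```

Uniqueness is immediate here: any $h$ with $p_1 \circ h = f$ and $p_2 \circ h = g$ must send $x$ to exactly $(f(x), g(x))$.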
On the other hand, learning category theory can also lead to some insights: for example, it is a remarkable fact that in the category of vector spaces, the direct sum of finitely many vector spaces is the same as the direct product of them. This is a mysterious coincidence, since in the other categories you are familiar with, the constructions corresponding to direct sum and direct product are distinct and in general not the same. This, together with some other facts, should convince you that something special and very nice is going on in linear algebra.
-
Thanks for that. If that is all it takes to jump into category theory, why do an entire course on those subjects, instead of just learning the concepts you mentioned above as part of an introduction to category theory? – Arjang Feb 9 '11 at 7:45
@Arjang: Not everything can be phrased in categorical language, and some things are actually more cumbersome. For instance, the categorical definition of a subobject is a bit obscure and will seem mysterious unless you realise objects in a category need not be sets. Moreover not everything generalises. There is no analogue of the rank-nullity theorem in topology, for instance. And what would an analogue of Sylow's theorems look like in linear algebra? There are many good reasons to study individual categories in their own right. – Zhen Lin Feb 9 '11 at 8:19
I don't know if I agree that the coincidence of finite products and coproducts makes linear algebra special. After all, this is true for modules over an arbitrary ring, sheaves of abelian things, and so on: actually any abelian category has finite "biproducts." So this is quite a common thing. I would say linear algebra is special because vector spaces are free, and this of course can also be discussed categorically (free objects, the free functor is left adjoint to the forgetful functor, etc.). – Justin Campbell Feb 9 '11 at 17:33
Also, while there is no analogue of the rank-nullity theorem in topology, there is one in any "algebraic" category in the form of the first isomorphism theorem, and again this can be discussed (but not proved) in the categorical framework of kernels, cokernels, etc. But Sylow's theorems are a great example of results which are very special to a particular category, in this case the category of groups. – Justin Campbell Feb 9 '11 at 17:46
Thank you. Just a question: by "there is no analogue of the rank-nullity theorem in topology", do you mean it has been proven that there is none, or just that no analogue has been found up till now? I agree with studying the individual categories in their own right, but having knowledge of a higher structure that the framework sits in should be useful. Also, "what would an analogue of Sylow's theorems look like in linear algebra?" seems just like something worth studying category theory for. +1 from me. – Arjang Feb 9 '11 at 22:11
Echoing a bit of user1728 and Sean's comments, Category Theory is a wonderful unifying language that ties together a lot of ideas and makes certain things much easier, but it is pretty rough going to learn in the abstract.
I was lucky enough that the professor that taught Abstract Algebra at the National University in Mexico when I took it did a lot of the proofs as if they were category theory, but without actually saying "Category Theory". So he proved that the product of groups has the universal property of a product, and the uniqueness up to unique isomorphism, and so on, with diagrams; did the same thing with rings. Etc. By the end of the course, he was mentioning that all of these ideas were special cases of a general theory called "Category Theory". And so on.
By the time I got to an actual course in Category Theory, I had a whole library of mental examples to draw upon when looking at all the different concepts, amplified with some of the less algebraic-flavored examples (such as considering a partially ordered set as a category, etc) that the professor for that course gave. With that in hand, the first couple of chapters of Mac Lane's book became easier to digest and understand, and use elsewhere.
Of course, this may slant my view; I tend to view Category Theory more as a useful unifying language than as a particular subject (in which I am at least somewhat wrong, if not more). But I suspect you'll be able to get into, and get a lot more out of, Category Theory if you have the library of examples on hand.
Of course, as I said, I was lucky: I was primed for Category Theory with examples that were essentially Category Theory without saying so. You may not benefit from that. Still, I think that waiting until you study some abstract algebra and see some of these constructions in action might be a good idea.
-
I agree a lot with user1728's answer. IMO category theory is a beautiful subject, but one that does not make a whole lot of sense without examples. I think it would be pretty hard to learn category theory while also learning all of the standard examples. I consider having an abstract algebra and/or a topology course an absolute prerequisite to understanding category theory. For me, the best part about category theory is that it is a framework that makes life easier.
There are some subjects for which it is absolutely necessary to know category theory beforehand, but a passing familiarity with the definitions should suffice for a first algebra or Galois theory course.
-
The professor of the short algebraic topology course I took during my second year refused to give us the definition of a functor, saying that it is better to start using them and building up interesting examples before looking at the abstract definitions. I remember that at the time I thought this was quite stupid, but now I actually agree with him. I was happy to learn the basics of category theory the following year, having more examples in mind. Most of the concepts of category theory are extremely natural, but to realize that you need a good background. Otherwise you will most likely struggle against the abstractness with few chances to understand what's truly going on. Moreover, I don't think that you will profit much from knowing some category theory this early, before an advanced course in abstract algebra. I learned it while studying module theory, quite a long time after a first course in abstract algebra and a course in Galois theory, and I felt that that was perfect.
-
(I actually wanted to write my small anecdote as a comment but it looks like I don't have enough reputation to comment on other people's posts) – user1728 Feb 9 '11 at 10:51
What a silly idea. If you understand what a group homomorphism is, you can understand what functors are. – Qiaochu Yuan Feb 9 '11 at 21:07
Knowing something is never bad; knowing the wrong things is. He could at least have given a definition and told you it would all become clear with examples. Then, when looking at the examples, one would be trying to see how they relate to the definition. – Arjang Feb 9 '11 at 22:04
Well, of course you can read what categories and functors are (I'm pretty sure that the OP already did), but the only point that I can see for doing this is to recognise a functor when you see one (which imo is a good thing, precisely because you prepare yourself for when you'll really learn category theory). I'd say that there's no need at all for more advanced category theory before you actually need to use tools like equivalences of categories or limits. – user1728 Feb 10 '11 at 7:56
I think there are good reasons to learn category theory early on for the unifying perspective it gives and also good reasons to wait until you have a healthy amount of examples on hand. One thing I want to emphasize, since no one here has mentioned it thus far: elementary category theory is largely a matter of learning a new language, with many definitions and few results. In fact, I would go so far as to say there are no results in elementary category theory. The Yoneda lemma might count, depending on whether it is actually elementary and whether its proof is actually more than a tautology.
Anyway, I think this helps the case of those who would encourage you to learn the subject earlier, since reading or talking about category theory is pretty smooth sailing in the sense that there are no intricate proofs to follow. Note, however, that the definitions of basic categorical concepts may themselves be somewhat intricate, but hopefully after a while they will seem natural and not cumbersome.
-
I don't know if it should be considered "elementary category theory", but the fact that a functor is an equivalence of categories if and only if it is fully faithful and essentially surjective is a highly nontrivial result, in my opinion. – user1728 Feb 10 '11 at 8:07
@user1728: I think that is an elementary fact, and I disagree that the proof is "highly nontrivial." In my opinion it's a pretty routine proof, although it does require the axiom of choice. – Justin Campbell Feb 11 '11 at 19:20
But I agree at least that there are some things which are not self-evident and require checking for yourself. Another, more complicated example is the equivalence of the two or three common definitions of an adjunction of functors. So maybe the claim that there are no results at this level whatsoever was hyperbolic. – Justin Campbell Feb 11 '11 at 19:22
I think that the RAPL theorem also qualifies as a non-trivial result in elementary category theory. – Alexander Thumm Aug 17 '12 at 10:26
I've already answered a similar question on MathOverflow here.
Here are some thoughts of mine:
- category theory (seen as a language) should not be taught just in advanced courses, but should be developed in basic courses, in a very gradual way;
- some elementary concepts are so simple that even a first-year student can understand them, if they are presented in the right way: for instance, you can see a category just as a graph with operations, and functors as graph morphisms preserving the operations (this definition is no more complex or abstract than the group/group-homomorphism and vector-space/linear-map definitions); because these concepts are so simple, why not introduce them early?
- learning category theory helps in making connections between many different concepts, because it shows the deep unity in maths;
- category theory is first of all a language, and so it gives us a new way of reasoning; this new way of reasoning requires some time to be fully assimilated, and this assimilation could take years; for this reason I think it's best to start learning category theory soon;
- knowing category theory helps in learning new maths: for instance, I learned category theory out of my interest in logic and foundations, and then knowing those concepts helped me to understand constructions in algebraic topology and algebraic geometry faster than I would have done without it.
Other things can be found in the link above.
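The "graph with operations" description above can be made concrete with a minimal sketch in Python (the class `FiniteCategory` and all names are made up for illustration): a finite category is a set of objects (vertices), a set of named arrows with sources and targets (edges), and a composition table.

```python
# A small category presented as a graph plus a composition table.
# FiniteCategory is an illustrative name, not a library class.
class FiniteCategory:
    def __init__(self, objects, arrows, compose):
        self.objects = objects    # set of objects (graph vertices)
        self.arrows = arrows      # dict: name -> (source, target) (graph edges)
        self.compose = compose    # dict: (g, f) -> name of the composite g after f

# The poset 0 <= 1 viewed as a category: two identities and one arrow 0 -> 1.
C = FiniteCategory(
    objects={0, 1},
    arrows={"id0": (0, 0), "id1": (1, 1), "le": (0, 1)},
    compose={("id0", "id0"): "id0", ("id1", "id1"): "id1",
             ("le", "id0"): "le", ("id1", "le"): "le"},
)

# Identity laws hold: composing "le" with an identity gives "le" back.
assert C.compose[("le", "id0")] == "le"
assert C.compose[("id1", "le")] == "le"
```

A functor between two such structures would then be a pair of maps (on objects and on arrow names) that respects sources, targets, identities, and the composition tables — exactly a graph morphism preserving the operations.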
-
great, great answer! – Vicfred Nov 24 '11 at 4:21
http://math.stackexchange.com/questions/149850/is-there-a-formula-for-ncr-that-considers-a-min-max-range-restricted-compositi
# Is there a formula for nCr that considers a min/max range? (restricted composition estimation)
I'm bad at math and hope I explain this right (please don't be upset if I don't; I'm not trying to be lazy or a jerk, I really don't understand what information is required sometimes and focus on the wrong part I think is important).
Basically I have a program that looks for combinations. The best way to explain it is via an example: say you have 4 items you want to buy at a store: apple, peach, pear and orange. You want to know what percentage of each you can fit into a basket, but you tell yourself you want a minimum of 20% of each item and a maximum of 60% of each item (so apple: 25%, peach: 25%, pear: 25%, and orange: 25% works perfectly, but not apple: 0%, peach: 0%, pear: 50%, and orange: 50%, because we set a minimum of 20%).
So my program seems to work and using the above example, I found this code that helps me figure out exactly how many results to expect:
```python
import math

def nCr(n, r):
    f = math.factorial
    return f(n) // f(r) // f(n - r)

if __name__ == '__main__':
    # percent + buckets(items) - 1, choose percent
    print(nCr(20 + 4 - 1, 20))
```
This gives me the correct answer (1771), because it doesn't need to factor in the max (60%), which is never reached (it only uses 20% as the input). But is there a way I could modify this formula (or use something else) that would tell me how many results to expect if I have something like 40 items with a range of 2-5%? (Something that factors in the max value as well.)
Any help would be great. As I mentioned, I'm not good at math, so if possible can you explain it as simply as possible?
-
## 1 Answer
This is a form of restricted composition (similar to a partition, but with order mattering), which you can calculate using recurrences. You seem to be assuming exact percentages.
I have a Java applet which does something similar and you can see the code (though it was not designed to be easy to read for anyone else: this particular calculation uses the fifth of the Composition methods).
The applet assumes each part is at least 1, so you would have to recast your problem: the number of compositions of 100 into exactly 4 parts with each part at least 20 and no more than 60 is the same [by taking 19 off each of the 4 parts] as the number of compositions of 24 into exactly 4 positive parts with each part no more than 41. As you say, the answer is 1771.
Similarly, the number of compositions of 100 into exactly 40 parts with each part at least 2 and no more than 5 is the same as the number of compositions of 60 into exactly 40 positive parts with each part no more than 4. The applet gives an answer of 1725325033144622.
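Since a closed formula was asked for in the comments below: the count of compositions of $n$ into exactly $k$ parts, each between a minimum and maximum, follows from stars and bars plus inclusion–exclusion on the parts that overflow the maximum. A sketch in Python (the function name is made up; `math.comb` needs Python 3.8+), checked against the two counts above:

```python
from math import comb

def bounded_compositions(n, k, lo, hi):
    """Number of ordered ways to write n as a sum of k parts, each in [lo, hi]."""
    m = n - k * lo       # shift every part down so its range becomes [0, hi - lo]
    if m < 0:
        return 0
    c = hi - lo + 1      # a shifted part "overflows" once it reaches c
    total = 0
    for j in range(k + 1):           # inclusion-exclusion over overflowing parts
        rem = m - j * c
        if rem < 0:
            break
        total += (-1) ** j * comb(k, j) * comb(rem + k - 1, k - 1)
    return total

print(bounded_compositions(100, 4, 20, 60))   # 1771, matching the answer above
print(bounded_compositions(100, 40, 2, 5))    # matches the applet's count
```

The first call reproduces 1771 because the cap never binds there, so only the $j = 0$ stars-and-bars term $\binom{23}{3}$ survives.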
-
Thanks so much Henry, this looks awesome. I understand the positive parts and the "no more than" (the range), but what is a composition, and how are you deriving 24 and 60? 24 in my example was min range (20%) minus items (4), but in the second example min (2-40) is not 60. I'm confused. Also, are you running the code directly or via the site? It's been running 15 minutes for me to test your examples. – Lostsoul May 26 '12 at 0:20
Come to think of it, if you have the formula it might be easier, as I have a function that I can just plug the numbers into. – Lostsoul May 26 '12 at 1:11
@Lostsoul: Each time I reduced the minimum to $1$: I took $19$ off each of the $4$ parts and $100-19\times 4 = 24$. In the second case I took $1$ off each of the $40$ parts and $100-1\times 40 = 60$. – Henry May 26 '12 at 8:39
Thanks for that Henry. Did you run this on the site you sent me? I have been trying to run the applet but I never get any results. I've tried it with both the latest Firefox and Chrome (I am using a Mac). Is it working for you, and if so how long does it take (I stop it after 10 minutes)? – Lostsoul May 29 '12 at 3:31
Or better, do you know the formula for this? I think I can program it myself if I can see the logic. – Lostsoul May 29 '12 at 3:42
http://math.stackexchange.com/questions/86303/differentiating-the-function-x3-cdot-min-x-9
# Differentiating the function $x^3\cdot\min\{x,9\}$
Today I came across the function $x^3\cdot \min\{x,9\}$.
My teacher differentiated it and wrote it directly again as $3x^2\cdot\min\{x,9\}$
I was wondering how come the $\min\{x,9\}$ part is not affected by differentiation. Any ideas please? Or am I missing something pretty obvious?
-
What is "my sir"? You are correct to ask, because $3x^2\cdot\min\{x,9\}$ is wrong. I recommend considering the 2 cases $x<9$ and $x>9$, and checking that the function is not differentiable at $9$. – Jonas Meyer Nov 28 '11 at 8:42
I meant it was the intermediate step of a question that sir asked us to solve in the class, and I got stuck here; he without any doubt whatsoever directly differentiated it, which stunned me a bit. – Bhargav Nov 28 '11 at 8:44
And I apologize for my bad usage of English; I am not so fluent. – Bhargav Nov 28 '11 at 8:45
It sounds like your teacher made a mistake. Can you see how the formula simplifies in the two cases where $x<9$ and $x>9$? (No need to apologize, and thank you for clarifying. I hope that changing "sir" to "teacher" conveys the intended meaning well.) – Jonas Meyer Nov 28 '11 at 8:47
## 2 Answers
Another way of looking at the mistake made by your teacher is that he forgot how to differentiate the product of two functions. The derivative of the first factor $f(x)=x^3$ is, indeed, $f'(x)=3x^2$, but the derivative of the second factor $g(x)=\min\{x,9\}$ is $g'(x)=1,$ if $x<9$, and $g'(x)=0$, if $x>9$. As others have pointed out, $g(x)$ is not differentiable at the point $x=9$. The correct derivative is thus $$\frac d{dx}\,\big(f(x)g(x)\big)=f'(x)g(x)+f(x)g'(x),$$ wherever the two derivatives both exist.
It is, of course, an easy exercise to verify that this formula coincides with the answer David Mitra gave.
-
You have a piecewise-defined function here: if $x<9$, then $\min(x,9)=x$ and $f(x)=x^3\cdot x=x^4$. But if $x>9$, then $\min(x,9)=9$ and $f(x)=x^3\cdot 9=9x^3$.
So, your function is $$f(x)=\cases{ x^4, &x<9\cr 9x^3,&x\ge9}$$
You'll have to find the derivative separately for each piece:
For $x<9$, $$\tag{1}f'(x)=(x^4)'=4x^3.$$
For $x>9$, $$\tag{2}f'(x)=(9x^3)'=27x^2.$$
At $x=9$, in order to determine if $f'(9)$ is defined, you need to see if the above formulas "match up" at $x=9$. That is, you need to check that the "derivative at 9" given by (1) and (2) are the same. So, you need to compute the limit of the expression in (1) as $x$ approaches 9 from the left, and the limit of the expression in (2) as $x$ approaches 9 from the right: $$\lim_{x\rightarrow 9^-} 4x^3=4\cdot 9^3.$$
$$\lim_{x\rightarrow 9^+}27x^2=27\cdot9^2=3\cdot 9^3.$$ Since the two are different, $f'(9)$ is undefined.
See Jonas' astute observation below.
Looking at the graph of $f$ and the expressions (1) and (2), it is not too hard to see that $f'(9)$ exists if and only if the two limits above exist and are equal. The previous statement, however, is not true in general.
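As a quick sanity check (a Python sketch, not part of the original answer), the one-sided difference quotients of $f(x)=x^3\min(x,9)$ at $x=9$ can be compared numerically; the step size `h = 1e-6` is an arbitrary choice:

```python
def f(x):
    # f(x) = x^3 * min(x, 9)
    return x**3 * min(x, 9)

def one_sided_quotients(x0, h=1e-6):
    """Left- and right-hand difference quotients of f at x0."""
    left = (f(x0) - f(x0 - h)) / h
    right = (f(x0 + h) - f(x0)) / h
    return left, right

left, right = one_sided_quotients(9.0)
# left  ≈ 4 * 9^3 = 2916 (from the x^4 branch)
# right ≈ 3 * 9^3 = 2187 (from the 9x^3 branch)
print(left, right)
```

The two quotients approach different values, matching the conclusion that $f'(9)$ is undefined.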
Technically to show that $f'(9)$ is undefined it might be better to show that the left-hand and right-hand limits of the difference quotients don't match up. It is true that if the left-hand and right-hand limits of the derivative exist at a point and are unequal then the function can't be differentiable there, but that is a little more work to show. – Jonas Meyer Nov 28 '11 at 8:54
http://math.stackexchange.com/questions/281158/sigma-with-or-underneath/281162
Sigma with Or Underneath
This is the notation found in Spivak's Calculus: Chapter 23: Infinite Series Theorem 9
What does $\displaystyle \sum_{i \text{ or }j > L} |a_i||b_j|$ mean?
1 Answer
Whatever is underneath a summation sign tells you what you sum over. Formally, every such notation basically means this:
Let $A$ be a set of indices. A summation of some objects over the indexing set $A$ would be denoted (prefixed, really) by
$$\sum_{\alpha \in A}$$
In this sense, we can understand the notation $$\sum_{i=1}^n$$ to mean $$\sum_{i \in \{1,\ldots,n\}}.$$
Similarly in your case, $$\sum_{i~\text{or}~j > L}$$ indicates that your sum is over the indexing set $A = \{(i,j): i > L~\text{or}~ j > L\}$. When you are performing the summation, you should think about all possible ordered pairs $(i,j)$ and include in the sum only those which satisfy $i > L$ or $j > L$. (Or both, of course.)
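As an illustration (a Python sketch with a hypothetical finite index range — the series in Spivak is of course infinite), the condition under the sigma simply filters which pairs $(i,j)$ enter the sum:

```python
def sum_over_or_index_set(a, b, L):
    """Sum |a_i| * |b_j| over all pairs (i, j) with i > L or j > L."""
    total = 0
    for i, ai in a.items():
        for j, bj in b.items():
            if i > L or j > L:  # the condition written under the sigma
                total += abs(ai) * abs(bj)
    return total

# Tiny example: a_i = b_i = 1 for i = 1..4 and L = 2.
# Of the 16 ordered pairs, only the 4 with i <= 2 and j <= 2 are excluded.
a = {i: 1 for i in range(1, 5)}
b = {i: 1 for i in range(1, 5)}
s = sum_over_or_index_set(a, b, 2)
print(s)  # 12
```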
http://mathhelpforum.com/pre-calculus/132919-complete-square-write-quadratic-equation-f-x-3x-2-6x-5-standard-form.html
# Thread:
1. ## a.complete the square to write the quadratic equation f(x)-3x^2+6x+5 in standard form
b. Identify the vertex of the parabola
c. State the equation of the axis of symmetry.
d. Factor or use the quadratic formula to find the x-intercepts of the graph, if any.
2. ??? You want someone to do the problem for you?
What have you done on this problem yourself?
Have you completed the square in $f(x)= 3x^2+ 6x+ 5$?
I will give you a hint to start: write it as $f(x)= 3(x^2+ 2x )+ 5$ and complete the square in the parentheses.
3. I found the vertex of the parabola and stated the equation of the axis of symmetry and sketched the graph. Having trouble with completing the square and factoring or using the quadratic formula to find the x-intercepts. I know there are 2 x intercepts.
4. $f(x)= 3x^2+ 6x+ 5= 3(x^2+ 2x+ )+ 5$
What do you need to add (and subtract) to make the expression in the parentheses a perfect square?
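To check the completing-the-square arithmetic, here is a small Python sketch (not part of the thread) for the form $f(x)=3x^2+6x+5$ used in the replies above:

```python
def complete_the_square(a, b, c):
    """Rewrite a*x^2 + b*x + c as a*(x + h)^2 + k. Returns (a, h, k)."""
    h = b / (2 * a)
    k = c - a * h**2
    return a, h, k

a, h, k = complete_the_square(3, 6, 5)
# 3x^2 + 6x + 5 = 3(x + 1)^2 + 2, so the vertex is (-1, 2)
print(a, h, k)  # 3 1.0 2.0
```

Note that here $k = 2 > 0$ with $a > 0$, so this particular parabola never crosses the x-axis.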
http://www.reference.com/browse/quadratic+equation
# quadratic equation
Algebraic equation of particular importance in optimization. A more descriptive name is second-degree polynomial equation. Its standard form is $ax^2 + bx + c = 0$, and its solution is given by the quadratic formula, which guarantees two real-number solutions, one real-number solution, or two complex-number solutions, depending on whether the discriminant, $b^2 - 4ac$, is greater than, equal to, or less than 0.
Encyclopedia Britannica, 2008. Encyclopedia Britannica Online.
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is
$$ax^2+bx+c=0,$$
where a ≠ 0. (For if a = 0, the equation becomes a linear equation.)
The letters a, b, and c are called coefficients: the quadratic coefficient a is the coefficient of $x^2$, the linear coefficient b is the coefficient of $x$, and c is the constant coefficient, also called the free term or constant term.
Quadratic equations are called quadratic because quadratus is Latin for "square"; in the leading term the variable is squared.
## Quadratic formula
A quadratic equation with real or complex coefficients has two, but not necessarily distinct, solutions, called roots, which may or may not be real, given by the quadratic formula:
$$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a},$$
where the symbol "±" indicates that both
$$x_+ = \frac{-b + \sqrt{b^2-4ac}}{2a} \quad\text{and}\quad x_- = \frac{-b - \sqrt{b^2-4ac}}{2a}$$
are solutions.
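As an illustration (a Python sketch, not part of the original article), the formula can be implemented directly; using `cmath.sqrt` keeps it valid even when the discriminant is negative:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0 via the quadratic formula.

    cmath.sqrt handles a negative discriminant, returning complex roots.
    """
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 5x + 6 = 0 has roots 3 and 2
r1, r2 = quadratic_roots(1, -5, 6)
print(r1, r2)  # (3+0j) (2+0j)
```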
## Discriminant
In the above formula, the expression underneath the square root sign:
$$\Delta = b^2 - 4ac$$
is called the discriminant of the quadratic equation.
A quadratic equation with real coefficients can have either one or two distinct real roots, or two distinct complex roots. In this case the discriminant determines the number and nature of the roots. There are three cases:
• If the discriminant is positive, there are two distinct roots, both of which are real numbers. For quadratic equations with integer coefficients, if the discriminant is a perfect square, then the roots are rational numbers—in other cases they may be quadratic irrationals.
• If the discriminant is zero, there is exactly one distinct root, and that root is a real number. Sometimes called a double root, its value is:
$$x = -\frac{b}{2a}.$$
• If the discriminant is negative, there are no real roots. Rather, there are two distinct (non-real) complex roots, which are complex conjugates of each other:
$$x = \frac{-b}{2a} + i\,\frac{\sqrt{4ac - b^2}}{2a}, \qquad x = \frac{-b}{2a} - i\,\frac{\sqrt{4ac - b^2}}{2a}.$$
Thus the roots are distinct if and only if the discriminant is non-zero, and the roots are real if and only if the discriminant is non-negative.
## Geometry
The roots of the quadratic equation
$$ax^2+bx+c=0$$
are also the zeros of the quadratic function:
$$f(x) = ax^2+bx+c,$$
since they are the values of x for which
$$f(x) = 0.$$
If a, b, and c are real numbers and the domain of f is the set of real numbers, then the zeros of f are exactly the x-coordinates of the points where the graph touches the x-axis.
It follows from the above that, if the discriminant is positive, the graph touches the x-axis at two points, if zero, the graph touches at one point, and if negative, the graph does not touch the x-axis.
## Examples
• $7x + 15 - 2x^2 = 0$ has a strictly positive discriminant $\Delta = 169$ and therefore has two real solutions:
$$x_1=\frac{-7-\sqrt{169}}{2\cdot(-2)}=\frac{-7-13}{-4}=\frac{20}{4}= 5$$
and
$$x_2=\frac{-7+\sqrt{169}}{2\cdot(-2)} = \frac{-7+13}{-4}=\frac{6}{-4}=-\frac{3}{2}.$$
• $x^2 -2x + 1 = 0$ has a discriminant $\Delta$ whose value is zero, therefore it has the double solution $x_0=-\tfrac{-2}{2}=1$.
• $x^2 + 3x + 3 = 0$ has no real solution because $\Delta = -3 < 0$. But it has two complex solutions $x_1$ and $x_2$:
$$x_1 = \frac{-3 - \sqrt{3}\,i}{2}\quad\text{and}\quad x_2 = \frac{-3 + \sqrt{3}\,i}{2}.$$
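The three worked examples above can be checked with a small Python sketch (illustrative, not part of the original article) of the discriminant classification:

```python
def classify_roots(a, b, c):
    """Number and nature of the roots of a*x^2 + b*x + c = 0 (real coefficients)."""
    disc = b * b - 4 * a * c
    if disc > 0:
        return "two distinct real roots"
    if disc == 0:
        return "one real double root"
    return "two complex conjugate roots"

print(classify_roots(-2, 7, 15))  # first example: disc = 49 + 120 = 169 > 0
print(classify_roots(1, -2, 1))   # second example: disc = 0
print(classify_roots(1, 3, 3))    # third example: disc = -3 < 0
```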
## Quadratic factorization
The term
$x - r,$
is a factor of the polynomial
$ax^2+bx+c,$
if and only if r is a root of the quadratic equation
$ax^2+bx+c=0.$
It follows from the quadratic formula that
$$ax^2+bx+c = a\left(x - \frac{-b + \sqrt{b^2-4ac}}{2a}\right)\left(x - \frac{-b - \sqrt{b^2-4ac}}{2a}\right).$$
In the special case where the quadratic has only one distinct root (i.e. the discriminant is zero), the quadratic polynomial can be factored as
$$ax^2+bx+c = a\left(x + \frac{b}{2a}\right)^2.$$
## Application to higher-degree equations
Certain higher-degree equations can be brought into quadratic form and solved that way. For example, the 6th-degree equation in x:
$x^6 - 4x^3 + 8 = 0,$
can be rewritten as:
$$(x^3)^2 - 4(x^3) + 8 = 0,$$
or, equivalently, as a quadratic equation in a new variable u:
$$u^2 - 4u + 8 = 0,$$
where
$$u = x^3.$$
Solving the quadratic equation for u results in the two solutions:
$$u = 2 \pm 2i.$$
Thus
$$x^3 = 2 \pm 2i.$$
Concentrating on finding the three cube roots of $2 + 2i$ – the other three solutions for x will be their complex conjugates – rewriting the right-hand side using Euler's formula:
$$x^3 = 2^{3/2}e^{\frac{1}{4}\pi i} = 2^{3/2}e^{\frac{8k+1}{4}\pi i},$$
(since $e^{2k\pi i} = 1$), gives the three solutions:
$$x = 2^{1/2}e^{\frac{8k+1}{12}\pi i},\qquad k = 0, 1, 2.$$
Using Euler's formula again together with trigonometric identities such as $\cos(\pi/12) = \frac{\sqrt{6}+\sqrt{2}}{4}$, and adding the complex conjugates, gives the complete collection of solutions as:
$$x_{1,2} = -1 \pm i,$$
$$x_{3,4} = \frac{1 + \sqrt{3}}{2} \pm \frac{1 - \sqrt{3}}{2}i,$$
and
$$x_{5,6} = \frac{1 - \sqrt{3}}{2} \pm \frac{1 + \sqrt{3}}{2}i.$$
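The substitution argument can be verified numerically with a Python sketch (illustrative, not part of the original article) that solves the quadratic in $u$ and then takes the three cube roots of each solution:

```python
import cmath

def solve_sextic_via_quadratic():
    """Solve x^6 - 4x^3 + 8 = 0 by substituting u = x^3, as in the text."""
    # Quadratic in u: u^2 - 4u + 8 = 0  ->  u = 2 +/- 2i
    d = cmath.sqrt(4 * 4 - 4 * 8)
    us = [(4 + d) / 2, (4 - d) / 2]
    roots = []
    for u in us:
        r, theta = cmath.polar(u)  # u = r * e^{i*theta}
        for k in range(3):         # the three cube roots of each u
            roots.append(r ** (1 / 3) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3))
    return roots

roots = solve_sextic_via_quadratic()
residual = max(abs(x**6 - 4 * x**3 + 8) for x in roots)
print(len(roots), residual)  # six roots, residual near machine precision
```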
## History
The Babylonians, as early as 1800 BC (displayed on Old Babylonian clay tablets) could solve a pair of simultaneous equations of the form:
$$x+y=p, \qquad xy=q,$$
which are equivalent to the equation:
$$x^2+q=px.$$
The original pair of equations were solved as follows:
1. Form $\frac{x+y}{2}$
2. Form $\left(\frac{x+y}{2}\right)^2$
3. Form $\left(\frac{x+y}{2}\right)^2 - xy$
4. Form $\sqrt{\left(\frac{x+y}{2}\right)^2 - xy} = \frac{x-y}{2}$
5. Find $x, y$ by inspection of the values in (1) and (4).
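The recipe in steps 1–5 translates directly into code; below is a Python sketch (illustrative) with the sample pair $p=10$, $q=21$, whose solution is $x=7$, $y=3$:

```python
import math

def babylonian_solve(p, q):
    """Solve x + y = p, x*y = q by the Babylonian recipe, steps 1-5."""
    half_sum = p / 2                  # step 1: (x + y)/2
    square = half_sum ** 2            # step 2: ((x + y)/2)^2
    diff_sq = square - q              # step 3: ((x + y)/2)^2 - xy
    half_diff = math.sqrt(diff_sq)    # step 4: (x - y)/2
    return half_sum + half_diff, half_sum - half_diff  # step 5

x, y = babylonian_solve(10, 21)
print(x, y)  # 7.0 3.0
```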
In the Sulba Sutras in ancient India circa 8th century BCE quadratic equations of the form $ax^2 = c$ and $ax^2 + bx = c$ were explored using geometric methods. Babylonian mathematicians from circa 400 BCE and Chinese mathematicians from circa 200 BCE used the method of completing the square to solve quadratic equations with positive roots, but did not have a general formula. Euclid, the Greek mathematician, produced a more abstract geometrical method around 300 BCE.
In 628 CE, Brahmagupta, an Indian mathematician, gave the first explicit (although still not completely general) solution of the quadratic equation:
$$ax^2+bx=c$$
To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value. (Brahmasphutasiddhanta, Colebrooke translation, 1817, page 346)
This is equivalent to:
$$x = \frac{\sqrt{4ac+b^2}-b}{2a}.$$
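A quick Python check (illustrative, not part of the original text) of Brahmagupta's rule on the sample equation $x^2 + 2x = 8$, whose positive root is $2$:

```python
import math

def brahmagupta_root(a, b, c):
    """Brahmagupta's rule for a*x^2 + b*x = c: the positive root."""
    return (math.sqrt(4 * a * c + b * b) - b) / (2 * a)

# x^2 + 2x = 8  ->  x = (sqrt(4*8 + 4) - 2) / 2 = (6 - 2) / 2 = 2
x = brahmagupta_root(1, 2, 8)
print(x)  # 2.0
```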
The Bakhshali Manuscript, dated to have been written in India in the 7th century CE, contained an algebraic formula for solving quadratic equations, as well as quadratic indeterminate equations (originally of type ax/c = y). Muhammad ibn Musa al-Khwarizmi (Persia, 9th century) developed a set of formulae that worked for positive solutions. His work was based on Brahmagupta. Abraham bar Hiyya Ha-Nasi (also known by the Latin name Savasorda) introduced the complete solution to Europe in his book Liber embadorum in the 12th century. Bhāskara II (1114–1185), an Indian mathematician–astronomer, gave the first general solution to the quadratic equation with two roots.
The writing of the Chinese mathematician Yang Hui (1238-1298 AD) represents the first in which quadratic equations with negative coefficients of 'x' appear, although he attributes this to the earlier Liu Yi.
## Derivation
The quadratic formula can be derived by the method of completing the square, so as to make use of the algebraic identity:
$$x^2+2xh+h^2 = (x+h)^2.$$
Dividing the quadratic equation
$$ax^2+bx+c=0$$
by a (which is allowed because a is non-zero), gives:
$$x^2 + \frac{b}{a} x + \frac{c}{a}=0,$$
or
$$x^2 + \frac{b}{a} x = -\frac{c}{a} \qquad (1)$$
The quadratic equation is now in a form to which the method of completing the square can be applied. To "complete the square" is to find some constant k such that
$$x^2 + \frac{b}{a}x + k = x^2+2xh+h^2,$$
for another constant h. In order for these equations to be true,
$$\frac{b}{a} = 2h$$
or
$$h = \frac{b}{2a},$$
and
$$k = h^2,$$
thus
$$k = \frac{b^2}{4a^2}.$$
Adding this constant to equation (1) produces
$$x^2+\frac{b}{a}x+\frac{b^2}{4a^2}=-\frac{c}{a}+\frac{b^2}{4a^2}.$$
The left side is now a perfect square because
$$x^2+\frac{b}{a}x+\frac{b^2}{4a^2} = \left(x + \frac{b}{2a}\right)^2$$
The right side can be written as a single fraction, with common denominator 4a2. This gives
$$\left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2}.$$
Taking the square root of both sides yields
$$\left|x+\frac{b}{2a}\right| = \frac{\sqrt{b^2-4ac}}{2\left|a\right|} \quad\Longrightarrow\quad x+\frac{b}{2a}=\pm\frac{\sqrt{b^2-4ac}}{2a}.$$
Isolating x, gives
$$x=-\frac{b}{2a}\pm\frac{\sqrt{b^2-4ac}}{2a}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.$$
## Alternative formula
In some situations it is preferable to express the roots in an alternate form.
$$x = \frac{2c}{-b \mp \sqrt{b^2-4ac}}.$$
This alternative requires c to be nonzero; for, if c is zero, the formula correctly gives zero as one root, but fails to give any second, non-zero root. Instead, one of the two choices for ∓ produces a division by zero, which is undefined.
The roots are the same regardless of which expression we use; the alternate form is merely an algebraic variation of the common form:
$$\frac{-b + \sqrt{b^2-4ac}}{2a} = \frac{-b + \sqrt{b^2-4ac}}{2a} \cdot \frac{-b - \sqrt{b^2-4ac}}{-b - \sqrt{b^2-4ac}} = \frac{4ac}{2a\left(-b - \sqrt{b^2-4ac}\right)} = \frac{2c}{-b - \sqrt{b^2-4ac}}.$$
The alternative formula can reduce loss of precision in the numerical evaluation of the roots, which may be a problem if one of the roots is much smaller than the other in absolute magnitude. The problem of c possibly being zero can be avoided by using a mixed approach:
$$x_1 = \frac{-b - \operatorname{sgn}(b)\,\sqrt{b^2-4ac}}{2a},$$
$$x_2 = \frac{c}{ax_1}.$$
Here sgn denotes the sign function.
## Floating point implementation
A careful floating point computer implementation differs a little from both forms to produce a robust result. Assuming the discriminant, $b^2 - 4ac$, is positive and $b$ is nonzero, the code will be something like the following.
$$t := -\tfrac{1}{2}\big(b + \operatorname{sgn}(b)\sqrt{b^2-4ac}\big)$$
$$r_1 := t/a$$
$$r_2 := c/t$$
Here sgn(b) is the sign function, where sgn(b) is 1 if b is positive and −1 if b is negative; its use ensures that the quantities added are of the same sign, avoiding catastrophic cancellation. The computation of $r_2$ uses the fact that the product of the roots is c/a.
See Numerical Recipes in C, Section 5.6: "Quadratic and Cubic Equations".
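A Python sketch of that recipe (following the three assignments above; the ill-conditioned test case below is an illustrative choice):

```python
import math

def robust_quadratic_roots(a, b, c):
    """Numerically robust real roots via t = -(b + sgn(b)*sqrt(D))/2.

    Assumes the discriminant is positive and b is nonzero, as in the text.
    """
    sgn_b = 1.0 if b > 0 else -1.0
    t = -0.5 * (b + sgn_b * math.sqrt(b * b - 4 * a * c))
    return t / a, c / t  # r1 = t/a, r2 = c/t (product of roots is c/a)

# Ill-conditioned case: x^2 - 1e8*x + 1 = 0 has roots near 1e8 and 1e-8.
r1, r2 = robust_quadratic_roots(1.0, -1e8, 1.0)
print(r1, r2)  # the tiny root is recovered without catastrophic cancellation
```

The naive formula $(-b - \sqrt{b^2-4ac})/(2a)$ would lose almost all significant digits of the small root in this case.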
## Viète's formulas
Viète's formulas give a simple relation between the roots of a polynomial and its coefficients. In the case of the quadratic polynomial, they take the following form:
$$x_+ + x_- = -\frac{b}{a}$$
and
$$x_+ \cdot x_- = \frac{c}{a}.$$
The first formula above yields a convenient expression when graphing a quadratic function. Since the graph is symmetric with respect to a vertical line through the vertex, when there are two real roots the vertex’s x-coordinate is located at the average of the roots (or intercepts). Thus the x-coordinate of the vertex is given by the expression:
$$x_V = \frac{x_+ + x_-}{2} = -\frac{b}{2a}.$$
The y-coordinate can be obtained by substituting the above result into the given quadratic equation, giving
$$y_V = -\frac{b^2}{4a} + c = -\frac{b^2 - 4ac}{4a}.$$
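A small Python check (illustrative, not part of the original article) of both formulas, using $2x^2-10x+12$ with roots $3$ and $2$:

```python
def vieta_check(a, b, c, r1, r2, tol=1e-9):
    """Check Viete's formulas: r1 + r2 = -b/a and r1 * r2 = c/a."""
    return abs(r1 + r2 + b / a) < tol and abs(r1 * r2 - c / a) < tol

# 2x^2 - 10x + 12 = 2(x - 2)(x - 3): roots 3 and 2.
print(vieta_check(2, -10, 12, 3, 2))  # True

# The vertex x-coordinate -b/(2a) equals the average of the roots.
x_vertex = -(-10) / (2 * 2)
print(x_vertex)  # 2.5, which is (3 + 2) / 2
```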
## Generalizations
The formula and its derivation remain correct if the coefficients a, b and c are complex numbers, or more generally members of any field whose characteristic is not 2. (In a field of characteristic 2, the element 2a is zero and it is impossible to divide by it.)
The symbol
$$\pm\sqrt{b^2-4ac}$$
in the formula should be understood as "either of the two elements whose square is $b^2-4ac$", if such elements exist. In some fields, some elements have no square roots and some have two; only zero has just one square root, except in fields of characteristic 2. Note that even if a field does not contain a square root of some number, there is always a quadratic extension field which does, so the quadratic formula will always make sense as a formula in that extension field.
### Characteristic 2
In a field of characteristic 2, the quadratic formula, which relies on 2 being a unit, does not hold. Consider the monic quadratic polynomial
$$x^2 + bx + c$$
over a field of characteristic 2. If b = 0, then the solution reduces to extracting a square root, so the solution is
$$x = \sqrt{c}$$
and note that there is only one root since
$$-\sqrt{c} = -\sqrt{c} + 2\sqrt{c} = \sqrt{c}.$$
In summary,
$$x^2 + c = (x + \sqrt{c})^2.$$
In the case that b ≠ 0, there are two distinct roots, but if the polynomial is irreducible, they cannot be expressed in terms of square roots of numbers in the coefficient field. Instead, define the 2-root R(c) of c to be a root of the polynomial x2 + x + c, an element of the splitting field of that polynomial. One verifies that R(c) + 1 is also a root. In terms of the 2-root operation, the two roots of the (non-monic) quadratic ax2 + bx + c are
$$\frac{b}{a}R\left(\frac{ac}{b^2}\right)$$
and
$$\frac{b}{a}\left(R\left(\frac{ac}{b^2}\right)+1\right).$$
For example, let $a$ denote a multiplicative generator of the group of units of $F_4$, the Galois field of order four (thus $a$ and $a + 1$ are roots of $x^2 + x + 1$ over $F_4$). Because $(a + 1)^2 = a$, $a + 1$ is the unique solution of the quadratic equation $x^2 + a = 0$. On the other hand, the polynomial $x^2 + ax + 1$ is irreducible over $F_4$, but splits over $F_{16}$, where it has the two roots $ab$ and $ab + a$, where $b$ is a root of $x^2 + x + a$ in $F_{16}$.
This is a special case of Artin-Schreier theory.
http://physics.stackexchange.com/questions/23319/the-form-of-lagrangian-for-a-free-particle
# The form of Lagrangian for a free particle
I've just registered here, and I'm very glad to have finally found such a place for questions.
I have a small question about Classical Mechanics, the Lagrangian of a free particle. I just read the blog post Deriving the Lagrangian for a free particle. So, if I am correct, we have that the free particle moves with a constant velocity in an inertial frame, and also that
$$\vec{0}~=~\frac{d}{dt}\frac{\partial L}{\partial \vec{v}} ~=~\frac{d }{dt} \left(2\vec{v}~\ell^{\prime}\right)$$
Here $\ell^{\prime}$ means $\frac{\partial L}{\partial v^2}$. Hence $$\vec{c}~=~\left(2\vec{v}~\ell^{\prime}\right)$$
So, these two statements mean that $\ell^{\prime}$ is constant, so $$L~=~ \ell(v^2)~=~\alpha v^2+\beta.$$
Isn't this enough to derive the Lagrangian of a free particle? If it is (though I'm sure it isn't), why did Landau use the Galilean transformation formulas etc. to derive that formula?
Thanks a lot!
## 1 Answer
The answer is No, OP's argument (v1) is not enough to derive the Lagrangian for a non-relativistic free particle. It is true that the constant of motion mentioned by OP
$$\vec{c}~:=~\frac{\partial L}{\partial \vec{v}}~=~2\vec{v}~\ell^{\prime}$$
does not depend on time $t$. (It is in fact the canonical/conjugate momentum, which in general is different from the mechanical/kinetic momentum $m\vec{v}$.) However, $\vec{c}$ could still depend on e.g. the initial velocity of the particle (which is also the velocity $\vec{v}$, since we know that the velocity is constant for a free particle, cf. e.g. the first part of the linked answer).
This is perhaps best illustrated by taking a simple example, say
$$L~=~\ell(v^2)~=~ v^4.$$
Then
$$\vec{c}~=~2\vec{v}~\ell^{\prime} ~=~4\vec{v}~v^2,$$
which is a constant of motion, as it should be.
Right, other functions of $v^2$ can do it, too. After all, the relativistic Lagrangian proportional to $-mc^2\sqrt{1-v^2/c^2}$ is an example of that. For small $v$, it may be expanded as $mv^2$ plus corrections proportional to any other even power of $v$. – Luboš Motl Apr 6 '12 at 9:13
OK, now I really understood what was the problem! Thank you very much! – achatrch Apr 6 '12 at 16:28
http://physics.stackexchange.com/questions/27924/particles-for-all-forces-how-do-they-know-where-to-go-and-what-to-avoid/27927
# Particles for all forces: how do they know where to go, and what to avoid?
Here's an intuitive problem which I can't get around, can someone please explain it?
Consider a proton P and an electron E moving through the electromagnetic field (or other particles for other forces, same argument). They exert a force upon one another. In classical mechanics this is expressed as their contributing to the field and the field exerts a force back upon them in turn. In quantum mechanics the model is the exchange of a particle.
Let's say one such particle X is emitted from P and heads towards E. In the basic scenario, E absorbs it and changes its momentum accordingly. Fine.
How does X know where E is going to be by the time it arrives? What's to stop E dodging it, or having some other particle intercept X en route?
Are P and E emitting a constant stream of force-carrying particles towards every other non-force-carrying particle in the universe? Doesn't this imply a vast amount of radiation all over the place?
I am tempted to shrug off the entire particle exchange as a mere numerical convenience; a discretization of the Maxwell equations, perhaps. I am reluctant to say "virtual particle" because I suspect that term means something different from what I think it means.
Or is it a kind of observer effect: E "observes" X in the act of absorbing it, all non-intercepting paths have zero probability when the waveform collapses?
Or have I missed the point entirely?
The particle exchange model is a convenient story that helps us remember how to draw Feynman diagrams, which, in turn, help us remember terms in the perturbation series of interacting QFTs. But I think that it's unwise to take it too seriously--fundamentally, you still have particles interacting with local quantum fields. It's just that we can, in a certain limit, make weakly interacting quantized fields look like they're interacting via an infinite series of particle exchanges, with only the few lowest-order ones important. – Jerry Schirmer May 7 '12 at 12:19
You can alleviate much of the confusion by a)discarding the classical notion of a particle being a point like object and more importantly b)thinking in terms of fields interacting with each other, i.e an electron field/proton field coupling with other fields. +1 for a good question. – Antillar Maximus May 7 '12 at 12:27
@JerrySchirmer: It is not "just a story" as there are Feynman diagrams in string theory where you don't have quantum fields. Besides, the particle picture is mathematically equivalent to other formulations, so any paradox must have a resolution. – Ron Maimon May 7 '12 at 17:20
@RonMaimon: but you would have to admit that the Feynman diagrams are simply bookkeeping techniques for keeping track of terms in a perturbation theory in string theory. In any case, the physical thing is the sum of the perturbation theory, not the individual terms. – Jerry Schirmer May 7 '12 at 18:24
@JerrySchirmer: Sure, that's true, but you can measure intermediate photon states in principle, by measuring the quantum field, and see that an electron is producing virtual photons. The different histories interfere together to produce the usual perturbation expansion, but the resulting quantum mechanical story is correct, in the sense that the particle emission and absorption (in old-fasioned perturbation theory, where you don't have particles going back in time) is compatible with what you would see if you measure the instantaneous quantum field at two times. – Ron Maimon May 7 '12 at 18:44
## 4 Answers
This choice is closest to the correct one.
I am tempted to shrug off the entire particle exchange as a mere numerical convenience; a discretization of the Maxwell equations perhaps. I am reluctant to say "virtual particle" because I suspect that term means something different from what I think it means.
And virtual exchange is a correct description, because during the interaction the exchanged particle is not on mass shell.
Keep in mind that in the microcosm of particles, nature is quantum mechanical. The particle scattering on another particle, and the momentum, energy, and quantum-number exchanges between them, are all described by one wave function, one mathematical formula that gives the probability for the interaction to take place in the way it has been (or will be) observed. Thus it is not a matter of "knowing" but a matter of "being".
The Feynman diagrams that give rise to the "particle exchange" framework are just a mathematical algorithm for the calculations and help in understanding how to proceed with them.
To see how classical fields are built up by the substructure of quantum mechanics see the essay here.
"Thus it is not a matter for "knowing" but a matter of "being"". Well said :) – Manishearth♦ May 7 '12 at 12:37
In non-relativistic Classical Mechanics (CM) there is an interaction potential involving both coordinates, $U(\vec{r}_1-\vec{r}_2)$, and the corresponding force is present in either particle's equation. There is no need for an "exchange" interpretation here. The same holds for non-relativistic QM.
In the relativistic case the potential becomes "retarded". Its time evolution may be expanded in a Fourier series, and each plane wave can be called a "longitudinal virtual photon". You see, it is nearly the same interaction potential (force) as in non-relativistic CM, acting between charged particles.
Apart from retarded "longitudinal" potential, there is also "transversal" vector potential that may include real electromagnetic waves propagating in all directions, not only between charged particles in question. The real photons are not absorbed but scattered so they do not contribute into the charge "attraction". The latter is described with those "virtual photons".
As Jerry Schirmer points out, it is not really a `discretization of the Maxwell equations` as you say, but rather a series expansion of the quantum mechanical cross section for interaction. Thus you put in an electron and a proton with some momenta and you want to calculate the probability of them coming out with some other momenta, which you can express as something like $${}_\textrm{out}\langle p^+,q_1;e^-,q_2|p^+,p_1;e^-,p_2\rangle_\textrm{in}=\lim_{T\rightarrow\infty}\langle p^+,q_1;e^-,q_2|e^{-iH(2T)}|p^+,p_1;e^-,p_2\rangle.$$ You then make a series expansion of this quantity in the interaction Hamiltonian (or, more exactly, in the interaction strength $\alpha=e^2/\hbar c$). Feynman's contribution (one of them, anyway) was to give a graphical way of constructing each of the terms in the series (most of which involve pretty ugly integrals and will in fact diverge if not treated properly using renormalization) so that each term gets interpreted as a physical process where, say, the electron and proton exchange a virtual photon.
The truth is of course that these virtual photon exchanges are not physical: only the whole scattering process is physical and you cannot observe what happens in the middle.
You can observe what is in the middle by making a measurement using classical probes you electrically polarize at the right moment. You can actually do this for long enough scales. The Feynman expansion is physical, it is not a mathematical trick. – Ron Maimon May 7 '12 at 17:44
In the particle exchange picture, the particles are emitted in all directions and only the ones going from P in the direction of E that hit E are intercepted and have an effect. The other particles interfere themselves out of existence, as there is no on-shell state they can enter while conserving energy, or else return to P, giving the self-energy modification to P's mass. In fact, most return to P, since the self-energy is divergent, while only a small fraction make it to E by comparison.
This process is virtual, so it is defined by temporary intermediate states which can only stick around until their phase randomizes them away. For the case of a classical force, you need to use particles that go every which way, forward and backward in time.
Consider two classical objects interacting with a (free) quantum field according to this Lagrangian:
$$\int |\partial\phi|^2 + \phi(x) s(x)$$
where the source is two delta functions $s(x) = g\delta(x-x_0) + g\delta(x-x_1)$. Each of these classical sources is steadily spitting out and absorbing particles per unit time at a steady rate g, as you can see from the added source term in the Hamiltonian:
$$g\phi(x_0) = g\int {d^3k\over 2E_k} \left( e^{ikx_0} \alpha_k + e^{-ikx_0}\alpha^\dagger_k \right)$$
the g term is multiplying a creation operator and an annihilation operator, so the Hamiltonian has a steady amplitude g per unit time to emit any on-shell particle, and the same amplitude to absorb one. If you have no other source, the particles that are absorbed are those emitted by the source, and you just get an (infinite) self-energy renormalization of the mass.
This description is the on-shell old-fashioned perturbation theory, in which the intermediate states are k-states and the description is Hamiltonian in time. This is not covariant, but it shows you that particles are spat out and absorbed, and the two sources only interact to the extent that some of the particles spat out by one are absorbed by the other. The old-fashioned picture is useless for actual computations, but it reveals the particle processes most clearly, because it follows the annihilation and creation of physical particles in detail in time.
The result of the interaction when there are two sources is altered by those particles produced by one, absorbed by the other later. The covariant Schwinger/Feynman form of this introduces particles that meander around in space and time both. Those that do not get absorbed by the other make a field around the particle.
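This can be made quantitative. For the two static sources above, summing over the virtual quanta emitted by one source and absorbed by the other gives the standard textbook result (sketched here for a field of mass $m$, which is an assumption added for illustration):

```latex
E(r) \;=\; -\,g^2 \int \frac{d^3k}{(2\pi)^3}\,
       \frac{e^{i\vec{k}\cdot(\vec{x}_0-\vec{x}_1)}}{\vec{k}^2+m^2}
     \;=\; -\,\frac{g^2}{4\pi}\,\frac{e^{-mr}}{r},
\qquad r=|\vec{x}_0-\vec{x}_1|.
```

This is the attractive Yukawa potential; in the massless, photon-like case it reduces to the familiar $1/r$ Coulomb form, so the attraction between the sources is carried entirely by the exchanged virtual quanta.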
The fact that you are doing things by loop order means that you are not considering the process of a particle emitted by one source absorbed by itself, since this is a loop. The loop order separation of terms makes the scattering process look weird, since it looks like the emitted particle knew where to go to find the other particle. It didn't. If it came back to the first particle, we would include it as part of the next order of Feynman diagram as part of the self-energy graph.
http://unapologetic.wordpress.com/2012/09/06/the-radical-of-the-killing-form/?like=1&_wpnonce=1e4d043b68
# The Unapologetic Mathematician
## The Radical of the Killing Form
The first and most important structural result using the Killing form regards its “radical”. We never really defined this before, but it’s not hard: the radical of a binary form $B$ on a vector space $V$ is the subspace consisting of all $v\in V$ such that $B(v,w)=0$ for all $w\in V$. That is, if we regard $B$ as a linear map $v\mapsto B(v,\underline{\hphantom{X}})$, the radical is the kernel of this map. Thus we see that $B$ is nondegenerate if and only if its radical is zero; we’ve only ever dealt much with nondegenerate bilinear forms, so we’ve never really had to consider the radical.
Now, the radical of the Killing form $\kappa$ is more than just a subspace of $L$; the associative property tells us that it’s an ideal. Indeed, if $s$ is in the radical and $x,y\in L$ are any other two Lie algebra elements, then we find that
$\displaystyle\kappa([s,x],y)=\kappa(s,[x,y])=0$
thus $[s,x]$ is in the radical as well.
We recall that there was another “radical” we’ve mentioned: the radical of a Lie algebra is its maximal solvable ideal. This is not necessarily the same as the radical of the Killing form, but we can see that the radical of the form is contained in the radical of the algebra. By definition, if $x$ is in the radical of $\kappa$ and $y\in L$ is any other Lie algebra element we have
$\displaystyle\kappa(x,y)=\mathrm{Tr}(\mathrm{ad}(x)\mathrm{ad}(y))=0$
Cartan’s criterion then tells us that the radical of $\kappa$ is solvable, and is thus contained in $\mathrm{Rad}(L)$, the radical of the algebra. Immediately we conclude that if $L$ is semisimple — if $\mathrm{Rad}(L)=0$ — then the Killing form must be nondegenerate.
It turns out that the converse is also true. In fact, the radical of $\kappa$ contains all abelian ideals $I\subseteq L$. Indeed, if $x\in I$ and $y\in L$ then $\mathrm{ad}(x)\mathrm{ad}(y):L\to I$, and the square of this map sends $L$ into $[I,I]=0$. Thus $\mathrm{ad}(x)\mathrm{ad}(y)$ is nilpotent, and thus has trace zero, proving that $\kappa(x,y)=0$, and that $x$ is contained in the radical of $\kappa$. So if the Killing form is nondegenerate its radical is zero, and there can be no abelian ideals of $L$. But the derived series of $\mathrm{Rad}(L)$ eventually hits zero, and its last nonzero term is an abelian ideal of $L$. This can only work out if $\mathrm{Rad}(L)$ is already zero, and thus $L$ is semisimple.
So we have a nice condition for semisimplicity: calculate the Killing form and check that it’s nondegenerate.
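To make the criterion concrete (a small illustrative sketch, not part of the original post): for $\mathfrak{sl}_2$ with basis $e,f,h$ and brackets $[h,e]=2e$, $[h,f]=-2f$, $[e,f]=h$, the Killing form can be computed directly from the adjoint matrices and checked for nondegeneracy.

```python
# Killing form of sl(2): basis ordered (e, f, h) with
# [h,e] = 2e, [h,f] = -2f, [e,f] = h.
# ad(x) is the matrix of the map y -> [x,y] in this basis.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def trace(A):
    return sum(A[i][i] for i in range(3))

ad_e = [[0, 0, -2], [0, 0, 0], [0, 1, 0]]
ad_f = [[0, 0, 0], [0, 0, 2], [-1, 0, 0]]
ad_h = [[2, 0, 0], [0, -2, 0], [0, 0, 0]]
ads = [ad_e, ad_f, ad_h]

# kappa(x, y) = Tr(ad(x) ad(y))
kappa = [[trace(mat_mul(x, y)) for y in ads] for x in ads]

# 3x3 determinant; nonzero <=> kappa is nondegenerate <=> sl(2) is semisimple
det = (kappa[0][0] * (kappa[1][1]*kappa[2][2] - kappa[1][2]*kappa[2][1])
     - kappa[0][1] * (kappa[1][0]*kappa[2][2] - kappa[1][2]*kappa[2][0])
     + kappa[0][2] * (kappa[1][0]*kappa[2][1] - kappa[1][1]*kappa[2][0]))
```

The resulting matrix is $\begin{pmatrix}0&4&0\\4&0&0\\0&0&8\end{pmatrix}$ with determinant $-128\neq 0$, so the Killing form is nondegenerate and $\mathfrak{sl}_2$ is semisimple.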
Posted by John Armstrong | Algebra, Lie Algebras
http://mathhelpforum.com/trigonometry/79926-4-problems.html
# Thread:
1. ## 4 problems
Write an expression for tan x degrees and an expression for tan(90 - x) degrees, in terms of a, b, and c. How is the tangent of an angle related to the tangent of the angle's complement?
Write an expression for (sin x degrees) squared + (cos x degrees) squared in terms of a, b, and c. Then use the Pythagorean Theorem to simplify your expression.
A) Use triangle ACD to write expressions for p squared + q squared in terms of b and for p in terms of b and angle A.
B) Find a squared in terms of c, p and q. Give your answer in expanded form.
C) Use results from parts a and b to find an expression for a squared in terms of b, c and angle A
D) Use the results from part c to find the angle A round your final answer to the nearest tenth. (with this problem is triangle ABC with side lengths AB = 27 AC=32 BC= 23)
other problem:
when the sun is shining at a 62 degree angle of depression, a flagpole forms a shadow of length x feet. Later the sun shines at an angle of 40 degrees, and the shadow is 25 feet longer than before
a) Draw a picture of the scenario
B) write two expressions for the height of the flagpole in terms of x
C) how tall is the flagpole to the nearest tenth of a foot
Thanks, and please show all work.
2. Originally Posted by abc123
...
other problem:
when the sun is shining at a 62 degree angle of depression, a flagpole forms a shadow of length x feet. Later the sun shines at an angle of 40 degrees, and the shadow is 25 feet longer than before
a) Draw a picture of the scenario
B) write two expressions for the height of the flagpole in terms of x
C) how tall is the flagpole to the nearest tenth of a foot
Thanks, and please show all work.

How about showing what you have done so far?
to a) see attachment
to b) Use the tan-function:
$\tan(62^\circ)=\dfrac hx$ ..... $\tan(40^\circ)=\dfrac h{x+25}$
to c) Solve the first equation for x: $x=\dfrac h{\tan(62^\circ)}$
and plug in the term of x into the second equation:
$\tan(40^\circ)=\dfrac h{\dfrac h{\tan(62^\circ)}+25}$
Solve for h. (For your confirmation only: $h \approx 37.88$, which gives $x \approx 20.14$.)
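As a numerical check (an illustration, not part of the original answer), the two equations above can be solved directly in a few lines of Python:

```python
import math

tan62 = math.tan(math.radians(62))
tan40 = math.tan(math.radians(40))

# tan(62) = h/x  and  tan(40) = h/(x + 25)
# eliminating x:  h/tan(40) - h/tan(62) = 25
h = 25 / (1 / tan40 - 1 / tan62)   # flagpole height in feet
x = h / tan62                      # original shadow length in feet
```

This gives a flagpole height of roughly 37.9 feet and an original shadow of roughly 20.1 feet.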
Attached Thumbnails
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G03/g03aac.html
# NAG Library Function Document: nag_mv_prin_comp (g03aac)
## 1 Purpose
nag_mv_prin_comp (g03aac) performs a principal component analysis on a data matrix; both the principal component loadings and the principal component scores are returned.
## 2 Specification
#include <nag.h>
#include <nagg03.h>
void nag_mv_prin_comp (Nag_PrinCompMat pcmatrix, Nag_PrinCompScores scores, Integer n, Integer m, const double x[], Integer tdx, const Integer isx[], double s[], const double wt[], Integer nvar, double e[], Integer tde, double p[], Integer tdp, double v[], Integer tdv, NagError *fail)
## 3 Description
Let $X$ be an $n$ by $p$ data matrix of $n$ observations on $p$ variables ${x}_{1},{x}_{2},\dots ,{x}_{p}$ and let the $p$ by $p$ variance-covariance matrix of ${x}_{1},{x}_{2},\dots ,{x}_{p}$ be $S$. A vector ${a}_{1}$ of length $p$ is found such that:
$a_1^T S a_1$
is maximized subject to
$a_1^T a_1 = 1.$
The variable ${z}_{1}={\sum }_{i=1}^{p}{a}_{1i}{x}_{i}$ is known as the first principal component and gives the linear combination of the variables that gives the maximum variation. A second principal component, ${z}_{2}={\sum }_{i=1}^{p}{a}_{2i}{x}_{i}$, is found such that:
$a_2^T S a_2$
is maximized subject to
$a_2^T a_2 = 1$
and
$a_2^T a_1 = 0.$
This gives the linear combination of variables that is orthogonal to the first principal component that gives the maximum variation. Further principal components are derived in a similar way.
The vectors ${a}_{1},{a}_{2},\dots ,{a}_{p}$, are the eigenvectors of the matrix $S$ and associated with each eigenvector is the eigenvalue, ${\lambda }_{i}^{2}$. The value of ${\lambda }_{i}^{2}/\sum {\lambda }_{i}^{2}$ gives the proportion of variation explained by the $i$th principal component. Alternatively, the ${a}_{i}$'s can be considered as the right singular vectors in a singular value decomposition with singular values ${\lambda }_{i}$ of the data matrix centred about its mean and scaled by $1/\sqrt{\left(n-1\right)}$, ${X}_{s}$. This latter approach is used in nag_mv_prin_comp (g03aac), with
$X_s = V \Lambda P'$
where $\Lambda $ is a diagonal matrix with elements ${\lambda }_{i}$, ${P}^{\prime }$ is the $p$ by $p$ matrix with columns ${a}_{i}$ and $V$ is an $n$ by $p$ matrix with ${V}^{\prime }V=I$, which gives the principal component scores.
Principal component analysis is often used to reduce the dimension of a dataset, replacing a large number of correlated variables with a smaller number of orthogonal variables that still contain most of the information in the original dataset.
The choice of the number of dimensions required is usually based on the amount of variation accounted for by the leading principal components. If $k$ principal components are selected, then a test of the equality of the remaining $p-k$ eigenvalues is
$\left(n - \frac{2p+5}{6}\right)\left\{-\sum_{i=k+1}^{p}\log\lambda_i^2 + (p-k)\log\left(\sum_{i=k+1}^{p}\lambda_i^2/(p-k)\right)\right\}$
which has, asymptotically, a ${\chi }^{2}$ distribution with $\frac{1}{2}\left(p-k-1\right)\left(p-k+2\right)$ degrees of freedom.
Equality of the remaining eigenvalues indicates that if any more principal components are to be considered then they all should be considered.
Instead of the variance-covariance matrix the correlation matrix, the sums of squares and cross-products matrix or a standardized sums of squares and cross-products matrix may be used. In the last case $S$ is replaced by ${\sigma }^{-1/2}S{\sigma }^{-1/2}$ for a diagonal matrix $\sigma $ with positive elements. If the correlation matrix is used, the ${\chi }^{2}$ approximation for the statistic given above is not valid.
The principal component scores, $F$, are the values of the principal component variables for the observations. These can be standardized so that the variance of these scores for each principal component is 1.0 or equal to the corresponding eigenvalue.
Weights can be used with the analysis, in which case the matrix $X$ is first centred about the weighted means then each row is scaled by an amount $\sqrt{{w}_{i}}$, where ${w}_{i}$ is the weight for the $i$th observation.
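The SVD-based approach described above is easy to sketch outside the library. The following Python fragment (an illustration, not part of the NAG documentation; the data matrix is made up for the example) mirrors the unweighted variance-covariance case with unstandardized scores, $F = X_s P = V\Lambda$:

```python
import numpy as np

def prin_comp(X):
    """Principal components via SVD of the centred, scaled data matrix."""
    n = X.shape[0]
    # Centre about the column means and scale by 1/sqrt(n-1), so that
    # Xs = V Lambda P' and lambda_i^2 are eigenvalues of the covariance matrix.
    Xs = (X - X.mean(axis=0)) / np.sqrt(n - 1)
    V, lam, Pt = np.linalg.svd(Xs, full_matrices=False)
    eigvals = lam ** 2                # variances of the principal components
    prop = eigvals / eigvals.sum()    # proportion of variation explained
    scores = Xs @ Pt.T                # unstandardized scores F = Xs P = V Lambda
    return eigvals, prop, Pt.T, scores

# Ten illustrative observations on three variables.
X = np.array([[7., 4., 3.], [4., 1., 8.], [6., 3., 5.], [8., 6., 1.],
              [8., 5., 7.], [7., 2., 9.], [5., 3., 3.], [9., 5., 8.],
              [7., 4., 5.], [8., 2., 2.]])
eigvals, prop, P, scores = prin_comp(X)
```

The eigenvalues sum to the trace of the sample covariance matrix, and the loadings P are orthonormal, exactly as in the decomposition described above.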
## 4 References
Chatfield C and Collins A J (1980) Introduction to Multivariate Analysis Chapman and Hall
Cooley W C and Lohnes P R (1971) Multivariate Data Analysis Wiley
Hammarling S (1985) The singular value decomposition in multivariate statistics SIGNUM Newsl. 20(3) 2–25
Kendall M G and Stuart A (1979) The Advanced Theory of Statistics (3 Volumes) (4th Edition) Griffin
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill
## 5 Arguments
1: pcmatrix – Nag_PrinCompMatInput
On entry: indicates for which type of matrix the principal component analysis is to be carried out.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$
It is for the correlation matrix.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatStandardised}$
It is for the standardized matrix, with standardizations given by s.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatSumSq}$
It is for the sums of squares and cross-products matrix.
${\mathbf{pcmatrix}}=\mathrm{Nag_MatVarCovar}$
It is for the variance-covariance matrix.
Constraint: ${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$, $\mathrm{Nag_MatStandardised}$, $\mathrm{Nag_MatSumSq}$ or $\mathrm{Nag_MatVarCovar}$.
2: scores – Nag_PrinCompScoresInput
On entry: specifies the type of principal component scores to be used.
${\mathbf{scores}}=\mathrm{Nag_ScoresStand}$
The principal component scores are standardized so that ${F}^{\prime }F=I$, i.e., $F={X}_{s}P{\Lambda }^{-1}=V$.
${\mathbf{scores}}=\mathrm{Nag_ScoresNotStand}$
The principal component scores are unstandardized, i.e., $F={X}_{s}P=V\Lambda $.
${\mathbf{scores}}=\mathrm{Nag_ScoresUnitVar}$
The principal component scores are standardized so that they have unit variance.
${\mathbf{scores}}=\mathrm{Nag_ScoresEigenval}$
The principal component scores are standardized so that they have variance equal to the corresponding eigenvalue.
Constraint: ${\mathbf{scores}}=\mathrm{Nag_ScoresStand}$, $\mathrm{Nag_ScoresNotStand}$, $\mathrm{Nag_ScoresUnitVar}$ or $\mathrm{Nag_ScoresEigenval}$.
3: n – IntegerInput
On entry: the number of observations, $n$.
Constraint: ${\mathbf{n}}\ge 2$.
4: m – IntegerInput
On entry: the number of variables in the data matrix, $m$.
Constraint: ${\mathbf{m}}\ge 1$.
5: x[${\mathbf{n}}×{\mathbf{tdx}}$] – const doubleInput
On entry: ${\mathbf{x}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdx}}+\mathit{j}-1\right]$ must contain the $\mathit{i}$th observation for the $\mathit{j}$th variable, for $\mathit{i}=1,2,\dots ,n$ and $\mathit{j}=1,2,\dots ,m$.
6: tdx – IntegerInput
On entry: the stride separating matrix column elements in the array x.
Constraint: ${\mathbf{tdx}}\ge {\mathbf{m}}$.
7: isx[m] – const IntegerInput
On entry: ${\mathbf{isx}}\left[j-1\right]$ indicates whether or not the $j$th variable is to be included in the analysis. If ${\mathbf{isx}}\left[\mathit{j}-1\right]>0$, then the variable contained in the $\mathit{j}$th column of x is included in the principal component analysis, for $\mathit{j}=1,2,\dots ,m$.
Constraint: ${\mathbf{isx}}\left[j-1\right]>0$ for nvar values of $j$.
8: s[m] – doubleInput/Output
On entry: the standardizations to be used, if any.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatStandardised}$, then the first $m$ elements of s must contain the standardization coefficients, the diagonal elements of $\sigma $.
Constraint: if ${\mathbf{isx}}\left[\mathit{j}-1\right]>0$, ${\mathbf{s}}\left[\mathit{j}-1\right]>0.0$, for $\mathit{j}=1,2,\dots ,m$.
On exit: if ${\mathbf{pcmatrix}}=\mathrm{Nag_MatStandardised}$, then s is unchanged on exit.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$, then s contains the variances of the selected variables. ${\mathbf{s}}\left[j-1\right]$ contains the variance of the variable in the $j$th column of x if ${\mathbf{isx}}\left[j-1\right]>0$.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatSumSq}$ or $\mathrm{Nag_MatVarCovar}$, then s is not referenced.
9: wt[n] – const doubleInput
On entry: the elements of wt must contain the weights to be used in the principal component analysis. The effective number of observations is the sum of the weights.
Constraints:
• ${\mathbf{wt}}\left[\mathit{i}-1\right]\ge 0.0$, for $\mathit{i}=1,2,\dots ,n$;
• the sum of weights $\ge {\mathbf{nvar}}+1$;
• if ${\mathbf{wt}}\left[\mathit{i}-1\right]=0.0$, the $\mathit{i}$th observation is not included in the analysis.
Note: if wt is set to the null pointer NULL, i.e., (double *)0, then wt is not referenced and the effective number of observations is $n$.
10: nvar – IntegerInput
On entry: the number of variables in the principal component analysis, $p$.
Constraint: $1\le {\mathbf{nvar}}\le \mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{n}}-1,{\mathbf{m}}\right)$.
11: e[${\mathbf{nvar}}×{\mathbf{tde}}$] – doubleOutput
On exit: the statistics of the principal component analysis. ${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+0\right]$, the eigenvalues associated with the $\mathit{i}$th principal component, ${\lambda }_{\mathit{i}}^{2}$, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+1\right]$, the proportion of variation explained by the $\mathit{i}$th principal component, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+2\right]$, the cumulative proportion of variation explained by the first $\mathit{i}$ principal components, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+3\right]$, the ${\chi }^{2}$ statistics, for $\mathit{i}=1,2,\dots ,p$.
${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+4\right]$, the degrees of freedom for the ${\chi }^{2}$ statistics, for $\mathit{i}=1,2,\dots ,p$.
If ${\mathbf{pcmatrix}}\ne \mathrm{Nag_MatCorrelation}$, then ${\mathbf{e}}\left[\left(\mathit{i}-1\right)×{\mathbf{tde}}+5\right]$ contains the significance level for the ${\chi }^{2}$ statistic, for $\mathit{i}=1,2,\dots ,p$.
If ${\mathbf{pcmatrix}}=\mathrm{Nag_MatCorrelation}$, then ${\mathbf{e}}\left[\left(i-1\right)×{\mathbf{tde}}+5\right]$ is returned as zero.
12: tde – IntegerInput
On entry: the stride separating matrix column elements in the array e.
Constraint: ${\mathbf{tde}}\ge 6$.
13: p[${\mathbf{nvar}}×{\mathbf{tdp}}$] – doubleOutput
Note: the $\left(i,j\right)$th element of the matrix $P$ is stored in ${\mathbf{p}}\left[\left(i-1\right)×{\mathbf{tdp}}+j-1\right]$.
On exit: the first nvar columns of p contain the principal component loadings, ${a}_{i}$. The $j$th column of p contains the nvar coefficients for the $j$th principal component.
14: tdp – IntegerInput
On entry: the stride separating matrix column elements in the array p.
Constraint: ${\mathbf{tdp}}\ge {\mathbf{nvar}}$.
15: v[${\mathbf{n}}×{\mathbf{tdv}}$] – doubleOutput
Note: the $\left(i,j\right)$th element of the matrix $V$ is stored in ${\mathbf{v}}\left[\left(i-1\right)×{\mathbf{tdv}}+j-1\right]$.
On exit: the first nvar columns of v contain the principal component scores. The $j$th column of v contains the n scores for the $j$th principal component.
If weights are supplied in the array wt, then any rows for which ${\mathbf{wt}}\left[i-1\right]$ is zero will be set to zero.
16: tdv – IntegerInput
On entry: the stride separating matrix column elements in the array v.
Constraint: ${\mathbf{tdv}}\ge {\mathbf{nvar}}$.
17: fail – NagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_2_INT_ARG_GE
On entry, ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{nvar}}<{\mathbf{n}}$.
NE_2_INT_ARG_GT
On entry, ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$ while ${\mathbf{m}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{nvar}}\le {\mathbf{m}}$.
NE_2_INT_ARG_LT
On entry, ${\mathbf{tdp}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdp}}\ge {\mathbf{nvar}}$.
On entry, ${\mathbf{tdv}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdv}}\ge {\mathbf{nvar}}$.
On entry, ${\mathbf{tdx}}=〈\mathit{\text{value}}〉$ while ${\mathbf{m}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdx}}\ge {\mathbf{m}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument pcmatrix had an illegal value.
On entry, argument scores had an illegal value.
NE_INT_ARG_LT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{m}}\ge 1$.
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 2$.
On entry, ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{nvar}}\ge 1$.
On entry, ${\mathbf{tde}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{tde}}\ge 6$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_NEG_WEIGHT_ELEMENT
On entry, ${\mathbf{wt}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$.
Constraint: when referenced, all elements of wt must be non-negative.
NE_OBSERV_LT_VAR
With weighted data, the effective number of observations given by the sum of weights $\text{}=〈\mathit{\text{value}}〉$, while the number of variables included in the analysis, ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$.
Constraint: effective number of observations $>{\mathbf{nvar}}+1$.
NE_SVD_NOT_CONV
The singular value decomposition has failed to converge. This is an unlikely error exit.
NE_VAR_INCL_INDICATED
The number of variables, nvar in the analysis $\text{}=〈\mathit{\text{value}}〉$, while the number of variables included in the analysis via array ${\mathbf{isx}}=〈\mathit{\text{value}}〉$.
Constraint: these two numbers must be the same.
NE_VAR_INCL_STANDARD
On entry, the standardization element ${\mathbf{s}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$, while the variable to be included ${\mathbf{isx}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$.
Constraint: when a variable is to be included, the standardization element must be positive.
NE_ZERO_EIGVALS
All eigenvalues/singular values are zero. This will be caused by all the variables being constant.
## 7 Accuracy
As nag_mv_prin_comp (g03aac) uses a singular value decomposition of the data matrix, it will be less affected by ill-conditioned problems than traditional methods using the eigenvalue decomposition of the variance-covariance matrix.
## 8 Further Comments
None.
## 9 Example
A dataset is taken from Cooley and Lohnes (1971), it consists of ten observations on three variables. The unweighted principal components based on the variance-covariance matrix are computed and unstandardized principal component scores requested.
### 9.1 Program Text
Program Text (g03aace.c)
### 9.2 Program Data
Program Data (g03aace.d)
### 9.3 Program Results
Program Results (g03aace.r)
http://mathoverflow.net/questions/50971/how-to-make-ext-and-tor-constructive/51000
How to make Ext and Tor constructive?
EDIT: This post was substantially modified with the help of the comments and answers. Thank you!
Judging by their definitions, the $\mathrm{Ext}$ and $\mathrm{Tor}$ functors are among the most non-constructive things considered in algebra:
(1) Their very definition requires taking an infinite projective or injective resolution; constructing a homotopy equivalence between two such resolutions requires infinitely many choices.
(2) Injective resolutions are rather problematic in a constructive world (e.g. the proof of "injective = divisible" requires Zorn, and as far as I understand the construction of an injective resolution relies on this fact).
(3) Projective/injective resolutions are not really canonical, so $\mathrm{Ext}$ and $\mathrm{Tor}$ are not functors from "pairs of modules" to "groups", but rather functors from "pairs of modules" to some category between "groups" and "isomorphism classes of groups". This is a problem already from the classical viewpoint.
(4) Projective resolutions are not guaranteed to exist in a constructive world, because the free module on a set need not be projective! In order to avert this kind of trouble, we could try restricting ourselves to very well-behaved modules (such as finite-dimensional over a field), but even then we are in for a bad surprise: Sometimes, the "best" projective resolution for a finitely-generated module uses non-finitely-generated projective modules (I will show such an example further below). These can be tricky to deal with, constructively. Mike Shulman has mentioned (in the comments) that injective and projective resolutions (and already the proofs that the different definitions of "projective" are equivalent, and that the different definitions of "injective" are equivalent) require choice - maybe the currently accepted notions of injectivity and projectivity are not "the right one" except for finitely-generated modules? (Cf. also this here.)
On the other hand, if we think about the ideas behind $\mathrm{Ext}$ and $\mathrm{Tor}$ and projective resolutions (I honestly don't know the ideas behind injective resolutions, besides to dualize the notion of projective resolutions), they are (at least partially) inspired by some of the most down-to-earth constructive mathematics, namely syzygy theory. So a natural question to pose is: How can we implement the theory of $\mathrm{Ext}$ and $\mathrm{Tor}$, or at least a part of this theory which still has the same applications as the whole theory, without having to extend our logical framework beyond constructivism?
It is not hard to address the issues (1), (2), (3) above one at a time, at least when it comes to the basic properties of $\mathrm{Ext}$ and $\mathrm{Tor}$:
For (1), the workaround is easy: If you want $\mathrm{Ext}^n\left(M,N\right)$ for two modules $M$ and $N$ and some $n\in\mathbb N$, you don't need a whole infinite projective resolution $...\to P^2\to P^1\to P^0\to M$. It is enough to have an exact sequence $P^{n+1}\to P^n\to P^{n-1}\to ...\to P^1\to P^0\to M$, where $P^0$, $P^1$, ..., $P^n$ are projective. (It is not necessary for $P^{n+1}$ to be projective. Generally, $P^{n+1}$ is somewhat of a red herring as far as $\mathrm{Ext}^n\left(M,N\right)$ is concerned.)
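To illustrate the point with a toy example of my own, in the simplest case $R=\mathbb{Z}$: the length-one truncated free resolution $0\to\mathbb{Z}\xrightarrow{\cdot a}\mathbb{Z}\to\mathbb{Z}/a\to 0$ already computes all of $\mathrm{Tor}$, with no infinite resolution in sight.

```python
from math import gcd

# Tor_1^Z(Z/a, Z/b) from the truncated free resolution
#   0 -> Z --(times a)--> Z -> Z/a -> 0.
# Tensoring with Z/b turns the first map into (times a): Z/b -> Z/b;
# Tor_1 is its kernel (and Tor_0, the cokernel, is Z/gcd(a,b)).
def tor1_order(a, b):
    kernel = [x for x in range(b) if (a * x) % b == 0]
    return len(kernel)

print(tor1_order(4, 6))               # 2
print(tor1_order(4, 6) == gcd(4, 6))  # True: Tor_1(Z/a, Z/b) = Z/gcd(a,b)
```

Of course this only works because such $\mathbb{Z}$-modules have obvious finite free resolutions; the whole point of the question is that this is not automatic constructively.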
For (2), the only solution I know is not to use injective resolutions. Usually, things that can be formulated with projective resolutions only can also be proven with projective resolutions only. But this is not a solution I like, since it breaks symmetry.
(3) I think this is what anafunctors are for, but I have not brought myself to read the ncatlab article yet. I know, laziness is an issue... At the moment, I am solving this issue in a low-level way: Never speak of $\mathrm{Ext}\left(M,N\right)$, but rather speak of $\mathrm{Ext}\left(M_P,N\right)$ or $\mathrm{Ext}\left(M,N_Q\right)$, where $P$ and $Q$ are respective projective/injective resolutions. This seems to be the honest way to work with $\mathrm{Ext}$'s anyway, because once you start proving things, these resolutions suddenly do matter, and you find yourself confused (well... I find myself confused) if you suppress them in the notation. [Note: If you want to read Makkai's lectures on anafunctors and you are tired of the Ghostview error messages, remove the "EPSF-1.2" part of the first line of each of the PS files.]
Now, (4) is my main problem. I could live without injective resolutions, without the fake canonicity of $\mathrm{Ext}$ and $\mathrm{Tor}$, and without infinite projective resolutions, but if I am to do homological algebra, I can hardly dispense with finite-length projective resolutions! Unfortunately, as I said, in constructive mathematics there is no guarantee that a module has a projective resolution at all. The standard way to construct a projective resolution for an $R$-module $M$ begins by taking the free module $R\left[M\right]$ on $M$-as-a-set - or, let me rather say, $M$-as-a-type. Is this free module projective, constructively? This depends on what we know about $M$-as-a-type. Alas, modules considered in algebra often have neither an obvious set of generators nor an a-priori algorithm for membership testing; they can be as complicated as "the module of all $A$-equivariant maps from $V$ to $W$" with $A$, $V$, $W$ being infinite-dimensional. Some are even proper classes, even in the classical sense. The free module over a discrete finite set is projective constructively, but the free module over an arbitrary type is projective only if we allow a weaker form of AC through the backdoor. Anyway, even if there is a projective resolution, it cannot really be used for explicit computations if the modules involved are not finitely generated. Now, here is an example of where non-finitely generated modules make an appearance:
Theorem. If $R$ is a ring, then the global homological dimension of the polynomial ring $R\left[x\right]$ is $\leq$ the global homological dimension of $R$ plus $1$.
I am referring to the proof given in Crawley-Boevey's lecture notes. (Look at page 31, paragraph (2).) For the proof, we let $M$ be an $R\left[x\right]$-module, and we take the projective resolution
$0 \to R\left[x\right] \otimes_R M \to R\left[x\right] \otimes_R M \to M \to 0$,
where the second arrow sends $p\otimes m$ to $px\otimes m-p\otimes xm$, and the third arrow sends $q\otimes n$ to $qn$. (This is a particular case of the standard resolution of a representation of a quiver, which appears on page 7 of a different set of lecture notes by the same Crawley-Boevey.)
Now, the problem is that even if $M$ is finitely generated as an $R\left[x\right]$-module, $R\left[x\right] \otimes_R M$ need not be.
Is there a known way around this?
Generally, what is known about constructive $\mathrm{Ext}$/$\mathrm{Tor}$ theory? Are there texts on it, just as Lombardi's text on various other parts of algebra (strangely, that text talks a lot about projective modules, but gives $\mathrm{Ext}$ and $\mathrm{Tor}$ a wide berth)? Do the problems dissolve if I really use anafunctors? Do derived categories help? Should we replace our notions of "injective" and "projective" by better ones? Or is there some deeper reason for the non-constructivity, i.e., is $\mathrm{Ext}$/$\mathrm{Tor}$ theory too strong?
Endnote for everyone who does not care about constructive mathematics: Even setting constructivism aside, I believe that there remain quite a lot of real issues with homological algebra. First there is the matter of canonicity, then there is the problem of too-big constructions (proper classes etc.), the frightening unapproachability of injective covers, the idea that one day we might want to work in a topos where even ZF is too much to ask, etc. I am changing the topic to a discussion of how to fix these issues (well, it already is such a discussion), constructivism being just one of the many directions to work in.
There is a constructive (even functorial) resolution using a 2-sided bar construction that works for any ring R with module M. (It uses the forgetful-free adjoint pair between R-modules and sets.) If you define it using this horribly impractical definition, there is no indeterminacy and the "standard" definitions become theorems about ways to compute Tor. – Tyler Lawson Jan 3 2011 at 0:55
@Mike: It's the chain complex associated to a two-sided bar construction `$B(F_R, F_R, M)$` where `$F_R$` is the free R-module functor; as such its homology groups are the homotopy groups of the underlying simplicial abelian group, which is homotopy discrete because it has an extra degeneracy. So the resulting complex is always acyclic in positive degrees due to having a canonical retraction. – Tyler Lawson Jan 3 2011 at 1:14
But there seems to be a lingering issue that frees might not be projective in the absence of choice. – Todd Trimble Jan 3 2011 at 2:05
@Theo B: why do those huge free modules require choice? I thought there was a canonical basis for them (like the underlying set of $M\times N$). That gives you the tensor product of abelian groups; then the tensor product of R-modules is just a coequalizer in Ab. – Mike Shulman Jan 3 2011 at 6:16
For what it's worth, if you're looking for an application of injective resolutions, they're important in sheaf theory. Mostly one considers injective resolutions in the category of sheaf complexes, but occasionally resolutions of groups and R-modules (which can be considered as sheaves over a point) come into play. That said, I don't have anything useful to contribute to the ongoing discussion! – Greg Friedman Jan 3 2011 at 7:54
4 Answers
I have some sympathy for the question, since I was bothered by the noncanonicity -- although less so by the nonconstructivity -- of the usual definition in my youth. (These days I'm less bothered by such things.) Here are a few ways around it for $Ext$.
1. There is a definition going back to Yoneda, I think, of $Ext^n(M,N)$ as an equivalence class of exact sequences $0\to N\to \ldots \to M\to 0$ of length $n+2$ (including $M,N$). Take a look at Hilton and Stammbach's Homological Algebra. This is quite cumbersome, but it removes any dependence on (indeed, any need for the existence of) suitable resolutions.
2. This was mentioned already by Mike Shulman. You can take $Ext^n(M,N)= Hom_{D(R)}(M,N[n])$ in the derived category $D(R)$ of $R$-modules. Once you get used to the formalism, this is much more convenient than 1 (at least for me).
3. You can choose a canonical resolution initially for the definition of $Ext^n(M,N)$. Here is one general choice. Let $F^0(M)=F(M)$ be the free module generated by elements of $M$. We have a canonical map $c_M:F(M)\to M$; let $F^{-1}(M)= F(\ker c_M)$. By continuing in this fashion, we build a canonical free resolution $\ldots F^{-1}(M)\to F^0(M)\to M\to 0$. Dually, you can use the injective hull* to build an injective resolution of $N$. As several people have pointed out, some important categories have enough injectives but not enough projectives, so it's good to get used to them.
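To see how quickly the canonical resolution in 3 grows, here is a toy computation (my own illustration) over $R=\mathbb{Z}/4$ with $M=\mathbb{Z}/2$: already $F^0(M)$ has $16$ elements and $\ker c_M$ has $8$, so $F^{-1}(M)$ is free on $8$ generators, and so on.

```python
from itertools import product

# Canonical free resolution over R = Z/4 for the module M = Z/2
# (R acts on M through reduction mod 2).
R = 4
M = [0, 1]                      # the elements of Z/2

def act(r, m):                  # R-action on M
    return (r * m) % 2

# F(M): the free R-module on the underlying *set* of M.
# Its elements are coefficient tuples indexed by the elements of M.
def c_M(vec):                   # the canonical surjection F(M) -> M
    s = 0
    for m, coeff in zip(M, vec):
        s = (s + act(coeff, m)) % 2
    return s

FM = list(product(range(R), repeat=len(M)))
kernel = [v for v in FM if c_M(v) == 0]
print(len(FM), len(kernel))     # 16 8  -- so F^{-1}(M) is free on 8 generators
```

The resolution is canonical and functorial, but it is free on underlying sets, so the ranks explode at each stage; and constructively one also needs $M$-as-a-type to be discrete for $F(M)$ to be projective, which is exactly the sticking point discussed above.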
Addendum Regarding your original question about how constructive this can be made, I think for certain classes of modules (e.g. finitely presented modules) over certain classes of rings (e.g. polynomial rings), this is not only possible, but it seems to have been implemented in some computer algebra packages already. I notice that the latest Macaulay 2 has commands for Ext and Tor.
*If you are unhappy with this, use instead the double character module $$I(N)= Hom_{\mathbb{Z}}(Hom_{\mathbb{Z}}(N,\mathbb{Q}/\mathbb{Z}), \mathbb{Q}/\mathbb{Z})$$
Thanks a lot. I have heard of 1, although this creates a new problem (namely, $\mathrm{Ext}$ becomes a proper class), and doesn't work for $\mathrm{Tor}$ (not even for $\mathrm{Tor}^0$, which is reflected by the fact that the tensor product is the most difficult to understand thing in linear algebra). As for 2, I am trying to get used to derived categories; ATM I can't prove that this definition is equivalent to the usual one however. 3 is what Tyler proposed, and what I am currently using. – darij grinberg Jan 3 2011 at 17:29
However, injective hulls are not as canonical, as far as I understand, so 3 really works only for the projective part of the theory. – darij grinberg Jan 3 2011 at 17:30
Yes, I agree, these approaches solve some problems and create others. Incidentally, whenever I teach sheaf cohomology, I use Godement's canonical flasque resolutions and avoid injectives altogether (and before anyone says anything, I know this won't work for arbitrary topoi...). – Donu Arapura Jan 3 2011 at 17:38
Oh, another thing: I heard of the double-character definition of $I(N)$, but I am not sure whether it really has any of the required properties constructively. We need to show that the canonical map $N\to I(N)$ is injective. Is it? – darij grinberg Mar 19 2011 at 11:56
@Donu, that won't work for arbtrary --- oops, you anticipated me... :-) – Ravi Vakil Jul 21 2011 at 22:33
Perhaps constructively it is misguided to expect Tor and Ext to be definable in terms of a single resolution.
One way to define Ext(M,N) is as the homology groups of DHom(M,N) in the derived category of chain complexes. The derived category of chain complexes is obtained from the category of chain complexes by inverting quasi-isomorphisms. In the presence of projective or injective resolutions, the category of chain complexes has a (projective or injective) Quillen model structure so that DHom(M,N) can be computed as Hom(QM,N) or Hom(M,RN) for a projective (= cofibrant) resolution QM or an injective (= fibrant) resolution RN.
In the absence of resolutions, it seems likely to me that the "right" thing to look at would still be DHom(M,N); it would just be harder to compute. A priori you would have to look at arbitrary zigzags of chain complexes, with backwards-pointing maps being quasi-isomorphisms. But it might still happen that there is a calculus of fractions or something that would reduce it to two- or three-step zigzags that you have to look at. But you would still then have to use different "resolutions" to represent different elements of Ext(M,N).
This is all based on general nonsense intuition, however, not on any experience of actually trying to do anything with Tor and Ext constructively, so it could be way off.
Thanks: even if this doesn't answer my question, it provides me a lot to think and read about. ;) I am a bit scared of derived categories: with the definition I know, the derived category of a locally small but not small category is not even locally small anymore, so there seems to be some inherent evil in that notion which should be exorcised first (in the fires of type theory?...). Now I have done enough rambling for 10 soft-questions, so I guess I should stop. Thanks again for all the help. – darij grinberg Jan 3 2011 at 1:13
It's true that if you take a locally-small non-small category and invert some weak equivalences, the result may no longer be locally small. I would guess that local-smallness of the derived category of a ring requires a weak form of choice (similar to nlab.mathforge.org/nlab/show/WISC). I'm not sure what's evil about non-locally-small categories, though. – Mike Shulman Jan 3 2011 at 4:07
This is not exactly what you're asking, but computationally there is no problem with Ext and Tor over finitely-generated commutative algebras over a field. The algorithms are described in Eisenbud's commutative algebra book, as well as other places. For schemes, you can use the Cech definitions. I don't know to what extent these depend on choice to prove that they compute the right thing, since the theory relies on injective resolutions at various points, even though they are avoidable in calculations.
It seems to me that you could constructively handle injective modules the same way that you constructively handle the algebraic closure of a field. (Does the existence of the algebraic closure of $\mathbb{Q}$ require choice?) I don't think you literally need the whole injective module, just as you usually don't literally need the whole algebraic closure of a field. You adjoin roots of polynomials/solutions of linear equations as necessary. I don't know if you can give a constructive proof that this stops, though.
The existence and uniqueness of algebraic closures can be deduced from the ultrafilter lemma, so doesn't require the full power of choice: see mathoverflow.net/questions/46566/… . In the specific case of Q, doesn't the proof that C is algebraically closed go through in ZF? Then we can just take the algebraic numbers in C. – Qiaochu Yuan Jan 3 2011 at 18:57
@Qiaochu: In ZF, I believe so, but constructively, I don't think you can even prove that the real numbers are real-closed: the intermediate value theorem requires some LEM. – Mike Shulman Jan 3 2011 at 20:45
The "just adjoin roots as necessary" sounds similar to what I suggested about needing to use different "resolutions" to represent different morphisms in the derived category? – Mike Shulman Jan 3 2011 at 20:47
That's a good point. You could work with chain complexes that are quasi-isomorphic to the original module, such that the first module in the complex is an essential extension of the original module, and each additional module in the complex is an essential extension of the previous one. Now we just need to prove that we can get every morphism of the derived category that way... – arsmath Jan 3 2011 at 22:52
I am no authority on what I'm about to write, but the nLab discusses a weakened form of the axiom of choice called COSHEP (the Category Of Sets Has Enough Projectives), aka the Presentation Axiom (PAx) that some constructivists apparently consider reasonable. The principle does indeed hold for various models of intuitionistic set theory, for example any presheaf topos and also the effective topos, where the full axiom of choice fails badly.
Under COSHEP, it is easy to construct projective resolutions in algebra. And apparently this is a main application for constructive mathematicians, so the question raised by Darij has certainly been considered in the literature.
As regards injective resolutions: isn't the situation in Grothendieck toposes rather better than for projective resolutions? I am thinking here that a primary mathematical justification for interest in constructive proofs is that they admit a much wider semantic range than classical proofs. In particular, the internal logic of a topos is 'constructive', and a primary reason one is interested in constructive proofs is that they tend to carry over to toposes. Anyway, if I'm not too badly informed, the category of abelian group objects in a Grothendieck topos always has enough injectives; it may not have enough projectives however. Indeed, the interest in things like "flabby sheaves" seems to come down squarely on the injective side. (Hm, just noticed that Greg Friedman made a similar point in a comment above.)
All that being said, I expect that to push "enough injectives" through to Grothendieck toposes, one must assume certain baseline choice principles on the base topos $Set$. The "enough injectives" result on Grothendieck toposes probably hinges on some external considerations (like the existence of a generator) rather than internal logical considerations, so I'm a little murky on what constructivism really has to say here. All I'm saying above is that if your primary interest in constructive proofs is that they carry over to Grothendieck toposes, then fear not: they have enough injectives already.
I'm hoping that someone such as Andreas Blass will weigh in here at some point.
My primary interest in constructive proofs is that they are constructive, and alas I don't know what a topos is. This is going to change when I have something like a month of spare time for reading Barr's TTT book. Unfortunately, I have no idea when this is going to be. To be honest, I have NO idea why free modules are not projective constructively, so it seems like confusion has got the better of me now. – darij grinberg Jan 3 2011 at 14:08
Thanks for the reply, the mention of COSHEP seems to explain some things. – darij grinberg Jan 3 2011 at 14:10
@Darij: A detailed introduction into topos theory takes months or even years to read, I think. As a first introduction, you might check out maths.gla.ac.uk/~tl/cafe_topos_intro.pdf – Martin Brandenburg Jan 3 2011 at 15:22
@Darij: perhaps you've got it sorted out by now, but you just have to work through the usual element-chasing proofs that free modules are projective to see where AC comes in. "Given f: P --> X, P free, and an epi p: Y --> X, I have to lift f through p. Let e_i be a basis of P; the lift should take e_i to some y with p(y) = f(e_i). So I need to choose such y etc." It's hard to do this without some form of choice. – Todd Trimble Jan 3 2011 at 16:02
I see, this depends on what we mean by a "surjection". For me, a map $f:X\to Y$ (of sets) is called a surjection if and only if for every $y\in Y$, there exists an $x\in X$ such that $f(x)=y$. Now, "for every" is, type-theoretically, a lambda abstraction, so if I know that "for every $y\in Y$, there exists an $x\in X$ such that $f(x)=y$", then I actually have a map $g:Y\to X$ which assigns to each $y\in Y$ an $x\in X$ satisfying $f(x)=y$. Now, this has an ugly consequence, namely that if $X$ is a set and $\sim$ is an equivalence relation on $X$, then the canonical projection ... – darij grinberg Jan 3 2011 at 17:41
## Can the image of a Schur functor always be made an irreducible representation?
For a partition $\lambda$ let $S^{\lambda}$ be the corresponding Schur functor. Is it true that for every $\lambda$ there exists an irreducible representation $V$ of a finite nonabelian group $G$ such that $S^{\lambda}(V)$ is still irreducible?
This is not obvious to me even for the symmetric and exterior powers (although maybe I'm not thinking hard enough), so any partial results would be appreciated.
I hope you don't mind too much that I made a little edit for clarity. I found the original statement very confusing. – Ben Webster♦ Jul 18 2010 at 22:03
Ben's edit makes the question more precise, but I'm unclear about the motivation for it. Would there be interesting consequences (for general linear groups or for finite groups) if $f$ can be computed explicitly or if the boundedness is somehow shown to be true or false? The question is intriguing, but also puzzling. Classical Schur-Weyl theory gives good algorithmic knowledge of the constituents of tensor powers of $V$ for general linear groups over $\mathbb{C}$; but neither this nor the finite subgroups are known in closed form as $n$ grows (eventually all finite groups appear). – Jim Humphreys Jul 18 2010 at 22:08
@Ben: Thanks. @Jim: The motivation is vaguer than the question, which is why I wasn't sure whether to put it in. For any representation $V$ of a finite group $G$ we know that $V^{\otimes k}$ decomposes into parts corresponding to the irreps of $GL(V)$, which by Schur-Weyl duality can be labeled by some irreps of $S_k$. I want to know if this is the best one can do for a "generic" group $G$, e.g. whether each of these representations is actually irreducible for some finite group $G$ and some representation $V$. If I've got the statement of the problem right, this is equivalent to $f$ being unbounded. – Qiaochu Yuan Jul 18 2010 at 22:31
I have edited the statement of the question to reflect my motivation more accurately. – Qiaochu Yuan Jul 18 2010 at 22:40
The answer is no, for the sixth symmetric power in characteristic zero. But I don't know if there is an easy proof. See "Symmetric powers and a problem of Kollár and Larsen," by Guralnick and Tiep. – moonface Jul 18 2010 at 22:57
## 2 Answers
Since the Guralnick and Tiep paper is very long, I thought I would summarize my understanding of it. Note that I only learned about this result from moonface last night, so I am hardly an expert. This answer is community wiki, in case anyone can improve on my summary.
We want to establish the following result: Let $G$ be a finite noncommutative subgroup of $GL(V)$, with $V$ a $\mathbb{C}$ vector space. Let $k \geq 6$. Then $\mathrm{Sym}^k(V)$ is reducible. (If $G$ is commutative and $V$ is one dimensional then, of course, $\mathrm{Sym}^k(V)$ is always one dimensional and hence irreducible.)
Our proof is by induction on $|G|$. Our base case will be when $G$ is a central extension of a simple group.
Choose a nontrivial normal subgroup $H$ of $G$ such that $G/H$ is simple. If we can't do this then $G$ is simple and we are in the base case. Let $V \cong \bigoplus U_i$ be the decomposition of $V$ into $H$-isotypic components. If there is more than one summand, then $\bigoplus \mathrm{Sym}^k(U_i)$ is a nontrivial $G$-subrep of $\mathrm{Sym}^k(V)$. So we may assume that $V \cong W \otimes X$, where $W$ is an $H$-irrep and $H$ acts trivially on $X$. Then one can show (proof of Lemma 2.5) that $G$ is contained in $GL(W) \times GL(X)$. (This is nontrivial but not deep; you could give it as a problem in a graduate-level representation theory course.)
Then $\mathrm{Sym}^k(W) \otimes \mathrm{Sym}^k(X)$ is a subrepresentation of $\mathrm{Sym}^k(V)$. This subrep is proper unless $\dim X=1$ or $\dim W=1$. If $\dim X=1$, then $V$ is an irrep of $H$ and $\mathrm{Sym}^k(V)$ is irreducible as an $H$-rep, so we are done by induction.
If $W$ is one dimensional, then $H$ acts on $V$ by scalars, so $H$ is central. So we are in our base case: a central extension of a simple group. This case is done by group cohomology and the classification of simple groups. See section 4 for groups of Lie type, section 6 for alternating groups and section 7 for sporadic groups.
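As a numerical sanity check in the positive direction for small $k$ (my own sketch, not from the paper): the comments mention that $SL(2,3)$ has a $2$-dimensional representation whose symmetric square is irreducible. One can verify this by realizing $SL(2,3)$ as the binary tetrahedral group $2T$ inside $SU(2)$ and computing the norm of the character of $\mathrm{Sym}^2$:

```python
import numpy as np

# Realize SL(2,3) = 2T (binary tetrahedral group) inside SU(2), generated by
# the unit quaternions i, j and w = (-1 + i + j + k)/2 written as 2x2 matrices.
I2 = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj
w = 0.5 * (-I2 + qi + qj + qk)          # an element of order 3

def key(m):
    return tuple(np.round(m.flatten(), 6))

# Close the generating set under multiplication to enumerate the whole group.
elems = {key(I2): I2}
frontier = [I2]
while frontier:
    new = []
    for g in frontier:
        for h in (qi, qj, w):
            m = g @ h
            if key(m) not in elems:
                elems[key(m)] = m
                new.append(m)
    frontier = new
G = list(elems.values())
print(len(G))   # 24, i.e. |SL(2,3)|

# chi_{Sym^2}(g) = (chi(g)^2 + chi(g^2)) / 2, and the rep is irreducible
# iff the average of |chi_{Sym^2}|^2 over the group equals 1.
total = 0.0
for g in G:
    c = (np.trace(g) ** 2 + np.trace(g @ g)) / 2
    total += abs(c) ** 2
norm = total / len(G)
print(round(float(norm), 6))   # 1.0, so Sym^2 of the 2-dim rep is irreducible
```

By class, the character of $\mathrm{Sym}^2$ takes values $3,3,-1,0,0$ on the five class types, and $(9+9+6\cdot 1)/24 = 1$.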
By averaging, over $\mathbb R$ every representation of a finite group can be given a positive definite inner product so that the action is orthogonal. And my memory is that Schur functors of representations of $\operatorname{O}(n)$ are not usually irreducible. For example, $\operatorname{Sym}^2(\mathbb R^n)$ has a one-dimensional summand over $\operatorname{O}(n)$, given by the trace w.r.t. the inner product on $\mathbb R^n$. This suggests to me that over any finite group, the essential image of $\operatorname{Sym}^2$ contains very few irreps. But maybe I am in error.
It sounds like you mean that Sym^2(V) is reducible unless dim(V) = 1? SL(2,3) has some 2 dim reps with Sym^2 irreducible. This appears somewhat common. Checking a standard list of character tables, one sees there are examples for Sym^2 in dimensions 2,3,4,5,6,8,9,10,11,12,13,14,18,20,21,26,28,32,41,42,43,45,60,342,1333; examples for Sym^3 in dimensions 2,3,4,5,6,8,9,10,12,13,14,18,20,32; examples for Sym^4 in dimensions 2,4,6,12; and examples for Sym^5 in dimensions 2,4,6,12. There were no examples for Sym^6 as indicated by Guralnick and Tiep. Maybe this is a U(n) versus O(n) problem? – Jack Schmidt Jul 19 2010 at 12:33
Yeah, I think all this is saying is that the Frobenius–Schur indicator indicates whether the trivial rep is a summand of the 2nd symmetric power, the 2nd exterior power, or neither. O(n) reps means it is a summand of the 2nd symmetric power. It's pretty common for an irrep not to be realized in O(n). – Jack Schmidt Jul 19 2010 at 13:21
Ah, so I was in error. It was late last night, after some wine, that I wrote my answer :) – Theo Johnson-Freyd Jul 19 2010 at 17:03
# Tweaking textbook RSA to make the encryption a Pseudorandom function
Lets say I want to tweak/alter the textbook RSA encryption function to create a pseudorandom function by pre-processing the input.
Suppose I do something simple like add 2 to the input before encrypting it:
$$c = (m+2)^e \bmod N$$
How do I know if this is not a secure pseudorandom function? Does this give any information about N?
I am aware that no tweak can make it a pseudorandom function. I am just trying to understand which values this setting cannot output, so as to build a distinguisher. Sorry if I wasn't clear earlier. My question originally meant to ask which ciphertexts cannot be the output of this setting, in order to distinguish it from a purely random function. I know one thing: this setting will never output $0$ or $1$, since the input is bounded, $0 \le m < 2^n$.
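To make the trivial distinguisher concrete, here is a minimal sketch with toy parameters (any real modulus would be far larger): since $e$ and $N$ are public, anyone can evaluate the candidate function themselves, so one oracle query already separates it from a random function, with no need to look for impossible output values.

```python
import random

# Toy RSA parameters -- illustration only, far too small to be secure.
p, q, e = 61, 53, 17
N = p * q   # 3233

def f(m):
    """The tweaked function from the question: f(m) = (m + 2)^e mod N."""
    return pow(m + 2, e, N)

def distinguisher(oracle):
    """One query suffices: e and N are public, so anyone can recompute f."""
    m = random.randrange(N)
    return oracle(m) == f(m)

# Against f itself the distinguisher always succeeds.
print(distinguisher(f))   # True

# Against a lazily-sampled random function it succeeds only with probability 1/N.
table = {}
def random_oracle(m):
    if m not in table:
        table[m] = random.randrange(N)
    return table[m]
print(distinguisher(random_oracle))   # False with overwhelming probability
```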
Didn't you answer your own question with the edit? This function can never output zero or one. If not, please clarify your question. – Jens Dec 18 '12 at 11:38
## 1 Answer
This is not a secure pseudorandom function. No tweak is going to make this a pseudorandom function. After all, it is a public-key operation, so anyone who knows your algorithm can compute $f(m)$ given $m$. Consequently, it cannot possibly be pseudorandom: there will always be a trivial distinguisher.
What are you actually trying to accomplish? Why do you think you want a tweak to RSA that makes it a pseudorandom function -- what problem are you trying to solve? I think you may need to step back from the specific mechanism you have in mind and think through what are the broader requirements of your particular application. Once you've done that, we may be in a better position to help you solve your problem.
# Equilibrium Networks
Science, networks, and security
## Birds on a wire and the Ising model
Statistical physics is very good at describing lots of physical systems, but one of the basic tenets underlying our technology is that statistical physics is also a good framework for describing computer network traffic. Lots of recent work by lots of people has focused on applying statistical physics to nontraditional areas: behavioral economics, link analysis (what the physicists abusively call network theory), automobile traffic, etc.
In this post I’m going to talk about a way in which one of the simplest models from statistical physics might inform group dynamics in birds (and probably even people in similar situations). As far as I know, the experiment hasn’t been done–the closest work to it seems to be on flocking (though I’ll give \$.50 and a Sprite to the first person to point out a direct reference to this sort of thing). I’ve been kicking it around for years and I think that at varying scopes and levels of complexity, it might constitute anything from a really good high school science fair project to a PhD dissertation. In fact I may decide to run with this idea myself some day, and I hope that anyone else out there who wants to do the same will let me know.
The basic idea is simple. But first let me show you a couple of pictures.
Notice how the tree in the picture above looks? There doesn’t seem to be any wind. But I bet that either the birds flocked to the wire together or there was at least a breeze when the picture below was taken:
Because the birds are on wires, they can face in essentially one of two directions. In the first picture it looks very close to a 60%-40% split, with most of the roughly 60 birds facing left. In the second picture, 14 birds are facing right and only one is facing left.
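In spin language the counts above translate directly into magnetizations (my own back-of-envelope arithmetic, taking left-facing as $+1$ and right-facing as $-1$):

```python
def magnetization(n_left, n_right):
    """Average spin when left-facing birds count as +1 and right-facing as -1."""
    return (n_left - n_right) / (n_left + n_right)

# First picture: roughly a 60-40 split among ~60 birds, most facing left.
print(round(abs(magnetization(36, 24)), 2))   # 0.2
# Second picture: 14 birds facing right, 1 facing left.
print(round(abs(magnetization(1, 14)), 2))    # 0.87
```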
Now let me show you an equation:
$H = -J\sum_{\langle i j \rangle} s_i s_j - K\sum_i s_i.$
If you are a physicist you already know that this is the Hamiltonian for the spin-1/2 Ising model with an applied field, but I will explain this briefly. The Hamiltonian $H$ is really just a fancy word for energy. It is the energy of a model (notionally magnetic) system in which spins $s_i$ that occupy sites that are (typically) on a lattice (e.g., a one-dimensional lattice of equally spaced points) take the values $\pm 1$ and can be taken as caricatures of dipoles. The notation $\langle i j \rangle$ indicates that the first sum is taken over nearest neighbors in the lattice: the spins interact, but only with their neighbors, and the strength of this interaction is reflected in the exchange energy $J.$ The strength of the spins' interaction with an applied (again notionally magnetic) field is governed by the field strength $K.$ This is the archetype of spin models in statistical physics, and it won't serve much for me to reproduce a discussion that can be found many other places (you may like to refer to Goldenfeld's Lectures on Phase Transitions and the Renormalization Group, which also covers the renormalization group method that inspires the data reduction techniques used in our software). Suffice it to say that these sorts of models comprise a vast field of study and already have an enormous number of applications in lots of different areas.
Now let me talk about what the pictures and the model have in common. The (local or global) average spin is called the magnetization. Ignoring an arbitrary sign, in the first picture the magnetization is roughly 0.2, and in the second it’s about 0.87. The 1D spin-1/2 Ising model is famous for exhibiting a simple phase transition in magnetization: indeed, the expected value of the magnetization in the thermodynamic limit is shown in every introductory statistical physics course worth the name to be
$\langle s \rangle = \frac{\sinh \beta K}{\sqrt{\sinh^2 \beta K + e^{-4\beta J}}}$
where $\beta \equiv 1/T$ is the inverse temperature (in natural units). As ever, a picture is worth a thousand words:
For $K = 0$ and $T > 0,$ it’s easy to see that $\langle s \rangle = 0.$ But if $K \ne 0, J > 0$ and $T \downarrow 0$, then taking the subsequent limit $K \rightarrow 0^\pm$ yields a magnetization of $\pm 1.$ At zero temperature the model becomes completely magnetized–i.e., totally ordered. (Finite-temperature phase transitions in magnetization in the real world are of paramount importance for superconductivity.)
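Both the photo estimates and the limits of $\langle s \rangle$ are easy to check numerically. A quick sketch (the bird counts here are my reading of the photos, and the left/right sign convention is arbitrary):

```python
import math

def magnetization(n_left, n_right):
    """Average spin if left-facing birds are +1 and right-facing are -1."""
    return (n_left - n_right) / (n_left + n_right)

# First photo: roughly a 60-40 split among ~60 birds; second: 1 left, 14 right.
print(abs(magnetization(36, 24)))  # 0.2
print(abs(magnetization(1, 14)))   # ~0.87

def mean_spin(beta, J, K):
    """Exact magnetization of the 1D spin-1/2 Ising chain, thermodynamic limit."""
    s = math.sinh(beta * K)
    return s / math.sqrt(s * s + math.exp(-4.0 * beta * J))

print(mean_spin(beta=1.0, J=1.0, K=0.0))    # 0.0: zero field, positive temperature
print(mean_spin(beta=50.0, J=1.0, K=0.01))  # ~1.0: ordered at low temperature
```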
And at long last, here’s the point. I am willing to bet (\$.50 and a Sprite, as usual) that the arrangement of birds on wires can be well described by a simple spin model, and probably the spin-1/2 Ising model provided that the spacing between birds isn’t too wide. I expect that the same model with varying parameters works for many–or even most or all–species in some regime, which is a bet on a particularly strong kind of universality. Neglecting spacing between birds, I expect the effective exchange strength to depend on the species of bird, and the effective applied field to depend on the wind speed and angle, and possibly the sun’s relative location (and probably a transient to model the effects of arriving on the wire in a flock). I don’t have any firm suspicions on what might govern an effective temperature here, but I wouldn’t be surprised to see something that could be well described by Kawasaki or Glauber dynamics for spin flips: that is, I reckon that–as usual–it’s necessary to take timescales into account in order to unambiguously assign a formal or effective temperature (if the birds effectively stay still, then dynamics aren’t relevant and the temperature should be regarded as being already accounted for in the exchange and field parameters). I used to think about doing this kind of experiment using tagged photographs or their ilk near windsocks or something similar, but I can’t see how to get any decent results that way without more effort than a direct experiment. I think it probably ought to be done (at least initially) in a controlled environment.
Anyways, there it is. The experiment always wins, but I have a hunch how it would turn out.
UPDATE 30 Jan 2010: Somebody had another interesting idea involving birds on wires.
This entry was posted on Monday, November 30th, 2009 at 02:33 and is filed under Equilibrium Networks, Mathematics, Networks, Nonrandom bits, Science.
### 6 Responses to Birds on a wire and the Ising model
1. RZ says:
Wasn’t something similar done with cow orientations using google earth?
Anyway, as far as I can remember, the correlation function for 1-d Ising is exponential, which implies that for a given snapshot, the probability to see N sequential birds with the same orientation decays exponentially. It is difficult to separate this prediction from other simple models of bird orientation.
A real test is to compare to equilibrium dynamics ( if I had to guess, I’d say Glauber). An exploration of time dynamics of birds on a wire would be much more difficult, just consider modeling the case of a bird leaving or landing on the wire. Looks more like an adsorption problem to me- preferential attachment where birds already are perched + a “hard core” repulsion at close ranges.
• eqnets says:
The cow thing is covered here:
http://afp.google.com/article/ALeqM5j1FvUL7uj_NIAPrfLwSb0HMV4gnA
To me the interesting part would be determining the parameters of the model in terms of environmental influences
2. RZ says:
According to the link, the cows really do align themselves according to the external magnetic field. I guess they are para-magnetic (this is funnier in hebrew).
3. eqnets says:
Found this today:
http://answers.google.com/answers/threadview?id=442524
4. RZ says:
Cool, but that means that there isn’t any bird-bird interaction, so J=0 and no Ising, at least to zero’th approximation.
5. Hosting Anak Bangsa says:
Wow, Nice post you have, thanks
http://mathoverflow.net/questions/17732?sort=newest
Difference between measures and distributions
On the one hand, Wikipedia suggests that every distribution defines a Radon measure:
On the other hand, Terry Tao and LK suggest not:
Can someone please clarify this for me?
You might be interested in the answers to this question: mathoverflow.net/questions/4706/… – Tom Leinster Mar 10 2010 at 19:00
Wikipedia does not suggest that every distribution defines a Radon measure, it says that every distribution which is non-negative on non-negative functions is positive Radon measure, and this is a rather different statement! – Mariano Suárez-Alvarez Mar 13 2010 at 14:58
Yes, see my comment on Deane Huang's post below. My error was assuming that every distribution is the difference of two positive distributions. This holds (as far as I remember) for signed measures. – Tom Ellis Mar 14 2010 at 12:14
4 Answers
This is a summary of what I've learned about this question based on the answers of the other commenters.
[*] Any positive distribution defines a positive Radon measure.
I had naively assumed a result for distributions like the Hahn decomposition theorem [1] for measures, i.e. I assumed that a distribution could be expressed as the difference of two positive distributions. If it could be, then applying Theorem [*] would yield the result that any distribution is a signed measure.
However, this is not the case. The derivative of the delta function, i.e. δ', satisfies δ'(f) = -f'(0). This is not a measure. I can't find any way of proving it's not the difference of two positive distributions, other than by contradiction using the above result.
[1] http://en.wikipedia.org/wiki/Hahn_decomposition_theorem
δ' is not continuous on the space of continuous functions. Would this show that δ' is not a signed measure? – timur Jul 15 2011 at 9:34
I think the decisive point is continuity with respect to different topologies. Let $C$ be the space of continuous functions of compact support and $D$ the space of smooth functions of compact support. The inclusion $D\hookrightarrow C$ is a continuous map when you give both spaces the corresponding inductive limit topology. That means, that every continuous linear functional of $C$, i.e., each Radon-measure, defines a continuous linear functional on $D$, i.e., a distribution. But not every distribution extends to a continuous linear map on $C$. Examples are the derivatives of the Dirac distribution. The line in Wikipedia relates to an important property of linear functionals on $C$: if such a functional is positive, i.e., if it maps functions $f\ge 0$ to numbers $\ge 0$, then it is AUTOMATICALLY CONTINUOUS. This is a very important fact, though it is not hard to prove.
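To make the failure of continuity concrete, here is a standard kind of counterexample sketch (my choice of test functions; any similarly oscillating family works):

```latex
% Fix a smooth bump \chi \in D with \chi \equiv 1 near 0, and set
\[
  f_n(x) = \frac{1}{n}\,\sin(n^{2} x)\,\chi(x).
\]
% Then \|f_n\|_\infty \le 1/n \to 0, so f_n \to 0 uniformly, but
\[
  \delta'(f_n) = -f_n'(0)
  = -\Bigl( n\cos(0)\,\chi(0) + \tfrac{1}{n}\sin(0)\,\chi'(0) \Bigr)
  = -n \longrightarrow -\infty.
\]
% Hence \delta' is unbounded on a uniformly null sequence of test functions,
% admits no continuous extension to C, and so is not a (signed) Radon measure.
```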
It is worth noting that the first part of this answer generalizes, in the sense that $D$ is dense in just about any function space you can think of, with continuous inclusion. E.g., $L^p$, Sobolev spaces, and so forth. And hence the duals of such function spaces can be considered to consist of distributions. – Harald Hanche-Olsen Mar 13 2010 at 17:04
A much better explanation than mine. – Deane Yang Mar 13 2010 at 17:11
Sorry for perhaps sounding stupid, but: Why does the Hahn-Banach theorem not work in extending the measure here? – Regenbogen Mar 14 2010 at 0:37
Because it is not continuous with respect to the topology of D. – anton Mar 14 2010 at 14:15
Measures are dual to continuous functions, whereas distributions are derivatives of them.
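“Derivatives of them” can be made precise by the local structure theorem for distributions, which I state here from memory as a sketch (check a standard reference before relying on the exact form):

```latex
% Local structure theorem (sketch): distributions are locally
% finite-order derivatives of continuous functions.
% For a distribution u on \Omega and any compact K \subset \Omega there
% are a continuous function F and a multi-index \alpha such that
\[
  u(\varphi) = (-1)^{|\alpha|} \int F(x)\, \partial^{\alpha}\varphi(x)\, dx
  \qquad \text{for all test functions } \varphi \text{ supported in } K,
\]
% i.e. u = \partial^{\alpha} F in the sense of distributions near K.
% For instance, with the ramp r(x) = \max(x, 0) one has
% \delta = r'' and \delta' = r''' .
```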
well, at least in original definition, distributions are dual to smooth functions... – Yemon Choi Mar 11 2010 at 3:55
Are they not dual also to the subspace of continuous functions of $\mathcal D$ in its subspace topology? – Mariano Suárez-Alvarez Mar 13 2010 at 15:00
Do you mean this sentence:
Conversely, essentially by the Riesz representation theorem, every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.
The condition that the distribution be non-negative for non-negative functions is non-trivial. Not every distribution satisfies this, so not every distribution is a Radon measure.
The fundamental examples are the delta function at a point (which is a measure) and its derivatives (which are not measures).
Perhaps I should have been clearer: does every distribution correspond to a signed measure? – Tom Ellis Mar 12 2010 at 8:13
ncatlab.org/nlab/show/distribution says "For an example of a distribution .. which does not arise from a measure, consider the derivative of the Dirac distribution. (As a functional, it maps a test function f to −f′(0).)" -- so my understanding of signed measures is not deep enough: there's something magic about measures that makes every signed measure the difference of two measures. The equivalent result is clearly not true for distributions! – Tom Ellis Mar 12 2010 at 8:49
I love the sentence “The condition that the distribution be non-negative for non-negative functions is non-trivial.” It seems that it should be simplifiable by some sort of elimination of double negations, but any such ‘simplification’ radically changes its meaning. – L Spice Jun 21 2011 at 20:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282035231590271, "perplexity_flag": "head"}
|
http://mathoverflow.net/revisions/17723/list
|
Amazingly, it seems that the answer is yes:
Mioduszewski, J. On two-to-one continuous functions. (Russian summary) Bull. Acad. Polon. Sci. Sér. Sci. Math. Astronom. Phys. 9 1961 129--132.
The author announces results concerning two-to-one functions $f$ on a locally compact separable space $X$, proofs of which appear in Rozprawy Mat. 24 (1962), 1--41. Let $\phi$ be the (discontinuous) involution defined by $\varphi(x)=f^{-1}f(x)-x$. A result of the reviewer [Duke Math. J. 10 (1943), 49--57; MR0008697 (5,47e)] asserts that if $X$ is a compact manifold or $f$ is closed and $X$ is a locally compact manifold, then the investigation of $\phi$ is equivalent to the investigation of a continuous involution. The author calls a point $x\in X$ pseudo-Euclidean if it has a neighborhood $H$ such that the closure of the component of $x$ in $H$ is a Euclidean solid sphere. The principal theorem asserts that if $x$ is a pseudo-Euclidean point with $K$ as the solid sphere of the definition, and if $\psi=\varphi|K$, that $\lim\text{}\sup_{y\rightarrow x}\psi(y)=x\bigcup\varphi(x)$ is impossible. This yields an extension of the result of the reviewer quoted above. The author indicates the existence of a plane simply connected domain $G$ whose boundary is an irreducible cut of the plane into two domains and such that there exists a two-to-one mapping defined on $\overline G$. This is in contrast to the result of Roberts [ibid. 6 (1940), 256--262; MR0001923 (1,319d)], which asserts the non-existence of two-to-one mappings defined on two-cells. The existence of two-to-one mappings defined on Euclidean spaces $E^n$, $n\geq 2$, is shown. However, the question of the existence of two-to-one mappings defined on $n$-cells, $n>3$, remains open. [MathSciNet review by P. Civin.]
I can't access this paper, so I can't say anything about the construction. It would be nice to see some corroboration for this result and/or a more (physically) accessible contemporary treatment.
Addendum: Petya's response gives a link to the paper, from which one can see that the function is essentially defined in terms of the involution $\iota$, so it is not immediately clear what the codomain is or whether it can be embedded in $\mathbb{R}^2$.
http://advogato.org/person/vicious/diary.html?start=322
# Older blog entries for vicious (starting at number 322)
Numerical range
I was fiddling with the numerical range of two by two matrices, so I modified my root-testing python program to do this. The numerical range of $A$ is the set of all values
$\frac{v^* A v}{v^* v}$
for all nonzero vectors $v$. This set is compact (it can be seen as the image of the map $v \mapsto v^* A v$ restricted to the unit sphere $v^* v = 1$, which is a compact set). It’s also convex, which is harder to show. For two by two matrices it is an elliptic disc (which could be degenerate).
See the result here; it plugs in random vectors and shows the result. Here’s an example plot for the matrix $A=\begin{bmatrix} 1 & i \\ 1 & -1 \end{bmatrix}$.
The code is really inefficient and eats up all your cpu. There’s no effort to optimize this.
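For reference, here is a minimal numpy version of the same random-sampling idea (my own sketch, not the linked program):

```python
import numpy as np

rng = np.random.default_rng(0)

def numerical_range_samples(A, n=20000):
    """Sample v* A v / v* v for n random complex vectors v."""
    d = A.shape[0]
    v = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
    num = np.einsum('ni,ij,nj->n', v.conj(), A, v)   # v* A v, one value per row
    den = np.einsum('ni,ni->n', v.conj(), v).real    # v* v
    return num / den

A = np.array([[1.0, 1j], [1.0, -1.0]])
w = numerical_range_samples(A)

# Sanity check: every sample is bounded by the operator norm,
# since the numerical radius satisfies r(A) <= ||A||_2.
print(np.abs(w).max() <= np.linalg.norm(A, 2) + 1e-9)  # True
```

The einsum call evaluates $v^* A v$ for all sample vectors at once, which avoids a per-vector Python loop.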
Syndicated 2012-11-09 00:06:40 from The Spectre of Math
Economy and elections
I have a theory as to why the economy improved over this summer and into the fall, which led to Obama winning the election. I bet a part of this was the money spent on the campaigns. That was 2 billion dollars that went to very targeted places like Ohio. No wonder the economy in Ohio is doing quite well. If it weren’t for the election, Sheldon Adelson would not have spent 100 million on random stuff over the period; he would have sat on the money. This way he spent it to elect Romney, improving the economy in battleground states, which led to Obama winning.
Yes it is a bit of a stretch, but it should not be totally dismissed. Apparently the campaigns spent approximately 190 million just in Ohio [1]. That means that the GDP of Ohio went up by 0.04 percent just because of the election (the GDP is 477 billion [2]). That’s not much, but it’s not negligible. Also note that it wasn’t spread out over the whole year; it was rather concentrated. Further note that state spending is 26 billion [2], so this is 0.73 percent of what the state spends in a given year. If the state gets, say, 5 percent of that money in various taxes (just pulling a number out of a hat; a reasonable estimate in my layman opinion based on state budget versus GDP), that would mean approximately 10 million in extra tax revenue for the state. Not at all bad.
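For what it’s worth, the back-of-the-envelope arithmetic checks out (figures as quoted above; the 5 percent tax take is the number pulled out of a hat):

```python
ad_spending = 190e6    # campaign spending in Ohio [1]
ohio_gdp = 477e9       # [2]
state_spending = 26e9  # [2]
tax_take = 0.05        # guessed effective share recovered in taxes

print(round(100 * ad_spending / ohio_gdp, 2))        # 0.04 percent of GDP
print(round(100 * ad_spending / state_spending, 2))  # 0.73 percent of state spending
print(round(tax_take * ad_spending / 1e6, 1))        # 9.5 million in extra revenue
```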
So, Sheldon Adelson was really rooting for Obama! Sneaky way to do it too.
[1] http://www.nationaljournal.com/hotline/ad-spending-in-presidential-battleground-states-20120620
[2] http://en.wikipedia.org/wiki/Economy_of_Ohio
Syndicated 2012-11-08 18:55:26 from The Spectre of Math
Linus has way too much time on his hands
So the latest news is that Linus has switched to KDE. This apparently after first switching to XFCE, then I guess back to GNOME. Hmmm.
I’m still on XFCE. Can’t be bothered to try anything else. Yes XFCE is somewhat sucky, but once you fix its stupidities (such as the filemanager taking a minute to start up due to some vfs snafu that’s been apparently around forever), it’s there. I’ve entertained the thought of trying something else, but it’s not an exciting enough proposition.
Now I am wondering what to do once Fedora 16 stops being supported. Should I spend the afternoon upgrading to 18? The issue is that I can’t do the normal upgrade thing since that would boot into its own environment and would not load a necessary module that I load on startup to turn off the bad nvidia card with a screwed-up heatsink. It’s impossible to do this in BIOS (stupid stupid Lenovo, never buying another Lenovo again). Anyway, that means having to do it right after boot, but before the GUI comes up, since that would (even if using the intel card) turn the laptop into a portable oven, and it will just turn off and die nowadays. I am thinking that maybe if the upgrade happens during the wintertime, I could just stick the laptop on snow (and wait till it’s at least 20 below freezing) and then it could stay sane for the duration of the upgrade perhaps. I will probably try to do the upgrade by yum only, but that seems like it could be bug prone and would require some manual tinkering, and I just don’t care enough to do that.
Next time picking a distro I’m going with something LTS I think. And … Get off my lawn!!!
Syndicated 2012-11-03 20:44:28 from The Spectre of Math
Visualizing complex singularities
I needed a way to visualize which t get hit for a polynomial such as $t^2+zt+z=0$ when z ranges in a simple set such as a square or a circle. That is, really this is a generically two-valued function above the z plane. Of course we can’t just graph it since we don’t have 4 real dimensions (I want t and z to of course be complex). For each complex z, there are generically two complex t above it.
So instead of looking for existing solutions (boring, surely there is a much more refined tool out there) I decided it is the perfect time to learn a bit of Python and check out how it does math. Surprisingly well, it turns out. Look at the code yourself. You will need numpy, cairo, and pygobject. I think except for numpy everything was installed on fedora. To change the polynomial or drawing parameters you need to change the code. It’s not really documented, but it should not be too hard to find where to change things. It’s less than 150 lines long, and you should take into consideration that I’ve never before written a line of python code, so there might be some things which are ugly. I did have the advantage of knowing GTK, though I never used Cairo before and I only vaguely knew how it works. It’s probably an hour or two’s worth of coding; the rest of yesterday afternoon was spent on playing around with different polynomials.
What it does is randomly pick z points in a rectangle, by default with real and imaginary parts going from -1 to 1. Each z point has a certain color assigned. On the left hand side of the plot you can see the points picked along with their colors. Then it solves the polynomial and plots the two (or more if the polynomial is of higher degree) solutions on the right with those colors. It uses the alpha channel on the right so that you get an idea of how often a certain point is picked. Anyway, here is the resulting plot for the polynomial given above:
I am glad (or not glad, depending on your point of view) to report that using the code I did find a counterexample to a Lemma I was trying to prove. In fact the counterexample is essentially the polynomial above. That is, I was thinking you’d probably have to have hit every t inside the “outline” of the image if all the roots were 0 at zero. It turns out this is not true. In fact there exist polynomials where t points arbitrarily close to zero are not hit even if the outline is pretty big (actually the hypotheses in the lemma were more complicated, but no point in stating them since it’s not true). For example, $t^2+zt+\frac{z}{n}=0$ doesn’t hit a whole neighbourhood of the point $t=-\frac{1}{n}$. Below is the plot for $n=5$. Note that as n goes to infinity the singularity gets close to $t(t+z) = 0$ which is the union of two complex lines.
By the way, be prepared: the program eats up quite a bit of RAM; it’s very inefficient in what it does, so don’t run it on a very old machine. It will stop plotting points after a while so that it doesn’t bring your machine to its knees if you happen to forget to hit “Stop”. Also it does points in large “bursts” instead of one by one.
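The counterexample itself can be reproduced without the GUI. Here is a minimal sketch (my own code, using the quadratic formula directly instead of the program above):

```python
import numpy as np

rng = np.random.default_rng(1)

def sampled_roots(n=5, samples=20000):
    """Roots t of t^2 + z t + z/n = 0 for random z in the unit square."""
    z = rng.uniform(-1, 1, samples) + 1j * rng.uniform(-1, 1, samples)
    disc = np.sqrt(z * z - 4.0 * z / n)          # complex square root
    return np.concatenate([(-z + disc) / 2.0, (-z - disc) / 2.0])

t = sampled_roots(n=5)

# The claimed gap: no root falls in a small disc around t = -1/5, because a
# root there would force z = -t^2 / (t + 1/5) outside the unit square.
print(np.abs(t + 0.2).min() > 0.01)  # True
```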
Update: I realized, after writing above that I had never written a line of python code, that I actually did write some python before. In my evince/vim/synctex setup I did fiddle with some python code that I stole from gedit, but I didn’t really write any new code there, rather just whacking some old code I did not totally understand with a hammer till it fit in the hole that I needed (a round peg will go into a square hole if hit hard enough).
Syndicated 2012-05-30 17:16:11 from The Spectre of Math
Return to linear search
So … apparently searching an unordered list without any structure whatsoever is supposed to be better than having structure. At least that’s the new GNOME shell design that removes categories, removes any ordering and places icons in pages. The arguments are that it’s hard to categorize things and people use spatial memory to find where things are.
The spatial memory was here before with nautilus. It didn’t work out so great. No, people don’t have spatial memory. For example, I use a small number of applications often, and I put their launchers somewhere easy to reach. The rest of the applications I use rarely if ever. No, I do not remember where they are; I do not even remember what they are named. E.g. I don’t remember what the ftp client is called, but I am not a total moron and I correctly guess to look for it in the “Internet” menu, which is manageable. Given I’ve used ftp probably once in a year, I do not remember where it is. Another example is when Maia (6 year old) needs a game to play. I never play games, but I have a few installed for these occasions. Do I want to look through an unordered list of 50-100 icons? Hell no. I want to click on “Games” and pick one. 95% or so of the applications I have installed I use rarely. I will not “remember” where they are. I don’t want to spend hours trying to sort or organize the list of icons. Isn’t that what the computer can do for me? The vast majority of people (non-geeks) never change their default config; they use it as it came. So they will not organize it unless the computer organizes it for them. I have an android tablet, and this paged interface with icons you have to somehow organize yourself is totally annoying. It is one of the reasons why I find the tablet unusable (I don’t think I’ve turned it on for a few months now). That interface might work well when you have 10 apps, but it fails miserably when you have 100.
If I could remember that games are on page 4 (after, presumably, I’ve made a lot of unneeded effort to put them there), I could just as well remember they are in the “Games” category. Actually there I don’t have to memorize it. Why don’t we just number all the buttons in an application, since the user could remember what button number 4, right next to button number 3 on window number 5, does? I mean, the user can use spatial memory, right?
Now as for “that’s why there is search” … yeah, but that only works when you know what you are searching for. I usually know what I am searching for only once I’ve found it. It’s this idea that google is the best interface for everything. Google is useful for the web because there are waaaaay too many pages to categorize. That’s not a problem for applications. Search is a compromise. It is a way to find things when there are too many to organize.
The argument “some apps don’t fit into one category neatly” also fails. The whole idea of the vfolder menus was that you could have arbitrary queries for submenus. You can have an app appear in every category where it makes sense. Now just because people making up the menus didn’t get it just right doesn’t make it a bad idea. Also now this leads to a lot of apps without any categories. The problem I think is with the original terminology. When I was designing this system I used “Keywords” instead of “Categories”. But KDE already had Keywords, so we used Categories, but you should think of them as Keywords on which to query where the icon appears. It describes the application, it doesn’t hardcode where it appears. Unfortunately, there seems to be a lack of understanding of this concept which always led to miscategorization. For example someone changed the original design to say some things were some sort of “core categories” or whatnot and that only one should appear on an icon and that there will be a menu with that name. That defeats the purpose. It’s like beating out the front glass of your car and then complaining about the wind.
Finally, what if I lend my computer to someone to do something quickly? No, I am a normal person, so I don’t create a new account. And even if I do create a new account, the default sorting of apps is unlikely to be helpful. If someone just wants to quickly do something that doesn’t involve the icons on the dash, they’re out of luck if I have lots of apps installed. Plus at work I will have a different UI, on my laptop I have a different UI, and any other computer I use will have a different UI. I can’t customize every one of them just to use them.
As it is, if I had a friend use my computer with gnome-shell they were lost. If it’s made even less usable … thank god for XFCE, though I worry that these moves towards iphonization of the UI will lead to even worse categorization. There are already many .desktop’s with badly filled out Categories field, so there will be less incentive to do it correctly.
Syndicated 2012-05-12 17:02:31 from The Spectre of Math
Determinants
I just feel like ranting about determinant notation. I always get in this mood when preparing a lecture on determinants and I look through various books for ideas on better presentation and the somewhat standard notation makes my skin crawl. Many people think it is a good idea to use
$\left\lvert \begin{matrix} a & b \\ c & d \end{matrix} \right\rvert$
instead of the sane, and hardly any more verbose
$\det \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right]$ or $\det \left( \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right] \right)$.
Now what’s the problem with the first one.
1) Unless you look carefully you might mistake the vertical lines for brackets and simply see a matrix, not its determinant.
2) Vertical lines look like an absolute value, i.e. something nonnegative, while the determinant can be negative.
3) What about 1 by 1 matrices? Is $|a|$ the determinant of $[a]$ or the absolute value of $a$?
4) What if you want the absolute value of the determinant (something commonly done)? Then if you write
$\left\lvert\left\lvert \begin{matrix} a & b \\ c & d \end{matrix} \right\rvert\right\rvert$
that looks more like the operator norm of the matrix rather than absolute value of its determinant. So in this case, even those calculus or linear algebra books that use the vertical lines will write:
$\left\lvert \det \left( \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right] \right) \right\rvert$
So now the student might be confused because they don’t expect to see “det” used for determinant (consistency in notation is out the window).
So … if you are teaching linear algebra or writing a book on linear algebra, do the right thing: Don’t use vertical lines.
Syndicated 2012-05-09 20:51:55 from The Spectre of Math
GNOME UI Fail
So, another GNOME UI fail. Marketa has a new computer: Using compositing leads to crashes so using fallback gnome (am thinking i should switch her to xfce as well). But this is really not a problem of the fallback.
Anyway, the UI fail I am talking about is “adding a printer”. Something which she figured out how to do previously. Not with the new UI for the printing. The thing is, the window is almost empty and it is not at all clear what to press to add a printer. So she hasn’t figured it out and I had to help out. I figured out three things
1) The “unlock” thing is totally unintuitive. She did not think of pressing it. She doesn’t want to unlock anything, she wants to add a printer. While locked, some parts of the UI are greyed out, but it’s not clear what unlocking will change.
2) There is just a “+” in a lower corner that you have to press. She did not figure out that’s what you have to press to add a printer. A button with “Add printer” would have been a million times better.
3) Not even I figured out how to set default options for the printer such as duplex, resolution, etc… Pressing “Options” brings up something about forbidden users or whatnot, which is a totally useless option on a laptop.
If a PhD who has used computers for years can’t figure out how to do something like this, there is a problem with the UI.
This is a symptom all over the new GNOME system settings. It’s very hard to set something up if it didn’t set itself up automatically. There’s also a lot of guesswork involved now. The UI may be slightly prettier, but it is a step backwards usage-wise.
Here’s a solution:
1) Get rid of the lock thing; go back to the model where, if you do something that requires authentication, you are asked for authentication. Why should there be extra UI that only confuses the user?
2) Change the “+” and “-” buttons to have the actual text. “Add printer” “Remove printer”.
3) “Add printer” should be very prominent in the UI. I bet 90% of the time when a normal user enters that dialog, they want to add a printer.
4) Put options where they can be accessed. Surely the options are accessible somewhere, but I didn’t find it.
…
Maybe I should file a bug that will get ignored …
Syndicated 2012-04-29 15:21:08 from The Spectre of Math
CS costs too much
Apparently computer science is not too interesting and costs too much: \$1.7 mil at the University of Florida. So obviously we cut it, so that the athletic department (costing \$99 mil) can get an extra \$2 mil a year. It’s obvious where our priorities are as a society. Even if nothing got cut, \$1.7 mil vs. \$99 mil is pretty bad.
Syndicated 2012-04-23 13:35:31 from The Spectre of Math
XFCE 1, GNOME Shell 0
After a year of using GNOME shell, I finally got fed up with it. GNOME shell is unfortunately really annoying to use. It tries to make so many decisions for you that it inevitably gets some of them wrong: new window placement, the whole status thing in the corner getting triggered when I don’t want it to, the overview getting triggered all the time by mistake, as well as for example custom launcher setup. When I run my script for editing latex it never shows evince and I have to focus it by alt-tab “by hand.” The whole Alt-Tab behaviour is totally nuts. I also really REALLY hate the fact that dialogs are now “attached” to their parents. I often need to look at the original window because I just forgot what I was going to type in, such as “how many pages did the document have again and what page am I on now” when printing; this happens really often for me, so GNOME shell drives me up the wall. There are just so many little things like that that overall make it a total pain. Some are solved through extensions or a change in behaviour, but I use several computers, so learning different behaviour just for my laptop is annoying.
Consistency be damned is the new motto now. From those new and cool interfaces, they are all quite different, Unity, Cinnamon, GNOME shell, (I haven’t tried KDE, I guess I won’t be able to go there out of GNOME loyalty, which was the only reason why I kept using GNOME shell for so long). Apparently rounded corners are more important than working correctly.
So at first I was happy with GNOME shell. Mostly because it seems to be aimed (despite what anyone says) at people who use the command line. People who mouse around will find GNOME shell annoying. For example my wife will not be searching for apps using the keyboard to launch them. There is also the fact that it’s impossible to customize GNOME nowadays for a specific purpose easily (using dconf-editor, which has a totally broken UI, is really not an answer; I wasted lots of time trying to get some things to work). Either use GNOME shell for what it’s specifically designed for, or use something else. So flexibility is also out the window.
GNOME shell seems to also think that your mousing is very precise, which it never was for me. I commonly press the wrong button, or the mouse will go somewhere it shouldn’t and the interface punishes you for it. See above about entering the overview by mistake (whenever I wanted to hit a menu or the back button or some such).
I tried LXDE, but it’s buggy as hell (at least in fedora). The window list seems to jump around, launchers don’t always work, the battery status doesn’t work, and workspace switcher is totally broken. OK, so no go there. I tried Cinnamon for a few days, but it’s bad in many of the ways that GNOME shell is. Unity is even worse.
I had some trouble with XFCE in the past (on ubuntu that was upgraded a few times, so it might not have been fair to xfce). Anyway, I installed it on fedora, and quickly set it up, and … it works. It’s not perfect, but I don’t need it to be perfect. I want it to just work, and so far it does. It gets out of my way, unlike GNOME shell which kept trying to get in my way. Plus it’s fast.
So kudos to XFCE. I think I’ll stick with it.
Syndicated 2012-04-15 18:50:49 from The Spectre of Math
Priorities
Two things I saw recently 1) NASA budget for climate research is 1 billion (for all those satellites and all that), 2) Facebook buys instagram for 1 billion.
Now we can see where our priorities (as a society) lie. What I don’t get is that Instagram has software that a reasonably good programmer could have done in a few weekends of binge hacking. It does nothing really new. You could even take fairly off-the-shelf things. Perhaps the servers and the online setup might be costlier, but still, nothing all that strange. To think that this is worth as much to us as figuring out where the next hurricane will hit, or when the ice caps will melt, is “interesting”.
Though it is not totally out of sync with what else is happening. The entire UC system is responsible for several Nobel prizes, innumerable new cures for diseases, and leaps in our understanding of the world, not to mention educating a huge number of students; yet when that system has a budget hole the size of one CEO’s bonus, it’s a huge hit for the university. Something is off in our priorities. Actually there is a very good likelihood that this CEO will die of some cancer that wasn’t cured because we don’t fund science enough.
Syndicated 2012-04-13 18:26:47 from The Spectre of Math
http://math.stackexchange.com/questions/225142/homomorphisms-mathbbz-m-mathbbz-to-mathbbz-n-mathbbz
|
# Homomorphisms $\mathbb{Z}/m\mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}$
Please provide an explicit description of $Hom_{\mathbb{Z}}(\mathbb{Z}/m\mathbb{Z} ,\mathbb{Z}/n\mathbb{Z})$ and also $\mathbb{Z}/m\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/n\mathbb{Z}$.
Thank you
What's the context? What have you tried? – Julian Kuelshammer Oct 30 '12 at 12:02
I solved for $m,n$ coprime. Applying Chinese remainder we get a sort of decomposition. Am I on the right track? – user17090 Oct 30 '12 at 12:05
Yes, yes. What have you got exactly? (You could include it in the text.) For non coprime $n,m$ you will have to consider the $\gcd$ and $\lcm$ of them. – Berci Oct 30 '12 at 12:13
Try specific examples such as ${\mathbb Z}/3$ and ${\mathbb Z}/12$. Write down everything explicitly. – Scott Carter Oct 30 '12 at 12:26
Any element $\psi\in Hom_{\mathbb{Z}}(\mathbb{Z}/m\mathbb{Z} ,\mathbb{Z}/n\mathbb{Z})$ is uniquely determined by where it sends $1$, and is well defined only if $m\cdot\psi(1) = 0$. – Arthur Oct 30 '12 at 13:07
## 1 Answer
Here is a start: You have the exact sequence
$$0 \rightarrow \Bbb{Z} \stackrel{f}{\longrightarrow} \Bbb{Z} \longrightarrow \Bbb{Z}/m \longrightarrow 0.$$
where $f$ is multiplication by $m$. Now apply $\textrm{Hom}(-,\Bbb{Z}/n)$ to get that
$$0 \rightarrow \textrm{Hom}(\Bbb{Z}/m,\Bbb{Z}/n) \rightarrow \textrm{Hom}(\Bbb{Z},\Bbb{Z}/n) \stackrel{f_\ast}{\rightarrow} \textrm{Hom}(\Bbb{Z},\Bbb{Z}/n)$$
is exact. This tells you that the number of homomorphisms from $\Bbb{Z}/m$ to $\Bbb{Z}/n$ is at most $n$ because the middle term is the cyclic group of order $n$. Now the map $f_\ast$ is defined by $f_\ast(\phi) = \phi \circ f$. What are those $\phi$ such that $f_\ast(\phi) = 0$?
As for the second problem, can you use the Chinese remainder theorem to give you a map $\phi : \Bbb{Z}/m \times \Bbb{Z}/n \longrightarrow \Bbb{Z}/(\textrm{gcd}(m,n))$? If you can, then you have a unique map out of the tensor product which you can show is an isomorphism.
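One can sanity-check the expected answer $\textrm{Hom}(\Bbb{Z}/m,\Bbb{Z}/n) \cong \Bbb{Z}/\gcd(m,n)$ by brute force. A small sketch (my own, not part of the answer above): a homomorphism is determined by the image $k$ of $1$, which is admissible exactly when $mk \equiv 0 \pmod n$.

```python
from math import gcd

def hom_count(m, n):
    """Count homomorphisms Z/m -> Z/n: each is determined by the
    image k of 1, valid exactly when m*k = 0 in Z/n."""
    return sum(1 for k in range(n) if (m * k) % n == 0)

# The count matches gcd(m, n) in every case, consistent with
# Hom(Z/m, Z/n) ~ Z/gcd(m, n) (and Z/m (x) Z/n ~ Z/gcd(m, n)).
for m in range(1, 13):
    for n in range(1, 13):
        assert hom_count(m, n) == gcd(m, n)
print(hom_count(3, 12))  # the Z/3, Z/12 example from the comments -> 3
```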
Thank you for the nice answer. – user17090 Oct 31 '12 at 10:45
http://physics.stackexchange.com/questions/tagged/tensor-calculus?page=2&sort=newest&pagesize=15
|
# Tagged Questions
The tensor-calculus tag has no wiki summary.
### Superfields and the Inconsistency of regularization by dimensional reduction (0 answers, 1k views)
Question: How can you show the inconsistency of regularization by dimensional reduction in the $\mathcal{N}=1$ superfield approach (without reducing to components)? Background and some references: ...

### What is the covariant derivative in mathematician's language? (2 answers, 1k views)
In mathematics, we talk about tangent vectors and cotangent vectors on a manifold at each point, and vector fields and cotangent vector fields (also known as differential one-forms). When we talk ...

### I lost a factor of two in the electromagnetic field tensor (0 answers, 270 views)
I apologize for this simple question, but I lost a factor of 2 and can't find it anymore, so now I'm looking on the internet, perhaps one of you has some information about its whereabouts. :-) ...

### History of Electromagnetic Field Tensor (4 answers, 1k views)
I'm curious to learn how people discovered that electric and magnetic fields could be nicely put into one simple tensor. It's clear that the tensor provides many beautiful simplifications to the ...

### What kind of invariants are proper time and proper length? (2 answers, 414 views)
Under the Lorentz transformations, quantities are classed as four-vectors, Lorentz scalars etc depending upon how their measurement in one coordinate system transforms as a measurement in another ...

### What is the physical meaning of the connection and the curvature tensor? (3 answers, 2k views)
Regarding general relativity: What is the physical meaning of the Christoffel symbol ($\Gamma^i_{\ jk}$)? What are the (preferably physical) differences between the Riemann curvature tensor ($R^i_{\ ...
http://www.encyclopediaofmath.org/index.php/Metric_space
# Metric space
From Encyclopedia of Mathematics
A set $X$ together with a metric $\rho$. The set-theoretic approach to the study of figures (spaces) is based on the study of the relative position of their elementary constituents. A fundamental characteristic of the relative position of points of a space is the distance between them. This approach leads to the idea of a metric space, first suggested by M. Fréchet [2] in connection with the discussion of function spaces. It turns out that sets of objects of very different types carry natural metrics. As metric spaces one may consider sets of states, functions and mappings, subsets of Euclidean spaces, and Hilbert spaces. Metrics are important in the study of convergence (of series, functions) and for the solution of questions concerning approximation.
The development of the theory of metric spaces has proceeded in the following main directions.
### General theory of metric spaces.
Here one studies properties of metric spaces which are invariant relative to isometries: one-to-one and onto mappings which preserve distance (cf. Isometric mapping). Such properties include completeness, boundedness, total boundedness, and widths. Properties of this type are called metric.
### Topological theory of metric spaces.
Its subject is the properties of metric spaces which are preserved under homeomorphisms (cf. Homeomorphism). Among these are compactness, separability, connectedness, the Baire property, and zero dimensionality. Properties of this type are called topological.
### Theory of spaces on which a metric is given that is compatible with some additional algebraic structure (for example, a vector space or a group).
Here one is concerned with Euclidean spaces, pre-Hilbert and Hilbert spaces (cf. Hilbert space) (of any weight), Banach spaces, Banach algebras, Banach lattices (cf. Banach space; Banach algebra; Banach lattice), and countably-normed spaces (cf. Countably-normed space). The facts available here are essentially connected with the discussion of important properties of metrics or norms, but the content, on the whole, belongs to the corresponding domains of algebra and functional analysis.
The discussion of particular metrics plays an important role in investigations in non-Euclidean geometries, differential geometry, mechanics, and physics. Here a central place is occupied by the notion of a Riemannian metric in a Riemannian space (see Riemannian geometry). A broader approach to the study of the surfaces and figures that arise in differential geometry is related to the concept of a $G$-space, resulting from the addition of certain conditions to the metric axioms (see Geodesic geometry), which creates a basis for the discussion of geodesics in $G$-spaces by ensuring their existence and nice properties. Typical here is the abandoning of the methods of differential calculus. In this connection it turns out that much of differential geometry is not connected with differentiability conditions, but is determined only by geometric axioms. Geodesic geometry is of interest not only as a generalization of Riemannian geometry, but also as an attempt to investigate geometric objects more geometrically, without using complicated computations.
In each set $X$ a metric can be defined by the following rule: $\rho(x,y) = 1$ if $x \neq y$, and $\rho(x,y) = 0$ if $x = y$. This is called the trivial metric. Each metric $\rho$ on a set $X$ gives rise, in a natural way, to a topology on $X$. The concept of a topological space axiomatizes the relation of absolute nearness of a point to a set, whereas the concept of a metric space formalizes the notion of relative nearness of points. The distance of a point $x$ to a set $A$ in a metric space $(X,\rho)$ is defined as $\rho(x,A) = \inf\{\rho(x,a) : a \in A\}$. A point $x$ is said to be absolutely near to a set $A$ if $\rho(x,A) = 0$. The closure $\bar{A}$ of $A$ in $X$ is the set of all points of $X$ absolutely near to $A$. The topology in $X$ uniquely associated with this closure operation is called the topology generated in $X$ by the metric $\rho$. The trivial metric leads to the discrete topology, in which all sets are closed (and open).
In research on metric spaces (particularly on their topological properties) the idea of a convergent sequence plays an important role. This is explained by the fact that the topology of a metric space can be completely described in the language of sequences.
Let $\{x_n\}$ be a sequence of points in a metric space $(X,\rho)$. It is said to converge to a point $x \in X$ if for each $\epsilon > 0$ there is an integer $N$ such that $\rho(x_n,x) < \epsilon$ for all $n \geq N$. The sequence is called fundamental if for each $\epsilon > 0$ there is an integer $N$ such that $\rho(x_n,x_m) < \epsilon$ for all $n,m \geq N$.
An important metric property is completeness. A metric space is called complete if each fundamental sequence in it converges to a point of it. Completeness of a metric space is not a topological property: a metric space homeomorphic to a complete metric space may be non-complete; for example, the real line with the usual metric is homeomorphic to the open interval $(0,1)$ with the same metric, yet the first metric space is complete and the second is not.
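To make the failure of completeness concrete, here is a small illustration (my own sketch, not from the article): Newton's iteration for $\sqrt{2}$ yields a fundamental sequence of rational numbers with no rational limit, which is exactly why the rationals with the usual metric are not complete.

```python
from fractions import Fraction

# Newton iteration x_{k+1} = (x_k + 2/x_k)/2 for sqrt(2), in exact
# rational arithmetic: every term lies in Q.
x = Fraction(1)
terms = [x]
for _ in range(6):
    x = (x + 2 / x) / 2
    terms.append(x)

# The sequence is fundamental: consecutive distances shrink fast...
gaps = [abs(terms[i + 1] - terms[i]) for i in range(len(terms) - 1)]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))

# ...but x*x approaches 2 without ever reaching it: the limit sqrt(2)
# is irrational, hence outside the space Q.
assert abs(x * x - 2) < Fraction(1, 10**20)
assert x * x != 2
```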
Examples of complete metric spaces are Euclidean and Banach spaces. An important property of complete metric spaces, preserved under homeomorphisms, is the Baire property, on the strength of which each complete metric space without isolated points is uncountable. Therefore the usual topological space of the rational numbers is not generated by any complete metric. However, each metric space may be represented as a subset of some complete metric space by the standard construction of completion. Two fundamental sequences $\{x_n\}$ and $\{y_n\}$ in a metric space $(X,\rho)$ are called equivalent if
$$\lim_{n \to \infty} \rho(x_n,y_n) = 0.$$
Let $\tilde{X}$ be the resulting collection of equivalence classes. A metric $\tilde{\rho}$ is introduced on $\tilde{X}$ by the rule: if $\xi, \eta \in \tilde{X}$ and $\{x_n\} \in \xi$, $\{y_n\} \in \eta$, then
$$\tilde{\rho}(\xi,\eta) = \lim_{n \to \infty} \rho(x_n,y_n).$$
For $x \in X$, let $f(x) = \xi_x$, where $\xi_x$ is the class of the sequence $\{x_n\}$ with $x_n = x$ for all $n$. Then $(\tilde{X},\tilde{\rho})$ is a complete metric space and $f$ is an isometry of $X$ onto an everywhere-dense subset in $\tilde{X}$ ($\tilde{X}$ is called the completion of $X$).
Related to the discussion of completion is Lavrent'ev's theorem on the extension of homeomorphisms. This theorem implies that the property of a metric space being a $G_\delta$-set in its completion is a topological invariant (in contrast with the non-invariance of metric completeness itself relative to homeomorphisms).
Two metrics $\rho_1$ and $\rho_2$ on a set $X$ are called topologically equivalent if the topologies generated by them coincide. On a finite set all metrics are equivalent; they generate the discrete topology. The Aleksandrov–Hausdorff theorem: a metric on a set $X$ is topologically equivalent to some complete metric if and only if $X$ is a $G_\delta$-set in the completion of $X$. In particular, the space of irrational numbers with the usual metric, relative to which it is not complete, is homeomorphic to the complete metric Baire space whose points are the infinite sequences $x = (x_1, x_2, \dots)$ of natural numbers, with distance given by $\rho(x,y) = 1/k$, where $k$ is such that $x_k \neq y_k$ and $x_i = y_i$ for all $i < k$.
The following example of a complete metric space is important: the space $C[0,1]$ of all continuous functions on $[0,1]$, with metric defined by the rule
$$\rho(f,g) = \max_{x \in [0,1]} |f(x) - g(x)|$$
for all $f, g \in C[0,1]$. The space $C[0,1]$ is separable — there is a countable everywhere-dense set in it (cf. Separable space). It turns out that each separable metric space is isometric to some subset of $C[0,1]$ (the Banach–Mazur theorem). This result means that all metrics which generate separable topologies are obtained by restricting the natural metric on the set of continuous functions.
A subset $A$ of a complete metric space $(X,\rho)$, equipped with the same metric (more precisely, its restriction to $A$), is a complete metric space if and only if $A$ is closed in $X$.
There is a fundamental connection between the ideas of completeness and compactness for a metric space. Compactness of a metric space $X$ is equivalent to any of the following conditions: 1) any sequence in $X$ contains a convergent subsequence; 2) each countable open covering (cf. Covering (of a set)) of $X$ contains a finite subcovering; 3) in any open covering of $X$ there is a finite subcovering; 4) each decreasing sequence of non-empty closed sets in $X$ has a non-empty intersection; and 5) every closed discrete subset of $X$ is finite.
The simplest examples of compact metric spaces are: finite discrete spaces, any interval (together with its end points), a square, a circle, and a sphere. In general, a subset of the Euclidean space $\mathbb{R}^n$, with the usual metric, is compact if and only if it is closed and bounded.
The conditions listed are not all equivalent outside the class of metric spaces (see Compact space). H. Lebesgue (1911) established that for each open covering $\gamma$ of a compact metric space there is a number $\delta > 0$ such that each set of diameter less than $\delta$ is contained in some element of $\gamma$. This implies the fundamental property of compact metric spaces: every continuous mapping of such a space into an arbitrary metric space is uniformly continuous (cf. Uniform continuity). Further, a metric space is compact if and only if each real-valued continuous function on it is bounded (and attains its least and greatest values).
Each compact metric space is complete, but the converse is false; the simplest example is an infinite discrete space with the trivial metric. However, the following characteristic property holds: A metric space is compact if and only if every metric space homeomorphic to it is complete.
It is intuitively clear that compactness implies, besides completeness, some sort of boundedness; this is confirmed by the consideration of compact subsets of $\mathbb{R}^n$. In general, a metric space $(X,\rho)$ is called bounded if there is a real number $C$ such that $\rho(x,y) \leq C$ for all $x,y \in X$. Every compact metric space is bounded. A space with the trivial metric is complete and bounded, but not compact if it is infinite; thus, completeness and boundedness together are not sufficient for compactness in the class of metric spaces. In general, each metric on a set is topologically equivalent to some bounded metric, which is complete if the given metric is complete. In this connection, there is the important notion of total boundedness. A metric space $(X,\rho)$ is called totally bounded if for each $\epsilon > 0$ there is a finite set $A \subset X$ such that $\rho(x,A) < \epsilon$ for all $x \in X$. The set $A$ here is called an $\epsilon$-net in $X$. A metric space is compact if and only if it is complete and totally bounded, and is totally bounded if and only if it is isometric to a subset of some compact metric space. More precisely, total boundedness of a metric space $X$ is equivalent to compactness of its completion $\tilde{X}$. Each subspace of a totally-bounded metric space is totally bounded. All totally-bounded metric spaces (in particular, all compact metric spaces) are separable and have a countable base. Compactness, in general, is not inherited by subsets; a set $A$ is relatively compact in a metric space $X$ if the closure of $A$ in $X$ is a compact metric space. If $X$ is complete, then relative compactness of a set $A$ in $X$ is equivalent to total boundedness of $A$ equipped with the metric $\rho$.
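As a concrete illustration of total boundedness (a minimal numeric sketch of mine, for $[0,1]$ with the usual metric $|x-y|$): an evenly spaced grid is a finite $\epsilon$-net.

```python
def eps_net(eps):
    """A finite eps-net for [0, 1]: grid points spaced eps apart."""
    k = int(1 / eps) + 1
    return [min(i * eps, 1.0) for i in range(k + 1)]

def dist_to_set(x, net):
    """rho(x, A) = inf over a in A of |x - a| (finite A here)."""
    return min(abs(x - a) for a in net)

# Every point of a fine sample of [0, 1] lies within eps of the net,
# so the net witnesses total boundedness at scale eps.
for eps in (0.5, 0.1, 0.01):
    net = eps_net(eps)
    sample = [i / 1000 for i in range(1001)]
    assert all(dist_to_set(x, net) < eps for x in sample)
```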
An important role in functional analysis is played by a criterion for compactness of a set of continuous functions on $[0,1]$ in the metric space $C[0,1]$. This criterion is the following (the Arzelà–Ascoli theorem): a set $F$ is relatively compact in $C[0,1]$ if and only if: 1) there is a number $C$ such that $|f(x)| \leq C$ for all $f \in F$ and all $x \in [0,1]$; and 2) for each $\epsilon > 0$ there is a $\delta > 0$ such that $|f(x_1) - f(x_2)| < \epsilon$ for all $f \in F$ and all $x_1, x_2 \in [0,1]$ for which $|x_1 - x_2| < \delta$.
A mapping $f$ of a metric space $(X,\rho)$ into itself is called a contraction if there is a real number $\alpha < 1$ such that
$$\rho(f(x),f(y)) \leq \alpha \, \rho(x,y)$$
for all $x,y \in X$. An important theorem for complete metric spaces is the contracting- (contraction-) mapping principle (cf. also Contraction-mapping principle): for each such mapping $f$ of a (non-empty) complete metric space into itself there is precisely one fixed point $x = f(x)$.
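The principle also yields a practical algorithm: iterating $f$ from any starting point converges to the unique fixed point. A minimal sketch (mine, for $X = \mathbb{R}$ with the usual metric; $\cos$ is a contraction on $[0,1]$ since $|\cos'| \leq \sin 1 < 1$ there):

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate a contraction f from x0; by the contraction-mapping
    principle the iterates converge to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        y = f(x)
        if abs(y - x) < tol:
            return y
        x = y
    raise RuntimeError("did not converge")

p = fixed_point(math.cos, 0.5)
assert abs(math.cos(p) - p) < 1e-10  # p is (numerically) fixed
```

Uniqueness shows up numerically as well: starting from any other point of the space gives the same limit.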
The topological theory of metric spaces is significantly simpler than the general theory of topological spaces. Below the most important topological properties of metric spaces are given. Here one has in mind properties of the topology that is generated by the metric $\rho$.
Each metric space is normal and even collectionwise normal (cf. Normal space). This permits the extension of continuous real-valued functions from closed subsets of a metric space to the whole space. A stronger result is: for each closed subset $A$ of a metric space $X$ there is a linear mapping $u$ of the space $C(A)$ of all continuous real-valued functions on $A$ to the space $C(X)$ of all continuous real-valued functions on $X$ such that $u(\phi)$ (for any $\phi \in C(A)$) is an extension of $\phi$ and
$$\sup_{x \in X} |u(\phi)(x)| = \sup_{a \in A} |\phi(a)|$$
(Dugundji's theorem). This theorem is related to Hausdorff's theorem on the extension of metrics: if a closed subspace $A$ of a metrizable space $X$ is already metrizable with a metric $\rho$ (generating the topology on $A$ as a subspace of $X$), then it is possible to extend $\rho$ to a metric on the whole of $X$, generating the original topology on $X$. Similar results are valid for totally-bounded metrics and complete metrics.
Research into the topological properties of metric spaces is, to a large extent, based on the following theorem of A.H. Stone: a metric space is paracompact, that is, any open covering $\gamma$ has an open locally finite refinement $\lambda$ (locally finite means that each point has a neighbourhood intersecting only a finite number of elements of $\lambda$, cf. also Paracompact space). The Nagata–Smirnov metrization criterion (see Metrizable space) is based on the paracompactness of metric spaces.
For a metric space there are important theorems on the equivalence of topological properties which are distinct in general topological spaces. Thus, the following cardinal-valued invariants coincide: the density, the weight, the Suslin number, and the Lindelöf number (cf. also Cardinal characteristic). For metric spaces countable compactness, pseudo-compactness and compactness are equivalent. For metric spaces the dimensions $\dim$ (the covering dimension) and $\operatorname{Ind}$ (the large inductive dimension) coincide, and for separable metric spaces the small inductive dimension $\operatorname{ind}$ coincides with $\dim$ and $\operatorname{Ind}$ (see Dimension theory).
Each metric space is star normal; any open covering $\gamma$ of it has an open star refinement $\lambda$, that is, for each point $x$ there is a $V \in \gamma$ containing every $U \in \lambda$ for which $x \in U$. Related to this theorem is the following metrizability criterion (Stone–Arkhangel'skii): A regular space is metrizable by a totally-bounded metric if the space has a countable base. But even a countable regular space need not be metrizable. The simplest example is obtained by adjoining to the discrete natural numbers one exterior point from its Stone–Čech compactification. The criterion for metrizability of a metrizable space by a complete metric is unexpected: it is necessary and sufficient that the space be a $G_\delta$-set in some (and then in any) Hausdorff compactification of it. However, Hausdorff compactifications of metric spaces carry complete information on the topology of the latter; this is clear from Čech's theorem: metric spaces are homeomorphic if and only if their Stone–Čech compactifications are homeomorphic.
A metric space need not have a countable base, but it always satisfies the first axiom of countability: it has a countable base at each point. In addition, each compact set in a metric space has a countable base. Moreover, in each metric space there is a base such that each point of the space belongs to only countably many of its elements — a point-countable base, but this property is weaker than metrizability, even for paracompact Hausdorff spaces. Regular separable spaces satisfying the first axiom of countability need not be metrizable.
The condition for metrizability of a separated topological group is easily found: It is necessary and sufficient that the space of the group satisfies the first axiom of countability; there are then both left-invariant and right-invariant metrics generating the topology.
Connected with each metric space $(X,\rho)$, in a standard way, there is another metric space, namely the space $\exp X$ of its non-empty bounded closed subsets with the Hausdorff metric $d$, which is defined as follows:
$$d(A,B) = \max\Bigl\{ \sup_{a \in A} \rho(a,B),\ \sup_{b \in B} \rho(b,A) \Bigr\}.$$
The space $X$ is isometric to a closed subspace of $\exp X$ (via $x \mapsto \{x\}$). If $X$ is complete, then $\exp X$ is complete. But topological equivalence of two metrics $\rho_1$ and $\rho_2$ given on $X$ does not imply, in general, that the corresponding Hausdorff metrics $d_1$ and $d_2$ are topologically equivalent.
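For finite sets the two suprema in the definition of the Hausdorff metric become maxima, so it is easy to compute directly; here is a small sketch of mine for finite subsets of $\mathbb{R}$ with the usual metric:

```python
def hausdorff(A, B, rho):
    """Hausdorff distance between non-empty finite sets A and B:
    max of (sup over A of dist to B) and (sup over B of dist to A)."""
    d_ab = max(min(rho(a, b) for b in B) for a in A)
    d_ba = max(min(rho(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

rho = lambda x, y: abs(x - y)

# Singletons embed isometrically: d({x}, {y}) = rho(x, y).
assert hausdorff([2.0], [5.0], rho) == 3.0

# An asymmetric example: every point of B is within 0.4 of A, but
# the point 1.0 of A is 0.6 away from B, so d(A, B) = 0.6.
A, B = [0.0, 1.0], [0.0, 0.4]
assert abs(hausdorff(A, B, rho) - 0.6) < 1e-12
```

Taking the max of both one-sided distances is what makes $d$ symmetric; either one-sided distance alone would violate the metric axioms.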
A continuous image of a metric space need not be homeomorphic to any metric space, even when the Hausdorff separation axiom is satisfied. This also applies to quotient spaces of metric spaces. For example, if in the plane one shrinks a fixed line to a point, taking as individual elements of the decomposition all the points of the plane not on the line, then one obtains a non-metrizable separable space, at whose special point the first axiom of countability is not satisfied. There is a general criterion for the metrizability of a quotient space of a metric space (see [6]). In particular, the quotient space associated with a continuous decomposition of a metric space into compact sets is always metrizable. Every Hausdorff space which is a continuous image of a compact metric space is metrizable and compact. This is a particular case of a general proposition on the non-increase of the weight of a topological space under a continuous mapping into a compact set. But even when the image $Y$ of a metric space $(X,\rho)$ is metrizable, the metric realizing the metrization of $Y$ need not be obtained from $\rho$ by means of any formula. Instead of a metric on $Y$, it is natural to define a function $d$ by means of the rule: for $y_1, y_2 \in Y$, $d(y_1,y_2)$ is equal to the distance in the sense of $\rho$ between the inverse images of the points $y_1$ and $y_2$ under the mapping under discussion. Often (for example, if $Y$ is the decomposition space of a metric space into compact sets) $d$ is closely related to the topology of $Y$ and is symmetric. This means that $d(y_1,y_2) = d(y_2,y_1)$ for all $y_1, y_2 \in Y$, and $d(y_1,y_2) = 0$ if and only if $y_1 = y_2$. The symmetric relation thus defined almost never satisfies the triangle axiom (cf. Metric), but if the decomposition into compact sets is continuous, then $d$ has topological properties which successfully replace the triangle axiom and guarantee the metrizability of the image by a "true" metric. A topological space which is the image of a metric space under a continuous open and closed mapping is itself homeomorphic to a metric space.
However, under continuous open mappings, metrizability is not always preserved: All spaces satisfying the first axiom of countability, and only they, are the images of metric spaces under continuous open mappings.
Among the generalizations of metric spaces the most important are pseudo-metric spaces, spaces with a symmetric relation and spaces with $o$-metrics [7]. These are defined axiomatically by a natural weakening of the axioms of a metric space. However, the distance here, as usual, is expressed by a non-negative real number. One may consider generalized metrics with values in ordered semi-groups, semi-fields, etc. (see [8]). In this way a generalized metrization of an arbitrary completely-regular space can be obtained.
A fundamental generalization of the concept of a metric space is the notion of a uniform space. Further, there are purely topological extensions of the class of metric spaces, among which are the important classes of spaces with a uniform base, Moore spaces, feathered and paracompact feathered spaces, and lattice spaces (cf. Moore space; Feathered space). The class of paracompact spaces is too broad a generalization of the class of metric spaces, because paracompactness is not even preserved under finite products. On the contrary, the class of paracompact feathered spaces is a successful simultaneous generalization of the class of spaces homeomorphic to metric spaces and the class of compact spaces. In another direction the idea of a metric generalizes to a -metric and a -metric [4]. The concept of a statistical metric space, introduced by K. Menger, turns out to be topologically equivalent to the idea of a space with a symmetric relation.
#### References
[1] P.S. Aleksandrov, "Einführung in die Mengenlehre und die allgemeine Topologie" , Deutsch. Verlag Wissenschaft. (1984) (Translated from Russian)
[2] M. Fréchet, "Sur quelques points du calcul fonctionnel" Rend. Circ. Mat. Palermo , 22 (1906) pp. 1–74 Zbl 37.0348.02
[3] A.V. Arkhangel'skii, V.I. Ponomarev, "Fundamentals of general topology: problems and exercises" , Reidel (1984) (Translated from Russian) MR785749 Zbl 0568.54001
[4] E.V. Shchepin, "Topology of limit spaces of uncountable inverse spectra" Russian Math. Surveys , 31 : 5 (1976) pp. 155–191 Uspekhi Mat. Nauk , 31 : 5 (1976) pp. 191–226 Zbl 0356.54026
[5] R. Engelking, "General topology" , Heldermann (1989) MR1039321 Zbl 0684.54001
[6] A.V. Arkhangel'skii, "Factor-mappings of metric spaces" Soviet Math. Dokl. , 5 : 2 (1964) pp. 368–371 Dokl. Akad. Nauk SSSR , 155 : 2 (1964) pp. 247–250 Zbl 0129.38104
[7] S.I. Nedev, "-metrizable spaces" Trans. Moscow Math. Soc. , 24 (1971) pp. 213–247 Trudy Moskov. Mat. Obshch. , 24 (1971) pp. 201–236 MR0367935 Zbl 0417.54008 Zbl 0295.54039 Zbl 0257.54025 Zbl 0255.54026 Zbl 0255.54025 Zbl 0248.54034 Zbl 0246.54035 Zbl 0242.54030 Zbl 0242.54029
[8] M.Ya. Antonovskii, V.G. Boltyanskii, T.A. Sarymsakov, "An outline of the theory of topological semi-fields" Russian Math. Surveys , 21 : 4 (1966) pp. 163–192 Uspekhi Mat. Nauk , 21 : 4 (1966) pp. 185–218
[9] S.I. Nedev, M.M. Choban, "-metrics and proximity spaces. Metrization of proximity spaces" Serdica , 1 (1975) pp. 12–28 (In Russian) MR0394579
#### Comments
The trivial metric is also called the discrete metric. A fundamental sequence is also called a Cauchy sequence. Star-normal spaces are also called fully normal.
There are fairly obvious numerical invariants of metric spaces such as width (diameter) and (various kinds of) dimension. A rather more hidden numerical invariant is the Gross dispersion number or rendezvous number, whose existence and uniqueness is guaranteed by the following theorem, [a12]. Let be a compact connected metric space; then there is a unique number such that for all and all sets of points there is a point such that
(a1)
Some examples are as follows. If is a ball of radius in Euclidean -space, then ; if is the -dimensional sphere of unit diameter in Euclidean -space, then, [a15],
where is the gamma-function; if is an equilateral triangle in , then . The theorem guaranteeing the existence of generalizes in two directions. First, can be replaced by any symmetric function (where symmetric means ), [a13], and further the average on the left of (a1) can be replaced by an integral, [a14]. Thus, for a compact connected Hausdorff space and a symmetric function there exists a unique real number such that for any regular Borel probability measure on there is a point such that
The metric spaces on which every continuous function (to any metric space, or just to the real line) is uniformly continuous are studied in [a2]. The simplest description is this: There is a compact subset such that the complement of any neighbourhood of is discrete.
Beyond the completion of a metric space is the injective envelope . In general, is not dense in (e.g., if is a circle, is infinite dimensional), but is an essential extension of ; this means that a non-expansive mapping from to any metric space whose restriction to preserves all distances must preserve all distances in (non-expansive means that for all ). is characterized, up to a unique isometry, as an essential extension of which has no further essential extension. This is equivalent to injectivity in the sense of extendability of mappings, and also to the following Helly-type property: Any family of spherical neighbourhoods satisfying the consistency conditions , for all , has a common point. Because of this equivalence, the injective metric spaces are also called hyperconvex. See [a1], [a5].
The injective envelope of a real Banach space is itself a real Banach space in a unique compatible way. This result is known only from a highly non-constructive proof combining H. Cohen's construction of relative injective envelopes in the category of real Banach spaces [a3], the Aronszajn–Panitchpakdi theorem that an injective real Banach space is an injective metric space [a1], and the Mazur–Ulam theorem that every isometry of real Banach spaces is affine [a9]. Compare [a6].
Injective spaces support a much stronger fixed-point theorem than the contraction-mapping theorem: Every non-expansive mapping of a bounded injective metric space into itself has a fixed point (the Sine–Soardi theorem). However, the extensive development of fixed-point theory of non-expansive mappings has been done mostly in the important special case of convex subsets of Banach spaces. A survey of it, up to 1980, is in [a8]. Compare [a7].
It should be noted that the Stone–Arkhangel'skii metrization criterion involves A.H. Stone, who also proved the paracompactness of metric spaces. In the Stone–Čech compactification, it is M.H. Stone.
It was noted above that two topologically-equivalent metrics on a space do not, in general, give topologically-equivalent Hausdorff metrics on the hyperspace . In fact, they do so if and only if they are uniformly equivalent [a4].
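For finite point sets the Hausdorff metric referred to above can be computed directly. The following Python sketch is illustrative only and is not part of the original article; the function names are made up:

```python
def hausdorff(A, B, d):
    # Hausdorff distance between finite point sets A, B under metric d:
    # the larger of the two directed distances, where the directed
    # distance from X to Y is the worst-case distance from a point of X
    # to its nearest neighbour in Y.
    def directed(X, Y):
        return max(min(d(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

d = lambda x, y: abs(x - y)          # the usual metric on the real line
A, B = [0.0, 1.0], [0.0, 3.0]
assert hausdorff(A, B, d) == 2.0     # the point 3 lies 2 away from A
```

Two metrics on a space that are uniformly equivalent induce the same notion of closeness between sets here, which is the content of the criterion cited from [a4].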
#### References
[a1] N. [N. Aronszain] Aronszajn, P. Panitchpakdi, "Extensions of uniformly continuous transformations and hyperconvex metric spaces" Pacific J. Math. , 6 (1956) pp. 405–439 MR0084762
[a2] M. Atsuji, "Uniform continuity of continuous functions of metric spaces" Pacific J. Math. , 8 (1958) pp. 11–16 MR0099023 Zbl 0082.16207
[a3] H. Cohen, "Injective envelopes of Banach spaces" Bull. Amer. Math. Soc. , 70 (1964) pp. 723–726 MR0184060 Zbl 0124.06505
[a4] D. Hammond Smith, "Hyperspaces of a uniformizable space" Proc. Cambridge Philos. Soc. , 62 (1966) pp. 25–28
[a5] J. Isbell, "Six theorems about injective metric spaces" Comment. Math. Helv. , 39 (1964) pp. 65–76 MR0182949 Zbl 0151.30205
[a6] J. Isbell, "Three remarks on injective envelopes of Banach spaces" J. Math. Anal. Appl. , 66 (1969) pp. 301–306 MR0251512 Zbl 0206.42201
[a7] V.I. Istraţescu, "Fixed point theory" , Reidel (1981) MR0620639 Zbl 0465.47035
[a8] W.A. Kirk, "Fixed point theory for nonexpansive mappings" , Fixed Point Theory , Lect. notes in math. , 886 , Springer (1981) pp. 484–505 MR0643024 Zbl 0479.47049
[a9] S. Mazur, S. Ulam, "Sur les transformations isométriques d'espaces vectoriels" C.R. Acad. Sci. Paris , 194 (1932) pp. 946–948
[a10] J. Dugundji, "Topology" , Allyn & Bacon (1966) MR0193606 Zbl 0144.21501
[a11] J.L. Kelley, "General topology" , Springer (1975) MR0370454 Zbl 0306.54002
[a12] O. Gross, "The rendezvous value of a metric space" M. Dresher (ed.) L.S. Shapley (ed.) A.W. Tucker (ed.) , Advances in game theory , Princeton Univ. Press (1964) pp. 49–53 Zbl 0126.16401
[a13] W. Stadje, "A property of compact connected spaces" Arch. Math. , 36 (1981) pp. 275–280 MR0620518 Zbl 0457.54017
[a14] J. Cleary, S.A. Morris, D. Yost, "Numerical geometry—numbers for shapes" Amer. Math. Monthly , 93 (1986) pp. 260–275 MR0835294 Zbl 0598.51014
[a15] S.A. Morris, P. Nickolas, "On the average distance property of compact spaces" Arch. Math. , 40 (1983) pp. 459–463 MR707736 Zbl 0528.54028
How to Cite This Entry:
Metric space. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Metric_space&oldid=29394
This article was adapted from an original article by A.V. Arkhangel'skii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Chemical_potential
All Science Fair Projects
Science Fair Project Encyclopedia for Schools!
Chemical potential
The chemical potential of a thermodynamic system is the amount by which the energy of the system would change if an additional particle is introduced, with the entropy and volume held fixed. If a system contains more than one species of particle, there is a separate chemical potential associated with each species, defined as the change in energy when the number of particles of that species is increased by one.
The chemical potential is particularly important when studying systems of reacting particles. Consider the simplest case of two species, where a particle of species 1 can transform into a particle of species 2 and vice versa. An example of such a system is a supersaturated mixture of water liquid (species 1) and water vapor (species 2). In equilibrium, the chemical potentials of the two species must be equal, because any increase in one chemical potential would allow particles of that species to transform into the other species with the net emission of heat (see second law of thermodynamics.) In chemical reactions, the equilibrium conditions are generally more complicated because more than two species are involved. In this case, the relation between the chemical potentials at equilibrium is given by the law of mass action.
Since the chemical potential is a thermodynamic quantity, it is defined independently of the microscopic behavior of the system, i.e. the properties of the constituent particles. However, some systems contain important variables that are equivalent to the chemical potential. In Fermi gases and Fermi liquids, the chemical potential at zero temperature is equivalent to the Fermi energy. In electronic systems, the chemical potential is equivalent to the negative of the electrical potential. For systems containing particles which can be spontaneously created or destroyed, such as photons and phonons, the chemical potential is identically zero.
Precise definition
Consider a thermodynamic system containing n constituent species. Its total energy E is postulated to be a function of the entropy S, the volume V, and the number of particles of each species N1,..., Nn:
$E \equiv E(S,V,N_1,..N_n)$
The chemical potential of the j-th species, μj is defined as the partial derivative
$\mu_j = \left( \frac{\partial E}{\partial N_j} \right)_{S,V, N_{i \ne j}}$
where the subscripts simply emphasize that the entropy, volume, and the other particle numbers are to be kept constant.
In real systems, it is usually difficult to hold the entropy fixed, since this involves good thermal insulation. It is therefore more convenient to use the Helmholtz free energy F, which is a function of the temperature T, volume, and particle numbers:
$F \equiv F(T,V,N_1,..N_n)$
In terms of the Helmholtz free energy, the chemical potential is
$\mu_j = \left( \frac{\partial F}{\partial N_j} \right)_{T,V, N_{i \ne j}}$
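The partial-derivative definition can be checked numerically. The sketch below is illustrative only and is not taken from the article: it assumes a classical ideal gas in reduced units ($k_B = 1$, unit thermal wavelength), for which $F = -NT(\ln(V/N) + 1)$ and hence $\mu = T \ln(N/V)$, and compares a central finite difference of $F$ at fixed $T$ and $V$ with that analytic value.

```python
import math

def helmholtz(T, V, N):
    # Classical ideal gas free energy in reduced units (k_B = 1, unit
    # thermal wavelength); a textbook form assumed for this illustration.
    return -N * T * (math.log(V / N) + 1.0)

def chemical_potential(T, V, N, dN=1e-6):
    # mu = (dF/dN) at fixed T, V, approximated by a central difference.
    return (helmholtz(T, V, N + dN) - helmholtz(T, V, N - dN)) / (2.0 * dN)

mu_numeric = chemical_potential(T=2.0, V=100.0, N=50.0)
mu_exact = 2.0 * math.log(50.0 / 100.0)  # analytic mu = T ln(N/V)
```

The two values agree to within the finite-difference error; the same pattern applies to any model free energy $F(T,V,N_1,..N_n)$.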
http://en.wikibooks.org/wiki/X86_Disassembly/Code_Obfuscation
|
# x86 Disassembly/Code Obfuscation
## Code Obfuscation
Code Obfuscation is the act of making the assembly code or machine code of a program more difficult to disassemble or decompile. The term "obfuscation" is typically used to suggest a deliberate attempt to add difficulty, but many other practices will cause code to be obfuscated without that being the intention. Software vendors may attempt to obfuscate or even encrypt code to prevent reverse engineering efforts. There are many different types of obfuscations. Notice that many code optimizations (discussed in the previous chapter) have the side-effect of making code more difficult to read, and therefore optimizations act as obfuscations.
## What is Code Obfuscation?
There are many things that obfuscation could be:
• Encrypted code that is decrypted prior to runtime.
• Compressed code that is decompressed prior to runtime.
• Executables that contain Encrypted sections, and a simple decrypter.
• Code instructions that are put in a hard-to-read order.
• Code instructions which are used in a non-obvious way.
This chapter will try to examine some common methods of obfuscating code, but will not necessarily delve into methods to break the obfuscation.
## Interleaving
Optimizing Compilers will engage in a process called interleaving to try and maximize parallelism in pipelined processors. This technique is based on two premises:
1. That certain instructions can be executed out of order and still maintain the correct output
2. That processors can perform certain pairs of tasks simultaneously.
### x86 NetBurst Architecture
The Intel NetBurst Architecture divides an x86 processor into 2 distinct parts: the supporting hardware, and the primitive core processor. The primitive core of a processor contains the ability to perform some calculations blindingly fast, but not the instructions that you or I are familiar with. The processor first converts the code instructions into a form called "micro-ops" that are then handled by the primitive core processor.
The processor can also be broken down into 4 components, or modules, each of which is capable of performing certain tasks. Since each module can operate separately, up to 4 separate tasks can be handled simultaneously by the processor core, so long as those tasks can be performed by each of the 4 modules:
Port0
Double-speed integer arithmetic, floating point load, memory store
Port1
Double-speed integer arithmetic, floating point arithmetic
Port2
memory read
Port3
memory write (writes to address bus)
So for instance, the processor can simultaneously perform 2 integer arithmetic instructions in both Port0 and Port1, so a compiler will frequently go to great lengths to put arithmetic instructions close to each other. If the timing is just right, up to 4 arithmetic instructions can be executed in a single instruction period.
Notice however that writing to memory is particularly slow (requiring the address to be sent by Port3, and the data itself to be written by Port0). Floating point numbers need to be loaded to the FPU before they can be operated on, so a floating point load and a floating point arithmetic instruction cannot operate on a single value in a single instruction cycle. Therefore, it is not uncommon to see floating point values loaded, integer values be manipulated, and then the floating point value be operated on.
## Non-Intuitive Instructions
Optimizing compilers frequently will use instructions that are not intuitive. Some instructions can perform tasks for which they were not designed, typically as a helpful side effect. Sometimes, one instruction can perform a task more quickly than other specialized instructions can.
The only way to know that one instruction is faster than another is to consult the processor documentation. However, knowing some of the most common substitutions is very useful to the reverser.
Here are some examples. The code in the first box operates more quickly than the one in the second, but performs exactly the same tasks.
Example 1
Fast
```xor eax, eax
```
Slow
```mov eax, 0
```
Example 2
Fast
```shl eax, 3
```
Slow
```imul eax, 8
```
Sometimes such transformations could be made to make the analysis more difficult:
Example 3
Fast
```push $next_instr
jmp $some_function
$next_instr:...
```
Slow
```call $some_function
```
Example 4
Fast
```pop eax
jmp eax
```
Slow
```retn
```
### Common Instruction Substitutions
lea
The lea instruction has the following form:
``` lea dest, (XS:)[reg1 + reg2 * x]
```
Where XS is a segment register (SS, DS, CS, etc...), reg1 is the base address, reg2 is a variable offset, and x is a multiplicative scaling factor. What lea does, essentially, is load the memory address being pointed to in the second argument, into the first argument. Look at the following example:
``` mov eax, 1
lea ecx, [eax + 4]
```
Now, what is the value of ecx? The answer is that ecx has the value of (eax + 4), which is 5. In essence, lea is used to do addition and multiplication of a register and a constant that is a byte or less (-128 to +127).
Now, consider:
``` mov eax, 1
lea ecx, [eax+eax*2]
```
Now, ecx equals 3.
The difference is that lea is quick (because it only adds a register and a small constant), whereas the add and mul instructions are more versatile, but slower. lea is used for arithmetic in this fashion very frequently, even when compilers are not actively optimizing the code.
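The effective-address arithmetic that lea performs can also be modeled directly. The sketch below is illustrative only (the helper function is made up, and Python stands in for the register file); it mirrors the two examples above:

```python
def lea(base, index=0, scale=1, disp=0):
    # Models the effective-address computation [base + index*scale + disp]
    # that lea stores into its destination register.
    return base + index * scale + disp

eax = 1
assert lea(eax, disp=4) == 5              # lea ecx, [eax + 4]     -> ecx = 5
assert lea(eax, index=eax, scale=2) == 3  # lea ecx, [eax + eax*2] -> ecx = 3
```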
xor
The xor instruction performs the bit-wise exclusive-or operation on two operands. Consider then, the following example:
``` mov al, 0xAA
xor al, al
```
What does this do? Let's take a look at the binary:
``` 10101010 ;10101010 = 0xAA
xor 10101010
--------
00000000
```
The answer is that "xor reg, reg" sets the register to 0. More importantly, however, is that "xor eax, eax" sets eax to 0 faster (and the generated code instruction is smaller) than an equivalent "mov eax, 0".
mov edi, edi
On a 64-bit x86 system, this instruction clears the high 32-bits of the rdi register.
shl, shr
Left-shifting, in binary arithmetic, is equivalent to multiplying the operand by 2. Right-shifting is equivalent to integer division by 2, although the lowest bit is dropped. In general, left-shifting by $N$ spaces multiplies the operand by $2^N$, and right-shifting by $N$ spaces is the same as dividing by $2^N$. One important fact is that the resulting number is an integer with no fractional part present. For example:
``` mov al, 31 ; 00011111
shr al, 1 ; 00001111 = 15, not 15.5
```
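The shift identities above are easy to verify exhaustively for small values. This quick Python check is illustrative (Python integers are arbitrary-precision rather than 8-bit registers, but the arithmetic is the same for non-negative values):

```python
for x in range(256):
    for n in range(8):
        assert x << n == x * 2**n    # left shift multiplies by 2**n
        assert x >> n == x // 2**n   # right shift: floor division by 2**n

# The al example above: 31 = 00011111b, shifted right once, is 15, not 15.5.
assert 31 >> 1 == 15
```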
xchg
xchg exchanges the contents of two registers, or a register and a memory address. A noteworthy point is the fact that xchg operates faster than a move instruction. For this reason, xchg will be used to move a value from a source to a destination, when the value in the source no longer needs to be saved.
As an example, consider this code:
```mov ebx, eax
mov eax, 0
```
Here, the value in `eax` is stored in `ebx`, and then `eax` is loaded with the value zero. We can perform the same operation, but using `xchg` and `xor` instead:
```xchg eax, ebx
xor eax, eax
```
It may surprise you to learn that the second code example operates significantly faster than the first one does.
## Obfuscators
There are a number of tools on the market that will automate the process of code obfuscation. These products use a number of transformations to turn a code snippet into a less-readable form without affecting the program flow itself, although the transformations may increase code size or execution time.
## Code Transformations
Code transformations are a way of reordering code so that it performs exactly the same task but becomes more difficult to trace and disassemble. We can best demonstrate this technique by example. Let's say that we have 2 functions, FunctionA and FunctionB. Both of these two functions are comprised of 3 separate parts, which are performed in order. We can break this down as such:
``` FunctionA()
{
FuncAPart1();
FuncAPart2();
FuncAPart3();
}
FunctionB()
{
FuncBPart1();
FuncBPart2();
FuncBPart3();
}
```
And we have our main program, that executes the two functions:
``` main()
{
FunctionA();
FunctionB();
}
```
Now, we can rearrange these snippets to a form that is much more complicated (in assembly):
``` main:
jmp FAP1
FBP3: call FuncBPart3
jmp end
FBP1: call FuncBPart1
jmp FBP2
FAP2: call FuncAPart2
jmp FAP3
FBP2: call FuncBPart2
jmp FBP3
FAP1: call FuncAPart1
jmp FAP2
FAP3: call FuncAPart3
jmp FBP1
end:
```
As you can see, this is much harder for a human to read, although it perfectly preserves the program flow of the original code. It isn't hard at all, however, for an automated analysis tool (such as IDA Pro) to follow.
## Opaque Predicates
An Opaque Predicate is a predicate inside the code that cannot be evaluated during static analysis. This forces the attacker to perform a dynamic analysis to understand the result of the line. Typically it is attached to a branch instruction, so that static analysis cannot determine which code path is taken.
## Code Encryption
Code can be encrypted, just like any other type of data, except that code can also work to encrypt and decrypt itself. Encrypted programs cannot be directly disassembled. However, such a program can also not be run directly because the encrypted opcodes cannot be interpreted properly by the CPU. For this reason, an encrypted program must contain some sort of method for decrypting itself prior to operation.
The most basic method is to include a small stub program that decrypts the remainder of the executable, and then passes control to the decrypted routines.
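As an illustration of the stub idea only (the payload bytes and key below are made up, and real packers are far more elaborate): XOR with a repeating key is its own inverse, so a single routine serves as both the "encrypter" used at packing time and the decrypter embedded in the stub.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying it twice is a no-op,
    # so the same routine encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

code = b"\x31\xc0\xc3"               # xor eax, eax ; ret  (toy payload)
key = b"\x5a\xa5"
packed = xor_crypt(code, key)        # what would be stored in the binary
unpacked = xor_crypt(packed, key)    # what the stub recovers before running it
assert unpacked == code and packed != code
```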
### Disassembling Encrypted Code
To disassemble an encrypted executable, you must first determine how the code is being decrypted. Code can be decrypted in one of two primary ways:
1. All at once. The entire code portion is decrypted in a single pass, and left decrypted during execution. Using a debugger, allow the decryption routine to run completely, and then dump the decrypted code into a file for further analysis.
2. By Block. The code is encrypted in separate blocks, where each block may have a separate encryption key. Blocks may be decrypted before use, and re-encrypted again after use. Using a debugger, you can attempt to capture all the decryption keys and then use those keys to decrypt the entire program at once later, or you can wait for the blocks to be decrypted, and then dump the blocks individually to a separate file for analysis.
http://www.physicsforums.com/showthread.php?p=4204775
Physics Forums
## Every field has a subset isomorphic to rational numbers?
I am reading linear algebra by Georgi Shilov. It is my first encounter with linear algebra. After defining what a field is and what isomorphism means he says that it follows that every field has a subset isomorphic to rational numbers. I don't see the connection.
Quote by Tyler314 I am reading linear algebra by Georgi Shilov. It is my first encounter with linear algebra. After defining what a field is and what isomorphism means he says that it follows that every field has a subset isomorphic to rational numbers. I don't see the connection.
Either you're misinterpreting the statement or it's very wrong. I can't see how a field with a finite number of elements could be isomorphic to Q. Could directly quote the section?
Recognitions: Homework Help That is a good book. As I recall, that statement does not apply to the most general field; there is some qualification. With a qualification (perhaps the field must be of characteristic 0, so that Ʃ1=0 only when the sum is empty) it is obviously true: a/b is just a 1's divided by b 1's.
## Every field has a subset isomorphic to rational numbers?
What is true is that every field has a subfield which is isomorphic to either ##Q## or to the field ##Z_p## (integers modulo ##p##) for some prime ##p##.
I've got a copy of Shilov in front of me, and on page 2 while defining a field (or number field, as he calls it) he writes "The numbers 1, 1+1=2, 2+1=3, etc. are said to be natural; it is assumed that none of these numbers is zero." That is, he is only working with fields of characteristic zero. In this case, it is immediate that every such field contains a subfield isomorphic to the field of rational numbers, i.e. the rationals can be isomorphically embedded in any field of characteristic zero.
Mentor I wouldn't say that it's "immediate", but it's fairly easy to prove. Denote the field by ##\mathbb F##. For each positive integer n, define n1=1+...+1, where 1 is the multiplicative identity of ##\mathbb F##, and there are n copies of 1 on the right. Also define (-n)1=(-1)+...+(-1), and 01=0, where the 0 on the left is the additive identity in the field of integers, and the 0 on the right is the additive identity of ##\mathbb F##. Now you can define a function ##f:\mathbb Q\to\mathbb F## by $$f\left(\frac p q\right)=(p1)(q1)^{-1}.$$ This only makes sense if we can prove that the right-hand side depends only on the quotient p/q, so you would have to do that. Then you would of course also have to prove that this f is a field isomorphism.
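To make the two cases concrete, here is a small illustrative Python sketch (not from the thread): in characteristic ##p## the multiples of 1 already exhaust ##Z_p##, so there is no room for a copy of ##Q##, while in characteristic zero the map ##f(p/q)=(p1)(q1)^{-1}## can be modeled with exact rationals and visibly depends only on the quotient.

```python
from fractions import Fraction

# Characteristic p: the additive multiples of 1 in Z_p cycle through
# every residue, so the subfield generated by 1 is Z_p itself.
p = 7
multiples_of_one = {n % p for n in range(1, p + 1)}
assert multiples_of_one == set(range(p))

# Characteristic 0, with Q itself standing in for the field F:
# f(a, b) models (a*1)(b*1)^{-1}, well defined on the quotient a/b.
one = Fraction(1)

def f(a, b):
    return (a * one) * (b * one) ** -1

assert f(2, 4) == f(1, 2) == Fraction(1, 2)
```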
http://math.stackexchange.com/questions/158631/summation-of-a-series
# Summation of a series.
I encountered this problem in Physics before I knew about a thing called `Taylor Polynomials`. My problem was that I had to sum this series:
$$\sum^\infty_{n=1}\frac{(-1)^{n+1}}{n}$$ basically $$1,-\frac{1}{2},\frac{1}{3},-\frac{1}{4},\frac{1}{5},-\frac{1}{6},\frac{1}{7}.....$$
So now i know that there is something called a taylor polynomial that says that
$$\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\frac{x^5}{5}-\frac{x^6}{6}+\frac{x^7}{7}....$$
So the above summation boils down to $\ln 2$.
What if I never knew the expansion? How would I calculate it then?
Earlier I tried solving it like so:
divide it into two different sets i.e.
$$\text{1 and $\dfrac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{9}+\frac{1}{11}+\frac{1}{13} \ldots$ and $-\dfrac{1}{2}-\frac{1}{4}-\frac{1}{6}-\frac{1}{8}-\frac{1}{10}\ldots$}$$
I said, Hey! The first set would contain stuff like $$\frac{1}{3^n},\frac{1}{5^n},\ldots$$ each of them would probably be reduced to a sum like so $$\sum^\infty_{n=1}\frac1{a^n}=\frac1{a-1}$$ and further become $$\sum^\infty_{a=3}\frac1{a-1}$$ which would subtract all the numbers in the other even set, giving 1 as the answer, which is wrong.
Where did I go wrong, and how could I proceed even without knowing Taylor polynomials?
You cannot re-arrange, or split-up the series in two parts, or use any of that: your series is not absolutely convergent and so any "rearranging" can seriously screw up the sum. – Willie Wong♦ Jun 15 '12 at 12:40
## 3 Answers
I find Norbert's solution more appealing if you run it backwards.
You're trying to evaluate $$1-{1\over2}+{1\over3}-{1\over4}+\cdots$$ Let $$f(x)=x-{1\over2}x^2+{1\over3}x^3-{1\over4}x^4+\cdots$$ Then we want $f(1)$. So, how can we find a simple formula for $f(x)$? Differentiate it: $$f'(x)=1-x+x^2-x^3+\cdots$$ Recognize this as a geometric series, first term $1$, constant ratio $-x$, so the sum is $$f'(x)={1\over1+x}$$ Having differentiated, now antidifferentiate to get $$f(x)=\int{1\over1+x}\,dx=\log(1+x)+C$$ But what is $C$? Well, from the original formula for $f$, we see $f(0)=0$, so that forces $C=0$, so $f(x)=\log(1+x)$, so $f(1)$, which is what we wanted, is $\log 2$.
I assume that you know basic integration and the formula for the sum of an infinite geometric series. Recall that $$\frac{1}{1+t}=\sum\limits_{n=0}^\infty(-t)^n,\quad t\in(-1,1)$$ Then integrate it over the interval $[0,x]$, with $-1< x\leq 1$, to get $$\ln(1+x)= \int\limits_{0}^x\frac{1}{1+t}dt= \int\limits_{0}^x\sum\limits_{n=0}^\infty(-t)^n dt= \sum\limits_{n=0}^\infty(-1)^n\int\limits_{0}^x t^ndt=$$ $$\sum\limits_{n=0}^\infty(-1)^n\frac{x^{n+1}}{n+1}= \sum\limits_{n=1}^\infty\frac{(-1)^{n+1}x^n}{n}$$ Now we substitute $x=1$, and obtain $$\sum\limits_{n=1}^\infty\frac{(-1)^{n+1}}{n}=\ln 2$$ Well, this is a quite elementary solution, but not rigorous enough: I light-heartedly interchanged summation and integration. But such things won't bother physicists much.
http://mathoverflow.net/questions/108737/small-categories-and-completeness/108742
Small categories and completeness
(1) Can a small category be cocomplete? Meaning, have all small colimits? I'd be glad to see an example.
(2) Suppose $\mathcal C$ is a small category, with $Ob(\mathcal C)$ of cardinality $\kappa$. May $\mathcal C$ have all small limits of cardinality $\leq \kappa$? Does this now allow examples which were ruled out in (1)?
2 Answers
Small (co)complete categories are posets by a theorem of Freyd. If $C$ has all small coproducts and its set of morphisms $C_1$ is small, then $C(x,y)^{C_1}\simeq C(\coprod_{f\in C_1} x, y)\subseteq C_1$. If $|C(x,y)|>1$, the left-hand side has cardinality at least $2^{|C_1|}$, so $C_1$ would have a subset of strictly greater cardinality than itself: contradiction.
A poset that has suprema and infima of all of its subsets is a complete category.
I didn't know this theorem, it's really a beautiful piece of math. Probably, the same argument shows that you can only get (2) with posets. – Fernando Muro Oct 3 at 19:57
You have to be careful about (2), since the OP seems to be measuring the size of a small category by the cardinality of its set of objects, not its set of morphisms. That measure might be a bad idea, by the way. – Todd Trimble Oct 3 at 20:44
@Todd Trimble: since every object has an identity morphism, the set of morphisms has cardinality at least that of the set of objects. I forgot to mention that. – Wouter Stekelenburg Oct 4 at 8:06
@Todd Trimble: What is OP? In addition, I see what you mean that measuring a category's size only by the cardinality of its objects isn't such a good idea, as it might have a much greater cardinality of morphisms. Is there any other reason? – Shlomi A Oct 4 at 9:24
I think that the best definition of cardinal for categories (and the one I've seen most used) is the cardinal of the set of all morphisms in a skeleton (i.e. a subcategory with one object for each isomorphism class). – Fernando Muro Oct 4 at 11:14
(1) Yes, the trivial (final) category, with only one object and one morphism (the identity on the unique object).
(2) If you only asked for colimits of cardinality $<\kappa$ then you would find many examples, e.g. finite abelian groups ($\kappa = \aleph_0$ here). If you insist on $\leq\kappa$ I think you face the same problem as above. Just think of vector spaces of dimension $<\kappa$. This category, up to isomorphism, has $\leq \kappa$ objects, but it doesn't have colimits of size $\leq \kappa$, since the coproduct of $\kappa$ copies of the ground field has dimension exactly $\kappa$.
I've been speaking about colimits in (2) instead of limits, which is what you asked for, so take opposite categories.
Colimits is fine for me. Your example of vector spaces of dimension $< \kappa$ is clear. However, is there some more general argument that shows that any small category with $Ob(\mathcal C)$ of cardinality $\kappa$ cannot have all colimits of size $\kappa$? – Shlomi A Oct 3 at 22:14
@Shlomi, probably not; now I'd try to prove that any category satisfying your conditions in (2) is a poset. Have a try. – Fernando Muro Oct 4 at 6:00
@Fernando, thanks. It seems that a poset with $\kappa$ objects and a maximal element works if one requires all colimits of cardinality $\leq \kappa$. However, I haven't managed to convince myself that such a category (2) must be a poset. – Shlomi A Oct 4 at 10:37
http://mathoverflow.net/questions/30025?sort=newest
Sub-representations of the affine group
Let $F=\mathrm{GF}\left(p^k\right)$ be any finite field. Let $G$ be the group of all affine permutations on $F$ (i.e. permutations of form $x\mapsto ax+b$). Then the set of all functions from $F$ to $\bar{F}$ is a linear representation of $G$, where $g(f)(x)=f(gx)$.
What are all sub-representations of this representation? Is it possible to characterize them?
Note that in this case $\mathrm{gcd}\left(\left|G\right|,\left|F\right|\right)$ is not equal to $1$.
For any $n\geq 0$, the functions representable by a polynomial of degree less than or equal to $n$ clearly form a subrepresentation. The successive quotients are isomorphic to the symmetric powers of the permutation representation of the $x\to ax$ group (the normal subgroup of translations acts trivially) and can be completely described. – Victor Protsak Jun 30 2010 at 9:40
Victor: I don't follow. Your successive quotients are one dimensional or I am not a hare. – Bugs Bunny Jun 30 2010 at 11:31
Sorry for being a bit unclear, but you, guys and hares, have figured it out all by yourselves. – Victor Protsak Jul 2 2010 at 4:41
1 Answer
As Victor explained consider the functions $X^m$ where $X^m(\alpha)=\alpha^m$. As $m$ runs between $0$ and $p^k-1$, these functions form a basis of your space of functions. This is a nice wavy basis, i.e., its elements span one-dimensional subrepresentations under the multiplicative group.
Now you have to take the additive group into account. All you need to do is to apply the binomial formula to $(X+\alpha)^m$ and observe which non-zero $X^t$'s you can get out. This depends on the largest power of $p$ dividing $m$.
In particular, as Victor pointed out, polynomials of degree less than $m$ will span a submodule. But there are more: for instance, the span of $X^p$ and $1$. In general, you will be getting spans of $X^t$ with $t\leq m$ and $t$ divisible by the largest power of $p$ dividing $m$, as well as sums of these gadgets.
Hint: $(X+\alpha)^{p^sn}=(X^{p^s}+\alpha^{p^s})^n$
Does it characterize all sub-representations? Let $c_1, \ldots, c_k$ be integers. Consider the set $S=\{t=a_0+a_1p+a_2p^2+\cdots+a_k p^k \mid a_i<c_i\}$. Then the span of $X^t$ for $t\in S$ will be a sub-representation. Are these all the sub-representations? If yes, how can one prove it? – Klim Efremenko Jun 30 2010 at 13:33
Yes, all sub-representations look like this. Proof: let $a=a_0+a_1p+a_2p^2+\cdots+a_k p^k$; then (by Lucas' theorem) ${a \choose b}$ is non-zero mod $p$ only if $b_i\leq a_i$ for all $i$. – Klim Efremenko Jun 30 2010 at 14:13
I think these are all subrepresentations. To prove this, use complete reducibility as a representation of the multiplicative group. This will tell you that any subrepresentation (of affine group) is spanned by some of $X^t$ and all it remains is to invert some funny determinant, which proves that the span of $(X+\alpha)^m$ is what I said it is. Judging by you comment, you have done it already. – Bugs Bunny Jun 30 2010 at 16:00
http://mathoverflow.net/questions/78581/feasibility-of-linear-equations-with-few-variables-mod-k/78590
## Feasibility of linear equations with few variables mod k
Say I want to verify the feasibility of
$$Ax \equiv b \text{ (mod } k)$$
where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^{m}$ and $k \in \mathbb{N}$. Is there a fast way to verify whether there is such an $x$ when $n$ and $k$ are bounded by a constant? Even the case of $n=2$ is of interest to me.
By faster, you mean faster than computing a Hermite normal form of A modulo k? – felix Oct 19 2011 at 17:15
## 2 Answers
Yes: for $k$ prime you use Gaussian elimination; for $k$ composite, use Gaussian elimination with respect to the prime factors.
EDIT @Emil is right, of course. For prime powers, check out:
http://staff.fim.uni-passau.de/forschung/mip-berichte/MIP-0101.ps
I don’t think this works when $k$ is not square free. For instance, $pAx\equiv pb$ is always solvable mod $p$, but its solvability mod $p^2$ is equivalent to the solvability of $Ax\equiv b\pmod p$. Gaussian elimination does not directly apply to prime powers either, it relies on the ring being a field. – Emil Jeřábek Oct 19 2011 at 15:23
There are two simple approaches.
1. If you prefer to do all of your algebra modulo k, the best way to do this is to compute the Smith Normal form of the matrix A (or more generally, a related matrix). You can do this by multiplication of unimodular matrices on the left and right of A (i.e. elementary row and column operations respectively) to compute greatest common divisors.
• First, compute the greatest common divisor of the first column, together with k, in the (1,1) coefficient (which will change the value of the first row in general).
• Then, compute the greatest common divisor of the first row, together with k, again producing the result in the (1,1) coefficient (which will change the value of the first column in general).
... And so forth, until the (1,1) coefficient divides all coefficients in the first row and also the first column; then clear the first row and first column. Repeat this for the second row/column, etc. until you have only coefficients on the diagonal; and all the while, keep track of the products of unimodular matrices which you accumulate on the left and right. Doing so yields unimodular matrices U,V such that S = VAU has only elements on the diagonal; and the system Ax ≡ b (mod k) is then equivalent to the system Sz ≡ Vb (mod k), where we perform the substitution $\mathbf z = U^{-1}\mathbf x$. This system is solvable if and only if each row individually is solvable; and for any solution z, we may obtain a solution x = Uz of the original system.
2. An equivalent approach which is not restricted to arithmetic mod k, and which is likely to be faster in practice, is to reduce to Diophantine equations. To wit: a linear congruence of the form $$\mathbf a_j \cdot \mathbf x \equiv b_j \pmod{k}$$ should instead be interpreted as the equivalent Diophantine equation $$\mathbf a_j \cdot \mathbf x + k d_j = b_j \;,$$ introducing a new auxiliary variable $d_j$ for the multiple of k which is neglected in the modular arithmetic. One can express an entire system Ax ≡ b (mod k) of such equations (involving the same modulus; different moduli for each equation can be easily accommodated) by the system of integer equations $$\Bigl[\;\; A \;\;\Big|\;\; k I \;\;\Bigr] (\mathbf x \oplus \mathbf d) = \mathbf b \;.$$ In reference to Felix's comment above, you then compute a Hermite Normal Form (or any upper-triangular matrix, really) using unimodular transformations. If C = [ A | kI ], then one looks for the unimodular transformation U such that C' = CU is in Hermite normal form (in fact, any upper-triangular form will do); this may be done incrementally by column reduction to clear out rows, starting with the last one. One may then solve the system of equations $$C' \mathbf z = \mathbf b$$ over the reals, performing the substitution $\mathbf z = U^{-1}(\mathbf x \oplus \mathbf d)$. The final several coefficients of z will be fixed, and will be integer-valued if there are any solutions at all to the original system of equations; and the first several will be free parameters, albeit ones which should range only over ℤ. Then, we may solve for x by computing $(\mathbf x \oplus \mathbf d) = U\mathbf z$.
[This answer is completely revised, replacing an answer which I deleted after realizing that it was wrong; I subsequently added a reference to the Smith Normal Form above.]
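Since the original question fixes $n$ and $k$ by a constant, even exhaustive search is constant time; here is a hypothetical Python sketch of a feasibility checker (not the normal-form methods described in the answers, just a brute-force check over all $k^n$ residue vectors, so only sensible for small $n$ and $k$):

```python
from itertools import product

def feasible_mod(A, b, k):
    """Return True if Ax = b (mod k) has a solution x in (Z/kZ)^n.

    A is a list of m rows of length n, b a list of m integers.
    Exhaustive search over all k**n candidates: fine when n, k are bounded.
    """
    n = len(A[0])
    for x in product(range(k), repeat=n):
        if all(sum(a * v for a, v in zip(row, x)) % k == bi % k
               for row, bi in zip(A, b)):
            return True
    return False

print(feasible_mod([[2]], [1], 4))     # 2x = 1 (mod 4) has no solution
print(feasible_mod([[1, 1]], [3], 4))  # x + y = 3 (mod 4) does
```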
As far as I can see, it’s possible that the given solution mod $p^t$ cannot be extended to a solution mod $p^{t+1}$, even though another one can. You’d thus need to keep a list of all solutions of the equation, which can be exponentially many. Or am I missing something? – Emil Jeřábek Oct 19 2011 at 16:59
@Emil: you were right, of course. Please see my completely revised answer. – Niel de Beaudrap Nov 16 2011 at 19:17
http://math.stackexchange.com/questions/59554/algebraic-proof-that-collection-of-all-subsets-of-a-set-power-set-of-n-eleme/59557
# Algebraic proof that collection of all subsets of a set (power set) of $N$ elements has $2^N$ elements
In other words, is there an algebraic proof showing that $\sum_{k=0}^{N} {N\choose k} = 2^N$? I've been trying to do it for some time now, but I can't seem to figure it out.
## 5 Answers
An algebraic proof:
Expand $(1+x)^n$ using binomial theorem which gives $$(1+x)^n={n\choose 0}x^0 + {n\choose 1}x^1 + {n\choose 2}x^2 + {n\choose 3}x^3 + \cdots + {n\choose n}x^n$$
set $x = 1$ hence,
$$(1+1)^n={n\choose 0}1^0 + {n\choose 1}1^1 + {n\choose 2}1^2 + {n\choose 3}1^3 + \cdots + {n\choose n}1^n$$
$$\Rightarrow 2^n={n\choose 0} + {n\choose 1} + {n\choose 2} + {n\choose 3} + \cdots + {n\choose n}$$
$$\Rightarrow 2^n=\sum_{k=0}^{n} {n\choose k}$$
QED!
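The identity is easy to spot-check numerically; a small Python sketch:

```python
from math import comb

# Verify sum_{k=0}^{n} C(n, k) == 2**n for small n.
for n in range(10):
    total = sum(comb(n, k) for k in range(n + 1))
    assert total == 2 ** n, (n, total)
print("sum of C(n, k) over k equals 2^n for n = 0..9")
```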
Another approach is identifying the power set $\mathcal P \, X$ of a set $X$ with the set of functions $X \to 2$ (that is to say, with the set of indicator functions of the subsets). Of course this is only useful if you have any previous results on cardinalities of sets of functions between (finite) sets.
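This bijection is easy to make concrete: each indicator function $\chi : X \to \{0,1\}$ picks out the subset $\{x : \chi(x)=1\}$. A Python sketch (the function name is just for illustration):

```python
from itertools import product

def subsets_via_indicators(xs):
    """Enumerate the power set of xs, one subset per function xs -> {0, 1}."""
    return [{x for x, bit in zip(xs, chi) if bit}
            for chi in product((0, 1), repeat=len(xs))]

subs = subsets_via_indicators(['a', 'b', 'c'])
print(len(subs))  # 2**3 = 8 subsets, one per indicator function
```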
The obvious bijection between the subsets of $X$ and binary numbers between $0$ and $2^{N}-1$ implicitly suggested by this answer seems very natural to me, but maybe a little more explanation might be given. – Geoff Robinson Dec 22 '11 at 17:17
I don't know what you mean by "algebraic". Notice that if $N$ is $0$, we have the empty set, which has exactly one subset, namely itself. That's a basis for a proof by mathematical induction.
For the inductive step, suppose a set with $N$ elements has $2^N$ subsets, and consider a set of $N+1$ elements that results from adding one additional element called $x$ to the set. All of the $2^N$ subsets of our original set of $2^N$ elements are also subsets of our newly enlarged set that contains $x$. In addition, for each such set $S$, the set $S\cup\{x\}$ is a subset of our enlarged set. So we have our original $2^N$ subsets plus $2^N$ new subsets---the ones that contain $\{x\}$. The number of subsets of the enlarged set is thus $2^N + 2^N$.
Now for an "algebraic" part of the argument: $2^N + 2^N = 2^{N+1}$.
This answers the question in the subject heading, but doesn't explicitly deal with the one in the body of the question. – Michael Hardy Aug 24 '11 at 22:25
By the binomial theorem, $\sum_{k=0}^{N} {N\choose k} = (1+1)^N = 2^N$.
However, note that the binomial theorem admits a natural combinatorial proof, which exactly relates coefficients with subsets. So the argument above is not exclusively algebraic in nature.
Another inductive proof that is somewhat more algebraic (in the sense of not using the combinatorial interpretation of $\binom{n}{k}$): check the base cases, and note that
$$\sum_{k=0}^{n-1} \binom{n}{k} + \sum_{k=0}^{n-1} \binom{n}{k+1} = \sum_{k=0}^{n-1} \binom{n+1}{k+1}$$
Of course, the combinatorial proofs are much more enlightening.
http://unapologetic.wordpress.com/2010/07/02/absolute-continuity-ii/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician
## Absolute Continuity II
Now that we’ve redefined absolute continuity, we should tie it back to the original one. That definition makes precise the idea of “smallness” as being bounded in size below some $\epsilon$ or $\delta$, but the new one draws a sharp condition that “small” sets are those of measure zero. As it turns out, in the presence of a finiteness condition the two are the same: if $\nu$ is a finite signed measure and if $\mu$ is any signed measure such that $\nu\ll\mu$, then to every $\epsilon>0$ there is a $\delta$ so that $\lvert\nu\rvert(E)<\epsilon$ for every measurable $E$ with $\lvert\mu\rvert(E)<\delta$.
So, let’s say that the conclusion fails, and there’s some $\epsilon$ for which we can find a sequence of measurable $E_n$ with $\lvert\mu\rvert(E_n)<\frac{1}{2^n}$, and yet $\lvert\nu\rvert(E_n)>\epsilon$ for each $n$. Then we can define $E=\limsup\limits_{n\to\infty}E_n$, and find
$\displaystyle\lvert\mu\rvert(E)\leq\sum\limits_{i=n}^\infty\lvert\mu\rvert(E_i)<\frac{1}{2^{n-1}}$
for each $n$, and thus $\lvert\mu\rvert(E)=0$. But we also find, since $\nu$ is finite,
$\displaystyle\lvert\nu\rvert(E)=\lim\limits_{n\to\infty}\lvert\nu\rvert\left(\bigcup\limits_{i=n}^\infty E_i\right)\geq\limsup\limits_{n\to\infty}\lvert\nu\rvert(E_n)\geq\epsilon$
But this contradicts the assertion that $\nu\ll\mu$.
Let’s make the connection even further by proving the following proposition: if $\mu$ is a signed measure and if $f$ is integrable with respect to $\lvert\mu\rvert$ then we can integrate $f$ with respect to $\mu$ and define
$\displaystyle\nu(E)=\int\limits_Ef\,d\mu$
for every measurable $E$. It’s easy to see that $\nu$ is a finite signed measure, and I say that $\nu\ll\mu$. Indeed, if $\lvert\mu\rvert(E)=0$ then $\mu^+(E)=0=\mu^-(E)$. Thus we see that
$\displaystyle\nu(E)=\int\limits_Ef\,d\mu=\int\limits_Ef\,d\mu^+-\int\limits_Ef\,d\mu^-=0-0=0$
and so $\nu\ll\mu$ as asserted. This extends our old result that indefinite integrals with respect to measures are absolutely continuous in our old sense.
It’s easy to verify that the relation $\ll$ is reflexive — $\mu\ll\mu$ — and transitive — $\mu_1\ll\mu_2$ and $\mu_2\ll\mu_3$ together imply $\mu_1\ll\mu_3$ — and so it forms a preorder on the collection of signed measures. Two measures $\mu$ and $\nu$ so that $\nu\ll\mu$ and $\mu\ll\nu$ are said to be equivalent, and we write $\mu\equiv\nu$.
For example, we can verify that $\mu\equiv\lvert\mu\rvert$. Indeed, $\mu\ll\mu$, and we know this implies that $\lvert\mu\rvert\ll\mu$. Similarly, $\lvert\mu\rvert\ll\lvert\mu\rvert$, which implies that $\mu\ll\lvert\mu\rvert$. This is useful because it allows us to show that $\lvert\mu\rvert(E)=0$ for a measurable set $E$ if and only if $\mu(F)=0$ for all measurable subsets $F\subseteq E$. If $\lvert\mu\rvert(E)=0$, then $\lvert\mu\rvert(F)=0$ since $\lvert\mu\rvert$ is a measure, and then $\mu(F)=0$ since $\mu\ll\lvert\mu\rvert$. Conversely, if $\mu(F)=0$ for all measurable subsets $F\subseteq E$, then in particular $\mu(E)=0$, and thus $\lvert\mu\rvert(E)=0$ since $\lvert\mu\rvert\ll\mu$.
Posted by John Armstrong | Analysis, Measure Theory
http://mathhelpforum.com/advanced-algebra/11048-basis-kernel.html
# Thread:
1. ## basis for kernel
I know that a basis for the kernel is the image of a basis of the nullspace of the standard matrix for the linear transformation.
I found the nullspace for a particular problem to have a basis formed by the single column vector $(1/2,\ -3,\ -1/4,\ 1)^T$.
The original basis was the standard basis {1, x, x^2, x^3}.
Because I am working with polynomials, my professor said that the basis for a kernel must be polynomials. How do I change the nullspace basis given above into a polynomial?
2. Originally Posted by PvtBillPilgrim
I found the nullspace for a particular problem to have a basis formed by the single column vector $(1/2,\ -3,\ -1/4,\ 1)^T$.
The original basis was the standard basis {1, x, x^2, x^3}.
Because I am working with polynomials, my professor said that the basis for a kernel must be polynomials. How do I change the nullspace basis given above into a polynomial?
The column vector that you got is the coordinate vector, meaning the coordinate vector relative to the ordered basis:
(1,x,x^2,x^3)
That means the basis is,
(1/2)(1) + (-3)(x) + (-1/4)(x^2) + (1)(x^3)
Thus, the vector (polynomial in this case) is,
$\frac{1}{2}-3x-\frac{1}{4}x^2+x^3$
This is the basis for the nullspace.
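To make the translation concrete, here is a small Python sketch (the helper name is hypothetical) turning the coordinate vector into a polynomial function relative to the ordered basis $(1, x, x^2, x^3)$:

```python
coords = [1/2, -3, -1/4, 1]  # coordinates relative to (1, x, x^2, x^3)

def kernel_poly(t):
    """Evaluate (1/2) - 3t - (1/4)t^2 + t^3 at t."""
    return sum(c * t ** i for i, c in enumerate(coords))

print(kernel_poly(0))  # the constant term, 1/2
```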
http://mathoverflow.net/questions/55783/elementary-question-in-differential-geometry-closed
## Elementary question in differential geometry [closed]
I am trying to learn differential geometry (i.e., teach myself!) So here is a question that came up.
For some $h > 0$, consider the cone
$C_h = \{ (x,y,z) \; : \; 0 \le z = \sqrt{x^2 + y^2} < h \} \subset \mathbb{R}^3$
endowed with subspace topology. It seems that we can cover this with a single chart $(U,\phi)$ where $U = C_h$ and $\phi$ is the projection $\phi(x,y,z) = (x,y)$. So it seems that this defines a differentiable structure and we get a smooth ($C^\infty$) 2-dimensional manifold. (Is it correct?)
Now consider the inclusion map $i : C_h \to \mathbb{R}^3$; is this map smooth? It doesn't seem to me that it is. The expression of $i$ in the chart above is not smooth at $(0,0)$, and I don't seem to be able to find any other compatible chart around zero in which it has a smooth representation. (I haven't given it much thought, though.) If this is true, how does one show that this map is not smooth? (Also, if this is true, a vague question is whether removing the origin is the only way to fix this problem.)
does smoothness depend on the chart? – roy smith Feb 17 2011 at 21:15
This question might get a better answer at math.stackexchange.com. Roughly speaking, it's at undergraduate level, and MO is mostly for graduate-level-and-above questions. For this reason, I'm going to vote to close. – HW Feb 17 2011 at 21:27
I agree with Henry – Steven Gubkin Feb 17 2011 at 21:33
Okay. Fair enough. I should thank you, Roy, for the hint. I guess you are implying that since smoothness does not depend on the chart, I have shown that the inclusion map is not smooth. Intuitively there is something non-smooth about that point of the pointed cone. I just wanted to confirm that it can be made into a smooth manifold as above (which if true is odd to me and interesting!) and that what is wrong shows itself for example as the non-smoothness of the inclusion map into R^3. (I was also wondering if it is possible to remedy this somehow or is this in some sense intrinsic.) – passerby51 Feb 17 2011 at 22:33
http://qchu.wordpress.com/2012/11/09/short-cycles-in-random-permutations/
# Annoying Precision
## Short cycles in random permutations
November 9, 2012 by Qiaochu Yuan
Previously we showed that the distribution of fixed points of a random permutation of $n$ elements behaves asymptotically (in the limit as $n \to \infty$) like a Poisson random variable with parameter $\lambda = 1$. As it turns out, this generalizes to the following.
Theorem: As $n \to \infty$, the numbers of cycles of length $1, 2, ... k$ of a random permutation of $n$ elements are asymptotically independent Poisson with parameters $1, \frac{1}{2}, ... \frac{1}{k}$.
This is a fairly strong statement which essentially settles the asymptotic description of short cycles in random permutations.
Proof
We will prove pointwise convergence of moment generating functions. First, the Poisson random variable $X_{\lambda}$ with parameter $\lambda$ is the random variable which takes on non-negative integer values with probabilities
$\displaystyle \mathbb{P}(X_{\lambda} = k) = e^{-\lambda} \frac{\lambda^k}{k!}$.
$X_{\lambda}$ therefore has moment generating function
$\displaystyle \mathbb{E}(e^{t X_{\lambda}}) = e^{-\lambda} \sum_{k \ge 0} \frac{e^{tk} \lambda^k}{k!} = e^{\lambda (e^t - 1)}$
which is the exponential generating function of the Touchard polynomials
$\displaystyle T_n(\lambda) = \sum_{k=0}^n S(n, k) \lambda^k$
where $S(n, k)$ are the Stirling numbers of the second kind. These specialize to the Bell numbers when $\lambda = 1$ as expected.
Because we are discussing $k$ random variables, we should compute a joint moment generating function. The joint moment generating function of $k$ independent Poisson random variables with parameters $\lambda_1, ... \lambda_k$ is
$\displaystyle \mathbb{E}\left(\exp \left( t_1 X_{\lambda_1} + ... + t_k X_{\lambda_k} \right)\right) = \exp \left( \lambda_1 (e^{t_1} - 1) + ... + \lambda_k (e^{t_k} - 1) \right)$.
Back to permutations. By the exponential formula, letting $c_k(\sigma)$ denote the number of cycles of length $k$ in a permutation $\sigma$, we have
$\displaystyle \sum_{n \ge 0} \frac{t^n}{n!} \sum_{\sigma \in S_n} z_1^{c_1(\sigma)} ... z_k^{c_k(\sigma)} = \exp \left( z_1 t + ... + z_k \frac{t^k}{k} + \frac{t^{k+1}}{k+1} + ... \right)$
which simplifies to
$\displaystyle \frac{1}{1 - t} \exp \left( (z_1 - 1) t + ... + (z_k - 1) \frac{t^k}{k} \right)$.
It is a general and straightforward observation that if $f(t)$ is a power series with radius of convergence greater than $1$, then $\frac{f(t)}{1 - t}$ has a power series expansion whose coefficients asymptotically approach $f(1)$. Substituting $z_i = e^{t_i}$, we conclude that the coefficients of
$\displaystyle \sum_{n \ge 0} \frac{t^n}{n!} \sum_{\sigma \in S_n} e^{t_1 c_1(\sigma)} ... e^{t_k c_k(\sigma)}$
asymptotically approach
$\displaystyle \exp \left( (e^{t_1} - 1) + ... + (e^{t_k} - 1) \frac{1}{k} \right)$.
But the former is precisely the joint moment generating function of $c_1, ... c_k$, and the latter is precisely the joint moment generating function of independent Poisson random variables with parameters $1, ... \frac{1}{k}$. The conclusion follows.
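The observation about $\frac{f(t)}{1-t}$ used in the proof is easy to check numerically: multiplying by $\frac{1}{1-t}$ turns the coefficients of $f$ into their partial sums, which converge to $f(1)$ when the radius of convergence exceeds $1$. A small Python sketch (my addition) with the test case $f(t) = e^{t/2}$:

```python
import math

# Coefficients of f(t)/(1-t) are the partial sums of the coefficients of f,
# so for f with radius of convergence > 1 they converge to f(1).
# Example: f(t) = exp(t/2), whose n-th coefficient is (1/2)^n / n!.
coeffs = [0.5**n / math.factorial(n) for n in range(50)]

partial_sums = []
s = 0.0
for c in coeffs:
    s += c
    partial_sums.append(s)  # n-th coefficient of f(t)/(1-t)

print(partial_sums[-1], math.exp(0.5))  # both approximately 1.6487
```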
Mean and variance
An exact result which can be deduced using the above methods is that the expected number of $k$-cycles of a random permutation of $n$ elements is $\frac{1}{k}$ if $k \le n$ and $0$ otherwise. It follows that the total expected number of cycles is the harmonic number
$\displaystyle H_n = 1 + \frac{1}{2} + ... + \frac{1}{n} \sim \log n$.
Since Poisson random variables have the same mean and variance, and by the asymptotic independence statement above, we expect the variance of the total number of cycles to also be asymptotic to $\log n$. This is in fact true, and can be shown using the exponential formula as above.
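These asymptotics are easy to see empirically. The following Python simulation (my addition, not part of the original post) estimates the expected number of $k$-cycles of a random permutation of $n = 1000$ elements; the estimates should be close to $\frac{1}{k}$:

```python
import random
from collections import Counter

def cycle_lengths(perm):
    """Yield the cycle lengths of a permutation given as a list of images."""
    seen = [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            yield length

rng = random.Random(0)
n, trials = 1000, 2000
totals = Counter()
for _ in range(trials):
    p = list(range(n))
    rng.shuffle(p)
    totals.update(cycle_lengths(p))

for k in range(1, 6):
    print(k, totals[k] / trials)  # close to 1, 1/2, 1/3, 1/4, 1/5
```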
In The Anatomy of Integers and Permutations, Granville suggested that the decomposition of a random permutation into cycles should be thought of as analogous to the decomposition of a random integer into prime factors. In light of this analogy, the above computation should be thought of as roughly analogous to the Hardy-Ramanujan theorem.
http://math.stackexchange.com/questions/259117/hartshorne-exercise-ii-3-19-b
# Hartshorne Exercise II. 3.19 (b)
How do we prove the following exercise of Hartshorne?
Let $A$ be a subring of an integral domain $B$. Suppose $B$ is a finitely generated $A$-algebra. Let $b$ be a non-zero element of $B$. Then there exists a non-zero element $a$ of $A$ with the following property. If $\psi\colon A \rightarrow \Omega$ is any homomorphism of $A$ to an algebraically closed field $\Omega$ such that $\psi(a) \neq 0$, then $\psi$ extends to a homomorphism $\phi\colon B \rightarrow \Omega$ such that $\phi(b) \neq 0$.
## 1 Answer
Check Proposition $5.23$ in Atiyah-Macdonald.
We are supposed to prove it by ourselves. – Makoto Kato Dec 15 '12 at 7:38
Dear @Makoto: Just refer to the proof there. I am not sure there is any point in reproducing the proof here. It's not altogether trivial. – Rankeya Dec 15 '12 at 7:42
Thanks for the reference. I expect someone would post his original proof, though. You know there are usually several different proofs of a proposition. – Makoto Kato Dec 15 '12 at 7:59
@MakotoKato: So you don't want to copy from book, but you do want to copy from somebody doing your work for you here? What's better about the latter? And didn't you miss a [homework] tag on the question, then? – Henning Makholm Dec 15 '12 at 23:40
If you want a hint instead of a complete solution, then say so in your question. Otherwise this is a perfectly valid answer. – Zhen Lin Dec 18 '12 at 1:31
http://physics.stackexchange.com/questions/tagged/quantum-mechanics?page=4&sort=unanswered&pagesize=30
# Tagged Questions
Quantum mechanics describes the microscopic properties of nature in a regime where classical mechanics no longer applies. It explains phenomena such as the wave-particle duality, quantization of energy and the uncertainty principle and is generally used in single body systems. Use the ...
0answers
69 views
### Identifying fragments when there is a superposition of fragments in quantum Darwinism
In Zurek's theory of quantum Darwinism, information about the pointer states of a system imprint themselves upon fragments of the environment carrying records about the state of the system. ...
0answers
110 views
### Quantum circuit decomposition
I need to construct a universal quantum circuit decomposition for a three-qubit operation where one qubit is the control bit, and a two-qubit unitary operator acts on the other two depending on the ...
0answers
17 views
### Showing that the CHSH inequality is not violated
I can usually work out whether CHSH inequality is violated when the observables that we are measuring and the state we are in is given explicitly, but I'm struggling with the generality of the ...
0answers
34 views
### What does the difference in odds for Bell's inequality tell us about quantum mechanics?
Bell's inequality defines a lower bound for agreement/disagreement between entangled particles. When the experiment is conducted it shows lower odds. What does this tell us? Is it possible that we ...
0answers
36 views
### Question regarding operators and cylindrical coordinates
I have the following problem in my hand: I need to arrive from the Cartesian expression $$x_{j}{\partial_{k}}x_{j}{\partial_{k}}-x_{j}{\partial_{k}}x_{k}{\partial_{j}}$$ to this expression: ...
0answers
41 views
### Why doublons and holons are not bounded in spin-1/2 Hubbard chain?
The Hubbard model reads $$H = -t \sum_{\langle ij \rangle, \sigma} c_{j\sigma}^\dagger c_{i\sigma} + U\sum_i n_{i\uparrow}n_{i\downarrow}$$ In the large $U$ limit and at half-filling, the Hubbard ...
0answers
18 views
### Degeneracy of orbitals in magenetic field
Why is that in an external magnetic field(uniform) the degeneracy of d,f orbitals is lost but the degeneracy of p orbitals remain intact assuming the main cause of losing degeneracy is the difference ...
0answers
51 views
### The gauge-invariance of the probability current
It is simple to show that under the gauge transformation \begin{cases}\vec A\to\vec A+\nabla\chi\\ \phi\to\phi-\frac{\partial \chi}{\partial t}\\ \psi\to \psi ...
0answers
31 views
### Physical significance of effective wave function
In Yanhua Shih's book on quantum optics, the coherence functions are expressed in terms of effective wave function. Here are the expressions for single photon wave packets. To derive the coherence ...
0answers
21 views
### What does it mean to erase the which-path information of something?
In this particular case, I am told that very fast measurements erase which-path frequency information of photons. I'm not really sure what that means though. I do not entirely understand the concept ...
0answers
62 views
### What does this notation mean in terms of quantic numbers, and how to imagine the electrons in this quantic system? (Helium $2^1$ $P$ and $2^3$ $P$)
Helium atom in the $2^1$ $P$ and $2^3$ $P$ excited states Now I'm guessing that 1 electron should be considered in the 1s state, but what about the other? Should I consider the other as simply ...
0answers
51 views
### Hamiltonian matrix propertu
A professor made an statement to prove the variational theorem: Because the Hamiltonian (H operator of quantum physics) is diagonal in its own eigenfunction, the terms in \$\left \langle \Phi _{m} ...
0answers
47 views
### Finding the coefficients of a spinor
From the Schrödinger equation of a system I'm investigating, where the wave function is a 4-component spinor of coefficients $C_1, C_2, C_3, C_4$, I am able to obtain the expression \$\begin{pmatrix} ...
0answers
80 views
### Quantum uncertainty can explain the Riemann Hypothesis?
In the recent paper "Riemann Hypothesis as an Uncertainty Relation" (http://arxiv.org/abs/1304.2435) the author claims that the presence of zeros out of the critical line may lead to the violation of ...
0answers
42 views
### Partial Measure Probability
Let be a $$|\psi\rangle = \dfrac{3}{5\sqrt{2}}|00 \rangle- \dfrac{3i}{5\sqrt{2}}|01 \rangle+ \dfrac{2\sqrt{2}}{5}|10 \rangle - \dfrac{2\sqrt{2} i}{5}|11 \rangle$$ state with two qubits. ...
0answers
32 views
### The physical implementation of quantum annealing algorithm
From that question about differences between Quantum annealing and simulated annealing, we found (in comments to answer) that physical implementation of quantum annealing exists (D-Wave quantum ...
0answers
27 views
### Partial Measurement in Computational Basis
I am reading my lecture, here say: For example, we can measure the first qubit of system described by the state \$|\psi\rangle = \sqrt{\dfrac{2}{3}}|0\rangle \otimes \dfrac{|0\rangle - ...
0answers
57 views
### Quantum harmonic oscillator. Finding operators
Problem: I'm trying to verify that $p_H(T)$ and $x_H(T)$ satisfy the following equations, (by solving the Heisenberg equation): $x_H(t)=x_H(0)cos(\omega t)+(1/m\omega)p_H(0)sin(\omega t)$ ...
0answers
40 views
### Wave equations for two intervals at Potential step
Lets say we have a potential step as in the picture: In the region I there is a free particle with a wavefunction $\psi_I$ while in the region II the wave function will be $\psi_{II}$. Let me ...
0answers
33 views
### Is it easier to determine the number of states with raising/lowering operators or using scattering?
A particle is bound by $$V(x) = \begin{cases}\infty,& x <0 \\ \frac{-32\hbar^2}{ma}, & x\le a \\ 0, & x \le a\end{cases}$$ a) how many states are there? i'm attempting ...
0answers
35 views
### Do the other properties of a particle also have a phase?
Particles have a phase that oscillates in space-time. We know this because particles have a phase frequency (De Broglie wavelength) and this is why they interfere in space, like in the double slit ...
0answers
94 views
### Wave function ansatz for disclinated graphene with spin
I am currently investigating spin dynamics in disclinated graphene. More information about my approach can be found in my other post. I would like to know if my approach is somewhat correct to find ...
0answers
34 views
### Landauer's principle vs Rayleigh–Jeans law
Can we argue based on Landauer's principle that if one bit information is changed inside a blackbody, the total radiated energy should be at least or in order of $kTln2$? If it is so, can we also ...
0answers
44 views
### How large must the Quantum teleportation fidelity have to be in order for it to be useful?
This question relates and stems from my original question. Please read this one and the comments before answering this question. Quantum Teleportation Fidelity I know that for discrete variables ...
0answers
72 views
### Impulse travelling faster than light
There have been conducted many experiments in which light impulses traveled faster than light like the one in Princeton in 2000. This phenomenon has something to do with quantum entanglement. Does ...
0answers
39 views
### Photon detection rate for pure / mixed states coming from single mode point source
Let the pure states be in superposition of horizontal and vertically polarized basis states. They are arriving at the point detector one at a time. So, a pure state is \$|\Psi\rangle = \alpha|V\rangle ...
0answers
29 views
### Thermionic emission, delayed emission and predissociation
In molecular photodissociation, the thermionic emission, delayed emission and predissociation are the same? otherwise, what is the difference between them? My question is not about the solids, but I ...
0answers
55 views
### Did Planck said that his theory is distinct from Zeno's paradoxes?
I remember once reading that Planck or some other prominent figure in quantum physics said that the theory (probably Planck length or Planck time in particular) is not about the thing what Zeno's ...
0answers
31 views
### Implication of Fock matrix elements
In linear combination of atomic orbitals/molecular orbital based quantum chemistry theory, when the block of Fock matrix elements connecting occupied with virtual orbitals is zero, why does this imply ...
0answers
42 views
### Why People talk so much about Feshbach resonance while dealing with Bose-Einstein Condensate (BEC)?
Why People talk so much about Feshbach resonance while dealing with Bose-Einstein Condensate (BEC)? Why not tune the system near the resonance and measure the effect on BEC formation?
0answers
17 views
### Allowed Quantum States- Filkelstein and Rubinstein constraints
So basically i'm doing a report on Finkelstein and Rubinstein constraints. I have a system where the allowed quantum states satisfy ...
0answers
33 views
### Where can I find the Bohr Sommerfield condition?
I need to solve the Hydrogen Atom using the phase integral [Bohr Sommerfield Condition] but I don't know where can I find it. Help me please!
0answers
37 views
### Rotating Frame with degenerate levels
I'm working with a angular momentum transition J=0 -> J=1 with no applied magnetic field; so, the upper level has degeneracy 3. This atom is coupled with an electric field propagatin in the ...
0answers
77 views
### Quantum mechanics, whats possible?
There is a thread in Physicsforums.com which states due to Quantum Mechanics, if you wait long enough diamonds will appear in your pocket, it also states its possible for all your atoms to ...
0answers
102 views
### Polarization photon and Stokes parameters
I have the following situation: About the polarization of the photon, I introduce the basis: Horizontal polarization $|\leftrightarrow>=\binom{1}{0}$ Vertical polarization ...
0answers
36 views
### reference for wavepackets and uncertainty relation
Can someone suggest a reference for a rigorous proof(from harmonic analysis) that for any wavepacket other than the gaussian, we have an inequality ie \delta x \delta k > 1
0answers
55 views
### Why is the transition into N proportional to N+1?
I am having trouble understanding the origin of the bosonic stimulated emission. How can I qualitatively understand why bosons attract each other into similar quantum states. The furthest I ...
0answers
57 views
### What is the link between the density matrix and Hestenes' spinors in geometric algebra?
The density matrix (or state matrix) is a generalization of a wave function that is able to describe incoherent superpositions of an N-state system. It is often written as a matrix and observables are ...
0answers
111 views
### Phase diagram problem for ternary system
For a ternary system, three composites are present. Temperature is also a variable. Assuming that pressure is held constant, what is the minimum number of phases that may be present in a ternary ...
0answers
18 views
### Is there a way to compute or explain if a decay prefers decaying into mainly mass or mainly energy?
Is there a way to compute or explain if a decay prefers decaying into mainly mass or mainly energy ? I know quarks prefer to decay into the most massfull quarks : ...
0answers
45 views
### Dilatations in non-relativistic QM and operator tranformation
I was looking at a QM textbook exercise dealing with dilatations, the transformations are $x \rightarrow x' = \lambda x$ transforming $|\psi\rangle$ into \$|\psi'\rangle = ...
0answers
27 views
### How to find relaxation matrix?
Could anyone help me to understand how to derive equation 9 from 8 in this article I am reading? I am not getting about the matrix $R$, Super operator $\Gamma$.
0answers
139 views
### Newton Gravitational constant $G$, Plank constant $\hbar$ , Speed of Light $c$ : The Dream Team of moderators?
The 3 great constants of Nature are well known : The Speed of light $c$ (special relativity) The Plank constant $\hbar$ (quantum mechanics) The Newton ...
0answers
33 views
### motivating theories structurally from operationalism, not from underlying structured sets (Categorifying Operationalism and Abramsky's Chu spaces)
We normally construct theories by presuming underlying sets, such as sets of space time points, or sets of vectors in a Hilbert space. I think you can show that leads to weaknesses of realism (see ...
0answers
93 views
### Direction vector of a physical quantity matrix
A physical quantity can be represented by the following form: $A = a_1\sigma_1 + a_2\sigma_2 + a_3\sigma_3$ where $\sigma$ matrices are Pauli matrices. Also suppose that there is \$B = b_1\Sigma_1 + ...
0answers
152 views
### Connection between first and second quantization
This is my question: In a book on many body quantum theory I came across equality between antisymmetrized many-particle state vector which, as you know, includes sum over permutations of product ...
0answers
80 views
### Quantum Mutual Information scaling
Wikipedia provides a simple definition of Quantum Mutual Information: $$I(\rho^{ab})= S(\rho^{a}) + S(\rho^{b}) - S(\rho^{ab})$$ where in terms of relative information we have: I(\rho^{ab})= ...
0answers
203 views
### Intensity of the diffraction pattern of the double slit
I am trying another approach for my last unanswered question. (Bounty still on for 3 days. Anyone? Please?) Note that this is not the same question but a greatly simplified version concerning a much ...
0answers
53 views
### Counterpart of the Klein Gordon Equation on the “Coordinate Shell”
The relation $$\psi=Ce^{i/\hbar(Et-\mathbf{p}\cdot\mathbf{x})}\tag{1}$$ satisfies the Klein Gordon equation on the mass shell, i.e. for $E^2=p^2+m^2$. Now let's think in the reverse direction. ...
0answers
102 views
### Can experiment distinguish the basis in which a singlet state is represented?
Let $\left(|\uparrow\rangle,|\downarrow\rangle\right)$ and $\left(|\nearrow\rangle,|\swarrow\rangle\right)$ be two bases of the $2$-dimensional Hilbert space $H$. Can an experiment distinguish ...
http://math.stackexchange.com/questions/292925/cartesian-and-polar-coordinate
# Cartesian and Polar Coordinate
I should give the Cartesian coordinates $(x,y)\in \mathbb{R}\times\mathbb{R}$ and polar coordinates $(r,\varphi)\in \mathbb{R}^+\times [0,2\pi)$ of the following complex numbers:
a) $z_{1}=-i$
b) $z_{2}=\sqrt{3}+i$
c) $z_{3}=3\sqrt{2}\cdot e^{- \frac{\pi}{4}i}$
d) $z_{4}=-4e^{\frac{\pi}{3}i}$
Can someone help me solve this. I found the Cartesian coordinates of a) $(0,-1)$ and b) $(\sqrt{3} \approx 1.73, 1)$, but what are the Cartesian coordinates of $z_{3},z_{4}$, and what should I do to find the polar coordinates?
I just got c) I think. I must use the Euler formula $e^{iz}=\cos z+i\sin z$, so it will be $3\sqrt{2}(\cos(0)+i\sin(-\frac{\pi}{4}))$, right?
For c, you are close, but the $z$ on the right is the same in both places. It should be $\frac \pi 4$ – Ross Millikan Feb 2 at 17:41
A nitpick only, but please don't say things like "$\sqrt{3}=1.73$"--it causes many on this site near-physical anguish. It's okay just to say "$\sqrt{3}$" and not give the approximation, or (if you'd like) to say "$\sqrt{3}\approx 1.73$" with `\approx` to get that $\approx$ symbol. – Cameron Buie Feb 2 at 17:45
@RossMillikan thanks,now i see $\cos{(-\frac{\pi}{4})}$ – Devid Feb 2 at 18:03
## 3 Answers
We know if $z$ is $(x,y)$ in the Cartesian Coordinates and $(r,\theta)$ in the Polar Coordinates,
$x=r\cos\theta$ and $y=r\sin\theta$ where $r$ is conventionally taken as non-negative
So, $x^2+y^2=r^2\implies r=+\sqrt{x^2+y^2}$
and $\tan\theta =\frac yx\implies \theta=\arctan \frac yx,$ the quadrant of $\theta$ will be dictated by the signs of $\sin\theta$ and $\cos\theta$
For the last two cases, we also need Euler Formula or Identity.
For $(c),3\sqrt2e^{-\frac\pi4i}=3\sqrt2(\cos(-\frac\pi4)+i\sin(-\frac\pi4))$ $=3\sqrt2(\frac1{\sqrt2}-i \frac1{\sqrt2})=3-3i$
So, $r=\sqrt{3^2+3^2}=3\sqrt2,\sin\theta= -\frac1{\sqrt2}<0,\cos\theta=\frac1{\sqrt2}>0$ so $\theta$ lies in the 4th Quadrant.
So, $\theta=\arctan(-1)=-\frac\pi4$, which as an angle in $[0,2\pi)$ is $\frac{7\pi}4$
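For anyone who wants to check such conversions mechanically, Python's `cmath` module does the work (a quick sketch, not part of the original answer); `cmath.polar` returns the angle in $(-\pi,\pi]$, so we shift it into $[0,2\pi)$ as the problem requires:

```python
import cmath
import math

zs = [
    -1j,                                               # z1
    math.sqrt(3) + 1j,                                 # z2
    3 * math.sqrt(2) * cmath.exp(-1j * math.pi / 4),   # z3
    -4 * cmath.exp(1j * math.pi / 3),                  # z4
]

for z in zs:
    r, phi = cmath.polar(z)   # phi lies in (-pi, pi]
    phi %= 2 * math.pi        # shift into [0, 2*pi)
    print(f"x={z.real:+.4f}  y={z.imag:+.4f}  r={r:.4f}  phi={phi:.4f}")
```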
Ok so to be sure that I understand it: d) $z_{4}=-4\cdot e^{\frac{\pi}{3}i}$ will be $-4(\cos{\frac{\pi}{3}}+i\sin{\frac{\pi}{3}})=-4(\frac{1}{2}+i\frac{\sqrt{3}}{2})=-\frac{4}{2}-\frac{4\sqrt{3}}{2}i=-2-2\sqrt{3}i$. So the Cartesian coordinates are $(-2,-2\sqrt{3})$, $r=\sqrt{(-2)^2+(-2\sqrt{3})^2}=\sqrt{16}=4$, $\sin{\theta}=\frac{y}{r}=-\frac{\sqrt{3}}{2}$, $\cos{\theta}=\frac{x}{r}=-\frac{1}{2}$, so $\theta$ lies in the 3rd Quadrant, $\theta=\arctan\frac{y}{x}=\arctan(\sqrt{3})$. Is this correct? – Devid Feb 2 at 19:57
@Devid, yes, the value will be $\pi+\frac\pi3$ as the principal value of $\arctan \sqrt3$ is $\frac\pi3$ – lab bhattacharjee Feb 3 at 5:34
But when i do the same with $z_{1}$ i get $arctan \frac{-1}{0}$, but this can't be. – Devid Feb 3 at 10:35
$\phi=\begin{cases} \arctan(\frac{y}{x}) & \text{for } x>0\\ \arctan(\frac{y}{x})+\pi & \text{for} \ x<0 \\\frac{\pi}{2} & \text{for} \ x=0,y>0 \\ -\frac{\pi}{2} & \text{for} \ x=0,y<0 \end{cases}$ – Devid Feb 3 at 10:54
– lab bhattacharjee Feb 3 at 12:38
Hint: $$e^{i\theta}=\cos\theta+i\sin\theta$$
Hint: $z=(x,y)$ then $r = \sqrt {x^2+y^2}$ and $\phi=\tan^{-1}\frac{y}{x}$ and $$r e^{i\phi}= r (\cos \phi + i \sin \phi).$$
Thanks, but what about $z_{1}$? I get $\arctan(\frac{-1}{0})$. – Devid Feb 2 at 21:43
$\phi=\frac{-\pi}{2}$ – Maisam Hedyelloo Feb 3 at 3:20
+1 nice way. – Babak S. Feb 3 at 3:35
@MaisamHedyelloo how did you get $\frac{-\pi}{2}$ ? – Devid Feb 3 at 10:36
ok i got it: $\phi=\begin{cases} \arctan(\frac{y}{x}) & \text{for } x>0\\ \arctan(\frac{y}{x})+\pi & \text{for} \ x<0 \\\frac{\pi}{2} & \text{for} \ x=0,y>0 \\ -\frac{\pi}{2} & \text{for} \ x=0,y<0 \end{cases}$ – Devid Feb 3 at 10:52
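In practice that case analysis is exactly what the two-argument arctangent provides; a small Python check (my addition, not from the original thread), with `math.atan2` returning the angle in $(-\pi,\pi]$:

```python
import math

# math.atan2(y, x) implements the piecewise arctan definition above,
# handling all four quadrants and the x == 0 cases.
print(math.atan2(-1, 0))                   # z1 = -i: gives -pi/2
print(math.atan2(-2 * math.sqrt(3), -2))   # z4: gives -2*pi/3, i.e. 4*pi/3 mod 2*pi
```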
http://conservapedia.com/Galilean_Relativity
# Galilean Relativity
### From Conservapedia
Whilst, in the world of Physics, the term Relativity is usually taken to refer to Einstein’s theories of Special and General Relativity, the concept can be more generally applied to the study of how the laws of physics vary or remain the same to different observers, particularly to observers travelling with different velocities. Galilean Relativity, formulated by Galileo Galilei, is the theory that was most widespread before Einstein, and formed part of the basis of Newtonian Mechanics[1].
## Inertial Frames
Like Einstein, Galileo postulated that the laws of physics remain the same for observers in all inertial (non-accelerating) frames of reference. That is to say that for two bodies moving at different (but constant) velocities, it is impossible to make an absolute determination as to whether one is moving and the other stationary. All that can be determined is their relative velocity. The thought experiment that Galileo used was to consider a passenger in the hold of a ship on a calm sea, who cannot look outside. There is no experiment that he can perform within the hold that will allow him to determine whether the ship is moving or not[2]. He formalized the idea as follows:
Any two observers moving at constant speed and direction with respect to one another will obtain the same results for all mechanical experiments.[3]
## Relative Velocity
Where Galilean Relativity primarily differs from Special Relativity is in the calculation of relative velocity. If, as observed by an observer in an inertial frame of reference, two bodies have velocities of $v_1$ and $v_2$, then their relative velocity $v$ is:
$v = v_1 + v_2$[4]
(Note that if the two bodies are travelling towards each other then one of the velocities will be negative; thus if $v_1$ and $v_2$ are taken simply as magnitudes, then the $v = v_1 - v_2$ more familiar in school is produced.)
Under Special Relativity, the relative velocity is calculated using the Lorentz Transformation, producing:
$v = \cfrac {v_1 + v_2}{1 + \cfrac{v_1v_2}{c^2}}$[5]
where c is the speed of light in a vacuum.
(This is expressed in a simplified non-vector form, assuming that the two velocities are in a single dimension. For vectors, the product $v_1 v_2$ needs to be calculated as a dot product.)
Note that at low velocities (where $v_1 v_2$ is small) $v_1 v_2/c^2$ is close to zero and so the equation gives the same result as the Galilean formulation. Since $c$ is such a large number ($c^2$ being $9\times 10^{16}\ \mathrm{m^2\,s^{-2}}$), the Galilean transformation is sufficiently accurate for the everyday situations which humans encounter, and thus the transformation is sometimes regarded as intuitive. Even for the speeds involved in modern space exploration, Lorentzian adjustments are small. It is only with extremely lightweight bodies (i.e. subatomic particles) that high enough speeds can be achieved, and thus devices such as particle accelerators need to take account of the differences.
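To see how small the relativistic correction is at everyday speeds, here is a short Python comparison of the two formulas (an illustrative sketch, not part of the original article):

```python
C = 299_792_458.0  # speed of light in m/s

def galilean(v1, v2):
    return v1 + v2

def relativistic(v1, v2):
    return (v1 + v2) / (1 + v1 * v2 / C**2)

# Everyday speeds: two cars approaching at 30 m/s each.
print(galilean(30, 30), relativistic(30, 30))  # essentially identical

# Accelerator speeds: two particles at 0.9c each.
print(galilean(0.9 * C, 0.9 * C) / C)       # 1.8 -- the naive sum exceeds c
print(relativistic(0.9 * C, 0.9 * C) / C)   # about 0.9945 -- stays below c
```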
Thus it is debatable whether the Galilean transformation is actually ‘wrong’ since it is still of practical use in many situations – Special Relativity is a refinement under extreme conditions. Similarly, Newtonian Gravitation is perfectly adequate in many situations – General Relativity is a broadly equivalent refinement.
## Moral Relativism
Many liberals (and others) see an analogy between Einsteinian Relativity and Moral Relativism[6], arguing that if there is no absolute frame of reference for velocity, then there is no absolute standard for moral behaviour. However, despite the fact that Galilean Relativity maintains exactly the same tenet, the association with Galilean Relativity is not made. This can only be put down to an ignorance of the history of science.
## References
1. ↑ http://www.wolframscience.com/reference/notes/1041c
2. ↑ http://www.scribd.com/doc/51240234/10/is-for-Salvatius%E2%80%99-Ship-Sailing-along-its-Own-Space-Time-Line
3. ↑ http://physics.ucr.edu/~wudka/Physics7/Notes_www/node47.html
4. ↑ http://psi.phys.wits.ac.za/teaching/Connell/phys284/2005/lecture-01/lecture_01/node5.html
5. ↑ http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/ltrans.html
6. ↑ See, e.g., historian Paul Johnson's book about the 20th century, and the article written by liberal law professor Laurence Tribe as allegedly assisted by Barack Obama.
http://math.stackexchange.com/questions/tagged/banach-spaces+normed-spaces
# Tagged Questions
3answers
77 views
### Show that $c$ is closed in $l^{\infty}$
Let $$c=\{ (a_i)_{i \in \mathbb{N}} \ ; \ \ a_i \in \mathbb{R}\ ,\ \forall i \in \mathbb{N} \ , \ \mbox{exist} \ \displaystyle \lim_{i \to \infty}(a_i)\}$$ l^{\infty}=\{ (a_i)_{i \in \mathbb{N}} \ ...
1answer
25 views
### Good source for Triebel-Lizorkin spaces?
I'm trying to look into different types of function spaces. At the moment, at least for function spaces involving integration, I only have $L^p$ and $W^{k,p}$. The next function spaces I thought I'd ...
1answer
33 views
### How to verify whether $(C_{00},\|\cdot\|_p)$ is complete
How to verify whether $C_{00}=\{(x_n):\text{All but finitely many terms are }0\}$ is complete with respect to $p$-norm given by $$\|(x_n)\|_p=\left(\sum_{n=1}^\infty|x_n|^p\right)^{1/p}$$ where \$1\le ...
3answers
55 views
### Example of two norms on same space, non-equivalent, with one dominating the other
I am looking for an example (with proof) of two norms defined on the same vector space, such that the norms on the two spaces are NOT equivalent, but such that one norm dominates the other...
1answer
49 views
### Normed vector spaces and Banach spaces
Let $X$ be a Banach space with norm $||.||$ and let $S$ be a non-empty subset in $X$. Let $F_b(S,X)$ be the vector space of $F(S,X)$ of all functions $f:S \rightarrow X$ such that \$\{||f(s)||:s \in ...
2answers
60 views
### Banach spaces and quotient space
Let $X$ be a normed vector space, $M$ a closed subspace of $X$ such that $M$ and $X/M$ are Banach spaces. Any hint to prove that $X$ must be a Banach space?
1answer
70 views
### Is $(\ell^1 , \| \cdot \| )$ a Norm space?
Suppose $x= \{x_n \} \in \ell^ 1$ and $\| x \| = \sup | \sum_{k=1}^n x_k |$, let $\|x\|_1 = \sum_{n=1}^{\infty} |x_n |$ is a norm for $\ell^1$ . Is $(\ell^ 1 , \| \cdot \| )$ a Normed ...
1answer
80 views
### Is $(l^1 ,\|.\|)$ a Banach space?
Suppose $x=\{x_n\}\in l^1$ and $\|x\|=\sup|\sum_{k=1}^{n}x_k|$, let $\|x\|_1=\sum_{n=1}^{\infty}|x_n|$ is a norm for $l^1$ . Is $(l^1 ,\|.\|)$ a Banach space?
1answer
30 views
### Proving $\ell_\infty$ is complete
I start by taking a Cauchy sequence $(a_i)$ in $\ell_\infty$. I denote the terms of $(a_i)$ as $f_1, f_2, f_3, \dots$ and so on. For each $x \in \mathbb{N}$, the sequence \$(f_1(x), f_2(x), f_3(x), ...
1answer
33 views
### (p-q)-Lipschitz continuity of linear function
I have the following linear function $f(x,y,z) = ax + by + cz.$ I need to prove that f() is (p-q) Lipschitz continuous where $p=1$ and $q=\infty$. For a given two points $(x_1, y_1, z_1)$ and \$(x_0, ...
2answers
79 views
### $l_1$ equipped with the sup norm is NOT a Banach Space
Prove that $l_1 = \{ x = (x_k)_{k\in\mathbb{N}}\subset \mathbb{R};\ \sum_{k\in\mathbb{N}}\ |x_k| < +\infty \}$ equipped with the norm $\| x\| = \mathrm{sup}_{k\in\mathbb{N}} |x_k|$ is NOT a Banach ...
1answer
43 views
### $L_{k}^{1}([0,1])$ is a Banach space
Let $L_{k}^{1}([0,1])$ be the space of all $f\in C^{k-1}([0,1])$ such that $f^{(k-1)}$ is absolutely continuous on $[0,1]$ (and hence $f^{(k)}$ exists a.e. and is in $L^{1}([0,1])$). Show that ...
3answers
42 views
### generalization of a normed space
I study analysis and have a problem: I have a normed space for example $(X,M)$ that is not complete, how can I complete the space $X$ with respect to norm $M$? please help me Thanks
1answer
53 views
### Banach spaces and their unit sphere
Let $X$ be a normed vector space. Show that if a subsequence of a Cauchy sequence converges, then the whole sequence converges. Use the part 1 to show that $S = \{x\in X : \|x\| = 1\}$ is complete ...
1answer
35 views
### Maximum norm over the complex sequence
Is $C_0$ (the space of all complex sequences that satisfy $\lim_{n\rightarrow \infty }x_n =0$) a Banach space relative to the maximum norm ($\|x\| =\max|x_n|$) and pairwise operations? ...
0answers
75 views
### Is the result still true if we drop completeness?
I know how to prove the following exercise ( from Folland) : If $X$, $Y$ are Banach spaces. $T:X\rightarrow Y$ is a linear map such that $f\circ T\in\operatorname{dual}(X)$ whenever \$f\in ...
1answer
119 views
### Isometric isomorphism
In the case that $L:B_1 \rightarrow B_2$ is a linear mapping of Banach spaces and $L$ is a isometric isomorphism (bijection and $||Lx||_{B_1} = ||x||_{B_2}$) can I say that $L\overline{L}= 1$ is ...
1answer
43 views
### Hypervolume of a $N$-dimensional ball in $p$-norm
Suppose I have a N-dimensional ball with radius R in p-norm: $$\sum_{n=1}^N |x_n|^p = R^p$$ Is there a closed formula for its (hyper)volume? I can't find anything. If there isn't, can we at least ...
1answer
173 views
### About Banach Spaces And Absolute Convergence Of Series
How to prove the following two assertions: If in a normed space $X$, absolute convergence of any series always implies convergence of that series, then $X$ is a Banach space. In a Banach space, ...
1answer
23 views
### A limit superior question in the context of the Neumann series
I'm trying to understand a step in the proof that the Neumann series converges: Let $X$ be a Banach space and $T\in L(X)$ (the space of bounded, i.e. continuous linear maps $X\to X$). It is known ...
0answers
97 views
### Is it a Banach space? If so what is its dual?
Let $(E_n)$ be a sequence of Banach spaces and $(w_n)$ be a sequence of positive real numbers. For $1\leq p <\infty$ define \$\bigoplus\limits_P E_n:=\{(x_n):x_n\in E_n,\sum\limits_n\lVert ...
2answers
127 views
### Prove that $(B, \|-\|_{\infty})$ complete. B the set of bounded real valued functions on [0,1] which are pointwise limit of continuous functions.
Question: Prove that $(B, \|-\|_{\infty})$ is complete. B the set of bounded real valued functions on [0,1] which are pointwise limit of continuous functions on [0,1]. Context: Old exam problem I'm ...
3answers
122 views
### A question about the proof of the open mapping theorem in “Functional Analysis” by Rudin
In the proof of the open mapping theorem in "Functional Analysis" by Rudin, there is the following argument: Let $X$ be a topological vector space in which its topology is induced by a complete ...
2answers
92 views
### on proving that $\|\cdot\|_2$ is a norm on $C[0,1]$
Let $\mathbb{F}$ be either $\mathbb{R}$ or $\mathbb{C}$ and consider the vector space $C[0,1]$, the collection of continuous functions $f\colon[0,1]\to\mathbb{F}$. I want to show that $\|\cdot\|_2$ is ...
1answer
225 views
### Equivalence of reflexive and weakly compact
In a normed space $X$ is there an equivalence between these two proposition? 1) $X$ is reflexive; 2) $B$, the unit ball of $X$, is weakly compact.
1answer
123 views
### $C^1 [0,1]$ with different norm
If the space $C^1 [0,1]$ is equiped with norm $\Vert \cdot\Vert_1$,where $$\Vert f\Vert_1=|f(0)|+\Vert f'\Vert _{C}=|f(0)|+\sup_{t\in [0,1]}|f'(t)|$$ for any $f\in C^1 [0,1]$, is this space Banach? ...
1answer
39 views
### Gaussian type and Euclidean sections
I have a second question about Chapter 9 in Milman and Schechtman's book "Asymptotic theory of finite dimensional normed spaces" (first question here). It's about the proof of Theorem 9.7 (pg. 55). ...
3answers
94 views
### Question about proof that multiplication in Banach algebra is continuous
Here's the proof in my notes: Where does the last inequality come from? If I want to show that it's continuous at $(x,y)$ I can use the inverse triangle inequality to get (\|x^\prime\| + ...
3answers
107 views
### $B(V,W)$ is complete if $W$ is
Let $B(V,W)$ be the space of bounded linear maps from $V$ to $W$. Then it is complete with respect to the operator norm. Can you tell me if my proof is correct? Thanks. It's easy to verify that the ...
1answer
57 views
### Euclidean sections of normed spaces with known cotype
I'm having trouble digesting the proof of Theorem 9.6 in Milman and Schechtman's classic book "Asymptotic theory of finite dimensional normed spaces" (pg. 54). I'm new to functional analysis, so this ...
0answers
131 views
### $C_c(X)$ dense in $L^p$
In class we proved that $C_c(X)$ is dense in $L^p$ where $X$ is a locally compact, $\sigma$-compact Hausdorff space either equipped with a Radon measure or equipped with a locally finite measure ...
1answer
77 views
### Balls and transformed sets in normed vector spaces
Let $T$ be a surjective, continuous linear operator between two Banach spaces $E$ and $F$. Assume that it is $B_F(y_0,4c)\subset \overline{T(B_E(0,1))}$, where $c>0$, $y_0 \in F$ ($B$ is for ...
0answers
162 views
### Question about proof of Stone-Weierstrass
I would like to know if I understand the details in the proof of Stone-Weierstrass (in $\mathbb R$) so I'd like to post it here in my own words. Can you please check it and tell me if it's correct? ...
2answers
119 views
### Proof of the lemma used in proving that a finite-dimensional normed space is complete
I'm trying to understand the proof for the lemma: $$\|\alpha _1 e_1 + \alpha _2 e_2 + \cdots + \alpha_n e_n\| \geq c (|\alpha_1|+|\alpha_2|+\cdots+|\alpha_n|)$$ where $c>0$ and the $e_i$s are ...
1answer
134 views
### Question about proof of Arzelà-Ascoli
(Arzelà-Ascoli, $\Longleftarrow$) Let $K$ be a compact metric space. Let $S \subset (C(K), \|\cdot\|_\infty)$ be closed, bounded and equicontinuous. Then $S$ is compact, that is, for a sequence $f_n$ ...
1answer
172 views
### Completion of $C_c$ with respect to $\|\cdot\|_\Psi$
I'm doing the second half of the following exercise in my lecture notes: "Let $C_c(R)$ be the vector space of continuous functions $f : R \to R$ with \$\mathrm{supp}(f)=\overline{ \{x \in R \mid ...
2answers
173 views
### If $V \times W$ with the product norm is complete, must $V$ and $W$ be complete?
Let $V,W$ be two normed vector spaces (over a field $K$). Then their product $V \times W$ with the norm $\|(x,y)\| = \|x\|_V + \|y\|_W$ is a normed space. Using this norm it's easy to show that if ...
1answer
62 views
### Something weaker than the Riesz basis
I have some function $f$, real valued and continuous. I formed functions $\{f_{m,k}, k \in \mathbb{Z}, m>0\}$ such that that $\mathrm{span}\{f_{m,k}, k \in \mathbb{Z}, m>0\}$ is dense in ...
1answer
83 views
### Banach space, Normed vector space
Help me please with this question. Let $Y$ be a Banach space, $Z$ a normed vector space, and $(T_{n})_{n\in\mathbb{N}}$ a sequence in $B(Y,Z)$ such that for all sequences $(y_{n})_{n\in\mathbb{N}}$ in $Y$ it holds: ...
3answers
78 views
### $\|\cdot\|_1 \leq \|\cdot\|_2 \leq \|\cdot\|_{\infty}$ for functions in $C([0,1])$?
Why does the following hold for continuous functions on $[0,1]$? $\|\cdot\|_1 \leq \|\cdot\|_2 \leq \|\cdot\|_{\infty}$
0answers
45 views
### Existence of the interpolation space
Let $X\subset L_1+L_2$ and let $Y$ be an interpolation space between $L_2$ and $L_{\infty}$. Given $U:X\longrightarrow Y$. My question is the following: does there exist a space $Z\subset Y$ such that ...
2answers
549 views
### Prove that $X'$ is a Banach space
I'm taking a new course on functional analysis and meet with the following problem. If $X$ is a normed space (not necessarily complete), then prove that $X'$ is a Banach space. Definition: When the ...
1answer
167 views
### There exists an isometric embedding
Let $W$ be a closed linear subspace of a normed vector space $V$. Let $i_V: V \to V^{**}$. and $i_W: W \to W^{**}$ be the canonical embeddings of V and W into their second duals. Prove that there ...
1answer
181 views
### Hahn-Banach. Extend the functional by continuity
Let $E$ be a dense linear subspace of a normed vector space $X$, and let $Y$ be a Banach space. Suppose $T_{0}\in\mathcal{L}(E,Y)$ is a bounded linear operator from $E$ to $Y$. Show that $T_{0}$ can ...
1answer
867 views
### Banach Spaces - How can $B,B',B'', B''', B'''',B''''',\ldots$ behave?
(ZFC) Let $\big\langle B,+,\cdot, \:\: \|\cdot\| \:\: \big\rangle$ be a Banach space. Define $\mathbf{B} \; = \;\big\langle B,+,\cdot, \:\: \|\cdot\| \:\: \big\rangle$. Define \$\: \mathbf{B}_0 ...
1answer
163 views
### To construct a counterexample of normed space
Please construct a counterexample for the following: $A$ is normed space and $M$ is a dense subspace of $A$, if there is a functional $f$ such that $f(M) = 0$, then $f=0$. Besides, if $A$ is a Banach ...
1answer
691 views
### Banach space of Lipschitz functions
Let $X$ be a compact metric space, and $F$ the space of all lipschitz functions $X \to \mathbf{C}$. Let $|f|_L$ be the least Lipschitz constant. We endow $F$ with the norm \$||f|| = |f|_L + ...
3answers
334 views
### Is there an easy example of a vector space which can not be endowed with the structure of a Banach space
Let $V$ be a real vector space. Is there always a norm on $V$ such that $V$ is complete with respect to this norm? If not, is there an easy counterexample?
1answer
99 views
### tensorisation of linear map
Let $X$ be a Banach space and $T \colon \ell^2\rightarrow \ell^2$ be a bounded linear map. Suppose that the linear map $T\otimes Id_ {X}:\ell^2\otimes X\rightarrow \ell^2\otimes X$ which maps \$e_i ...
3answers
622 views
### Operator norm on product space
I have a bilinear operator $B\colon X \times Y\to Z$ with $X,Y,Z$ normed spaces, and define a norm on $X \times Y$ by $\lVert(x,y)\rVert = \lVert x\rVert_X + \lVert y\rVert_Y$ (using the respective ...
http://mathhelpforum.com/algebra/202216-finding-root-75-a.html
# Thread:
1. ## Finding the root of 75
I have been told that in order to find the root of 75 you need to factor it
and then do 5 times root of 3 or something like that.
Anyway, why can't I just break 75 into 2 times 35 and so on?
In any case, can you explain it in simple terms?
2. ## Re: Finding the root of 75
Originally Posted by ariel32
I have been told that in order to find the root of 75 you need to factor it
and then do 5 times root of 3 or something like that.
Anyway, why can't I just break 75 into 2 times 35 and so on?
In any case, can you explain it in simple terms?
1. You can easily calculate the square root of a number if the number is a square: $\sqrt{9} = 3$ or $\sqrt{\frac{4}{25}}=\frac{2}{5}$
2. If the number is not a square (75 is not the square of a rational number) you can express $\sqrt{75}$ as an approximation in decimal form. You then have a result with a lot of digits - and it is still not accurate!
Therefore you try to split the number into a product of a square (the square should be as large as possible) and another rational number. You can then calculate the square root of the square factor exactly; the other number remains under the root sign.
Example:
$\sqrt{128} = \sqrt{4 \cdot 32} = \sqrt{16 \cdot 8} = \sqrt{64 \cdot 2}$
Since 64 is the greatest square you'll get $\sqrt{128} = 8 \cdot \sqrt{2}$
3. ## Re: Finding the root of 75
$\sqrt{75}=\sqrt{3\cdot5\cdot 5}$
$\sqrt{3\cdot5^{2}}$
$5\sqrt{3}$
4. ## Re: Finding the root of 75
Rewrite each as a multiple of a square:
√(25 * 3)
√25 * √3
5√3
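The factor-out-the-largest-square procedure described in the replies can be sketched in a few lines of code (the function name and the trial-division approach are my own, not from the thread):

```python
import math

def simplify_sqrt(n):
    """Write sqrt(n) as k * sqrt(m) by pulling out the largest square factor."""
    k = 1
    for p in range(2, math.isqrt(n) + 1):
        while n % (p * p) == 0:   # factor out p^2 as long as it divides n
            n //= p * p
            k *= p
    return k, n  # sqrt(original n) == k * sqrt(n)

print(simplify_sqrt(75))   # (5, 3)  ->  5*sqrt(3)
print(simplify_sqrt(128))  # (8, 2)  ->  8*sqrt(2)
```

This reproduces both worked examples above: 75 = 25 · 3 gives 5√3, and 128 = 64 · 2 gives 8√2.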
http://math.stackexchange.com/questions/223752/rates-of-change/223759
# Rates of change
Quick rates of change question:
A rectangular water tank has a square base with sides of length 0.55m. Water is poured into the tank so that the fluid volume in the tank increases at a constant rate of 0.2m$^3$ per hour. How fast is the water level rising, in metres per hour?
I've worked out the volume of the water tank $V=0.55^3=0.166375$ m$^3$. From the question, $dV/dt=0.2$ m$^3$ per hour, but I do not know what to do next. Any help will be beneficial.
## 1 Answer
Let's write the formula for the volume of water in the tank as a function of depth $h$ and length $l$:
$$V = hl^2.$$
Now, we have that both volume $V$ and depth $h$ are functions of time:
$$V(t) = h(t)l^2.$$
We know that $dV/dt = 0.2 m^3/hr$. We also know that
$$\frac{dV(t)}{dt} = \frac{d}{dt}\left(h(t)l^2\right) = l^2\frac{dh(t)}{dt}.$$
It should be straightforward from here.
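Plugging the numbers from the question into that last relation (a numeric sketch I'm adding for illustration; the variable names are mine):

```python
# dV/dt = l^2 * dh/dt, so dh/dt = (dV/dt) / l^2 for the square-based tank.
l = 0.55          # side of the square base, in metres
dV_dt = 0.2       # fill rate, in m^3 per hour
dh_dt = dV_dt / l**2
print(round(dh_dt, 3))  # ~0.661 metres per hour
```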
http://scicomp.stackexchange.com/questions/5042/backward-euler-method
# Backward Euler method
Can you explain to me how the backward Euler method works? I have seen the formula and tried to understand the method, but what I can't understand is why and how to use the Newton-Raphson method.
Do you have a link to a good tutorial? Something that can help me understand it graphically?
UPDATE
According to what Paul said, could you please tell me if what follows is correct?
The Cauchy problem is like this
$$u'(t) = f\big( u(t), t \big)$$
then, according to the example given by Paul, I'll have something like:
$$u'(t)=\big [u(t) \big]^3$$
Now, the backward Euler method reads:
$$u_{t+1} = u_t + hf\big( u_{t+1}, t+1 \big)\qquad\text{(not really sure about this)}$$
So for, $t = 0$ and $h = 1/2$ I should have the following
$$u_1 = u_0 + \frac{1}{2}(u_1)^3$$
For this equation I have to solve by means of Newton-Raphson before going to the next time $t = 1$:
$$u_1 - u_0 - \frac{1}{2}(u_1)^3 = 0$$
From here I don't know what to do, how to use Newton's method??
UPDATE
I think I finally got it. From backward Euler's method I have:
$$u_{t+1} = u_t + hf(u_{t+1}, t+1)$$
this can be seen as (just like everybody told me):
$$F(u_{t+1})= u_{t+1} - u_t - hf(u_{t+1}, t+1) = 0$$
where $u_{t+1} = x$ so I have
$$F(x)= x - u_t - hf(x, t+1) = 0$$
which now I see a familiar way to use Newton's (or whatever other method) to find the value of $x$
I think your last formula is correct. – Paul♦ Jan 18 at 22:31
## 2 Answers
You would use the backward Euler method to solve a differential equation of the form $u_t = f(u,t)$ where $f$ is not necessarily a linear function in $u$. When $f$ is non-linear, the backward Euler method results in a set of non-linear equations that need to be solved at each time step. Ergo, Newton-Raphson can be used to solve it.
For example, take
$$\frac{du}{dt}=u^3(t)$$
Backward Euler results in
$$\frac{u_{t+1}-u_{t}}{\Delta t}=u_{t+1}^3$$
Since you can't solve for $u_{t+1}$ directly, you have to estimate it by a numerical scheme. This is where Newton-Raphson comes into play. Rearranging this equation so that the RHS is zero, we get
$$\frac{u_{t+1}-u_{t}}{\Delta t}-u_{t+1}^3=0$$
Now we can apply Newton's method to find the root of this equation, $u_{t+1}$, which in turn becomes the solution to the differential equation at the next timestep.
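Paul's example can be sketched in code; the step function, tolerance, and initial guess below are my own choices, not from the original posts:

```python
def backward_euler_step(u_prev, dt, tol=1e-12, max_iter=50):
    """One backward Euler step for u' = u^3: solve
    F(x) = x - u_prev - dt*x**3 = 0 for x by Newton-Raphson."""
    x = u_prev                        # initial guess: the previous value
    for _ in range(max_iter):
        F = x - u_prev - dt * x**3
        dF = 1.0 - 3.0 * dt * x**2    # F'(x)
        x_next = x - F / dF           # Newton update
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

u = 0.5
for step in range(4):                 # four steps of size dt = 0.1
    u = backward_euler_step(u, 0.1)
print(u)
```

Each call to `backward_euler_step` runs the inner Newton iteration to convergence before the outer time loop advances, which is exactly the nesting the answer describes.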
Ok. Almost got it. First, when you say "of the form $u_t = f(u,t)$", this means $f(u,t) = u'\big ( u(t), t \big )$, I ask just because I don't want to mess up with the notation. Then, $du/dt = u^3(t) = f(u, t)$ right? Finally, when the RHS = 0, from the resulting equation $u_t$, $\Delta t$, are known but not $u_{t+1}$ so how do I evaluate that with Newton-Raphson? – BRabbit27 Jan 18 at 17:31
Backward Euler, as you state you know, advances in time using the equation: $U_{t+\Delta t} = U_{t}+ U'(t,U_{t+\Delta t}) \Delta t$.
You can not explicitly evaluate $U'_{t+\Delta t}$ since you don't know $U_{t+\Delta t}$. To make it more clear the equation is:
$X = U(t) + U'(t,X)\Delta t$
Therefore you must solve the equation for X (which is U at the next time step) using some methodology (this is where you use Newton-Raphson in your case).
As far as graphical interpretation, think about your standard Euler. The idea there is that you are taking the current value of $U$ and its derivative at the current time, and assuming that it is nearly linear for some small change in time ($\Delta t$). So you are drawing a straight line from $U(t)$ at a slope of $U'(t)$.
For backwards Euler, all you are doing is using the slope at the end of your line approximation rather than the start of it. As to why you would want to do this, it is a more complicated answer involving the stability of your solution.
I appreciate your help, but still I can't see where Newton-Raphson comes into play... I mean, the equation of backward Euler I got, and obviously I'm trying to compute the value of $U'(t, U_{t+\Delta t})$ which I can't. Newton-Raphson is used to find the zeros of a function, isn't it? Sorry, I can't see how those two things fit... – BRabbit27 Jan 18 at 16:38
All these subindices confuse me A LOT, because when I saw backward Euler combined with Newton-Raphson, there appeared superindices k, k+1... – BRabbit27 Jan 18 at 16:41
I made a slight update to hopefully clarify for you. – Godric Seer Jan 18 at 21:01
http://nrich.maths.org/2363/index?nomenu=1
London is situated at longitude $0^o$, latitude $52^o$ North and Cape Town at longitude $18^o$ East, latitude $34^o$ South. Taking the earth to be a sphere with unit radius (and ultimately scaling by 6367 kilometres for the radius of the earth) work out coordinates for both places, then find the angle LOC where L represents London, O the centre of the earth and C Cape Town. Hence find the distance on the surface of the earth between the two places. If a plane flies at an altitude of 6 kilometres and the journey takes 11 hours what is the average speed?
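One way to check the arithmetic (a sketch I'm adding; the Cartesian formula for a point on the unit sphere is standard, but the variable names are mine):

```python
import math

R = 6367.0  # radius of the earth in km, as given in the problem

def unit_vector(lat_deg, lon_deg):
    """Cartesian coordinates of a point on the unit sphere."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

london = unit_vector(52.0, 0.0)
cape_town = unit_vector(-34.0, 18.0)    # 34 degrees South

dot = sum(a * b for a, b in zip(london, cape_town))
angle = math.acos(dot)                  # angle LOC in radians
surface_distance = R * angle            # great-circle distance in km
flight_distance = (R + 6.0) * angle     # arc flown at 6 km altitude
speed = flight_distance / 11.0          # average speed in km/h

print(round(surface_distance), round(speed))
```

This gives a surface distance of roughly 9700 km and an average speed of a bit under 900 km/h.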
http://unapologetic.wordpress.com/2008/02/19/
# The Unapologetic Mathematician
## Integration gives signed areas
I haven’t gotten much time to work on the promised deconstruction, so I’ll punt to a math post I wrote up earlier.
Okay, let’s look back and see what integration is really calculating. We started in on integration by trying to find the area between the horizontal axis and the graph of a positive function. But what happens as we extend the formalism of integration to handle more general situations?
What if the function $f$ we integrate is negative? Then $-f$ is positive, and $\int_a^b-f(x)dx$ is the area between the horizontal axis and the graph of $-f$. But moving from $f$ to $-f$ is just a reflection through the horizontal axis. The horizontal axis stays in the same place, and it seems the area should be the same. But by the basic rules of integration we spun off at the end of yesterday’s post, we see that
$\displaystyle\int\limits_a^bf(x)dx=-\int\limits_a^b-f(x)dx$
That is, we don’t get the same answer; we get its negative. So, integration counts areas below the horizontal axis as negative. We could also see this from the Riemann sums, where we replace all the function evaluations with their negatives, and factor out a $-1$ from the whole sum.
How else could we extend the formalism of integration? What if we ran it “backwards”, from the right endpoint of our interval to the left? That is, let’s take an “interval” $\left[b,a\right]$ with $a<b$. Then when we partition the interval we should get a string of partition points decreasing as we go along. Then when we set up the Riemann sum we’ll get negative values for each $x_i-x_{i-1}$. We can factor out all these signs to give an overall negative sign, along with a Riemann sum for the integral over $\left[a,b\right]$. The upshot is that we can integrate over an interval from right to left at the cost of introducing an overall negative sign.
We can handle this by attaching a sign to an interval, just like we did to points yesterday. We write $\left[b,a\right]^-=\left[a,b\right]$. Then when we integrate over a signed interval, we take its sign into account. Notice that if we integrate over both $\left[a,b\right]$ and $\left[a,b\right]^-$ the two parts cancel each other out, and we get ${0}$.
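The orientation rule is easy to check numerically. A small sketch (the test function and step count are my own choices):

```python
def riemann_sum(f, a, b, n=100000):
    """Left Riemann sum of f from a to b. Works whether a < b or a > b,
    since the sign of each x_i - x_{i-1} follows the direction of travel."""
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

f = lambda x: x * x
forward = riemann_sum(f, 0.0, 1.0)    # ~ 1/3
backward = riemann_sum(f, 1.0, 0.0)   # ~ -1/3: reversing orientation flips the sign
print(forward, backward)
```

The two sums are (approximately) negatives of each other, just as the integrals over $\left[a,b\right]$ and $\left[a,b\right]^-$ cancel.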
Posted by John Armstrong | Analysis, Calculus | 7 Comments
## Rubik’s 7x7x7 Cube
From Alexandre Borovik I find this video of someone solving the “order seven” Rubik’s Cube.
I’m not about to sit down and work up a solution like we did before, but it shouldn’t be impossible to repeat the same sort of analysis. I will point out, however, that the solver in this video is making heavy use of both of our solution techniques: commutators and a tower of nested subgroups.
The nested subgroups are obvious. As the solution progresses, more and more structure becomes apparent, and is preserved as the solution continues. In particular, the solver builds up the centers of faces and then slips to the subgroup of maneuvers which leaves such “big centers” fixed in place. Near the end, almost all of the moves are twists of the outer faces, because these are assured not to affect anything but the edge and corner cubies.
The commutators take a quicker eye to spot, but they’re in there. Watch how many times he’ll do a couple twists, a short maneuver, and then undo those couple twists. Just as we used such commutators, these provide easy generalizations of basic cycles, and they form the heart of this solver’s algorithm.
Alexandre asked a question about the asymptotic growth of the “worst assembly time” for the $n\times n\times n$ cube. What this is really asking is for the “diameter” of the $n$th Rubik’s group $G_n$. I don’t know offhand what this would be, but here’s a way to get at a rough estimate.
First, find a similar expression for the structure of $G_n$ as we found before for $G_3$. Then what basic twists do we have? For $n=3$ we had all six faces, which could be turned either way, and we let the center slices be fixed. In general we’ll have $\lfloor\frac{n}{2}\rfloor$ slices in each of six directions, each of which can be turned either way, for a total of $12\lfloor\frac{n}{2}\rfloor$ generators (and their inverses). But each generator should (usually) be followed by a different one, and definitely not by its own inverse. Thus we can estimate the number of words of length $l$ as $\left(12\lfloor\frac{n}{2}\rfloor-2\right)^l$. Then the structure of $G_n$ gives us a total size of the group, and the diameter should be about $\log_{\left(12\lfloor\frac{n}{2}\rfloor-2\right)}(|G_n|)$. Notice that for $n=3$ this gives us $20$, which isn’t far off from the known upper bound of $26$ quarter-turns.
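That last estimate is easy to reproduce (a sketch; the group order below is the well-known size of the 3x3x3 cube group, about $4.33\times10^{19}$):

```python
import math

# Checking the diameter estimate above for n = 3.
n = 3
generators = 12 * (n // 2) - 2           # 10 for the 3x3x3 cube
G3 = 43252003274489856000                # |G_3|, the order of the cube group
diameter_estimate = math.log(G3, generators)
print(round(diameter_estimate))          # ~20, matching the text
```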
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
http://en.wikipedia.org/wiki/Center_of_gravity
# Center of mass
"Center of gravity" redirects here. For the military concept, see center of gravity (military). For the precise definition, see centers of gravity in non-uniform fields.
This child's toy uses the principles of center of mass to keep balance on a finger.
In physics, the center of mass of a distribution of mass in space is the unique point where the weighted relative position of the distributed mass sums to zero. The distribution of mass is balanced around the center of mass and the average of the weighted position coordinates of the distributed mass defines its coordinates. Calculations in mechanics are simplified when formulated with respect to the center of mass.
In the case of a single rigid body, the center of mass is fixed in relation to the body, and if the body has uniform density, it will be located at the centroid. The center of mass may be located outside the physical body, as is sometimes the case for hollow or open-shaped objects, such as a horseshoe. In the case of a distribution of separate bodies, such as the planets of the Solar System, the center of mass may not correspond to the position of any individual member of the system.
The center of mass is a useful reference point for calculations in mechanics that involve masses distributed in space, such as the linear and angular momentum of planetary bodies and rigid body dynamics. In orbital mechanics, the equations of motion of planets are formulated as point masses located at the centers of mass. The center of mass frame is an inertial frame in which the center of mass of a system is at rest with respect to the origin of the coordinate system.
## History
The concept of "center of mass" in the form of the "center of gravity" was first introduced by the ancient Greek physicist, mathematician, and engineer Archimedes of Syracuse. He worked with simplified assumptions about gravity that amount to a uniform field, thus arriving at the mathematical properties of what we now call the center of mass. Archimedes showed that the torque exerted on a lever by weights resting at various points along the lever is the same as what it would be if all of the weights were moved to a single point — their center of mass. In work on floating bodies he demonstrated that the orientation of a floating object is the one that makes its center of mass as low as possible. He developed mathematical techniques for finding the centers of mass of objects of uniform density of various well-defined shapes.[1]
Later mathematicians who developed the theory of the center of mass include Pappus of Alexandria, Guido Ubaldi, Francesco Maurolico,[2] Federico Commandino,[3] Simon Stevin,[4] Luca Valerio,[5] Jean-Charles de la Faille, Paul Guldin,[6] John Wallis, Louis Carré, Pierre Varignon, and Alexis Clairaut.[7]
Newton's second law is reformulated with respect to the center of mass in Euler's first law.[8]
Diagram of an educational toy that balances on a point: the CM (C) settles below its support (P)
## Definition of center of mass
The center of mass is the unique point at the center of a distribution of mass in space that has the property that the weighted position vectors relative to this point sum to zero.
### A system of particles
In the case of a system of particles Pi, i = 1, …, n, each with mass mi, located in space with coordinates ri, i = 1, …, n, the coordinates R of the center of mass satisfy the condition
$\sum_{i=1}^n m_i(\mathbf{r}_i - \mathbf{R}) = 0.$
Solve this equation for R to obtain the formula
$\mathbf{R} = \frac{1}{M} \sum_{i=1}^n m_i \mathbf{r}_i,$
where M is the sum of the masses of all of the particles.
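As a quick numerical illustration of the particle formula, the sketch below uses made-up masses and positions and checks the defining property that the weighted relative positions sum to zero:

```python
import numpy as np

# Made-up example data: three point masses (kg) and their positions (m).
masses = np.array([2.0, 1.0, 1.0])
positions = np.array([[0.0, 0.0, 0.0],
                      [4.0, 0.0, 0.0],
                      [0.0, 4.0, 0.0]])

M = masses.sum()                                    # total mass
R = (masses[:, None] * positions).sum(axis=0) / M   # center of mass

# Defining property: the weighted positions relative to R sum to zero.
residual = (masses[:, None] * (positions - R)).sum(axis=0)
```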
### A continuous volume
If the mass distribution is continuous with the density ρ(r) within a volume V, then the integral of the weighted position coordinates of the points in this volume relative to the center of mass R is zero, that is
$\int_V \rho(\mathbf{r})(\mathbf{r}-\mathbf{R})dV = 0.$
Solve this equation for the coordinates R to obtain
$\mathbf R = \frac 1M \int_V\rho(\mathbf{r}) \mathbf{r} dV,$
where M is the total mass in the volume.
If a continuous mass distribution has uniform density, which means ρ is constant, then the center of mass is the same as the centroid of the volume.[9]
### Barycentric coordinates
Further information: Barycentric_coordinate_system
The coordinates R of the center of mass of a two-particle system, P1 and P2, with masses m1 and m2 is given by
$\mathbf{R} = \frac{1}{m_1+m_2}(m_1 \mathbf{r}_1 + m_2\mathbf{r}_2).$
As the percentage of the total mass divided between these two particles varies from 100% P1 and 0% P2 through 50% P1 and 50% P2 to 0% P1 and 100% P2, the center of mass R moves along the line from P1 to P2. The percentages of mass at each point can be viewed as projective coordinates of the point R on this line, and are termed barycentric coordinates. This can be generalized to three points and four points to define projective coordinates in the plane and in space, respectively.
### Systems with periodic boundary conditions
For particles in a system with periodic boundary conditions two particles can be neighbors even though they are on opposite sides of the system. This occurs often in molecular dynamics simulations, for example, in which clusters form at random locations and sometimes neighboring atoms cross the periodic boundary. When a cluster straddles the periodic boundary, a naive calculation of the center of mass will be incorrect. A generalized method for calculating the center of mass for periodic systems is to treat each coordinate, x and y and/or z, as if it were on a circle instead of a line.[10] The calculation takes every particle's x coordinate and maps it to an angle,
$\theta_i = \frac{x_i}{x_{max}} 2 \pi$
where xmax is the system size in the x direction. From this angle, two new points $(\xi_i,\zeta_i)$ can be generated:
$\xi_i = \frac{x_{max}}{2 \pi} \cos(\theta_i)$
$\zeta_i = \frac{x_{max}}{2 \pi} \sin(\theta_i)$
In the $(\xi,\zeta)$ plane, these coordinates lie on a circle of radius xmax. From the collection of $\xi_i$ and $\zeta_i$ values from all the particles, the averages $\overline{\xi}$ and $\overline{\zeta}$ are calculated. These values are mapped back into a new angle, $\overline{\theta}$, from which the x coordinate of the center of mass can be obtained:
$\overline{\theta} = atan2(-\overline{\zeta},-\overline{\xi}) + \pi$
$x_{com} = x_{max} \frac{ \overline{\theta}}{2 \pi}$
The process can be repeated for all dimensions of the system to determine the complete center of mass. The utility of the algorithm is that it lets the mathematics determine where the "best" center of mass is, instead of guessing or using cluster analysis to "unfold" a cluster straddling the periodic boundaries. Note that if both average values are zero, $(\overline{\xi},\overline{\zeta}) = (0,0)$, then $\overline{\theta}$ is undefined. This is a correct result, because it occurs only when all particles are exactly evenly spaced; in that condition, their x coordinates are mathematically identical in a periodic system.
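The steps above can be sketched for one coordinate as follows; the function name and the two-particle test cluster are illustrative. For a cluster straddling the boundary of a box of size 10, a naive mean of [9.5, 0.5] gives 5.0, while the circular mean correctly places the center at the boundary:

```python
import math

def periodic_com_1d(xs, x_max):
    """Center of mass of equal-mass particles along one periodic
    coordinate, via the circular-mean method described above."""
    # Map each coordinate onto an angle on a circle of circumference x_max.
    thetas = [2 * math.pi * x / x_max for x in xs]
    xi_bar = sum(math.cos(t) for t in thetas) / len(xs)
    zeta_bar = sum(math.sin(t) for t in thetas) / len(xs)
    # Map the average point back to an angle in [0, 2*pi), then to a coordinate.
    theta_bar = math.atan2(-zeta_bar, -xi_bar) + math.pi
    return x_max * theta_bar / (2 * math.pi)

# The true center of the cluster [9.5, 0.5] in a box of size 10 is at
# the boundary (coordinate 0, equivalently 10), not at the naive mean 5.
com = periodic_com_1d([9.5, 0.5], 10.0)
```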
## Center of gravity
The center of gravity is the point in a body around which the resultant torque due to gravity forces vanishes. Near the surface of the Earth, where gravity acts downward as a parallel force field, the center of gravity and the center of mass are the same.
The study of the dynamics of aircraft, vehicles and vessels assumes that the system moves in near-earth gravity, and therefore the terms center of gravity and center of mass are used interchangeably.
In physics the benefits of using the center of mass to model a mass distribution can be seen by considering the resultant of the gravity forces on a continuous body. Consider a body of volume V with density ρ(r) at each point r in the volume. In a parallel gravity field the force f at each point r is given by,
$\mathbf{f}(\mathbf{r}) = -dm\, g\vec{k}= -\rho(\mathbf{r})dV\,g\vec{k},$
where dm is the mass at the point r, g is the acceleration of gravity, and k is a unit vector defining the vertical direction. Choose a reference point R in the volume and compute the resultant force and torque at this point,
$\mathbf{F} = \int_V \mathbf{f}(\mathbf{r}) = \int_V\rho(\mathbf{r})dV( -g\vec{k}) = -Mg\vec{k},$
and
$\mathbf{T} = \int_V (\mathbf{r}-\mathbf{R})\times \mathbf{f}(\mathbf{r}) = \int_V (\mathbf{r}-\mathbf{R})\times (-g\rho(\mathbf{r})dV\vec{k} )= \left(\int_V \rho(\mathbf{r}) (\mathbf{r}-\mathbf{R})dV \right)\times (-g\vec{k}) .$
If the reference point R is chosen so that it is the center of mass, then
$\int_V \rho(\mathbf{r}) (\mathbf{r}-\mathbf{R})dV =0,$
which means the resultant torque T=0. Because the resultant torque is zero the body will move as though it is a particle with its mass concentrated at the center of mass.
If the center of gravity is selected as the reference point for a rigid body, the gravity forces will not cause the body to rotate, which means the weight of the body can be considered to be concentrated at the center of mass.
## Linear and angular momentum
The linear and angular momentum of a collection of particles can be simplified by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i=1,...,n be located at the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors,
$\mathbf{r}_i = (\mathbf{r}_i - \mathbf{R}) + \mathbf{R}, \quad \mathbf{v}_i = \frac{d}{dt}(\mathbf{r}_i - \mathbf{R}) + \mathbf{V}.$
The total linear and angular momentum vectors relative to the reference point R are
$\mathbf{p} = \frac{d}{dt}\left(\sum_{i=1}^n m_i (\mathbf{r}_i - \mathbf{R})\right) + \left(\sum_{i=1}^n m_i\right) \mathbf{V},$
and
$\mathbf{L} = \sum_{i=1}^n m_i (\mathbf{r}_i-\mathbf{R})\times \frac{d}{dt}(\mathbf{r}_i - \mathbf{R}) + \left(\sum_{i=1}^n m_i (\mathbf{r}_i-\mathbf{R})\right)\times\mathbf{V}.$
If R is chosen as the center of mass these equations simplify to
$\mathbf{p} = M\mathbf{V},\quad \mathbf{L} = \sum_{i=1}^n m_i (\mathbf{r}_i-\mathbf{R})\times \frac{d}{dt}(\mathbf{r}_i - \mathbf{R}).$
Newton's laws of motion require that for any system with no external forces the momentum of the system is constant, which means the center of mass moves with constant velocity. This applies for all systems with classical internal forces, including magnetic fields, electric fields, chemical reactions, and so on. More formally, this is true for any internal forces that satisfy Newton's Third Law.[11]
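The simplification above can be checked numerically: because $\sum_i m_i(\mathbf{r}_i-\mathbf{R}) = 0$, the angular momentum about the center of mass computed with absolute velocities equals the one computed with velocities relative to V. A sketch with made-up two-particle data:

```python
import numpy as np

# Two hypothetical particles: masses, positions, velocities.
m = np.array([1.0, 3.0])
r = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
v = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])

M = m.sum()
R = (m[:, None] * r).sum(axis=0) / M    # center of mass
V = (m[:, None] * v).sum(axis=0) / M    # velocity of the center of mass

p = (m[:, None] * v).sum(axis=0)        # total linear momentum

# Angular momentum about R: with absolute velocities, and with
# velocities relative to V; the cross term vanishes.
L_total = sum(mi * np.cross(ri - R, vi) for mi, ri, vi in zip(m, r, v))
L_about_R = sum(mi * np.cross(ri - R, vi - V) for mi, ri, vi in zip(m, r, v))
```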
## Locating the center of mass
Main article: Locating the center of mass
Plumb line method
The experimental determination of the center of mass of a body uses gravity forces on the body and relies on the fact that in the parallel gravity field near the surface of the earth the center of mass is the same as the center of gravity.
The center of mass of a body with an axis of symmetry and constant density must lie on this axis. Thus, the center of mass of a circular cylinder of constant density has its center of mass on the axis of the cylinder. In the same way, the center of mass of a spherically symmetric body of constant density is at the center of the sphere. In general, for any symmetry of a body, its center of mass will be a fixed point of that symmetry.[12]
### In two dimensions
An experimental method for locating the center of mass is to suspend the object from two locations and to drop plumb lines from the suspension points. The intersection of the two lines is the center of mass.[13]
The shape of an object might already be mathematically determined, but it may be too complex to use a known formula. In this case, one can subdivide the complex shape into simpler, more elementary shapes, whose centers of mass are easy to find. If the total mass and center of mass can be determined for each area, then the center of mass of the whole is the weighted average of the centers.[14] This method can even work for objects with holes, which can be accounted for as negative masses.[15]
A direct development of the planimeter known as an integraph, or integerometer, can be used to establish the position of the centroid or center of mass of an irregular two-dimensional shape. This method can be applied to a shape with an irregular, smooth or complex boundary where other methods are too difficult. It was regularly used by ship builders to compare with the required displacement and centre of buoyancy of a ship, and ensure it would not capsize.[16][17]
### In three dimensions
An experimental method to locate the three dimensional coordinates of the center of mass begins by supporting the object at three points and measuring the forces, F1, F2, and F3 that resist the weight of the object, W= −Wk (k is the unit vector in the vertical direction). Let r1, r2, and r3 be the position coordinates of the support points, then the coordinates R of the center of mass satisfy the condition that the resultant torque is zero,
$\mathbf{T}= (\mathbf{r}_1-\mathbf{R})\times\mathbf{F}_1+(\mathbf{r}_2-\mathbf{R})\times\mathbf{F}_2+(\mathbf{r}_3-\mathbf{R})\times\mathbf{F}_3=0,$
or
$\mathbf{R}\times(-W\vec{k})= \mathbf{r}_1\times\mathbf{F}_1+\mathbf{r}_2\times\mathbf{F}_2+\mathbf{r}_3\times\mathbf{F}_3.$
This equation yields the coordinates of the center of mass R* in the horizontal plane as,
$\mathbf{R}^* =\frac{1}{W} \vec{k}\times(\mathbf{r}_1\times\mathbf{F}_1+\mathbf{r}_2\times\mathbf{F}_2+\mathbf{r}_3\times\mathbf{F}_3).$
The center of mass lies on the vertical line L, given by
$L(t) = \mathbf{R}^* + t\vec{k}.$
The three dimensional coordinates of the center of mass are determined by performing this experiment twice with the object positioned so that these forces are measured for two different horizontal planes through the object. The center of mass will be the intersection of the two lines L1 and L2 obtained from the two experiments.
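A numerical sketch of one such experiment; the support points and reaction forces below are made-up data chosen so the horizontal center of mass comes out at (1, 1):

```python
import numpy as np

k = np.array([0.0, 0.0, 1.0])            # vertical unit vector

# Hypothetical measurement: support points and vertical reaction forces (N).
supports = np.array([[0.0, 0.0, 0.0],
                     [3.0, 0.0, 0.0],
                     [0.0, 3.0, 0.0]])
forces = np.array([3.0, 3.0, 3.0])        # magnitudes of F1, F2, F3 along k

W = forces.sum()                          # the reactions balance the weight
torque_sum = sum(np.cross(r, f * k) for r, f in zip(supports, forces))
R_star = np.cross(k, torque_sum) / W      # horizontal coordinates of the COM
```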
## Applications
Estimated center of mass/gravity (blue sphere) of a gymnast at the end of performing a cartwheel. Notice that the center is outside the body in this position.
Engineers try to design a sports car so that its center of mass is lowered to make the car handle better. When high jumpers perform a "Fosbury Flop", they bend their body in such a way that it clears the bar while its center of mass does not necessarily clear it.[18]
### Aeronautics
Main article: Center of gravity of an aircraft
The center of mass is an important point on an aircraft, which significantly affects the stability of the aircraft. To ensure the aircraft is stable enough to be safe to fly, the center of mass must fall within specified limits. If the center of mass is ahead of the forward limit, the aircraft will be less maneuverable, possibly to the point of being unable to rotate for takeoff or flare for landing.[19] If the center of mass is behind the aft limit, the aircraft will be more maneuverable, but also less stable, and possibly so unstable that it is impossible to fly. The moment arm of the elevator will also be reduced, which makes it more difficult to recover from a stalled condition.[20]
For helicopters in hover, the center of mass is always directly below the rotorhead. In forward flight, the center of mass will move aft to balance the negative pitch torque produced by applying cyclic control to propel the helicopter forward; consequently a cruising helicopter flies "nose-down" in level flight.
### Astronomy
Two bodies orbiting a barycenter inside one body
Main article: Barycentric coordinates (astronomy)
The center of mass plays an important role in astronomy and astrophysics, where it is commonly referred to as the barycenter. The barycenter is the point between two objects where they balance each other; it is the center of mass where two or more celestial bodies orbit each other. When a moon orbits a planet, or a planet orbits a star, both bodies are actually orbiting around a point that lies away from the center of the primary (larger) body.[21] For example, the Moon does not orbit the exact center of the Earth, but a point on a line between the center of the Earth and the Moon, approximately 1,710 km (1062 miles) below the surface of the Earth, where their respective masses balance. This is the point about which the Earth and Moon orbit as they travel around the Sun. If the masses are more similar, e.g., Pluto and Charon, the barycenter will fall outside both bodies.
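The quoted Earth–Moon figure can be checked with the two-body barycenter formula $R = d\, m_2/(m_1+m_2)$; the masses and mean distance below are standard approximate values:

```python
# Distance of the Earth-Moon barycenter from Earth's center.
m_earth = 5.972e24    # kg
m_moon = 7.342e22     # kg
d = 384_400.0         # mean Earth-Moon distance, km

r_barycenter = d * m_moon / (m_earth + m_moon)   # km from Earth's center
depth = 6371.0 - r_barycenter                    # km below Earth's surface
```

This gives roughly 4,670 km from Earth's center, i.e. about 1,700 km below the surface, consistent with the figure in the text.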
## Notes
1. Bai, Linge; Breen, David (2008). "Calculating Center of Mass in an Unbounded 2D Environment". Journal of Graphics, GPU, and Game Tools 13 (4): 53–60. doi:10.1080/2151237X.2008.10129266.
2. "The theory and design of British shipbuilding. (page 3 of 14)". Amos Lowrey Ayre. Retrieved 20 August 2012.
## References
• Baron, Margaret E. (2004) [1969], The Origins of the Infinitesimal Calculus, Courier Dover Publications, ISBN 0-486-49544-2
• Beatty, Millard F. (2006), Principles of Engineering Mechanics, Volume 2: Dynamics—The Analysis of Motion, Mathematical Concepts and Methods in Science and Engineering 33, Springer, ISBN 0-387-23704-6
• Feynman, Richard; Leighton, Robert; Sands, Matthew (1963), The Feynman Lectures on Physics, Addison Wesley, ISBN 0-201-02116-1
• Federal Aviation Administration (2007), Aircraft Weight and Balance Handbook, United States Government Printing Office, retrieved 23 October 2011
• Giambattista, Alan; Richardson, Betty McCarthy; Richardson, Robert Coleman (2007), College physics, Volume 1 (2nd ed.), McGraw-Hill Higher Education, ISBN 0-07-110608-1
• Goldstein, Herbert; Poole, Charles; Safko, John (2001), Classical Mechanics (3rd ed.), Addison Wesley, ISBN 0-201-65702-3
• Hamill, Patrick (2009), Intermediate Dynamics, Jones & Bartlett Learning, ISBN 978-0-7637-5728-1
• Kleppner, Daniel; Kolenkow, Robert (1973), An Introduction to Mechanics (2nd ed.), McGraw-Hill, ISBN 0-07-035048-5
• Levi, Mark (2009), The Mathematical Mechanic: Using Physical Reasoning to Solve Problems, Princeton University Press
• Mancosu, Paolo (1999), Philosophy of mathematics and mathematical practice in the seventeenth century, Oxford University Press, ISBN 0-19-513244-0
• Murray, Carl; Dermott, Stanley (1999), Solar System Dynamics, Cambridge University Press, ISBN 0-521-57295-9
• Sangwin, Christopher J. (2006), "Locating the centre of mass by mechanical means", Journal of the Oughtred Society 15 (2), retrieved 23 October 2011
• Shore, Steven N. (2008), Forces in Physics: A Historical Perspective, Greenwood Press, ISBN 978-0-313-33303-3
• Symon, Keith R. (1971), Mechanics (3rd ed.), Addison-Wesley, ISBN 0-201-07392-7
• Van Pelt, Michael (2005), Space Tourism: Adventures in Earth Orbit and Beyond, Springer, ISBN 0-387-40213-6
• Asimov, Isaac (1988) [1966], Understanding Physics, Barnes & Noble Books, ISBN 0-88029-251-2
• Beatty, Millard F. (2006), Principles of Engineering Mechanics, Volume 2: Dynamics—The Analysis of Motion, Mathematical Concepts and Methods in Science and Engineering 33, Springer, ISBN 0-387-23704-6
• Feynman, Richard; Leighton, Robert B.; Sands, Matthew (1963), The Feynman Lectures on Physics 1 (Sixth printing, February 1977 ed.), Addison-Wesley, ISBN 0-201-02010-6H
• Frautschi, Steven C.; Olenick, Richard P.; Apostol, Tom M.; Goodstein, David L. (1986), The Mechanical Universe: Mechanics and heat, advanced edition, Cambridge University Press, ISBN 0-521-30432-6
• Goldstein, Herbert; Poole, Charles; Safko, John (2002), Classical Mechanics (3rd ed.), Addison-Wesley, ISBN 0-201-65702-3
• Goodman, Lawrence E.; Warner, William H. (2001) [1964], Statics, Dover, ISBN 0-486-42005-1
• Hamill, Patrick (2009), Intermediate Dynamics, Jones & Bartlett Learning, ISBN 978-0-7637-5728-1
• Jong, I. G.; Rogers, B. G. (1995), Engineering Mechanics: Statics, Saunders College Publishing, ISBN 0-03-026309-3
• Millikan, Robert Andrews (1902), Mechanics, molecular physics and heat: a twelve weeks' college course, Chicago: Scott, Foresman and Company, retrieved 25 May 2011
• Pollard, David D.; Fletcher, Raymond C. (2005), Fundamentals of Structural Geology, Cambridge University Press, ISBN 0-521-83927-0
• Pytel, Andrew; Kiusalaas, Jaan (2010), Engineering Mechanics: Statics 1 (3rd ed.), Cengage Learning, ISBN 978-0-495-29559-4
• Rosen, Joe; Gothard, Lisa Quinn (2009), Encyclopedia of Physical Science, Infobase Publishing, ISBN 978-0-8160-7011-4
• Serway, Raymond A.; Jewett, John W. (2006), Principles of physics: a calculus-based text 1 (4th ed.), Thomson Learning, ISBN 0-534-49143-X
• Shirley, James H.; Fairbridge, Rhodes Whitmore (1997), Encyclopedia of planetary sciences, Springer, ISBN 0-412-06951-2
• De Silva, Clarence W. (2002), Vibration and shock handbook, CRC Press, ISBN 978-0-8493-1580-0
• Symon, Keith R. (1964), Mechanics, Addison-Wesley, LCCN 60-5164
• Tipler, Paul A.; Mosca, Gene (2004), Physics for Scientists and Engineers 1A (5th ed.), W. H. Freeman and Company, ISBN 0-7167-0900-7
http://mathematica.stackexchange.com/questions/tagged/linear-algebra?sort=active
# Tagged Questions
Questions on the linear algebra functionality of Mathematica.
0answers
48 views
### Nontrivial solutions of equation
Here I have a problem: how to find a nontrivial solution of a system of equations. I want to choose one variable, for example X1, and to get solutions X2(X1) and X3(X1). It is not difficult when I ...
0answers
5 views
### Finding Vectors in cartesian form [migrated]
I am stuck on this question; could you please help me. Find, in Cartesian form, the equations of the straight line through the point with position vector (-1,2,-3) parallel to the direction given by ...
2answers
140 views
### A matrix-vector cross product
I want to do a cross product involving a vector of Pauli matrices $\vec \sigma = \left( {{\sigma _1},{\sigma _2},{\sigma _3}} \right)$; for example, $\vec \sigma \times \left( {1,2,3} \right)$. ...
1answer
85 views
### Exploiting self-adjointness when changing basis
I am using Mathematica to analyze a real, self-adjoint matrix $H$ of the size $32 \times 32$, which comes from a physics problem. In the picture there is also a matrix $Q$ which commutes with $H$. I ...
0answers
165 views
### Matrix algebra vs. PrincipalComponents and Varimax/Oblimin
Using matrix algebra I can calculate loadings and scores from the covariance matrix (data matrix is column centered): ...
4answers
249 views
### How to find the index of a square matrix in Mathematica quickly?
Let $A$ be an $n\times n$ complex matrix. The smallest nonnegative integer $k$ such that $\mathrm{rank}(A^{k+1})=\mathrm{rank}(A^{k})$ is the index of $A$, denoted by $\mathrm{Ind}(A)$. I would ...
1answer
39 views
### Partial row reduction of a matrix
I have an $m\times n$ matrix (presumably of full rank) with $m>n$, and I would like to row reduce it, but leave the last column unreduced; that is, I want to get output on the form \$\pmatrix{ 1 ...
1answer
82 views
### large matrix eigenvalue problem
I need to solve a very large complex matrix (not sparse and not symmetric) eigenvalue problem, e.g., 1e4*1e4 or even 1e6*1e6. How large a matrix can Mathematica support? And, how about ...
0answers
53 views
### Fast calculation of commute distances on large graphs (i.e. fast computation of the pseudo-inverse of a large Laplacian / Kirchhoff matrix)
I have a large, locally connected and undirected graph $G$ with $\approx 10^4$ vertices and $\approx 10^5$ to $\approx 10^6$ edges. Moreover I can bound the maximum vertex degree as $Q_{max}$. I ...
2answers
372 views
### Can RowReduce work in this matrix?
The matrix $Q$ with dimensions $n\times2*n*m$ is structured by $$Q=[B|AB|\cdots|A^{2*n-1}B]$$ where $Q$ is an augmented matrix built from a $3\times3$ matrix, $A$, and a $3\times2$ matrix, $B$. I ...
3answers
102 views
### Compute the rank of a matrix with variable entries
Say I have a matrix like $$M=\left( \begin{array}{c c c} x & xz & w-2x \\ wz^3 & xy & z \\ y^2-z^3 & x+w & z+x^5 \end{array} \right)$$ is it possible to ask Mathematica ...
2answers
72 views
### Matrix echelon/upper diagonal form
Is there a way to find the echelon form of a matrix in Mathematica? I see there is a function to find the reduced echelon form, RowReduce[], but I can't see ...
2answers
194 views
### badly conditioned matrix (General::luc)
With some matrices I am receiving the following message Inverse::luc Result for Inverse of badly conditioned matrix (M) may contain significant numerical errors. How can I tell to Mathematica to ...
3answers
218 views
### Proving a recurrence in Mathematica
I have $$j_n=\int_0^1 x^{2n} \sin(\pi x)dx.$$ How do I show that $$j_{n+1}= \frac{1}{\pi^2}(\pi- (2n+1)(2n+2)j_n)\, ?$$ I keep getting a recurring integration by parts and I can't simplify it. ...
2answers
154 views
### Why does my matrix lose rank?
I want to check the rank of a matrix for observability, but Mathematica loses a rank if the matrix contains very large numbers. Let's say my matrix is ...
2answers
215 views
### How to extract and compute on the diagonal entities of a sparse matrix very fast?
As could be seen in the following code: ...
2answers
99 views
### Computing distance matrix for a list
Using functional programming in Mathematica, how can I compute a distance matrix for every element in a list of matrices... The distance would be computed between the item in the list and a "target ...
1answer
425 views
### Eigenvalues and Determinant of a large matrix
Can anybody kindly explain to me what is going wrong regarding a simple problem I have? I can find the eigenvalues of a large matrix using Eigenvalues[], but when I ...
0answers
101 views
### Calculating the rank of a huge sparse array
By virtue of the suggestion in my previous question, I constructed the sparse matrix whose size is $518400 \times 86400$, mostly filled with $0$ and $\pm 1$. Now I want to calculate its rank. Since ...
2answers
120 views
### Efficient ways to create matrices and solve matrix equations
I am attempting, for the first time, to use Mathematica to do some serious linear algebra. I would like to solve systems of equations of the form $$U_{n n'} f_{n'} = b_n.$$ I have an expression for ...
0answers
105 views
### 6x6 matrix NullSpace
I'm working with a 6x6 matrix. Whenever I try to find the NullSpace and FullSimplify it, I get the error No more memory ...
2answers
116 views
### Defining a non-commutative operator algebra in Mathematica
I am trying to develop a function(s) to do some commutator algebra to compute the enveloping algebra and ideals of a Lie algebra. For example if I have $SU(2)$ algebra generated by $L_i$ ($i=1,2,3$), ...
1answer
99 views
### Computing the sign of the real part of eigenvalues in a 3D linearized system with 3 parameters
So I have this dynamical system given by: $$\left\{\begin{aligned} x' &= a(y-\phi(x))\\ y' &= x-y+z\\ z' &= -by \end{aligned}\right.$$ where $\phi(x) = \mu x^3 - \nu x$ and $a,b,\mu,\nu$ ...
2answers
232 views
### Gram Schmidt Process for Polynomials
I want to implement the Gram Schmidt procedure for the vector space of polynomials of degree up to 5, i.e. I want to find an orthogonal basis from the set of vectors $v=(1,x,x^2,x^3,x^4,x^5)$. The ...
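For comparison, a sketch of this computation in Python with NumPy; the inner product $\langle p,q\rangle=\int_{-1}^{1}p(x)q(x)\,dx$ is an assumed choice (it reproduces the Legendre polynomials up to scaling), and only degrees up to 3 are shown:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def inner(p, q):
    """<p, q> = integral of p(x) q(x) over [-1, 1], on coefficient arrays."""
    antideriv = P.polyint(P.polymul(p, q))
    return P.polyval(1.0, antideriv) - P.polyval(-1.0, antideriv)

# Gram-Schmidt on the monomials 1, x, x^2, x^3 (coefficient arrays,
# lowest degree first).
basis = []
for k in range(4):
    v = np.zeros(k + 1)
    v[-1] = 1.0                          # monomial x^k
    for u in basis:
        v = P.polysub(v, (inner(v, u) / inner(u, u)) * u)
    basis.append(v)
```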
1answer
217 views
### Efficient method for inverting a block tridiagonal matrix
Is there a better method to invert a large block tridiagonal Hermitian block matrix, other than treating it as a ordinary matrix? For example: ...
1answer
330 views
### Octonions in Mathematica
Is there a package or Notebook for Mathematica that can enable me to do some numerical calculations with octonions? Maybe a way to plug-in the octonion multiplication table?
2answers
201 views
### Speed up 4D matrix/array generation
I have to fill a 4D array, whose entries are $\mathrm{sinc}\left[j(a-b)^2+j(c-d)^2-\phi\right]$ for a fixed value of $\phi$ (normally -15) and a fixed value of $j$ (normally about 0.00005). The way ...
1answer
95 views
### Why is EigenValues returning Root expressions?
This is the code I have: ...
1answer
62 views
### Why does Eigenvalues[matrix I defined] not work? [duplicate]
This is the code I have in my mathematica notebook. I want to find the eigenvalues of the matrix I created called Hmatrix as defined below. However when I type Eigenvalues[Hmatrix] I get the Hmatrix ...
1answer
118 views
### Interpolating a Bivariate Polynomial over a Finite Field
Let $F=GF(p)$ be a finite field, $p$ prime and write $F^\times=\{x_1,\ldots,x_n\}$. I'm trying to implement an earlier version of Sudan's list-decoding algorithm for Reed Solomon Codes ...
1answer
133 views
### How to get the determinant and inverse of a large sparse symmetric matrix?
For example, the following is a $12\times 12$ symmetric matrix. Det and Inverse take too much time and don't even work on my ...
2answers
606 views
### Solving a tridiagonal system of linear equations using the Thomas algorithm
I'm trying to write a function that can solve a tridiagonal system of linear equations using the Thomas algorithm. It basically solves the following equation. (Details can be found at the Wiki page ...
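A minimal Python sketch of the Thomas algorithm the question refers to (the function name and the test system are illustrative): a forward sweep eliminates the sub-diagonal, then back substitution recovers the solution.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system. a: sub-diagonal (length n-1),
    b: main diagonal (length n), c: super-diagonal (length n-1), d: RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                        # forward sweep
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: diagonal 2, off-diagonals 1; the solution is [1, 2, 3].
x = thomas([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0])
```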
1answer
301 views
### TensorContract of inverse matrix
Matrix inverse in mathematica If $A$ is an invertible $n \times n$ matrix, then $A\cdot A^{-1} = I$. To get this statement in Mathematica, you need the assumption ...
1answer
66 views
### Confirming the existence of a function related to a matrix
Is it possible to get an answer to the following question in Mathematica? Let $M$ be a $n$ by $n$ matrix, is there a function $m:\mathbb{N}\times \mathbb{N}\rightarrow \mathbb{Z}$ such that ...
1answer
127 views
### Can I reduce a matrix inequality such as $\mathbf x^\prime\mathbf A\mathbf x > \mathbf x^\prime\mathbf x$?
I'm new to Mathematica. When I do linear algebra, I wonder if I can have an inequality such as $\mathbf x^\prime\mathbf A\mathbf x > \mathbf x^\prime\mathbf x$, where $\mathbf x$ is a column vector ...
0answers
94 views
### How to express this output in the form $X=A.x$?
This problem arose in my stereo vision project. I have two matrices: A = \left( \begin{array}{ccc} \text{x1}*\text{p131}-\text{p111} & \text{x1}*\text{p132}-\text{p112} & ...
3answers
604 views
### Discrete Convolution
Ask to compute the convolution of 2 lists, I managed to do so, with what I feel is rather heavy : Let my 2 lists be : a = {1,2,3,4} b = {1,1,1,1,1,1}; The below function adds 0s on each part of ...
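For reference, the zero-padded ("full") convolution the question describes can be written directly; Python is used here for illustration, with the same two lists:

```python
def conv(a, b):
    """Full discrete convolution; result has length len(a) + len(b) - 1."""
    n = len(a) + len(b) - 1
    return [sum(a[j] * b[i - j]
                for j in range(len(a)) if 0 <= i - j < len(b))
            for i in range(n)]

c = conv([1, 2, 3, 4], [1, 1, 1, 1, 1, 1])
```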
1answer
319 views
### Linear Solve with Modular Arithmetic
I am interested in using LinearSolve[m,b] which will find a solution to the equation $m.x=b$, where I am in mod 2 arithmetic. Is there any way to perform this ...
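Outside Mathematica, the same mod-2 solve can be sketched as Gaussian elimination over GF(2); the function below is an illustrative implementation, not the library routine:

```python
def solve_mod2(m, b):
    """Solve m.x = b over GF(2) by Gaussian elimination.
    Returns one solution as a list of 0/1, or None if inconsistent."""
    rows = [row[:] + [rhs] for row, rhs in zip(m, b)]  # augmented matrix
    n_rows, n_cols = len(rows), len(m[0])
    pivots, r = [], 0
    for c in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n_rows):                        # clear column c
            if i != r and rows[i][c]:
                rows[i] = [(x + y) % 2 for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    for i in range(r, n_rows):                         # consistency check
        if rows[i][-1]:
            return None
    x = [0] * n_cols
    for row_idx, c in enumerate(pivots):
        x[c] = rows[row_idx][-1]
    return x
```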
0answers
62 views
### Inverse problem of Eigenvalues in DSolve
For my graduation exam I must prepare a system of equations to satisfy some specific conditions. I have the solutions, output 2, but I need the equations eq11 and eq22. So here is an example. ...
3answers
85 views
### Selecting terms containing some expression
Imagine I have an expression like a*k + (a^2)*b*c + b*e and I would like to obtain the term containing, for example, some power of a. In that case I would ...
1answer
252 views
### Decoupling system of differential equations
Here I have a task, in preparation for a small exam. I solved it by hand for the first case 1), but I need to check it in Mathematica and to try to implement it for both cases 1) and 2) ...
1answer
102 views
### How to create a large sparse block matrix
I need to generate a very large sparse block matrix, with blocks consisting only of ones along the diagonal. I have tried several ways of doing this, but I seem to always run out of memory. The ...
0answers
178 views
### More efficient matrix-vector product
Dear mathematica users, In my present research I am faced with a real dense $n\times n$ matrix $A$ where $n \geq 3000$ (hopefully even more). The coefficients of this matrix are fixed, but I will ...
2answers
181 views
### Compiling LinearSolve[] or creating a compilable procedural version of it
Earlier today I had a discussion with a representative at Premier Support about the 2 questions I've asked here over the past couple of days: Seeking strategies to deploy a function securely ...
3answers
233 views
### Correct way to populate a DiagonalMatrix?
I would like to create a series of correlation matrices that starts with : sensMat[[1]] = DiagonalMatrix[ { 1,1,1,1,1 } ]) // MatrixForm and iterates in 0.1 ...
0answers
82 views
### Not getting the required eigenvalues [closed]
I'm trying to use Mathematica to show that the eigenvalues of $U$ are $\pm\dfrac{1-i}{\sqrt{2}}$, where $U = (I + T + iS)(I - T- iS)^{-1}$ where \$ S = \left( \begin{matrix} 1 & 1 \\ 1 ...
2answers
1k views
### How can I improve the speed of eigenvalue decompositions for large matrices?
I often need to compute the eigenvalues of large matrices, and I invariably resort to MATLAB for these, simply because it is much faster. I'd like to change that, so that I can work entirely inside my ...
0answers
76 views
### Evaluating a function on permutations of its arguments
Say I have a function "temp" of $n+1$ variables, $y,z1,z2,z3,...,zn$. I want to test if my function has certain symmetries like swapping $y$ with square of any $z$, swapping any two of the zs, ...
2answers
438 views
### Linear equation with complex numbers
I have to solve an equation of the type $$a z+b \overline{z}=c$$ with $a,b,c\in\mathbb{C}$. My approach is to set $F(z)=a z+b \overline{z}-c$ transform $z$ to $x+i y$ and then get a real linear ...
1answer
107 views
### RowReduce Problem
Here are two examples: RowReduce[{{3, 1, a}, {2, 1, b}}] evaluates to {{1, 0, a - b}, {0, 1, -2 a + 3 b}} but ...
http://physics.stackexchange.com/questions/tagged/fluid-dynamics?page=2&sort=active&pagesize=30
# Tagged Questions
The quantitative study of how fluids (gases and liquids) move.
0answers
21 views
### Books describing motion of objects in fluid [closed]
I'm looking for any resources that would help me model the behavior of objects moving in fluid. My end goal is to be able to describe the motion of irregularly shaped objects in a river environment. ...
1answer
29 views
### Additional boundary conditions for inclined flow?
I am solving an inclined flow problem, and am stuck. The problem is to find the volumetric flow rate of inclined flow in a square channel. Once I have the velocity profile, I can just integrate over ...
2answers
575 views
### Analog Hawking radiation
I am confused by most discussions of analog Hawking radiation in fluids (see, for example, the recent experimental result of Weinfurtner et al. Phys. Rev. Lett. 106, 021302 (2011), ...
1answer
125 views
### Validity of the Multi-Species Navier-Stokes Equations for real gases
I'm wondering what are the validity limits of Multi-Species Navier-Stokes equations. I'm aware of the limit for rarefied gases. But is there any new limit that arises in the context of real gases? I ...
0answers
32 views
### What is a good reynolds number for this process?
I’m trying to convince my boss that the mixers we are using are too much. I’m trying to prove that we are over-mixing our product. Our product is ink…just your basic ink found in your printer at home. ...
0answers
15 views
### CMB anisotropies and tightly coupled limit
Sorry if this is a technical question. I am studying the origin of CMB anisotropies and the tightly coupled limit of the Boltzmann equations. We have a fluid composed of ionized electrons and photons. ...
4answers
361 views
### Why does the fundamental mode of a recorder disappear when you blow harder?
I have a simple recorder, like this: When I cover all the holes and blow gently, it blows at about 550 Hz, but when I blow more forcefully, it jumps an octave and blows 1100 Hz. What's the ...
3answers
434 views
### What's the surface area of a liquid? How does evaporation increase if the surface area of a liquid is increased?
Wikipedia says that a substance that has a larger surface area will evaporate faster, as there are more surface molecules that are able to escape. I think the rate of evaporation should decrease as ...
0answers
13 views
### What type of constraint is the homogeneity of a fluid?
Suppose we have a homogeneous fluid (may or may not have viscosity). Is the constraint due to homogeneity a holonomic constraint? Thanks.
2answers
134 views
### Hydrostatic pressure on a teapot spout
The phenomenon where water flows on the outside side of a teapot spout is named "The teapot effect", and occurs due to a difference in pressure between water and the atmosphere. Consider the image of ...
1answer
210 views
### Calculating Reynolds number for a viscous droplet
I'm trying to develop a very basic scaling law/unit analysis for viscous droplet formation, and I'd like to get some rough numerical values of the Reynolds number to play with. To be specific, I'm ...
3answers
319 views
### A fly in an accelerating car
A fly is flying around inside a car; the fly never touches any surface in the car, only flying in the air inside it. The car accelerates. Does the fly slam into the rear window, or does the fly ...
1answer
139 views
### Lagrangian Coordinates in Fluid Flow
I apologize if this is not the right place to ask this question: I am currently reading a paper by Y. Brenier, where for the fluid flow he introduces a Lagrangian label $a$ instead of the vertical ...
2answers
251 views
### Can vorticity be destroyed?
I have a professor that is fond of saying that vorticity cannot be destroyed. I see how this is true for inviscid flows, but is this also true for viscous flow? The vorticity equation is shown below ...
0answers
50 views
### Seashell occurrance
Sometimes, sea shells accumulate on the sea shore, but sometimes they will instead be dragged back out to sea. What are the main physical factors that determine which of these things will happen?
1answer
280 views
### Archimedes principle and specific gravity
A physical balance measures the gravitational mass of a body. I conducted an experiment to find out the specific gravity of a bob. I first measured the mass of the bob in air, and then in water. The ...
2answers
106 views
### If the viscous force between the layers of a liquid is the same, why is there variation in the velocities of its layers?
I have learned in my textbook that when a liquid flows, the bottom layer never moves because of friction, while the upper layers move with increasing velocities. How is this possible if the ...
1answer
839 views
### Dimensional Analysis: Buckingham Pi Theorem
I am studying for a fluids quiz and I am having a few problems relating to dimensional analysis but for the time being fundamentally I have a problem selecting the repeating variables. Like does ...
0answers
51 views
### Equilibrium of a sphere in a water tank
A rigid sphere of radius $R_S$ made from a material with specific gravity $SG_s$ is completely submerged in a tank of water with radius $R_t$ and initial depth $L$ as shown in the figure The ...
2answers
54 views
### When is a flow vortex free?
To solve problems in fluid dynamics one often states the assumption that the flow is vortex-free, i.e. $\operatorname{rot}(u) = 0$. It is a basic assumption which is needed for potential flow problems etc. My ...
1answer
80 views
### Perfect fluids in cosmology?
In cosmology, it is often assumed that the equation of state of a cosmological fluid is of the form $p=w\rho$. Why is this? Is it the equation of a perfect fluid? Why is $w=0$ for matter and $1/3$ for ...
3answers
362 views
### How much information about the scale of a waterfall can be obtained from its sound?
Is it possible to constrain the height, volume flow, or distance of a waterfall from the quantitative analysis of a high-quality recording of its sound? As an aside, the simulated sounds of fluid ...
2answers
544 views
### A quantitative explanation of EM coherence domains in liquid with DNA
I've been looking with interest at a recent biology paper claiming that DNA molecules give off electromagnetic signals which can cause the same types of molecules to be reconstructed at a remote ...
2answers
116 views
### Physics behind the flow of gas coming out of a balloon
I'm working with stratospheric balloons (latex ones) and I want to put a valve on it so it can float for a longer time. I'm trying to define which valve I should use, which demands I estimate the flow ...
0answers
47 views
### Undergrad project advice [closed]
I am presently in my senior year and I am considering fluid mechanics for my thesis. Which area of research in fluid mechanics is purely analytical and very mathematical, since I am an applied ...
11answers
4k views
### How long a straw could Superman use?
To suck water through a straw, you create a partial vacuum in your lungs. Water rises through the straw until the pressure in the straw at the water level equals atmospheric pressure. This ...
0answers
46 views
### Robot controling pouring process from a bottle
I need to solve a problem within the mechanics of fluids for a part of my thesis. The robot will pick up a bottle of beer, cola, julebrus or any other kind of beverage. Then it has to bring it to the glass ...
1answer
101 views
### Drinking juice through a straw
Why are we able to suck more drink through a larger-diameter straw than a smaller-diameter straw if $p_1 v_1 = p_2 v_2 = Q$ as per Bernoulli's principle? The pressure difference I create in my mouth ...
2answers
517 views
### Is the wind's force on a stationary object proportional to $v^2$?
I am on a boat docked at Cape Charles, VA, about 30 or 40 miles from the center of Hurricane Irene. This understandably got me thinking about the force of wind on the boat. Since air friction is ...
0answers
93 views
### 2-D Turbulence - what does it look like?
Consider parallel flow in the X direction over a 2D semi-infinite flat plate. If turbulence is 2-D, along which axes should we expect the vortices to form? Also, are there any experimental/visualization ...
1answer
83 views
### In a column of rising hot air, is the velocity higher at the top?
Since the movement of the air is induced by buoyancy, i. e. there's a constant force acting on the air, so I would expect the velocity to increase during ascent, much like an object falling down due ...
3answers
167 views
### How do I intuit viscosity in a rotating fluid?
Suppose I have two plates with a viscous fluid in between. I slide them in the same direction (a direction in their own plane), one at $5 \,\text{m/s}$ and the other at $6 \,\text{m/s}$. Due to the ...
1answer
65 views
### Explanation for the next steps of chaplygin dipole
This post concerns the Chaplygin dipole, an interesting issue. Can someone please explain these steps to me in other words? Any explanation of any step will help me. I hope that together I will ...
1answer
250 views
### Why does a transformation to a rotating reference frame NOT break temporal scale invariance?
Naively, I thought that transforming a scale invariant equation (such as the Navier-Stokes equations for example) to a rotating reference frame (for example the rotating earth) would break the ...
1answer
139 views
### Why do air bubbles stick to the side of plastic tubing?
I'm watching water with air bubbles flow through transparent plastic tubing. The inner diameter is a few mm. Bubbles typically are the same diameter as the tubing, with length about the same or up ...
3answers
106 views
### Navier-Stokes system
I have to study this system, whose name is Navier-Stokes. Can you please explain what $p$, $u$ and $(u \cdot \nabla)u$ mean? What do they represent in reality? Please tell me how I should read the ...
2answers
115 views
### Could some design of a propeller be used in both air and water?
Propellers in water are smaller in diameter. They also move more slowly. On the other hand, aircraft propellers are larger in diameter, have narrower blades and operate at very high speeds. An ...
0answers
33 views
### Curls in water taken in a liquid [duplicate]
Consider a beaker with a hole at the bottom at its geometric centre, connected to a pipe which is closed initially. The beaker is filled with water; when the pipe is opened, I saw curls being formed. Why are they ...
1answer
178 views
### How much effect does the Bernoulli effect have on lift?
I understand that the Bernoulli effect is a flawed explanation for the cause of lift, and does not cause much at all, but how much? Is there any experimental data on the force caused by the ...
0answers
31 views
### What temperatures can be reached in an air-to-air thermocompressor nozzle and why?
People are generally of the opinion that the boiler injector cannot be redesigned to run on air. In other words, an air-to-air thermocompressor that puts fresh air into a tank without a mechanical ...
1answer
89 views
### Does gravity affect the temperature reading of a mercury thermometer?
I remember when I was in primary school, the science teacher put me in charge of a mercury thermometer. I do not quite understand the mechanics behind except that mercury expands when it is hot and ...
1answer
92 views
### Concerning drag on a flow past a cylinder
I am wondering about the drag coefficient for a flow past a cylinder. I am reading this article. I understand why the drag is high to begin with (point 2), when the boundary layer separates and the ...
2answers
110 views
### Will this type of engine produce thrust?
I was wondering: if I create an engine as shown below in the image, will it produce any kind of thrust, or is it complete junk?
2answers
112 views
### What's the anti-torque mechanism in horizontal take-off aircraft?
In most helicopters there is the anti-torque tail rotor to prevent the body from spinning in the opposite direction to the main rotor. What's the equivalent mechanism in horizontal takeoff single ...
1answer
285 views
### Exact Solutions to the Navier-Stokes Equations
There are a number of exact solutions to the Navier-Stokes equations. How many exact solutions are currently known? Is it possible to enumerate all of the solutions to the Navier-Stokes equations?
1answer
62 views
### Calculating the dimensional wall-normal coordinate for a self-similar compressible boundary layer using Levy-Lees transformation
How can I convert my self-similar boundary layer solution that is a function of the nondimensional wall-normal coordinate $\eta$ to be a function of dimensional $y$? For instance, if I determine from ...
3answers
425 views
### Apparent paradox in equation of continuity
The equation of continuity tells us that if we push some fluid into a tube, the same amount of fluid will come out from the other end. If we make a small hole in a hose pipe, water will come out with a ...
2answers
146 views
### Why does water in the sink follow a curved path?
When you fill the sink with water and then allow the water to be drained, the water forms a vortex, and then it starts to follow a curved path downwards under the effect of gravity. Why does this phenomenon ...
2answers
339 views
### Multiple source Pipe Network: Given a known outflow, can we deduce the inflows?
I have the following simple Y shape water pipe network: Given $S_{out}$ ( in $m^3/s$), can we compute $S_1$ and $S_2$? For the pipe, we know for each pipe their corresponding diameter $d$, length ...
8answers
2k views
### Why do ice cubes come out easier from top trays?
This is my "hey, I've noticed that too!" question for the week. If you stack two plastic ice cube trays with water in them in a freezer, the resulting ice cubes in the top tray will usually come out ...
http://mathoverflow.net/questions/91550/valuation-ring-of-dimension-2
## valuation ring of dimension 2
I was looking at valuation rings of dimension $2$. I found that such a ring has two non-zero prime ideals, and that the localization at a prime is again a valuation domain. Moreover, there is a one-to-one correspondence between prime ideals and valuation overrings. Dimension $2$ means we have two non-zero prime ideals, so the valuation ring should have two valuation overrings different from the quotient field. We get one by localizing at the prime ideal of height $1$ and another by localizing at the maximal ideal. I have difficulty figuring out the valuation overring corresponding to the maximal ideal. That is, let $V$ be a valuation ring with $0\subseteq p\subseteq m$; then $V\subseteq V_{m} \subseteq V_{(p)}\subseteq V_{(0)} = K$, the quotient field. This second chain suggests that $V$ has dimension $3$ instead of $2$. Where is my mistake?
The maximal chain $0 \subset p \subset m$ shows that $V$ has dimension $2$. That's all. – Harry Mar 18 2012 at 17:32
Thanks Harry. I think this is the point: the localization of a local ring at its maximal ideal is equal to itself. – Rajnish Mar 18 2012 at 19:46
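A minimal worked example (my own illustration using the standard rank-2 valuation ring, not something from the thread) makes the resolution explicit:

```latex
% Standard rank-2 example: let V be the valuation ring of a field K
% with value group \Gamma = \mathbb{Z} \times \mathbb{Z}, ordered
% lexicographically. Its prime ideals form the chain
%     0 \subset p \subset m,  so  \dim V = 2.
%
% One-to-one correspondence between primes and valuation overrings:
%     0 \longleftrightarrow V_{(0)} = K   (the trivial overring),
%     p \longleftrightarrow V_p           (a rank-1 valuation ring),
%     m \longleftrightarrow V_m = V       (V is local, so V_m = V).
%
% Hence the chain V \subseteq V_m \subseteq V_p \subseteq K is really
%     V = V_m \subsetneq V_p \subsetneq K,
% with only two proper inclusions: consistent with \dim V = 2, not 3.
```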
http://www.physicsforums.com/showthread.php?p=3841889
Physics Forums
Page 2 of 8
## Anyone considering a career as a patent attorney?
Quote by Woopydalan How about Biology PhDs?
That'll work... The best gauge is to look at current job postings and see what backgrounds are desired.
Supposing that you have zero knowledge of IP law, what are the skills people are looking for in an interview? Maybe more specific: as a theoretical physicist, with no industry experience, is analytic insight and experience with publishing articles sufficient to get in?
Quote by eendavid Supposing that you have zero knowledge of IP law, what are the skills people are looking for in an interview? Maybe more specific: as a theoretical physicist, with no industry experience, is analytic insight and experience with publishing articles sufficient to get in?
Sorry, what job are you intending to interview for?
I might be interested in a job as a patent attorney (so if I understood correctly, you first apply for trainee patent attorney). I guess what I'm asking is: should one try to get a basic gist of IP law before applying?
I'm a Civil Engineering undergrad at UCLA and I'm also considering minoring in Environmental Engineering. What would you say about pursuing patent law after a few years of work in the field?
Quote by apat007 I'm a Civil Engineering undergrad at UCLA and I'm also considering minoring in Environmental Engineering. What would you say about pursuing patent law after a few years of work in the field?
Those engineering degrees are not "hot" with respect to what job postings are looking for but that's not to say that you can't do it. I think with those degrees you would likely have to have a law degree as well as I believe it would be difficult to get a patent agent position.
Real world scientific experience is always viewed positively.
I hope this answers your question but I understand I'm not saying a lot. I'm trying to give practical advice with respect to the job market based on my experiences and general statistics. Just because it may be an uphill battle doesn't mean it's impossible to become a patent attorney; you may just have to be really determined to make it happen.
Oh okay, I see. Well I wasn't expecting to go into patent law without a law degree. Would it be worth pursuing a law degree or is patent law coupled with Civil or Environmental engineering just not a typical thing to do?
Quote by apat007 Oh okay, I see. Well I wasn't expecting to go into patent law without a law degree. Would it be worth pursuing a law degree or is patent law coupled with Civil or Environmental engineering just not a typical thing to do?
Worth pursuing is a very personal question that I can't answer. CIV E or ENVIOR E + patent law are not "typical" or mainstream but that's not to say it isn't at all worth doing if that's a career you are interested in.
Is it possible to give us an example of something you might encounter on a day-to-day basis? I know there is possibly a privacy issue, so just anything that is as general as possible, so... 1) What kind of technical skills would you mainly employ? 2) How do you even begin to start analysing systems which you've never seen or encountered before? 3) As for electrical engineering, what kind of majoring stream is particularly suited to this role, signals, electronics, telecomm, photonics?
Quote by NewtonianAlch Is it possible to give us an example of something you might encounter on a day-to-day basis? I know there is possibly a privacy issue, so just anything that is as general as possible, so... 1) What kind of technical skills would you mainly employ? 2) How do you even begin to start analysing systems which you've never seen or encountered before? 3) As for electrical engineering, what kind of majoring stream is particularly suited to this role, signals, electronics, telecomm, photonics?
Well today I received an "Office Action" from the USPTO. This Office Action is correspondence rejecting the patent claims over prior patents. I need to take a look at the prior art patents and either argue that the Examiner's rejection is improper or amend the patent claims to distinguish the prior art. Amending the claims is somewhat of a game. You need to distinguish the prior art but you do not want the patent claims to be so narrow that they are difficult to enforce/easy to get around. Before I can really get into this work, however, I need to do a brief analysis first, report the Office Action out to the client and wait for their feedback.
A second project I have for the day is to get a patent illustrator working on drawings for a new patent application. I need to send him the production drawings and brief illustration of what I think we need to disclose the invention and patentable aspects. After that, I will likely begin drafting the patent application (background of the invention, summary, brief description of the drawings, detailed description, and claims).
Currently, I'm also working on some trademark litigation. We're in the discovery stage of litigation and I'm assisting with that process. Last week I attended depositions and helped prepare our client to best answer the questions we anticipated he would be asked.
1) You have to be able to understand that technology you are trying to patent as well as to understand prior art patents, which will be used in evaluating your application. The technical knowledge I use can also be as simple as knowing what to name various parts or scientific concepts. Your technical knowledge serves as a foundation for understanding new inventions and old. It also provides a basis for understanding what's out there already so you can draft patent claims that will not automatically be rejected as being too broad.
2) It often helps to have the inventor sit and walk you through it. Sometimes they will also provide a technical disclosure explaining how it works and what the novel features are. If you are asked to understand the prior art patents or known products/processes, it also helps to have your client briefly explain it to you as they are the experts in their respective technologies.
3) Since I don't work in the EE field, I can't say. I would expect that all of the mentioned focus areas would be desired. It really comes down to this - if it is advancing technology, the marketplace will need people to understand it and patent those new advances.
Hi, you said that your work week isn't typical - 35 hrs per week. From your perspective, what is typical for the number of hours that a patent lawyer works? I know it varies from company to company, but would a 40 - 45 hour work week be unlikely?
Also, what would be a typical salary range after working in as a patent lawyer for 5 years?
Would a 40-45 hour work week be unlikely? I would say most patent attorneys working at mid-sized firms work about 50 hours a week. It's not entirely unlikely, but 40-hour-a-week jobs could be more difficult to find. With the economy for lawyers being in bad shape, it's hard to be picky, especially when you are first starting out.
Typical salary range after working as a patent lawyer for 5 years? Of course this depends on a lot of factors. I would estimate $80K-$175K/yr is a decent range. I know that's not very helpful, but it will depend on the size of the firm you work for, where in the country you are, and how many hours you bill. $175K/yr may sound great, but those people are likely working 60-80 hrs a week in high-pressure positions. I have many friends who make great money, typically in the form of bonuses. They work super hard all year chasing the dollars. The more you work, the more you can earn. These estimates are just my best guess and are not based on any research. For both of these questions it is difficult to generalize for an entire industry. These are ballpark responses and there is a lot of variance.
Admin: A relevant topic - http://www.ted.com/talks/drew_curtis...ent_troll.html It's important to understand patent law and IP law, as well as the technical details of a given technology.
Well, this is extremely uncanny. I noticed this post right after I just posted a question about this. What tests or qualifications do patent attorneys need to have above passing the bar like normal attorneys? Also how is a civil engineering background?
Quote by Classico22 Well, this is extremely uncanny. I noticed this post right after I just posted a question about this. What tests or qualifications do patent attorneys need to have above passing the bar like normal attorneys? Also how is a civil engineering background?
To practice before the USPTO you need to take the patent bar exam. In order to take this exam, you must have a technical background. See prior posts re: what qualifies and how various backgrounds can affect your career options.
Quote by eendavid I might be interested in a job as a patent attorney (so if I understood correctly, you first apply for trainee patent attorney). I guess what I'm asking is: should one try to get a basic gist of IP law before applying?
I'm not sure I understand your question. Assuming you are in law school or will be attending law school, I would take IP classes to get the gist. Be sure to choose a school with a solid IP program. If I misunderstood your question, please clarify.
http://dorigo.wordpress.com/tag/quantum-mechanics/
# A Quantum Diaries Survivor
private thoughts of a physicist and chessplayer
## Testing the Bell inequality with Lambda hyperons (April 14, 2009)
Posted by dorigo in news, physics, science.
Tags: bell inequality, quantum mechanics, quantum optics, stern gerlach
This morning I came back from Easter vacations to my office and was suddenly assaulted by a pile of errands crying to be evaded, but I prevailed, and I still found some time to get fascinated by browsing through a preprint that appeared a week ago on the arXiv, 0904.1000. The paper, by Xi-Qing Hao, Hong-Wei Ke, Yi-Bing Ding, Peng-Nian Shen, and Xue-Qian Li [wow, I'm past the hard part of this post], is titled "Testing the Bell Inequality at Experiments of High Energy Physics". Here is the abstract:
Besides using the laser beam, it is very tempting to directly testify the Bell inequality at high energy experiments where the spin correlation is exactly what the original Bell inequality investigates. In this work, we follow the proposal raised in literature and use the successive decays $J/\psi \to \gamma \eta_c \to \Lambda \bar \Lambda \to p \pi^- \bar p \pi^+$ to testify the Bell inequality. [...] (We) make a Monte-Carlo simulation of the processes based on the quantum field theory (QFT). Since the underlying theory is QFT, it implies that we pre-admit the validity of quantum picture. Even though the QFT is true, we need to find how big the database should be, so that we can clearly show deviations of the correlation from the Bell inequality determined by the local hidden variable theory. [...]
Testing the Bell inequality with the decay of short-lived subatomic particles sounds really cool, doesn't it? Or does it? Unfortunately, my quantum mechanics is too rusty to allow me to put together a careful post which explains things tidily in the short time left between now and a well-deserved sleep. You can read elsewhere about the Bell inequality, and how it tests whether pure quantum mechanics rules -destroying correlations between quantum systems separated by a space-like interval- or whether a local hidden variable theory holds instead: and besides, almost anybody can write a better account of that than me, so if you feel you can help, you are invited to guest-blog about it here.
Besides embarassing myself, I still wanted to mention the paper today, because the authors make a honest attempt at proposing an experiment which might actually work, and which could avoid some drawbacks of all experimental tests so far attempted, which belong to the realm of quantum optics. In their own words,
Over a half century, many experiments have been carried out [...] among them, the polarization entanglement experiments of two-photons and multi-photons attract the widest attention of the physics society. All photon experimental data indicate that the Bell inequality and its extension forms are violated, and the results are fully consistent with the prediction of QM. The consistency can reach as high as 30 standard deviations. [...] when analyzing the data, one needs to introduce additional assumptions, so that the requirement of LHVT cannot be completely satisfied. That is why as generally considered, so far, the Bell inequality has not undergone serious test yet.
Being totally ignorant of quantum optics I am willing to buy the above as true, although, being a sceptical son of a bitch, the statement makes me slightly dubious. Anyway, let me get to the point of this post.
Any respectable quantum mechanic could convince you that in order to check the Bell inequality with the decay chain mentioned above, it all boils down to measuring the correlation between the pions emitted in the decay of the Lambda particles, i.e., the polarization of the Lambda baryons: in the end, one just measures one single, clean angle $\theta$ between the observed final state pions. The authors show that this would require about one billion decays of the $J/\psi$ mesons produced by an electron-positron collider running at 3.09 GeV center-of-mass energy (the mass of the J/psi resonance): this is because the decay chain involving the clean $\Lambda \bar \Lambda$ final state is rare: the branching fraction of $J/\psi \to \eta_c \gamma$ is 0.013, the decay $\eta_c \to \Lambda \bar \Lambda$ occurs once in a thousand cases, and finally, each Lambda hyperon has a 64% chance to yield a proton-pion final state. So, 0.013 times 0.001 times 0.64 squared makes a chance about as frequent as a Pope appointment. However, if we had such a sample, here is what we would get:
The plot shows the measured angle between the two charged pions one would obtain from 3382 pion pairs (resulting from a billion $J/\psi \to \eta_c \gamma$ decays through double hyperon decay) compared with pure quantum mechanics predictions (the blue line) and by the Bell inequality (the area within the green lines). The simulated events are taken to follow the QM predictions, and such statistics would indeed refute the Bell inequality -although not by a huge statistical margin.
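As a side note, the branching-fraction arithmetic two paragraphs up is easy to check in a few lines (the fractions are the ones quoted in the post; the simulated count of 3382 pairs presumably also folds in selection effects that are not modeled here):

```python
# Expected number of p pi- pbar pi+ final states from N produced J/psi,
# using only the branching fractions quoted in the post.
BR_JPSI_TO_ETAC_GAMMA = 0.013  # J/psi -> eta_c gamma
BR_ETAC_TO_LL = 0.001          # eta_c -> Lambda Lambdabar ("once in a thousand")
BR_LAMBDA_TO_PPI = 0.64        # Lambda -> p pi (for each hyperon)

def expected_pairs(n_jpsi):
    """Raw yield of golden decay chains, before any detector effects."""
    return n_jpsi * BR_JPSI_TO_ETAC_GAMMA * BR_ETAC_TO_LL * BR_LAMBDA_TO_PPI**2

per_decay = expected_pairs(1)
print(f"chance per J/psi decay: {per_decay:.2e}")         # 5.32e-06
print(f"pairs from 1e9 J/psi:   {expected_pairs(1e9):.0f}")  # 5325
```

The raw yield of roughly 5300 chains per billion $J/\psi$ is indeed "about as frequent as a Pope appointment" per decay, and of the same order as the 3382 simulated pairs in the plot.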
So, the one above is an interesting distribution, but if the paper was all about showing they can compute branching fractions and run a toy Monte Carlo simulation (which even I could do in the time it takes to write a lousy post), it would not be worth much. Instead, they have an improved idea, which is to apply a suitable magnetic field and exploit the anomalous magnetic moment of the Lambda baryons to measure simultaneously their polarization along three independent axes, turning a passive measurement -one involving a check of the decay kinematics of the Lambda particles- into an active one -directly figuring out the polarization. This is a sort of double Stern-Gerlach experiment. Here I would really love to explain what a Stern-Gerlach experiment is, and even more to make sense of the above gibberish, but for today I feel really drained out, and I will just quote the authors again:
One can install two Stern-Gerlach apparatuses at the two sides, with flexible angles with respect to the electron-positron beams. The apparatus provides a non-uniform magnetic field which may deflect the trajectory of the neutral $\Lambda$ ($\bar \Lambda$) due to its non-zero anomalous magnetic moment, i.e. the force is proportional to $\frac{\partial}{\partial n}(-\vec{\mu} \cdot \vec{B})$, where $\vec{\mu}$ is the anomalous magnetic moment of the $\Lambda$, $\vec{B}$ is a non-uniform external magnetic field and $\partial/\partial n$ is a directional derivative. Because the $\Lambda$ is neutral, the Lorentz force does not apply, therefore one may expect to use the apparatus to directly measure the polarization [...]. But one must first identify the particle flying into the Stern-Gerlach apparatus [...]. It can be determined by its decay products [...]. Here one only needs the decay products to tag the decaying particle, but does not use them to do kinematic measurements.
I think this idea is brilliant and it might actually be turned into a technical proposal. However, the experimental problems connected to setting up such an apparatus, detecting the golden decays in a huge background of impure quantum states, and capturing Lambdas inside inhomogeneous magnetic fields, are mind-boggling: no wonder the authors do not have a Monte Carlo for that. Also, it remains to be seen whether such pains are really called for. If you ask me, quantum mechanics is right, period: why bother?
http://mathhelpforum.com/pre-calculus/182332-geometric-sum-involving-complex-numbers.html
|
1. ## Geometric Sum, involving complex numbers
Hi Forum!
I came across an exercise involving both geometric sum and complex numbers.
I've read through Wikipedia and found: The summation formula for geometric series remains valid even when the common ratio is a complex number.
So, for the question:
A sequence has ten terms with first term $i$, where $i = \sqrt{-1}$. Each subsequent term is $2i$ times the previous term. That is, the 2nd term is $i \cdot 2i$.
The sum of this sequence is a + bi . What is the value of |a + b|?
If we wish to get the sum of the geometric progression we use:
$S_n=a(1-r^n)/(1-r)$
Using this only gives headaches, since we are searching for an $a+bi$ solution.
So, what would be the best approach? --besides calculating all the terms up to n=10.
Is it really possible to use the geometric sum formula here?
Thanks!
2. Originally Posted by Zellator
Hi Forum!
I came across an exercise involving both geometric sum and complex numbers.
I've read through Wikipedia and found: The summation formula for geometric series remains valid even when the common ratio is a complex number.
So, for the question:
A sequence has ten terms with first term $i$, where $i = \sqrt{-1}$. Each subsequent term is $2i$ times the previous term. That is, the 2nd term is $i \cdot 2i$.
The sum of this sequence is a + bi . What is the value of |a + b|?
If we wish to get the sum of the geometric progression we use:
$S_n=a(1-r^n)/(1-r)$
Using this only gives headaches, since we are searching for an $a+bi$ solution.
So, what would be the best approach? --besides calculating all the terms up to n=10.
Is it really possible to use the geometric sum formula here?
Thanks!
I don't see what would cause a "headache". Roll up your sleeves, use the formula, get an answer and then convert the answer into the form a + ib.
If you need more help, please show your work and say where you are stuck.
3. Originally Posted by mr fantastic
I don't see what would cause a "headache". Roll up your sleeves, use the formula, get an answer and then convert the answer into the form a + ib.
If you need more help, please show your work and say where you are stuck.
Hi mr fantastic (sorry about the wrong section thing, the test is said to be of the lowest level of 3, I thought it was alright.)
Ok, this is my first contact with complex numbers so:
$a_{10}=-2^9$ since $i\cdot(2i)^9=2^9\cdot i^{10}=-2^9$
$a_1=i$
$r=2i$
$S_{10}=[i-2i(-2^9)]/(1-2i)$
$S_{10}=(i+2^{10}i)/(1-2i)$
$S_{10}=i(1+2^{10})/(1-2i)$
We could go a little further and multiply the whole equation by (1-2i), but this would give the first equation -gradually- once more.
Am I missing something here?
Thanks.
4. Are you trying to multiply by the conjugate? If you are, you are doing it wrong, because the conjugate is (1+2i), not (1-2i).
Also I'm not quite sure how you got your numerator...
The formula says $a(1-r^n)$
We know n is 10, a is i and r is 2i.
$i(1-(2i)^{10})$
Have you learnt to deal with complex numbers...?
You have to know that $i^2 = -1$ so $i^3 = -i$
5. Originally Posted by jgv115
Are you trying to multiply by the conjugate? If you are, you are doing it wrong, because the conjugate is (1+2i), not (1-2i).
Also I'm not quite sure how you got your numerator...
The formula says $a(1-r^n)$
We know n is 10, a is i and r is 2i.
$i(1-(2i)^{10})$
Have you learnt to deal with complex numbers...?
You have to know that $i^2 = -1$ so $i^3 = -i$
Hi jgv115!
Yes, I know that $i^3 = -i$, that's why we get $i+2^{10}i$.
So that was the problem; easy as it may seem, I've never learnt to deal with complex numbers.
That is a good thing to learn!
Ok, all sorted out here!
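For anyone reaching this thread later, the whole exercise is easy to sanity-check numerically (a quick sketch using Python's built-in complex type; not part of the original thread):

```python
# First term i, common ratio 2i, ten terms: sum directly
# and compare with the closed form S_n = a(1 - r^n)/(1 - r).
a, r, n = 1j, 2j, 10
s_direct = sum(a * r**k for k in range(n))
s_formula = a * (1 - r**n) / (1 - r)
print(s_direct)                              # (-410+205j): a = -410, b = 205
print(abs(s_direct.real + s_direct.imag))    # |a + b| = 205.0
```

Both routes agree, so the geometric sum formula does work with a complex ratio here.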
http://math.stackexchange.com/questions/220532/one-question-about-proof-of-martingale-representation-theorem?answertab=oldest
|
# One question about proof of martingale representation theorem
Does anyone know a book where I can find the proof of the martingale representation theorem in detail? I.e., that any $F_B$-adapted local martingale $M$ is continuous and can be written as a stochastic integral with respect to the Brownian motion.
There is a proof in my notes. It says there exists a sequence of stopping times $\{T_n\}$ such that $M^{T_n}$ is a bounded martingale. But since we don't yet know that $M$ is continuous, I don't see how to construct a well-defined sequence of stopping times making $M^{T_n}$ a bounded martingale.
Thanks!
-
## 1 Answer
There are many references, for example Philip E. Protter's book Stochastic Integration and Differential Equations.
On the net, there is George Lowther's blog Almost Sure, where this post should do the trick.
Best regards
-
http://mathoverflow.net/questions/71298/boundedness-criterion-for-operators-on-mixed-lebesgue-spaces
|
## Boundedness criterion for operators on mixed Lebesgue spaces
Define the mixed Lebesgue space $l_{p,q}$ as the space of all doubly indexed sequences ${\bf a}= (a(i,j))_{i,j\in\mathbb{Z}}$ such that
```\begin{equation}
\|{\bf a}\|_{p,q} := \left( \sum_{i\in\mathbb{Z}} \left(
\sum_{j\in\mathbb{Z}} |a(i,j)|^p \right)^{q/p} \right)^{1/q}
<\infty.
\end{equation}```
Such spaces are sometimes called mixed Lebesgue spaces, Lebesgue-Bochner spaces or Strichartz spaces. I am interested in boundedness conditions for operators on $l_{p,q}$, namely I have the following concrete question:
Given a matrix operator `${\bf A}=\left(A(i,j;i'j')\right)_{i,j,i',j'\in\mathbb{Z}}$` acting on $l_{p,q}$ define
```$$
C:=\max\left(\sup_{i,j}\sum_{i',j'}|A(i,j;i',j')|,\sup_{i',j'}\sum_{i,j}|A(i,j;i',j')|
,\sup_{i',j}\sum_{i,j'}|A(i,j;i',j')|,\sup_{i,j'}\sum_{i',j}|A(i,j;i',j')|\right).
$$```
Do we have
`$$\|{\bf A}\|_{l_{p,q}\to l_{p,q}}\le C \mbox{ for }p,q \geq 1 ?$$`
If $p=q$ this holds by Riesz-Thorin interpolation and considering the cases $p=q=1$ and $p=q=\infty$.
-
Note that by interpolation, only the cases $p=1,q=\infty$ and $p=\infty,q=1$ need to be checked. And by duality, one is in fact enough. – Mikael de la Salle Jul 26 2011 at 9:52
... and $A$ given by $A(i,j;i',j') = 1_{i=j'} 1_{j=i'}$ gives a counterexample. It indeed satisfies $C=1$, whereas it is not bounded on $\ell_{p,q}$ unless $p=q$. – Mikael de la Salle Jul 26 2011 at 10:01
Thanks! I have modified the definition of C and the new question asks if the estimate holds with this definition. – Philipp Jul 26 2011 at 10:21
Ok, my guess is that the answer is no. One has to replace the term `$$ \sup_{i,j'}\sum_{i',j} \dots $$` with `$$\sup_{i}\sum_{i'}\sup_{j'}\sum_{j}\dots,$$` and then it works. – Philipp Jul 26 2011 at 11:49
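For what it's worth, the unboundedness in Mikael de la Salle's counterexample can be seen on finite truncations. The sketch below (my own illustration, not from the thread) takes the flip operator $A(i,j;i',j')=1_{i=j'}1_{j=i'}$, which acts as matrix transpose and satisfies $C=1$, and shows its norm ratio on $n\times n$ truncations growing like $n^{1/p-1/q}=\sqrt{n}$ for $p=1$, $q=2$:

```python
def mixed_norm(a, p, q):
    """||a||_{p,q} = ( sum_i ( sum_j |a(i,j)|^p )^{q/p} )^{1/q} on finite arrays."""
    return sum(sum(abs(x) ** p for x in row) ** (q / p) for row in a) ** (1 / q)

p, q = 1, 2
ratios = {}
for n in (4, 16, 64):
    # a(i,j) = 1 exactly when j = 0: one nonzero entry in each of the n rows.
    a = [[1.0 if j == 0 else 0.0 for j in range(n)] for _ in range(n)]
    flipped = [list(row) for row in zip(*a)]    # the flip operator acts as transpose
    ratios[n] = mixed_norm(flipped, p, q) / mixed_norm(a, p, q)
    print(n, ratios[n])   # 2.0, 4.0, 8.0: grows like sqrt(n)
```

Here $\|a\|_{1,2}=\sqrt n$ while the flipped array concentrates all mass in one row, giving norm $n$, so no finite constant independent of $n$ can work when $p\neq q$.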
http://mathoverflow.net/questions/87672/basis-for-m-k-gamman-with-fourier-coeffs-in-mathbbz-zeta-n/87789
|
## Basis for $M_k(\Gamma(N))$ with Fourier coeffs in $\mathbb{Z}[\zeta_N]$?
Hi all.
Recently I read that the space $M_k(\Gamma(N))$ of modular forms that are holomorphic everywhere (also at the cusps) possesses a basis having Fourier coefficients in $\mathbb{Z}[\zeta_N]$, where $\zeta_N = e^{2 \pi i / N}$.
Can somebody point out a reference for this?
I already know the following: at least for $k \geq 2$, $S_k(\Gamma(N))$ -- the subspace of cusp forms -- possesses a basis having Fourier coefficients in $\mathbb{Z}$ (see Shimura, Thm 3.52). What is missing are the Eisenstein series $G^{v}$ (see Diamond/Shurman, Thm 4.2.3). All the Fourier coefficients except the first one do indeed lie inside $\mathbb{Z}[\zeta_N]$ (up to a constant in $\mathbb{Q}$), but the constant term of the Eisenstein series is (in the case that $v_1 \equiv 0 \mod N$) the term
$\sum_{n \in \mathbb{Z} \setminus \{0\}, n \equiv v_1 \mod N} \frac{1}{n^k}$
This is the Hurwitz zeta function up to the factor $N^{-k}$. The question here is: is this value in $\mathbb{Z}[\zeta_N]$ (up to some denominator), or is there a completely different way to see that such a basis with Fourier coefficients in $\mathbb{Z}[\zeta_N]$ exists?
Note that I am aware of this post: http://mathoverflow.net/questions/78043/is-there-a-miller-basis-for-m-kn but I was not able to locate the result in these books.
best and thanks!
Fabian Werner
-
I guess you mean the value divided by $\pi^k$ ? – François Brunault Feb 6 2012 at 16:12
Fabian, don't forget that the higher Fourier coefficients are all multiplied by $\pi^k$ (look at the definition of $C_k$ in Diamond/Shurman Theorem 4.2.3). – BR Feb 6 2012 at 20:52
(So, in fact, the Eisenstein series can be normalized.) – BR Feb 6 2012 at 21:00
Fabian, I actually don't know any better methods than those Francois mentions in his answer. But, there is a general principle that the zero-th term in the Fourier expansion of a modular form lies in the field generated by the higher Fourier coefficients (see, e.g., math.umn.edu/~garrett/m/v/…), so I knew it had to work even if I didn't know how! – BR Feb 9 2012 at 3:21
Here is the reference to the rationality principle used by Klingen (in the more general context of Hilbert modular forms) : ams.u-strasbg.fr/mathscinet/search/…*&s6=&s7=&s8=All&vfpref=html&yearRangeFirst=&yearRangeSecond=&yrop=eq&r=2&mx-pid=133304 (page 266). Note that in order to apply it here, you will need to know that $M_k(\Gamma(N))$ admits a basis having all Fourier coefficients in $\mathbf{Q}(\zeta_N)$, so this is a little bit circular... – François Brunault Feb 9 2012 at 10:49
## 1 Answer
The constant term of the Eisenstein series $G_k^{0,v}$ in Diamond-Shurman is, up to a factor $N^k$, given by
$$\zeta(k,\frac{v}{N}) + (-1)^k \zeta(k,-\frac{v}{N})$$
where $\zeta(s,x) = \sum_{\substack{n \in \mathbf{Q}_{>0}, \ n \equiv x \mod{1}}} \frac{1}{n^s}$ is the Hurwitz zeta function.
You can prove by hand that this constant term indeed lies in $\pi^k \cdot \mathbf{Q}(\zeta_N)$. This is a tedious exercise (which I admit I haven't done) using the functional equation of the Hurwitz zeta function linking $\zeta(s,\cdot)$ and $\zeta(1-s,\cdot)$ and the fact that $\zeta(1-k,x) \in \mathbf{Q}[x]$ for any $k \geq 1$ (it is given by a Bernoulli polynomial). For these two facts see for example Wikipedia.
The more conceptual explanation is that $\Gamma(N) \backslash (\mathcal{H} \cup \mathbf{P}^1(\mathbf{Q}))$ admits a canonical model $X(N)$ defined over $\mathbf{Q}(\zeta_N)$ (see Shimura, Introduction to the arithmetic theory of automorphic functions, Chapter 6). Moreover, there is a more conceptual definition of Eisenstein series of weight $k$ as sections of $\mathcal{L}^{\otimes k}$, where $\mathcal{L}$ is a certain line bundle on $X(N)$ (defined using the universal elliptic curve over $Y(N)$). Since the cusps of $X(N)$ are rational over $\mathbf{Q}(\zeta_N)$, the Fourier coefficients of these Eisenstein series belong automatically to $\mathbf{Q}(\zeta_N)$. It then suffices to check that these Eisenstein series coincide with $G_k^{0,v}$ (suitably divided by $(2\pi i)^k$). One reference I know for this point of view is Kato, $p$-adic Hodge theory and values of zeta functions of modular forms, Astérisque 295, section 3.
-
@Francois: I tried the first version but unfortunately i have no idea on how to filter out one particular evaluation of the Hurwitz Zeta function in the functional relation. Was this proposed as an excercise in some book? Were there any hints or so? – Fabian Werner Feb 8 2012 at 9:10
(i.e. the functional equation only seems to say: if one of the values $\zeta(k, \frac{d}{N})$ is nice, so are the other ones) – Fabian Werner Feb 8 2012 at 9:12
@Fabian : I won't write the exercise for you, but here are some hints : take the formula given by Wikipedia and evaluate at $s=k$. On the left hand side you have the value $\zeta(1-k,m/N)$, which you know is rational (by the second fact I mentioned). And on the right hand side you have the values $\zeta(k,\cdot)$, which you're interested in. You can package all the formulas for $1 \leq m \leq N$ by saying the vector of $\zeta(1-k,m/N)$'s is some matrix times the vector of $\zeta(k,m'/N)$'s. Then it suffices to check the matrix has coefficients in $\mathbf{Q}(\zeta_N)$ and is invertible. – François Brunault Feb 8 2012 at 9:39
Maybe i am completely blind but... Lets take $N=3$ and $k$ divisible by 4 then (for $c$ some constant) $\zeta(1-k, \frac{d}{N}) = c * \sum_{v=1}^{N} cos(\frac{k\pi}{2} - \frac{2\pi d v}{N}) \zeta(k, \frac{v}{N})$ now $cos(\frac{k\pi}{2} - \frac{2\pi d v}{N}) = cos(\frac{2\pi d v}{N}) = \frac{1}{2}(e^{2\pi i dv/N} + e^{-2\pi i dv/N}) = 1/2(x^{vd} + \overline{x}^{vd})$ ($x = e^{2\pi i / 3}$) so the matrix is given by $\begin{pmatrix} x+\overline{x} & x^2 + \overline{x}^2 & 2 \\ x^2 + \overline{x}^2 & x + \overline{x} & 2 \\ 2 & 2 & 2\end{pmatrix}$ but this matrix is clearly not invertible... – Fabian Werner Feb 8 2012 at 15:57
I was being somewhat sloppy. What you should consider is the vector of values $\zeta(k,v/N)+(-1)^k \zeta(k,-v/N)$, instead of $\zeta(k,v/N)$. Otherwise, you cannot get algebraic values (this is related to the following recent MO question : mathoverflow.net/questions/87348/…). – François Brunault Feb 8 2012 at 17:01
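For what it's worth, the "by hand" route can at least be sanity-checked numerically. The pure-Python sketch below (my own; the cutoff and loose tolerance reflect the $O(1/\mathrm{cutoff})$ tail of the sum) checks that, with the convention $\zeta(s,x)=\sum_{n>0,\,n\equiv x \bmod 1} n^{-s}$ from the answer, the constant term for $N=3$, $v=1$, $k=2$ is the rational multiple $\frac{4}{3}\pi^2$:

```python
import math

def constant_term(k, v, N, cutoff=1_000_000):
    # zeta(k, v/N) + (-1)^k zeta(k, -v/N): sum over n = m + v/N and n = m + (N - v)/N
    total = 0.0
    for m in range(cutoff):
        total += (m + v / N) ** (-k) + (-1) ** k * (m + (N - v) / N) ** (-k)
    return total

ratio = constant_term(2, 1, 3) / math.pi ** 2
print(ratio)   # ~1.333..., i.e. the constant term is (4/3) * pi^2
```

Indeed $\zeta(2,\frac13)+\zeta(2,\frac23)=9\sum_{3\nmid n}n^{-2}=9\cdot\frac{8}{9}\cdot\frac{\pi^2}{6}=\frac{4\pi^2}{3}$, consistent with the claimed rationality after dividing by $\pi^k$.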
http://mathoverflow.net/questions/80321/is-the-stalk-of-the-colimit-of-sheaves-equal-to-the-colimit-of-the-stalks
|
## Is the stalk of the (co)limit of sheaves equal to the (co)limit of the stalks?
More precisely, if $\mathcal F_i$ is a system of sheaves, is it the case that $$(\lim \mathcal F_i)_p = \lim ((\mathcal F_i)_p)$$ and similarly for colimits? I can see how to get a map $$(\lim \mathcal F_i)_p \rightarrow \lim ((\mathcal F_i)_p)$$ by taking the stalks in the diagram for $\lim \mathcal F_i$ and then using the universal property of $(\lim \mathcal F_i)_p$, but I can't see how I should use the sheaf condition to go the other way.
I am learning algebraic geometry and taking a class based on Hartshorne's book, and we showed that a map of sheaves is injective/surjective iff it is injective/surjective on stalks. Since one can check injectivity/surjectivity by computing the kernel/cokernel, I am thinking that what I wrote above would be a generalization of this fact.
Looking at sheafification as an adjoint and how it interacts with limits and colimits really made me feel more comfortable with sheafification, so I'm hoping that looking at the interaction between sheaves and their stalks in terms of limits and colimits will give me some intuition.
-
Yes, Hartshorne's book is not too good for this type of fundamental categorical fact. The answer given by Martin is very familiar to category theorists; see for instance the book by Mac Lane and Moerdijk on topos theory (where they discuss sheaves over a space). You can also think of the process of taking a stalk as a particular kind of colimit, which always commutes with colimits. It commutes with finite limits because taking a stalk is an example of a filtered colimit (see Categories for the Working Mathematician). – Todd Trimble Nov 7 2011 at 19:24
I recommend George Kempf's little paperback, Algebraic Varieties. The (true) result you want is Lemma 4.4.1. Kempf covers in 140 pages sheaves, varieties, cohomology, and RRT for curves and surfaces. – roy smith Nov 8 2011 at 5:13
## 1 Answer
Let $F$ be a sheaf on $X$ and $p \in X$. Then $F_p$ is just the pullback $i^{-1} F$, where $i : \{p\} \to X$ is the inclusion of a point. Now $i^{-1}$ is left adjoint to $i_*$, thus cocontinuous, i.e. preserves all colimits. This shows that the canonical morphism $\mathrm{colim}_i(F_p) \to \mathrm{colim}_i(F)_p$ is an isomorphism.
Now for limits, we get a canonical morphism $(\mathrm{lim}_i F)_p \to \mathrm{lim}_i(F_p)$. This is almost never an isomorphism (neither injective nor surjective). Consider infinite products and see what happens. However, the (left) exactness of $F \mapsto F_p$ means that this functor preserves finite limits.
-
Thanks, I think this is exactly what I was looking for. – Drew Nov 7 2011 at 19:30
http://quant.stackexchange.com/questions/2508/modeling-interest-rates-with-correlation
|
Modeling interest rates with correlation
I'm trying to model interest rates, and will use the following equation:
$dr = \mu r dt + \sigma r dW$
I'm also being told that interest rates are 40% correlated to S&P returns. How can I include correlation to the S&P into this equation? (It is pretty weird that interest rates are being correlated to S&P returns)
-
First, that is a diffusion equation usually used to model equities, mainly because interest rates are assumed to mean revert to some long-range target. A better model would be the Vasicek model. Second, what is the function that is dependent on the diffusion? – strimp099 Dec 4 '11 at 14:40
What short rate model is this that multiplies $r$ with drift and $r$ with diffusion? What probability measure is this specified under? – Jase Dec 7 '12 at 1:55
1 Answer
The model you assume for the interest rate process is a Geometric Brownian Motion.
As strimp099 highlights in his comments, it is mainly used to model equities, because most of the time you want your interest rate models to be positive and mean-reverting.
A few models have been developed: Vasicek, CIR, Hull-White (HW). You could take your pick among them.
As for the correlation, the idea is to make your process $r_t$ depend on a multi-dimensional Brownian motion, for example two-dimensional, where the first component is specific to the interest rate process and the other is the Brownian motion used in the equity model (representing your S&P 500).
Example:
$$dr_t=a(b-r_t)dt+\sigma(dW^1_t+dW^2_t)$$
with
$$dS_t = \mu S_t dt + \sigma S_t dW^2_t$$
This is how you "induce" correlation: by having the same Brownian motion in the dynamics of the two processes. You could also have $r_t$ occurring somewhere in $dS_t$.
In your question, you discuss the S&P, but it's really important to understand that including the correlation requires you to define a model for the S&P as well, which is the $S_t$ in my example.
-
For what it's worth, if he is trying to model interest rates with, say, a 1 week horizon then GBM is not a horrible approximation. The mean reversion term would be negligible over such a period. – Brian B Dec 5 '11 at 17:32
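A small simulation sketch (my own, with illustrative parameters; the Vasicek dynamics follow the comments above): to hit a target correlation of $\rho = 0.4$ between the two drivers, one usually sets $dW^r_t = \rho\,dW^S_t + \sqrt{1-\rho^2}\,dW^\perp_t$ rather than an unweighted sum of two Brownian motions:

```python
import math, random

rho, dt = 0.4, 1.0 / 252                  # target correlation, daily time step
a, b, sig_r, r0 = 0.5, 0.03, 0.01, 0.03   # Vasicek: dr = a(b - r) dt + sig_r dW_r
mu, sig_s = 0.07, 0.20                    # GBM: dS/S = mu dt + sig_s dW_s

rng = random.Random(0)
drs, dlogs = [], []
for _ in range(200_000):
    z_s = rng.gauss(0.0, 1.0)             # drives the equity
    z_perp = rng.gauss(0.0, 1.0)          # independent component for the rate
    z_r = rho * z_s + math.sqrt(1.0 - rho ** 2) * z_perp   # corr(z_r, z_s) = rho
    drs.append(a * (b - r0) * dt + sig_r * math.sqrt(dt) * z_r)
    dlogs.append((mu - 0.5 * sig_s ** 2) * dt + sig_s * math.sqrt(dt) * z_s)

def corr(xs, ys):
    m_x, m_y = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - m_x) * (y - m_y) for x, y in zip(xs, ys))
    var_x = sum((x - m_x) ** 2 for x in xs)
    var_y = sum((y - m_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

c = corr(drs, dlogs)
print(c)   # close to the target 0.4
```

The sample correlation of the one-step increments recovers the 40% figure quoted in the question, whatever the drift and volatility parameters.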
http://physicspages.com/2011/02/15/hermite-differential-equation-generating-functions/
|
Notes on topics in science
## Hermite differential equation – generating functions
Required math: calculus
Required physics: none
Hermite’s differential equation shows up during the solution of the Schrödinger equation for the harmonic oscillator. The differential equation can be written in the form
$\displaystyle \frac{d^{2}f}{dy^{2}}-2y\frac{df}{dy}+(\epsilon-1)f=0$
but an analysis of the series solution of this equation shows that the parameter ${\epsilon}$ has to have the form
$\displaystyle \epsilon=2n+1$
for some integer ${n}$, so we can rewrite the differential equation as
$\displaystyle \frac{d^{2}f}{dy^{2}}-2y\frac{df}{dy}+2nf=0$
We know the solutions of this equation are polynomials in ${y}$, and we got (from the series solution) a recursion formula for the coefficients of the polynomials, but a recursion formula can be difficult to work with, and it turns out that there is another form that can be used to work with these polynomials. This uses the idea of the generating function.
The idea is that we can write a function ${S(y,s)}$, where ${y}$ is the same ${y}$ as in the differential equation, and ${s}$ is a kind of dummy variable that allows us to do calculations (as we’ll see in a moment). Suppose we define this function as follows:
$\displaystyle S(y,s)\equiv e^{-s^{2}+2sy}\ \ \ \ \ (1)$
From the expansion of the exponential in a Taylor series, we can also write this as
$\displaystyle S(y,s)=\sum_{m=0}^{\infty}\frac{(-s^{2}+2sy)^{m}}{m!}=\sum_{m=0}^{\infty}\frac{s^{m}(2y-s)^{m}}{m!}$
At first (and probably second) glance, this formula seems to have little relation to Hermite polynomials, but let’s write out the first few terms of the series
$\displaystyle S(y,s)=1+\frac{s(2y-s)}{1!}+\frac{s^{2}(2y-s)^{2}}{2!}+\frac{s^{3}(2y-s)^{3}}{3!}+...$

$\displaystyle =1+2ys+(-1+2y^{2})s^{2}+(-2y+\frac{4}{3}y^{3})s^{3}+...$
In the second line, we regrouped the series so that terms with the same power of ${s}$ are grouped together. The ${m^{th}}$ term in the series contains terms involving ${s}$ to the ${m^{th}}$ and higher powers only, so if we want to isolate the terms for a particular power (say the ${n^{th}}$ power) of ${s}$ we need to look at only the terms up to ${m=n}$. What do we get if we look at the terms involving each successive power of ${s}$, starting with the zeroth power? As can be seen above, the term involving ${s^{m}}$ is multiplied by a polynomial in ${y}$, and by comparing these polynomials with those obtained from our earlier definition of the Hermite polynomials, we can see that each polynomial here is ${H_{m}(y)/m!}$. That is
$\displaystyle S(y,s)=\sum_{m=0}^{\infty}\frac{H_{m}(y)}{m!}s^{m}\ \ \ \ \ (2)$
Obviously we haven’t proved this in general, but this function may also be taken as the definition of Hermite polynomials, as the other definition that we used earlier can be derived from it, as we’ll see at the end of this post.
The Hermite polynomials can be obtained from this generating function by taking derivatives, as follows. Since the ${j^{th}}$ derivative of ${s^{m}}$ is zero if ${m<j}$, taking this derivative will eliminate all terms with ${m<j.}$ The ${j^{th}}$ derivative of ${s^{j}}$ is the constant ${j!}$. For all higher powers where ${m>j}$, the ${j^{th}}$ derivative will leave a term ${s^{m-j}}$. So if we take the ${j^{th}}$ derivative of ${S(y,s)}$ and then set ${s=0}$ we will isolate the single term involving ${H_{j}(y)}$:
$\displaystyle \frac{d^{j}S(y,s)}{ds^{j}}\Big|_{s=0}=j!\,\frac{H_{j}(y)}{j!}=H_{j}(y)$
This is the reason that ${S(y,s)}$ is called a generating function: it provides a relatively simple way of generating all the Hermite polynomials.
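The "generating" claim can be checked mechanically. The pure-Python sketch below (my own, not from the original post) compares ${n!}$ times the coefficient of ${s^{n}}$ in ${S(y,s)}$ against the polynomials produced by the standard three-term Hermite recurrence ${H_{m+1}=2yH_{m}-2mH_{m-1}}$ (a known identity, used here only as an independent reference):

```python
from math import comb, factorial

def hermite_recurrence(n, y):
    # H_0 = 1, H_1 = 2y, H_{m+1} = 2y H_m - 2m H_{m-1}
    h_prev, h = 1.0, 2.0 * y
    if n == 0:
        return h_prev
    for m in range(1, n):
        h_prev, h = h, 2.0 * y * h - 2.0 * m * h_prev
    return h

def hermite_from_S(n, y):
    # n! times the coefficient of s^n in sum_m s^m (2y - s)^m / m!.
    # Expanding (2y - s)^m binomially, the s^n contribution of term m has k = n - m.
    coeff = 0.0
    for m in range(n + 1):
        k = n - m
        if k <= m:    # (2y - s)^m only reaches s^m
            coeff += comb(m, k) * (2 * y) ** (m - k) * (-1) ** k / factorial(m)
    return coeff * factorial(n)

y = 1.7
for n in range(8):
    assert abs(hermite_recurrence(n, y) - hermite_from_S(n, y)) < 1e-6
print("series coefficients of S(y,s) reproduce H_n(y) for n = 0..7")
```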
Since we started by defining the generating function, we should prove that the polynomials that it generates really are solutions of Hermite's differential equation. We can do this by taking derivatives of the generating function (but without the step of setting ${s=0}$). We take derivatives of (1) and (2) and then set the results equal to each other.
$\displaystyle \frac{\partial S}{\partial y}=2se^{-s^{2}+2sy}=\sum_{m=0}^{\infty}\frac{2s^{m+1}}{m!}H_{m}(y)\quad\mathrm{from\;(1)}$

$\displaystyle \frac{\partial S}{\partial y}=\sum_{m=0}^{\infty}\frac{1}{m!}\frac{dH_{m}}{dy}s^{m}\quad\mathrm{from\;(2)}$
Now we use the old trick of requiring these two results to be equal for all values of ${s}$, which implies that the coefficients of each power of ${s}$ must be equal independently. That is
| | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $\displaystyle \sum_{m=0}^{\infty}\frac{2s^{m+1}}{m!}H_{m}(y)$ | $\displaystyle =$ | $\displaystyle \sum_{m=0}^{\infty}\frac{1}{m!}\frac{dH_{m}}{dy}s^{m}$ |
| $\displaystyle \sum_{m=1}^{\infty}\frac{2s^{m}}{(m-1)!}H_{m-1}(y)$ | $\displaystyle =$ | $\displaystyle \sum_{m=1}^{\infty}\frac{1}{m!}\frac{dH_{m}}{dy}s^{m}$ |
In the second line, we adjusted the summation index on the left so that the power of ${s}$ was ${s^{m}}$. On the right, we dropped the ${m=0}$ term since ${dH_{0}/dy=0}$ anyway (since ${H_{0}=1}$). The two sums are now aligned, so we can say
| | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $\displaystyle \frac{2}{(m-1)!}H_{m-1}(y)$ | $\displaystyle =$ | $\displaystyle \frac{1}{m!}\frac{dH_{m}}{dy}$ |
| $\displaystyle 2mH_{m-1}$ | $\displaystyle =$ | $\displaystyle H_{m}'$ |
By a similar process we can take the other derivative with respect to ${s}$:
| | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $\displaystyle \frac{\partial S}{\partial s}$ | $\displaystyle =$ | $\displaystyle (-2s+2y)e^{-s^{2}+2sy}$ |
| $\displaystyle$ | $\displaystyle =$ | $\displaystyle \sum_{m=0}^{\infty}\frac{(-2s+2y)s^{m}}{m!}H_{m}\:\mathrm{from}\;(1)$ |
| $\displaystyle \frac{\partial S}{\partial s}$ | $\displaystyle =$ | $\displaystyle \sum_{m=1}^{\infty}\frac{m}{m!}H_{m}s^{m-1}\:\mathrm{from}\;(2)$ |
We have ignored the ${m=0}$ term in the last line, since the derivative of the first term in the series with respect to ${s}$ is zero. Aligning the powers of ${s}$ gives
| | | |
|------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| $\displaystyle -\sum_{m=1}^{\infty}\frac{2s^{m}}{(m-1)!}H_{m-1}+\sum_{m=0}^{\infty}\frac{2ys^{m}}{m!}H_{m}$ | $\displaystyle =$ | $\displaystyle \sum_{m=0}^{\infty}\frac{1}{m!}H_{m+1}s^{m}$ |
| $\displaystyle -2mH_{m-1}+2yH_{m}$ | $\displaystyle =$ | $\displaystyle H_{m+1}$ |
This relation is valid for all ${m}$ even though the ${m=0}$ case is a bit fortuitous. With ${m=0}$ we get ${2yH_{0}=H_{1}}$ which is true, since ${H_{0}=1}$ and ${H_{1}=2y}$.
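Both recursion relations can be verified by computer. The sketch below (Python, illustrative only) represents each polynomial as a list of coefficients (index = power of ${y}$) and builds ${H_{2}}$ through ${H_{4}}$ from ${H_{0}=1}$ and ${H_{1}=2y}$ using ${H_{m+1}=2yH_{m}-2mH_{m-1}}$:

```python
def next_hermite(m, Hm, Hm_minus_1):
    """Apply H_{m+1} = 2y H_m - 2m H_{m-1} to coefficient lists (index = power of y)."""
    out = [0] * (len(Hm) + 1)
    for p, c in enumerate(Hm):          # 2y * H_m shifts every power up by one
        out[p + 1] += 2 * c
    for p, c in enumerate(Hm_minus_1):  # subtract 2m * H_{m-1}
        out[p] -= 2 * m * c
    return out

# Build H_0 .. H_4 starting from H_0 = 1 and H_1 = 2y
H = [[1], [0, 2]]
for m in range(1, 4):
    H.append(next_hermite(m, H[m], H[m - 1]))

print(H[2], H[3], H[4])  # → [-2, 0, 4] [0, -12, 0, 8] [12, 0, -48, 0, 16]
```

The output matches the standard tables: ${H_{2}=4y^{2}-2}$, ${H_{3}=8y^{3}-12y}$, ${H_{4}=16y^{4}-48y^{2}+12}$.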
From these results we can show that the polynomials do in fact solve Hermite’s differential equation. We do this by showing that the results above allow us to reconstruct the equation. From the second result:
| | | |
|------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| $\displaystyle H_{m+1}$ | $\displaystyle =$ | $\displaystyle 2yH_{m}-2mH_{m-1}$ |
| $\displaystyle H'_{m+1}$ | $\displaystyle =$ | $\displaystyle 2H_{m}+2yH'_{m}-2mH'_{m-1}$ |
From the first result, differentiating gives ${2mH'_{m-1}=H_{m}''}$, and replacing ${m}$ with ${m+1}$ gives ${H'_{m+1}=2(m+1)H_{m}}$, so substituting these into the last line above, we get
$\displaystyle H_{m}''-2yH'_{m}+2mH_{m}=0$
which is Hermite’s equation. QED.
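We can also check the equation directly for a particular polynomial. The sketch below (Python, illustrative only) evaluates ${H_{m}''-2yH_{m}'+2mH_{m}}$ at a point, using coefficient lists, and confirms it vanishes for ${H_{3}=8y^{3}-12y}$:

```python
def deriv(poly):
    """Derivative of a polynomial given as a coefficient list (index = power)."""
    return [p * c for p, c in enumerate(poly)][1:] or [0]

def hermite_residual(m, Hm, y):
    """Evaluate H_m'' - 2y H_m' + 2m H_m at the point y; should be 0."""
    ev = lambda poly: sum(c * y ** p for p, c in enumerate(poly))
    return ev(deriv(deriv(Hm))) - 2 * y * ev(deriv(Hm)) + 2 * m * ev(Hm)

# H_3 = 8y^3 - 12y as the coefficient list [0, -12, 0, 8]; integer y keeps it exact
print(hermite_residual(3, [0, -12, 0, 8], 2))  # → 0
```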
One final bit of business is to show that the generating function approach is equivalent to the other definition of Hermite polynomials, that is, that it is equivalent to saying
$\displaystyle H_{n}\equiv(-1)^{n}e^{x^{2}}\frac{d^{n}}{dx^{n}}e^{-x^{2}}$
The generating function (1) can be written as
| | | |
|------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| $\displaystyle S(y,s)$ | $\displaystyle \equiv$ | $\displaystyle e^{-s^{2}+2sy}$ |
| $\displaystyle$ | $\displaystyle =$ | $\displaystyle e^{y^{2}-(s-y)^{2}}$ |
so taking the derivative, we get
| | | |
|------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------|
| $\displaystyle \frac{\partial^{n}S}{\partial s^{n}}$ | $\displaystyle =$ | $\displaystyle e^{y^{2}}\frac{\partial^{n}}{\partial s^{n}}e^{-(s-y)^{2}}$ |
| $\displaystyle$ | $\displaystyle =$ | $\displaystyle (-1)^{n}e^{y^{2}}\frac{\partial^{n}}{\partial y^{n}}e^{-(s-y)^{2}}$ |
since for any function ${f(s-y)}$, ${\partial f/\partial s=-\partial f/\partial y}$. Setting ${s=0}$, we reclaim the original definition:
$\displaystyle \frac{\partial^{n}S}{\partial s^{n}}\Big|_{s=0}=(-1)^{n}e^{y^{2}}\frac{\partial^{n}}{\partial y^{n}}e^{-y^{2}}$ so the two definitions are equivalent.
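The equivalence can also be spot-checked by computer. Each derivative of ${e^{-y^{2}}}$ has the form ${P_{n}(y)e^{-y^{2}}}$ with ${P_{n+1}=P_{n}'-2yP_{n}}$, so the original definition gives ${H_{n}=(-1)^{n}P_{n}}$. A Python sketch (not part of the original post) using coefficient lists:

```python
def rodrigues_hermite(n):
    """H_n from the definition (-1)^n e^{y^2} d^n/dy^n e^{-y^2}.

    Writing d^n/dy^n e^{-y^2} = P_n(y) e^{-y^2}, the product rule gives
    P_{n+1} = P_n' - 2y P_n, so H_n = (-1)^n P_n (coefficient lists, index = power).
    """
    P = [1]                                                 # P_0 = 1
    for _ in range(n):
        Pd = [p * c for p, c in enumerate(P)][1:] or [0]    # P'
        nxt = [0] * (len(P) + 1)
        for p, c in enumerate(Pd):
            nxt[p] += c
        for p, c in enumerate(P):                           # -2y * P
            nxt[p + 1] -= 2 * c
        P = nxt
    return [(-1) ** n * c for c in P]

print(rodrigues_hermite(3))  # → [0, -12, 0, 8], i.e. 8y^3 - 12y
```

This agrees with the ${H_{3}}$ obtained from the generating-function recurrences above.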
By , on Tuesday, 15 February 2011 at 17:17, under Calculus, Mathematics. Tags: generating functions, harmonic oscillator, Hermite differential equation, Hermite polynomials. 2 Comments
http://www.physicsforums.com/showthread.php?p=4066335
Physics Forums
## Economic Speculation
My question/topic of discussion is, "Economic Speculation and Hedging". For many people this is just a mysterious thing that big bad Wall Street bankers do, and I want to find out about it.
How does the calculus and the other maths apply to speculation and hedging? I understand that math is required in order to speculate and hedge, but I do not understand how it is applied. How do you determine whether or not a stock is a good bet or not? On the other hand, how do you determine if you should hedge a bet? Can you give an example of the process used.
I understand that most people here are mathematicians and physicists, but I'm fairly confident in my assumption that at least a few people know how the math is applied.
Moved to social sciences.
Okay, didn't see that section when looking to post this.
## Economic Speculation
Look up Value at Risk:
http://www.investopedia.com/articles...#axzz26BYUdVf0
http://en.wikipedia.org/wiki/Value_at_risk
http://en.wikipedia.org/wiki/Financi...Liquidity_risk
In order to hedge you have to have a risk model. Value at Risk (VaR) is one such measure: it quantifies the maximum loss a company is expected to see in a given time frame:
“For example, if a portfolio of stocks has a one-day 5% VaR of $1 million, there is a 0.05 probability that the portfolio will fall in value by more than $1 million over a one-day period if there is no trading. Informally, a loss of $1 million or more on this portfolio is expected on 1 day in 20.”
I believe there are regulatory requirements for banks where, over a given time frame (say a year), a bank must have a probability of at least (say) 99% of not exceeding a level of loss (enough loss to affect their capital adequacy).
Now various types of assets have certain statistical properties such as correlation to the market, anti-correlation to the market, susceptibility to tail events. If we could know the statistical properties of various types of assets we might be able to quantify risk. If we can quantify risk we can choose our assets to keep some risk metric (such as VAR) below a certain threshold.
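To make the VaR idea concrete, here is a minimal sketch (Python; the portfolio figures are invented for illustration) of the simplest "historical" estimate: sort past daily profit-and-loss numbers and read off the loss exceeded only on the worst fraction of days.

```python
def historical_var(pnl, confidence=0.95):
    """Loss threshold exceeded on roughly the worst (1 - confidence) fraction of days."""
    losses = sorted(-x for x in pnl)          # positive numbers = losses
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]

# 20 days of hypothetical daily profit-and-loss (made-up numbers)
daily_pnl = [120, -80, 45, -300, 10, 60, -150, 90, -20, 200,
             -60, 30, -10, 75, -250, 40, -5, 110, -90, 15]
var_95 = historical_var(daily_pnl, 0.95)
print("1-day 95% VaR:", var_95)  # the loss exceeded on roughly 1 day in 20
```

Real risk models are far more elaborate (parametric and Monte Carlo VaR, correlation structure, tail risk), but they all reduce to the same question: what loss level is exceeded with a given small probability over a given horizon?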
Bank regulators assess each bank's risk model and its ability to measure the expected/maximum losses. They de-rate the bank's assets to compensate for past errors in risk predictions. The problem with this is that banks change their risk models over a much shorter period of time than major market events. The consequence is that the risk models always appear better than they actually are. This allows banks to overvalue the worth of their risk-weighted capital, which props up the value of their stock and the amount of money they are allowed to borrow.
When Genius Failed: The Rise and Fall of Long-Term Capital Management. Get this book; it's a gripping story about pioneers in this field. They used the Black-Scholes model (which is based on the heat equation) to model prices of certain assets (mainly government bonds), and when, say, the prices of two bonds with different maturity dates were too different, they made a bet that the prices would converge. But the profits from this were very, very tiny per dollar, so to get any reasonable profits they had to use a huge leverage ratio: borrow huge amounts of stuff from banks that are vital for the non-financial economy. And on one magical day the bet did not work, so they were not able to repay the stuff to the real, non-investment banks. As for how the math is applied, look e.g. for econophysics books on Amazon. Basically anything short of string theory was probably tried on economics.
How does the calculus and the other maths apply to speculation and hedging? I understand that math is required in order to speculate and hedge, but I do not understand how it is applied. How do you determine whether or not a stock is a good bet or not? On the other hand, how do you determine if you should hedge a bet? Can you give an example of the process used.
Mathematics is certainly not required to speculate, and for hedging it depends on how complex that hedging will be. For an example of using statistics in this field, you can check out the book Analysis of Financial Time Series (Econometrics).
http://mathlesstraveled.com/2007/11/15/perfect-numbers-part-ii/
Explorations in mathematical beauty
## Perfect numbers, part II
Posted on November 15, 2007 by
In my last post I introduced the concept of perfect numbers, which are positive integers n for which $\sigma(n) = 2n$, where $\sigma(n)$ denotes the sum of all the divisors of n. Incidentally, we could write the definition of $\sigma(n)$ as
$\displaystyle \sigma(n) = \sum_{d|n} d$,
that is, the sum ranging over all d for which d divides n ($d|n$). Of course, this is slightly nonstandard $\sum$-notation; usually there is some sort of index variable which counts by ones from a starting to an ending value, but here the index variable, d, doesn’t count by ones but instead only takes on values that evenly divide n. This notation is actually fairly common, though, and there’s not really any chance of confusion.
While I’m on a tangent, I should also mention that in general, $\sigma_k (n)$ is used to denote the sum of the kth powers of the divisors of n, that is,
$\displaystyle \sigma_k (n) = \sum_{d|n} d^k.$
For example, $\sigma_3(6) = 1^3 + 2^3 + 3^3 + 6^3 = 252$. So, the function that we have been calling $\sigma(n)$ is really $\sigma_1(n)$, and $\sigma_0(n)$ tells how many divisors n has, instead of giving their sum (do you see why?).
Now, as you may recall, in my last post I claimed the following formula for calculating $\sigma(n)$, assuming that n can be factored as $n = p_1^{\alpha_1}p_2^{\alpha_2} \dots p_m^{\alpha_m}$:
$\displaystyle \sigma(n) = \prod_{i=1}^m \frac{p_i^{\alpha_i+1} - 1}{p_i - 1}.$
But I didn’t explain where this formula comes from! Well, today I’d like to rectify that. Let’s start with an example: let’s add up all the divisors of 18. Simple enough: 1 + 3 + 9 + 2 + 6 + 18 = 39. But we can write this in a slightly different form which shows more clearly what is going on:
$\sigma(18) = 2^03^0 + 2^03^1 + 2^03^2 + 2^13^0 + 2^13^1 + 2^13^2 = 39.$
Do you see the pattern? 18 can be factored as $2^1 3^2$, and every divisor of 18 is of the form $2^r 3^s$, where $0 \leq r \leq 1$ and $0 \leq s \leq 2$. This makes sense — clearly, any divisor of 18 can only use the same prime factors that 18 has, and a given prime factor must occur between zero times and the number of times it occurs in 18 (since otherwise we wouldn’t get a divisor — for example, nothing with a factor of $2^2$ can be a divisor of 18). In fact, we can make a stronger statement than this: we can obtain every single divisor of 18 exactly once by taking $2^r 3^s$ for every possible combination of values for $0 \leq r \leq 1$ and $0 \leq s \leq 2$.
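For instance, the six combinations of ${0 \leq r \leq 1}$ and ${0 \leq s \leq 2}$ really do produce each divisor of 18 exactly once. A quick Python check (not part of the original post):

```python
from itertools import product

# Every divisor of 18 = 2^1 * 3^2 arises exactly once as 2^r * 3^s
# with 0 <= r <= 1 and 0 <= s <= 2:
divisors = sorted(2 ** r * 3 ** s for r, s in product(range(2), range(3)))
print(divisors, sum(divisors))  # → [1, 2, 3, 6, 9, 18] 39
```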
There’s nothing special about 18 here, of course. In general, if we have $n = p_1^{\alpha_1}p_2^{\alpha_2} \dots p_m^{\alpha_m}$, then every divisor of n will be of the form $p_1^{\beta_1} p_2^{\beta_2} \dots p_m^{\beta_m}$, where $0 \leq \beta_i \leq \alpha_i$ (that is, each $\beta_i$ falls between 0 and the corresponding $\alpha_i$, inclusive). We get every divisor exactly once if we take all possible combinations of values for the $\beta_i$‘s.
Now, going back to our example of adding up the divisors of 18 for a moment: notice that we can factor it nicely, like so:
$\displaystyle \begin{array}{rcl} \sigma(18) & = & 2^03^0 + 2^03^1 + 2^03^2 + 2^13^0 + 2^13^1 + 2^13^2 \\ & = & 3^0(2^0 + 2^1) + 3^1(2^0 + 2^1) + 3^2(2^0 + 2^1) \\ & = & (2^0 + 2^1)(3^0 + 3^1 + 3^2) \end{array}$
In other words, to get a divisor of 18, first choose a power of 2 $(2^0 + 2^1)$ and then choose a power of 3 $(3^0 + 3^1 + 3^2)$; multiplying these expressions gives us the sum of all possible choices. In the general case, the sum of all divisors of the form $p_1^{\beta_1} p_2^{\beta_2} \dots p_m^{\beta_m}$, for all possible combinations of values for the $\beta_i$‘s, can be factored as:
$\sigma(n) = (1 + p_1 + p_1^2 + \dots + p_1^{\alpha_1}) \cdots (1 + p_m + p_m^2 + \dots + p_m^{\alpha_m}).$
The dots in the middle indicate that there is one term $(1 + p_i + p_i^2 + \dots + p_i^{\alpha_i})$ for every different prime $p_i$. Again, this represents first choosing a power of $p_1$, then choosing a power of $p_2$, and so on up to $p_m$; multiplying all of these terms gives us the sum of all possible choices. As an example, consider $n = 720 = 2^4 3^2 5$: we obtain
$\begin{array}{rcl} \sigma(720) & = & (2^0 + 2^1 + 2^2 + 2^3 + 2^4)(3^0 + 3^1 + 3^2)(5^0 + 5^1) \\ & = & (1 + 2 + 4 + 8 + 16)(1 + 3 + 9)(1 + 5) \\ & = & 31 \cdot 13 \cdot 6 = 2418. \end{array}$
You can verify for yourself that this is, in fact, the sum of all the divisors of 720!

OK, we’re almost there. The only step that remains is to simplify the sums of the form $S = 1 + p_i + p_i^2 + \dots + p_i^{\alpha_i}$. You may recognize this as a geometric series, a sum of terms where each term is a constant multiple of the previous. Note that $p_iS = p_i + p_i^2 + \dots + p_i^{\alpha_i} + p_i^{\alpha_i + 1}$, so if we subtract S from this, all the terms cancel out except for the first and last, giving
$p_iS - S = p_i^{\alpha_i + 1} - 1,$
and therefore
$\displaystyle S = \frac{p_i^{\alpha_i + 1} - 1}{p_i - 1}.$
Looks familiar, doesn’t it? Sure it does! Now do you see where the formula for $\sigma(n)$ comes from?
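The whole derivation is easy to check by computer. The sketch below (Python, illustrative only) computes $\sigma(n)$ both by brute force and by the product formula, and the two agree on the worked example $n = 720$:

```python
def sigma_brute(n):
    """Sum of divisors by direct search."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def sigma_formula(n):
    """Sum of divisors via the product of (p^(a+1) - 1) / (p - 1) over prime powers."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:    # extract the full power p^a dividing n
                n //= p
                a += 1
            result *= (p ** (a + 1) - 1) // (p - 1)
        p += 1
    if n > 1:                    # leftover prime factor appears to the first power
        result *= n + 1          # (n^2 - 1) / (n - 1) = n + 1
    return result

print(sigma_formula(720), sigma_brute(720))  # → 2418 2418
```

As a bonus, the perfect numbers from the last post check out: `sigma_formula(6)` gives 12 and `sigma_formula(28)` gives 56.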
Next up: finding more perfect numbers!
This entry was posted in algebra, famous numbers, number theory. Bookmark the permalink.
### 8 Responses to Perfect numbers, part II
1. DB says:
Of course! So simple, now that you explained it! Thanks for a really fun and well-written blog.
P.S. There’s a typo: you said 2^r 3^2 when you meant 2^r 3^s, in the paragraph that starts, “Do you see the pattern?”
2. Jonathan says:
I understood these last two posts! What powerful notations exist in math; it feels like programming languages did not so much invent anything as they did translate it. You do a great job translating math, Brent, thanks so much.
3. Brent says:
@DB: Thanks for the kind words! And thanks for the typo catch, I’ve fixed it now.
@Jonathan: Hooray! These last two posts were a bit more notation-heavy/technical than usual so I’m very glad to hear that it made sense to people. And your statement about programming languages is more true than you know; the theory behind computers and programming languages always has been, and still is, heavily based in mathematics.
4. Jonathan says:
I was reading the other Jonathan’s comment, and getting confused. I know it sounds right, but I know I didn’t write that.
The factoring bit is very nice. I think I will give my combinatorics class a day off next week – I’ll steal some of your perfect number material, do some work, and let them search a bit…
Thanks!
(other) Jonathan
5. Brent says:
Jonathan: excellent, steal away. Let me know how your combinatorics class does with it! Perhaps I will write about Catalan numbers sometime…
heh, I know the other Jonathan (a friend from college) wasn’t you; I can tell since I can see posters’ e-mail addresses, although they don’t show up on the comments page.
6. Zan says:
Brent– I found your page somehow a while ago and am enjoying the posts, especially this one. As a child, 6 was my favorite number because it was perfect (evidently this is what happens when one has math teachers for parents). I hope that all is well with you!
-Zan
7. Brent says:
Hi Zan! Glad you’re enjoying the posts. =) That’s really cute, lacking math-teacher parents I don’t think I even knew what perfect numbers were until middle school or so.
I’m doing well, just finished applying to PhD programs in CS, now I just have to wait… hope you’re well too!
8. Jaime Montuerto says:
Hi,
It’s beautiful, simple and elegant. Writing ideas this way creates a bridge to a lot of math enthusiasts. I truly appreciate this blog.
I am not a math expert but I love math mysteries like these. Now I see some interesting relationships between the additive and multiplicative structure of numbers. Are there more structures that can be described like this?
Personally I am working on the mysteries of Mersenne numbers and their relationship with prime numbers. I have observed that there is a close relationship between the two. It’s not surprising that Mersenne numbers exhibit these properties, such as relationships with perfect numbers, Wagstaff primes and, I believe, all prime numbers.
Looking forward to more posts.
Comments are closed.
http://en.wikipedia.org/wiki/Computational_complexity
# Computational complexity theory
(Redirected from Computational complexity)
Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, it tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically.
## Computational problems
An optimal traveling salesperson tour through Germany’s 15 largest cities. It is the shortest among 43,589,145,600[nb 1] possible tours visiting each city exactly once.
### Problem instances
A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g. 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres in length passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
### Representing problem instances
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
### Decision problems as formal languages
A decision problem has only two possible outputs, yes or no (or alternately 1 or 0) on any input.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose answer is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected, or not. The formal language associated with this decision problem is then the set of all connected graphs—of course, to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
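As a concrete illustration (a Python sketch, not from the article), deciding membership in this language amounts to running any graph-search algorithm and accepting iff every vertex is reached:

```python
from collections import deque

def is_connected(n, edges):
    """Decide membership in the language of connected graphs on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Breadth-first search from vertex 0
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n    # "accept" iff every vertex is reachable

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # → True
print(is_connected(4, [(0, 1), (2, 3)]))          # → False
```

Here the graph is encoded as a vertex count plus an edge list; encoding it instead as an adjacency matrix written out as a binary string would define the same language up to an efficient change of representation.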
### Function problems
A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem, that is, it isn't just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers. Similarly, finding the minimum value of a mathematical function f(x) is equivalent to a search on k for the problem of determining whether a feasible point exists for f(x) ≤ k.
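For illustration (a Python sketch with invented examples, not from the article), both recastings are straightforward to express: multiplication becomes membership testing on triples, and minimization becomes a family of threshold decisions over k:

```python
# Decision version of multiplication: is (a, b, c) in { (a, b, c) : a * b = c }?
# Answering such membership queries is as hard as computing a * b itself,
# since c = a * b can be recovered by searching over candidate values of c.
def triple_in_language(a, b, c):
    return a * b == c

# Minimization recast as decision: "is there an x in the domain with f(x) <= k?"
# Searching over k (e.g. by binary search) then recovers the minimum value.
def feasible(f, domain, k):
    return any(f(x) <= k for x in domain)

f = lambda x: (x - 3) ** 2 + 1        # made-up function with minimum 1 at x = 3
print(triple_in_language(6, 7, 42))   # → True
print(feasible(f, range(10), 1))      # → True
print(feasible(f, range(10), 0))      # → False
```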
### Measuring the size of an instance
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. This is usually taken to be the size of the input in bits. Complexity theory is interested in how algorithms scale with an increase in the input size. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?
If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis says that a problem can be solved with a feasible amount of resources if it admits a polynomial time algorithm.
## Machine models and complexity measures
### Turing machine
An artistic representation of a Turing machine
Main article: Turing machine
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a thought experiment representing a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see nondeterministic algorithm.
### Other machine models
Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.[1] What all these models have in common is that the machines operate deterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a nondeterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The nondeterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that nondeterministic time is a very important resource in analyzing computational problems.
### Complexity measures
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n), if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).
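The notion of "operating within time f(n)" can be made concrete with a toy step-counting decider. This is a Python sketch, not an actual Turing machine; the count of loop iterations stands in for the count of state transitions:

```python
def decide_palindrome(w: str):
    """Toy decider for the palindrome language; returns (answer, steps),
    where each loop iteration stands in for one state transition."""
    steps = 0
    i, j = 0, len(w) - 1
    while i < j:
        steps += 1
        if w[i] != w[j]:
            return False, steps
        i += 1
        j -= 1
    return True, steps

# This decider "operates within time f(n)" for f(n) = n: no input of
# length n ever takes more than n steps.
for w in ["", "a", "abba", "abca", "racecar"]:
    answer, steps = decide_palindrome(w)
    assert steps <= len(w)
```

Here the bound f(n) = n is witnessed directly: the loop performs at most one step per pair of input positions.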
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.
### Best, worst and average case complexity
Visualization of the quicksort algorithm that has average case performance $\Theta(n\log n)$.
The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities:
• Best-case complexity: This is the complexity of solving the problem for the best input of size n.
• Worst-case complexity: This is the complexity of solving the problem for the worst input of size n.
• Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n.
For example, consider the deterministic sorting algorithm quicksort. This solves the problem of sorting a list of integers that is given as the input. The worst case is when the pivot is always the smallest or largest element, so that the list is never divided; this happens, for instance, when the input is already sorted or sorted in reverse order, and the algorithm takes time O(n^2) in this case. If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
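The behaviour described above can be observed directly with a naive quicksort that always pivots on the first element (a Python sketch; library sorts choose pivots more carefully precisely to avoid this worst case):

```python
import random

def quicksort(xs):
    """Naive first-element-pivot quicksort; returns (sorted list, number of
    comparisons made)."""
    if len(xs) <= 1:
        return xs, 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    ls, lc = quicksort(left)
    rs, rc = quicksort(right)
    return ls + [pivot] + rs, lc + rc + len(rest)

n = 200
random.seed(0)
_, worst = quicksort(list(range(n)))          # already sorted: worst case
_, avg = quicksort(random.sample(range(n), n))  # a random permutation
print(worst)   # n(n-1)/2 = 19900 comparisons on sorted input
assert avg < worst
```

On sorted input every partition is maximally unbalanced, so the comparison count is exactly n(n-1)/2, matching the quadratic worst case.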
### Upper and lower bounds on the complexity of problems
To classify the computation time (or similar resources, such as space consumption), one is interested in proving upper and lower bounds on the minimum amount of time required by the most efficient algorithm solving a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity, unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).
Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n^2 + 15n + 40, in big O notation one would write T(n) = O(n^2).
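To make the example concrete, one can exhibit witness constants for T(n) = O(n^2); the choice c = 8, n0 = 20 below is one possible pair, and any sufficiently large pair works:

```python
def T(n):
    # The example polynomial from the text.
    return 7 * n**2 + 15 * n + 40

# Witness constants for T(n) = O(n^2): c = 8 and n0 = 20 suffice,
# since 7n^2 + 15n + 40 <= 8n^2 whenever n^2 >= 15n + 40.
c, n0 = 8, 20
assert all(T(n) <= c * n**2 for n in range(n0, 2000))
print(T(100))  # 71540, while c * 100**2 = 80000
```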
## Complexity classes
### Defining complexity classes
A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
• The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
• The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on nondeterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
• The resource (or resources) that are being bounded and the bounds: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc.
Of course, some complexity classes have complex definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:
The set of decision problems solvable by a deterministic Turing machine within time f(n). (This complexity class is known as DTIME(f(n)).)
But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.
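For illustration, the language {xx | x is any binary string} from the example above is trivial to decide on a random-access model such as Python; the quadratic cost is specific to the single-tape machine, which must shuttle back and forth across the tape to compare the two halves:

```python
def in_xx(w: str) -> bool:
    """Decide membership in {xx | x is any binary string}."""
    if len(w) % 2 != 0 or any(c not in "01" for c in w):
        return False
    half = len(w) // 2
    # A single-tape Turing machine needs quadratic time for this
    # comparison; with random access it is a single linear-time slice.
    return w[:half] == w[half:]

assert in_xx("0101")      # x = "01"
assert not in_xx("0110")
```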
### Important complexity classes
A representation of the relation among complexity classes
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
| Complexity class | Model of computation | Resource constraint |
|---|---|---|
| DTIME(f(n)) | Deterministic Turing machine | Time f(n) |
| P | Deterministic Turing machine | Time poly(n) |
| EXPTIME | Deterministic Turing machine | Time 2^poly(n) |
| NTIME(f(n)) | Non-deterministic Turing machine | Time f(n) |
| NP | Non-deterministic Turing machine | Time poly(n) |
| NEXPTIME | Non-deterministic Turing machine | Time 2^poly(n) |
| DSPACE(f(n)) | Deterministic Turing machine | Space f(n) |
| L | Deterministic Turing machine | Space O(log n) |
| PSPACE | Deterministic Turing machine | Space poly(n) |
| EXPSPACE | Deterministic Turing machine | Space 2^poly(n) |
| NSPACE(f(n)) | Non-deterministic Turing machine | Space f(n) |
| NL | Non-deterministic Turing machine | Space O(log n) |
| NPSPACE | Non-deterministic Turing machine | Space poly(n) |
| NEXPSPACE | Non-deterministic Turing machine | Space 2^poly(n) |
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem.
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems.
### Hierarchy theorems
Main articles: time hierarchy theorem and space hierarchy theorem
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n2), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, the time hierarchy theorem states that
$\operatorname{DTIME}\big(f(n) \big) \subsetneq \operatorname{DTIME} \big(f(n) \cdot \log^{2}(f(n)) \big)$.
The space hierarchy theorem states that
$\operatorname{DSPACE}\big(f(n)\big) \subsetneq \operatorname{DSPACE} \big(f(n) \cdot \log(f(n)) \big)$.
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
### Reduction
Main article: Reduction (complexity)
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at least as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
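The squaring-to-multiplication reduction amounts to the following sketch, where `multiply` is a stand-in for whatever multiplication algorithm is available:

```python
def multiply(a: int, b: int) -> int:
    # Stand-in for an arbitrary multiplication algorithm (the "oracle").
    return a * b

def square(n: int) -> int:
    # The reduction: feed the same input to both inputs of multiply.
    return multiply(n, n)

assert square(12) == 144
```

Any improvement to `multiply` immediately transfers to `square`, which is exactly the sense in which squaring is "not more difficult" than multiplication.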
This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.
If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.[2]
## Important open problems
Diagram of complexity classes provided that P ≠ NP. The existence of problems in NP outside both P and NP-complete in this case was established by Ladner.[3]
### P versus NP problem
Main article: P versus NP problem
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special nondeterministic Turing machines, it is easily observed that each problem in P is also member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[2] If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology,[4] and the ability to find formal proofs of pure mathematics theorems.[5] The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.[6]
### Problems in NP not known to be in P or NP-complete
It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete.[3] Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete.[7] If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level.[8] Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to Laszlo Babai and Eugene Luks, has run time $2^{O(\sqrt{n \log n})}$ for graphs with n vertices.
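By way of contrast with the bound just quoted, the brute-force algorithm for graph isomorphism simply tries all n! vertex relabelings. A sketch, with graphs given as sets of frozenset edges on vertices 0..n-1 (an assumed encoding):

```python
from itertools import permutations

def isomorphic(g1, g2, n):
    """Brute-force graph isomorphism check: try every relabeling of the
    n vertices. Runs in O(n! * |E|) time, nowhere near the
    2^(O(sqrt(n log n))) bound mentioned above."""
    if len(g1) != len(g2):
        return False
    for perm in permutations(range(n)):
        if {frozenset((perm[u], perm[v])) for u, v in g1} == g2:
            return True
    return False

path = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}          # path 0-1-2-3
relabeled = {frozenset(e) for e in [(2, 0), (0, 3), (3, 1)]}     # path 2-0-3-1
triangle_plus = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]} # triangle + isolated vertex
assert isomorphic(path, relabeled, 4)
assert not isomorphic(path, triangle_plus, 4)
```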
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP[9]). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes time $O\!\left(e^{(64/9)^{1/3}\,(n \log 2)^{1/3}\,(\log(n \log 2))^{2/3}}\right)$ to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
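The decision version mentioned above can be sketched with trial division; note that this runs in time exponential in the bit length of the input, so it illustrates the problem statement, not an efficient algorithm:

```python
def has_factor_below(n: int, k: int) -> bool:
    """Decision version of factoring: does n have a nontrivial factor
    less than k? Trial division, exponential in the bit length of n."""
    return any(n % d == 0 for d in range(2, min(k, n)))

assert has_factor_below(91, 10)      # 91 = 7 * 13, and 7 < 10
assert not has_factor_below(97, 97)  # 97 is prime
```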
### Separations between other complexity classes
Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed[10] that NP is not equal to co-NP; however, it has not yet been proven. It has been shown that if these two complexity classes are not equal then P is not equal to NP.
Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes.
It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.
## Intractability
See also: Combinatorial explosion
Problems that can be solved in theory (e.g., given infinite time), but which in practice take too long for their solutions to be useful, are known as intractable problems.[11] In complexity theory, problems that lack polynomial-time solutions are considered to be intractable for more than the smallest inputs. In fact, the Cobham–Edmonds thesis states that only those problems that can be solved in polynomial time can be feasibly computed on some computational device. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then the NP-complete problems are also intractable in this sense. To see why exponential-time algorithms might be unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. Nevertheless, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances.
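The arithmetic behind this example is easy to reproduce (10^12 operations per second is the assumption made above):

```python
ops_total = 2**100        # operations performed by the program
ops_per_sec = 10**12      # assumed machine speed, as in the text
seconds = ops_total / ops_per_sec
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")  # roughly 4e10 years
```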
What intractability means in practice is open to debate. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
## Continuous complexity theory
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis[12] is information based complexity.
Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations.[13] Control theory can be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.[14]
## History
The analysis of algorithms has been studied long before the invention of computers. Gabriel Lamé gave a running time analysis of the Euclidean algorithm in 1844.
Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible notion of computer.
Fortnow & Homer (2003) date the beginning of systematic studies in computational complexity to the seminal paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard Stearns (1965), which laid out the definitions of time and space complexity and proved the hierarchy theorems. Also, in 1965 Edmonds defined a "good" algorithm as one with running time bounded by a polynomial of the input size.[15]
According to Fortnow & Homer (2003), earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper[16] on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure.[17] As he remembers:
However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited from switching theory, with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure".
—Boris Trakhtenbrot, From Logic to Theoretical Computer Science – An Update. In: Pillars of Computer Science, LNCS 4800, Springer 2008.
In 1967, Manuel Blum developed an axiomatic complexity theory based on his axioms and proved an important result, the so-called, speed-up theorem. The field really began to flourish in 1971 when the US researcher Stephen Cook and, working independently, Leonid Levin in the USSR, proved that there exist practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete.[18]
## See also
Relationship between computability theory, complexity theory and formal language theory.
## Notes
1. Take one city, and take all possible orders of the other 14 cities. Then divide by two because it does not matter in which direction in time they come after each other: 14!/2 = 43,589,145,600.
## References
2. ^ a b Ladner, Richard E. (1975), "On the structure of polynomial time reducibility" (PDF), Journal of the ACM 22 (1): 151–171, doi:10.1145/321864.321877.
3. Berger, Bonnie A.; Leighton, T. (1998), "Protein folding in the hydrophobic-hydrophilic (HP) model is NP-complete", Journal of Computational Biology 5 (1): 27–40, doi:10.1089/cmb.1998.5.27, PMID 9541869.
4. Cook, Stephen (April 2000), The P versus NP Problem, Clay Mathematics Institute, retrieved 2006-10-18.
5. Jaffe, Arthur M. (2006), "The Millennium Grand Challenge in Mathematics", Notices of the AMS 53 (6), retrieved 2006-10-18.
6. Arvind, Vikraman; Kurur, Piyush P. (2006), "Graph isomorphism is in SPP", Information and Computation 204 (5): 835–852, doi:10.1016/j.ic.2006.02.002.
7. Uwe Schöning, "Graph isomorphism is in the low hierarchy", Proceedings of the 4th Annual Symposium on Theoretical Aspects of Computer Science, 1987, 114–124; also: Journal of Computer and System Sciences, vol. 37 (1988), 312–323
8. Lance Fortnow. Computational Complexity Blog: Complexity Class of the Week: Factoring. September 13, 2002. http://weblog.fortnow.com/2002/09/complexity-class-of-week-factoring.html
9. Smale, Steve (1997). "Complexity Theory and Numerical Analysis". Acta Numerica (Cambridge Univ Press).
10. Tomlin, Claire J.; Mitchell, Ian; Bayen, Alexandre M.; Oishi, Meeko (July 2003). "Computational Techniques for the Verification of Hybrid Systems". Proceedings of the IEEE 91 (7).
11. Richard M. Karp, "Combinatorics, Complexity, and Randomness", 1985 Turing Award Lecture
12. Yamada, H. (1962). "Real-Time Computation and Recursive Functions Not Real-Time Computable". IEEE Transactions on Electronic Computers. EC-11 (6): 753–760. doi:10.1109/TEC.1962.5219459.
13. Trakhtenbrot, B.A.: Signalizing functions and tabular operators. Uchionnye Zapiski Penzenskogo Pedinstituta (Transactions of the Penza Pedagogoical Institute) 4, 75–87 (1956) (in Russian)
14. Richard M. Karp (1972), "Reducibility Among Combinatorial Problems", in R. E. Miller and J. W. Thatcher (editors), Complexity of Computer Computations, New York: Plenum, pp. 85–103
### Textbooks
• Downey, Rod; Fellows, Michael (1999), Parameterized complexity, Berlin, New York: Springer-Verlag
• Du, Ding-Zhu; Ko, Ker-I (2000), Theory of Computational Complexity, John Wiley & Sons, ISBN 978-0-471-34506-0
• Goldreich, Oded (2008), Computational Complexity: A Conceptual Perspective, Cambridge University Press
• van Leeuwen, Jan, ed. (1990), Handbook of theoretical computer science (vol. A): algorithms and complexity, MIT Press, ISBN 978-0-444-88071-0
• Papadimitriou, Christos (1994), Computational Complexity (1st ed.), Addison Wesley, ISBN 0-201-53082-1
• Sipser, Michael (2006), Introduction to the Theory of Computation (2nd ed.), USA: Thomson Course Technology, ISBN 0-534-95097-3
### Surveys
• Khalil, Hatem; Ulery, Dana (1976), A Review of Current Studies on Complexity of Algorithms for Partial Differential Equations, ACM '76 Proceedings of the 1976 Annual Conference, p. 197, doi:10.1145/800191.805573
• Cook, Stephen (1983), "An overview of computational complexity", Commun. ACM (ACM) 26 (6): 400–408, doi:10.1145/358141.358144, ISSN 0001-0782
• Fortnow, Lance; Homer, Steven (2003), "A Short History of Computational Complexity", Bulletin of the EATCS 80: 95–133
• Mertens, Stephan (2002), "Computational Complexity for Physicists", Computing in Science and Engg. (Piscataway, NJ, USA: IEEE Educational Activities Department) 4 (3): 31–47, arXiv:cond-mat/0012185, doi:10.1109/5992.998639, ISSN 1521-9615
http://www.physicsforums.com/showthread.php?t=311180
Physics Forums
## gradient of spherical co-ords/ differentiation help
1. The problem statement, all variables and given/known data
$h(r,\theta,\phi)=\exp\!\left(r^2\sin^2\theta\,\sin^2\phi+r^2\cos^2\theta\right)$
need to find the gradient of this function. I have $e_r$ and $e_\theta$... but can someone please tell me why, when Maple differentiates with respect to $\phi$, it says the derivative equals zero????
because I get $\left(2r^2\sin^2\theta\,\sin\phi\cos\phi\;h\right)e_\phi$
MY ORIGINAL Cartesian function was $h(x,y,z)=\exp(y^2+z^2)$, then I converted it.... I know once I get $\partial h/\partial\phi$ I have to multiply it by $1/(r\sin\theta)$, but Maple says that $\partial h/\partial\phi=0$???? How can that be, since $\partial h/\partial\theta$ works???
if anyone doesn't know, I'm trying to find $\partial h/\partial\phi$... to get $(1/(r\sin\theta))(\partial h/\partial\phi)$.... and apparently it equals 0... and I can't figure out why...
I think the problem lies with what $\theta$ and $\phi$ are. In physics the azimuth and zenith are often reversed. So what convention does your book use? In this case differentiation with respect to $\phi$ does not yield 0, whereas differentiation with respect to $\theta$ does yield 0. So I am pretty sure you're using the wrong convention.
Secondly why do you even bother to transform that function into spherical coordinates?
I don't know, it's what my hw sheet said to do.... and my book states that $r=\sqrt{x^2+y^2+z^2}$, $\theta=\arccos(z/r)$ and $\phi=\arctan(y/x)$..... something like that?? Sorry, I'm unsure what you mean by convention... and what do you mean the diff with respect to $\theta$ equals 0??? How did you get that?
and $\nabla h$ in Cartesian coordinates is $\left(0,\;2y\,e^{y^2+z^2},\;2z\,e^{y^2+z^2}\right)$; can I just apply spherical coordinates straight from $\nabla h$????
There are two different conventions.

1) The mathematics convention, where the azimuth (the angle between the x-axis and the radius in the x-y plane) is called $\theta$ and the zenith (the angle between the z-axis and the radius) is called $\phi$.

2) The physics convention, where the azimuth (the angle between the x-axis and the radius in the x-y plane) is called $\phi$ and the zenith (the angle between the z-axis and the radius) is called $\theta$.

Read this carefully, draw a picture perhaps, so that you fully understand the difference. The definition in your book has $\theta$ as the zenith, whereas Maple, seeing as it gives 0, has $\phi$ as the zenith. Do you understand why Maple gets zero and you get a non-zero value now?

Write down the definition of the gradient in spherical coordinates (on the forum) and label each term with radius, zenith, azimuth. Since you want to find the $\phi$ term, which term in the definition of the gradient will you need to use, given the convention used in the book? Since Maple uses the other convention, which term did Maple use?

And no, you can't just apply spherical coordinates straight after taking the Cartesian gradient.
ok then, this is for maths, and $r$ is length, $\theta$ is the angle measured down from the z-axis, and $\phi$ is the angle measured from the x-axis. ok I think I understand what you are saying, but if Maple is opposite to my book, then does that mean that $e_\theta$ is 0?? The definition of grad in spherical in my book is, for $f(r,\theta,\phi)$:

$$\nabla f=\frac{\partial f}{\partial r}e_r+\frac{1}{r}\frac{\partial f}{\partial \theta}e_\theta+\frac{1}{r\sin\theta}\frac{\partial f}{\partial \phi}e_\phi$$

does that help?
so will my answer have a zero??? or is it right to do what I did... or how do you change Maple to do what I want?
Even though this may be for maths, the convention your book uses is the convention I labeled the physics convention. It doesn't really matter though; feel free to call it anything you want, as long as you always pay attention to how $\phi$ and $\theta$ are defined. This way you can use the appropriate convention.

So the gradient your book gives you is:

$$\nabla f=\frac{\partial f}{\partial r}e_r+\frac{1}{r} \frac{\partial f}{\partial \theta}e_\theta+\frac{1}{r \sin \theta} \frac{\partial f}{\partial \phi}e_\phi$$

maple:

$$\nabla f=\frac{\partial f}{\partial r}e_r+\frac{1}{r} \frac{\partial f}{\partial \phi}e_\phi+\frac{1}{r \sin \phi} \frac{\partial f}{\partial \theta}e_\theta$$

So if you want to know the $\phi$ component then you have to use the $\theta$ component in Maple, while also switching all $\theta$s and $\phi$s in the definitions of x, y, z in spherical coordinates. Nice and confusing isn't it?

Just calculate all three terms and I will check if you got the correct answer or not. I would really suggest you do it by hand instead of using some function in a program you don't know the definition of.
ok yes, well for $e_\theta$ i get $$\frac{(2r^2\sin\theta\sin^2\phi\cos\theta)-2r^2\cos\theta\sin\theta\cdot h}{r}$$ ???? which does not equal zero???
Recognitions: Homework Help I made an error in the last sentence of my previous post. I will fix it. If you include the exponent your answer is the correct one. You can simplify it a bit if you like.
so all three should be... $$\left(2r\sin^2\theta\sin^2\phi+2r\cos^2\theta\right)\exp(\dots)\,e_r$$ $$\frac{2r^2\sin\theta\sin^2\phi\cos\theta-2r^2\cos\theta\sin\theta}{r}\exp(\dots)\,e_\theta$$ $$\frac{2r^2\sin^2\theta\sin\phi\cos\theta}{r\sin\theta}\exp(\dots)\,e_\phi$$ ???? idk, just doesn't look right
Recognitions: Homework Help In the third one the cosine should read $\cos \phi$. The others seem correct. Try to simplify the second one by using $\sin 2x=2 \sin x \cos x$ and $\sin^2x+\cos^2x=1$.
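The whole calculation can be checked symbolically. This is a hypothetical sketch, not code from the thread: the function being differentiated is never written out in this part of the discussion, but a function of the form $f=e^{y^2+z^2}$ (in the book's convention $x=r\sin\theta\cos\phi$, $y=r\sin\theta\sin\phi$, $z=r\cos\theta$) reproduces the components quoted above, so it is used here as a stand-in.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Book convention: theta is the zenith, phi is the azimuth.
x = r * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

# ASSUMED test function: f = exp(y^2 + z^2) is inferred from the
# components posted in the thread, not stated there explicitly.
f = sp.exp(y**2 + z**2)

# Gradient components in spherical coordinates, book convention.
grad_r     = sp.diff(f, r)
grad_theta = sp.diff(f, theta) / r
grad_phi   = sp.diff(f, phi) / (r * sp.sin(theta))

# The e_theta component should simplify, via sin 2x = 2 sin x cos x
# and sin^2 x + cos^2 x = 1, to -r sin(2*theta) cos(phi)^2 * f:
print(sp.simplify(grad_theta + r*sp.sin(2*theta)*sp.cos(phi)**2 * f))  # 0
```

Running the same check on the other two components confirms the corrections above: the $e_\phi$ term carries a $\cos\phi$, not a $\cos\theta$.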
ok thank you heaps for your help =) very much appreciated
http://ldtopology.wordpress.com/2011/05/30/a-funny-thing-about-circular-thin-position/
Low Dimensional Topology
May 30, 2011
A funny thing about circular thin position
Filed under: 3-manifolds,Heegaard splittings,Knot theory,Thin position — Jesse Johnson @ 2:34 pm
At the AMS Sectional in Iowa City a few months ago, there were a number of talks about circular thin position (i.e. circular generalized Heegaard splittings). This is an idea that was introduced by Fabiola Manjarrez-Gutierrez for studying knot complements [1], though as she notes, it can be applied to any 3-manifold with infinite first homology. Alexander Coward gave a talk about using these ideas to study knots with unknotting number one (i.e. knots that become the unknot after a single crossing change) and he pointed out a difference between circular thin position and standard thin position that really blew me away: There are infinitely many circular generalized Heegaard splittings for the unknot that come from stabilizing the minimal thin position exactly once. Below the fold, I’ll give a brief description of circular thin position, then explain this surprising phenomenon.
First we need a slightly more general than usual definition of a compression body: A compression body is a manifold that you get by taking $F \times [0,1]$ for $F$ a compact, orientable surface (possibly with boundary), then attaching some 1-handles to $F \times \{1\}$. The vertical boundary is $\partial F \times [0,1]$, the negative boundary is $F \times \{0\}$ and the positive boundary is the rest of the boundary. A circular generalized Heegaard splitting for a knot complement is a decomposition into compression bodies that intersect alternately along their positive and negative boundaries and whose vertical boundaries make up the boundary of the knot complement.
Here’s how you should think about this: Start with a Seifert surface $S$ for a knot $K$, and just for fun assume $S$ is not the leaf of a surface bundle. Then you can’t isotope the surface around the knot and back onto itself, but you can do something almost as good. You can attach some tubes to $S$, creating a higher genus surface that cobounds a compression body with the original. Then you can compress this surface on the other side to create another compression body. If you repeat this enough times, you will eventually make your way around the knot and back to the original surface $S$. The trail of compression bodies defines a circular generalized Heegaard splitting.
To get to the idea that blew me away, I want to work in a slightly different setting, where the pictures are easier to draw. Consider a solid torus $T = S^1 \times D$ containing an unknotted core $L = S^1 \times x$ for $x$ a point in the disk $D$. This is shown on the left in the figure below. A meridian of $T$ will intersect $L$ in a single point. If we take the double branched cover over $L$, we’ll get a new solid torus, which I want to think of as the complement of the unknot. The meridians of $T$ lift to a family of spanning disks for the unknot that define its disk bundle structure.
Now let's add a kink to $L$, as in the middle figure on the left. That is, I want to isotope it so that instead of going monotonically around the solid torus, it changes direction and backtracks for a short while, then changes back to its original direction and completes the circle. If we take the double branched cover again, we still get the unknot. But now a number of the meridians of $T$ intersect the branch set in three points instead of one, and just lift to punctured tori instead of disks. Plus there are exactly two disks that are tangent to $L$. I’ve mentioned before that when you lift a bridge surface for a knot to the double branched cover, the resulting surface will bound a handlebody on either side. Similarly, if we lift one of the one-intersection meridians and one of the three-intersection meridians to the double branched cover of $L$, the resulting surfaces will split the knot complement into two compression bodies, defining a circular generalized Heegaard splitting.
What we just did corresponds to a stabilization, since we added a one-handle and a canceling 2-handle in the generalized (circular) Heegaard splitting. But the interesting part comes next. Drag those two points where $L$ is tangent to the meridian disks away from each other and around $T$ until they pass again on the other side. This is shown on the right in the figure above. Now they intersect each meridian in either three points or five points. So in the double branched cover, half the meridians lift to once-punctured tori and the other half lift to once-punctured genus-two surfaces. But this still determines a circular generalized Heegaard splitting for the same reason as before. So we’ve created a new circular position for the knot without stabilizing it further. And, of course, we can do this again to get the meridians to lift to genus two and three surfaces, and so on.
So, what we find is that there are circular generalized Heegaard splittings for the unknot with a single pair of handles such that the thin and thick surfaces have arbitrarily high genus. In standard thin position, we can’t do this because there’s no way to send the handles around the back, like we do here.
3 Comments »
1. What a beautiful idea!!!
Focusing on knot complements, I wonder how much smooth 3-manifold topology can be done using circle-valued Morse functions. A-priori, it looks like one should be able to do everything, and then some! Thin position is working really nicely.
An idle thought I had while reading this is “What analogue of Waldhausen’s Theorem holds for circular Heegaard splittings of certain special knot complements?”
Also, I wonder if there is any facet of knot theory to which real-valued Morse functions are intrinsically better-suited… If not, we (at least the knot theorists among us) should all be switching over to the circle-valued world!
Comment by Daniel Moskovich — June 10, 2011 @ 8:04 am
• I think it’s too early to say how circle valued Morse theory will compare to other methods, but Fabiola is working with Mario Eudave-Munoz and others to extend these ideas. So it will be interesting to see how things progress. From Alex’s talk, it sounds like Kobayashi has classified circular Heegaard splittings of the unknot with one handle of each type. (He did this using different language, before circular thin position was defined.) I agree that it would be interesting to try to classify circular generalized Heegaard splittings for other knots.
Comment by — June 15, 2011 @ 1:10 pm
2. [...] Jesse Johnson: A funny thing about circular thin position [...]
Pingback by — June 13, 2011 @ 12:43 pm
http://unapologetic.wordpress.com/2007/08/28/the-internal-monoidal-product/?like=1&_wpnonce=67418a1937
# The Unapologetic Mathematician
## The Internal Monoidal Product
As we’re talking about enriched categories, we’re always coming back to the monoidal category $\mathcal{V}$. This has an underlying category $\mathcal{V}_0$, which is then equipped with a monoidal product — an ordinary functor $\otimes:\mathcal{V}_0\times\mathcal{V}_0\rightarrow\mathcal{V}_0$. But as usual we don’t want to work with ordinary categories and functors unless we have to.
Luckily, we can turn this monoidal product into a $\mathcal{V}$-functor between $\mathcal{V}$-categories: $\mathrm{Ten}:\mathcal{V}\otimes\mathcal{V}\rightarrow\mathcal{V}$. Here, $\mathrm{Ten}$ refers to “tensor product”. On objects we do the same thing as before — $\mathrm{Ten}(X,Y)=X\otimes Y$ — because the objects of the $\mathcal{V}$-category $\mathcal{V}$ are the same as those of the ordinary category $\mathcal{V}_0$. But now we have to consider how this functor should act on the hom-objects. So we recall that we define the internal hom-functor as $\hom_\mathcal{V}(X,Y)=Y^X$, using the closed structure on $\mathcal{V}$.
So to have a $\mathcal{V}$-functor we need morphisms $\mathrm{Ten}:\hom_{\mathcal{V}\otimes\mathcal{V}}((X_1,Y_1),(X_2,Y_2))\rightarrow\hom_\mathcal{V}(X_1\otimes Y_1,X_2\otimes Y_2)$. On the left we defined the hom-object for the product $\mathcal{V}$-category as $\hom_\mathcal{V}(X_1,X_2)\otimes\hom_\mathcal{V}(Y_1,Y_2)$, which is then defined as $X_2^{X_1}\otimes Y_2^{Y_1}$. On the right we have the exponential $(X_2\otimes Y_2)^{X_1\otimes Y_1}$.
But by the closure adjunction an arrow $X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$ is equivalent to an arrow $(X_2^{X_1}\otimes Y_2^{Y_1})\otimes(X_1\otimes Y_1)\rightarrow(X_2\otimes Y_2)$. Now we can just swap around the factors on the left to get $(X_2^{X_1}\otimes X_1)\otimes(Y_2^{Y_1}\otimes Y_1)$, and then inside each set of parentheses we can use the evaluation morphism we get from the closure adjunction, leaving us with $X_2\otimes Y_2$. Putting together the swap and the evaluations we get the arrow we want. And then the closure adjunction flips this to the morphism we needed to define the monoidal product on hom-objects.
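In symbols (writing $\mathrm{ev}$ for the evaluation morphisms of the closure adjunction, a notation not used in the post itself), the composite just described is:

```latex
\[
(X_2^{X_1}\otimes Y_2^{Y_1})\otimes(X_1\otimes Y_1)
\;\xrightarrow{\;\cong\;}\;
(X_2^{X_1}\otimes X_1)\otimes(Y_2^{Y_1}\otimes Y_1)
\;\xrightarrow{\;\mathrm{ev}\otimes\mathrm{ev}\;}\;
X_2\otimes Y_2
\]
```

The first arrow swaps the middle tensor factors, the second evaluates in each pair of parentheses, and the closure adjunction then transposes the whole composite into the desired arrow $X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$.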
The underlying ordinary functor $\mathrm{Ten}_0$ of the $\mathcal{V}$-functor $\mathrm{Ten}$ is the old monoidal product $\otimes$ again. On objects we already have the same action, so we need to check that the underlying function of the morphism $\mathrm{Ten}:X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$ is the same as the function $\otimes:\hom_{\mathcal{V}_0}(X_1,X_2)\times\hom_{\mathcal{V}_0}(Y_1,Y_2)\rightarrow\hom_{\mathcal{V}_0}(X_1\otimes Y_1,X_2\otimes Y_2)$. We already know that the underlying set of $B^A$ is $\hom_{\mathcal{V}_0}(A,B)$ and the cartesian product of hom-sets underlies the monoidal product of hom-objects, so we at least know that the underlying source and target objects are correct.
So what’s the underlying function? We have the arrow $\mathrm{Ten}:X_2^{X_1}\otimes Y_2^{Y_1}\rightarrow(X_2\otimes Y_2)^{X_1\otimes Y_1}$ and we need to produce a function $\mathrm{Ten}_0:\hom_{\mathcal{V}_0}(\mathbf{1},X_2^{X_1})\times\hom_{\mathcal{V}_0}(\mathbf{1},Y_2^{Y_1})\rightarrow\hom_{\mathcal{V}_0}(\mathbf{1},(X_2\otimes Y_2)^{X_1\otimes Y_1})$. In each of these hom-sets we can use the closure adjunction to get a function $\hom_{\mathcal{V}_0}(X_1,X_2)\times\hom_{\mathcal{V}_0}(Y_1,Y_2)\rightarrow\hom_{\mathcal{V}_0}(X_1\otimes Y_1,X_2\otimes Y_2)$. But this is clearly the function $(f,g)\mapsto f\otimes g$ for the ordinary tensor product.
In light of this tight relationship between $\mathrm{Ten}$ and $\otimes$, I’ll usually just write $\otimes$ for each. Again, when I don’t specify whether I’m talking about the ordinary or the enriched functor I’ll default to the enriched version.
Posted by John Armstrong | Category theory
http://mathoverflow.net/questions/24350/what-does-it-mean-for-a-mathematical-statement-to-be-true/24417
## What does it mean for a mathematical statement to be true?
As I understand it, mathematics is concerned with correct deductions using postulates and rules of inference. From what I have seen, statements are called true if they are correct deductions and false if they are incorrect deductions. If this is the case, then there is no need for the words true and false. I have read something along the lines that Godel's incompleteness theorems prove that there are true statements which are unprovable, but if you cannot prove a statement, how can you be certain that it is true? And if a statement is unprovable, what does it mean to say that it is true?
Philosophers argue ad infinitum about correspondence versus coherence theories of truth. – Robin Chapman May 12 2010 at 9:10
For many mathematicians, the truth/untruth of a statement is not quite the same as whether it can be formally deduced from a given set of axioms! For instance, I might regard a given axiom as "false". – GS May 12 2010 at 12:22
As I understand it, writing is concerned with putting words together into grammatically correct sentences. – Harald Hanche-Olsen May 12 2010 at 17:03
@Harald: mathoverflow.net/questions/7155/… – Qiaochu Yuan May 12 2010 at 20:11
@Qiaochu: Yes, I knew I had the idea from somewhere, but couldn't remember where. Information overload and all that. – Harald Hanche-Olsen May 13 2010 at 16:18
## 15 Answers
Tarski defined what it means to say that a first-order statement is true in a structure $M\models \varphi$ by a simple induction on formulas. This is a completely mathematical definition of truth.
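Tarski's inductive clauses are concrete enough to execute when the structure is finite. The following is a minimal sketch, with an ad hoc formula encoding of my own (nothing here is from the answer itself), that decides whether a first-order sentence holds in a finite structure:

```python
# Minimal Tarski-style satisfaction for first-order sentences in a
# *finite* structure M = {'domain': ..., 'relations': ...}.
# Hypothetical formula encoding as nested tuples:
#   ('rel', name, (vars...)), ('not', f), ('and', f, g), ('exists', v, f)

def holds(M, formula, env=None):
    """True iff M satisfies `formula` under the variable assignment `env`."""
    env = env or {}
    op = formula[0]
    if op == 'rel':
        _, name, args = formula
        return tuple(env[v] for v in args) in M['relations'][name]
    if op == 'not':
        return not holds(M, formula[1], env)
    if op == 'and':
        return holds(M, formula[1], env) and holds(M, formula[2], env)
    if op == 'exists':
        _, var, body = formula
        return any(holds(M, body, {**env, var: a}) for a in M['domain'])
    raise ValueError(f'unknown connective {op!r}')

# The structure ({0,1,2}, <) and the sentence "there is a least element",
# written as: exists x . not exists y . y < x
M = {'domain': {0, 1, 2},
     'relations': {'<': {(0, 1), (0, 2), (1, 2)}}}
least = ('exists', 'x', ('not', ('exists', 'y', ('rel', '<', ('y', 'x')))))
print(holds(M, least))  # True
```

The recursion mirrors the induction on formulas exactly; the point of the definition is that it is this simple. Of course, for an infinite structure like $\langle\mathbb{N},+,\cdot\rangle$ the `exists` clause is no longer an executable search, which is where truth and provability come apart.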
Goedel defined what it means to say that a statement $\varphi$ is provable from a theory $T$, namely, there should be a finite sequence of statements constituting a proof, meaning that each statement is either an axiom or follows from earlier statements by certain logical rules. (There are numerous equivalent proof systems, useful for various purposes.)
The Completeness Theorem of first order logic, proved by Goedel, asserts that a statement $\varphi$ is true in all models of a theory $T$ if and only if there is a proof of $\varphi$ from $T$. Thus, for example, any statement in the language of group theory is true in all groups if and only if there is a proof of that statement from the basic group axioms.
The Incompleteness Theorem, also proved by Goedel, asserts that any consistent theory $T$ extending a very weak theory of arithmetic admits statements $\varphi$ that are not provable from $T$, but which are true in the intended model of the natural numbers. That is, we prove in a stronger theory that is able to speak of this intended model that $\varphi$ is true there, and we also prove that $\varphi$ is not provable in $T$. This is the sense in which there are true-but-unprovable statements.
The situation can be confusing if you think of provable as a notion by itself, without thinking much about varying the collection of axioms. After all, as the background theory becomes stronger, we can of course prove more and more. The true-but-unprovable statement is really unprovable-in-$T$, but provable in a stronger theory.
Actually, although ZFC proves that every arithmetic statement is either true or false in the standard model of the natural numbers, nevertheless there are certain statements for which ZFC does not prove which of these situations occurs.
Much or almost all of mathematics can be viewed with the set-theoretical axioms ZFC as the background theory, and so for most of mathematics, the naive view equating true with provable in ZFC will not get you into trouble. But the independence phenomenon will eventually arrive, making such a view ultimately unsustainable. The fact is that there are numerous mathematical questions that cannot be settled on the basis of ZFC, such as the Continuum Hypothesis and many other examples. We have of course many strengthenings of ZFC to stronger theories, involving large cardinals and other set-theoretic principles, and these stronger theories settle many of those independent questions. Some set theorists have a view that these various stronger theories are approaching some kind of undescribable limit theory, and that it is that limit theory that is the true theory of sets. Others have a view that set-theoretic truth is inherently unsettled, and that we really have a multiverse of different concepts of set. On that view, the situation is that we seem to have no standard model of sets, in the way that we seem to have a standard model of arithmetic.
Can the phrase "these various stronger theories are approaching some kind of undescribable limit theory" be made formally precise in some sense? – Qfwfq May 12 2010 at 17:27
It is extremely difficult and is the subject of current philosophical work by Woodin, Koellner, Maddy, Hauser and others. But many large cardinal set theorists seem to espouse a view of this sort. – Joel David Hamkins May 12 2010 at 17:31
Interesting! Do you know any reference graspable by non-logicians? – Qfwfq May 12 2010 at 18:35
Can you clarify what the "intended model of arithmetic" is? It seems to me that the Incompleteness Theorem is really a statement about this model, that no theory can prove every statement that is true in it. – Kevin Casto May 14 2010 at 20:46
The intended model is $\langle N,+,\cdot,0,1,\lt\rangle$, also known as the standard model of arithmetic. And you are right that one way to understand the Incompleteness Theorem is that it says we cannot write down a complete axiomatization of the theory of this structure. – Joel David Hamkins May 15 2010 at 10:43
I say that $$\int_{-\infty}^{+\infty} e^{-x^2} dx = \sqrt{\pi}$$ is true .... and any talk about "correct deductions using postulates" is a rationalization added later. Mathematicians knew this fact (yes, fact) even before the field of mathematical logic began.
Do you have a similar view about the Continuum Hypothesis? This question was also asked before math logic began, first by Cantor, and then famously, as the first Hilbert problem. – Joel David Hamkins May 12 2010 at 12:41
If it was not possible to prove this, would you still be confident in calling this true? – teil May 12 2010 at 12:42
If someone asks you why it is true, then you will deduce it from something more basic. And if they ask you why that is true, you will deduce it from something more basic still. What you won't say (unless you want to leave your questioner very dissatisfied) is that it's just a Platonic fact about the mathematical universe. Obviously these "why" questions have to come to an end eventually, and ... bingo ... there are your postulates. – gowers May 12 2010 at 21:25
The "truth" of the above integral equality is only possible because "we know what you mean". But the movement to establish foundations for calculus, etc. only arose because people started to lose confidence that they really did know what was meant. --- How do we know what you mean? Because the integral is defined in a certain way; specifically, with sufficient detail that it becomes possible to demonstrate the truth of the equality above. Intuitions regarding interesting relationships do not become "true" or "false" until crystallized into axiomatizable definitions admitting proofs. – Niel de Beaudrap May 13 2010 at 9:40
Dear Niel, I (and I think many others) really believe that things are more complicated than that. Mathematical truths often seem to lead dual lives, both as empirical facts and as abstractly deducible consequences of axioms. Our mathematical lives are richer if we allow this tension to exist! Best, Stephen – GS May 13 2010 at 10:26
Part of the reason for the confusion here is that the word "true" is sometimes used informally, and at other times it is used as a technical mathematical term.
Informally, asserting that "X is true" is usually just another way to assert X itself. When I say, "I believe that the Riemann hypothesis is true," I just mean that I believe that all the non-trivial zeros of the Riemann zeta-function lie on the critical line. (Note in particular that I'm not claiming to have a proof of the Riemann hypothesis!) This insight is due to Tarski. If you know what a mathematical statement X asserts, then "X is true" states no more and no less than what X itself asserts. Now, there is a slight caveat here: Mathematicians being cautious folk, some of them will refrain from asserting that X is true unless they know how to prove X or at least believe that X has been proved. So in some informal contexts, "X is true" actually means "X is proved." As we would expect of informal discourse, the usage of the word is not always consistent.
The word "true" can, however, be defined mathematically. Truth is a property of sentences. If you have defined a formal language $L$, such as the first-order language of arithmetic, then you can define a sentence $S$ in $L$ to be true if and only if $S$ holds of the natural numbers. So for example the sentence $\exists x: x > 0$ is true because there does indeed exist a natural number greater than 0. Here it is important to note that true is not the same as provable. The formal sentence corresponding to the twin prime conjecture (which I won't bother writing out here) is true if and only if there are infinitely many twin primes, and it doesn't matter that we have no idea how to prove or disprove the conjecture.
Now, perhaps this bothers you. Is it legitimate to define truth in this manner? Some people don't think so. However, note that there is really nothing different going on here from what we normally do in mathematics. When we were sitting in our number theory class, we all knew what it meant for there to be infinitely many twin primes. Why should we suddenly stop understanding what this means when we move to the mathematical logic classroom? If we understand what it means, then there should be no problem with defining some particular formal sentence to be true if and only if there are infinitely many twin primes. It is as legitimate a mathematical definition as any other mathematical definition.
Now, how can we have true but unprovable statements? And if we had one how would we know? Joel David Hamkins explained this well, but in brief, "unprovable" is always with respect to some set of axioms. Therefore it is possible for some statement to be true but unprovable from some particular set of axioms $A$. In order to know that it's true, of course, we still have to prove it, but that will be a proof from some other set of axioms besides $A$.
Both the optimistic view that all true mathematical statements can be proven and its denial are respectable positions in the philosophy of mathematics, with the pessimistic view being more popular. The question is more philosophical than mathematical, hence, I guess, your question's downvotes.
Neil Tennant's Taming of the True (1997) argues for the optimistic thesis, and covers a lot of ground on the way. I recommend it to you if you want to explore the issue.
How can it be philosophical? Godel's theorem is mathematics, so the terms have to be defined. – teil May 12 2010 at 9:27
@NR: it's worth noting that Gödel's incompleteness and completeness results deal with a very specific kind of hypotheses and proofs; that is to say, first order. In general, a statement is true if whenever the hypotheses are true, the conclusion is. In the case of first order logic this can be restated as: for any model where the hypotheses are true, the conclusion is. However, whether or not you're in first order, what you really care about is whether there's an example where the hypotheses are true but the conclusion is wrong. – H. Hasson May 12 2010 at 10:41
It occurred to me that there's another point to make here. Even if you care about groups, fields, etc., your theorem can be stated in the language of sets. You may, if it's first order in the language of groups, etc., treat it as such. But the more general notion is being first order in sets. – H. Hasson May 12 2010 at 10:50
@Negative What if the terms can't be defined? en.wikipedia.org/wiki/… – Dan Piponi May 12 2010 at 16:11
@H Hasson: First sentence is true, but misses the depth of the result. Because the Halting Problem is undecidable and individual cases can be encoded as 1st-order sentences of number theory, the set Th(N) of true 1st-order sentences of number theory is not computable, and therefore is not computably enumerable. Now for any proof system worthy of the name, the set of provable [1st-order number theory] sentences is c.e. It therefore cannot coincide with Th(N) — either the system proves false sentences, or (preferably) fails to prove true ones. One only need encompass 1st-order theory. – Chad Groft May 13 2010 at 13:18
The Stanford Encyclopedia of Philosophy has several articles on theories of truth, which may be helpful for getting acquainted with what is known in the area. Their top-level article is
http://plato.stanford.edu/entries/truth/
Let me offer an explanation of the difference between truth and provability from postulates which is (I think) slightly different from those already presented. (Although perhaps close in spirit to that of Gerald Edgar's.)
First of all, if we are talking about results of the form "for all groups, ..." or "for all topological spaces, ... " then in this case truth and provability are essentially the same: a result is true if it can be deduced from the axioms. (There is the caveat that the notion of group or topological space involves the underlying notion of set, and so the choice of ambient set theory plays a role. This role is usually tacit, but for certain questions becomes overt and important; nevertheless, I will ignore it here, possibly at my peril.)
But other results, e.g. in number theory, reason not from axioms but from the natural numbers. Of course, along the way, you may use results from group theory, field theory, topology, ..., which will be applicable provided that you apply them to structures that satisfy the axioms of the relevant theory. But in the end, everything rests on the properties of the natural numbers, which (by Godel) we know can't be captured by the Peano axioms (or any other finitary axiom scheme). How do we agree on what is true then? Well, experience shows that humans have a common conception of the natural numbers, from which they can reason in a consistent fashion; and so there is agreement on truth.
If you like, this is not so different from the model theoretic description of truth, except that I want to add that we are given certain models (e.g. the standard model of the natural numbers) on which we agree and which form the basis for much of our mathematics. (Again, certain types of reasoning, e.g. about arbitrary subsets of the natural numbers, can lead to set-theoretic complications, and hence (at least potential) disagreement, but let me also ignore that here.)
In summary: certain areas of mathematics (e.g. number theory) are not about deductions from systems of axioms, but rather about studying properties of certain fundamental mathematical objects. Axiomatic reasoning then plays a role, but is not the fundamental point.
But ultimately, won't the independence phenomenon still arrive? After all, even for the strongest known axiomatizations of mathematics, such as ZFC+large cardinals, Goedel's theorem still provides arithmetic statements that are neither provable nor refutable. Although ZFC proves that there is a standard model of arithmetic, different models of ZFC can have non-isomorphic standard models (and even non-elementarily equivalent). So the question of what is true in the standard model depends on the set-theoretic background. – Joel David Hamkins May 13 2010 at 12:31
Dear Joel, This is the question swept under the rug by my parenthetical decisions to ignore set-theoretic issues. My own position on this is unabashedly platonist, in that I believe that there is a true standard model for the natural numbers (the one in my imagination!), and perhaps a little naive. Thankfully, the comment box doesn't give me enough space to try to seriously defend my position! – Emerton May 13 2010 at 14:02
There are two answers to your question:
• A statement is true in absolute if it can be proven formally from the axioms.
• A statement is true in a model if, using the interpretation of the formulas inside the model, it is a valid statement about those interpretations.
Assuming your set of axioms is consistent (which is equivalent to the existence of a model), then
$\qquad$ truth in absolute $\Rightarrow$ truth in any model.
Conversely, if a statement is not true in absolute, then there exists a model in which it is false.
Let's take an example to illustrate all this. Let $P$ be a property of integer numbers, and let's assume that you want to know whether the formula $\exists n\in \mathbb Z : P(n)$ is true. Three situations can occur:
• You're able to find $n\in \mathbb Z$ such that $P(n)$.
• You're able to prove that $\not\exists n\in \mathbb Z : P(n)$.
• Neither of the above.
In the latter case, there will exist a model $\tilde{\mathbb Z}$ of the integers (it's going to be some ring, probably much bigger than $\mathbb Z$, satisfying all the axioms that "characterize" $\mathbb Z$) that contains an element $n\in \tilde {\mathbb Z}$ satisfying $P$.
-
This is a question which I spent some time thinking about myself when first encountering Goedel's incompleteness theorems. I should add the disclaimer that I am no expert in logic and set theory, but I think I can answer this question sufficiently well to understand statements such as Goedel's incompleteness theorems (at least, sufficiently well to satisfy myself).
On one end of the scale, there are statements such as CH and AC which are independent of ZF set theory, so it is not at all clear if they are really true and we could argue about such things forever. Even for statements which are true in the sense that it is possible to prove that they hold in all models of ZF, it is still possible that in an alternative theory they could fail. Even things like the intermediate value theorem, which I think we can agree is true, can fail with intuitionistic logic.
On the other end of the scale, there are statements which we should agree are true independently of any model of set theory or foundation of maths. For example, I know that 3+4=7. There are simple rules for addition of integers which we just have to follow to determine that such an identity holds. You might come up with some freaky model of integer addition following different rules where 3+4=6, but that is really a different statement involving a different operation from what is commonly understood by addition. Similarly, I know that there are positive integral solutions to $x^2+y^2=z^2$. To verify that such equations have a solution we just need to iterate through all possible triples $(x,y,z)\in\mathbb{N}^3$ and test whether $x^2+y^2=z^2$, stopping when a solution is reached. In this case we are guaranteed to arrive at some solution, such as (3,4,5), proving that there is indeed a solution to the equation.
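The exhaustive search just described can be written down directly (a minimal sketch; the enumeration by increasing sum is my choice, made so that every triple is eventually visited):

```python
def first_pythagorean_triple():
    # Enumerate triples (x, y, z) of positive integers by increasing
    # sum s = x + y + z, so every triple is eventually visited.
    s = 3
    while True:
        for x in range(1, s):
            for y in range(1, s - x):
                z = s - x - y
                if x * x + y * y == z * z:
                    return (x, y, z)
        s += 1

print(first_pythagorean_triple())  # → (3, 4, 5)
```

The search is guaranteed to terminate precisely because a solution exists, which is the point being made above.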
More generally, consider any statement which can be interpreted in terms of a deterministic, computable, algorithm. If we simply follow through that algorithm and find that, after some finite number of steps, the algorithm terminates in some state then the truth of that statement should hold regardless of the logic system we are founding our mathematical universe on.
So, there are statements of the following form: "A specified program (P) for some Turing machine and given initial state (S0) will eventually terminate in some specified final state (S1)". If such a statement is true, then we can prove it by simply running the program, step by step, until it reaches the final state. Such statements, I would say, must be true in all reasonable foundations of logic & maths. Identities involving addition and multiplication of integers fall into this category, as there are standard rules of addition & multiplication which we can program. So does the existence of solutions to diophantine equations like $x^2+y^2=z^2$. Existence in any one reasonable logic system implies existence in any other.
At the next level, there are statements which are falsifiable by a computable algorithm, which are of the following form: "A specified program (P) for some Turing machine with initial state (S0) will never terminate". For example, "There are no positive integer solutions to $x^3+y^3=z^3$" falls into this category. You can write a program to iterate through all triples (x,y,z) checking whether $x^3+y^3=z^3$. Fermat's last theorem tells us that this will never terminate. We can never prove this by running such a program, as it would take forever. However, the negation of a statement such as this is just of the previous form, whose truth, I just argued, holds independently of the "reasonable" logic system used (this is basically $\omega$-consistency, used by Goedel). That is, if I can write an algorithm which I can prove is never going to terminate, then I wouldn't believe some alternative logic which claimed that it did. In the same way, if you came up with some alternative logical theory claiming that there are positive integer solutions to $x^3+y^3=z^3$ (without providing any explicit solutions, of course), then I wouldn't hesitate in saying that the theory is wrong. Statements like $$\int_{-\infty}^\infty e^{-x^2}\,dx=\sqrt{\pi}$$ are also of this form. Assuming we agree on what integration, $e^{-x^2}$, $\pi$ and $\sqrt{\ }$ mean, then we can write a program which will evaluate both sides of this identity to ever increasing levels of accuracy, and terminates if the two sides disagree to this accuracy. The identity is then equivalent to the statement that this program never terminates.
Going through the proof of Goedel's incompleteness theorem generates a statement of the above form, i.e., "Program P with initial state S0 never terminates", with two properties. (1) If the program P terminates, it returns a proof that the program never terminates in the logic system. (2) If there exists a proof that P terminates in the logic system, then P never terminates.
So, if P terminated then it would generate a proof that the logic system is inconsistent and, similarly, if the program never terminates then it is not possible to prove this within the given logic system.
In fact, P can be constructed as a program which searches through all possible proof strings in the logic system until it finds a proof of "P never terminates", at which point it terminates. The assumption required of the logic system is that it is "effectively generated", basically meaning that it is possible to write a program checking all possible proofs of a statement.
-
Doesn't this use Church's thesis - which states that recursive and computable functions are the same thing? – teil May 13 2010 at 3:13
This is a philosophical question, rather than a mathematical one. Anyway, personally (it's a matter of personal taste!) I totally agree that mathematics is more about correctness than about truth.
In the following paragraphs I will try to (partially) answer your specific doubts about Goedel incompleteness in a down to earth way, with the caveat that I'm no expert in logic, nor am I a philosopher. (See also this MO question, from which I will borrow a piece of notation). I had some doubts about whether to post this answer, as it ended up being a bit too verbose, but in the end I thought it may help to clarify the related philosophical questions to a non-mathematician, and also to myself.
The point is that there are several "levels" in which you can "state" a certain mathematical statement; more: in theory, in order to make clear what you formally want to state, along with the informal "verbal" mathematical statement itself (such as $2+2=4$) you should specify in which "level" it sits. (Of course, as mathematicians don't want to get crazy, in everyday practice all of this is left completely as understood, even in mathematical logic).
For example, suppose we work in the framework of Zermelo-Fraenkel set theory ZF (plus a formal logical deduction system, such as Hilbert-Frege HF): let's call it Set1. In this setting, you can talk formally about sets and draw correct (relative to the deduction system) inferences about sets from the axioms. You can also formally talk and prove things about other mathematical entities (such as $\mathbb{N}$, $\mathbb{R}$, algebraic varieties or operators on Hilbert spaces), but everything always boils down to sets.
Still in this framework (that we called Set1) you can also play the game that logicians play: talking, and proving things, about theories $T$. How? Well, you only have sets, and in terms of sets alone you can define "logical symbols", the "language" $L$ of the theory you want to talk about, the "well formed formulae" in $L$, and also the set of "axioms" of your theory. Examples of such theories are Peano arithmetic PA (that in this incarnation we should perhaps call PA2), group theory, and (which is the reason of your perplexity) a version of Zermelo-Fraenkel set theory ZF as well (that we will call Set2). Note that every piece of Set2 "is" a set of Set1: even the "$\in$" symbol, or the "$=$" symbol, of Set2 is itself a set (e.g. a string of 0's and 1's specifying its ASCII character code...) of which we can formally talk within Set1, likewise every logical formula regardless of its "truth" or even well-formedness. Stating that a certain formula can be deduced from the axioms in Set2 reduces to a certain "combinatorial" (syntactical) assertion in Set1 about sets that describe sentences of Set2.
The good thing about having a meta-theory Set1 in which to construct (or from which to see) other formal theories $T$ is that you can compare different theories, and the good thing about this meta-theory being a set theory is that you can talk of models of these theories: you have a notion of semantics.
In the light of what we've said so far, you can think of the statement "$2+2=4$" either as a statement about natural numbers (elements of $\mathbb{N}$, constructed as "finite von Neumann ordinals" within Set1, for which $0:=\emptyset$, $1:=\{\emptyset\}$ etc.); or as a sentence of PA2 (which is actually itself a bare set, of which Set1 can talk).
An interesting (or quite obvious?) thing is that in some cases it makes sense to go on to "construct theories" also within the lower levels. For example, within Set2 you can easily mimic what you did at the above level and have formal theories, such as ZF set theory itself, again (which we can call Set3)! A crucial observation of Goedel's is that you can construct a version of Peano arithmetic not only within Set2 but even within PA2 itself (not surprisingly we'll call such a theory PA3). So you have natural numbers (of which PA2 formulae talk) codifying sentences of Peano arithmetic!
So, if we loosely write "$A-\triangleright B$" to indicate that the theory or structure $B$ can be "constructed" (or "formalized") within the theory $A$, we have a picture like this:
Set1 $-\triangleright$ ($\mathbb{N}$; PA2 $-\triangleright$ PA3; Set2 $-\triangleright$ Set3; T2 $-\triangleright$ T3; ...).
So, you see that in some cases a theory can "talk about itself": PA2 talks about sentences of PA3 (as they are just natural numbers!), and there is a formally precise way of stating and proving, within Set1, that "PA3 is essentially the same thing as PA2 in disguise".
Furthermore, you can make sense of otherwise loose questions such as "Can the theory $T$ prove its own consistency?". How? Well, you construct (within Set1) a version of $T$, say T2, and within T2 formalize another theory T3 that also "works exactly as $T$". Then you have to formalize the notion of proof.
So, the Goedel incompleteness result stating that
"Peano arithmetic cannot prove its own consistency"
is really a theorem of Set1 asserting that "PA2 cannot prove the consistency of PA3". This means: however you've codified the axioms and formulae of PA as natural numbers and the deduction rules as sentences about natural numbers (all within PA2), there is no way, manipulating correctly the formulae of PA2, to obtain a formula (expressed of course in terms of logical relations between natural numbers, according to your codification) that reads like "It is not true that axioms of PA3 imply $1\neq 1$".
You can say an exactly analogous thing about Set2 $-\triangleright$ Set3, and likewise about every theory at least as complicated as PA.
Now, about truth. First of all, the distinction between provability and truth, as far as I understand it. It is easy to say what being "provable" means for a formula in a formal theory $T$: it means that you can obtain it applying correct inferences starting from the axioms of $T$. This is a purely syntactical notion.
The concept of "truth", as understood in the semantic sense, poses some problems, as it depends on a set-theory-like meta-theory within which you are supposed to work (say, Set1). Saying that a certain formula of $T$ is true means that it holds true once interpreted in every model of $T$ (Of course for this definition to be of any use, $T$ must have models!).
If we could convince ourselves in a rigorous way that ZF was a consistent theory (and hence had "models"), it would be great because then we could simply define a sentence to be "true" if it holds in every model. This was Hilbert's program. Unfortunately, as said above, it is impossible to rigorously (within ZF itself for example) prove the consistency of ZF.
The assertion of Goedel's that
"There is a property of natural numbers that is true but unprovable from the axioms of Peano arithmetic"
is a theorem of Set1 stating that there is a sentence of PA2 that holds true* in any model of PA2 (such as $\mathbb{N}$) but is not obtainable as the conclusion of a finite set of correct logical inference steps from the axioms of PA2.
*(that a sentence of PA2 is "true in any model" here means: "the corresponding interpretation of that sentence in each model, which is a sentence of Set1, is a consequence of the axioms of Set1")
According to Goedel's theorems, you can find undecidable statements in any consistent theory which is rich enough to describe elementary arithmetic. That is, such a theory is either inconsistent or incomplete.
Foundational problems about the absolute meaning of truth arise in the "zeroth" level, i.e. about sentences expressed in what is supposed to be the foundational theory Th0 for all of mathematics. According to some, this Th0 ought to be itself a formal theory, such as ZF or some theory of classes or something weaker or different; and according to others it cannot be prescribed but in an informal way and reflects some ontological (or psychological) entity such as the "real universe of sets". I would roughly classify the former viewpoint as "formalism" and the second as "platonism".
One point in favour of platonism is that you have an absolute concept of truth in mathematics. One drawback is that you have to commit an act of faith about the existence of some "true universe of sets" on which you have no rigorous control (and hence the absolute concept of truth is not formally well defined). According to platonism, the Goedel incompleteness results say that
"Logic cannot capture all of mathematical truth".
On the other hand, one point in favour of "formalism" (in my sense) is that you don't need any ontological commitment about mathematics, but you still have a perfectly rigorous, though relative, control of your statements via checking the correctness of their derivation from some set of axioms (axioms that vary according to what you want to do). One consequence (not necessarily a drawback in my opinion) is that the Goedel incompleteness results assume the meaning:
"There is no place for an absolute concept of truth: you must accept that mathematics (unlike the natural sciences) is more a science about correctness than a science about truth".
-
In my humble opinion, the best reference for this kind of questions is Bourbaki's "Set Theory" ... Actually, I would recommend Bourbaki's book to people who, like me, have trouble to understand other texts on the same subject.
-
See mathoverflow.net/questions/16174 for logician Adrian Mathias' review of this text. – Joel David Hamkins May 12 2010 at 12:43
Thank you! Unfortunately, I don't understand Adrian Mathias' review. But I feel I understand Bourbaki's book. – Pierre-Yves Gaillard May 12 2010 at 13:16
The answer to the "unprovable but true" question is found on Wikipedia:
For each consistent formal theory T having the required small amount of number theory, the corresponding Gödel sentence G asserts: “G cannot be proved to be true within the theory T”...
If G is true: G cannot be proved within the theory, and the theory is incomplete. If G is false: then G can be proved within the theory and then the theory is inconsistent, since G is both provable and refutable from T.
-
If 'true' isn't the same as 'provable according to a set of specific axioms and rules', then, since every such provable statement is true, there must be 'true' statements that are not provable; otherwise provable and true would be synonymous. That means that as long as you define true as being different from provable, you don't actually need Godel's incompleteness theorems to show that there are true statements which are unprovable.
Tarski's definition of truth assumes that there can be a statement A which is true because there can exist an infinite number of proofs of an infinite number of individual statements that together constitute a proof of statement A, even if no proof of the entirety of these infinitely many individual statements exists. So Tarski's proof is basically reliant on a Platonist viewpoint that an infinite number of proofs of an infinite number of particular individual statements exists, even though no proof can be shown that this is the case. Whether Tarski's definition is a clarification of truth is a matter of opinion, not a matter of fact.
-
This may help: http://www.ditext.com/tarski/tarski.html.
Is it Philosophy or Mathematics? Both?
-
I think it is a Philosophical Question having a Mathematical Response.
This question cannot be rigorously expressed nor solved mathematically; nevertheless a philosopher may "understand" the question and may even "find" the response. This response obviously exists because it can only be YES or NO (and this is a binary mathematical response); unfortunately the correct answer is not yet known. Despite the fact that no rigorous argument may lead (even a philosopher) to discover the correct response, the response may be discovered empirically in, say, some billion years, simply by observing whether all of today's mathematical conjectures have been solved or not.
-
I am astonished by how little is known about logic by mathematicians. I am sorry, I don't want to insult anyone; it is just a realisation about the common "meta-knowledge" about what we are doing. (This is not the first question I have seen here that should be resolved in an undergraduate course in mathematical logic.)
-
Dear nickname, I don't believe your assertions are true. There are genuine disagreements about what the foundations of our subject are (e.g. platonic vs. formalistic, to indicate two useful extreme positions, between which many other positions can be usefully plotted.) I don't think that these issues can be resolved by a course in logic. (E.g. I don't think a genuine belief in/disbelief in the existence of the standard model of the natural numbers, or a true set-theoretic world, is something that can be decided by mathematical logic alone.) – Emerton May 25 2010 at 21:25
But Nickname, perhaps the algebraic geometers here might be equally astonished by how little algebraic geometry the average logician knows? My view is that this is a great strength of MathOverflow, that it allows us to learn about other areas so easily and so well. – Joel David Hamkins May 25 2010 at 21:29
http://mathoverflow.net/questions/38283/computing-the-largest-eigenvalue-of-a-very-large-sparse-matrix
## Computing the largest Eigenvalue of a very large sparse matrix?
I am trying to compute the asymptotic growth-rate in a specific combinatorial problem depending on a parameter w, using the Transfer-Matrix method. This amounts to computing the largest eigenvalue of the corresponding matrix.
For small values of w, the corresponding matrix is small and I can use the so-called power method: start with some vector, and multiply it by the matrix over and over, and under certain conditions you'll get the eigenvector corresponding to the largest eigenvalue. However, for the values of w I'm interested in, the matrix becomes too large, and so the vector becomes too large ($n>10,000,000$ entries or so), so it can't be contained in the computer's memory anymore and I need extra programming tricks or a very powerful computer.
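In outline, the power method with the matrix behind an entry oracle looks like this (a toy sketch; the 3-by-3 matrix inside `A` is a hypothetical stand-in for the real transfer matrix):

```python
def A(i, j):
    # Hypothetical black-box entry oracle (stands in for the transfer matrix).
    M = [[0, 1, 1],
         [1, 0, 1],
         [1, 1, 0]]
    return M[i][j]

n = 3
v = [1.0] * n
growth = 1.0
for _ in range(50):
    # one matrix-vector product, using only the entry oracle
    w = [sum(A(i, j) * v[j] for j in range(n)) for i in range(n)]
    growth = max(abs(x) for x in w)   # converges to the dominant eigenvalue
    v = [x / growth for x in w]
print(growth)  # → 2.0, the dominant eigenvalue of this toy matrix
```

The memory problem above is exactly that `v` and `w` each need $n$ entries.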
As for the matrix itself, I don't need to store it in memory - I can access it as a black box, i.e. given $i,j$ I can return $A_{ij}$ via a simple computation. Also, the matrix has only 0 and 1 entries, and I believe it to be sparse (i.e. only around $\log n$ of the entries are 1's, $n$ being the number of rows/columns). However, the matrix is not symmetric.
Is there some method more space-effective for computation of eigenvalues for a case like this?
-
A duplicate of math.stackexchange.com/questions/4368/… where it has already had a reply. – Robin Chapman Sep 10 2010 at 9:20
This type of black box, where each entry of a huge matrix is determined by some computation, unfortunately seems pretty much useless for sparse matrix computations. More useful would be a list of the nonzero entries, or at least a nice small subset of the entries containing the nonzero ones as a subset. – Darsh Ranjan Sep 10 2010 at 9:44
Is it worth copy and pasting the answer in here or closing this question? – alext87 Sep 10 2010 at 9:52
## 2 Answers
You could use the Arnoldi Iteration algorithm. This algorithm only requires the matrix A for matrix-vector multiplication. I'm expecting that you will be able to black-box the function v→Av. What you generate is an upper Hessenberg matrix H whose eigenvalues can be computed cheaply (by a direct method or Rayleigh quotient iteration) and which approximate the eigenvalues of A. Arnoldi Iteration will give the best approximation to the dominant eigenvalue so I suspect you won't have to do many iterations before you have a good estimate.
An excellent introduction to this is: "Numerical Linear Algebra" by Trefethen and Bau. (p250)
The basic algorithm can be found here: http://en.wikipedia.org/wiki/Arnoldi_iteration
Now the only thing that is required to make this a fully functional algorithm is a termination condition. Since you don't seem to need the dominant eigenvalue to a high degree of accuracy I would not worry and just stop when the dominant eigenvalue estimate doesn't change too much.
If you have Matlab you can always use the built in function eigs(Afun,n,...) where Afun is the black-box function handle that computes Av.
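For readers without Matlab, here is what a bare-bones Arnoldi iteration looks like in pure Python (a sketch, not production code; the small symmetric matrix inside `matvec` is a hypothetical stand-in for the real operator, and the dominant eigenvalue of the small Hessenberg block is extracted here by plain power iteration):

```python
import math

def matvec(v):
    # Hypothetical black-box v -> Av; this 3x3 matrix is only a stand-in
    # for the real transfer matrix (its dominant eigenvalue is 4).
    A = [[2.0, 1.0, 0.0],
         [1.0, 3.0, 1.0],
         [0.0, 1.0, 2.0]]
    n = len(A)
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

def arnoldi(mv, n, k):
    # Build an orthonormal Krylov basis Q and the Hessenberg matrix H of the
    # recurrence A q_j = sum_i H[i][j] q_i (standard Arnoldi, Gram-Schmidt).
    Q = [[1.0 / math.sqrt(n)] * n]            # arbitrary unit starting vector
    H = [[0.0] * k for _ in range(k + 1)]
    for j in range(k):
        w = mv(Q[j])
        for i in range(j + 1):                # orthogonalise against previous q's
            H[i][j] = sum(w[t] * Q[i][t] for t in range(n))
            w = [w[t] - H[i][j] * Q[i][t] for t in range(n)]
        H[j + 1][j] = math.sqrt(sum(x * x for x in w))
        if H[j + 1][j] < 1e-12:               # invariant subspace found, stop
            break
        Q.append([x / H[j + 1][j] for x in w])
    return H

def dominant_eig(H, k, iters=200):
    # H's leading k x k block is tiny, so plain power iteration on it is cheap.
    v = [1.0] * k
    lam = 1.0
    for _ in range(iters):
        w = [sum(H[i][j] * v[j] for j in range(k)) for i in range(k)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

H = arnoldi(matvec, 3, 3)
print(dominant_eig(H, 3))  # ≈ 4.0
```

Note the whole algorithm only ever touches the operator through `mv`, which is exactly the black-box access the question has.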
-
This is a copy of my answer from math.stackexchange.com/questions/4368/… – alext87 Sep 10 2010 at 10:00
I think this combined with Ricky Demer's answer is a good answer to this question: first reduce the dimension of the space as much as possible (to about log(n)), then use an iterative method like Arnoldi. – Darsh Ranjan Sep 11 2010 at 2:35
I think you should put in a significant effort to use established packages -- in this case, probably ARPACK (caam.rice.edu/software/ARPACK). It sometimes feels like computer drudgery rather than research adventure, but it's likely to be a much quicker route to a reliable answer. I see in the Applications section of that website that someone got eigenvalues from a 2million-order matrix in 600 CPU hours. If nothing else, this should motivate you to use an efficient algorithm (and 600 CPU hours actually sounds really good for a problem this size). – ed-wynn Sep 15 2010 at 8:17
    nonzeroindices = set()
    for i in range(n):
        for j in range(n):
            if A(i, j) == 1:
                nonzeroindices.add(i)   # note: set.union returns a new set, so .add is needed here
    nonzeroindex = sorted(nonzeroindices)
    m = len(nonzeroindex)
    newmatrix = []
    for i in range(m):
        newmatrix.append([])
        for j in range(m):
            newmatrix[i].append(A(nonzeroindex[i], nonzeroindex[j]))
Then compute the eigenvalues of the around 2*log(n)-by-2*log(n) newmatrix.
-
You don't even have to care about whether a column contains nonzero entries: if the row doesn't, then no eigenvector (except for the eigenvalue 0) can have a nonzero entry in that position. So you can just make a list of all the rows with nonzero entries and delete all the other ones (and the corresponding columns). – Darsh Ranjan Sep 10 2010 at 9:55
excellent point – Ricky Demer Sep 10 2010 at 10:01
http://physics.stackexchange.com/questions/tagged/galilean-relativity
# Tagged Questions
The galilean-relativity tag has no wiki summary.
### Inertial Frames of Reference - Inertial vs. Accelerated Frames
According to Robert Resnick's book "Introduction to Special Relativity", a line states the following as the definition of an inertial frame of reference: "We define an inertial system as a frame of ...
### Faraday's Law and Galilean Invariance
In Jackson's text he says that Faraday's law is actually: $\oint_{\partial \Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -k\iint_{\Sigma} \frac{\partial \mathbf B}{\partial t} \cdot{}$ ...
### Galilean relativity in projectile motion
Consider a reference frame $S'$ moving in the initial direction of motion of a projectile launched at time $t=0$. In the frame $S$ the projectile motion is: $$x=u(\cos\theta)t$$ ...
### Proper notation when working with three Euclidean spatial coordinates in a setting with a time parameter
The Euclidean group is the symmetry group of Euclidean space. It includes rotations and translations. Say I consider a Euclidean space and a time parameter. How does the Euclidean ...
### Do observers at different speeds perceive other speeds differently?
I was told that a plane takes less time to travel from China to the US than in the other direction, due to the rotation of the earth. I suspect this is incorrect, however. From this scenario I ...
http://mathoverflow.net/revisions/43133/list
The fact that the entries of the matrix are real does seem to help. The state of the art is the following.
• The spectrum depends continuously on $\xi$. However, it is not always possible to label the eigenvalues so that they individually are continuous functions.
• When the multiplicities $m_1,\ldots,m_r$ do not change as $\xi$ varies (no crossing of eigenvalues), then the eigenvalues are as smooth as the matrix. If the domain is simply connected, the eigenvalues may be labelled so as to be smooth functions.
• When the entries are analytic functions of a single variable ($k=1$) and the eigenvalues remain real, then the eigenvalues may be labelled so as to be analytic functions. However, in case of crossing, this nice labelling is not the obvious one (i.e. not $\lambda_1\le\lambda_2\le\cdots$). This becomes false for $k\ge 2$, as shown by the example $$\left(\begin{array}{cc} \xi_1 & \xi_2 \\ \xi_2 & -\xi_1 \end{array}\right)\qquad\qquad (1).$$
• The situation is not that good concerning the eigenvectors. The following is called Petrowski's example, $$\left(\begin{array}{ccc} 0 & \xi_1 & \xi_1 \\ 0 & 0 & 0 \\ \xi_1 & 0 & \xi_2 \end{array}\right).$$ The eigenvalues are real for every $\xi$, distinct when $\xi_1\ne0$. The matrix is diagonalisable for every $\xi$, but two eigenvectors have the same limit when $\xi_1\rightarrow0$.
If the domain is not simply connected, you may have additional difficulties with eigenvectors. Take example (1) above, with $\xi$ running over the unit circle $S^1$. When you continuously follow a unit eigenvector $V(\xi)$, it is flipped (i.e. multiplied by $-1$) after one loop.
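This flip can be checked numerically (a small sketch; it uses the fact, special to example (1) on the unit circle, that $M(t)^2=I$, so $(M(t)+I)/2$ is exactly the orthogonal projector onto the $+1$ eigenspace and can be used to track the eigenvector continuously):

```python
import math

def M(t):
    # the matrix (1) restricted to the unit circle, xi = (cos t, sin t)
    c, s = math.cos(t), math.sin(t)
    return [[c, s], [s, -c]]

steps = 2000
v = [1.0, 0.0]                       # unit eigenvector for eigenvalue +1 at t = 0
for k in range(1, steps + 1):
    t = 2 * math.pi * k / steps
    A = M(t)
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    # Since M(t)^2 = I, (M(t) + I)/2 projects orthogonally onto the +1
    # eigenspace, so averaging v with M(t)v tracks the eigenvector continuously.
    w = [(w[0] + v[0]) / 2, (w[1] + v[1]) / 2]
    norm = math.hypot(w[0], w[1])
    v = [w[0] / norm, w[1] / norm]
print(v)  # ≈ [-1.0, 0.0]: after one loop the eigenvector comes back flipped
```

The tracked vector is $(\cos(t/2), \sin(t/2))$, which returns to its negative after $t$ runs through $2\pi$.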
http://mathhelpforum.com/calculus/175221-higher-derivatives-question.html
1. ## Higher Derivatives Question
Hi all,
Here goes the question:
Given that $y=x\sin 3x+\cos 3x$, show that $x^2\frac{d^2y}{dx^2}+2y+4x^2y=0$.
I am quite comfortable computing first and higher derivatives (*just to make sure I am on the right track, is $\frac{dy}{dx}=\sin 3x-3\sin 3x+3x\cos 3x$?) and am more concerned about the 'showing' part. Hopefully someone can guide me along.
Another one:
Given that $xy=\sin x$, prove that $\frac{d^2y}{dx^2}+\frac{2}{x}\frac{dy}{dx}+y=0$.
It seems like a typical implicit differentiation question apart from the higher-derivatives part. I haven't really learnt how to find higher derivatives using implicit differentiation.
Any help is appreciated. Thanks in advance!
2. When you have found $\displaystyle \frac{dy}{dx}$ and $\displaystyle \frac{d^2y}{dx^2}$, substitute them and $\displaystyle y$ into the LHS of your equation. Show that it simplifies to the RHS.
3. Originally Posted by Prove It
When you have found $\displaystyle \frac{dy}{dx}$ and $\displaystyle \frac{d^2y}{dx^2}$, substitute them and $\displaystyle y$ into the LHS of your equation. Show that it simplifies to the RHS.
Cheers. Your reply was short and concise but managed to set my thinking straight. Now I am proud that I am finally able to attempt the question. Thanks again.
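For anyone who wants to double-check the second identity by machine: since $xy=\sin x$ gives $y=\sin x/x$ away from $x=0$, the claim can be verified symbolically (a sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sin(x) / x                      # from x*y = sin(x), valid for x != 0

# Substitute y into the left-hand side of the claimed identity.
lhs = sp.diff(y, x, 2) + (2 / x) * sp.diff(y, x) + y
assert sp.simplify(lhs) == 0           # the identity holds
```

This is the same substitute-and-simplify strategy suggested in the answer, just carried out by a computer algebra system.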
http://unapologetic.wordpress.com/2007/05/04/some-examples-of-modules/?like=1&_wpnonce=81a70c627a
# The Unapologetic Mathematician
## Some examples of modules
Today I want to run through a bunch of examples of the constructions we’ve been considering for modules. I’ll restrict to the case of a ring $R$ with unit.
One easy example of an $R$-module that I’ve mentioned before is the ring $R$ itself. We drop down to the underlying abelian group and then act on it using the ring multiplication. There are both left and right actions here: $r\cdot x=rx$ and $x\cdot r=xr$ where $r$ and $x$ are ring elements, $x$ considered as an element of the module. We’ll start off by taking this module and sticking it into some of the constructions.
When we consider $\hom_{R-{\rm mod}}(R,M)$ for some left $R$-module $M$ the left module structures on $R$ and $M$ will get eaten and the right module structure on $R$ will get flipped over, leaving us a left $R$-module. We can pick an element $f\in\hom_{R-{\rm mod}}(R,M)$ by specifying $f(1)\in M$. Then $f(r)=f(r\cdot1)=r\cdot f(1)$, telling us where everything else goes. If we write $f_m$ for the homomorphism with $f_m(1)=m$, then the left action of $R$ on homomorphisms says
$\left[r\cdot f_m\right](1)=f_m(r\cdot1)=r\cdot f_m(1)=r\cdot m$
Thus $r\cdot f_m=f_{r\cdot m}$. This means that $\hom_{R-{\rm mod}}(R,M)\cong M$ as left $R$-modules.
On the other hand, if we consider $\hom_{R-{\rm mod}}(M,R)$ we get a right $R$-module. This consists of all $R$-linear functions from $M$ to the ring $R$ itself. We call this the “dual” module to $M$, and write $M^*=\hom_{R-{\rm mod}}(M,R)$. Elements of the dual module are often called “linear functionals” on $M$.
Tensor products are even easier. When we consider $R\otimes_RM$ for a left $R$-module $M$ we can use the construction of tensor products to write an element as a finite sum: $\sum r_i\otimes m_i$. But then we can use the middle-linear property to write $r_i\otimes m_i=(1\cdot r_i)\otimes m_i=1\otimes(r_i\cdot m_i)$, and then the linearity to collect all the terms together, giving $1\otimes m$. The tensor product eats the module structure on $M$ and the right module structure on $R$, leaving a left $R$-module structure. We calculate
$r\cdot(1\otimes m)=(r\cdot1)\otimes m=r\otimes m=(1\cdot r)\otimes m=1\otimes(r\cdot m)$
so $R\otimes_RM\cong M$ as left $R$-modules.
Now let’s take two left $R$-modules $M$ and $N$ and make $\hom(M,N)$. This is an abelian group — a $\mathbb{Z}$-module — as is $M$. Let’s write $M$ as $\hom(R,M)$ as above and then tensor over $\mathbb{Z}$ with $\hom(M,N)$. Then we can compose homomorphisms
$\hom(M,N)\otimes M\cong\hom(M,N)\otimes\hom(R,M)\rightarrow\hom(R,N)\cong N$
This is the “evaluation” homomorphism that takes an element $m\in M$ and a homomorphism $f\in\hom(M,N)$ and gives back $f(m)\in N$.
As a special case, we can take $R$ itself in place of $N$. We get an evaluation homomorphism $M^*\otimes M\rightarrow R$. This “canonical pairing” we often write as $\langle\mu,m\rangle=\mu(m)$ for a linear functional $\mu$ and module element $m$.
What if we composed with an element of $N^*=\hom(N,R)$ instead of $M\cong\hom(R,M)$? We use the evaluation homomorphism to get
$N^*\otimes\hom(M,N)=\hom(N,R)\otimes\hom(M,N)\rightarrow\hom(M,R)=M^*$
So given a homomorphism $f:M\rightarrow N$ we get a homomorphism $f^*:N^*\rightarrow M^*$
Of course, all this goes through suitably changed by swapping “right” for “left”. For example, given a right $R$-module $M$ we have a dual left $R$-module $M^*=\hom_{{\rm mod}-R}(M,R)$.
What do we get if we start with a left module $M$, dualize it, then dualize again to get another left module $M^{**}=\left(M^*\right)^*$? Following the definitions we see $M^{**}=\hom_{{\rm mod}-R}(\hom_{R-{\rm mod}}(M,R),R)$. I claim that there is a natural morphism of left $R$-modules $M\rightarrow M^{**}$. That is, a special element of
$\hom_{R-{\rm mod}}(M,\hom_{{\rm mod}-R}(\hom_{R-{\rm mod}}(M,R),R))$
but we know that this is isomorphic to
$\hom_{R-{\rm mod}}(\hom_{R-{\rm mod}}(M,R)\otimes M,R)$
which we write as
$\hom_{R-{\rm mod}}(M^*\otimes M,R)$
so we’re really looking for a special homomorphism from $M^*\otimes M$ to $R$. And we’ve got one: the canonical pairing! So we take the canonical pairing as a homomorphism from $M^*\otimes M\rightarrow R$ and pass it through this natural isomorphism to get a homomorphism $d:M\rightarrow M^{**}$. In case this looks completely insane, here it is in terms of elements: $d(m)$ takes a linear functional $\mu$ and gives back an element of the ring by the rule $\left[d(m)\right](\mu)=\mu(m)$.
Posted by John Armstrong | Ring theory
http://mathoverflow.net/questions/85313/homotopy-type-of-tensors-of-moore-spectra/85575
## Homotopy type of tensors of Moore spectra
I would like to hear what's known about the homotopy type of smash products of mod-$p^j$ Moore spectra, for $p$ an odd prime.
First, here is what I'm specifically interested in: there is a short exact sequence $$0 \to \mathbb{Z} \xrightarrow{p^j} \mathbb{Z} \to \mathbb{Z}/p^j \to 0.$$ Tensoring this short exact sequence against your favorite group $G$ yields an exact sequence $$\cdots \to G \xrightarrow{p^j} G \to G \otimes \mathbb{Z}/p^j \to 0,$$ which exhibits $G \otimes \mathbb{Z}/p^j \cong G / p^j G$ as $G$ with its $p^j$-divisible part stripped out.
Moore spectra play a related role in homotopy theory: they are defined by the cofiber sequence $$S \xrightarrow{p^j} S \to M(p^j).$$ Smashing through with any spectrum $X$ gives the new cofiber sequence $$X \xrightarrow{p^j} X \to X \wedge M(p^j),$$ and chasing this around shows that the homotopy group $\pi_n(X \wedge M(p^j))$ is a mix of $\pi_n X / p^j(\pi_n X)$, as one would expect, together with the $p^j$-torsion of $\pi_{n-1} X$, which is new and different. So, though $X \wedge M(p^j)$ is sometimes written $X / p^j$, and though this notation suggests a useful analogy, this isn't exactly true, and we have to be careful about things we expect to follow from the algebraic setting.
The specific algebraic fact I'm interested in is that the composition of the tensor functors $- \otimes \mathbb{Z}/p^j$ and $- \otimes \mathbb{Z}/p^i$ for $j > i$ has a reduction: $$- \otimes \mathbb{Z}/p^j \otimes \mathbb{Z}/p^i \cong - \otimes \mathbb{Z}/p^i.$$ The exact translation of this statement to Moore spectra and the smash product is not true --- one can, for instance, compute the reduced integral homology of $M(p^i) \wedge M(p^j)$ to see that there are too many cells around for it to be equivalent to $M(p^i)$ alone. However, the same homology calculation suggests something related: there is an abstract isomorphism between the reduced homology groups of $M(p^j) \wedge M(p^i)$ and those of $M(p^i) \wedge M(p^i)$. This is, of course, also true for groups; it is indeed the case that $\mathbb{Z}/p^i \otimes \mathbb{Z}/p^j \cong \mathbb{Z}/p^i \otimes \mathbb{Z}/p^i$. This is what I want to know:
For $j > i$, is $M(p^j) \wedge M(p^i)$ homotopy equivalent to $M(p^i) \wedge M(p^i)$?
If this is not true, I'm willing to throw in some extra qualifiers. For instance, is the situation improved if we work $K(n)$-locally? Is it true only when $j \gg i$? What if we additionally restrict to $j \gg i \gg 0$?
This specific question aside, I am also interested in any and all known features of the homotopy type of $M(p^i) \wedge M(p^j)$ --- any favorite fact you have that would help me get a grip on them. I'm also specifically interested in variants of the above question for generalized Moore spectra: can anything similar be said about those?
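(As a side note on the purely algebraic statement above: for cyclic groups one has $\mathbb{Z}/a\otimes\mathbb{Z}/b\cong\mathbb{Z}/\gcd(a,b)$, which is exactly why $\mathbb{Z}/p^i\otimes\mathbb{Z}/p^j$ and $\mathbb{Z}/p^i\otimes\mathbb{Z}/p^i$ agree. A trivial sanity check in Python:)

```python
from math import gcd

# Z/a (x) Z/b is cyclic of order gcd(a, b); with a = p^j, b = p^i and j >= i,
# the gcd is p^i, so tensoring with Z/p^j or with Z/p^i gives the same group.
p, i, j = 3, 2, 5
assert gcd(p**j, p**i) == p**i == gcd(p**i, p**i)
```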
## 2 Answers
For $p\neq 2$ the answer is yes. It's an easy computation using homology decompositions in the sense of Eckmann–Hilton, the well-known exact sequence
$$\operatorname{Ext}(A,\pi_{n+1}(X))\hookrightarrow [M(A,n),X]\twoheadrightarrow \operatorname{Hom}(A,\pi_n(X)),$$
and the computation
$$\pi_{n+1}(M(A,n))=A\otimes\mathbb{Z}/2.$$
Here $M(A,n)$ is the Moore spectrum whose homology is the abelian group $A$ concentrated in degree $n$.
As you point out, it is easy to check that
$$H_n(M(A,s)\wedge M(B,t))= \left\{\begin{array}{ll} A\otimes B,&n=s+t,\\ \operatorname{Tor}_1(A,B),&n=s+t+1,\\ 0,&\text{otherwise}. \end{array}\right.$$
Therefore, $M(A,s)\wedge M(B,t)$ can be obtained as the homotopy cofiber of a map $$f\colon M(\operatorname{Tor}_1(A,B),s+t)\longrightarrow M(A\otimes B,s+t)$$ which is trivial in homology $H _{*}(f)=0$.
Suppose for instance that $A$ and $B$ are finite and do not have $2$-torsion. Then the previous short exact sequence shows that homology induces an isomorphism
$$H _{s+t}\colon [M(\operatorname{Tor}_1(A,B),s+t), M(A\otimes B,s+t)]\cong \operatorname{Hom}(\operatorname{Tor}_1(A,B),A\otimes B).$$
Therefore $f$ must be null-homotopic, so
$$M(A,s)\wedge M(B,t) \simeq M(A\otimes B,s+t)\vee M(\operatorname{Tor}_1(A,B),s+t+1).$$
If you take $A=\mathbb{Z}/p^i$ and either $B=\mathbb{Z}/p^j$ or $B=\mathbb{Z}/p^i$ you always obtain the same thing on the right.
For $p= 2$ the answer is no. The spectrum $M(\mathbb{Z}/2,0)\wedge M(\mathbb{Z}/4,0)$ is the mapping cone of $$4\colon M(\mathbb{Z}/2,0)\longrightarrow M(\mathbb{Z}/2,0),$$ which is known to be null-homotopic; in fact $[M(\mathbb{Z}/2,0),M(\mathbb{Z}/2,0)]\cong\mathbb{Z}/4$, generated by the identity. Hence
$$M(\mathbb{Z}/2,0)\wedge M(\mathbb{Z}/4,0)\simeq M(\mathbb{Z}/2,0)\vee M(\mathbb{Z}/2,1).$$
In particular the action of the Steenrod algebra on the mod 2 homology of $M(\mathbb{Z}/2,0)\wedge M(\mathbb{Z}/4,0)$ is trivial.
On the other hand, it is known that the mod 2 Steenrod algebra acts on the mod 2 homology of $M(\mathbb{Z}/2,0)\wedge M(\mathbb{Z}/2,0)$ in a non-trivial way, i.e. the first Steenrod operation sends the 0-dimensional generator to the 1-dimensional generator. Therefore $M(\mathbb{Z}/2,0)\wedge M(\mathbb{Z}/4,0)$ and $M(\mathbb{Z}/2,0)\wedge M(\mathbb{Z}/2,0)$ cannot be homotopy equivalent.
As for references, see Hatcher's book. This book doesn't deal with spectra but since we are working with finite-dimensional CW-complexes you can assume that we are in the stable range.
Oh, very nice! I knew about this decomposition but had never actually seen it used; its associated spectral sequence takes the form $H_* X \Rightarrow H_* X$, which is not so helpful, so I had no idea what the point was. Now I know! // I'm going to leave this unaccepted for a little while longer to encourage onlookers to tell me about generalized Moore spectra, but this is very much what I was looking for. I'm glad it turned out to be so easy. Thank you! – Eric Peterson Jan 10 2012 at 19:43
You're welcome :-) – Fernando Muro Jan 10 2012 at 19:59
You might find this paper useful:
````\bib{MR760188}{article}{
author={Oka, Shichir{\^o}},
title={Multiplications on the Moore spectrum},
journal={Mem. Fac. Sci. Kyushu Univ. Ser. A},
volume={38},
date={1984},
number={2},
pages={257--276},
issn={0373-6385},
review={\MR{760188 (85j:55019)}},
doi={10.2206/kyushumfs.38.257},
}
````
http://math.stackexchange.com/questions/668/whats-an-intuitive-way-to-think-about-the-determinant/1084
# What's an intuitive way to think about the determinant?
In my linear algebra class, we just talked about determinants. So far I’ve been understanding the material okay, but now I’m very confused. I get that when the determinant is zero, the matrix doesn’t have an inverse. I can find the determinant of a $2\times 2$ matrix by the formula. Our teacher showed us how to compute the determinant of an $N \times N$ matrix by breaking it up into the determinants of smaller matrices, and apparently there is a way by summing over a bunch of permutations. But the notation is really hard for me and I don’t really know what’s going on with them anymore. Can someone help me figure out what a determinant is, intuitively, and how all those definitions of it are related?
I just wanted this question to be in the archives, because it's a perennial one that admits a better response than is in most textbooks. – Jamie Banks Jul 25 '10 at 2:26
Hehe, you're going against your own suggestion of asking questions that you actually want answered!! I'm teasing though, I understand your motivation. Can we set a precedent of making seeded questions CW? I kinda like that idea, I will propose it on Meta. I am rambling. – BBischof Jul 25 '10 at 2:34
@BBischof, see the meta thread for CW discussion – Jamie Banks Jul 25 '10 at 6:10
In the end, I hope this didn't come across as me hating on your question... Yet somehow, I feel like that has happened. – BBischof Jul 25 '10 at 19:17
## 9 Answers
Your trouble with determinants is pretty common. They’re a hard thing to teach well, too, for two main reasons that I can see: the formulas you learn for computing them are messy and complicated, and there’s no “natural” way to interpret the value of the determinant, the way it’s easy to interpret the derivatives you do in calculus at first as the slope of the tangent line. It’s hard to believe things like the invertibility condition you’ve stated when it’s not even clear what the numbers mean and where they come from.
Rather than show that the many usual definitions are all the same by comparing them to each other, I’m going to state some general properties of the determinant that I claim are enough to specify uniquely what number you should get when you put in a given matrix. Then it’s not too bad to check that all of the definitions for determinant that you’ve seen satisfy those properties I’ll state.
The first thing to think about if you want an “abstract” definition of the determinant to unify all those others is that it’s not an array of numbers with bars on the side. What we’re really looking for is a function that takes N vectors (the N columns of the matrix) and returns a number. Let’s assume we’re working with real numbers for now.
Remember how those operations you mentioned change the value of the determinant?
(1) Switching two rows or columns changes the sign.
(2) Multiplying one row by a constant multiplies the whole determinant by that constant.
(3) The general fact that number two draws from: the determinant is linear in each row. That is, if you think of it as a function $\det: \mathbb{R}^{n^2} \rightarrow \mathbb{R}$, then $\det(a \vec{v_1} +b \vec{w_1}, \vec{v_2},...,\vec{v_n}) = a \det(\vec{v_1},\vec{v_2},...,\vec{v_n}) + b \det(\vec{w_1}, \vec{v_2}, ...,\vec{v_n})$, and the corresponding condition in each other slot.
I claim that these facts, together with the fact that the determinant of the identity matrix is one, are enough to define a unique function that takes in N vectors (each of length N) and returns a real number, the determinant of the matrix given by those vectors. I won’t prove that, but I’ll show you how it helps with some other interpretations of the determinant.
In particular, there’s a nice geometric way to think of a determinant. Consider the unit cube in N dimensional space: the set of vectors of length N with coordinates 0 or 1 in each spot. The determinant of the linear transformation (matrix) T is the signed volume of the region gotten by applying T to the unit cube. (Don’t worry too much if you don’t know what the “signed” part means, for now).
How does that follow from our abstract definition?
Well, if you apply the identity to the unit cube, you get back the unit cube. And the volume of the unit cube is 1.
If you stretch the cube by a constant factor in one direction only, the new volume is that constant. And if you stack two blocks together aligned on the same direction, their combined volume is the sum of their volumes: this all shows that the signed volume we have is linear in each coordinate when considered as a function of the input vectors.
Finally, when you switch two of the vectors that define the unit cube, you flip the orientation. (Again, this is something to come back to later if you don’t know what that means).
So there are ways to think about the determinant that aren’t symbol-pushing. If you’ve studied multivariable calculus, you could think about, with this geometric definition of determinant, why determinants (the Jacobian) pop up when we change coordinates doing integration. Hint: a derivative is a linear approximations of the associated function, and consider a “differential volume element” in your starting coordinate system.
It’s not too much work to check that the area of the parallelogram formed by vectors $(a,b)$ and $(c,d)$ is $\det((a,b),(c,d))$, either: you might try that to get a sense for things.
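To get that sense numerically: the parallelogram with edges $(3,1)$ and $(1,2)$ has base-times-height area equal to the determinant (a sketch with NumPy; the specific vectors are just an illustrative choice):

```python
import numpy as np

u = np.array([3.0, 1.0])               # first edge (a, b)
v = np.array([1.0, 2.0])               # second edge (c, d)
M = np.column_stack([u, v])            # matrix whose columns are u, v

# Area = base * height, where height = component of v orthogonal to u.
base = np.linalg.norm(u)
height = np.linalg.norm(v - (v @ u) / (u @ u) * u)
assert np.isclose(base * height, abs(np.linalg.det(M)))   # both equal 5
```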
Very nice explanation. I think you should clarify what you mean by "the determinate is linear in each row" though – Casebash Jul 25 '10 at 3:07
Great answer. We were taught the determinant as the generalized volume function in our algebra class. – Neil G Aug 28 '10 at 9:26
I hope you don't mind but I corrected a small typo in the third property of the determinant and added some Latex to make the identity a little bit easier to read. – Adrián Barquero Nov 14 '10 at 18:38
Nicely done. We should all keep this in mind when we teach determinants. – Chris Leary Nov 11 '11 at 19:30
You could think of a determinant as a volume. Think of the columns of the matrix as vectors at the origin forming the edges of a skewed box. The determinant gives the volume of that box. For example, in 2 dimensions, the columns of the matrix are the edges of a parallelogram.
You can derive the algebraic properties from this geometrical interpretation. For example, if two of the columns are linearly dependent, your box is missing a dimension and so it's been flattened to have zero volume.
If I may, I would add to this answer (which I think is a very good one) in two minor aspects. First, a determinant also has a sign, so we want the concept of oriented volume. (This is somewhat tricky, but definitely important, so you might as well have it in mind when you're learning about "right hand rules" and such.) Second, I think better than a volume is thinking of the determinant as the multiplicative change in volume of a parallelopiped under the linear transformation. (Of course you can always take the first one to be the unit n-cube and say that you are just dividing by one.) – Pete L. Clark Jul 28 '10 at 20:08
+1: I like this answer because there is a direct link to some application in physics: In special relativity we are talking of the conservation of space-time-volume, which means that the determinant of the transformation matrix is const. 1 – vonjd Jan 16 '11 at 10:22
In addition to the answers above, the determinant is a function from the set of square matrices into the real numbers that preserves the operation of multiplication: \begin{equation}\det(AB) = \det(A)\det(B) \end{equation} and so it carries *some* information about square matrices into the much more familiar set of real numbers.
Some examples:
The determinant function maps the identity matrix $I$ to the identity element of the real numbers ($\det(I) = 1$.)
Which real number does not have a multiplicative inverse? The number 0. So which square matrices do not have multiplicative inverses? Those which are mapped to 0 by the determinant function.
What is the determinant of the inverse of a matrix? The inverse of the determinant, of course. (Etc.)
This "operation preserving" property of the determinant explains some of the value of the determinant function and provides a certain level of "intuition" for me in working with matrices.
+1 for including the questions. Many of them. Good ones. Especially the "So which square matrices do not have multiplicative inverses?" pair. And for featuring a nice doggy in your portrait! – naxa Jan 30 at 17:40
The top exterior power of an $n$-dimensional vector space $V$ is one-dimensional. Its elements are sometimes called pseudoscalars, and they represent oriented $n$-dimensional volume elements.
A linear operator $f$ on $V$ can be extended to a linear map on the exterior algebra according to the rules $f(\alpha) = \alpha$ for $\alpha$ a scalar and $f(A \wedge B) = f(A) \wedge f(B), f(A + B) = f(A) + f(B)$ for $A$ and $B$ blades of arbitrary grade. Trivia: some authors call this extension an outermorphism. The extended map will be grade-preserving; that is, if $A$ is a homogeneous element of the exterior algebra of grade $m$, then $f(A)$ will also have grade $m$. (This can be verified from the properties of the extended map I just listed.)
All this implies that a linear map on the exterior algebra of $V$ once restricted to the top exterior power reduces to multiplication by a constant: the determinant of the original linear transformation. Since pseudoscalars represent oriented volume elements, this means that the determinant is precisely the factor by which the map scales oriented volumes.
It's worth mentioning here that, as abstract as the construction by the top exterior power is, it's the cleanest way to derive the permutation formula. – Qiaochu Yuan Jul 31 '10 at 8:14
For the record I'll try to give a reply to this old question, since I think some elements can be added to what has been already said.
Even though they are basically just (complicated) expressions, determinants can be mysterious when first encountered. Questions that arise naturally are: (1) how are they defined in general?, (2) what are their important properties?, (3) why do they exist?, (4) why should we care?, and (5) why does their expression get so huge for large matrices?
Since $2\times2$ and $3\times3$ determinants are easily defined explicitly, question (1) can wait. While (2) has many answers, the most important ones are, to me: determinants detect (by becoming 0) the linear dependence of $n$ vectors in dimension $n$, and they are an expression in the coordinates of those vectors (rather than for instance an algorithm). If you have a family of vectors that depend (or at least one of them depends) on a parameter, and you're asking (or are being asked) for which parameter values they are linearly dependent, then trying to use Gaussian elimination or something similar to detect linear dependence can run into trouble: one might need assumptions on the parameter to assure some coefficient is nonzero, and even then dividing by it gives very messy expressions. Provided the number of vectors equals the dimension $n$ of the space, taking a determinant will however immediately transform the question into an equation for the parameter (which one may or may not be capable of solving, but that is another matter). This is exactly how one obtains an equation in eigenvalue problems, in case you've seen those. This provides a first answer to (4). (But there is a lot more you can do with determinants once you get used to them.)
As for question (3), the mystery of why determinants exist in the first place can be reduced by considering the situation where one has $n-1$ given linearly independent vectors, and asks when a final unknown vector $\vec x$ will remain independent from them, in terms of its coordinates. The answer is that it usually will, in fact always unless $\vec x$ happens to be in the linear span $S$ of those $n-1$ vectors, which is a subspace of dimension $n-1$. For instance, if $n=2$ (with one vector $\vec v$ given) the answer is "unless $\vec x$ is a scalar multiple of $\vec v$". Now if one imagines a fixed (nonzero) linear combination of the coordinates of $\vec x$ (the technical term is a linear form on the space), then it will become 0 precisely when $\vec x$ is in some subspace of dimension $n-1$. With some luck, this can be arranged to be precisely the linear span $S$. (In fact no luck is involved: if one extends the $n-1$ vectors by one more vector to a basis, then expressing $\vec x$ in that basis and taking its final coordinate will define such a linear form; however you can ignore this argument unless you are particularly suspicious.) Now the crucial observation is that not only does such a linear combination exist, its coefficients can be taken to be expressions in the coordinates of our $n-1$ vectors. For instance in the case $n=2$ if one puts $\vec v={a\choose b}$ and $\vec x={x_1\choose x_2}$, then the linear combination $-bx_1+ax_2$ does the job (it becomes 0 precisely when $\vec x$ is a scalar multiple of $\vec v$), and $-b$ and $a$ are clearly expressions in the coordinates of $\vec v$. In fact they are linear expressions. For $n=3$ with two given vectors, the expressions for the coefficients of the linear combination are more complicated, but they can still be explicitly written down (each coefficient is the difference of two products of coordinates, one from each vector). These expressions are linear in each of the vectors, if the other one is fixed.
Thus one arrives at the notion of a multilinear expression (or form). The determinant is in fact a multilinear form: an expression that depends on $n$ vectors, and is linear in each of them taken individually (fixing the other vectors to arbitrary values). This means it is a sum of terms, each of which is the product of a coefficient, and of one coordinate each of all the $n$ vectors. But even ignoring the coefficients, there are many such terms possible: a whopping $n^n$ of them!
However, we want an expression that becomes 0 when the vectors are linearly dependent. Now the magic (sort of) is that even the seemingly much weaker requirement that the expression becomes 0 when two successive vectors among the $n$ are equal will assure this, and it will moreover almost force the form of our expression upon us. Multilinear forms that satisfy this requirement are called alternating. I'll skip the (easy) arguments, but an alternating form cannot involve terms that take the same coordinate of any two different vectors, and they must change sign whenever one interchanges the role of two vectors (in particular they cannot be symmetric with respect to the vectors, even though the notion of linear dependence is symmetric; note that already $-bx_1+ax_2$ is not symmetric in $(a,b)$ and $(x_1,x_2)$). Thus any one term must involve each of the $n$ coordinates once, but not necessarily in order: it applies a permutation of the coordinates 1,2,...,$n$ to the successive vectors. Moreover, if a term involves one such permutation, then any term obtained by interchanging two positions in the permutation must also occur, with an opposite coefficient. But any two permutations can be transformed into one another by repeating such interchanges, so if there are any terms at all, then there must be terms for all $n!$ permutations and their coefficients are all equal or opposite. This explains question (5), why the determinant is such a huge expression when $n$ is large.
Finally the fact that determinants exist turns out to be directly related to the fact that signs can be associated to all permutations in such a way that interchanging entries always changes the sign, which is part of the answer to question (3). As for question (1), we can now say that the determinant is uniquely determined by being an $n$-linear alternating expression in the entries of $n$ column vectors, which contains a term consisting of the product of their coordinates 1,2,...,$n$ in that order (the diagonal term) with coefficient $+1$. The explicit expression is a sum over all $n!$ permutations, the corresponding term being obtained by applying those coordinates in permuted order, and with the sign of the permutation as coefficient. A lot more can be said about question (2), but I'll stop here.
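The permutation-sum description in the last paragraph can be turned directly into (inefficient but instructive) code; a sketch comparing it against a library determinant:

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    # Sign of a permutation p of (0, ..., n-1): (-1)^(number of inversions).
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def det_by_permutations(M):
    # Sum over all n! permutations of the signed products of entries,
    # taking one entry from each row and each column.
    n = len(M)
    return sum(perm_sign(p) * np.prod([M[i][p[i]] for i in range(n)])
               for p in permutations(range(n)))

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_by_permutations(M), np.linalg.det(M))   # both equal 8
```

The $n!$ terms make this hopeless for large $n$, which is exactly the point of question (5).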
-
I recorded a lecture on the geometric definition of determinants:
Geometric definition of determinants
It has elements from the answers by Katie Banks and John Cook, and goes into details in a leisurely manner.
-
If you have a matrix $H$, then you can calculate the correlation matrix $G = H \times H^H$ (where $H^H$ denotes the complex conjugated and transposed version of $H$).
If you do an eigenvalue decomposition of $G$, you get eigenvalues $\lambda$ and eigenvectors $v$ that in combination $\lambda\times v$ describe the same space.
Now there is the following equation:
• $\det(H \times H^H)$ = product of all eigenvalues $\lambda$
I.e., if you have a $3\times3$ matrix $H$ then $G$ is $3\times3$ too, giving us three eigenvalues. The product of these eigenvalues gives us the volume of a cuboid. With every extra dimension/eigenvalue the cuboid gets an extra dimension.
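This identity is easy to check numerically. A short numpy sketch (illustrative, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

G = H @ H.conj().T                   # the correlation matrix H * H^H
eigenvalues = np.linalg.eigvalsh(G)  # G is Hermitian, so the eigenvalues are real

print(np.prod(eigenvalues))          # product of the eigenvalues ...
print(np.linalg.det(G).real)         # ... equals det(G) = |det(H)|^2
```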
-
Think about a scalar equation, $$ax = b$$ where we want to solve for $x$. We know we can always solve the equation if $a\neq 0$, however, if $a=0$ then the answer is "it depends". If $b\neq 0$, then we cannot solve it, however, if $b=0$ then there are many solutions (i.e. $x \in \mathbb{R}$). The key point is that the ability to solve the equation unambiguously depends on whether $a=0$.
When we consider the similar equation for matrices
$$\mathbf{Ax} = \mathbf{b}$$
the question as to whether we can solve it is not so easily settled by whether $\mathbf{A}=\mathbf{0}$ because $\mathbf{A}$ could consist of all non-zero elements and still not be solvable for $\mathbf{b}\neq\mathbf{0}$. In fact, for two different vectors $\mathbf{y}_1 \neq \mathbf{0}$ and $\mathbf{y}_2\neq \mathbf{0}$ we could very well have that
$$\mathbf{Ay}_1 \neq \mathbf{0}$$ and $$\mathbf{Ay}_2 = \mathbf{0}.$$
If we think of $\mathbf{y}$ as a vector, then there are some directions in which $\mathbf{A}$ behaves like non-zero (this is called the row space) and other directions where $\mathbf{A}$ behaves like zero (this is called the null space). The bottom line is that if $\mathbf{A}$ behaves like zero in some directions, then the answer to the question "is $\mathbf{Ax} = \mathbf{b}$ generally solvable for any $\mathbf{b}$?" is "it depends on $\mathbf{b}$". More specifically, if $\mathbf{b}$ is in the column space of $\mathbf{A}$, then there is a solution.
So is there a way that we can tell whether $\mathbf{A}$ behaves like zero in some directions? Yes, it is the determinant! If $\det(\mathbf{A})\neq 0$ then $\mathbf{Ax} = \mathbf{b}$ always has a solution. However if, $\det(\mathbf{A}) = 0$ then $\mathbf{Ax} = \mathbf{b}$ may or may not have a solution depending on $\mathbf{b}$ and if there is one, then there are an infinite number of solutions.
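A short numpy illustration (not from the original answer) of a matrix that "behaves like zero" in one direction, and of how solvability of $\mathbf{Ax}=\mathbf{b}$ then depends on $\mathbf{b}$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rows are proportional

print(np.linalg.det(A))             # ~0: A is singular

y = np.array([2.0, -1.0])
print(A @ y)                        # [0. 0.] -- y is in the null space of A

b_good = np.array([3.0, 6.0])       # in the column space -> solvable
x, *_ = np.linalg.lstsq(A, b_good, rcond=None)
print(np.allclose(A @ x, b_good))   # True: an (in fact infinite) solution set exists

b_bad = np.array([3.0, 5.0])        # outside the column space -> no solution
x_bad, *_ = np.linalg.lstsq(A, b_bad, rcond=None)
print(np.allclose(A @ x_bad, b_bad))  # False: only a least-squares fit exists
```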
-
Now, to get an intuitive idea, we first have to consider a set of simultaneous linear equations. To get an answer we eliminate one variable and substitute: for example, from $2a+3b=13$ and $a+b=5$ we get $a=2$ and $b=3$. But the thing to observe is that if the equations are proportional then it is impossible to come to a conclusion, e.g. $a+b=2$ and $2a+2b=4$: we cannot find a solution because they are nothing but the same equation (hint: divide $2a+2b=4$ by 2). Hence determinants are used to determine the consistency of a system: here the first set is consistent (this can be verified easily), but the second one is not, because the two rows are proportional!
-
First of all, I don't find this to be very intuitive... secondly, the system \begin{align*} a + b &= 2\\ 2a + 2b &= 4 \end{align*} is a perfectly consistent system... it simply has more than one solution: $a = b = 1$ works, as does $a = 0$, $b = 2$... and infinitely many other options. An inconsistent set of equations isn't one that is "proportional," but rather one like \begin{align*} a + b &= 0\\ a + b &= 3, \end{align*} as no combination of $a$ and $b$ will be able to make this work. So the determinant isn't saying that there's an inconsistency, it's saying there's a dependence. – Stahl May 13 at 7:11
thanx a ton buddy....i'll take that correction ..... and please keep guiding me further.... – Abhinav Raichur May 13 at 7:16
http://nrich.maths.org/4718/index?nomenu=1
## 'An Introduction to Mathematical Induction' printed from http://nrich.maths.org/
Quite often in mathematics we find ourselves wanting to prove a statement that we think is true for every natural number $n$.
For example, you may have met the formula $\frac{1}{6}n(n+1)(2n+1)$ for the sum $$\sum_{i=1}^n i^2=1^2+2^2+\ldots+n^2.$$ We can try some values of $n$, and see that the formula seems to be right: \begin{eqnarray} n=1: & & \sum_{i=1}^n i^2 = 1^2 = 1;\\ & & \frac{1}{6}n(n+1)(2n+1) = \frac{1}{6}\times 1\times 2\times 3 = 1\\ \\ n=2: & & \sum_{i=1}^n i^2 = 1^2 + 2^2 = 1+4 = 5;\\ & & \frac{1}{6}n(n+1)(2n+1) = \frac{1}{6}\times 2\times 3\times 5 = 5\\ \\ n=3: & & \sum_{i=1}^n i^2 = 1^2 + 2^2 + 3^2 = 1+4+9 = 14;\\ & & \frac{1}{6}n(n+1)(2n+1) = \frac{1}{6}\times 3\times 4\times 7 = 14.\\ \end{eqnarray} But we want to prove that this is true for all positive integers $n$, and it's going to be impossible if we try to do this by putting in all possible values! Instead, we're going to have to be a bit more cunning$\ldots$
Have you ever played that game with dominoes where you line them up on end and then, by knocking over the first one, knock over the whole lot? You can think of proof by induction as the mathematical equivalent (although it does involve infinitely many dominoes!). Suppose that we have a statement $P(n)$, and that we want to show that it's true for all $n$. So in our example above, $P(n)$ is: $$"\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1) (2n+1)".$$ Think of each $P(n)$ as a domino. If we can show that $P(1)$ is true (that is, we can knock over the first domino) and that if $P(n)$ is true then so is $P(n+1)$ (knocking over one domino means the next one will also fall over), do you agree that we've then shown that $P(n)$ is true for all $n\geq 1$ (because all of the dominoes will fall over)?
We'll know that $P(1)$ is true, so we'll know that $P(1+1)=P(2)$ is true, so we'll know that $P(2+1)=P(3)$ is true, so we'll know that $P(3+1)=P(4)$ is true, $\ldots$ You get the idea.
Let's go back to our example from above, about sums of squares, and use induction to prove the result. We know that $P(1)$ is true (we did that before!). Now let's see what would happen if we knew that $P(n)$ is true. Then we'd know that $\sum_{i=1}^n i^2 = \frac{1}{6}n(n+1)(2n+1)$. So what can we say about $\sum_{i=1}^{n+1} i^2$? Well, \begin{eqnarray} \sum_{i=1}^{n+1} i^2 & = & (n+1)^2 + \sum_{i=1}^n i^2\\ & = & (n+1)^2 + \frac{1}{6}n(n+1)(2n+1)\quad\textrm{(by the inductive hypothesis)}\\ & = & \frac{1}{6}(n+1)\left[6(n+1)+n(2n+1)\right]\\ & = & \frac{1}{6}(n+1)\left[6n+6+2n^2+n\right]\\ & = & \frac{1}{6}(n+1)\left[2n^2+7n+6\right]\\ & = & \frac{1}{6}(n+1)(n+2)(2n+3). \end{eqnarray} This looks familiar; what is $P(n+1)$? Well, let's substitute $n+1$ in place of $n$ in our statement of $P(n)$. So we're aiming for $$\sum_{i=1}^{n+1} i^2 = \frac{1}{6}(n+1)(n+2)(2(n+1)+1)=\frac{1}{6}(n+1)(n+2)(2n+3),$$ which is exactly what we've just got! (Working out what you're aiming for can often give you an idea of how to manipulate the algebra.) So we've shown that if $P(n)$ is true, then $P(n+1)$ is true. Since we also know that $P(1)$ is true, we know that $P(2)$ is true, so $P(3)$ is true, so $P(4)$ is true, so $\ldots$ In other words, we've shown that $P(n)$ is true for all $n\geq 1$, by mathematical induction.
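As a quick sanity check (not a proof! — only the induction above proves the statement for every $n$), the formula can be verified numerically for small values:

```python
def sum_of_squares(n):
    """Direct sum 1^2 + 2^2 + ... + n^2."""
    return sum(i * i for i in range(1, n + 1))

def closed_form(n):
    """The formula n(n+1)(2n+1)/6 proved above by induction."""
    return n * (n + 1) * (2 * n + 1) // 6

# Check the two agree for the first hundred values of n.
for n in range(1, 100):
    assert sum_of_squares(n) == closed_form(n)
print("formula verified for n = 1..99")
```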
Warning: When encountering induction for the first time, people usually remember to show "if $P(n)$ is true then $P(n+1)$ is true'', because that's what induction is really about, but a common mistake is to forget to show that $P(1)$ is true. Proof by induction is a two-stage process, even if one stage is usually very easy. The dominoes won't fall over unless you knock over the first one!
Don't forget that your first domino doesn't have to be $P(1)$. It could be $P(2)$, or $P(19)$, or $P(1000000)$. For example, we can use induction to show $3^n> n^3$ for $n\geq 4$ (see the exercises below).
One last thing: induction is only a method of proof. For example, if you're trying to sum a list of numbers and have a guess for the answer, then you may be able to use induction to prove it. But you can't use induction to find the answer in the first place. Also, there are often other methods of proof: I've given some examples below of things that you might like to try to prove by induction, but several of them can be proved at least as easily by other methods (indeed, you've probably seen some proved by other methods).
Don't forget that if you get stuck with these, you can ask for help on Ask NRICH. You can also consult the Hints if you need to.
1. Prove by induction that $$\sum_{i=1}^n i^3 = \frac{1}{4}n^2(n+1)^2\textrm{ for }n\geq 1.$$
2. Use induction to show that $$\sum_{i=1}^n r^{i-1}=\frac{r^n - 1}{r-1}\textrm{ for }n\geq 1\textrm{ and }r\neq 1.$$
3. Use induction to show that $3^n> n^3$ for $n\geq 4$. (Note that you have to start at $n=4$ as the result isn't true for $n=3$!)
4. Use induction to show that $$\left(\begin{array}{c}n+1\\r\end{array}\right) = \left(\begin{array}{c}n\\r\end{array}\right) + \left(\begin{array}{c}n\\r-1\end{array}\right)\textrm{ for }n\geq 1$$ (where $\left(\begin{array}{c}n\\r\end{array}\right)$ is the binomial coefficient). (You will need to use induction on $n$ and keep $r$ fixed). Remark: This is rather fiddly; starting with the right-hand side will make things easier. This example is an excellent lesson in why you should not always use induction: thinking about counting things will help you prove the result much more easily. Can you see how?
5. Use induction to show that $4^n+6n$ is divisible by 6 for $n\geq 1$. [No this is not a misprint. What part of the argument works? What part fails?]
6. Use induction to show that $4^n + 6n -1$ is divisible by 9 for $n\geq 1$.
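A small numerical experiment (an addition, not part of the original exercises) hints at what the bracketed warning in exercise 5 means: for $4^n+6n$ the inductive step goes through, but no base case ever holds, whereas exercise 6's statement checks out at every tested value:

```python
for n in range(1, 50):
    # Exercise 5: 4^n + 6n is never divisible by 6 -- it is always 4 mod 6.
    assert (4**n + 6*n) % 6 == 4
    # Exercise 6: 4^n + 6n - 1 really is divisible by 9 for every tested n.
    assert (4**n + 6*n - 1) % 9 == 0
print("checked n = 1..49")
```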
Vicky finished a degree in Maths at Cambridge last summer and is now doing a fourth year course studying Combinatorics, Number Theory and Algebra, still at Trinity College, Cambridge.
http://mathoverflow.net/questions/92075/when-cayley-graphs-of-the-symmetric-group-wrt-generating-sets-of-transpositions-a
## When Cayley graphs of the symmetric group wrt generating sets of transpositions are isomorphic?
Dear All,
I thought the following question might be well-known, but couldn't find anywhere, so decided to ask here:
Let $A$ and $B$ be two generating sets for $S_n$, consisting of transpositions.
Question: When the Cayley graphs of $S_n$ with respect to $A$ and $B$ are isomorphic?
Well, if $\Gamma(S_n,A)$ is isomorphic to $\Gamma(S_n,B)$, then of course $|A|=|B|$. Is it true that the answer to the question is "whenever $A$ and $B$ are conjugate"?
-
Isomorphic just as abstract graphs, or as graphs colored by the names of the generators? In particular, can I test whether two transpositions $s$ and $t$ commute by following edges labeled $s$, $t$, $s$, $t$ from some starting vertex? – David Speyer Mar 24 2012 at 19:37
## 1 Answer
I claim the answer to your question is yes. This is my first time posting on mathoverflow. I hope my latex goes ok.
Given $\Gamma(S_n,A)$, build an auxiliary graph $X(\Gamma(S_n,A))$ with vertex set $\{1,\ldots,n\}$, in which two vertices are adjacent if the transposition interchanging them is in $A$. Build a second auxiliary graph $Y$ with vertex set the elements of $A$, with an edge between two of them if they commute. Note that $Y$ is the complement of the line graph of $X$.
Let $\Gamma_1=\Gamma(S_n,A)$ and let $\Gamma_2=\Gamma(S_n,B)$.
We have to show that if $\Gamma_1$ and $\Gamma_2$ are isomorphic, then so are $X(\Gamma_1)$ and $X(\Gamma_2)$. Since $X(\Gamma_1)$ and $X(\Gamma_2)$ are connected, they are isomorphic if and only if $Y(\Gamma_1)$ and $Y(\Gamma_2)$ are (assuming they have at least 4 vertices, see http://en.wikipedia.org/wiki/Line_graph#Characterization_and_recognition). It thus suffices to show that if $\Gamma_1$ and $\Gamma_2$ are isomorphic, then so are $Y(\Gamma_1)$ and $Y(\Gamma_2)$.
I will do this by showing that, given $\Gamma_1$ without labels, I can recover $Y(\Gamma_1)$ uniquely up to conjugacy in $S_n$.
The crucial observation is that in $\Gamma(S_n,A)$, an element at distance 2 from the identity is either a 3-cycle or a product of two disjoint transpositions. If it is a product of two disjoint transpositions, then there will be exactly two paths of length 2 joining it with the identity (in other words, it will be contained in a unique $4$-cycle with the identity). If it is a 3-cycle, there will be exactly either one or three paths of length 2 joining it to the identity.
First, label one vertex of $\Gamma_1$ "1" (think of it as the identity). Now, label the neighbours of 1 with $x_1,\ldots,x_k$ (where $k$ is the valency of $\Gamma_1$). We think of these as being undetermined transpositions. By the argument above, $x_i$ and $x_j$ commute if and only if they are contained in a unique $4$-cycle with the identity. We can now construct $Y(\Gamma_1)$, in a unique way up to conjugacy in $S_n$.
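The crucial observation can be checked computationally on a small example. Here is a sketch (not from the original answer) that counts length-2 paths from the identity in $\Gamma(S_4,A)$ for the "path" generating set $\{(0\,1),(1\,2),(2\,3)\}$, with permutations as 0-based tuples:

```python
def compose(p, q):
    """Permutation product p∘q on 0-based tuples: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

def transposition(n, i, j):
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

n = 4
# Adjacent generators share a point (their product is a 3-cycle);
# (0 1) and (2 3) are disjoint and therefore commute.
A = [transposition(n, 0, 1), transposition(n, 1, 2), transposition(n, 2, 3)]

results = {}
for s in A:
    for t in A:
        if s == t:
            continue
        g = compose(s, t)  # an element at distance 2 from the identity
        # number of length-2 paths from the identity to g in the Cayley graph
        paths = sum(1 for u in A for v in A if compose(u, v) == g)
        results[(s, t)] = (compose(s, t) == compose(t, s), paths)

for (s, t), (commute, paths) in results.items():
    print(s, t, "commute" if commute else "no", "paths:", paths)
```

In this run the disjoint pair gives exactly 2 paths (a unique 4-cycle with the identity), while each non-commuting pair gives an odd number of paths, exactly as the observation predicts.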
-
Nice proof! It might be clearer to write $X$ and $Y$ as functions of $A$, since they can't be obtained directly from $\Gamma$ without its labels. – Brendan McKay Mar 25 2012 at 4:57
Let me also point out that we don't need the "assuming they have at least 4 vertices" qualifier -- for $X_1$ and $X_2$ to be nonisomorphic, while having isomorphic line graphs, would require one to have 3 vertices and the other to have 4; but this is impossible as by construction they both have $n$ vertices. – Harry Altman Mar 25 2012 at 9:07
That's a very beautiful proof! I tried also to do something via the graph $X$, but I didn't know about the trick with $Y$. Thanks a lot! – Victor Mar 26 2012 at 7:38
http://mathhelpforum.com/calculus/91429-vector-field.html
1. ## Vector Field
Is there a vector field $G$ in $\mathbb{R}^3$ such that
$$\operatorname{curl} G = xy^2\,\mathbf{i}+yz^2\,\mathbf{j}+zx^2\,\mathbf{k}\,?$$
2. The divergence of the curl of any vector field is 0. So find div(curl G) and see if it's zero or not. Do you know how to find the divergence of a vector field?
3. Originally Posted by Random Variable
The divergence of the curl of any vector field is 0. So find div(curl G) and see if it's zero or not. Do you know how to find the divergence of a vector field?
Yes, thank you. No such vector field exists for this curl.
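For concreteness, the divergence test suggested above can be carried out with sympy (a sketch, not from the original thread). Since $\operatorname{div}(\operatorname{curl} G) = x^2+y^2+z^2$ is not identically zero, no such $G$ can exist:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([x*y**2, y*z**2, z*x**2])  # the proposed curl of G

# div F = dF1/dx + dF2/dy + dF3/dz; must be 0 if F is a curl
div_F = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)
print(div_F)  # x**2 + y**2 + z**2 -- not identically zero
```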
http://www.physicsforums.com/showthread.php?t=581852
Age of a vacuum energy dominated universe
The other day, I was calculating the age of a universe dominated by vacuum energy and it turned out to be infinity. What does the age of the universe being infinite mean? One explanation I thought of is that maybe this implies that such a universe has no beginning. Is that a proper explanation?
No, it simply means the universe will never experience a big crunch, i.e. recollapse. Simply put, the scale factor never returns to zero. It is of course possible to have a universe which starts with a=0, but then persists indefinitely (as is the case with our own).
Hmm, isn't that more like the fate of the universe? What I was trying to calculate was the present age of the universe in standard Friedmann cosmology for a flat universe (sorry, I didn't mention that before), as a function of the observed CMB redshift and the Hubble constant. But what I got was that for a vacuum dominated universe, the age would turn out to be infinity (irrespective of the value of redshift and H), and the only meaningful explanation I could think of was a universe with no beginning.
Quote by sri sharan Hmm, isn't that more like the fate of the universe. What I was trying to calculate was what would be the present age of the universe in standard Friedman cosmology for a flat universe(sorry i didn't mention that before), as a function of the observed CMB redshift and Hubble. But what I got was that for vacuum dominated universe, the age would turn out to be infinity (irrespective of value of redshift and H), and the only meaningful explanation I could think of was universe with no beginning
Perhaps you were doing your integrals wrong? What values did you use for $\Omega_m$ and $\Omega_\Lambda$? Even if I plug in 0 for the former and 1 for the latter in here...
http://www.astro.ucla.edu/~wright/CosmoCalc.html
...I get about 37 Gyr, not ∞.
Also, what do you mean by, "as a function of the observed CMB redshift?" What does that have to do with anything? Isn't the only relevant value of z the value at which you want to compute the age of the universe (which would be z = 0 for the age at the present time)?
Quote by sri sharan But what I got was that for vacuum dominated universe, the age would turn out to be infinity (irrespective of value of redshift and H), and the only meaningful explanation I could think of was universe with no beginning
Yes.
Quote by George Jones Yes.
Yeah, my bad. When I responded to the OP, I hadn't actually written out the equations (EDIT: and I'm assuming that this is a case for which the numerical calculator that I linked to simply breaks down). So tell me if I'm doing this right. With only dark energy (assuming it's in the form of a cosmological constant) the Friedmann equation is$$\left(\frac{\dot{a}}{a}\right)^2 = \frac{\Lambda}{3}$$This assumes the universe is spatially flat. This becomes$$\frac{1}{a}\frac{da}{dt} = \left(\frac{\Lambda}{3}\right)^{1/2}$$which you can solve analytically to get $$a(t) = \exp\left[\left(\frac{\Lambda}{3}\right)^{1/2}(t-t_0)\right]$$where I arbitrarily chose $t_0$ to be the time value when the scale factor is unity. The thing is, as you go back in time, for $t < t_0$, the scale factor asymptotically approaches 0, but never actually reaches it. So it would seem that indeed this type of cosmological model does not have a beginning.
I'm guessing that the OP tried to invert the differential equation and then integrate to solve for t(a), but obtained something proportional to $\int_0^1 \frac{1}{a}\,da$ which does not converge -- which is another way of showing the same result.
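The divergence of that integral can be confirmed symbolically (an illustrative sketch, not part of the original thread):

```python
import sympy as sp

a = sp.symbols('a', positive=True)
# For a flat, Lambda-only universe, H = (da/dt)/a is constant, so the age is
# proportional to the lookback integral of 1/a down to a = 0:
age = sp.integrate(1 / a, (a, 0, 1))
print(age)  # oo -- the scale factor takes infinitely long to reach zero
```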
So I read that this is the de Sitter universe, and that it is also used as an approximation to inflationary models whose dynamics are similar. Is this idea of "no beginning" sort of the basis for "eternal inflation?"
Quote by cepheid So I read that this is the de Sitter universe, and that it is also used as an approximation to inflationary models whose dynamics are similar. Is this idea of "no beginning" sort of the basis for "eternal inflation?"
The only issue here is that any amount of matter or radiation causes the universe to have a finite age. So it is not considered feasible for inflation to be past-eternal, because there will always be some matter or radiation, no matter how diffuse.
Quote by cepheid I'm guessing that the OP tried to invert the differential equation and then integrate to solve for t(a), but obtained something proportional to $\int_0^1 \frac{1}{a}\,da$ which does not converge -- which is another way of showing the same result. So I read that this is the de Sitter universe, and that it is also used as an approximation to inflationary models whose dynamics are similar. Is this idea of "no beginning" sort of the basis for "eternal inflation?"
Yeah, that's what I did. And thanks for the de Sitter info; I didn't know about that before.
http://math.stackexchange.com/questions/16473/show-the-matrix-product-u-itisi-t-is-1-is-unitary
# Show the matrix product $U = (I+T+iS)(I-T-iS)^{-1}$ is unitary
$S$ is real symmetric. $T$ is real skew-symmetric. I have shown that $T\pm iS$ is skew-Hermitian. I am further asked to show that $U = (I+T+iS)(I-T-iS)^{-1}$ is unitary.
Denoting by $^\dagger$ the conjugate transpose, I have that
$U^\dagger = [(I+T+iS)(I-T-iS)^{-1}]^\dagger$
$= ((I-T-iS)^\dagger)^{-1}(I+T+iS)^\dagger$
$= (I+T+iS)^{-1}(I-T-iS)$
But then the products $UU^\dagger$ and $U^\dagger U$ have the factors in the wrong order to cancel out to make identity. I must be missing something obvious here; I'd appreciate a steer!
-
For any $A$, $(I+A)$ commutes with $(I-A)$, and hence with $(I-A)^{-1}$ if it exists. – Chris Eagle Jan 5 '11 at 19:25
Thanks Chris, that makes it easy. I don't think I've seen that simple (but useful-looking) result until now! I have proved it using index notation; is there a nicer way? – user5426 Jan 5 '11 at 19:50
If $A$ commutes with $B$, then $AB=BA$. If further $B$ is invertible, then $B^{-1}(AB)B^{-1}=B^{-1}(BA)B^{-1}$, and hence $B^{-1}A=AB^{-1}$. – Chris Eagle Jan 5 '11 at 19:58
Sorted, thanks very much indeed. – user5426 Jan 6 '11 at 1:09
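A quick numerical check of the result (not part of the original thread): build a random real symmetric $S$ and real skew-symmetric $T$, form $U$, and test unitarity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
S = (M + M.T) / 2                    # real symmetric
N = rng.standard_normal((n, n))
T = (N - N.T) / 2                    # real skew-symmetric

A = T + 1j * S                       # skew-Hermitian: A^dagger = -A
I = np.eye(n)
# I - A is invertible since the eigenvalues of A are purely imaginary.
U = (I + A) @ np.linalg.inv(I - A)

print(np.allclose(U @ U.conj().T, I))   # True
print(np.allclose(U.conj().T @ U, I))   # True
```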
http://physics.stackexchange.com/questions/40841/is-quasiclassicality-in-consistent-histories-the-preferred-basis-problem-in-disg/40844
# Is quasiclassicality in consistent histories the preferred basis problem in disguise?
Is quasiclassicality in consistent histories the preferred basis problem in disguise? Out of the numerous possible consistent realms in consistent histories — with no canonical choice — we're urged to choose quasiclassical realms. What exactly quasiclassical means though, isn't too clear. In fact, it's starting to seem that if you try to probe too closely what is meant by quasiclassicality, it turns out to be the same thing as the preferred basis problem in other interpretations. Quasiclassical then appears to be a codeword to hide the preferred basis problem under, hoping that no one else will notice this sleight of hand. If quasiclassicality isn't well-defined, then as Kent and Dowker pointed out long ago, a realm which is "quasiclassical" now, whatever that means, can be consistently extended into consistent realms which aren't quasiclassical either in the past or the future, and this is problematic as long as there is no hard criteria to pick out what is quasiclassical.
Consider this example: We have a quantum computer, and we start off with some initial quantum state at time $t_0$. Then, we run a simulation performing a unitary transformation U on this state, ending at time $t_1$. Suppose the quasiclassical projectors at $t_1$ are incompatible with those at $t_0$, i.e. they are not mutually consistent. Consistent histories tells us we can choose a quasiclassical realm at $t_0$ or at $t_1$, but not both simultaneously. Now, consider this scenario: We compute U, then without measuring or disturbing the computer states in any way, we fully uncompute $U^{-1}$, leaving us back with the original state at time $t_2$. Then, once again, without disturbing or observing the internal states in any way, we compute $U$ again, then $U^{-1}$, etc. continuing this sequence as long as we wish. We can now have two mutually incompatible "quasiclassical" realms: one consisting of quasiclassical projectors at even times $t_{2i}$, and the other of quasiclassical projectors at odd times $t_{2i+1}$. According to consistent histories, we always get the same outcomes for projectors at times differing by an even number of "timesteps". In other words, the probability for chains where the projectors differ after an even number of timesteps is zero. So, consistent histories says, in the even realm, the "collapsed" outcome after each even number of timesteps has to repeat itself by being the same. In the odd realm, the same thing can be said about outcomes after an odd number of timesteps. However, both realms can't be combined. Here, we have the case of two mutually incompatible quasiclassical realms. Of course, it might be argued that the internal states of a quantum computer shouldn't be considered quasiclassical, but in that case, what do you mean by quasiclassical?
What if we're currently in a quantum simulation which is programmed to fully uncompute in the future? Is there a quasiclassical realm containing coarse grained descriptions of us which would roughly match what we consider our quasiclassical experiences?
What do the other interpretations say in such a scenario? Copenhagen leaves no room for uncollapses. So, the fact that we can keep uncomputing means no collapse ever takes place, at least not until the very end of the sequence. No collapse means the internal states are never real, not until the very end, at any rate. MWI suggests we keep branching each odd timestep, and then the branches remerge coherently each even timestep, and this process occurs again and again. However, it's not clear why in the corresponding consistent histories interpretation, we ought to end up in the same branch after each odd number of timesteps. In modal interpretations in fact, we end up in a different branch after each odd number of timesteps.
-
## 1 Answer
The assertions are absolutely untrue and illogical. The Consistent Histories interpretation is, as the name indicates, based on... consistent histories, not "quasi-classical variables". Consistent histories are described by a very particular "orthogonality" condition for the products of projection operator, products that define alternative histories. Nothing else is needed.
Gell-Mann and Hartle only used the term "quasi-classical variables" informally for some variables for which we have known that classical physics applied rather well – and they talked about them as litmus tests or examples of their interpretation. They're not needed in any way to define their rules of the game. The allowed "preferred bases" are derived from the consistency condition for the histories, not from some informal terms that the interpretation doesn't depend upon.
However, it is true that in general, there may be many mutually incompatible ways to construct a set of consistent histories. There is nothing logically inconsistent about this fact. The Consistent Histories interpretation is really a refinement of the Copenhagen interpretation, it defines what logical systems to make statements about the world are internally consistent and how the probabilities of legitimate propositions are computed from the dynamical quantum mechanical framework, and no other "interpretation" may even sensibly approach the questions you have asked.
The qubits describing a functional quantum computer's immediate state in the middle of a calculation are obviously not quasiclassical variables. They're as non-classical as you can get; indeed, that's the whole point of quantum computing.
-
http://mathoverflow.net/questions/46970/proofs-of-the-uncountability-of-the-reals/47021

## Proofs of the uncountability of the reals.
Recently, I learnt in my analysis class the proof of the uncountability of the reals via the Nested Interval Theorem. At first, I was excited to see a variant proof (as it did not use the diagonal argument explicitly). However, as time passed, I began to see that the proof was just the old one veiled under new terminology. So, by now I believe that any proof of the uncountability of the reals must use Cantor's diagonal argument.
Is my belief justified?
Thank you.
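For reference, the nested-interval construction mentioned above can be run mechanically: given any finite list of reals (standing in for an initial segment of an enumeration), it returns a point of $[0,1]$ distinct from every listed real. This is only an illustrative sketch using exact rational endpoints, not part of the original argument:

```python
from fractions import Fraction

def avoid(points):
    """Nested-interval construction on [0, 1]: at stage n, replace the
    current closed interval by one of its outer thirds chosen to miss
    points[n].  The final interval misses every listed point, so any
    point of it (here the midpoint) differs from all of them."""
    a, b = Fraction(0), Fraction(1)
    for x in points:
        t1, t2 = a + (b - a) / 3, a + 2 * (b - a) / 3
        if a <= x <= t1:      # x lies in the left third: keep the right third
            a, b = t2, b
        else:                 # otherwise the left third already misses x
            a, b = a, t1
    return (a + b) / 2

pts = [Fraction(k, 7) for k in range(8)]   # a finite "enumeration"
z = avoid(pts)
assert all(z != x for x in pts)
```

Applying this to longer and longer initial segments of a fixed enumeration pins down, in the limit, a real missed by the whole enumeration; that is exactly the nested-interval form of the diagonal argument.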
-
It's not too hard to see that the reals have the same cardinality as the power set of the naturals. So we are reduced to showing that a set cannot have the same cardinality as its power set. This is shown using the same argument as the Russell Paradox (i.e., assume a bijection $\phi \colon \mathcal{P}(X) \to X$ exists, and take the set $T$ of all $x \in X$ such that $x \not\in \phi^{-1}(x)$. Then ask whether $\phi(T) \in T$.) I don't think this is the same as the diagonal argument, although I can imagine that someone sufficiently determined might be able to argue otherwise. – Charles Staats Nov 22 2010 at 17:39
Why the votes to close? I think that this is an interesting question. For what it's worth I cast a vote to keep open which should be taken into account by the next person wishing to vote to close. If you wish to do so, then please let's take this to meta, where I have started this thread: meta.mathoverflow.net/discussion/789/… – José Figueroa-O'Farrill Nov 22 2010 at 17:46
@Francesco: no, the uncountability isn’t a corollary of the countability of the algebraics! Cantor’s original uncountability argument is what the OP refers to as the Nested Interval Theorem. The corollary Cantor then draws from this, together with countability of the algebraics, is the existence of transcendentals (lots of ’em) within any interval. – Peter LeFanu Lumsdaine Nov 22 2010 at 17:59
The nested interval method and the diagonal method are fundamentally the same method, as is the Russell paradox method. These are all the diagonal method. – Joel David Hamkins Nov 22 2010 at 18:06
I also cast a vote against closing. The question is: "Is there a different proof of this theorem?" which, to me, sounds very interesting and a natural question that a mathematically mature but non-expert-in-set-theory person might ask. I've asked several questions that have exposed my lamentable ignorance of the subtleties of mathematical foundations and, so far, all have received very interesting and informative answers. This one feels as though it is in the same vein as those. – Andrew Stacey Nov 22 2010 at 18:20
## 13 Answers
Mathematics isn't yet ready to prove results of the form, "Every proof of Theorem T must use Argument A." Think closely about how you might try to prove something like that. You would need to set up some plausible system for mathematics in which Cantor's diagonal argument is blocked and the reals are countable. Nobody has any idea how to do that.
The best you can hope for is to look at each proof on a case-by-case basis and decide, subjectively, whether it is "essentially the diagonal argument in disguise." If you're lucky, you'll run into one that your intuition tells you is a fundamentally different proof, and that will settle the question to your satisfaction. But if that doesn't happen, then the most you'll be able to say is that every known proof seems to you to be the same. As explained above, you won't be able to conclude definitively that every possible argument must use diagonalization.
-
Oh...Thank you. I did not think the question would go this far. So, the question is rather like the situation in cosmology: the observable universe is finite, but nobody yet knows whether the whole universe is infinite, let alone whether it is bounded. – To be cont'd Nov 22 2010 at 18:49
Well, there is Quine's New Foundations in set theory, in which the diagonal argument is blocked from disproving the existence of a set of all sets, because of the inability to express the predicate $x \not \in x$. But I gather that NF does not block the diagonal argument from demonstrating the uncountability of the reals, so this isn't quite an answer to the problem at hand... – Terry Tao Nov 22 2010 at 23:28
Thank you. Now, I know that it is possible to block diagonalization in some setting. – To be cont'd Nov 23 2010 at 11:55
Reverse mathematics (en.wikipedia.org/wiki/Reverse_mathematics) is almost exactly about studying which axioms and arguments are necessary for certain theorems. If you want to know whether axiom X is necessary for a theorem Y, you can try to see if there's a model of Y in which X doesn't hold. It's not as easy to see whether a certain argument is necessary, but often you can axiomatize what it means to be able to do a certain argument, e.g. there are systems which capture what it means to be able to use a compactness argument, or induction, or transfinite recursion, etc. – Amit Kumar Gupta Nov 23 2010 at 18:27
@Amit: Yes, I'm familiar with reverse mathematics. But let me repeat what I said above: Think closely! How would you axiomatize what it means to be able to diagonalize? What candidate do you have in mind for a model in which the reals are countable? I stand by what I said; nobody has a clue. – Timothy Chow Nov 24 2010 at 2:28
Mathematical logicians often joke that the diagonal method is the only proof method that we have in logic. This method is the principal idea behind a huge number of fundamental results, among them:
• The uncountability of the reals.
• More generally, the fact that the power set $P(X)$ of a set $X$ is strictly larger in cardinality than $X$.
• The Russell paradox.
• The undecidability of the halting problem.
• The Recursion theorem.
• More generally, huge parts of computability theory are based on diagonalization, such as uses of the priority method.
• The fixed-point lemma and its use in proving the Incompleteness theorem.
• The strictness of the arithmetic hierarchy, the projective hierarchy, etc.
• Etc. etc. etc.
-
Thank you. All this evidence is not a joke for me. By now I believe the founding of the diagonal argument is tantamount to the founding of the group concept. – To be cont'd Nov 22 2010 at 18:53
A lot of these can be captured by Lawvere's formalisation of the diagonal argument as a fixed-point theorem: tac.mta.ca/tac/reprints/articles/15/tr15abs.html – David Roberts Nov 22 2010 at 22:47
David, I agree, but another perspective is simply that the fixed-point theorem is another instance of diagonalization. That is, these arguments are already unified as diagonalizations. – Joel David Hamkins Nov 22 2010 at 23:30
Joel - I agree that calling them diagonalisation arguments or fixed point theorems is just a point of linguistics (actually the diagonal argument is the contrapositive of the fixed point version), it's just that Lawvere's version, to me at least, looks more like a single theorem than a collection of results that rely on a particular line of reasoning. This, I hope, helps the OP or those answering the question in isolating what a diagonal argument "is", and avoid it if possible. – David Roberts Nov 23 2010 at 4:54
Alternatively,
Prove that the reals are connected.
Prove that every countable dense subset $X$ of the reals must be order isomorphic to the rationals.
Prove that the rationals are not connected.
-
@Bill : Doesn't the non-connectedness of ${\mathbb Q}$ rely on the uncountability of ${\mathbb R}$? How do you argue about it directly? – Andres Caicedo Nov 23 2010 at 0:03
The rationals are clearly not connected. Partition then into the open sets of rationals less than $\sqrt{2}$ and the rationals greater than $\sqrt{2}$. – George Lowther Nov 23 2010 at 0:38
All you need to do is prove that between two rationals is an irrational. A variant of the well known proof that sqrt(2) is irrational should do the trick here. Just exploit the sparsity of squares among "large" integers. – Michael Renardy Nov 23 2010 at 0:40
+1. I voted this up, but it doesn't satisfy Gowers criterion (1), since the proof of Cantor's theorem that every countable dense subset of $\mathbb{R}$ is order isomorphic to $\mathbb{Q}$ involves enumerating $\mathbb{Q}$. – Joel David Hamkins Nov 23 2010 at 0:52
Bill, the point is that to get the final contradiction, if $f:\mathbb{Q}\to\mathbb{R}$ is the order isomorphism, then the resulting disconnection of $\mathbb{R}$ is the cut determined by $f[\{q\mid q\lt\sqrt{2}\}]$, say, which has least upper bound $z$, but no real works, since the $n$-th real in the enumeration was placed into the range of $f$ at stage $n$. So this is diagonalization. – Joel David Hamkins Nov 23 2010 at 10:43
I thought about this question a while ago, while teaching a topics course. Since one can easily check that $${}|{\mathbb R}|=|{\mathcal P}({\mathbb N})|$$ by a direct construction that does not involve diagonalization, the question can be restated as:
Is there a proof of Cantor's theorem that ${}|X|<|{\mathcal P}(X)|$ that is not a diagonal argument?
I suspect the following works. Even if it doesn't, I believe there may be some interest in this presentation (Please let me know if you spot diagonalization somewhere).
A remark of François Dorais helped me (re)locate the argument in print. It is presented in A. Kanamori-D. Pincus. "Does GCH imply AC locally?", in Paul Erdős and his mathematics, II (Budapest, 1999), 413-426, Bolyai Soc. Math. Stud., 11, János Bolyai Math. Soc., Budapest, 2002. I believe it actually dates back to Zermelo's 1904 well-ordering paper. (I now think I learned the argument from Kanamori-Pincus, since I certainly used the paper in the topics course.)
a. There is obviously an injection $g:X\to{\mathcal P}(X)$. It is enough to show there is no surjection. Suppose there is, and call it $f$. Then $f^{-1}:{\mathcal P}^2(X)\to{\mathcal P}(X)$ is 1-1.
(If $h:A\to B$, $h^{-1}:{\mathcal P}(B)\to{\mathcal P}(A)$ is the map that to $C\subseteq B$ assigns $\{a\in A\mid h(a)\in C\}$. Since $f$ is surjective, we have that $f^{-1}$ is injective.)
(Of course, we could simply use an injection $g:{\mathcal P}(X)\to X$ and invoke Schröder-Bernstein at this point, but this route seems shorter.)
b. There is no injection $F:{\mathcal P}(Y)\to Y$ for any set $Y$. The reason is that for any $F$ we can (definably from $F$) produce a pair $(A,B)$ with $A\ne B$ and $F(A)=F(B)$. In effect, Zermelo proved that:
For any $F:{\mathcal P}(Y)\to Y$ there is a unique well-ordering $(W, \lt)$ with $W\subseteq Y$ such that:
1. $\forall x\in W\ (F(\{y \in W \mid y \lt x\}) = x)$, and
2. $F (W )\in W$.
We can then take $A=W$ and $B=\{y\in W\mid y\lt F(W)\}$.
c. Zermelo's theorem can be proved as follows: Simply notice that $W=\{a_\alpha\mid \alpha\lt \beta\}$ where $$a_\alpha= F(\{a_\gamma\mid \gamma\lt \alpha\})$$ and $\beta$ is largest so that this sequence is injective.
That $\beta$ exists is a consequence of Hartogs's theorem that for any set $A$ there is a least ordinal $\alpha$ that does not inject into $A$.
Uniqueness of $W$ is shown by considering the first place where two potential candidates for $(W, \lt)$ disagree.
d. Hartogs theorem is proved by noticing that if $\alpha$ is an ordinal and injects into $A$, then there is a subset $B$ of $A$ and a binary relation $R$ on $B$ such that $(B,R)$ is order isomorphic to $\alpha$. From this one easily sees that the collection of $\alpha$s that inject into $A$ forms a set, that is in fact an ordinal $\beta$. Then $\beta$ is least that does not inject into $A$.
Let me close with a remark, and a question: The proof above is formalizable in ZF, without choice. In fact, Zermelo's theorem is provable without using replacement, although the argument I sketched uses it.
The question is mentioned in Kanamori-Pincus: We showed that if $F:{\mathcal P}(Y)\to Y$ then $F$ is not injective by exhibiting a pair $(A,B)$ with $F(A)=F(B)$. If instead of Zermelo's argument we had used at this point the construction from the diagonal argument, we would have taken $$A=\{y\in Y\mid \exists Z(y=F(Z)\notin Z)\},$$ and checked that there must be a $B\ne A$ with $F(A)=F(B)$.
Can we define such $B$ from $F$?
-
Although I think this is very interesting, I still wonder why we bother trying to avoid the diagonal argument by using tons of more advanced arguments, which perhaps in the end, when we enfold them into elementary arguments, use some sort of diagonal argument. – Martin Brandenburg Nov 23 2010 at 3:12
Hehe. As I mentioned, I found this argument while teaching a topics course; meaning: I was lecturing on ideas related to the arguments above, and while preparing notes for the class, it came to me that one would get a diagonalization-free proof of Cantor's theorem by following the indicated path; I looked in the literature, and couldn't find evidence of this being known. I wasn't explicitly looking for it at any point. Anyway, there is technical interest in the matter, precisely because the argument is so ubiquitous, as Joel's answer indicates. – Andres Caicedo Nov 23 2010 at 3:46
Now that I see it, I think I had seen this proof in a talk by Aki Kanamori. If I remember correctly, Aki attributed the proof to Tarski. Since this is from a long time ago, my memory may be off... – François G. Dorais♦ Nov 26 2010 at 15:25
@François : Oh, I should email Aki and ask him, then. Thanks! – Andres Caicedo Nov 26 2010 at 16:22
@Martin: I think rather than "coming to me", I read the argument in a paper by Kanamori-Pincus. I added a reference. – Andres Caicedo Nov 26 2010 at 17:37
Although I very much take Timothy Chow's point, and don't have a way of constructing anything like a model where Cantor's diagonal argument is blocked (I'm not sure what the diagonal argument is in the abstract, given that there are variants), some sickness in me makes me want to try to answer the question anyway. One small thought that occurs to me is that all proofs depend (or can be very easily transformed so that they depend) on the following ingredients: a bijection between the countable set and the natural numbers, the use of the ordering on the natural numbers to order the countable set, the construction of a sequence that lives in a sequence of nested intervals that avoid the points of the countable set, one at a time.
Here are some questions that are more specific than the one in the OP. They are off the top of my head and therefore not guaranteed to be sensible.
1. Suppose we tried artificially to block the use of the ordering. It might seem impossible, since the definition of countability is that there is a bijection to the natural numbers, but we could, for instance, try proving the result for sets that are in bijection with the rationals and insist that at no point does the proof define an enumeration of that set.
2. Or we could start with the stronger hypothesis that X is a set of reals that is order-isomorphic to the rationals. Is it possible to prove that this set does not contain all reals without at the same time proving that it is countable?
I don't know how relevant this is, but I'd also like to mention a fascinating fact that I heard from Harvey Friedman recently that feels as though it's in the same ball park. He told me that there exists a Borel function f defined on sequences of reals such that for every sequence S the value f(S) is not a term of S. That's easy to prove from the diagonal argument. On the other hand, there is no Borel function f defined on countable subsets of reals such that f(X) is not an element of X for any countable set X. (I think I remember that that's what he said, but I'm not certain that the result wasn't stronger.) Equivalently, you can't find an f that works for sequences and is also invariant under permutations of the terms in the sequence. This gives us a sort of hint that some kind of enumeration is essential to the proof, but I don't see how to make that hint into a precise thought.
-
I'm obliged for your thoughts. – To be cont'd Nov 23 2010 at 11:21
That is a really great point about the importance of having an order for countability. I can't say that I see why for every Borel function $f$ defined on countable subsets of reals there is a countable set of reals $X$ such that $f(X) \in X$ (if that is a fair restatement) but I don't see any reason to doubt that. – Aaron Meyerowitz Nov 24 2010 at 2:03
In case somebody missed it, Aaron posted the above as a question: mathoverflow.net/questions/47185/… – Andres Caicedo Nov 28 2010 at 23:21
What about using Lebesgue outer measure? The interval $[0,1]$ has Lebesgue outer measure 1, while countable sets have Lebesgue outer measure $0$.
For the purposes of the proof, I define the Lebesgue outer measure $\mathcal{L}(E)$ of a set $E\subset\mathbb{R}$ as the infimum of the sums $\sum_i (b_i-a_i)$, where $E\subset \bigcup_i (a_i,b_i)$ (e.g. the infimum is over all countable coverings by open intervals).
It is a direct consequence of the definition that any countable set has Lebesgue outer measure 0. This can be even proved in the spirit of Gowers' first suggestion: suppose that $f:\mathbb{Q}\cap (0,1)\to A$ is a bijection. Then, given $\varepsilon>0$, the family $$\{ ( f(p/q)-\varepsilon/q^3, f(p/q)+\varepsilon/q^3): p/q\in (0,1), \text{g.c.d.}(p,q)=1\}$$ is a cover of $A$ by intervals, such that the sum of the lengths is $O(\varepsilon)$.
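As a sanity check on this cover, one can sum the interval lengths $2\varepsilon/q^3$ over reduced fractions directly: since there are fewer than $q$ numerators for each $q$, the partial sums stay bounded by $2\varepsilon\,\zeta(2)$ no matter how many fractions are included. A quick computation (an illustrative sketch, not from the answer):

```python
from fractions import Fraction
from math import gcd

def cover_length(eps, qmax):
    """Total length of the intervals (f(p/q) - eps/q^3, f(p/q) + eps/q^3)
    over reduced fractions p/q in (0, 1) with q <= qmax.  Each interval
    has length 2*eps/q^3, and there are at most q - 1 numerators per q,
    so the sum is bounded independently of qmax."""
    total = Fraction(0)
    for q in range(2, qmax + 1):
        for p in range(1, q):
            if gcd(p, q) == 1:
                total += Fraction(2) * eps / q**3
    return total

eps = Fraction(1, 10)
lengths = [cover_length(eps, n) for n in (10, 50, 200)]
# The partial sums increase but stay well below 2*eps*zeta(2) ~ 0.329.
assert lengths[0] < lengths[1] < lengths[2] < Fraction(329, 1000)
```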
To prove that $\mathcal{L}([0,1])=1$, the following is the key claim: Let $\{ (a_i,b_i)\}$ be a finite cover of the interval $[c,d]$ with no proper subcover. Then $\sum_i (b_i-a_i) > d-c$.
The claim is proved by induction on the number of elements of the cover. It is clearly true if the cover has just one interval. Now if $[c,d] \subset \bigcup_{i=1}^n (a_i,b_i)$ with $n>1$, then $[c,d]\backslash (a_1,b_1)$ is either a closed interval $I$ or the union $I\cup I'$ of two disjoint closed intervals. In the first case $\bigcup_{i=2}^n (a_i,b_i)$ is a cover of $I$ and we apply the inductive hypothesis to it. Otherwise, $\{(a_i,b_i)\}_{i=2}^n$ can be split into two disjoint subfamilies, one which covers $I$ and one which covers $I'$. We then apply the inductive hypothesis to these families. (We use the property that the original cover has no proper subcover to make sure the covers of $I$ and $I'$ are disjoint.)
Now the claim and compactness of $[0,1]$ (i.e. Heine-Borel) yield that $\mathcal{L}([0,1])\ge 1$.
Hence, $[0,1]$ is uncountable and so is $\mathbb{R}$.
-
I'd be quite surprised if you could prove that that family you define doesn't cover the whole of [0,1] without using something very like the nested-intervals version of the diagonal argument. (That is, I'd like to know more about your proof that [0,1] has outer measure 1.) – gowers Nov 25 2010 at 22:15
I edited my post with a sketch of a proof that $[0,1]$ has outer measure $1$, modulo compactness of closed intervals. I do not see that I'm using a diagonal/nested intervals argument, but I could be totally wrong. The usual proof of Heine-Borel also doesn't appear to me to use a diagonal argument (see e.g. math.utah.edu/~bobby/3210/heine-borel.pdf). At no point in the proof there is an enumeration of an infinite set. – Pablo Shmerkin Nov 26 2010 at 0:38
Maybe you've convinced me. One can take the family you describe and define x to be the supremum over all reals x such that [0,x] can be covered by finitely many of its members. Such an x can't be covered itself (since the sets are open). This is just the proof of Heine-Borel in this special case. Also, your lemma gives us that x can't be 1. So one ends up using the least upper bound axiom instead of the nested-intervals property, which is perhaps enough to qualify the argument as genuinely different. – gowers Nov 27 2010 at 18:06
this proof appears in hardy's pure mathematics, about 100 years ago. – roy smith Dec 5 2010 at 5:36
I am sorry I have the book in hard, but can't find where the proof is. Could you tell me where that is? – To be cont'd Dec 22 2010 at 16:38
A nice proof based on the property that each bounded subset of the reals has a supremum can be found in this article.
-
Cantor's original proof of uncountability of the reals did not explicitly mention diagonalization. Nor did it use decimal digits.
-
Unfortunately, this proof uses a diagonal argument. – Andres Caicedo Nov 23 2010 at 2:27
+1. I consider this to be a very different proof. It is constructive, or can be made so with little change. The diagonal argument actually proves the uncountability of 2^N, and no effective bijection between R and 2^N exists. – Daniel Mehkeri Nov 23 2010 at 2:27
@Daniel : I am not sure I understand what you mean. The proof uses a diagonal argument. The typical diagonal argument proofs are constructive. And it is easy to describe explicitly bijections between ${\mathbb R}$ and $2^{\mathbb N}$. – Andres Caicedo Nov 23 2010 at 3:03
Here is a hack to fix links to Wikipedia: go to "cite this page" and remove &oldid=XXXX from the URL. – Victor Protsak Nov 23 2010 at 4:54
Fixed the link. – David Roberts Nov 23 2010 at 5:00
As Andres implicitly pointed out, we may avoid diagonalization by working with ordinals directly. We can appeal to Hartogs's Theorem to show that there is an ordinal $\beta$ that does not inject into $\omega$. It is then easy to verify that the least such $\beta$ will be $\omega_1$ (i.e., the set of all countable ordinals). Now using Choice, we can construct an injection $f: \omega_1 \rightarrow \mathcal{P}(\omega)$ by encoding each countable ordinal as a unique subset of $\omega$. This can be done by letting $\langle f_{\alpha}\mid \alpha < \omega_1\rangle$ be a sequence such that each $f_{\alpha}$ is a bijection from $\omega$ onto $\alpha$ (for infinite $\alpha$; the countably many finite ordinals are handled separately), and then defining $f(\alpha) = \{\langle m, n\rangle \mid f_{\alpha}(m) < f_{\alpha}(n)\}$, where $\langle \cdot, \cdot\rangle: \omega \times \omega \rightarrow \omega$ is the Cantor pairing function. This completes the proof, since if there were an injection from the power set of $\omega$ (or the reals) into $\omega$, then there would be an injection from $\omega_1$ into $\omega$.
It is worth noting that in a standard proof of Hartogs's Theorem, we use the fact that an ordinal cannot be a member of itself ($\beta \notin \beta$). But because ordinals are well-ordered by the $\in$ relation, we can prove this fact without appealing to Foundation.
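The coding $f(\alpha) = \{\langle m,n\rangle \mid f_\alpha(m) < f_\alpha(n)\}$ can be tried out on a concrete countable ordinal. Below, $\omega+\omega$ is coded using the interleaving bijection (evens to the first copy of $\omega$, odds to the second); the choice of bijection and the finite truncation are illustrative assumptions of mine, not part of the answer:

```python
def pair(m, n):
    """Cantor pairing function <m, n>, a bijection omega x omega -> omega."""
    return (m + n) * (m + n + 1) // 2 + n

# A bijection f : omega -> omega + omega, sending evens to the first copy
# of omega and odds to the second.  In omega + omega, f(a) < f(b) iff the
# parity of a is below the parity of b, or parities agree and a < b.
def less(a, b):
    return (a % 2, a) < (b % 2, b)

def code(bound):
    """Finite approximation to the subset of omega coding omega + omega:
    { <m, n> : f(m) < f(n) }, restricted to m, n < bound."""
    return {pair(m, n) for m in range(bound) for n in range(bound) if less(m, n)}

S = code(6)
assert pair(0, 2) in S      # f(0) = 0 is below f(2) = 1 in omega + omega
assert pair(1, 2) not in S  # f(1) = omega lies above every even-coded element
```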
-
What about the Baire category theorem? It implies that every complete metric space without isolated points is uncountable. But of course, every proof uses some construction or rather characterization of $\mathbb{R}$. I think Cantor's diagonal argument is not bad at all.
-
The proof of the Baire Category Theorem I have in mind is a fairly direct generalisation of Cantor's Diagonal Argument. – HW Nov 22 2010 at 18:03
Yes, the Baire category argument is (I believe) equivalent to the axiom of dependent choice in ZF, and thus non-constructive. – Todd Trimble Nov 22 2010 at 18:14
Similarly, we can use the existence of (countably additive) Lebesgue measure to conclude that $\mathbb{R}$ is uncountable. Or on $2^{\mathbb{N}}$ the existence of the (countably additive) product measure. Or, from probability theory, the existence of an i.i.d. sequence of non-trivial random variables. – Gerald Edgar Nov 22 2010 at 18:23
If you unwind the Baire Category proof of uncountability, it actually turns into a diagonal argument after all. Given any sequence $(x_i)$ of real numbers, you argue that the intersection over $i$ of $\mathbb{R} \setminus \{x_i\}$ is dense, by BCT, hence non-empty. The standard proof of BCT constructs an element of this intersection by first taking a ball contained in $\mathbb{R} \setminus \{x_0\}$, then a sub-ball of this contained $\mathbb{R} \setminus \{x_1\}$, and so-on. In other words, we construct a real by a countable sequence of approximations, [cont’d] – Peter LeFanu Lumsdaine Nov 22 2010 at 18:24
with the $n$th approximation ensuring that the resulting limit is not equal to $x_n$. So this is a version of the diagonal argument in the same sense that the nested interval theorem is. – Peter LeFanu Lumsdaine Nov 22 2010 at 18:25
One can use the following theorem:
Every countable dense linear order without endpoints is order-isomorphic to $\Bbb Q$.
Since the real numbers are densely ordered and without endpoints, if $\Bbb R$ were countable it would be order-isomorphic to $\Bbb Q$.
However $\Bbb R$ is order-complete, and $\Bbb Q$ is not. So they are clearly not isomorphic, and therefore $\Bbb R$ is uncountable.
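The theorem quoted above is proved by Cantor's back-and-forth method; the "forth" half alone already shows how an enumeration is used to build an order-embedding into $\Bbb Q$, one element at a time. A minimal sketch (illustrative only; the choice of rational images is my own and arbitrary):

```python
from fractions import Fraction

def order_embed(xs):
    """Forth step of Cantor's back-and-forth argument: given an enumeration
    xs of a linear order, choose rational images one element at a time so
    that order is preserved.  (The full theorem alternates back and forth
    steps to get an isomorphism onto all of Q.)"""
    images = {}
    for x in xs:
        lower = [images[y] for y in images if y < x]
        upper = [images[y] for y in images if y > x]
        if lower and upper:
            images[x] = (max(lower) + min(upper)) / 2  # density of Q
        elif lower:
            images[x] = max(lower) + 1                 # no right endpoint
        elif upper:
            images[x] = min(upper) - 1                 # no left endpoint
        else:
            images[x] = Fraction(0)
    return images

xs = [Fraction(k, 10) ** 2 for k in (5, 1, 9, 3, 7)]
f = order_embed(xs)
assert all((a < b) == (f[a] < f[b]) for a in xs for b in xs)
```

Note how the construction consumes the elements in enumeration order, which is the point made in the comments to Bill Johnson's answer: the proof still relies on an enumeration.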
-
Isn’t this basically the same as Bill Johnson’s answer? – Emil Jeřábek Apr 29 at 11:20
Oh crap, I didn't even see that. I feel bad now... – Asaf Karagila Apr 29 at 11:42
Cantor gave several proofs of uncountability of reals; one involves the fact that every bounded sequence has a convergent subsequence (thus being related to the nested interval property). All his proofs are discussed here:
MR2732322 (2011k:01009) Franks, John: Cantor's other proofs that R is uncountable. (English summary) Math. Mag. 83 (2010), no. 4, 283–289.
-
I have the following candidate: http://arxiv.org/abs/1003.3557, section 7.4. Notice that in the setting of the article one cannot use diagonalization.
-
http://mathoverflow.net/questions/38274?sort=newest

## What are the shapes of rational functions?
I would like to understand and compute the shapes of rational functions, that is, holomorphic maps of the Riemann sphere to itself, or equivalently, ratios of two polynomials, up to Moebius transformations in both domain and range. For degree 1 and 2, there is only one equivalence class. For degree 3, there is a well-understood one-complex-parameter family, so the real challenge is for higher degrees.
Given a set of points to be the critical values [in the range], along with a covering space of the complement homeomorphic to a punctured sphere, the uniformization theorem says this Riemann surface can be parametrized by $S^2$, thereby defining a rational function. Is there a reasonable way to compute such a rational map?
I'm interested in ideas of good and bad ways to go about this. Computer code would also be most welcome.
Given a set of $2d-2$ points on $CP^1$ to be critical points [in the domain], it has been known since Schubert that there are Catalan(d) rational functions with those critical points. Is there a conceptual way to describe and identify them?
In the case that all critical points are real, Eremenko and Gabrielov, Rational functions with real critical points and the B. and M. Shapiro conjecture in real enumerative geometry. Annals of Mathematics, v.155, p.105-129, 2002 gave a good description. They are determined by $f^{-1}(R)$, which is $R$ together with mirror-image subdivisions of the upper and lower half-plane by arcs. These correspond to the various standard things that are enumerated by Catalan numbers. Is there a global conceptual classification of this sort? And, is there a way to find a rational map with given critical points along with some kind of additional combinatorial data?
Note that for the case of polynomials, this is very trivial: the critical points are zeros of its derivative, so there is only one polynomial, which you get by integrating the derivative.
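That computation is easy to carry out exactly: expand $p'(x)=\prod_i(x-c_i)$ and integrate term by term. A small sketch with rational arithmetic (an illustration of the remark, with my own normalization $p(0)=0$):

```python
from fractions import Fraction

def poly_from_critical_points(crits):
    """Coefficients (constant term first) of the polynomial p with monic
    derivative p' = prod (x - c) over c in crits and p(0) = 0.  Up to
    affine normalization, this is the unique polynomial with the given
    critical points."""
    deriv = [Fraction(1)]                      # coefficients of p'
    for c in crits:
        deriv = [Fraction(0)] + deriv          # multiply by x ...
        for i in range(len(deriv) - 1):
            deriv[i] -= Fraction(c) * deriv[i + 1]   # ... and subtract c*f
    # integrate: coefficient of x^k becomes coefficient of x^(k+1), over k+1
    return [Fraction(0)] + [a / (k + 1) for k, a in enumerate(deriv)]

def eval_poly(coeffs, x):
    return sum(a * x**k for k, a in enumerate(coeffs))

p = poly_from_critical_points([0, 1])  # p'(x) = x(x - 1)
# p(x) = x^3/3 - x^2/2, so p' vanishes exactly at the critical points.
assert p == [Fraction(0), Fraction(0), Fraction(-1, 2), Fraction(1, 3)]
```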
Is there a complete characterization of the Schwarzian derivative for a rational map, starting with the generic case of $2d-2$ distinct critical points?
Cf. the recent question by Paul Siegel. The Schwarzian $q$ of a generic rational map has a double pole at each critical point. As a quadratic differential, it defines a metric $|q|$ on the sphere minus the critical points which, near each critical point, is isometric to an infinitely long cylinder of circumference $\sqrt 6\, \pi$. Negative real trajectories of the quadratic differential go from pole to pole, defining a planar graph.
What planar graphs occur for Schwarzian derivatives of rational functions? What convex (or other) inequalities do they satisfy?
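For numerical experiments with these quadratic differentials, the Schwarzian $S(f)=f'''/f'-\tfrac32\,(f''/f')^2$ can be approximated from finite differences; the step size and test function below are my own illustrative choices, not from the question:

```python
def schwarzian(f, z, h=1e-3):
    """Numerical Schwarzian derivative S(f) = f'''/f' - (3/2)(f''/f')^2
    at a complex point z, via central finite differences."""
    f1 = (f(z + h) - f(z - h)) / (2 * h)
    f2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    f3 = (f(z + 2 * h) - 2 * f(z + h) + 2 * f(z - h) - f(z - 2 * h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

z = 1 + 1j
# For f(z) = z^2 the exact Schwarzian is -3/(2 z^2): a double pole at the
# critical point z = 0, as described above.
approx = schwarzian(lambda w: w * w, z)
exact = -1.5 / z**2
assert abs(approx - exact) < 1e-4
```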
The map from the configuration space of $(2d-2)$ points together with branching data to the configuration space of $2d-2$ points, defined by mapping (configuration of critical values plus branched cover data) to (configuration of critical points) is a holomorphic map, which implies it is a contraction of the Teichmuller metric.
Is this map a contraction for other readily described metrics?
-
Perhaps the "na" tag was meant to be "na.numerical-analysis". – Gjergji Zaimi Sep 10 2010 at 8:36
Yes, na-numerical-analysis was what I meant and what I actually typed. I surmise the software truncated it, and I don't see how to fix it. – Bill Thurston Sep 10 2010 at 12:35
The first question for three critical values seems to be the problem of "dessin d'enfant". A reasonable amount of calculation (some by hand, some by computer) has been done but the number of explicit examples is not large. – Torsten Ekedahl Sep 10 2010 at 17:02
The first question (which, as TE says, is the problem of computing equations for dessins d'enfant when there are three branch points) appears to be hard. One person who's done a lot of work on it is Jean-Marc Couveignes; see math.univ-toulouse.fr/~couveig/publi/volk.pdf for a representative piece of work. – JSE Sep 12 2010 at 2:19
Thanks TE and JSE for the pointer. I'm not yet convinced that the computation should be hard in principle, even if nobody has yet implemented an efficient process. The map from {rational functions/up to precomposition with Moebius} to {critical values, branching data} is biholomorphic, so at worst the inverse function theorem should be efficient once implemented, although annoying to implement because of the complexity of tracking the branch data, the braid group action, degeneracy relationships and orbifold singularities well enough to get good local coordinates, especially in the range. – Bill Thurston Sep 12 2010 at 18:13
## 2 Answers
The algorithmic unsolvability of the general polynomial equation of degree greater than 4 implies that this is not possible in the general case.
Shows that what is not possible? – Gerald Edgar Jan 14 at 18:24
If "algorithmic unsolvability" here is intended to mean unsolvability by radicals, then I don't see its relevance. If it's intended to mean something else, then I'd appreciate more information about the intended meaning. – Andreas Blass Jan 14 at 18:32
1. There is a characterization of Schwarzian derivatives of rational maps: section 3 in the text: http://www.math.purdue.edu/~eremenko/dvi/schwarz.pdf There is something similar also in arXiv:math/0512370, chapter 2. All these descriptions are various systems of algebraic equations. One of them, the "Bethe ansatz equations for the Gaudin model", proved to be very useful; see Mukhin, Tarasov and Varchenko, The B. and M. Shapiro conjecture in real algebraic geometry and the Bethe ansatz, Ann. of Math. 170 (2009), no. 2, 863-15.
2. There is some cell decomposition of the sphere which can be intrinsically related to a rational function. It is described in the paper Bonk, Eremenko, Schlicht regions of entire and meromorphic functions, J. d'Analyse, 77, 1999, 69-104, Sections 7-8. For a given cell decomposition, a rational function can be recovered using an algorithm similar to Thurston's circle packing algorithm. However, with this description, critical points or critical values cannot be prescribed, and the cell decomposition does not determine the rational function completely.
Alex Eremenko.
Prof. Eremenko, welcome to MO! – David Speyer Aug 3 at 20:04
Thanks, David. I should have posted my comment as a "comment", not an "answer", sorry, have not mastered the rules completely yet:-( – Alexandre Eremenko Aug 3 at 20:17
http://mathhelpforum.com/differential-geometry/79807-fundamental-group-free-group.html
# Thread:
1. ## fundamental group, free group
Let $Y$ be the complement of the following subset of the plane $\mathbb{R}^2:$
$\{(x,0) \in \mathbb{R}^2: x \in \mathbb{Z} \}$
Prove that $\pi_1(Y)$ is a free group on a countable set of generators.
I don't know how to start this problem. Thanks in advance.
2. Originally Posted by mingcai6172
Let $Y$ be the complement of the following subset of the plane $\mathbb{R}^2:$
$\{(x,0) \in \mathbb{R}^2: x \in \mathbb{Z} \}$
Prove that $\pi_1(Y)$ is a free group on a countable set of generators.
I don't know how to start this problem. Thanks in advance.
Deformation retracting $Y$ onto a wedge of (countably many) circles would be the proper way to approach this problem.
Once you have the deformation retract, I think it would not be hard to prove this.
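One way to flesh out the hint (a sketch, with the standard caveat that the infinite wedge is handled as a direct limit of its finite subwedges):

```latex
% Step 1: Y = \mathbb{R}^2 \setminus (\mathbb{Z}\times\{0\}) deformation
% retracts onto a wedge of countably many circles, with one loop a_n
% around each removed point (n,0):
Y \simeq \bigvee_{n \in \mathbb{Z}} S^1 .
% Step 2: van Kampen (applied to finite subwedges, then passing to the
% direct limit) gives the free product of copies of \mathbb{Z}:
\pi_1(Y) \cong \pi_1\Big(\bigvee_{n \in \mathbb{Z}} S^1\Big)
        \cong \mathop{\ast}_{n \in \mathbb{Z}} \mathbb{Z},
% i.e. the free group on the countable generating set \{a_n\}_{n \in \mathbb{Z}}.
```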
http://mathoverflow.net/questions/110092/ising-model-on-a-cycle/110222
## Ising model on a cycle
The Ising model on $\mathbb{Z} / 2d\mathbb{Z}$ gives to the configuration $x=(x_0, \ldots, x_{2d-1}) \in \{-1,+1\}^{2d}$ a probability proportional to $\exp\big(\beta \sum_i x_ix_{i+1} \big)$. The Gibbs sampler with block updates is a Markov chain $X_k$ that evolves on the set of such configurations and updates the odd (resp. even) indices conditionally on the even (resp. odd) indices with probability a half.
It seems like a relatively straightforward application of the path coupling [1] approach (two configurations are neighbours if they agree on all odd or all even coordinates; distance between two neighbours is $1+H(x,y)/d$ where $H$ is the Hamming distance) shows that the mixing time of the Gibbs sampler stays bounded as the size $d$ of the system goes to infinity, which looks rather surprising. Any intuition behind that? If this is already written somewhere, any reference concerning this (or similar) result?
• [1] Chapter 14 of Markov Chains and Mixing Times by D. Levin, Y. Peres and E. Wilmer
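The block dynamics described above is easy to simulate. Here is a minimal sketch (not from the thread; the values of $d$ and $\beta$ are arbitrary, and for simplicity each sweep updates the odd block and then the even block, rather than choosing one block at random). Given the even sites, the odd sites are conditionally independent, with $P(x_i = +1) = e^{\beta s_i}/(2\cosh \beta s_i)$ where $s_i = x_{i-1} + x_{i+1}$, so each half-update is one vectorized step:

```python
import numpy as np

def block_gibbs_step(x, beta, rng):
    """One sweep on the cycle Z/(2d)Z: update odd sites given even, then even given odd.

    Given its two neighbours, site i is +1 with probability
    exp(beta*s) / (2*cosh(beta*s)), where s = x[i-1] + x[i+1]; the odd
    (resp. even) sites are conditionally independent given the rest.
    """
    n = len(x)
    for parity in (1, 0):                        # odd block, then even block
        idx = np.arange(parity, n, 2)
        s = x[(idx - 1) % n] + x[(idx + 1) % n]  # neighbour sums
        p_plus = np.exp(beta * s) / (2.0 * np.cosh(beta * s))
        x[idx] = np.where(rng.random(idx.size) < p_plus, 1, -1)
    return x

rng = np.random.default_rng(0)
d, beta = 50, 0.5
x = np.ones(2 * d, dtype=int)                    # start from the all-plus state
for _ in range(200):
    x = block_gibbs_step(x, beta, rng)
print(x.sum())                                   # magnetization of one sample
```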
I guess you mean $\mathbb{Z}/2d\mathbb{Z}$ and $(x_0,\dots, x_{2d-1})\in \{-1, +1\}^{2d}$? Also $d$ is probably more naturally "size" than "dimension" here, since everything is really 1-dimensional. Anyway, are you sure you are asking the question you mean? Maybe I misunderstand, but for most sensible definitions the mixing time for, say $d=1$ (a system with only 2 sites) will not be the same as that for $d=1000$. Often in these kind of problems the mixing time is considered in an asymptotic regime as some parameter gets large - but that seems to be absent here. – James Martin Oct 19 at 14:58
Thank you for the comment, I have updated the notations. As you said, I really meant "size" instead of "dimension". And I should not have written that the mixing time does not depend on d; what I really meant is that the mixing time $\tau(d)$ seems to stay bounded as $d$ goes to infinity. – Alekk Oct 19 at 15:26
## 1 Answer
This is not so surprising, and is related to the lack of phase transition in the one dimensional Ising model.
Consider first why the mixing time might be large. If $\beta$ is very high, and we start with a configuration where half the circle is + and half -, it will take a fairly long time for the chain to converge to one of the extreme states. (Roughly $d^2$, as the interface will perform a random walk.)
However, if $\beta$ is fixed and $d$ is large, then at every step the process will create islands of the opposite sign, at distance of order $e^{4\beta}$, regardless of $d$. Note that the stationary distribution also has a finite correlation length.
Finally, another way to see the bounded mixing time is by a coupling argument. The simplest local coupling will create agreement at some density, and segments of agreement will grow at a positive rate, so after roughly at most $e^{4\beta}$ steps any two starting configurations will couple. This bound can be improved.
Thanks, that's a great answer. I was trying the other day to see how long it would take to couple an all +1 configuration with an all -1 configuration, and one can see that the coupling time stays bounded wrt $d$. – Alekk Oct 21 at 10:39
http://www.physicsforums.com/showthread.php?p=4167168
Physics Forums
## Successive Measurements of angular momentum
Hello,
I believe this to be a rather simple problem but I am not quite sure if my thinking is correct.
We have a particle in a j=1 state of angular momentum J. I am first asked to find some eigenvectors of the matrix J(y):
$$J_{y}=\frac{\hbar}{\sqrt{2}i}\begin{pmatrix} 0 & -1 & 0\\ 1 & 0 & -1\\ 0 & -1 & 0 \end{pmatrix}$$
Is this correct?
I got the eigenvalues of this matrix to be i√2, -i√2 and 0 which I believe correspond, respectively to,
$$\hbar, -\hbar, 0$$
Though my main question (provided the above is correct) is that we are told the system is in a state of the J(y) corresponding to the positive non-zero eigenvector, we are then asked to find the probability of finding each value :
$$\hbar, -\hbar, 0$$
of the J(z) angular momentum.
Surely this is just given by the normalized coefficients of the eigenvectors of J(z) and the fact it is in a state of J(y) is not relevant?
I can post the eigenvectors I attained for the J(y) matrix if required.
Thanks,
SK
I found the eigenvectors of J(y), shown above, to be: $$\begin{pmatrix} 1\\ \sqrt{2}i\\ -1 \end{pmatrix} \begin{pmatrix} 1\\ -\sqrt{2}i\\ -1 \end{pmatrix} \begin{pmatrix} \sqrt{2}\\ 0\\ \sqrt{2} \end{pmatrix}$$ Normalized by a factor 0.5 in front of each of them, these eigenvectors have the eigenvalues i√2, -i√2 and 0 respectively; I believe they can be manipulated into h-bar, negative h-bar and zero.
we are told the system is in a state of the J(y) corresponding to the positive non-zero eigenvector, we are then asked to find the probability of finding each value of the J(z) angular momentum. Surely this is just given by the normalized coefficients of the eigenvectors of J(z) and the fact it is in a state of J(y) is not relevant?
The probability amplitude of finding each value of $J_z$ in an eigenstate of $J_y$ is the overlap $\langle J_z = m_z | J_y = m_y \rangle$, and the probability itself is that amplitude squared, $|\langle J_z = m_z | J_y = m_y \rangle|^2$.
## Successive Measurements of angular momentum
First off, I think you meant for the bottom middle entry of the matrix to be a positive one, no? Otherwise, that Jy just has eigenvalues of zero.
I would think you take the eigenvector corresponding to the $J_y = \hbar$ measurement, which corresponds to your state, $\psi$. Then, you know the eigenvalues of the Jz matrix,
$$J_z = \hbar \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & -1 \end{pmatrix}$$
has eigenvalues $-\hbar, 0, \hbar$
find the eigenvectors corresponding to each of those values, then to find the probabilities, you do
$P_\lambda = |\langle \psi_z \mid \psi_y \rangle|^2$
where $P_\lambda$ is the probability of a measurement of eigenvalue λ for Jz, $\psi_z$ is the eigenvector for that eigenvalue of Jz, and $\psi_y$ is the eigenvector corresponding to the $\hbar$ eigenvalue for Jy.
You may want a second opinion, because Angular Momentum was not my strongest subject in QM, but this seems right. Remember to normalize all eigenvectors!
Thanks Bill_K and soothslayer, I was getting confused and I also thought that particular eigenvector component should have been a different sign. This now makes sense, I'll try and apply it and come back soon if I'm having trouble. Thanks guys!
Hey I'm back, I think I have done this question now, does the following seem correct. Right so I determined the Jy angular momentum matrix with the following normalized eigenvectors and corresponding eigenvalues: $$J_{y}=\frac{\hbar}{\sqrt{2}i}\begin{pmatrix} 0 &1 & 0\\ -1 & 0 & 1\\ 0 & -1 & 0 \end{pmatrix}\; ,\; \frac{\hbar}{\sqrt{2}i} : \frac{1}{2}\begin{pmatrix} 1\\ \sqrt2i\\ -1 \end{pmatrix}\; ,\; -\frac{\hbar}{\sqrt{2}i}: \frac{1}{2}\begin{pmatrix} 1\\ -\sqrt2i\\ -1 \end{pmatrix}\; ,\; 0: \frac{1}{2}\begin{pmatrix} \sqrt2\\ 0\\ \sqrt2 \end{pmatrix}$$ Though I'm not sure if this labelling of eigenvalues is correct! I don't know if I can just list them as +hbar, -hbar and 0. Anyway, the eigenvectors and corresponding eigenvalues of the Jz matrix are: $$\hbar: \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}\; ,\; -\hbar:\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix}\; ,\; 0:\begin{pmatrix} 0\\ 1\\ 0 \end{pmatrix}$$ Using the +ve eigenstate of Jy I found the following probabilities of attaining states +hbar and -hbar of Jz as 0.25 each and 0.5 for the 0 state of Jz, they sum to 1 so I'm guessing I've done it right? Is it? Thanks, SK
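These numbers are easy to cross-check numerically (a NumPy sketch, not from the thread; it assumes $\hbar = 1$ and the standard $j = 1$ matrices in the $J_z$ eigenbasis, ordered $m = +1, 0, -1$):

```python
import numpy as np

hbar = 1.0
# J_y for j = 1 in the J_z eigenbasis (rows/columns ordered m = +1, 0, -1)
Jy = (hbar / np.sqrt(2)) * np.array([[0, -1j,   0],
                                     [1j,  0, -1j],
                                     [0,  1j,   0]])

vals, vecs = np.linalg.eigh(Jy)                 # Hermitian, so real eigenvalues
psi = vecs[:, np.argmin(np.abs(vals - hbar))]   # eigenstate with J_y = +hbar

# The J_z eigenvectors are the standard basis vectors, so P(m_z) = |<m_z|psi>|^2
# is just the squared modulus of each component of psi.
probs = np.abs(psi) ** 2
print(probs)   # -> probabilities 1/4, 1/2, 1/4 for m_z = +1, 0, -1
```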
Yeah, you got it!
Ahh good, thanks. I think the eigenvalues for the Jy matrix can be simplified to hbar, -hbar and 0.
Yes, they can. You actually can't have imaginary numbers as eigenvalues in quantum mechanics, for the most part. Most operators in QM are Hermitian, because the eigenvalues must correspond to observables, which must be real. Not ALL the time, but certainly for something that should be observable, like angular momentum.
Ahh yes of course, I forgot about the whole operators are observables and the fact that the eigenvalue is the measured result etc. Thanks again soothslayer for your help, much appreciated! SK
http://mathhelpforum.com/discrete-math/208363-quick-question-about-carnality-sets.html
Thread:
1. A quick question about cardinality of sets
I can't figure this out, it seems almost like a trick question.
When is $|A \cup B| = |A| + |B|$?
I'm not even sure what type of answer my professor is looking for so I've been working with quantifiers.
Using quantifiers, the closest to the correct answer I can come up with is something like this:
$|A \cup B| = |A| + |B|$ when $\forall x\,(x \in A \text{ and } x \in B)$, but it still doesn't satisfy the problem
Thank you
2. Re: A quick question about cardinality of sets
When $A \cap B$ is empty; in general $|A \cup B| = |A| + |B| - |A \cap B|$.
3. Re: A quick question about cardinality of sets
Thank you. If that's the case, then when is $|A \cap B| = |A| + |B|$?
Would it be when $A \cup B$ is empty?
4. Re: A quick question about cardinality of sets
Indeed as coolge said, when you do $|A \cup B|$ the intersection only gets counted once, but |A| + |B|, would mean the intersection gets counted twice. So they are equal precisely when there is nothing in the intersection.
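The identity is easy to sanity-check with Python sets (a toy illustration, not from the thread):

```python
# |A ∪ B| = |A| + |B| - |A ∩ B|, so the equality |A ∪ B| = |A| + |B|
# holds exactly when the intersection is empty.
A, B = {1, 2, 3}, {3, 4}          # overlapping: intersection is {3}
assert len(A | B) == len(A) + len(B) - len(A & B)   # 4 == 3 + 2 - 1
assert len(A | B) != len(A) + len(B)                # overlap breaks equality

C, D = {1, 2, 3}, {4, 5}          # disjoint sets
assert len(C | D) == len(C) + len(D)                # 5 == 3 + 2
```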
http://physics.stackexchange.com/questions/43337/how-much-space-to-simulate-a-small-hilbert-space
# How much space to simulate a small Hilbert space?
I'm thinking about trying to do a numerical simulation of some very simple QM problems.
How much space do I need? To simulate the Hilbert space?
I'd like to eventually simulate the absorption or emission of a photon by a hydrogen atom. So at least three particles (two fermions, one boson). Let's generalize that to three particles with arbitrary spin, so I can look at three photons or three electrons if I want to.
In order to do a numerical simulation I need to replace the continuous spacetime with a rectangular grid or lattice. I'd like to eventually get more precision, but let's start with just ten cells per dimension to begin with. So including time, I need ten thousand cells in a four-dimensional lattice.
How many cells do I need to simulate the Hilbert space? and what goes in each cell?
If I put restrictions on the shape of the wavefunction, does that help?
## 3 Answers
You need a basis large enough to track the Schroedinger equation reasonably well over the time of interest, or to represent the ground state if you are interested in that. This depends a lot both on the initial state and the Hamiltonian.
Much of numerical quantum mechanics is concerned with finding useful basis function sets that do not grow too rapidly with the number of dof treated. This means they are usually applicable only for fairly specific classes of problems.
You might want to have a look at GAMESS http://www.msg.ameslab.gov/gamess/ which proceeds using Gaussian states for electronic ground state calculations. But to make it feasible for many electrons, they need to do lots of extra trickery.
In a much simpler but practically infeasible approach, if you discretize each component of $\mathbb{R}^{3N}$ with $p$ points (and $p=10$ will still give very poor accuracy), you are left with a discrete problem in $(2p^3)^N$ dimensions. GAMESS tries instead to have a complexity growing less than $O(N^2)$.
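To see why the naive discretization is infeasible, here is a back-of-the-envelope count (a sketch; the 16 bytes assume one double-precision complex amplitude per basis state):

```python
# Naive grid: p points per coordinate of R^{3N}, 2 spin states per particle,
# so the discretized Hilbert space has (2 * p**3)**N basis states.
def hilbert_dim(p, N):
    return (2 * p**3) ** N

p, N = 10, 3                      # coarse 10-point grid, 3 particles
dim = hilbert_dim(p, N)           # number of basis states
mem_gb = dim * 16 / 1e9           # one complex128 amplitude per state
print(dim, mem_gb)                # 8000000000 amplitudes, about 128 GB
```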
The full system has configuration space $\mathbb{R}^{3n}$ where $n$ is the number of particles. Note that because photons of different frequencies are distinct, you will need $n$ to depend on how finely you discretise space. You can remove one $\mathbb{R}^3$ for centre of mass motion, and perhaps another 3 dimensions for orientation, but that still leaves a very high dimensional space in which the wavefunction of the system will live.
A better way to proceed is to write down the Hamiltonian of the system in a non-interacting frame, e.g. the hydrogen atom can be solved exactly and so can the non-interacting photon. Truncating in this basis should give you something you can actually implement as an interaction operator.
Thanks for your answer. I am aware that my approach is not very practical and consumes a lot of space. I am doing it to try to follow "What happens where" so to speak – Jim Graber Nov 3 '12 at 17:44
First of all, I think I want not $L^2(X)$, but some finite, numerical approximation of $L^2(X)$. Second, I think I need something like $L^2(\mathbb{R}^{3N}) \otimes \mathbb{C}^{2^N}$ to include spin. (I am still confused as to whether N should be an exponent or a multiplier in each of the two places where it occurs) – Jim Graber Nov 3 '12 at 17:45
Then I need to convert this finite approximation to a specific number of cells (and perhaps multiply yet again by a time variable). Finally, I will need to figure out what to store in each cell. One complex number? Two? More? Something else? – Jim Graber Nov 3 '12 at 17:46
@JimGraber: Your exponents are correct. – Arnold Neumaier Nov 4 '12 at 15:59
@JimGraber: In each cell of phase space, you'd have to specify as many numbers as the wave function has components, which means $2^N$ complex numbers. – Arnold Neumaier Nov 5 '12 at 9:47
It is a matter of detail: your algorithm should be independent of the amount of available memory and depend only on the lattice size.
http://physics.stackexchange.com/questions/41036/why-are-eulers-equations-of-motion-coupled-physical-explanation?answertab=active
# Why are Euler's equations of motion coupled? Physical explanation
I have a problem with one of my study questions for an oral exam:
Euler’s equation of motion around the $z$ axis in two dimensions is $I_z\dot{\omega}_z = M_z$, whereas it in three dimensions is $I_z\dot{\omega}_z =-(I_y-I_x)\omega_x\omega_y+M_z$, assuming that the $xyz$ coordinate systems is aligned with the principal axis. Why does Euler’s equation of motion for axis $z$ contain the rotational velocities for axes $x$ and $y$?
How can one explain this physically? I mean I can derive Euler's equation of motion, but how can I illustrate that the angular velocities are changing in 3 dimensions?
## 1 Answer
As explained on Wikipedia, the nice tensor form of the equations is $$\mathbf{I} \cdot \dot{\boldsymbol\omega} + \boldsymbol\omega \times \left( \mathbf{I} \cdot \boldsymbol\omega \right) = \mathbf{M}$$ This reduces to your equations if one diagonalizes the tensor of the moment of inertia $I$ and labels the diagonal entries etc.
The three components are mixed with each other because quantities like $\vec\omega$ and $\vec M$ are really associated with rotations in space, and rotations around the axes $x,y,z$ don't commute with each other – unlike translations. Translations commute with each other, which is why the 3 components in $\vec F=m\vec a$ don't mix with each other.
For example, take the Earth: rotate it by 90 degrees around the $x$ axis, then by 90 degrees around the $y$ axis; now rotate it back by 90 degrees around the $x$ axis (so that you aren't undoing the $y$ rotation immediately), and finally undo the $y$ rotation, too. You don't get back where you started: instead, you end up rotating the Earth around the $z$ axis. We say that rotations form the group $SO(3)$ which is non-abelian, $gh\neq hg$. The moment of force wants to rotate the rigid body around an axis, but because the body was already rotating around another axis given by $\vec \omega$ and the rotations don't commute with each other, the effect of the moment of force also influences the "remaining third" component.
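One can see this non-commutativity numerically (a sketch, not part of the original answer): for a small angle $\epsilon$, the combination $R_x(\epsilon)R_y(\epsilon)R_x(-\epsilon)R_y(-\epsilon)$ is, to leading order, a rotation about $z$ by $\epsilon^2$, which is the matrix counterpart of $[J_x, J_y] \propto J_z$:

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Finite rotations do not commute ...
q = np.pi / 2
assert not np.allclose(Rx(q) @ Ry(q), Ry(q) @ Rx(q))

# ... and the "commutator" of small x- and y-rotations is a z-rotation:
eps = 1e-3
P = Rx(eps) @ Ry(eps) @ Rx(-eps) @ Ry(-eps)
assert np.allclose(P, Rz(eps**2), atol=1e-8)   # agreement up to O(eps^3)
```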
A natural way to write the vectors $\vec \omega, \vec M$ is actually an "antisymmetric tensor" – they're "pseudovectors", not actual vectors. At any rate, when you correctly derive the equations, you should reproduce what Euler got.
http://www.sagemath.org/doc/reference/combinat/sage/combinat/integer_matrices.html
# Counting, generating, and manipulating non-negative integer matrices¶
Counting, generating, and manipulating non-negative integer matrices with prescribed row sums and column sums.
AUTHORS:
• Franco Saliola
class sage.combinat.integer_matrices.IntegerMatrices(row_sums, column_sums)¶
Bases: sage.structure.unique_representation.UniqueRepresentation, sage.structure.parent.Parent
The class of non-negative integer matrices with prescribed row sums and column sums.
An integer matrix $$m$$ with column sums $$c := (c_1,...,c_k)$$ and row sums $$l := (l_1,...,l_n)$$, where $$c_1+...+c_k$$ is equal to $$l_1+...+l_n$$, is an $$n \times k$$ matrix $$m = (m_{i,j})$$ such that $$m_{1,j}+\dots+m_{n,j} = c_j$$, for all $$j$$ and $$m_{i,1}+\dots+m_{i,k} = l_i$$, for all $$i$$.
EXAMPLES:
There are $$6$$ integer matrices with row sums $$[3,2,2]$$ and column sums $$[2,5]$$:
```sage: from sage.combinat.integer_matrices import IntegerMatrices
sage: IM = IntegerMatrices([3,2,2], [2,5]); IM
Non-negative integer matrices with row sums [3, 2, 2] and column sums [2, 5]
sage: IM.list()
[
[2 1] [1 2] [1 2] [0 3] [0 3] [0 3]
[0 2] [1 1] [0 2] [2 0] [1 1] [0 2]
[0 2], [0 2], [1 1], [0 2], [1 1], [2 0]
]
sage: IM.cardinality()
6
```
cardinality()¶
The number of integer matrices with the prescribed row sums and column sums.
EXAMPLES:
```sage: from sage.combinat.integer_matrices import IntegerMatrices
sage: IntegerMatrices([2,5], [3,2,2]).cardinality()
6
sage: IntegerMatrices([1,1,1,1,1], [1,1,1,1,1]).cardinality()
120
sage: IntegerMatrices([2,2,2,2], [2,2,2,2]).cardinality()
282
sage: IntegerMatrices([4], [3]).cardinality()
0
sage: len(IntegerMatrices([0,0], [0]).list())
1
```
This method computes the cardinality using symmetric functions. Below are the same examples, but computed by generating the actual matrices:
```sage: from sage.combinat.integer_matrices import IntegerMatrices
sage: len(IntegerMatrices([2,5], [3,2,2]).list())
6
sage: len(IntegerMatrices([1,1,1,1,1], [1,1,1,1,1]).list())
120
sage: len(IntegerMatrices([2,2,2,2], [2,2,2,2]).list())
282
sage: len(IntegerMatrices([4], [3]).list())
0
sage: len(IntegerMatrices([0], [0]).list())
1
```
column_sums()¶
The column sums of the integer matrices in self.
OUTPUT:
• Composition
EXAMPLES:
```sage: from sage.combinat.integer_matrices import IntegerMatrices
sage: IM = IntegerMatrices([3,2,2], [2,5])
sage: IM.column_sums()
[2, 5]
```
row_sums()¶
The row sums of the integer matrices in self.
OUTPUT:
• Composition
EXAMPLES:
```sage: from sage.combinat.integer_matrices import IntegerMatrices
sage: IM = IntegerMatrices([3,2,2], [2,5])
sage: IM.row_sums()
[3, 2, 2]
```
to_composition(x)¶
The composition corresponding to the integer matrix.
This is the composition obtained by reading the entries of the matrix from left to right along each row, and reading the rows from top to bottom, ignoring zeros.
INPUT:
• x – matrix
EXAMPLES:
```sage: from sage.combinat.integer_matrices import IntegerMatrices
sage: IM = IntegerMatrices([3,2,2], [2,5]); IM
Non-negative integer matrices with row sums [3, 2, 2] and column sums [2, 5]
sage: IM.list()
[
[2 1] [1 2] [1 2] [0 3] [0 3] [0 3]
[0 2] [1 1] [0 2] [2 0] [1 1] [0 2]
[0 2], [0 2], [1 1], [0 2], [1 1], [2 0]
]
sage: for m in IM: print IM.to_composition(m)
[2, 1, 2, 2]
[1, 2, 1, 1, 2]
[1, 2, 2, 1, 1]
[3, 2, 2]
[3, 1, 1, 1, 1]
[3, 2, 2]
```
sage.combinat.integer_matrices.integer_matrices_generator(row_sums, column_sums)¶
Recursively generate the integer matrices with the prescribed row sums and column sums.
INPUT:
• row_sums – list or tuple
• column_sums – list or tuple
OUTPUT:
• an iterator producing a list of lists
EXAMPLES:
```sage: from sage.combinat.integer_matrices import integer_matrices_generator
sage: iter = integer_matrices_generator([3,2,2], [2,5]); iter
<generator object integer_matrices_generator at ...>
sage: for m in iter: print m
[[2, 1], [0, 2], [0, 2]]
[[1, 2], [1, 1], [0, 2]]
[[1, 2], [0, 2], [1, 1]]
[[0, 3], [2, 0], [0, 2]]
[[0, 3], [1, 1], [1, 1]]
[[0, 3], [0, 2], [2, 0]]
```
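For readers without Sage, the same recursive enumeration is easy to sketch in plain Python (an illustration, not the Sage implementation): choose each row as a weak composition of its row sum bounded above by the remaining column sums, then recurse on the remaining rows.

```python
def rows_with_sum(total, caps):
    """Weak compositions of `total` whose i-th part is at most caps[i]."""
    if not caps:
        if total == 0:
            yield ()
        return
    for k in range(min(total, caps[0]) + 1):
        for rest in rows_with_sum(total - k, caps[1:]):
            yield (k,) + rest

def integer_matrices(row_sums, col_sums):
    """Non-negative integer matrices with the given row and column sums."""
    if not row_sums:
        if all(c == 0 for c in col_sums):
            yield []
        return
    for row in rows_with_sum(row_sums[0], col_sums):
        remaining = [c - r for c, r in zip(col_sums, row)]
        for rest in integer_matrices(row_sums[1:], remaining):
            yield [list(row)] + rest

print(len(list(integer_matrices([3, 2, 2], [2, 5]))))   # 6, as above
```

The $120$ for row and column sums $[1,1,1,1,1]$ is recovered too, since those matrices are exactly the $5 \times 5$ permutation matrices.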
http://physics.aps.org/articles/print/v2/30
# Viewpoint: Pauling’s dreams for graphene
Department of Physics, Boston University, 590 Commonwealth Ave., Boston, MA 02215
Published April 20, 2009 | Physics 2, 30 (2009) | DOI: 10.1103/Physics.2.30
Graphene, believed to be a semimetal so far, might actually be an insulator when suspended freely.
Graphene research is probably one of the fastest growing fields in condensed matter physics. The material is one atom thick, yet it can be seen with an ordinary optical microscope [1] (see Fig. 1). It has the properties of a good metal, although its electronic properties do not fit the standard theory of metals because its electrons propagate as massless Dirac particles [2]. Graphene is also resistant to extrinsic impurities because its chemical bonding is very specific, and consequently graphene conducts electricity better, with less energy loss, than silicon [3] (the platform of all modern electronics). Moreover, graphene is one of the strongest materials ever measured in terms of Young’s modulus and elastic stiffness [4] (the only other material that is comparable in strength is diamond), and nevertheless it is one of the softest (the only example of a metallic membrane). It can be used as an ultrasensitive nano-mechanical resonator besides being highly impermeable [5]. Hence it is not surprising that so many high-tech industries are interested in developing graphene-based devices for a plethora of applications, from high-frequency transistors [6] to reversible hydrogen storage [7, 8].
However, all the currently proposed applications of graphene are based on the idea that graphene is a semimetal, that is, a system without an electronic gap. In fact, there are many technological advantages for graphene to be a semiconductor instead of a semimetal. The most important one is that the presence of a gap would increase tremendously the on-off ratio for current flow that is needed for many electronic applications. In the last few years, researchers have been trying different ways to produce electronic gaps in graphene but they all come with serious problems. Gaps can be produced by geometrically confining graphene into nanoribbons [9] and quantum dots [10], but those systems are very sensitive to disorder introduced by the cutting process of the graphene sheet. Another possibility is to grow graphene on substrates that induce lattice potentials that can open a gap [11] but the disorder due to the growth process, and the charge transfer between graphene and the substrate, can change the nice electronic properties such as ambipolarity (equal conduction of electrons and holes) that one wants to preserve.
Although graphene has proven to be almost unbeatable in terms of electronic conduction and structural stability, it seems that electron-electron interactions have little effect on graphene’s properties. The lack of strongly interacting states in graphene is rather puzzling given that more than 50 years ago Linus Pauling [12] proposed that graphene should be an insulator due to strong electron-electron interactions, what is today called a Mott insulator [13]. Mott insulators should be contrasted with the more ordinary band insulators, where insulating behavior is generated by electron-ion interaction and can be understood within the independent-electron picture. Pauling based his arguments on the fact that graphene can be thought of as an infinite collection of benzene rings, from which the hydrogens were extracted, and its ground state, just like in benzene, should be a resonant valence bond (RVB) liquid with an electronic gap. Interestingly enough, more or less at the same time as Pauling, Philip Russell Wallace proposed, based on a theory that did not consider any electron-electron interactions, that graphene should be a semimetal [14]. So far Wallace has been “winning the race” but a paper by Joaquin Drut of Ohio State University and Timo Lähde of the University of Washington, published in Physical Review B [15], indicates that Pauling’s dreams for graphene may not be far from reach. Preliminary results were presented first by the same authors in Physical Review Letters [16].
Proposals that electron-electron interactions in graphene could generate an electronic gap were investigated in the context of the properties of graphite (from which graphene can be extracted), and actually preceded the discovery of graphene [17, 18]. Because the elementary particles in graphene are Dirac fermions, gap opening is an analogue of the "chiral symmetry" breaking process that occurs in quantum electrodynamics (QED) in two dimensions [19]. Unlike two-dimensional QED, where the fermions propagate at the speed of light $c$, in graphene the Dirac fermions propagate at the much smaller velocity $v \sim c/300$. The parameter that controls the gap opening is the so-called graphene fine-structure constant, $\alpha_g = e^2/(\varepsilon \hbar v)$, which is the analogue of the QED fine-structure constant, $\alpha_{\mathrm{QED}} \approx e^2/(\hbar c) \approx 1/137$ ($e$ is the electron charge, $\varepsilon$ is the dielectric constant of the environment, $\hbar$ is Planck’s constant).
Since Dirac fermions in two dimensions have a vanishing density of states, the semimetal-to-semiconductor transition requires a large value of $\alpha_g$, exceeding the so-called critical coupling $\alpha_c$. Thus the gap opening only occurs if $\alpha_g > \alpha_c$. The two fundamental questions are as follows: (1) On the theoretical side, what is the value of $\alpha_c$? (2) On the experimental side, can one find an environment with a sufficiently low $\varepsilon$ so that this condition is fulfilled? Notice that $\alpha_g$ depends inversely on $\varepsilon$, whose smallest value is $\varepsilon = 1$ (the value in vacuum). Hence $\alpha_g < \alpha_{g,\mathrm{vac}} \approx 300\,\alpha_{\mathrm{QED}} \approx 2.16$. For $\mathrm{SiO_2}$, which is a common substrate for graphene [1], one has $\alpha_{g,\mathrm{SiO_2}} \approx 0.79$. Because the problem in graphene, unlike QED, is inherently of a strong-coupling nature, perturbative approaches [20] are not able to capture this transition and approximate solutions [17, 18] can miss important quantum fluctuations. So one has to rely on nonperturbative methods.
Using Monte Carlo simulations analogous to those used in lattice gauge theory, Drut and Lähde start from the low-energy theory of graphene (the linearization of the spectrum around the Dirac points [2]) and discretize it on a hypercubic lattice [15]. In doing so they lose information about graphene’s honeycomb lattice. Nevertheless, because the system is particle-hole symmetric, the Monte Carlo simulation does not suffer from the infamous "sign problem" that plagues simulations of interacting fermionic systems. After careful analysis of the Monte Carlo data they found that $\alpha_c \approx 1.1$, that is, $\alpha_{g,\mathrm{SiO_2}} < \alpha_c < \alpha_{g,\mathrm{vac}}$, and hence the gap opening should be observed for graphene in vacuum but not for graphene on top of $\mathrm{SiO_2}$ (see Fig. 1). With the advent of ultrahigh-mobility suspended graphene samples [21, 22, 23] it will be possible to reach the value predicted by Drut and Lähde and check Pauling’s 50-year-old prediction.
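A quick numerical restatement of these bounds (my own sketch, not part of the Viewpoint; the values $v \sim c/300$, $\alpha_{g,\mathrm{SiO_2}} \approx 0.79$, and $\alpha_c \approx 1.1$ are those quoted in the text):

```python
# Compare the graphene coupling alpha_g = e^2/(eps*hbar*v) = (c/v)*alpha_QED/eps
# against the critical coupling found by Drut and Lahde.
alpha_qed = 1 / 137.0          # QED fine-structure constant e^2/(hbar c)
alpha_g_vac = 300 * alpha_qed  # suspended graphene: eps = 1, v ~ c/300; ~2.19
alpha_g_sio2 = 0.79            # quoted value for graphene on an SiO2 substrate
alpha_c = 1.1                  # critical coupling from the Monte Carlo study [15]

for name, a in [("vacuum", alpha_g_vac), ("SiO2", alpha_g_sio2)]:
    print(name, "gapped" if a > alpha_c else "semimetal")
# vacuum lies above alpha_c, SiO2 below it, matching the prediction in the text
```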
However, many questions remain: the linearization procedure that is used by Drut and Lähde does not allow for an exact determination of the size of the gap since that depends on the lattice details. If the gap is too small the result will be interesting but purely academic. Moreover, suspended samples are known to have ripples [21] that are the result of the softness of the material and it is not exactly known how those ripples would affect the value of $αc$. Nevertheless, graphene continues to amaze and present us with new challenges. It seems that this is just the beginning of a new adventure in the world of graphene physics.
## Acknowledgments
This research is partially supported by the U.S. Department of Energy under the grant DE-FG02-08ER46512.
### References
1. K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Gregorieva, and A. A. Firsov, Science 306, 666 (2004).
2. A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
3. A. K. Geim and K. S. Novoselov, Nature Materials 6, 183 (2007).
4. C. Lee, X. Wei, and J. Hone, Science 321, 385 (2008).
5. J. S. Bunch, S. S. Verbridge, J. S. Alden, A. M. van der Zande, J. M. Parpia, H. G. Craighead, and P. L. McEuen, Nano Letters 8, 2458 (2008).
6. Y.-M. Lin, K. A. Jenkins, A. Valdes-Garcia, J. P. Small, D. B. Farmer, and P. Avouris, Nano Letters 9, 422 (2009).
7. S. Patchkovskii, J. S. Tse, S. N. Yurchenko, L. Zhechkov, T. Heine, and G. Seifert, PNAS 102, 10439 (2005).
8. D. C. Elias, R. R. Nair, T. M. G. Mohiuddin, S. V. Morozov, P. Blake, M. P.. Halsall, A. C. Ferrari, D. W. Boukhvalov, M. I. Katsnelson, A. K. Geim, and K. S. Novoselov, Science 323, 610 (2009).
9. M. Y. Han, B. Özyilmaz, Y. Zhang, and P. Kim, Phys. Rev. Lett. 98, 206805 (2007).
10. L. A. Ponomarenko, F. Schedin, M. I. Katsnelson, R. Yang, E. H. Hill, K. S. Novoselov, and A. K. Geim, Science 320, 356 (2008).
11. S. Y. Zhou, G.-H. Gweon, A. V. Fedorov, P. N. First, W. A. de Heer, D.-H. Lee, F. Guinea, A. H. Castro Neto, and A. Lanzara, Nature Materials 6, 770 (2007).
12. L. Pauling, The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry (Cornell University Press, Ithaca, NY, 1960)[Amazon][WorldCat].
13. P. Phillips, Ann. Phys. NY 321, 1634 (2006).
14. P. R. Wallace, Phys. Rev. 71, 622 (1947).
15. J. E. Drut and T. A. Lähde, Phys. Rev. B 79, 165425 (2009).
16. J. E. Drut, and T. A. Lähde, Phys. Rev. Lett. 102, 026802 (2009).
17. D. V. Khveshchenko, Phys. Rev. Lett. 87, 246802 (2001).
18. E. V. Gorbar, V. P. Gusynin, V. A. Miransky, and I. A. Shovkovy, Phys. Rev. B 66, 045108 (2002).
19. T. Appelquist, D. Nash, and L. Wijewardhana, Phys. Rev. Lett. 60, 2575 (1988).
20. J. Gonzalez, F. Guinea, and M. A. H. Vozmediano, Nucl. Phys. B 424, 595 (1994).
21. J. C. Meyer, A. K. Geim, M. I. Katsnelson, K. S. Novoselov, T. J. Booth, and S. Roth, Nature 446, 60 (2007).
22. K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun. 146, 351 (2008).
23. X. Du, I. Skachko, A. Barker, and E. Y. Andrei, Nature Nanotechnology 3, 491 (2008).
### Highlighted article
#### Lattice field theory simulations of graphene
Joaquín E. Drut and Timo A. Lähde
Published April 20, 2009
http://ams.org/bookstore?fn=20&arg1=ulectseries&ikey=ULECT-26
Representations of Quantum Algebras and Combinatorics of Young Tableaux
Susumu Ariki, Research Institute for Mathematical Sciences, Kyoto University, Kyoto, Japan
University Lecture Series
2002; 158 pp; softcover
Volume: 26
ISBN-10: 0-8218-3232-8
ISBN-13: 978-0-8218-3232-5
List Price: US\$37
Member Price: US\$29.60
Order Code: ULECT/26
See also:
Finite Dimensional Algebras and Quantum Groups - Bangming Deng, Jie Du, Brian Parshall and Jianpan Wang
Geometric Representation Theory and Extended Affine Lie Algebras - Erhard Neher, Alistair Savage and Weiqiang Wang
This book contains most of the nonstandard material necessary to get acquainted with this new rapidly developing area. It can be used as a good entry point into the study of representations of quantum groups.
Among several tools used in studying representations of quantum groups (or quantum algebras) are the notions of Kashiwara's crystal bases and Lusztig's canonical bases. Mixing both approaches allows us to use a combinatorial approach to representations of quantum groups and to apply the theory to representations of Hecke algebras.
The primary goal of this book is to introduce the representation theory of quantum groups using quantum groups of type $$A_{r-1}^{(1)}$$ as a main example. The corresponding combinatorics, developed by Misra and Miwa, turns out to be the combinatorics of Young tableaux.
The second goal of this book is to explain the proof of the (generalized) Lascoux-Leclerc-Thibon conjecture. This conjecture, which is now a theorem, is an important breakthrough in the modular representation theory of the Hecke algebras of classical type.
The book is suitable for graduate students and research mathematicians interested in representation theory of algebraic groups and quantum groups, the theory of Hecke algebras, algebraic combinatorics, and related fields.
Reviews
"The author gives a good introduction to the algebraic aspects of this fast-developing field ... Overall, this is a well-written and clear exposition of the theory needed to understand the latest advances in the theory of the canonical/global crystal basis and the links with the representation theory of symmetric groups and Hecke algebras. The book finishes with an extensive bibliography of papers, which is well organised into different areas of the theory for easy reference."
-- Zentralblatt MATH
"Well written and covers ground quickly to get to the heart of the theory ... should serve as a solid introduction ... abundant references to the literature are given."
-- Mathematical Reviews
• Introduction
• The Serre relations
• Kac-Moody Lie algebras
• Crystal bases of $$U_v$$-modules
• The tensor product of crystals
• Crystal bases of $$U_v^-$$
• The canonical basis
• Existence and uniqueness (part I)
• Existence and uniqueness (part II)
• The Hayashi realization
• Description of the crystal graph of $$V(\Lambda)$$
• An overview of the application to Hecke algebras
• The Hecke algebra of type $$G(m,1,n)$$
• The proof of Theorem 12.5
• Reference guide
• Bibliography
• Index
http://mathoverflow.net/questions/75220?sort=oldest
## Physicist’s request for intuition on covariant derivatives and Lie derivatives
A friend of mine is studying physics, and asks the following question which, I am sure, others could respond to better:
What is the difference between the covariant derivative of $X$ along the curve $y(t)$ and the Lie derivative of $X$ along $y(t)$? I know the technical stuff about not needing to define a connection with a Lie derivative, needing to define the fields $X$ and $Y$ over a greater neighborhood, etc.
I am looking for a more physical sense. If a Lie derivative gives the sense of the change of a vector field along the direction of another field, how does the covariant derivative differ?
I don't think the Lie derivative "gives the sense of the change of the vector field along the direction of another field". One primary difference is that $\nabla_Z$ is $C^\infty$-linear in $Z$, while $\mathcal{L}_Z$ is only $\mathbb{R}$-linear. Conceptually for the Lie derivative you flow the entire manifold/neighborhood along the vector field $Z$ (so in fact you cannot define the Lie derivative of a vector field $X$ along a curve $y(t)$; at the very least you need to have a congruence, instead of just a single curve). – Willie Wong Sep 12 2011 at 14:01
you'll find an extensive discussion, with many pointers to the literature at physicsforums.com/showthread.php?t=150200 – Carlo Beenakker Sep 12 2011 at 14:12
## 4 Answers
The Lie derivative of a vector field $X$ with respect to another vector field $Y$ is just the Lie bracket of the two vector fields. It is well-defined given only the smooth structure and does not require any connection. In other words, it is independent of changes of co-ordinates and is preserved under any diffeomorphism. Given how flexible diffeomorphisms are, it can't be a pointwise or even curvewise concept, since you can basically map any pair of nonzero vectors to any other pair and even any nonvanishing transversal vector field along a curve to any other nonvanishing transversal vector field along another curve.
But we know what the Lie derivative tells us. It tells us how "coherent" or "independent" the two vector fields are with respect to each other locally (on an open set and not just at a point). It measures to what extent the generated flows commute, i.e. what happens if you first travel along an integral curve of one and then along one of the other versus the opposite order.
Another way to think about this, discussed in control theory, is to consider the set you get if you flow first along one vector field, then along the other, then along the first one again, etc. If the Lie bracket vanishes, then you stay inside a 2-dimensional surface. If it doesn't, then the value of the Lie bracket (and its iterates) tells you the dimension of the set that you stay inside.
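This flow-commutation picture is easy to check numerically. A minimal sketch (mine, not from the answer), using the concrete fields $X = \partial_x$ and $Y = x\,\partial_y$ on $\mathbb{R}^2$, whose bracket is $[X,Y] = \partial_y$:

```python
# Flowing along X then Y, versus Y then X: the endpoints differ at order t^2,
# and the gap points in the direction of the Lie bracket [X, Y] = d/dy.
def flow_X(p, t):   # flow of X = d/dx: (x, y) -> (x + t, y)
    return (p[0] + t, p[1])

def flow_Y(p, t):   # flow of Y = x d/dy: (x, y) -> (x, y + t*x)
    return (p[0], p[1] + t * p[0])

t, p = 1e-3, (0.5, 0.0)
pa = flow_Y(flow_X(p, t), t)   # first X, then Y
pb = flow_X(flow_Y(p, t), t)   # first Y, then X
gap = (pa[0] - pb[0], pa[1] - pb[1])
print(gap)  # ~ (0, t^2): nonzero, so the flows do not commute
```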
A connection allows you to define the concept of a "constant" vector along a curve, i.e. parallel translation along a curve. It is important to understand that defining parallel translation is an extra assumption or geometric structure added to the smooth manifold.
Another natural question is whether the Lie derivative and connection are related somehow. The answer is that for an arbitrary connection, they don't have to be related at all. But it turns out to be useful (and therefore natural?) to assume that they are related. Again, however, this is an additional assumption and not forced on you. – Deane Yang Sep 12 2011 at 15:19
Let $T$ be a tensor field on the manifold $M$, $\nabla$ a connection, $v$ a tangent vector at $x\in M$, and $V$ a vector field such that $V(x)=v$.
Then the intuition is as follows:
The covariant derivative $\nabla_v T$ is the derivative of $T$ along a geodesic arc $\gamma$ for $\nabla$ which has direction $v$ at $x=\gamma(0)$. The derivation is computed in the finite dimensional tangent space $T_xM$, as nearby values $T(y)$, $y\in M$, are compared via parallel transport.
(Remark: here "geodesic arc" should be made more precise, as geodesics emanating from $x$ are determined as parametrized curves and it may happen that the geodesic in the direction $v$ doesn't have velocity $v$)
The (value at the point $x$ of the) Lie derivative $\mathcal{L}_VT$ is the derivative of $T$ along the flowline of $V$ (passing through $x$). The derivation is computed in the finite dimensional tangent space $T_xM$, as nearby values $T(y)$, $y\in M$, are compared via pullback along the local flow of $V$.
I don't see any reason why a covariant derivative has to be computed using a geodesic. You get the same answer using any curve with tangent vector $V$. – Deane Yang Sep 12 2011 at 16:18
And the explanation of Lie derivative is quite misleading, because it makes it seem like the Lie derivative depends only on $T$ along the flow line of $V$. In fact, it depends on how both $T$ and $V$ behave in a neighborhood of the flowline. If you really want to use only data along the flow line, then you need to know their first order jets along the curve. – Deane Yang Sep 12 2011 at 17:27
@DeanYang: of course I agree the connection only depends on $v$, but he asked for an intuitive explanation, and I think the most intuitive meaning I can attach to the covariant derivative along a direction is: "directional derivative along the stright line (with respect to the connection, or metric if the connection is metric)". I agree you don't have to compute it along a geodesic, but it's what is it morally supposed to mean. – Qfwfq Sep 12 2011 at 21:38
@DeanYang: your second remark make me think my interpretation of the Lie derivative is not quite correct as stated. Is there a way (given $T$ and $V$ in a neighborhood of $x$) to compute the Lie derivative $\mathcal{L}_VT$ within a fixed finite dimensional vector space (depending on $x$)? – Qfwfq Sep 12 2011 at 21:45
"I agree you don't have to compute it along a geodesic, but it's what is it morally supposed to mean": I don't really agree with the second half. It's just a directional derivative associated with the tangent vector. A geodesic works, but in this case plays no special role. So mentioning it is misleading. It is true that when you first learn about directional derivatives on $R^n$, you tend to define them in terms of straight lines. However, it is rather important in differential geometry to understand that straight lines are not special when computing or defining a directional derivative. – Deane Yang Sep 13 2011 at 9:49
The Lie derivative is based on a Lie group (or Lie algebra) which acts on the manifold. This derivative cannot be defined at just one point because the action cannot be defined at a point, even if you explicitly give the direction at that point. On the other hand, using a connection, the covariant derivative can be defined pointwise. I think this is the main technical difference between them.
I'm not sure what you mean by "defined pointwise". With either type of derivative, you need to know something about the vectors involved at more than one point. So you need to be more precise about what the distinction is. – Deane Yang Sep 12 2011 at 19:53
The covariant derivative is the analogue of the directional derivative in the $\mathbb{R}^n$ case. So if we fix a connection and assign a direction to a point, the covariant derivative at that point is well-defined. But for the Lie derivative, one direction is not enough. We have to specify the vector field: $L_X(f)$ might not equal $L_Y(f)$ even if $X(p)=Y(p)$. – Xiao Xinli Sep 12 2011 at 20:20
Yes, that's a good clarification of what you wrote. – Deane Yang Sep 12 2011 at 20:30
I suppose you mean the action of the Lie algebra of the diffeomorphism group, which is the Lie algebra $\mathfrak{X}(M)$ of vector fields on $M$, right? How does it clarify the picture about the Lie derivative? – Qfwfq Sep 12 2011 at 21:50
First let me say that what is intuitive to a physicist may not be so to a geometer and vice versa. To many physicists a connection is the potential of a field satisfying a gauge invariance. For this point of view I refer to vol. 1, Chap. 6, Sect. 41 of the three-volume book by Dubrovin-Fomenko-Novikov: Modern Geometry: Methods and Applications.
I find this point of view less intuitive only because I was trained as a mathematician.
The notion of covariant derivative appears naturally when one tries to solve the following problem. Suppose that $E\to M$ is a smooth vector bundle over a smooth manifold $M$. For example, $E$ could be the tangent bundle of $M$. We seek a notion of parallel transport. More precisely, this is a correspondence that associates to each smooth path
$$\gamma: [a,b]\to M$$
a linear map $T_\gamma$ from the fiber of $E$ at the initial point of $\gamma$ to the fiber of $E$ over the final point of $\gamma$
$$T_\gamma: E_{\gamma(a)}\to E_{\gamma(b)}.$$
The map $T_\gamma$ is called the parallel transport along the path $\gamma$. The assignment $\gamma\mapsto T_\gamma$ should satisfy two natural conditions.
(a) $T_\gamma$ should depend smoothly on $\gamma$. (The precise meaning of this smoothness is a bit technical to formulate, but in the end it means what your intuition tells you it should mean.)
(b) If $\gamma_0: [a,b]\to M$ and $\gamma_1:[b,c]\to M$ are two smooth paths such that the initial point of $\gamma_1$ coincides with the final point of $\gamma_0$, then we obtain by concatenation a path $\gamma:[a,c]\to M$ and we require that
$$T_\gamma= T_{\gamma_1}\circ T_{\gamma_0}.$$
Suppose we have a concept of parallel transport. Given a smooth path $\gamma:[0,1]\to M$ and a section $\boldsymbol{u}(t)\in E_{\gamma(t)}$, $t\in [0,1]$ of $E$ over $\gamma$, then we can define a concept of derivative of $\boldsymbol{u}$ along $\gamma$. More precisely
$$\nabla_{\dot{\gamma}} \boldsymbol{u}|_{t=t_0}=\lim_{\varepsilon \to 0} \frac{1}{\varepsilon} \left( T^{t_0,t_0+\varepsilon}_\gamma \boldsymbol{u}(t_0+\varepsilon)- \boldsymbol{u}(t_0)\right),$$
where $T^{t_0,t_0+\varepsilon}_\gamma$ denotes the parallel transport along $\gamma$ from the fiber of $E$ over $\gamma(t_0+\varepsilon)$ to the fiber of $E$ over $\gamma(t_0)$. The left-hand-side of the above equality is called the covariant derivative of $\boldsymbol{u}$ along the vector field $\dot{\gamma}$ determined by the parallel transport. Thus, a choice of parallel transport leads to a concept of covariant derivative.
Conversely, a covariant derivative $\nabla$ leads to a parallel transport. Given a smooth path $\gamma:[0,1]\to M$ the parallel transport
$$T_{\gamma}: E_{\gamma(0)}\to E_{\gamma(1)}$$
is defined as follows. Fix $u_0\in E_{\gamma(0)}$. Then there exists a unique section $\boldsymbol{u}(t)$ of $E$ over $\gamma$ satisfying
$$\boldsymbol{u}(0)=u_0,\;\;\nabla_{\dot{\gamma}}\boldsymbol{u}(t)=0,\;\;\forall t\in [0,1].$$
We then set
$$T_\gamma u_0:= \boldsymbol{u}(1).$$
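To make the second construction concrete, here is a sketch (mine, not part of the answer) that integrates the parallel-transport equation $\nabla_{\dot{\gamma}}\boldsymbol{u} = 0$ for the Levi-Civita connection on the round unit sphere, along the latitude circle $\theta = \theta_0$. The transported vector returns rotated by the classical holonomy angle $2\pi\cos\theta_0$:

```python
import math

# Parallel transport on the unit sphere (coords theta, phi; metric diag(1, sin^2 theta))
# along gamma(t) = (theta0, t), t in [0, 2*pi], by integrating
#   du^i/dt + Gamma^i_{jk} u^j dgamma^k/dt = 0.
theta0 = math.pi / 3
s, c = math.sin(theta0), math.cos(theta0)

def rhs(u):
    ut, up = u   # components (u^theta, u^phi)
    # Christoffels used: Gamma^theta_{phi phi} = -sin*cos, Gamma^phi_{theta phi} = cos/sin
    return (s * c * up, -(c / s) * ut)

def rk4_step(u, h):
    k1 = rhs(u)
    k2 = rhs((u[0] + h/2*k1[0], u[1] + h/2*k1[1]))
    k3 = rhs((u[0] + h/2*k2[0], u[1] + h/2*k2[1]))
    k4 = rhs((u[0] + h*k3[0], u[1] + h*k3[1]))
    return (u[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            u[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

u, n = (1.0, 0.0), 2000
h = 2 * math.pi / n
for _ in range(n):
    u = rk4_step(u, h)
# Holonomy angle 2*pi*cos(theta0) = pi here, so the vector returns flipped:
print(u)  # ~ (-1.0, 0.0)
```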
http://crypto.stackexchange.com/questions/2303/in-rsa-rationale-for-prime-p-with-p-1-having-prime-factor-u-with-u-1-ha?answertab=oldest
# In RSA, rationale for prime $p$ with $p-1$ having prime factor $u$ with $u-1$ having large prime factor?
In the 1978 RSA paper, it is recommended, among other things, to choose primes $p$ such that $(p-1)$ has a large prime factor $u$. This was motivated by Pollard's p-1 algorithm. Further, the authors state:
Additional security is provided by ensuring that $(u−1)$ also has a large prime factor.
What was the motivation for that?
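(For context, a minimal sketch of Pollard's $p-1$ method with a made-up toy modulus, mine rather than the question's: the method succeeds exactly when $p-1$ is $B$-smooth for some prime $p \mid n$, which is what a large prime factor of $p-1$ prevents.)

```python
from math import gcd

def pollard_p_minus_1(n, B):
    # Compute a = 2^(B!) mod n; if p-1 divides B! for some prime p | n,
    # then a = 1 (mod p) by Fermat's little theorem, and gcd(a-1, n) reveals p.
    a = 2
    for j in range(2, B + 1):
        a = pow(a, j, n)
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# Toy modulus: p = 2003 (p-1 = 2*7*11*13 is 13-smooth) times q = 2027
# (q-1 = 2*1013 has the large prime factor 1013, so q resists the method).
print(pollard_p_minus_1(2003 * 2027, 13))   # 2003
```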
It should be noted that elliptic curve factorization has made this security requirement redundant. On the other hand, ensuring `(p - 1)` has a large prime factor requires very little extra effort. – Brett Hale Jul 6 '12 at 8:44
## 1 Answer
This issue, and its history, was discussed at length in Silverman and Rivest. The relevant passage here is in Section 6, which I quote:
In 1977 Simmons and Norris [53] discussed the following "cycling" or "superencryption" attack on the RSA cryptosystem: given a ciphertext C, consider decrypting it by repeatedly encrypting it with the same public key used to produce it in the first place, until the message appears. Thus, one looks for a fixed point of the transformation of the plaintext under modular exponentiation. Since the encryption operation effects a permutation of $\mathbb{Z}_n = \{0,1,\ldots,n-1\}$, the message can eventually be obtained in this manner. Rivest [46] responds to their concern by (a) showing that the odds of success are minuscule if the n is the product of two $p^{--}$-strong primes, and (b) arguing that this attack is really a factoring algorithm in disguise, and should be compared with other factoring attacks.
http://en.wikipedia.org/wiki/Defective_matrix
# Defective matrix
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.
A defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ with algebraic multiplicity $m > 1$ (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with λ. However, every eigenvalue with multiplicity m always has m linearly independent generalized eigenvectors.
A Hermitian matrix (or the special case of a real symmetric matrix) or a unitary matrix is never defective; more generally, a normal matrix (which includes Hermitian and unitary as special cases) is never defective.
## Jordan block
Any Jordan block of size 2×2 or larger is defective. For example, the n × n Jordan block,
$J = \begin{bmatrix} \lambda & 1 & \; & \; \\ \; & \lambda & \ddots & \; \\ \; & \; & \ddots & 1 \\ \; & \; & \; & \lambda \end{bmatrix},$
has an eigenvalue, λ, with multiplicity n, but only one distinct eigenvector,
$v = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$
## Example
A simple example of a defective matrix is:
$\begin{bmatrix} 3& 1 \\ 0 & 3 \end{bmatrix}$
which has a double eigenvalue of 3 but only one distinct eigenvector
$\begin{bmatrix} 1 \\ 0 \end{bmatrix}$
(and constant multiples thereof).
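A short sketch (not from the article) making the defect concrete: the geometric multiplicity of $\lambda = 3$ is $\dim\ker(A - 3I) = 2 - \operatorname{rank}(A - 3I)$, which comes out smaller than the algebraic multiplicity 2.

```python
# For A = [[3, 1], [0, 3]], eigenvalue lambda = 3 has algebraic multiplicity 2
# but only a 1-dimensional eigenspace, so A is defective.
def rank_2x2(m):
    (a, b), (c, d) = m
    if a * d - b * c != 0:
        return 2
    return 1 if any(x != 0 for row in m for x in row) else 0

A, lam = [[3, 1], [0, 3]], 3
B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]          # A - lambda*I = [[0, 1], [0, 0]]
geometric_mult = 2 - rank_2x2(B)        # dim ker(A - lambda*I)
print(geometric_mult)                   # 1 < 2, confirming the single eigenvector
```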
## References
• Strang, Gilbert (1988). Linear Algebra and Its Applications (3rd ed.). San Diego: Harcourt. ISBN 970-686-609-4.
http://mathoverflow.net/questions/60034/is-there-a-continuous-function-on-f-mathbbr-rightarrow-mathbbr-with-unco/60061
Is there a continuous function $f:\mathbb{R} \rightarrow \mathbb{R}$ with uncountably many turning points?
I was thinking about the statement "if f is continuous on the interval I, there is not necessarily an interval J in I on which f is monotone", and this led me to the question "does there exist a continuous function $f:\mathbb{R} \rightarrow \mathbb{R}$ that has uncountably many turning points?" By a turning point I mean a point (x, f(x)) such that there exists an open ball around that point within which f(x) is either the highest or lowest value.
e.g. $f(x)=\sin(x)$ has countably many turning points, as opposed to $f(x)=x^2$, which has one.
I can't think of a reason that convinces me it's impossible, yet I can conceptualize a function that does this. Is it impossible, or does such a function exist? I certainly get the impression this is impossible . . .
You need to be more specific. The (graph of the) real-valued function f(x,y) = x - x^2 has uncountably many points (x,y) with a partial derivative of 0 and second partial negative. It is likely there are 2-D versions of Brownian motion which might come closer to what you actually intend to visualize. Gerhard "Ask Me About System Design" Paseman, 2011.03.29 – Gerhard Paseman Mar 30 2011 at 5:55
Please use the "edit" link below the question, and describe the definition of "turning point" that you are using. – S. Carnahan♦ Mar 30 2011 at 11:11
Sorry for being so late to edit and for being vague, I think the definition of turning point I use above makes it impossible. – Kate Mar 30 2011 at 12:20
Still confusing. You ask about functions on ${\bf R}^2$ but your examples are of functions on $\bf R$ - only the graph is in ${\bf R}^2$. So what do you mean? – Gerry Myerson Mar 30 2011 at 12:24
Ah - you answered my question while I was typing it in. Thanks. – Gerry Myerson Mar 30 2011 at 12:25
2 Answers
Of course you want to rule out the constant function, so you probably mean that there is a unique highest and lowest point in the neighborhood. Assuming this, with your new definition of turning point, you can choose your neighborhoods to be intervals with rational endpoints. This will force the number of turning points to be countable.
Thanks, I will check that out. I realized this when I defined turning points more precisely. Thanks for helping despite the problem not being quite up to scratch! – Kate Mar 30 2011 at 12:43
I do not follow this argument. You can choose the neighborhoods to be rational intervals, but how do you know that such a neighborhood contains only one turning point? – Michael Renardy Mar 30 2011 at 13:13
@Michael: You don't know that such a neighborhood N contains only one turning point, but you do know that at most one point attains the maximum of f in N and at most one attains the minimum of f in N. Any other turning points, even if they're in N, will have had other neighborhoods N' assigned to them --- neighborhoods in which they achieve the maximum or minimum value of f. – Andreas Blass Mar 30 2011 at 13:48
It seems to me that two dimensional Brownian motion is the example you are looking for. Can you please be more precise about what you mean by a turning point?
http://mathoverflow.net/questions/58769/orthogonal-and-parallel-morphisms
## “orthogonal” and “parallel” morphisms?
Let $\mathbf C$ a category with an initial object named $0$.
Is there a name for the pair of arrows $f,g\colon A\to B$ such that the unique arrow $0\to A$ is their equalizer? And dually, is there a special name for $f,g\colon A\to B$ such that the coequalizer is $B\to 1$, when $1$ is the terminal object of $\mathbf C$? Finally, is it useful to name them? :)
I can figure out how it works in the case where $\mathbf C$ is concrete: I want to capture the fact that a pair of arrows is "everywhere equal" (if "coker(f,g) = the whole") or "nowhere equal" (if "ker(f,g) = nothing unnecessary").
I have no ideas for the general situation, and I'm searching for references (something makes me think of Lawvere, but I'm not able to recover anything).
Thanks a lot!
The former might be called "having disjoint images", at least if C is extensive. I think "everywhere equal" is the wrong intuition for the dual notion; for instance, $id_N\colon N\to N$ and $(+1)\colon N\to N$ have both properties. – Mike Shulman Mar 17 2011 at 18:14
I agree with Mike's comment about the intuition "everywhere equal", and that the former is something to do with disjointness. In fact Diers calls a pair with the latter property "codisjointed". – Steve Lack Mar 18 2011 at 12:19
For the first, surely "disjoint" (or even just "unequal" or "apart") would be enough: it's all about limits and no "image" need ever be formed. I wonder what happens under Stone duality if we apply this to parallel homomorphism that agree only on constants. – Paul Taylor Mar 19 2011 at 18:23
For the dual property, the situation in $\bf Set$ is a binary relation whose equivalence closure is the total relation, or "indiscriminate" as I called it in my book, so maybe we could call this an indiscriminate pair. We could also think of the relation dynamically: it gets you from anywhere to anywhere else, which is called "transitive" in the theory of permutation groups. However, some more examples in other categories would be useful before fixing a name. – Paul Taylor Mar 19 2011 at 18:27
Further to my objection to "disjoint <i>image</i>", consider the identity and swap maps $2\rightrightarrows 2$. These have the property of the question. However, their images are both $2\rightarrowtail 2$ and the intersection of these is again the whole of $2$, not $0$. – Paul Taylor Mar 19 2011 at 19:50
http://mathhelpforum.com/number-theory/132423-jacobi-symbol.html
# Thread:
1. ## Jacobi Symbol
Evaluate the Jacobi symbol ((n−1)(n+1)/n) for any odd natural number n.
Trying out some numbers, I THINK it alternates between 1 and -1, but how can we PROVE it formally?
Any help is appreciated!
[also under discussion in math link forum]
2. $\left(\frac{(n-1)(n+1)}{n}\right) = \left(\frac{n-1}{n}\right)\left(\frac{n+1}{n}\right) = \left(\frac{-1}{n}\right)\left(\frac{1}{n}\right) = \left(\frac{-1}{n}\right)$
For $n$ odd, $\left(\frac{-1}{n}\right) = (-1)^\frac{n-1}{2} = \begin{cases} \;\;\,1 & \text{if }n \equiv 1 \pmod 4\\ -1 &\text{if }n \equiv 3 \pmod 4\end{cases}$.
For $n$ even, take $n=2k$, then $\left(\frac{-1}{n}\right) = \left(\frac{-1}{2k}\right) = \left(\frac{-1}{2}\right)\left(\frac{-1}{k}\right) = \left(\frac{-1}{k}\right) = \begin{cases} \;\;\,1 & \text{if }k \equiv 1 \pmod 4\\ -1 &\text{if }k \equiv 3 \pmod 4\end{cases}$.
3. Originally Posted by chiph588@
$\left(\frac{(n-1)(n+1)}{n}\right) = \left(\frac{n-1}{n}\right)\left(\frac{n+1}{n}\right) = \left(\frac{-1}{n}\right)\left(\frac{1}{n}\right) = \left(\frac{-1}{n}\right)$
For $n$ odd, $\left(\frac{-1}{n}\right) = (-1)^\frac{n-1}{2} = \begin{cases} \;\;\,1 & \text{if }n \equiv 1 \pmod 4\\ -1 &\text{if }n \equiv 3 \pmod 4\end{cases}$.
For $n$ even, take $n=2k$, then $\left(\frac{-1}{n}\right) = \left(\frac{-1}{2k}\right) = \left(\frac{-1}{2}\right)\left(\frac{-1}{k}\right) = \left(\frac{-1}{k}\right) = \begin{cases} \;\;\,1 & \text{if }k \equiv 1 \pmod 4\\ -1 &\text{if }k \equiv 3 \pmod 4\end{cases}$.
Thank you!
But I thought the Jacobi symbol is defined only for ODD positive integers at the bottom. In your proof, why is there a case where "n" is even??? How is this possible??
4. Originally Posted by kingwinner
Thank you!
But I thought the Jacobi symbol is defined only for ODD positive integers at the bottom. In your proof, why is there a case where "n" is even??? How is this possible??
Oops! You're right!
5. But I'm concerned with one special case. For the case n=1, ((n−1)(n+1)/n)=(0/1)
Is (0/1)=1 or (0/1)=0 ?? Which one is the correct answer and why?
Thanks!
6. Originally Posted by kingwinner
But I'm concerned with one special case. For the case n=1, ((n−1)(n+1)/n)=(0/1)
Is (0/1)=1 or (0/1)=0 ?? Which one is the correct answer and why?
Thanks!
$\left(\frac{a}{n}\right) = \begin{cases} \;\;\,0 & \mbox{if } \gcd(a,n) \ne 1 \\ \pm 1 & \mbox{if } \gcd(a,n) = 1 \end{cases}$
So sub in $0$ and $1$ and see what you get.
7. Originally Posted by chiph588@
$\left(\frac{a}{n}\right) = \begin{cases} \;\;\,0 & \mbox{if } \gcd(a,n) \ne 1 \\ \pm 1 & \mbox{if } \gcd(a,n) = 1 \end{cases}$
So sub in $0$ and $1$ and see what you get.
gcd(0,1)=1, so that rule says that (0/1)=+1 OR -1, but how do we know whether it is +1 or -1?
8. Jacobi symbol
Apparently the answer is $0$. I haven't had too much exposure to the Jacobi symbol so I can't really tell you why. Check out the site for yourself though.
9. $\left( \frac{0}{1} \right)=1$ because the set of prime factors of $1$ is empty. if $n > 1$ is an odd integer, then $\left( \frac{0}{n} \right)=0.$
10. Originally Posted by NonCommAlg
$\left( \frac{0}{1} \right)=1$ because the set of prime factors of $1$ is empty. if $n > 1$ is an odd integer, then $\left( \frac{0}{n} \right)=0.$
Is there any reason why (0/1)=1?? Is this simply because, by convention, we define it to be that way?
1 has no prime factorization, so the product is empty. Is it conventional to define the "empty" product to be equal to +1??
Also, is it true that, by definition, (a/1)=1 for any integer a?
Can someone clarify this? Thank you!
11. Originally Posted by kingwinner
Is there any reason why (0/1)=1?? Is this simply because, by convention, we define it to be that way?
1 has no prime factorization, so the product is empty. Is it conventional to define the "empty" product to be equal to +1??
Also, is it true that, by definition, (a/1)=1 for any integer a?
Can someone clarify this? Thank you!
the answer is "yes" to both your questions.
12. Originally Posted by NonCommAlg
the answer is "yes" to both your questions.
Is there any real reason why we define (a/1)=1 for any integer a?
Why not define it to be 0? why not -1?
Thanks for answering!
13. Originally Posted by kingwinner
Is there any real reason why we define (a/1)=1 for any integer a?
Why not define it to be 0? why not -1?
Thanks for answering!
It could perhaps be to avoid making annoying "special cases" in theorems.
Example (which is not linked to your question, but just to show you) :
Why do we define 1 as not being prime ?
If we defined 1 as being prime, we would have to reformulate the Fundamental Theorem of Arithmetic in a rather heavy way, stating that "apart from adding 1's, the prime decomposition of a number is unique without taking into account the order" ...
And since there is no particular reason to define 1 as a prime, we just prefer to say it isn't one.
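The case analysis in this thread can be sanity-checked numerically. The sketch below is not from the thread; it is one standard algorithm for the Jacobi symbol, built from quadratic reciprocity and the supplementary law for $\left(\frac{2}{n}\right)$. It confirms both the pattern $\left(\frac{(n-1)(n+1)}{n}\right) = (-1)^{(n-1)/2}$ for odd $n > 1$ and the empty-product convention $\left(\frac{0}{1}\right) = 1$ discussed above:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for a positive odd integer n."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        # Pull out factors of 2 using (2/n) = (-1)^((n^2 - 1)/8).
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        # Quadratic reciprocity for the Jacobi symbol.
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    # n == 1 means the empty product: (a/1) = 1.  Otherwise gcd > 1 gives 0.
    return result if n == 1 else 0

print(jacobi(0, 1))                      # 1, by the empty-product convention
for n in range(3, 100, 2):
    assert jacobi((n - 1) * (n + 1), n) == (1 if n % 4 == 1 else -1)
```

Note that the loop covers composite odd n as well, where the Legendre-symbol argument alone would not apply directly.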
http://mathoverflow.net/questions/33438/projection-of-the-co-derivative-co-derivative-of-the-projection
## projection of the co-derivative == co-derivative of the projection ?
Hey,
here is the formal question.
M is a Riemannian submanifold of N. a, b are vector fields such that for each p$\in$M, $a_p$, $b_p$ lie in $T_p$M $\subset$ $T_p$N.
prove
$\nabla^M_b$a = pr($\nabla^N_b$a)
where pr is the projection function pr:$T_p$N$\rightarrow T_p$M and $\nabla^N$ and $\nabla^M$ are the covariant derivative operators (from the Riemannian connections) on N and M respectively.
I don't really understand why this is not immediate from the definitions. The covariant derivative in a manifold is just the ordinary derivative, since there is no need to project onto the manifold afterwards: the derivative of a vector field on a submanifold will surely already lie entirely in the manifold, so the projection is just the identity. Thus when I project this vector onto $T_p$M I will of course get the covariant derivative on M, which is also just the derivative projected onto M. Maybe what I'm asked to prove is that the derivative of the vector field a with respect to b equals the covariant derivative in N, since the derived vector is 'fully' in N?
This appears to be a homework problem. I suggest consulting either a professor or a classmate. – Deane Yang Jul 27 2010 at 2:40
## 1 Answer
Homework or not, it is not true that your covariant derivative in N is already parallel to the submanifold M; indeed the normal component eliminated by the projection is the second fundamental form of M in N. Consider the simple example of the 2-sphere in $N=\mathbb{R}^3$; if X is a normal (=radial) vector field on the sphere then `$(\nabla^N_b a)\cdot X=-a\cdot(\nabla^N_b X)$` is proportional to the inner product $a\cdot b$. To prove the projection formula, it suffices to (i) observe that the projected connection respects the induced metric on M, and (ii) prove that it has zero torsion, which is related to the fact that the second fundamental form is symmetric in a,b.
http://www.conservapedia.com/Real_analysis
# Real analysis
### From Conservapedia
This article/section deals with mathematical concepts appropriate for a student in late high school or early university.
Real analysis is a field in mathematics that focuses on the set of real numbers, their properties, sequences and functions. Included in this branch of mathematics are the concepts of limits and convergence, calculus, and properties of real-valued functions such as continuity. It also includes measure theory.
For the purposes of this article, "analysis" will be limited to the generalization and extension of the concepts of calculus, using the concepts of elementary point-set topology.
The reader should be quite familiar with the concepts of calculus, especially limits and continuity. In particular, the reader should be comfortable with the ramifications of the phrase "for every epsilon".
## Open sets
Open sets (and, by extension, closed sets, which are just the complements of open sets) are the fundamental concept of analysis. Analysis and topology are really just the study of open sets.
Before giving the definition of open sets in Euclidean space, we present some examples. Readers who are aware of the general intuitive notion of open sets should find these examples familiar.
### In one dimension
The simplest open sets in 1-dimensional Euclidean space (formally called $\mathbb{R}^1$; informally called the real numbers of the "real line") are open intervals. An open interval consists of those numbers lying strictly between two endpoints a and b. In set-theoretic notation:
$\{ x\ |\ a < x < b \}\,$
A shorter notation for this set consists of the two endpoints in parentheses:
$( a, b )\,$
A closed interval (we will have more to say about closed sets later) would include the endpoints. It is commonly denoted with brackets:
$[ a, b ] = \{ x\ |\ a \le x \le b \}$
An interval that includes one endpoint but not the other is called semi-open:
$[ a, b ) = \{ x\ |\ a \le x < b \}$
$( c, d ] = \{ x\ |\ c < x \le d \}$
When drawing pictures of intervals, those same symbols are typically used:
Need a picture here!
Open intervals are not the only open sets. Any union of open intervals is an open set. For example:
$\bigcup_{N \textrm{\ is\ an\ integer\ } \geq 2} (N, N+1/N)$
Bizarrely defined sets like the one above are commonly used as examples and counterexamples in analysis and topology.
### In two or more dimensions
In two or more dimensions the situation becomes more complicated, because even simple open sets can come in an endless variety of shapes. The fundamental open set (equivalent to an open interval) is the open neighborhood, also called an open ball. An open neighborhood has a center point and a nonzero radius, and is the set of all points whose distance from the center is strictly less than that radius. In set-theoretic notation:
$\{ x\ |\ \|x-C\| < r \}\,$
The double-stroke absolute value sign is the norm or the metric distance function. In the common case it is the Euclidean/Pythagorean distance:
$\|a-b\| = \sqrt{(a_1-b_1)^2 + (a_2-b_2)^2}\,$ in two dimensions (similarly for higher dimensions)
The double-stroke absolute value sign is similar to the usual absolute value operation, generalized to arbitrary dimensions or other metric spaces.
It is easy to see that, in two dimensions, an open neighborhood is the interior of a circle. It does not include the actual boundary of the circle, because it consists of the points whose distance from C is strictly less than r. This point is crucial—the whole subject of analysis and topology depends on it!
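The strictness of the inequality is easy to demonstrate in code. This small sketch (mine, not from the article) tests membership in an open ball using `math.dist`, Python's Euclidean distance function; a point exactly on the boundary is excluded:

```python
import math

def in_open_ball(x, center, r):
    """Strict inequality: points ON the boundary are excluded."""
    return math.dist(x, center) < r

C = (0.0, 0.0)
print(in_open_ball((0.5, 0.5), C, 1.0))   # True  (distance ~0.707, inside)
print(in_open_ball((1.0, 0.0), C, 1.0))   # False (distance exactly 1: boundary)
```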
To draw a picture of an open neighborhood, use a circle bounded by a dotted line:
Need a picture here!
(To make a closed ball, the formula would be:
$\{ x\ |\ \|x-C\| \le r \}\,$
and the picture would be a solid circle. But open neighborhoods are the important sets from a theoretical standpoint.)
## Definition of open set
Here is the proper theoretical definition:
Definition: A set is open if it contains a neighborhood of each of its points.
What this means is that a set X is open if, for every point x in X, there is a neighborhood N such that $x \in N$ and $N \subseteq X$. This construction is shown in the following diagram:
Need a picture here!
If the point x were allowed to lie exactly on the edge of X, it wouldn't be possible to draw a nonzero neighborhood around x that lies in X. So the important feature of X's openness is that no point can lie exactly on its edge. Every point in X must be some finite distance back from the edge, which makes it possible to draw a neighborhood around it.
## Theorems
Here are a few extremely fundamental and far-reaching theorems. Some of them are surprisingly simple:
Theorem: Neighborhoods are open sets.
Proof: Suppose a neighborhood has center C and radius r. If a point x is in that neighborhood, its distance from C must be strictly less than r, call it k.
$\|x-C\| = k,\ \ \ k < r\,$
Place a new neighborhood, of radius (r − k) / 2, around x. Every point in that neighborhood has a distance less than k + (r − k) / 2 from C. That distance is less than r, so every point in the new neighborhood is in the original neighborhood, so the new neighborhood lies within the original one.
Theorem: Any union of open sets, including unions of an infinite number of open sets, is an open set.
Proof: If a point x lies in the union, it must lie within one of the constituent open sets. There must be a neighborhood of x contained in that constituent open set. That neighborhood must be contained in the union.
Theorem: The intersection of two open sets is an open set.
Proof: Let $X = P_1 \cap P_2$, and let $x \in X$. Then $x \in P_1$ and $x \in P_2$. Since P1 and P2 are open, there must be neighborhoods $N_1 \subseteq P_1$ and $N_2 \subseteq P_2$ that contain x. Whichever of those two neighborhoods has the smaller radius will be a subset of both P1 and P2, so it will be a subset of X.
This theorem can be extended for any finite intersection, but it does not work for infinite intersections. Here is an example:
Let Pi be an infinite sequence of ever-decreasing open intervals:
$P_i = \{ x\ |\ -1-1/i < x < 1+1/i \}\,$
for integers $i \ge 1$. The intersection of all of the $P_i$'s is the closed interval
$[ -1, 1 ]\,$
which is not open.
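A small numerical experiment (a sketch, not part of the article) makes this concrete: any point x with |x| > 1 is expelled from $P_i$ as soon as $1/i \le |x| - 1$, while every point of $[-1, 1]$ survives in all the $P_i$:

```python
def in_P(x, i):
    """Membership in the open interval P_i = (-1 - 1/i, 1 + 1/i)."""
    return -1 - 1/i < x < 1 + 1/i

# A point outside [-1, 1] is eventually expelled: 1.5 leaves at i = 2,
# since 1 + 1/2 = 1.5 and the interval is open.
first_exit = min(i for i in range(1, 100) if not in_P(1.5, i))
print(first_exit)                                    # 2

# The endpoint 1 belongs to every P_i, which is why the intersection
# is the CLOSED interval [-1, 1] rather than the open one.
print(all(in_P(1.0, i) for i in range(1, 10**5)))    # True
```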
So the topological rule of thumb is:
Any union of open sets is open.
Any finite intersection of open sets is open.
Theorem: The null set (empty set) is open.
Proof: It needs to contain a neighborhood of each of its points. But it has no points.
Theorem: The entire space is open.
Proof: We need a neighborhood of each point in the space. The neighborhood centered on that point, with radius 1, will do the trick.
This means that the real line is open. It is not an open interval, because that interval would have to be "$(-\infty, \infty)$", and infinity is not a number. The real line is an open set, because it is:
$\mathbb{R}^1 = \bigcup_{n}\ (n-1, n+1)$
over all integers n. (Infinite unions are allowed, even though infinity is not a number.)
Theorem: Every open set is a union of neighborhoods.
Proof: It contains a neighborhood of each of its points; those are its constituent neighborhoods.
This means that any open set in the plane, for example, is a union of open circles. (It is also the union of open rectangles, open diamonds, open 5-pointed stars, and so on. This is a consequence of the invariance of the metric in defining a topology.)
In the field of topology, a collection of open sets, whose unions comprise all of the open sets that exist, is called a basis. So what we have just shown is that the open neighborhoods (open intervals, open circles, open spheres, etc.) are a basis for the topology of finite-dimensional Euclidean spaces.
## Closed sets
Definition: A set is closed if its complement is open.
That's all there is to it.
Because of some simple theorems of set theory, including DeMorgan's laws, some of the preceding theorems relating to open sets can be reformulated for closed sets.
Any intersection of closed sets, including the intersection of an infinite number of closed sets, is closed.
Any union of a finite number of closed sets is closed.
The null set is closed.
The entire space (for example, the real line) is closed.
## Limit points, and the other definition of closed sets
Closed sets are sometimes given a different definition, as sets containing their limit points.
http://en.m.wikibooks.org/wiki/Trigonometry/Worked_Example:_Simplifying_Angles
Trigonometry/Worked Example: Simplifying Angles
Worked Examples in Simplifying Angles
Sign Changes
Sign changes (or otherwise): $\cos( -3x )$
We know $\cos( -t ) = \cos( t )$, so $\cos( -3x ) = \cos( 3x )$.
Sign changes (or otherwise): $\sin( 180^\circ - \theta )$
We know $\sin( -t ) = -\sin( t )$, so:
$\sin( 180^\circ - \theta ) = - \sin( \theta - 180^\circ)$
We swapped the order of the terms at the same time just to save having to write $-180^\circ + \theta$, saving us one plus sign! Of course we can do that because the sum of two terms does not depend on their order. We also know that shifting the argument of sine (or cosine) by $\pm 180^\circ$ inverts the sign. So we can now remove the $-180^\circ$ and invert the sign to get:
$\sin( 180^\circ - \theta ) = \sin \theta$
Sign changes (or otherwise): $\cos( 360^\circ - t )$
We know shifting by 180° inverts the sign, and shifting by 360° is shifting by 180° twice. Another way to think about it is that we have gone one complete revolution around the unit circle. Either way, the 360° in the expression makes no difference at all, so we have
$\cos( 360^\circ - t ) = \cos( -t )$
and we also know $\cos( -t ) = \cos( t )$, so
$\cos( 360^\circ - t ) = \cos t$
Sign changes (or otherwise): $\cos( -5x )\sin^2( 180^\circ - t )$
The minus in the $-5x$ will have no effect on the result, since it is 'buried' inside the cosine. Likewise the 180° shift and the minus in the sine will have no effect on the sign of the result: quite aside from the fact that they cancel each other, the sine is squared. (To spell that out: if we had got minus the sine of some expression, squaring would remove the negative sign again.) So:
$\cos( -5x )\sin^2( 180^\circ - t ) = \cos 5x \sin^2 t$
Cosine to Sine
Complementary angles are pairs of angles that add up to $\displaystyle 90^\circ$ or if we are using Radian measure, $\displaystyle \frac{\pi}{2}$.
In a right triangle the other two angles, the two that are not the right angle, are complementary to each other. From the definitions of cosine and sine, the cosine of an angle is the sine of the complementary angle, and the sine of an angle is the cosine of the complementary angle.
Complementary angles:
$\cos( 90^\circ - \theta ) = \sin \theta$
$\sin( 90^\circ - \theta ) = \cos \theta$
Cosine is an even function. Because cosine is even, $\cos( \theta - 90^\circ ) = \cos( 90^\circ - \theta )$, so $\cos( \theta - 90^\circ ) = \sin \theta$.
Sine is an odd function. Because sine is odd, $\sin( \theta - 90^\circ ) = -\sin( 90^\circ - \theta )$, so $\sin( \theta - 90^\circ ) = -\cos \theta$.
Adding 90° at a time: We can keep adding or subtracting 90° and switch between sine and cosine, possibly switching signs. We need to be careful to get the signs right. You can look at the graph to figure these out as needed, or just make sure you know about complementary angles, that sine is odd, that cosine is even, and that adding or subtracting 180° flips the sign.

For sine:
- $\sin( \theta - 180^\circ ) = -\sin \theta$ (adding 180° flips the sign)
- $\sin( \theta - 90^\circ ) = -\cos \theta$ (taking the negative, then the complementary angle: one sign flip)
- $\sin \theta = \sin \theta$
- $\sin( \theta + 90^\circ ) = \cos \theta$ (subtract 180°, then negative, then complementary angle: two sign flips)
- $\sin( \theta + 180^\circ ) = -\sin \theta$ (180° flips the sign once)
- $\sin( \theta + 270^\circ ) = -\cos \theta$ (subtract 360°, then negative, then complementary angle: three sign flips)
- $\sin( \theta + 360^\circ ) = \sin \theta$ (360° flips the sign twice)

For cosine, the step of taking the negative does not flip the sign, since cosine is even:
- $\cos( \theta - 180^\circ ) = -\cos \theta$ (adding 180° flips the sign)
- $\cos( \theta - 90^\circ ) = \sin \theta$ (taking the negative, then the complementary angle: no sign flips)
- $\cos \theta = \cos \theta$
- $\cos( \theta + 90^\circ ) = -\sin \theta$ (subtract 180°, then negative, then complementary angle: one sign flip)
- $\cos( \theta + 180^\circ ) = -\cos \theta$ (180° flips the sign once)
- $\cos( \theta + 270^\circ ) = \sin \theta$ (subtract 360°, then negative, then complementary angle: two sign flips)
- $\cos( \theta + 360^\circ ) = \cos \theta$ (360° flips the sign twice)
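All of the shift identities above can be verified numerically. This sketch (not part of the original page) checks each one at every whole degree using Python's `math` module; `math.isclose` with an absolute tolerance absorbs floating-point error:

```python
import math

# Each entry: (shift in degrees, the function of theta the shifted value equals)
sine_shifts = [
    (-180, lambda t: -math.sin(t)),
    (-90,  lambda t: -math.cos(t)),
    (0,    lambda t:  math.sin(t)),
    (90,   lambda t:  math.cos(t)),
    (180,  lambda t: -math.sin(t)),
    (270,  lambda t: -math.cos(t)),
    (360,  lambda t:  math.sin(t)),
]
cosine_shifts = [
    (-180, lambda t: -math.cos(t)),
    (-90,  lambda t:  math.sin(t)),
    (0,    lambda t:  math.cos(t)),
    (90,   lambda t: -math.sin(t)),
    (180,  lambda t: -math.cos(t)),
    (270,  lambda t:  math.sin(t)),
    (360,  lambda t:  math.cos(t)),
]

def check(shifts, f):
    """Assert f(theta + shift) matches the tabulated right-hand side."""
    for deg, rhs in shifts:
        for k in range(360):                       # sample every degree
            t = math.radians(k)
            assert math.isclose(f(t + math.radians(deg)), rhs(t), abs_tol=1e-9)

check(sine_shifts, math.sin)
check(cosine_shifts, math.cos)
print("all shift identities verified")
```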
Addition Formulas
Using Addition Formulas
http://mathhelpforum.com/algebra/94602-factoring-grouping.html
# Thread:
1. ## Factoring by grouping
Just a basic question, but how are you supposed to know an expression can be factored by grouping if the instructions do not point it out to you?
For instance, the expression $6xy^2+4xy+9xy+6x$ may be factored by grouping, but at first glance I'm almost certain I would miss it unless the instructions specifically stated "factor by grouping".
Many thanks for any tips.
2. Originally Posted by allyourbass2212
Just a basic question, but how are you supposed to know an expression can be factored by grouping if the instructions do not point it out to you?
For instance, the expression $6xy^2+4xy+9xy+6x$ may be factored by grouping; however, at first glance I'm almost certain I would miss it unless the instructions specifically stated "factor by grouping".
Many thanks for any tips.
Unfortunately there is not any definitive way to know whether an expression can be factorised by grouping. It just takes some experience.
3. Hello, allyourbass2212!
My rule is: If it has four or more terms, try "grouping".
Sometimes, we must rearrange the terms,
. . so some imagination may be required.
. . $\begin{array}{cc}\text{Example:} & x^2 + 3x + y^2 + 3y + 2xy \\ \\ \text{Rearrange:} & \underbrace{x^2 + 2xy + y^2} + \underbrace{3x + 3y} \\ \\ \text{Factor:} & (x+y)^2 + 3(x+y) \\ \\ \text{Factor:} & (x+y)(x+y+3) \end{array}$
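A grouping factorization can always be spot-checked numerically by evaluating both sides at a few points; a quick sketch in Python (sample values chosen arbitrarily):

```python
# Check that x^2 + 3x + y^2 + 3y + 2xy == (x + y)(x + y + 3) at sample points.
for x in (-2.0, 0.5, 3.0):
    for y in (-1.0, 2.0, 4.5):
        original = x**2 + 3*x + y**2 + 3*y + 2*x*y
        factored = (x + y) * (x + y + 3)
        assert abs(original - factored) < 1e-9, (x, y)
print("factorization verified at all sample points")
```

This does not prove the identity, but a mismatch at any point immediately exposes a factoring mistake.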
4. The important word in what Soroban said is "try"! Like so many other things in mathematics (or life for that matter!) we don't know what will work so we try different things until one works.
5. Thank you everyone for your insight into the matter!
http://physics.stackexchange.com/questions/47837/goldfish-perspective/47873
# Goldfish perspective
What does the world look like from the goldfish's point of view, from inside a spherical aquarium? If our eyes were inside, would we be able to see straight lines and focus on different objects, and what would a point source of light look like?
(elaborate on the curvatures, with or without the flat water above)
A point source of light still looks like a point source. – Inquisitive Dec 29 '12 at 9:48
## 2 Answers
Compound Fish Bowl/Fish Eye Lens System
The bowl would make a compound lens system with the fish's eye. First, I will assume that we have the following indices of refraction: $$n_{\text{air}}\\ n_{\text{glass}}\\ n_{\text{water}}.$$ Usually $n_{\text{air}} \approx 1$, $n_{\text{glass}} \approx 1.5$, and $n_{\text{water}} \approx 1.33$, but I'll work the general problem here. I will assume first-order geometric optics in the paraxial approximation for simplicity. If you wanted a more robust answer, this system could be modeled and analyzed in an optical ray-tracing program like Zemax or CODE V.
We have the following variables besides the indices of refraction shown in the figure below: $t_1$ is the distance from the object to the first spherical glass interface with radius of curvature $r_1$, $t_2$ is the distance from the glass interface to the spherical glass/water interface with radius of curvature $r_2$, and finally $t_3$ is the distance from the glass/water interface to the front principal plane of the eye of the fish. Note that the green color denotes the glass and the blue color denotes the water.
The following Gaussian optics equations will be used: $$\phi = \frac{n_2-n_1}{r}\\ \phi_{\text{tot}} = \phi_1 + \phi_2 - \tau\phi_1\phi_2\\ f_R' = n_{\text{water}}f_e$$ where $\phi$ is the power of a single surface, $\phi_{\text{tot}}$ is the power of two combined surfaces separated by the reduced distance $\tau = \frac{t}{n}$, and $f_e = \frac{1}{\phi}$ is the effective focal length.
The first step is to calculate the power of the air/glass interface and the power of the glass/water interface: $$\phi_1=\frac{n_{\text{glass}}-n_{\text{air}}}{r_1}\\ \phi_2=\frac{n_{\text{water}}-n_{\text{glass}}}{r_2}$$
This implies that: $$\phi_{\text{3}}=\frac{n_{\text{glass}}-n_{\text{air}}}{r_1}+\frac{n_{\text{water}}-n_{\text{glass}}}{r_2}-\frac{t_2}{n_{\text{glass}}}\cdot\frac{(n_{\text{glass}}-n_{\text{air}})(n_{\text{water}}-n_{\text{glass}})}{r_1 r_2}\\ =\frac{n_{\text{glass}}-1}{r_1}+\frac{n_{\text{water}}-n_{\text{glass}}}{r_2}-\frac{r_1-r_2}{n_{\text{glass}}}\cdot\frac{(n_{\text{glass}}-1)(n_{\text{water}}-n_{\text{glass}})}{r_1 r_2}\\ =\frac{n_{\text{water}}n_{\text{glass}}-n_{\text{water}}}{n_{\text{glass}}r_1}+\frac{n_{\text{water}}-n_{\text{glass}}}{n_{\text{glass}}r_2}\\ \approx \frac{0.443}{r_1}-\frac{0.113}{r_2}$$ since $t_2 = r_1-r_2$, with the last line using the numerical values of the indices of refraction specified above.
Now the fish eye will have power $\phi_{\text{fish}}$, so we can use the Gaussian equations once again: $$\phi_{\text{tot}} = \phi_3 + \phi_{\text{fish}} - \frac{t_3}{n_{\text{water}}}\phi_3\phi_{\text{fish}}$$ Plugging in $\phi_3$ yields: $$\phi_{\text{tot}} = \frac{n_{\text{water}}n_{\text{glass}}-n_{\text{water}}}{n_{\text{glass}}r_1}+\frac{n_{\text{water}}-n_{\text{glass}}}{n_{\text{glass}}r_2} + \phi_{\text{fish}} - \frac{t_3}{n_{\text{water}}}\left(\frac{n_{\text{water}}n_{\text{glass}}-n_{\text{water}}}{n_{\text{glass}}r_1}+\frac{n_{\text{water}}-n_{\text{glass}}}{n_{\text{glass}}r_2}\right)\phi_{\text{fish}}$$ This can probably be simplified, but I'm not going to spend the time to do so. If we plug in some numerical values, including the estimated indices of refraction listed above, we can get an answer: $$r_1=100mm\\ r_2=97mm\\ t_2=3mm\\ t_3=97mm$$ corresponding to a fish bowl 200mm (8in) in diameter, with the fish at the center and a glass thickness of 3mm. I compute the following values: $$\phi_1=0.0050/mm\\ \phi_2=-0.0018/mm\\ \phi_3=0.0033/mm\\ \phi_{\text{tot}}=0.0033+0.7619\cdot\phi_{\text{fish}}$$ This implies that the effective focal length of the compound system (fish bowl + fish eye) will be $$f_{e,\text{tot}}=\frac{1}{0.0033+0.7619\cdot\phi_{\text{fish}}}mm$$
I found the average focal length for a goldfish eye in "Role of the lens and vitreous humor in the refractive properties of the eyes of three strains of goldfish," Seltner et al. (1989), to be about $3mm$. Then we obtain $$f_{e,\text{tot}}=\frac{1}{0.0033+\frac{0.7619}{3}}mm=3.89mm.$$
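The chain of Gaussian-optics formulas above is easy to slip up on by hand, so here is a short Python sketch (using the same numbers: $r_1=100mm$, $r_2=97mm$, $t_2=3mm$, $t_3=97mm$, and a $3mm$ fish-eye focal length) that reproduces the quoted values:

```python
# Compound fish-bowl + fish-eye system in the paraxial approximation.
n_air, n_glass, n_water = 1.0, 1.5, 1.33
r1, r2 = 100.0, 97.0        # radii of curvature, mm
t2, t3 = 3.0, 97.0          # glass thickness and glass-to-eye distance, mm
phi_fish = 1.0 / 3.0        # goldfish eye power, 1/mm (f_e = 3 mm)

# Single-surface powers: phi = (n2 - n1) / r
phi1 = (n_glass - n_air) / r1
phi2 = (n_water - n_glass) / r2

# Combine the two bowl surfaces over the reduced thickness tau = t2 / n_glass
phi3 = phi1 + phi2 - (t2 / n_glass) * phi1 * phi2

# Combine bowl and eye over the reduced distance t3 / n_water
phi_tot = phi3 + phi_fish - (t3 / n_water) * phi3 * phi_fish

print(f"phi1 = {phi1:.4f}/mm, phi2 = {phi2:.4f}/mm, phi3 = {phi3:.4f}/mm")
print(f"f_e,tot = {1.0 / phi_tot:.2f} mm")  # ~3.89 mm, matching the value above
```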
What if the fish was really close to the glass, say $t_3=10mm$? Then $$\phi_{\text{tot}}=0.0033+0.9752\cdot\phi_{\text{fish}}=0.3284/mm,$$ i.e., $$f_{e,\text{tot}}=3.05mm.$$ or if the fish is looking the other way, i.e., $t_3 = 187mm$ then $$f_{e,\text{tot}}=5.56mm.$$
The above is the most extreme case. What is the effect? Note that the above focal lengths are the air (or optical) equivalent lengths and not the actual focal lengths.
Effect of Changing the Focal Length
Most focusing systems (including cameras and eyes) are set up such that the focal length doesn't change a whole lot, and for far-away objects not much adjustment is needed to attain focus. We will assume this here (i.e., that the focal plane, lens system, or crystalline lens changes such that the basic focal length remains unchanged to maintain focus). We shall assume that the fish can normally focus on something only $10mm$ away; this would set the range of $z'$ to be $3mm < z' < 4.3mm$.
We now need to calculate the offset of the combined system principal plane from the principal plane of the original fish eye using: $$\frac{d'}{n_{\text{water}}} = -\frac{\phi_3}{\phi_{\text{tot}}}\frac{t_3}{n_{\text{water}}}\\ \implies d' = -\frac{\phi_3}{\phi_{\text{tot}}}t_3$$ We assume above that the rear principal plane of the fish eye system is actually in an index of refraction of water. When the fish is at the extreme $f_{e,\text{tot}}=5.56mm$, then $$d' = -3.39mm.$$ When the fish is in the center, then $$d' = -1.25mm.$$
The air equivalent original fish eye had $f_{e,\text{fish}}=3mm$, and the extreme part of the fish on one side of the bowl changed the effective focal length to $f_{e,\text{tot}}=5.56mm$. Let's assume that the fish is viewing an object at $1000mm$ away. We can use the lens equation to get the difference between the two focal lengths. $$\frac{1}{z'}=\frac{1}{z}+\frac{1}{3mm}\text{vs.}\frac{1}{z'}=\frac{1}{z}+\frac{1}{5.56mm}\\ \implies \frac{1}{z'}=-\frac{1}{1000mm}+\frac{1}{3mm}\text{vs.}\frac{1}{z'}=-\frac{1}{1000mm}+\frac{1}{5.56mm}\\ \implies z'=3.009mm\quad\text{vs.}\quad z'=5.59mm$$ which means the magnifications would be : $$m_{\text{fish}}=-0.003009\\ m_{\text{tot}}=-0.005591$$ The new effective focal length must take into account the shift in the principal plane, the focal plane would be located $5.59mm-3.39mm=2.2mm$. This would result in the fish seeing objects at 1000mm as blurry (defocused) outside the bowl, when the fish is near the edge of the bowl and looking through the other side of the bowl, because the fish cannot shift the retina to the 2.2mm position (infinity focus is at the 3mm position). The fish could begin to see things when the compound system has a $z'=3mm+3.39mm = 6.39mm$ which corresponds to $z \approx 43mm$, i.e., when objects are about $43mm$ from the front principal plane in air (I don't know if this would be inside or outside the bowl in physical space, likely inside, so it cannot actually happen).
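The image-distance comparison in the paragraph above can be reproduced with the thin-lens equation; a minimal Python sketch (same sign convention, object at $z=-1000mm$):

```python
# Thin-lens imaging: 1/z' = 1/z + 1/f_e, with the object at z = -1000 mm.
z = -1000.0  # mm

for f_e in (3.0, 5.56):  # bare fish eye vs. extreme compound system, mm
    z_img = 1.0 / (1.0 / z + 1.0 / f_e)
    m = z_img / z        # transverse magnification m = z'/z
    print(f"f_e = {f_e} mm: z' = {z_img:.3f} mm, m = {m:.6f}")
# prints z' = 3.009 mm (m = -0.003009) and z' = 5.591 mm (m = -0.005591)
```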
What about when the goldfish is in the middle of the bowl, i.e., $f_{e,\text{tot}}=3.89mm$? Then something at a distance of $1000mm$ has $z'=3.91mm$, and the focal plane is at $2.66mm$ from the rear principal plane of the original fish eye system. The fish could begin to see things when the compound system has a $z'=3mm+1.25mm = 4.25mm$ which corresponds to $z \approx 45mm$, nearly the same as the extreme case.
In both cases it is likely the goldfish will see objects as blurry.
If the goldfish is only $20mm$ from the edge, then objects closer than about $122mm$ can be seen without being blurry. If he gets right against the edge then the effect is minimized and he can probably see quite far.
There also could be TIR, but it would be at very wide angles if at all... – daaxix Dec 30 '12 at 1:04
I can't follow your deductions, so I won't dispute them. But there's something fishy about at least one of your results: from the very center of the fishbowl, the shortest light path to any point is straight to it. So I have trouble figuring out how it could affect in any way what the goldfish sees, because light can only get there from outside the bowl in a straight line, as if there was no bowl at all... – Jaime Dec 30 '12 at 4:23
@Jaime, the shortest light path must take into account the index of refraction of the material. It is the shortest in "optical space" not physical space. In order for the shortest path to coincide with the center of the fishbowl the incident light would have to be in the form of a converging spherical wave, centered exactly on center of the fish bowl. You can think of a single point, somewhere outside of the fish bowl, generating an expanding spherical wave (equivalently a set of radial rays from the source). The ray connecting the point to the center of the fish bowl, will be the shortest – daaxix Dec 30 '12 at 7:14
but, the other rays will refract, since they won't be perpendicular to the fish bowl surface. – daaxix Dec 30 '12 at 7:14
We are ray tracing, diffraction doesn't play any role here, so there's no need to consider spherical waves. So let me insist: the shortest light path, taking into account refractive indices and all, from any point outside the bowl to its center, is a straight line. And so any light reaching the center of the bowl does so geometrically as if there was no bowl at all, and an eye located exactly there will see objects outside undistorted. My guess is that the paraxial approximation breaks down in this particular case, although I am not sure why. – Jaime Dec 30 '12 at 9:58
The only weird effect from inside the bowl will be total internal reflection from the glass-air interface. This is what causes the distinctive fisheye effect when looking at a water-air interface from underwater, where you see the full $2\pi$ steradians of the air half-space concentrated on a smaller disk on the surface. There is significant distortion in this case, but only for objects outside the water.
But having a spherical bowl would actually lessen, rather than increase, this effect, and a fish located at the exact center of the bowl would not see any of it, as it would be looking perpendicular to the glass-air interface in every direction.
EDIT
If the glass is sufficiently thin, it will have a negligible refraction effect, since the refraction angle of the ray entering the glass and the incidence angle of that same ray leaving the glass will be virtually the same, and their effects cancel out when applying Snell's law twice. So optically this is the same as being inside a sphere of water floating in space. Let's say this sphere has radius $r$, and we are looking at the water-air interface from a distance $x$.
A point on the sphere that we see $\varphi$ degrees away from the normal would be seen at an angle $\sin \theta = \frac{r}{x} \tan \varphi$ from the center of the sphere, and the angle between a ray with angle $\varphi$ and the normal to the interface will be $\theta_3 = \varphi - \theta$. The same ray in air will have an angle with the normal calculated from Snell's law, $\sin \theta_1 = \frac{n_3}{n_1} \sin \theta_3$.
If you do the paraxial approximation, and take first order approximations for all trigonometric functions, you eventually get that, when looking with a small angle $\varphi$ from the normal, you are actually seeing light coming from an angle $\varphi'$ that can be calculated as:
$$\varphi' = \varphi (n - \frac{x}{r} n + \frac{x}{r})$$
where $n=\frac{n_3}{n_1} = 1.33$ for water.
So if you are really close to the glass, you will see things outside smaller than they are, by a factor of $\frac{1}{n} = 0.75$, if you are at the center of the bowl you will see everything undistorted, and if you are looking across the whole bowl, you will see things magnified by a factor of $\frac{1}{2-n} = 1.5$.
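Plugging numbers into this first-order result reproduces the three limiting cases; a quick Python sketch ($x$ is the distance from the interface being looked through, as above):

```python
# Apparent-angle ratio phi'/phi = n + (x/r)*(1 - n); the outside world is
# scaled by the reciprocal of this ratio (n = 1.33 for water).
n = 1.33

for x_over_r, label in ((0.0, "right against the glass"),
                        (1.0, "at the center of the bowl"),
                        (2.0, "looking across the whole bowl")):
    ratio = n + x_over_r * (1.0 - n)
    print(f"{label}: apparent size factor = {1.0 / ratio:.2f}")
# prints 0.75, 1.00, and 1.49 (rounded to 1.5 in the answer above)
```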
Jaime this is incorrect, see my answer below. – daaxix Dec 30 '12 at 0:14
There will be a lens effect, I'm working out what the fish would actually see now...I have to compute the principal planes of the combined system... – daaxix Dec 30 '12 at 0:56
Jaime, I think you are conflating what the fish will see being inside and what we will see outside. Your analysis works for a point source at the center of the sphere, such that the rays are exiting. Then an imaging system will, of course, see the point located at the center of the bowl (no change of location). But a point source emitting from outside the bowl has many rays which intersect at non-normal angles of incidence! – daaxix Dec 30 '12 at 21:35
@daaxix As far as the geometry goes, direction of propagation makes no difference, and a ray going out of the fish bowl will follow the exact same path as a ray going in. That's why ray tracing traces rays from the location of the observer through an image plane, to light sources, rather than the other way around, which would have most of the rays never reaching the observer. Come up with one single example of a ray reaching the center of a spherical bowl that intersected the sphere at a non-normal angle, and I'll quit arguing. But I am pretty sure that example does not exist... – Jaime Dec 31 '12 at 1:26
I agree, but you have to think about an integral of an uncountable number of point sources, not just a point source at the center. The fish eye is not a delta function, it has an area. The fish eye will be imaging rays not going through the center of the bowl. – daaxix Dec 31 '12 at 5:36
http://physics.stackexchange.com/questions/tagged/software?sort=faq&pagesize=50
# Tagged Questions
The software tag has no wiki summary.
### What software programs are used to draw physics diagrams, and what are their relative merits? (14 answers, 21k views)
People undoubtedly use a variety of programs to draw diagrams for physics, but I am not familiar with many of them. I usually hand-draw things in GIMP. GIMP is powerful in some regards, but it's ...

### Where can I find simulation software for electricity and magnets? (2 answers, 1k views)
Is there easily-available* software to simulate coils, solenoids, and other magnetic and electromagnetic devices? I'd like to play around with some design ideas, such as Halbach arrays, but physics ...

### Is there any Calculator capable of calculating and displaying differential geometry? [closed] (0 answers, 87 views)
Is there any Calculator capable of calculating and displaying differential geometry (display curvature of spacetime)? $$ds^2~=~g_{ab}dx^adx^b.$$

### Matlab package: graphical calculus for quantum operations (esp. linear optics) (1 answer, 178 views)
I need a matlab package that will make my life easier. I have quantum circuits with optical beam splitters, polarizing beam splitters and photodetectors. These circuits are getting very complicated ...

### Software for simulating 3D Newtonian dynamics of simple geometric objects (with force fields) (2 answers, 295 views)
I'm looking for something short of a molecular dynamics package, where I can build up simple geometric shapes with flexible linkages/etc and simulate the consequences of electrostatic repulsion ...

### Stellar evolution simulation engine or software (1 answer, 439 views)
Is there any general-purpose stellar evolution simulation engine or software? Something to throw in properties of the star and to watch how (and why) they change along the timeline - with or without ...

### Software for geometrical optics (2 answers, 2k views)
Is there any good software for constructing optical paths in geometrical optics? More specifically I want features like: draw $k \in \mathbb{N}$ objects $K_1,\dots,K_n$ with indices of refraction ...

### What is the best tool for simulating Vacuum and Fluids together? (1 answer, 317 views)
I require software to simulate fluids with the capability of supporting vacuum simulation. My requirements are that all numbers must reflect their real counterparts almost exactly. For ...

### Software for simulating supersonic aerodynamics [closed] (1 answer, 443 views)
Could you please suggest software where I can load my 3D model and see how it behaves under various conditions (speed - preferably including supersonic, temperature, pressure)? Both free & ...

### Software to calculate forces between magnets (1 answer, 285 views)
I am working on a complex configuration of magnets, and every time I make an experiment something unforeseen happens. Now I believe I could speed up the development by sitting down and calculating the ...

### Matlab package: graphical calculus for quantum operations including beam splitters and polarizing beam splitters [duplicate] (0 answers, 118 views)
I need a matlab package that will make my life easier. I have quantum circuits with optical beam splitters, polarizing beam splitters and photodetectors. These circuits are getting very complicated ...